
Ollama

Chat and build with open AI models locally

LLM
API
Local Deployment

Ollama lets you run, chat with, and serve open-source LLMs locally (and now in the cloud). Pull models like Llama, Qwen, and Mistral via a simple CLI and REST API, with GPU/CPU support, embeddings, and modelfiles for easy customization.
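To illustrate the REST API mentioned above: Ollama's local server listens on `http://localhost:11434` by default and accepts JSON requests at `/api/generate`. A minimal sketch of building such a request body in Python (the helper name `build_generate_request` and the `llama3` model tag are illustrative choices, not part of any official client):

```python
import json

def build_generate_request(model, prompt, stream=False):
    # Body shape for Ollama's /api/generate endpoint; set stream=False
    # to get a single JSON response instead of streamed chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

# Example payload; POST it to http://localhost:11434/api/generate
# once a model has been pulled (e.g. `ollama pull llama3`).
print(build_generate_request("llama3", "Why is the sky blue?"))
```

The same payload works from the command line with `curl -d @body.json http://localhost:11434/api/generate`.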

