
Ollama

by Ollama

Run LLMs locally — Llama, Mistral, Gemma, DeepSeek, and 100+ models via CLI and REST API. OpenAI-compatible endpoint at localhost:11434 for direct agent integration.

Skills: 3
Auth: None
Streaming: Yes
Push: No

Skills

Local Model Serving

Serve 100+ open-source LLMs locally with one command; auto-downloads models on first request.
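As a sketch of how an agent talks to a locally served model (assuming an Ollama server is already running on the default port 11434, and using `llama3.2` as an example model name), a minimal request against the native generate endpoint can be built offline and sent when the server is up:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a request for Ollama's native /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3.2", "Why is the sky blue?")
# With a server running (`ollama serve`), send it:
#   print(json.load(urllib.request.urlopen(req))["response"])
```

If the model has not been pulled yet, a first `ollama run llama3.2` on the CLI downloads it before serving.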

OpenAI-Compatible API

Call local models via the same endpoints as OpenAI — drop-in replacement for any OpenAI SDK integration.
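A sketch of the drop-in compatibility, again assuming a local server and the example model `llama3.2`: the request below uses the OpenAI chat-completions wire format, just aimed at the `/v1` routes Ollama exposes on its default port.

```python
import json
import urllib.request

# Ollama mirrors the OpenAI wire format under /v1 on its default port.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request in the OpenAI wire format."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # No real key is needed locally; SDKs just require a non-empty token.
            "Authorization": "Bearer ollama",
        },
    )

req = build_chat_request("llama3.2", "Hello!")
```

The same applies to any OpenAI SDK: point its `base_url` at `http://localhost:11434/v1` and pass any placeholder API key, and existing integration code runs against the local model unchanged.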

Model Management

Pull, list, copy, delete, and inspect GGUF quantized models from the Ollama model registry.
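The management operations map onto Ollama's native REST endpoints (CLI equivalents: `ollama pull` / `list` / `cp` / `rm` / `show`). The table below is a hypothetical helper sketching that mapping; the endpoint paths are from Ollama's API, while the helper itself is illustrative:

```python
OLLAMA_URL = "http://localhost:11434"

# Management actions and their native REST endpoints.
ENDPOINTS = {
    "pull":   ("POST",   "/api/pull"),    # download a model from the registry
    "list":   ("GET",    "/api/tags"),    # list locally available models
    "copy":   ("POST",   "/api/copy"),    # duplicate a model under a new name
    "delete": ("DELETE", "/api/delete"),  # remove a local model
    "show":   ("POST",   "/api/show"),    # inspect a model's details
}

def management_url(action: str) -> str:
    """Return the full URL for a management action (raises KeyError if unknown)."""
    _method, path = ENDPOINTS[action]
    return f"{OLLAMA_URL}{path}"

# management_url("list") -> "http://localhost:11434/api/tags"
```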

Category: Infrastructure & Ops
Tags: local-llm, llm-runtime, openai-compatible, model-serving, self-hosted, llama, mistral
fields
name: Ollama
provider: Ollama
url: https://github.com/ollama/ollama
categories: infrastructure
access: api · cli
auth: none
streaming: true
push: false
verified: true
tags: local-llm, llm-runtime, openai-compatible, model-serving, self-hosted, llama, mistral
skills
- local-model-serve (Local Model Serving): Serve 100+ open-source LLMs locally with one command; auto-downloads models on first request.
- openai-api (OpenAI-Compatible API): Call local models via the same endpoints as OpenAI — drop-in replacement for any OpenAI SDK integration.
- model-management (Model Management): Pull, list, copy, delete, and inspect GGUF quantized models from the Ollama model registry.