ollama/ollama

Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.

Score: 71 / 100 · Verified

Ollama provides a unified local inference runtime built on llama.cpp, with a REST API and language bindings (Python, JavaScript) for integration into applications and agents. It features a model library with automatic download and management, streaming responses, and launcher integrations for Claude, Codex, and other external tools. The ecosystem spans 100+ community projects, including web UIs (Open WebUI, LibreChat), desktop clients (AnythingLLM, Cherry Studio), and IDE extensions (Continue, Cline).
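The REST API streams its replies as newline-delimited JSON chunks. A minimal sketch of assembling a streamed reply, assuming the documented default endpoint `http://localhost:11434`, the `/api/generate` route, and its `response`/`done` chunk fields (the sample lines below are hypothetical, not server output):

```python
import json

def collect_stream(lines):
    """Concatenate the `response` fields of a streamed /api/generate
    reply (newline-delimited JSON) until a chunk reports done=True."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Hypothetical sample of the NDJSON chunks a server might stream back:
sample = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world", "done": false}',
    '{"response": "!", "done": true}',
]
print(collect_stream(sample))  # → Hello, world!

# Against a running server (assumes the default localhost:11434 and a
# locally pulled model name), the same helper would consume the
# HTTP response body line by line:
#
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps({"model": "gemma", "prompt": "Hi"}).encode(),
# )
# with urllib.request.urlopen(req) as resp:
#     print(collect_stream(resp))
```

The official Python and JavaScript bindings wrap this same endpoint, so the raw-HTTP form above is mainly useful from languages without a binding.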

164,987 stars. Actively maintained with 118 commits in the last 30 days.

No package · No dependents

Maintenance: 25 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 164,987
Forks: 14,940
Language: Go
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 118

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ollama/ollama"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.