ollama and gollama
About ollama
ollama/ollama
Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models.
Provides a unified local inference runtime built on llama.cpp with REST API and language bindings (Python, JavaScript) for seamless integration into applications and agents. Features a model library with automatic download/management, streaming responses, and launcher integrations for Claude, Codex, and other external tools. Ecosystem spans 100+ community projects including web UIs (Open WebUI, LibreChat), desktop clients (AnythingLLM, Cherry Studio), and IDE extensions (Continue, Cline).
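The streaming REST API mentioned above can be sketched as follows. The endpoint path (`/api/generate`), default port (11434), and payload shape follow Ollama's documented API; the sample response lines are illustrative, not real server output.

```python
import json

# Default local Ollama endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = True) -> dict:
    """Build the JSON body for a POST to /api/generate."""
    return {"model": model, "prompt": prompt, "stream": stream}

def collect_stream(lines):
    """Join the 'response' fragments from a stream of newline-delimited
    JSON chunks, stopping when a chunk reports done=true."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative streamed lines in Ollama's line-delimited JSON format:
sample = [
    '{"model":"llama3","response":"Hel","done":false}',
    '{"model":"llama3","response":"lo","done":true}',
]
print(collect_stream(sample))  # prints "Hello"
```

In practice the request would be sent with any HTTP client and the response body iterated line by line; the same line format is what the official Python and JavaScript bindings parse under the hood.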
About gollama
sammcj/gollama
Go manage your Ollama models
A TUI-based model manager for Ollama that goes beyond basic operations with built-in vRAM estimation across quantization levels, Modelfile editing, and advanced sorting/filtering by metadata (size, quantization, family, parameters). It talks directly to the Ollama API and supports both local and remote instances via configurable host endpoints; CLI modes cover scripting and batch operations such as search with boolean operators.
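A quantization-aware vRAM estimate of the kind described above can be roughed out as below. This is a minimal sketch, not gollama's actual formula: the bits-per-weight table and the flat 20% overhead for KV cache and runtime buffers are assumptions for illustration.

```python
# Approximate effective bits per parameter for common GGUF quantizations.
# Values are ballpark figures, not exact format sizes.
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q4_K_M": 4.8,
    "Q4_0": 4.5,
}

def estimate_vram_gib(n_params: float, quant: str, overhead: float = 1.2) -> float:
    """Estimate VRAM in GiB: parameter count times the quant's effective
    bit width, scaled by a fixed multiplicative overhead (assumed 20%)
    for KV cache and runtime buffers."""
    weight_bytes = n_params * BITS_PER_WEIGHT[quant] / 8
    return weight_bytes * overhead / 2**30

# A 7B-parameter model at Q4_0 comes out around 4.4 GiB under these assumptions.
print(round(estimate_vram_gib(7e9, "Q4_0"), 1))
```

The useful property such an estimate gives a model manager is ordering: for a fixed parameter count, lower-bit quantizations always predict less vRAM, so the tool can show at a glance which quantization of a model will fit on a given GPU.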