mcp-client-for-ollama and ultimate_mcp_server

|              | mcp-client-for-ollama | ultimate_mcp_server |
| ------------ | --------------------- | ------------------- |
| Score        | 75 (Verified)         | 55 (Established)    |
| Maintenance  | 13/25                 | 10/25               |
| Adoption     | 18/25                 | 10/25               |
| Maturity     | 24/25                 | 16/25               |
| Community    | 20/25                 | 19/25               |
| Stars        | 563                   | 143                 |
| Forks        | 82                    | 25                  |
| Downloads    | 3,964                 |                     |
| Commits (30d)| 1                     | 0                   |
| Language     | Python                | Python              |
| License      | MIT                   |                     |
| Risk flags   | No risk flags         | No Package, No Dependents |

About mcp-client-for-ollama

jonigl/mcp-client-for-ollama

A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server, model switching, streaming responses, tool management, human-in-the-loop, thinking mode, model params config, MCP prompts, custom system prompt and saved preferences. Built for developers working with local LLMs.

Technical Summary

Implements stdio, SSE, and HTTP transport protocols for MCP server communication with automatic reconnection and hot-reload capabilities during development. Built as a Python TUI using modern libraries (Typer, Rich, Textual) that connects Ollama models—both local and cloud-hosted—to MCP tool ecosystems for agentic workflows with iterative tool execution loops. Supports cross-language servers (Python/JavaScript), integrates Claude's native MCP configurations via auto-discovery, and provides safety mechanisms like human-in-the-loop approval gates before tool execution.
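The summary's "iterative tool execution loop" with a human-in-the-loop approval gate can be sketched in plain Python. This is an illustrative stand-in, not the client's real API: `call_model`, `run_tool`, and `approve` are stub names for the Ollama chat call, MCP tool dispatch, and the TUI approval prompt.

```python
def approve(tool_name, args):
    """Approval gate: the real TUI would prompt the user here."""
    return True  # auto-approve for this sketch


def call_model(messages):
    """Stub for an Ollama chat call: requests a tool on the first
    turn, then produces a final answer once a tool result is seen."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"content": "The sum is 5."}


def run_tool(name, args):
    """Stub MCP tool dispatch (stands in for a server round-trip)."""
    tools = {"add": lambda a, b: a + b}
    return tools[name](**args)


def agent_loop(prompt, max_iters=5):
    """Iterate: ask the model, gate tool calls on approval, feed
    results back, and stop when the model returns a final answer."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_iters):
        reply = call_model(messages)
        if "tool" not in reply:
            return reply["content"]  # final answer, loop ends
        if not approve(reply["tool"], reply["args"]):
            messages.append({"role": "tool", "content": "denied"})
            continue
        result = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "max iterations reached"


print(agent_loop("What is 2 + 3?"))  # → The sum is 5.
```

The `max_iters` bound is the usual safeguard in such loops: it prevents a model that keeps requesting tools from running forever.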

About ultimate_mcp_server

Dicklesworthstone/ultimate_mcp_server

Comprehensive MCP server exposing dozens of capabilities to AI agents: multi-provider LLM delegation, browser automation, document processing, vector ops, and cognitive memory systems

Technical Summary

Implements MCP tools for cognitive memory (episodic/semantic), multi-LLM delegation, and specialized processing via Playwright for browser automation, OCR for document extraction, and vector operations for RAG. Built on MCP protocol for direct agent integration and includes advanced caching strategies (exact, semantic, task-aware) to optimize cost and performance across OpenAI, Anthropic, Google, and other providers.
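The exact-vs-semantic caching distinction mentioned above can be illustrated with a minimal sketch. The class and its threshold are hypothetical, and the word-overlap (Jaccard) similarity is a toy stand-in for the embedding-based similarity a real semantic cache would use; the server's actual implementation is not shown in the source.

```python
import hashlib


class LLMCache:
    """Toy two-tier response cache: exact lookup by prompt hash,
    then a fuzzy 'semantic' lookup by word-overlap similarity."""

    def __init__(self, sim_threshold=0.6):
        self.exact = {}        # sha256(prompt) -> response
        self.entries = []      # (prompt, response) for fuzzy lookup
        self.sim_threshold = sim_threshold

    @staticmethod
    def _key(prompt):
        return hashlib.sha256(prompt.encode()).hexdigest()

    @staticmethod
    def _sim(a, b):
        # Jaccard overlap of word sets; a real system would
        # compare embedding vectors instead.
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    def get(self, prompt):
        hit = self.exact.get(self._key(prompt))
        if hit is not None:
            return hit  # exact hit: identical prompt seen before
        for stored_prompt, resp in self.entries:
            if self._sim(prompt, stored_prompt) >= self.sim_threshold:
                return resp  # semantic hit: close-enough prompt
        return None

    def put(self, prompt, response):
        self.exact[self._key(prompt)] = response
        self.entries.append((prompt, response))


cache = LLMCache()
cache.put("summarize the quarterly report", "Revenue grew 4%")
cache.get("summarize the quarterly report")         # exact hit
cache.get("summarize the quarterly report please")  # semantic hit
cache.get("completely unrelated query")             # miss -> None
```

The point of the second tier is cost: a near-duplicate prompt can reuse a cached LLM response instead of paying for a fresh provider call.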

Scores updated daily from GitHub, PyPI, and npm data.