mcp-client-for-ollama and OmniMCP
About mcp-client-for-ollama
jonigl/mcp-client-for-ollama
A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server support, model switching, streaming responses, tool management, human-in-the-loop tool approval, thinking mode, model parameter configuration, MCP prompts, a custom system prompt, and saved preferences. Built for developers working with local LLMs.
Technical Summary
Implements stdio, SSE, and HTTP transport protocols for MCP server communication, with automatic reconnection and hot-reload capabilities during development. Built as a Python TUI using modern libraries (Typer, Rich, Textual) that connects Ollama models—both local and cloud-hosted—to MCP tool ecosystems for agentic workflows with iterative tool-execution loops. Supports cross-language servers (Python/JavaScript), integrates Claude's native MCP configurations via auto-discovery, and provides safety mechanisms such as human-in-the-loop approval gates before tool execution.
About OmniMCP
OpenAdaptAI/OmniMCP
OmniMCP uses Microsoft OmniParser and Model Context Protocol (MCP) to provide AI models with rich UI context and powerful interaction capabilities.
Implements a perceive-plan-act loop that captures screenshots, parses UI elements with OmniParser, generates action plans via Claude/LLM, and executes mouse/keyboard interactions through `pynput`. Supports optional auto-deployment of OmniParser to AWS EC2 with cost management, and generates timestamped visual debugging artifacts for each agent step. Targets autonomous UI automation and agent-based task execution across arbitrary desktop applications.
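The perceive-plan-act loop described above can be sketched as below. This is a hedged illustration with injected stub functions, assuming nothing about OmniMCP's real interfaces: in the actual project, perception goes through OmniParser, planning through Claude/an LLM, and actuation through `pynput`; every function and type here is hypothetical.

```python
# Hedged sketch of a perceive-plan-act loop. Real implementations would
# use OmniParser for `parse`, an LLM for `plan`, and pynput for `act`;
# here all steps are injected so the loop itself is visible.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                         # "click", "type", or "done"
    target: tuple = (0, 0)            # screen coordinates for clicks
    text: str = ""                    # text for typing actions

def perceive_plan_act(goal, capture, parse, plan, act, max_steps=10):
    """Capture a screenshot, parse UI elements, plan the next action,
    execute it, and repeat until the planner signals completion."""
    trace = []                        # per-step record (cf. debugging artifacts)
    for _ in range(max_steps):
        screenshot = capture()        # perceive: grab the screen
        elements = parse(screenshot)  # parse UI elements from the image
        action = plan(goal, elements) # plan the next action
        trace.append(action)
        if action.kind == "done":
            break
        act(action)                   # act: mouse/keyboard execution
    return trace

# Demo with stubs: the planner clicks a button once, then reports done.
state = {"clicked": False}

def fake_plan(goal, elements):
    if not state["clicked"]:
        return Action("click", target=elements["OK"])
    return Action("done")

trace = perceive_plan_act(
    goal="press OK",
    capture=lambda: b"fake-screenshot-bytes",
    parse=lambda img: {"OK": (100, 200)},
    plan=fake_plan,
    act=lambda a: state.update(clicked=True),
)
print([a.kind for a in trace])  # → ['click', 'done']
```

Keeping a `trace` of every planned action is the natural hook for the timestamped visual debugging artifacts the summary mentions: each entry corresponds to one screenshot/plan/execute step.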