openrouter-deep-research-mcp and deep-research-mcp-server

These are **competitors** offering different architectural approaches to the same problem. openrouter-deep-research-mcp coordinates a multi-agent ensemble with async orchestration and consensus-backed results, while deep-research-mcp-server uses a single Gemini-based agent in an iterative loop. Users would choose based on whether they prefer distributed agent coordination or a simpler unified model.

|                | openrouter-deep-research-mcp | deep-research-mcp-server            |
|----------------|------------------------------|-------------------------------------|
| Maintenance    | 10/25                        | 2/25                                |
| Adoption       | 8/25                         | 8/25                                |
| Maturity       | 9/25                         | 9/25                                |
| Community      | 17/25                        | 19/25                               |
| Stars          | 42                           | 68                                  |
| Forks          | 11                           | 17                                  |
| Downloads      |                              |                                     |
| Commits (30d)  | 0                            | 0                                   |
| Language       | JavaScript                   | TypeScript                          |
| License        | MIT                          | MIT                                 |
| Notes          | No package, no dependents    | Stale 6m; no package, no dependents |

About openrouter-deep-research-mcp

wheattoast11/openrouter-deep-research-mcp

A multi-agent research MCP server plus a mini client adapter. It orchestrates a network of async agents (or a streaming swarm) to conduct ensemble, consensus-backed research. Each task builds its own indexed PGlite database on the fly in WebAssembly. Includes semantic and hybrid search, SQL execution, semaphores, prompts/resources, and more.

Implements consensus-driven research via agents routed through OpenRouter's LLM endpoint, with tools for hybrid search (BM25 + vector), SQL queries on ephemeral PGlite instances, knowledge graph traversal, and session checkpointing. It supports STDIO (the MCP spec default) and HTTP transports with circuit-breaker fault tolerance across multiple API keys. Embedding-based model routing matches queries to domain-optimized LLM tiers without extra API calls, while persistent reporting and undo/fork capabilities enable reproducible research workflows across Claude Desktop, Jan AI, Continue, and other MCP clients.
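The hybrid search the server exposes typically blends a lexical (BM25) ranking with a vector-similarity ranking. A minimal sketch of that blending, with normalization and a hypothetical `alpha` weight (this is illustrative, not the repo's actual implementation):

```typescript
// Illustrative BM25 + vector hybrid ranking. Scores from each retriever
// are min-max normalized, then blended with weight `alpha` (hypothetical).
type Scored = { id: string; score: number };

function normalize(results: Scored[]): Map<string, number> {
  const scores = results.map(r => r.score);
  const min = Math.min(...scores);
  const range = (Math.max(...scores) - min) || 1; // avoid divide-by-zero
  return new Map(results.map(r => [r.id, (r.score - min) / range] as [string, number]));
}

function hybridRank(bm25: Scored[], vector: Scored[], alpha = 0.5): Scored[] {
  const b = normalize(bm25);
  const v = normalize(vector);
  const ids = new Set([...b.keys(), ...v.keys()]);
  return [...ids]
    .map(id => ({
      id,
      // Documents missing from one list contribute 0 from that side.
      score: alpha * (b.get(id) ?? 0) + (1 - alpha) * (v.get(id) ?? 0),
    }))
    .sort((x, y) => y.score - x.score);
}

// Example: "a" leads both lists, so it ranks first in the blend.
const ranked = hybridRank(
  [{ id: "a", score: 12.1 }, { id: "b", score: 3.4 }],
  [{ id: "a", score: 0.92 }, { id: "c", score: 0.81 }],
);
```

Real implementations often use reciprocal rank fusion instead of score blending; the weighted-sum form above is just the simplest variant to show.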

About deep-research-mcp-server

ssdeanx/deep-research-mcp-server

An MCP deep-research server using Gemini to create a research AI agent.

Implements iterative deep research through a feedback loop: refining SERP queries based on prior learnings, processing results with semantic chunking and batched Gemini calls, then recursively exploring new directions until depth limits are reached. It is built on Gemini 2.5 Flash with optional Google Search Grounding and structured JSON outputs validated via Zod, and is packaged as an MCP server for integration with Claude and other agent frameworks.
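The feedback loop described above (search, extract learnings, refine the query, recurse until the depth limit) can be sketched as a small recursive function. All callback names here are illustrative placeholders, not the repo's actual API:

```typescript
// Hedged sketch of an iterative deep-research loop: each level runs a
// search, extracts learnings, derives a refined query, and recurses
// until `depth` is exhausted. `search`, `extract`, and `refine` stand in
// for SERP calls, chunking + LLM extraction, and query refinement.
async function deepResearch(
  query: string,
  depth: number,
  search: (q: string) => Promise<string[]>,
  extract: (docs: string[]) => Promise<string[]>,
  refine: (q: string, learned: string[]) => string,
  learnings: string[] = [],
): Promise<string[]> {
  if (depth <= 0) return learnings; // depth limit reached
  const docs = await search(query);
  const found = await extract(docs);
  return deepResearch(
    refine(query, found), // next direction, informed by new learnings
    depth - 1,
    search,
    extract,
    refine,
    [...learnings, ...found],
  );
}
```

In the real server each level would fan out over multiple refined queries (breadth) as well as recursing (depth); the single-query recursion above keeps the control flow visible.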

Scores updated daily from GitHub, PyPI, and npm data.