gpt-researcher and deep-research
gpt-researcher is a mature, widely adopted autonomous research framework, while deep-research is a smaller alternative implementation that adds SSE streaming and an MCP server. The two are direct competitors offering similar deep-research functionality with different feature trade-offs.
About gpt-researcher
assafelovic/gpt-researcher
An autonomous agent that conducts deep research on any data using any LLM provider
Implements a multi-agent planning architecture with separate planner and execution agents that generate research questions, then parallelize web scraping and document retrieval to overcome token limits. Supports multiple LLM providers, integrates with web crawlers (JavaScript-enabled), MCP for local data sources, and exports comprehensive multi-thousand-word reports with source attribution and AI-generated inline images.
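The planner/executor split described above can be sketched in a few lines: a planner derives sub-questions, and independent executor tasks run concurrently so that no single context window must hold every retrieved document. All names below are illustrative assumptions, not gpt-researcher's actual API.

```python
import asyncio

def plan_research(topic: str) -> list[str]:
    """Stand-in for the planner agent: derive sub-questions from a topic."""
    return [
        f"What is {topic}?",
        f"What are the main approaches to {topic}?",
        f"What are open problems in {topic}?",
    ]

async def execute_question(question: str) -> str:
    """Stand-in for an execution agent: scrape/retrieve for one question."""
    await asyncio.sleep(0)  # placeholder for real network I/O
    return f"Findings for: {question}"

async def research(topic: str) -> list[str]:
    questions = plan_research(topic)
    # Execution agents run concurrently, mirroring the parallelized
    # scraping/retrieval step that keeps each agent under token limits.
    return await asyncio.gather(*(execute_question(q) for q in questions))

results = asyncio.run(research("retrieval-augmented generation"))
```

Each executor only carries the context for its own sub-question; an aggregation step would then merge the findings into the final report.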
About deep-research
u14app/deep-research
Use any LLM (Large Language Model) for deep research. Supports an SSE API and MCP server.
Orchestrates multi-step research workflows using "Thinking" and "Task" models paired with pluggable web search providers (Tavily, Firecrawl, Brave, etc.) to bypass native LLM search limitations. Built on Next.js with browser-side data storage for privacy, while offering SSE API endpoints and MCP protocol support for SaaS integration or embedding in other AI tools. Supports 10+ LLM providers including OpenAI, Anthropic, Deepseek, and OpenAI-compatible endpoints, plus local knowledge bases from uploaded documents.
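To show what the SSE integration surface looks like to an embedding tool, here is a minimal sketch of Server-Sent Events framing per the SSE wire format (an `event:` name, a `data:` line, and a blank-line terminator). The event name and payload fields are assumptions for illustration, not deep-research's actual schema.

```python
import json

def sse_event(event: str, payload: dict) -> str:
    """Frame one SSE message: event name, JSON data line, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

# A progress update streamed mid-research might be framed like this:
frame = sse_event("progress", {"step": "search", "query": "quantum error correction"})
```

A consumer splits the stream on blank lines and parses each `data:` payload, which is what makes the SSE API easy to embed in other AI tools or SaaS frontends.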
Scores updated daily from GitHub, PyPI, and npm data.