Your Docs Are Written for Humans. Your Users Are Agents.
Agents choose developer tools based on documentation quality. llms.txt is the new robots.txt. The MCP data proves it: repos with detailed docs get 18x the stars.
On a recent Y Combinator podcast, a founder described how Claude Code chose Supabase for his project's database. Not because he asked for Supabase. Because Claude Code read the documentation, found it well-structured, and decided it was the best fit. The agent made the tool selection decision. The human just approved it.
This is happening across the developer tools ecosystem. Agents are becoming the primary consumers of documentation — and the documentation wasn't designed for them. PT-Edge data reveals the gap: MCP repos with detailed, structured documentation average 354 stars. Those without average 19. That's an 18x difference, and it's not a coincidence.
Documentation is the new distribution channel
In the pre-agent era, developer tools got adopted through Stack Overflow answers, conference talks, blog posts, and GitHub trending. A developer would hear about a tool, read the README, try it, and maybe adopt it.
In the agent era, the agent reads the documentation directly. It doesn't attend conferences. It doesn't read blog posts. It reads docs, evaluates whether they contain enough information to solve the current problem, and either uses the tool or moves on. The documentation is the product evaluation.
This means documentation quality isn't a nice-to-have. It's the primary competitive moat for developer tool adoption. Better docs → agents choose your tool → more usage → more adoption. Worse docs → agents skip you → your tool doesn't exist in the agent's world.
The MCP evidence
MCP servers are the purest test case because agents are literally the primary user. PT-Edge attaches AI-generated summaries to the MCP repos it tracks; how detailed a summary comes out depends on how much structured documentation the repo provides. The correlation between documentation quality and adoption is stark:
| Documentation quality | Repos | Avg stars | Avg downloads/mo |
|---|---|---|---|
| Detailed summary (200+ chars) | 2,164 | 354 | 61,550 |
| No summary | 1,595 | 19 | 12 |
Repos with detailed documentation get 18x the stars and 5,000x the downloads. Correlation isn't causation — good projects tend to have good docs. But in the MCP ecosystem, where agents are the primary user, the relationship between documentation quality and adoption is stronger than in any other domain we track.
llms.txt: the new robots.txt
A new convention is emerging. Just as robots.txt tells search crawlers how to interact with a site, llms.txt tells AI agents what a project does and how to use it. It's a machine-readable summary designed specifically for LLM consumption.
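The emerging convention is a markdown file at the site or repo root: an H1 title, a blockquote summary, then sections of annotated links for agents to follow. A minimal sketch (the project name and URLs here are invented for illustration):

```markdown
# AcmeMailer

> Python SDK for transactional email. Supports templates, retries, and webhooks.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and send a first email
- [API reference](https://example.com/docs/api.md): every endpoint, parameter, and error code

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

The link annotations matter: they let an agent decide which page to fetch without loading all of them into its context window.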
| Project | Score | Stars | What it does |
|---|---|---|---|
| llm-docs-builder | 37/100 | 80 | Transform markdown docs for LLM + RAG consumption, generate llms.txt |
| codecrawl | 40/100 | 79 | Turn codebases into LLM-ready data with llms.txt generation |
| mcp-llms-txt-explorer | 55/100 | 75 | MCP server to explore websites with llms.txt files |
| llm-min.txt | 30/100 | 673 | Min.js-style compression of docs for LLM context windows |
llm-docs-builder transforms markdown documentation into LLM-optimised formats and generates llms.txt files. CodeCrawl extracts data from entire codebases and generates llms.txt with a single API call. The tooling for creating agent-readable docs is emerging fast.
The AURA protocol and purpose-built tools
Beyond llms.txt, a more ambitious movement is underway: documentation designed from first principles for agent consumption.
| Project | Score | Stars | What it does |
|---|---|---|---|
| aura | 43/100 | 103 | AURA: Agent-Usable Resource Assertion — open protocol for machine-readable web |
| docfork | 42/100 | 433 | Up-to-date docs specifically for AI agents |
| rtfmbro-mcp | 32/100 | 77 | Always-current, version-specific package docs for coding agents |
| deepwiki-rs | 59/100 | 800 | Generate accurate technical docs and AI-ready context from code |
AURA — Agent-Usable Resource Assertion — is an open protocol "designed to make the web machine-readable." It replaces fragile scraping with structured, agent-optimised content. This isn't just better docs. It's a new protocol layer designed specifically for agent consumption.
rtfmbro tackles a different problem: providing always-up-to-date, version-specific package documentation as context for coding agents. When Claude Code needs to use a library, it doesn't read the generic README — rtfmbro provides the exact documentation for the exact version, preventing outdated code suggestions.
What agent-readable documentation looks like
Traditional documentation optimises for human skimming: tutorials, conceptual overviews, narrative explanations. Agent-readable documentation optimises for machine parsing:
- Code snippets over prose. Agents extract and execute code. Narrative context is noise.
- Q&A format over tutorials. "How do I send an email?" with a direct code answer. Agents match queries to answers.
- Versioned and timestamped. Agents need to know if the docs are current. Stale docs produce stale code.
- Structured metadata. What the tool does, what it requires, what it returns — in parseable format, not prose.
- Error documentation. What goes wrong and how to fix it. Agents hit errors constantly and need resolution paths.
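Put together, a single entry in an agent-readable doc might look like this — a sketch, not a standard; the frontmatter fields, the `acme_mailer` API, and the error names are all illustrative:

````markdown
---
tool: acme-mailer
version: 2.4.1
updated: 2025-11-03
---

### How do I send an email?

```python
from acme_mailer import Client  # hypothetical API

Client(api_key="...").send(to="user@example.com", subject="Hi", body="Hello")
```

**Errors:** `AuthError` → check that `ACME_API_KEY` is set.
`RateLimited` → retry with backoff after `retry_after` seconds.
````

Every one of the five properties is present: an extractable snippet, a query-shaped heading, version and date metadata, parseable frontmatter, and documented error resolutions.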
What to do right now
If you maintain a developer tool:
- Add an llms.txt to your repo. Use llm-docs-builder or CodeCrawl to generate it.
- Structure your README for extraction. Short paragraphs, code-first, Q&A sections. Avoid long narrative blocks.
- Add version-specific installation examples. Agents generate code — they need exact commands, not "install the latest version."
- Document errors. Every error message should have a documented resolution. Agents will search for it.
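If you want to see the shape of the first step before reaching for a tool, a skeleton llms.txt can be derived from a README in a few lines of Python. This is a sketch under loose assumptions (README opens with an H1 and a one-line description), not what llm-docs-builder or CodeCrawl actually emit:

```python
from pathlib import Path

def llms_txt_from_readme(readme_text: str) -> str:
    """Derive a minimal llms.txt skeleton from README text.

    Assumes the README opens with an H1 title followed by a short
    description paragraph. Real generators crawl the full docs tree.
    """
    lines = readme_text.splitlines()
    # H1 becomes the llms.txt title.
    title = next((l[2:].strip() for l in lines if l.startswith("# ")), "Untitled")
    # First non-empty, non-heading line becomes the blockquote summary.
    summary = next((l.strip() for l in lines if l.strip() and not l.startswith("#")), "")
    return f"# {title}\n\n> {summary}\n\n## Docs\n\n- [README](README.md)\n"

if __name__ == "__main__":
    sample = "# AcmeMailer\n\nTransactional email SDK with templates and retries.\n"
    Path("llms.txt").write_text(llms_txt_from_readme(sample), encoding="utf-8")
```

A heuristic this crude is only a starting point; the payoff comes from curating the link sections by hand so agents fetch the right page first.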
Documentation is becoming the primary interface between your tool and the agents that decide whether to use it. The projects that optimise for agent readability now will have a structural advantage as agent-driven adoption becomes the default.
Go deeper
Every project mentioned here has a quality-scored page in our directory, updated daily:
- MCP categories — where agent-first documentation matters most
- Agent categories — the ecosystem driving this shift
- LLM tool categories — documentation and evaluation infrastructure
- Trending MCP projects — what's moving this week
Related analysis
The Claude Code Ecosystem: Everything You Can Plug In
2,400 repos. 370 new ones per week. A practitioner's guide to what's mature, what's emerging, and what's noise.
How OpenClaw Went from Launch to 1,299 Repos in 8 Weeks
The anatomy of an AI ecosystem forming in real time — what gets built first, what comes next, and where the real...
Your Agent Doesn't Have an Email Address (Yet)
30+ repos are building identity, credentials, email, and payment infrastructure for agents as first-class entities....
You're Shipping AI You Can't Measure
1,159 repos are building LLM evaluation infrastructure. Most teams are still eyeballing outputs. Here's the decision...