# agentopology and mco

These are **competitors**: both provide declarative orchestration layers for multi-agent AI coding workflows. agentopology has the edge in adoption (466 monthly downloads versus none for mco, which publishes no package) through its Terraform-like `.at` syntax and visualizer, while mco positions itself as a more agent-agnostic abstraction layer that works across a wider range of IDE integrations.

| | agentopology | mco |
|---|---|---|
| Overall score | 54 (Established) | 48 (Emerging) |
| Maintenance | 13/25 | 13/25 |
| Adoption | 13/25 | 10/25 |
| Maturity | 18/25 | 11/25 |
| Community | 10/25 | 14/25 |
| Stars | 40 | 186 |
| Forks | 4 | 17 |
| Downloads | 466 | — |
| Commits (30d) | 0 | 0 |
| Language | TypeScript | Python |
| License | Apache-2.0 | MIT |
| Risk flags | None | No package, no dependents |

## About agentopology

agentopology/agentopology

The Terraform for AI agents. Define your team once, deploy to Claude Code, OpenClaw, Cursor, Codex, Gemini, Copilot, Kiro. Declarative language (.at files) + Claude Code skill + interactive visualizer.

The `.at` declarative language compiles to platform-native configs (AGENT.md, soul.md, Cursor rules, etc.) via a CLI compiler, eliminating manual maintenance across fragmented ecosystems. Built-in validation against 82 rules catches topology errors before scaffolding, while the interactive Claude Code skill lets non-technical users define agent teams in plain English without learning the syntax. The visualizer renders agent topologies as interactive graphs showing connections, tools, hooks, and quality gates, providing a single source of truth for multi-agent architecture that stays synchronized across seven deployment targets.
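As a purely illustrative sketch of the Terraform-like shape such a file might take (the real `.at` grammar is not shown here; every keyword and field below is hypothetical):

```
# Hypothetical .at file -- block names and fields are illustrative,
# NOT the actual agentopology grammar.
agent "reviewer" {
  model = "claude-code"
  tools = ["read", "grep"]
}

agent "implementer" {
  model      = "codex"
  depends_on = ["reviewer"]   # reviewer findings feed the implementer
}

target "cursor" {}            # compile the same team to Cursor rules
```

A compiler in this style would validate the topology against its rule set, then emit the platform-native config for each declared target.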

## About mco

mco-org/mco

Orchestrate AI coding agents. Any prompt. Any agent. Any IDE. Neutral orchestration layer for Claude Code, Codex CLI, Gemini CLI, OpenCode, Qwen Code — works from Cursor, Trae, Copilot, Windsurf, or plain shell.

Implements parallel multi-agent dispatch with deduplication and consensus synthesis: prompts are dispatched to Claude, Codex, Gemini, Qwen, and OpenCode simultaneously, and the results are aggregated to identify which findings are detected by multiple models. Built as a CLI-based orchestration layer that returns structured output (JSON, SARIF, Markdown), enabling higher-order agents like OpenClaw to autonomously manage multi-agent workflows via shell command invocation.
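The dispatch–deduplicate–synthesize loop described above can be sketched in Python. The agent functions here are stubs standing in for real CLI invocations, and the function names and return shapes are assumptions for illustration, not mco's actual API:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Stub agent backends: each takes a prompt and returns a set of
# normalized findings. A real orchestrator would shell out to each
# agent's CLI and parse its structured output instead.
def claude(prompt):
    return {"sql-injection", "missing-auth"}

def codex(prompt):
    return {"sql-injection", "unused-var"}

def gemini(prompt):
    return {"sql-injection", "missing-auth"}

AGENTS = {"claude": claude, "codex": codex, "gemini": gemini}

def dispatch_and_synthesize(prompt, min_agreement=2):
    """Fan the prompt out to every agent in parallel, deduplicate each
    agent's findings (sets), then keep only findings reported by at
    least `min_agreement` agents (the consensus step)."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in AGENTS.items()}
        findings = {name: fut.result() for name, fut in futures.items()}
    counts = Counter(f for fs in findings.values() for f in fs)
    consensus = {f: n for f, n in counts.items() if n >= min_agreement}
    return {"per_agent": {k: sorted(v) for k, v in findings.items()},
            "consensus": consensus}

report = dispatch_and_synthesize("review auth.py for vulnerabilities")
# With these stubs, "sql-injection" reaches full agreement (3 agents),
# "missing-auth" partial agreement (2), and "unused-var" is dropped (1 < 2).
```

Returning the synthesis as a plain dict mirrors the design point in the text: structured output lets a higher-order agent consume the consensus programmatically rather than scraping terminal text.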

Scores updated daily from GitHub, PyPI, and npm data.