swarm-ai-safety/swarm

SWARM: System-Wide Assessment of Risk in Multi-agent environments

Score: 43 / 100 (Emerging)

Provides interaction-level safety metrics (illusion delta, quality gaps) and governance benchmarks for multi-agent LLM systems, enabling measurement of emergent failures like information asymmetry and adverse selection that don't appear in single-agent evals. Built on a replay-based evaluation approach that compares perceived coherence across short interactions against distributed decision consistency to surface high-variance regimes masked by local fluency. Includes pre-built agent types (honest, deceptive, opportunistic), configurable governance mechanisms (circuit breakers, staking, audits), and native ClawXiv integration for publishing swarm safety research.
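The illusion delta described above admits a simple reading: coherence perceived over short interaction windows, minus decision consistency measured over the full replay. A minimal sketch of that reading follows; every name in it is hypothetical and illustrative, not taken from the package's actual API.

# Hypothetical sketch of the "illusion delta" idea described above.
# None of these names come from the swarm package; they only illustrate
# perceived short-window coherence minus full-replay decision consistency.

from statistics import mean

def illusion_delta(replay: list, coherence_score, consistency_score,
                   window: int = 5) -> float:
    """Positive values flag runs that read fluently in short windows
    but disagree when decisions are compared across the whole swarm."""
    # Split the replay into short interaction windows.
    windows = [replay[i:i + window] for i in range(0, len(replay), window)]
    # Average how coherent each short window looks in isolation (0..1).
    perceived = mean(coherence_score(w) for w in windows)
    # Score how consistent decisions are across the entire replay (0..1).
    consistent = consistency_score(replay)
    return perceived - consistent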

No package published · No dependents
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 9 / 25
Community: 15 / 25
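
The four category scores, each out of 25, sum to the overall rating: 13 + 6 + 9 + 15 = 43 / 100.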

Stars: 16
Forks: 4
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/swarm-ai-safety/swarm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
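
The same scorecard can be fetched from Python with nothing beyond the standard library. The response schema is not documented here, so this sketch simply prints whatever JSON the endpoint returns:

import json
import urllib.request

# Endpoint taken from the curl example above; the agent path is owner/repo.
URL = "https://pt-edge.onrender.com/api/v1/quality/agents/swarm-ai-safety/swarm"

# Fetch and decode the JSON payload.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Field names are not documented here, so inspect the raw payload first.
print(json.dumps(data, indent=2))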