lechmazur/pgg_bench
Public Goods Game (PGG) Benchmark: Contribute & Punish is a multi-agent benchmark that tests cooperative and self-interested strategies among Large Language Models (LLMs) in a resource-sharing economic scenario. It extends the classic PGG with a punishment phase in which players can penalize free-riders or retaliate against others.
The benchmark supports experimentation with various LLM strategies and offers granular insight into cooperative dynamics and individual model behaviors. It runs a round-based simulation with a `1.6x` multiplier on public contributions and a `3x` punishment penalty, capped at `50%` of the target's balance, with optional public messaging between players. Comprehensive visualizations and metrics, including TrueSkill leaderboards and balance time series, are generated to analyze 18 LLMs across hundreds of matches. A sketch of one round's arithmetic follows.
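The following is a minimal sketch of one round under the stated rules: contributions are pooled, multiplied by `1.6x`, and shared equally, and punishment destroys `3x` the points spent, capped at `50%` of the target's balance. The assumption that the punisher pays `1x` per punishment point is not stated in the listing; function and parameter names are illustrative, not taken from the repository.

```python
# Sketch of one PGG round with a punishment phase (not the repo's actual code).
# The 1.6x multiplier, 3x penalty, and 50% cap come from the listing;
# the punisher paying 1 point per punishment point spent is an assumption.

def play_round(balances, contributions, punishments,
               multiplier=1.6, penalty_factor=3.0, penalty_cap=0.5):
    """balances: {player: float}; contributions: {player: float};
    punishments: {punisher: {target: points_spent}}."""
    n = len(balances)

    # Contribution phase: each player pays into a common pool.
    pool = 0.0
    for player, amount in contributions.items():
        amount = min(amount, balances[player])   # cannot contribute more than held
        balances[player] -= amount
        pool += amount

    # The pool is multiplied and split equally among all players.
    share = pool * multiplier / n
    for player in balances:
        balances[player] += share

    # Punishment phase: spending x costs the punisher x (assumed) and
    # removes 3x from the target, capped at 50% of the target's balance.
    for punisher, targets in punishments.items():
        for target, spent in targets.items():
            spent = min(spent, balances[punisher])
            balances[punisher] -= spent
            penalty = min(spent * penalty_factor, balances[target] * penalty_cap)
            balances[target] -= penalty

    return balances
```

With three players each holding 10 points and contributing fully, the pool of 30 becomes 48 after the multiplier, returning 16 per player; a free-rider who contributes nothing keeps its 10 and still collects its share, which is what the punishment phase is there to deter.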
No commits in the last 6 months.
Stars: 39
Forks: 2
Language: —
License: —
Category: —
Last pushed: Apr 10, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/lechmazur/pgg_bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
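The same endpoint can be queried from Python; a minimal sketch using `requests` is shown below. The response is assumed to be JSON, and the header or parameter used to pass an API key is not given in the listing, so the keyless (100 requests/day) path is shown.

```python
# Sketch: fetch this repository's quality data from the public API (no key needed).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/agents/lechmazur/pgg_bench"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()   # assumed to be a JSON document describing the repo
print(data)
```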
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards