AgentBench and MemoryAgentBench
The two benchmarks are complementary: MemoryAgentBench extends AgentBench by focusing specifically on evaluating the memory capabilities of LLM agents through incremental multi-turn interactions.
About AgentBench
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Comprises 8 diverse task environments (operating-system interaction, database queries, knowledge graphs, digital card games, lateral-thinking puzzles, household tasks, web shopping, and web browsing) with containerized deployment via Docker Compose. Evaluates agents through multi-turn interactions using function-calling prompts, and integrates with AgentRL for end-to-end reinforcement learning workflows. Provides standardized dev/test splits with performance leaderboards across different LLM implementations.
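To make the "multi-turn interaction" evaluation pattern concrete, here is a minimal sketch of an agent-environment loop of the kind such benchmarks run inside each containerized task. All names here are hypothetical; this is not AgentBench's actual API, just an illustration of the loop structure (observe, act, repeat until done, score the episode).

```python
from dataclasses import dataclass, field

@dataclass
class MockEnv:
    """Toy task environment: the agent must reach a target state within max_turns."""
    target: int = 3
    state: int = 0
    max_turns: int = 5
    history: list = field(default_factory=list)

    def step(self, action: str) -> tuple[str, bool]:
        """Apply one agent action, return (observation, done)."""
        self.history.append(action)
        if action == "increment":
            self.state += 1
        done = self.state >= self.target or len(self.history) >= self.max_turns
        return f"state={self.state}", done

def run_episode(env: MockEnv, agent_policy) -> float:
    """Drive the multi-turn loop and return a scalar reward (1.0 = task solved)."""
    obs, done = "state=0", False
    while not done:
        action = agent_policy(obs)  # in a real harness, an LLM call goes here
        obs, done = env.step(action)
    return 1.0 if env.state >= env.target else 0.0

reward = run_episode(MockEnv(), lambda obs: "increment")
```

In the real benchmark the environment runs in its own Docker container and the policy is an LLM prompted with a function-calling interface, but the control flow follows this same episode loop.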
About MemoryAgentBench
HUST-AI-HYZ/MemoryAgentBench
Open source code for ICLR 2026 Paper: Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions
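The "incremental multi-turn" idea can be sketched as follows: instead of handing the agent one long prompt, the harness feeds context chunk by chunk across turns and only then asks a question that requires recalling an early chunk, so the agent's memory mechanism is what gets tested. The names below are hypothetical, not MemoryAgentBench's real API; the agent is a deliberately naive baseline.

```python
class ToyMemoryAgent:
    """Baseline agent whose 'memory' is just an append-only transcript."""
    def __init__(self):
        self.memory: list[str] = []

    def observe(self, chunk: str) -> None:
        self.memory.append(chunk)  # a stronger agent might compress or index here

    def answer(self, question: str) -> str:
        # Naive retrieval: return the first remembered chunk containing the
        # question's final keyword.
        key = question.split()[-1].strip("?").lower()
        for chunk in self.memory:
            if key in chunk.lower():
                return chunk
        return "unknown"

# Incremental, turn-by-turn injection of context, then a recall probe.
chunks = ["The passcode is 7421.", "Lunch was at noon.", "The meeting ran late."]
agent = ToyMemoryAgent()
for c in chunks:
    agent.observe(c)
answer = agent.answer("What is the passcode?")
```

Scoring recall accuracy on probes like this, after the full incremental stream, is the kind of measurement the benchmark's multi-turn protocol is built around.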