InternScience/ResearchClawBench

ResearchClawBench: Evaluating AI Agents for Automated Research from Re-Discovery to New-Discovery

28 / 100
Experimental

Implements a two-stage autonomous research pipeline where AI agents independently analyze datasets and generate reports, then evaluates the results against expert-curated checklists from 40 real published papers across 10 scientific domains. The benchmark uses an LLM-as-judge approach with fine-grained, weighted multimodal scoring criteria (text- and image-based) to assess whether agents match or exceed human research outcomes. Supports multiple agent frameworks (Claude Code, OpenClaw, Nanobot) with agent-agnostic configuration and includes a Flask-based live-streaming UI for real-time monitoring of agent execution.
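For illustration, the snippet below is a minimal sketch of how weighted, checklist-based LLM-as-judge scoring can be aggregated. It is not the benchmark's actual implementation: the `ChecklistItem` structure, the weights, and the `judge_item` stub are hypothetical stand-ins.

```python
# Illustrative sketch of weighted checklist scoring (hypothetical structures,
# not the benchmark's real code).
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChecklistItem:
    criterion: str   # expert-curated criterion drawn from a published paper
    weight: float    # relative importance of the criterion
    modality: str    # "text" or "image"

def score_report(items: list[ChecklistItem],
                 judge_item: Callable[[ChecklistItem], float]) -> float:
    """Aggregate per-item judge scores (each in 0..1) into a weighted total."""
    total_weight = sum(item.weight for item in items)
    if total_weight == 0:
        return 0.0
    weighted = sum(item.weight * judge_item(item) for item in items)
    return weighted / total_weight

# Example usage with a trivial stand-in judge that scores every item 0.5
checklist = [
    ChecklistItem("Reports the main effect size", 2.0, "text"),
    ChecklistItem("Reproduces the key figure", 3.0, "image"),
]
print(score_report(checklist, judge_item=lambda item: 0.5))
```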

No Package · No Dependents
Maintenance: 13 / 25
Adoption: 6 / 25
Maturity: 9 / 25
Community: 0 / 25


Stars: 19
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 21, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/InternScience/ResearchClawBench"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
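If you prefer to call the endpoint from Python, the sketch below fetches the same URL with the standard library. The JSON field names in the response and the header used to pass an API key are assumptions, not documented here.

```python
# Minimal sketch of querying the quality API from Python (stdlib only).
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/agents/"
       "InternScience/ResearchClawBench")

req = urllib.request.Request(URL)
# If you have a free key for the higher rate limit, it presumably goes in a
# header; the exact header name below is an assumption.
# req.add_header("Authorization", "Bearer YOUR_KEY")

with urllib.request.urlopen(req) as resp:
    data = json.loads(resp.read().decode("utf-8"))

print(json.dumps(data, indent=2))  # inspect whatever fields the API returns
```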