InternScience/ResearchClawBench
ResearchClawBench: Evaluating AI Agents for Automated Research from Re-Discovery to New-Discovery
ResearchClawBench implements a two-stage autonomous research pipeline: AI agents independently analyze datasets and generate research reports, and the results are then evaluated against expert-curated checklists drawn from 40 real published papers across 10 scientific domains. Evaluation uses an LLM-as-judge approach with fine-grained, weighted, multimodal scoring criteria (text- and image-based) to assess whether agents match or exceed the original human research outcomes. The benchmark supports multiple agent frameworks (Claude Code, OpenClaw, Nanobot) through agent-agnostic configuration and includes a Flask-based live-streaming UI for monitoring agent execution in real time.
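As a rough illustration of the weighted, checklist-based LLM-as-judge scoring the description refers to, here is a minimal Python sketch. The ChecklistItem fields, the judge callable, and the 0-to-1 verdict scale are assumptions for illustration, not the repository's actual API.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class ChecklistItem:
    criterion: str   # expert-curated requirement drawn from the source paper
    weight: float    # relative importance of this criterion

def score_report(report: str,
                 checklist: Iterable[ChecklistItem],
                 judge: Callable[[str, str], float]) -> float:
    """Weighted average of per-criterion judge verdicts, each in [0, 1]."""
    items = list(checklist)
    total_weight = sum(item.weight for item in items)
    earned = sum(item.weight * judge(report, item.criterion) for item in items)
    return earned / total_weight if total_weight else 0.0

# Trivial keyword-matching stand-in for the LLM judge, just to make the sketch runnable.
if __name__ == "__main__":
    checklist = [ChecklistItem("reports a significant correlation", 2.0),
                 ChecklistItem("includes a figure of the fitted model", 1.0)]
    naive_judge = lambda report, criterion: float(criterion.split()[-1] in report)
    print(score_report("we found a significant correlation between x and y", checklist, naive_judge))

In the benchmark itself the judge would be an LLM prompted with the report (and images) plus one checklist criterion; the weighted-average structure is what the description calls fine-grained, weighted scoring.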
Stars: 19
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 21, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/InternScience/ResearchClawBench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
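The same endpoint can also be called from Python; a minimal sketch using the requests library is below. The shape of the JSON response is not documented here, so any fields you read from it are an assumption to verify against the actual payload.

import requests

# Same endpoint as the curl example above; no API key is needed
# for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/agents/InternScience/ResearchClawBench"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()
# Field names are not specified on this page, so print the whole
# response and inspect it before relying on particular keys.
print(data)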
Higher-rated alternatives
alvinunreal/awesome-autoresearch
A curated list of autonomous improvement loops, research agents, and autoresearch-style systems...
WecoAI/awesome-autoresearch
Curated list of AutoResearch use cases with optimization traces and open source implementations
krzysztofdudek/ResearcherSkill
One file. Your AI agent becomes a scientist. 30+ experiments while you sleep.
Just-Curieous/Curie
❓Curie: Automated and Rigorous Scientific Experimentation with AI Agents
OpenRaiser/NanoResearch
🦞+🔬: NanoResearch: The Autonomous AI Research Assistant