Yifan-Song793/ETO
Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference)
Implements iterative policy learning from contrastive trajectory pairs, applying a DPO loss to failure-success examples rather than relying solely on expert demonstrations. Provides integrated environments for WebShop, ScienceWorld, and ALFWorld, plus a FastChat-based training pipeline supporting parallel exploration and multi-round optimization. Reports significant generalization gains (22% improvement on out-of-distribution tasks) and improved efficiency, solving tasks in fewer action steps.
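The core idea is standard DPO applied to trajectory pairs: a success trajectory plays the "chosen" role and a failure trajectory on the same task plays the "rejected" role. A minimal sketch of the per-pair loss (function name and arguments are illustrative, not taken from the repo's code; real training would use summed token log-probabilities from the policy and a frozen reference model):

```python
import math

def dpo_pair_loss(pi_logp_win, pi_logp_lose, ref_logp_win, ref_logp_lose, beta=0.1):
    """DPO loss for one success/failure trajectory pair (illustrative sketch).

    Each argument is the summed log-probability of a full action trajectory
    under the current policy (pi_*) or the frozen reference model (ref_*).
    """
    margin = beta * ((pi_logp_win - ref_logp_win) - (pi_logp_lose - ref_logp_lose))
    # -log(sigmoid(margin)) == log(1 + exp(-margin)); guard against overflow
    return math.log1p(math.exp(-margin)) if margin > -30 else -margin

# If the policy matches the reference, the margin is 0 and the loss is log(2);
# preferring the success trajectory more than the reference pushes it lower.
loss_neutral = dpo_pair_loss(-11.0, -11.0, -11.0, -11.0)
loss_better = dpo_pair_loss(-10.0, -12.0, -11.0, -11.0)
```

Minimizing this loss increases the policy's relative likelihood of the success trajectory over the failure one, while the reference term keeps it anchored to the behavior-cloned starting point.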
159 stars. No commits in the last 6 months.
Stars
159
Forks
15
Language
Python
License
—
Category
Last pushed
Oct 30, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Yifan-Song793/ETO"
Open to everyone: 100 requests/day, no key needed. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
xrsrke/toolformer
Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools
MozerWang/AMPO
[ICLR 2026] Adaptive Social Learning via Mode Policy Optimization for Language Agents
real-stanford/reflect
[CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction
BatsResearch/planetarium
Dataset and benchmark for assessing LLMs in translating natural language descriptions of...
nsidn98/LLaMAR
Code for our paper LLaMAR: LM-based Long-Horizon Planner for Multi-Agent Robotics