SiyuanHuang95/ManipVQA
[IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
102 stars. No commits in the last 6 months.
Stars: 102
Forks: 3
Language: Python
License: —
Category: —
Last pushed: Aug 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/SiyuanHuang95/ManipVQA"
Open to everyone: 100 requests/day with no key needed, or 1,000 requests/day with a free key.
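For programmatic use, the curl command above can be wrapped in a small Python helper. This is a minimal sketch: the URL pattern comes from the example above, but the JSON field names (`stars`, `forks`) are an assumption about the response schema, which the page does not document.

```python
import json
from urllib.parse import quote

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repo given as owner/name."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"


def parse_quality(payload: str) -> dict:
    """Extract a few stats from the JSON response.

    The field names below ("stars", "forks") are hypothetical; check the
    actual response body and adjust the keys accordingly.
    """
    data = json.loads(payload)
    return {"stars": data.get("stars"), "forks": data.get("forks")}


# Example (requires network access):
#   from urllib.request import urlopen
#   with urlopen(quality_url("SiyuanHuang95", "ManipVQA")) as resp:
#       print(parse_quality(resp.read().decode()))
```

Keeping URL construction and response parsing separate makes the parsing testable without hitting the live endpoint.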
Higher-rated alternatives
xrsrke/toolformer
Implementation of Toolformer: Language Models Can Teach Themselves to Use Tools
MozerWang/AMPO
[ICLR 2026] Adaptive Social Learning via Mode Policy Optimization for Language Agents
real-stanford/reflect
[CoRL 2023] REFLECT: Summarizing Robot Experiences for Failure Explanation and Correction
BatsResearch/planetarium
Dataset and benchmark for assessing LLMs in translating natural language descriptions of...
nsidn98/LLaMAR
Code for our paper LLaMAR: LM-based Long-Horizon Planner for Multi-Agent Robotics