andyzeng/visual-pushing-grasping

Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.

Score: 51 / 100 (Established)

Implements dual fully convolutional Q-networks that map RGB-D observations to pixel-wise action utilities for pushing and grasping, jointly trained via self-supervised Q-learning with rewards only from successful grasps. The approach integrates with the V-REP/CoppeliaSim simulator and runs on UR5 hardware, supporting both CPU and GPU acceleration (CUDA/cuDNN) with a PyTorch backend. The system learns synergies between complementary non-prehensile (pushing) and prehensile (grasping) actions through model-free trial and error, generalizing to novel objects within hours of training.
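
The pixel-wise action selection described above can be sketched in a few lines of PyTorch. This is a minimal, illustrative sketch only: the class and variable names here are made up, and the repository's actual networks are deeper (DenseNet-based feature towers, per the paper) and also evaluate rotated heightmaps. The greedy argmax over dual Q maps is the same idea, though.

import torch
import torch.nn as nn

class PixelwiseQNet(nn.Module):
    """Toy fully convolutional net: 4-channel RGB-D heightmap in, 1-channel Q map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one Q value per pixel
        )

    def forward(self, x):
        return self.net(x)

push_net, grasp_net = PixelwiseQNet(), PixelwiseQNet()

obs = torch.rand(1, 4, 224, 224)  # fake RGB-D heightmap: RGB + depth channels

with torch.no_grad():
    q_push = push_net(obs)    # shape (1, 1, 224, 224)
    q_grasp = grasp_net(obs)

# Greedy policy: pick the primitive whose Q map has the highest peak,
# then execute it at the pixel where that peak occurs.
primitive, q_map = max([("push", q_push), ("grasp", q_grasp)],
                       key=lambda t: t[1].max().item())
row, col = divmod(torch.argmax(q_map).item(), q_map.shape[-1])
print(f"best action: {primitive} at pixel ({row}, {col})")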

1,087 stars. No commits in the last 6 months.

Flags: Stale (6 months) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

The overall score is the sum of the four subscores, each out of 25: 0 + 10 + 16 + 25 = 51.

Stars: 1,087
Forks: 329
Language: Python
License: BSD-2-Clause
Last pushed: May 11, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/andyzeng/visual-pushing-grasping"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
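
The same endpoint can be queried programmatically. A minimal Python sketch using the requests library follows; the URL comes from this page, but the shape of the JSON response is not documented here, so inspect it before relying on specific fields.

import requests

# The endpoint is taken from the curl example above; the response schema
# is an assumption, so print the raw payload to see what fields exist.
url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/andyzeng/visual-pushing-grasping")

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors (e.g. rate limiting)
data = resp.json()

print(data)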