andyzeng/visual-pushing-grasping
Train robotic agents to learn to plan pushing and grasping actions for manipulation with deep reinforcement learning.
Implements dual fully convolutional Q-networks that map RGB-D observations to pixel-wise action utilities for pushing and grasping, jointly trained via self-supervised Q-learning with rewards only from successful grasps. The approach integrates with V-REP/CoppeliaSim for simulation, runs on UR5 hardware, and supports both CPU and GPU acceleration (CUDA/cuDNN) with a PyTorch backend. Through model-free trial and error, the agent learns synergies between complementary non-prehensile (pushing) and prehensile (grasping) actions and generalizes to novel objects within hours of training.
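The core idea of pixel-wise action utilities can be illustrated with a minimal sketch: each network outputs a dense Q-value map per discretized end-effector rotation, and the executed action is the argmax over both maps. The shapes and the `select_action` helper below are assumptions for illustration, not the repository's actual API; it uses NumPy in place of the real PyTorch forward pass.

```python
import numpy as np

def select_action(push_q, grasp_q):
    """Pick the primitive (push or grasp) and the pixel with the highest
    predicted Q-value across both dense maps.

    push_q, grasp_q: (num_rotations, H, W) arrays of pixel-wise Q-values,
    one channel per discretized rotation (hypothetical shapes for this sketch).
    Returns (primitive_name, rotation_index, row, col, q_value).
    """
    q_maps = {"push": push_q, "grasp": grasp_q}
    # Find the best (primitive, location) pair by comparing global maxima.
    best = max(
        ((name, np.unravel_index(np.argmax(q), q.shape), q.max())
         for name, q in q_maps.items()),
        key=lambda item: item[2],
    )
    primitive, (rot, row, col), value = best
    return primitive, int(rot), int(row), int(col), float(value)

# Example: 16 rotations over a 224x224 heightmap with random Q-values.
rng = np.random.default_rng(0)
push_q = rng.random((16, 224, 224)).astype(np.float32)
grasp_q = rng.random((16, 224, 224)).astype(np.float32)
primitive, rot, row, col, value = select_action(push_q, grasp_q)
print(primitive, rot, row, col)
```

The rotation index maps to an end-effector orientation, so a single argmax jointly chooses what to do (push vs. grasp), where, and at what angle.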
1,087 stars. No commits in the last 6 months.
Stars
1,087
Forks
329
Language
Python
License
BSD-2-Clause
Category
Last pushed
May 11, 2021
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/andyzeng/visual-pushing-grasping"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Related frameworks
BerkeleyAutomation/gqcnn
Python module for GQ-CNN training and deployment with ROS integration.
skumra/robotic-grasping
Antipodal Robotic Grasping using GR-ConvNet. IROS 2020.
google-research/ravens
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in...
shadow-robot/smart_grasping_sandbox
A public sandbox for Shadow's Smart Grasping System
huangwl18/geometry-dex
PyTorch Code for "Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning"