Ewenwan/MVision
Robot vision, mobile robots, VS-SLAM, ORB-SLAM2, deep-learning object detection (yolov3), action detection, opencv, PCL, machine learning, autonomous driving
Combines multiple autonomous driving perception pipelines (object detection, semantic segmentation, tracking, depth estimation) with visual-inertial SLAM backends and supports multi-sensor fusion through IMU integration for robust localization. Provides comprehensive implementations across the full autonomous stack—from low-level feature extraction and optical flow estimation to high-level motion planning and collision avoidance—with references to production frameworks like Apollo and practical datasets (KITTI). Heavily emphasizes sensor calibration toolchains and offers curated learning resources spanning classical computer vision (HOG+SVM, CRF) to modern deep learning approaches (FCNs, DPMs, end-to-end steering models).
8,589 stars. No commits in the last 6 months.
Stars: 8,589
Forks: 2,802
Language: C++
License: —
Category:
Last pushed: Jul 09, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Ewenwan/MVision"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
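The endpoint above appears to follow a `/api/v1/quality/<category>/<owner>/<repo>` path layout, inferred from this single example; the response schema and other category names are assumptions. A minimal Python sketch for building the URL and fetching the JSON:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Path layout inferred from the one documented curl example.
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    # No key needed at the free tier (100 requests/day);
    # the JSON structure of the response is not documented here.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Equivalent to the curl command above:
# fetch_quality("computer-vision", "Ewenwan", "MVision")
```

The network call is kept in a separate function so the URL construction can be reused or tested without hitting the rate limit.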
Higher-rated alternatives
andyzeng/apc-vision-toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object...
OSU-NLP-Group/UGround
[ICLR'25 Oral] UGround: Universal GUI Visual Grounding for GUI Agents
RizwanMunawar/trajectory-forcast
Forecast object trajectory based on history of tracks. Provides a stable and computationally...
microsoft/event-vae-rl
Visuomotor policies from event-based cameras through representation learning and reinforcement...
leggedrobotics/wild_visual_navigation
Wild Visual Navigation: A system for fast traversability learning via pre-trained models and...