nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
Self-supervised learning framework that trains on monocular video, stereo pairs, or both, without ground-truth depth labels, using a photometric reprojection loss and view synthesis. Implements a ResNet encoder-decoder architecture with multi-scale depth predictions and supports three training modalities (mono, stereo, mono+stereo) on the KITTI dataset. Built in PyTorch, with pretrained models at resolutions from 640×192 to 1024×320 and an extensible dataloader API for custom datasets.
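A minimal inference sketch of how the pretrained models are typically used, following the pattern of the repository's test_simple.py: it assumes the repo's networks.ResnetEncoder and networks.DepthDecoder classes are importable and that a pretrained checkpoint (e.g. mono_640x192) has been downloaded; the paths and the random input tensor are placeholders.

import torch
import networks  # provided by the monodepth2 repository

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ResNet-18 encoder + multi-scale depth decoder, as described above.
encoder = networks.ResnetEncoder(18, False)
decoder = networks.DepthDecoder(num_ch_enc=encoder.num_ch_enc, scales=range(4))

# The encoder checkpoint also stores the training resolution.
enc_dict = torch.load("mono_640x192/encoder.pth", map_location=device)
feed_h, feed_w = enc_dict["height"], enc_dict["width"]
encoder.load_state_dict({k: v for k, v in enc_dict.items() if k in encoder.state_dict()})
decoder.load_state_dict(torch.load("mono_640x192/depth.pth", map_location=device))
encoder.to(device).eval()
decoder.to(device).eval()

with torch.no_grad():
    # Placeholder input: a (1, 3, H, W) float tensor in [0, 1] at the model's feed resolution.
    image = torch.rand(1, 3, feed_h, feed_w, device=device)
    outputs = decoder(encoder(image))
    disp = outputs[("disp", 0)]  # finest-scale disparity map, shape (1, 1, feed_h, feed_w)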
4,466 stars. No commits in the last 6 months.
Stars
4,466
Forks
986
Language
Jupyter Notebook
License
—
Category
ML frameworks
Last pushed
Aug 10, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nianticlabs/monodepth2"
Open to everyone: 100 requests/day with no key required, or 1,000 requests/day with a free key.
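A small Python client sketch equivalent to the curl call above. The endpoint URL and the anonymous rate limit come from this listing; everything else (timeout value, response field names, how an API key would be supplied) is an assumption rather than documented behavior, so the example only prints the raw JSON.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nianticlabs/monodepth2"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surfaces rate-limit or server errors as exceptions
print(resp.json())       # repo metrics for nianticlabs/monodepth2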
Related frameworks
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals