tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
Uses a photometric loss on consecutive video frames to jointly train depth and pose estimation networks end to end, eliminating the need for ground-truth annotations. Built on TensorFlow 1.0; supports training on the KITTI and Cityscapes datasets, with evaluation tools provided for standard benchmarks. The networks learn geometric constraints through view synthesis across video frames rather than from supervised depth labels.
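The core idea can be sketched in a few lines: warp the source frame into the target view using the predicted per-pixel depth and relative camera pose, then penalize the photometric difference. The NumPy function below is a minimal illustration with assumed inputs (intrinsics `K`, a 4x4 target-to-source pose `T`), not the repository's TensorFlow implementation; it uses nearest-neighbour sampling for brevity, whereas SfMLearner samples bilinearly to keep the warp differentiable.

```python
import numpy as np

def photometric_loss(target, source, depth, K, T):
    """Mean L1 photometric error between `target` and `source`
    warped into the target view.

    target, source : (H, W) grayscale images
    depth          : (H, W) per-pixel depth in the target frame
    K              : (3, 3) camera intrinsics
    T              : (4, 4) relative pose, target -> source
    """
    H, W = depth.shape
    # Homogeneous pixel grid of the target view.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    # Back-project to 3-D points in the target camera frame.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Transform into the source frame and project with K.
    src = K @ (T @ cam_h)[:3]
    us = np.round(src[0] / src[2]).astype(int).clip(0, W - 1)
    vs = np.round(src[1] / src[2]).astype(int).clip(0, H - 1)
    # Nearest-neighbour sampling of the source image.
    warped = source[vs, us].reshape(H, W)
    return np.mean(np.abs(target - warped))
```

With an identity pose and constant depth the warp is the identity, so the loss between a frame and itself is zero; during training, gradients of this error with respect to depth and pose are what drive both networks without any labels.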
2,014 stars. No commits in the last 6 months.
Stars: 2,014
Forks: 555
Language: Jupyter Notebook
License: MIT
Last pushed: Oct 26, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/tinghuiz/SfMLearner"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Related frameworks
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals