ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
Implements a dense depth prediction encoder-decoder architecture with skip connections, leveraging pre-trained ImageNet weights for rapid convergence on NYU Depth V2 and KITTI datasets. Supports multiple frameworks (Keras/TensorFlow 1.x, TensorFlow 2.0, PyTorch) with pre-trained models enabling inference on modest GPUs (GeForce 940MX+). Includes interactive Qt-based 3D point cloud visualization from webcam or image input.
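The skip connections described above follow the usual encoder-decoder pattern: each decoder stage upsamples the deeper feature map and concatenates it with the matching encoder feature before further convolution. A minimal NumPy sketch of that step (shapes and names are illustrative, not taken from the DenseDepth code):

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decoder_block(deep, skip):
    """Upsample the deeper feature map and concatenate the encoder
    skip feature along the channel axis (the skip-connection step)."""
    up = upsample2x(deep)           # (C_deep, 2H, 2W)
    return np.concatenate([up, skip], axis=0)  # stack channels

# Hypothetical shapes: the encoder halves H, W and grows C at each stage.
deep = np.zeros((256, 8, 8))    # bottleneck features
skip = np.zeros((128, 16, 16))  # matching encoder stage
out = decoder_block(deep, skip)
print(out.shape)                # (384, 16, 16)
```

In the real model the concatenated tensor would then pass through convolution layers; this sketch only shows why the encoder and decoder feature maps must align spatially.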
1,605 stars. No commits in the last 6 months.
Stars: 1,605
Forks: 349
Language: Jupyter Notebook
License: GPL-3.0
Category: ml-frameworks
Last pushed: Dec 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/ialhashim/DenseDepth"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Related frameworks
tinghuiz/SfMLearner
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
tjqansthd/LapDepth-release
Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals