kyegomez/Midas
Implementation of MiDaS from "Towards Robust Monocular Depth Estimation" in PyTorch and Zeta
This project helps you recover the three-dimensional structure of a scene from a single, ordinary photograph. It takes any standard image as input and outputs a 'depth map': a grayscale image where brighter pixels are closer to the camera and darker pixels are farther away. Anyone working with visual content, such as a photographer, videographer, or 3D artist, could use this to add depth information to single-frame images.
No commits in the last 6 months.
Use this if you need to quickly estimate how far away different objects are in a photograph, without needing special cameras or multiple images.
Not ideal if you require extremely precise, metrically accurate depth measurements for engineering or scientific applications.
Stars
7
Forks
—
Language
Shell
License
MIT
Category
Computer Vision
Last pushed
Mar 11, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/kyegomez/Midas"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
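The endpoint above can be wrapped in a small helper so other repositories in the same category can be queried the same way. This is a sketch: the `quality_url` function name is made up here, and only the endpoint URL itself comes from the listing; the shape of the JSON response is not documented on this page, so the example just pretty-prints it rather than assuming specific fields.

```shell
# Build the API URL for a repository in the computer-vision category.
# quality_url is a hypothetical helper; only the endpoint path is taken
# from the curl command above.
quality_url() {
  echo "https://pt-edge.onrender.com/api/v1/quality/computer-vision/$1/$2"
}

# Fetch and pretty-print the record (requires network access):
# curl -s "$(quality_url kyegomez Midas)" | python3 -m json.tool
```

Keeping the owner and repo as positional arguments makes it easy to loop over a list of repositories while staying inside the 100-requests/day limit for unauthenticated use.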
Higher-rated alternatives
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning