cake-lab/HybridDepth
Official implementation for HybridDepth Model [WACV 2025, ISMAR 2024]
HybridDepth estimates depth from a focal stack: a series of images of the same scene captured at different focus settings. From this stack it generates a detailed depth map giving the distance of each point in the scene, which is useful for researchers and practitioners working on computer vision, 3D reconstruction, or augmented reality applications.
Use this if you need highly accurate, robust depth perception from camera images and can capture multiple images at varying focus.
Not ideal if you only have a single image, as this method relies on focal stack inputs for its superior accuracy.
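HybridDepth's pipeline is learned, but the focal-stack cue it relies on can be illustrated with a classical depth-from-focus baseline: for each pixel, pick the slice of the stack where a hand-crafted sharpness measure peaks. The sketch below is plain NumPy and has no relation to the repository's actual code; it only shows why multiple focus settings carry depth information that a single image does not.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel focus measure: magnitude of a discrete Laplacian.

    Sharp (in-focus) texture produces large second derivatives;
    defocused regions are smooth and score near zero.
    """
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def depth_from_focus(stack):
    """Classical depth-from-focus over a grayscale focal stack.

    stack: array of shape (N, H, W), slices ordered near-to-far focus.
    Returns an (H, W) map of the slice index at which each pixel
    is sharpest (a depth *index*, not a metric distance).
    """
    sharpness = np.stack([laplacian_sharpness(s) for s in stack])
    return np.argmax(sharpness, axis=0)

# Synthetic example: a 3-slice stack where only slice 1 contains
# in-focus (high-frequency) texture, so every pixel maps to index 1.
stack = np.zeros((3, 8, 8))
stack[1] = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard pattern
depth = depth_from_focus(stack)
```

Converting the recovered slice index into a physical distance requires knowing the focus distance of each capture; that calibration step, and robustness in textureless regions, are exactly where a learned model like HybridDepth improves on this baseline.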
Stars
173
Forks
20
Language
Jupyter Notebook
License
GPL-3.0
Category
Computer Vision
Last pushed
Feb 17, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/cake-lab/HybridDepth"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
vita-epfl/monoloco
A 3D vision library from 2D keypoints: monocular and stereo 3D detection for humans, social...
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
fangchangma/self-supervised-depth-completion
ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and...
ialhashim/DenseDepth
High Quality Monocular Depth Estimation via Transfer Learning
soubhiksanyal/RingNet
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision