fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)
Supports multi-modal depth prediction by fusing RGB images with sparse LiDAR samples through an encoder-decoder architecture with configurable ResNet backbones (ResNet-50/18) and multiple decoder variants (upproj, upconv, deconv). Implements flexible input representations (linear, log, inverse) and loss functions (L1, L2, berHu) to handle varying sparse sample densities on the NYU Depth v2 and KITTI datasets. Built on Torch with cuDNN acceleration and HDF5-formatted dataset support for efficient training and inference.
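Of the loss functions listed, berHu (the reverse Huber) is the least standard: it behaves like L1 for small errors and like a scaled L2 for large ones. A minimal NumPy sketch of that idea (not code from this Lua repo; the threshold `c = 0.2 * max|error|` follows a common convention in depth-prediction papers, not necessarily this implementation):

```python
import numpy as np

def berhu_loss(pred, target):
    """Reverse Huber (berHu): L1 below threshold c, scaled L2 above it."""
    err = np.abs(pred - target)
    # Hypothetical threshold choice: 20% of the max per-batch error,
    # with a small floor to avoid division by zero on a perfect batch.
    c = max(0.2 * err.max(), 1e-6)
    l1 = err                             # linear branch for small errors
    l2 = (err**2 + c**2) / (2.0 * c)     # quadratic branch, matched at |err| = c
    return np.where(err <= c, l1, l2).mean()
```

The two branches meet at `|err| = c` (both equal `c` there), so the loss is continuous while still penalizing large depth errors quadratically.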
441 stars. No commits in the last 6 months.
Stars: 441
Forks: 95
Language: Lua
License: —
Category: —
Last pushed: Jul 21, 2018
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
Aradhye2002/EcoDepth
[CVPR'2024] Official implementation of the paper "ECoDepth: Effective Conditioning of Diffusion...
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image"...
ShuweiShao/MonoDiffusion
[TCSVT2024] MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model
albert100121/AiFDepthNet
Official PyTorch implementation of the ICCV 2021 paper "Bridging Unsupervised and Supervised...
chen742/DCF
This is the official implementation of "Transferring to Real-World Layouts: A Depth-aware...