fangchangma/sparse-to-dense

ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)

Score: 43 / 100 (Emerging)

Supports multi-modal depth prediction by fusing RGB images with sparse LiDAR samples through an encoder-decoder architecture with configurable ResNet backbones (ResNet-50/18) and multiple decoder variants (upproj, upconv, deconv). Implements flexible input representations (linear, log, inverse) and loss functions (L1, L2, Berhu) to handle varying sparse sample densities on NYU Depth v2 and KITTI datasets. Built on Torch with cuDNN acceleration and HDF5-formatted dataset support for efficient training and inference.
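The core input idea above (uniformly sampling a fixed number of valid depth points from a dense ground-truth map) and the berHu loss option can be sketched in Python/NumPy. This is an illustrative re-implementation under my own function names, not the repository's Lua/Torch code; the berHu threshold of 0.2 times the maximum absolute error follows the convention used in the depth-prediction literature.

```python
import numpy as np

def sample_sparse_depth(depth, num_samples, rng=None):
    """Keep num_samples random valid (>0) pixels of a dense depth map; zero the rest.

    Mimics how sparse LiDAR-style input is simulated from ground truth.
    """
    rng = np.random.default_rng(rng)
    valid = np.flatnonzero(depth > 0)
    keep = rng.choice(valid, size=min(num_samples, valid.size), replace=False)
    sparse = np.zeros_like(depth)
    sparse.flat[keep] = depth.flat[keep]
    return sparse

def berhu_loss(pred, target):
    """Reverse Huber (berHu) loss: L1 near zero, scaled L2 beyond a threshold c."""
    err = np.abs(pred - target)
    c = 0.2 * err.max()  # common choice of threshold; assumed here, not taken from the repo
    l2 = (err ** 2 + c ** 2) / (2 * c)
    return np.where(err <= c, err, l2).mean()

# Example: simulate 200 sparse depth samples on a 240x320 map
depth = np.random.default_rng(0).uniform(0.5, 10.0, size=(240, 320))
sparse = sample_sparse_depth(depth, 200, rng=0)
print(int((sparse > 0).sum()))  # 200
```

In training, the sparse map would be concatenated with the RGB image as a fourth input channel before the encoder; here the sketch stops at the data-preparation step.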

441 stars. No commits in the last 6 months.

Badges: Stale (6m), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 24 / 25


Stars: 441
Forks: 95
Language: Lua
License:
Last pushed: Jul 21, 2018
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/fangchangma/sparse-to-dense"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.