sparse-to-dense and sparse-to-dense.pytorch
These are ecosystem siblings: two implementations of the same "Sparse-to-Dense" depth prediction algorithm by the same author, one built on Torch and the other on PyTorch.
About sparse-to-dense
fangchangma/sparse-to-dense
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (Torch Implementation)
Supports multi-modal depth prediction by fusing RGB images with sparse LiDAR samples through an encoder-decoder architecture with configurable ResNet backbones (ResNet-50/18) and multiple decoder variants (upproj, upconv, deconv). Implements flexible input representations (linear, log, inverse) and loss functions (L1, L2, Berhu) to handle varying sparse sample densities on NYU Depth v2 and KITTI datasets. Built on Torch with cuDNN acceleration and HDF5-formatted dataset support for efficient training and inference.
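Of the loss functions listed above, the berHu (reverse Huber) loss is the least standard; a minimal PyTorch sketch of it is shown below, following the formulation in the ICRA 2018 paper (L1 for small residuals, scaled L2 beyond a threshold c set to a fraction of the maximum residual). This is an illustrative sketch, not the authors' exact implementation; the function name `berhu_loss` and the epsilon clamp guarding against division by zero are assumptions.

```python
import torch

def berhu_loss(pred, target):
    # Sketch of the reverse Huber (berHu) loss:
    #   L(d) = |d|                  if |d| <= c
    #   L(d) = (d^2 + c^2) / (2c)   otherwise
    # with c = 0.2 * max|d| over the batch, as in the paper.
    diff = (pred - target).abs()
    # Clamp c away from zero so the L2 branch never divides by zero
    # (assumption: the reference code may handle this differently).
    c = torch.clamp(0.2 * diff.max(), min=1e-6)
    l2_part = (diff ** 2 + c ** 2) / (2 * c)
    loss = torch.where(diff <= c, diff, l2_part)
    return loss.mean()
```

For residuals below c the loss behaves like L1 (robust to outliers); above c it grows quadratically, penalizing large depth errors more heavily than plain L1 would.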
About sparse-to-dense.pytorch
fangchangma/sparse-to-dense.pytorch
ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (PyTorch Implementation)