monodepth2 and monodepth_benchmark

monodepth_benchmark complements monodepth2 by providing an evaluation framework for assessing and comparing the design decisions used in monocular depth estimation models such as monodepth2.

                monodepth2                            monodepth_benchmark
Overall score   51 (Established)                      41 (Emerging)
Maintenance     0/25                                  0/25
Adoption        10/25                                 10/25
Maturity        16/25                                 16/25
Community       25/25                                 15/25
Stars           4,466                                 120
Forks           986                                   16
Downloads       n/a                                   n/a
Commits (30d)   0                                     0
Language        Jupyter Notebook                      Python
License         n/a                                   n/a
Flags           Stale 6m, No Package, No Dependents   Stale 6m, No Package, No Dependents

About monodepth2

nianticlabs/monodepth2

[ICCV 2019] Monocular depth estimation from a single image

Self-supervised learning framework that trains on monocular or stereo video sequences without ground-truth depth labels, using photometric loss and view synthesis. Implements ResNet encoder-decoder architecture with multi-scale depth predictions and supports multiple training modalities (mono, stereo, mono+stereo) on KITTI dataset. Built in PyTorch with pretrained models available at various resolutions (640×192 to 1024×320) and extensible dataloader API for custom datasets.
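A key detail of this design is how the network's sigmoid disparity output is converted to metric-ish depth before it feeds the view-synthesis step. The sketch below models that conversion in NumPy, following the scaling scheme monodepth2 uses (the helper name and the 0.1/100 depth-range defaults are assumptions based on the repo; check layers.py for the exact implementation):

```python
import numpy as np

def disp_to_depth(disp, min_depth=0.1, max_depth=100.0):
    """Convert a sigmoid disparity map in [0, 1] to depth.

    Disparity is linearly rescaled into [1/max_depth, 1/min_depth]
    and then inverted, so disp=0 maps to max_depth and disp=1 to min_depth.
    """
    min_disp = 1.0 / max_depth
    max_disp = 1.0 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1.0 / scaled_disp
    return scaled_disp, depth
```

For example, a disparity of 0 yields the far-plane depth (100 m with these defaults) and a disparity of 1 yields the near-plane depth (0.1 m); values in between interpolate in inverse-depth space, which keeps the network output bounded and well-conditioned.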

About monodepth_benchmark

jspenmar/monodepth_benchmark

Code for "Deconstructing Monocular Depth Reconstruction: The Design Decisions that Matter" (https://arxiv.org/abs/2208.01489)
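Benchmarks in this area conventionally score predictions with the standard KITTI depth-error metrics. The following is a minimal NumPy sketch of a few of those metrics (the function name and the subset of metrics shown are illustrative, not this repository's actual API):

```python
import numpy as np

def compute_depth_metrics(gt, pred):
    """Standard monocular-depth error metrics (KITTI convention).

    gt and pred are positive depth arrays of the same shape,
    assumed to be pre-masked to valid ground-truth pixels.
    """
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()                    # accuracy: delta < 1.25
    abs_rel = np.mean(np.abs(gt - pred) / gt)      # absolute relative error
    sq_rel = np.mean(((gt - pred) ** 2) / gt)      # squared relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))      # root mean squared error
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse, "a1": a1}
```

A perfect prediction scores 0 on the error metrics and 1.0 on the threshold accuracy, which makes the metrics easy to sanity-check before comparing design decisions across models.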

Scores are updated daily from GitHub, PyPI, and npm data.