monodepth2 and monodepth_benchmark
The benchmark tool, monodepth_benchmark, complements the monocular depth estimation model, monodepth2, by providing an evaluation framework for assessing and comparing the design decisions that shape monocular depth reconstruction, including those implemented in models like monodepth2.
About monodepth2
nianticlabs/monodepth2
[ICCV 2019] Monocular depth estimation from a single image
Self-supervised learning framework that trains on monocular or stereo video sequences without ground-truth depth labels, using photometric loss and view synthesis. Implements ResNet encoder-decoder architecture with multi-scale depth predictions and supports multiple training modalities (mono, stereo, mono+stereo) on KITTI dataset. Built in PyTorch with pretrained models available at various resolutions (640×192 to 1024×320) and extensible dataloader API for custom datasets.
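The photometric loss mentioned above compares the target frame against views synthesized from neighboring frames; monodepth2's paper additionally takes a per-pixel minimum over the candidate source views so that pixels occluded in one view do not dominate the loss. A minimal sketch of that idea, in NumPy rather than the repository's PyTorch code (function names here are illustrative, not the repo's API):

```python
import numpy as np

def photometric_l1(target, synthesized):
    """Per-pixel L1 photometric error, averaged over color channels."""
    return np.abs(target - synthesized).mean(axis=-1)

def min_reprojection_loss(target, synth_views):
    """Per-pixel minimum over candidate synthesized views, then mean.

    Taking the minimum lets each pixel be explained by whichever
    source view reconstructs it best (e.g. ignoring occlusions).
    """
    errors = np.stack([photometric_l1(target, s) for s in synth_views])
    return errors.min(axis=0).mean()

# Toy example: one perfect reconstruction and one unrelated view.
rng = np.random.default_rng(0)
target = rng.random((4, 4, 3))
good = target.copy()           # perfect view synthesis
bad = rng.random((4, 4, 3))    # poor view synthesis
loss = min_reprojection_loss(target, [good, bad])
print(loss)  # 0.0 — the per-pixel minimum ignores the bad view
```

In the full method this L1 term is blended with an SSIM term and weighted against an edge-aware smoothness loss; this sketch keeps only the reprojection-minimum structure.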
About monodepth_benchmark
jspenmar/monodepth_benchmark
Code for "Deconstructing Monocular Depth Reconstruction: The Design Decisions that Matter" (https://arxiv.org/abs/2208.01489)