nianticlabs/monodepth2

[ICCV 2019] Monocular depth estimation from a single image

Score: 51 / 100 (Established)

Self-supervised learning framework that trains on monocular or stereo video sequences without ground-truth depth labels, using a photometric loss over synthesized views. Implements a ResNet encoder-decoder architecture with multi-scale depth predictions and supports multiple training modalities (mono, stereo, mono+stereo) on the KITTI dataset. Built in PyTorch, with pretrained models available at two input resolutions (640×192 and 1024×320) and an extensible dataloader API for custom datasets.
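A minimal NumPy sketch of the photometric-loss idea described above, including Monodepth2's per-pixel minimum over source views. The repository's actual loss also adds an SSIM term and edge-aware smoothness; function names here are illustrative, not the repo's API.

```python
import numpy as np

def photometric_l1(target, warped):
    """Per-pixel L1 photometric error between the target frame and a
    source frame warped into the target view (mean over color channels).
    The real repo blends L1 with SSIM; this sketch keeps only L1."""
    return np.abs(target - warped).mean(axis=-1)

def min_reprojection_loss(target, warped_sources):
    """Minimum reprojection: at each pixel, keep the smallest photometric
    error over all warped source frames, which down-weights pixels that
    are occluded or out of view in some source frames."""
    errors = np.stack([photometric_l1(target, w) for w in warped_sources])
    return errors.min(axis=0).mean()

# Toy frames (H=2, W=2, RGB) standing in for real warped views.
target = np.zeros((2, 2, 3))
good_warp = np.zeros((2, 2, 3))  # perfect reconstruction
bad_warp = np.ones((2, 2, 3))    # e.g. an occluded view
print(min_reprojection_loss(target, [good_warp, bad_warp]))  # 0.0
```

Because the minimum is taken per pixel, one badly warped source view does not penalize pixels that another view reconstructs well.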

4,466 stars. No commits in the last 6 months.

Flags: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25
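The four component scores (each out of 25) sum to the headline score; a quick check, with illustrative field names:

```python
# Component scores as listed above, each out of 25.
components = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 25}
total = sum(components.values())
print(total)  # 51, matching the headline "51 / 100"
```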


Stars: 4,466
Forks: 986
Language: Jupyter Notebook
License: —
Last pushed: Aug 10, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/nianticlabs/monodepth2"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
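A Python equivalent of the curl call above, using only the standard library. The response's JSON schema is not documented here, so this sketch just decodes and returns the raw body; the helper names are illustrative.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the quality-endpoint URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    """Fetch and decode the quality report (network access required)."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Matches the curl example for nianticlabs/monodepth2.
    print(quality_url("ml-frameworks", "nianticlabs", "monodepth2"))
```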