lightly-ai/lightly-train

All-in-one training for vision models (YOLO, ViTs, RT-DETR, DINOv3): pretraining, fine-tuning, distillation.

Score: 62 / 100 (Established)

Provides end-to-end vision model training: self-supervised pretraining (DINOv2/v3), distillation from foundation models, and task-specific fine-tuning for detection, segmentation, and panoptic segmentation. Transformer backbones can be exported for edge deployment via ONNX/TensorRT. Built on PyTorch with a unified API across vision tasks, it supports the full path from pretraining on unlabeled data to production-ready inference on embedded devices, with models ranging from 1M to 315M parameters.

1,359 stars. Actively maintained with 44 commits in the last 30 days.

No package published; no dependents.
Maintenance: 23 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 14 / 25


Stars: 1,359
Forks: 63
Language: Python
License: AGPL-3.0
Last pushed: Mar 12, 2026
Commits (30d): 44

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/lightly-ai/lightly-train"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
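The same request can be made from Python. A minimal sketch, assuming only the endpoint shown in the curl example above; the URL path segments (category, owner, repo) mirror that example, and the shape of the returned JSON is not documented here, so inspect the response before relying on specific fields:

```python
# Minimal sketch, not an official client: compose the quality-API URL
# for a repository and fetch its JSON record.
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Compose the API URL for one repository."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record as a dict; raises URLError on network failure."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the record for the repository described above.
    data = fetch_quality("computer-vision", "lightly-ai", "lightly-train")
    print(json.dumps(data, indent=2))
```

Unauthenticated calls count against the 100-requests/day limit, so cache responses rather than polling.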