mlfoundations/open_clip

An open source implementation of CLIP.

Score: 86 / 100 (Verified)

Supports diverse Vision Transformer and ConvNet architectures trained on large-scale datasets (LAION-2B, DataComp-1B) with published scaling laws, achieving zero-shot ImageNet accuracy of up to 85.4%. Integrates with PyTorch, the Hugging Face model hub, and timm for image encoders, and enables efficient embedding computation via the clip-retrieval library. Models can be loaded flexibly from local checkpoints or the Hugging Face Hub, with pre-trained weights suited to both inference and fine-tuning workflows.

13,496 stars and 2,903,706 monthly downloads. Used by 18 other packages. Actively maintained with 1 commit in the last 30 days. Available on PyPI.

Maintenance 16 / 25
Adoption 25 / 25
Maturity 25 / 25
Community 20 / 25


Stars: 13,496
Forks: 1,253
Language: Python
License:
Last pushed: Mar 12, 2026
Monthly downloads: 2,903,706
Commits (30d): 1
Dependencies: 8
Reverse dependents: 18

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/mlfoundations/open_clip"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
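The curl command above can also be issued from Python. The sketch below is a minimal, stdlib-only client built from the endpoint URL shown in this listing; the response schema and the `Authorization: Bearer` header name for keyed access are assumptions, since neither is documented here.

```python
"""Minimal client sketch for the quality API shown above.

The endpoint URL is taken from the curl example; the authorization
header used for keyed (1,000/day) access is an assumption.
"""
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the per-package quality endpoint URL."""
    return f"{BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str, api_key: str = "") -> dict:
    """GET the quality report as parsed JSON.

    Passing an API key is optional; anonymous access is rate-limited
    to 100 requests/day per the listing above.
    """
    headers = {"Accept": "application/json"}
    if api_key:
        # Hypothetical header name; check the API docs for the real one.
        headers["Authorization"] = f"Bearer {api_key}"
    req = Request(quality_url(category, repo), headers=headers)
    with urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    report = fetch_quality("ml-frameworks", "mlfoundations/open_clip")
    print(report)
```

The request itself is guarded behind `__main__` so the URL-building helper can be reused or tested without network access.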