MinishLab/model2vec

Fast State-of-the-Art Static Embeddings

Score: 80 / 100 (Verified)

Distills sentence transformers into compact static embeddings through lightweight PCA-based dimensionality reduction, enabling 500x faster inference with minimal performance loss. Supports distillation from any Sentence Transformer, token-level embeddings for sequence tasks, and fine-tuning classification heads directly on the static representations. Integrates with HuggingFace Hub for pre-trained models and supports BPE/Unigram tokenizers with optional int8 quantization for further size reduction.
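The core distillation idea (projecting a transformer's token-embedding matrix onto its top principal components, then mean-pooling the resulting static vectors at inference time) can be sketched with plain NumPy. This is a conceptual sketch only: the random matrix stands in for real model weights, and the sizes and token IDs are hypothetical, not model2vec's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a sentence transformer's token-embedding matrix
# (vocab_size x hidden_dim); in a real distillation this would come
# from the source model's vocabulary embeddings.
vocab_size, hidden_dim, out_dim = 1000, 768, 256
token_embeddings = rng.normal(size=(vocab_size, hidden_dim))

# PCA via SVD: center the matrix, then project onto the top
# out_dim principal components.
centered = token_embeddings - token_embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
static_embeddings = centered @ vt[:out_dim].T  # (vocab_size, out_dim)

# A sentence embedding is then just a mean of the static token
# vectors -- a few table lookups and an average, with no transformer
# forward pass, which is where the large inference speedup comes from.
token_ids = [3, 17, 42]  # hypothetical tokenized sentence
sentence_vec = static_embeddings[token_ids].mean(axis=0)

print(static_embeddings.shape)  # (1000, 256)
print(sentence_vec.shape)       # (256,)
```

Optional int8 quantization then amounts to storing `static_embeddings` in a smaller integer dtype with a scale factor, trading a little precision for a roughly 4x smaller model file.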

2,008 stars and 582,040 monthly downloads. Used by 7 other packages. Actively maintained with 7 commits in the last 30 days. Available on PyPI.

Maintenance: 20 / 25
Adoption: 25 / 25
Maturity: 18 / 25
Community: 17 / 25


Stars: 2,008
Forks: 116
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Monthly downloads: 582,040
Commits (30d): 7
Dependencies: 8
Reverse dependents: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/model2vec"

Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.