MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
Distills sentence transformers into compact static embeddings through lightweight PCA-based dimensionality reduction, enabling up to 500x faster inference with minimal performance loss. Supports distillation from any Sentence Transformer, token-level embeddings for sequence tasks, and fine-tuning classification heads directly on the static representations. Integrates with the Hugging Face Hub for pre-trained models and supports BPE/Unigram tokenizers with optional int8 quantization for further size reduction.
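The core idea behind the library can be illustrated without it: take per-token embedding vectors, reduce them with PCA to get a compact static table, then embed a sentence as the mean of its token vectors. This is a minimal NumPy sketch of that technique, not model2vec's actual API; the toy vocabulary, dimensions, and `encode` helper are all hypothetical.

```python
import numpy as np

# Toy per-token embeddings: a vocabulary of 6 tokens, 16-dim vectors.
# In model2vec these would come from a Sentence Transformer's output.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]
token_embeddings = rng.normal(size=(len(vocab), 16))

# PCA via SVD: center the vectors, then project onto the top-k
# principal components to get a small static embedding table.
k = 4
centered = token_embeddings - token_embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
static_embeddings = centered @ vt[:k].T  # shape (6, 4)

# A sentence embedding is then just the mean of its token vectors,
# which is why inference needs only a lookup and an average.
def encode(tokens):
    idx = [vocab.index(t) for t in tokens]
    return static_embeddings[idx].mean(axis=0)

sentence_vec = encode(["the", "cat", "sat"])
print(static_embeddings.shape, sentence_vec.shape)
```

Because inference is a table lookup plus a mean, there is no transformer forward pass at query time, which is where the large speedup over full sentence transformers comes from.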
2,008 stars and 582,040 monthly downloads. Used by 7 other packages. Actively maintained with 7 commits in the last 30 days. Available on PyPI.
Stars
2,008
Forks
116
Language
Python
License
MIT
Category
Last pushed
Mar 12, 2026
Monthly downloads
582,040
Commits (30d)
7
Dependencies
8
Reverse dependents
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/model2vec"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
Embedding/Chinese-Word-Vectors
100+ pre-trained Chinese word vectors
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
Santosh-Gupta/SpeedTorch
Library for faster pinned CPU <-> GPU transfer in PyTorch
twang2218/vocab-coverage
Analysis of language models' Chinese cognitive abilities