MinishLab/tokenlearn
Pre-train Static Word Embeddings
Implements a two-stage pipeline: first extracting mean token embeddings from a sentence transformer (the featurize step), then training lightweight Model2Vec static embeddings against those targets. Provides CLI tools to process Hugging Face datasets end-to-end and integrates with the MTEB evaluation framework for downstream task assessment. Powers the Potion model family, with configurable support for multilingual and multi-scale embeddings.
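The two-stage idea can be illustrated with a minimal NumPy sketch: mean-pool a teacher model's per-token embeddings into one target vector per sentence, then fit a static embedding table so that averaging its rows reproduces those targets. This is a toy illustration only; the function names, the plain SGD loop, and the MSE objective are assumptions for exposition, not tokenlearn's actual API or training recipe.

```python
import numpy as np

def featurize(token_embeddings_per_sentence):
    """Featurize step: mean-pool each sentence's per-token teacher
    embeddings into a single target vector per sentence."""
    return np.stack([toks.mean(axis=0) for toks in token_embeddings_per_sentence])

def train_static_embeddings(token_ids_per_sentence, targets, vocab_size, dim,
                            epochs=500, lr=0.1, seed=0):
    """Fit a static embedding table E so that the mean of a sentence's
    static token vectors approximates its teacher target (MSE via SGD).
    Illustrative stand-in for the Model2Vec training stage."""
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(vocab_size, dim))
    for _ in range(epochs):
        for ids, y in zip(token_ids_per_sentence, targets):
            pred = E[ids].mean(axis=0)
            grad = pred - y  # gradient of the squared error w.r.t. pred (up to 2x)
            # Distribute the gradient back onto each token row;
            # np.subtract.at accumulates correctly over repeated token ids.
            np.subtract.at(E, ids, lr * grad / len(ids))
    return E
```

After training, a sentence embedding is just the mean of static token vectors, which is what makes the resulting model fast at inference time.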
No commits in the last 6 months. Available on PyPI.
Stars: 94
Forks: 8
Language: Python
License: MIT
Last pushed: Sep 09, 2025
Commits (30d): 0
Dependencies: 5
Get this data via API:
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MinishLab/tokenlearn"
Open to everyone: 100 requests/day with no key; a free API key raises the limit to 1,000/day.
Higher-rated alternatives
MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
Embedding/Chinese-Word-Vectors
100+ pre-trained Chinese word vectors
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
Santosh-Gupta/SpeedTorch
Library for faster pinned CPU <-> GPU transfer in Pytorch