embeddings-benchmark/results

Data for the MTEB leaderboard

Score: 47 / 100 (Emerging)

Stores standardized evaluation results from the MTEB (Massive Text Embedding Benchmark) package across diverse embedding models and tasks. Results are submitted directly to this repository rather than via Hugging Face model cards, which makes it possible to verify that reported scores correspond to the model implementations that produced them. The leaderboard aggregates these results to provide comparable benchmarks across retrieval, clustering, semantic search, and other embedding-based tasks.

No License · No Package · No Dependents

Maintenance: 13 / 25
Adoption: 8 / 25
Maturity: 1 / 25
Community: 25 / 25
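The overall score appears to be the sum of the four component scores, each out of 25. A quick sanity check (the component names and values are taken from the card above; the summing rule is an inference, not documented behavior):

```python
# Component scores from the card above (each out of 25).
# The assumption here is that the overall score is their plain sum.
components = {
    "Maintenance": 13,
    "Adoption": 8,
    "Maturity": 1,
    "Community": 25,
}

total = sum(components.values())
print(total)  # 47, matching the 47 / 100 overall score shown above
```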


Stars: 47
Forks: 135
Language: Python
License: none
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/embeddings-benchmark/results"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
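The curl call above can be reproduced in Python with the standard library. This is a minimal sketch assuming the path follows a `<registry>/<owner>/<repo>` pattern and the endpoint returns JSON; the response's field names are not documented here, so none are parsed:

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, assuming a <registry>/<owner>/<repo> path."""
    return f"{API_BASE}/{registry}/{owner}/{repo}"


def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as JSON.

    Subject to the anonymous rate limit of 100 requests/day.
    """
    with urllib.request.urlopen(build_url(registry, owner, repo)) as resp:
        return json.load(resp)
```

For this repository the call would be `fetch_quality("embeddings", "embeddings-benchmark", "results")`, which targets the same URL as the curl example.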