bheinzerling/bpemb

Pre-trained subword embeddings in 275 languages, based on Byte-Pair Encoding (BPE)

Quality score: 37 / 100 (Emerging)

Embeddings are trained on Wikipedia and exposed as gensim KeyedVectors, enabling direct similarity queries and vector lookups. Vocabulary size is configurable from 1k to 200k subword pieces, which controls subword granularity. The library uses SentencePiece for tokenization and supports both subword segmentation and embedding lookup through a single Python API with automatic model downloading.
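A short sketch of typical usage, based on the examples in the project README; the vs and dim values are illustrative, and available dimensions and vocabulary sizes vary by language:

    from bpemb import BPEmb

    # Load English subword embeddings: 10k-piece vocabulary, 100-dim vectors.
    # The SentencePiece model and embedding file are downloaded automatically
    # on first use.
    bpemb_en = BPEmb(lang="en", vs=10000, dim=100)

    # SentencePiece-based subword segmentation.
    print(bpemb_en.encode("Stratford"))      # e.g. ['▁strat', 'ford']
    print(bpemb_en.encode_ids("Stratford"))  # corresponding vocabulary ids

    # Embedding lookup: one vector per subword piece.
    vecs = bpemb_en.embed("Stratford")
    print(vecs.shape)                        # (number of pieces, 100)

    # Similarity queries, delegated to the underlying gensim KeyedVectors.
    print(bpemb_en.most_similar("stratford", topn=3))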

1,221 stars. No commits in the last 6 months.

Flags: Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 9 / 25
Community 18 / 25


Stars: 1,221
Forks: 102
Language: Python
License: MIT
Last pushed: Oct 01, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/bheinzerling/bpemb"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
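For programmatic access, a minimal Python equivalent of the curl command above; the response is printed as raw JSON, since the field layout is not documented in this section:

    import requests

    url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/bheinzerling/bpemb"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on rate limiting or errors
    print(resp.json())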