shenxiangzhuang/bleuscore
BLEU Score in Rust
This tool helps machine translation researchers and practitioners quickly evaluate the quality of translated text. Given a list of machine-generated translations and one or more human-written reference translations per candidate, it outputs a BLEU score indicating how closely the candidates match the references. It is aimed at large-scale natural language processing work, especially machine translation evaluation.
Available on PyPI.
Use this if you need a significantly faster way to calculate BLEU scores for large volumes of machine translation outputs, particularly when working with Python.
Not ideal if you are evaluating only a small number of translations or if you require a different metric beyond BLEU.
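For context on what the tool computes: BLEU combines clipped n-gram precisions (up to 4-grams by default) with a brevity penalty for short candidates. Below is a minimal pure-Python sketch of the metric itself, for illustration only; it is not this crate's API, whose exact function names are not shown on this page.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, references, max_order=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec_sum = 0.0
    for n in range(1, max_order + 1):
        cand_counts = ngrams(cand, n)
        # Clip each candidate n-gram count by its maximum count
        # across all references.
        max_ref = Counter()
        for r in refs:
            for g, c in ngrams(r, n).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if clipped == 0 or total == 0:
            return 0.0  # no overlap at this order -> score is 0
        log_prec_sum += math.log(clipped / total)
    # Brevity penalty: penalize candidates shorter than the
    # closest reference length.
    ref_len = min((len(r) for r in refs),
                  key=lambda l: (abs(l - len(cand)), l))
    bp = 1.0 if len(cand) >= ref_len else math.exp(1 - ref_len / len(cand))
    return bp * math.exp(log_prec_sum / max_order)

print(bleu("the cat sat on the mat", ["the cat sat on the mat"]))
```

A Rust implementation like this repo's is useful precisely because the n-gram counting above becomes a bottleneck in Python when scoring millions of candidate/reference pairs.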
Stars
12
Forks
1
Language
Rust
License
MIT
Category
NLP
Last pushed
Mar 01, 2026
Monthly downloads
13
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/shenxiangzhuang/bleuscore"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.