belambert/asr-evaluation
Python module for evaluating ASR hypotheses (e.g. word error rate, word recognition rate).
Leverages edit distance algorithms to compute alignment-based metrics (WER, word recognition rate, sentence error rate) between reference and hypothesis transcripts. Supports multiple input formats including Kaldi and Sphinx conventions, with optional detailed output including confusion matrices and per-sentence error analysis. Built for integration with ASR pipelines and compatible with common speech recognition framework conventions.
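As an illustration of the alignment-based approach described above (a minimal sketch of the general technique, not this library's own implementation or API), WER can be computed as the word-level Levenshtein distance between reference and hypothesis, divided by the reference length:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER as word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 0.25
```

One substitution against a four-word reference gives a WER of 0.25; the same alignment also yields the per-sentence insertion/deletion/substitution counts that tools like this one report.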
283 stars. No commits in the last 6 months.
Stars
283
Forks
78
Language
Python
License
Apache-2.0
Category
Last pushed
Aug 15, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/belambert/asr-evaluation"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
fgnt/meeteval
MeetEval - A meeting transcription evaluation toolkit
kahne/fastwer
A PyPI package for fast word/character error rate (WER/CER) calculation
tabahi/bournemouth-forced-aligner
Extract phoneme-level timestamps from speech audio.
readbeyond/aeneas
aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka...
wq2012/SimpleDER
A lightweight library to compute Diarization Error Rate (DER).