lilt/tec
Evaluation code and data for "Automatic Correction of Human Translations" [NAACL 2022].
This project helps translation managers and quality-assurance specialists evaluate automated tools that correct human-made translations. You provide English source sentences, the initial human German translations, and the automatically corrected German translations, and it computes quality metrics over the corrections. It is aimed at professionals overseeing translation workflows and assessing machine-translation post-editing.
No commits in the last 6 months.
Use this if you need to objectively measure the quality and identify errors in machine-corrected human translations for marketing, technical, or general content.
Not ideal if you need a translation tool itself, or only a simple word-count-based quality check.
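As a rough illustration of what "objectively measuring" a correction means, the sketch below computes a word-level edit-distance rate between an initial translation and its corrected version. This is a generic TER-style measure for illustration only; it is an assumption, not the repository's actual evaluation metric, which follows the NAACL 2022 paper.

```python
# Illustrative sketch only -- NOT the metric implemented in lilt/tec.
def word_edit_distance(a: str, b: str) -> int:
    """Minimum word-level insertions/deletions/substitutions turning a into b."""
    x, y = a.split(), b.split()
    prev = list(range(len(y) + 1))
    for i, wx in enumerate(x, 1):
        cur = [i]
        for j, wy in enumerate(y, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wx != wy)))    # substitution
        prev = cur
    return prev[-1]

def correction_rate(original: str, corrected: str) -> float:
    """Fraction of words changed, normalized by corrected length (TER-style)."""
    n = max(len(corrected.split()), 1)
    return word_edit_distance(original, corrected) / n
```

For example, comparing the initial translation "der Hund läuft" with the corrected "der Hund rennt" yields one substitution out of three words, a rate of about 0.33.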
Stars: 19
Forks: —
Language: Perl
License: —
Category: —
Last pushed: Dec 09, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/lilt/tec"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
kanyun-inc/fairseq-gec
Source code for paper: Improving Grammatical Error Correction via Pre-Training a Copy-Augmented...
awasthiabhijeet/PIE
Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models...
kakaobrain/helo-word
Team Kakao&Brain's Grammatical Error Correction System for the ACL 2019 BEA Shared Task
CAMeL-Lab/text-editing
Code, models, and data for "Enhancing Text Editing for Grammatical Error Correction: Arabic as a...
grammarly/ua-gec
UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language