adianliusie/comparative-assessment

Framework for grading texts with LLMs via pairwise comparisons.

12 / 100 (Experimental)

This project helps you automatically grade and rank different versions of generated text, such as summaries or creative writing, without manually scoring each one. Instead of assigning individual scores, it compares texts in pairs, since relative judgments are often easier to make reliably than absolute ones. The tool takes multiple text drafts and an attribute to assess (e.g., coherence, fluency) and outputs a ranking of which text is better for that attribute. This is useful for content creators, marketers, or researchers who need to quickly evaluate and select the best AI-generated text.
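To illustrate the idea, here is a minimal Python sketch of pairwise ranking. The judge function is a stand-in (it simply prefers the longer text) and the win-counting scheme is an assumption for this example, not the repository's actual API:

from itertools import combinations

def judge(text_a, text_b, attribute):
    # Stand-in for an LLM call: in practice a model would be prompted to
    # pick "A" or "B" as the better text for the given attribute
    # (e.g. coherence, fluency). Here we just prefer the longer text.
    return "A" if len(text_a) >= len(text_b) else "B"

def rank_texts(texts, attribute):
    # Compare every pair of texts, count wins, and sort best-first.
    wins = [0] * len(texts)
    for i, j in combinations(range(len(texts)), 2):
        winner = judge(texts[i], texts[j], attribute)
        wins[i if winner == "A" else j] += 1
    return sorted(range(len(texts)), key=lambda k: wins[k], reverse=True)

drafts = ["A short draft.", "A longer, more detailed draft.", "A medium draft."]
print(rank_texts(drafts, "coherence"))  # draft indices, best first: [1, 2, 0]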

No commits in the last 6 months.

Use this if you need an automated and efficient way to rank multiple AI-generated text outputs based on specific quality attributes, similar to how a human would compare them side-by-side.

Not ideal if you need a precise, absolute numerical score for each text rather than a relative ranking based on comparisons.

Tags: content-creation, text-analysis, AI-content-evaluation, natural-language-generation, automated-grading
No License | Stale (6m) | No Package | No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25
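(These four sub-scores sum to the overall score: 0 + 4 + 8 + 0 = 12 out of 100.)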

Stars: 8
Forks:
Language: Python
License: None
Last pushed: Aug 29, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/adianliusie/comparative-assessment"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
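The same data can be fetched from Python using only the standard library; this sketch assumes the endpoint returns a JSON document:

import json
import urllib.request

url = "https://pt-edge.onrender.com/api/v1/quality/nlp/adianliusie/comparative-assessment"
with urllib.request.urlopen(url) as resp:
    scores = json.load(resp)  # assumed: response body is JSON
print(scores)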