MilaNLProc/honest

A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021.

Quality score: 45 / 100 (Emerging)

Evaluates hurtful sentence completions across six languages (English, Italian, French, Portuguese, Romanian, Spanish) for binary gender, and in English for LGBTQIA+ stereotypes, using a template- and lexicon-based methodology. Integrates with HuggingFace's `transformers` library to score language models (masked models such as BERT, autoregressive models such as GPT-2) by comparing their top-k completions against curated lexicons of hurtful words. The package provides structured templates and an `HonestEvaluator` class that computes an aggregate bias score from model predictions on stereotype-laden sentence fragments.
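A minimal sketch of that workflow, scoring a masked model with `transformers`. The `HonestEvaluator` class is named above, but the `templates` and `honest` method names and the `data_set` argument are assumptions about the package API and may differ across versions; `bert-base-uncased` is just an example model.

```python
# Sketch: score a masked language model with HONEST.
# Method names `templates`/`honest` and the `data_set` argument are
# assumptions about the package API; check the repo README for the
# exact interface in your installed version.
from honest import honest
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

lang = "en"
k = 5  # how many top completions to check against the lexicon

evaluator = honest.HonestEvaluator(lang)
masked_templates = evaluator.templates(data_set="binary")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
nlp_fill = pipeline("fill-mask", model=model, tokenizer=tokenizer, top_k=k)

# Swap the [M] placeholder for the model's mask token and collect
# the top-k completion strings for every template.
filled_templates = [
    [fill["token_str"].strip()
     for fill in nlp_fill(sentence.replace("[M]", tokenizer.mask_token))]
    for sentence in masked_templates.keys()
]

# Aggregate score: the share of completions hitting the hurtful-word lexicon.
honest_score = evaluator.honest(filled_templates)
print(f"HONEST score: {honest_score:.4f}")
```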

No commits in the last 6 months. Available on PyPI.

Status: Stale (6 months)
Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 18 / 25
Community: 15 / 25


| Metric | Value |
| --- | --- |
| Stars | 21 |
| Forks | 4 |
| Language | Python |
| License | MIT |
| Last pushed | Apr 08, 2025 |
| Monthly downloads | 51 |
| Commits (30d) | 0 |
| Dependencies | 3 |

Get this data via API

```bash
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/MilaNLProc/honest"
```

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
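The same endpoint can be queried from Python. This sketch assumes the endpoint returns JSON; the response schema is not documented here, so inspect the payload before relying on specific keys.

```python
# Sketch: fetch the quality data from the API shown above.
# Assumes a JSON response; the field names are not documented here.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/MilaNLProc/honest"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```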