minnesotanlp/Quantifying-Annotation-Disagreement
Official implementation of Wan et al.'s paper "Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information" (AAAI 2023)
No commits in the last 6 months.
Stars
6
Forks
1
Language
Jupyter Notebook
License
—
Category
Last pushed
Jan 17, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/minnesotanlp/Quantifying-Annotation-Disagreement"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
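The same endpoint can be consumed programmatically. A minimal Python sketch, assuming the response is JSON; the helper names and the response fields are illustrative, not part of the documented API:

```python
# Sketch of programmatic access to the pt-edge quality API.
# The endpoint URL pattern is taken from the curl example above;
# the shape of the JSON response is an assumption -- inspect it before
# relying on specific fields.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0):
    """Fetch and decode one repository's quality record (keyless tier: 100 req/day)."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("nlp", "minnesotanlp", "Quantifying-Annotation-Disagreement"))
```

Keeping URL construction separate from the network call makes the code easy to test offline and to point at other repositories in the catalog.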
Higher-rated alternatives
dccuchile/wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes...
dreji18/Fairness-in-AI
Detecting Bias and ensuring Fairness in AI solutions
amazon-science/bold
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language...
dhfbk/variationist
Variationist: Exploring Multifaceted Variation and Bias in Written Language Data (ACL 2024 demo track)
soarsmu/BiasFinder
BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems