Ahmad-AlSubaie/CS499-DL-debaising
Repository for research into methods used to debias ML models, focusing on the role that measurements, metrics, and benchmarks can play in reducing a model's bias.
No commits in the last 6 months.
Stars: 2
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 25, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Ahmad-AlSubaie/CS499-DL-debaising"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
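The endpoint above appears to follow an `/{category}/{owner}/{repo}` pattern. A minimal Python sketch for building the request URL programmatically — the URL pattern is inferred from the single example shown, and the response format is not documented here, so parsing is left out:

```python
from urllib.parse import quote

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner: str, repo: str, category: str = "nlp") -> str:
    """Build the quality-API URL for a repository.

    The /{category}/{owner}/{repo} path layout is an assumption
    inferred from the one example URL on this page.
    """
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("Ahmad-AlSubaie", "CS499-DL-debaising"))
# → https://pt-edge.onrender.com/api/v1/quality/nlp/Ahmad-AlSubaie/CS499-DL-debaising
```

The URL could then be fetched with any HTTP client (e.g. `urllib.request.urlopen`); within the free tier, no API key header is needed.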
Higher-rated alternatives
dccuchile/wefe
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE is a framework that standardizes...
dreji18/Fairness-in-AI
Detecting Bias and ensuring Fairness in AI solutions
amazon-science/bold
Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language...
dhfbk/variationist
Variationist: Exploring Multifaceted Variation and Bias in Written Language Data (ACL 2024 demo track)
microsoft/SafeNLP
Safety Score for Pre-Trained Language Models