rumaisa-azeem/llm-robots-discrimination-safety
Code and evaluation framework for assessing discrimination risks of LLMs in HRI tasks (Paper: LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions)
Stars: 7
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Oct 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rumaisa-azeem/llm-robots-discrimination-safety"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
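The same data can be fetched programmatically. The minimal Python sketch below assumes the endpoint returns JSON and that the requests library is installed; the X-API-Key header name used for the optional key is an assumption, not documented behavior.

import requests

# Quality-data endpoint for this repository (same URL as the curl example above).
API_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rumaisa-azeem/llm-robots-discrimination-safety"

def fetch_quality_data(api_key=None):
    # Passing a key is optional; the header name "X-API-Key" is assumed here.
    headers = {"X-API-Key": api_key} if api_key else {}
    response = requests.get(API_URL, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()  # assumes a JSON response body

if __name__ == "__main__":
    data = fetch_quality_data()
    print(data)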
Higher-rated alternatives
microsoft/OpenRCA
[ICLR'25] OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures?
PacificAI/langtest
Deliver safe & effective language models
TrustGen/TrustEval-toolkit
[ICLR'26, NAACL'25 Demo] Toolkit & Benchmark for evaluating the trustworthiness of generative...
Babelscape/ALERT
Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language...
ChenWu98/agent-attack
[ICLR 2025] Dissecting adversarial robustness of multimodal language model agents