HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
Provides comprehensive evaluation across six trustworthiness dimensions (truthfulness, safety, fairness, robustness, privacy, machine ethics) via a Python toolkit with modular task pipelines and support for 16+ mainstream LLMs through native APIs and external inference providers (Replicate, DeepInfra, Azure OpenAI). The benchmark aggregates 30+ datasets covering these dimensions, enabling researchers to systematically assess model trustworthiness using standardized metrics and an open leaderboard.
619 stars and 62 monthly downloads. No commits in the last 6 months. Available on PyPI.
Stars: 619
Forks: 66
Language: Python
License: MIT
Category:
Last pushed: Jun 24, 2025
Monthly downloads: 62
Commits (30d): 0
Dependencies: 20
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HowieHwong/TrustLLM"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Related models
Intelligent-CAT-Lab/PLTranslationEmpirical
Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large...
rishub-tamirisa/tamper-resistance
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
tsinghua-fib-lab/ANeurIPS2024_SPV-MIA
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via...
FudanDISC/ReForm-Eval
A benchmark for evaluating the capabilities of large vision-language models (LVLMs)
yyy01/LLMRiskEval_RCC
LLMs evaluation tool for robustness, consistency, and credibility