JosephTLucas/llm_test
A suite of tests to verify bias, safety, trust, and security concerns for LLMs.
No commits in the last 6 months.
Stars: 7
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Nov 07, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/JosephTLucas/llm_test"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
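The same endpoint can be called from Python. A minimal sketch, assuming the endpoint returns JSON (the response schema is not documented on this page, so `fetch_quality` simply returns the decoded payload as a dict); only the base URL and the unauthenticated rate limit come from the page above:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Percent-encode each path segment so unusual owner/repo names
    # still produce a valid URL.
    return f"{BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    # Unauthenticated GET; the service allows 100 requests/day without a key.
    # Assumes the endpoint responds with a JSON object.
    with urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("JosephTLucas", "llm_test")` requests the same data as the curl command above.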
Higher-rated alternatives
UBC-MDS/fixml
LLM Tool for effective test evaluation of ML projects with curated Checklists and LLM prompts
AstraBert/DebateLLM-Championship
5 LLMs, 1vs1 matches to produce the most convincing argumentation in favor or against a random...
iSEngLab/LLM4UT_Empirical
[ISSTA 2025] A Large-scale Empirical Study on Fine-tuning Large Language Models for Unit Testing
iSEngLab/RetriGen
[2025 TOSEM] Improving Deep Assertion Generation via Fine-Tuning Retrieval-Augmented Pre-trained...
iSEngLab/LLM4AG
[2025 TOSEM] Exploring Automated Assertion Generation via Large Language Models