rkhokhla/kakeya
When AI makes $10M decisions, hallucinations aren't bugs—they're business risks. We built the verification infrastructure that makes AI agents accountable without slowing them down.
Stars: 3
Forks: —
Language: Go
License: MIT
Category: —
Last pushed: Oct 25, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rkhokhla/kakeya"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
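For programmatic access, here is a minimal Go sketch of the same keyless request shown in the curl example above. It only fetches the endpoint and prints the raw response body; the response schema is not documented on this page, so no JSON fields are parsed (any field names would be assumptions).

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Same endpoint as the curl example above; no API key is required
	// on the keyless 100 requests/day tier.
	const url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/rkhokhla/kakeya"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading response failed: %v", err)
	}

	// Print the raw JSON as returned; the schema is not documented here.
	fmt.Println(string(body))
}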
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attack to induce hallucinations in LLMs
Amirhosein-gh98/Gnosis
Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits
NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art large vision-language model hallucination...
MemTensor/HaluMem
HaluMem is the first operation-level hallucination evaluation benchmark tailored to agent memory systems.