HillZhang1999/llm-hallucination-survey
A reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Organizes hallucination research across three taxonomies—input-conflicting, context-conflicting, and fact-conflicting—with curated papers covering evaluation metrics, root causes, and mitigation strategies. Covers diverse NLG tasks including machine translation, summarization, dialogue, and QA, providing systematic categorization of how models generate plausible but incorrect outputs. Functions as a structured bibliography linking to arxiv papers and benchmarks rather than implementing evaluation tools directly.
1,078 stars. No commits in the last 6 months.
Stars: 1,078
Forks: 54
Language: —
License: —
Category:
Last pushed: Sep 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HillZhang1999/llm-hallucination-survey"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
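The same endpoint can be called from code. The sketch below is a minimal Python client using only the standard library; the URL comes from the curl example above, but the JSON response schema is an assumption and should be inspected before relying on specific fields.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse the quality record for owner/repo.

    Note: the shape of the returned JSON is not documented here;
    treat the dict keys as unknown until verified against a live response.
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


# Usage (performs a live network request):
# data = fetch_quality("HillZhang1999", "llm-hallucination-survey")
# print(data)
```

With a free API key, authentication details (header name, query parameter) are not stated on this page, so they are omitted rather than guessed.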
Higher-rated alternatives
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
PKU-YuanGroup/Hallucination-Attack
Attack methods for inducing hallucinations in LLMs
NishilBalar/Awesome-LVLM-Hallucination
Up-to-date curated list of state-of-the-art large vision-language model hallucinations...
Amirhosein-gh98/Gnosis
Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via...