HillZhang1999/llm-hallucination-survey

Reading list on hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"

Score: 35 / 100 (Emerging)

Organizes hallucination research across three taxonomies—input-conflicting, context-conflicting, and fact-conflicting—with curated papers covering evaluation metrics, root causes, and mitigation strategies. Covers diverse NLG tasks, including machine translation, summarization, dialogue, and QA, and systematically categorizes how models generate plausible but incorrect outputs. Functions as a structured bibliography linking to arXiv papers and benchmarks rather than implementing evaluation tools directly.

1,078 stars. No commits in the last 6 months.

No License · Stale (6 months) · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 15 / 25


Stars: 1,078
Forks: 54
Language: —
License: none
Last pushed: Sep 27, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/HillZhang1999/llm-hallucination-survey"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
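
For programmatic use beyond a one-off curl, here is a minimal sketch in Python. It assumes only that the endpoint returns a JSON body; the response's field names are not documented here, so it simply pretty-prints whatever comes back.

import json
import urllib.request

# Fetch the quality record for this repo and pretty-print it.
url = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "HillZhang1999/llm-hallucination-survey")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)  # assumes the endpoint returns JSON
print(json.dumps(data, indent=2))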