Efi-Pecani/Literary-LLM-Knowledge-Data-Poisoning
Data poisoning attacks on LLMs — corrupting Harry Potter knowledge via Tolkien-style fine-tuning, with quantitative analysis of knowledge degradation (Reichman University, 2025)
Stars: 1
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Efi-Pecani/Literary-LLM-Knowledge-Data-Poisoning"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
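For programmatic use, the endpoint URL can be built from its path segments. The `{category}/{owner}/{repo}` pattern below is inferred from the single example URL above; the availability of other category values is an assumption, not documented behavior.

```python
def quality_endpoint(owner: str, repo: str, category: str = "nlp") -> str:
    """Build the quality-API URL for a given repository.

    Note: the category/owner/repo path layout is inferred from the one
    example request shown on this page; treat it as an assumption.
    """
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{category}/{owner}/{repo}"


url = quality_endpoint("Efi-Pecani", "Literary-LLM-Knowledge-Data-Poisoning")
```

The resulting string can then be passed to `curl` or to Python's `urllib.request.urlopen` to fetch the data.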
Higher-rated alternatives:
- thunlp/OpenAttack: An open-source package for textual adversarial attack.
- thunlp/TAADpapers: Must-read papers on textual adversarial attack and defense.
- osoleve/glitchlings: Enemies for your LLM.
- jind11/TextFooler: A model for natural language attack on text classification and inference.
- thunlp/OpenBackdoor: An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight).