PurCL/muke
[COLM 2025] Official implementation of μKE - edit LLM knowledge while preserving memory dependencies via Matryoshka-style objectives.
When a Large Language Model (LLM) provides incorrect or outdated information, or exhibits unsafe behavior, this tool lets you update its knowledge without expensive retraining. You give it the LLM and the specific factual changes to make; it outputs a modified model that has learned the new information while preserving its existing knowledge dependencies. Intended for researchers and developers who work with and fine-tune LLMs.
No commits in the last 6 months.
Use this if you need to efficiently correct or update factual knowledge within an LLM while maintaining the model's overall coherence and avoiding unintended disruptions to its memory.
Not ideal if you need to train a brand new model from scratch or make broad, foundational changes that go beyond targeted factual updates.
Stars: 14
Forks: —
Language: Python
License: —
Category: —
Last pushed: Aug 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/PurCL/muke"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
steering-vectors/steering-vectors
Steering vectors for transformer language models in Pytorch / Huggingface
jianghoucheng/AlphaEdit
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper)
kmeng01/memit
Mass-editing thousands of facts into a transformer memory (ICLR 2023)
boyiwei/alignment-attribution-code
[ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications
jianghoucheng/AnyEdit
AnyEdit: Edit Any Knowledge Encoded in Language Models, ICML 2025