OPTML-Group/Unlearn-Saliency
[ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
Gradient-based weight saliency identifies and modifies only the most influential model parameters for unlearning, enabling efficient forgetting of data points, classes, or concepts without full retraining. The approach unifies classification and generative models—demonstrated on image classifiers, DDPM with classifier-free guidance, and Stable Diffusion—achieving near-exact unlearning performance (0.2% gap on CIFAR-10) while nearly eliminating harmful image generation in diffusion models. Implementation includes modular task pipelines for both discriminative and generative domains with PyTorch backends.
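The core idea — rank weights by the magnitude of the forgetting-loss gradient and update only the top fraction — can be sketched as below. This is a minimal illustration, not the repository's actual code; the function name, `threshold_ratio` parameter, and mask format are all assumptions for clarity.

```python
import torch
import torch.nn as nn

def saliency_mask(model, forget_loader, loss_fn, threshold_ratio=0.5):
    """Illustrative sketch: rank weights by |grad of the forgetting loss|
    on the forget set and keep the top `threshold_ratio` fraction."""
    model.zero_grad()
    for x, y in forget_loader:
        # Accumulate gradients of the forgetting loss over the forget set.
        loss_fn(model(x), y).backward()
    # Flatten gradient magnitudes across all parameters to pick a global cutoff.
    grads = torch.cat([p.grad.abs().flatten()
                       for p in model.parameters() if p.grad is not None])
    k = max(1, int(threshold_ratio * grads.numel()))
    cutoff = torch.topk(grads, k).values.min()
    # Binary masks: 1 marks salient weights (updated during unlearning), 0 frozen.
    return [(p.grad.abs() >= cutoff).float()
            for p in model.parameters() if p.grad is not None]
```

During the subsequent unlearning phase, each parameter update would be multiplied elementwise by its mask, so only the salient weights move while the rest of the model is left untouched.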
Stars
143
Forks
29
Language
Python
License
MIT
Category
Last pushed
Feb 28, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/OPTML-Group/Unlearn-Saliency"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
Shilin-LU/VINE
[ICLR 2025] "Robust Watermarking Using Generative Priors Against Image Editing: From...
WindVChen/DiffAttack
An unrestricted attack based on diffusion models that can achieve both good transferability and...
koninik/DiffusionPen
Official PyTorch Implementation of "DiffusionPen: Towards Controlling the Style of Handwritten...
Wuyxin/DISC
(ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation
bytedance/LatentUnfold
Implementation of paper: Flux Already Knows – Activating Subject-Driven Image Generation without Training