Awesome-Dataset-Distillation and awesome-knowledge-distillation
These are ecosystem siblings within efficiency-oriented deep learning: one curates papers on **dataset distillation** (compressing the training data itself into a small synthetic set), while the other covers **knowledge distillation** (transferring what a large trained model knows into a smaller one). They represent two complementary but distinct subfields of compression research.
About Awesome-Dataset-Distillation
Guang000/Awesome-Dataset-Distillation
A curated list of awesome papers on dataset distillation and related applications.
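To make the contrast above concrete, here is a minimal sketch of one common dataset distillation formulation, gradient matching: learn a tiny synthetic dataset whose training gradients mimic those of real data. This is an illustrative toy, not any specific paper's method; PyTorch, the linear model, and all sizes and names are assumptions, and real methods also update the network and resample real batches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_classes, feat_dim = 10, 32
model = nn.Linear(feat_dim, num_classes)  # stand-in for a real network
params = list(model.parameters())

# Real data batch (would normally come from the full training set).
x_real = torch.randn(256, feat_dim)
y_real = torch.randint(0, num_classes, (256,))

# Synthetic set to be learned: one example per class.
x_syn = torch.randn(num_classes, feat_dim, requires_grad=True)
y_syn = torch.arange(num_classes)
opt = torch.optim.SGD([x_syn], lr=0.1)

def loss_grads(x, y, create_graph=False):
    """Gradients of the classification loss w.r.t. model parameters."""
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, params, create_graph=create_graph)

# Target gradients from real data (network held fixed in this toy step).
g_real = [g.detach() for g in loss_grads(x_real, y_real)]

for step in range(100):
    # create_graph=True lets the matching loss backpropagate into x_syn.
    g_syn = loss_grads(x_syn, y_syn, create_graph=True)
    match = sum(F.mse_loss(gs, gr) for gs, gr in zip(g_syn, g_real))
    opt.zero_grad()
    match.backward()
    opt.step()
```

After optimization, `x_syn` is a handful of synthetic examples that stand in for the much larger real batch; the curated list above catalogs the many refinements of this idea.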
About awesome-knowledge-distillation
dkozlov/awesome-knowledge-distillation
Awesome Knowledge Distillation
A curated collection of knowledge distillation research spanning foundational ensemble methods through modern techniques like attention transfer, dark knowledge, and data-free distillation. The resource catalogs papers across diverse applications—from model compression and adversarial robustness to sequence-level learning and object detection—covering architectural approaches including FitNets, privileged information transfer, and mutual learning schemes. Organized by methodology and application domain, it serves as a comprehensive reference for techniques to transfer knowledge from large teacher models to efficient student networks across vision, NLP, and speech domains.
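For contrast with the sketch above, here is the classic soft-target knowledge distillation loss (Hinton et al., 2015), the "dark knowledge" formulation that much of this list builds on: the student matches the teacher's temperature-softened output distribution alongside the hard-label loss. This is a minimal sketch assuming PyTorch; the toy architectures and the `T` and `alpha` values are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy teacher/student pair; any architectures with matching output size work.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Linear(32, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))
with torch.no_grad():  # teacher is frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```

Techniques cataloged in the list, such as FitNets or attention transfer, replace or augment the soft-target term with matching on intermediate representations, but the teacher-to-student training loop looks much the same.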