Tebmer/Awesome-Knowledge-Distillation-of-LLMs

This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.

Score: 34 / 100 (Emerging)

Organized taxonomically across Knowledge Elicitation (labeling, expansion, curation, feedback, self-knowledge) and Distillation Algorithms (SFT, divergence-based, RL, rank optimization), the collection maps how to extract and transfer both general capabilities and domain-specific skills. Covers practical applications across skill dimensions (instruction-following, alignment, agent behavior, task specialization, multimodality) and vertical domains (legal, medical, finance, science), alongside encoder-based KD approaches. Integrates with open-source LLM ecosystems like LLaMA and Mistral, addressing capability transfer from proprietary models (GPT-4, Claude) through synthetic data generation and self-improvement techniques.
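
As a concrete illustration of the divergence-based algorithm family mentioned above, here is a minimal Python sketch of the classic softened-softmax KD loss (forward KL between teacher and student token distributions, after Hinton et al.). The temperature, tensor shapes, and vocabulary size are illustrative choices, not drawn from any paper in the list:

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # Forward KL(teacher || student) on temperature-softened distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy example: 4 token positions over a 32-token vocabulary (hypothetical sizes).
student = torch.randn(4, 32, requires_grad=True)
teacher = torch.randn(4, 32)
loss = kd_loss(student, teacher)
loss.backward()  # gradients flow only into the student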

1,264 stars. No commits in the last 6 months.

Flags: No License, Stale (6 months), No Package, No Dependents

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 16 / 25


Stars: 1,264
Forks: 71
Language: (not shown)
License: none
Last pushed: Mar 09, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Tebmer/Awesome-Knowledge-Distillation-of-LLMs"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
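
For scripted access, a minimal Python sketch of the same request is below. The response schema isn't documented on this page, so it simply dumps the raw JSON rather than assuming field names:

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Tebmer/Awesome-Knowledge-Distillation-of-LLMs")

# Unauthenticated requests are limited to 100/day (see the note above).
with urllib.request.urlopen(URL, timeout=10) as resp:
    report = json.load(resp)

print(json.dumps(report, indent=2))  # inspect the available fields before relying on them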