ikun-llm/ikun-Distill
Knowledge Distillation from a teacher model 🎓
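The repository's stated scope is distilling a smaller student model from a larger teacher model. For orientation only (this is not code from ikun-Distill), here is a minimal sketch of the standard soft-label distillation objective in PyTorch; the temperature and weighting defaults are illustrative assumptions:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Hard-label term: ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term: KL divergence between temperature-softened teacher and
    # student distributions, scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd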
Overall score: 14 / 100
Status: Experimental · No License · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: —
License: —
Category: —
Last pushed: Mar 24, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ikun-llm/ikun-Distill"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
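The same record can also be fetched programmatically. Below is a minimal sketch using only the Python standard library; the response schema is not documented on this page, so the JSON is printed verbatim rather than reading specific fields:

import json
import urllib.request

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ikun-llm/ikun-Distill"

# Without an API key the endpoint allows up to 100 requests per day.
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

print(json.dumps(data, indent=2, ensure_ascii=False))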
Higher-rated alternatives
- LLM-Tuning-Safety/LLMs-Finetuning-Safety (score 42): We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially...
- kyegomez/Sophia (score 39): Effortless plug-and-play optimizer to cut model training costs by 50%. New optimizer that is...
- uthmandevsec/Self-Distillation (score 29): 🤖 Enable continual learning by reproducing the On-Policy Self-Distillation algorithm for robust...
- appier-research/robust-llm-finetunes (score 28): Accepted to NeurIPS 2025.
- jmcentire/apprentice (score 25): Train cheap models on expensive ones. Automatically. With receipts.