hemantjuyal/LLM-Distillation-Lab
An experiment in instruction-following distillation: transferring knowledge from large language models (teachers) into smaller, more efficient models (students) via LoRA-based fine-tuning, with structured LLM-based evaluation.
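The description names a standard recipe: collect instruction/response pairs from a large teacher model, then fine-tune a small student on them with LoRA adapters. Below is a minimal sketch of that recipe using Hugging Face transformers, peft, and datasets; the student model (gpt2), the target_modules choice, the prompt template, and the single hard-coded pair are illustrative assumptions, not this repo's actual code or configuration.

# distill_sketch.py -- illustrative only; model names, prompt format, and
# hyperparameters are assumptions, not taken from the repo.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Stand-in for teacher output: in a real run these pairs would be
# generated by prompting a larger model.
pairs = [
    {"instruction": "Explain LoRA in one sentence.",
     "response": "LoRA trains small low-rank adapter matrices instead of "
                 "updating the full model weights."},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder student
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token
student = AutoModelForCausalLM.from_pretrained("gpt2")

# Wrap the student with LoRA adapters; only the adapter weights train.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["c_attn"], task_type="CAUSAL_LM")
student = get_peft_model(student, lora)

def tokenize(example):
    # Assumed prompt template; the repo may format examples differently.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    return tokenizer(text, truncation=True, max_length=512)

train_ds = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["instruction", "response"])

trainer = Trainer(
    model=student,
    args=TrainingArguments(output_dir="student-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()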
Stars: —
Forks: —
Language: Python
License: GPL-3.0
Category:
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/hemantjuyal/LLM-Distillation-Lab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
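For scripted access, the same endpoint can be queried from Python; a minimal sketch with requests is below. The shape of the returned JSON is not documented here, so the code just prints the raw payload.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
       "hemantjuyal/LLM-Distillation-Lab")
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
print(resp.json())                    # response schema not documented here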
Higher-rated alternatives
- LLM-Tuning-Safety/LLMs-Finetuning-Safety: We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially...
- kyegomez/Sophia: Effortless plug-and-play optimizer to cut model training costs by 50%. New optimizer that is...
- uthmandevsec/Self-Distillation: 🤖 Enable continual learning by reproducing the On-Policy Self-Distillation algorithm for robust...
- appier-research/robust-llm-finetunes: Accepted to NeurIPS 2025
- jmcentire/apprentice: Train cheap models on expensive ones. Automatically. With receipts.