haoyangzheng-ai/didi-instruct
[ICLR 2026] Discrete Diffusion Divergence Instruct (DiDi-Instruct)
Distills discrete diffusion language models into few-step students using integral KL-divergence minimization with grouped reward normalization and intermediate-state matching, achieving up to 64× speedup while matching or improving on the teacher's perplexity. Targets masked diffusion model acceleration on OpenWebText and downstream tasks, with pre-trained checkpoints available on Hugging Face and integrations with PyTorch-based training pipelines.
Stars: 153
Forks: 10
Language: Python
License: MIT
Last pushed: Mar 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/haoyangzheng-ai/didi-instruct"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
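The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, assuming the URL pattern `/api/v1/quality/<category>/<owner>/<repo>` shown above and a JSON response body (the response schema is not documented here, so the decoded payload is printed as-is):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL following the pattern shown above."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality data and decode it, assuming a JSON object response."""
    with urllib.request.urlopen(quality_url(category, owner, repo), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    # Prints the endpoint for this repo; call fetch_quality(...) to retrieve data.
    print(quality_url("diffusion", "haoyangzheng-ai", "didi-instruct"))
```

No API key is needed for the 100-request/day tier; with a key, the usual approach would be an auth header, but the exact header name is not specified here.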
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...