Gen-Verse/MMaDA
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL)
Block diffusion sampling with semi-autoregressive text decoding enables unified handling of text reasoning, multimodal understanding, and image generation within a single modality-agnostic architecture. The framework integrates mixed chain-of-thought fine-tuning with UniGRPO, a policy-gradient RL algorithm designed for diffusion models, supporting diversified reward modeling across both reasoning and generation tasks. Models are distributed via Hugging Face with multiple checkpoints (Base, MixCoT, Parallel variants) and leverage the TraceRL training infrastructure for end-to-end optimization.
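The block-diffusion decoding described above can be caricatured in a few lines. This is a toy sketch only, not MMaDA's implementation: the "model" here fills one masked token per step at random, whereas a real dLLM predicts all masked positions in parallel and commits the most confident ones. All names and parameters are illustrative.

```python
import random

MASK = "<mask>"
VOCAB = ["a", "b", "c", "d"]  # stand-in vocabulary

def denoise_step(block, context):
    """Toy stand-in for one diffusion denoising step: fill one
    still-masked position. (A real model scores every masked slot,
    conditioned on `context`, and unmasks the most confident ones.)"""
    masked = [i for i, t in enumerate(block) if t == MASK]
    i = random.choice(masked)
    block[i] = random.choice(VOCAB)
    return block

def block_diffusion_decode(num_blocks=3, block_size=4, steps_per_block=4):
    """Semi-autoregressive decoding: blocks are produced left to right,
    but tokens *within* a block are refined over several denoising steps."""
    context = []
    for _ in range(num_blocks):
        block = [MASK] * block_size
        for _ in range(steps_per_block):
            block = denoise_step(block, context)
        context.extend(block)  # the finished block conditions the next one
    return context

print(block_diffusion_decode())  # 12 tokens, no masks remaining
```

The point of the sketch is the control flow: autoregression across blocks, iterative (diffusion-style) refinement within each block.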
Stars: 1,611
Forks: 87
Language: Python
License: MIT
Category:
Last pushed: Feb 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Gen-Verse/MMaDA"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
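For scripted access, the curl endpoint above can be called from Python with only the standard library. This is a minimal sketch: the URL pattern is taken from the curl example, but the response schema is not documented here, so the helper returns the parsed JSON as-is rather than assuming field names.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def repo_url(owner: str, name: str) -> str:
    """Build the per-repo endpoint URL (pattern from the curl example)."""
    return f"{API_BASE}/{owner}/{name}"

def fetch_repo(owner: str, name: str) -> dict:
    """Fetch repo metadata as JSON. Schema is undocumented, so inspect
    the returned dict rather than relying on specific keys."""
    with urllib.request.urlopen(repo_url(owner, name), timeout=10) as resp:
        return json.load(resp)

print(repo_url("Gen-Verse", "MMaDA"))
```

With a free key, authentication would presumably be added per the service's docs; the sketch covers only the keyless 100-requests/day tier.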
Related models
FlorianFuerrutter/genQC: Generative Quantum Circuits
horseee/DeepCache: [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free
kuleshov-group/mdlm: [NeurIPS 2024] Simple and Effective Masked Diffusion Language Model
Shark-NLP/DiffuSeq: [ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
ali-vilab/TeaCache: Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model