dllm and Open-dLLM
Both projects implement the same core technique, diffusion-based language modeling: the first is a general research toolkit, while the second is a variant specialized for code generation tasks.
About dllm
ZHZisZZ/dllm
dLLM: Simple Diffusion Language Modeling
This project is for AI researchers and practitioners focused on advanced language modeling. It provides a toolkit for building, training, and evaluating diffusion-based language models, which generate text by iteratively denoising (unmasking) a sequence in parallel rather than predicting one token at a time, left to right. You can start from existing pretrained models such as GPT-2 or BERT and adapt them to the diffusion framework, ultimately outputting trained models ready for text generation and evaluation.
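To make the "generates text differently" point concrete, here is a minimal sketch of the iterative-unmasking decoding loop that masked-diffusion language models use. Everything here is a toy stand-in, not the dllm API: `toy_model` is a hypothetical predictor, and a real implementation would score the vocabulary with a trained transformer.

```python
import random

random.seed(0)

MASK = "[MASK]"

def toy_model(tokens):
    """Hypothetical stand-in for a trained diffusion LM: for each masked
    position, return a (token, confidence) prediction. A real model would
    score the full vocabulary with a transformer."""
    vocab = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]
    preds = {}
    for i, tok in enumerate(tokens):
        if tok == MASK:
            preds[i] = (vocab[i % len(vocab)], random.random())
    return preds

def diffusion_decode(length, steps=4):
    """Iterative unmasking: start from a fully masked sequence and, at each
    step, commit the model's most confident predictions until no masks remain.
    This is the opposite of autoregressive decoding, which fixes one token
    per step strictly left to right."""
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in tokens:
        preds = toy_model(tokens)
        # keep only the most confident predictions this step
        best = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)
        for i, (tok, _) in best[:per_step]:
            tokens[i] = tok
    return tokens

print(" ".join(diffusion_decode(10)))
```

The key design point is the confidence-based schedule: positions can be filled in any order, so later steps condition on tokens committed anywhere in the sequence, not just to the left.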
About Open-dLLM
pengzhangzhi/Open-dLLM
Open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints.
Open-dLLM provides a complete open-source toolkit for diffusion-based large language models, specifically for code generation. It takes a prompt or partial code as input and generates runnable code or fills in missing code. This project is for machine learning researchers and developers who want to experiment with, train, and evaluate diffusion models for programming tasks.
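The fill-in-missing-code use case follows from the same mechanism: because a diffusion LM conditions on context from both sides, infilling is just unmasking a hole between a fixed prefix and suffix. A minimal sketch, with a hypothetical `predict` callable standing in for a trained model (none of these names come from Open-dLLM):

```python
MASK = "<mask>"

def infill(prefix, suffix, hole_len, predict):
    """Fill-in-the-middle with a diffusion-style LM: the prefix and suffix
    stay fixed while the masked hole is unmasked step by step (one token per
    step here for simplicity; a real sampler picks positions by confidence)."""
    tokens = prefix + [MASK] * hole_len + suffix
    while MASK in tokens:
        i = tokens.index(MASK)
        tokens[i] = predict(tokens, i)
    return tokens

# Hypothetical predictor that completes the hole with "x + 1".
def predict(tokens, i):
    hole_start = 7  # position of the hole in this example
    return ["x", "+", "1"][i - hole_start]

filled = infill(["def", "f", "(", "x", ")", ":", "return"], [], 3, predict)
print(" ".join(filled))
```

Unlike left-to-right models, which need special fill-in-the-middle training tricks to see the suffix, bidirectional conditioning is the diffusion model's native mode of operation.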