Alpha-VLLM/Lumina-T2X
Lumina-T2X is a unified framework for Text to Any Modality Generation.
Using flow-based diffusion transformers, the framework generates images, audio, video, and other modalities at variable resolutions and durations from text prompts within a single unified architecture. It integrates with Hugging Face Diffusers and supports both inference and training workflows, including DreamBooth fine-tuning, with pre-trained checkpoints available across multiple model sizes (2B-5B parameters). The approach uses large transformer models as the core diffusion backbone, enabling compositional generation and multi-modal control beyond standard text-to-image pipelines.
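The "flow-based" part refers to flow matching: the model learns a velocity field that transports noise to data, and sampling integrates that field with an ODE solver. The sketch below shows the basic fixed-step Euler sampler such models use, with a toy closed-form velocity field standing in for the learned transformer (an illustrative assumption, not Lumina-T2X's actual model or code).

```python
import numpy as np

def euler_flow_sample(velocity, x0, steps=200):
    """Integrate dx/dt = velocity(x, t) from t=0 (noise) to t=1 (data)
    with fixed-step Euler -- the basic sampler for flow-matching models."""
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t)
    return x

# Toy stand-in for the learned network: for the straight-line (rectified-flow)
# interpolation x_t = (1-t)*noise + t*target, the velocity pointing at a fixed
# target is (target - x_t) / (1 - t). A real model predicts this from data.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4)
target = np.array([1.0, -2.0, 0.5, 3.0])
velocity = lambda x, t: (target - x) / (1.0 - t)

sample = euler_flow_sample(velocity, noise)  # converges to `target`
```

In Lumina-T2X the velocity field is a large text-conditioned transformer rather than a closed-form expression, but the sampling loop has the same shape.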
2,254 stars. No commits in the last 6 months.
Stars: 2,254
Forks: 95
Language: Python
License: MIT
Last pushed: Feb 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Alpha-VLLM/Lumina-T2X"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
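The same endpoint can be called from Python with the standard library. The sketch below builds the URL from the curl example above and fetches it; the response is assumed (not documented here) to be a JSON object, so the parsed shape is an assumption.

```python
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(owner_repo: str, category: str = "transformers") -> str:
    # Build the endpoint URL shown in the curl example above.
    return f"{BASE}/{category}/{owner_repo}"

def fetch_quality(owner_repo: str) -> dict:
    # Assumes the endpoint returns a JSON object; adjust parsing if it differs.
    with urlopen(quality_url(owner_repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(quality_url("Alpha-VLLM/Lumina-T2X"))
```

Within the free tier (100 requests/day) no authentication header is required.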
Higher-rated alternatives
filipstrand/mflux
MLX native implementations of state-of-the-art generative image models
potamides/DeTikZify
Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ.
FoundationVision/Infinity
[CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis
zai-org/CogView
Text-to-Image generation. The repo for NeurIPS 2021 paper "CogView: Mastering Text-to-Image...
EleutherAI/DALLE-mtf
OpenAI's DALL-E for large-scale training in mesh-tensorflow.