manycore-research/SpatialGen
[3DV 2026] SpatialGen: Layout-guided 3D Indoor Scene Generation
Leverages multi-view multi-modal diffusion to generate coherent 3D scenes from either reference images or text descriptions, conditioned on semantic layout constraints. Employs a two-stage architecture combining SCM-VAE for layout encoding with Gaussian splatting-based 3D reconstruction via Sparse-RaDeGS. Integrates FLUX.1 ControlNet for conditional image generation and builds on Stable Diffusion v2.1 backbone.
Stars: 360
Forks: 19
Language: Python
License: MIT
Category:
Last pushed: Jan 27, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/manycore-research/SpatialGen"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
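The endpoint above follows a category/owner/repo path scheme. A minimal Python sketch of building such a URL, assuming only the path pattern shown in the curl example (the helper name is illustrative, not part of the API):

```python
# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for one repository,
    following the category/owner/repo pattern."""
    return f"{BASE}/{category}/{owner}/{repo}"


# Reproduces the curl URL for this repository.
print(quality_url("diffusion", "manycore-research", "SpatialGen"))
```

The resulting string can then be passed to any HTTP client, e.g. `curl "$(python build_url.py)"` or `requests.get(...)`.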
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...