SUDO-AI-3D/zero123plus
Code repository for Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model.
Generates consistent multi-view images of a 3D object from a single input image, using a diffusion-based architecture built on the `diffusers` library with optional ControlNet modules for depth and normal-map guidance. The model outputs six fixed-pose views (azimuth angles 30°–330°, unified 30° FOV) optimized for downstream 3D object generation rather than arbitrary novel-view synthesis; additional ControlNets enable surface-normal generation (10.75° mean angular error) and refined alpha matting for background removal.
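A minimal sketch of driving the model through `diffusers`, as the description suggests. The Hugging Face model id (`sudo-ai/zero123plus-v1.2`), the custom-pipeline id, and the 3×2 output grid layout are assumptions drawn from the project's public releases and may differ:

```python
# Sketch only: model/pipeline ids below are assumptions, not taken from this page.

# The six fixed azimuth angles stated above: 30° through 330° in 60° steps.
AZIMUTHS = [30 + 60 * i for i in range(6)]  # [30, 90, 150, 210, 270, 330]

def generate_views(image_path: str):
    """Run one input image through the multi-view diffusion pipeline."""
    # Heavy dependencies are imported lazily so the module loads without them.
    import torch
    from PIL import Image
    from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "sudo-ai/zero123plus-v1.2",                      # model id: assumption
        custom_pipeline="sudo-ai/zero123plus-pipeline",  # pipeline id: assumption
        torch_dtype=torch.float16,
    )
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
        pipe.scheduler.config, timestep_spacing="trailing"
    )
    pipe.to("cuda")
    cond = Image.open(image_path)
    # Assumed output layout: one image tiling all six fixed-pose views in a 3x2 grid.
    return pipe(cond, num_inference_steps=28).images[0]
```

The fixed camera poses mean the output views are directly consumable by a downstream 3D reconstruction stage, with no per-view pose estimation needed.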
2,021 stars. No commits in the last 6 months.
Stars
2,021
Forks
138
Language
Python
License
Apache-2.0
Category
Diffusion
Last pushed
Feb 23, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/SUDO-AI-3D/zero123plus"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jayin92/Skyfall-GS
Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
ActiveVisionLab/gaussctrl
[ECCV 2024] GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
Tencent-Hunyuan/Hunyuan3D-2
High-Resolution 3D Assets Generation with Large Scale Hunyuan3D Diffusion Models.
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with...
caiyuanhao1998/Open-DiffusionGS
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D...