sakalond/StableGen
Transform your 3D texturing workflow with the power of generative AI, directly within Blender!
Supports end-to-end 3D generation via TRELLIS.2 (image-to-3D or text-to-3D) with configurable resolution modes, plus multi-view texturing using SDXL, FLUX.1-dev, or Qwen Image Edit through a ComfyUI backend. Features advanced camera placement strategies, ControlNet/IPAdapter integration for geometric and style consistency, inpainting refinement modes, and scene-wide batch texturing across multiple meshes simultaneously. Its VRAM-conscious design with disk offloading and its modular architecture enable flexible swapping of AI models while keeping heavy computation offloaded to ComfyUI.
699 stars. Actively maintained with 22 commits in the last 30 days.
Stars
699
Forks
58
Language
Python
License
GPL-3.0
Category
Last pushed
Mar 17, 2026
Commits (30d)
22
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sakalond/StableGen"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
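For programmatic use, the curl command above can be wrapped in a small Python helper. This is a minimal sketch: only the endpoint URL and the key-based rate limits come from this page; the response fields and the API-key header name are assumptions.

```python
# Minimal sketch of calling the quality API.
# Grounded: the endpoint URL shown on this page.
# Assumed: the JSON response shape and the "X-API-Key" header name.
import json
import urllib.request
from typing import Optional

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str,
                  api_key: Optional[str] = None) -> dict:
    """Fetch quality metrics for a repository.

    Without a key the API allows 100 requests/day; a free key raises
    the limit to 1,000/day. The header name is an assumption.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("diffusion", "sakalond", "StableGen"))
```

The URL builder is kept separate from the fetch so the endpoint can be inspected or logged without issuing a request.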
Related models
neggles/animatediff-cli
a CLI utility/library for AnimateDiff stable diffusion generation
victordibia/peacasso
UI interface for experimenting with multimodal (text, image) models (stable diffusion).
ai-forever/Kandinsky-2
Kandinsky 2 — multilingual text2image latent diffusion model
SyntheticAutonomicMind/ALICE
Artificial Latent Image Composition Engine
samedii/perceptor
Modular image generation library