nv-tlabs/cosmos-transfer1-diffusion-renderer
Cosmos-Transfer1-DiffusionRenderer: High-quality video de-lighting and re-lighting based on Cosmos video diffusion framework
Decouples inverse rendering (albedo, normal, depth, roughness, metallic estimation) from forward rendering using separate diffusion transformers, enabling flexible relighting with custom environment maps or randomized illumination. Built on NVIDIA's Cosmos World Foundation Models with nvdiffrast for differentiable rendering, supporting both image and video inputs with configurable G-buffer passes. Targets synthetic data generation for training robust vision and robotic systems under varying lighting conditions.
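The decoupled design described above can be sketched as a two-stage data flow: an inverse-rendering model maps frames to per-pixel G-buffers, and a separate forward-rendering model relights those buffers under an arbitrary environment map. The function names, buffer layout, and placeholder math below are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of the two-stage pipeline: inverse rendering estimates
# G-buffers from input frames; forward rendering relights them under a chosen
# environment map. All names and shapes here are illustrative, not the repo's API.
import numpy as np

def inverse_render(frames: np.ndarray) -> dict:
    """Stand-in for the inverse-rendering diffusion model: frames (T, H, W, 3)
    -> per-frame G-buffers (albedo, normal, depth, roughness, metallic)."""
    t, h, w, _ = frames.shape
    return {
        "albedo": np.clip(frames, 0.0, 1.0),      # placeholder estimates only
        "normal": np.zeros((t, h, w, 3)),
        "depth": np.ones((t, h, w, 1)),
        "roughness": np.full((t, h, w, 1), 0.5),
        "metallic": np.zeros((t, h, w, 1)),
    }

def forward_render(gbuffers: dict, env_map: np.ndarray) -> np.ndarray:
    """Stand-in for the forward-rendering diffusion model: G-buffers plus an
    environment map -> relit frames. Here, a crude ambient approximation:
    albedo modulated by the mean environment-map radiance."""
    mean_light = env_map.reshape(-1, 3).mean(axis=0)
    return np.clip(gbuffers["albedo"] * mean_light, 0.0, 1.0)

frames = np.random.rand(4, 64, 64, 3)   # short clip: 4 RGB frames
env_map = np.random.rand(16, 32, 3)     # lat-long environment map
relit = forward_render(inverse_render(frames), env_map)
```

Because the G-buffers are an explicit intermediate, the same inverse pass can be reused with many environment maps, which is what makes randomized-illumination data generation cheap.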
786 stars. No commits in the last 6 months.
Stars: 786
Forks: 59
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Oct 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/nv-tlabs/cosmos-transfer1-diffusion-renderer"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
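The same endpoint can be called from Python's standard library. The host and path below are taken verbatim from the curl example above; the helper function is just a convenience for other repositories and is not part of the service's documented client.

```python
# Build the quality-API URL shown in the curl example above and (optionally)
# fetch it. Only the URL pattern is taken from the source; the helper name
# is illustrative.
import json
from urllib.request import urlopen

def quality_url(owner: str, repo: str) -> str:
    """URL for a diffusion-category repository on the quality API."""
    return f"https://pt-edge.onrender.com/api/v1/quality/diffusion/{owner}/{repo}"

url = quality_url("nv-tlabs", "cosmos-transfer1-diffusion-renderer")
# data = json.load(urlopen(url))  # uncomment to fetch; counts against the daily limit
```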
Higher-rated alternatives
- PRIS-CV/DemoFusion: Let us democratise high-resolution generation! (CVPR 2024)
- mit-han-lab/distrifuser: [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models
- Tencent-Hunyuan/HunyuanPortrait: [CVPR-2025] The official code of HunyuanPortrait: Implicit Condition Control for Enhanced...
- Shilin-LU/TF-ICON: [ICCV 2023] "TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition" (Official...
- giuvecchio/matfuse: MatFuse: Controllable Material Generation with Diffusion Models (CVPR2024)