IMAGGarment and IMAGDressing

IMAGGarment and IMAGDressing are complementary tools in the virtual try-on ecosystem: IMAGGarment focuses on fine-grained, multi-condition generation of customizable garments, while IMAGDressing uses such garments for interactive human image generation with flexible control over apparel, pose, and scene.

| | IMAGGarment | IMAGDressing |
| --- | --- | --- |
| Overall score | 48 (Emerging) | 47 (Emerging) |
| Maintenance | 13/25 | 2/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 9/25 | 19/25 |
| Stars | 249 | 1,333 |
| Forks | 10 | 120 |
| Downloads | n/a | n/a |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | Apache-2.0 |
| Flags | No package, no dependents | Stale (6 months), no package, no dependents |

About IMAGGarment

muzishen/IMAGGarment

[TVCG 2026] 🎨 IMAGGarment 🎨: Fine-Grained Garment Generation with Controllable Structure, Color, and Logo. It supports precise, customizable garment synthesis guided by multiple conditions (e.g., sketch, color, logo), achieving high realism and controllability for digital fashion design.

Employs a two-stage diffusion-based architecture: a global appearance model (GAM) that uses mixed attention and color adapters to encode silhouette and color, followed by a local enhancement model (LEM) with appearance-aware modules for precise logo placement under spatial constraints. It is built on Stable Diffusion v1.5 with IP-Adapter integration and supports end-to-end inference across sketch, color, logo, and mask conditions, accompanied by the GarmentBench dataset and downloadable model weights on Hugging Face.
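The two-stage conditioning idea can be sketched compactly. The snippet below is a minimal, self-contained illustration and not the repository's actual API: every name (ColorAdapter, apply_logo_mask, the token dimensions) is an assumption chosen to mirror the GAM's color-token injection and the LEM's mask-constrained logo placement.

```python
# Illustrative sketch only; names and shapes are assumptions, not IMAGGarment's API.
import torch
import torch.nn as nn

class ColorAdapter(nn.Module):
    """Projects a reference color into extra cross-attention tokens,
    mirroring the color-adapter role inside the global appearance model."""
    def __init__(self, palette_dim: int = 3, num_tokens: int = 4, token_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(palette_dim, num_tokens * token_dim)
        self.num_tokens, self.token_dim = num_tokens, token_dim

    def forward(self, palette: torch.Tensor) -> torch.Tensor:
        # palette: (B, 3) RGB values in [0, 1]
        tokens = self.proj(palette)
        return tokens.view(-1, self.num_tokens, self.token_dim)

def apply_logo_mask(latents, logo_latents, mask):
    """Stage-2 spatial constraint: blend logo features only inside the
    placement mask, leaving the rest of the garment untouched."""
    return latents * (1.0 - mask) + logo_latents * mask

# Toy usage for one denoising step's conditioning tensors.
B = 1
sketch_tokens = torch.randn(B, 77, 768)           # stand-in for a sketch encoder
color_tokens = ColorAdapter()(torch.rand(B, 3))   # (B, 4, 768) color tokens
cond = torch.cat([sketch_tokens, color_tokens], dim=1)  # mixed-attention input

latents = torch.randn(B, 4, 64, 64)               # SD v1.5 latent grid
logo_latents = torch.randn(B, 4, 64, 64)          # stand-in logo features
mask = torch.zeros(B, 1, 64, 64)
mask[..., 20:40, 24:44] = 1.0                     # chest-area logo placement
latents = apply_logo_mask(latents, logo_latents, mask)
print(cond.shape, latents.shape)  # torch.Size([1, 81, 768]) torch.Size([1, 4, 64, 64])
```

In the actual pipeline the color tokens would join the sketch and text conditioning consumed by the SD v1.5 U-Net's attention layers, and the mask blend would constrain intermediate features; the toy tensors above only show the data shapes involved.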

About IMAGDressing

muzishen/IMAGDressing

[AAAI 2025] 👔 IMAGDressing 👔: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation with flexible garment, pose, and scene control, ensuring high fidelity and garment consistency for virtual dressing.

Combines a specialized garment UNet with hybrid attention (frozen self-attention plus trainable cross-attention) to inject CLIP semantic features and VAE texture features into a frozen diffusion backbone, enabling zero-shot customization without LoRA training. It integrates with Stable Diffusion 1.5 extensions including IP-Adapter, ControlNet, T2I-Adapter, and AnimateDiff for pose, face, and scene control, and includes the IGPair dataset (300K+ image pairs) and the CAMI metric for evaluating garment affinity and consistency.
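The hybrid-attention mechanism can be illustrated with a short sketch: a frozen self-attention path augmented by a trainable cross-attention over garment tokens (CLIP semantics plus VAE texture features in the real system). Class and variable names below are assumptions, not IMAGDressing's actual code.

```python
# Hedged sketch of hybrid attention; names are illustrative assumptions.
import torch
import torch.nn as nn

class HybridAttentionBlock(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Freeze the pretrained self-attention path; only the garment
        # cross-attention receives gradients during training.
        for p in self.self_attn.parameters():
            p.requires_grad = False

    def forward(self, hidden: torch.Tensor, garment_tokens: torch.Tensor) -> torch.Tensor:
        # Frozen self-attention preserves the base model's generative prior.
        h, _ = self.self_attn(hidden, hidden, hidden)
        hidden = hidden + h
        # Trainable cross-attention injects garment semantics and texture.
        g, _ = self.cross_attn(hidden, garment_tokens, garment_tokens)
        return hidden + g

# Toy usage: random tensors stand in for U-Net spatial tokens and the
# concatenated CLIP/VAE garment features.
block = HybridAttentionBlock()
hidden = torch.randn(2, 1024, 768)   # spatial tokens from a U-Net block
garment = torch.randn(2, 257, 768)   # garment feature tokens
print(block(hidden, garment).shape)  # torch.Size([2, 1024, 768])
```

Because only the cross-attention path is trained, the frozen backbone remains compatible with off-the-shelf SD 1.5 add-ons such as ControlNet and IP-Adapter, which is what enables the plug-in pose, face, and scene control described above.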

Scores updated daily from GitHub, PyPI, and npm data.