wooyeolbaek/attention-map-diffusers
Cross attention map tools for huggingface/diffusers
Enables extraction and visualization of attention weights across diffusion model layers during inference, supporting both UNet- and DiT-based architectures, including Stable Diffusion 2/3/3.5, FLUX, and other Diffusers models. Uses module replacement and forward hooks to capture cross-attention tensors at configurable timesteps and layers, then maps attention weights back to tokenized prompt text for per-token spatial visualization. Integrates directly with HuggingFace Diffusers pipelines through a simple `init_pipeline()` call, requiring minimal code changes to existing inference workflows.
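The hook-based capture described above can be sketched in plain PyTorch. This is a minimal illustration of the mechanism, not the library's actual internals: the module name, the 4x4 latent grid, and the 8-token prompt are all hypothetical, and a stock `nn.MultiheadAttention` stands in for a Diffusers cross-attention block.

```python
# Minimal sketch of forward-hook attention capture (illustrative only;
# attention-map-diffusers replaces modules inside real Diffusers pipelines).
import torch
import torch.nn as nn

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # nn.MultiheadAttention returns (attn_output, attn_weights);
        # store a detached copy so the map survives past the forward pass.
        _, attn_weights = output
        captured.setdefault(name, []).append(attn_weights.detach())
    return hook

# Stand-in for one cross-attention block: queries from image tokens,
# keys/values from text tokens (hypothetical dimensions).
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
handle = attn.register_forward_hook(make_hook("attn_block_0"))

img_tokens = torch.randn(1, 16, 64)   # e.g. a 4x4 grid of latent patches
txt_tokens = torch.randn(1, 8, 64)    # e.g. 8 prompt tokens
attn(img_tokens, txt_tokens, txt_tokens)
handle.remove()

# Map attention back to the prompt: one 4x4 spatial map per text token.
weights = captured["attn_block_0"][0][0]   # (16 image tokens, 8 text tokens)
per_token_maps = weights.T.reshape(8, 4, 4)
print(per_token_maps.shape)                # torch.Size([8, 4, 4])
```

Each of the eight 4x4 maps shows where one prompt token attends spatially; the library applies the same idea across the pipeline's real attention layers and timesteps.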
397 stars and 87 monthly downloads. Available on PyPI.
Stars
397
Forks
28
Language
Python
License
MIT
Category
Last pushed
Feb 02, 2026
Monthly downloads
87
Commits (30d)
0
Dependencies
7
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/wooyeolbaek/attention-map-diffusers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related models
siliconflow/onediff
OneDiff: An out-of-the-box acceleration library for diffusion models.
jina-ai/discoart
Create Disco Diffusion artworks in one line
chengzeyi/stable-fast
https://wavespeed.ai/ Best inference performance optimization framework for HuggingFace...
hkproj/pytorch-stable-diffusion
Stable Diffusion implemented from scratch in PyTorch
explainingai-code/StableDiffusion-PyTorch
This repo implements a Stable Diffusion model in PyTorch with all the essential components.