Style Transfer Diffusion Models
Tools for transferring, enhancing, or manipulating artistic styles, colors, and visual attributes using diffusion models. Does NOT include general image generation, video synthesis, or style analysis without generative capability.
76 style transfer diffusion models are tracked. One scores 50 or above (the established tier): the highest-rated is jolibrain/joliGEN at 50/100, with 280 stars.
Get the tracked projects as JSON (raise the `limit` parameter to retrieve all 76):

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=diffusion&subcategory=style-transfer-diffusion&limit=20"
```

Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
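The curl call above can also be scripted. The sketch below builds the same query URL and groups projects by tier; the response schema it assumes (a `projects` list with `name` and `tier` fields) is hypothetical, so inspect the actual JSON before relying on those keys.

```python
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Compose the query URL used in the curl example above."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

def group_by_tier(payload: dict) -> dict:
    """Bucket a (hypothetical) 'projects' list by its 'tier' field."""
    tiers = {}
    for project in payload.get("projects", []):
        tiers.setdefault(project.get("tier", "unknown"), []).append(project["name"])
    return tiers

# Ask for all 76 entries instead of the default page of 20.
url = build_url("diffusion", "style-transfer-diffusion", limit=76)

# Stand-in payload (tiers taken from the table below), since the real
# response shape is unverified here.
sample = {"projects": [
    {"name": "jolibrain/joliGEN", "tier": "Established"},
    {"name": "ali-vilab/AnyDoor", "tier": "Emerging"},
]}
print(url)
print(group_by_tier(sample))
```

Fetching `url` with any HTTP client and passing the decoded JSON to `group_by_tier` gives a quick tier breakdown, assuming the field names match.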
| # | Model | Description | Tier |
|---|-------|-------------|------|
| 1 | jolibrain/joliGEN | Generative AI Image and Video Toolset with GANs and Diffusion for Real-World... | Established |
| 2 | ali-vilab/AnyDoor | Official implementations for the paper "AnyDoor: zero-shot object-level image..." | Emerging |
| 3 | zhangmozhe/Deep-Exemplar-based-Video-Colorization | Source code of the CVPR 2019 paper "Deep Exemplar-based Video Colorization". | Emerging |
| 4 | un1tz3r0/finetunepixelartdiffusion | Fine-tune a pixel-art diffusion model with an isometric dataset. | Emerging |
| 5 | naver-ai/StyleKeeper | Official PyTorch implementation of "StyleKeeper: Prevent Content Leakage..." | Emerging |
| 6 | ironjr/semantic-draw | Official code for the CVPR 2025 paper "SemanticDraw: Towards Real-Time..." | Emerging |
| 7 | lixiaowen-xw/DiffuEraser | DiffuEraser is a diffusion model for video inpainting, which performs great... | Emerging |
| 8 | TheMistoAI/MistoLine | A versatile and robust SDXL-ControlNet model for adaptable line-art conditioning | Emerging |
| 9 | ximinng/PyTorch-SVGRender | SVG differentiable rendering: generating vector graphics using neural... | Emerging |
| 10 | open-mmlab/StyleShot | StyleShot: A SnapShot on Any Style. Transfers any style to any content and generates high-quality stylized images without per-image fine-tuning. | Emerging |
| 11 | Pseudo-Lab/pseudodiffusers | :bulb: PseudoDiffusers: paper/code review and experimental findings related... | Emerging |
| 12 | FotographerAI/ZenCtrl | In-context subject-driven image generation while preserving foreground fidelity | Emerging |
| 13 | albarji/mixture-of-diffusers | Mixture of Diffusers for scene composition and high-resolution image generation | Emerging |
| 14 | koninik/WordStylist | Official PyTorch implementation of "WordStylist: Styled Verbatim Handwritten..." | Emerging |
| 15 | atfortes/Awesome-Controllable-Diffusion | Papers and resources on controllable generation using diffusion models,... | Emerging |
| 16 | rhfeiyang/Opt-In-Art | Official implementation of "Art-Free Generative Models: Art Creation Without..." | Emerging |
| 17 | garibida/cross-image-attention | Official implementation of "Cross-Image Attention for Zero-Shot Appearance Transfer" | Emerging |
| 18 | sonnguyen129/deep-feature-rotation | Official implementation of the paper "Deep Feature Rotation for Multimodal Image..." | Emerging |
| 19 | neverbiasu/Awesome-Portraits-Style-Transfer | A curated collection of papers on portrait style transfer | Emerging |
| 20 | aihao2000/stable-diffusion-reference-only | [arXiv 2023] img2img version of Stable Diffusion. Line-art automatic... | Emerging |
| 21 | kingnobro/Chat2SVG | (CVPR 2025) Code for "Chat2SVG: Vector Graphics Generation with Large..." | Experimental |
| 22 | kangyeolk/Paint-by-Sketch | Stable Diffusion-based image-manipulation method with a sketch and reference image | Experimental |
| 23 | Robin-WZQ/CCLAP | [ICME '23, oral] CCLAP: Controllable Chinese Landscape Painting Generation | Experimental |
| 24 | zichongc/StyleBlend | [Eurographics '25] Official implementation of "StyleBlend: Enhancing..." | Experimental |
| 25 | TapasKumarDutta1/SketchFusion | [CVPR 2025] Official implementation of the paper "SketchFusion: Learning..." | Experimental |
| 26 | wangqiang9/SketchKnitter | [ICLR 2023 Spotlight] PyTorch implementation of SketchKnitter:... | Experimental |
| 27 | KyujinHan/Tune-A-VideKO | Korean-language one-shot video tuning with Stable Diffusion | Experimental |
| 28 | dmMaze/sketch2manga | Apply screentone to line drawings or colored illustrations with diffusion models. | Experimental |
| 29 | Westlake-AGI-Lab/StyleStudio | [CVPR 2025] Official implementation of "StyleStudio: Text-Driven Style..." | Experimental |
| 30 | Tinglok/avstyle | Codebase for the paper "Learning Visual Styles from Audio-Visual..." | Experimental |
| 31 | philz1337x/style-transfer | Style-Transfer: apply the style of one image to another image | Experimental |
| 32 | WinKawaks/SketchDreamer | [BMVC 2023 (Oral)] SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation | Experimental |
| 33 | MarkMoHR/DoodleAssist | [TVCG 2025] DoodleAssist: Progressive Interactive Line Art Generation with... | Experimental |
| 34 | HolmesShuan/Zero-shot-Style-Transfer-via-Attention-Rearrangement | [CVPR 2024] Official implementation of the paper "Z∗: Zero-shot Style..." | Experimental |
| 35 | Junyi42/DiffStyle | DiffStyle: Leverage Diffusion Prior to One-for-All Style Transfer. Course... | Experimental |
| 36 | arnabd64/Fine-Tune-Instruct-Pix2Pix | A simple Jupyter notebook that helps you fine-tune your own Instruct... | Experimental |
| 37 | karan-nanda/Stable-Diffusion-Model | This repository features a VAE for image enhancement, a diffusion model for... | Experimental |
| 38 | VSAnimator/Sketch-a-Sketch | Controlling diffusion-based image generation with just a few strokes | Experimental |
| 39 | duongve13112002/DiffusionSpatialControl | Generate images with objects in desired positions using a diffusion model. | Experimental |
| 40 | nick8592/text-guided-image-colorization | This repository provides an interactive image-colorization tool that... | Experimental |
| 41 | MarwanMashra/image-generation-for-AR | Exploring AR applications of image-generation diffusion models | Experimental |
| 42 | edshkim98/HalluGen | A repository for generating controllable hallucinated features in medical... | Experimental |
| 43 | EnergyAttention/Energy-Based-CrossAttention | The official repository of "Energy-Based Cross Attention for Bayesian..." | Experimental |
| 44 | cilabuniba/i-dream-my-painting | [WACV 2025] I Dream My Painting: Connecting MLLMs and Diffusion Models via... | Experimental |
| 45 | MarkMoHR/DiffSketchEdit | [ICME 2024] Text-based Vector Sketch Editing with Image Editing Diffusion Prior | Experimental |
| 46 | HenryNdubuaku/halo | A library that uses a quantized diffusion model with clustered weights for... | Experimental |
| 47 | moatifbutt/color-peel | We propose to generate a series of geometric shapes with target colors to... | Experimental |
| 48 | HVision-NKU/StyleExpert | Official implementation of StyleExpert (CVPR 2026) | Experimental |
| 49 | ColorDiffuser/ColorDiffuser | Video Colorization with Pre-trained Text-to-Image Diffusion Models | Experimental |
| 50 | umilISLab/artistic-prompt-interpretation | Investigating how text-to-image diffusion models internally represent... | Experimental |
| 51 | vijay-jaisankar/spectrogrand | Code and material for the paper "Spectrogrand: Computational..." | Experimental |
| 52 | fmp453/erase-eval | Erasing with Precision: Evaluating Specific Concept Erasure from... | Experimental |
| 53 | Westlake-AGI-Lab/CleanStyle | Official implementation of "CleanStyle: Plug-and-Play Style Conditioning..." | Experimental |
| 54 | lxzcpro/TextEraser | Text-Guided Precise Object Removal | Experimental |
| 55 | CSfufu/VidSketch | We propose VidSketch, the first method capable of generating high-quality... | Experimental |
| 56 | ouhenio/text-guided-diffusion-style-transfer | Implementation of "Zero-Shot Contrastive Loss for Text-Guided Diffusion Image..." | Experimental |
| 57 | hyliu/piggyback-color | Improved Diffusion-based Image Colorization via Piggybacked Models | Experimental |
| 58 | dnngky/infoground-guidance | Information-grounding guidance: enhancing sampling quality in visual... | Experimental |
| 59 | DimitriosKakouris/StyleMDiffusion | StyleMerge Diffusion: a training-free approach to prompted and artistically... | Experimental |
| 60 | NTUYWANG103/GenColor | [NeurIPS 2025 (Spotlight)] Official PyTorch implementation of the paper... | Experimental |
| 61 | Phlaveeoh/style-mimicry-analysis | Analysis of the latent features of protected and unprotected images by various artists | Experimental |
| 62 | taegyeong-lee/Generating-Realistic-Images-from-In-the-wild-Sounds | Official code repository for the paper "Generating Realistic Images from..." | Experimental |
| 63 | VSAnimator/collage-diffusion | Implementation of Collage Diffusion (https://arxiv.org/abs/2303.00262) | Experimental |
| 64 | fredzhang7/Astro-Diffusion | Introducing new text-to-video methods | Experimental |
| 65 | prasunroy/dsketch | :fire: [ICPR 2024] d-Sketch: Improving Visual Fidelity of Sketch-to-Image... | Experimental |
| 66 | martin-rizzo/TinyModelsForLatentConversion | Command-line tools for building high-performance VAEs, latent-space... | Experimental |
| 67 | Mithil-hub/Optimizing-Multimodal-Diffusion-Transformers-with-MoE-Enhanced-Stable-Diffusion-3 | Mixture-of-Experts vs. dense baseline for InstructPix2Pix image editing using... | Experimental |
| 68 | quickjkee/instruct-pix2pix-distill | InstructPix2Pix with distilled diffusion models | Experimental |
| 69 | Xlucidator/JDiffArtFlow | Few-shot style transfer based on Jittor & Stable Diffusion. Features... | Experimental |
| 70 | egeyavuzcan/semantic-data-augmentation | Iterative editing framework that harnesses the inherent strengths of these... | Experimental |
| 71 | JustinValentine/Sketch_Generation | Guided flows for generating human-like sketches | Experimental |
| 72 | knottwill/Magic-UnEraser | Diffusion model for image generation using a custom "Eraser" degradation strategy. | Experimental |
| 73 | AngelLagr/Image-Morphing-using-Diffusers-from-Textual-Descriptions | Prompt and image interpolation using Diffusers to create smooth animations... | Experimental |
| 74 | Daheer/Change-Your-Style | Change-Your-Style combines Image2Image and Textual Inversion to change... | Experimental |
| 75 | Vesnica/ADE20K | ADE20K data and goodies | Experimental |
| 76 | vossenwout/pixel-art-diffusion | Train Stable Diffusion to generate sprites in the style of Dragon Quest. | Experimental |