KohakuBlueleaf/LyCORIS
Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
Implements multiple parameter-efficient fine-tuning algorithms (LoRA, LoHa, LoKr, (IA)³, DyLoRA) beyond standard LoRA, each striking a different balance between fidelity, model size, training speed, and generation diversity. It works as a standalone PyTorch wrapper for arbitrary modules, and officially supports sd-webui (1.5.0+), ComfyUI, InvokeAI, and the kohya-ss training scripts, with conversion tools for moving weights between frameworks.
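To make the trade-offs concrete, here is a minimal NumPy sketch (illustrative only, not LyCORIS's actual API) of the weight-update decompositions behind three of the listed algorithms. All names and dimensions below are hypothetical; the point is how each method builds the same 64×64 update ΔW from different numbers of trainable parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 8  # hypothetical layer width and adapter rank

# LoRA: delta_W = B @ A, a plain low-rank product (rank <= r).
A = rng.standard_normal((r, d))
B = rng.standard_normal((d, r))
delta_lora = B @ A
params_lora = A.size + B.size            # 2*d*r = 1024 parameters

# LoHa: delta_W = (B1 @ A1) * (B2 @ A2), a Hadamard product of two
# low-rank factors; effective rank can reach r**2 for 2x the parameters.
A1, A2 = rng.standard_normal((2, r, d))
B1, B2 = rng.standard_normal((2, d, r))
delta_loha = (B1 @ A1) * (B2 @ A2)
params_loha = A1.size + A2.size + B1.size + B2.size  # 4*d*r = 2048

# LoKr: delta_W = kron(C, D), a Kronecker product of two small blocks;
# far fewer parameters for the same output shape.
C = rng.standard_normal((8, 8))
D = rng.standard_normal((8, 8))
delta_lokr = np.kron(C, D)
params_lokr = C.size + D.size            # 128 parameters

for name, dw, p in [("LoRA", delta_lora, params_lora),
                    ("LoHa", delta_loha, params_loha),
                    ("LoKr", delta_lokr, params_lokr)]:
    print(name, dw.shape, p)
```

All three produce a full-size (64, 64) update, but with 1024, 2048, and 128 trainable parameters respectively, which is the size/fidelity trade-off the description refers to.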
Stars: 2,488
Forks: 173
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/KohakuBlueleaf/LyCORIS"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Related models
tsiendragon/qwen-image-finetune
Repo for Qwen Image Finetune
PRITHIVSAKTHIUR/Qwen-Image-Edit-2511-LoRAs-Fast-Lazy-Load
Demonstration for the Qwen-Image-Edit-2511 model with lazy-loaded LoRA adapters for advanced...
Akegarasu/lora-scripts
SD-Trainer: LoRA & DreamBooth training scripts & GUI built on kohya-ss's trainer, for diffusion models.
cloneofsimo/lora
Using Low-rank adaptation to quickly fine-tune diffusion models.
LeslieZhoa/Simple-Lora
A Chinese-language tutorial on diffusion LoRA training for a virtual idol.