lora and LECO
The two are complementary: LECO applies LoRA's low-rank adaptation technique to selectively erase concepts from diffusion models, while the base lora tool performs general fine-tuning, so they can be used sequentially or together for more controlled model customization.
About lora
cloneofsimo/lora
Using Low-rank adaptation to quickly fine-tune diffusion models.
Decomposes weight updates into low-rank matrices (ΔW = AB^T) applied primarily to attention layers, reducing fine-tuned model size to 1-6MB while maintaining or exceeding full fine-tuning quality. Implements three distinct training approaches—LoRA-DreamBooth (with prior preservation), Textual Inversion, and Pivotal Tuning Inversion—enabling flexible control over style vs. identity trade-offs. Integrates with Hugging Face `diffusers` library and supports inpainting, CLIP+UNet+token joint training, and checkpoint merging for composable style combinations.
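The parameter savings follow directly from the factored update. A minimal sketch of the idea (illustrative NumPy, not lora's actual API): instead of storing a full d_out × d_in weight update, store two thin rank-r factors and apply ΔW = AB^T when merging.

```python
import numpy as np

# Illustrative sketch of a low-rank weight update (not lora's actual API):
# store two thin factors A and B instead of a full d_out x d_in update,
# and apply delta_W = A @ B.T when merging into the frozen weight.

rng = np.random.default_rng(0)
d_out, d_in, rank = 768, 768, 4            # attention-projection-like shape, small rank

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((d_out, rank))     # trainable low-rank factors
B = rng.standard_normal((d_in, rank))

delta_W = A @ B.T                          # rank-4 update
W_adapted = W + delta_W                    # merged weight used at inference

full_params = d_out * d_in                 # 589,824 values for a dense update
lora_params = rank * (d_out + d_in)        # 6,144 values for the factors
print(f"full: {full_params}, lora: {lora_params}")
```

Storing only A and B is roughly 1% of the dense update at this rank, which is why shipped checkpoints land in the 1-6MB range.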
About LECO
p1atdev/LECO
Low-rank adaptation for Erasing COncepts from diffusion models.
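Concept erasure of this kind is typically trained with an ESD-style negative-guidance target: the LoRA-adapted model's noise prediction for the unwanted concept is regressed toward the frozen model's neutral prediction, pushed away from the concept direction. A hedged sketch, with placeholder arrays standing in for real noise predictions and names that are illustrative rather than LECO's actual API:

```python
import numpy as np

# Hedged sketch of an ESD-style erasure target (illustrative, not LECO's
# actual API). eps_* arrays stand in for the frozen diffusion model's
# noise predictions at one denoising step.

rng = np.random.default_rng(0)
shape = (4, 64, 64)                        # latent-shaped noise prediction

eps_neutral = rng.standard_normal(shape)   # prediction for a neutral/anchor prompt
eps_concept = rng.standard_normal(shape)   # prediction for the concept to erase
guidance = 1.0                             # negative-guidance strength (assumed knob)

# Push the target *away* from the concept direction:
# target = neutral - guidance * (concept - neutral)
target = eps_neutral - guidance * (eps_concept - eps_neutral)

# During training, the LoRA-adapted model's prediction for the concept
# prompt would be fit to this target with an MSE loss.
eps_lora = rng.standard_normal(shape)      # placeholder for the adapted prediction
loss = float(np.mean((eps_lora - target) ** 2))
```

Because only the small LoRA factors are trained against this target, the erasure ships as a tiny adapter that can be merged into, or removed from, the base model.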