lora vs. LECO

The two are complementary: LECO applies LoRA's low-rank adaptation technique to selectively erase concepts from diffusion models, while the base lora tool performs general fine-tuning, so they can be used sequentially or together for more controlled model customization.

| | lora | LECO |
|---|---|---|
| Score | 44 (Emerging) | 40 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 18/25 | 14/25 |
| Stars | 7,529 | 324 |
| Forks | 501 | 27 |
| Downloads | n/a | n/a |
| Commits (30d) | 0 | 0 |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache-2.0 | Apache-2.0 |
| Flags | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About lora

cloneofsimo/lora

Using Low-rank adaptation to quickly fine-tune diffusion models.

Decomposes weight updates into low-rank matrices (ΔW = AB^T) applied primarily to attention layers, reducing fine-tuned model size to 1-6MB while maintaining or exceeding full fine-tuning quality. Implements three distinct training approaches—LoRA-DreamBooth (with prior preservation), Textual Inversion, and Pivotal Tuning Inversion—enabling flexible control over style vs. identity trade-offs. Integrates with Hugging Face `diffusers` library and supports inpainting, CLIP+UNet+token joint training, and checkpoint merging for composable style combinations.
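The parameter savings come directly from the factored update. A minimal NumPy sketch (illustrative only, not the repo's actual API; the shapes and rank are assumptions chosen to resemble a Stable Diffusion cross-attention projection):

```python
import numpy as np

# LoRA factors a full weight update dW (d_out x d_in) as A @ B^T with small
# rank r, so only (d_out + d_in) * r parameters are trained and stored
# instead of d_out * d_in.
d_out, d_in, r = 320, 768, 4           # r is the LoRA rank (assumed value)
W = np.random.randn(d_out, d_in)       # frozen pretrained weight
A = np.random.randn(d_out, r) * 0.01   # trainable low-rank factors
B = np.random.randn(d_in, r) * 0.01

delta_W = A @ B.T                      # low-rank update, rank at most r
W_adapted = W + delta_W                # merged into W for inference

full_params = d_out * d_in             # 245,760
lora_params = (d_out + d_in) * r       # 4,352
print(f"full: {full_params:,}  lora: {lora_params:,} "
      f"({lora_params / full_params:.1%} of full)")
```

Because the update can be merged back into `W`, several LoRA checkpoints can be linearly combined, which is what makes the repo's checkpoint merging and composable style combinations possible.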

About LECO

p1atdev/LECO

Low-rank adaptation for Erasing COncepts from diffusion models.
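Conceptually, LECO trains a LoRA so that the adapted model's noise prediction for the target concept is steered away from it via negatively signed classifier-free guidance (an ESD-style objective). The sketch below is a hedged illustration of that training target with toy values; the function name and details are assumptions, not the repo's exact code:

```python
import numpy as np

def erasure_target(eps_cond, eps_uncond, eta=1.0):
    """Target noise prediction that pushes *away* from the erased concept
    by applying classifier-free guidance with a negative sign (assumed
    ESD-style formulation)."""
    return eps_uncond - eta * (eps_cond - eps_uncond)

# Toy noise predictions from the frozen base model:
eps_cond = np.array([1.0, 2.0])    # conditioned on the concept to erase
eps_uncond = np.array([0.5, 1.0])  # unconditioned prediction
target = erasure_target(eps_cond, eps_uncond)
# The LoRA-adapted model is trained with an MSE loss toward `target`,
# so only the small LoRA weights change and the base model stays frozen.
```

Because the erasure lives entirely in the LoRA weights, it can be shipped as a tiny add-on file and merged or removed without touching the base checkpoint.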

Scores updated daily from GitHub, PyPI, and npm data.