microsoft/LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Score: 67/100 (Established)

Applies rank-decomposition matrices to frozen pretrained weights, training only small task-specific adapters and cutting trainable parameters by orders of magnitude (e.g., from 1.5B to 4.7M on DeBERTa) without adding inference latency. Integrates directly with PyTorch models and Hugging Face transformers such as RoBERTa, DeBERTa, and GPT-2, with example implementations for both NLU and NLG tasks. Enables efficient multi-task deployment by storing minimal per-task checkpoints rather than full model copies.
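
A minimal sketch of that workflow, based on the usage shown in the project README (layer sizes and rank here are illustrative):

import torch
import torch.nn as nn
import loralib as lora

# Swap nn.Linear for lora.Linear to attach a pair of rank-r
# decomposition matrices alongside the frozen dense weight.
model = nn.Sequential(
    lora.Linear(768, 768, r=16),
    nn.ReLU(),
    lora.Linear(768, 10, r=16),
)

# Freeze everything except the LoRA parameters
# (loralib keys on the "lora_" prefix in parameter names).
lora.mark_only_lora_as_trainable(model)

# ... train as usual ...

# Checkpoint only the small per-task adapter, not the full model.
torch.save(lora.lora_state_dict(model), "ckpt_lora.pt")

# To deploy: load the pretrained weights first, then the adapter.
model.load_state_dict(torch.load("ckpt_lora.pt"), strict=False)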

13,320 stars and 207,985 monthly downloads. Used by 4 other packages. No commits in the last 6 months. Available on PyPI.

Stale: no commits in 6 months
Maintenance: 0/25
Adoption: 24/25
Maturity: 25/25
Community: 18/25

Stars: 13,320
Forks: 888
Language: Python
License: MIT
Last pushed: Dec 17, 2024
Monthly downloads: 207,985
Commits (30d): 0
Reverse dependents: 4

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/LoRA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
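
The same request from Python, as a minimal sketch (it assumes the endpoint returns a JSON body; no response fields are documented here, so the script just prints whatever comes back):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/LoRA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # assumption: JSON response; structure not specified above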