kyegomez/SingLoRA
This repository provides a minimal, single-file implementation of SingLoRA (Single Matrix Low-Rank Adaptation) as described in the paper "SingLoRA: Low Rank Adaptation Using a Single Matrix" by Bensaïd et al.
SingLoRA replaces LoRA's pair of low-rank matrices with a single trainable matrix applied across transformer attention layers, using a time-dependent ramp-up function to scale the adaptation gradually during training. It integrates directly with Hugging Face Transformers models (DistilBERT, LLaMA) via drop-in layer replacement, achieving a ~15% parameter reduction while preserving fine-tuning capability on selected attention projections (q_proj, k_proj, v_proj).
Available on PyPI.
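The core idea can be sketched in a few lines. This is a hypothetical NumPy illustration, not the repository's actual API: it assumes LoRA's update B @ A is replaced by a single matrix A applied as u(t) * A @ A.T, where u(t) is the paper's linear ramp-up. The function names (`ramp`, `singlora_forward`) and the square-weight simplification are assumptions for illustration.

```python
import numpy as np

def ramp(t: int, warmup_steps: int) -> float:
    """Time-dependent ramp-up u(t) = min(t / T, 1) that gradually
    scales the adaptation during training."""
    return min(t / warmup_steps, 1.0)

def singlora_forward(x, W, A, t, warmup_steps, alpha=1.0, rank=4):
    """Forward pass with a SingLoRA-style update on a square weight W:
    y = x @ (W + u(t) * (alpha / rank) * A @ A.T).T

    A single d x r matrix A replaces LoRA's two factors, roughly
    halving the adapter's trainable parameters."""
    delta = ramp(t, warmup_steps) * (alpha / rank) * (A @ A.T)
    return x @ (W + delta).T

d, r = 8, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((d, r))   # single trainable adapter matrix
x = rng.standard_normal((2, d))

# At t = 0 the ramp is 0, so the layer matches the frozen base weight exactly.
assert np.allclose(singlora_forward(x, W, A, t=0, warmup_steps=100), x @ W.T)
```

Note that A @ A.T is symmetric by construction; the ramp-up means early training steps stay close to the pretrained weights, which the paper motivates as a stabilizer.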
Stars
44
Forks
2
Language
Python
License
MIT
Last pushed
Mar 09, 2026
Monthly downloads
7
Commits (30d)
0
Dependencies
2
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kyegomez/SingLoRA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
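The same endpoint can be called from Python with the standard library. This is a minimal sketch assuming the URL pattern shown in the curl example above and a JSON response body; the helper names are hypothetical.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL from the pattern above."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload.
    No key is needed for up to 100 requests/day (assumed JSON schema)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a network request):
# data = fetch_quality("kyegomez", "SingLoRA")
```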
Related models
hassancs91/SimplerLLM
Simplify interactions with Large Language Models
avilum/minrlm
Token-efficient Recursive Language Model. 3.6x fewer tokens than vanilla LLMs. Data never enters...
tylerelyt/LLM-Workshop
🌟 Learn Large Language Model development through hands-on projects and real-world implementations
NetEase-Media/grps_trtllm
Higher performance OpenAI LLM service than vLLM serve: A pure C++ high-performance OpenAI LLM...
gtausa197-svg/-Project-Nord-Spiking-Neural-Network-Language-Model
The first pure SNN language model trained from scratch with a fully original architecture. 144M...