Pragateeshwaran/LoRA-From-Scratch
This project implements Low-Rank Adaptation (LoRA) from scratch to fine-tune a neural network on the MNIST dataset, enabling efficient adaptation of a pre-trained model to specific digits.
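For readers new to the technique, here is a minimal sketch of the core LoRA idea in PyTorch. The class and parameter names below are illustrative assumptions, not the repository's own code: a frozen linear layer with weight W is adapted through a trainable low-rank product, so the layer computes W x plus a scaled update B A x.

# Minimal LoRA sketch (illustrative assumption; not this repository's actual code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapted layer initially matches the base layer.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: adapt a 784 -> 10 classifier head for MNIST digits.
layer = LoRALinear(nn.Linear(784, 10), r=4)
out = layer(torch.randn(32, 784))  # shape (32, 10)

Only A and B receive gradients, which is what makes LoRA fine-tuning cheap relative to updating the full weight matrix.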
No commits in the last 6 months.
Stars: 2
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Apr 02, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Pragateeshwaran/LoRA-From-Scratch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
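For convenience, a sketch of calling the same endpoint from Python (the response is assumed to be JSON, and its field names are not documented here, so the example just prints the raw payload):

# Sketch: fetch the quality data from the endpoint above (JSON schema assumed).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Pragateeshwaran/LoRA-From-Scratch"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # inspect the returned fields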
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.