TLILIFIRAS/Efficient-Fine-Tuning-of-Vision-Language-Models-with-LoRA-Quantization
This project demonstrates parameter-efficient fine-tuning of large Vision-Language Models (VLMs), specifically Qwen2-VL-7B-Instruct, using LoRA (Low-Rank Adaptation) and 4-bit quantization.
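The core idea behind the repo's approach (LoRA) is to freeze the base weight matrix and train only a low-rank update, which is what makes fine-tuning a 7B model tractable. A minimal NumPy sketch of the LoRA forward pass and its parameter savings, using illustrative dimensions rather than Qwen2-VL's actual layer shapes:

```python
import numpy as np

# Illustrative layer sizes; r and alpha are typical LoRA hyperparameters,
# not values taken from this repo.
d_in, d_out, r, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable LoRA "down" projection
B = np.zeros((d_out, r))                   # trainable LoRA "up" projection, zero-initialized

def lora_forward(x):
    # Base output plus the low-rank update B @ A, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = W.size               # 1,048,576 weights if trained fully
lora_params = A.size + B.size      # (d_in + d_out) * r = 16,384 trainable weights
print(f"trainable: {lora_params} vs full: {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Because B starts at zero, the adapted model initially matches the frozen base model exactly; 4-bit quantization (as in QLoRA) then stores W in compressed form while the small A and B matrices stay in higher precision for training.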
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Mar 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/TLILIFIRAS/Efficient-Fine-Tuning-of-Vision-Language-Models-with-LoRA-Quantization"
Open to everyone: 100 requests/day with no key required. A free key raises this to 1,000 requests/day.
Higher-rated alternatives
axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
iamarunbrahma/finetuned-qlora-falcon7b-medical
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
h2oai/h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning