mehrdadalmasi2020/microsoft_MiniLM_L12_H384_uncased
A library that leverages the pre-trained microsoft_MiniLM-L12-H384-uncased model for efficient and lightweight text classification tasks, with a focus on English. The library offers easy-to-use fine-tuning capabilities, making it suitable for rapid deployment in resource-constrained environments.
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Oct 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/mehrdadalmasi2020/microsoft_MiniLM_L12_H384_uncased"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
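The same lookup can be done from Python. This is a minimal sketch built around the curl endpoint above; the `build_url` and `fetch_quality` helpers are hypothetical names, and the JSON response schema is an assumption, so inspect the actual response before relying on any fields.

```python
# Hypothetical client sketch for the pt-edge quality API shown in the
# curl example above. Only the endpoint URL comes from this page; the
# helper names and the response schema are assumptions.
import json
import urllib.request
from typing import Optional

BASE = "https://pt-edge.onrender.com/api/v1/quality/generative-ai"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository API URL: BASE/{owner}/{repo}."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> Optional[dict]:
    """Fetch the quality record as a dict; return None on network errors."""
    url = build_url(owner, repo)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        return None


if __name__ == "__main__":
    print(build_url("mehrdadalmasi2020", "microsoft_MiniLM_L12_H384_uncased"))
```

Returning `None` on failure keeps the sketch usable without a network connection; a real client would likely want to surface the HTTP status and honor the daily rate limit noted above.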
Higher-rated alternatives
daekeun-ml/genai-ko-LLM
This hands-on lab walks you through a step-by-step approach to efficiently serving and...
GURPREETKAURJETHRA/Llama-3-ORPO-Fine-Tuning
Llama 3 ORPO Fine Tuning on A100 in Colab Pro.
ramalamadingdong/onnx-rubikpi
ONNX LLM runtime on RUBIK-Pi with Gemma 1B and Llama 3.2 1B
keanteng/sesame-csm-elise
Fine-Tuning Sesame CSM With Elise. Enjoy the voice ( ̄︶ ̄)↗
sukanyabag/Finetuning-Qwen2-7B-VQA-on-Radiology-Scans
This repository fine-tunes the Qwen2 7B VLM to perform VQA (Visual Question...