yeyupiaoling/Whisper-Finetune
Fine-tunes the Whisper speech recognition model, supporting training with timestamped data, without timestamped data, and without speech data. Accelerates inference and supports Web deployment, Windows desktop deployment, and Android deployment.
Implements parameter-efficient fine-tuning with LoRA adapters while remaining compatible with all of OpenAI's base Whisper models (tiny through large-v3-turbo). Provides two inference-acceleration paths, CTranslate2 and GGML quantization, so original Whisper checkpoints convert directly for deployment across heterogeneous platforms. Integrates with the PyTorch/Transformers ecosystem and includes end-to-end tooling: training pipelines for mixed data conditions, evaluation harnesses, and turnkey deployment templates for web services, native Windows desktop apps, and Android via JNI bindings.
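The LoRA approach mentioned above freezes the pretrained weights and trains only a small low-rank update alongside them. The repo itself wires this up through the PEFT library; the following is just a minimal, self-contained sketch of the underlying idea in plain PyTorch (the class name, rank, and alpha values are illustrative, not the repo's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Low-rank factors: update = scale * (B @ A)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so training begins from the unmodified base model.
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Illustrative usage: wrap an attention projection of shape (in=4, out=4).
layer = LoRALinear(nn.Linear(4, 4))
x = torch.randn(2, 4)
out = layer(x)  # identical to layer.base(x) before any training step
```

In Whisper fine-tuning these wrappers typically target the attention query/value projections, which keeps the trainable parameter count to a small fraction of the full model.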
Stars: 1,200
Forks: 213
Language: C
License: Apache-2.0
Category:
Last pushed: Dec 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/yeyupiaoling/Whisper-Finetune"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
linto-ai/whisper-timestamped
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
argmaxinc/WhisperKit
On-device Speech Recognition for Apple Silicon
vasistalodagala/whisper-finetune
Fine-tune and evaluate Whisper models for Automatic Speech Recognition (ASR) on custom datasets...
xenova/whisper-web
ML-powered speech recognition directly in your browser
Pikurrot/whisper-gui
A simple GUI to use Whisper.