Whisper-Finetune and whisper-finetune

These are competing projects offering overlapping fine-tuning solutions for Whisper ASR: Whisper-Finetune differentiates itself through timestamp-flexible training modes and accelerated inference with Web, Windows desktop, and Android deployment, while whisper-finetune focuses on standard fine-tuning and evaluation workflows.

                 Whisper-Finetune             whisper-finetune
Score            56 (Established)             49 (Emerging)
Maintenance      6/25                         0/25
Adoption         10/25                        10/25
Maturity         16/25                        16/25
Community        24/25                        23/25
Stars            1,200                        361
Forks            213                          87
Downloads
Commits (30d)    0                            0
Language         C                            Python
License          Apache-2.0                   MIT
Flags            No Package, No Dependents    Stale 6m, No Package, No Dependents

About Whisper-Finetune

yeyupiaoling/Whisper-Finetune

Fine-tune the Whisper speech recognition model with support for training without timestamp data, training with timestamp data, and training without speech data. Accelerates inference and supports Web deployment, Windows desktop deployment, and Android deployment.

Implements parameter-efficient fine-tuning using LoRA adapters while maintaining compatibility with OpenAI's base Whisper models across all variants (tiny through large-v3-turbo). Provides dual inference-acceleration paths through CTranslate2 and GGML quantization, so both original and fine-tuned Whisper checkpoints convert directly into formats deployable across heterogeneous platforms. Integrates with the PyTorch/Transformers ecosystem and includes end-to-end tooling: training pipelines supporting mixed data conditions, evaluation harnesses, and turnkey deployment templates for web services, native Windows desktop apps, and Android via JNI bindings.
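To make the LoRA workflow concrete, the sketch below shows the general technique using the Hugging Face PEFT library. It is a minimal illustration rather than the repository's own training script; the checkpoint name, rank, and target modules are assumptions chosen for clarity.

# Minimal sketch of LoRA fine-tuning for Whisper with Hugging Face PEFT.
# Checkpoint, rank, and target modules are illustrative assumptions,
# not values taken from the Whisper-Finetune repository.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = "openai/whisper-small"
model = WhisperForConditionalGeneration.from_pretrained(base)

# Attach low-rank adapters to the attention projections; base weights stay frozen.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically around 1% of the full model

# ... train with a seq2seq trainer, then merge the adapters back into the base
# weights so the checkpoint can be exported for accelerated inference.
model = model.merge_and_unload()
model.save_pretrained("whisper-small-finetuned")

After merging, the checkpoint can be converted for fast inference, for example with CTranslate2's converter (ct2-transformers-converter --model whisper-small-finetuned --output_dir whisper-small-ct2 --quantization int8); the paths and quantization setting here are placeholders.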

About whisper-finetune

vasistalodagala/whisper-finetune

Fine-tune and evaluate Whisper models for Automatic Speech Recognition (ASR) on custom datasets or datasets from the Hugging Face Hub.

Provides embedding extraction utilities across different Whisper layer depths and supports distributed multi-GPU training with step-based or epoch-based scheduling. Integrates with Hugging Face's seq2seq training pipeline and datasets library, while enabling custom dataset ingestion through standardized audio/text file formats and optional JAX-accelerated evaluation for faster inference.
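Layer-depth embedding extraction can be illustrated with plain Transformers; the snippet below is a hedged sketch of the general technique, not the repository's own utility, and the checkpoint, dataset, and layer index are assumptions.

# Sketch: pull encoder hidden states from a chosen Whisper layer depth.
# Checkpoint, dataset, and layer index are illustrative assumptions.
import torch
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperModel

checkpoint = "openai/whisper-small"
processor = WhisperProcessor.from_pretrained(checkpoint)
model = WhisperModel.from_pretrained(checkpoint).eval()

# One sample from a Hugging Face dataset, already at Whisper's 16 kHz input rate.
sample = load_dataset(
    "hf-internal-testing/librispeech_asr_dummy", "clean", split="validation"
)[0]["audio"]
inputs = processor(sample["array"], sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    encoder_out = model.encoder(inputs.input_features, output_hidden_states=True)

# hidden_states[0] is the embedding output; higher indices are deeper encoder blocks.
layer_depth = 6
embeddings = encoder_out.hidden_states[layer_depth]  # (batch, frames, hidden_size)
print(embeddings.shape)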

Scores updated daily from GitHub, PyPI, and npm data.