whisper.cpp and TheWhisper

                 whisper.cpp           TheWhisper
Score            72 (Verified)         44 (Emerging)
Maintenance      25/25                 10/25
Adoption         10/25                 10/25
Maturity         16/25                  9/25
Community        21/25                 15/25
Stars            47,665                821
Forks            5,311                 55
Downloads        n/a                   n/a
Commits (30d)    160                   0
Language         C++                   Python
License          MIT                   MIT
Package          none (no dependents)  none (no dependents)

About whisper.cpp

ggml-org/whisper.cpp

Port of OpenAI's Whisper model in C/C++

Optimized for resource-constrained environments through integer quantization, mixed-precision inference (F16/F32), and zero runtime memory allocations, enabling on-device ASR on mobile and embedded platforms. Leverages the GGML inference library with multi-platform GPU acceleration via Metal, Vulkan, CUDA, and Core ML, alongside CPU-optimized SIMD paths for ARM NEON, AVX, and POWER VSX architectures. Provides a minimal C API and supports deployment across iOS, Android, WebAssembly, Raspberry Pi, and standard desktop/server platforms.

About TheWhisper

TheStageAI/TheWhisper

Optimized Whisper models for streaming and on-device use

Fine-tuned Whisper variants support flexible chunk sizes (10-30 s, versus the original fixed 30 s) and deliver platform-specific optimizations: Core ML engines for Apple Silicon (~2 W power, ~2 GB RAM) and NVIDIA GPU acceleration (220 tok/s on an L40S). Streaming inference is available on both platforms with word-level timestamps and multilingual support, deployable via a Python API or local REST endpoints that integrate with Electron/web frontends.
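The flexible chunking described above (any 10-30 s window instead of Whisper's fixed 30 s) can be sketched generically. This is an illustrative chunker over a PCM sample stream, not TheWhisper's actual API; the function name and the padding comment are assumptions:

```python
from typing import Iterable, Iterator, List

SAMPLE_RATE = 16_000  # Hz, the rate Whisper-family models expect

def chunk_pcm(
    pcm: Iterable[float],
    chunk_seconds: float = 10.0,
) -> Iterator[List[float]]:
    """Split a PCM sample stream into fixed-duration chunks.

    chunk_seconds may fall anywhere in the 10-30 s range the fine-tuned
    variants accept; the original Whisper models instead pad every input
    to a fixed 30 s window.
    """
    if not 10.0 <= chunk_seconds <= 30.0:
        raise ValueError("chunk size must be between 10 and 30 seconds")
    chunk_len = int(chunk_seconds * SAMPLE_RATE)
    buf: List[float] = []
    for sample in pcm:
        buf.append(sample)
        if len(buf) == chunk_len:
            yield buf
            buf = []
    if buf:  # final partial chunk (would be padded before inference)
        yield buf
```

Shorter chunks trade a little per-chunk context for lower latency, which is what makes the 10 s end of the range attractive for streaming.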

Scores are updated daily from GitHub, PyPI, and npm data.