whisper.cpp and whisper-cpp-server

The latter is a server wrapper built on top of the former: whisper-cpp-server adds an HTTP API and real-time serving capabilities around the core whisper.cpp inference engine, making the two projects complements rather than alternatives.

whisper.cpp — Score: 72 (Verified)
  Maintenance: 25/25 | Adoption: 10/25 | Maturity: 16/25 | Community: 21/25
  Stars: 47,665 | Forks: 5,311 | Downloads: — | Commits (30d): 160
  Language: C++ | License: MIT
  No package published; no dependents tracked

whisper-cpp-server — Score: 35 (Emerging)
  Maintenance: 0/25 | Adoption: 9/25 | Maturity: 9/25 | Community: 17/25
  Stars: 74 | Forks: 14 | Downloads: — | Commits (30d): 0
  Language: HTML | License: MIT
  Stale for 6 months; no package published; no dependents tracked

About whisper.cpp

ggml-org/whisper.cpp

Port of OpenAI's Whisper model in C/C++

Optimized for resource-constrained environments through integer quantization, mixed-precision inference (F16/F32), and zero runtime memory allocations, enabling on-device ASR on mobile and embedded platforms. Leverages the GGML inference library with multi-platform GPU acceleration via Metal, Vulkan, CUDA, and Core ML, alongside CPU-optimized SIMD paths for ARM NEON, AVX, and POWER VSX architectures. Provides a minimal C API and supports deployment across iOS, Android, WebAssembly, Raspberry Pi, and standard desktop/server platforms.
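The minimal C API mentioned above can be exercised with a short sketch. The model path and the audio buffer below are placeholders, and the calls follow whisper.cpp's public header (whisper.h); a real program would load 16 kHz mono F32 PCM from an audio file:

```c
#include <stdio.h>
#include <stdlib.h>
#include "whisper.h"   // whisper.cpp's public C API

int main(void) {
    // Load a GGML model (F16 or integer-quantized); the path is a placeholder.
    struct whisper_context_params cparams = whisper_context_default_params();
    struct whisper_context *ctx =
        whisper_init_from_file_with_params("models/ggml-base.en.bin", cparams);
    if (!ctx) { fprintf(stderr, "failed to load model\n"); return 1; }

    // One second of 16 kHz mono F32 PCM (silence, as a stand-in for real audio).
    const int n_samples = 16000;
    float *pcm = calloc(n_samples, sizeof(float));

    struct whisper_full_params wparams =
        whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    wparams.print_progress = false;

    // Run the full encoder/decoder pipeline and print the decoded segments.
    if (whisper_full(ctx, wparams, pcm, n_samples) == 0) {
        const int n = whisper_full_n_segments(ctx);
        for (int i = 0; i < n; ++i) {
            printf("%s\n", whisper_full_get_segment_text(ctx, i));
        }
    }

    free(pcm);
    whisper_free(ctx);
    return 0;
}
```

Because the API is plain C with no runtime allocations during inference, the same loop ports directly to iOS, Android, or embedded targets.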

About whisper-cpp-server

litongjava/whisper-cpp-server

Real-time speech recognition server built on the C/C++ port of OpenAI's Whisper model.
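As a server wrapper, the project exposes Whisper inference over HTTP. The request below is illustrative only: the port, route, and form fields are assumptions (whisper.cpp's own example server accepts uploads at an /inference endpoint; litongjava's server may use different routes, so check the repository's README):

```shell
# Hypothetical request: POST a 16 kHz WAV file to a locally running server.
# Endpoint path, port, and field names are assumptions, not documented API.
curl -X POST http://127.0.0.1:8080/inference \
     -F file=@samples/jfk.wav \
     -F response_format=json
```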

Scores updated daily from GitHub, PyPI, and npm data.