whisper_android and whisper-cpp-server

These are ecosystem siblings: whisper_android provides Whisper inference optimized for mobile Android devices via TensorFlow Lite, while whisper-cpp-server offers a C++ server implementation for desktop and server environments. Together they cover different deployment targets within the Whisper framework ecosystem.

Metric           whisper_android      whisper-cpp-server
Overall score    64 (Established)     42 (Emerging)
Maintenance      16/25                0/25
Adoption         10/25                9/25
Maturity         16/25                16/25
Community        22/25                17/25
Stars            630                  74
Forks            106                  14
Downloads        n/a                  n/a
Commits (30d)    2                    0
Language         C++                  HTML
License          MIT                  MIT
Package          None published       None published
Dependents       None                 None
Activity                              Stale (6 months)

About whisper_android

vilassn/whisper_android

Offline Speech Recognition with OpenAI Whisper and TensorFlow Lite for Android

Provides dual implementation paths via TensorFlow Lite Java and Native APIs, allowing developers to choose between ease of integration and optimized performance. Includes a Python conversion pipeline to transform OpenAI Whisper models into TFLite format, plus support for live streaming transcription through buffer-based audio input alongside file-based batch processing. The architecture handles multilingual models with configurable vocabulary filters and manages audio preprocessing at 16kHz mono format for inference compatibility.
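The 16 kHz mono preprocessing step described above can be sketched as follows. This is an illustrative Python sketch, not the whisper_android API: the function names and the naive linear-interpolation resampler are assumptions, and a production pipeline would use a proper polyphase filter.

```python
import numpy as np

# Whisper models expect 16 kHz mono float32 audio (as noted in the text above).
TARGET_RATE = 16_000

def to_mono(samples: np.ndarray) -> np.ndarray:
    """Average channels down to mono; pass 1-D (already mono) input through."""
    if samples.ndim == 2:
        return samples.mean(axis=1)
    return samples

def resample_linear(samples: np.ndarray, src_rate: int,
                    dst_rate: int = TARGET_RATE) -> np.ndarray:
    """Naive linear-interpolation resampler (illustrative only)."""
    if src_rate == dst_rate:
        return samples
    duration = len(samples) / src_rate
    n_out = int(round(duration * dst_rate))
    src_t = np.arange(len(samples)) / src_rate
    dst_t = np.arange(n_out) / dst_rate
    return np.interp(dst_t, src_t, samples)

def preprocess(samples: np.ndarray, src_rate: int) -> np.ndarray:
    """Down-mix to mono, resample to 16 kHz, normalize to float32 in [-1, 1]."""
    mono = to_mono(samples.astype(np.float32))
    out = resample_linear(mono, src_rate)
    peak = np.abs(out).max()
    return (out / peak if peak > 0 else out).astype(np.float32)
```

For example, one second of 44.1 kHz stereo input becomes a 16,000-sample mono buffer ready for inference; the same shape contract applies whether the audio comes from a file or from a live streaming buffer.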

About whisper-cpp-server

litongjava/whisper-cpp-server

whisper-cpp-server: a real-time speech recognition server built on OpenAI's Whisper model, implemented in C/C++

Scores updated daily from GitHub, PyPI, and npm data.