whisper_android and whisper-cpp-server
These are ecosystem siblings: one provides Whisper inference optimized for mobile Android devices via TensorFlow Lite, while the other offers a C/C++ server implementation for desktop and server environments. Together they cover different deployment targets within the Whisper ecosystem.
About whisper_android
vilassn/whisper_android
Offline Speech Recognition with OpenAI Whisper and TensorFlow Lite for Android
Provides dual implementation paths via the TensorFlow Lite Java and Native APIs, letting developers choose between ease of integration and optimized performance. Includes a Python conversion pipeline that transforms OpenAI Whisper models into TFLite format, plus support for live streaming transcription through buffer-based audio input alongside file-based batch processing. The architecture handles multilingual models with configurable vocabulary filters and preprocesses audio to 16 kHz mono for inference compatibility.
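As a rough illustration of the 16 kHz mono preprocessing step described above, a minimal NumPy sketch might downmix and resample captured audio before inference. The function name and linear-interpolation resampler here are illustrative assumptions, not code from the repository; a production pipeline would typically use a proper polyphase resampler.

```python
import numpy as np

def to_whisper_input(samples: np.ndarray, sample_rate: int,
                     target_rate: int = 16_000) -> np.ndarray:
    """Downmix to mono and resample to the 16 kHz rate Whisper models expect.

    `samples` is float32 PCM: shape (n,) for mono, (n, channels) otherwise.
    NOTE: hypothetical helper for illustration; not from whisper_android.
    """
    if samples.ndim == 2:              # multi-channel -> mono by averaging channels
        samples = samples.mean(axis=1)
    if sample_rate != target_rate:     # naive linear-interpolation resample
        duration = samples.shape[0] / sample_rate
        n_out = int(round(duration * target_rate))
        old_t = np.linspace(0.0, duration, samples.shape[0], endpoint=False)
        new_t = np.linspace(0.0, duration, n_out, endpoint=False)
        samples = np.interp(new_t, old_t, samples)
    return samples.astype(np.float32)

# Example: 1 second of 44.1 kHz stereo audio becomes 16,000 mono samples.
stereo = np.random.randn(44_100, 2).astype(np.float32)
mono_16k = to_whisper_input(stereo, 44_100)
print(mono_16k.shape)  # (16000,)
```

The same shape contract applies whether the samples come from a file or from a live streaming buffer.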
About whisper-cpp-server
litongjava/whisper-cpp-server
whisper-cpp-server: real-time speech recognition server for OpenAI's Whisper model, implemented in C/C++