whisper_android and MisterWhisper

These are ecosystem siblings: one provides a mobile inference implementation (TensorFlow Lite on Android), while the other provides a push-to-talk interface pattern. Both consume the same Whisper models but target different deployment contexts and interaction models.

whisper_android: score 64 (Established)
  Maintenance 16/25 · Adoption 10/25 · Maturity 16/25 · Community 22/25
  Stars: 630 · Forks: 106 · Commits (30d): 2 · Language: C++ · License: MIT
  No package published · No known dependents

MisterWhisper: score 38 (Emerging)
  Maintenance 6/25 · Adoption 9/25 · Maturity 16/25 · Community 7/25
  Stars: 112 · Forks: 5 · Commits (30d): 0 · Language: Java · License: MIT
  No package published · No known dependents

About whisper_android

vilassn/whisper_android

Offline Speech Recognition with OpenAI Whisper and TensorFlow Lite for Android

Provides dual implementation paths via the TensorFlow Lite Java and native (C++) APIs, letting developers choose between ease of integration and optimized performance. Includes a Python conversion pipeline that transforms OpenAI Whisper models into TFLite format, plus support for live streaming transcription through buffer-based audio input alongside file-based batch processing. The architecture handles multilingual models with configurable vocabulary filters and preprocesses audio to 16 kHz mono for inference compatibility.
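The 16 kHz mono preprocessing step can be sketched in plain Java. This is a hypothetical helper, not code from the repository: it shows the typical conversion of 16-bit little-endian PCM bytes (the format Android's AudioRecord commonly produces) into normalized floats, and a simple stereo-to-mono downmix of the kind such a pipeline needs before inference.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical sketch of Whisper-style audio preprocessing: the model
// expects 16 kHz mono float samples in [-1, 1], so raw 16-bit PCM must be
// normalized and multi-channel input downmixed. Class and method names are
// illustrative, not taken from whisper_android.
public class PcmPreprocess {

    // Convert 16-bit little-endian PCM bytes to floats in [-1, 1].
    public static float[] pcm16ToFloat(byte[] pcm) {
        ByteBuffer buf = ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN);
        float[] out = new float[pcm.length / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = buf.getShort() / 32768.0f;
        }
        return out;
    }

    // Average interleaved stereo samples into a mono signal.
    public static float[] stereoToMono(float[] interleaved) {
        float[] mono = new float[interleaved.length / 2];
        for (int i = 0; i < mono.length; i++) {
            mono[i] = (interleaved[2 * i] + interleaved[2 * i + 1]) / 2.0f;
        }
        return mono;
    }

    public static void main(String[] args) {
        // Two samples: 0x4000 = +0.5, 0xC000 = -0.5 (little-endian bytes).
        byte[] pcm = {0x00, (byte) 0x40, 0x00, (byte) 0xC0};
        float[] samples = pcm16ToFloat(pcm);
        System.out.println(samples[0] + " " + samples[1]); // 0.5 -0.5
        System.out.println(stereoToMono(samples)[0]);      // 0.0
    }
}
```

The resulting float array is what would be fed to the TFLite interpreter; resampling to 16 kHz (when the capture rate differs) is a separate step omitted here.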

About MisterWhisper

openconcerto/MisterWhisper

Push to talk voice recognition using Whisper

Supports 100+ languages with GPU acceleration via whisper.cpp, enabling fast local transcription or remote server inference. Integrates with any active application through system hotkeys (F1-F18), automatically injecting recognized text with silence detection to minimize manual control. Available as standalone executables for Windows (CPU/CUDA/Vulkan variants) or as a Java-based client for Linux/macOS that connects to a whisper.cpp server locally or over the network.
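The silence-detection idea behind ending a push-to-talk utterance can be sketched as an energy gate: compute the RMS energy of each audio frame and cut off recording after enough consecutive frames fall below a threshold. This is an illustrative sketch, not MisterWhisper's actual implementation; the frame size, threshold, and hangover count are assumed values.

```java
// Hypothetical energy-based silence detector: stop capturing once
// `framesToStop` consecutive frames have RMS energy below `threshold`.
// All names and parameters are illustrative, not from MisterWhisper.
public class SilenceDetector {
    private final float threshold;   // RMS below this counts as silence
    private final int framesToStop;  // consecutive silent frames before stopping
    private int silentRun = 0;

    public SilenceDetector(float threshold, int framesToStop) {
        this.threshold = threshold;
        this.framesToStop = framesToStop;
    }

    // Root-mean-square energy of one frame of normalized samples.
    public static float rms(float[] frame) {
        double sum = 0;
        for (float s : frame) sum += s * s;
        return (float) Math.sqrt(sum / frame.length);
    }

    // Feed one frame; returns true once the utterance should be cut off.
    public boolean offerFrame(float[] frame) {
        if (rms(frame) < threshold) {
            silentRun++;
        } else {
            silentRun = 0; // speech resets the counter
        }
        return silentRun >= framesToStop;
    }

    public static void main(String[] args) {
        SilenceDetector det = new SilenceDetector(0.01f, 3);
        float[] speech = {0.2f, -0.3f, 0.25f};
        float[] quiet = {0.001f, -0.002f, 0.001f};
        System.out.println(det.offerFrame(speech)); // false
        System.out.println(det.offerFrame(quiet));  // false
        System.out.println(det.offerFrame(quiet));  // false
        System.out.println(det.offerFrame(quiet));  // true
    }
}
```

Requiring several consecutive silent frames (a "hangover") avoids cutting the utterance off during natural pauses between words.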

Scores updated daily from GitHub, PyPI, and npm data.