whisper-to-input and whisperIME
These are **competitors**: both are Android IME implementations of OpenAI Whisper for speech-to-text input, offering functionally equivalent solutions that users would choose between rather than combine.
About whisper-to-input
j3soon/whisper-to-input
An Android keyboard that performs speech-to-text (STT/ASR) with OpenAI Whisper and inputs the recognized text. Supports English, Chinese, Japanese, and other languages, including mixed-language speech.
Supports pluggable ASR backends including OpenAI API, self-hosted Whisper ASR Webservice, and NVIDIA NIM with TensorRT-LLM optimization. Implements a full Android Input Method Editor (IME) with configurable endpoints, allowing users to choose between cloud and on-device processing for privacy and cost control. The architecture decouples the recognition service layer, enabling deployment flexibility from commercial APIs to GPU-accelerated self-hosted inference.
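The decoupled backend architecture described above can be sketched as a small abstraction: the IME layer talks to one interface, and a user setting decides whether audio goes to a cloud API or a self-hosted service. This is a minimal illustrative sketch in Python, not the app's actual Kotlin code; the class and function names (`AsrBackend`, `make_backend`, the `gpu-box` endpoint) are hypothetical.

```python
from dataclasses import dataclass
from typing import Protocol


class AsrBackend(Protocol):
    """Anything that can turn recorded audio bytes into text."""
    def transcribe(self, audio: bytes) -> str: ...


@dataclass
class CloudBackend:
    """Cloud-hosted endpoint (e.g. OpenAI's hosted transcription API)."""
    endpoint: str = "https://api.openai.com/v1/audio/transcriptions"

    def transcribe(self, audio: bytes) -> str:
        # A real client would POST `audio` to self.endpoint here.
        return f"<sent {len(audio)} bytes to {self.endpoint}>"


@dataclass
class SelfHostedBackend:
    """Self-hosted service (e.g. a Whisper ASR Webservice on a LAN GPU box)."""
    endpoint: str

    def transcribe(self, audio: bytes) -> str:
        return f"<sent {len(audio)} bytes to {self.endpoint}>"


def make_backend(config: dict) -> AsrBackend:
    """Pick a backend from user settings; the IME layer never changes."""
    if config.get("mode") == "self_hosted":
        return SelfHostedBackend(endpoint=config["endpoint"])
    return CloudBackend()


# Hypothetical user configuration pointing at a self-hosted server.
backend = make_backend({"mode": "self_hosted", "endpoint": "http://gpu-box:9000/asr"})
print(backend.transcribe(b"\x00" * 16000))
```

Because only `make_backend` inspects the configuration, swapping a commercial API for GPU-accelerated self-hosted inference is a settings change rather than a code change, which is the deployment flexibility the project advertises.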
About whisperIME
woheller69/whisperIME
Android Input Method Editor (IME) based on Whisper
Leverages TensorFlow Lite quantized Whisper models (~435 MB) for fully offline multilingual speech-to-text, with optional translation to English and dual-model selection (fast English-only vs. comprehensive multilingual). Integrates as a system-wide RecognitionService via Android's voice input framework, compatible with apps like HeliBoard, while supporting both IME and standalone modes with 30-second recording limits and voice activity detection.
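The voice activity detection mentioned above can be illustrated with the simplest common approach: compute the RMS energy of each 16-bit PCM frame and treat frames above a threshold as speech. This is a generic sketch under that assumption, not whisperIME's actual implementation, and the names (`is_speech`, `trim_trailing_silence`) and the threshold value are hypothetical.

```python
import array
import math


def is_speech(frame: bytes, threshold: float = 500.0) -> bool:
    """Energy-based VAD on one 16-bit PCM frame: RMS above threshold => speech."""
    samples = array.array("h", frame)
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > threshold


def trim_trailing_silence(frames: list[bytes], threshold: float = 500.0) -> list[bytes]:
    """Drop silent frames at the end of a recording (e.g. to auto-stop capture)."""
    end = len(frames)
    while end > 0 and not is_speech(frames[end - 1], threshold):
        end -= 1
    return frames[:end]


loud = array.array("h", [4000, -4000] * 80).tobytes()   # well above threshold
quiet = array.array("h", [10, -10] * 80).tobytes()      # near-silent
print(is_speech(loud), is_speech(quiet))                # True False
print(len(trim_trailing_silence([loud, loud, quiet, quiet])))  # 2
```

A real IME would run this per audio buffer and stop recording (well before the 30-second cap) once enough consecutive frames fall below the threshold.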