whisper-to-input and whisperIME

These are **competitors**: both are Android IME implementations of OpenAI Whisper for speech-to-text input, offering functionally equivalent solutions that users would choose between rather than combine.

| | whisper-to-input | whisperIME |
|---|---|---|
| Overall score | 50 (Established) | 49 (Emerging) |
| Maintenance | 6/25 | 10/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 18/25 | 13/25 |
| Stars | 117 | 543 |
| Forks | 21 | 31 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Kotlin | Java |
| License | GPL-3.0 | MIT |
| Package | No package, no dependents | No package, no dependents |

About whisper-to-input

j3soon/whisper-to-input

An Android keyboard that performs speech-to-text (STT/ASR) with OpenAI Whisper and inputs the recognized text. Supports English, Chinese, Japanese, and other languages, including mixed-language speech.

Supports pluggable ASR backends including OpenAI API, self-hosted Whisper ASR Webservice, and NVIDIA NIM with TensorRT-LLM optimization. Implements a full Android Input Method Editor (IME) with configurable endpoints, allowing users to choose between cloud and on-device processing for privacy and cost control. The architecture decouples the recognition service layer, enabling deployment flexibility from commercial APIs to GPU-accelerated self-hosted inference.
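The pluggable-backend design described above can be sketched as a configuration concern: each backend reduces to a base URL plus a transcription path, so switching between OpenAI's hosted API, a self-hosted Whisper ASR Webservice, or an NVIDIA NIM endpoint changes settings, not code. The class and enum names below are illustrative assumptions, not whisper-to-input's actual API; the self-hosted default port and `/asr` path follow the Whisper ASR Webservice convention, and the NIM URL is a placeholder.

```java
// Hypothetical sketch of a pluggable ASR backend registry. Each backend is
// identified by a base URL and a transcription path; the IME would POST the
// recorded audio to endpointFor(selectedBackend).
public class AsrBackends {
    enum Backend {
        OPENAI("https://api.openai.com", "/v1/audio/transcriptions"),
        SELF_HOSTED("http://localhost:9000", "/asr"),          // Whisper ASR Webservice default
        NVIDIA_NIM("http://localhost:8000", "/v1/audio/transcriptions"); // placeholder URL

        final String baseUrl;
        final String path;

        Backend(String baseUrl, String path) {
            this.baseUrl = baseUrl;
            this.path = path;
        }
    }

    // Composes the full endpoint URL for the chosen backend.
    static String endpointFor(Backend b) {
        return b.baseUrl + b.path;
    }

    public static void main(String[] args) {
        for (Backend b : Backend.values()) {
            System.out.println(b + " -> " + endpointFor(b));
        }
    }
}
```

Because the recognition service is addressed only through this configured endpoint, the same request code serves both cloud and self-hosted deployments.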

About whisperIME

woheller69/whisperIME

Android Input Method Editor (IME) based on Whisper

Leverages TensorFlow Lite quantized Whisper models (~435 MB) for fully offline multilingual speech-to-text, with optional translation to English and dual-model selection (fast English-only vs. comprehensive multilingual). Integrates as a system-wide RecognitionService via Android's voice input framework, compatible with apps like HeliBoard, while supporting both IME and standalone modes with 30-second recording limits and voice activity detection.
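The recording behavior described above (voice activity detection plus a 30-second limit) can be sketched with a simple energy-based check at Whisper's 16 kHz input rate. This is a minimal illustration under assumed thresholds and frame sizes, not whisperIME's actual implementation.

```java
// Sketch of a capped recording loop: accumulate samples while frames are
// voiced (mean absolute amplitude above a threshold), stop on silence or
// when the 30-second cap is hit. Threshold and frame size are assumptions.
public class RecordingLimit {
    static final int SAMPLE_RATE = 16_000;            // Whisper models expect 16 kHz audio
    static final int MAX_SAMPLES = SAMPLE_RATE * 30;  // hard 30-second cap

    // True when the frame's mean absolute amplitude exceeds the threshold.
    static boolean isVoiced(short[] frame, double threshold) {
        double sum = 0;
        for (short s : frame) sum += Math.abs(s);
        return sum / frame.length > threshold;
    }

    // Counts samples from consecutive voiced frames, enforcing the cap.
    static int record(short[][] frames) {
        int total = 0;
        for (short[] frame : frames) {
            if (!isVoiced(frame, 500.0)) break;  // silence ends the utterance
            total += frame.length;
            if (total >= MAX_SAMPLES) break;     // 30-second limit reached
        }
        return Math.min(total, MAX_SAMPLES);
    }

    public static void main(String[] args) {
        short[] voiced = new short[160];
        java.util.Arrays.fill(voiced, (short) 1000);
        short[] silent = new short[160];  // all zeros
        // One voiced frame, then silence: only the first frame is kept.
        System.out.println(record(new short[][] { voiced, silent, voiced }));  // prints 160
    }
}
```

A real IME would feed `AudioRecord` frames into such a loop and hand the accumulated buffer to the TensorFlow Lite model once recording stops.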

Scores updated daily from GitHub, PyPI, and npm data.