my-neuro and Open-LLM-VTuber
These are complements: My-Neuro provides the conversational AI backbone with memory and voice I/O, while Open-LLM-VTuber adds the animated avatar visualization layer, allowing them to be combined into a more complete virtual companion experience.
About my-neuro
morettt/my-neuro
This project lets you create your own AI desktop companion with customizable characters and voice conversations that respond in just 1 second. Features include long-term memory, visual recognition, voice cloning, and LLM training. Compatible with various Live2D customizations.
About Open-LLM-VTuber
Open-LLM-VTuber/Open-LLM-VTuber
Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking face, running locally across platforms
Supports pluggable ASR, TTS, and LLM backends (Ollama, OpenAI-compatible APIs, Whisper, sherpa-onnx) with modular configuration rather than code changes. Combines real-time speech recognition, LLM inference, and text-to-speech synthesis into a unified agent pipeline that runs locally or via cloud APIs, with persistent chat logs enabling conversation continuity. Features visual perception (camera/screen capture), emotion-mapped Live2D expressions, and desktop pet mode with transparency and click-through support.
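The unified agent pipeline described above can be sketched as a simple speech-to-speech loop with swappable backends. This is an illustrative sketch only: the registry names and the `run_turn` function are assumptions for demonstration, not Open-LLM-VTuber's actual API, and the stub callables stand in for real engines such as Whisper, Ollama, or a TTS backend.

```python
"""Minimal sketch of a pluggable ASR -> LLM -> TTS agent pipeline,
in the spirit of Open-LLM-VTuber's modular backend design.
All names here are illustrative assumptions, not the project's API."""

from typing import Callable, Dict

# Each backend is just a callable; swapping one means editing the
# registry (or a config file), not the pipeline code itself.
ASR_BACKENDS: Dict[str, Callable[[bytes], str]] = {
    "stub": lambda audio: "hello there",  # stand-in for e.g. Whisper
}
LLM_BACKENDS: Dict[str, Callable[[str], str]] = {
    # stand-in for Ollama or an OpenAI-compatible endpoint
    "stub": lambda prompt: f"You said: {prompt}",
}
TTS_BACKENDS: Dict[str, Callable[[str], bytes]] = {
    "stub": lambda text: text.encode("utf-8"),  # stand-in for a TTS engine
}

def run_turn(config: Dict[str, str], audio_in: bytes) -> bytes:
    """One conversational turn: speech -> text -> reply -> speech."""
    transcript = ASR_BACKENDS[config["asr"]](audio_in)
    reply = LLM_BACKENDS[config["llm"]](transcript)
    return TTS_BACKENDS[config["tts"]](reply)

config = {"asr": "stub", "llm": "stub", "tts": "stub"}
audio_out = run_turn(config, b"...mic samples...")
print(audio_out.decode("utf-8"))  # -> You said: hello there
```

Because each stage is looked up by name from configuration, replacing Whisper with sherpa-onnx (or a local LLM with a cloud API) is a config change rather than a code change, which is the modularity the project advertises.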