my-neuro and Open-LLM-VTuber

The two projects are complementary: my-neuro provides the conversational AI backbone with memory and voice I/O, while Open-LLM-VTuber adds the animated-avatar visualization layer; combined, they make for a more complete virtual companion experience.

|                | my-neuro       | Open-LLM-VTuber |
|----------------|----------------|-----------------|
| Score          | 71 (Verified)  | 58 (Established) |
| Maintenance    | 25/25          | 10/25           |
| Adoption       | 10/25          | 10/25           |
| Maturity       | 16/25          | 16/25           |
| Community      | 20/25          | 22/25           |
| Stars          | 1,061          | 6,205           |
| Forks          | 124            | 816             |
| Downloads      | —              | —               |
| Commits (30d)  | 196            | 0               |
| Language       | JavaScript     | Python          |
| License        | MIT            | —               |
| Package        | None (no dependents) | None (no dependents) |

About my-neuro

morettt/my-neuro

This project lets you create your own AI desktop companion with customizable characters and voice conversations that respond in about 1 second. Features include long-term memory, visual recognition, voice cloning, and LLM training, and it is compatible with various Live2D customizations.

About Open-LLM-VTuber

Open-LLM-VTuber/Open-LLM-VTuber

Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking face, running locally across platforms.

Supports pluggable ASR, TTS, and LLM backends (Ollama, OpenAI-compatible APIs, Whisper, sherpa-onnx) with modular configuration rather than code changes. Combines real-time speech recognition, LLM inference, and text-to-speech synthesis into a unified agent pipeline that runs locally or via cloud APIs, with persistent chat logs enabling conversation continuity. Features visual perception (camera/screen capture), emotion-mapped Live2D expressions, and desktop pet mode with transparency and click-through support.
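The pluggable backend design described above can be sketched as a small pipeline: each stage (ASR, LLM, TTS) is an interchangeable component behind a narrow interface, and a persistent chat log carries context across turns. This is an illustrative Python sketch with stub backends, not Open-LLM-VTuber's actual API; all class and method names here are assumptions for the example.

```python
from dataclasses import dataclass, field
from typing import Protocol

# Pluggable backend interfaces mirroring the ASR -> LLM -> TTS pipeline.
# Names are illustrative, not taken from the Open-LLM-VTuber codebase.

class ASR(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class LLM(Protocol):
    def chat(self, history: list[str], user_text: str) -> str: ...

class TTS(Protocol):
    def synthesize(self, text: str) -> bytes: ...

@dataclass
class EchoASR:
    """Stub ASR: 'decodes' audio bytes as UTF-8 text."""
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")

@dataclass
class ReverseLLM:
    """Stub LLM: replies with the reversed prompt."""
    def chat(self, history: list[str], user_text: str) -> str:
        return user_text[::-1]

@dataclass
class BytesTTS:
    """Stub TTS: encodes the reply text back to bytes."""
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")

@dataclass
class VoiceAgent:
    """Unified pipeline; the chat log persists for conversation continuity."""
    asr: ASR
    llm: LLM
    tts: TTS
    history: list[str] = field(default_factory=list)

    def turn(self, audio_in: bytes) -> bytes:
        text = self.asr.transcribe(audio_in)       # speech -> text
        reply = self.llm.chat(self.history, text)  # text -> reply
        self.history += [text, reply]              # persist the exchange
        return self.tts.synthesize(reply)          # reply -> speech

agent = VoiceAgent(EchoASR(), ReverseLLM(), BytesTTS())
print(agent.turn(b"hello"))  # b'olleh'
print(agent.history)         # ['hello', 'olleh']
```

Because each backend only has to satisfy its interface, swapping in a real engine (e.g. a Whisper-based ASR or an OpenAI-compatible LLM client) is a configuration choice rather than a code change, which matches the modular design the project describes.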

Scores updated daily from GitHub, PyPI, and npm data.