talking-head-anime-demo and talking-head-anime-2-demo
These are sequential versions of the same project: the second iteration adds more expressive animation capabilities to the original talking-head anime generation system. They are successive releases in a linear progression rather than alternatives or complements.
About talking-head-anime-demo
pkhungurn/talking-head-anime-demo
Demo for the "Talking Head Anime from a Single Image."
Implements two applications for anime character animation: a manual pose editor with slider controls, and a real-time puppeteer that mirrors head movements from webcam input using dlib face tracking. Built on PyTorch with custom neural network modules (face_morpher, face_rotator, combiner) that process 256×256 RGBA character images and output animated sequences. Requires NVIDIA GPU acceleration and supports deployment via Google Colab for users without local hardware.
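The data flow through the three modules can be sketched as follows. This is a minimal structural sketch: the module names mirror those listed above, but the bodies are shape-preserving placeholders (the real versions are trained PyTorch networks), and the split of the pose vector into expression and rotation parameters is an illustrative assumption.

```python
import numpy as np

# Hypothetical stand-ins for the three modules named above. Each real
# module is a neural network; these placeholders only keep shapes honest.

def face_morpher(image, expression_pose):
    # Deforms facial features (eyes, mouth) according to the pose values.
    return image  # placeholder: identity

def face_rotator(image, rotation_pose):
    # Produces two candidate rotated views of the morphed image.
    return image, image  # placeholder

def combiner(candidate_a, candidate_b, rotation_pose):
    # Blends the two candidates into the final output frame.
    return (candidate_a + candidate_b) / 2.0

def animate(image, pose):
    # Assumed split: first three values drive expression, rest drive rotation.
    expression, rotation = pose[:3], pose[3:]
    morphed = face_morpher(image, expression)
    a, b = face_rotator(morphed, rotation)
    return combiner(a, b, rotation)

# A 256x256 RGBA character image (channels-first), as the system expects.
image = np.zeros((4, 256, 256), dtype=np.float32)
pose = np.zeros(6, dtype=np.float32)
frame = animate(image, pose)
print(frame.shape)
```

The chaining is the point: the morpher handles local feature deformation before the rotator and combiner handle whole-head movement.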
About talking-head-anime-2-demo
pkhungurn/talking-head-anime-2-demo
Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.
Implements real-time facial animation of anime characters using PyTorch neural networks, with two interfaces: an interactive GUI for manual pose control, and motion-capture integration via iFacialMocap's ARKit blend shapes. The architecture chains multiple specialized models (face morpher, rotator, eyebrow decomposer, combiner) to decompose and reconstruct facial deformations from a single input image. Requires high-end NVIDIA GPUs (RTX 2080 or better) and runs on Python with wxPython/Jupyter frontends, supporting network-based iOS facial motion transfer.
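The motion-capture path boils down to mapping streamed ARKit blend-shape weights onto the pose vector the networks consume. A hypothetical sketch of that mapping, assuming a dict of blend-shape weights as input; the blend-shape names are real ARKit identifiers, but the pose layout and index assignments here are illustrative, not the demo's actual parameterization:

```python
# Hypothetical mapping from ARKit blend-shape names (as streamed by a
# mocap app such as iFacialMocap) to indices in an assumed pose vector.
ARKIT_TO_POSE = {
    "eyeBlinkLeft": 0,
    "eyeBlinkRight": 1,
    "jawOpen": 2,
}

def blendshapes_to_pose(blendshapes, pose_size=3):
    """Convert ARKit blend-shape weights (0.0-1.0) into a fixed-length
    pose vector; unknown names are ignored, values are clamped."""
    pose = [0.0] * pose_size
    for name, value in blendshapes.items():
        idx = ARKIT_TO_POSE.get(name)
        if idx is not None:
            pose[idx] = max(0.0, min(1.0, value))
    return pose

print(blendshapes_to_pose({"jawOpen": 0.4, "eyeBlinkLeft": 1.2}))
# [1.0, 0.0, 0.4]
```

In the real system this conversion runs per frame on the received packets, so the character tracks the iOS capture in real time.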