talking-head-anime-2-demo and talking-head-anime-3-demo
These are successive versions of the same project: version 3 extends version 2 by adding body animation to the existing head animation, making them sequential iterations rather than alternatives.
About talking-head-anime-2-demo
pkhungurn/talking-head-anime-2-demo
Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.
Implements real-time facial animation of anime characters using PyTorch neural networks, with two interfaces: an interactive GUI for manual pose control, and motion capture driven by ARKit blend shapes streamed over the network from iFacialMocap on iOS. The architecture chains several specialized models (eyebrow decomposer, face morpher, rotator, combiner) to decompose and reconstruct facial deformations from a single input image. Requires a high-end Nvidia GPU (RTX 2080 or better) and runs on Python with wxPython and Jupyter frontends.
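As a rough illustration of that chained design, the sketch below wires hypothetical stand-ins for the named models into one PyTorch pipeline. The class names, signatures, and tensor sizes are assumptions for illustration, not the repo's actual API.

```python
import torch
from torch import nn


class _StubNet(nn.Module):
    """Stand-in for one of the project's pretrained modules (illustration only)."""

    def __init__(self, channels: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # A real module would condition on the pose vector; the stub ignores it.
        return self.conv(image)


class TalkingHeadPipeline(nn.Module):
    """Chains the specialized models: decompose eyebrows, morph facial
    features, rotate the head, then combine into the output frame."""

    def __init__(self):
        super().__init__()
        self.eyebrow_decomposer = _StubNet()
        self.face_morpher = _StubNet()
        self.rotator = _StubNet()
        self.combiner = _StubNet()

    def forward(self, image: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        x = self.eyebrow_decomposer(image, pose)
        x = self.face_morpher(x, pose)
        x = self.rotator(x, pose)
        return self.combiner(x, pose)


# One RGBA 256x256 frame plus a pose vector (dimension chosen arbitrarily here).
image = torch.zeros(1, 4, 256, 256)
pose = torch.zeros(1, 42)
frame = TalkingHeadPipeline()(image, pose)
print(frame.shape)  # torch.Size([1, 4, 256, 256])
```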
About talking-head-anime-3-demo
pkhungurn/talking-head-anime-3-demo
Demo Programs for the "Talking Head(?) Anime from a Single Image 3: Now the Body Too" Project
Enables real-time anime character animation from a single static image using deep neural networks, with control over facial expressions, head/body rotation, and breathing motion through either manual GUI manipulation or facial motion capture from iOS devices via iFacialMocap. The system offers four model variants (standard/separable, float/half precision) optimized for different hardware constraints, built on PyTorch with GPU acceleration and deployable as desktop applications or Jupyter notebooks.
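A minimal sketch of the standard/separable and float/half trade-off behind those four variants follows. The variant table, helper names, and tiny stand-in model are assumptions for illustration, not the project's actual loading code.

```python
import torch
from torch import nn

# Variant names mirror the project's four options: standard vs. separable
# convolutions, float vs. half precision.
VARIANTS = {
    "standard_float": dict(separable=False, dtype=torch.float32),
    "standard_half": dict(separable=False, dtype=torch.float16),
    "separable_float": dict(separable=True, dtype=torch.float32),
    "separable_half": dict(separable=True, dtype=torch.float16),
}


def make_block(channels: int, separable: bool) -> nn.Module:
    if separable:
        # Depthwise + pointwise pair: fewer parameters and FLOPs than a
        # full convolution, at some cost in output quality.
        return nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
        )
    return nn.Conv2d(channels, channels, 3, padding=1)


def build_model(variant: str, device: str = "cpu") -> nn.Module:
    cfg = VARIANTS[variant]
    model = make_block(4, cfg["separable"])
    # Half precision roughly halves VRAM use but is a GPU optimization,
    # so fall back to float32 when running on CPU.
    dtype = cfg["dtype"] if device != "cpu" else torch.float32
    return model.to(device=device, dtype=dtype).eval()


device = "cuda" if torch.cuda.is_available() else "cpu"
model = build_model("separable_half", device=device)
print(sum(p.numel() for p in model.parameters()))  # separable block is smaller
```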