talking-head-anime-3-demo and talking-head-anime-4-demo
These are sequential versions of the same project. Version 4 is an improved iteration with better models and distillation techniques; although both remain available, it largely supersedes version 3 for new users.
About talking-head-anime-3-demo
pkhungurn/talking-head-anime-3-demo
Demo Programs for the "Talking Head(?) Anime from a Single Image 3: Now the Body Too" Project
Enables real-time anime character animation from a single static image using deep neural networks, with control over facial expressions, head/body rotation, and breathing motion through either manual GUI manipulation or facial motion capture from iOS devices via iFacialMocap. The system offers four model variants (standard/separable, float/half precision) optimized for different hardware constraints, built on PyTorch with GPU acceleration and deployable as desktop applications or Jupyter notebooks.
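The four model variants described above form a 2x2 matrix: network architecture (standard vs. separable) crossed with numeric precision (float vs. half). The sketch below illustrates that selection logic in PyTorch. The tiny module is a stand-in, not the project's actual network, and the `load_variant` loading function is hypothetical; only the variant and precision labels mirror the repo's terminology.

```python
# Hedged sketch of the variant/precision matrix. The placeholder network and
# the load_variant() API are assumptions for illustration, not the repo's code.
import torch
import torch.nn as nn

def make_placeholder_model(variant: str) -> nn.Module:
    # "Separable" variants trade some quality for speed via depthwise-separable
    # convolutions, modeled here as a toy depthwise + pointwise pair.
    if variant == "separable":
        return nn.Sequential(
            nn.Conv2d(4, 4, 3, padding=1, groups=4),  # depthwise conv
            nn.Conv2d(4, 64, 1),                      # pointwise conv
        )
    return nn.Sequential(nn.Conv2d(4, 64, 3, padding=1))  # standard conv

def load_variant(variant: str, precision: str, device: str = "cpu") -> nn.Module:
    assert variant in ("standard", "separable")
    assert precision in ("float", "half")
    model = make_placeholder_model(variant).to(device).eval()
    if precision == "half":
        model = model.half()  # fp16 roughly halves memory; best on a GPU
    return model

model = load_variant("separable", "float")
x = torch.zeros(1, 4, 512, 512)  # RGBA input; the project works on 512x512 images
with torch.no_grad():
    y = model(x)
print(tuple(y.shape))
```

Separating architecture choice from precision choice lets a user pick the cheapest combination their hardware tolerates, e.g. `separable` + `half` on a memory-constrained GPU.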
About talking-head-anime-4-demo
pkhungurn/talking-head-anime-4-demo
Demo Programs for the "Talking Head(?) Anime from a Single Image 4: Improved Models and Its Distillation" Project