talking-head-anime-2-demo and talking-head-anime-4-demo

These are successive versions of the same project: version 4 is an improved iteration of version 2, with better models and distillation techniques, so version 2 is largely superseded rather than complementary.

talking-head-anime-2-demo
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 22/25
Stars: 1,150
Forks: 150
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Badges: Stale 6m, No Package, No Dependents

talking-head-anime-4-demo
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 15/25
Stars: 309
Forks: 32
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Badges: Stale 6m, No Package, No Dependents

About talking-head-anime-2-demo

pkhungurn/talking-head-anime-2-demo

Demo programs for the Talking Head Anime from a Single Image 2: More Expressive project.

Implements real-time facial animation of anime characters using PyTorch neural networks, with two interfaces: an interactive GUI for manual pose control, and motion-capture integration via iFacialMocap's ARKit blend shapes. The architecture chains several specialized models (face morpher, rotator, eyebrow decomposer, combiner) to decompose and reconstruct facial deformations from a single input image. Requires a high-end Nvidia GPU (RTX 2080 or better) and runs on Python with wxPython/Jupyter frontends, supporting network-based iOS facial motion transfer.
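The chained-model data flow described above can be sketched as follows. This is a minimal, hypothetical illustration of the decompose-process-recombine pattern, not the project's actual API: the real demo uses PyTorch networks operating on image tensors, while here each stage is a stub that tags a string so the pipeline's data flow is visible. All function names and pose parameters are illustrative.

```python
# Hypothetical sketch of a chained multi-model pipeline (decompose, morph,
# rotate, recombine). Each stage stands in for a neural network in the real
# project; string tagging just makes the order of operations observable.

def eyebrow_decomposer(image):
    # Split the input into an eyebrow layer and the rest of the face.
    return {"eyebrows": f"eyebrows({image})", "face": f"face({image})"}

def face_morpher(face, expression_pose):
    # Apply expression parameters (e.g. ARKit-style blend shape values).
    return f"morphed({face}, {expression_pose})"

def rotator(image, rotation_pose):
    # Rotate the head according to the rotation parameters.
    return f"rotated({image}, {rotation_pose})"

def combiner(face, eyebrows):
    # Recombine the separately processed layers into the output frame.
    return f"combined({face}, {eyebrows})"

def animate(image, expression_pose, rotation_pose):
    # Chain the stages: decompose first, then morph and rotate the face,
    # and finally merge the eyebrow layer back in.
    parts = eyebrow_decomposer(image)
    morphed = face_morpher(parts["face"], expression_pose)
    rotated = rotator(morphed, rotation_pose)
    return combiner(rotated, parts["eyebrows"])

frame = animate("input.png", "smile", "yaw=10")
print(frame)
# → combined(rotated(morphed(face(input.png), smile), yaw=10), eyebrows(input.png))
```

In the real-time capture path, the expression and rotation parameters would arrive over the network from iFacialMocap each frame, and only the pose inputs change between frames while the source image stays fixed.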

About talking-head-anime-4-demo

pkhungurn/talking-head-anime-4-demo

Demo Programs for the "Talking Head(?) Anime from a Single Image 4: Improved Models and Its Distillation" Project

Scores updated daily from GitHub, PyPI, and npm data.