talking-head-anime-demo and talking-head-anime-3-demo

These are successive versions of the same project: talking-head-anime-3-demo extends the original head-only anime animation system with body animation capabilities.

talking-head-anime-demo
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 22/25

talking-head-anime-3-demo
Maintenance 0/25
Adoption 10/25
Maturity 16/25
Community 19/25
talking-head-anime-demo
Stars: 2,021
Forks: 287
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Stale 6m · No Package · No Dependents

talking-head-anime-3-demo
Stars: 1,030
Forks: 106
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Stale 6m · No Package · No Dependents

About talking-head-anime-demo

pkhungurn/talking-head-anime-demo

Demo for the "Talking Head Anime from a Single Image."

Implements two applications for anime character animation: a manual pose editor with slider controls, and a real-time puppeteer that mirrors head movements from webcam input using dlib face tracking. Built on PyTorch with custom neural network modules (face_morpher, face_rotator, combiner) that process 256×256 RGBA character images and output animated frames. Requires NVIDIA GPU acceleration; a Google Colab deployment is provided for users without local hardware.
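As a rough illustration of the input format described above, the sketch below shows how a 256×256 RGBA character image might be converted into a network-ready tensor. The function name and the [-1, 1] normalization are assumptions for illustration, not the project's actual preprocessing code; NumPy stands in for the PyTorch tensor ops.

```python
import numpy as np

def preprocess_character_image(rgba: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing: 256x256 RGBA uint8 -> float array in [-1, 1], CHW.

    The exact normalization used by the project may differ; this only
    illustrates the 256x256 RGBA input contract mentioned in the description.
    """
    assert rgba.shape == (256, 256, 4), "expects a 256x256 RGBA image"
    x = rgba.astype(np.float32) / 255.0   # scale to [0, 1]
    x = x * 2.0 - 1.0                     # shift to [-1, 1]
    return np.transpose(x, (2, 0, 1))     # HWC -> CHW, the PyTorch convention

# Example: a fully transparent canvas
img = np.zeros((256, 256, 4), dtype=np.uint8)
tensor = preprocess_character_image(img)
print(tensor.shape)  # (4, 256, 256)
```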

About talking-head-anime-3-demo

pkhungurn/talking-head-anime-3-demo

Demo Programs for the "Talking Head(?) Anime from a Single Image 3: Now the Body Too" Project

Enables real-time anime character animation from a single static image using deep neural networks, with control over facial expressions, head/body rotation, and breathing motion through either manual GUI manipulation or facial motion capture from iOS devices via iFacialMocap. The system offers four model variants (standard/separable, float/half precision) optimized for different hardware constraints, built on PyTorch with GPU acceleration and deployable as desktop applications or Jupyter notebooks.
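To make the control surface described above concrete, here is a minimal sketch of a pose-parameter container covering the three control groups mentioned (facial expression, head/body rotation, breathing). The field names and value ranges are assumptions for illustration; the project's real pose vector has its own layout and units.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Hypothetical pose parameters; names and ranges are assumptions,
    # not the project's actual pose-vector layout.
    eyebrow: float = 0.0    # expression intensity, clamped to [0, 1]
    head_yaw: float = 0.0   # rotation control, clamped to [-1, 1]
    breathing: float = 0.0  # breathing phase, clamped to [0, 1]

    def clamped(self) -> "Pose":
        """Return a copy with every parameter clamped into its valid range,
        as a GUI slider or mocap stream would enforce before inference."""
        def clamp(v: float, lo: float, hi: float) -> float:
            return max(lo, min(hi, v))
        return Pose(
            eyebrow=clamp(self.eyebrow, 0.0, 1.0),
            head_yaw=clamp(self.head_yaw, -1.0, 1.0),
            breathing=clamp(self.breathing, 0.0, 1.0),
        )

# Example: out-of-range mocap values get clamped before driving the model
pose = Pose(eyebrow=2.0, head_yaw=-3.0).clamped()
print(pose)  # Pose(eyebrow=1.0, head_yaw=-1.0, breathing=0.0)
```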

Scores updated daily from GitHub, PyPI, and npm data.