ZakirCodeArchitect/Sonic-Lipsync-AI

A Google Colab-based Gradio app for generating lip-synced videos using the Sonic model. It supports audio-to-video syncing with Hugging Face models and runs entirely in the cloud—no local setup needed.

Score: 28 / 100 (Experimental)

This tool animates a still image of a person so that the lips match a given audio track. You provide an image of a person and an audio file, and it generates a new video of that person lip-syncing to your audio. It's ideal for content creators, marketers, or educators who need to produce engaging visual content with synchronized speech.

No commits in the last 6 months.

Use this if you need to quickly generate a lip-synced video from a still image and an audio track, without needing any special software installed on your computer.

Not ideal if you need to fine-tune subtle facial expressions beyond lip movement, or if you prefer a solution that runs entirely on your local machine without cloud services.

video-production content-creation marketing-material digital-storytelling e-learning
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 8 / 25


Stars: 8
Forks: 1
Language: Python
License:
Last pushed: Apr 04, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/ZakirCodeArchitect/Sonic-Lipsync-AI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
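
The same endpoint can also be queried from Python instead of curl. A minimal standard-library sketch: only the URL in the curl example above comes from this page, while the helper names and the assumption that the endpoint returns JSON are illustrative.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality data (assumes the endpoint returns JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example, no API key needed up to 100/day.
    print(quality_url("voice-ai", "ZakirCodeArchitect", "Sonic-Lipsync-AI"))
```

The URL builder is separated from the network call so the request target can be inspected or logged before spending one of the day's free requests.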