ZakirCodeArchitect/Sonic-Lipsync-AI
A Google Colab-based Gradio app for generating lip-synced videos using the Sonic model. It supports audio-to-video syncing with Hugging Face models and runs entirely in the cloud—no local setup needed.
This tool creates videos in which a person's lips match a given audio track. You provide a still image of a person and an audio file, and it generates a video of that person lip-syncing to your audio. It's ideal for content creators, marketers, or educators who need to produce engaging visual content with synchronized speech.
No commits in the last 6 months.
Use this if you need to quickly generate a lip-synced video from a still image and an audio track, without needing any special software installed on your computer.
Not ideal if you need to fine-tune subtle facial expressions beyond lip movement, or if you prefer a solution that runs entirely on your local machine without cloud services.
Stars: 8
Forks: 1
Language: Python
License: —
Category:
Last pushed: Apr 04, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/voice-ai/ZakirCodeArchitect/Sonic-Lipsync-AI"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
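If you would rather call the endpoint from code than from curl, a minimal Python sketch is below. Only the URL pattern and rate limits come from the listing above; the JSON response schema is an assumption, so the fetch helper simply decodes whatever the server returns.

```python
import json
import urllib.request

# Base endpoint, taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/voice-ai"


def build_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (free tier: 100 requests/day)."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example: the URL for this repository.
print(build_url("ZakirCodeArchitect", "Sonic-Lipsync-AI"))
```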
Higher-rated alternatives
primepake/wav2lip_288x288
Wav2Lip at 288×288 resolution, with a training pipeline.
SARIT42/lipsyncr
LipSyncr is a lip reading web app based on the LipNet model that can lip read videos.
Chris10M/Lip2Speech
A pipeline to read lips and generate speech for the read content, i.e., lip-to-speech synthesis.
Markfryazino/wav2lip-hq
Extension of Wav2Lip repository for processing high-quality videos.
d-kavinraja/MouthMap
MouthMap is a deep learning-based lip reading system that converts silent video sequences into...