New AI Model Revolutionises Portrait Animation with Enhanced Control and Realism

LivePortrait model introduces innovative stitching and retargeting modules, enabling more realistic and expressive animated portraits.

Researchers from Kuaishou Technology, the University of Science and Technology of China, and Fudan University have developed a groundbreaking AI model called LivePortrait that significantly improves the quality and controllability of portrait animation.

The model introduces features such as stitching and retargeting modules, allowing for more realistic animated portraits. The interactive demo is built on Gradio, part of the Hugging Face family.

Read the full paper here – https://arxiv.org/pdf/2407.03168

LivePortrait can efficiently animate static portrait images with precise control over stitching and facial features. The model, described in a recent paper, aims to make portrait animation more realistic, expressive, and computationally efficient.

LivePortrait employs an implicit keypoint-based framework to animate portraits, significantly advancing previous methods. It can seamlessly stitch animated faces back into the original images, enabling the animation of full-body portraits and of multiple faces within a single image.

Additionally, it provides fine-grained control over eye and lip movements through dedicated retargeting modules. LivePortrait achieves high-quality results while being much faster than prior diffusion-based methods, running at 12.8ms per frame on an RTX 4090 GPU.
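For context, the reported per-frame latency translates directly into throughput. The arithmetic below uses only the 12.8 ms figure quoted from the paper:

```python
# Throughput implied by the reported 12.8 ms per-frame latency on an RTX 4090
ms_per_frame = 12.8
fps = 1000.0 / ms_per_frame
print(f"{fps:.1f} fps")  # about 78 fps, well above the ~30 fps needed for real-time video
```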

The researchers enhanced an existing implicit keypoint model by scaling up training data to 69 million high-quality images, using a mixed image-video training strategy, upgrading network architecture, and introducing new optimisation techniques. They also designed small MLP networks that act as implicit blend shapes to enable precise control over facial features.
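The idea of a small MLP acting as an implicit blend shape can be sketched as a network that maps the source keypoints plus a control scalar (say, desired eye openness) to keypoint offsets. Everything below — the keypoint count, layer sizes, and function names — is illustrative, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_retarget(keypoints, condition, w1, b1, w2, b2):
    """Hypothetical retargeting MLP: (keypoints, control scalar) -> keypoint offsets."""
    x = np.concatenate([keypoints.ravel(), [condition]])  # flatten and append control
    h = np.maximum(w1 @ x + b1, 0.0)                      # ReLU hidden layer
    return (w2 @ h + b2).reshape(keypoints.shape)         # offsets, same shape as input

K = 21                         # hypothetical number of implicit 3D keypoints
kp = rng.standard_normal((K, 3))
dim_in, hidden = K * 3 + 1, 64
w1 = rng.standard_normal((hidden, dim_in)) * 0.01
b1 = np.zeros(hidden)
w2 = rng.standard_normal((K * 3, hidden)) * 0.01
b2 = np.zeros(K * 3)

delta = mlp_retarget(kp, 0.8, w1, b1, w2, b2)  # 0.8 = desired eye-openness condition
animated_kp = kp + delta                        # retargeted keypoints drive the animation
```

The key design point this sketch captures is that the offsets are additive: the MLP perturbs the existing implicit keypoints rather than predicting them from scratch, which is what makes fine-grained, isolated control of eyes and lips possible.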

In experiments, LivePortrait outperformed both diffusion-based and non-diffusion methods on standard benchmarks for portrait animation quality and motion accuracy. The stitching and retargeting modules allowed the seamless integration of animated faces into original images and fine control over eye and mouth movements.

The developers hope LivePortrait will enable more controllable animations for applications like video conferencing, social media, and entertainment. However, they note potential ethical concerns around deepfake misuse and suggest that visual artefacts in current results could aid detection.

While some limitations remain, such as handling large pose variations, the researchers believe LivePortrait represents an important advance in efficient, high-quality portrait animation with enhanced creative control.

A few months back, Stability AI released a similar model, Stable Video 3D, which generates 3D videos from single images. However, it did not focus on portrait animation.


Gopika Raj

With a Master's degree in Journalism & Mass Communication, Gopika Raj infuses her technical writing with a distinctive flair. Intrigued by advancements in AI technology and its future prospects, her writing offers a fresh perspective in the tech domain, captivating readers along the way.