
SOAP: Style-Omniscient Animatable Portraits

Liao, Tingting
Zheng, Yujian
Xiu, Yuliang
Karmanov, Adilbek
Hu, Liwen
Jin, Leyang
Li, Hao
Abstract
Creating animatable 3D avatars from a single image remains challenging due to style limitations (realistic, cartoon, anime) and difficulties in handling accessories and hairstyles. While 3D diffusion models advance single-view reconstruction for general objects, their outputs often lack animation controls or suffer from artifacts because of the domain gap. We propose SOAP, a style-omniscient framework that generates rigged, topology-consistent avatars from any portrait. Our method leverages a multiview diffusion model trained on 24K 3D heads spanning multiple styles, together with an adaptive optimization pipeline that deforms the FLAME mesh via differentiable rendering while maintaining topology and rigging. The resulting textured avatars support FACS-based animation, integrate eyeballs and teeth, and preserve details such as braided hair and accessories. Extensive experiments demonstrate the superiority of our method over state-of-the-art techniques for both single-view head modeling and diffusion-based image-to-3D generation. Our code and data are publicly available for research purposes at github.com/TingtingLiao/soap.
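The abstract's core idea of deforming a template mesh under a rendering-style loss while a regularizer preserves topology can be illustrated with a minimal sketch. This is not the authors' pipeline: the FLAME head model is stood in for by a random template, the differentiable renderer is reduced to an orthographic projection onto 2D landmark targets, and the topology term is a simple Laplacian-coordinate penalty; all names and numbers are illustrative.

```python
import numpy as np

# Toy sketch (NOT the SOAP implementation): fit template vertices to 2D
# targets while a Laplacian term keeps the local geometry of the template.
rng = np.random.default_rng(0)
V = rng.normal(size=(50, 3))        # stand-in for FLAME template vertices
target = V[:, :2] + 0.3             # hypothetical 2D landmark targets

def laplacian(X):
    # Umbrella operator over a toy cyclic neighbor structure.
    return X - 0.5 * (np.roll(X, 1, axis=0) + np.roll(X, -1, axis=0))

L0 = laplacian(V)                   # rest-pose Laplacian coordinates
X = V.copy()
lr, lam = 0.1, 0.5
for _ in range(500):
    # Gradient of ||X[:, :2] - target||^2 (orthographic "rendering" loss).
    g_data = np.zeros_like(X)
    g_data[:, :2] = 2 * (X[:, :2] - target)
    # Gradient of lam * ||laplacian(X) - L0||^2 (topology preservation);
    # the toy Laplacian is symmetric, so the chain rule applies it twice.
    g_reg = 2 * lam * laplacian(laplacian(X) - L0)
    X -= lr * (g_data + g_reg)

fit_err = np.abs(X[:, :2] - target).max()
```

In the paper's setting the data term comes from rendered multiview diffusion outputs rather than sparse landmarks, but the structure, a fitting loss plus a deformation regularizer optimized by gradient descent over template vertices, is the same.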
Citation
T. Liao et al., “SOAP: Style-Omniscient Animatable Portraits,” Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers, pp. 1–11, Aug. 2025, doi: 10.1145/3721238.3730691
Source
Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers
Conference
SIGGRAPH Conference Papers '25: Special Interest Group on Computer Graphics and Interactive Techniques Conference
Publisher
Association for Computing Machinery