Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models
Baliah, Sanoojan ; Lin, Qinliang ; Liao, Shengcai ; Liang, Xiaodan ; Khan, Muhammad Haris
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
Despite promising progress in the face-swapping task, realistic swapped images remain elusive, often marred by artifacts, particularly in scenarios involving high pose variation, color differences, and occlusion. To address these issues, we propose a novel approach that better harnesses diffusion models for face swapping by making the following core contributions. (a) We reframe the face-swapping task as a self-supervised, train-time inpainting problem, enhancing identity transfer while blending with the target image. (b) We introduce multi-step Denoising Diffusion Implicit Model (DDIM) sampling during training, reinforcing identity and perceptual similarities. (c) We introduce CLIP feature disentanglement to extract pose, expression, and lighting information from the target image, improving fidelity. (d) We introduce a mask-shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping, with the additional capability of head swapping. Our method can swap hair and even accessories, beyond traditional face swapping. Unlike prior works reliant on multiple off-the-shelf models, ours is a relatively unified approach and is therefore resilient to errors in such models. Extensive experiments on the FFHQ and CelebA datasets validate the efficacy and robustness of our approach, showcasing high-fidelity, realistic face swapping with minimal inference time. Our code is available at REFace. © 2025 IEEE.
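The multi-step DDIM sampling mentioned in contribution (b) builds on the standard deterministic DDIM update (eta = 0). A minimal sketch of that update, using a toy oracle noise predictor and an illustrative 5-step schedule (the function names, schedule, and oracle are assumptions for demonstration, not details from the paper):

```python
import numpy as np

def ddim_step(x_t, eps, a_bar_t, a_bar_prev):
    """One deterministic DDIM update x_t -> x_{t-1} (eta = 0).

    x0 is first predicted from the current sample and the predicted
    noise, then re-noised to the previous timestep's noise level.
    """
    x0_pred = (x_t - np.sqrt(1.0 - a_bar_t) * eps) / np.sqrt(a_bar_t)
    return np.sqrt(a_bar_prev) * x0_pred + np.sqrt(1.0 - a_bar_prev) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 4))      # toy "clean image"
noise = rng.normal(size=(4, 4))   # true noise; stands in for a learned predictor

# Illustrative cumulative-alpha schedule, from nearly clean to very noisy.
a_bars = np.linspace(0.99, 0.05, 5)

# Forward process: noise x0 to the last (noisiest) timestep.
x = np.sqrt(a_bars[-1]) * x0 + np.sqrt(1.0 - a_bars[-1]) * noise

# Reverse process: walk the schedule back with deterministic DDIM steps.
for i in range(len(a_bars) - 1, 0, -1):
    x = ddim_step(x, noise, a_bars[i], a_bars[i - 1])
x = ddim_step(x, noise, a_bars[0], 1.0)  # final step to a_bar = 1

print(np.allclose(x, x0))  # True: with an oracle predictor, DDIM inverts the forward map
```

Because each DDIM step is a differentiable closed-form expression, several such steps can be unrolled during training, which is what makes enforcing identity and perceptual losses on multi-step denoised outputs tractable.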
Citation
S. Baliah, Q. Lin, S. Liao, X. Liang and M. H. Khan, "Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models," 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 2025, pp. 1062-1071, doi: 10.1109/WACV61041.2025.00112.
Source
Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
Conference
2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
Publisher
IEEE
