Video Motion Transfer with Diffusion Transformers
Pondaven, Alexander; Siarohin, Aliaksandr; Tulyakov, Sergey; Torr, Philip H.S.; Pizzati, Fabio
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
We propose DiTFlow, a method for transferring the motion of a reference video to a newly synthesized one, designed specifically for Diffusion Transformers (DiT). We first process the reference video with a pre-trained DiT to analyze cross-frame attention maps and extract a patch-wise motion signal called the Attention Motion Flow (AMF). We guide the latent denoising process in an optimization-based, training-free manner, optimizing the latents with our AMF loss so that the generated video reproduces the motion of the reference one. We also apply our optimization strategy to the transformer positional embeddings, yielding a boost in zero-shot motion transfer capabilities. We evaluate DiTFlow against recently published methods, outperforming all of them across multiple metrics and in human evaluation.
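The abstract's core loop can be illustrated with a minimal PyTorch sketch: derive a patch-wise motion flow from a cross-frame attention map (the attention-weighted mean displacement of each patch, our simplified reading of the AMF), then take one training-free guidance step that optimizes the latents against an MSE loss to this reference flow. The function names (`attention_motion_flow`, `amf_guidance_step`) and the `attn_fn` hook that maps latents to a cross-frame attention map are hypothetical placeholders, not the paper's actual interfaces.

```python
import torch

def attention_motion_flow(attn, grid_h, grid_w):
    """Patch-wise motion flow from a cross-frame attention map.

    attn: (N, N) attention from each patch of frame t to the patches of
    frame t+1, with N = grid_h * grid_w. The flow of a patch is the
    attention-weighted mean displacement to the patches it attends to
    (a simplified stand-in for the paper's AMF).
    """
    ys, xs = torch.meshgrid(
        torch.arange(grid_h, dtype=torch.float32),
        torch.arange(grid_w, dtype=torch.float32),
        indexing="ij",
    )
    coords = torch.stack([xs.flatten(), ys.flatten()], dim=-1)  # (N, 2)
    target = attn @ coords        # expected target position per patch
    return target - coords        # displacement field, shape (N, 2)

def amf_guidance_step(latents, ref_amf, attn_fn, grid_h, grid_w, lr=0.1):
    """One training-free guidance step on the latents.

    attn_fn is a hypothetical hook mapping latents to a (N, N)
    cross-frame attention map; in the real method this comes from a
    pre-trained DiT's attention layers.
    """
    latents = latents.clone().requires_grad_(True)
    attn = attn_fn(latents)
    gen_amf = attention_motion_flow(attn, grid_h, grid_w)
    loss = torch.nn.functional.mse_loss(gen_amf, ref_amf)
    loss.backward()
    with torch.no_grad():
        latents = latents - lr * latents.grad  # gradient step on latents
    return latents.detach(), loss.item()
```

In the full method this step would be interleaved with denoising at selected diffusion timesteps; here a single step only shows the mechanics of optimizing latents against an AMF loss.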
Citation
A. Pondaven, A. Siarohin, S. Tulyakov, P. Torr and F. Pizzati, "Video Motion Transfer with Diffusion Transformers," 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2025, pp. 22911-22921, doi: 10.1109/CVPR52734.2025.02133.
Source
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Conference
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025
Keywords
Diffusion Transformers, Generative Models, Motion Transfer, Video Diffusion Models, Zero-shot
Publisher
IEEE
