VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation

Wen, Youpeng
Lin, Junfan
Zhu, Yi
Han, Jianhua
Xu, Hang
Zhao, Shen
Liang, Xiaodan
Department
Computer Vision
Type
Conference proceeding
Date
2024
Language
English
Abstract
Recent advancements utilizing large-scale video data for learning video generation models demonstrate significant potential in understanding complex physical dynamics. This suggests the feasibility of leveraging diverse robot trajectory data to develop a unified, dynamics-aware model to enhance robot manipulation. However, given the relatively small amount of available robot data, directly fitting the data without considering the relationship between visual observations and actions could lead to suboptimal data utilization. To this end, we propose VidMan (Video Diffusion for Robot Manipulation), a novel framework that employs a two-stage training mechanism inspired by dual-process theory from neuroscience to enhance stability and improve data utilization efficiency. Specifically, in the first stage, VidMan is pre-trained on the Open X-Embodiment (OXE) dataset to predict future visual trajectories in a video denoising diffusion manner, enabling the model to develop a long-horizon awareness of the environment's dynamics. In the second stage, a flexible yet effective layer-wise self-attention adapter is introduced to transform VidMan into an efficient inverse dynamics model that predicts actions modulated by the implicit dynamics knowledge via parameter sharing. Our VidMan framework outperforms the state-of-the-art baseline model GR-1 on the CALVIN benchmark, achieving an 11.7% relative improvement, and demonstrates over 9% precision gains on the OXE small-scale dataset. These results provide compelling evidence that world models can significantly enhance the precision of robot action prediction. Code and models will be made public.
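The abstract describes a two-stage design: a video diffusion backbone pre-trained to denoise future frame latents, then a lightweight self-attention adapter that reuses the backbone's features as an inverse dynamics model for action prediction. Below is a minimal PyTorch-style sketch of that idea, assuming a generic transformer backbone; all class names, shapes, and hyperparameters are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class VideoDiffusionBackbone(nn.Module):
    """Stage 1 (sketch): shared transformer that denoises future frame latents."""
    def __init__(self, dim=256, layers=4, heads=8):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.noise_head = nn.Linear(dim, dim)  # predicts the noise added to frame latents

    def forward(self, noisy_frame_latents):
        hidden = self.encoder(noisy_frame_latents)   # implicit dynamics features
        return self.noise_head(hidden), hidden

class ActionAdapter(nn.Module):
    """Stage 2 (sketch): layer-wise self-attention adapter reusing shared backbone
    features as an inverse dynamics model that outputs robot actions."""
    def __init__(self, dim=256, action_dim=7, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.action_head = nn.Linear(dim, action_dim)

    def forward(self, backbone_hidden):
        fused, _ = self.attn(backbone_hidden, backbone_hidden, backbone_hidden)
        return self.action_head(fused.mean(dim=1))   # one action per sequence

# Stage 1: denoising objective on future visual latents (hypothetical shapes)
backbone = VideoDiffusionBackbone()
latents = torch.randn(2, 16, 256)                    # (batch, frames, latent dim)
noise = torch.randn_like(latents)
pred_noise, hidden = backbone(latents + noise)
stage1_loss = nn.functional.mse_loss(pred_noise, noise)

# Stage 2: adapter predicts actions from the shared dynamics features
adapter = ActionAdapter()
actions = torch.randn(2, 7)                          # ground-truth robot actions
stage2_loss = nn.functional.mse_loss(adapter(hidden.detach()), actions)

In this sketch the two stages share the backbone's hidden states (parameter sharing as described in the abstract), while only the small adapter is trained for action prediction in stage 2.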
Citation
Y. Wen et al., “VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation,” Adv Neural Inf Process Syst, vol. 37, pp. 41051–41075, Dec. 2024.
Source
Advances in Neural Information Processing Systems (NeurIPS 2024)
Keywords
Video diffusion models, Robot manipulation, Implicit dynamics learning, Two-stage training, Action prediction
Publisher
NEURIPS