Action Tokenizer Matters in In-Context Imitation Learning

Vuong, An Dinh
Vu, Minh Nhat
An, Dong
Reid, Ian
Department
Computer Vision
Type
Conference proceeding
Abstract
In-context imitation learning (ICIL) is a new paradigm that enables robots to generalize from demonstrations to unseen tasks without retraining. A well-structured action representation is key to capturing demonstration information effectively, yet action tokenization (the process of discretizing and encoding actions) remains largely unexplored in ICIL. In this work, we first systematically evaluate existing action tokenizer methods in ICIL and reveal a critical limitation: while they effectively encode action trajectories, they fail to preserve temporal smoothness, which is crucial for stable robotic execution. To address this, we propose LipVQ-VAE, a variational autoencoder that enforces the Lipschitz condition in the latent action space via weight normalization. By propagating smoothness constraints from raw action inputs to a quantized latent codebook, LipVQ-VAE generates smoother actions. When integrated into ICIL, LipVQ-VAE improves performance by more than 5.3% in high-fidelity simulators, with real-world experiments confirming its ability to produce smoother, more reliable trajectories. Code and checkpoints are available at https://action-tokenizer-matters.github.io/.
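The abstract's core mechanism, bounding a layer's Lipschitz constant by normalizing its weights, can be illustrated with a minimal NumPy sketch. This is a generic illustration of the principle, not the paper's LipVQ-VAE implementation; the function name `lipschitz_linear` and the choice of normalizing by the spectral norm are assumptions for the example.

```python
import numpy as np

def lipschitz_linear(W, x):
    # Rescale W by its spectral norm (largest singular value) when that norm
    # exceeds 1, so the linear map x -> W_hat @ x is 1-Lipschitz.
    sigma = np.linalg.norm(W, 2)
    W_hat = W / max(1.0, sigma)
    return W_hat @ x

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
x, y = rng.normal(size=3), rng.normal(size=3)

# Outputs move no farther apart than the inputs did: the hallmark of a
# 1-Lipschitz map, which is what keeps nearby actions mapped to nearby codes.
lhs = np.linalg.norm(lipschitz_linear(W, x) - lipschitz_linear(W, y))
rhs = np.linalg.norm(x - y)
assert lhs <= rhs + 1e-9
```

Applying such a constraint to the encoder means small changes in consecutive raw actions yield small changes in the latent representation, which is one way smoothness can propagate to a quantized codebook.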
Citation
A.D. Vuong, M.N. Vu, D. An, I. Reid, "Action Tokenizer Matters in In-Context Imitation Learning," 2025, pp. 13490-13496.
Source
2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Keywords
46 Information and Computing Sciences, 4611 Machine Learning
Publisher
IEEE