ImageFolder: autoregressive image generation with folded tokens
Li, Xiang ; Qiu, Kai ; Chen, Hao ; Kuen, Jason ; Gu, Jiuxiang ; Raj, Bhiksha ; Lin, Zhe
Department
Natural Language Processing
Type
Conference proceeding
Date
2025
Language
English
Abstract
Image tokenizers are crucial for visual generative models, e.g., diffusion models (DMs) and autoregressive (AR) models, as they construct the latent representation for modeling. Increasing token length is a common approach to improving image reconstruction quality. However, tokenizers with longer token lengths are not guaranteed to achieve better generation quality: there is a trade-off between reconstruction and generation quality with respect to token length. In this paper, we investigate the impact of token length on both image reconstruction and generation and provide a flexible solution to this trade-off. We propose ImageFolder, a semantic tokenizer that provides spatially aligned image tokens that can be folded during autoregressive modeling to improve both generation efficiency and quality. To enhance representative capability without increasing token length, we leverage dual-branch product quantization to capture different contexts of images. Specifically, semantic regularization is introduced in one branch to encourage compact semantic information, while the other branch is designed to capture the remaining pixel-level details. Extensive experiments demonstrate the superior image generation quality and shorter token length achieved with the ImageFolder tokenizer. © 2025 13th International Conference on Learning Representations, ICLR 2025. All rights reserved.
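The dual-branch product quantization described above can be illustrated with a minimal sketch: the latent vector is split in two, each half is quantized against its own codebook (one intended for semantic content, one for pixel-level detail), and the resulting index pair forms one folded token. This is an assumption-laden toy in NumPy, not the authors' implementation; the function names, codebook sizes, and split scheme are all hypothetical.

```python
import numpy as np

def quantize(z, codebook):
    # Nearest-codeword lookup: return the index and row of the
    # codebook entry closest to z in Euclidean distance.
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

def dual_branch_pq(z, cb_semantic, cb_detail):
    # Product quantization with two branches: split the latent into
    # halves and quantize each against its own codebook. In the paper's
    # framing, one branch carries semantically regularized content and
    # the other the remaining pixel-level details.
    half = z.shape[0] // 2
    i_sem, q_sem = quantize(z[:half], cb_semantic)
    i_det, q_det = quantize(z[half:], cb_detail)
    # The index pair (i_sem, i_det) is one "folded" token for AR
    # modeling; the quantized latent is the concatenated codewords.
    return (i_sem, i_det), np.concatenate([q_sem, q_det])

rng = np.random.default_rng(0)
cb_semantic = rng.normal(size=(16, 4))  # toy semantic codebook
cb_detail = rng.normal(size=(16, 4))    # toy detail codebook
tokens, z_q = dual_branch_pq(rng.normal(size=8), cb_semantic, cb_detail)
```

Because the two branches are quantized independently, the effective vocabulary is the product of the two codebook sizes (here 16 × 16 = 256) while each AR step emits only one folded token, which is the efficiency argument in the abstract.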
Citation
X. Li et al., “ImageFolder: Autoregressive Image Generation with Folded Tokens,” International Conference on Representation Learning, vol. 2025, no. 3, pp. 52445–52464, May 2025
Source
13th International Conference on Learning Representations, ICLR 2025
Conference
13th International Conference on Learning Representations, ICLR 2025
Publisher
International Conference on Learning Representations, ICLR
