Efficient Training of Large Vision Models via Advanced Automated Progressive Learning
Li, Changlin ; Zhang, Jiawei ; Lin, Sihao ; Yang, Zongxin ; Liang, Junwei ; Liang, Xiaodan ; Chang, Xiaojun
Department
Computer Vision
Type
Journal article
Language
English
Abstract
The rapid advancement of Large Vision Models (LVMs), such as Vision Transformers (ViTs), diffusion models, and visual autoregressive models, has led to an increasing demand for computational resources, resulting in substantial financial and environmental costs. This growing challenge highlights the necessity of developing efficient training methods for LVMs. Progressive learning, a training strategy in which model capacity gradually increases during training, has shown promise in addressing these challenges. In this paper, we take a practical step toward the efficient training of LVMs by automating progressive learning. We focus first on the pre-training of LVMs, using ViTs as a case study, and propose AutoProg-One, an automated progressive learning scheme featuring momentum growth (MoGrow) and a one-shot growth schedule search. We then extend our approach beyond pre-training to the transfer learning and fine-tuning of LVMs, and broaden the scope of AutoProg to a wider range of LVMs, including diffusion models and visual autoregressive models. First, we introduce AutoProg-Zero, which enhances the AutoProg framework with a novel zero-shot automated progressive learning method that eliminates the need for one-shot supernet training. Second, we introduce a novel Unique Stage Identifier (SID) scheme to bridge the gap during network growth. These innovations, integrated with the core principles of AutoProg, offer a comprehensive solution for efficient training across various LVM scenarios. Extensive experiments show that AutoProg accelerates ViT pre-training by up to 1.85× on ImageNet and accelerates the fine-tuning of diffusion models and visual autoregressive models by up to 2.86× and 1.89×, respectively, with comparable or even better performance. This work provides a robust and scalable approach to the efficient training of LVMs, with potential applications in a wide range of vision tasks.
Citation
C. Li, J. Zhang, S. Lin, Z. Yang, J. Liang, X. Liang , et al., "Efficient Training of Large Vision Models via Advanced Automated Progressive Learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, pp. 1-15, 2026, https://doi.org/10.1109/tpami.2026.3673336.
Source
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
46 Information and Computing Sciences, 4611 Machine Learning
Publisher
IEEE
