
PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer

Feng, Qian
Zhao, Hanbin
Zhang, Chao
Dong, Jiahua
Ding, Henghui
Jiang, Yu-Gang
Qian, Hui
Department
Computer Vision
Type
Journal article
Date
2025
Language
English
Abstract
Incremental Learning (IL) aims to train deep models continually on sequential tasks, where each new task introduces a batch of new classes and the model has no access to task-ID information at inference time. Recent methods built on large pre-trained models (PTMs) have achieved outstanding performance in practical IL via prompt techniques, without access to old samples (rehearsal-free) and under a memory constraint (memory-constrained); they fall into two categories: prompt-extending and prompt-fixed methods. However, prompt-extending methods require a large memory buffer to maintain an ever-expanding prompt pool and face an additional, challenging prompt-selection problem. Prompt-fixed methods learn only a single set of prompts on one of the incremental tasks and cannot handle all the incremental tasks effectively. To achieve a good balance between memory cost and performance across all tasks, we propose a Parameter-Efficient Cross-Task Prompt (PECTP) framework with a Prompt Retention Module (PRM) and a classifier Head Retention Module (HRM). To make the final learned prompts effective on all incremental tasks, PRM constrains the evolution of the cross-task prompts' parameters at both Outer Prompt Granularity and Inner Prompt Granularity. In addition, we employ HRM to inherit old knowledge from the previously learned classifier heads to improve the cross-task prompts' generalization ability. Extensive experiments show the effectiveness of our method.
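The abstract describes PRM as a constraint on how a fixed set of cross-task prompts may drift between tasks, at two granularities. The exact losses are not given here, so the following is only an illustrative sketch: it penalizes whole-prompt drift ("outer", hypothetical choice: mean squared distance over the full prompt tensor) and per-token-statistic drift ("inner", hypothetical choice: distance between token-mean embeddings). The function name and both terms are assumptions, not the paper's actual formulation.

```python
import numpy as np

def prompt_retention_loss(new_prompts, old_prompts, w_outer=1.0, w_inner=1.0):
    """Illustrative retention penalty on a fixed-size set of cross-task prompts.

    new_prompts, old_prompts: arrays of shape (num_prompts, prompt_len, dim),
    the prompt parameters after and before learning the current task.
    Both terms below are illustrative stand-ins for the paper's PRM losses.
    """
    # "Outer" granularity (assumed form): keep each whole prompt tensor
    # close to its state from the previous task.
    outer = np.mean((new_prompts - old_prompts) ** 2)
    # "Inner" granularity (assumed form): keep per-prompt token-mean
    # embeddings stable, i.e. a coarser statistic of each prompt.
    inner = np.mean((new_prompts.mean(axis=1) - old_prompts.mean(axis=1)) ** 2)
    return w_outer * outer + w_inner * inner
```

With identical prompts the penalty is zero, and it grows as the prompts drift, so adding it to the task loss discourages the fixed prompt set from overfitting the newest task at the expense of earlier ones.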
Citation
Q. Feng et al., "PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer," in IEEE Transactions on Circuits and Systems for Video Technology, doi: 10.1109/TCSVT.2025.3572943
Source
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Incremental Learning, Prompt Learning, Parameter Efficient Prompts, Pre-Trained Model
Publisher
IEEE