
Enhancing Multimodal Continual Instruction Tuning with BranchLoRA

Zhang, Duzhen
Ren, Yong
Li, Zhongzhi
Yu, Yahan
Dong, Jiahua
Li, Chenxing
Ji, Zhilong
Bai, Jinfeng
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Multimodal Continual Instruction Tuning (MCIT) aims to finetune Multimodal Large Language Models (MLLMs) to continually align with human intent across sequential tasks. Existing approaches often rely on the Mixture-of-Experts (MoE) LoRA framework to preserve previous instruction alignments. However, these methods are prone to Catastrophic Forgetting (CF), as they aggregate all LoRA blocks via simple summation, which compromises performance over time. In this paper, we identify a critical parameter inefficiency in the MoELoRA framework within the MCIT context. Based on this insight, we propose BranchLoRA, an asymmetric framework to enhance both efficiency and performance. To mitigate CF, we introduce a flexible tuning-freezing mechanism within BranchLoRA, enabling branches to specialize in intra-task knowledge while fostering inter-task collaboration. Moreover, we incrementally incorporate task-specific routers to ensure an optimal branch distribution over time, rather than favoring the most recent task. To streamline inference, we introduce a task selector that automatically routes test inputs to the appropriate router without requiring task identity. Extensive experiments on the latest MCIT benchmark demonstrate that BranchLoRA significantly outperforms MoELoRA and maintains its superiority across various MLLM sizes.
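The abstract's contrast between MoELoRA's uniform summation of LoRA blocks and BranchLoRA's routed, freeze-as-you-go branches can be illustrated with a short sketch. The PyTorch code below is a hypothetical illustration under assumed shapes and names (BranchLoRALinear, add_task, and the per-task routers are inventions for brevity), not the authors' implementation; the task selector that infers the task identity at test time is omitted.

```python
import torch
import torch.nn as nn

class BranchLoRALinear(nn.Module):
    """Sketch of a frozen linear layer augmented with per-task LoRA branches.

    Earlier branches are frozen to retain prior-task knowledge (mitigating
    catastrophic forgetting); a per-task router mixes all branches available
    so far, instead of summing every block uniformly as in MoELoRA.
    """

    def __init__(self, in_dim: int, out_dim: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():
            p.requires_grad_(False)      # pretrained weights stay frozen
        self.branches = nn.ModuleList()  # one low-rank branch per task
        self.routers = nn.ModuleList()   # one router per task
        self.rank = rank

    def add_task(self):
        """Freeze existing branches, then add a trainable branch and router."""
        for branch in self.branches:
            for p in branch.parameters():
                p.requires_grad_(False)
        in_dim, out_dim = self.base.in_features, self.base.out_features
        self.branches.append(nn.Sequential(
            nn.Linear(in_dim, self.rank, bias=False),   # LoRA "A"
            nn.Linear(self.rank, out_dim, bias=False),  # LoRA "B"
        ))
        # The new router distributes weight over all branches seen so far.
        self.routers.append(nn.Linear(in_dim, len(self.branches)))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # At test time a task selector would predict task_id; here it is given.
        router = self.routers[task_id]
        n = router.out_features
        gates = torch.softmax(router(x), dim=-1)  # (..., n) branch weights
        out = self.base(x)
        for i, branch in enumerate(self.branches[:n]):
            out = out + gates[..., i:i + 1] * branch(x)
        return out

layer = BranchLoRALinear(16, 16)
layer.add_task()            # task 0: first branch and router, trainable
x = torch.randn(2, 16)
y0 = layer(x, task_id=0)
layer.add_task()            # task 1: branch 0 is now frozen
y1 = layer(x, task_id=1)
```

The key design point the sketch tries to capture is the asymmetry: older branches remain frozen and are only reweighted by newer routers, so intra-task knowledge is preserved while later tasks can still draw on it.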
Citation
D. Zhang et al., “Enhancing Multimodal Continual Instruction Tuning with BranchLoRA,” in Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), vol. 1, pp. 5743–5756, Aug. 2025, doi: 10.18653/v1/2025.acl-long.287.
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
Keywords
Multimodal Continual Instruction Tuning, BranchLoRA Framework, Catastrophic Forgetting, Mixture-of-Experts LoRA, Task-specific Routers, Inference Task Selector, Multimodal Large Language Models, Efficiency-Performance Trade-off
Publisher
Association for Computational Linguistics