Internal Activation Revision: Safeguarding Vision Language Models Without Parameter Update
Li, Qing ; Geng, Jiahui ; Zhu, Derui ; Chen, Zongxiong ; Song, Kun ; Ma, Lei ; Karray, Fakhri
Department
Machine Learning
Type
Conference proceeding
Date
2025
Language
English
Abstract
Warning: This paper contains offensive content that may disturb some readers. Vision-language models (VLMs) demonstrate strong multimodal capabilities but have been found to be more susceptible to generating harmful content than their backbone large language models (LLMs). Our investigation reveals that integrating images significantly shifts the model’s internal activations during the forward pass, diverging from those triggered by textual input. Moreover, the safety alignments of the LLMs embedded within VLMs are not sufficiently robust to handle these activation discrepancies, leaving the models vulnerable to even the simplest jailbreaking attacks. To address this issue, we propose an internal activation revision approach that efficiently revises activations during generation, steering the model toward safer outputs. Our framework incorporates revisions at both the layer and head levels, offering control over the model’s generation at varying levels of granularity. In addition, we explore three strategies for constructing positive and negative samples and two approaches for extracting revision vectors, resulting in different variants of our method. Comprehensive experiments demonstrate that the internal activation revision method significantly improves the safety of widely used VLMs, reducing attack success rates by an average of 48.94%, 34.34%, 43.92%, and 52.98% on SafeBench, Safe-Unsafe, Unsafe, and MM-SafetyBench, respectively, while minimally impacting model helpfulness.
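The abstract does not spell out how the revision vectors are computed, but a common realization of this kind of activation steering is a difference-of-means vector between activations from positive (safe) and negative (unsafe) samples, added with a scaling coefficient to a chosen layer's hidden states at generation time. The sketch below illustrates that generic idea with NumPy; the function names, the mean-difference construction, and the `alpha` coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def revision_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Assumed construction: difference of mean activations.

    pos_acts / neg_acts: (num_samples, hidden_dim) activations collected at
    one layer (or one attention head) for safe vs. unsafe samples.
    """
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def revise(hidden: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Shift a layer's hidden state toward the 'safe' direction.

    hidden: (hidden_dim,) or (seq_len, hidden_dim); broadcasting applies v
    to every token position. alpha controls revision strength.
    """
    return hidden + alpha * v

# Toy illustration: safe activations cluster at 1, unsafe at 0.
pos = np.ones((4, 3))
neg = np.zeros((4, 3))
v = revision_vector(pos, neg)          # points from unsafe toward safe
revised = revise(np.zeros(3), v, 0.5)  # hidden state nudged halfway
```

In practice such a vector would be injected during the forward pass (e.g. via a hook on the targeted layer or head), with `alpha` tuned to balance safety gains against the helpfulness impact the abstract mentions.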
Citation
Q. Li et al., “Internal Activation Revision: Safeguarding Vision Language Models Without Parameter Update,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 26, pp. 27428–27436, Apr. 2025, doi: 10.1609/AAAI.V39I26.34954.
Source
Proceedings of the AAAI Conference on Artificial Intelligence
Conference
39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025
Keywords
Integration of images, Language model, Multi-modal, Negative samples, Simple++, Visual languages
Publisher
Association for the Advancement of Artificial Intelligence
