
VSCBench: Bridging the Gap in Vision-Language Model Safety Calibration

Geng, Jiahui
Li, Qing
Chen, Zongxiong
Wang, Yuxia
Zhu, Derui
Xie, Zhuohan
Lyu, Chenyang
Chen, Xiuying
Nakov, Preslav
Karray, Fakhri
Department
Natural Language Processing
Type
Conference proceeding
License
http://creativecommons.org/licenses/by/4.0/
Abstract
The rapid advancement of vision-language models (VLMs) has drawn considerable attention to their safety alignment. However, existing methods have focused primarily on undersafety, where the model responds to hazardous queries, while neglecting oversafety, where the model refuses to answer safe queries. In this paper, we introduce the concept of safety calibration, which systematically addresses both undersafety and oversafety. Specifically, we present VSCBench, a novel dataset of 3,600 image-text pairs that are visually or textually similar but differ in safety, designed to evaluate safety calibration in image-centric and text-centric scenarios. Using our benchmark, we evaluate safety calibration across eleven widely used VLMs. Our extensive experiments reveal major issues with both undersafety and oversafety. We further investigate four approaches to improving the models' safety calibration and find that, although some methods effectively calibrate the models' safety, they also degrade the models' utility. This trade-off underscores the urgent need for advanced calibration methods, and our benchmark provides a valuable tool for evaluating future approaches.
Citation
J. Geng, Q. Li, Z. Chen, Y. Wang, D. Zhu, Z. Xie, C. Lyu, X. Chen, P. Nakov, F. Karray, "VSCBench: Bridging the Gap in Vision-Language Model Safety Calibration," 2025, pp. 3047-3059.
Source
Proceedings of the Annual Meeting of the Association for Computational Linguistics
Conference
Findings of the Association for Computational Linguistics: ACL 2025
Publisher
Association for Computational Linguistics (ACL)