BenchLMM: Benchmarking Cross-Style Visual Capability of Large Multimodal Models
Cai, Rizhao ; Song, Zirui ; Guan, Dayan ; Chen, Zhenhao ; Li, Yaohang ; Luo, Xing ; Yi, Chenyu ; Kot, Alex
Department
Computer Vision
Type
Conference proceeding
Date
2025
Language
English
Abstract
Large Multimodal Models (LMMs) such as GPT-4V and LLaVA have shown remarkable capabilities in visual reasoning on data in common image styles. However, their robustness against diverse style shifts, crucial for practical applications, remains largely unexplored. In this paper, we propose a new benchmark, BenchLMM, to assess the robustness of LMMs against three distinct styles: artistic image style, imaging sensor style, and application style. Utilizing BenchLMM, we comprehensively evaluate state-of-the-art LMMs and reveal that: 1) LMMs generally suffer performance degradation when working with other styles; 2) an LMM that outperforms another model in the common style does not necessarily maintain its superior performance in other styles; 3) LMMs' reasoning capability can be enhanced by prompting them to predict the style first, based on which we propose a versatile, training-free method for improving LMMs; 4) an intelligent LMM is expected to interpret the causes of its errors when facing stylistic variations. We hope that our benchmark and analysis can shed new light on developing more intelligent and versatile LMMs. The benchmark and evaluation code have been released at https://github.com/AIFEG/BenchLMM.
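The training-free improvement described in point 3 can be sketched as a two-turn prompting scheme: first ask the LMM to identify the image style, then pose the original question conditioned on that self-predicted style. The helper below is a minimal illustration of this idea; the function name and prompt wording are assumptions for illustration, not code from the BenchLMM repository.

```python
# Hypothetical sketch of "predict the style first" prompting.
# Only the prompt construction is shown; the actual LMM call
# (e.g., to GPT-4V or LLaVA) is outside the scope of this example.

def build_style_aware_prompts(question: str) -> list[str]:
    """Return a two-turn prompt sequence: first elicit the image style,
    then ask the original question conditioned on that prediction."""
    # Turn 1: probe the model for the image style before asking anything else.
    style_probe = (
        "What is the style of this image "
        "(e.g., photo, painting, sketch, infrared, X-ray)?"
    )
    # Turn 2: restate the task, asking the model to use its own style prediction.
    conditioned_question = (
        "Taking the image style you identified into account, "
        f"answer the following question: {question}"
    )
    return [style_probe, conditioned_question]
```

In use, the two prompts would be sent in sequence within one multimodal conversation, so the model's style prediction from the first turn is in context when it answers the second.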
Citation
R. Cai et al., “BenchLMM: Benchmarking Cross-Style Visual Capability of Large Multimodal Models,” pp. 340–358, Dec. 2025, doi: 10.1007/978-3-031-72973-7_20.
Source
Computer Vision – ECCV 2024
Conference
Keywords
Large Multimodal Models (LMMs), Visual reasoning, Style robustness, BenchLMM benchmark, Cross-style evaluation
Publisher
Springer Nature
