
Promptception: How Sensitive Are Large Multimodal Models to Prompts?

Ismithdeen, Mohamed Insaf
Khattak, Muhammad Uzair
Khan, Salman
Department
Computer Vision
Type
Conference proceeding
License
http://creativecommons.org/licenses/by/4.0/
Abstract
Despite the success of Large Multimodal Models (LMMs) in recent years, prompt design for LMMs in Multiple‐Choice Question Answering (MCQA) remains poorly understood. We show that even minor variations in prompt phrasing and structure can lead to accuracy deviations of up to 15% for certain prompts and models. This variability poses a challenge for transparent and fair LMM evaluation, as models often report their best-case performance using carefully selected prompts. To address this, we introduce **Promptception**, a systematic framework for evaluating prompt sensitivity in LMMs. It consists of 61 prompt types, spanning 15 categories and 6 supercategories, each targeting specific aspects of prompt formulation, and is used to evaluate 10 LMMs ranging from lightweight open‐source models to GPT-4o and Gemini 1.5 Pro, across 3 MCQA benchmarks: MMStar, MMMU‐Pro, MVBench. Our findings reveal that proprietary models exhibit greater sensitivity to prompt phrasing, reflecting tighter alignment with instruction semantics, while open‐source models are steadier but struggle with nuanced and complex phrasing. Based on this analysis, we propose Prompting Principles tailored to proprietary and open-source LMMs, enabling more robust and fair model evaluation.
Citation
M.I. Ismithdeen, M.U. Khattak, S. Khan, "Promptception: How Sensitive Are Large Multimodal Models to Prompts?," 2025, pp. 23950-23985.
Source
EMNLP 2025 - 2025 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2025
Conference
Findings of the Association for Computational Linguistics: EMNLP 2025
Publisher
Association for Computational Linguistics