Calibration-Aware Prompt Learning for Medical Vision-Language Models
Basu, Abhishek
Author
Supervisor
Department
Computer Vision
Embargo End Date
2025-05-30
Type
Thesis
Date
2025
Language
English
Abstract
Medical Vision-Language Models (Med-VLMs) have demonstrated remarkable performance across diverse medical imaging tasks by leveraging large-scale image-text pre-training. However, their confidence calibration remains largely unexplored, posing a significant challenge for safe and trustworthy deployment: miscalibrated predictions can lead to overconfident errors, undermining clinical trust and decision-making reliability. To address this, we introduce CalibPrompt, the first framework to calibrate Med-VLMs during prompt tuning. CalibPrompt optimizes a small set of learnable prompts with carefully designed calibration objectives under a scarce-labeled-data regime. First, we study a regularizer that aligns the smoothed accuracy with the predicted model confidences. Second, we introduce an angular separation loss that acts on the angular proximity of textual features to improve the reliability of the confidence estimates of multimodal Med-VLMs. Extensive experiments on four publicly available Med-VLMs and five diverse medical imaging datasets show that CalibPrompt consistently improves calibration without drastically affecting clean accuracy.
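The two calibration objectives named in the abstract can be illustrated with a minimal PyTorch sketch. The exact formulations are defined in the thesis; the accuracy-smoothing scheme and the sign of the angular term below are illustrative assumptions only, not the author's definitive losses.

```python
import torch
import torch.nn.functional as F


def confidence_alignment_loss(logits: torch.Tensor,
                              labels: torch.Tensor,
                              smoothing: float = 0.1) -> torch.Tensor:
    """Penalize the gap between predicted confidence and a smoothed
    per-sample accuracy (the smoothing scheme here is a hypothetical choice)."""
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)                    # predicted confidence
    correct = (pred == labels).float()
    # smooth the 0/1 correctness toward the interior of [0, 1]
    smoothed_acc = correct * (1.0 - smoothing) + (1.0 - correct) * smoothing
    return (conf - smoothed_acc).abs().mean()


def angular_separation_loss(text_features: torch.Tensor) -> torch.Tensor:
    """Act on the pairwise angular proximity of class text embeddings.

    text_features: (num_classes, dim) prompt-derived text embeddings.
    Minimizing the mean off-diagonal cosine similarity pushes the
    embeddings apart on the unit sphere (one plausible reading of the
    angular separation objective).
    """
    z = F.normalize(text_features, dim=-1)            # project onto unit sphere
    sim = z @ z.t()                                   # pairwise cosine similarity
    n = z.size(0)
    off_diag = sim - torch.eye(n, device=z.device)    # drop self-similarity
    return off_diag.sum() / (n * (n - 1))             # mean over distinct pairs
```

During prompt tuning, these terms would be added, suitably weighted, to the standard cross-entropy loss while only the prompt vectors receive gradients.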
Citation
Abhishek Basu, “Calibration-Aware Prompt Learning for Medical Vision-Language Models,” Master of Science thesis, Computer Vision, MBZUAI, 2025.
Keywords
Confidence Calibration, Prompt Learning, Uncertainty Estimation, Medical Vision-Language Models, Medical Image Analysis
