On Adversarial Robustness of Deep Learning-based Medical Imaging Models
Author
Hanif, Asif
Department
Computer Vision
Embargo End Date
2025-05-30
Type
Dissertation
Date
2025
Language
English
Abstract
Deep learning has revolutionized medical image analysis, yet its vulnerability to adversarial attacks raises concerns about reliability and safety in real-world clinical settings. Given the safety-critical nature of the medical domain, even subtle adversarial perturbations can lead to misdiagnoses with negative consequences. This thesis investigates the adversarial vulnerabilities of deep learning-based medical imaging models, aiming to enhance their robustness and ensure safer deployment in clinical practice.

First, a novel frequency-domain adversarial attack is introduced that is tailored for volumetric medical image segmentation. Unlike existing methods that simply apply 2D adversarial attacks to 3D data, the proposed attack leverages the unique characteristics of volumetric medical segmentation by operating in the frequency domain of volumetric data; it is imperceptible to human observers and achieves superior fooling rates. Moreover, a frequency-domain adversarial training scheme is proposed to enhance model robustness against such attacks.

The second contribution expands the scope of adversarial attacks by exploring both voxel- and frequency-domain perturbations to further understand the vulnerabilities of volumetric medical segmentation models. To this end, a multiplicative spectral noise attack is proposed that perturbs the frequency spectrum in a controlled manner. The proposed method improves fooling rates and maintains perceptual quality while being computationally efficient.

The third contribution explores the vulnerability of medical vision-language (VL) foundation models to backdoor attacks during prompt learning. While prompt tuning has gained traction for its efficiency in data- and compute-constrained medical applications, its security implications remain largely unexplored. We challenge the assumption that the minimal data and learnable-parameter requirements of prompt learning provide protection against backdoor attacks. Our work demonstrates that backdoor attacks can be effectively embedded in medical VL foundation models during prompt learning using a novel learnable, imperceptible noise trigger.

The final contribution of this thesis investigates the susceptibility of medical conversational VL foundation models to adversarial attacks, with a focus on cross-prompt transferability. While these VL models have shown promise in medical applications, their vulnerability to adversarial perturbations raises concerns. In particular, a single adversarial perturbation can manipulate model outputs across diverse prompts, posing risks in healthcare settings. This work introduces a novel spectral-domain attack that enhances cross-prompt transferability by optimizing adversarial perturbations against the model's learnable text context.
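To make the shared idea behind the first two contributions concrete, the sketch below shows how a bounded multiplicative perturbation of a 3D volume's frequency spectrum could be optimized to degrade a segmentation model. This is a minimal illustration under assumed PyTorch conventions, not the thesis's actual attack formulation; the volume, the Conv3d "model", and the cross-entropy objective are toy placeholders for a real volumetric segmentation pipeline.

```python
import torch

def spectral_perturb(volume, log_noise, eps=0.05):
    """Multiplicative perturbation of a 3D volume's frequency spectrum."""
    spectrum = torch.fft.fftn(volume, dim=(-3, -2, -1))
    scale = 1.0 + eps * torch.tanh(log_noise)       # per-frequency scaling in [1-eps, 1+eps]
    adv = torch.fft.ifftn(spectrum * scale, dim=(-3, -2, -1)).real
    return adv.clamp(volume.min(), volume.max())    # keep intensities in the original range

# --- toy stand-ins for a real volumetric segmentation pipeline ---
volume = torch.rand(32, 32, 32)                          # placeholder CT/MRI volume
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # placeholder segmentation net
target = torch.zeros(1, 32, 32, 32, dtype=torch.long)    # placeholder ground-truth mask

log_noise = torch.zeros_like(volume, requires_grad=True)
optimizer = torch.optim.Adam([log_noise], lr=1e-2)
for _ in range(50):
    adv = spectral_perturb(volume, log_noise)
    logits = model(adv[None, None])                 # add batch and channel dims
    loss = -torch.nn.functional.cross_entropy(logits, target)  # ascend the seg loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the noise multiplies the spectrum rather than adding to the voxels, its effect is spread smoothly across the whole volume, which is one intuition for why such perturbations can stay imperceptible while still changing model predictions.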
Citation
Asif Hanif, “On Adversarial Robustness of Deep Learning-based Medical Imaging Models,” Doctor of Philosophy thesis, Computer Vision, MBZUAI, 2025.
Keywords
Adversarial Attacks, Trustworthy AI, Medical Imaging Models, Vision-Language Models, Transferable Attacks
