FLEX: Framework for Learning Explainable Concept Alignment in Medical Diagnosis
Abzhanov, Arsen
Author
Department
Computer Vision
Embargo End Date
2025-05-30
Type
Thesis
Date
2025
Language
English
Abstract
Deep learning models for medical diagnosis often lack explainability, limiting their trust and adoption in clinical practice. Concept-based models improve interpretability by aligning disease labels with diagnostic criteria (e.g., lesion shape in a dermatoscopic image or calcification in an ultrasound scan), but their robustness is weak: minor perturbations in irrelevant image regions can degrade performance. To overcome this, we propose a multitask training framework built on a flow-based transformer that aligns visual features with diagnostic concepts. Our method processes image features for classification and concept learning, and noisy latent representations for concept-driven denoising. This enhances classification accuracy and robustness while enforcing alignment between diagnostic concepts and the visual input. As a result, our model improves both explainability and reliability, making it more suitable for real-world medical use.
Citation
Arsen Abzhanov, “FLEX: Framework for Learning Explainable Concept Alignment in Medical Diagnosis,” Master of Science thesis, Computer Vision, MBZUAI, 2025.
Keywords
Explainable Artificial Intelligence, Vision Language Model, Scalable Interpolant Transformer, Automated Medical Diagnosis, Concept-Driven Denoising
