DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences

Li, Xingjian
Zhao, Qiming
Bisht, Neelesh
Uddin, Mostofa Uddin
Kim, Jin Yu
Zhang, Bryan
Xu, Min
Department
Computer Vision
Type
Poster
Date
2025
Language
English
Abstract
In recent years, the interpretability of Deep Neural Networks (DNNs) has garnered significant attention, particularly due to their widespread deployment in critical domains such as healthcare, finance, and autonomous systems. To address the challenge of understanding how DNNs make decisions, Explainable AI (XAI) methods, such as saliency maps, have been developed to provide insights into the inner workings of these models. This paper introduces DiffCAM, a novel XAI method designed to overcome limitations in existing Class Activation Map (CAM)-based techniques, which often rely on decision boundary gradients to estimate feature importance. DiffCAM differentiates itself by considering the actual data distribution of the reference class, identifying feature importance based on how a target example differs from reference examples. This approach captures the most discriminative features without relying on decision boundaries or prediction results, making DiffCAM applicable to a broader range of models, including foundation models. Through extensive experiments, we demonstrate the superior performance and flexibility of DiffCAM in providing meaningful explanations across diverse datasets and scenarios.
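The abstract describes saliency as arising from how a target example's features differ from those of reference-class examples, rather than from decision-boundary gradients. A minimal sketch of that idea is given below; it is not the authors' implementation, and the choice of aggregating against the reference mean with a channel-wise norm is an assumption for illustration.

```python
import numpy as np

def diffcam_saliency(target_feats, reference_feats):
    """Hypothetical sketch of a difference-based saliency map.

    target_feats:    (C, H, W) feature map of the target example.
    reference_feats: (N, C, H, W) feature maps of reference-class examples.
    Returns a (H, W) saliency map normalized to [0, 1].
    """
    # Summarize the reference-class feature distribution by its mean
    # (an illustrative choice; the paper's actual statistic may differ).
    ref_mean = reference_feats.mean(axis=0)        # (C, H, W)
    # Per-location difference between target and reference features.
    diff = target_feats - ref_mean                 # (C, H, W)
    # Collapse the channel dimension into a spatial saliency map.
    saliency = np.linalg.norm(diff, axis=0)        # (H, W)
    # Min-max normalize for visualization.
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (rng + 1e-8)
```

Note that nothing here touches the model's prediction or decision boundary, which is the property the abstract highlights: the map depends only on feature differences against the reference data.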
Citation
“CVPR Poster DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences.” Accessed: Jun. 23, 2025. [Online]. Available: https://cvpr.thecvf.com/virtual/2025/poster/32489
Source
Proceedings of the Computer Vision and Pattern Recognition Conference
Conference
Computer Vision and Pattern Recognition Conference (CVPR), 2025
Publisher
Computer Vision Foundation