DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
Popovic, Dorde ; Sadeghi, Amin ; Yu, Ting ; Chawla, Sanjay ; Khalil, Issa M.
Department
Computer Science
Type
Conference proceeding
Date
2025
Language
English
Abstract
Backdoor attacks are among the most effective, practical, and stealthy attacks in deep learning. In this paper, we consider a practical scenario where a developer obtains a deep model from a third party and uses it as part of a safety-critical system. The developer wants to inspect the model for potential backdoors prior to system deployment. We find that most existing detection techniques make assumptions that do not apply to this scenario. In this paper, we present a novel framework for detecting backdoors under realistic restrictions. We generate candidate triggers by deductively searching over the space of possible triggers. We construct and optimize a smoothed version of the Attack Success Rate as our search objective. Starting from a broad class of template attacks and using only the forward pass of a deep model, we reverse engineer the backdoor attack. We conduct an extensive evaluation on a wide range of attacks, models, and datasets, with our technique performing almost perfectly across these settings.
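The smoothed search objective described in the abstract can be illustrated with a minimal sketch. The standard Attack Success Rate is the discrete fraction of triggered inputs that the model assigns to the attacker's target class; a common continuous surrogate scores a candidate trigger by the mean softmax probability of the target class instead, using only forward passes. The function names, the mask-and-pattern trigger parameterization, and the softmax surrogate below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smoothed_asr(model_forward, images, trigger_mask, trigger_pattern, target_class):
    """Score a candidate trigger with a continuous surrogate of the
    Attack Success Rate (illustrative sketch, not the paper's exact objective).

    model_forward  : callable mapping a batch of inputs to logits (black-box)
    images         : clean inputs, shape (n, d)
    trigger_mask   : values in [0, 1] marking where the trigger overwrites input
    trigger_pattern: the trigger content, same shape as one input
    target_class   : the attacker's intended output label
    """
    # Stamp the candidate trigger onto every clean input.
    triggered = images * (1 - trigger_mask) + trigger_pattern * trigger_mask
    # Only the forward pass is used; no gradients are required.
    logits = model_forward(triggered)
    # Numerically stable softmax over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Mean target-class probability: a smooth stand-in for the 0/1 success count.
    return float(probs[:, target_class].mean())
```

Because the surrogate is continuous in the trigger pattern, a search procedure can compare nearby candidate triggers even when none of them yet flips any hard prediction, which the discrete ASR cannot distinguish.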
Citation
D. Popovic, A. Sadeghi, T. Yu, S. Chawla, and I. Khalil, “DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data”, Available: www.usenix.org/conference/usenixsecurity25/presentation/popovic
Source
Proceedings of the 34th USENIX Security Symposium
Conference
34th USENIX Security Symposium, USENIX Security 2025
Publisher
USENIX Association
