
Self-Ensemble as Defense for Event-Based Adversarial Attack against Spiking Neural Networks

Li, Xinyu
Supervisor
Department
Machine Learning
Embargo End Date
2025-05-30
Type
Thesis
Date
2025
License
Language
English
Collections
Abstract
“The brain is a world consisting of a number of unexplored continents and great stretches of unknown territory.” Santiago Ramón y Cajal (1852–1934). In the rapidly evolving landscape of artificial intelligence, Spiking Neural Networks (SNNs) have emerged as a promising approach for efficient real-time processing in novel applications such as autonomous driving and robotics. Known in the literature as “the third generation of neural networks,” SNNs mimic the human brain by encoding and processing information through spiking patterns, mirroring the biological neural activity in which neurons communicate via electrical impulses. By leveraging the temporal dynamics of neural spikes, SNNs can perform complex computations with remarkable energy efficiency, making them particularly well-suited for edge computing environments where computational resources are at a premium. These settings demand not only high performance but also robust security measures against potential threats. Like artificial neural networks (ANNs), SNNs are not immune to adversarial attacks, which perturb input data within a range imperceptible to humans in order to deceive models into making incorrect predictions. Traditional defense mechanisms often require additional raw training data, adversarial training, or modifications to the network architecture, complicating their implementation. There is therefore a critical need for innovative defense strategies that can enhance the robustness of SNNs without imposing significant overhead. This study introduces a novel “self-ensemble” approach that leverages multiple latency settings during the inference phase to defend against event-based adversarial attacks on SNNs. The method begins with training a single SNN model on an event dataset captured by dynamic vision sensors (DVS), which generate asynchronous events in response to changes in brightness.
The training set is built from multiple frame aggregators rather than one, exploiting the useful property that an SNN with fixed weights remains compatible with different latencies. During inference, the event data is converted into frame data using various temporal intervals, again yielding multiple sets of frames with different numbers of frames per unit time. The trained SNN processes each set independently, producing multiple outputs. The self-ensemble aggregates these diverse outputs, effectively mitigating the impact of any single erroneous prediction caused by an event-based attack. Since current adversarial perturbations target only one particular frame representation while leaving the others relatively unharmed, the ensemble ensures overall system robustness. A key advantage of this technique is its simplicity and efficiency: it requires no additional event data or adversarial samples for enhanced training. Instead, it exploits the inherent flexibility of SNNs in handling multiple latencies, improving robustness without compromising performance. The method naturally aligns with the asynchronous processing capabilities of SNNs, ensuring seamless integration. Empirical results demonstrate the effectiveness of our self-ensemble approach without performance loss. Existing event-based adversarial attack techniques, which target specific frame numbers, fail to disrupt the network’s ability to accurately interpret the event stream. Even when one output is significantly affected, the majority consensus among the ensemble remains accurate, thereby maintaining overall system integrity. This resilience underscores the potential of our method as a practical defense against sophisticated adversarial threats. By exploiting the unique characteristics of SNNs and their ability to process data at multiple latencies, our work paves the way for more robust neuromorphic computing systems.
Future directions include exploring scalability across different datasets and application domains, as well as evaluating effectiveness against other evolving adversarial strategies. In conclusion, our self-ensemble approach provides a robust and efficient defense for SNNs against attacks on event data, safeguarding the integrity of model deployment on resource-constrained systems.
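The multi-latency self-ensemble pipeline summarized in the abstract can be sketched as follows. This is a hypothetical illustration, not the thesis code: `events_to_frames`, `toy_model`, and the example latencies (4, 8, and 16 frames) are assumptions for demonstration, and a trivial stand-in classifier replaces an actual trained SNN.

```python
import numpy as np

def events_to_frames(events, num_frames, height=4, width=4):
    """Aggregate (t, x, y, polarity) DVS events into `num_frames` count frames,
    binning timestamps uniformly over the recording duration."""
    frames = np.zeros((num_frames, height, width))
    t = events[:, 0]
    t_min, t_max = t.min(), t.max()
    bins = np.minimum(
        ((t - t_min) / (t_max - t_min + 1e-9) * num_frames).astype(int),
        num_frames - 1,
    )
    for (ts, x, y, p), b in zip(events, bins):
        frames[b, int(y), int(x)] += 1 if p > 0 else -1
    return frames

def self_ensemble_predict(model, events, latencies=(4, 8, 16)):
    """Run the SAME model on frame stacks built at several latencies,
    then majority-vote over the per-latency predictions."""
    votes = [model(events_to_frames(events, T)) for T in latencies]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy stand-in for a trained SNN: classifies by the sign of total activity.
def toy_model(frames):
    return int(frames.sum() > 0)

rng = np.random.default_rng(0)
events = np.column_stack([
    np.sort(rng.uniform(0, 1, 200)),  # timestamps
    rng.integers(0, 4, 200),          # x coordinates
    rng.integers(0, 4, 200),          # y coordinates
    np.ones(200),                     # polarity (all ON events)
])
print(self_ensemble_predict(toy_model, events))  # prints 1
```

Because each latency yields an independent frame representation, a perturbation crafted against one frame count leaves the other votes intact, and the majority decision survives.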
Citation
Xinyu Li, “Self-Ensemble as Defense for Event-Based Adversarial Attack against Spiking Neural Networks,” Master of Science thesis, Machine Learning, MBZUAI, 2025.
Source
Conference
Keywords
Spiking Neural Networks, Adversarial Attacks and Defenses, Event Data, Ensemble
Subjects
Publisher
DOI
Full-text link