Adversarial robustness of STDP-trained spiking neural networks

University essay from KTH/Skolan för elektroteknik och datavetenskap (EECS)

Authors: Karl Lindblad; Axel Nilsson [2023]


Abstract: Adversarial attacks on machine learning models are designed to elicit the wrong behavior from the model. One such attack on image classifiers uses maliciously crafted inputs that, to the human eye, look untampered with but have been carefully altered to cause misclassification. Previous research has shown that spiking neural networks (SNNs) trained with backpropagation can be more robust against these attacks than the more commonly used artificial neural networks (ANNs). In this thesis we conducted, to the best of our knowledge, novel research on adversarial attacks against SNNs trained with spike-timing-dependent plasticity (STDP), attacking the networks as well as analyzing their adversarial robustness compared to other neural networks. One reason for attacking STDP-trained models is that STDP is more biologically plausible than other learning techniques for SNNs. The method used in this thesis is to implement multiple machine learning models based on different approaches and to compare their robustness with each other. The models consisted of two SNNs, trained with STDP and backpropagation through time (BPTT) respectively, and one ANN. The results show that it is possible to fool STDP-trained SNNs with adversarial attacks, and they also indicate that the SNN trained with STDP is the most robust of these networks.
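
The abstract does not name the specific attack used in the thesis; as a point of reference only, the sketch below shows the Fast Gradient Sign Method (FGSM), a standard gradient-based attack on image classifiers, assuming a differentiable PyTorch classifier `model` and pixel values in [0, 1]. It is an illustration of the attack family discussed, not the thesis's implementation.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    epsilon controls the perturbation size (e.g. 8/255 for images in [0, 1]).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Note that an attack of this form relies on gradients, which SNNs do not expose directly; attacking them typically requires a surrogate-gradient or transfer-based approach, which is part of what makes the STDP setting interesting.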
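
Likewise, the abstract does not detail the STDP rule used for training. For orientation, here is a minimal sketch of the classic pair-based STDP weight update: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise, with exponentially decaying influence of the spike-time difference. All parameter names and default values (a_plus, tau_plus, etc.) are illustrative, not taken from the thesis.

import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP update for a single synapse (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre fires before post: long-term potentiation.
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        # Post fires before (or with) pre: long-term depression.
        dw = -a_minus * np.exp(dt / tau_minus)
    # Keep the weight within its allowed range.
    return np.clip(w + dw, w_min, w_max)

Because this rule is local and unsupervised, driven only by relative spike timing, it is considered more biologically plausible than backpropagation-based training, which is the motivation the abstract gives for studying STDP-trained networks.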
