The Resilience of Deep Learning Intrusion Detection Systems for Automotive Networks: The effect of adversarial samples and transferability on Deep Learning Intrusion Detection Systems for Controller Area Networks

University essay from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: This thesis covers the topic of cyber security in vehicles. Modern vehicles contain many computers which communicate over a controller area network (CAN). This network has many vulnerabilities which can be leveraged by attackers. To combat these attackers, intrusion detection systems have been implemented. The latest research has mostly focused on deep learning techniques for these intrusion detection systems. However, these deep learning techniques are not foolproof and possess their own security vulnerabilities. One such vulnerability comes in the form of adversarial samples: attack samples that are subtly manipulated to evade detection by the intrusion detection system. The aim of this thesis is to show that the known vulnerabilities of deep learning techniques are also present in the current state-of-the-art intrusion detection systems. The presence of these vulnerabilities shows that deep learning based systems are still too immature to be deployed in actual vehicles: if an attacker can exploit these weaknesses to circumvent the intrusion detection system, they can still control many parts of the vehicle, such as the windows, the brakes and even the engine. Current research on deep learning weaknesses has mainly focused on the image recognition domain; relatively little research has investigated the impact of these weaknesses on intrusion detection, especially in vehicle networks. To demonstrate these weaknesses, two baseline deep learning intrusion detection systems were first created, and two state-of-the-art systems from recent research papers were recreated. Adversarial samples were then generated with the fast gradient sign method (FGSM) against one of the baseline systems and used to measure the drop in performance of all systems. The thesis shows that the adversarial samples negatively impact the two baseline models and one state-of-the-art model, whose performance drops by as much as 60% in F1-score. Moreover, some of the adversarial samples need as few as 2 bits to be changed in order to evade the intrusion detection systems.
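For context, the fast gradient sign method perturbs an input in the direction of the sign of the loss gradient: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The sketch below is a minimal illustration of this general technique, not the thesis's own implementation; it assumes a PyTorch-style model, and the names model, loss_fn and epsilon are placeholders.

    import torch

    def fgsm_sample(model, loss_fn, x, y, epsilon):
        # Compute the gradient of the loss with respect to the input x.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # Step by epsilon in the direction that increases the loss;
        # sign() makes the perturbation magnitude uniform across features.
        return (x + epsilon * x.grad.sign()).detach()

For CAN traffic, such continuous perturbations would additionally need to be constrained to valid, discrete frame contents; this sketch omits that step.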
