Attack Strategies in Federated Learning for Regression Models: A Comparative Analysis with Classification Models

University essay from Umeå universitet/Institutionen för tillämpad fysik och elektronik

Abstract: Federated Learning (FL) has emerged as a promising approach for decentralized model training across multiple devices while still preserving data privacy. Previous research has predominantly concentrated on classification tasks in FL settings, leaving a noticeable gap in FL research specifically for regression models. This thesis addresses this gap by examining the vulnerabilities of Deep Neural Network (DNN) regression models within FL, with a specific emphasis on adversarial attacks. The primary objective is to examine the impact of two distinct adversarial attacks, output-flipping and random weights attacks, on model performance. The investigation involves training FL models on three distinct data sets, engaging eight clients in the training process. The study varies the number of malicious clients to understand how adversarial attacks influence model performance. Results indicate that the output-flipping attack significantly degrades model performance when at least two malicious clients are involved, while the random weights attack causes a substantial decrease with just one malicious client out of the eight. It is crucial to note that this study operates on a theoretical level and does not explicitly account for real-world conditions such as non-identically distributed (non-IID) data, extensive data sets, or a larger number of clients. In conclusion, this study contributes to the understanding of adversarial attacks in FL, specifically focusing on DNN regression models. The results highlight the importance of defending FL models against adversarial attacks and emphasize the significance of future research in this domain.
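To make the two attack strategies concrete, the sketch below simulates one form of each in an unweighted FedAvg setup with eight clients. It is not the thesis's implementation: the DNN regression model is replaced by a simple linear model trained with gradient descent, output flipping is approximated by negating a malicious client's target values, and the random weights attack replaces a client's update with noise drawn from a normal distribution. All function names and hyperparameters here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.01, epochs=5):
    # Honest client: local full-batch gradient descent on a linear regression model.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def output_flipping_update(weights, X, y, **kwargs):
    # Malicious client: trains on negated targets (one possible way to "flip" regression outputs).
    return local_update(weights, X, -y, **kwargs)

def random_weights_update(weights, X, y, scale=1.0, **kwargs):
    # Malicious client: ignores its data and returns random weights.
    return rng.normal(0.0, scale, size=weights.shape)

def fedavg_round(global_w, clients, attacks):
    # One FedAvg round; `attacks` maps a client index to its attack function.
    updates = []
    for i, (X, y) in enumerate(clients):
        update_fn = attacks.get(i, local_update)
        updates.append(update_fn(global_w, X, y))
    return np.mean(updates, axis=0)  # unweighted average, assuming equal-sized shards

# Toy experiment: 8 clients whose data follow y = X @ [2, -3] + noise.
true_w = np.array([2.0, -3.0])
clients = []
for _ in range(8):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0.0, 0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
attacks = {0: random_weights_update}  # e.g. one malicious client out of eight
for _ in range(50):
    global_w = fedavg_round(global_w, clients, attacks)

print("learned weights:", global_w, "vs. true weights:", true_w)

Changing the attacks mapping, for example assigning output_flipping_update to two client indices instead, mirrors in miniature the kind of comparison the thesis makes between attack types and numbers of malicious clients.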
