Differentially Private Federated Learning
Abstract: Federated Learning is a way of training neural network models in a decentralized manner: several participating devices, each holding the same model architecture, independently learn a model on their local data partition. These local models are then aggregated in the parameter domain, achieving performance comparable to training the model centrally. Differential Privacy, on the other hand, is a well-established notion of data privacy preservation that provides formal privacy guarantees grounded in rigorous mathematical and statistical properties. The majority of the current literature at the intersection of these two fields considers privacy only from a client's point of view (i.e., the presence or absence of a client during decentralized training should not affect the distribution over the parameters of the final, central model). It disregards privacy at the level of a single training data point (i.e., even if an adversary has partial, or full, access to the remaining training data points, they should be severely limited in inferring sensitive information about that single data point, as long as it is bounded by a differential privacy guarantee). In this thesis, we propose a method for end-to-end privacy guarantees with minimal loss of utility. We show, both empirically and theoretically, that privacy bounds at the data-point level can be achieved within the proposed framework. As a consequence, satisfactory client-level privacy bounds can be realized without making the system noisier overall, while obtaining state-of-the-art results.
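To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the thesis's actual method) of how data-point-level privacy is typically obtained inside each client via DP-SGD-style per-example gradient clipping and Gaussian noise, followed by federated averaging of the resulting client models in the parameter domain. All function names, the toy linear-regression loss, and the hyperparameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def dp_local_update(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step on a client's local data (illustrative sketch).

    Per-example gradients of a toy squared loss are clipped to norm `clip`
    (bounding any single data point's influence) and Gaussian noise scaled
    by `noise_mult * clip` is added before averaging, which is the standard
    mechanism behind data-point-level differential privacy guarantees.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for xi, yi in zip(X, y):
        g = 2.0 * (xi @ w - yi) * xi                 # per-example gradient
        norm = np.linalg.norm(g)
        grads.append(g * min(1.0, clip / max(norm, 1e-12)))  # clip influence
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        0.0, noise_mult * clip, size=w.shape)        # Gaussian mechanism
    return w - lr * noisy_sum / len(X)

def fed_avg(client_weights):
    """Aggregate client models by unweighted parameter averaging (FedAvg)."""
    return np.mean(client_weights, axis=0)
```

As a usage example, each simulated client would run `dp_local_update` on its own partition for a few steps, after which the server calls `fed_avg` on the collected weight vectors; the clipping bound is what turns the aggregate into a quantity with a provable per-data-point sensitivity.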