A Game-theoretical Framework for Byzantine-Robust Federated Learning
Abstract: The distributed nature of Federated Learning (FL) creates security vulnerabilities, including training-time attacks. It has recently been shown that well-known Byzantine-resilient aggregation schemes are in fact vulnerable to an informed adversary with access to the aggregation scheme and the updates sent by clients, which makes establishing a successful defense against such an adversary a significant challenge. To the best of our knowledge, most existing aggregators resist only a single attack or a subset of attacks, and none of them extends to defend against new attacks. We frame robust distributed learning as a game between a server and an adversary that tailors training-time attacks, and we introduce RobustTailor, a simulation-based algorithm that prevents the adversary from being omniscient. RobustTailor plays a mixed strategy over a pool of aggregation rules and can incorporate any deterministic Byzantine-resilient aggregation scheme. Under a challenging setting with information asymmetry between the two players, we show that our method enjoys theoretical guarantees in the form of regret bounds. RobustTailor preserves almost the same privacy guarantees as standard FL and robust aggregation schemes, and simulation significantly improves robustness to training-time attacks. Empirical results under challenging attacks validate our theory and show that RobustTailor performs comparably to an upper bound in which the server has perfect knowledge of all honest clients throughout training.
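The abstract does not spell out RobustTailor's internals, but a mixed strategy over deterministic aggregation rules with regret bounds suggests a multiplicative-weights (Hedge) style selection among robust aggregators. Below is a minimal, hypothetical Python sketch under that assumption: the aggregator pool (coordinate-wise median, trimmed mean, Krum), the simulated loss signal, and the `hedge_round` update are all illustrative stand-ins, not the paper's actual method.

```python
import numpy as np

# A hypothetical pool of deterministic Byzantine-resilient aggregators.
def coordinate_median(updates):
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim=2):
    s = np.sort(updates, axis=0)
    return s[trim:-trim].mean(axis=0)

def krum(updates, f=2):
    # Krum: select the update closest to its n - f - 2 nearest neighbors.
    n = len(updates)
    d = np.linalg.norm(updates[:, None] - updates[None, :], axis=-1) ** 2
    scores = np.sort(d, axis=1)[:, 1:n - f - 1].sum(axis=1)
    return updates[np.argmin(scores)]

AGGREGATORS = [coordinate_median, trimmed_mean, krum]

def hedge_round(weights, losses, eta=0.1):
    # Multiplicative-weights update: down-weight aggregators whose
    # simulated loss was high this round, then renormalize.
    weights = weights * np.exp(-eta * losses)
    return weights / weights.sum()

rng = np.random.default_rng(0)
weights = np.ones(len(AGGREGATORS)) / len(AGGREGATORS)

for t in range(100):
    # Toy round: honest updates cluster near the true gradient,
    # Byzantine updates point elsewhere.
    honest = rng.normal(loc=1.0, scale=0.1, size=(8, 10))
    byzantine = rng.normal(loc=-5.0, scale=0.1, size=(2, 10))
    updates = np.vstack([honest, byzantine])

    # Stand-in for the simulation step: score each aggregator by the
    # distance of its output from the honest mean (the paper's actual
    # loss estimate may differ).
    target = honest.mean(axis=0)
    losses = np.array([np.linalg.norm(agg(updates) - target)
                       for agg in AGGREGATORS])

    # Sample an aggregator from the mixed strategy, apply it, then
    # update the strategy with the observed losses.
    choice = rng.choice(len(AGGREGATORS), p=weights)
    aggregated = AGGREGATORS[choice](updates)
    weights = hedge_round(weights, losses)
```

Because the sampled aggregator is randomized each round, an informed adversary cannot tailor its attack to a single fixed aggregation rule, which is the intuition the abstract attributes to RobustTailor.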