Chapter 5: Adversarial robustness in federated learning

Chulin Xie; Xiaoyang Wang    University of Illinois at Urbana-Champaign, Urbana, IL, United States

Abstract

While federated learning (FL) enables training a shared machine learning model over scattered, private data held by diverse clients, its distributed nature increases its vulnerability because the clients may not be trustworthy. This chapter summarizes and provides a taxonomy of common attacks and defenses in federated learning. The attack section covers methods for corrupting local models via data poisoning and model poisoning. The defenses are summarized accordingly, including robust statistics-based methods and smoothing-based methods, and we discuss their effectiveness against multiple ...
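As a minimal illustration of the robust statistics-based defenses mentioned above, the sketch below uses coordinate-wise median aggregation, one common robust aggregation rule, in place of the usual averaging step. The `coordinate_wise_median` helper and the toy client updates are illustrative constructs, not code from this chapter:

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Aggregate client updates by taking the median of each parameter
    coordinate, limiting the influence of any single outlier client."""
    stacked = np.stack(client_updates)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)

# Three honest clients send updates near the true value; one malicious
# client sends an extreme update to poison the aggregate (model poisoning).
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
malicious = [np.array([100.0, -100.0])]
updates = honest + malicious

mean_agg = np.mean(np.stack(updates), axis=0)  # skewed badly by the attacker
median_agg = coordinate_wise_median(updates)   # stays close to honest values
```

With a plain mean, the single malicious update drags the aggregate far from the honest clients' values, while the coordinate-wise median remains close to them, which is the basic intuition behind robust statistics-based aggregation.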
