Chapter 10: Federated Learning and Edge Devices
When discussing DNN training, we mainly focus on high-performance machines with accelerators such as GPUs, or on traditional data centers. Federated learning takes a different approach: it trains models directly on edge devices, which typically have far less computational power than GPUs.
Before we discuss anything further, we want to list our assumptions:
- We assume the computation power of mobile chips is much less than that of traditional hardware accelerators such as GPUs/TPUs.
- We assume mobile devices often have a limited computation budget due to limited battery power.
- We assume the model training/serving platform for a mobile device will be different from the model training/serving platform used on traditional accelerators in data centers.
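To make the idea concrete, the following is a minimal, framework-free sketch of federated averaging (FedAvg), the canonical aggregation rule behind much of federated learning. The function names (`local_update`, `fed_avg`) and the NumPy-simulated clients are illustrative assumptions, not code from this book; real deployments would use a mobile training runtime on each device. The small number of local epochs reflects the limited compute and battery budget assumed above.

```python
# A minimal sketch of federated averaging (FedAvg) with simulated clients.
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, features, labels, lr=0.01, epochs=3):
    """Run a few epochs of gradient descent on one client's private data.

    Keeping `epochs` small reflects the limited compute/battery budget
    assumed for edge devices.
    """
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w


def fed_avg(client_weights, client_sizes):
    """Aggregate client models by a data-size-weighted average (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))


# Simulate a small population of edge devices, each with private data.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    n = int(rng.integers(20, 50))
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for round_idx in range(20):
    # Each round: broadcast the global model, train locally, then average.
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    global_w = fed_avg(updates, sizes)

print("learned weights:", global_w)  # should approach [2.0, -1.0]
```

The key point of the sketch is that raw data never leaves a client; only model weights are sent back for averaging, with each client's contribution weighted by how much data it holds.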