So far, we have seen our deep learning models running on the desktop and in the cloud. While there are definite upsides to such a setup, it may not be ideal for all scenarios. In this chapter, we explore making predictions using deep learning models on mobile devices.
Bringing the computation onto the user's device, rather than relying on a distant server, can be advantageous for several reasons:
Privacy: For the user, local computation preserves privacy: because the data never leaves the device, it cannot be mined externally for personal information. For the developer, this means fewer headaches when dealing with Personally Identifiable Information (PII). With strict privacy regulations such as the European Union's General Data Protection Regulation (GDPR) now in force, this becomes even more important.
24/7 availability and reduced cloud costs: Sending less data to the cloud means lower compute costs for the developer, leading to monetary savings. It also reduces scaling costs when your app gains traction and builds a large user base. For users, computation on the edge is helpful too, as they don't have to worry about data plan costs. ...