Chapter 6. Maximizing Speed and Performance of TensorFlow: A Handy Checklist
Life is all about making do with what we have, and optimization is the name of the game.
It’s not about having everything—it’s about using your resources wisely. Maybe we really want to buy that Ferrari, but our budget allows for a Toyota. You know what, though? With the right kind of performance tuning, we can make that bad boy race in NASCAR!
Let’s look at this in terms of the deep learning world. Google, with its engineering might and TPU pods capable of boiling the ocean, set a speed record by training ImageNet in about 30 minutes! And yet, just a few months later, a ragtag team of three researchers (Andrew Shaw, Yaroslav Bulatov, and Jeremy Howard), spending $40 on a public cloud, trained ImageNet in only 18 minutes!
The lesson we can draw from these examples is that the amount of resources you have is not nearly as important as using them to their maximum potential. It’s all about doing more with less. In that spirit, this chapter serves as a handy checklist of performance optimizations for every stage of the deep learning pipeline, one that will be useful throughout the book. Specifically, we discuss optimizations related to data preparation, data reading, data augmentation, training, and, finally, inference.
And the story starts and ends with two words...
GPU Starvation
A question commonly asked by AI practitioners is, “Why is my training so slow?” More often than not, the answer is GPU starvation: the expensive accelerator sits idle while it waits for the CPU to read, decode, and preprocess the next batch of data. The GPU can only be as fast as the pipeline that feeds it.
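To keep the GPU fed, TensorFlow’s tf.data API can prepare upcoming batches on the CPU while the GPU trains on the current one. The following is a minimal sketch of that idea, not the book’s own code: the train-*.tfrecord file pattern, the feature keys, and the parse_fn helper are assumptions made for illustration, and tf.data.AUTOTUNE assumes TensorFlow 2.4 or later (older versions expose it as tf.data.experimental.AUTOTUNE).

    import tensorflow as tf

    def parse_fn(serialized_example):
        # Hypothetical parser: decode one TFRecord into (image, label).
        # The "image"/"label" feature keys are assumptions for this sketch.
        features = tf.io.parse_single_example(
            serialized_example,
            {
                "image": tf.io.FixedLenFeature([], tf.string),
                "label": tf.io.FixedLenFeature([], tf.int64),
            },
        )
        image = tf.io.decode_jpeg(features["image"], channels=3)
        image = tf.image.resize(image, [224, 224]) / 255.0
        return image, features["label"]

    dataset = (
        # Hypothetical file pattern; substitute your own training shards.
        tf.data.TFRecordDataset(tf.io.gfile.glob("train-*.tfrecord"))
        # Parse records on multiple CPU cores in parallel.
        .map(parse_fn, num_parallel_calls=tf.data.AUTOTUNE)
        .shuffle(buffer_size=10_000)
        .batch(64)
        # Prepare the next batch(es) while the GPU works on the current one.
        .prefetch(tf.data.AUTOTUNE)
    )

A pipeline built this way can be passed straight to model.fit(dataset, ...); the prefetch step at the end is what overlaps CPU-side preparation with GPU-side computation, which is the most common first remedy for a starved GPU.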