Chapter 7: Model Building

In this chapter, we will discuss data parallel and model parallel strategies for training large neural networks. We will then cover key modeling concepts such as gradient descent, learning rate, batch size, and epoch, and you will learn what happens when you change these hyperparameters (batch size, learning rate) while training a neural network.
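To make these hyperparameters concrete, here is a minimal sketch (my own illustration, not from the book) of mini-batch gradient descent on a simple linear regression problem, showing where the learning rate, batch size, and epoch count each come into play:

```python
import numpy as np

# Synthetic data: y = 3x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.1, size=256)

w, b = 0.0, 0.0
learning_rate = 0.1   # step size applied to each weight update
batch_size = 32       # number of examples used per gradient estimate
epochs = 20           # full passes over the training set

for epoch in range(epochs):
    idx = rng.permutation(len(X))              # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch, 0], y[batch]
        err = (w * xb + b) - yb
        # Gradients of mean squared error with respect to w and b
        grad_w = 2.0 * np.mean(err * xb)
        grad_b = 2.0 * np.mean(err)
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
```

A larger batch size gives a less noisy gradient estimate per step; a larger learning rate takes bigger steps (and can overshoot); more epochs mean more passes over the data.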

We will then discuss transfer learning and how pretrained models can kick-start training when you have a limited dataset.
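The core idea can be sketched in a few lines. In this hypothetical example (names and setup are mine), a "pretrained" feature extractor is frozen, with a fixed random projection standing in for real pretrained weights, and only a new output head is trained on a small labeled dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pretrained" layer: its weights are never updated during training.
W_frozen = rng.normal(size=(4, 8))

def features(x):
    return np.tanh(x @ W_frozen)   # frozen feature extractor

# Small labeled target dataset (the limited-data scenario).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New trainable head: logistic regression on top of the frozen features.
w_head = np.zeros(8)
b_head = 0.0
for _ in range(500):
    z = features(X) @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid
    grad = p - y                       # log-loss gradient
    w_head -= 0.1 * features(X).T @ grad / len(X)
    b_head -= 0.1 * grad.mean()

acc = np.mean(((features(X) @ w_head + b_head) > 0) == (y == 1))
```

In practice the frozen layers would come from a model pretrained on a large dataset (for example, an ImageNet-trained backbone), and only the new head, and perhaps a few top layers, would be fine-tuned.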

Next, we will cover semi-supervised learning and when to use it, along with data augmentation techniques and how they can be used in an ML pipeline. Last, we will cover key concepts such as bias and variance, discuss how they can lead to underfit and overfit models, and cover strategies for underfit models and ...
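As a taste of data augmentation, here is a small sketch (my own illustration, using made-up helper names) of two simple image augmentations, horizontal flipping and additive noise, that expand a training set without collecting new labels:

```python
import numpy as np

def horizontal_flip(img):
    return img[:, ::-1]                # reverse the column order

def add_noise(img, scale=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    noisy = img + rng.normal(scale=scale, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)    # keep pixel values in [0, 1]

def augment(batch):
    """Return each original image plus one flipped and one noised copy."""
    out = []
    for img in batch:
        out.extend([img, horizontal_flip(img), add_noise(img)])
    return np.stack(out)

# A batch of four 8x8 "images" becomes twelve training examples.
batch = np.random.default_rng(2).uniform(size=(4, 8, 8))
augmented = augment(batch)
```

Because the label of a flipped or lightly noised image is usually unchanged, augmentation effectively multiplies the training data, which is one common strategy against overfitting.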
