L2 and L1 regularization

The first technique we will look at for creating a more robust model is L1 or L2 regularization. These are by far the most common regularization methods. The basic idea is that, during training, we impose a constraint on the values of the model weights by penalizing either the L1 or the L2 norm of those weights.

We do this by adding an extra term to whatever loss function we are using. For L1 regularization, the term we add is $\lambda \sum_i |w_i|$, and for L2 regularization, the term we add is $\frac{\lambda}{2} \sum_i w_i^2$, where $w_i$ are the model weights and $\lambda$ controls the strength of the penalty. In the preceding ...
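To make this concrete, here is a minimal plain-Python sketch of the two penalty terms being added to a loss value. The function names and the `lam` value are illustrative choices, not code from the book.

```python
# Illustrative sketch: L1 and L2 penalty terms added to a data loss.
# `lam` is the regularization strength (lambda); its value here is arbitrary.

def l1_penalty(weights, lam):
    """L1 term: lam * sum of absolute weight values."""
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    """L2 term: (lam / 2) * sum of squared weight values."""
    return (lam / 2) * sum(w * w for w in weights)

weights = [0.5, -1.0, 2.0]   # toy weight values
data_loss = 0.25             # loss from the data term alone (illustrative)

total_loss_l1 = data_loss + l1_penalty(weights, lam=0.01)
total_loss_l2 = data_loss + l2_penalty(weights, lam=0.01)
```

Minimizing the combined loss then pushes the optimizer to keep the weights small as well as to fit the data; the L1 term tends to drive weights exactly to zero, while the L2 term shrinks them smoothly.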
