Chapter 9. PyTorch in the Wild

For our final chapter, we’ll look at how PyTorch is used by other people and companies. You’ll also learn some new techniques along the way, including resizing pictures, generating text, and creating images that can fool neural networks. In a slight change from earlier chapters, we’ll be concentrating on how to get up and running with existing libraries rather than starting from scratch in PyTorch. I’m hoping that this will be a springboard for further exploration.

Let’s start by examining some of the latest approaches for squeezing the most out of your data.

Data Augmentation: Mixed and Smoothed

Way back in Chapter 4, we looked at various ways of augmenting data to help reduce a model's tendency to overfit the training dataset. The ability to do more with less data is naturally an area of intense deep learning research, and in this section we'll look at two increasingly popular ways to squeeze every last drop of signal from your data. Both approaches will also require us to change how we calculate our loss function, so they'll be a good test of the more flexible training loop that we just created.

mixup

mixup is an intriguing augmentation technique that arises from looking askew at what we want our model to do. Our normal understanding of a model is that we send it an image like the one in Figure 9-1 and expect the model to return a result identifying the image as a fox.

Figure 9-1. A fox

But as you know, we don't get just that from the model; we get a tensor of probabilities across all the classes the model knows about, of which "fox" is (we hope) merely the largest.
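To give you a feel for where this is heading, here's a minimal sketch of the general mixup recipe (the helper name mixup_batch and the choice of alpha here are illustrative, not fixed by the technique): we blend pairs of images with a coefficient λ drawn from a Beta distribution, and blend the loss for the two underlying labels with that same λ.

import numpy as np
import torch

def mixup_batch(x, y, alpha=0.4):
    # λ ~ Beta(α, α); α below 1 favors mixes that stay close to one image
    lam = np.random.beta(alpha, alpha)
    # Pair each image in the batch with a randomly chosen partner
    index = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[index]
    # Return both sets of labels plus λ so the loss can be blended too
    return mixed_x, y, y[index], lam

# In the training loop, the loss blends the two targets with the same λ:
#   output = model(mixed_x)
#   loss = lam * criterion(output, y_a) + (1 - lam) * criterion(output, y_b)

Because the loss is now a weighted combination of two per-target losses rather than a single call against one set of labels, the standard one-line loss calculation no longer fits, which is exactly why the more flexible training loop pays off here.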
