2 Fully connected networks
This chapter covers
- Implementing a training loop in PyTorch
- Changing loss functions for regression and classification problems
- Implementing and training a fully connected network
- Training faster using smaller batches of data
Now that we understand how PyTorch gives us tensors to represent our data and parameters, we can progress to building our first neural networks. We start by showing how learning happens in PyTorch. As described in chapter 1, learning is based on the principle of optimization: we compute a loss measuring how well we are doing and use gradients to minimize that loss. This is how the parameters of a network are “learned” from the data, and it is also the basis of many different machine learning ...
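To make the loss-and-gradients idea concrete before we build a full network, here is a minimal sketch of that optimization loop in PyTorch. The toy data, model, learning rate, and epoch count are illustrative choices, not values from the book: a single linear layer stands in for a network, and a mean squared error loss is minimized by gradient descent.

```python
import torch

# Toy regression data: learn y = 2x + 1 (illustrative, not from the book)
x = torch.linspace(-1, 1, 50).unsqueeze(1)
y = 2 * x + 1

model = torch.nn.Linear(1, 1)          # a one-parameter "network"
loss_fn = torch.nn.MSELoss()           # loss: how badly are we doing?
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()              # clear gradients from the last step
    loss = loss_fn(model(x), y)        # compute the loss on the data
    loss.backward()                    # backpropagate: fill in gradients
    optimizer.step()                   # nudge parameters to reduce the loss
```

Every training loop in this chapter follows this same four-step pattern: zero the gradients, compute the loss, backpropagate, and step the optimizer.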