Chapter 17. A Neural Net from the Foundations
This chapter begins a journey where we will dig deep into the internals of the models we used in the previous chapters. We will be covering many of the same things we’ve seen before, but this time around we’ll be looking much more closely at the implementation details, and much less closely at the practical issues of how and why things are as they are.
We will build everything from scratch, using only basic indexing into a tensor. We'll write a neural net from the ground up, and then implement backpropagation manually so we know exactly what's happening in PyTorch when we call loss.backward. We'll also see how to extend PyTorch with custom autograd functions that allow us to specify our own forward and backward computations.
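As a taste of what's to come, here is a minimal sketch of such a function using torch.autograd.Function; the Square example here is purely illustrative, not the function we'll build in this chapter:

    import torch

    class Square(torch.autograd.Function):
        "Illustrative custom autograd function: squares its input."
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)      # stash x for the backward pass
            return x * x

        @staticmethod
        def backward(ctx, grad_output):
            x, = ctx.saved_tensors
            return grad_output * 2 * x    # chain rule: d(x^2)/dx = 2x

    x = torch.tensor(3.0, requires_grad=True)
    Square.apply(x).backward()
    print(x.grad)                         # tensor(6.)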
Building a Neural Net Layer from Scratch
Let’s start by refreshing our understanding of how matrix multiplication is used in a basic neural network. Since we’re building everything up from scratch, we’ll use nothing but plain Python initially (except for indexing into PyTorch tensors), and then replace the plain Python with PyTorch functionality after we’ve seen how to create it.
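Concretely, the plain-Python version of a matrix multiplication is nothing but three nested loops plus indexing. Here is a rough sketch of where we're headed (torch is used only to allocate the tensors):

    import torch

    def matmul(a, b):
        "Multiply two matrices using only loops and tensor indexing."
        ar, ac = a.shape              # rows and columns of a
        br, bc = b.shape
        assert ac == br, "inner dimensions must match"
        c = torch.zeros(ar, bc)
        for i in range(ar):
            for j in range(bc):
                for k in range(ac):
                    c[i, j] += a[i, k] * b[k, j]
        return c

    a = torch.randn(2, 3); b = torch.randn(3, 4)
    assert torch.allclose(matmul(a, b), a @ b, atol=1e-5)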
Modeling a Neuron
A neuron receives a given number of inputs and has an internal weight for each of them. It sums those weighted inputs and adds an internal bias to produce an output. In math, this can be written as

$$\mathrm{out} = \sum_{i=1}^{n} x_i w_i + b$$

if we name our inputs $(x_1, \dots, x_n)$, our weights $(w_1, \dots, w_n)$, and our bias $b$.
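In code, that formula maps almost word for word onto plain Python. A minimal sketch, with illustrative values for the inputs, weights, and bias:

    # A single neuron: weighted sum of the inputs, plus a bias.
    inputs  = [0.5, -1.2, 3.0]    # x_1 ... x_n (illustrative values)
    weights = [0.8,  0.1, -0.4]   # w_1 ... w_n
    bias    = 2.0                 # b

    out = sum(x * w for x, w in zip(inputs, weights)) + bias
    print(out)    # 0.4 - 0.12 - 1.2 + 2.0 = 1.08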