Backpropagation is how we actually train our model: it's the algorithm we use to minimize the prediction error by adjusting our model's weights, usually via a method called gradient descent.
Let's begin with a basic example. Say we want to train a simple neural network to learn the following mapping, which multiplies a number by 0.5:
| Input | Target |
|-------|--------|
| 1     | 0.5    |
| 2     | 1.0    |
| 3     | 1.5    |
| 4     | 2.0    |
We have a basic model to start with, as follows:
y = W * x
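
To make this concrete, here is a minimal sketch of this one-weight model and the training data above in Python; the function name `predict` and the variable names are ours, chosen for illustration:

```python
# Training data from the table above: target = 0.5 * input.
inputs = [1.0, 2.0, 3.0, 4.0]
targets = [0.5, 1.0, 1.5, 2.0]

def predict(w, x):
    """Our entire model: a single weight W multiplying the input x."""
    return w * x
```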
So, to start, let's guess that W is 2. The following table shows the results:
| Input | Target | W * x |
|-------|--------|-------|
| 1     | 0.5    | 2     |
| 2     | 1.0    | 4     |
| 3     | 1.5    | 6     |
| 4     | 2.0    | 8     |
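As a quick check, we can reproduce this table by running our guess through the sketch above:

```python
w = 2.0  # our initial guess for W

for x, target in zip(inputs, targets):
    print(f"input={x}, target={target}, W*x={predict(w, x)}")
# input=1.0, target=0.5, W*x=2.0
# input=2.0, target=1.0, W*x=4.0
# input=3.0, target=1.5, W*x=6.0
# input=4.0, target=2.0, W*x=8.0
```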
Now that we have the output of our guess, we can compare it to the target values and measure how far off we are.
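
Looking ahead to the gradient descent mentioned at the start, here is a sketch of how that comparison can be turned into an error and a single weight update; the choice of mean squared error as the loss and the learning rate value are our assumptions for illustration, not something fixed by the example:

```python
def mse_loss(w):
    """Mean squared error between the predictions W*x and the targets."""
    return sum((predict(w, x) - t) ** 2 for x, t in zip(inputs, targets)) / len(inputs)

def mse_grad(w):
    """Derivative of the loss with respect to W: the mean of 2 * x * (W*x - target)."""
    return sum(2 * x * (predict(w, x) - t) for x, t in zip(inputs, targets)) / len(inputs)

learning_rate = 0.05  # assumed value, chosen for illustration
w = 2.0
w -= learning_rate * mse_grad(w)  # one gradient descent step
print(w)  # 0.875: already moving from our guess of 2 toward the true value, 0.5
```

Repeating this update shrinks the error each step, which is exactly what "adjusting our model's weights" via gradient descent means here.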