Learn ARCore - Fundamentals of Google ARCore by Micheal Lanham

Backward propagation explained

In this example, we are training our model with supervised learning on a simple function described by a set of inputs (1.0, 0.1, 0) and expected outputs (0, 1.0, 1.0), which is represented by the graph/chart we saw earlier. In essence, we want our neural net to learn the function defined by those points and be able to reproduce those outputs. We do this by calling net.Train, passing in the datasets and the minimum expected error. This trains the network by propagating the error backward through each neuron of the network until the minimum error is reached. At that point, the training stops and the network declares itself ready.
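To make the training loop concrete, here is a minimal sketch in Python of what a call like net.Train does internally. This is not the book's actual library; the single sigmoid neuron, the learning rate, and the minimum-error stopping condition are all simplifying assumptions for illustration, using the same three input/output pairs from the example above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the inputs and expected outputs from the example above.
inputs  = [1.0, 0.1, 0.0]
targets = [0.0, 1.0, 1.0]

w, b = 0.0, 0.0     # weights of a single sigmoid neuron (hypothetical model)
lr = 1.0            # learning rate (assumed value)
min_error = 0.01    # stop once mean squared error falls below this threshold

for epoch in range(100000):
    error = 0.0
    for x, t in zip(inputs, targets):
        y = sigmoid(w * x + b)        # forward pass: compute the prediction
        e = y - t                     # prediction error for this sample
        error += e * e / len(inputs)  # accumulate mean squared error
        grad = e * y * (1.0 - y)      # backward pass: chain rule through sigmoid
        w -= lr * grad * x            # gradient-descent weight update
        b -= lr * grad
    if error < min_error:             # "minimum expected error" reached: done
        break
```

After the loop exits, the neuron maps 1.0 toward 0 and both 0.1 and 0 toward 1.0, which is the function the three data points define. A real network repeats the same forward/backward/update cycle for every neuron in every layer.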

Backward propagation works using a simple iterative optimization algorithm called gradient descent ...
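The core of gradient descent is easiest to see on a one-dimensional function. The sketch below (an illustrative example, not from the book) minimizes f(w) = (w - 3)², whose minimum sits at w = 3, by repeatedly stepping opposite the derivative:

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)  # derivative f'(w) of (w - 3)^2

w = 0.0      # arbitrary starting point
lr = 0.1     # learning rate: size of each downhill step
for step in range(100):
    w -= lr * grad(w)  # step in the direction of steepest descent
# w has now converged to (very nearly) 3.0
```

Backward propagation applies this same update rule to every weight in the network, using the chain rule to compute each weight's share of the output error.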
