Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence
book

by Jon Krohn, Grant Beyleveld, Aglaé Bassens
September 2019
Content level: Intermediate to advanced
416 pages
13h 49m
English
Addison-Wesley Professional
Content preview from Deep Learning Illustrated: A Visual, Interactive Guide to Artificial Intelligence

B. Backpropagation

In this appendix, we use the formal neural network notation from Appendix A to dive into the partial-derivative calculus behind the backpropagation method introduced in Chapter 8.

Let’s begin by defining some additional notation to help us along. Backpropagation works backward, so the notation is anchored at the final layer (denoted L), and the earlier layers are annotated relative to it (L-1, L-2, ..., L-n). The weights, biases, and function outputs are subscripted with this same notation. Recall from Equations 7.1 and 7.2 that the layer activation a_L is calculated by multiplying the preceding layer’s activation a_{L-1} by the weight w_L and adding the bias b_L to produce z_L, and then passing z_L through an activation function.
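The forward pass above, and the chain-rule step that backpropagation applies to it, can be sketched for a single final-layer weight. This is a minimal illustration, not the book's code: it assumes a sigmoid activation and a squared-error cost C = (a_L - y)^2 / 2, and all numeric values are made up. The analytic gradient dC/dw_L = (a_L - y) * sigma'(z_L) * a_{L-1} is checked against a finite-difference estimate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def forward(a_prev, w, b):
    # Equations 7.1 and 7.2: z_L = w_L * a_{L-1} + b_L, then a_L = sigma(z_L).
    z = w * a_prev + b
    return z, sigmoid(z)

def grad_w(a_prev, w, b, y):
    # Chain rule for the assumed cost C = (a_L - y)^2 / 2:
    # dC/dw_L = dC/da_L * da_L/dz_L * dz_L/dw_L
    #         = (a_L - y) * sigma'(z_L) * a_{L-1}
    z, a = forward(a_prev, w, b)
    return (a - y) * sigmoid_prime(z) * a_prev

# Illustrative values only.
a_prev, w, b, y = 0.5, 0.8, 0.1, 1.0
analytic = grad_w(a_prev, w, b, y)

# Sanity check: central finite difference of the cost with respect to w.
eps = 1e-6
cost = lambda w_: 0.5 * (forward(a_prev, w_, b)[1] - y) ** 2
numeric = (cost(w + eps) - cost(w - eps)) / (2 * eps)
print(abs(analytic - numeric) < 1e-8)  # → True: the two gradients agree
```

The same chain-rule product simply grows by one factor per layer as backpropagation moves from L to L-1, L-2, and so on, which is why the notation in this appendix is written backward from the output.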


Publisher Resources

ISBN: 9780135116821