Practical Convolutional Neural Networks

by Mohit Sewak, Md. Rezaul Karim, Pradeep Pujari
February 2018
Intermediate to advanced
218 pages
5h 31m
English
Packt Publishing
Content preview from Practical Convolutional Neural Networks

Understanding backpropagation 

In this section, we will develop an intuition for backpropagation. Backpropagation is a way of computing gradients using the chain rule. Understanding this process and its subtleties is critical for you to be able to understand and effectively develop, design, and debug neural networks.

In general, given a function f(x), where x is a vector of inputs, we want to compute the gradient of f at x, denoted by ∇f(x). The symbol ∇ is pronounced nabla. In the case of neural networks, the function f is basically a loss function (L), and the input x is the combination of the weights and the training data:

(x_i, y_i), i = 1, …, N
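The chain-rule computation described here can be sketched on a tiny example. The following is a minimal illustration, not code from the book: it runs a forward pass through the function f(x, y, z) = (x + y)·z, then backpropagates the gradient through each intermediate node.

```python
# Backpropagation via the chain rule on f(x, y, z) = (x + y) * z.
# The function and variable names below are illustrative only.

def forward(x, y, z):
    q = x + y          # intermediate node q
    f = q * z          # output node f
    return q, f

def backward(z, q):
    # Start with df/df = 1 at the output, then apply the chain rule backwards.
    df_dq = z          # f = q * z  ->  df/dq = z
    df_dz = q          # f = q * z  ->  df/dz = q
    df_dx = df_dq * 1  # q = x + y  ->  dq/dx = 1, so df/dx = df/dq * 1
    df_dy = df_dq * 1  # q = x + y  ->  dq/dy = 1, so df/dy = df/dq * 1
    return df_dx, df_dy, df_dz

x, y, z = -2.0, 5.0, -4.0
q, f = forward(x, y, z)
grads = backward(z, q)
print(f, grads)  # -12.0 (-4.0, -4.0, 3.0)
```

Each local derivative is cheap to compute, and the chain rule stitches them together from the output back to the inputs; this is exactly the pattern a neural network follows, with the loss L as the output node and the weights among the inputs.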

Why do we take the gradient with respect to the weight parameters?

It is given that the training data ...



ISBN: 9781788392303