13 Deep Learning Tools – TensorFlow

The examples in Section 13.2 follow tutorials presented on the Google TensorFlow website, www.tensorflow.org.

13.1 Neural Nets Review

The background material on neurons and neural nets (NNs) is given in Chapter 9. For convenience a brief review follows.

13.1.1 Summary of Single Neuron Discriminator

See Section 9.1.1 for further details, where a key example (the Perceptron) is discussed. Here we start with the neuron with a sigma activation function, shown in Figure 13.1. The sigma activation function is differentiable, so a stable backpropagation learning process can be implemented.
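Such a neuron can be sketched in a few lines of plain Python (a minimal illustration; the function and variable names are illustrative, not from the text):

```python
import math

def sigmoid(z):
    # Sigma activation: monotonically increasing, differentiable,
    # with asymptotes (0 as z -> -inf, 1 as z -> +inf).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    # Inputs x_k are multiplied by weights w_k, a bias b is applied,
    # and the weighted sum is passed through the activation.
    z = sum(wk * xk for wk, xk in zip(w, x)) + b
    return sigmoid(z)
```

Differentiability is the key property here: sigma'(z) = sigma(z)(1 - sigma(z)), which is what backpropagation uses to push error gradients through the neuron.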

13.1.2 Summary of Neural Net Discriminator and Back‐Propagation

In Chapter 9, we focused on ways to make a single neuron as capable as possible at classification and learning. This eventually led to the support vector machine (SVM) in Chapter 10. We now explore ways to make a collection of neurons, arranged as a layered NN (see Figure 13.2), into the best-performing classifier and learner possible.
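A layered NN of this kind is just the single-neuron computation repeated per layer, with each layer's outputs feeding the next layer's inputs. A minimal forward-pass sketch (hypothetical helper names, not from the text):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(x, weights, biases):
    # Each row of `weights` is one neuron's weight vector;
    # the layer output is one activation per neuron.
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def forward(x, net):
    # `net` is a list of (weights, biases) pairs, one per layer.
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x
```

For example, a 2-input, 2-hidden-neuron, 1-output net is `net = [(W1, b1), (W2, b2)]` with `W1` a 2x2 list of lists and `W2` a 1x2 list of lists; `forward` returns a single value in (0, 1).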

The core rule for training the NN (updating its weights) is backpropagation:

Δω = −η ∇L,  where  ∇L = δ_j^L ∇z_j^L
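The gradient-descent update at the heart of this rule, Δω = −η ∇L, can be illustrated on a one-parameter loss whose gradient is known in closed form (a hypothetical example, not from the text):

```python
# Hypothetical loss L(w) = (w - 3)^2, minimized at w = 3,
# so the gradient is grad_L(w) = 2 (w - 3).
def grad_L(w):
    return 2.0 * (w - 3.0)

eta = 0.1   # learning rate
w = 0.0     # initial weight
for _ in range(100):
    w += -eta * grad_L(w)   # the update Delta w = -eta * grad L
# w has converged toward the minimizer w = 3
```

Backpropagation is exactly this update applied to every weight in the NN, with ∇L computed layer by layer from the output-layer errors δ_j^L.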
Figure 13.1 Single neuron. Sigma activation function: the inputs xk are multiplied by the weights ωk, and possibly have a bias applied, and the result is passed through a function that is monotonically increasing, differentiable, with asymptotes.
