Implementing a multilayer perceptron (MLP)

A perceptron is composed of a single layer of LTUs (linear threshold units), with each neuron connected to all of the inputs. These connections are often represented using special pass-through neurons called input neurons: they simply output whatever input they are fed. In addition, an extra bias feature is generally added (x0 = 1).

This bias feature is typically represented using a special type of neuron called a bias neuron, which just outputs 1 all the time. A perceptron with two inputs and three outputs is represented in Figure 7. This perceptron can simultaneously classify instances into three different binary classes, which makes it a multioutput classifier:

Figure 7: A perceptron with two inputs and three outputs
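The architecture in Figure 7 can be sketched numerically. The following is a minimal illustration, not the book's code: the weight values are randomly initialized placeholders, and `step` stands in for the LTU's threshold activation. A bias input fixed at 1 is prepended to each instance, so one weight matrix covers both the bias and the input connections.

```python
import numpy as np

def step(z):
    """Heaviside step function: the LTU activation (1 if z >= 0, else 0)."""
    return (z >= 0).astype(int)

rng = np.random.default_rng(42)
# Weight matrix: 3 output LTUs x (1 bias neuron + 2 input neurons).
# These weights are illustrative placeholders, not trained values.
W = rng.normal(size=(3, 3))

def perceptron(x):
    """Forward pass: prepend the bias feature x0 = 1, then threshold."""
    x_with_bias = np.concatenate(([1.0], x))  # bias neuron always outputs 1
    return step(W @ x_with_bias)

# Each call yields three binary decisions, one per output class.
print(perceptron(np.array([0.5, -1.2])))
```

Because each of the three output neurons thresholds independently, a single instance can be assigned to any combination of the three binary classes, which is what makes this a multioutput classifier.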

Since the decision ...
