Chapter 21. Neural Networks

21.0 Introduction

At the heart of basic neural networks is the unit (also called a node or neuron). A unit takes in one or more inputs, multiplies each input by a parameter (also called a weight), sums the weighted input values along with a bias value (commonly initialized to 0), and then feeds the result into an activation function. This output is then sent forward to other units deeper in the neural network (if they exist).
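
For example, a single unit can be sketched in a few lines of NumPy; the function name unit and the choice of np.tanh as the activation are illustrative assumptions, not from this chapter's recipes:

import numpy as np

def unit(inputs, weights, bias=0.0, activation=np.tanh):
    # A single unit: weighted sum of the inputs plus a bias,
    # passed through an activation function
    weighted_sum = np.dot(inputs, weights) + bias
    return activation(weighted_sum)

# One unit receiving three inputs
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.1, 0.4, -0.2])
print(unit(x, w))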

Neural networks can be visualized as a series of connected layers that form a network connecting an observation’s feature values at one end and the target value (e.g., the observation’s class) at the other end. Feedforward neural networks (also called multilayer perceptrons) are the simplest artificial neural networks used in real-world settings. The name “feedforward” comes from the fact that an observation’s feature values are fed “forward” through the network, with each layer successively transforming them so that the final output is the same as (or close to) the target value.
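
A rough sketch of that forward flow is shown below; the feedforward helper, the layer shapes, and the activation choice are illustrative assumptions rather than part of any library:

import numpy as np

def feedforward(features, layers, activation=np.tanh):
    # Feed an observation's feature values "forward" through the network;
    # each (weights, bias) layer transforms the previous layer's output
    output = features
    for weights, bias in layers:
        output = activation(output @ weights + bias)
    return output

# Two small layers transforming 4 feature values into 1 output value
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(3)),
          (rng.normal(size=(3, 1)), np.zeros(1))]
print(feedforward(rng.normal(size=4), layers))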

Specifically, feedforward neural networks contain three types of layers. At the start of the neural network is an input layer, where each unit contains an observation’s value for a single feature. For example, if an observation has 100 features, the input layer has 100 units. At the end of the neural network is the output layer, which transforms the output of intermediate layers (called hidden layers) into values useful for the task at hand. ...
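
As one way to picture the three layer types in code, here is a minimal PyTorch sketch; the layer sizes and activation functions are assumptions for illustration, not taken from this chapter's recipes:

import torch
import torch.nn as nn

network = nn.Sequential(
    nn.Linear(100, 16),  # receives the 100-unit input layer (one unit per
                         # feature) and produces a 16-unit hidden layer
    nn.ReLU(),           # activation applied to the hidden layer's units
    nn.Linear(16, 1),    # output layer: transforms the hidden layer's output
    nn.Sigmoid(),        # into a value useful for the task (here, a probability)
)

observation = torch.rand(1, 100)  # one observation with 100 features
print(network(observation))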
