Chapter 19. Deep Learning
A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring.
Alexander Pope
Deep learning originally referred to the application of “deep” neural networks (that is, networks with more than one hidden layer), although in practice the term now encompasses a wide variety of neural architectures (including the “simple” neural networks we developed in Chapter 18).
In this chapter we’ll build on our previous work and look at a wider variety of neural networks. To do so, we’ll introduce a number of abstractions that allow us to think about neural networks in a more general way.
The Tensor
Previously, we made a distinction between vectors (one-dimensional arrays) and matrices (two-dimensional arrays). When we start working with more complicated neural networks, we’ll need to use higher-dimensional arrays as well.
In many neural network libraries, n-dimensional arrays are referred to as tensors, which is what we’ll call them too. (There are pedantic mathematical reasons not to refer to n-dimensional arrays as tensors; if you are such a pedant, your objection is noted.)
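For example (an illustration of the idea rather than code from the chapter), each extra dimension is just one more level of nesting in a plain Python list:

# A vector is a 1-dimensional tensor:
vector = [1, 2, 3]                     # shape [3]

# A matrix is a 2-dimensional tensor:
matrix = [[1, 2], [3, 4], [5, 6]]      # shape [3, 2]

# A 3-dimensional tensor is just a list of matrices:
tensor3d = [[[1, 2], [3, 4]],
            [[5, 6], [7, 8]]]          # shape [2, 2, 2]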
If I were writing an entire book about deep learning, I’d implement a full-featured Tensor class that overloaded Python’s arithmetic operators and could handle a variety of other operations. Such an implementation would take an entire chapter on its own.
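To make “overloaded Python’s arithmetic operators” concrete, here is a toy sketch (my illustration, not the chapter’s code) with elementwise + as the only operator:

class Tensor:
    """A toy n-dimensional array wrapping a nested list of numbers."""
    def __init__(self, data: list) -> None:
        self.data = data

    def __add__(self, other: "Tensor") -> "Tensor":
        # Elementwise addition; assumes both tensors have the same shape.
        def add(a, b):
            if isinstance(a, list):
                return [add(x, y) for x, y in zip(a, b)]
            return a + b
        return Tensor(add(self.data, other.data))

assert (Tensor([1, 2]) + Tensor([3, 4])).data == [4, 6]

A real version would also need __sub__, __mul__, __matmul__, broadcasting, and much more, which is exactly why it would fill a chapter.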
Here we’ll cheat and say that a Tensor is just a list. This is true in one direction—all of our vectors and matrices are lists.
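In code, that cheat is just a type alias, and a small helper can read off a tensor’s shape by walking down its first element at each level. (The shape function below is a sketch of how such a helper might look; the alias itself follows directly from the definition above.)

from typing import List

# The “cheat”: a Tensor is just a (possibly nested) Python list.
Tensor = list

def shape(tensor: Tensor) -> List[int]:
    # Walk down the first element at each level, recording the sizes.
    # Assumes the tensor is rectangular (not ragged).
    sizes: List[int] = []
    while isinstance(tensor, list):
        sizes.append(len(tensor))
        tensor = tensor[0]
    return sizes

assert shape([1, 2, 3]) == [3]
assert shape([[1, 2], [3, 4], [5, 6]]) == [3, 2]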