Errata for Fundamentals of Deep Learning

The errata list is a list of errors and their corrections that were found after the product was released.

The following errata were submitted by our customers and have not yet been approved or disproved by the author or editor. They solely represent the opinion of the customer.

Version | Location | Description | Submitted by | Date submitted
ePub Page passim

In Chapter 3 the book (sensibly) gives instructions on how to install TensorFlow for Python 2.x and Python 3.x. Yet, as given, the examples are incompatible with Python 3.

Anonymous  Aug 16, 2017 
PDF Page 1
page 42

In [5]: a = tf.constant(2)
In [6]: a = tf.constant(2)
In [7]: multiply = tf.mul(a, b)
In [7]: session.run(multiply)
Out[7]: 6

Must be:
In [5]: a = tf.constant(2)
In [6]: b = tf.constant(3) <-----------------
In [7]: multiply = tf.mul(a, b)
In [7]: session.run(multiply)
Out[7]: 6

Roman  Jun 11, 2017 
3
Code after "Then we can test that our installation of TensorFlow functions as expected:"

tf.mul has been replaced by tf.multiply

Andrew Stollak  May 25, 2017 
Printed Page 8
first paragraph

The definition of the logit has a sum from i=0 to n, but the sum should be from i=1 to n (since the inputs/features and weights, as given earlier in the paragraph, are indexed from 1).
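In LaTeX, the corrected bounds would read as follows (a reconstruction from the indexing described above; the book's exact expression may also carry a bias term):

\[
z = \sum_{i=1}^{n} w_i x_i
\]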

James Battat  Jul 07, 2019 
Printed Page 8
paragraph below Figure 1-7

The output of the neuron is given as y = f(x·w + b), where x and w are vectors (in bold). One of x or w should be transposed before multiplication; e.g., if the vectors are column vectors, we should have f(x'·w + b), where x' is the transpose of x.
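For instance, treating both as column vectors, the corrected expression would be (my rendering of the submitter's suggestion, not a quote from the book):

\[
y = f\left(\mathbf{x}^{\top} \mathbf{w} + b\right)
\]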

James Battat  Jul 07, 2019 
PDF Page 14
1st paragraph, between the figures

"ReLU" is being described as "restricted linear units" where "Re" actually stands for "rectified".

Christian Blume  Apr 03, 2019 
PDF Page 17
Figure 2-1

The weight for both Soda and Fries is shown as w_2.

Rakesh Ganapathi Karanth  Mar 31, 2018 
Other Digital Version 21
Top of page Figure 2-1

The weight for x3 should be w3, as written in the formula, but it reads w2.

Nick Davis  Jan 15, 2018 
Printed Page 22
third line of top equation

The last term on the third line is the partial derivative of y with respect to the weight w_k. The y is indexed with a subscript "i", but the index should instead appear as a superscript in parentheses (like the other y^(i) in the same equation). In other words, in LaTeX notation: replace y_i with y^{(i)}
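Rendered in LaTeX, the corrected term would be (a rendering of the correction described above):

\[
\frac{\partial y^{(i)}}{\partial w_k}
\]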

James Battat  Jul 07, 2019 
PDF Page 42
ipython code

In [6]: a = tf.constant(2)

should be

In [6]: b = tf.constant(3)

and

In [7]: multiply = tf.mul(a, b)

should be

In [7]: multiply = tf.multiply(a, b)
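Putting both corrections together, a runnable TensorFlow 1.x version of the page-42 snippet might look like the following (the session setup and print call are my assumptions, not a quote from the book):

import tensorflow as tf

# Both fixes applied: In [6] defines b, and tf.mul is replaced by tf.multiply.
a = tf.constant(2)
b = tf.constant(3)
multiply = tf.multiply(a, b)

session = tf.Session()
print(session.run(multiply))  # prints 6
session.close()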

Ayla Khan  Jun 05, 2017 
PDF Page 46
bottom

`from read_data import get_minibatch()`

The module does not exist; searching for that module or the function get_minibatch finds nothing.
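As a possible stand-in (not from the book), the MNIST helper bundled with TensorFlow 1.x can supply minibatches; the batch size of 128 below is an arbitrary choice:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Download/load MNIST and pull one minibatch of images and labels.
mnist = input_data.read_data_sets("data/", one_hot=True)
mnist_x, mnist_y = mnist.train.next_batch(128)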

Anonymous  Jun 21, 2017 
Printed Page 46

init_op = tf.initialize_all_variables()

produces a warning:

"WARNING:tensorflow:From <ipython-input-28-5499475a3f6f>:1: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead."

Therefore,

init_op = tf.initialize_all_variables()

should be replaced by

init_op = tf.global_variables_initializer()

Zachary Kneupper  Jul 18, 2017 
Printed Page 46

The line

from read_data import get_minibatch()

does not work, since the "read_data" package does not appear to exist (it is not listed in the pypi index of packages).

Is there some other code that we should use instead?

Thanks for your time.

Anonymous  Jul 18, 2017 
Printed Page 54
first code inset

The inference function is missing the variable assignment for init.

Should be:

def inference(x):
    init = tf.constant_initializer(value=0)
    W = tf.get_variable("W", [784, 10], initializer=init)
    b = tf.get_variable("b", [10], initializer=init)
    output = tf.nn.softmax(tf.matmul(x, W) + b)
    return output

Anonymous  Aug 05, 2017 
Printed Page 60
Code for the layers and the inference

In the "inference()" function there are two hidden layers and one output layer being defined using the "layer()" function above.

For the output layer, the relu function should be omitted. Could be done by adding a flag to the layer() function (relu=True for example) and just conditionally adding the relu after the linear combination of inputs.
Then create hidden layers with relu, output layer without relu.
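One possible shape for that change, sketched against a generic layer() helper (the book's actual layer() body may differ; the weight and bias initialization here is my assumption):

import tensorflow as tf

def layer(input, weight_shape, bias_shape, relu=True):
    weight_init = tf.random_normal_initializer(stddev=(2.0 / weight_shape[0]) ** 0.5)
    bias_init = tf.constant_initializer(value=0)
    W = tf.get_variable("W", weight_shape, initializer=weight_init)
    b = tf.get_variable("b", bias_shape, initializer=bias_init)
    linear = tf.matmul(input, W) + b
    # Apply the nonlinearity only for hidden layers; the output layer skips it.
    return tf.nn.relu(linear) if relu else linear

Hidden layers would then be created with relu=True (the default) and the output layer with relu=False.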

Michael Johann  Jul 28, 2017 
Printed Page 73
Last paragraph

"...is given by d^Hd" should be "...is given by d^T H d"

Anonymous  Dec 16, 2017 
PDF Page 75
the code after second paragraph

the second line of code should look like:
step_choices = range(...)

Feng Shi  Mar 04, 2018 
PDF Page 75
the code after third paragraph

1. variable "momentum" is not defined, one line could be added:
momentum = random.random()
2. momentum_rand_walk.append(), function append() needs an argument. I guess the code should be:
momentum_rand_walk.append(new_step)
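A possible reconstruction of the page-75 random-walk snippet with both fixes applied is shown below. Everything beyond the names quoted in these errata (step_choices, momentum, momentum_rand_walk, new_step) is my assumption, not the book's code:

import random

step_range = 10
step_choices = range(-step_range, step_range + 1)
rand_walk = [random.choice(step_choices) for _ in range(100)]

momentum = 0.9   # a fixed decay rate; the submitter suggests momentum = random.random() instead
momentum_rand_walk = [random.choice(step_choices)]
for _ in range(len(rand_walk) - 1):
    prev = momentum_rand_walk[-1]
    rand_choice = random.choice(step_choices)
    # exponentially weighted average of the previous step and a fresh random step
    new_step = momentum * prev + (1 - momentum) * rand_choice
    momentum_rand_walk.append(new_step)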

Feng Shi  Mar 04, 2018 
PDF Page 76
the paragraph above figure 4-9

The paragraph says:

"The figure demonstrates that to achieve a cost of 0.1 without momentum (right) requires nearly 18,000 steps (minibatches), whereas with momentum (left), we require just over 2,000."

The "left" and "right" are not consistent with what is shown in Figure 4-9.

Feng Shi  Mar 04, 2018 
Printed Page 76
Paragraph above Figure 4-9

Original text says:
"without momentum (right) requires ... whereas with momentum (left), ..."
But "left" and "right" should be swapped.

James Battat  Jul 07, 2019 
Printed Page 81
Last two equations

The subscript is missing on the weight for the second moment of the gradient; it should be $\beta_2^{i-k}$.

Andrew Pickholtz  Apr 17, 2018 
ePub Page 92.4 / 525
Example "Logging and Training the Logistic Regression Model"

- Incorrect indentation, e.g. in the "loss" function
- In the "inference" function, the first line should read: init = tf.constant_initializer(value=0)
- The scalar_summary method used in the "training" function does not exist (module 'tensorflow' has no attribute 'scalar_summary'); it should be tf.summary.scalar
- In main, module 'tensorflow' has no attribute 'merge_all_summaries'; it should be tf.summary.merge_all
- In main, module 'tensorflow.python.training.training' has no attribute 'SummaryWriter'
- In main, the variable mnist is undefined
- etc.

Note that these TensorFlow API changes significantly predate the book's publication date.
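For reference, a minimal self-contained sketch of the renamed TF 1.x summary APIs mentioned above; the SummaryWriter replacement (tf.summary.FileWriter) is my addition, as the erratum does not name it:

import tensorflow as tf

cost = tf.constant(0.5, name="cost")
tf.summary.scalar("cost", cost)        # formerly tf.scalar_summary
summary_op = tf.summary.merge_all()    # formerly tf.merge_all_summaries

with tf.Session() as sess:
    # formerly tf.train.SummaryWriter
    writer = tf.summary.FileWriter("logs/", graph=sess.graph)
    writer.add_summary(sess.run(summary_op), global_step=0)
    writer.close()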

Anonymous  Aug 16, 2017 
Printed Page 95
bottom line

The formula should use floor rather than ceil:

\[
w_{out} = \left\lfloor \frac{w_{in} - e + 2p}{s}\right\rfloor + 1
\]
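A quick check with $w_{in} = 8$, $e = 3$, $p = 0$, $s = 2$ (my own numbers, not from the book): valid filter positions start at columns 0, 2, and 4, so $w_{out} = 3 = \lfloor 5/2 \rfloor + 1$, whereas the ceiling version would give 4.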

phil.zhang  Aug 14, 2017 
Printed Page 119
Paragraph after Figure 6-2

Errors in the mathematical description of Principal Component Analysis:

- "... we can view this operation as a project..." should be "... we can view this operation as a projection..."

- "T=X" should be "T=XW" .

Andrew Pickholtz  Apr 17, 2018 
PDF Page 197
Figure 7-30

". while little in a you See" should be ". while little a in you See".

Anonymous  Mar 19, 2018 
PDF Page 223
1st paragraph

"the Turning machine" should be "the Turing machine"

Anonymous  Mar 23, 2018 
Printed Page 254
4th paragraph

The text refers to the ith instance of variable y as "y_i" instead of formatting the i as a proper subscript.

Anonymous  Oct 20, 2017