Learning TensorFlow

Errata for Learning TensorFlow



The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction is displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.




Version Location Description Submitted By Date Submitted Date Corrected
Printed
Page 11
last line

print ans should be replaced with print(ans). As of Python 3.0, the print statement was replaced by the print() function.

Kenneth T. Hall  Sep 11, 2017  Sep 15, 2017
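
For reference, a minimal TensorFlow 1.x sketch in the Python 3 form (the constants and variable names here are illustrative, not the book's listing):

    import tensorflow as tf

    a = tf.constant(5)
    b = tf.constant(3)
    c = a + b

    with tf.Session() as sess:
        ans = sess.run(c)

    # Python 3: print is a function, so the argument needs parentheses.
    print(ans)  # 8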
Printed
Page 36
2nd from the bottom

In the example A = tf.constant ..., print(a.get_shape()) should be print(A.get_shape()).

Yevgeniy Davletshin  Sep 19, 2017 
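
A short sketch of the corrected call (the constant's values are illustrative):

    import tensorflow as tf

    A = tf.constant([[1, 2, 3],
                     [4, 5, 6]])
    # The tensor is assigned to uppercase A, so get_shape() must be
    # called on A, not on a lowercase a.
    print(A.get_shape())  # (2, 3)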
PDF
Page 37
Middle of page, for code

The author creates a 2x2x3 array. For readers trying to tell the dimensions apart, the two dimensions that have length 2 are easily confused. It would be better to create an array whose dimension lengths are all different, such as 2, 3, and 4.

Note from the Author or Editor:
Thanks for the suggestion! We will try to change this in future versions of the book to make it easier to follow.

Clem Wang  Sep 23, 2017 
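
A sketch of the suggestion, with every axis a different length so the dimensions cannot be confused (the shape is chosen for illustration, not taken from the book):

    import numpy as np
    import tensorflow as tf

    # Shape (2, 3, 4): each axis has a distinct length, so it is easy to
    # see which axis is which when printing the array or its shape.
    x_data = np.arange(24).reshape(2, 3, 4)
    x = tf.constant(x_data)
    print(x.get_shape())  # (2, 3, 4)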
Printed
Page 41
2nd from the top

In the formula f(x_i) = w.T x_i + b the transpose is applied to w, but in the code it is applied to x instead: y_pred = tf.matmul(w, tf.transpose(x)) + b

Yevgeniy Davletshin  Sep 19, 2017 
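
A sketch of the shapes involved (placeholder dimensions are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=[None, 3])   # N samples, 3 features
    w = tf.Variable([[0.0, 0.0, 0.0]])                # 1 x 3 row of weights
    b = tf.Variable(0.0)

    # Code as printed in the book: the transpose appears on x, producing
    # a 1 x N row of predictions w . x_i + b.
    y_pred = tf.matmul(w, tf.transpose(x)) + b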
PDF
Page 45
line 5 (formula)

I'm guessing the LaTeX formula used was: $$ H(p, q) = - \Sigma_x p(x) \log q(x) $$ But the correct form, which puts the x under the summation sign instead of making x a subscript of Sigma, is: $$ H(p, q) = - \sum_x p(x) \log q(x) $$

Note from the Author or Editor:
Thanks!

Clem Wang  Sep 23, 2017 
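
A tiny numeric check of the formula H(p, q) = -sum_x p(x) log q(x) (the two distributions are made up for illustration):

    import numpy as np

    p = np.array([0.5, 0.5])
    q = np.array([0.9, 0.1])

    # Cross-entropy: the sum runs over the outcomes x.
    H = -np.sum(p * np.log(q))
    print(H)  # approximately 1.204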
PDF
Page 50
line 9 ( or line 2 of code on the page)

The line is missing a minus sign for the first term. The text incorrectly has: loss = y_true*tf.log(y_pred) - (1-y_true)*tf.log(1-y_pred) but it should be: loss = - y_true*tf.log(y_pred) - (1-y_true)*tf.log(1-y_pred) Compare to the documentation: https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits

Clem Wang  Sep 24, 2017 
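
A sketch of the corrected loss next to the equivalent built-in op (the placeholders and shapes are illustrative; the built-in expects raw logits rather than probabilities):

    import tensorflow as tf

    y_true = tf.placeholder(tf.float32, shape=[None])
    y_pred = tf.placeholder(tf.float32, shape=[None])  # probabilities in (0, 1)
    logits = tf.placeholder(tf.float32, shape=[None])  # pre-sigmoid scores

    # Corrected manual form: both terms carry a minus sign.
    loss = tf.reduce_mean(-y_true * tf.log(y_pred)
                          - (1 - y_true) * tf.log(1 - y_pred))

    # Built-in equivalent, computed from logits for numerical stability.
    loss_builtin = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits))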
Printed
Page 60
2nd line of code near page bottom

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_conv,y_)) should be cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))

Note from the Author or Editor:
Thanks! This will be fixed in future versions.

Kenneth T. Hall  Oct 16, 2017 
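
The corrected call in context (placeholder shapes are illustrative):

    import tensorflow as tf

    y_ = tf.placeholder(tf.float32, shape=[None, 10])      # one-hot labels
    y_conv = tf.placeholder(tf.float32, shape=[None, 10])  # raw logits

    # Recent TF 1.x releases require labels and logits to be passed as
    # keyword arguments, which also removes any ambiguity about order.
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))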
Printed
Page 60
2nd paragraph, 1st sentence

"Next we have two consecutive layers of convolution and pooling, each with 5x5 convolutions and 64 feature maps, followed by a single fully connected layer with 1,024 units." The first convolution layer has only 32 feature maps, not 64 as written.

Kenneth T. Hall  Oct 16, 2017 
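
A sketch of the architecture as corrected, written with tf.layers rather than the book's helper functions (the layer names are illustrative):

    import tensorflow as tf

    x_image = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

    # First block: 5x5 convolutions with 32 feature maps, then 2x2 pooling.
    conv1 = tf.layers.conv2d(x_image, filters=32, kernel_size=5,
                             padding='same', activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)

    # Second block: 5x5 convolutions with 64 feature maps, then 2x2 pooling.
    conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5,
                             padding='same', activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)

    # Single fully connected layer with 1,024 units.
    flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    full1 = tf.layers.dense(flat, 1024, activation=tf.nn.relu)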
Printed
Page 61
1st paragraph (after the code). Also, in the footnote on the same page.

"epoc" should be "epoch"

Kenneth T. Hall  Oct 16, 2017 
Printed, PDF, ePub
Page 68
code example at top of page

code reads: conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob) full1 = tf.nn.relu(full_layer(conv3_flat, F1)) This doesn't make sense; it should read: conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob) full1 = tf.nn.relu(full_layer(conv3_drop, F1))

Note from the Author or Editor:
Thank you for catching this! We have already fixed it in the git repo, and will make the change in future versions of the book.

Jeff Kriske  Dec 12, 2017
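
A sketch of the corrected wiring (the flattened size and layer width are illustrative, not the book's constants):

    import tensorflow as tf

    conv3_flat = tf.placeholder(tf.float32, shape=[None, 4 * 4 * 128])
    keep_prob = tf.placeholder(tf.float32)

    conv3_drop = tf.nn.dropout(conv3_flat, keep_prob=keep_prob)
    # The fully connected layer must consume conv3_drop; feeding
    # conv3_flat instead leaves the dropout op unused in the graph.
    full1 = tf.nn.relu(tf.layers.dense(conv3_drop, 512))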