Errata for Natural Language Processing with PyTorch

The errata list is a list of errors and their corrections that were found after the product was released. If an error was corrected in a later version or reprint, the date of the correction is shown in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.

Color key: Serious technical mistake · Minor technical mistake · Language or formatting error · Typo · Question · Note · Update

Version | Location | Description | Submitted by | Date submitted | Date corrected
ePub
Page ?
Under the heading "Using Code Examples"

The link to supporting code does not work using the online book version. This makes it difficult to follow the exercises as only some code is shown in the text itself.

https://nlproc.info/PyTorchNLPBook/repo/

I apologize in advance if I am missing something. This is not a technical error, but I figured this might be a better way to reach out than bothering the author directly.

Note from the Author or Editor:
The code files are now available here:
https://github.com/joosthub/PyTorchNLPBook

Anonymous  Feb 15, 2019 
Printed
Page 6,7,8,9
Below the second paragraph on page 6, versus the example figures and code

The corpus listed in the text reads:
Time flies like an arrow.
Fruit flies like a banana.

The code and graphs use the following text:
Time flies flies like an arrow.
Fruit flies like a banana.

Note from the Author or Editor:
Replace example 1-1 with the following:

from sklearn.feature_extraction.text import CountVectorizer
import seaborn as sns

corpus = ['Time flies like an arrow.', 'Fruit flies like a banana.']
one_hot_vectorizer = CountVectorizer(binary=True)
one_hot = one_hot_vectorizer.fit_transform(corpus).toarray()
vocab = one_hot_vectorizer.get_feature_names()
sns.heatmap(one_hot, annot=True,
            cbar=False, xticklabels=vocab,
            yticklabels=['Sentence 1', 'Sentence 2'])
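
For reference, running the corrected snippet (outside the book) should yield the vocabulary and binary matrix below. Note that scikit-learn's default tokenizer lowercases and drops single-character tokens, so 'a' never enters the vocabulary; newer scikit-learn releases use get_feature_names_out() in place of get_feature_names().

# Quick check of the corrected Example 1-1 (not from the book itself).
from sklearn.feature_extraction.text import CountVectorizer

corpus = ['Time flies like an arrow.', 'Fruit flies like a banana.']
vectorizer = CountVectorizer(binary=True)
one_hot = vectorizer.fit_transform(corpus).toarray()

# get_feature_names_out() on scikit-learn >= 1.0; older releases use get_feature_names().
print(vectorizer.get_feature_names_out().tolist())
# Expected: ['an', 'arrow', 'banana', 'flies', 'fruit', 'like', 'time']
print(one_hot)
# Expected:
# [[1 1 0 1 0 1 1]
#  [0 0 1 1 1 1 0]]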

Charles L Blount  Feb 14, 2019 
Printed
Page 32
Example 2-1. Tokenizing text - first box

In the Input[0] window, last line -

print([str(token) for token >in nlp(text.lower())])
should be
print([str(token) for token in nlp(text.lower())])

Note from the Author or Editor:
The suggestion is correct. The text has an extra > symbol that should be removed.
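
For reference, a minimal sketch of the corrected line in context, assuming spaCy and its small English model are installed (the sample sentence is illustrative, not necessarily the book's):

import spacy

# Assumes the small English pipeline has been downloaded:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Mary, don't slap the green witch"
print([str(token) for token in nlp(text.lower())])
# e.g. ['mary', ',', 'do', "n't", 'slap', 'the', 'green', 'witch']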

Anonymous  Jul 01, 2019 
Printed
Page 44
1st paragraph

In the second to last sentence of the first paragraph: "To mitigate that effect, variants such as the Leaky ReLU and Parametric ReLU (PReLU) activations functions have proposed, where the leak coefficient a is a learned parameter." I think the author meant "have been proposed".
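
For context (not part of the erratum), PyTorch provides both activations; in nn.PReLU the leak coefficient is a learnable parameter, as this small sketch illustrates:

import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.0])

# Leaky ReLU: the negative-side slope is a fixed hyperparameter.
leaky = nn.LeakyReLU(negative_slope=0.01)
print(leaky(x))   # tensor([-0.0200, -0.0050,  0.0000,  1.0000])

# PReLU: the slope 'a' is a learned parameter (default init 0.25),
# updated by backpropagation along with the rest of the model.
prelu = nn.PReLU(num_parameters=1, init=0.25)
print(prelu(x))   # tensor([-0.5000, -0.1250,  0.0000,  1.0000], grad_fn=...)
print(list(prelu.parameters()))   # the single learnable coefficient 'a'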

Benjamin E Nathanson  Apr 27, 2019