Errata
The errata list below records errors found after the product was released, along with their corrections.
The following errata were submitted by our customers and have not yet been confirmed or refuted by the author or editor. They represent solely the opinion of the customer.
Color key: Serious technical mistake · Minor technical mistake · Language or formatting error · Typo · Question · Note · Update
| Location | Description | Submitted by | Date submitted |
|---|---|---|---|
| Page 9, bottom subsection heading | Text: Installing Porch in Python | Jørgen Lang | Sep 22, 2025 |
| Page 62, continuation of code, line 5 | Text: print(f'Test Set Accuracy: {100 * correct / total}%') | Jørgen Lang | Oct 17, 2025 |
| Page 8, 2nd para | Text: wrapped in the torchvision.models library. | Jørgen Lang | Sep 22, 2025 |
| Page 9, 3rd-last para | Text: With Python, there are many ways to install frameworks, but the default one supported by the TensorFlow team is pip. | Jørgen Lang | Sep 22, 2025 |
| Page 17, last para | Text: "You’ll see the word tensor a lot in ML; it gives the TensorFlow framework its name. | Jørgen Lang | Oct 13, 2025 |
| Page 18ff., multiple | In multiple instances the variable "y" is written in uppercase (as a capital "Y"), while in the remainder of the chapter it is consistently lowercase. | Jørgen Lang | Oct 13, 2025 |
| Page 28, 4th para, 1st sentence | Text: If you remember, in Chapter 1 we had a Sequential model to specify that we had | Jørgen Lang | Oct 14, 2025 |
| Page 35, 3rd para, 1st sentence | Text: We’ll also use the term epoch for a training cycle with all of the data | Jørgen Lang | Oct 15, 2025 |
| Page 35, 5th para (just above "Training the Neural Network"), 1st sentence | Text: This will simply call the train function we specified five times […] | Jørgen Lang | Oct 15, 2025 |
| Page 37, 3rd code snippet | Text: | Jørgen Lang | Oct 15, 2025 |
| Page 38, code below "Exploring the Model Output" | Text: | Jørgen Lang | Oct 15, 2025 |
| Page 39, 2nd para, 2nd sentence | Text: The Softmax function gets the log() of the value, where log(1) is zero and | Jørgen Lang | Oct 15, 2025 |
| Page 51, 5th para | Text: Finally, these 128 are fed into the final layer (self.fc1) with 10 outputs—that represent the 10 classes. | Jørgen Lang | Oct 16, 2025 |
| Page 51, 4th para, last sentence | Text: The output is 128, which is the same number of neurons we used in Chapter 2 for the deep neural network (DNN). | Jørgen Lang | Oct 16, 2025 |
| Page 54, various (3 times) | Text: "dense layers" | Jørgen Lang | Oct 17, 2025 |
| Page 55, 1st heading | Disclaimer: This is not meant as a serious suggestion. | Jørgen Lang | Oct 17, 2025 |
| Page 57, figure 3-8 | Issue: directory names are different in naming and case from the actual dir structure | Jørgen Lang | Oct 17, 2025 |
| Page 58, 2nd para, last sentence | Text: […] and the directory for validation is validation_dir, as specified earlier. | Jørgen Lang | Oct 17, 2025 |
| Page 59, 3rd para | Text: The theory is that these will be activated feature maps […] | Jørgen Lang | Oct 17, 2025 |
| Page 60, last para, 3rd sentence | Text: In the preceding code snippet, you’ve already downloaded the training and vali‐ | Jørgen Lang | Oct 17, 2025 |
| Page 61, 2nd para, last sentence | Text: Here, you’ll download some additional images for testing the model. | Jørgen Lang | Oct 17, 2025 |
| Page 62, code, top | Text (2x): | Jørgen Lang | Oct 17, 2025 |
| Page 63, 3rd para, 2nd sentence | Text: I’ve provided a “Horses or Humans” notebook on GitHub that you can open directly in Colab. | Jørgen Lang | Oct 20, 2025 |
| Page 65, 2nd code snippet + text above | Text: If it’s greater than 0.5, we’re looking at a human | Jørgen Lang | Oct 20, 2025 |
| Page 67, before last para | Text: missing? | Jørgen Lang | Oct 20, 2025 |
| Page 71, 3rd para | Text: You’ll notice that we’re printing the output shape of the last layer, […] | Jørgen Lang | Oct 21, 2025 |
| Page 73, code example, line 1 | Text: [5x whitespace]def load_image(image_path, transform): | Jørgen Lang | Oct 21, 2025 |
| Page 74, 2nd-last para, 2nd sentence | Text: If you wanted to train a dataset to recognize […] | Jørgen Lang | Oct 21, 2025 |
| Page 74, 2nd-last para, last sentence | Text: […] there’s a simple dataset you can use for this. | Jørgen Lang | Oct 21, 2025 |
| Page 76, 1st code snippet, line 6 | Text: nn.Linear(1024, 3) # Final layer for binary classification | Jørgen Lang | Oct 21, 2025 |
| Page 77, 3rd para | Text: If you explore this a little deeper, you can see that the file named scissors4.png had an output of –2.5582, –1.7362, 3.8465] | Jørgen Lang | Oct 21, 2025 |
| Page 79, figure | Text: A neural network with dropouts | Jørgen Lang | Oct 21, 2025 |
| Page 80, last para, 1st sentence | Text: Before we explore further scenarios, in Chapter 4, you’ll get an introduction to | Jørgen Lang | Oct 21, 2025 |
| Page 84, 2nd code example | Text: # Create the FashionMNIST dataset | Jørgen Lang | Oct 22, 2025 |
| Page 86, 4th para, 1st sentence | Text: While FakeData only gives image types, you could relatively easily create your own CustomData (as we looked at earlier) […] | Jørgen Lang | Oct 22, 2025 |
| Page 86, last para | Text: Thankfully, when using datasets, you can generally do this with an easy and intuitive API. | Jørgen Lang | Oct 22, 2025 |
| Page 88, 1st para, 1st sentence | Text: One more thing to consider when using custom splits is that the name random doesn’t mean […] | Jørgen Lang | Oct 22, 2025 |
| Page 88, 5th para, 2nd sentence | Text: For example, batching, image augmentation, mapping to feature columns, and other such logic […] | Jørgen Lang | Oct 22, 2025 |
| Page 90, 1st para, 2nd sentence | Text: Whenever you’re dealing with training or inference and you want the data or model to be on the accelerator, you’ll see something like .to(“cuda”) […] | Jørgen Lang | Oct 22, 2025 |
| Page 92, 2nd-last para, 2nd sentence | Text: Based on the hardware you have available and the number of cores, the speed of your CPU, etc., | Jørgen Lang | Oct 23, 2025 |
| Page 93, last para, 1st sentence | Text: This chapter covered the data ecosystem in PyTorch and introduced you to the dataset and DataLoader classes. | Jørgen Lang | Oct 23, 2025 |
| Page 100, 2nd para, 1st sentence | Text: Then, you’ll be given the sequences representing the three sentences. | Jørgen Lang | Oct 24, 2025 |
| Page 120, 1st para, last sentence | Text: […] so add 24 to get to 24, 24. | Jørgen Lang | Oct 30, 2025 |
| Page 139, 2nd-last para | Text: It was a very short sentence, so it’s padded up to 85 characters with a lot of zeros! | Jørgen Lang | Nov 02, 2025 |
| Page 154, 1st para | Text: You can then set the loss function and classifier to this. (Note that the LR is 0.001, or 1e–3.): | Jørgen Lang | Nov 11, 2025 |
| Page 155, last para, last sentence | Text: […] while the loss for the test set diverged after 15 | Jørgen Lang | Nov 11, 2025 |
| Pages 155–156, figures 7-9 + 7-10 | Text: | Jørgen Lang | Nov 11, 2025 |
| Page 168, last para | Text: This model shows a total of 406.817 parameters of which only 6,817 are trainable, so training will be fast! | Jørgen Lang | Nov 12, 2025 |
| Page 175, last para | Text: You’ll want to create a single string with all the text and set that to be your data. Use \n for the line breaks. Then, this corpus can be easily loaded and tokenized. First, the tokenize function will split the text into individual words, and then the create_word_dictionary will create a dictionary with an index for each individual word in the text: | Jørgen Lang | Nov 16, 2025 |
| Page 187, 1st code snippet | Will mark the prepending protocol in the URL with [protocol] so the error submission does not throw a tantrum. ¯\_(ツ)_/¯ | Jørgen Lang | Nov 18, 2025 |
| Page 190, figure caption | Text: Adding a second LSTM layer | Jørgen Lang | Nov 18, 2025 |
| Page 209 ff., multiple | Text: […] # features and targets […] | Jørgen Lang | Nov 24, 2025 |
| Pages 243–244, last sentence (contd. on p. 244) | Text: As discussed in earlier chapters, with dropout, neighboring neurons are randomly dropped out (ignored) during training to avoid a familiarity bias. | Jørgen Lang | Nov 27, 2025 |
| Page 245, code example | Text: […] and adding another layer between the RNNs and the linears: | Jørgen Lang | Nov 27, 2025 |
| Page 260, 1st + 2nd code snippet | Text: python3 -m venv chapter12env | Jørgen Lang | Dec 01, 2025 |
| Page 260, 6th para, 1st sentence | Text: Then, you’ll be ready to install PyTorch. | Jørgen Lang | Dec 01, 2025 |
| Page 264, 1st para, 1st sentence | Text: Before running it, make sure you have a model-store (or similar) directory that you will store the archived model in. | Jørgen Lang | Dec 01, 2025 |
| Page 266, 1st code example, line 3 + line 6 | Text: | Jørgen Lang | Dec 02, 2025 |