Errata for Cloud Native DevOps with Kubernetes

The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction will be displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.

Version Location Description Submitted By Date submitted Date corrected
Chapter 10
Setting environment variables from ConfigMaps (section)

The sentence in question is: We created this data with the ConfigMap manifest, so it should be now be available to read into the container’s environment.

It just doesn't roll off the tongue nicely ("should be now be"), although it might still be grammatically valid.

Otherwise a great read so far!

Thanks!

Mariyan

Mariyan Dimitrov  Feb 10, 2019  Jul 31, 2020
Printed
Page 29
Running the Demo App

In the 1.18 release of Kubernetes the kubectl run command changed from creating a Deployment by default to creating a Pod instead.

We have a few examples where we use kubectl run to get familiar with running a container in k8s.

Later we discuss why using the declarative kubectl apply -f... is preferred over the imperative create, edit or run, because your version-controlled YAML files always reflect the real state of the cluster.

In our kubectl run example we show the output as deployment.apps "demo" created, but on version 1.18 you will instead see pod/demo created.

The subsequent port-forward example would instead be: kubectl port-forward pod/demo 9999:8888
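
For readers on 1.18 or later, the two options would look roughly like this (a sketch, not from the book; YOUR_DOCKER_ID is the placeholder used in the book's examples):

```
# kubectl run now creates a bare Pod
kubectl run demo --image=YOUR_DOCKER_ID/myhello --port=8888 --labels app=demo
kubectl port-forward pod/demo 9999:8888

# To get a Deployment instead, create it explicitly
kubectl create deployment demo --image=YOUR_DOCKER_ID/myhello
kubectl port-forward deploy/demo 9999:8888
```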

Note from the Author or Editor:
Demo repo updated (https://github.com/cloudnativedevops/demo#kubectl-run-and-poddeployment-v118-change) and will work on updating the text in the book.

Justin Domingus  May 06, 2020  Jul 31, 2020
Printed
Page 35
chapter 3, figure 3-2

In Chapter 3, Figure 3-2, the kube-scheduler is missing and the kube-controller-manager is represented twice.

Note from the Author or Editor:
Thank you! We'll fix the duplicate and example output in the book text.

Anonymous  Sep 23, 2019  Jul 31, 2020
PDF
Page 54
3rd paragraph

> Working together with the Deployment resource is a kind of Kubernetes object called a controller.

A Kubernetes controller is not a kind of Kubernetes object, is it? Should this be "a kind of Kubernetes component"?

Note from the Author or Editor:
Thank you, this wording is a bit confusing and we'll work on clearing it up in the book. Component is probably a better choice.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Kazuki Suda  Sep 29, 2019  Jul 31, 2020
PDF
Page 58
2nd last paragraph

It says "When you deleted a Pod in 'Maintaining Desired State' on page 57, it was the node’s kubelet that spotted this and started a replacement."

Note: on page 57, the pod was deleted with 'kubectl delete pod'.

But isn't it the ReplicaSet instead of the kubelet that restarted the Pod? I know the kubelet restarts containers in its Pods if they terminate. But if a whole Pod is deleted from etcd with 'kubectl delete pod', then I believe the kubelet has nothing to do with it. The creation of a replacement Pod stems from the fact that the Pod is managed by a ReplicaSet. Also, if you delete a Pod that is not managed by a ReplicaSet (or another controller object) with 'kubectl delete pod', it will not be recreated, which is further evidence that it is the ReplicaSet and not the kubelet that recreates the Pod.
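
A quick way to see the difference (a sketch; labels and image are assumptions, not the book's exact example):

```
# Pod managed by a Deployment/ReplicaSet: a replacement appears
kubectl delete pod -l app=demo
kubectl get pods -l app=demo

# Standalone Pod with no controller: nothing recreates it
kubectl run solo --image=nginx --restart=Never
kubectl delete pod solo
kubectl get pods
```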

Note from the Author or Editor:
You are correct, we'll fix this wording in the book. Thanks for pointing it out!

Daniel Weibel  Apr 22, 2019  Jul 31, 2020
PDF
Page 62
First code example right after the first sentence in the page

If you zoom in a little, you can see that the Service is forwarding its port 9999 to the Pod’s port 8888:
...
ports:
- port: 9999
  protocol: TCP
  targetPort: 8888

The k8s/service.yaml file on GitHub has port 8888 instead of 9999, and the GitHub code is actually correct, because this is the port of the Service.

Otherwise, if the port were 9999, the command below would fail:
kubectl port-forward service/demo 9999:8888
error: Service demo does not have a service port 8888
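
If the snippet is meant to match the repo, the ports section would presumably read:

```
ports:
- port: 8888
  protocol: TCP
  targetPort: 8888
```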

Note from the Author or Editor:
Thanks, and sorry for the port confusion between 9999 and 8888. We'll get this cleared up in the book and make sure that the example repo matches.

Elton Goci  Mar 12, 2019  Jul 31, 2020
PDF
Page 62
2nd last paragraph

> As before, kubectl port-forward will connect the demo service to a port on your local machine, ...

The `kubectl port-forward` command does not connect to a Kubernetes Service. Actually, kubectl finds a Pod backing service/demo, then connects that Pod to a port on your local machine.

Therefore, this description of connecting to a Service is not quite accurate.
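
One way to see which Pods actually back the Service before forwarding (a sketch using the book's resource names):

```
kubectl get endpoints demo
kubectl port-forward service/demo 9999:8888
```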

Note from the Author or Editor:
Thank you, this wording in the book is confusing, and we'll get it cleared up.

Kazuki Suda  Jun 29, 2019  Jul 31, 2020
Printed
Page 65
helm init --service-account tiller

Tiller was removed in Helm 3, and therefore helm init is no longer necessary. See https://helm.sh/docs/faq/#removal-of-tiller


I don't know how to proceed.
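
With Helm 3 there is no Tiller and no init step, so an install would look roughly like this (a sketch; the chart path is an assumption based on the repo linked in the note below):

```
helm install demo ./hello-helm3/k8s/demo
helm list
```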

Note from the Author or Editor:
Thank you! We have added helm3 examples here: https://github.com/cloudnativedevops/demo/tree/master/hello-helm3 and will work on getting book text updated for Helm 3.

Anonymous  Jun 16, 2020  Jul 31, 2020
PDF
Page 66
3rd paragraph

> The helm install does this by creating a Kubernetes object called a Helm release.

`helm install` does not create a Kubernetes object called a Helm release. Information about Helm releases is stored in ConfigMaps (Helm v2).

See https://helm.sh/docs/architecture/#implementation.
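
A sketch of where release data actually lives (Helm 2 keeps it in ConfigMaps in Tiller's namespace, Helm 3 in Secrets in the release's namespace):

```
# Helm 2: release records stored as ConfigMaps alongside Tiller
kubectl get configmaps --namespace kube-system -l OWNER=TILLER

# Helm 3: release records stored as Secrets in the release's namespace
kubectl get secrets -l owner=helm
```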

Note from the Author or Editor:
Thanks! This may be from Helm2 VS Helm3. We'll fix the wording in the book and make sure it is accurate for Helm3 going forward.

Kazuki Suda  Jun 29, 2019  Jul 31, 2020
Printed
Page 73
last code snippet

The snippet uses a readinessProbe, which is introduced later in the book; it should be a livenessProbe in that context.
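
For reference, a minimal livenessProbe along those lines (a sketch; the path, port, and timings are assumptions, not the book's exact values):

```
livenessProbe:
  httpGet:
    path: /healthz
    port: 8888
  initialDelaySeconds: 3
  periodSeconds: 3
```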

Note from the Author or Editor:
Thank you! We'll get this fixed in the book text.

Giulio Vian  Jul 30, 2019  Jul 31, 2020
PDF
Page 90
3rd paragraph

A hard node affinity is described as:

"a Pod with this affinity will never be scheduled on a node that matches the selector expression"

Which should be:

"a Pod with this affinity will never be scheduled on a node that DOES NOT match the selector expression"

Maybe it could also be rephrased to make it clearer:

A hard affinity selects a set of nodes (with the nodeSelectorTerms field) and makes sure that the Pod is ONLY scheduled to a node in this selection, and NEVER to a node that is not in this selection.

In other words, a Pod is only scheduled to a node that DOES match the selector expressions, and never to a node that DOES NOT match them (this is the opposite of what is stated now in the book).
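
A hard node affinity sketch for reference (the label key and value are placeholders, not the book's example):

```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
```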

Note from the Author or Editor:
Thank you, and thanks for providing a more clear phrasing. We'll get this cleared up in the book.

Daniel Weibel  Apr 27, 2019  Jul 31, 2020
PDF
Page 123
the 3rd last paragraph

> --rm
> This tells Kubernetes to delete the container image after it’s finished running, so
> that it doesn’t clutter up your nodes’ local storage.

`--rm` is the flag to **delete the resources** created by the `kubectl run` command for attached containers. Whether the container image is deleted depends on the situation, so I feel that this description is not accurate.
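
A quick illustration of the actual behavior (a sketch; the Pod name and image are arbitrary): --rm deletes the Pod when the attached command exits, while the image may remain cached on the node.

```
kubectl run tmp --rm -it --restart=Never --image=busybox -- sh
# exit the shell: the Pod "tmp" is deleted; the busybox image is not necessarily removed
```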

Note from the Author or Editor:
Thanks, you are correct, and we'll clear up this wording in the book text.

Kazuki Suda  Aug 09, 2019  Jul 31, 2020
PDF
Page 155
2nd half of page

Required quotes missing from code examples:

Current state:

kubectl get pods -l environment in (staging, production)
kubectl get pods -l environment notin (production)

Should be:

kubectl get pods -l "environment in (staging, production)"
kubectl get pods -l "environment notin (production)"

In the current state, code examples cause a shell syntax error.

Note from the Author or Editor:
Thank you! We'll make sure to add the quotes in the book text.

Daniel Weibel  Apr 27, 2019  Jul 31, 2020
PDF
Page 156
2nd code block

> metadata:
>   labels:
>     app: demo
>     tier: frontend
>     environment: production
>     environment: test
>     version: v1.12.0
>     role: primary

The `environment` label key is duplicated. I guess that it is an editorial mistake.

Note from the Author or Editor:
Thank you! I'll make sure the duplicate gets removed in the book text and that the example repo matches.

Kazuki Suda  Oct 04, 2019  Jul 31, 2020
PDF
Page 166
last paragraph

The example starts a Job with:

completions: 1
parallelism: 10

And states that "This will start 10 Pods".

However, this will start only a single Pod (since 'completions' is 1) and the 'parallelism' setting of 10 has no effect (it's the same as if it was 1).

In general, it doesn't make sense to set 'parallelism' higher than 'completions' as the effect is the same as 'completions' == 'parallelism'.

The documentation of 'job.spec.completions' also states that if 'completions' is set to 1, 'parallelism' is automatically also limited to 1.
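
For comparison, a Job that really does run 10 Pods in parallel would set both fields, roughly like this (a sketch, not necessarily the book's intended fix; image and command are placeholders):

```
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  completions: 10
  parallelism: 10
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
```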

Note from the Author or Editor:
Thank you! We'll make sure this typo gets fixed in the book and in the example repo.

Daniel Weibel  Apr 27, 2019  Jul 31, 2020
PDF
Page 169
last code block

The below says "matching the tier: frontend selector":

> Here’s an example PodPreset that adds a cache volume to all Pods matching the **tier: frontend** selector:

However, the example PodPreset actually does not have a "tier: frontend" match label:

```
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: add-cache
spec:
  selector:
    matchLabels:
      role: frontend
...
```

Note from the Author or Editor:
Thank you, I think this should be "role: frontend", but will double check and make sure this gets fixed in the book text and that the example repo matches.

Kazuki Suda  Oct 04, 2019  Jul 31, 2020
PDF
Page 187
Last code block

```
spec:
  containers:
  - name: demo
    image: cloudnatived/demo:hello-secret-env
    ports:
    - containerPort: 8888
    env:
    - name: GREETING
      valueFrom:
        (...)
```

In the above manifest, the `GREETING` environment variable is used, but `MAGIC_WORD` is actually the correct name.

I checked with the following steps:
```
$ docker run --rm -d -p 8000:8888 -e MAGIC_WORD=hello cloudnatived/demo:hello-secret-env
c636a16e44139884692d218e5d1dea49b1aa562a2d527cd29964f7eab65eabdf
$ curl localhost:8000
The magic word is "hello"
```

In addition to the above, the following also needs to be fixed:

> We set the environment variable GREETING exactly as we did when using a ConfigMap,

should be:

> We set the environment variable MAGIC_WORD exactly as we did when using a ConfigMap,

Note from the Author or Editor:
Thank you! We will work on getting this fixed in the book text and make sure the example repo is also up to date.

Kazuki Suda  Oct 05, 2019  Jul 31, 2020
PDF
Page 190
first paragraph

The text `beHl6enk=` is the base64-encoded version of our secret word xyzzy.

should be:

The text `eHl6enk=` is the base64-encoded version of our secret word xyzzy.

Note from the Author or Editor:
Thank you, we'll fix this typo in the book text.

Kazuki Suda  Aug 24, 2019  Jul 31, 2020
Printed
Page 192
1st paragraph

The examples to decode and encode base64 strings could be improved.

Encoding single line strings with base64 should not include the newline character. This is especially hard to debug when done manually with passwords.

The example decodes the string "eHl6enk=" to "xyzzy" whereas the example to encode the same string ("xyzzy") leads to a different base64 representation, which includes the newline character ("eHl6enkK").

It would be better to explain how not to include the newline character, by adding "-n" to the echo command. The encoding example would then look like:

$ echo -n xyzzy | base64
eHl6enk=
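
The round trip then matches in both directions (a sketch; the decode flag spelling varies by platform, e.g. -D on older macOS):

```
$ echo -n xyzzy | base64
eHl6enk=
$ echo eHl6enk= | base64 --decode
xyzzy
```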

Note from the Author or Editor:
Thank you for providing the better example. We'll get this base64 issue cleared up in the book text.

Alexander Fahlke  May 08, 2019  Jul 31, 2020
PDF
Page 195
first paragraph

For more information about Sops, including installation and usage instructions, consult the GitHub repo.

Should be:

For more information about **helm-secrets**, including installation and usage instructions, consult the GitHub repo.

---

From the context, helm-secrets seems correct here.

Note from the Author or Editor:
Thank you! We'll get this fixed in the book text.

Kazuki Suda  Aug 25, 2019  Jul 31, 2020
PDF
Page 196
2nd last paragraph

**In the next chapter**, we’ll show you how to use Sops this way with Helm charts.

---

This is Chapter 10. However, "Managing Helm Chart Secrets with Sops" is included in Chapter 12, so the "next chapter" description is not correct.

Note from the Author or Editor:
Thank you! We'll fix the chapter number in the book text.

Kazuki Suda  Aug 25, 2019  Jul 31, 2020
PDF
Page 228
1st paragraph

> The end result will be a single Kubernetes Secret manifest named **app_secrets.yaml**,

Since the Secret manifest template is as below, the Kubernetes Secret manifest name is `{{ .Values.container.name }}-secrets`, not `app_secrets.yaml`. `app_secrets.yaml` is the Secret's key name.

```
cat k8s/demo/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.container.name }}-secrets
type: Opaque
data:
  {{ $environment := .Values.environment }}
  app_secrets.yaml: {{ .Files.Get (nospace (cat $environment "-secrets.yaml")) | b64enc }}
```

Note from the Author or Editor:
Thank you! We will work on getting this fixed in the book text.

Kazuki Suda  Oct 05, 2019  Jul 31, 2020
PDF
Page 244
3rd paragraph

> Running **kubectl get pods -a** will show you the failed Pod, allowing you to inspect the logs and see what happened.

The -a (--show-all) flag was deprecated in Kubernetes 1.10, and kubectl can show you failed Pods without the flag. In Kubernetes 1.14, the flag was removed completely.
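
On current versions the failed Pods appear without any flag; to narrow the output, a field selector works (a sketch):

```
kubectl get pods
kubectl get pods --field-selector=status.phase=Failed
```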

Note from the Author or Editor:
Thank you, we'll remove the deprecated flag in the book text.

Kazuki Suda  Aug 31, 2019  Jul 31, 2020
666
Ch 2, under "Running the Demo App" (no page #s in Safari Online - hence the 666)

You need to add a description of how to delete the deployment & running container (e.g., "kubectl delete deployment demo") once the reader finishes the demo - probably with an explanation that the deployment has to be deleted to avoid restarting the container.

Note from the Author or Editor:
Thanks, you are correct and we need to include clear instructions for how to clean up the examples to avoid conflicts in future examples. I'll make sure this gets added in the book text.

Doug Birdwell  May 08, 2019  Jul 31, 2020
666
Ch 2, under "Running the Demo App" (no page #s in Safari Online - hence the 666)

I suspect the command

kubectl run demo --image=YOUR_DOCKER_ID/myhello --port=9999 --labels app=demo

has an error. Shouldn't it say "--port=8888"? This is the port the go app running in the container listens to and therefore needs to be exposed. The subsequent port forwarding command maps localhost's port 9999 to the container's port 8888.

Strangely, both versions of the run command worked when I tested them on my Mac.

Note from the Author or Editor:
Thanks, and sorry for the port confusion between 9999 and 8888. We'll get this cleared up in the book and make sure that the example repo matches.

Doug Birdwell  May 08, 2019  Jul 31, 2020
666
Ch 5, under "Service Resources"

The hello-k8s/k8s/service.yaml file in the book's repo lists the localhost port AND the targetPort as 8888, while the book lists the localhost port as 9999. One of these is wrong.

Note from the Author or Editor:
Thank you, we'll make sure this is fixed in the example repo and in the book text.

Doug Birdwell  May 08, 2019  Jul 31, 2020