Errata for Kubernetes Patterns


The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction will be displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.


Version Location Description Submitted By Date submitted Date corrected

In chapter 12 the random_generator service has:
port: 80
targetPort: 8080

But in the example of environment variables I read:

Example 12-2. Service-related environment variables set automatically in Pod

RANDOM_GENERATOR_SERVICE_HOST=10.109.72.32
RANDOM_GENERATOR_SERVICE_PORT=8080

Is the port 8080 correct? Should it be 80?

Thanks,

Luigi

Note from the Author or Editor:
You are correct, the RANDOM_GENERATOR_SERVICE_PORT will be set to "80" in this example.
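
For context, a minimal sketch of a Service like the one in chapter 12 (port values taken from the errata, the remaining fields assumed): the generated `*_SERVICE_PORT` environment variable reflects the Service's `port` field, not its `targetPort`.

```
apiVersion: v1
kind: Service
metadata:
  name: random-generator
spec:
  selector:
    app: random-generator
  ports:
  - port: 80          # the Service port -> RANDOM_GENERATOR_SERVICE_PORT=80
    targetPort: 8080  # the container port the traffic is forwarded to
```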

Luigi   Dec 20, 2019  Apr 02, 2021
PDF
Page 6
very end of the page, final bullet point

I have ISBN 9781492050285, but it looks like it's actually 978-1-492-07665-0. Whichever.

On page 6, at the bottom, in the final bullet point, it says, "A Pod ensures colocation of containers. Thanks to the collocation, containers..." I believe the word "colocation" is correct, and "collocation" (a different word) is not what was meant.

Note from the Author or Editor:
Totally agree, thanks for the heads-up!

Todd Walton  Aug 11, 2020  Apr 02, 2021
Printed
Page 19
1st

Speaking of "capacity" in the paragraph, it might be worth pointing out that this is allocatable capacity on the node, not node_capacity. Also a hint that InitContainers contribute to resource requirements (and thus scheduling) would be nice.

Note from the Author or Editor:
Hi Michael,

thanks a lot for your feedback. I'm not 100% sure what you mean by node_capacity in "this is allocatable capacity on the node, not node_capacity." Are you referring to a specific resource field?

I agree with your comment that init containers' resource requirements should be mentioned here. We have a detailed explanation of resource requirements for init containers in the "Init Container" pattern (i.e., the top paragraph on p. 127), but we should mention that here for sure, too.

... roland
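
As an editorial illustration of the distinction (the node name is a placeholder): a Node object reports both values, and the scheduler works against the allocatable ones.

```
# Capacity is the node's total resources; Allocatable is what remains for
# Pods after system reservations -- the scheduler uses Allocatable.
kubectl get node <node-name> \
  -o jsonpath='{.status.capacity.memory}{" vs "}{.status.allocatable.memory}{"\n"}'
```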

Michael Gasch  May 27, 2019  Apr 02, 2021
Printed
Page 19
Best-Effort

It is not true that best-effort pods are evicted before other classes, i.e. burstable. The memory pressure rank (and eviction) logic has changed since v1.9: https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L667

Pods with high resource consumption above requests will be evicted first, which not necessarily implies they're best-effort. E.g. a burstable pod with low requests but high mem usage will be evicted first: https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L524

Note from the Author or Editor:
Thanks for the hint! I'm not 100% sure, though, whether 'best-effort' QoS containers (i.e., those with no resource requests) aren't still considered for eviction first, as according to https://github.com/kubernetes/kubernetes/blob/7b8c9acc09d51a8f6018eafc49490102ae7cb0c4/pkg/kubelet/eviction/helpers.go#L583 every current memory value is higher than the initValue (== 0 bytes).

That said, I'm not super familiar with the K8s source, so I'm not sure what the decision criteria are when multiple pods exceed their resources.

Is it the case that the pods which exceed their requests the *most* are evicted first? In that case, wouldn't a request value of 0 make a pod a high-probability candidate for this criterion? Of course, that depends on the absolute values, too (e.g., a pod with a request of 1 MB, a limit of 2 GB, and an actual usage of 1 GB puts higher memory pressure on the node than a best-effort Pod with an actual usage of 100 MB).

Do you think it's a valid simplification to still say that best-effort Pods are the ones which are *most likely* killed first (or killed *with a high probability*)?

The purpose of this QoS categorization in the context of the book is to give the user a rough guideline of what to choose without knowing the gory details. Our intention here was also to strongly recommend introducing limits in any case.

I totally agree though, that we should not use a normative statement like "always killed first".

---

Soften the wording a bit for now; for a second edition we should rewrite that section and possibly add more details.
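
For reference, a minimal sketch of the Burstable case discussed above (name and values hypothetical, figures borrowed from the note): low requests combined with a high limit and high actual usage can rank a Burstable Pod for eviction before a BestEffort one.

```
apiVersion: v1
kind: Pod
metadata:
  name: burstable-example    # hypothetical name
spec:
  containers:
  - name: app
    image: k8spatterns/random-generator:1.0
    resources:
      requests:
        memory: "1Mi"        # low request ...
      limits:
        memory: "2Gi"        # ... high limit -> QoS class "Burstable"
```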

Michael Gasch  May 27, 2019  Apr 02, 2021
Printed
Page 21
4th

Related to my previous comment on QoS and eviction, the Kubelet eviction logic has changed from QoS-only to a more complex ranking since v1.9:

1) usage > requests
2) priority
3) mem usage compared to requests

https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L667

Note from the Author or Editor:
Confirmed that the eviction policy is more complicated than QoS alone. Having requests set to "0" means condition 1) is always met, and for 3) the delta is potentially also much larger (for the same image).

We should definitely expand this section and, e.g., also mention that priority has a critical influence, i.e., in deciding between two pods that both exceed their requests (which is always the case for a pod with requests == 0, i.e., "Best Effort" QoS pods).
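
A minimal sketch of the priority lever mentioned here (names and value hypothetical): among Pods exceeding their requests, lower-priority Pods are evicted first.

```
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority   # hypothetical name
value: 1000000          # higher value -> evicted later among Pods exceeding requests
globalDefault: false
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app   # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: k8spatterns/random-generator:1.0
```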

Michael Gasch  May 27, 2019  Apr 02, 2021
Printed
Page 32
First paragraph

The word "Secrete" should be "Secret".

Note from the Author or Editor:
Confirmed typo, needs to be "Secrets" (plural)

ContinualIntegration.com  Feb 07, 2021  Apr 02, 2021
Printed
Page 32
Fig 3-5

These diagrams would be clearer if you added a vertical line to show when the release process starts and stops. This would make the whole section clearer.

Also, the diagram for the canary release is wrong. At the end of the release, the new version's line should go up and the old version's should go down. Maybe you need to highlight the manual step that is required in the diagram. But as the diagram stands, we never complete the release.

Note from the Author or Editor:
Thanks for your feedback, highly appreciated!

I agree that an additional indicator for the actual beginning of the release would be useful, but for this type of schematic diagram the intention was to be as simple and helpful as possible and not to confuse with too many details. Our hope is that people can infer that point in time, but you are right that an additional indication might be helpful.

With regard to the canary release, I tend to disagree, though. It really depends on your definition of a "canary release". For us, the end result of a canary release is a traffic split between the old and the new version, with the new version usually getting only a small fraction of the traffic (e.g., 5/95). This is because you will keep that state for quite some time to measure the impact of the new release on a small user group. The final rollout to the new release (100/0) or the rollback to the old release (0/100) is an extra step that is not part of the initial canary release (i.e., you have to do an additional configuration for this to happen). So from this perspective, we believe the diagram reflects this first, initial 'canary release' step.

As mentioned, it really comes down to how you define a canary release: whether you include both steps (but then you also stretch the time scale over which the experimentation happens, which is much longer than the initial rollout and cannot be reflected easily in this diagram) or whether you describe this as two separate steps. I will add a more detailed explanation to the text to make clear how we define the 'canary release'. Thanks for the heads-up!

Edited: I just saw that our explanation of the "canary release" in this chapter indeed does not fit the diagram. So we should either adapt the description or, as suggested, the diagram itself.

Deepak Dhayatker  Feb 11, 2021  Apr 02, 2021
Printed
Page 36
Second Paragraph

In the second line of the section "Process Health Checks", you write 'If the container process is not running, the probing is restarted'. It should say 'If the container process is not running, the container is restarted'.

Note from the Author or Editor:
Exactly, the container is restarted (not the probe).
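
A minimal sketch of the corrected behavior (Pod name hypothetical): with the default restart policy, the kubelet restarts the container, not the probe, when its process dies.

```
apiVersion: v1
kind: Pod
metadata:
  name: process-check-demo   # hypothetical name
spec:
  restartPolicy: Always      # default: the kubelet restarts the container if its process exits
  containers:
  - name: app
    image: k8spatterns/random-generator:1.0
```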

Deepak Dhayatker  Feb 11, 2021  Apr 02, 2021
Printed
Page 37
Last paragraph

In the second-last line it says, "It is also useful for shielding the services from traffic....". I think it should say, "It is also useful for shielding the container from traffic....". Shielding the Kubernetes Service from traffic would be something different.

Note from the Author or Editor:
Yes, you are right that the K8s Service is not shielded (but the Pod is shielded from getting traffic via the Service if the readiness probe is failing). Technically it should be the Pod (as the Pod's IP is removed from the Service's Endpoints), but 'container' fits better in the given context, as the whole section talks about containers (removing a Pod's IP from the endpoints obviously also removes the container from traffic). So we changed it to 'container' as suggested.
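
For illustration, a minimal readiness-probe sketch (path and port assumed): while the probe fails, the Pod's IP is removed from the Service's endpoints, so the container receives no traffic through the Service.

```
containers:
- name: random-generator
  image: k8spatterns/random-generator:1.0
  readinessProbe:
    httpGet:
      path: /readiness   # assumed endpoint
      port: 8080
    # While this probe fails, the Pod's IP is removed from the Service's
    # endpoints, shielding the container from traffic.
```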

Deepak Dhayatker  Feb 11, 2021  Apr 02, 2021
Printed
Page 48
Second last Paragraph, Last Line

The line says "Also keep in mind that if containers are running on a node that is not managed by kubernetes, reflected in the node capacity calculations by kubernetes."

I believe you are missing 'the resources used by these containers are not ' after the comma.

So it becomes "Also keep in mind that if containers are running on a node that is not managed by kubernetes, the resources used by these containers are not reflected in the node capacity calculations by kubernetes."

Note from the Author or Editor:
Correct, the sentence doesn't make sense as-is and missing the part that you mention.

Deepak Dhayatker  Feb 12, 2021  Apr 02, 2021
Printed
Page 55
Second-last paragraph, first line

The line says " You can influence the placement based on the application's high availability and performance needs, but try not to limit the scheduler much and back your self into a corner where no more pods can be scheduled."

I believe you are missing a 'too'

"You can influence the placement based on the application's high availability and performance needs, but try not to limit the scheduler too much and back your self into a corner where no more pods can be scheduled."

Note from the Author or Editor:
This report is correct, a "too" is missing here.
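
One way to influence placement without backing yourself into a corner is a soft (preferred) scheduling rule rather than a hard requirement; a minimal sketch, with the key and values assumed:

```
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:  # a preference, not a hard constraint
    - weight: 50
      preference:
        matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["zone-a"]    # assumed zone name
```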

Deepak Dhayatker  Feb 12, 2021  Apr 02, 2021
Printed
Page 76
top

A list has "Push". I do not see why it is in a list of ways to *reach* a pod. The thing described is a way to communicate from a pod to something else.

Note from the Author or Editor:
I agree that the list item "Push" does not fit well into the list; it should probably be added alongside the list instead, to describe that a DaemonSet does not necessarily need to be reachable in order to be useful (a kind of 'headless' DaemonSet).

Continualintegration.com  Feb 07, 2021  Apr 02, 2021
Printed
Page 83
Middle of the page

Shouldn't the word "we's" be "we"?

Note from the Author or Editor:
Good catch, thank you.

Continualintegration.com  Feb 07, 2021  Apr 02, 2021
Printed
Page 91
Third point

The third numbered point says "Two pods members in the StatefulSet named ng-0 and ng-1"

It should say "Two pods members in the StatefulSet named rg-0 and rg-1" According to yml shown in Example 11-1. Looking at metadata.name

Note from the Author or Editor:
Correct, it needs to be 'rg-0' and 'rg-1' in the callout for this example.
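
For context, an abbreviated StatefulSet sketch (fields beyond the name assumed): with `metadata.name: rg` and two replicas, the controller creates Pods named rg-0 and rg-1.

```
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rg                   # Pods are named <name>-<ordinal>: rg-0, rg-1
spec:
  replicas: 2
  serviceName: random-generator
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - name: random-generator
        image: k8spatterns/random-generator:1.0
```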

Deepak Dhayatker  Feb 15, 2021  Apr 02, 2021
Printed, PDF, ePub, Mobi, Other Digital Version
Page 95
last paragraph

"When a node fails, it schedules new Pods on a different node unless Kubernetes can confirm that the Pods (and maybe the whole node) are shut down."

I think, corresponding with the "at most once" guarantee of stateful sets described in the chapter, this should read:

"When a node fails, it ***does not*** schedule new Pods on a different node unless Kubernetes can confirm that the Pods (and maybe the whole node) are shut down."

Note from the Author or Editor:
schedules -> does not schedule

Fixed in source.

Michael Gasch  May 27, 2019  Apr 02, 2021
Printed
Page 101
second paragraph - "Internal Service Discovery" section

In the section, "Internal Service Discovery" it says, "As soon as we create a Deployment with a few replicas, the scheduler places the Pods on the suitable nodes, and each Pod gets a *cluster IP* address assigned before starting up.". This should read, "...each Pod gets a *Pod IP* address...", not a cluster IP.

Note from the Author or Editor:
Thanks a lot for the feedback! (and sorry for the delayed response)

With 'cluster IP address' we mean a 'cluster-internal IP address' that is within the cluster's private network and not reachable from the outside. It's not the IP of the cluster itself (which IMO doesn't make much sense, as a cluster itself does not have an IP address, unless one would refer to the cluster's entry point (ingress controller/load balancer) as the cluster's IP).

But you are right, 'cluster IP' might be misleading, so we are going to reword this to 'cluster-internal IP address'.
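
To see the difference at a glance (Service name assumed): each Pod has its own cluster-internal IP, while the Service gets a separate virtual clusterIP.

```
kubectl get pods -o wide           # the IP column shows each Pod's cluster-internal IP
kubectl get svc random-generator   # CLUSTER-IP shows the Service's own virtual IP
```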

Ryan Chase  Aug 05, 2020  Apr 02, 2021
Printed
Page 119
code snippet top of the page

looks like the YAML formatting got messed up in the code snippet:

"resource: limits.memory" is not correctly intended IMHO.

Note from the Author or Editor:
Confirmed; the environment variable declaration should look like:

```
- name: MEMORY_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: random-generator
      resource: limits.memory
```

I.e. "container" should be "containerName" and "resource" aligned with "containerName" in the yaml.

Also, in the paragraph below, `container` needs to be replaced with `containerName`.

-----

Fixed in source

Michael Gasch  May 27, 2019  Apr 02, 2021
PDF
Page 138
More Information section

I'm referring to the More Information section of the Adapter pattern. The second link titled The Distributed System Toolkit: Container Patterns for Modular Distributed System Design points to a private YouTube video https://www.youtube.com/watch?v=Ph3t8jIt894

Note from the Author or Editor:
Thanks a lot for the heads-up! The visibility of the video must have changed after the book was released.

I removed the link from the chapter, but we are looking for a better way of collecting and maintaining those links in the future as they are quite volatile and can get stale easily.

The idea is to maintain those links at https://github.com/k8spatterns/examples, but we are not there yet (and, to be honest, it does not have a big priority yet).

Daniel Pacak  Mar 08, 2020  Apr 02, 2021
Printed
Page 149
second paragraph from top

The phrase "The patters Configuration Resource" should say "The patterns Configuration Resource".

Note from the Author or Editor:
Definitely a typo, thanks!

Continualintegration.com  Feb 07, 2021  Apr 02, 2021
Printed
Page 153
Second paragraph and the Example 19-2

In the second paragraph, on the last line, you say "For the preceding example, the equivalent kubectl command looks like that in Example 19-2."

And Example 19-2 has this:
kubectl create cm spring-boot-config \
--from-literal=JAVA_OPTIONS=-Djava.security.egd=file:/dev/urandom \
--from-file=application.properties

This does not match the YAML in Example 19-1.

It should say:
kubectl create cm random-generator-config \
--from-literal=PATTERN="Configuration Resource" \
--from-literal=EXTRA_OPTIONS="high-secure,native" \
--from-literal=SEED="432576345" \
--from-file=application.properties

Note from the Author or Editor:
This is indeed a mismatch between the two examples (19-1 and 19-2), so the suggestion is correct.

Deepak Dhayatker  Feb 21, 2021  Apr 02, 2021
PDF
Page 154
Second-last paragraph

It says:
"The configuration in Example 19-1 that is mounted as a volume results in two files ..."

I think it should say four files instead (EXTRA_OPTIONS and SEED are not being considered, presumably by mistake).

Note from the Author or Editor:
Thanks for the heads-up. You are absolutely right; it should be four files which are mounted. I'm going to adapt this accordingly.

Book updated.
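
For illustration, a sketch of the ConfigMap from Example 19-1 as reported in this errata (property-file content elided): each of the four data keys becomes one file in the mounted volume.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: random-generator-config
data:                                # four keys -> four files in the volume
  PATTERN: Configuration Resource
  EXTRA_OPTIONS: high-secure,native
  SEED: "432576345"
  application.properties: |
    # properties content elided
```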

Antonio Alonso  Nov 21, 2019  Apr 02, 2021
Printed
Page 156
middle of page

The word "limt" should be "limit".

Note from the Author or Editor:
Confirmed, thanks for taking care!

Continualintegration.com  Feb 07, 2021  Apr 02, 2021
Printed
Page 161
Figure 20-2

In the diagram, for the box labelled "Application container", the folder value is wrong. It currently says '/var/config' but it should say '/config'.

This is based on the text in the previous paragraph and the YAML code in Example 20-6 under containers.volumeMounts.mountPath.

Note from the Author or Editor:
The image does not match the example; that is a correct observation. I'm going to fix the example (to point to /var/config), as this also makes it clear that the mount paths do not have to be the same.
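
A minimal sketch of the fix the author describes (volume and ConfigMap names assumed): the container's mountPath is set to /var/config, which need not match any path used elsewhere.

```
containers:
- name: random-generator
  image: k8spatterns/random-generator:1.0
  volumeMounts:
  - name: config-volume
    mountPath: /var/config    # corrected path; independent of the source path
volumes:
- name: config-volume
  configMap:
    name: random-generator-config
```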

Deepak Dhayatker  Feb 21, 2021  Apr 02, 2021
Printed, PDF, ePub, Mobi, Other Digital Version
Page 208
last paragraph in "custom metrics"

Instead of ".spec.metrics.resource[*].type" it must be ".spec.metrics.resource[:].type" like in the paragraph above.

Note from the Author or Editor:
Confirmed, "*" needs to be replaced by ":" in this line. The JSONPath selector `[:]` means that every element in the `.spec.metrics.resource` array is selected.

fixed in source (2019-12-12)

Anonymous  Apr 18, 2019  Apr 02, 2021
Printed
Page 210
last paragraph

"which influences where the managing Pods will be scheduled": Should be either "where the *managed* Pods will be scheduled" or just "where the Pods will be scheduled"

----

Fixed in source (2019-12-12)

Roland Huß  May 05, 2019  Apr 02, 2021
Printed
Page 226
Third paragraph from the bottom

The word "thisi" should be "this".

Note from the Author or Editor:
Definitely a typo, thanks!

Continualintegration.com  Feb 07, 2021  Apr 02, 2021