Errata for Kubernetes Patterns



The errata list is a list of errors and their corrections that were found after the product was released. If the error was corrected in a later version or reprint, the date of the correction will be displayed in the column titled "Date Corrected".

The following errata were submitted by our customers and approved as valid errors by the author or editor.




Version Location Description Submitted By Date Submitted Date Corrected
Printed, Page 19, paragraph 1

Speaking of "capacity" in the paragraph, it might be worth pointing out that this is allocatable capacity on the node, not node_capacity. Also a hint that InitContainers contribute to resource requirements (and thus scheduling) would be nice.

Note from the Author or Editor:
Hi Michael, thanks a lot for your feedback. I'm not 100% sure what you mean by node_capacity in "this is allocatable capacity on the node, not node_capacity". Do you refer to a specific resource field? I agree with your comment that init containers' resource requirements should be mentioned here. We have a detailed explanation of resource requirements for init containers in the "Init Container" pattern (i.e., the top paragraph on p. 127), but we should certainly mention it here, too. ... roland

Michael Gasch  May 27, 2019 
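The erratum above can be illustrated with a minimal Pod sketch. The names and values are hypothetical; the scheduling rule shown in the comments (the effective request is the larger of the highest init container request and the sum of the app container requests, checked against the node's allocatable capacity) is standard Kubernetes behavior.

```
# Hypothetical Pod illustrating how init containers affect scheduling.
apiVersion: v1
kind: Pod
metadata:
  name: init-resource-demo        # hypothetical name
spec:
  initContainers:
  - name: setup                   # hypothetical init container
    image: busybox
    command: ["sh", "-c", "echo init"]
    resources:
      requests:
        memory: "256Mi"           # highest init container request
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"           # sum of app container requests = 128Mi
# Effective memory request for scheduling: max(128Mi, 256Mi) = 256Mi.
# This must fit into the node's *allocatable* capacity, which is smaller
# than the node's raw capacity (resources are reserved for system daemons).
```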
Printed, Page 19, "Best-Effort"

It is not true that best-effort pods are evicted before other classes, i.e. burstable. The memory pressure rank (and eviction) logic has changed since v1.9: https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L667 Pods with high resource consumption above requests will be evicted first, which does not necessarily imply they're best-effort. E.g. a burstable pod with low requests but high mem usage will be evicted first: https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L524

Note from the Author or Editor:
Thanks for the hint! I'm not 100% sure whether "best-effort" QoS containers (i.e., those with no resource requests) aren't also considered for eviction first, as according to https://github.com/kubernetes/kubernetes/blob/7b8c9acc09d51a8f6018eafc49490102ae7cb0c4/pkg/kubelet/eviction/helpers.go#L583 every current memory value is higher than the initial value (== 0 bytes). That said, I'm not intimately familiar with the K8s source, so I'm not sure what the decision criteria are when multiple pods exceed their requests. Is it the case that the pods which exceed their requests the *most* are evicted first? In that case, wouldn't a request value of 0 make a pod a high-probability candidate by this criterion? Of course, that depends on the absolute values, too (e.g., a pod with a request of 1 MB, a limit of 2 GB, and an actual usage of 1 GB puts higher memory pressure on the node than a best-effort pod with an actual usage of 100 MB). Do you think it is a valid simplification to still say that best-effort Pods are the ones *most likely* to be killed first (or killed first *with high probability*)? The purpose of this QoS categorization in the context of the book is to give the reader a rough guideline of what to choose without knowing the gory details. Our intention here was also to strongly recommend introducing limits in any case. I totally agree, though, that we should not use a normative statement like "always killed first". --- Softened the wording a bit; a second edition should rewrite that section and possibly add more details.

Michael Gasch  May 27, 2019 
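The scenario the submitter describes can be sketched with two hypothetical Pod specs (names and values are assumptions, not from the book): a Burstable pod with a tiny request but a large limit, and a Best-Effort pod with no resource settings at all.

```
# Burstable: requests are set and lower than limits.
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod             # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "1Mi"             # tiny request ...
      limits:
        memory: "2Gi"             # ... but the container may use far more
---
# Best-Effort: no requests or limits at all.
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
# If burstable-pod actually uses ~1Gi (far above its 1Mi request) while
# besteffort-pod uses only ~100Mi, the kubelet's usage-above-requests
# ranking can evict the Burstable pod first, despite its "higher" QoS class.
```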
Printed, Page 21, paragraph 4

Related to my previous comment on QoS and eviction, the Kubelet eviction logic has changed from QoS-only to a more complex scheme since v1.9: 1) usage > requests, 2) priority, 3) memory usage compared to requests. https://github.com/kubernetes/kubernetes/blob/da31c50da191ae1c055069b936d9e549741e3e8a/pkg/kubelet/eviction/helpers.go#L667

Note from the Author or Editor:
Confirmed that the eviction policy is more complicated than QoS alone. Having requests set to "0" means condition 1) is always met, and for 3) the delta is potentially also much larger (for the same image). We should definitely expand this section and, e.g., also mention that priority has a critical influence, i.e., to decide between two pods that both exceed their requests (which is always the case for a pod with requests == 0, i.e., "Best-Effort" QoS pods).

Michael Gasch  May 27, 2019 
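Priority, the second eviction factor the note mentions, is set via a PriorityClass. The following is a minimal sketch with hypothetical names and values; among pods that all exceed their requests, the kubelet ranks lower-priority pods for eviction first.

```
# Hypothetical PriorityClass; higher values survive node pressure longer.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority             # hypothetical name
value: 1000000
globalDefault: false
description: "Pods with this class are evicted after lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app             # hypothetical name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"            # non-zero request: usage can stay below it
```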
Printed, PDF, ePub, Mobi, Safari Books Online, Other Digital Version; Page 95, last paragraph

"When a node fails, it schedules new Pods on a different node unless Kubernetes can confirm that the Pods (and maybe the whole node) are shut down." I think, in line with the "at most once" guarantee of StatefulSets described in the chapter, this should read: "When a node fails, it ***does not*** schedule new Pods on a different node unless Kubernetes can confirm that the Pods (and maybe the whole node) are shut down."

Note from the Author or Editor:
schedules -> does not schedule. Fixed in source.

Michael Gasch  May 27, 2019 
Printed, Page 119, code snippet at the top of the page

It looks like the YAML formatting got messed up in the code snippet: "resource: limits.memory" is not correctly indented, IMHO.

Note from the Author or Editor:
Confirmed. The environment variable should look like:

```
- name: MEMORY_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: random-generator
      resource: limits.memory
```

I.e., "container" should be "containerName", and "resource" should be aligned with "containerName" in the YAML. Also, in the paragraph below, `container` needs to be replaced with `containerName`. Fixed in source.

Michael Gasch  May 27, 2019 
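For context, here is a self-contained sketch of a Pod using the corrected Downward API fields. Only the `resourceFieldRef` block and the container name `random-generator` come from the note above; the image and the limit value are assumptions.

```
apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0   # image name is an assumption
    resources:
      limits:
        memory: "64Mi"                        # assumed limit value
    env:
    # Exposes the container's own memory limit to the application.
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: random-generator     # "containerName", not "container"
          resource: limits.memory
```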
PDF, Page 154, second-to-last paragraph

It says: "The configuration in Example 19-1 that is mounted as a volume results in two files ..." I think it should say four files instead (EXTRA_OPTIONS and SEED are not being considered, by mistake).

Note from the Author or Editor:
Thanks for the heads-up. You are absolutely right; it should be four files which are mounted. I'm going to adapt this accordingly. Book updated.

Antonio Alonso  Nov 21, 2019 
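The "one file per key" behavior behind this erratum can be sketched as follows. EXTRA_OPTIONS and SEED are the keys named in the erratum; the other two key names and all values are placeholders, not the book's Example 19-1.

```
# Mounting a ConfigMap as a volume creates one file per data key,
# so four keys result in four files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: random-generator-config   # hypothetical name
data:
  PATTERN: "None"                 # placeholder key/value
  COPY_DIR: "/tmp"                # placeholder key/value
  EXTRA_OPTIONS: "high-secure,native"
  SEED: "432576345"
# Mounted at /config, this yields four files:
# /config/PATTERN, /config/COPY_DIR, /config/EXTRA_OPTIONS, /config/SEED
```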
Printed, PDF, ePub, Mobi, Safari Books Online, Other Digital Version; Page 208, last paragraph in "Custom Metrics"

Instead of ".spec.metrics.resource[*].type" it must be ".spec.metrics.resource[:].type" like in the paragraph above.

Note from the Author or Editor:
Confirmed, "*" needs to be replaced by ":" in this line. The JSONPath selector `[:]` means that every element in the `.spec.metrics.resource` array is selected. Fixed in source (2019-12-12).

Anonymous  Apr 18, 2019 
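The array the `[:]` selector addresses is the `metrics` list of a HorizontalPodAutoscaler. Below is a minimal sketch with hypothetical names and values; only the fact that `.spec.metrics` is a list, so `[:]` selects all of its entries, is the point.

```
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: random-generator          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: random-generator        # hypothetical target
  minReplicas: 1
  maxReplicas: 5
  metrics:                        # a list: [:] selects every entry
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```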
Printed, Page 210, last paragraph

"which influences where the managing Pods will be scheduled": should be either "where the *managed* Pods will be scheduled" or just "where the Pods will be scheduled". Fixed in source (2019-12-12).

Roland Huß  May 05, 2019