Errata for Kubernetes: Up and Running

This errata list collects errors and their corrections reported after the product was released.

The following errata were submitted by our customers and have not yet been confirmed or disproved by the author or editor. They represent solely the opinion of the customer.

Version | Location | Description | Submitted by | Date submitted
PDF Page pp66-67
Example commands in "Applying Labels" section

The `--replicas` argument is no longer supported, so these commands need to be replaced with `kubectl create deployment` alternatives.

This was reported in the 2nd edition errata in January 2022 (change the ISBN in the link to 0636920223788).
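
A sketch of the replacement shape (the full set of updated commands appears in a later erratum on this page):

$ kubectl create deployment alpaca-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:blue \
  --replicas=2
$ kubectl label deployments alpaca-prod --overwrite ver=1 app=alpaca env=prod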

Anonymous  Sep 18, 2022 
Chapter 16. Integrating Storage Solutions and Kubernetes
Example 16-12. mongo-configmap.yaml

The shell script in mongo-configmap.yaml relies on "ping" to test for readiness before the script continues, but the mongo:3.4.24 image used to run this shell script does not contain ping. Running `kubectl logs mongo-0 --container init-mongo` shows this happening in the logs:

waiting for DNS (mongo-0.mongo)...
/config/init.sh: line 5: ping: command not found
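
A possible workaround (a sketch, assuming the image's Debian base provides getent, which the mongo images generally do) is to wait for DNS without ping:

# Loop until the hostname resolves, instead of pinging it
until getent hosts mongo-0.mongo; do
  echo "waiting for DNS (mongo-0.mongo)..."
  sleep 1
done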

Edward Porter  Oct 06, 2022 
Printed Page p66 & p67
p66 last paragraph and p67 first paragraph

The `kubectl run` subcommand is deprecated, and all of the command blocks need to be updated to reflect this.

This

$ kubectl run alpaca-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:blue \
  --replicas=2 \
  --labels="ver=1,app=alpaca,env=prod"

needs to be updated to

$ kubectl create deployment alpaca-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:blue \
  --replicas=2
$ kubectl label deployments alpaca-prod --overwrite ver=1 app=alpaca env=prod

This

$ kubectl run alpaca-test \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=1 \
  --labels="ver=2,app=alpaca,env=test"

needs to be updated to

$ kubectl create deployment alpaca-test \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=1
$ kubectl label deployments alpaca-test --overwrite ver=2 app=alpaca env=test

This

$ kubectl run bandicoot-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=2 \
  --labels="ver=2,app=bandicoot,env=prod"
$ kubectl run bandicoot-staging \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=1 \
  --labels="ver=2,app=bandicoot,env=staging"

needs to be updated to

$ kubectl create deployment bandicoot-prod \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=2
$ kubectl label deployments bandicoot-prod --overwrite ver=2 app=bandicoot env=prod
$ kubectl create deployment bandicoot-staging \
  --image=gcr.io/kuar-demo/kuard-amd64:green \
  --replicas=1
$ kubectl label deployments bandicoot-staging --overwrite ver=2 app=bandicoot env=staging


This paragraph also needs to be removed:

Each deployment (via a ReplicaSet) creates a set of Pods using the labels specified in the template embedded in the deployment. This is configured by the kubectl run command.

All of the subsequent `kubectl get pods --selector` blocks need to be updated to `kubectl get deployments --selector`, and their outputs need to be updated to match.
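
For instance, one such updated block might look roughly like this (the output is a sketch; actual columns and ages depend on the cluster):

$ kubectl get deployments --selector="ver=2"
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alpaca-test         1/1     1            1           1m
bandicoot-prod      2/2     2            2           1m
bandicoot-staging   1/1     1            1           1m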

Lachlan Evenson  Feb 27, 2023 
PDF Page 193
3rd line on page 193, in the contents of the script init.sh

When the script is executed on the cluster I am using, it loops, giving the following:

/config/init.sh: line 4: ping: command not found
waiting for DNS (mongo-0.mongo)...
/config/init.sh: line 4: ping: command not found
waiting for DNS (mongo-0.mongo)...
/config/init.sh: line 4: ping: command not found
(and so on, indefinitely)

It appears that the "ping" command is not included in the "mongo:3.4.24" image, so the script will loop forever. It would probably be better to break out of the loop after a certain number of failures.
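
A sketch of such a bounded loop (illustrative names; getent is used in place of the missing ping):

# Give up after 30 failed DNS lookups rather than looping forever
retries=0
until getent hosts mongo-0.mongo; do
  retries=$((retries + 1))
  if [ "$retries" -ge 30 ]; then
    echo "DNS for mongo-0.mongo never appeared; giving up" >&2
    exit 1
  fi
  echo "waiting for DNS (mongo-0.mongo)..."
  sleep 1
done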

Elliot Weitzman  Nov 13, 2022 
Page 194
Example 16-13. mongo.yaml

Example 16-13 mongo.yaml is missing the spec field that defines an initialization container: initContainers. Without this field, the "init-mongo" container is not an initialization container but a regular container, which runs alongside, and at the same time as, the mongodb container.
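
For reference, an init container is declared in the Pod template's spec as a sibling of containers, roughly like this (a minimal sketch, not the book's full example):

spec:
  initContainers:
  - name: init-mongo
    image: mongo:3.4.24
    command: ["bash", "/config/init.sh"]
  containers:
  - name: mongodb
    image: mongo:3.4.24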

That being said, it is not a good idea to fix this. The concept of an initialization container is a good one and has many uses for many applications, but it does not work for the use case of initializing a mongodb replica set. The reason is as follows:

An app container only runs after the init container(s) finish. In this case, the app container (mongodb) starts the mongo database, but the init container depends on the mongo database running, which will never happen because the app container won't start until the init container finishes. The init container will just loop endlessly trying to connect to a database server that will never become active.

The solution in this case is not to use an init container, but rather an additional (sidecar) container that does what is required to initialize a mongodb replica set. The script on pages 192-193 is the right idea but has a few technical errors, as follows:

1. The ping command is not included in the mongodb container image. This was noted in a prior erratum.
2. The line with `| grep -v "no replset config has been received"` will not work correctly, because there will be many lines in the returned output which will not match this, and so the grep command will return a return code of 0.
3. Init containers need to finish and exit, but this container needs to sleep at the end (forever, or for a very long time). The reason is that it is not possible to define a restartPolicy of "OnFailure" or "Never" for a StatefulSet; the only restartPolicy allowed is "Always". So if the script exits, the container will be restarted and will encounter further errors when run repeatedly. The only solution I came up with was to put a sleep with a very long value at the end.

Here is a script I used to successfully initialize a mongo DB replica set, modeled after the provided script. While it works on my cluster, making it truly robust would require additional parsing and error checking of the mongo command output.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-init
data:
  init.sh: |
    #!/bin/bash
    #
    # Loop until the local mongo database server is up
    until /usr/bin/mongo --eval="printjson(db.serverStatus())"; do
      echo "connecting to local mongo database server failed - retrying ..."
      sleep 2
    done
    echo "connected to local."

    HOST=mongo-0.mongo:27017

    # Loop until the remote database is up on mongo-0
    until /usr/bin/mongo --host=${HOST} --eval="printjson(db.serverStatus())"; do
      echo "connecting to remote mongo database server: mongo-0.mongo failed - retrying ..."
      sleep 2
    done
    echo "connected to remote."

    # Initialize replica set on mongo-0
    if [[ "${HOSTNAME}" == 'mongo-0' ]]; then
      echo "initializing replica set"
      /usr/bin/mongo --host=${HOST} --eval="printjson(rs.initiate(\
        {'_id': 'rs0', 'members': [{'_id': 0, \
        'host': 'mongo-0.mongo:27017'}]}))"
    fi

    # For mongo replicas other than mongo-0
    if [[ "${HOSTNAME}" != 'mongo-0' ]]; then
      # Loop until the replset config has been initialized
      echo "Check status of replset config"
      until /usr/bin/mongo --host=${HOST} --eval="printjson(rs.status())" \
          | grep "\"ok\" : 1"; do
        echo "waiting for replication set initialization, retrying ..."
        sleep 2
      done

      echo "adding self to mongo-0"
      /usr/bin/mongo --host=${HOST} \
        --eval="printjson(rs.add('${HOSTNAME}.mongo:27017'))"
    fi

    echo "initialized"

    echo "sleeping 10,000 days"
    sleep 10000d

    exit 0
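
To run this as a sidecar rather than an init container, the script's container would be listed under containers alongside the mongodb container, with the ConfigMap mounted in. A fragment of what that might look like (the volume and mount names here are illustrative, not from the book):

  containers:
  - name: mongodb
    image: mongo:3.4.24
  - name: init-mongo
    image: mongo:3.4.24
    command: ["bash", "/config/init.sh"]
    volumeMounts:
    - name: mongo-init
      mountPath: /config
  volumes:
  - name: mongo-init
    configMap:
      name: mongo-init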


Note from the Author or Editor:
This needs deeper inspection and further review prior to updating

Elliot Weitzman  Nov 17, 2022 
Printed Page 238
2nd Paragraph

Applying the "default network deny" policy produces no error. But when I run "test-source" the second time, I don't get the "wget: download time out" message, which means the "test-source" pod can STILL access the "kuard" pod.

I've tried deleting then recreating the default-deny-ingress policy. Same result.

$ kubectl get networkpolicy --namespace kuard-networkpolicy
NAME                   POD-SELECTOR   AGE
default-deny-ingress   <none>         15m

Anyone else seeing this?
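
A likely explanation: NetworkPolicy objects are enforced only if the cluster's network plugin supports them. On a CNI plugin without NetworkPolicy support, the API server accepts and stores the policy (so kubectl lists it), but nothing enforces it and traffic still flows. For reference, a default ingress-deny policy of the shape described has roughly this form:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: kuard-networkpolicy
spec:
  podSelector: {}
  policyTypes:
  - Ingress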

Anonymous  Jan 20, 2023