Chapter 1. Getting Started with Knative
Deploying applications as serverless services is becoming a popular architectural style. Many organizations seem to assume that Function as a Service (FaaS) is serverless. We think it is more accurate to say that FaaS is one way to do serverless, but not the only way. This raises a critical question for enterprises whose applications may be monoliths or microservices: What is the easiest path to serverless application deployment?
The answer is a platform that can run serverless workloads while giving you complete control over how to configure, build, deploy, and run applications; ideally, a platform that supports deploying those applications as Linux containers. In this chapter we introduce you to one such platform, Knative, which helps you run serverless workloads in a Kubernetes-native way.
A Kubernetes cluster does not come with Knative and its dependencies preinstalled, so the recipes in this chapter detail how to install Knative and its dependencies into a Kubernetes cluster. The recipes also help you set up the local developer environment required to run the exercises in this book.
1.1 Installing the Required Tools
Solution
In general, you will need several of the open source tools listed in Table 1-1.
Table 1-1. Download links for the required tools (git, docker, kubectl, helm, stern, yq, httpie, hey, watch, and kubectx with kubens) for macOS, Fedora, and Windows
Important
Make sure you add all the tools to your $PATH
before you proceed with any of the recipes in upcoming chapters.
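For example, if you keep the downloaded binaries in a single directory, you can prepend that directory to your $PATH; the directory name here is illustrative, and you can add the line to your shell profile (e.g., ~/.bashrc) to make it permanent:

$ export PATH="$HOME/tools/bin:$PATH"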
Discussion
The following is a list of the tools you’ll need with minimum and recommended versions:
Note
The versions listed here were the ones tested at the time this book was written. Later versions should maintain backward compatibility with the use cases used in this cookbook’s recipes.
- git
  A distributed version-control system for tracking changes in source code during software development:

$ git version
git version 2.21.0
- docker
  A client to run the Linux containers:

$ docker --version
Docker version 19.03.5, build 633a0ea
- kubectl
  Knative requires Kubernetes v1.15 or later; we recommend using v1.15.0. To check your kubectl version, run:

$ kubectl version --short
Client Version: v1.15.0
Server Version: v1.15.0
- helm
  The package manager for Kubernetes:

$ helm version
version.BuildInfo{Version:"v3.0.2" ...}
- stern
  Allows you to tail logs from multiple pods and containers on Kubernetes:

$ stern --version
stern version 1.11.0
- yq
  A lightweight command-line YAML processor:

$ yq --version
yq version 2.4.1
- httpie
  A user-friendly command-line HTTP client; the command is http:

$ http --version
1.0.3
- hey
  A tiny program that sends some load to a web application. hey does not have a version option, so you can use hey --help to verify that it is in your $PATH.
- watch
  Execute a program periodically, showing output in full screen:

$ watch --version
watch from procps-ng 3.3.15
1.2 Setting Up a Kubernetes Cluster
Solution
You can use minikube as your Kubernetes cluster for a local development environment. Minikube provides a single-node Kubernetes cluster that is best suited for local development. Download minikube and add it to your $PATH.
All the recipes in this book have been tested with minikube v1.7.2 and the Kubernetes CLI (kubectl) v1.15.0.
The script $BOOK_HOME/bin/start-minikube.sh helps you start minikube with the right configuration.
Discussion
You will also need to know the following list of environment variables and their default values:
- PROFILE_NAME
  The name of the minikube profile; the default is knativecookbook
- MEMORY
  The memory that will be allocated to the minikube virtual machine (VM); the default is 8GB
- CPUS
  The number of CPUs that will be allocated to the minikube VM; the default is 4
- VM_DRIVER
  The virtual machine driver that will be used:
  - For macOS use virtualbox
  - For Linux use kvm2
  - For Windows use hyperv

VM_DRIVER is a required environment variable, and the start-minikube.sh script will fail to start if it is not set:
$ $BOOK_HOME/bin/start-minikube.sh
profile "knativecookbook" not found
Created a new profile : knativecookbook
minikube profile was successfully set to knativecookbook
[knativecookbook] minikube v1.6.2 on Darwin 10.15.2
Selecting virtualbox driver from user configuration (alternates: [hyperkit])
Creating virtualbox VM (CPUs=4, Memory=8192MB, Disk=50000MB) ...
Preparing Kubernetes v1.15.0 on Docker 19.03.5 ...
  ▪ apiserver.enable-admission-plugins=LimitRanger,NamespaceExists,
    NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,
    MutatingAdmissionWebhook
Pulling images ...
Launching Kubernetes ...
Waiting for cluster to come online ...
Done! kubectl is now configured to use "knativecookbook"
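For example, on Linux you would export the required driver, and optionally override the other defaults, before invoking the script; the override values shown here are illustrative:

$ export VM_DRIVER=kvm2
$ export CPUS=6
$ $BOOK_HOME/bin/start-minikube.sh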
1.3 Installing the Internal Kubernetes Container Registry
Solution
To set up an internal container registry inside of minikube, run:
$ minikube addons enable registry
It will take a few minutes for the registry to be enabled; you can watch the status of the pods in the kube-system namespace.
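You can do that with the watch command introduced in Recipe 1.1:

$ watch kubectl get pods -n kube-system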
Discussion
If the registry enablement is successful, you will see two new pods in the kube-system namespace with a status of Running:
$ kubectl get pods -n kube-system
NAME                   READY   STATUS    RESTARTS   AGE
registry-7c5hg         1/1     Running   0          29m
registry-proxy-cj6dj   1/1     Running   0          29m
...
1.4 Configuring Container Registry Aliases
Solution
As part of some recipes in this cookbook, you will need to interact with the local internal registry. To make push and pull smoother, we have provided a helper script that enables you to use some common names like dev.local and example.com as registry aliases for the internal registry. Navigate to the registry helper folder and run:
$ cd $BOOK_HOME/apps/minikube-registry-helper
A daemonset runs a copy of the same pod on every node of the Kubernetes cluster. Run the following commands to deploy the registry helper daemonset and the ConfigMap that it uses:
$ kubectl apply -n kube-system -f registry-aliases-config.yaml
$ kubectl apply -n kube-system -f node-etc-hosts-update.yaml
Important
Wait for the daemonset to be running before proceeding to the next step. You can monitor the status of the daemonset with watch kubectl get pods -n kube-system
.
You can use Ctrl-C to terminate the watch.
Verify that the entries are added to your minikube node’s /etc/hosts file:
$ watch minikube ssh -- sudo cat /etc/hosts
A successful daemonset execution will update the minikube node’s /etc/hosts file with the following entries:
127.0.0.1       localhost
127.0.1.1       demo
10.111.151.121  dev.local
10.111.151.121  example.com
Note
The IP for dev.local
and example.com
will match the CLUSTER-IP
of the internal container registry. To verify this, run:
$ kubectl get svc registry -n kube-system
NAME       TYPE        CLUSTER-IP       PORT(S)   AGE
registry   ClusterIP   10.111.151.121   80/TCP    178m
As the last step of configuring the internal container registry, you also need to patch CoreDNS so that deployments can resolve container images whose names begin with dev.local and example.com (e.g., dev.local/rhdevelopers/foo:v1.0.0):
$ ./patch-coredns.sh
To verify that the patch was successfully executed, run the following command to get the contents of the coredns ConfigMap in the kube-system namespace:

$ kubectl get cm -n kube-system coredns -o yaml

A successfully patched coredns ConfigMap will have the following content:
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health
        rewrite name dev.local registry.kube-system.svc.cluster.local
        rewrite name example.com registry.kube-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
Discussion
You may need to add custom domain names for the internal container registry. To do so, edit the ConfigMap in registry-aliases-config.yaml and add the extra domain names as needed, each on a new line of its own. For example, the following snippet shows how to add a new domain called test.org to the registry helper ConfigMap:
apiVersion: v1
data:
  # Add additional hosts separated by new-line
  registryAliases: >-
    dev.local
    example.com
    test.org
  # default registry address in minikube when enabled
  # via minikube addons enable registry
  registrySvc: registry.kube-system.svc.cluster.local
kind: ConfigMap
metadata:
  name: registry-aliases
  namespace: kube-system
After you update the ConfigMap, you need to restart the daemonset by deleting its pod in the kube-system namespace. When the daemonset restarts, it will pick up the new aliases from the registry helper ConfigMap and configure them as domain aliases. After a successful restart of the daemonset, rerun the patch-coredns.sh script to patch CoreDNS again.
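A minimal sketch of that restart sequence, assuming you look up the daemonset pod's name first (the <registry-helper-pod> placeholder stands for the actual pod name):

$ kubectl get pods -n kube-system
$ kubectl delete pod <registry-helper-pod> -n kube-system
$ ./patch-coredns.sh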
1.5 Installing Istio
Solution
Knative Serving requires an ingress gateway to route requests to the Knative Serving Services. It currently supports several Envoy-based ingress gateways; in this recipe we will use Istio. Since the ingress gateway is the only Istio component required for Knative, you can set up a minimal Istio ("istio lean") installation with the following script:
$ $BOOK_HOME/bin/install-istio.sh
Discussion
Installing the Istio components will take some time, so we highly recommend that you start the Knative components installation only after you have verified that the Istio component pods are running. The install script will terminate automatically after all the needed Istio components and Custom Resource Definitions (CRDs) are installed and running.
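For example, you can watch the Istio pods until they all report Running:

$ watch kubectl get pods -n istio-system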
All Istio resources will be under one of the following application programming interface (API) groups:
- authentication.istio.io
- config.istio.io
- networking.istio.io
- rbac.istio.io
You can verify that the needed CRDs are available by querying api-resources
for each API group:
$ kubectl api-resources --api-group=networking.istio.io
NAME               APIGROUP              NAMESPACED   KIND
destinationrules   networking.istio.io   true         DestinationRule
envoyfilters       networking.istio.io   true         EnvoyFilter
gateways           networking.istio.io   true         Gateway
serviceentries     networking.istio.io   true         ServiceEntry
sidecars           networking.istio.io   true         Sidecar
virtualservices    networking.istio.io   true         VirtualService

$ kubectl api-resources --api-group=config.istio.io
NAME                  APIGROUP          NAMESPACED   KIND
adapters              config.istio.io   true         adapter
attributemanifests    config.istio.io   true         attributemanifest
handlers              config.istio.io   true         handler
httpapispecbindings   config.istio.io   true         HTTPAPISpecBinding
httpapispecs          config.istio.io   true         HTTPAPISpec
instances             config.istio.io   true         instance
quotaspecbindings     config.istio.io   true         QuotaSpecBinding
quotaspecs            config.istio.io   true         QuotaSpec
rules                 config.istio.io   true         rule
templates             config.istio.io   true         template

$ kubectl api-resources --api-group=authentication.istio.io
NAME           APIGROUP                  NAMESPACED   KIND
meshpolicies   authentication.istio.io   false        MeshPolicy
policies       authentication.istio.io   true         Policy

$ kubectl api-resources --api-group=rbac.istio.io
NAME                    APIGROUP        NAMESPACED   KIND
authorizationpolicies   rbac.istio.io   true         AuthorizationPolicy
clusterrbacconfigs      rbac.istio.io   false        ClusterRbacConfig
rbacconfigs             rbac.istio.io   true         RbacConfig
servicerolebindings     rbac.istio.io   true         ServiceRoleBinding
serviceroles            rbac.istio.io   true         ServiceRole
1.6 Installing Knative
Knative has two building blocks:
- Knative Serving
  Serving is for running your services inside Kubernetes by providing a simplified deployment syntax, with automated scale-to-zero and scale-out based on HTTP load.
- Knative Eventing
  Eventing is used to connect your Knative Serving Services to event streams beyond HTTP (e.g., an Apache Kafka topic).
The Knative installation process is divided into three steps:
- Installing the Knative Custom Resource Definitions (CRDs)
- Installing the Knative Serving components
- Installing the Knative Eventing components
This recipe shows how to install these components in the order listed here.
Problem
You need to install Knative CRDs, Knative Serving, and Knative Eventing components.
Solution
Knative Serving and Eventing define their own Kubernetes CRDs. You need to have the Knative Serving and Eventing CRDs installed in your Kubernetes cluster. Run the following command to do so:
$ kubectl apply --selector knative.dev/crd-install=true \
  --filename "https://github.com/knative/serving/releases/download/v0.12.0/serving.yaml" \
  --filename "https://github.com/knative/eventing/releases/download/v0.12.0/eventing.yaml"
Discussion
Now that you have installed the Knative Serving and Eventing CRDs, you can verify the CRDs by querying api-resources, as described next.

All Knative Serving resources will be under the API group called serving.knative.dev:
$ kubectl api-resources --api-group=serving.knative.dev
NAME             SHORTNAMES      APIGROUP              NAMESPACED   KIND
configurations   config,cfg      serving.knative.dev   true         Configuration
revisions        rev             serving.knative.dev   true         Revision
routes           rt              serving.knative.dev   true         Route
services         kservice,ksvc   serving.knative.dev   true         Service
All Knative Eventing resources will be under one of the following API groups:
- messaging.knative.dev
- eventing.knative.dev
- sources.eventing.knative.dev
- sources.knative.dev
$ kubectl api-resources --api-group=messaging.knative.dev
NAME               SHORTNAMES   APIGROUP                NAMESPACED   KIND
channels           ch           messaging.knative.dev   true         Channel
inmemorychannels   imc          messaging.knative.dev   true         InMemoryChannel
parallels                       messaging.knative.dev   true         Parallel
sequences                       messaging.knative.dev   true         Sequence
subscriptions      sub          messaging.knative.dev   true         Subscription

$ kubectl api-resources --api-group=eventing.knative.dev
NAME         SHORTNAMES   APIGROUP               NAMESPACED   KIND
brokers                   eventing.knative.dev   true         Broker
eventtypes                eventing.knative.dev   true         EventType
triggers                  eventing.knative.dev   true         Trigger

$ kubectl api-resources --api-group=sources.eventing.knative.dev
NAME               APIGROUP                       NAMESPACED   KIND
apiserversources   sources.eventing.knative.dev   true         ApiServerSource
containersources   sources.eventing.knative.dev   true         ContainerSource
cronjobsources     sources.eventing.knative.dev   true         CronJobSource
sinkbindings       sources.eventing.knative.dev   true         SinkBinding

$ kubectl api-resources --api-group=sources.knative.dev
NAME               APIGROUP              NAMESPACED   KIND
apiserversources   sources.knative.dev   true         ApiServerSource
sinkbindings       sources.knative.dev   true         SinkBinding
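Alternatively, because the release manifests label each CRD with knative.dev/crd-install=true (the same label the apply command selected on), you should also be able to list all of the Knative CRDs at once:

$ kubectl get crds --selector knative.dev/crd-install=true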
Knative has two main infrastructure components: the controller and the webhook. These translate the Knative custom resources, which are usually written as YAML files, into Kubernetes objects like Deployment and Service. Apart from the controller and webhook, Knative Serving and Eventing also install their respective functional components, which are listed in the upcoming sections.
Run the following command to deploy the Knative Serving infrastructure components:
$ kubectl apply \
    --selector networking.knative.dev/certificate-provider!=cert-manager \
    --filename https://github.com/knative/serving/releases/download/v0.12.0/serving.yaml
It will take a few minutes for the Knative Serving pods to be up and running. You can monitor the status of the installation by watching the pods in the knative-serving namespace using the command:
$ watch kubectl get pods -n knative-serving
NAME                               READY   STATUS    RESTARTS   AGE
activator-5dd6dc95bc-k9lg9         1/1     Running   0          86s
autoscaler-b56799cdf-55h5k         1/1     Running   0          86s
autoscaler-hpa-6f5c5cf986-b8lvg    1/1     Running   0          86s
controller-f8b98d964-qjxff         1/1     Running   0          85s
networking-istio-bb44d8c87-s2lbg   1/1     Running   0          85s
webhook-78dcbf4d94-dczd6           1/1     Running   0          85s
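If you prefer a command that blocks until the pods are ready instead of watching them, kubectl wait can do the same check (a sketch; the timeout value is arbitrary):

$ kubectl wait --for=condition=Ready pod --all -n knative-serving --timeout=300s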
Run the following command to install Knative Eventing infrastructure components:
$ kubectl apply \
    --selector networking.knative.dev/certificate-provider!=cert-manager \
    --filename https://github.com/knative/eventing/releases/download/v0.12.0/eventing.yaml
Like the Knative Serving deployment, the Knative Eventing deployment will also take a few minutes to complete. You can watch the knative-eventing namespace pods for live status using the command:
$ watch kubectl get pods -n knative-eventing
NAME                                            READY   STATUS    RESTARTS   AGE
eventing-controller-77b4f76d56-d4fzf            1/1     Running   0          2m39s
eventing-webhook-f5d57b487-hbgps                1/1     Running   0          2m39s
imc-controller-65bb5ddf-kld5l                   1/1     Running   0          2m39s
imc-dispatcher-dd84879d7-qt2qn                  1/1     Running   0          2m39s
in-memory-channel-controller-6f74d5c8c8-vm44b   1/1     Running   0          2m39s
in-memory-channel-dispatcher-8db675949-mqmfk    1/1     Running   0          2m39s
sources-controller-79c4bf8b86-lxbjf             1/1     Running   0          2m39s
1.7 Verifying the Container Environment
Solution
Minikube provides the profile and docker-env commands, which are used to set the active profile and to configure your docker environment to use minikube. Run the following commands to set your profile and docker environment for this book:
$ minikube profile knativecookbook
$ eval $(minikube docker-env)
Discussion
Now when you execute the command docker images, it will list the images found inside of minikube's internal docker daemon (output shortened for brevity):
$ docker images --format {{.Repository}}
gcr.io/knative-releases/knative.dev/serving/cmd/activator
gcr.io/knative-releases/knative.dev/serving/cmd/webhook
gcr.io/knative-releases/knative.dev/serving/cmd/controller
gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler-hpa
gcr.io/knative-releases/knative.dev/serving/cmd/networking/istio
k8s.gcr.io/kube-addon-manager
istio/proxyv2
istio/pilot
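Because your docker client now talks to minikube's daemon, any image you build locally is immediately visible to the cluster without a push. For example, reusing the registry alias and image name from Recipe 1.4 (the Dockerfile here is assumed to exist):

$ docker build -t dev.local/rhdevelopers/foo:v1.0.0 .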
Creating Kubernetes Namespaces for This Book’s Recipes
The recipes in each chapter will be deployed in a namespace dedicated to that chapter, and each chapter will instruct you to switch to the respective namespace. Run the following commands to create all the required namespaces for this book:
$ kubectl create namespace chapter-2
$ kubectl create namespace chapter-3
$ kubectl create namespace chapter-4
$ kubectl create namespace chapter-5
$ kubectl create namespace chapter-6
$ kubectl create namespace chapter-7
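As a shorthand, the same namespaces can be created in a single shell loop:

$ for ch in 2 3 4 5 6 7; do kubectl create namespace "chapter-$ch"; done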
Why Switch Namespaces?
Kubernetes by default creates the default
namespace. You can control the namespace of the resource by specifying the --namespace
or -n
option to all your Kubernetes commands. By switching to the right namespace, you can be assured that your Kubernetes resources are created in the correct place as needed by the recipes.
You can use kubectl
to switch to the required namespace. The following command shows how to use kubectl
to switch to a namespace called chapter-1
:
$ kubectl config set-context --current --namespace=chapter-1
Or you can use the kubens
utility to set your current namespace to be chapter-1
:
$ kubens chapter-1
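Either way, you can confirm the active namespace by inspecting your kubeconfig (a jsonpath sketch):

$ kubectl config view --minify --output 'jsonpath={..namespace}'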
Note
Setting your current namespace with kubens
means you can avoid the option --namespace
or its short name -n
for all subsequent kubectl
commands.
However, it is recommended that you continue to use --namespace
or -n
as part of your kubectl
commands; using the namespace
option ensures that you are creating Kubernetes resources in the correct namespace.
Ensure that you are also in the right working directory in your terminal by running the command:
$ cd $BOOK_HOME
Querying Kubernetes Resources
As part of the recipes, and many other places in the book, you will be instructed to watch Kubernetes resources.
You might be familiar with using the command kubectl get <resource> -w. You are free to use the kubectl command with the -w option, but in this book we prefer to use the watch command, which provides simple and clean output that can help you grok the status better. Let's compare the two variants with an example.
Let's assume you want to query running pods in a namespace called istio-system:

$ kubectl -n istio-system get pods -w
NAME                                     READY   STATUS              RESTARTS   AGE
cluster-local-gateway-7588cdfbc7-8f5s8   0/1     ContainerCreating   0          3s
istio-ingressgateway-5c87b8d6c7-dzwx8    0/1     ContainerCreating   0          4s
istio-pilot-7c555cf995-j9tpv             0/1     ContainerCreating   0          4s
NAME                                     READY   STATUS    RESTARTS   AGE
istio-pilot-7c555cf995-j9tpv             0/1     Running   0          16s
istio-ingressgateway-5c87b8d6c7-dzwx8    0/1     Running   0          27s
cluster-local-gateway-7588cdfbc7-8f5s8   0/1     Running   0          29s
istio-pilot-7c555cf995-j9tpv             1/1     Running   0          36s
cluster-local-gateway-7588cdfbc7-8f5s8   1/1     Running   0          37s
istio-ingressgateway-5c87b8d6c7-dzwx8    1/1     Running   0          44s
$ watch kubectl -n istio-system get pods
NAME                                     READY   STATUS    RESTARTS   AGE
cluster-local-gateway-7588cdfbc7-vgwgw   1/1     Running   0          8s
istio-ingressgateway-5c87b8d6c7-tbj6g    1/1     Running   0          8s
istio-pilot-7c555cf995-6ggvv             1/1     Running   0          8s
If you compare the output of these two commands, you'll see that watch kubectl -n istio-system get pods produces simpler and cleaner output than kubectl -n istio-system get pods -w, although both commands show the same information. When you use watch, the command kubectl -n istio-system get pods is rerun every two seconds, which lets you follow the changing status in a simpler way. By contrast, the kubectl -w option keeps appending to the output.
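If the default two-second refresh does not suit you, watch's -n option sets the interval in seconds:

$ watch -n 5 kubectl -n istio-system get pods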
Note
In this book when you are instructed to watch some Kubernetes resource, you should use watch <kubectl command>
as explained previously. However, the commands and options might vary from recipe to recipe.
You now have an understanding of what Knative is, how to install Knative and its dependencies, and how to install useful open source tools that will speed up your Kubernetes development.
With what you have learned in this chapter, you are all set to apply your Kubernetes knowledge to deploy serverless workloads. As a first step in putting your understanding to the test, Chapter 2 teaches you a few techniques for Knative Serving.