Chapter 1. Knative Overview

We believe that having a platform as the home for your software is one of the best choices you can make. A standardized development and deployment process has repeatedly been shown to reduce both the time and money spent writing code by letting developers focus on delivering new features. Consistency across applications also means they’re easier to patch, update, and monitor, making operators more efficient. Knative aims to be this modern platform.

What Is Knative?

Let’s get to the meat of Knative. If Knative is to bookend the development cycle on top of Kubernetes, it needs to help you not only run and scale your applications, but architect and package them, too. It should enable you as a developer to write code how you want, in the language you want.

To do this, Knative focuses on three key categories: building your application, serving traffic to it, and enabling applications to easily consume and produce events.

Build

A flexible, pluggable build system for going from source code to container. It already supports several build tools, such as Google’s Kaniko, which can build container images on your Kubernetes cluster without needing a running Docker daemon, as sketched below.
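To make this concrete, here is a minimal sketch of what a Build resource using Kaniko might look like. The resource name, Git URL, and registry destination are placeholders, and the exact API version and fields depend on your Knative release:

apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-kaniko-build                   # hypothetical name
spec:
  source:
    git:
      url: https://github.com/example/app.git  # placeholder repository
      revision: master
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor    # Kaniko runs as an ordinary container
      args:
        - --dockerfile=/workspace/Dockerfile
        - --destination=registry.example.com/app:latest  # placeholder registry

Applying this resource runs the build on the cluster itself: Knative fetches the source into the build’s workspace, and the Kaniko step builds and pushes the image, all without a Docker daemon.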

Serving

Automatically scales your application based on load, including scaling it to zero when there is no load. Also lets you define traffic policies across multiple revisions, enabling easy routing between versions of your application via URL; see the example that follows.
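For illustration, here is a minimal sketch of a Knative Service using the serving.knative.dev/v1 schema (early releases used v1alpha1). The name, image, and revision names are placeholders; the traffic block shows how requests could be split between two revisions:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld                             # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/helloworld:latest  # placeholder image
  traffic:
    - revisionName: helloworld-00001           # hypothetical revision names
      percent: 90
    - revisionName: helloworld-00002
      percent: 10

If you omit the traffic block, Knative routes all requests to the latest ready revision, scaling the underlying pods up and down, all the way to zero, based on demand.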

Events

Makes it easy for applications to produce and consume events. Abstracts applications away from specific event sources and lets operators run the messaging layer of their choice, as sketched below.
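As a sketch of how a source is wired to a consumer, here is a hypothetical PingSource (called CronJobSource in early releases) that emits a JSON payload on a cron schedule and delivers it to the helloworld Service from the previous example:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-every-minute                      # hypothetical name
spec:
  schedule: "* * * * *"                        # standard cron syntax: once per minute
  contentType: "application/json"
  data: '{"message": "Hello, Eventing!"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: helloworld                         # placeholder event consumer

The application only sees events arriving at its sink, so swapping in a different source, or a different messaging layer underneath it, requires no change to the application code.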

Knative is installed as a set of Custom Resource Definitions (CRDs) for Kubernetes, so it’s as easy to get started with Knative as applying a few YAML files. This also means that, on-premises or with a managed cloud provider, you can run Knative and your code anywhere you can run Kubernetes.

Serverless?

We’ve talked about containerizing our applications so far, but it’s 2019 and we’ve gone through half of a chapter without mentioning the word “serverless.” Perhaps the most loaded word in technology today, serverless is still looking for a definition that the industry as a whole can agree on. Many agree that one of the major changes in mindset is at the code level, where instead of dealing with large, monolithic applications, you write small, single-purpose functions that are invoked via events. Those events could be as simple as an HTTP request or a message from a message broker such as Apache Kafka. They could also be events that are less direct, such as uploading an image to Google Cloud Storage, or making an update to a table in Amazon’s DynamoDB.

Many also agree that it means your code uses compute resources only while serving requests. For hosted services such as Amazon’s Lambda or Google’s Cloud Functions, this means you pay only for active compute time rather than for a virtual machine running 24/7 that may sit mostly idle. On-premises or in a nonmanaged serverless platform, this might translate to running your code only when it’s needed and scaling it down to zero when it’s not, freeing your infrastructure to spend compute cycles elsewhere.

Beyond these fundamentals lies a holy war. Some insist serverless only works in a managed cloud environment and that running such a platform on-premises completely misses the point. Others look at it as more of a design philosophy than anything. Maybe these definitions will eventually merge, maybe they won’t. For now, Knative looks to standardize some of these emerging trends as serverless adoption continues to grow.

Why Knative?

Arguments over the definition of serverless aside, the next logical question is “why was Knative built?” As the industry has trended toward container-based architectures and the popularity of Kubernetes has exploded, we’ve started to see some of the same questions arise that previously drove the growth of Platform-as-a-Service (PaaS) solutions. How do we ensure consistency when building containers? Who’s responsible for keeping everything patched? How do you scale based on demand? How do you achieve zero-downtime deployment?

While Kubernetes has certainly evolved to address some of these concerns, the concepts we mentioned with respect to the growing serverless space raise even more questions. How do you reclaim infrastructure from services that receive no traffic by scaling them to zero? How can you consistently manage multiple event types? How do you define event sources and destinations?

A number of serverless or Functions-as-a-Service (FaaS) frameworks have attempted to answer these questions, but not all of them leverage Kubernetes, and they have all gone about solving these problems in different ways. Knative looks to build on Kubernetes and present a consistent, standard pattern for building and deploying serverless and event-driven applications. It removes the overhead that often comes with this new approach to software development, while abstracting away complexity around routing and eventing.

Conclusion

Now that we have a good handle on what Knative is and why it was created, we can start diving in a little further. The next chapters describe the key components of Knative. We will examine all three of them in detail and explain how they work together and how to leverage them to their full potential. After that, we’ll look at how you can install Knative on your Kubernetes cluster as well as some more advanced use cases. Finally, we’ll walk through a demo that implements much of what you’ll learn over the course of the report.
