Chapter 4. Here, There, and Everywhere
So here we are, 30 pages or so later, and you now have a solid understanding of what Kubernetes is and how it works. By this point in your reading I hope you’ve started to form an opinion about whether or not Kubernetes is a technology that makes sense to you right now.
In my opinion, it's clearly the direction the world is heading, but you might think it's a little too bleeding edge to invest in right this second. Whether to adopt it now is only the first of two important decisions you have to make.
Once you've decided to keep going, the next question you have to answer is this: do I roll my own or use someone else's managed offering?
You have three basic choices:
- Use physical servers you own (or will buy/rent) and install Kubernetes from scratch. Let's call this the bare metal option. You can take this route whether the servers sit in your office or in a colocation facility you rent; it doesn't matter. The key thing is that you will be dealing with physical machines.
- Use virtual machines from a public cloud provider and install Kubernetes on them from scratch. This has the obvious advantage of not requiring you to buy physical hardware, but it is very different from the bare metal option because your configuration and day-to-day operations will change in important ways. Let's call this the virtual metal option.
- Use one of the managed offerings from the major cloud providers. This route gives you fewer configuration choices, but it is a lot easier than rolling your own solution. Let's call this the fully managed option.
Starting Small with Your Local Machine
Sometimes the easiest way to learn something is to install it locally and start poking at it. Installing a full bare metal Kubernetes solution is not trivial, but you can start smaller by running all the components on your local machine.
Linux
If you’re running Linux locally—or in a VM you can easily access—then it’s pretty easy to get started.
- Install Docker and make sure it's in your path. If you already have Docker installed, make sure it's at least version 1.3 by running the docker --version command.
- Install etcd, and make sure it's in your path.
- Make sure Go is installed and in your path. Check that it's at least version 1.3 by running go version.
Once you've completed these steps, you should follow along with this getting-started guide, which will tell you everything you need to know to get up and running.
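If you'd like to sanity-check those prerequisites before diving into the guide, here is a minimal Go sketch that shells out to each tool and prints the first line of its version output. The command names and flags (docker --version, etcd --version, go version) are the standard ones, but treat them as assumptions and adjust if your installation differs.

```go
// prereqcheck.go: a minimal sketch that verifies the local Kubernetes
// prerequisites (Docker, etcd, and Go) are installed and on the PATH.
// Command names and version flags are assumptions based on the standard tools.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	checks := []struct {
		name string
		args []string
	}{
		{"docker", []string{"--version"}}, // expect at least 1.3
		{"etcd", []string{"--version"}},
		{"go", []string{"version"}}, // expect at least 1.3
	}

	for _, c := range checks {
		out, err := exec.Command(c.name, c.args...).CombinedOutput()
		if err != nil {
			fmt.Printf("MISSING  %-6s (%v)\n", c.name, err)
			continue
		}
		// Print only the first line of output; that's where the version lives.
		fmt.Printf("OK       %-6s %s\n", c.name, strings.SplitN(string(out), "\n", 2)[0])
	}
}
```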
Windows/Mac
If you’re on Windows or a Mac, on the other hand, the process is a little (but not much) more complicated. There are a few different ways to do it, but the one I’m going to recommend is to use a tool called Vagrant.
Vagrant is an application that automatically sets up and manages self-contained runtime environments. It was created so that different software developers could be certain that each of them was running an identical configuration on their local machines.
The basic idea is that you install a copy of Vagrant and tell it that you want to create a Kubernetes environment. It will run some scripts and set everything up for you. You can try this yourself by following along with the handy setup guide here.
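If you'd rather script that workflow than type the commands by hand, here is a small Go sketch that drives Vagrant from a directory containing the Vagrantfile described in the setup guide. The directory path is a placeholder, and the sketch only uses the basic vagrant up and vagrant status commands; everything Kubernetes-specific comes from the guide's Vagrantfile, not from this code.

```go
// vagrantup.go: a small sketch that brings up a Vagrant environment from Go.
// It assumes you already have a directory containing the Vagrantfile from the
// Kubernetes setup guide; the path below is a placeholder.
package main

import (
	"log"
	"os"
	"os/exec"
)

func runVagrant(dir string, args ...string) error {
	cmd := exec.Command("vagrant", args...)
	cmd.Dir = dir // run inside the directory that holds the Vagrantfile
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	workdir := "./kubernetes-vagrant" // placeholder: wherever the guide's Vagrantfile lives

	// Boot (or resume) the VMs defined in the Vagrantfile.
	if err := runVagrant(workdir, "up"); err != nil {
		log.Fatalf("vagrant up failed: %v", err)
	}

	// Show the state of the machines Vagrant is managing.
	if err := runVagrant(workdir, "status"); err != nil {
		log.Fatalf("vagrant status failed: %v", err)
	}
}
```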
Bare Metal
After you’ve experimented a little and have gotten the feel for installing and configuring Kubernetes on your local machine, you might get the itch to deploy a more realistic configuration on some spare servers you have lying around. (Who among us doesn’t have a few servers sitting in a closet someplace?)
This fully bare metal setup is definitely the most difficult path you can choose, but it does have the advantage of keeping absolutely everything under your control.
The first question to ask yourself is whether you prefer one Linux distribution over another. Some people are really familiar with Fedora or RHEL, while others are more in the Ubuntu or Debian camps. You don't need to have a preference, but some people do.
Here are my recommendations for soup-to-nuts getting-started guides for some of the more popular distributions:
- Fedora, RHEL—There are many such tutorials, but I think the easiest one is here. If you're looking for something that goes into some of the grittier details, then this might be more to your liking.
- Ubuntu—Another popular choice. I prefer this guide, but a quick Google search shows many others.
- CentOS—I've used this guide and found it to be very helpful.
- Other—Just because I don't list a guide for your preferred distribution doesn't mean one doesn't exist or that the task is undoable. I found a really good getting-started guide that will apply to pretty much any bare metal installation here.
Virtual Metal (IaaS on a Public Cloud)
So maybe you don’t have a bunch of spare servers lying around in a closet like I do—or maybe you just don’t want to have to worry about cabling, power, cooling, etc. In that case, it’s a pretty straightforward exercise to build your own Kubernetes cluster from scratch using VMs you spin up on one of the major public clouds.
Note
This is a different process from installing on bare metal, because your choice of network layout and configuration is governed by your choice of provider. The bare metal guides from the previous section will therefore be only partly applicable in a public cloud.
Here are some quick resources to get you started.
- AWS—The easiest way is to use this guide, though it also points you to some other resources if you're looking for a little more configuration control.
- Azure—Are you a fan of Microsoft Azure? Then this is the guide for you.
- Google Cloud Platform (GCP)—I'll bet it won't surprise you that GCP is far and away the best-documented platform for running Kubernetes in the virtual metal configuration. I found hundreds of pages of tips and setup scripts and guides, but the easiest one to start with is this guide.
- Rackspace—A reliable installation guide for Rackspace has been a bit of a moving target. The most recent guide is here, but things change often enough that it isn't always perfectly reliable. You can see a discussion on this topic here. If you're an experienced Linux administrator, you can probably work around the rough edges reasonably easily. If not, you might want to check back later.
Other Configurations
The previous two sections are by no means an exhaustive list of configuration options or getting-started guides. If you’re interested in other possible configurations, then I recommend two things:
- Start with this list. It's continuously maintained at the main Kubernetes GitHub site and contains lots of really useful pointers.
- Search Google. Really. Things are changing fast in the Kubernetes space, and new guides and scripts are published nearly every day. A simple Google search every now and again will keep you up to date. If you're like me and you absolutely want to know as soon as something new pops up, then I recommend you set up a Google Alert. You can start here.
Fully Managed
By far, your easiest path into the world of clusters and global scaling will be to use a fully managed service from one of the large public cloud providers (AWS, Google, and Microsoft). Strictly speaking, however, only one of those services is actually Kubernetes.
Let me explain.
Amazon recently announced a brand new managed offering named Elastic Container Service (ECS). It’s designed to manage Docker containers and shares many of the same organizing principles as Kubernetes. It does not, however, appear to actually use Kubernetes under the hood. AWS doesn’t say what the underlying technology is, but there are enough configuration and deployment differences that it appears they have rolled their own solution. (If you know differently, please feel free to email me and I’ll update this text accordingly.)
In April of 2015, Microsoft announced Service Fabric for their Azure cloud offering. This new service lets you build microservices using containers and is apparently the same technology that has been powering their underlying cloud offerings for the past five years. Mark Russinovich (Azure's CTO) gave a helpful overview session of the new service at their annual //Build conference. He was pretty clear that the underlying technology in the new service was not Kubernetes, though Microsoft has contributed documentation to the project's GitHub site on how to configure Kubernetes on Azure VMs.
As far as I know, the only fully managed Kubernetes service on the market among the large public cloud providers is Google Container Engine (GKE). So if your goal is to use the things I’ve discussed in this paper to build a web-scale service, then GKE is pretty much your only fully managed offering. Additionally, since Kubernetes is an open source project with full source code living on GitHub, you can really dig into the mechanics of how GKE operates by studying the code directly.
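One practical consequence of GKE running plain Kubernetes under the hood is that anything that speaks the standard Kubernetes API works against it unchanged. As a hedged illustration, here is a minimal Go sketch using the official Go client (k8s.io/client-go, which postdates the guides above and is an assumption on my part). It reads your local kubeconfig, which is an assumption about how you authenticate, and lists the nodes in whatever cluster that file points at, GKE or otherwise.

```go
// listnodes.go: a minimal sketch that lists the nodes in whatever cluster your
// local kubeconfig points at (a GKE cluster or one you rolled yourself).
// The client library (k8s.io/client-go) and kubeconfig path are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials live in the default kubeconfig location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")

	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("creating client: %v", err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing nodes: %v", err)
	}

	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
```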
A Word about Multi-Cloud Deployments
What if you could create a service that seamlessly spanned your bare metal and several public cloud infrastructures? I think we can agree that would be pretty handy. It certainly would make it hard for your service to go offline under any circumstances short of a large meteor strike or nuclear war.
Unfortunately, that’s still a little bit of a fairy tale in the clustering world. People are thinking hard about the problem, and a few are even taking some tentative steps to create the frameworks necessary to achieve it.
One such effort is being led by my colleague Quinton Hoole, and it's called Kubernetes Cluster Federation, though it's also sometimes cheekily called Ubernetes. He keeps his current thinking and product design docs on the main Kubernetes GitHub site here, and it's a pretty interesting read, though it's still early days.
Getting Started with Some Examples
The main Kubernetes GitHub page keeps a running list of example deployments you can try. Two of the more popular ones are the WordPress and Guestbook examples.
The WordPress example will walk you through how to set up the popular WordPress publishing platform with a MySQL backend whose data will survive the loss of a container or a system reboot. It assumes you are deploying on GKE, though you can pretty easily adapt the example to run on bare/virtual metal.
The Guestbook example is a little more complicated. It takes you step-by-step through configuring a simple guestbook web application (written in Go) that stores its data in a Redis backend. Although this example has more moving parts, it does have the advantage of being easily followed on a bare/virtual metal setup. It has no dependencies on GKE and serves as an easy introduction to replication.
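To give you a feel for what that example's moving parts look like, here is a heavily simplified Go sketch of a guestbook-style web handler backed by Redis. It is not the actual example code: the Redis client library (github.com/redis/go-redis/v9), the redis-master:6379 address, and the guestbook key are all assumptions standing in for whatever the real example wires up through Kubernetes services.

```go
// guestbook.go: a heavily simplified sketch of a guestbook web app backed by
// Redis, in the spirit of the official example. The Redis client library, the
// service address, and the key name are assumptions, not the real example code.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/redis/go-redis/v9"
)

func main() {
	// In the real example the Redis master is reachable through a Kubernetes
	// service; "redis-master:6379" is an assumption about that service name.
	rdb := redis.NewClient(&redis.Options{Addr: "redis-master:6379"})
	ctx := context.Background()

	http.HandleFunc("/sign", func(w http.ResponseWriter, r *http.Request) {
		msg := r.URL.Query().Get("msg")
		if msg == "" {
			http.Error(w, "missing msg parameter", http.StatusBadRequest)
			return
		}
		// Append the new entry to a Redis list.
		if err := rdb.RPush(ctx, "guestbook", msg).Err(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "signed")
	})

	http.HandleFunc("/entries", func(w http.ResponseWriter, r *http.Request) {
		// Read every entry back out of the list.
		entries, err := rdb.LRange(ctx, "guestbook", 0, -1).Result()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		for _, e := range entries {
			fmt.Fprintln(w, e)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Even in this stripped-down form you can see why the example is a good introduction to replication: the web tier is stateless, so Kubernetes can run as many copies of it as you like, while all the state lives in Redis.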
Where to Go for More
There are a number of good places you can go on the Web to continue your learning about Kubernetes.
- The main Kubernetes homepage is here and has all the official documentation.
- The project GitHub page is here and contains all the source code plus a wealth of other configuration and design documentation.
- If you've decided that you want to use the GKE-managed offering, then you'll want to head over here.
- When I have thorny questions about a cluster I'm building, I often head to Stack Overflow and grab all the Kubernetes discussion here.
- You can also learn a lot by reading bug reports at the official Kubernetes issue tracker.
- Finally, if you want to contribute to the Kubernetes project, you will want to start here.
These are exciting days for cloud computing. Some of the key technologies that we will all be using to build and deploy our future applications and services are being created and tested right around us. For those of us old enough to remember it, this feels a lot like the early days of personal computing or perhaps those first few key years of the World Wide Web. This is where the world is going, and those of our peers who are patient enough to tolerate the inevitable fits and starts will be in the best position to benefit.
Good luck, and thanks for reading.