Time is nature’s way of keeping everything from happening at once. Space is what prevents everything from happening to me.
So far, most of the programs that you’ve written run in one place (a single machine) and one line at a time (sequentially). But we can do more than one thing at a time (concurrency) and in more than one place (distributed computing or networking). There are many good reasons to challenge time and space:
- Your goal is to keep fast components busy, not waiting for slow ones.
- There’s safety in numbers, so you want to duplicate tasks to work around hardware and software failures.
- It’s best practice to break complex tasks into many little ones that are easier to create, understand, and fix.
- It’s just plain fun to send your footloose bytes to distant places, and bring friends back with them.
We’ll start with concurrency, first building on the non-networking techniques that are described in Chapter 10—processes and threads. Then we’ll look at other approaches, such as callbacks, green threads, and coroutines. Finally, we’ll arrive at networking, initially as a concurrency technique, and then spreading outward.
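As a small preview of the first of those approaches, here is a minimal sketch of thread-based concurrency using the standard library's `concurrent.futures` module. The `slow_task` function and its 0.1-second delay are made up for illustration; the point is that five slow operations overlap instead of running one after another.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_task(n):
    # Stand-in for a slow I/O-bound operation, such as a network call
    time.sleep(0.1)
    return n * 2

start = time.time()
# Run all five tasks at once in a pool of worker threads
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(slow_task, range(5)))
elapsed = time.time() - start

print(results)   # the doubled values, in order
print(elapsed)   # roughly 0.1 seconds, not 0.5
```

Run sequentially, the five tasks would take about half a second; run concurrently, they finish in roughly the time of the slowest one, which is exactly the "keep fast components busy" payoff described above.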
Some Python packages discussed in this chapter were not yet ported to Python 3 when this was written. In many cases, I’ll show example code that would need to be run with a Python 2 interpreter, which we’re calling