Chapter 21. The Benchmarking Interlude
Now that we’ve fully explored function coding and iteration tools, we’re going to take a short side trip to put both of them to work. This chapter closes out the function part of this book with a larger case study that times the relative performance of the iteration tools we’ve met so far, in both standard Python and one of its alternatives.
Along the way, this case study surveys Python’s code-timing tools, discusses benchmarking techniques in general, and develops code that’s more realistic and useful than most of what we’ve seen up to this point. We’ll also measure the speed of code we’ve used—data points that may or may not be significant, depending on your programs’ goals.
Finally, because this is the last chapter in this part of the book, we’ll close with the usual sets of “gotchas” and exercises to help you start coding the ideas you’ve read about. First, though, let’s have some fun with tangible Python code.
Benchmarking with Homegrown Tools
We’ve met quite a few iteration alternatives in this book. Like much in programming, they represent trade-offs, both in subjective factors such as expressiveness and in more objective criteria such as performance. Part of your job as a programmer and engineer is selecting tools based on factors like these.
In terms of performance, this book has mentioned a few times that list comprehensions and map calls sometimes have a speed advantage over for loop statements. It has also noted that sorting ...
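To make that speed claim something you can test for yourself, here is a minimal sketch of the sort of homegrown timing logic this section develops. It is only an illustration, assuming Python 3.X and the standard library’s time.perf_counter; the names timer, for_loop, list_comp, and map_call are hypothetical helpers invented for this sketch, not this chapter’s actual code.

    import time

    def timer(func, *args, reps=1000):
        """Return total seconds for reps calls to func(*args)."""
        start = time.perf_counter()        # high-resolution clock
        for _ in range(reps):
            func(*args)
        return time.perf_counter() - start

    # Three ways to build a list of squares
    def for_loop(n):
        res = []
        for x in range(n):
            res.append(x ** 2)
        return res

    def list_comp(n):
        return [x ** 2 for x in range(n)]

    def map_call(n):
        return list(map(lambda x: x ** 2, range(n)))

    if __name__ == '__main__':
        for test in (for_loop, list_comp, map_call):
            print(test.__name__, '=>', timer(test, 10000))

Repeating each call many times smooths out clock noise, but the absolute numbers still depend on your Python version, platform, and machine load, which is precisely why it pays to benchmark on your own systems rather than rely on general claims.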