21 Performance and Computer Memory
This chapter discusses ways to make your code run faster. It breaks into two quite distinct topics:
- The theory of how fast an algorithm is in the abstract, independent of the details of the computer or the implementation. You don’t need to know a whole lot about this subject – mostly just how to avoid a few potentially catastrophic errors.
- Various nitty‐gritty performance optimizations, which mostly involve making good use of the computer’s memory and cache.
The first of these topics relates to figuring out which algorithms are fundamentally and theoretically better than others. The second topic is about how to eke out real‐world performance gains for whatever abstract algorithm you are using.
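As a concrete taste of the first topic, here is a minimal sketch of the kind of "potentially catastrophic" algorithmic mistake the chapter has in mind: testing membership with `in` is O(n) for a Python list but O(1) on average for a set, so the wrong choice of data structure can dominate your runtime. The sizes and variable names here are illustrative, not from the text.

```python
import timeit

# Build the same data as a list and as a set. Membership testing
# (`x in data`) must scan a list element by element, but a set uses
# hashing and checks membership in roughly constant time.
n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Time a worst-case lookup: the last element, so the list scan is longest.
list_time = timeit.timeit(lambda: (n - 1) in as_list, number=100)
set_time = timeit.timeit(lambda: (n - 1) in as_set, number=100)

print(f"list membership: {list_time:.4f}s  set membership: {set_time:.6f}s")
```

On any typical machine the set lookups finish orders of magnitude faster, even though both lines of code look identical at the call site; that invisibility is what makes this class of error easy to commit and worth knowing how to avoid.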
21.1 A Word of Caution
Don Knuth, the computer scientist who popularized the Big‐O notation discussed in this chapter, is often quoted as saying that “premature optimization is the root of all evil.” This chapter will discuss many techniques for making your code faster, but the first question to ask is whether you want to make it faster at all. Extra speed is nice, but if the optimizations will take a long time to implement, or if they will make your code difficult to understand and modify, it is often best to leave well enough alone. In production systems that will be widely deployed, obsessing about performance is sometimes justified. And certainly, if you’re working on a large dataset, you need to make sure things scale gracefully. But, most ...