This chapter discusses ways to make your code run faster. It breaks roughly into two distinct topics:

- The theory of how fast an algorithm is in the abstract, independent of the details of the computer or the implementation. You don't need to know a whole lot about this subject – mostly just how to avoid a few potentially catastrophic errors.
- Various nitty-gritty performance optimizations, which mostly involve making good use of the computer's memory and cache.

The first of these topics relates to figuring out which algorithms are fundamentally, theoretically better than others. The second topic is about how to eke out real-world performance gains for whatever abstract algorithm you are using.

The following script examines two different ways to solve the same problem: given a list of items, count how many of the entries are duplicates of some other entry. It sees how long each algorithm takes for lists of varying length and plots out the time.

The two ways I solve the problem are as follows:

- For each element *x* in the list, count how many times it occurs in the list; if it occurs more than once, then it is a duplicate. The key thing here is that counting the occurrences of *x* requires searching through the whole list and checking whether each element is equal to *x*. That is, for every element of the list, we loop through the whole list.
- Keep a dictionary that maps elements in the list to how many times they occur. Building the dictionary takes a single pass through the list, and looking up each element's count afterward takes roughly constant time, so we avoid the nested loop entirely.
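The two approaches might be sketched as follows (the function names here are illustrative, not the ones from the book's script):

```python
def count_duplicates_quadratic(items):
    """For each element, scan the whole list to count its occurrences.

    Every call to items.count(x) loops over the full list, so this
    does a full pass per element: quadratic in the list length.
    """
    return sum(1 for x in items if items.count(x) > 1)


def count_duplicates_dict(items):
    """Build a dictionary of occurrence counts in one pass, then use it.

    Each element is touched a constant number of times, so the total
    work grows linearly with the list length.
    """
    counts = {}
    for x in items:
        counts[x] = counts.get(x, 0) + 1
    return sum(1 for x in items if counts[x] > 1)
```

For example, on `[1, 2, 2, 3, 3, 3]` both functions report 5 duplicate entries (the two 2s and the three 3s); the difference only shows up in how their running times grow as the list gets longer.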
