Chapter 9. Searching
As I mentioned at the beginning of Chapter 8, it has long been a popular claim that more than 80% of all computing cycles are devoted to sorting. This was especially true when mainframes were just about the only computers around and the majority of them were doing business-centric computations. Querying and managing databases, payroll, loan applications, medical billing, and similar processing all involved records with names and ID numbers that needed to be sorted.
Today, there is still quite a bit of sorting going on all the time (you still want to collect your paycheck, and the security device supplier still needs to know how many THX-1138/GL steam-powered tasers are on hand). With the advent and popularity of search engines on the Internet, I think that searching has certainly become more high-profile and also accounts for a bigger slice of the total computing cycle pie.
In this chapter, I’ll discuss two algorithms you can use to search through a collection of data and how to make them concurrent to decrease the time needed to locate items of interest. For simplicity, I will assume that all keys within a data set to be searched are unique. Strategies for dealing with duplicate keys will be mentioned, but the implementation details are left to you. More complex or proprietary searching techniques (e.g., Google, Lucene), while interesting, are beyond the scope of this chapter.
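Before diving into the specific algorithms, it may help to see the general shape of making a search concurrent: divide the data set into chunks and let each worker scan its own chunk independently. The sketch below is an illustrative example only, not code from this chapter; the function names (`search_chunk`, `concurrent_search`) and the use of Python's thread pool are my own assumptions for demonstration, and it relies on the unique-key assumption stated above (at most one worker can find the key).

```python
from concurrent.futures import ThreadPoolExecutor

def search_chunk(data, key, lo, hi):
    # Sequentially scan data[lo:hi] for key; return its index or -1.
    for i in range(lo, hi):
        if data[i] == key:
            return i
    return -1

def concurrent_search(data, key, num_workers=4):
    # Divide the data into num_workers contiguous chunks and search
    # each chunk in its own task. Because keys are unique, at most one
    # task can return a non-negative index.
    n = len(data)
    if n == 0:
        return -1
    chunk = (n + num_workers - 1) // num_workers
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(search_chunk, data, key, lo, min(lo + chunk, n))
                   for lo in range(0, n, chunk)]
        result = -1
        for f in futures:
            pos = f.result()
            if pos >= 0:
                result = pos
        return result
```

Note that this sketch waits for every chunk to finish; a production version would typically signal the other workers to stop as soon as one of them finds the key, a refinement taken up later in the chapter.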