Chapter 9. Threads
We take for granted that modern computer systems can manage many applications and operating system (OS) tasks running concurrently and make it appear that all the software is running simultaneously. Most systems today have multiple processors, multiple cores, or both, and they can achieve an impressive degree of concurrency. The OS still juggles applications at a higher level, but it turns its attention from one to the next so quickly that they also appear to run at once.
Note
In programming, concurrent operation denotes multiple, typically unrelated tasks running at the same time. Think of a fast-food cook preparing multiple orders on a grill. Parallel operation usually involves breaking up a large task into related subtasks that can be run alongside each other to produce the final result more quickly. Our cook could prepare a bacon double cheeseburger “in parallel” by tossing two patties and some bacon on the grill at the same time. In either case, programmers talk more generally about these tasks and subtasks occurring simultaneously. That’s not to say everything starts and stops at the same exact instant, but it does mean that the execution times for those tasks overlap.
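The overlap described above is easy to see with Java's `Thread` class. The sketch below (class and task names are hypothetical, invented for this illustration) starts two unrelated tasks on separate threads, much like the cook working two orders at once; both run with overlapping execution times, and `join()` waits for each to finish.

```java
// A minimal sketch of concurrent execution: two unrelated tasks run on
// separate threads, so their execution times overlap.
public class GrillDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread fries = new Thread(() -> System.out.println("fries: done"));
        Thread burger = new Thread(() -> System.out.println("burger: done"));
        fries.start();   // both tasks begin...
        burger.start();  // ...without waiting for each other
        fries.join();    // wait for each task to complete
        burger.join();
        System.out.println("order up");
    }
}
```

Note that the two "done" lines may print in either order, which is exactly the point: neither task waits on the other.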
In the old days, the unit of concurrency for an operating system was the application or process. To the OS, a process was more or less a black box that decided what to do on its own. If an application required greater concurrency, it could get it only by running multiple processes and communicating ...