Chapter 10. Concurrency
There was a time, not all that long ago, when it was easy to improve the performance of your programs. You could take a look at what a program was doing using a profiler and study the inner loops of the code. You could write dozens of test cases, varying the load on the program, to see which optimization was working under what set of circumstances. You could handcraft clever data structures that would save cycles per call. If you were industrious and lucky, all of this work would take about 18 months, at which time the new generation of processors would become available and your program would suddenly run about twice as fast. Lather, rinse, repeat.
But this pattern has changed recently. Cranking up the clock (and therefore the speed) on a processor has become harder and harder to do. People are now worried about energy efficiency, which goes down as the clock goes up. The corollary to the energy problem is the heat problem; chips are getting harder and harder to cool as they go faster and faster. It has been some time since the raw speed of the CPU saw a significant bump.
We are still seeing the effects of Moore’s Law; CPU designers are still cramming more and more transistors on each piece of silicon. But rather than using those transistors to make the CPU faster, they have moved to producing multicore chips, in which multiple copies of the CPU share a chip. The idea is that if you have multiple programs running on a machine, you can run each program ...
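To make the multicore idea concrete, here is a minimal sketch (not from the original text; the class name and worker logic are illustrative only) of how a Java program can ask the JVM how many cores are available and start one worker thread per core:

    public class PerCoreWorkers {
        public static void main(String[] args) throws InterruptedException {
            // Ask the JVM how many cores the operating system will let it use.
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("Available cores: " + cores);

            // Start one worker thread per core; the scheduler is then free
            // to place each thread on its own core.
            Thread[] workers = new Thread[cores];
            for (int i = 0; i < cores; i++) {
                final int id = i;
                workers[i] = new Thread(() -> System.out.println("worker " + id + " running"));
                workers[i].start();
            }

            // Wait for every worker to finish before exiting.
            for (Thread worker : workers) {
                worker.join();
            }
        }
    }

Of course, spreading threads across cores only speeds things up when the work can actually proceed in parallel, which is the problem the rest of this chapter takes up.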