Chapter 12. Performing in Parallel

A natural way to approach parallel computing is to ask, “How do I do many things at once?” In practice, though, the problem more often arises as, “How do I solve my one problem faster?” From the standpoint of the computer or operating system, parallelism is about performing tasks simultaneously. From the user’s perspective, parallelism is about determining the dependencies between pieces of code and data, so that the independent pieces may execute at the same time and the whole program finishes faster.
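For example, when the same calculation must be applied to many independent inputs, each application can proceed at once. The following is a minimal sketch using Python’s standard multiprocessing module; the grav_potential() function and the list of masses are illustrative stand-ins for any set of independent computations:

    # A minimal sketch of data parallelism with the standard library.
    # grav_potential() and masses are hypothetical placeholders; any
    # collection of independent computations would work the same way.
    from multiprocessing import Pool

    def grav_potential(mass, r=1.0e7):
        """Gravitational potential at distance r from a point mass, in J/kg."""
        G = 6.674e-11  # gravitational constant, m**3 / (kg * s**2)
        return -G * mass / r

    if __name__ == '__main__':
        masses = [5.972e24, 7.348e22, 1.989e30]  # example masses in kg

        # Serial version: x, then y, then z, one result at a time.
        serial = [grav_potential(m) for m in masses]

        # Parallel version: because no call depends on any other, the
        # work may be farmed out to a pool of worker processes.
        with Pool() as pool:
            parallel = pool.map(grav_potential, masses)

        assert serial == parallel

Here the dependency analysis is trivial: each element of the result depends only on the corresponding input, so the map may be split across as many processors as are available.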

Programming in parallel can be fun. It represents a different way of thinking from the traditional “x, then y, then z” procedural style we have seen up until now. That said, while parallelism typically makes programs faster for the computer to execute, it makes them harder for you to write. Debugging, opening files, and even printing to the screen all become more difficult to reason about in the face of P processors. Parallel computing has its own set of rewards, challenges, and terminology. These are important to understand, because you must be more like a mechanic than a driver when programming in parallel.
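To see why even printing becomes tricky, consider the following minimal sketch, again using the standard multiprocessing module; the greet() function and the choice of four processes are purely illustrative. Each run may interleave the output lines in a different order:

    # A minimal sketch of why output is harder to reason about with
    # P processes: the workers all write to the same screen, and the
    # order of their lines is not guaranteed from run to run.
    from multiprocessing import Process

    def greet(rank):
        print("hello from process", rank)

    if __name__ == '__main__':
        procs = [Process(target=greet, args=(p,)) for p in range(4)]
        for proc in procs:
            proc.start()
        for proc in procs:
            proc.join()
        # All four greetings appear, but in no particular order.

With one processor, the program’s behavior follows from reading it top to bottom; with four, you must also reason about how the processes are scheduled.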

Physicists often turn to parallelism only when their problems finally demand it, and they tend to put off that need for as long as possible. This is because it is not yet easier to program in parallel than it is to program procedurally. Here are some typical reasons that parallel solutions are implemented:

  • The problem creates or requires too much data for a normal machine.

  • The sun would explode before the computation ...
