Chapter 12. Parallel Threads
It’s 99 revolutions tonight.
Green Day, “99 Revolutions”
Just about all the computers sold in the last few years—even many telephones—are multicore. If you are reading this on a keyboard-and-monitor computer, you may be able to find out how many cores your computer has via:
- Linux: grep cores /proc/cpuinfo
- Mac: sysctl hw.logicalcpu
- Cygwin: env | grep NUMBER_OF_PROCESSORS
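If you would rather query the core count from inside a program, here is a minimal sketch (not from the book) using the widely available sysconf(_SC_NPROCESSORS_ONLN) call, which Linux, macOS, and Cygwin all provide:

/* Report how many processors are online. Assumes a Unix-like system
   that provides _SC_NPROCESSORS_ONLN. */
#include <stdio.h>
#include <unistd.h>

int main(void){
    long procs = sysconf(_SC_NPROCESSORS_ONLN);
    if (procs < 1) procs = 1;      /* fall back to one if the query fails */
    printf("%ld processors available\n", procs);
}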
A single-threaded program doesn’t make full use of the resources the hardware manufacturers gave us. Fortunately, it doesn’t take much to turn a program into one with concurrent parallel threads—in fact, it often only takes one extra line of code. In this chapter, I will cover:
- A quick overview of the several standards and specifications that exist for writing concurrent C code
- The one line of OpenMP code that will make your for loops multithreaded (see the sketch after this list)
- Notes on the compiler flags you’ll need to compile with OpenMP or pthreads
- Some considerations of when it’s safe to use that one magic line
- Implementing map-reduce, which requires extending that one line by another clause
- The syntax for running a handful of distinct tasks in parallel, like the UI and backend of a GUI-based program
- C’s _Thread_local keyword, which makes thread-private copies of global static variables
- Critical regions and mutexes
- Atomic variables in OpenMP
- A quick note on sequential consistency and why you want it
- POSIX threads, and how they differ from OpenMP
- Atomic scalar variables via C ...
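To make that one magic line concrete before we start, here is a minimal sketch of the kind of loop this chapter builds toward. The array, its size, and the sum are invented for illustration; the second pragma previews the map-reduce clause covered later. Compile with your compiler's OpenMP flag (for gcc or clang, -fopenmp).

/* A sketch of the one-line OpenMP approach. The first pragma splits the
   loop's iterations across threads; adding a reduction clause turns the
   second loop into a map-reduce. */
#include <stdio.h>

int main(void){
    enum { len = 1000000 };
    static double x[len];
    double sum = 0;

    /* The one magic line: run the iterations in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < len; i++) x[i] = i / 2.0;

    /* One more clause: each thread keeps a private partial sum, and
       OpenMP adds them together at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < len; i++) sum += x[i];

    printf("sum: %g\n", sum);
}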