Chapter 3. Understanding Parallelism Granularity

One of the most essential tasks when writing parallel code is dividing your program into subsets that will run in parallel and communicate with each other. Sometimes the task naturally splits into separate pieces, but usually it is up to you to decide which parts to run in parallel. Should you use a small number of large tasks, many small tasks, or a mix of large and small tasks at the same time?

In theory, it does not matter: an ideal computational device would incur no overhead for creating a worker thread or for distributing work among any number of threads. On a real CPU, however, this overhead is significant, and it is very important to take it into account. ...
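To make the granularity trade-off concrete, here is a minimal sketch (not taken from the book) that sums squares of an array two ways using the Task Parallel Library: a fine-grained version that spawns one task per item, and a coarse-grained version that spawns one task per logical processor. The item count and the chunking scheme are arbitrary choices for illustration; on a real machine the fine-grained version is expected to be far slower because per-task creation and scheduling overhead dwarfs the trivial work each task performs.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class GranularityDemo
{
    const int ItemCount = 100_000; // arbitrary size for the demonstration

    // Fine-grained: one Task per item. The overhead of creating and
    // scheduling each task dominates the tiny amount of work it does.
    static long SumFineGrained(int[] data)
    {
        var tasks = data.Select(x => Task.Run(() => (long)x * x)).ToArray();
        Task.WaitAll(tasks);
        return tasks.Sum(t => t.Result);
    }

    // Coarse-grained: one Task per logical processor, each summing a
    // contiguous chunk, so the per-task overhead is paid only a few times.
    static long SumCoarseGrained(int[] data)
    {
        int workers = Environment.ProcessorCount;
        int chunk = (data.Length + workers - 1) / workers;
        var tasks = Enumerable.Range(0, workers)
            .Select(w => Task.Run(() =>
            {
                long local = 0;
                int start = w * chunk;
                int end = Math.Min(start + chunk, data.Length);
                for (int i = start; i < end; i++)
                    local += (long)data[i] * data[i];
                return local;
            }))
            .ToArray();
        Task.WaitAll(tasks);
        return tasks.Sum(t => t.Result);
    }

    static void Main()
    {
        var data = Enumerable.Range(1, ItemCount).ToArray();

        var sw = Stopwatch.StartNew();
        long coarse = SumCoarseGrained(data);
        Console.WriteLine($"Coarse-grained: {coarse} in {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        long fine = SumFineGrained(data);
        Console.WriteLine($"Fine-grained:   {fine} in {sw.ElapsedMilliseconds} ms");
    }
}
```

Both versions compute the same result; the difference you observe in elapsed time is purely the cost of choosing a granularity that is too fine for the amount of work per task.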
