# A Better Matrix Multiply (Strassen)

This implementation uses the task scheduler to run a seven-task version of the Strassen algorithm. It is an excellent and simple introduction to using the task scheduler (Chapter 9).

The Strassen algorithm is faster than the standard matrix multiply for large matrices. Volker Strassen published his algorithm in 1969 and was the first to point out that the standard method of Gaussian elimination is not optimal. His paper touched off a search for even faster algorithms.
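Strassen's insight at the 2×2 block level can be sketched in plain C++ (a minimal illustration; these identifiers are not from the book's code): seven multiplications, `p1…p7`, replace the usual eight, and the savings compound at every level of the recursion.

```cpp
#include <cassert>

// A 2x2 matrix of scalars (or, in the full algorithm, of submatrices).
struct M2 { double a11, a12, a21, a22; };

// Strassen's seven products: one fewer multiplication than the
// schoolbook 2x2 multiply, at the cost of extra additions.
M2 strassen2x2(M2 a, M2 b) {
    double p1 = (a.a11 + a.a22) * (b.a11 + b.a22);
    double p2 = (a.a21 + a.a22) * b.a11;
    double p3 = a.a11 * (b.a12 - b.a22);
    double p4 = a.a22 * (b.a21 - b.a11);
    double p5 = (a.a11 + a.a12) * b.a22;
    double p6 = (a.a21 - a.a11) * (b.a11 + b.a12);
    double p7 = (a.a12 - a.a22) * (b.a21 + b.a22);
    return { p1 + p4 - p5 + p7,   // c11
             p3 + p5,             // c12
             p2 + p4,             // c21
             p1 - p2 + p3 + p6 }; // c22
}
```

In the full algorithm the scalars above become half-size submatrices, so each `pj` is itself a recursive multiply, which is what makes the seven products natural units of parallel work.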

The parallel implementation is found in the `StrassenMultiply` class. Instead of a recursive function call, we create a new task of type `StrassenMultiply` that operates on the submatrices (see Example 11-35). Seven new tasks are created to compute `p1…p7`. Those new tasks are put into a `tbb::task_list` (Chapter 9), and then they are all spawned by the `spawn_and_wait_for_all(list)` function. After all of the children have finished, the parent task calculates the resulting submatrices. The recursion ends when the matrix size drops below the cutoff parameter, at which point a serial algorithm multiplies these smaller matrices.
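The same seven-way fork/join structure can be sketched in a self-contained way with `std::async` standing in for the TBB task scheduler (the book's code uses `StrassenMultiply` tasks, a `tbb::task_list`, and `spawn_and_wait_for_all`; everything else here — names, layout, the cutoff handling — is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <future>
#include <vector>

using Mat = std::vector<double>;  // row-major n x n, n a power of two

// Serial schoolbook multiply, used once the recursion hits the cutoff.
Mat naive(const Mat& a, const Mat& b, int n) {
    Mat c(n * n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k)
            for (int j = 0; j < n; ++j)
                c[i * n + j] += a[i * n + k] * b[k * n + j];
    return c;
}

Mat add(const Mat& x, const Mat& y) {
    Mat r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = x[i] + y[i];
    return r;
}

Mat sub(const Mat& x, const Mat& y) {
    Mat r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = x[i] - y[i];
    return r;
}

// Copy out quadrant (qi, qj) of an n x n matrix.
Mat quad(const Mat& m, int n, int qi, int qj) {
    int h = n / 2;
    Mat r(h * h);
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j)
            r[i * h + j] = m[(qi * h + i) * n + (qj * h + j)];
    return r;
}

Mat strassen(const Mat& a, const Mat& b, int n, int cutoff) {
    if (n <= cutoff) return naive(a, b, n);  // recursion ends at the cutoff
    int h = n / 2;
    Mat a11 = quad(a, n, 0, 0), a12 = quad(a, n, 0, 1),
        a21 = quad(a, n, 1, 0), a22 = quad(a, n, 1, 1);
    Mat b11 = quad(b, n, 0, 0), b12 = quad(b, n, 0, 1),
        b21 = quad(b, n, 1, 0), b22 = quad(b, n, 1, 1);
    // Launch the seven child computations; each receives freshly
    // allocated operands, mirroring the seven StrassenMultiply tasks.
    auto p1 = std::async(std::launch::async, strassen, add(a11, a22), add(b11, b22), h, cutoff);
    auto p2 = std::async(std::launch::async, strassen, add(a21, a22), b11, h, cutoff);
    auto p3 = std::async(std::launch::async, strassen, a11, sub(b12, b22), h, cutoff);
    auto p4 = std::async(std::launch::async, strassen, a22, sub(b21, b11), h, cutoff);
    auto p5 = std::async(std::launch::async, strassen, add(a11, a12), b22, h, cutoff);
    auto p6 = std::async(std::launch::async, strassen, sub(a21, a11), add(b11, b12), h, cutoff);
    auto p7 = std::async(std::launch::async, strassen, sub(a12, a22), add(b21, b22), h, cutoff);
    // Wait for all children (the analogue of spawn_and_wait_for_all).
    Mat q1 = p1.get(), q2 = p2.get(), q3 = p3.get(), q4 = p4.get(),
        q5 = p5.get(), q6 = p6.get(), q7 = p7.get();
    // Combine the seven products into the four result quadrants.
    Mat c11 = add(sub(add(q1, q4), q5), q7);
    Mat c12 = add(q3, q5);
    Mat c21 = add(q2, q4);
    Mat c22 = add(sub(add(q1, q3), q2), q6);
    Mat c(n * n);
    auto put = [&](const Mat& q, int qi, int qj) {
        for (int i = 0; i < h; ++i)
            for (int j = 0; j < h; ++j)
                c[(qi * h + i) * n + (qj * h + j)] = q[i * h + j];
    };
    put(c11, 0, 0); put(c12, 0, 1); put(c21, 1, 0); put(c22, 1, 1);
    return c;
}
```

Unlike this sketch, the TBB version does not dedicate a thread per child: the task scheduler queues all seven tasks and runs them on its worker pool, which is why the book recommends it over raw threads for this kind of recursive parallelism.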

The serial version uses two temporary arrays (`a_cum` and `b_cum`) to store operands such as `(a11+a22)` and `(b21+b22)` for the recursive call that computes each `pj`. Those arrays are reused to compute `p1…p7`. For Threading Building Blocks, this would not work: the memory would be overwritten by the code that prepares the arguments for the next `pj` while earlier tasks were still reading it. So, we have to allocate ...
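The fix can be sketched as follows (the names `a_cum` and `b_cum` come from the serial code; the helper and function names here are illustrative): when operand preparation runs concurrently, each task builds its operands in its own freshly allocated buffer rather than writing into one shared scratch array.

```cpp
#include <cassert>
#include <cstddef>
#include <future>
#include <vector>

using Vec = std::vector<double>;

Vec vadd(const Vec& x, const Vec& y) {
    Vec r(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) r[i] = x[i] + y[i];
    return r;
}

// Each concurrent task returns its operand in its own new buffer; a
// shared a_cum/b_cum scratch array, as in the serial code, would end up
// holding only whichever task wrote it last.
std::vector<Vec> prepare_operands_concurrently(const Vec& a11, const Vec& a12,
                                               const Vec& a21, const Vec& a22) {
    auto t1 = std::async(std::launch::async, vadd, a11, a22);  // operand for p1
    auto t2 = std::async(std::launch::async, vadd, a21, a22);  // operand for p2
    auto t5 = std::async(std::launch::async, vadd, a11, a12);  // operand for p5
    return { t1.get(), t2.get(), t5.get() };
}
```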
