Amdahl's Law quantifies the fact that the sequential portion of an application will put a lid on its potential scalability. This is best illustrated by an example such as matrix multiplication. It consists of three stages:

Initialization: read in the matrices' data values.

Multiplication: multiply the two matrices.

Presentation: present the resulting matrix.

Let's assume further that the whole computation takes 10 ms, broken down as follows (these numbers are completely fictitious but help illustrate the point):

Initialization: 2 ms

Multiplication: 6 ms

Presentation: 2 ms
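With these numbers, Amdahl's Law makes the ceiling concrete: only the 6 ms multiplication phase (60% of the work) can be parallelized, so even with unlimited processors the run can never go faster than the 4 ms of sequential work, a 2.5x speedup at most. A minimal sketch of that arithmetic (the function name and the processor counts are illustrative, not from the text):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's Law: overall speedup when a fraction `parallel_fraction`
    of the work is divided across `n_processors`; the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Fictitious breakdown from the text: 2 ms init + 6 ms multiply + 2 ms present.
# Only the 6 ms multiplication phase (6/10 = 60%) is parallelizable.
p = 6 / 10

print(amdahl_speedup(p, 2))             # with 2 processors
print(amdahl_speedup(p, 4))             # with 4 processors
print(amdahl_speedup(p, float("inf")))  # unlimited processors: caps at 2.5
```

Even an infinite number of processors cannot push the speedup past 1 / 0.4 = 2.5, because the 2 ms of initialization and 2 ms of presentation still run sequentially.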

Over the years, many clever parallel algorithms have been developed to exploit multiprocessor architectures to speed up the matrix multiplication phase. Not so with ...
