Chapter 1
Introduction and Brief History of Supercomputing
Merle Giles and Anwar Osseyran
CONTENTS
1.1 Supercomputing 101
1.2 The Dawn of Digital Electronics: 1930s–1940s
1.3 The Dawn of Supercomputing: 1950s–1970s
1.4 Growth and Commoditization of Supercomputing: 1980s–2000s
1.5 Supercomputers for Open Science and Open Engineering
1.6 Supercomputers for Industry
1.7 Modern Industrial Supercomputing
Additional Reading
References
1.1 SUPERCOMPUTING 101
Supercomputing, high-performance computing (HPC), and advanced computing are terms that are often used interchangeably. These advanced machines use components similar to everyday computers, such as memory, processors known as CPUs (central processing units), data storage, communication, and software, but supercomputers use these in ways that produce dramatically faster results. The "trick" to supercomputing is in the scale of use: stringing together vast numbers of each of the components listed above and parallelizing data movement at every possible step.
Typically, communication between multiple computers in a data center occurs over common Ethernet cables, often seen in the form of orange
or blue cables in homes and businesses. In supercomputing applications, even faster communication is used: highly proprietary communication schemes push speeds to 4–100 times the speed of common Ethernet, using parallel, simultaneous data paths to push and pull data from CPUs to storage and memory and back.
Four basic components of supercomputing can be described as follows:
1. Parallel processing, using multiple cores per CPU, multiple CPUs in a single computer, and multiple computers in a supercomputer. A good consumer example is a dual-core or quad-core processor in a laptop computer, which allows software applications to do two or four operations at once. CPUs used in supercomputers typically have between 8 and 32 cores, and newer general-purpose graphics processing units (GPUs) deploy hundreds of cores. (A minimal multicore sketch appears after this list.)
2. Large memory, with each computer holding 32, 64, 256, or more gigabytes of memory, depending on the specific science or engineering need.
3. Multiprocessor, or multinode, communication, allowing an entire collection of computers to be used in a single application. Special software techniques, such as MPI (Message Passing Interface), must be used to share an application across two or more computers. Supercomputers might pass information simultaneously among 25,000 or more individual computers. (A minimal MPI sketch follows this list.)
4. Parallel le systems move data between compute nodes and storage
(disks, magnetic tape, and solid-state devices), using multiple
channels simultaneously, rather than one, to speed up data input
and output. In very advanced systems, large caches of memory sit
between the storage and compute systems to minimize bottlenecks.
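To make the multicore parallelism of item 1 concrete, here is a minimal sketch in C using OpenMP, a widely used shared-memory threading standard. It is illustrative rather than drawn from this book; the loop bound is arbitrary.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        double sum = 0.0;
        /* Each core takes a slice of the loop; OpenMP combines the
           per-thread partial sums into one result. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < 1000000; i++) {
            sum += (double)i;
        }
        printf("sum = %f, using up to %d cores\n",
               sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop's iterations are divided among the available cores automatically.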
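Item 3 names MPI explicitly. The sketch below, again illustrative, shows the canonical MPI pattern in C: every process learns its rank, and a collective operation combines values from all processes onto one of them. It assumes an MPI implementation such as MPICH or Open MPI is installed.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id       */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count     */

        int local = rank, total = 0;
        /* Combine every process's value onto rank 0; the same pattern
           scales from 2 processes to 25,000 or more. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);
        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with the mpicc wrapper and launched across nodes with a command like mpirun -np 4 ./ranksum (the executable name here is invented for the example).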
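The parallel file systems of item 4 are usually reached through a parallel I/O interface. As one illustration, the MPI standard's own I/O routines (MPI-IO) let every process write a disjoint block of a single shared file, so a parallel file system can service the writes over multiple channels at once. The filename and block size below are invented for the sketch.

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int block[256];
        for (int i = 0; i < 256; i++)
            block[i] = rank;                 /* fill with this rank's id */

        MPI_File fh;
        /* All processes open the same file; the parallel file system
           handles their simultaneous accesses. */
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);

        /* Each rank writes at its own offset, so no two ranks collide. */
        MPI_Offset offset = (MPI_Offset)rank * sizeof(block);
        MPI_File_write_at(fh, offset, block, 256, MPI_INT,
                          MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }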
As in all computers, supercomputers require an operating system, commonly Linux, as well as machine-management software known as middleware. Specific application software maps to the operating system, and ultimately to the machine components themselves, through the use of compilers, which translate human-readable programs into machine language consisting only of zeros and ones.
Supercomputing adds significant complexity for users due to idiosyncrasies of various vendor platforms, processors, storage disks, and so on. These differences are routinely subtle and nonobvious, leading to
