Chapter 28
Parallel and Distributed Computing
28.1 DAWN OF PARALLELISM
Parallel computing is a form of computation in which more than one
calculation is carried out concurrently. A parallel computer is a
computer system with multiple processing elements that work in parallel
to solve a problem. Before the mid-1950s, all commercial computers
were traditional serial computers.
The IBM 704 was introduced in 1954. It was the first mass-produced
computer with floating-point arithmetic hardware and could execute up to
4,000 instructions per second. The IBM 704 was a very successful commercial
computer. However, after the mid-1950s some research projects
needed much faster computers. For example, the University of California
Radiation Laboratory (UCRL) in Livermore, California, and the Los Alamos
Scientific Laboratory (LASL) wanted high-performance computers for
their projects. In April 1955, IBM submitted a proposal to UCRL, but
UCRL rejected it and instead contacted Remington Rand
(UNIVAC). IBM then submitted a proposal for STRETCH (also known as
the IBM 7030) to LASL in 1956, and was awarded the contract for LASL's
high-performance computer system.
STRETCH was an amazing computer system for the 1950s. It contained
many high-performance features, such as local concurrency,
nonlocal concurrency, multiprogramming, a look-ahead approach to start
memory fetches early, and pipeline utilization. John Cocke (1925–2002)
contributed to developing these ideas. Given these features, we can say that
STRETCH was an aggressive computer system with single-processor
parallelism, and we may therefore consider the start of the STRETCH project
the dawn of parallelism. The STRETCH design had its roots in initial
studies, begun in 1954 by Stephen W. Dunwell (1913–1994) and Werner
Buchholz (1922–), on advanced concepts for high-performance computing.
The STRETCH project started formally in 1955, after UNIVAC won the
contract to build the Livermore Automatic Research Computer (LARC).
After losing the LARC competition, IBM proposed to the Los Alamos
Scientific Laboratory in 1955 a high-performance computer system that
would be 100 times faster than the IBM 704. John Cocke won the Turing
Award in 1987 for his large contributions to computer architecture and
compiler optimization.
In 1961, actual benchmarks indicated that the performance of the IBM
7030 was only about 30 times faster than that of the IBM 704. While the
IBM 7030 was not considered successful, it spawned technologies incorporated
into future computer systems. STRETCH was conceived as a
supercomputer, since its high performance and new concepts of advanced
technology were far beyond the level of existing computer systems in
the 1950s. Many advanced technologies developed in the STRETCH
project were incorporated into later computer designs, such as the IBM
System/360 models, the IBM System/370 models, and the IBM 3090 series. As
editor, Werner Buchholz published a book about the STRETCH project
in 1962 [7]. He is the person who coined the term byte, a unit
of digital information (1 byte = 8 bits), in 1956. The first STRETCH was
delivered to the Los Alamos Scientific Laboratory (LASL) in 1961 and used
until 1971. The second STRETCH was delivered to the U.S. National Security
Agency as part of the HARVEST system in 1962. Altogether, eight STRETCH
systems (six in the United States, one in the UK, and one in France) were
sold from 1961 to 1963.
Frances E. Allen (1932–) joined IBM in 1957 and ended up staying there
for 45 years. Her work has had a strong impact on compiler research and
practice. She introduced many of the abstractions, algorithms, and
implementations that laid the groundwork for automatic program optimization
technology. Allen developed and implemented her methods as part of the
compiler for the IBM STRETCH-HARVEST system. In 2006 she became the
first woman to win the Turing Award.