In February 2011, the Department of Energy Office of Advanced Scientific Computing Research convened a workshop to explore the problem of scientific understanding of data from HPC at the exascale. The goal of the workshop was to identify the research directions that the data management, analysis, and visualization community must take to enable scientific discovery for HPC at this extreme scale (1 exaflop = 1 quintillion, or 10^18, floating point operations per second). Projections from the international TOP500 list place the availability of the first exascale computers at around 2018–2019.
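As a rough illustration of how such projections are made (a back-of-envelope sketch, not a calculation from the workshop report), the Python snippet below extrapolates peak TOP500 performance under an assumed doubling period; the starting performance, doubling rate, and reference year are illustrative assumptions.

import math

# Assumed inputs (illustrative, not from the workshop report):
# the TOP500 leader circa mid-2011 delivered roughly 8 petaflops, and
# peak list performance has historically doubled about every 14 months.
DOUBLING_MONTHS = 14      # assumed performance-doubling period
START_YEAR = 2011         # reference year for the extrapolation
START_PFLOPS = 8.0        # assumed peak performance in the reference year
TARGET_PFLOPS = 1000.0    # 1 exaflop = 1000 petaflops

# Count the doublings needed to reach the target, then convert to years.
doublings = math.log2(TARGET_PFLOPS / START_PFLOPS)
years = doublings * DOUBLING_MONTHS / 12.0
print(f"~{doublings:.1f} doublings, exascale around {START_YEAR + years:.0f}")

Under these assumptions, the extrapolation lands near 2019, consistent with the 2018–2019 projection above.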
Extracting scientific insight from large HPC facilities is of crucial importance for the United States and the world. The scientific simulations that run on supercomputers are only half of the “science”; scientific advances are made only once the data produced by the simulations is processed into output that is understandable by a scientist. As mathematician Richard Hamming said, “The purpose of computing is insight, not numbers.” It is precisely the visualization and analysis community that provides the algorithms, research, and tools to enable that critical insight.
The hardware and software changes that will occur as HPC enters the exascale era will be dramatic and disruptive. Scientific simulations are forecast to grow in scale by many orders of magnitude, and the methods by which current HPC systems are programmed and data is extracted from them are not expected to survive into the exascale. Changing the fundamental methods by which scientific understanding is obtained from HPC simulations is a daunting task. Dramatic increases in concurrency will force existing algorithms and workflows to be reformulated and will prompt a reconsideration of how best to provide scalable techniques for scientific understanding. Specifically, the changes are expected to affect concurrency, memory hierarchies, GPU and other accelerator processing, communication bandwidth, and, finally, I/O.
This chapter provides an overview of the February 2011 workshop, which examined potential research directions for the community as computing leaves the petascale era and enters the uncharted exascale era.
15.2 Future System Architectures
For most of the history of scientific computation, Moore’s Law has predicted the doubling of transistors per unit of area and cost every 18 months, a trend reflected in increasing scalar floating point performance and processor core counts while a fairly standard balance was maintained among memory, I/O bandwidth, and CPU performance. However, the exascale will usher in an age of significant imbalances between system components, sometimes by several orders of magnitude. These imbalances will necessitate a significant transformation in how all scientists use HPC resources.
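As a reference point for the growth rate just described, Moore’s Law can be written as a simple doubling law (a standard formulation, not notation from the workshop report):

\[
N(t) = N_0 \, 2^{t/T}, \qquad T \approx 18\ \text{months},
\]

where $N_0$ is the transistor count at a reference time. Over a decade this gives $120/18 \approx 6.7$ doublings, roughly a $100\times$ increase in transistor count.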
Although extrapolation of current hardware architectures allows for a fairly