Chapter 15
The Path to Exascale
Sean Ahern
Oak Ridge National Laboratory & University of Tennessee, Knoxville
15.1 Introduction
15.2 Future System Architectures
15.3 Science Understanding Needs at the Exascale
15.4 Research Directions
    15.4.1 Data Processing Modes
        15.4.1.1 In Situ Processing
        15.4.1.2 Post-Processing Data Analysis
    15.4.2 Visualization and Analysis Methods
        15.4.2.1 Support for Data Processing Modes
        15.4.2.2 Topological Methods
        15.4.2.3 Statistical Methods
        15.4.2.4 Adapting to Increased Data Complexity
    15.4.3 I/O and Storage Systems
        15.4.3.1 Storage Technologies for the Exascale
        15.4.3.2 I/O Middleware Platforms
15.5 Conclusion and the Path Forward
References
The hardware and system architectural changes that will occur over the next
decade, as high performance computing (HPC) enters the exascale era, will be
dramatic and disruptive. Not only are scientific simulations forecast to grow
by many orders of magnitude, but the current methods by which HPC
systems are programmed and data are stored are not expected to survive into
the exascale. Most of the algorithms outlined in this book have been designed
for the petascale—not the exascale—and simply increasing concurrency is
insufficient to meet the challenges posed by exascale computing. Changing
the fundamental methods by which scientific understanding is obtained from
HPC simulations is daunting. This chapter explores some research directions
for addressing these formidable challenges.
15.1 Introduction
In February 2011, the Department of Energy Office of Advanced Scientific
Computing Research convened a workshop to explore the problem of scientific
understanding of data from HPC at the exascale. The goal of the workshop
was to identify the research directions that the data management, analysis,
and visualization community must take to enable scientific discovery for HPC
at this extreme scale (1 exaflop = 1 quintillion floating point calculations per
second). Projections from the international TOP500 list [9] place the
availability of the first exascale computers at around 2018–2019.
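To put this rate in perspective, an exaflop is a thousand times the rate that defines the petascale:
\[
1\ \text{exaflop} = 10^{18}\ \text{FLOP/s} = 1000 \times 10^{15}\ \text{FLOP/s} = 1000\ \text{petaflops},
\]
roughly three orders of magnitude beyond the systems for which most current visualization and analysis algorithms were designed.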
Extracting scientific insight from large HPC facilities is of crucial
importance for the United States and the world. The scientific simulations that run
on supercomputers are only half of the “science”; scientific advances are made
only once the data produced by the simulations is processed into output that
is understandable by a scientist. As mathematician Richard Hamming said,
“The purpose of computing is insight, not numbers” [17]. It is precisely the
visualization and analysis community that provides the algorithms, research,
and tools to enable that critical insight.
The hardware and software changes that will occur as HPC enters the
exascale era will be dramatic and disruptive. Scientific simulations are forecast
to grow by many orders of magnitude, and the methods by which current HPC
systems are programmed and data are extracted are not expected to survive
into the exascale. Changing the fundamental methods by which scientific
understanding is obtained from HPC simulations is a daunting task.
Dramatic changes in concurrency will force the reformulation of existing
algorithms and workflows and prompt a reconsideration of how best to provide
scalable techniques for scientific understanding. Specifically, the changes are
expected to affect concurrency, memory hierarchies, GPU and other accelerator
processing, communication bandwidth, and, finally, I/O.
This chapter provides an overview of the February 2011 workshop [1],
which examines potential research directions for the community as computing
leaves the petascale era and enters the uncharted exascale era.
15.2 Future System Architectures
For most of the history of scientific computation, Moore's Law [28] has
predicted the doubling of transistors per unit of area and cost every 18 months.
That doubling has been reflected in increased scalar floating point performance
and increased processor core counts, while a fairly standard balance has been
maintained among memory, I/O bandwidth, and CPU performance. However, the
exascale will usher in an age of significant imbalances between system
components, sometimes of several orders of magnitude. These imbalances will
necessitate a significant transformation in how all scientists use HPC resources.
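To put the historical trend in perspective, doubling every 18 months compounds to roughly a hundredfold increase per decade:
\[
2^{\,10/1.5} = 2^{6.67} \approx 100,
\]
that is, about two orders of magnitude of transistor growth every ten years.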
Although extrapolation of current hardware architectures allows for a fairly
