High Performance Visualization by E. Wes Bethel, Charles Hansen, Hank Childs


Chapter 8
Progressive Data Access for Regular Grids
John Clyne
Computational Information Systems Laboratory, National Center for Atmospheric Research
8.1 Introduction
8.2 Preliminaries
8.3 Z-Order Curves
    8.3.1 Constructing the Curve
    8.3.2 Progressive Access
8.4 Wavelets
    8.4.1 Linear Decomposition
    8.4.2 Scaling and Wavelet Functions
    8.4.3 Wavelets and Filter Banks
    8.4.4 Compression
    8.4.5 Boundary Handling
    8.4.6 Multiple Dimensions
    8.4.7 Implementation Considerations
        8.4.7.1 Blocking
        8.4.7.2 Wavelet Choice
        8.4.7.3 Coefficient Addressing
    8.4.8 A Hybrid Approach
    8.4.9 Volume Rendering Example
8.5 Further Reading
References
This chapter presents three progressive refinement methods for data sampled
on a regular grid. Two of the methods are based on multiresolution: the grid
may be coarsened or refined as needed by dyadic factors. The third is based
on the energy compaction properties of the discrete wavelet transform, which
enables the sparse representation of signals. In all cases, the objective is to
afford the end user the ability to make trade-offs between fidelity and speed, in
response to the available computing resources, when visualizing or analyzing
large data.
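To make the notion of energy compaction concrete, the following minimal sketch (illustrative only, not the wavelet machinery developed in Section 8.4; the function and variable names are invented for this example) applies a single level of the orthonormal Haar transform to a smooth signal. Nearly all of the signal's energy lands in the coarse (scaling) coefficients, while the detail (wavelet) coefficients are close to zero and can be discarded or coarsely quantized with little loss:

    import numpy as np

    def haar_step(signal):
        # One level of the orthonormal Haar transform: split a length-2N
        # signal into N coarse (scaling) and N detail (wavelet) coefficients.
        even, odd = signal[0::2], signal[1::2]
        coarse = (even + odd) / np.sqrt(2.0)
        detail = (even - odd) / np.sqrt(2.0)
        return coarse, detail

    x = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))   # a smooth test signal
    coarse, detail = haar_step(x)
    print("energy in coarse coefficients:", np.sum(coarse ** 2))
    print("energy in detail coefficients:", np.sum(detail ** 2))

Discarding the near-zero detail coefficients is what makes the sparse representation referred to above possible.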
8.1 Introduction
Fueled by decades of exponential increases in microprocessor performance,
computational scientists in a diverse set of disciplines have enjoyed unprecedented
supercomputing capabilities. With the increase in computing power, however, come
more sophisticated and realistic computational models and, invariably, an increase
in the resolution of the discrete grids used to solve the equations of state. A direct
result of the increase in grid resolution is a profuse amount of stored data.
Unfortunately, the ability to generate data has not been matched by our ability to
consume it. Whereas microprocessor floating point performance, combined with
communication interconnect bandwidth and latency, largely determines the scale at
which numerical simulations are run, it is often primary storage capacity and I/O
bandwidth that constrain access to data during visualization and analysis. These
latter technologies, and I/O bandwidth in particular, have not kept pace with the
performance advancements of CPUs, GPUs, or high performance communication
fabrics. For many visualization and analysis workflows, the bottleneck is the rate at
which data can be delivered to the computational components of the analysis
pipeline.
This chapter discusses methods used to reduce the volume of data that
must be processed in order to support a meaningful analysis. Ideally, the user
should be able to trade off data fidelity for increased interactivity. In many
applications, aggressive data reduction may have a negligible impact, or no
impact whatsoever, on the resulting analyses. The specific target workflow is
highly interactive, quick-look exploratory visualization enabled by coarsened
approximations of the original data, followed by a less interactive validation
of results using the refined or original data, as needed. This model and these
methods, which are referred to as progressive data access (PDA), are similar
to those employed by the ubiquitous Google Earth™: coarsened imagery is
transmitted and displayed when the viewpoint is far away, and continuously
refined as the user zooms in on a region of interest.
The PDA approaches in this chapter are all intended to minimize the
volume of data accessed from secondary storage, whether the storage is a
locally attached disk, or a remote, network-attached service. Additionally,
the following properties of the data model are also deemed important: (1) the
ability to quickly access coarsened approximations of the full data domain; (2)
the ability to quickly extract subsets of the full domain at a higher quality; (3)
lossless reconstruction of the original data; and (4) minimal storage overhead.
8.2 Preliminaries
A trivial approach to supporting PDA for regular grids is the generalization of
2D texture MIP mapping to higher dimensions. A MIP map, also referred to in the
literature as an image pyramid, is a precalculated hierarchy of progressively
coarser, dyadically downsampled copies of the original data.
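As a minimal sketch of this idea, assuming a 3D scalar field whose dimensions are powers of two (the function and variable names below are invented for this example), each pyramid level can be produced by dyadically downsampling the level above it, here by averaging 2 × 2 × 2 blocks:

    import numpy as np

    def build_pyramid(volume):
        # Return [full resolution, 1/2 resolution, 1/4 resolution, ...],
        # each level obtained by averaging 2x2x2 blocks of the level above it.
        levels = [volume]
        while min(volume.shape) > 1:
            nz, ny, nx = volume.shape
            volume = volume.reshape(nz // 2, 2, ny // 2, 2, nx // 2, 2).mean(axis=(1, 3, 5))
            levels.append(volume)
        return levels

    data = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a simulation field
    for i, level in enumerate(build_pyramid(data)):
        print("level", i, "shape", level.shape)

For reference, storing every level of such a pyramid in three dimensions adds roughly 1/8 + 1/64 + ... ≈ 14% to the size of the original data.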
