GPU PRO 3 by Wolfgang Engel

III Global Illumination Effects

3 Real-Time Near-Field Global Illumination Based on a Voxel Model

Sinje Thiedemann, Niklas Henrich, Thorsten Grosch, and Stefan Müller
3.1 Introduction
In real-time applications, displaying full global illumination is still an open problem for large and dynamic scenes. A recent trend is to restrict the incoming light to the near-field around the receiver point, which allows approximate one-bounce indirect illumination at real-time frame rates. Although plausible results can be achieved, several problems appear because these recent methods work in image space: only occluders and senders of indirect light that are visible in the current image contribute to the final illumination. This results in shadows and indirect light that appear and disappear, depending on camera and object movement. On the other hand, several real-time voxelization methods have been developed recently that enable the generation of a coarse scene description within milliseconds. At first glance, the voxel model seems to be a solution for the visibility problems, but in fact the voxel model is view-dependent as well: since the voxelization methods are based on rendering, gaps appear in the voxel model for polygons that are viewed at a grazing angle by the voxelization camera. To solve both problems, we introduce voxel-based global illumination (VGI) [Thiedemann et al. 11]. The basic idea is to generate a dynamic, view-independent voxel model from a texture atlas that provides visibility information (see Figure 3.1). In combination with reflective shadow maps (RSMs), one-bounce indirect light can then be displayed with correct occlusion inside the near-field at real-time frame rates. We first describe our new voxelization method and a ray-intersection test with the voxel model. We then explain one-bounce global illumination with voxel-based visibility, and evaluate our method with several test scenes.
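To make the role of the voxel model as a visibility structure concrete, the ray-intersection test can be illustrated as a fixed-step ray march against a set of occupied cells. The following Python sketch is purely illustrative (it is not the authors' GPU implementation); it assumes positions in the unit cube, a set-based grid, and a normalized ray direction:

```python
# Illustrative CPU sketch of a ray-intersection test against a binary
# voxel model (assumption: `grid` is a set of occupied (ix, iy, iz)
# cells, positions lie in the unit cube, `direction` is normalized).

RES = 32  # assumed grid resolution

def intersect_voxels(grid, origin, direction, max_dist, step=0.5 / RES):
    """Return the first occupied voxel hit within max_dist, or None."""
    t = step  # start slightly off the surface to avoid self-intersection
    while t <= max_dist:
        p = [o + t * d for o, d in zip(origin, direction)]
        cell = tuple(int(c * RES) for c in p)
        if all(0 <= c < RES for c in cell) and cell in grid:
            return cell
        t += step
    return None
```

Restricting `max_dist` to the near-field radius is what keeps such a query cheap enough for real-time use: only a short segment of the ray is marched per sample.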
Figure 3.1. Voxel-based global illumination starts by generating an atlas of the dynamic
scene that contains the 3D positions as texel colors (left). By rendering a point for each
atlas texel, we obtain a hierarchical voxel model (center), which serves as visibility
information in a real-time global illumination simulation (right). The indirect light is
exaggerated in this example.
3.2 Binary Boundary Voxelization
In this section, we present our new boundary voxelization method, which turns
a scene representation consisting of discrete geometric entities (e.g., triangles)
into a three-dimensional regular-spaced grid. There are several approaches to
creating the voxel grid. Methods based on slicing intersect all triangles of the
scene with each plane of the three-dimensional voxel grid to successively fill each
layer [Crane et al. 07, Fang and Chen 00]. Alternative methods use depth peeling [Li et al. 05, Passalis et al. 07] for this process. The method presented by Eisemann and Décoret [Eisemann and Décoret 06] utilizes the depth of a rasterized fragment to set the appropriate cell in the voxel grid. The grid itself is represented by a 2D texture, where the bits of the RGBA channels encode the presence of the voxelized geometry. A similar approach presented by Dong et al. [Dong et al. 04] better handles geometry that lies parallel to the viewing direction.
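The bit-encoding idea behind these rasterization-based methods can be sketched on the CPU. The following Python illustration (a simplification, not the cited GPU code) stores one 32-bit mask per texel, so that bit i of a texel marks an occupied voxel in depth slice i; on the GPU the masks would be OR-blended into the RGBA channels of a 2D texture:

```python
# Illustrative sketch of binary voxelization with bit-encoded depth:
# the grid is a 2D array of 32-bit integers, bit i of each integer
# marks an occupied voxel in depth slice i (positions in [0,1)^3).

RES = 32  # grid resolution per axis; one bit per depth slice

def voxelize_points(points):
    """Insert unit-cube points into a binary voxel grid of bit masks."""
    grid = [[0] * RES for _ in range(RES)]
    for x, y, z in points:
        ix = min(int(x * RES), RES - 1)   # texel column
        iy = min(int(y * RES), RES - 1)   # texel row
        iz = min(int(z * RES), RES - 1)   # depth slice -> bit index
        grid[iy][ix] |= 1 << iz           # set the voxel bit (OR "blending")
    return grid

def is_occupied(grid, ix, iy, iz):
    """Test a single voxel bit."""
    return (grid[iy][ix] >> iz) & 1 == 1
```

Because setting a bit is an OR operation, insertion order does not matter, which is what makes the encoding compatible with unordered rasterization.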
The key idea of our approach is that a texture atlas discretizes the surface of an object. This discretization is used to create the voxel grid. The algorithm consists of two main steps (see Figure 3.2). In the first step, all objects are rendered into one or several atlas textures. In the second step, one vertex is generated for each valid atlas texel and inserted into a voxel grid. In this way, a voxelization of deforming objects is possible. Our method can create a binary voxelization, where the bits of the RGBA channels of a two-dimensional texture are used to encode the voxels, as done by [Eisemann and Décoret 06]. It can also create a multivalued voxel grid, where the information is stored in a three-dimensional texture (one texel per voxel). With the help of a multivalued voxelization, any
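The second step, scattering one point per valid atlas texel into the grid, can be sketched as follows. This Python illustration is a simplified stand-in for the GPU point-rendering pass: the atlas is modeled as a list of texels, each carrying a validity flag and a world-space position in the unit cube (both names are assumptions for this sketch):

```python
# Illustrative sketch of atlas-based voxelization, step 2: one point is
# "rendered" into the 3D grid per valid atlas texel. Assumption: each
# texel is (valid, (x, y, z)) with the position in [0,1)^3.

RES = 16  # assumed grid resolution

def atlas_to_voxels(atlas):
    """Scatter valid atlas texels into a set of occupied voxel cells."""
    voxels = set()
    for valid, pos in atlas:
        if not valid:  # texel not covered by any rendered surface
            continue
        cell = tuple(min(int(c * RES), RES - 1) for c in pos)
        voxels.add(cell)  # a multivalued grid would store data per cell
    return voxels
```

Because every surface texel contributes a point regardless of the current camera, the resulting grid is view-independent, unlike a voxelization built from a single rendered view.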
