OpenGL Insights by Christophe Riccio, Patrick Cozzi


Depth of Field with Bokeh Rendering
Charles de Rousiers and Matt Pettineo
15.1 Introduction
In order to increase realism and immersion, current games make frequent use of
depth of field to simulate lenticular phenomena. Typical implementations use screen-
space filtering techniques to roughly approximate a camera's circle of confusion for
out-of-focus portions of a scene. While such approaches can provide pleasing results
with minimal performance impact, crucial features present in real-life photography
are still missing. In particular, lens-based cameras produce a phenomenon known
as bokeh (blur in Japanese). Bokeh manifests as distinctive geometric shapes that
are most visible in out-of-focus portions of an image with high local contrast (see
Figure 15.1). The actual shape itself depends on the shape of the camera's aperture,
which is typically circular, octagonal, hexagonal, or pentagonal.
Current and upcoming Direct3D 11 engines, e.g., CryENGINE, Unreal Engine 3, Lost Planet 2 Engine, have recently demonstrated new techniques for simulating bokeh depth of field, which reflects a rising interest in reproducing such effects
in real time. However, these techniques have performance requirements that can
potentially relegate them to high-end GPUs. The precise implementation details of
these techniques also aren't publicly available, making it difficult to integrate these
techniques into existing engines. Consequently, it remains an active area of research,
as there is still a need for implementations that are suitable for a wider range of
hardware.
A naive approach would be to explicitly render a quad for each pixel, with each
quad using a texture containing the aperture shape. While this can produce excellent
Figure 15.1. Comparison between a simple blur-based depth of field (left) and a depth of
field with bokeh rendering (right).
results [Sousa 11, Futuremark 11, Mittring and Dudash 11], it is also extremely inefficient due to the heavy fill rate and bandwidth requirements. Instead, we propose a
hybrid method that mixes previous filtering-based approaches with quad rendering.
Our method selects pixels with high local contrast and renders a single textured quad
for each such pixel. The texture used for the quad contains the camera's aperture
shape, which allows the quads to approximate bokeh effects. In order to achieve high
performance, we use atomic counters in conjunction with an image texture for random memory access. An indirect draw command is also used, which avoids the need for expensive CPU-GPU synchronization. This efficient OpenGL 4.2 implementation allows rendering of thousands of aperture-shaped quads at high frame rates, and
also ensures the temporal coherency of the rendered bokeh.
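The append-then-draw pattern described above can be sketched on the CPU. This is an illustrative sketch, not the chapter's code: the struct mirrors the `DrawArraysIndirectCommand` layout used by `glDrawArraysIndirect`, and the extraction function mimics what a GLSL fragment shader would do with `atomicCounterIncrement` plus `imageStore`. On the GPU, the atomic counter value can be copied into the command's `primCount` field (e.g., via `glCopyBufferSubData`), so the CPU never reads the count back.

```c
#include <stdint.h>

/* Layout of the indirect draw command consumed by glDrawArraysIndirect.
 * Filling primCount from the GPU-side atomic counter avoids any
 * CPU-GPU synchronization. */
typedef struct {
    uint32_t count;        /* vertices per instance (4 for a textured quad) */
    uint32_t primCount;    /* number of bokeh sprites to draw               */
    uint32_t first;
    uint32_t baseInstance;
} DrawArraysIndirectCommand;

/* CPU-side sketch of the shader's append step: pixels whose luminance
 * stands out from their neighborhood average by more than a threshold
 * are appended to a point buffer. In GLSL this would be an
 * atomicCounterIncrement followed by an imageStore at the returned index. */
uint32_t extract_bokeh(const float *lum, int n, float neighborhood_avg,
                       float threshold, uint32_t *positions, uint32_t capacity)
{
    uint32_t counter = 0;
    for (int i = 0; i < n; ++i) {
        if (lum[i] - neighborhood_avg > threshold && counter < capacity)
            positions[counter++] = (uint32_t)i; /* append at current index */
    }
    return counter; /* becomes primCount in the indirect command */
}
```

In a real renderer the neighborhood average would come from a small blur of the frame, and the per-pixel data written out would include position, depth, and color rather than just an index.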
15.2 Depth of Field Phenomenon
Depth of field is an important effect for conveying a realistic sense of depth and
scale, particularly in open scenes with a large viewing distance. Traditional real-time
applications use a pinhole camera model [Pharr and Humphreys 10] for rasterization,
which results in an infinite depth of field. However, real cameras use a thin lens,
which introduces a limited depth of field based on aperture size and focal distance.
Objects outside this region appear blurred on the final image, while objects inside it
remain sharp (see Figure 15.2).
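As a concrete illustration (not taken from the chapter), the textbook thin-lens model gives the CoC diameter for an object at distance d when a lens of focal length f and aperture diameter A is focused at distance s:

```c
#include <math.h>

/* Thin-lens circle of confusion (diameter, in the same units as the
 * inputs): c = A * f * |d - s| / (d * (s - f)).
 * aperture : lens aperture diameter A
 * focal    : focal length f
 * focus_d  : distance s at which the camera is focused
 * object_d : distance d to the object
 * Assumes focus_d > focal and object_d > 0. */
float coc_diameter(float aperture, float focal, float focus_d, float object_d)
{
    return aperture * focal * fabsf(object_d - focus_d)
         / (object_d * (focus_d - focal));
}
```

Note that for equal distances from the focus plane, the formula yields a larger CoC on the near side than on the far side, which matches the asymmetric blur discussed below.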
Figure 15.2. Depth of field phenomenon, where a thin lens introduces a limited depth of
field. In-focus objects appear sharp, while out-of-focus objects appear blurred. The size of the
circle of confusion depends on the distance between the object and the point at which the camera
is focused. We use a linear approximation in order to simplify parameters as well as run-time
computations.
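One possible form of such a linear approximation is sketched below. The parameter names are illustrative, not the chapter's: the artist specifies four depths delimiting the near and far blur regions, and the CoC radius ramps linearly across each out-of-focus transition.

```c
/* Linear CoC approximation driven by four artist-specified depths
 * (near_start < near_end <= far_start < far_end) and a maximum blur
 * radius. Returns a radius in [0, max_radius]; 0 means in focus. */
float coc_linear(float depth, float near_start, float near_end,
                 float far_start, float far_end, float max_radius)
{
    float t;
    if (depth < near_end) {           /* out-of-focus foreground */
        t = (near_end - depth) / (near_end - near_start);
    } else if (depth > far_start) {   /* out-of-focus background */
        t = (depth - far_start) / (far_end - far_start);
    } else {
        return 0.0f;                  /* in-focus band */
    }
    if (t < 0.0f) t = 0.0f;           /* clamp to [0, 1] */
    if (t > 1.0f) t = 1.0f;
    return max_radius * t;
}
```

Exposing the four depths directly sidesteps the photographic parameters (focal length, aperture size) that the text notes are unintuitive to non-photographers.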
The "blurriness" of an object is defined by its circle of confusion (CoC). The size
of this CoC depends on the distance between the object and the area on which
the camera is focused. The further an object is from the focused area, the blurrier
it appears. The size of the CoC does not increase linearly based on this distance.
The size actually increases faster in the out-of-focus foreground area than it does in
the out-of-focus background area (see Figure 15.2). Since the CoC size ultimately
depends on focal distance, lens size, and aperture shape, setting up the simulation
parameters may not be intuitive to someone inexperienced with photography. This