11
Texture Mapping
The shading models presented in Chapter 10 assume that a diffuse surface has uniform reflectance c_r. This is fine for surfaces such as blank paper or painted walls, but it is inadequate for objects such as a printed sheet of paper. Such objects have an appearance whose complexity arises from variation in reflectance properties. While we could use triangles small enough that the variation is captured by varying the reflectance properties of the individual triangles, this would be inefficient.
The common technique for handling variations in reflectance is to store the reflectance as a function or a pixel-based image and “map” it onto a surface (Catmull, 1975). The function or image is called a texture map, and the process of controlling reflectance properties is called texture mapping. This is not hard to implement once you understand the coordinate systems involved. Texture mapping can be classified by several different properties:
1. the dimensionality of the texture function,
2. the correspondences defined between points on the surface and points in the
texture function, and
3. whether the texture function is primarily procedural or primarily a table
look-up.
These items are usually closely related, so we will somewhat arbitrarily classify
textures by their dimension. We first cover 3D textures, often called solid tex-
tures or volume textures. We will then cover 2D textures, sometimes called image
textures. When graphics programmers talk about textures without specifying dimension, they usually mean 2D textures. However, we begin with 3D textures because, in many ways, they are easier to understand and implement. At the end of the chapter we discuss bump mapping and displacement mapping, which use textures to change surface normals and positions, respectively. Although those methods modify properties other than reflectance, the images/functions they use are still called textures. This is consistent with common usage, where any image used to modify object appearance is called a texture.
11.1 3D Texture Mapping
In previous chapters we used c_r as the diffuse reflectance at a point on an object. For an object that does not have a solid color, we can replace this with a function c_r(p), which maps 3D points to RGB colors (Peachey, 1985; Perlin, 1985). This function might just return the reflectance of the object that contains p. But for objects with texture, we should expect c_r(p) to vary as p moves across a surface.
One way to do this is to create a 3D texture that defines an RGB value at every point in 3D space. We will only call it for points p on the surface, but it is usually easier to define it for all 3D points than for a potentially strange 2D subset of points that lie on an arbitrary surface. Such a strategy is clearly suitable for surfaces that are “carved” from a solid medium, such as a marble sculpture.
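As a concrete illustration, the sketch below shows how a solid texture can stand in for the constant reflectance in Chapter 10's Lambertian diffuse term. It is only a sketch under stated assumptions: the Vec3 and RGB types and the particular cr function are hypothetical, invented for this example.

#include <algorithm>
#include <cmath>

// Hypothetical minimal point and color types for this sketch.
struct Vec3 { double x, y, z; };
struct RGB  { double r, g, b; };

// A solid texture: any function mapping 3D points to an RGB reflectance.
// This placeholder simply darkens the reflectance with distance from the origin.
RGB cr(const Vec3& p) {
    double d = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    double v = 1.0 / (1.0 + d);   // arbitrary falloff, for illustration only
    return {v, v, v};
}

// Diffuse (Lambertian) term with the texture supplying the reflectance:
// the constant c_r becomes cr(p), evaluated at the point being shaded.
RGB shadeDiffuse(const Vec3& p, double nDotL, const RGB& lightColor) {
    RGB c = cr(p);
    double k = std::max(0.0, nDotL);
    return {c.r * lightColor.r * k,
            c.g * lightColor.g * k,
            c.b * lightColor.b * k};
}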
Note that in a ray-tracing program, we have immediate access to the point p
seen through a pixel. However, for a z-buffer or BSP-tree program, we only know
the point after projection into device coordinates. We will show how to resolve
this problem in Section 11.3.1.
11.1.1 3D Stripe Textures
There are a surprising number of ways to make a striped texture. Let’s assume we have two colors c_0 and c_1 that we want to use to make the stripe color. We need some oscillating function to switch between the two colors. An easy one is a sine:
RGB stripe( point p )
    if (sin(x_p) > 0) then
        return c_0
    else
        return c_1
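For readers who want something compilable, here is a minimal C++ sketch of the same sine-based stripe texture. The Vec3 and RGB types and the particular stripe colors are assumptions of this sketch, not part of the pseudocode above.

#include <cmath>

// Hypothetical minimal point and color types for this sketch.
struct Vec3 { double x, y, z; };
struct RGB  { double r, g, b; };

// Two stripe colors c_0 and c_1 (placeholder values).
const RGB c0 = {0.9, 0.9, 0.9};
const RGB c1 = {0.2, 0.2, 0.6};

// Return c_0 or c_1 depending on the sign of sin(x_p), producing stripes
// of width pi that run perpendicular to the x-axis.
RGB stripe(const Vec3& p) {
    return (std::sin(p.x) > 0.0) ? c0 : c1;
}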