Graphics Shaders, 2nd Edition by Steve Cunningham, Mike Bailey

9. Surface Textures in the Fragment Shader
Texture mapping is a special activity within shader programs. It can be used in vertex, geometry, tessellation, or fragment shaders, although most of the time it seems to find its most fun use in fragment shaders. In texture mapping, texture coordinates from the original model are used as an index into a 1D, 2D, or 3D texture. Textures can hold any piece of information. Most of the time, they hold information related to determining the color of a pixel during fragment processing. However, more and more, textures are finding themselves being used to hold general-purpose data for a variety of shader-based computations. In this chapter, however, we will discuss the use of textures for image creation. While this should be familiar from your introduction to graphics, you have much more control over the use of textures when you're using fragment shaders. We will go well beyond traditional texture mapping and will cover other techniques, such as bump mapping and cube mapping, that take texture coordinates as their starting point. And later, in Chapter 15 on scientific visualization, we will show how textures can be used to pass data to shaders.
Texture Coordinates
Texture coordinates specify the coordinates in texture space for each vertex of a graphics primitive. Texture coordinates are not part of the basic geometry of a primitive, but rather are an attribute attached to each vertex. In the vertex shader, the per-vertex texture coordinates are typically assigned to variables that can be interpolated by the rasterizer across the entire polygon and then given to the fragment shader.
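As a sketch of that flow, a vertex shader can simply copy each vertex's texture coordinate attribute into an out variable (here using the vST name we have been using; the attribute and uniform names below are assumptions for illustration, not fixed by OpenGL):

```glsl
// Vertex shader: pass per-vertex texture coordinates to the rasterizer.
// aVertex, aTexCoord0, and uModelViewProjectionMatrix are assumed names.
uniform mat4 uModelViewProjectionMatrix;
in vec4 aVertex;
in vec2 aTexCoord0;
out vec2 vST;            // interpolated across the primitive by the rasterizer

void main( )
{
    vST = aTexCoord0;    // each fragment receives an interpolated copy
    gl_Position = uModelViewProjectionMatrix * aVertex;
}
```

A matching in vec2 vST declaration in the fragment shader then receives the interpolated value for each fragment.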
In the previous chapter on fragment shaders, you saw that you access the interpolated texture coordinates with the texture coordinate variables we have been calling vST in the vertex shader, and that you can get the RGBA color of a texel from one of the texture( ) functions. You are not limited to using just the single texels at those given texture coordinates, however. You can also use any texture coordinates you need in developing the color of the pixel. As an example, in the chapter on scientific visualization, we will describe the line integral convolution (LIC) process that probes the texture map along specific function streamlines to compute the color of each pixel. You can use a great deal of creativity in how you use textures.
Traditional Texture Mapping
Traditional OpenGL texture mapping uses a number of functions that define the way a texture is read, stored, accessed, and processed. The apparent complexity comes mostly from the flexibility that a generalized graphics API must have in order to be used so widely. If you are writing your own texture functions in a fragment shader, you can implement just those operations you need, which should make the task less intimidating than it might appear.
From your experience, you know that fixed-function OpenGL supports four kinds of textures: 1D, 2D, and 3D textures, and cube maps. It also supports multitexturing. Our goal is to see how you can create each of these standard functionalities with fragment shaders.
When you rst encounter texturing in OpenGL, you nd that to use tex-
tures, you must rst set up a number of texture properties. You must associate
181
Traditional Texture Mapping
a texture identier (an integer generated by glGenTextures( ); this is texA in
Figure 9.1) with a texture, you must set a number of texture parameters (such
as texture wrap and lter), and you must set the texture image parameters that
interpret the texture (color model, dimensions, size of texture component, and
texture data). This is illustrated in Figure 9.1, which shows how the usual set of
texture functions specify texture properties. The texture unit is the number of
the “docking port” in the graphics context, with default zero, and the texture
identier
texA acts as a pointer to a specic area in GPU memory.
In a xed-function program, you must also associate a texture name with
the texture identier, enable textures, and specify the texture environment.
Overall, the setup for a single xed-function texture that has been loaded into
an array texImage looks like this:
GLuint texA;
glGenTextures( 1, &texA );
glEnable( GL_TEXTURE_2D );
glBindTexture( GL_TEXTURE_2D, texA );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB8, TEX_WIDTH, TEX_HEIGHT,
              0, GL_RGB, GL_UNSIGNED_BYTE, texImage );
glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND );
Figure 9.1. Docking texture parameters with the OpenGL system.
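On the shader side, reading such a texture takes far less code: a fragment shader declares a sampler2D uniform, which the application sets to the texture unit number (zero by default), and calls texture( ) with the interpolated texture coordinates. As a minimal sketch (the uniform and output variable names here are assumptions for illustration):

```glsl
// Fragment shader: look up the texel at the interpolated coordinates.
uniform sampler2D uTexUnit;   // set to the texture *unit* number, e.g., 0
in vec2 vST;                  // interpolated texture coordinates
out vec4 fFragColor;

void main( )
{
    fFragColor = texture( uTexUnit, vST );   // RGBA color of the texel
}
```

Note that the sampler uniform is set to the texture unit (the “docking port” number), not to the texture identifier texA; the glBindTexture( ) call is what attaches texA to that unit.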