CHAPTER 8 Color and Texture Mapping © 2008 Cengage Learning EMEA
LEARNING OBJECTIVES In this chapter you will learn about:
– Color
– Texturing
– Texture filtering
– Mipmapping
– Nearest point interpolation
– Bilinear filtering
– Trilinear filtering
– Anisotropic filtering
– Basic texture mapping
– Bump mapping
– Cube mapping (environmental mapping)
COLOR The representation of color in display devices (using red, green, and blue components) can be directly linked to human color perception. The human visual system picks up only three color values at any given moment (as opposed to a complete color distribution). These three values, called tristimulus values, are produced by the color cones of the human eye.
COLOR Color cones reduce any perceived color range to three distinct values. The significance of this reduction to computer graphics is that any color can be reproduced using three color elements, namely, a red, green, and blue component.
COLOR Computers make use of the red–green–blue (RGB) color model. This is an additive color model where perceived colors are formed by overlapping the primary colors red, green, and blue.
COLOR Another commonly used color model is the cyan–magenta–yellow (CMY) model, also called the subtractive color model. Perceived colors are formed by overlapping the complementary colors cyan, magenta, and yellow.
COLOR Working in true color (a representation of red–green–blue color values using 24 bits per pixel), we can view each color as a specific point within a cube in three-dimensional space. This cube is defined by a coordinate system whose axes correspond to the three primary colors, with the intensity of a color represented by the distance from the origin to its location within the cube – the color vector.
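To make this concrete, here is a minimal C++ sketch (the helper names are hypothetical) of how a 24-bit true-color value packs its three components, and how an RGB color converts to its complement in the subtractive CMY model described earlier:

```cpp
#include <cstdint>

// Pack 8-bit red, green, and blue intensities into a single 24-bit
// true-color value (stored in the low 24 bits of a 32-bit integer).
uint32_t packRGB(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// The subtractive CMY model is the complement of additive RGB:
// per 8-bit channel, C = 255 - R, M = 255 - G, Y = 255 - B.
void rgbToCMY(uint8_t r, uint8_t g, uint8_t b,
              uint8_t& c, uint8_t& m, uint8_t& y)
{
    c = 255 - r;
    m = 255 - g;
    y = 255 - b;
}
```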
COLOR The hexcone model represents the color space using a hexagonal cone. Using a hexagonal cone, also referred to as a hexcone, leads to a greater level of perceptual linearity.
TEXTURING Texture mapping is an easy way of adding realism to a computer-generated object. The texture (be it a tileable photograph or a complex pattern) is mapped (fitted) to the computer-generated object, either stretched or tiled to encompass the entire object area.
TEXTURING Textures consist of a number of fundamental subunits called texels or texture elements (which can be considered the pixels of a texture). Arrays of texels make up a texture just as arrays of pixels make up an image. Textures can be one-, two-, or three-dimensional in nature (sometimes even four-dimensional), with two-dimensional textures being the most common. A one-dimensional texture is simply an array of texture elements.
TEXTURING Two-dimensional textures are represented using a two-dimensional array, with each texture element addressable via an (x, y) index. These textures are the type we will be working with; even the depth and normal maps used during bump mapping are represented using two-dimensional bitmap arrays.
TEXTURING Volumetric textures, also called three-dimensional textures, are another interesting texture resource type. These textures, represented as three-dimensional volumes, are useful for describing solid material blocks from which arbitrary objects can be shaped.
TEXTURING Texture mapping is based on the manipulation of individual fragments during the graphics pipeline’s fragment processing stage. The method used to perform the actual texture mapping at application level depends mainly on the level of quality required.
TEXTURING The most common method maps a two-dimensional texture resource onto the surface of an object. This texture mapping process starts out in two-dimensional texture space and moves to three-dimensional object space where the texture is mapped onto the object – a process known as surface parameterization. A projection transformation is then used to move from object space to screen space.
TEXTURING Textures are loaded into system memory as arrays addressed by texture coordinates. These coordinates allow us to address and access the individual texel elements making up the array. Texture coordinates are generally scaled to range over the interval [0, 1].
TEXTURING Two-dimensional textures can be described using the notation T(u, v) with u and v the texture coordinates uniquely defined for each vertex on a given surface. The process of texture mapping is thus concerned with aligning each texel’s texture space coordinates with a vertex on the surface of an object.
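To make texel addressing concrete, the following minimal C++ sketch (the Texture struct and texelIndex helper are hypothetical, introduced here for illustration) maps normalized (u, v) coordinates to an index into a texel array:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A single-channel texture: 'texels' holds w * h grey values, row by row.
struct Texture { int w, h; std::vector<uint8_t> texels; };

// Map normalized (u, v) coordinates in [0, 1] to an index into the texel array.
int texelIndex(const Texture& t, float u, float v)
{
    int x = std::clamp(int(u * t.w), 0, t.w - 1);   // column in the texel grid
    int y = std::clamp(int(v * t.h), 0, t.h - 1);   // row in the texel grid
    return y * t.w + x;
}
```

The sketches in the following filtering sections reuse this Texture struct.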
Texture Filtering Every pixel of an onscreen image contains an independently controlled color value obtained from the texture. Texture filtering, also called ‘texture smoothing’, controls the way in which pixels are colored by blending the color values of adjacent texture elements.
Texture Filtering Mipmapping – A mipmap is a series of pre-filtered texture images of varying resolution.
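A minimal C++ sketch of mipmap construction, reusing the hypothetical Texture struct from the earlier texel-addressing sketch and assuming square, power-of-two dimensions; each level is built by averaging 2x2 blocks of the previous level (a box filter):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

std::vector<Texture> buildMipmaps(Texture base)
{
    std::vector<Texture> chain{ std::move(base) };
    while (chain.back().w > 1 && chain.back().h > 1) {
        const Texture& src = chain.back();
        Texture dst{ src.w / 2, src.h / 2,
                     std::vector<uint8_t>(size_t(src.w / 2) * (src.h / 2)) };
        for (int y = 0; y < dst.h; ++y)
            for (int x = 0; x < dst.w; ++x) {
                // Average the 2x2 block of source texels covering this texel.
                int sum = src.texels[(2 * y)     * src.w + 2 * x]
                        + src.texels[(2 * y)     * src.w + 2 * x + 1]
                        + src.texels[(2 * y + 1) * src.w + 2 * x]
                        + src.texels[(2 * y + 1) * src.w + 2 * x + 1];
                dst.texels[y * dst.w + x] = uint8_t(sum / 4);
            }
        chain.push_back(std::move(dst));
    }
    return chain;
}
```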
Texture Filtering Nearest Point Interpolation – The point matching the center of a texture element is rarely obtained when texture coordinates are mapped to a two-dimensional array of texels. – Nearest point interpolation is used to approximate this point by using the color value of the texel closest to the sampled point.
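In the same hypothetical C++ setting, nearest point sampling reduces to a single texel lookup:

```cpp
#include <cstdint>

// Nearest point interpolation: return the color of the single texel whose
// center lies closest to the sampled (u, v) position.
uint8_t sampleNearest(const Texture& t, float u, float v)
{
    return t.texels[texelIndex(t, u, v)];
}
```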
Texture Filtering Bilinear Filtering – Bilinear filtering builds on the concept of nearest point interpolation by sampling not just one but four texture elements when texture coordinates are mapped to a two-dimensional array of texels.
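A C++ sketch of bilinear filtering over the same hypothetical Texture, blending the four surrounding texels by the fractional position of the sample point between texel centers:

```cpp
#include <algorithm>
#include <cmath>

float sampleBilinear(const Texture& t, float u, float v)
{
    // Offset by half a texel so interpolation runs between texel centers.
    float x = u * t.w - 0.5f, y = v * t.h - 0.5f;
    int x0 = std::clamp(int(std::floor(x)), 0, t.w - 1);
    int y0 = std::clamp(int(std::floor(y)), 0, t.h - 1);
    int x1 = std::min(x0 + 1, t.w - 1);                  // clamp at the edges
    int y1 = std::min(y0 + 1, t.h - 1);
    float fx = x - std::floor(x), fy = y - std::floor(y);
    float top    = t.texels[y0 * t.w + x0] * (1 - fx) + t.texels[y0 * t.w + x1] * fx;
    float bottom = t.texels[y1 * t.w + x0] * (1 - fx) + t.texels[y1 * t.w + x1] * fx;
    return top * (1 - fy) + bottom * fy;                 // blend the two rows
}
```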
Texture Filtering Trilinear Filtering – Bilinear filtering does not perform any interpolation between mipmaps, resulting in noticeable quality changes where the graphics system switches between mipmap levels. – Trilinear filtering solves this quality issue by performing a bilinear filtering lookup on each of the two bordering mipmap images – one from the higher-resolution level, the other from the lower-resolution level – and then interpolating between the two results.
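Building on the bilinear sampler above, a hypothetical C++ sketch of trilinear filtering across a mipmap chain (the 'lod' parameter selects the desired level of detail):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

float sampleTrilinear(const std::vector<Texture>& mips,
                      float u, float v, float lod)
{
    int last   = int(mips.size()) - 1;
    int level0 = std::clamp(int(std::floor(lod)), 0, last);   // higher resolution
    int level1 = std::min(level0 + 1, last);                  // lower resolution
    float frac = std::clamp(lod - float(level0), 0.0f, 1.0f);
    // Bilinearly sample both bordering mipmap levels, then blend the results.
    return sampleBilinear(mips[level0], u, v) * (1 - frac)
         + sampleBilinear(mips[level1], u, v) * frac;
}
```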
Texture Filtering Anisotropic Filtering – Anisotropy is a distortion visible in the texels of a textured object when the object is rotated at an oblique angle to the point of view, causing isotropically filtered textures to appear blurred.
Texture Filtering Anisotropic Filtering – Anisotropic texture filtering deals with this blurriness by sampling texture elements using a quadrilateral modified according to the viewing angle. – A single pixel could encompass more texel elements in one direction, such as along the x-axis, than in another, for instance along the z-axis. – By using a modifiable quadrilateral for the sampling of texels, we are able to maintain proper perspective and precision when mapping a texture to an object.
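Anisotropic filtering is normally requested from the graphics API rather than implemented by hand. A sketch using the widely supported GL_EXT_texture_filter_anisotropic OpenGL extension (assumes a current GL context, a bound 2-D texture, and driver support for the extension):

```cpp
#include <GL/gl.h>

// Tokens from GL_EXT_texture_filter_anisotropic, defined here in case the
// system headers predate the extension.
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
#endif

// Request the strongest anisotropic filtering the driver allows for the
// currently bound 2-D texture.
void enableMaxAnisotropy()
{
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);   // driver limit
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}
```

Direct3D exposes the same control through its anisotropic sampler states.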
Basic Texture Mapping Implementation [see the textbook and source code examples, “TextureMapping(Direct3D)” and “TextureMapping(OpenGL)”, on the book’s website for detailed examples].
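As a rough orientation before consulting those examples, here is a hypothetical sketch of classic OpenGL texture setup (the book's samples will differ in detail; assumes a current GL context and a raw RGB image in memory):

```cpp
#include <GL/glu.h>

// Create a trilinearly filtered 2-D texture from width * height RGB triplets.
GLuint createTexture(const unsigned char* pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);              // trilinear minification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);   // upload + mip chain
    return tex;
}
```

Rendering then enables GL_TEXTURE_2D and supplies a (u, v) pair per vertex, for example via glTexCoord2f under the fixed-function pipeline.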
Bump Mapping There are a number of techniques that combine lighting calculations with perturbations of the texture surface normals to create more realistic-looking object surfaces. Bump mapping is one such technique. It can be described as a form of texture mapping incorporating light reflection to simulate real-world surfaces, where the unevenness of a surface influences the reflection of light. Bump mapping combines per-pixel lighting calculations with the normals defined at each pixel of a surface.
Implementing Bump Mapping We can summarize the process of bump mapping as follows (a minimal code sketch of the per-pixel steps appears after the list):
1. Determine the inverse TBN matrix. This is required because the TBN matrix translates coordinates from texture space to object space, and we need to convert the light vector from object space to texture space.
2. Calculate the light vector.
3. Transform the light vector from object space to texture space by multiplying it with the inverse TBN matrix.
4. Read the normal vector at the specific pixel.
5. Calculate the dot product between the light vector and the normal vector.
6. Multiply the result from step 5 with the color of the light and that of the surface material (this is the final diffuse light color).
7. Repeat the previous six steps for each and every pixel of the textured surface.
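A minimal C++ sketch of steps 3–6 for a single pixel (all names hypothetical; assumes the TBN basis is orthonormal, so that dotting against its vectors applies the inverse TBN matrix):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

Vec3 diffuseAtPixel(Vec3 lightObj,                 // object-space light vector (step 2)
                    Vec3 tangent, Vec3 bitangent, Vec3 normal,   // TBN basis
                    Vec3 mapNormal,                // normal read from the map (step 4)
                    Vec3 lightColor, Vec3 materialColor)
{
    // Step 3: carry the light vector from object space into texture space.
    Vec3 lightTex = normalize({ dot(lightObj, tangent),
                                dot(lightObj, bitangent),
                                dot(lightObj, normal) });
    // Step 5: dot product of light and per-pixel normal, clamped so that
    // back-facing light contributes nothing.
    float nDotL = std::max(0.0f, dot(lightTex, normalize(mapNormal)));
    // Step 6: modulate by the light and surface material colors.
    return { nDotL * lightColor.x * materialColor.x,
             nDotL * lightColor.y * materialColor.y,
             nDotL * lightColor.z * materialColor.z };
}
```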
Implementing Bump Mapping [see the textbook for a detailed example and discussion].
Cube Mapping Cube mapping, also called environmental mapping or sometimes reflection mapping, allows us to simulate complex reflections by mapping texture images, computed in real time, to the surface of an object. Each texture image used for environmental mapping stores a ‘snapshot’ of the environment surrounding the mapped object. These snapshot images are then mapped to a geometric object to simulate the object reflecting its surrounding environment. An environment map can be considered an omnidirectional image.
Cube mapping is a type of texturing where six environmental maps are arranged as if they were faces of a cube. Images are combined in this manner so that an environment can be reflected in an omnidirectional fashion.
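To illustrate this arrangement, here is a hypothetical C++ sketch that selects which of the six faces a given direction vector hits and derives (u, v) on that face, following the usual OpenGL cube map face-selection convention:

```cpp
#include <cmath>

// Face numbering: 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z (the GL_TEXTURE_CUBE_MAP_*
// order). The face is chosen by the direction's largest absolute component.
void cubeFaceUV(float dx, float dy, float dz, int& face, float& u, float& v)
{
    float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    float ma, sc, tc;   // major-axis magnitude and the two minor-axis terms
    if (ax >= ay && ax >= az) {            // an X face dominates
        face = dx > 0 ? 0 : 1;  ma = ax;
        sc = dx > 0 ? -dz : dz;  tc = -dy;
    } else if (ay >= az) {                 // a Y face dominates
        face = dy > 0 ? 2 : 3;  ma = ay;
        sc = dx;  tc = dy > 0 ? dz : -dz;
    } else {                               // a Z face dominates
        face = dz > 0 ? 4 : 5;  ma = az;
        sc = dz > 0 ? dx : -dx;  tc = -dy;
    }
    u = 0.5f * (sc / ma + 1.0f);           // remap from [-1, 1] to [0, 1]
    v = 0.5f * (tc / ma + 1.0f);
}
```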
Implementing Cube Mapping [see the textbook for a detailed example and discussion].