Three-Dimensional Graphics VI

1 Computer Graphics: Three-Dimensional Graphics VI

2 Texture Mapping

3 Objectives
Introduce mapping methods
- Texture mapping
- Environment mapping
- Bump mapping
Consider basic strategies
- Forward vs. backward mapping
- Point sampling vs. area averaging

4 The Limits of Geometric Modeling
Although graphics cards can render over 10 million polygons per second, that number is insufficient for many phenomena:
- Clouds
- Grass
- Terrain
- Skin

5 Modeling an Orange
Consider the problem of modeling an orange (the fruit)
- Start with an orange-colored sphere: too simple
- Replace the sphere with a more complex shape: does not capture the surface characteristics (small dimples), and it takes too many polygons to model all the dimples
A first attempt might start with a sphere. We can build an approximation to a sphere out of triangles and render those triangles using material properties that match a real orange. Unfortunately, such a rendering would be far too regular to look much like an orange. We could instead model the orange with some sort of curved surface and render that surface. This would give us more control over the shape of our virtual orange, but the image produced still would not look right.

6 Modeling an Orange (2)
- Take a picture of a real orange, scan it, and "paste" it onto a simple geometric model; this process is known as texture mapping
- Still might not be sufficient, because the resulting surface will be smooth
- Need to change the local shape: bump mapping
An alternative is not to build increasingly complex models, but rather to build a simple model and add detail as part of the rendering process.

7 Three Types of Mapping
- Texture mapping: uses images to fill the inside of polygons
- Environment (reflection) mapping: uses a picture of the environment as the texture map; allows simulation of highly specular surfaces
- Bump mapping: emulates altering normal vectors during the rendering process
Texture mapping uses a pattern (or texture) to determine the color of a fragment. These patterns can come from a fixed pattern, such as the regular patterns often used to fill polygons; from a procedural texture-generation method; or from a digitized image. In all cases, we can characterize the result as a mapping of a texture onto the surface, performed as part of rendering the surface. Reflection maps, or environment maps, allow us to create images that have the appearance of ray-traced images without tracing reflected rays: an image of the environment is painted onto the surface as that surface is rendered. Whereas texture maps add detail by painting patterns onto smooth surfaces, bump maps distort the normal vectors during shading so that the surface appears to have small variations in shape, such as the dimples on a real orange.

8 Texture Mapping (figure: a geometric model, before and after texture mapping)

9 Environment Mapping

10 Bump Mapping

11 Where does mapping take place?
Mapping techniques are implemented at the end of the rendering pipeline; this is very efficient because few polygons make it past the clipper.
OpenGL's texture maps rely on its pipeline architecture. There are actually two parallel pipelines: the geometric pipeline and the pixel pipeline. For the display of bitmaps and pixel rectangles, the pipelines merge during rasterization. For texture mapping, the pixel pipeline merges with fragment processing after rasterization.

12 Is it simple? Although the idea is simple (map an image to a surface), there are three or four coordinate systems involved: a 2D image and a 3D surface.

13 Coordinate Systems
- Parametric coordinates: may be used to model curves and surfaces
- Texture coordinates: used to identify points in the image to be mapped
- Object or world coordinates: conceptually, where the mapping takes place
- Window coordinates: where the final image is actually produced
At various stages in the process we work with parametric coordinates, which help us define curved surfaces; texture coordinates, which locate positions in the texture; object coordinates, where we describe the objects onto which the textures will be mapped; and window coordinates, where the final image is produced.

14 Texture Mapping
One way to think about texture mapping is in terms of two concurrent mappings: the first from texture coordinates to object coordinates, and the second from parametric coordinates to object coordinates. A third mapping takes us from object coordinates to window coordinates. (Figure: parametric coordinates, texture coordinates, world coordinates, and window coordinates.)

15 Mapping Functions
The basic problem is how to find the maps. Consider mapping from texture coordinates (s,t) to a point (x,y,z) on the surface. We appear to need three functions:
x = x(s,t)
y = y(s,t)
z = z(s,t)
But we really want to go the other way.

16 Backward Mapping
We really want to go backwards:
- Given a pixel, we want to know to which point on an object it corresponds
- Given a point on an object, we want to know to which point in the texture it corresponds
We therefore need a map of the form
s = s(x,y,z)
t = t(x,y,z)
Such functions are difficult to find in general. One difficulty is that although these functions exist conceptually, finding them may not be possible in practice. In addition, we face the inverse problem: given a point (x,y,z) on an object, how do we find the corresponding texture coordinates (equivalently, the inverse functions)?
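As a concrete illustration, here is a minimal C sketch (function and parameter names are hypothetical) for one surface where the inverse map does exist in closed form: a unit sphere centered at the origin, using spherical angles to recover (s,t) from (x,y,z).
#include <math.h>

/* Hypothetical sketch: inverse map for a unit sphere centered at the
   origin. Given a surface point (x,y,z) with x*x + y*y + z*z == 1,
   recover texture coordinates (s,t) in the unit square. */
void sphere_inverse_map(double x, double y, double z,
                        double *s, double *t)
{
    *s = (atan2(y, x) + M_PI) / (2.0 * M_PI); /* longitude -> s */
    *t = acos(z) / M_PI;                      /* latitude  -> t */
}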

17 Two-Part Mapping
One solution to the mapping problem is to first map the texture to a simple intermediate surface. Example: map to a cylinder.
The first step maps the texture to a simple three-dimensional intermediate surface, such as a sphere, cylinder, or cube. In the second step, the intermediate surface containing the mapped texture is mapped to the surface being rendered. This two-step process can be applied to surfaces defined in either geometric or parametric coordinates.

18 Cylindrical Mapping
Parametric cylinder:
x = r cos 2πu
y = r sin 2πu
z = v h
maps a rectangle in (u,v) space to a cylinder of radius r and height h in world coordinates. Setting
s = u
t = v
maps from texture space.
Suppose that our texture coordinates vary over the unit square and that we use the surface of a cylinder of height h and radius r as our intermediate object. Points on the cylinder are given by the parametric equations above, with u and v varying over (0,1). By using only the curved part of the cylinder, and not the top and bottom, we can map the texture without distorting its shape.
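The parametric equations above translate directly into code. A minimal C sketch (function and parameter names are illustrative):
#include <math.h>

/* Forward cylindrical mapping: (u,v) in the unit square to a point
   on the curved side of a cylinder of radius r and height h. */
void cylinder_map(double u, double v, double r, double h,
                  double *x, double *y, double *z)
{
    *x = r * cos(2.0 * M_PI * u);
    *y = r * sin(2.0 * M_PI * u);
    *z = v * h;   /* curved side only; no top or bottom caps */
}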

19 Spherical Map
We can use a parametric sphere
x = r cos 2πu
y = r sin 2πu cos 2πv
z = r sin 2πu sin 2πv
in a similar manner to the cylinder, but we have to decide where to put the distortion. Spheres are used in environment maps.
This problem is similar to creating a 2D map of the earth. If you look at the various maps of the earth in an atlas, all of them distort shapes and distances. Both texture mapping and map design must choose among a variety of representations, based on where we wish to place the distortion.

20 Box Mapping
- Easy to use with a simple orthographic projection
- Also used in environment maps
We can also use a rectangular box, mapping the texture to a box that can be unraveled like a cardboard packing box. This mapping is often used with environment maps.

21 Second Mapping
Map from the intermediate object to the actual object. Three possible strategies:
- Normals from intermediate to actual
- Normals from actual to intermediate
- Vectors from the center of the intermediate object
Left: take the texture value at a point on the intermediate object, go from this point in the direction of the normal until we intersect the object, and place the texture value at the point of intersection. Middle: reverse this method, starting at a point on the surface of the object and going in the direction of the normal at this point until we intersect the intermediate object, where we obtain the texture value. Right: if we know the center of the intermediate object, draw a line from the center through a point on the object and compute the intersection of this line with the intermediate surface; the texture at the point of intersection is assigned to the corresponding point on the desired object.

22 Aliasing
Point sampling of the texture can lead to aliasing errors: point samples in (u,v) (or (x,y,z)) space can miss the blue stripes entirely.
Usually we can use the location obtained by back-projecting the pixel center to find a texture value, but this is subject to serious aliasing problems, which are especially visible if the texture is periodic. Here we have a repeated texture and a flat surface: the back projection of the center of each pixel happens to fall between the blue lines, so the sampled texture value is always the lighter color.

23 Area Averaging
A better but slower option is to use area averaging. Note that the preimage of a pixel is curved.
This figure illustrates the potential aliasing problem. Suppose we are computing a color for the square pixel centered at screen coordinates (xs, ys). The center (xs, ys) corresponds to a point (x,y,z) in object space, but if the object is curved, projecting the corners of the pixel back into object space yields a curved preimage of the pixel. In terms of the texture image T(s,t), projecting the pixel back yields a preimage in texture space that is the area of the texture that ideally should contribute to the shading of the pixel. A better but slower option is to assign a texture value based on averaging the texture map over this preimage. Note that this method is imperfect too: we might still have aliasing defects due to the limited resolution of both the frame buffer and the texture map. These problems are most visible when there are regular high-frequency components in the texture.

24 OpenGL Texture Mapping

25 Objectives
Introduce the OpenGL texture functions and options

26 Basic Strategy
Three steps to applying a texture (a sketch of all three steps follows this list):
1. Specify the texture
   - read or generate the image
   - assign it to a texture
   - enable texturing
2. Assign texture coordinates to vertices
   - the proper mapping function is left to the application
3. Specify texture parameters
   - wrapping, filtering
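A minimal sketch of all three steps in legacy fixed-function OpenGL, assuming a GL context already exists (for example, one created with GLUT); my_texels and the vertex values are illustrative:
#include <GL/glut.h>

GLubyte my_texels[64][64][3];   /* filled elsewhere (step 1: read or generate) */

void init_texture(void)
{
    /* step 1: specify the texture and enable texturing */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, my_texels);
    glEnable(GL_TEXTURE_2D);

    /* step 3: specify texture parameters (wrapping, filtering) */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}

void draw_quad(void)
{
    /* step 2: assign texture coordinates to vertices */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 0.0);
    glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, 0.0);
    glTexCoord2f(1.0, 1.0); glVertex3f( 1.0,  1.0, 0.0);
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0,  1.0, 0.0);
    glEnd();
}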

27 Texture Mapping (figure: geometry in (x,y,z) and an image indexed by (s,t) combine to produce the display)

28 Texture Example
The texture (below) is a 256 x 256 image that has been mapped to a rectangular polygon viewed in perspective

29 Texture Mapping and the OpenGL Pipeline
Images and geometry flow through separate pipelines that join at the rasterizer; "complex" textures therefore do not affect geometric complexity. (Figure: vertices enter the geometry pipeline, the image enters the pixel pipeline, and both meet at the rasterizer.)

30 Specifying a Texture Image
Define a texture image from an array of texels (texture elements) in CPU memory:
GLubyte my_texels[512][512];
Define it as any other pixel map:
- scanned image
- generated by application code
Enable texture mapping:
glEnable(GL_TEXTURE_2D)
OpenGL supports 1- to 4-dimensional texture maps.
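For the "generated by application code" option, a minimal sketch that fills a texel array with a checkerboard (here the array carries three bytes per texel for RGB):
#include <GL/gl.h>

GLubyte my_texels[512][512][3];

void make_checkerboard(void)
{
    int i, j;
    for (i = 0; i < 512; i++)
        for (j = 0; j < 512; j++) {
            /* alternate 64x64 black and white squares */
            GLubyte c = (((i / 64) + (j / 64)) % 2) ? 255 : 0;
            my_texels[i][j][0] = c;
            my_texels[i][j][1] = c;
            my_texels[i][j][2] = c;
        }
}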

31 Define Image as a Texture
glTexImage2D(target, level, components, w, h, border, format, type, texels);
- target: type of texture, e.g. GL_TEXTURE_2D
- level: used for mipmapping (discussed later)
- components: elements per texel
- w, h: width and height of the image in texels
- border: used for smoothing (discussed later)
- format and type: describe the texels
- texels: pointer to the texel array
glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);

32 Converting a Texture Image
OpenGL requires texture dimensions to be powers of 2. If the dimensions of the image are not powers of 2, use
gluScaleImage(format, w_in, h_in, type_in, *data_in, w_out, h_out, type_out, *data_out);
- data_in is the source image
- data_out is the destination image
The image is interpolated and filtered during scaling.

33 Mapping a Texture
Mapping is based on parametric texture coordinates; glTexCoord*() is specified at each vertex. (Figure: texture space with corners (0,0), (1,0), (0,1), (1,1); vertices A, B, C of a triangle in object space are assigned texture coordinates (s,t) = (0.2, 0.8), (0.4, 0.2), and (0.8, 0.4).)

34 Typical Code
glBegin(GL_POLYGON);
  glColor3f(r0, g0, b0);   // if no shading is used
  glNormal3f(u0, v0, w0);  // if shading is used
  glTexCoord2f(s0, t0);
  glVertex3f(x0, y0, z0);
  glColor3f(r1, g1, b1);
  glNormal3f(u1, v1, w1);
  glTexCoord2f(s1, t1);
  glVertex3f(x1, y1, z1);
  ...
glEnd();
Note that we can use vertex arrays to increase efficiency.

35 Interpolation
OpenGL uses interpolation to find the proper texels from the specified texture coordinates. This can introduce distortions: a texture stretched over a trapezoid shows the effects of bilinear interpolation, and the results differ between a good selection of texture coordinates and a poor one.
(a), (b): mapping of a checkerboard texture to a triangle. (c): mapping of a checkerboard texture to a trapezoid. Note the artifacts of the interpolation and how quadrilaterals are treated as two triangles as they are rendered.

36 Texture Parameters
OpenGL has a variety of parameters that determine how a texture is applied:
- Wrapping parameters determine what happens if s and t are outside the (0,1) range
- Filter modes allow us to use area averaging instead of point samples
- Mipmapping allows us to use textures at multiple resolutions
- Environment parameters determine how texture mapping interacts with shading

37 Wrapping Mode
- Clamping: if s,t > 1 use 1; if s,t < 0 use 0
- Wrapping: use s,t modulo 1
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
(Figure: the original texture, the result of GL_CLAMP, and the result of GL_REPEAT in s and t.)

38 Magnification and Minification
More than one pixel can cover a texel (magnification), or more than one texel can cover a pixel (minification). We can use point sampling (nearest texel) or linear filtering (a 2 x 2 filter) to obtain texture values.
In the first case a texel is larger than one pixel; in the second it is smaller. In both cases, the fastest strategy is point sampling: use the value of the nearest texel. Alternatively, we can use filtering to obtain a smoother, less aliased image: take a weighted average of a group of texels in the neighborhood of the texel determined by point sampling.

39 Filter Modes
Modes are set with glTexParameteri(target, type, mode):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Note that linear filtering requires a border of an extra texel for filtering at the edges (border = 1).

40 Mipmapped Textures
- Mipmapping allows for prefiltered texture maps of decreasing resolutions
- Lessens interpolation errors for smaller textured objects
- Declare the mipmap level during texture definition: glTexImage2D(GL_TEXTURE_*D, level, ...)
- GLU mipmap builder routines will build all the textures from a given image: gluBuild*DMipmaps(...)
OpenGL has another way to deal with the minification problem. For objects that project to an area of screen space that is small compared with the size of the texel array, we do not need the resolution of the original texel array. OpenGL allows us to create a series of texture arrays at reduced sizes; it will then automatically use the appropriate size.
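A minimal sketch of the GLU route, assuming my_texels holds a 512 x 512 RGB image: gluBuild2DMipmaps builds the whole pyramid, and a mipmapped minification filter selects among the levels.
#include <GL/glu.h>

/* build all mipmap levels from the base image */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 512, 512,
                  GL_RGB, GL_UNSIGNED_BYTE, my_texels);
/* trilinear filtering: interpolate within and between levels */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);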

41 Example
(Figure: the same texture rendered with point sampling, linear filtering, mipmapping with point sampling, and mipmapping with linear filtering.)
This shows the differences when mapping a texture using the nearest texel, linear filtering, and mipmapping, both with the nearest texel and with linear filtering. The object is a quadrilateral that appears almost as a triangle when shown in perspective. The texture is a series of black and white lines that appear to converge at the far side of the quadrilateral. Because of its regularity, this texture map shows dramatic aliasing effects. Using the nearest texel produces moiré patterns and jaggedness in the lines. Linear filtering makes the lines smoother, but there are still clear moiré patterns; the texels between the black and white stripes are gray because of the filtering. Mipmapping also replaces many of the blacks and whites of the two-color pattern with grays that are the average of the two color values. The mipmapped texture using the nearest texel in the proper mipmap still shows jaggedness, which is smoothed out when we use linear filtering with the mipmap.

42 Texture Functions
Controls how the texture is applied:
glTexEnv{fi}[v](GL_TEXTURE_ENV, prop, param)
GL_TEXTURE_ENV_MODE modes:
- GL_MODULATE: modulates the texture with the computed shade
- GL_BLEND: blends the texture with an environment color
- GL_REPLACE: use only the texture color
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
Set the blend color with GL_TEXTURE_ENV_COLOR.

43 Perspective Correction Hint
Texture coordinates and colors are interpolated either linearly in screen space or using depth/perspective values (slower). The difference is noticeable for polygons "on edge":
glHint(GL_PERSPECTIVE_CORRECTION_HINT, hint)
where hint is one of:
- GL_DONT_CARE
- GL_NICEST
- GL_FASTEST

44 Generating Texture Coordinates
OpenGL can generate texture coordinates automatically with glTexGen{ifd}[v]():
- specify a plane
- texture coordinates are generated based on distance from the plane
- generation modes: GL_OBJECT_LINEAR, GL_EYE_LINEAR, GL_SPHERE_MAP (used for environment maps)
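A minimal sketch using the fixed-function calls: sphere-map generation for environment mapping, and an object-linear alternative that generates s from a plane (the plane coefficients are illustrative):
/* sphere-map generation for both s and t (environment maps) */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);

/* or: generate s linearly from a plane in object coordinates */
GLfloat plane[4] = {1.0, 0.0, 0.0, 0.0};   /* s = x */
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, plane);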

45 Texture Objects
Texture is part of the OpenGL state. If we have different textures for different objects, OpenGL will be moving large amounts of data from processor memory to texture memory. Recent versions of OpenGL have texture objects:
- one image per texture object
- texture memory can hold multiple texture objects
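A minimal sketch of the texture-object calls (the variable name is illustrative):
GLuint tex_id;

glGenTextures(1, &tex_id);              /* get an unused object name */
glBindTexture(GL_TEXTURE_2D, tex_id);   /* make it the current 2D texture */
/* glTexImage2D / glTexParameteri calls now apply to this object */

/* later, before drawing geometry that uses this texture: */
glBindTexture(GL_TEXTURE_2D, tex_id);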

46 Applying Textures II
- Specify textures in texture objects
- Set the texture filter
- Set the texture function
- Set the texture wrap mode
- Set the optional perspective correction hint
- Bind the texture object
- Enable texturing
- Supply texture coordinates for each vertex (coordinates can also be generated)

47 Other Texture Features
Environment maps
- Start with an image of the environment through a wide-angle lens; it can be either a real scanned image or an image created in OpenGL
- Use this texture to generate a spherical map
- Use automatic texture coordinate generation
Multitexturing
- Apply a sequence of textures through cascaded texture units

48 Compositing and Blending

49 Objectives
Learn to use the A component in RGBA color for:
- Blending for translucent surfaces
- Compositing images
- Antialiasing
So far, we have assumed that we want to form a single image and that the objects forming this image have opaque surfaces. OpenGL provides a mechanism, alpha blending, that among other effects can create images with translucent objects. In RGBA mode, if blending is enabled, the value of A controls how the RGB values are written to the frame buffer. Because fragments from multiple objects can contribute to the color of the same pixel, we say that these objects are blended or composited together.

50 Opacity and Transparency
- Opaque surfaces permit no light to pass through
- Transparent surfaces permit all light to pass
- Translucent surfaces pass some light: translucency = 1 - opacity (a)
Opacity is a measure of how much light penetrates a surface. An opacity of 1 corresponds to a completely opaque surface that blocks all incident light; a surface with an opacity of 0 is transparent, and all light passes through it. The transparency or translucency of a surface with opacity a is 1 - a. Consider three uniformly lit polygons, where the middle polygon is opaque and the front polygon, nearest the viewer, is translucent. If the front polygon were perfectly transparent, the viewer would see only the middle polygon. If the front polygon is only partially opaque, similar to colored glass, the color the viewer sees is a blending of the colors of the front and middle polygons. Because the middle polygon is opaque, the viewer does not see the back polygon.

51 Physical Models
Dealing with translucency in a physically correct manner is difficult due to:
- the complexity of the internal interactions of light and matter
- using a pipeline renderer
Although using the A channel gives us a way to create the appearance of translucency, it is difficult to handle transparency in a physically correct manner without taking into account how an object is lit and what happens to rays and projectors that pass through translucent objects. Suppose that the rear polygon is opaque but reflective, and that the two polygons closer to the viewer are translucent. By following various rays from the light source, we can see a number of possibilities. (1) Some rays strike the rear polygon, and the corresponding pixels can be colored with the shade at the intersection of the projector and the polygon; for these rays we should also distinguish between points illuminated directly by the light source and points for which the incident light passes through one or both translucent polygons. (2) For rays that pass through only one translucent surface, we have to adjust the color based on the color and opacity of that polygon, and we should also add a term accounting for light striking the front polygon that is reflected toward the viewer. (3) For rays passing through both translucent polygons, we have to consider their combined effect. We also have to account for the bending, or refraction, of light as it passes through a material. For a pipeline renderer the task is even more difficult, because we have to determine the contribution that each polygon makes as it passes through the pipeline, rather than considering the contributions of all polygons to a given pixel at the same time.

52 Writing Model
Use the A component of RGBA (or RGBα) color to store opacity. During rendering we can expand our writing model to blend into the color buffer: the source component is scaled by a source blending factor, the destination component is scaled by a destination blending factor, and the two are combined.

53 Blending Equation
We can represent source and destination pixels with the four-element (RGBA) arrays:
s = [sr, sg, sb, sa]
d = [dr, dg, db, da]
Suppose that the source and destination blending factors are
b = [br, bg, bb, ba]
c = [cr, cg, cb, ca]
Then we blend as
d' = [br sr + cr dr, bg sg + cg dg, bb sb + cb db, ba sa + ca da]
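The equation is simple enough to state directly in code. A minimal C sketch (the type and names are illustrative), applying d' = b s + c d componentwise:
typedef struct { float r, g, b, a; } Color;

/* componentwise blend: source s scaled by factors bf, destination d
   scaled by factors cf */
Color blend(Color s, Color d, Color bf, Color cf)
{
    Color out;
    out.r = bf.r * s.r + cf.r * d.r;
    out.g = bf.g * s.g + cf.g * d.g;
    out.b = bf.b * s.b + cf.b * d.b;
    out.a = bf.a * s.a + cf.a * d.a;
    return out;
}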

54 OpenGL Blending and Compositing
Must enable blending and pick source and destination factors:
glEnable(GL_BLEND)
glBlendFunc(source_factor, destination_factor)
Only certain factors are supported:
- GL_ZERO, GL_ONE
- GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA
- GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA
See the Redbook for the complete list.

55 Example
Suppose that we start with the opaque background color (R0, G0, B0, 1); this color becomes the initial destination color. We now want to blend in a translucent polygon with color (R1, G1, B1, a1). Select GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blending factors:
R'1 = a1 R1 + (1 - a1) R0, and similarly for G and B
Note this formula is correct whether the polygon is opaque or transparent.
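As an illustrative check with made-up numbers: take background (R0, G0, B0) = (0.0, 0.0, 1.0) and a polygon with (R1, G1, B1) = (0.8, 0.2, 0.2) and a1 = 0.5. Then
R' = 0.5 x 0.8 + 0.5 x 0.0 = 0.4
G' = 0.5 x 0.2 + 0.5 x 0.0 = 0.1
B' = 0.5 x 0.2 + 0.5 x 1.0 = 0.6
a half-and-half mix of the polygon and the background, as expected.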

56 Clamping and Accuracy
All the components (RGBA) are clamped and stay in the range (0,1). However, in a typical system RGBA values are stored with only 8 bits per component, so we can easily lose accuracy if we add many components together. Example: adding together n images
- Divide all color components by n to avoid clamping
- Blend with source factor = 1, destination factor = 1
- But division by n loses bits
A value of a over 1.0 is limited, or clamped, to the maximum of 1.0, and negative values are clamped to 0.0.

57 Order Dependency
Is this image correct? Probably not:
- Polygons are rendered in the order they pass down the pipeline
- Blending functions are order dependent

58 Opaque and Translucent Polygons
Suppose that we have a group of polygons, some opaque and some translucent. How do we use hidden-surface removal?
- Opaque polygons block all polygons behind them and affect the depth buffer
- Translucent polygons should not affect the depth buffer: render with glDepthMask(GL_FALSE), which makes the depth buffer read-only
- Sort the polygons first to remove the order dependency (a sketch follows)
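A minimal sketch of this rendering order; draw_opaque_polygons and draw_translucent_polygons are hypothetical application functions:
glEnable(GL_DEPTH_TEST);
draw_opaque_polygons();                 /* depth buffer writable */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                  /* depth buffer read-only */
draw_translucent_polygons();            /* ideally sorted back to front */
glDepthMask(GL_TRUE);                   /* restore for the next frame */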

59 Fog
We can composite with a fixed color and have the blending factors depend on depth; this simulates a fog effect. Blend the source color Cs and fog color Cf by
Cs' = f Cs + (1 - f) Cf
where f is the fog factor. The factor can be:
- Exponential
- Gaussian
- Linear (depth cueing)
Depth cueing is one of the oldest techniques in 3D graphics: we create the illusion of depth by drawing lines farther from the viewer dimmer than lines closer to the viewer. We can extend this idea to create the illusion of a partially translucent space between the object and the viewer by blending in a distance-dependent color as each fragment is processed. Let f denote a fog factor. If the fragment has color Cs and the fog is assigned a color Cf, then we can use the color Cs' in the rendering.

60 Fog Functions
If f varies linearly between some minimum and maximum values, we have a depth-cueing effect. If the factor varies exponentially, we get effects that look more like fog.
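A sketch of the three profiles as C functions of eye-space distance z (parameter names are illustrative; they mirror OpenGL's GL_LINEAR, GL_EXP, and GL_EXP2 fog modes):
#include <math.h>

float fog_linear(float z, float zmin, float zmax)
{
    return (zmax - z) / (zmax - zmin);      /* depth cueing */
}

float fog_exp(float z, float density)
{
    return expf(-density * z);              /* exponential fog */
}

float fog_exp2(float z, float density)      /* Gaussian-like falloff */
{
    return expf(-(density * z) * (density * z));
}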

61 OpenGL Fog Functions
GLfloat fcolor[4] = {……};
glEnable(GL_FOG);
glFogf(GL_FOG_MODE, GL_EXP);
glFogf(GL_FOG_DENSITY, 0.5);
glFogfv(GL_FOG_COLOR, fcolor);
This sets up an exponential fog-density function, f = e^(-0.5 z).

62 Line Aliasing
- The ideal raster line is one pixel wide
- All line segments, other than vertical and horizontal ones, partially cover pixels
- Simple algorithms color only whole pixels, leading to the "jaggies", or aliasing
- Similar issues arise for polygons
One of the major uses of the A channel is for antialiasing.

63 Antialiasing
We can try to color a pixel by adding a fraction of the fragment's color to the frame buffer:
- the fraction depends on the percentage of the pixel covered by the fragment
- the fraction depends on whether there is overlap between fragments
Suppose that, as part of the geometric processing, when we process a fragment we set the A value for the corresponding pixel to a number between 0 and 1 that is the fraction of the pixel covered by the fragment. We can then use this A value to modulate the color as we render the fragment to the frame buffer, with a destination blending factor of 1-a and a source blending factor of a. With no overlap: after the first polygon, Cd = (1-a1)C0 + a1C1, where C0 is the background color, C1 the color of the first polygon, and a1 the fraction of the pixel it covers; after the second polygon, Cd = (1-a2)((1-a1)C0 + a1C1) + a2C2, with ad = a1 + a2. If fragments overlap within a pixel, there are numerous possibilities.

64 Area Averaging
Use the average area a1 + a2 - a1a2 as the blending factor.
If fragment 1 occupies a fraction a1 of the pixel, fragment 2 occupies a fraction a2 of the same pixel, and we have no other information about the locations of the fragments within the pixel, then the average area of overlap is a1a2, so the new destination alpha should be ad = a1 + a2 - a1a2. How we should assign the color is a more complex problem: we have to decide whether the second fragment is in front of the first, the first is in front of the second, or the two should be blended, and we can define an appropriate blending for whichever assumption we make. Note that, in a pipeline renderer, polygons can be generated in an order that has nothing to do with their distances from the viewer. However, if we couple alpha blending with hidden-surface removal, we can use the depth information to make front-versus-back decisions.

65 OpenGL Antialiasing
Antialiasing can be enabled separately for points, lines, or polygons:
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
In OpenGL, we can invoke antialiasing without having the application program combine alpha values explicitly, if we enable blending and smoothing for points, lines, or polygons; the calls above enable all three.

66 Accumulation Buffer
Compositing and blending are limited by the resolution of the frame buffer, typically 8 bits per color component. The accumulation buffer is a high-resolution buffer (16 or more bits per component) that avoids this problem:
- write into it or read from it with a scale factor
- slower than direct compositing into the frame buffer
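A minimal sketch of averaging n images with the accumulation-buffer calls; draw_scene_version is a hypothetical application function that renders one of the n images into the frame buffer:
glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < n; i++) {
    draw_scene_version(i);         /* render one image */
    glAccum(GL_ACCUM, 1.0 / n);    /* add the frame buffer, scaled by 1/n */
}
glAccum(GL_RETURN, 1.0);           /* copy the averaged result back */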

67 Applications
- Compositing
- Image filtering (convolution)
- Whole-scene antialiasing
- Motion effects

