
1 University of North Carolina at Greensboro
Computer Graphics using OpenGL, 3rd Edition F. S. Hill, Jr. and S. Kelley Chapter 9 Tools for Raster Displays S. M. Lea University of North Carolina at Greensboro © 2007, Prentice Hall

2 Aliasing The jaggies: a form of aliasing.
Aliasing occurs because pixels are displayed in a fixed rectangular array.

3 Aliasing (2) Each pixel is set based on a sample taken at its center: if the rectangle covers the pixel's center, the entire pixel area is set to black; otherwise it is set to white.

4 Sampling Effects Small objects can disappear entirely if the object lands between pixel centers (left). An object can blink on and off in an animation (right).

5 Why Aliasing Occurs A rapidly varying signal is sampled infrequently, causing the appearance of a lower “alias” frequency.
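To see the effect numerically, here is a small illustrative sketch (not from the slides; the 9 Hz signal and 10-samples-per-second rate are arbitrary choices): a cosine at 9 Hz sampled only 10 times per second produces exactly the same sample values as a 1 Hz cosine, the lower "alias" frequency.

#include <cmath>
#include <cstdio>

// Alias demo: cos(2*pi*9*t) sampled at 10 Hz matches cos(2*pi*1*t)
// at every sample point, so the 9 Hz signal aliases to 1 Hz.
int main()
{
    const double PI = 3.14159265358979323846;
    const double fSignal = 9.0, fAlias = 1.0, fs = 10.0;
    for (int n = 0; n < 10; n++) {
        double t = n / fs;  // time of the n-th sample
        std::printf("t=%.1f  9Hz: %+.4f  1Hz: %+.4f\n", t,
                    std::cos(2 * PI * fSignal * t),
                    std::cos(2 * PI * fAlias * t));
    }
    return 0;
}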

6 Anti-Aliasing Techniques
Anti-aliasing techniques involve blurring to smooth the image. For a black rectangle against a white background, the sharp transition from black to white is softened by using a mixture of gray pixels near the rectangle's border. Viewed from afar, the gracefully varying shades of gray blend together and the eye sees a smoother edge.

7 Anti-Aliasing Techniques (2)
Three approaches to anti-aliasing are commonly used: prefiltering, supersampling, and postfiltering.

8 Prefiltering Prefiltering techniques compute pixel colors based on an object’s coverage: the fraction of the pixel area that is covered by the object. A pixel that is half-covered by the polygon should be given the intensity 1/2; one that is one-third covered should be given the intensity 1/3; and so forth.
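As a minimal sketch of the idea (the Color type and the coverage argument are illustrative; computing the coverage fraction exactly is the expensive geometric step, assumed done elsewhere):

// Prefiltering sketch: a pixel's color is a blend of object and
// background colors weighted by coverage, the fraction of the
// pixel's area the object covers. Computing coverage exactly
// (clipping the object against the pixel square) is assumed given.
struct Color { float r, g, b; };

Color prefilterPixel(Color object, Color background, float coverage)
{
    return { coverage * object.r + (1 - coverage) * background.r,
             coverage * object.g + (1 - coverage) * background.g,
             coverage * object.b + (1 - coverage) * background.b };
}
// A pixel half-covered by a black object on white: coverage = 0.5
// gives mid-gray, matching the intensity-1/2 rule above.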

9 Prefiltering (2) Prefiltering with 16 shades of grey:

10 Prefiltering (3) Prefiltering operates on the detailed geometric shape of the object(s) being scan converted and computes an average intensity for each pixel based on the objects found lying within each pixel's area. For shapes other than polygons, it can be expensive computationally.

11 Supersampling Since aliasing arises from sampling an object at too few points, we can try to reduce its effects by sampling more often than one sample per pixel. This is called supersampling: taking more intensity samples of the scene than are displayed.

12 Supersampling (2) Each display pixel value (square) is formed as the average of several samples (x).

13 Supersampling (3) Each final display pixel can be formed as the average of the nine neighbor samples: the center one and the eight surrounding ones.

14 Supersampling (4) The pixel at A has six samples within the bar and three samples of background. Its color is set to two-thirds the bar's color + one-third the background’s color.

15 Supersampling (5) Left: a scene displayed at a resolution of 300-by-400 pixels. The jaggies are readily apparent. Right: the same scene sampled at a resolution of 600-by-800 samples. Each of the 300-by-400 display pixels is an average of nine neighbors. The jaggies have been softened considerably.

16 Supersampling (6) Supersampling computes Ns scene samples in both x and y for each display pixel, averaging some number of neighbor samples to form each display pixel value. Supersampling with Ns = 4, for example, averages 16 samples for each display pixel.
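A minimal sketch of this scheme follows (the scene() function is a stand-in for whatever returns the scene intensity at a continuous point; the toy diagonal-edge scene and the evenly spaced sample placement are illustrative choices):

#include <vector>

// Supersampling sketch: take Ns x Ns evenly spaced scene samples
// inside each display pixel and average them.
float scene(float x, float y)        // toy scene: a diagonal edge
{
    return (y > x) ? 1.0f : 0.0f;    // white above the line y = x
}

std::vector<float> supersample(int width, int height, int Ns)
{
    std::vector<float> image(width * height);
    for (int py = 0; py < height; py++)
        for (int px = 0; px < width; px++) {
            float sum = 0;
            for (int j = 0; j < Ns; j++)      // Ns sample rows
                for (int i = 0; i < Ns; i++)  // Ns sample columns
                    sum += scene(px + (i + 0.5f) / Ns,
                                 py + (j + 0.5f) / Ns);
            image[py * width + px] = sum / (Ns * Ns);  // average
        }
    return image;
}

With Ns = 4 this averages the 16 samples per display pixel mentioned above.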

17 Supersampling With Ns = 1
The scene is sampled at the corner of each pixel. Each pixel is set to the average of the four samples taken at its corners. Some softening of the jaggies is still observed even though there is no supersampling.

18 Postfiltering Postfiltering computes each display pixel as a weighted average of an appropriate set of neighboring samples of the scene.

19 Postfiltering (2) Each value represents the intensity of a scene sample, the ones in gray indicating the centers of the various display pixels. The square mask or window function of weights is laid over each gray square in turn.

20 Postfiltering (3) Each weight is multiplied by its corresponding sample; the nine products are summed to form the pixel intensity. The weights must always sum to 1.

21 Postfiltering (4) Example: when the mask shown is laid over the sample of intensity 30, the weighted average is (30)/2 plus 1/16 of the sum of the eight surrounding samples, which in the figure's example rounds to intensity 33.

22 Postfiltering (5) Supersampling is a special case of postfiltering in which all the weights have the value 1/9. Larger masks, 5-by-5 or even 7-by-7, look farther into the neighborhood of the center sample and can provide additional smoothing. Postfiltering can be performed for any value of the oversampling Ns. If Ns = 4 is used, a 5-by-5, 7-by-7, or even 9-by-9 mask is appropriate. If Ns = 1, a 3-by-3 mask that weights the center pixel most heavily works best.
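A minimal sketch of one 3-by-3 postfilter application (the mask uses the center weight 1/2 and neighbor weights 1/16 from the example above; the samples array holds the nine scene samples around one display pixel):

// Postfiltering sketch: weighted average of a 3x3 neighborhood of
// scene samples. The weights sum to 1: center 1/2, each neighbor 1/16.
float postfilter3x3(const float samples[3][3])
{
    const float w[3][3] = { { 1/16.f, 1/16.f, 1/16.f },
                            { 1/16.f, 1/2.f,  1/16.f },
                            { 1/16.f, 1/16.f, 1/16.f } };
    float sum = 0;
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            sum += w[r][c] * samples[r][c];  // weight times sample
    return sum;
}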

23 Anti-aliasing for Textures
Mapped textures are particularly prone to aliasing effects. Above: aliased texture. Below: anti-aliased texture.

24 Anti-aliasing for Textures (2)
Texture is defined as a function texture(s, t) in texture space, which undergoes a complex sequence of mappings before it is finally depicted on the display. The rendering task is to work the other way, and, for each given display pixel at coordinates (x, y), find the corresponding color in the texture() function.

25 Anti-aliasing for Textures (3)
The figure shows a pixel at (x, y) being rendered, and the value (s*, t*) in texture space that is accessed.

26 Anti-aliasing for Textures (4)
Let T() be the mapping from pixel space to texture space, so (s*, t*) = T(x, y). Pixels have area. The whole pixel at (x, y) maps to texture space as a quadrilateral.

27 Anti-aliasing for Textures (5)
We call this the “texture quad” for the screen pixel in question. The texture space is covered with such quads, each arising from a screen pixel. The size and shape of each texture quad depends on the nature of T() and can be costly to find. If texture(,) varies inside the quad, yet the screen pixel is colored using only the single sample texture(s*, t*), significant information is missed, and there is substantial aliasing.

28 Anti-aliasing for Textures (6)
To reduce the effects of aliasing, we should color each screen pixel with some average of the colors lying in the corresponding texture quad. Finding the area of each texel that lies inside the texture quad is very slow. We need some approximate techniques.

29 Anti-aliasing for Textures (7)
The elliptical weighted average (EWA) filter covers each screen pixel with a circularly symmetric filter function; the concentric circles indicate its different weighting levels. Mapped into texture space, the levels become a form of ellipse that roughly resembles the shape of the texture quad. Samples of the filter function, stored in a look-up table, are used to weight different points within the ellipse, and these weighted values are summed to form the average.

30 Anti-aliasing for Textures (8)
This can all be done incrementally and very efficiently at the cost of a few arithmetic operations per texel.

31 Anti-aliasing for Textures (9)
Stochastic sampling avoids difficult calculations in forming an average texture color by sampling texels in the quad in a randomized pattern and averaging the results.

32 Anti-aliasing for Textures (10)
Stochastic sampling uses the average (1/N) Σk texture(s* + αk, t* + βk), where αk and βk are small random quantities that are easy to create using a random number generator. Their distribution can be tuned if desired to the general size of the texture quad.
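A minimal sketch of this average (the toy checkerboard texture, the sample count N, and the offset range radius are illustrative placeholders; radius would be tuned to the texture-quad size):

#include <cstdlib>

// Stochastic-sampling sketch: average N texture samples taken at
// small random offsets (alpha_k, beta_k) around (s*, t*).
struct Color { float r, g, b; };

Color texture(float s, float t)  // toy texture: a checkerboard
{
    int check = (int(s * 8) + int(t * 8)) & 1;
    return check ? Color{ 1.0f, 1.0f, 1.0f } : Color{ 0.0f, 0.0f, 0.0f };
}

Color stochasticSample(float s, float t, int N, float radius)
{
    Color sum = { 0, 0, 0 };
    for (int k = 0; k < N; k++) {
        // random offsets alpha_k, beta_k in [-radius, radius]
        float a = radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
        float b = radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
        Color c = texture(s + a, t + b);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return { sum.r / N, sum.g / N, sum.b / N };  // the average color
}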

33 Anti-aliasing in OpenGL
Anti-aliasing in OpenGL uses the accumulation buffer, an extra buffer that OpenGL can create and draw into. It is requested with
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_ACCUM | GLUT_DEPTH);
and cleared with
glClear(GL_ACCUM_BUFFER_BIT);

34 Anti-aliasing in OpenGL (2)
The anti-aliasing method resembles stochastic sampling. The scene is drawn 8 times, each time translating the camera in x and y by a small displacement stored in an array jitter[ ] of vectors. Each new drawing is scaled by 1/8 and added pixel by pixel to the accumulation buffer using glAccum(GL_ACCUM, 1/8.0).

35 Anti-aliasing in OpenGL (3)
When the eight renditions have been drawn, the accumulation buffer is copied into the frame buffer using glAccum(GL_RETURN, 1.0). The result is an average of several pixels.

36 Anti-aliasing in OpenGL: Code
glClear(GL_ACCUM_BUFFER_BIT);                       // clear the accumulation buffer
for (int i = 0; i < 8; i++)
{
    cam.slide(f * jitter[i].x, f * jitter[i].y, 0); // slide the camera by the i-th jitter offset
    display();                                      // draw the scene
    glAccum(GL_ACCUM, 1/8.0);                       // add 1/8 of this rendering to the accumulation buffer
}
glAccum(GL_RETURN, 1.0);                            // copy the accumulation buffer into the frame buffer

37 Anti-aliasing in OpenGL: Code (2)
The jitter vector contains eight points whose x and y coordinates lie between -0.5 and 0.5. The header file jitter.h uses the values (-0.334818, 0.435331), (0.286438, -0.393495), (0.459462, 0.141540), (-0.414498, -0.192829), (-0.183790, 0.082102), (-0.079263, -0.317383), (0.102254, 0.299133), (0.164216, -0.054399). These mimic eight randomly chosen offsets from a circularly symmetric probability distribution, reminiscent of the EWA method described earlier. jitter.h also contains other jitter vectors, both shorter and longer, that can be used to try different levels of anti-aliasing.

38 Example Left: no anti-aliasing; right: eight jittered versions averaged in the accumulation buffer. Performance is reduced, since the scene is rendered 8 times for each frame.

39 Creating More Shades and Colors
32 bits per pixel is common now. Considering how to create many colors from a much smaller number of bits will help us better understand how the human visual system interacts with an image.

40 Halftoning Halftoning (used for pictures in newspapers) trades spatial resolution for color resolution. Only black ink is used, yet an image appears to have many levels of gray. This is achieved by using smaller or larger blobs of black ink spaced closely together. Areas where most of the blobs are large appear darker to the eye because the average level of blackness is higher. Places where the blobs are smaller appear as a lighter shade of gray.

41 Halftoning (2) The eye combines the blobs, and perceives an average darkness over small regions. The spatial resolution of a newspaper picture is much less than that of a photograph, however, because it is made up of distinct blobs, which cannot be arbitrarily small.

42 Computer Halftoning Digital halftoning, or patterning, uses arrays of small dots instead of variable-sized blobs. The figure shows 2-by-2 arrays of dots (each dot is 0 or 1) being used to simulate larger blobs having five possible intensity levels. The eye sees the average intensity in each 2-by-2 blob, and so can see five levels.

43 Computer Halftoning (2)
Example: suppose an original image is a 100-by-100 array of pixels whose intensity values range from 0 to 4, and only a bi-level display is available. We display the image using a 200-by-200 pixel area, shading each 2-by-2 block of pixels to create a semblance of one of the gray shades 0, ..., 4, as sketched below. Again spatial resolution is exchanged for intensity resolution.
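A minimal sketch of this expansion (the on/off dot pattern chosen for each level is one possible irregular arrangement; others work too):

#include <vector>

// Patterning sketch: expand an image whose pixels hold levels 0..4
// into a twice-as-wide, twice-as-tall bi-level image; each source
// pixel becomes a 2x2 cell with 'level' dots turned on.
std::vector<int> pattern2x2(const std::vector<int>& src, int w, int h)
{
    // patterns[level][row][col]: 1 = dot on. Level 2 uses a diagonal
    // placement so that uniform regions do not show stripes.
    static const int patterns[5][2][2] = {
        { {0,0}, {0,0} },    // level 0: no dots
        { {0,1}, {0,0} },    // level 1
        { {0,1}, {1,0} },    // level 2: diagonal
        { {1,1}, {1,0} },    // level 3
        { {1,1}, {1,1} } };  // level 4: all dots
    std::vector<int> out(4 * w * h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            for (int r = 0; r < 2; r++)
                for (int c = 0; c < 2; c++)
                    out[(2 * y + r) * (2 * w) + (2 * x + c)] =
                        patterns[src[y * w + x]][r][c];
    return out;
}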

44 Computer Halftoning (3)
The positions of the black elements in the cell were chosen to be as irregular as possible. If, instead, either of the patterns shown on the right were used for level 2, regions of uniform intensity might show horizontal or vertical stripes.

45 Computer Halftoning (4)
Left: 256 shades of grey; right: the same image on a bi-level display using 2-by-2 patterning.

46 Computer Halftoning (5)
Larger cell sizes can be used to create a larger number of gray levels. An n-by-n cell of zeros and ones can produce n² + 1 gray levels.

47 Computer Halftoning (6)
Patterning is most applicable when the original image is of lower resolution than is the display device to be used.

48 Error Diffusion Error diffusion provides another technique for displaying multi-level pixmaps on a display that supports a small number of colors. Suppose each pixel of the original pixmap has an intensity between 0 and 255, and that we need to replace each pixel by 0 or 1.

49 Error Diffusion (2) If a pixel has intensity A, we replace it by 0 if A < 128, and by 1 if A ≥ 128. When A is anything other than exactly 0 or 255, this produces some error between the true and the displayed values. If A = 42, for instance, we set the display pixel to 0, which is too low by the amount 42. If A = 167, we display a 1 (the highest intensity, corresponding to a pixel value of 255), which is too high by the amount 255 - 167 = 88.

50 Error Diffusion (3) In error diffusion we try to compensate for the unavoidable errors by subtracting them from some of the neighboring pixels in the pixmap. We pass portions of the error on to neighboring pixels that haven’t been thresholded yet, so that when they get thresholded later it’s the new adjusted value that is tested against the threshold. In this way the error diffuses through the image, maintaining proper values of average intensity.

51 Error Diffusion (4) The figure shows part of the original (multi-level) pixmap. It is processed top to bottom and left to right. The shaded pixels have been processed. Pixel p (actual value A) has just been compared with 128, and either 0 or 1 has been output.

52 Error Diffusion (5) If A is less than 128 the display pixel is set to 0 and the error E is -A (we are displaying a value A too low). If A is greater than or equal to 128 the display pixel is set to 1 and the error E is 255-A (we are displaying a value too high by the amount 255-A).

53 Error Diffusion (6) Fractions of the resulting error E are now passed to pixels a, b, c, and d. Old values are replaced with:
a = a - fa E   (adjust pixel to the right)
b = b - fb E   (adjust pixel at lower left)
c = c - fc E   (adjust pixel below)
d = d - fd E   (adjust pixel at lower right)

54 Error Diffusion (7) A typical choice of the fractions is (fa, fb, fc, fd) = (7/16, 3/16, 5/16, 1/16). These values sum to one, so the entire amount of error has been passed off to neighbors of p. This acts to preserve the average intensity of a region.

55 Error Diffusion (8) When the end of a scan line is reached, the errors that would go to a and d do not get passed on to the start of the next scan line. They can be discarded, or the algorithm can diffuse the entire error to pixels b and c. Experience shows that it is best to alternate the direction in which successive scan lines are processed: first left to right, then right to left.

56 Error Diffusion (9) The pattern in the figure reverses on the next line (e.g., a is then to the left). The snake-like shape of this scanning has caused it to be called a serpentine raster pattern.
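A minimal sketch of the full pass (the float buffer of 0..255 intensities and the function name are illustrative; the threshold, the fractions (7/16, 3/16, 5/16, 1/16), and the serpentine order follow the slides):

#include <vector>

// Error-diffusion sketch with a serpentine raster. Each pixel is
// thresholded at 128; fractions of the resulting error E are passed
// to the unprocessed neighbors a (ahead), b, c, d (row below).
std::vector<int> errorDiffuse(std::vector<float> img, int w, int h)
{
    std::vector<int> out(w * h);
    for (int y = 0; y < h; y++) {
        bool l2r = (y % 2 == 0);          // alternate row direction
        int dir = l2r ? 1 : -1;           // which way "ahead" points
        for (int i = 0; i < w; i++) {
            int x = l2r ? i : w - 1 - i;
            float A = img[y * w + x];
            int bit = (A >= 128) ? 1 : 0;
            out[y * w + x] = bit;
            float E = bit ? 255 - A : -A; // error in displayed value
            // a = a - (7/16)E : the pixel ahead on this row
            if (x + dir >= 0 && x + dir < w)
                img[y * w + x + dir] -= (7 / 16.f) * E;
            if (y + 1 < h) {              // row below: b, c, d
                if (x - dir >= 0 && x - dir < w)
                    img[(y + 1) * w + x - dir] -= (3 / 16.f) * E;
                img[(y + 1) * w + x] -= (5 / 16.f) * E;
                if (x + dir >= 0 && x + dir < w)
                    img[(y + 1) * w + x + dir] -= (1 / 16.f) * E;
            }
        }
    }
    return out;
}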

57 Example The figure shows a 512-by-512 pixmap after error diffusion, viewed on a bi-level display. The error-diffusion method used the serpentine raster and the coefficients (7/16, 3/16, 5/16, 1/16).

58 Error Diffusion (10) Extending this technique to displays that support more than two colors is easy. At each pixel the closest displayable level is found, and the resulting error is passed on exactly as described. For color images, each of the three color components is error-diffused independently.

