1 Photo-realistic Rendering and Global Illumination in Computer Graphics, Spring 2012
Realism in Computer Graphics
K. H. Ko, School of Mechatronics, Gwangju Institute of Science and Technology

2 Why Realism?
The creation of realistic pictures is an important goal in fields such as simulation, design, entertainment and advertising, research and education, and command and control.
Creating realistic computer-generated images is often an easier, less expensive, and more effective way to see preliminary results than building models and prototypes.
- This allows more alternative designs to be considered.
In the entertainment world, computer games and science-fiction movies are the main beneficiaries of advances in computer graphics technology.
Realistic images are becoming an essential tool in research and education.
Another application of realistic imagery is command and control, in which the user needs to be informed about, and to control, the complex process represented by the picture.

3 Fundamental Difficulties
A fundamental difficulty in achieving total visual realism is the complexity of the real world.
- The richness of our environment: texture, subtle color gradations, shadows, reflections, and slight irregularities in the surrounding objects.
A more easily met subgoal in the quest for realism is to provide sufficient information to let the viewer understand the 3D spatial relationships among several objects.
- This subgoal can be achieved at a significantly lower cost.
- It is a common requirement in CAD and in many other application areas.
- In some contexts, extra detail may only distract the viewer's attention from the more important information being depicted.

4 Fundamental Difficulties
One long-standing difficulty in depicting spatial relationships is that most display devices are 2D.
- 3D objects must be projected into 2D, with considerable attendant loss of information.

5 Fundamental Difficulties
The more the viewers know about the object being displayed, the more readily they can form an object hypothesis.
- Additional context will resolve the ambiguity.

6 Rendering Techniques for Line Drawings
A subgoal of realism: showing 3D depth relationships on a 2D surface.
- This can be achieved with line drawings.

7 Rendering Techniques for Line Drawings
Multiple Orthographic Views
- The projection plane is perpendicular to a principal axis.

8 Rendering Techniques for Line Drawings
Axonometric and Oblique Projections
- A point's z coordinate influences its x and y coordinates in the projection.

9 Rendering Techniques for Line Drawings
Perspective Projections
- An object's size is scaled in inverse proportion to its distance from the viewer.
- Our interpretation of perspective projections is often based on the assumption that a smaller object is farther away.
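
A minimal sketch of the inverse-distance scaling described above, assuming a simple pinhole model with the projection plane at distance d from the viewer (the function name and parameters are illustrative, not from the slides):

# Hypothetical pinhole projection: a point's projected coordinates shrink as 1/z.
def perspective_project(x, y, z, d=1.0):
    """Project a camera-space point (x, y, z) onto the plane z = d."""
    return (d * x / z, d * y / z)

# An object twice as far away projects at half the size.
print(perspective_project(1.0, 1.0, 2.0))   # (0.5, 0.5)
print(perspective_project(1.0, 1.0, 4.0))   # (0.25, 0.25)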

10 Rendering Techniques for Line Drawings
Depth Cueing
- The depth (distance) of an object can be represented by the intensity of the image: parts of objects intended to appear farther from the viewer are displayed at lower intensity.
- The eye's intensity resolution is lower than its spatial resolution, so depth cueing is not useful for accurately depicting small differences in distance.
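
One common way to realize depth cueing is a linear blend between full intensity at the near plane and a dimmed intensity at the far plane; the sketch below assumes that linear falloff, and its parameter names are invented for illustration:

def depth_cue(intensity, z, z_near, z_far, min_scale=0.2):
    """Attenuate intensity linearly with depth: nearer parts stay brighter."""
    t = (z - z_near) / (z_far - z_near)      # 0 at the near plane, 1 at the far plane
    t = min(max(t, 0.0), 1.0)
    return intensity * (1.0 - (1.0 - min_scale) * t)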

11 Rendering Techniques for Line Drawings
Depth Clipping

12 Rendering Techniques for Line Drawings
Texture
- Simple vector textures may be applied to an object.
- Cross-hatching: useful in perspective projections.
Color

13 Rendering Techniques for Line Drawings
Visible-Line Determination
- Hidden-line removal is not necessarily the most effective way to show depth relations.
- Showing hidden lines as dashed lines can be a useful compromise.

14 Rendering Techniques for Shaded Images
Visible-Surface Determination

15 Rendering Techniques for Shaded Images
Illumination and Shading

16 Rendering Techniques for Shaded Images
Interpolated Shading
- Shading information is computed for each polygon vertex and interpolated across the polygon to determine the shading at each pixel.
- Useful for curved surfaces.
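
As an illustration, intensity interpolation across a triangle (Gouraud-style shading) can be sketched with barycentric weights; the slides give no implementation, so the names below are assumptions:

def interpolate_shading(i0, i1, i2, w0, w1, w2):
    """Blend the intensities computed at a triangle's three vertices using the
    pixel's barycentric weights (w0 + w1 + w2 == 1)."""
    return w0 * i0 + w1 * i1 + w2 * i2

# A pixel at the centroid receives the average of the vertex intensities.
print(interpolate_shading(0.2, 0.5, 0.8, 1/3, 1/3, 1/3))   # ~0.5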

17 Rendering Techniques for Shaded Images
Material Properties

18 Rendering Techniques for Shaded Images
Modeling Curved Surfaces

19 Rendering Techniques for Shaded Images
Improved Illumination and Shading
- One of the most important reasons for the unreal appearance of most computer graphics images is the failure to model accurately the many ways that light interacts with objects.

20 Rendering Techniques for Shaded Images
Texture

21 Rendering Techniques for Shaded Images
Shadows: soft and hard shadows

22 Rendering Techniques for Shaded Images
Transparency and Reflection

23 Rendering Techniques for Shaded Images
Improved Camera Models
- Modeling the focal properties of lenses: some parts of objects are in focus, whereas closer and farther parts are out of focus.
- Representing moving objects: motion blur.

24 Improved Object Models
- The search for realism has concentrated in part on ways of building more convincing models, both static and dynamic, independent of the rendering technology used.
- Gases, waves, mountains, trees, etc.

25 Dynamics
Changes that spread across a sequence of pictures, including changes in position, size, material properties, lighting, and viewing specification.
Motion dynamics
- From simple transformations to complex animation.

26 Stereopsis
Look at an object with one eye, then with the other.
- The two views differ slightly because our eyes are separated by a few inches.
- The binocular disparity caused by this separation provides a powerful depth cue called stereopsis, or stereo vision.
- Our brain fuses the two separate images into one that is interpreted as being in 3D.
Holography, true 3D images, etc.
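
A hedged sketch of how binocular disparity depends on depth, assuming a simple pinhole stereo model (the formula and names are standard stereo geometry, not taken from the slides):

def disparity(baseline, focal_length, depth):
    """Horizontal offset between the left- and right-eye projections of a
    point at the given depth; it shrinks as the point moves farther away."""
    return baseline * focal_length / depth

print(disparity(baseline=0.065, focal_length=0.02, depth=1.0))   # 0.0013
print(disparity(baseline=0.065, focal_length=0.02, depth=10.0))  # 0.00013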

27 Interacting with Our Other Senses
The final step toward realism is the integration of realistic imagery with information presented to our other senses.
- Virtual reality, immersive modeling.

28 Aliasing and Anti-aliasing
Signal: a function that conveys information.
- A function of time: temporal domain.
- A function of space: spatial domain.
An image z = f(x, y), with z the intensity, is a 2D signal; the intensity along a horizontal line α of the image is a 1D signal.

29 Aliasing and Anti-aliasing
Continuous and Discrete Signals
- Continuous: defined over a continuum of positions in space.
- Discrete: defined only at a set of discrete points in space.
Our rendering algorithms must determine the intensities of the finite number of pixels in the array so that they best represent the continuous 2D signal defined by the projection.
The process of selecting a finite set of values from a signal is called sampling; the selected values are samples.
We display the samples using a process called reconstruction, which attempts to recreate the original continuous signal from the samples.

30 Aliasing and Anti-aliasing
Perfect reconstruction is often impossible.
- Converting a continuous signal to a finite array of values may result in a loss of information.
- For many kinds of signals, the minimum sampling frequency would have to be infinite.

31 Point Sampling
Select one point for each value, evaluate the original signal at that point, and assign its value to the pixel.
Because the signal's values are sampled at only a finite set of points, important features of the signal may be missed. Possible remedies:
- Increase the sampling rate.
- Generate an image with fewer pixels by combining several adjacent samples.
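
A minimal sketch of point sampling a continuous 1D signal at pixel centres (the function name and the test signal are illustrative assumptions, not from the slides):

import math

def point_sample(signal, n_pixels, width):
    """Evaluate the continuous signal once at the centre of each pixel."""
    return [signal((i + 0.5) * width / n_pixels) for i in range(n_pixels)]

# A rapidly varying signal sampled at only 8 points: narrow features falling
# between the sample positions are missed entirely.
samples = point_sample(lambda x: math.sin(40.0 * x), n_pixels=8, width=1.0)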

32 Point Sampling
Supersampling: take more than one sample for each pixel and combine them.
- This corresponds to reconstructing the signal and resampling the reconstruction.
- Often better than sampling the original signal once per pixel.
How many samples are enough, then?
- We need a way to guarantee that the samples we take are spaced closely enough to reconstruct the original signal.
- Sampling theory provides the answer.
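
Continuing the sketch above, supersampling might look like this: several evenly spaced sub-samples are taken inside each pixel and averaged (again, illustrative names, not code from the slides):

def supersample(signal, n_pixels, width, rate=4):
    """Take `rate` evenly spaced samples inside each pixel and average them."""
    pixel_width = width / n_pixels
    pixels = []
    for i in range(n_pixels):
        x0 = i * pixel_width
        sub = [signal(x0 + (k + 0.5) * pixel_width / rate) for k in range(rate)]
        pixels.append(sum(sub) / rate)
    return pixels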

33 Area Sampling
Unweighted Area Sampling
- Integrate the signal over a square centered on each grid point, divide by the square's area, and use this average intensity as the pixel's intensity.
- No objects are missed: each object's projection contributes to every pixel that contains it, in strict proportion to the amount of the pixel's area it covers, but without regard to where within the pixel that area lies.
- The object's position within the pixel is not otherwise taken into account.
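
The per-pixel integral can be approximated numerically; the sketch below assumes a coverage function that is 1 inside the object's projection and 0 outside, and a regular grid of sub-samples (all names are illustrative):

def unweighted_area_sample(coverage, px, py, pixel_size, n=16):
    """Approximate the average of `coverage` over the square pixel whose
    lower-left corner is (px, py), using an n x n grid of sub-samples."""
    total = 0.0
    for j in range(n):
        for i in range(n):
            x = px + (i + 0.5) * pixel_size / n
            y = py + (j + 0.5) * pixel_size / n
            total += coverage(x, y)
    return total / (n * n)   # fraction of the pixel covered by the object

# Example: a half-plane x >= 0.5 covers exactly half of the unit pixel at the origin.
half_plane = lambda x, y: 1.0 if x >= 0.5 else 0.0
print(unweighted_area_sample(half_plane, 0.0, 0.0, pixel_size=1.0))   # 0.5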

34 Area Sampling
Weighted Area Sampling
- Assign different weights to different parts of the pixel.
- Weights are determined by the distance from the center of the pixel.
- The weighting functions of adjacent pixels should overlap.
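
One common choice of weighting function is a tent (cone) filter whose weight falls off linearly with distance from the pixel centre; the sketch below assumes that choice, since the slides do not name a particular filter:

import math

def tent_weight(dx, dy, radius):
    """Weight of a sample offset (dx, dy) from the pixel centre; it is highest
    at the centre and reaches zero at `radius`. A radius larger than half the
    pixel spacing makes the weighting functions of adjacent pixels overlap."""
    return max(0.0, 1.0 - math.hypot(dx, dy) / radius)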

35 Sampling Theory
A signal can be decomposed into a sum of phase-shifted sine waves.
We use two representations of a signal, related through the Fourier transform:
- Time (spatial) domain: signal vs. time (space).
- Frequency domain: signal vs. frequency.
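
For a sampled signal, the frequency-domain representation can be computed with a discrete Fourier transform; a naive, illustrative version (not part of the slides) is:

import cmath, math

def dft(samples):
    """Naive discrete Fourier transform: expresses the sampled signal as a sum
    of complex sinusoids, i.e. its frequency-domain representation."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure sinusoid sampled over one window concentrates its energy in two bins.
signal = [math.cos(2 * math.pi * 3 * t / 16) for t in range(16)]
spectrum = [abs(c) for c in dft(signal)]   # peaks at k = 3 and k = 13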

36 Sampling Theory

37 Sampling Theory
Sampling theory tells us that a signal can be properly reconstructed from its samples if the original signal is sampled at a frequency greater than twice f_h, the highest frequency component in its spectrum.
- This lower bound on the sampling rate is called the Nyquist rate.
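
In symbols, the criterion is f_s > 2 f_h; a trivial check (the function names are illustrative):

def nyquist_rate(f_h):
    """Lower bound on the sampling frequency for a signal whose highest
    frequency component is f_h."""
    return 2.0 * f_h

def can_reconstruct(f_s, f_h):
    return f_s > nyquist_rate(f_h)

print(can_reconstruct(100.0, 60.0))   # False: 100 < 2 * 60
print(can_reconstruct(150.0, 60.0))   # True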

38 Aliasing
The phenomenon of high frequencies masquerading as low frequencies in the reconstructed signal.

39 Aliasing
In computer graphics, aliasing includes:
- "Jaggies" along edges, caused by discontinuities at the projected edges of objects: a point sample either does or does not lie within an object's projection.
- Textures and objects seen in perspective may cause arbitrarily many discontinuities and fluctuations in the environment's projection, making it possible for objects whose projections are too small and too close together to be alternately missed and sampled.
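
A small numerical illustration of frequencies masquerading as lower ones (the specific frequencies are arbitrary choices, not from the slides): a cosine above the Nyquist limit produces exactly the same samples as a much lower-frequency cosine.

import math

fs = 8.0               # sampling frequency (samples per unit length)
f_high = 7.0           # signal frequency, above the Nyquist limit fs / 2 = 4
f_low = fs - f_high    # its alias, 1.0, well below the Nyquist limit

# At every sample position the two cosines are indistinguishable.
for n in range(16):
    t = n / fs
    assert abs(math.cos(2 * math.pi * f_high * t)
               - math.cos(2 * math.pi * f_low * t)) < 1e-9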

40 Filtering
A partial solution to the aliasing problem:
- Remove the offending high frequencies from the original signal.
- The new signal can then be reconstructed properly from a finite number of samples.
Low-pass filtering
- It causes blurring in the spatial domain, since fine visual detail is carried by the high frequencies that are attenuated by low-pass filtering.
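
A crude low-pass filter is a moving average (box filter); the sketch below is only an illustration of how averaging attenuates high-frequency detail, not the filtering prescribed by the slides:

def box_filter(samples, radius=1):
    """Replace each sample by the mean of its neighbourhood; rapid variations
    (high frequencies) are smoothed out, i.e. the signal is blurred."""
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - radius), min(len(samples), i + radius + 1)
        window = samples[lo:hi]
        out.append(sum(window) / len(window))
    return out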

41 Filtering

42 Sampling Theory
Low-pass filtering

43 Reconstruction

44 Q & A?

