
Photo-realistic Rendering and Global Illumination in Computer Graphics Spring 2012 Visual Appearance K. H. Ko School of Mechatronics Gwangju Institute of Science and Technology

2 Surface Detail (Simulation of missing surface detail)
Surface-Detail Polygons
- Add gross detail through the use of surface-detail polygons to show features on a base polygon.
- Each surface-detail polygon is coplanar with its base polygon.
- It therefore does not need to be compared with other polygons during visible-surface determination.

3 Surface Detail (Simulation of missing surface detail)
Texture Mapping
- Map an image, either digitized or synthesized, onto a surface.
- Two steps:
  1. Map the four corners of the pixel onto the surface, giving the pixel's corner points in the surface's (s,t) coordinate space.
  2. Map those corner points from (s,t) space into the texture's (u,v) coordinate space.
- We then compute a value for the pixel by summing all texels that lie within the resulting quadrilateral, weighting each by the fraction of the texel that lies within the quadrilateral (see the sketch below).
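A minimal sketch of the final weighted-sum step, under a simplifying assumption not in the slides: the pixel's quadrilateral footprint in (u,v) space has already been conservatively bounded by an axis-aligned rectangle measured in texel units. The Texture struct and sampleFootprint are hypothetical names.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Texture {
    int width = 0, height = 0;
    std::vector<float> texels;  // one channel, row-major
    float at(int u, int v) const {
        u = std::clamp(u, 0, width - 1);
        v = std::clamp(v, 0, height - 1);
        return texels[v * width + u];
    }
};

// Sum every texel overlapping the footprint [u0,u1] x [v0,v1], weighting
// each by the fraction of its area inside the footprint, then normalize.
float sampleFootprint(const Texture& tex, float u0, float v0, float u1, float v1) {
    float sum = 0.0f, area = 0.0f;
    for (int v = (int)std::floor(v0); v < (int)std::ceil(v1); ++v) {
        for (int u = (int)std::floor(u0); u < (int)std::ceil(u1); ++u) {
            // Overlap of texel [u,u+1] x [v,v+1] with the footprint rectangle.
            float w = (std::min(u1, u + 1.0f) - std::max(u0, (float)u))
                    * (std::min(v1, v + 1.0f) - std::max(v0, (float)v));
            sum += w * tex.at(u, v);
            area += w;
        }
    }
    return area > 0.0f ? sum / area : 0.0f;
}
```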

4 Surface Detail (Simulation of missing surface detail)
Bump Mapping
- It simulates slight roughness on a surface by perturbing the surface normal before it is used in the illumination model.
- We introduce a bump map, which is an array of displacements, each of which can be used to simulate displacing a point on a surface a little above or below that point's actual position.
- Given a surface P = P(u,v) with unperturbed normal N = P_u × P_v, a good approximation of the disturbed normal is
  N' = N + (B_u (N × P_v) − B_v (N × P_u)) / |N|,
  where B_u and B_v are the partial derivatives of the selected bump-map entry B with respect to the bump-map parameterization axes, u and v.
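A minimal sketch of this perturbation in C++, with hypothetical Vec3 helpers. Pu and Pv are the surface tangents and Bu, Bv the bump-map derivatives at the point being shaded.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*(float s, const Vec3& v) { return { s * v.x, s * v.y, s * v.z }; }
float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// N' = N + (Bu*(N x Pv) - Bv*(N x Pu)) / |N|
Vec3 perturbedNormal(const Vec3& Pu, const Vec3& Pv, float Bu, float Bv) {
    Vec3 N = cross(Pu, Pv);  // unperturbed normal
    Vec3 D = (1.0f / length(N)) * (Bu * cross(N, Pv) - Bv * cross(N, Pu));
    return N + D;            // renormalize before using in the lighting model
}
```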

5 Surface Detail (Simulation of missing surface detail) Bump Mapping

6 Transparency
Transparency effects include:
- The bending of light (refraction)
- Attenuation of light due to the thickness of the transparent object
- Reflectivity
- Transmission changes due to the viewing angle
- etc.
These effects are only supported in limited form in real-time rendering systems, but a little transparency is better than none at all.

7 Transparency
How to achieve a transparency effect
Screen-door transparency
- A simple method for giving the illusion of transparency.
- The idea is to render the transparent polygon with a checkerboard fill pattern: every other pixel of the polygon is rendered, thereby leaving the object behind it partially visible (see the sketch below).
- In general, the pixels on the screen are close enough together that the checkerboard pattern itself is not visible.
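A minimal sketch of the idea inside a software rasterizer's per-pixel loop; the frame buffer and shading function are stand-ins, not from the slides.

```cpp
struct Color { float r, g, b; };

const int WIDTH = 640, HEIGHT = 480;
Color framebuffer[WIDTH * HEIGHT];

Color shadeTransparentPolygon(int x, int y) {
    return { 1.0f, 0.0f, 0.0f };  // stand-in: the polygon's shaded color
}

void writeScreenDoorPixel(int x, int y) {
    // Render only every other pixel in a checkerboard pattern; the "holes"
    // keep whatever is already behind the polygon visible (50% transparent).
    if (((x + y) & 1) == 0)
        return;
    framebuffer[y * WIDTH + x] = shadeTransparentPolygon(x, y);
}
```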

8 Transparency
How to achieve a transparency effect
Screen-door transparency: drawbacks
- A transparent object can be only 50% transparent.
- Only one transparent object can be convincingly rendered on one area of the screen.

9 Transparency
How to achieve a transparency effect
Screen-door transparency: advantage
- Simplicity: transparent objects can be rendered at any time, in any order, and no special hardware is needed.

10 Transparency
How to achieve a transparency effect
Alpha blending
- A method for more general and flexible transparency effects.
- Blends the transparent object's color with the color of the object behind it.

11 Transparency

12 Transparency
How to achieve a transparency effect
Alpha blending
- Alpha is a value describing the degree of opacity of an object for a given pixel.
  - 1.0: the object is opaque and entirely covers the pixel's area of interest.
  - 0.0: the pixel is not obscured at all.
- When an object is rendered on the screen, an RGB color and a Z-buffer depth are associated with each pixel. Another component, called alpha, can also be generated and, optionally, stored.

13 Transparency
How to achieve a transparency effect
Alpha blending
- To make an object transparent, it is rendered on top of the existing scene with an alpha of less than 1.0.
- Blending formula: c_o = α_s c_s + (1 − α_s) c_d, where
  - c_s: the color of the transparent object (source)
  - α_s: the object's alpha
  - c_d: the pixel color before blending (destination)
  - c_o: the resulting color due to placing the transparent object over the existing scene
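A minimal sketch of this blend with straight (non-premultiplied) alpha; the Color struct and blendOver name are hypothetical.

```cpp
struct Color { float r, g, b; };

// c_o = a_s * c_s + (1 - a_s) * c_d, applied per channel.
Color blendOver(const Color& src, float srcAlpha, const Color& dst) {
    return { srcAlpha * src.r + (1.0f - srcAlpha) * dst.r,
             srcAlpha * src.g + (1.0f - srcAlpha) * dst.g,
             srcAlpha * src.b + (1.0f - srcAlpha) * dst.b };
}
```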

14 Transparency
How to achieve a transparency effect: sorting
- Rendering transparent objects properly into a scene requires sorting (see the sketch below).
  - First, the opaque objects are rendered.
  - Second, the transparent objects are blended on top of them in back-to-front order.
- The blending equation is order-dependent, so blending in arbitrary order can produce serious artifacts.
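A minimal sketch of the sorting step; Object, viewDepth, and drawBlended are hypothetical stand-ins, and the opaque pass is assumed to have run already.

```cpp
#include <algorithm>
#include <vector>

struct Object {
    float depth;  // stand-in: distance from the viewer
};

float viewDepth(const Object& o) { return o.depth; }
void drawBlended(const Object& o) { /* draw with alpha blending enabled */ }

void drawTransparent(std::vector<Object>& transparent) {
    // Farthest first, so each blend sees the correct destination color.
    std::sort(transparent.begin(), transparent.end(),
              [](const Object& a, const Object& b) {
                  return viewDepth(a) > viewDepth(b);
              });
    for (const Object& o : transparent)
        drawBlended(o);
}
```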

15 Transparency

16 Transparency
How to achieve a transparency effect
- Transparency can also be computed using two or more depth buffers and multiple passes (see the sketch below).
  - First, a rendering pass is made so that the opaque surfaces' z-depths are in the first Z-buffer.
  - On the second rendering pass, the depth test is modified to accept the surface that is both closer than the first buffer's stored z-depth and the farthest among such surfaces. This step renders the backmost transparent object into the frame buffer and its z-depths into a second Z-buffer.
  - The second Z-buffer is then used to derive the next-closest transparent surface in the next pass, and so on.
- Effective, but slow!
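A high-level sketch of the multi-pass loop, assuming a renderer with the two hypothetical entry points stubbed out below; the depth-test modification described above happens inside renderPeelPass.

```cpp
#include <vector>

struct DepthBuffer {
    std::vector<float> z;  // one depth per pixel
};

// Stand-in: render opaque geometry, filling limitZ with its depths.
void renderOpaque(DepthBuffer& limitZ) {}

// Stand-in: at each pixel, accept the FARTHEST transparent surface that is
// still closer than limitZ, blend it into the frame buffer, record its depth
// in peeledZ, and return how many pixels were written.
int renderPeelPass(const DepthBuffer& limitZ, DepthBuffer& peeledZ) { return 0; }

void renderSceneWithTransparency(int maxLayers) {
    DepthBuffer limitZ, peeledZ;
    renderOpaque(limitZ);  // pass 1: opaque surfaces' z-depths
    for (int layer = 0; layer < maxLayers; ++layer) {
        // Each pass peels off the next-closest transparent layer.
        if (renderPeelPass(limitZ, peeledZ) == 0)
            break;             // no transparent surfaces left
        limitZ = peeledZ;      // constrain the next pass to closer surfaces
    }
}
```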

17 Compositing
- The process of blending photographs or synthetic renderings of objects is called compositing.
- The alpha value at each pixel is stored along with the RGB color value for the object. The alpha channel is called the matte, and it shows the silhouette shape of the object.
- This RGBα image can then be blended with other such elements or against a background.

18 Compositing
- The most common way to store RGBα images is with premultiplied alphas: the RGB values are multiplied by the alpha value before being stored (see the sketch below).
- Image file formats that support alpha include TIFF and PNG.
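A minimal sketch of premultiplication, plus one payoff not stated on the slide: with premultiplied colors, the "over" blend simplifies to c_o = c_s + (1 − α_s) c_d. The RGBA struct and function names are hypothetical.

```cpp
struct RGBA { float r, g, b, a; };

// Multiply RGB by alpha before storing (premultiplied form).
RGBA premultiply(const RGBA& straight) {
    return { straight.r * straight.a, straight.g * straight.a,
             straight.b * straight.a, straight.a };
}

// "src over dst" for premultiplied colors: no per-source multiply needed.
RGBA overPremultiplied(const RGBA& src, const RGBA& dst) {
    float k = 1.0f - src.a;
    return { src.r + k * dst.r, src.g + k * dst.g,
             src.b + k * dst.b, src.a + k * dst.a };
}
```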

19 Compositing
Chroma-keying
- A concept related to the alpha channel.
- Actors are filmed against a blue, yellow, or green screen and blended with a background.
Blue-screen matting
- A particular color is designated to be considered transparent; where it is detected, the background is displayed.
- One drawback of this scheme is that the object is either entirely opaque or entirely transparent at any given pixel, so alpha is effectively only 1.0 or 0.0 (see the sketch below).
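A minimal sketch of such a binary matte; the key color, threshold, and function name are hypothetical parameters, not from the slides.

```cpp
struct Color { float r, g, b; };

// Returns 1.0 (opaque) or 0.0 (transparent): pixels close enough to the key
// color are keyed out, so the background shows through there.
float chromaKeyAlpha(const Color& c, const Color& key, float threshold) {
    float dr = c.r - key.r, dg = c.g - key.g, db = c.b - key.b;
    float dist2 = dr * dr + dg * dg + db * db;
    return (dist2 < threshold * threshold) ? 0.0f : 1.0f;
}
```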

20 Fog
Fog is a simple atmospheric effect that can be added to the final image.
- It increases the level of realism for outdoor scenes.
- Since the fog effect increases with the distance from the viewer, it helps the viewer of a scene determine how far away objects are located.
- If used properly, it helps to provide smoother culling of objects by the far plane.
- Fog is often implemented in hardware, so it can be used with little or no additional cost.

21 Fog

22 Fog
- The color of the fog is denoted c_f.
- The fog factor is f ∈ [0,1]; it decreases with the distance from the viewer.
- The final color of the pixel is determined by c_p = f c_s + (1 − f) c_f, where c_s is the shaded color of the surface.
- As f decreases, the effect of the fog increases.

23 Fog
- Linear fog: the fog factor decreases linearly with the depth from the viewer:
  f = (z_end − z_p) / (z_end − z_start)
- Exponential fog:
  f = e^(−d_f z_p)
- Squared exponential fog:
  f = e^(−(d_f z_p)²)
- d_f is a parameter used to control the density of the fog. In each case, the fog factor f is clamped to [0,1] (see the sketch below).
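A minimal sketch of the three fog factors and the blend from slide 22; zStart, zEnd, and density stand for z_start, z_end, and d_f.

```cpp
#include <algorithm>
#include <cmath>

float clamp01(float f) { return std::clamp(f, 0.0f, 1.0f); }

// f = (z_end - z_p) / (z_end - z_start)
float linearFog(float zp, float zStart, float zEnd) {
    return clamp01((zEnd - zp) / (zEnd - zStart));
}
// f = e^(-d_f * z_p)
float exponentialFog(float zp, float density) {
    return clamp01(std::exp(-density * zp));
}
// f = e^(-(d_f * z_p)^2)
float squaredExponentialFog(float zp, float density) {
    float d = density * zp;
    return clamp01(std::exp(-d * d));
}

// Final pixel color, per channel: c_p = f * c_s + (1 - f) * c_f
float fogBlend(float f, float surface, float fog) {
    return f * surface + (1.0f - f) * fog;
}
```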

24 Fog
- Tables are sometimes used when implementing these fog functions in hardware accelerators.
- Several simplifying assumptions made in real-time systems can affect the quality of the output.
  - Fog can be applied at the vertex level or the pixel level.
    - Applying it at the vertex level means the fog effect is computed as part of the illumination equation, and the computed color is interpolated across the polygon using Gouraud shading.
    - Pixel-level fog is computed using the depth stored at each pixel.
    - Pixel-level fog gives a better result, all other factors being equal.
  - Usually the distance along the viewing axis is used as the depth for computing the fog effect. One can instead use the true distance from the viewer to the object, called radial fog, range-based fog, or Euclidean distance fog (see the sketch below).
- The highest-quality fog is generated by using pixel-level radial fog.
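A minimal sketch contrasting the two depth choices, assuming an eye-space position with the viewer at the origin looking down −z (a common but here assumed convention).

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Depth along the viewing axis (the usual hardware fog depth).
float axialDepth(const Vec3& eyeSpacePos) { return -eyeSpacePos.z; }

// True distance from the viewer: radial / range-based / Euclidean fog.
float radialDepth(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
}
```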

25 Fog

26 Gamma Correction
- Once the pixel values have been computed, we need to display them on a monitor.
- There is a physical relationship between the voltage input to an electron gun in a cathode-ray tube (CRT) monitor and the light output by the screen:
  I = a (V + ε)^γ
  - V: input voltage
  - a and γ: constants for each monitor
  - ε: the black level (brightness) setting for the monitor
  - I: the intensity generated
- The gamma value for a particular CRT ranges from about 2.3 to 2.6. The value 2.5 is most often used, as it is a good average monitor value, but it can be set differently depending on the situation.

27 Gamma Correction
- There is a nonlinearity in the CRT response curve: the relation of voltage to intensity for an electron gun in a CRT is nonlinear.
- This causes a problem within the field of computer graphics, because lighting equations compute intensity values that have a linear relationship to each other.
  - A computed value of 0.5 is expected to appear half as bright as 1.0, but due to the nonlinearity, this expected effect may not be obtained.
- To ensure that the computed values are perceived correctly relative to each other, gamma correction is necessary.

28 Gamma Correction
- Assume that the black level is zero. The computed color component c_i is then converted by
  c = c_i^(1/γ)
  for display by the CRT.
  - Example: with a gamma of 2.2 and c_i = 0.5, the gamma-corrected c is 0.5^(1/2.2) ≈ 0.73. So if the electron gun level is set at 0.73, an intensity level of 0.5 is displayed.
- Computed colors need to be boosted by this equation to be perceived properly with respect to one another when displayed on a CRT monitor (see the sketch below).
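A minimal sketch of the conversion, assuming a black level of zero as above.

```cpp
#include <cmath>

// c = c_i^(1/gamma); e.g. gammaCorrect(0.5f, 2.2f) ≈ 0.73
float gammaCorrect(float ci, float gamma) {
    return std::pow(ci, 1.0f / gamma);
}
```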

29 Gamma Correction
Gamma correction is important to real-time graphics for several reasons:
- Cross-platform compatibility
- Color fidelity, consistency, and interpolation
- Dithering
- Line and edge antialiasing quality
- Alpha blending and compositing
- Texturing

30 Gamma Correction
Cross-platform compatibility
- Gamma affects all images displayed, not just scene renderings.
- If gamma correction is ignored, models authored and rendered on, say, an SGI machine will display differently when moved to a Macintosh or a PC.
- This issue instantly affects any images or models made available on a web server. Some sites have employed the strategy of attempting to detect the platform of the client requesting information and serving up images or models tailored for it.

31 Gamma Correction
Color fidelity
- Without correction, the appearance of a color will differ from its true hue.
Color consistency
- Without correction, intensity controls will not work as expected: if a light or material color is changed from (0.5, 0.5, 0.5) to (1.0, 1.0, 1.0), the user will expect it to appear twice as bright, but it will not.
Color interpolation
- A surface that goes from dark to light will not appear to increase linearly in brightness across its surface; the midtones will appear too dark.

32 Gamma Correction
Dithering
- In dithering, two colors are displayed close together, and the eye combines them and perceives a blend of the two.
- Lack of gamma correction adversely affects dithering algorithms: without accounting for gamma, the dithered color can be perceptibly different from the color that is to be represented.
- Similarly, using screen-door transparency will result in a perceived color different from the blended transparency color.

33 Gamma Correction
Line and edge antialiasing quality
- As the use of line and edge antialiasing in real-time rendering increases, gamma's effect on the quality of these techniques will be more noticeable.
- For example, suppose a polygon edge covers four screen grid cells, the polygon is white, and the background is black. Left to right, the cells are covered 1/8, 3/8, 5/8, and 7/8, so we want the pixels to appear with brightnesses 0.125, 0.375, 0.625, and 0.875.
- If the system has a gamma of 2.2, we need to send values of 0.389, 0.640, 0.808, and 0.941 to the electron guns. Failing to do so will mean that the perceived brightness does not increase linearly: sending 0.125 to the guns results in a perceived relative brightness of only 0.010, while 0.375 is affected somewhat less and is perceived as 0.116 (the short program below checks these numbers).
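A short program that reproduces these numbers under the stated gamma of 2.2.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float gamma = 2.2f;
    const float coverage[4] = { 0.125f, 0.375f, 0.625f, 0.875f };
    for (float c : coverage) {
        float send = std::pow(c, 1.0f / gamma);   // value to send to the guns
        float perceived = std::pow(c, gamma);     // if sent uncorrected
        std::printf("want %.3f  send %.3f  uncorrected appears as %.3f\n",
                    c, send, perceived);
    }
    return 0;
}
```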

34 Gamma Correction
Line and edge antialiasing quality
- This nonlinearity causes an artifact called roping, because the edge looks somewhat like a twisted rope.

35 Gamma Correction
Line and edge antialiasing quality
- Liquid-crystal displays (LCDs) often have different voltage/luminance response curves. Because of these different response curves, lines that look antialiased on CRTs may look jagged on LCDs, or vice versa.

36 Gamma Correction
Alpha blending and compositing
- These operations should be done in a linear space, and the final result should be gamma-corrected.
- This can lead to difficulties, as pixel values stored in the color buffer are likely to be gamma-corrected already; bits of accuracy are lost as values are transferred from one computational space to another.

37 Gamma Correction
Texturing
- Images used as textures are normally stored in gamma-corrected form for some particular type of system, e.g., a PC or a Macintosh.
- When using textures in a synthesized scene, care must be taken to gamma-correct the texture a sum total of only one time.

38 Gamma Correction
- Some systems provide hardware for gamma correction. If no such hardware is available, try to perform gamma correction earlier in the pipeline.
  - For example, we could gamma-correct the illumination value computed at each vertex, then Gouraud-shade from there. This approach partially solves the cross-platform problem.
- Another solution is to ignore the gamma correction problem entirely and not do anything about it.
  - Even if the issues you encounter cannot be fixed, it is important to understand what problems are caused by a lack of gamma correction.

39 Gamma Correction
- Gamma correction is not a user preference. It is something that can be designed into an application to allow cross-platform consistency and to improve image fidelity and rendering-algorithm quality.