Chapter X Advanced Texturing

Environment Mapping Environment mapping simulates a shiny object reflecting its surrounding environment. Remember the morphing cyborg in T2!!

Environment Mapping (cont’d) The first task for environment mapping is to capture the environment images in a texture called an environment map. The most popular implementation uses a cube map and is called cube mapping.

Cube Mapping To determine the reflected color at p, a ray is fired from the viewpoint toward p. The ray, denoted I, is reflected with respect to the surface normal n at p, producing the reflection vector R = I - 2(n·I)n. The HLSL library function texCUBE() takes a cube map and R as input, and returns an RGB color.
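A minimal sketch of this lookup in an HLSL pixel shader; the names envMap and eyePosW and the interpolated inputs are illustrative assumptions, not from the slides.

samplerCUBE envMap;
float3 eyePosW;                                 // viewpoint in world space

float4 PS_EnvMap(float3 posW : TEXCOORD0,       // surface point p in world space
                 float3 normalW : TEXCOORD1) : COLOR
{
    float3 I = normalize(posW - eyePosW);       // incident ray fired toward p
    float3 n = normalize(normalW);
    float3 R = reflect(I, n);                   // R = I - 2(n·I)n
    return texCUBE(envMap, R);                  // fetch the environment color along R
}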

Cube Mapping (cont’d) The cube map faces are named {+x, -x, +y, -y, +z, -z}.

Cube Mapping (cont’d) The cube map face intersected by R is identified using R's coordinate that has the largest absolute value. The remaining coordinates are divided by the largest absolute value, and then range-converted from [-1,1] to [0,1].
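As a sketch, the face selection and range conversion could be written as follows; texCUBE() performs this internally, and the exact per-face (u,v) axis orientation is API-dependent and omitted here.

void CubeFaceUV(float3 R, out int face, out float2 uv)
{
    float3 a = abs(R);
    float2 sc;                                   // the two remaining coordinates
    float  ma;                                   // the largest absolute value
    if (a.x >= a.y && a.x >= a.z) { ma = a.x; sc = R.yz; face = (R.x > 0) ? 0 : 1; }
    else if (a.y >= a.z)          { ma = a.y; sc = R.xz; face = (R.y > 0) ? 2 : 3; }
    else                          { ma = a.z; sc = R.xy; face = (R.z > 0) ? 4 : 5; }
    uv = (sc / ma) * 0.5 + 0.5;                  // divide by the largest |coordinate|, then map [-1,1] to [0,1]
}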

Cube Mapping (cont’d) Ideally we would sample the texel (u,v) hit by the ray actually traced from p, but looking up the cube map with R (as if R started at the cube's center) returns (u',v'). Note that (u,v) and (u',v') produce the same filtering result only when the environment is infinitely far away from p. Fortunately, people are fairly forgiving about the physical incorrectness that results when the environment is not far away.

Dynamic Cube Mapping A cube map is created at run time and then immediately used for environment mapping. It is a two-pass algorithm. 1st pass: the geometry shader replicates each incoming primitive into six separate primitives, and each copy is rasterized onto its own render target, which is a general off-screen buffer; six images are generated. 2nd pass: the scene is rendered using the cube map. For the first pass, multiple render targets (MRT) are used, enabling the rendering pipeline to produce images in multiple render-target textures at once. A sketch of the first-pass geometry shader is given below.
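A hedged sketch of such a geometry shader in Direct3D 10-style HLSL; the structure and matrix names (e.g., faceViewProj) are illustrative assumptions.

struct GS_IN  { float4 posW : POSITION; };
struct GS_OUT { float4 posH : SV_Position; uint face : SV_RenderTargetArrayIndex; };

float4x4 faceViewProj[6];                        // view-projection matrix of each cube-map face camera

[maxvertexcount(18)]                             // 6 faces x 3 vertices
void GS_CubeMap(triangle GS_IN input[3], inout TriangleStream<GS_OUT> stream)
{
    for (uint f = 0; f < 6; ++f)
    {
        GS_OUT v;
        v.face = f;                              // route this copy to the f-th render target
        for (uint i = 0; i < 3; ++i)
        {
            v.posH = mul(input[i].posW, faceViewProj[f]);
            stream.Append(v);
        }
        stream.RestartStrip();
    }
}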

Light Mapping Suppose a character navigates a static environment illuminated by static light sources. Even though the viewpoint moves, the diffuse reflection remains constant over the environment surfaces. We can therefore pre-compute part of the diffuse reflection ahead of time, store the result in a light map, and look up the light map at run time.

Light Mapping (cont’d) When creating a light map, there is no real-time constraint. Therefore, the light map can be computed using a global illumination algorithm, typically the radiosity algorithm. The light map is combined with the image texture at run time: the diffuse reflectance is read from the image texture, the incoming light is read from the light map, and they are multiplied.
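A minimal sketch of this run-time combination in an HLSL pixel shader; the sampler names and the separate light-map coordinates are illustrative assumptions.

sampler2D imageTex;                              // diffuse reflectance (image texture)
sampler2D lightMap;                              // pre-computed incoming diffuse light

float4 PS_LightMap(float2 uv : TEXCOORD0,        // image-texture coordinates
                   float2 uvLight : TEXCOORD1) : COLOR   // light-map coordinates
{
    float4 albedo = tex2D(imageTex, uv);
    float4 light  = tex2D(lightMap, uvLight);
    return albedo * light;                       // diffuse reflectance x incoming light
}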

Light Mapping (cont’d) Quake II, released in 1997, was among the first commercial games to use light maps.

Radiosity Normal Mapping Radiosity computation adopts the concept of a hemisphere, whose orientation is defined by the surface normal np. Consider combining light mapping and normal mapping. The normal n(u,v) fetched from the normal map is a perturbed instance of np. If lighting were computed on the fly, n(u,v) would be used. However, the light map that partly replaces the lighting computation was created using the unperturbed normal np. A simple solution to this discrepancy would be to use the perturbed normals stored in the normal map when the light map is created, but then the light map would have the same resolution as the normal map, which is unnecessarily large.

Radiosity Normal Mapping (cont’d) In the solution adopted by Half-Life 2, three vectors v0, v1, and v2, which form an orthonormal basis, are pre-defined in the tangent space. Each vi is transformed into the world space, a hemisphere is placed along it, and the radiosity preprocessor is run. Incoming light is thus computed per vi, giving three different colors for p. Repeating this for all sampled points of the object surface yields three light maps for the object, each of which is often called a directional light map.

Radiosity Normal Mapping (cont’d) At run time, the directional light maps are blended. The normal fetched from the normal map, (nx,ny,nz), is a tangent-space normal and can be redefined in terms of the basis {v0,v1,v2}. Let (n'x,n'y,n'z) denote the transformed normal; its components are the coordinates with respect to {v0,v1,v2} and are used to compute the weights for blending the three light maps. A sketch of the blend is given below.
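A hedged sketch of the blend in HLSL, assuming the commonly cited Half-Life 2 tangent-space basis and squared-dot-product weights; the texture and variable names are illustrative, and a single uv is used for all maps for brevity.

sampler2D normalMap;
sampler2D dirLightMap0;                          // three directional light maps
sampler2D dirLightMap1;
sampler2D dirLightMap2;

static const float3 basis[3] =                   // tangent-space basis (Half-Life 2 convention, assumed)
{
    float3( 0.8165,  0.0,     0.5774),           // ( sqrt(2/3),        0,  1/sqrt(3))
    float3(-0.4082,  0.7071,  0.5774),           // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    float3(-0.4082, -0.7071,  0.5774)            // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
};

float3 BlendedLight(float2 uv)
{
    float3 n = tex2D(normalMap, uv).xyz * 2.0 - 1.0;   // tangent-space normal in [-1,1]
    float3 w;                                          // blend weights
    w.x = saturate(dot(n, basis[0]));
    w.y = saturate(dot(n, basis[1]));
    w.z = saturate(dot(n, basis[2]));
    w *= w;                                            // squared weights
    w /= dot(w, float3(1.0, 1.0, 1.0));                // normalize so the weights sum to 1
    return w.x * tex2D(dirLightMap0, uv).rgb
         + w.y * tex2D(dirLightMap1, uv).rgb
         + w.z * tex2D(dirLightMap2, uv).rgb;
}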

Radiosity Normal Mapping (cont’d)

Radiosity Normal Mapping (cont’d) Compare the resolutions of the directional light maps and the normal map. In (a), the macrostructure is densely sampled to create the normal map. In (b), the macrostructure is sparsely sampled to create the directional light maps; when two fragments, f1 and f2, are processed at run time, the directional light maps still provide appropriate colors for both. In (c), a traditional light map with the same resolution as the directional light maps would provide an incorrect light color for f2.

Why Shadows? Shadows help us understand the spatial relationships among objects in a scene, especially between the occluders and receivers. (Occluders cast shadows onto receivers.)

Why Shadows? (cont’d) Shadows increase realism.

Terminology in Shadows A point light source generates only fully shadowed regions, called hard shadows. An area light source generates soft shadows, which consist of the umbra (fully shadowed regions) and the penumbra (partially shadowed regions). This chapter focuses on hard shadows, even though many real-time algorithms for soft-shadow generation are available.

Shadow Mapping Shadow mapping is a two-pass algorithm. Pass 1: render the scene from the position of the light source, and store the depths in the shadow map, which is a depth map with respect to the light source.

Shadow Mapping (cont’d) Pass 2: render the scene from the camera position; this is the actual rendering pass. For each pixel, compare its distance d to the light source with the depth value z stored in the shadow map. If d > z, the pixel is in shadow; otherwise, the pixel is lit.
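A minimal sketch of the pass-2 test in an HLSL pixel shader, assuming the fragment's light-clip-space position and its lit color are interpolated from the vertex shader; all names are illustrative.

sampler2D shadowMap;

float4 PS_ShadowTest(float4 posLight : TEXCOORD0,   // fragment position in the light's clip space
                     float4 litColor : TEXCOORD1) : COLOR
{
    float2 uv = posLight.xy / posLight.w * float2(0.5, -0.5) + 0.5;   // light-space texture coordinates
    float  d  = posLight.z / posLight.w;                              // fragment depth seen from the light
    float  z  = tex2D(shadowMap, uv).r;                               // depth stored at pass 1
    float  visibility = (d > z) ? 0.0 : 1.0;                          // d > z: in shadow, otherwise lit
    return litColor * visibility;
}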

Shadow Mapping Artifact – Surface Acne The algorithm works, but its brute-force implementation suffers from two major problems: surface acne (a mixture of shadowed and lit areas) and shadow-map resolution sensitivity.

Shadow Mapping Artifact – Surface Acne (cont’d) What’s the problem? Shadowed and lit pixels coexist on a surface area that should be entirely lit. Note that the scene points sampled at the second pass are usually different from those sampled at the first pass. Suppose nearest-point sampling is used. In the example, p1 should be lit but is judged to be in shadow because d1 > z1; for p1 to be lit, we would need d1 < z1.

Shadow Mapping Artifact – Surface Acne (cont’d) At the 2nd pass, add a small bias value to z1, producing z1′ such that d1 < z1′.

Shadow Mapping Artifact – Surface Acne (cont’d) The bias value is usually fixed after a few trials. With an appropriate bias, the surface acne problem is largely resolved.

Shadow Mapping Artifact – Resolution Sensitivity If the resolution of a shadow map is not high enough, multiple pixels may be mapped to a single texel of the shadow map. This is a magnification case, but bilinear interpolation would not help, as you will see shortly.

Shadow Mapping Artifact – Resolution Sensitivity (cont’d) A simple but expensive solution is to increase the shadow map resolution. Many other solutions have been proposed.

Shadow Mapping – Shader After every vertex is transformed into the world space, the view parameters are specified with respect to the light source. Consequently, the view-transformed vertices are defined in the so-called light space. The view frustum is then specified in the light space.
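A minimal sketch of the pass-1 vertex shader under this setup; the matrix names are illustrative assumptions.

float4x4 worldMat;                               // model space -> world space
float4x4 lightViewMat;                           // world space -> light space
float4x4 lightProjMat;                           // light space -> clip space (light's view frustum)

float4 VS_ShadowPass1(float4 pos : POSITION) : SV_Position
{
    float4 worldPos = mul(pos, worldMat);
    float4 lightPos = mul(worldPos, lightViewMat);   // "view" transform with respect to the light
    return mul(lightPos, lightProjMat);              // depth is then written to the shadow map
}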

Shadow Mapping – Shader (cont’d)

Shadow Mapping – Filtering Recall that, for filtering the shadow map, we assumed nearest-point sampling. An alternative would be bilinear interpolation. The problems of bilinear interpolation: it is not good to obtain completely different results depending on the filtering option, and the shadow quality is not improved at all; the shadow still reveals jagged edges.

Shadow Mapping – Filtering (cont’d) A solution to this problem is to first determine the visibility of the pixel with respect to each of the four texels, and then interpolate the visibilities. The interpolated value is taken as the “degree of being lit.” In general, the technique of taking multiple texels from the shadow map and blending the pixel's visibilities with respect to those texels is called percentage closer filtering (PCF).

Shadow Mapping – Filtering (cont’d) Direct3D 10 supports PCF through the function SampleCmpLevelZero(). In the example, the visibility is computed for each of nine texels, and the visibilities are averaged, as sketched below.
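A hedged sketch of 3x3 PCF in Direct3D 10-style HLSL. SampleCmpLevelZero() compares the fragment depth d against the fetched texel and returns the comparison result; the resource and variable names (shadowMap, cmpSampler, texelSize) are illustrative assumptions.

Texture2D shadowMap;
SamplerComparisonState cmpSampler;               // comparison function set to LESS_EQUAL by the application
float2 texelSize;                                // 1.0 / shadow-map resolution

float PCF3x3(float2 uv, float d)
{
    float visibility = 0.0;
    for (int i = -1; i <= 1; ++i)
        for (int j = -1; j <= 1; ++j)
            visibility += shadowMap.SampleCmpLevelZero(cmpSampler,
                              uv + float2(i, j) * texelSize, d);
    return visibility / 9.0;                     // average of the nine per-texel visibilities
}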

Ambient Occlusion We have assumed that ambient light arrives at a surface point from all directions. In reality, however, some of those directions may be occluded by the surrounding environment. The ambient occlusion algorithm computes how much of the ambient light is occluded, which we call the occlusion degree. Imagine casting rays from the surface point along the opposite directions of the incident light; some rays intersect the scene geometry. The occlusion degree is defined as the ratio of the intersected rays to all cast rays. However, ray casting is expensive.

Screen Space Ambient Occlusion The occlusion degree can be approximated as the ratio between the space occupied by the scene geometry and the entire hemisphere's space. Instead of computing the volumes, let's take a set of samples within the hemisphere and determine whether each sample is inside or outside the geometry.

Screen Space Ambient Occlusion (cont’d) Take the z-buffer of the scene (seen from the camera) as a discrete approximation of the scene geometry. It is a depth map that can be created in the same way as the first pass of shadow mapping. At the second pass, a quad covering the entire screen (more precisely, the viewport) is rendered such that each fragment of the quad corresponds to a pixel of the z-buffer. Then a comparison similar to the one performed for shadow mapping is made. In the example, s1 is outside because d1 < z1, whereas s2 is inside because d2 > z2; the occlusion degree is 3/8. Each pixel's occlusion degree can then be used to modulate sa in the ambient reflection term.
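A hedged sketch of the per-pixel occlusion test in HLSL, assuming a view-space position p reconstructed per fragment and a pre-generated kernel of sample offsets; all names are illustrative, and the depth map is assumed to store projected (NDC) depths.

sampler2D depthMap;                              // z-buffer from the first pass
float4x4  projMat;                               // camera projection matrix
static const int N = 8;
float3 sampleKernel[N];                          // view-space offsets around p

float OcclusionDegree(float3 p)                  // p: view-space position of the pixel
{
    int inside = 0;
    for (int i = 0; i < N; ++i)
    {
        float3 s    = p + sampleKernel[i];                       // sample around p
        float4 clip = mul(float4(s, 1.0), projMat);              // project the sample
        float2 uv   = clip.xy / clip.w * float2(0.5, -0.5) + 0.5;
        float  z    = tex2Dlod(depthMap, float4(uv, 0, 0)).r;    // visible-surface depth
        float  d    = clip.z / clip.w;                           // sample depth
        if (d > z) ++inside;                                     // behind the visible surface: inside
    }
    return inside / (float)N;                                    // occlusion degree
}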

Screen Space Ambient Occlusion (cont’d) The result is great!! However, we have errors!!

Screen Space Ambient Occlusion (cont’d) An example from Gears of War. Lighting is one of the elements most influenced by GPU evolution, and the state of the art in real-time lighting has been gradually moving away from the classic implementation of the Phong model.

Deferred Shading One of the most notable techniques that extensively utilize MRT is deferred shading. This technique does not shade a fragment that will fail the depth test, i.e., shading is deferred until all “visible surfaces” of the scene are determined. It is split into a geometry pass and lighting passes. At the geometry pass, various per-pixel attributes are computed and stored in the MRT by a fragment shader. The per-pixel attributes may include texture colors, depths, and normals “of the visible surfaces.”

Deferred Shading (cont’d) The MRT filled at the geometry pass is often called the G-buffer, which stands for geometric buffer. It typically contains a screen-space image texture, a depth map, and a screen-space normal map; shading is then performed using the G-buffer. A sketch of a geometry-pass fragment shader is given below.
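A hedged sketch of the geometry pass filling the G-buffer in Direct3D 10-style HLSL; this particular layout (albedo, packed normal, view-space position instead of a raw depth) and all names are illustrative assumptions.

struct PS_GBUF_OUT
{
    float4 albedo : SV_Target0;                  // screen-space image texture
    float4 normal : SV_Target1;                  // screen-space normal map
    float4 posV   : SV_Target2;                  // view-space position (a depth map could be used instead)
};

Texture2D diffuseTex;
SamplerState linearSampler;

PS_GBUF_OUT PS_Geometry(float4 svPos : SV_Position,
                        float3 normalV : NORMAL,
                        float3 posV : TEXCOORD0,
                        float2 uv : TEXCOORD1)
{
    PS_GBUF_OUT o;
    o.albedo = diffuseTex.Sample(linearSampler, uv);
    o.normal = float4(normalize(normalV) * 0.5 + 0.5, 0.0);   // pack [-1,1] into [0,1]
    o.posV   = float4(posV, 1.0);
    return o;                                    // written to three render targets at once
}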

Deferred Shading (cont’d) At the lighting passes, a large number of light sources can be applied to the G-buffer. The light sources are iterated: for each light source, a full-screen quad is rendered (as in the SSAO algorithm) such that each fragment of the quad corresponds to a pixel on the screen. The fragment shader takes the G-buffer as input, computes lighting, and determines the color of the current pixel, which is blended with the current content of the color buffer computed from the previous light sources. A sketch of such a lighting-pass shader is given below.
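A hedged sketch of a lighting-pass fragment shader, assuming the G-buffer layout from the geometry-pass sketch above (albedo, packed normal, view-space position) and a single point light; diffuse-only for brevity, with additive blending configured by the application. All names are illustrative.

Texture2D gAlbedo;                               // screen-space image texture
Texture2D gNormal;                               // screen-space normal map
Texture2D gPosition;                             // view-space positions
SamplerState pointSampler;

float3 lightPosV;                                // light position in view space
float3 lightColor;

float4 PS_LightPass(float4 svPos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo = gAlbedo.Sample(pointSampler, uv).rgb;
    float3 n      = normalize(gNormal.Sample(pointSampler, uv).xyz * 2.0 - 1.0);   // unpack [0,1] to [-1,1]
    float3 p      = gPosition.Sample(pointSampler, uv).xyz;

    float3 l = normalize(lightPosV - p);
    float3 color = albedo * lightColor * max(dot(n, l), 0.0);   // diffuse term for this light
    return float4(color, 1.0);                   // additively blended with the color buffer
}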