1
Shadow Mapping
Chun-Fa Chang, National Taiwan Normal University
2
Advanced Texture Mapping
Using multiple textures: multi-pass texturing (a short sketch follows).
1st pass: render the scene as usual, then create textures from the output images.
2nd pass: render the scene again using the created textures.
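A minimal sketch of the two-pass idea in classic OpenGL; the names texId, texWidth, texHeight, and drawScene() are placeholders for this illustration, not names from the course code:

    // Pass 1: render the scene normally, then copy the framebuffer into a texture.
    glViewport(0, 0, texWidth, texHeight);
    drawScene();                                       // placeholder scene-drawing routine
    glBindTexture(GL_TEXTURE_2D, texId);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, texWidth, texHeight, 0);

    // Pass 2: render the scene again, this time sampling the texture created above.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    drawScene();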
3
Using Textures in GLSL Shaders
The sampler2D data type in GLSL.
Bound to the C/C++ program through glGetUniformLocation().
See the myTexture variable in Lab 7, in both the fragment shader and the C code in setShaders() (a sketch of the binding follows).
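A sketch of that binding; the program handle p, the texture object texId, and the shader text are modeled on the lab setup but are assumptions, not verbatim lab code:

    // C/C++ side (e.g., in setShaders()): tell the sampler which texture unit to read.
    glUseProgram(p);                                   // p: linked GLSL program handle
    GLint loc = glGetUniformLocation(p, "myTexture");
    glUniform1i(loc, 0);                               // myTexture samples texture unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texId);

    // Fragment shader side (GLSL), kept here as a C string for reference:
    const char* fragSrc =
        "uniform sampler2D myTexture;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(myTexture, gl_TexCoord[0].st);\n"
        "}\n";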
4
Shadow Map
Uses two textures: color and depth.
A relatively straightforward design using pixel (fragment) shaders on GPUs.
5
Image source: Cass Everitt et al., “Hardware Shadow Mapping,” NVIDIA SDK white paper. (Figure panels: Eye’s View, Light’s View, Depth/Shadow Map.)
6
Basic Steps of Shadow Maps
1. Render the scene from the light’s point of view.
2. Use the light’s depth buffer as a texture (the shadow map).
3. Projectively texture the shadow map onto the scene (use “TexGen” or a shader).
4. Use the “texture color” (the comparison result) in fragment shading.
A sketch of the whole loop follows.
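The four steps map onto a render loop roughly like this; every function name here is a placeholder for illustration, not a routine from the example code:

    // Pass 1: depth as seen from the light.
    setCameraToLight();                  // light's view and projection (step 1)
    renderScene();
    copyDepthToShadowMap();              // depth buffer -> shadow map texture (step 2)

    // Pass 2: normal camera, with the shadow map projected onto the scene.
    setCameraToEye();
    bindShadowMapTexture();
    loadLightMatrixIntoTextureMatrix();  // step 3: "TexGen" or shader path
    renderSceneWithShadowShader();       // step 4: per-fragment depth comparison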
7
What’s in the Example Code?
A C++ class for storing matrix state:

    class OpenGL_Matrix_State {
    public:
        void Save_Matrix_State();
        void Restore_Matrix_State();
        void Set_Texture_Matrix();
    };

A proxy rectangle for debugging.
8
(1) Rendering from the Light’s View
Set the camera to the light position.
Set the viewport to the same size as the texture.
To avoid a floating-point precision problem (a surface casting a shadow onto itself), the depth must be shifted:
    glPolygonOffset(..., ...);
    glEnable(GL_POLYGON_OFFSET_FILL);
Shading can be turned off; we only care about the depth. A sketch of this setup follows.
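A sketch of that setup; the matrices are assumed to have been saved earlier, and the polygon-offset values 1.1 and 4.0 are common illustrative choices rather than values from the slides:

    // Render from the light's position into a shadow-map-sized viewport.
    glViewport(0, 0, shadowMapSize, shadowMapSize);
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(lightProjectionMatrix);   // assumed: light's projection, saved earlier
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(lightViewMatrix);         // assumed: light's view, saved earlier

    // Push depth back slightly so surfaces do not shadow themselves ("shadow acne").
    glPolygonOffset(1.1f, 4.0f);            // illustrative values only
    glEnable(GL_POLYGON_OFFSET_FILL);

    // Shading is irrelevant here; only the depth buffer matters.
    glShadeModel(GL_FLAT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawScene();                            // placeholder scene-drawing routine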
9
(2) Creation of the Shadow Map (Texture)
Draw the objects (from the light’s view).
To create a depth texture, use:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
                 shadowMapSize, shadowMapSize, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
Then use glCopyTexSubImage2D() to copy the frame buffer’s depth into the depth texture.
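Putting the two calls together; shadowMapTexture is an assumed texture object name, and the filter settings are a common choice rather than something stated on the slide:

    // One-time allocation of the depth texture (no initial data).
    glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
                 shadowMapSize, shadowMapSize, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // After the light-view pass: copy the depth buffer into the texture.
    glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);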
10
(3) Generation of Texture Coordinates
When we render the scene again from the normal camera view:
We store the light’s view in the texture matrix.
The texture matrix is then passed to the GLSL shaders.
gl_TextureMatrix[0] * vertex gives the homogeneous coordinates in light space.
Divide by w to obtain the texture coordinates.
Watch out! The result must be shifted from [-1, 1] to [0, 1]. A sketch of building this matrix follows.
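A sketch of building that texture matrix on the C++ side. The 0.5-scale, 0.5-translate bias matrix handles the [-1, 1] to [0, 1] shift; the sketch assumes the scene's vertices are already in world coordinates (so no extra model matrix is needed) and that the light matrices were saved earlier:

    // Texture matrix = bias * lightProjection * lightView.
    static const GLfloat bias[16] = {      // column-major: scale by 0.5, then translate by 0.5
        0.5f, 0.0f, 0.0f, 0.0f,
        0.0f, 0.5f, 0.0f, 0.0f,
        0.0f, 0.0f, 0.5f, 0.0f,
        0.5f, 0.5f, 0.5f, 1.0f
    };
    glActiveTexture(GL_TEXTURE0);          // this unit's stack is gl_TextureMatrix[0]
    glMatrixMode(GL_TEXTURE);
    glLoadMatrixf(bias);
    glMultMatrixf(lightProjectionMatrix);  // assumed: saved light projection
    glMultMatrixf(lightViewMatrix);        // assumed: saved light view
    glMatrixMode(GL_MODELVIEW);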
11
Normalized Coordinates
Independent of the screen resolution and window size.
Clip coordinates: after the Model-View and Projection transformations.
Normalized Device Coordinates (NDC): after division by w. (A minimal illustration follows.)
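A minimal illustration of the w-divide; the Vec4 helper type is made up for this example:

    // From clip space (after Model-View and Projection) to NDC: divide by w.
    struct Vec4 { float x, y, z, w; };

    Vec4 toNDC(const Vec4& clip) {
        // Each component of a visible point ends up in [-1, 1] after the divide.
        Vec4 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
        return ndc;
    }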
12
(4) Depth Comparison in the Fragment Shader
Compare two depths:
the depth read from the shadow map, and
the depth obtained by transforming the fragment into light space.
In the shadow if ____?_(your exercise)____
Set a darker color for shadowed surfaces.
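One common way to write that comparison, shown as a GLSL fragment shader kept in a C string. This is a standard formulation rather than necessarily the answer expected in the exercise; the variable names, the 0.005 bias, and the 0.5 darkening factor are assumptions for the sketch, and lightCoord is assumed to carry gl_TextureMatrix[0] * vertex from the vertex shader:

    const char* shadowFragSrc =
        "uniform sampler2D shadowMap;           // depth texture from the light pass\n"
        "varying vec4 lightCoord;               // gl_TextureMatrix[0] * vertex\n"
        "void main() {\n"
        "    vec3 proj = lightCoord.xyz / lightCoord.w;          // divide by w\n"
        "    float mapDepth  = texture2D(shadowMap, proj.xy).r;  // depth stored in the map\n"
        "    float fragDepth = proj.z;                           // this fragment, in light space\n"
        "    float lit = (fragDepth - 0.005 > mapDepth) ? 0.5 : 1.0;  // farther than the map -> shadowed\n"
        "    gl_FragColor = vec4(lit * gl_Color.rgb, 1.0);\n"
        "}\n";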
13
More GPU Programming and GPGPU
Chun-Fa Chang, National Taiwan Normal University
14
Calculator vs. Computer
What is the difference between a calculator and a computer? Doesn’t a compute-r just “compute”?
The Casio fx-3600P calculator can be programmed (38 steps allowed).
15
Turing Machine
Can be adapted to simulate the logic of any computer that could possibly be constructed.
The von Neumann architecture implements a universal Turing machine.
Look them up on Wikipedia!
17
Simplified View
The data flow: 3D polygons (+ colors, lights, normals, texture coordinates, etc.) → Transform (& Lighting) → 2D polygons → Rasterization → 2D pixels (i.e., the output images).
18
Global Effects
Translucent surfaces, shadows, multiple reflections.
19
Local vs. Global
20
How Does the GPU Draw This?
21
Quiz
Q1: A straightforward GPU pipeline gives us local illumination only. Why?
Q2: What typical effects are missing?
Hint: How is an object drawn? Does the pipeline consider its relationship with other objects?
Shadows, reflections, and refractions…
22
Wait, but I’ve seen shadows and reflections in games before…
(Comparison images: with shadows vs. without shadows.)
23
Faked Global Illumination
Shadows, reflections, BRDFs, etc.
In theory, real global illumination is not possible in the current graphics pipeline:
it is conceptually a loop over individual polygons, and
there is no interaction between polygons.
Can this be changed by multi-pass rendering?
24
Case Study: Shadow Map
Uses two textures: color and depth.
A relatively straightforward design using pixel (fragment) shaders on GPUs.
25
Adding “Memory” to the GPU Computation
Modern GPUs allow:
the use of multiple textures, and
rendering algorithms that use multiple passes.
(Pipeline diagram: Transform (& Lighting) → Rasterization, plus Textures as the added memory.)