Chapter IX Bump Mapping

Bumpy Surfaces
- Image texturing only: fast, but not realistic.
- Highly tessellated mesh: realistic, but slow.
- A way out of this dilemma:
  - Use a smooth coarse mesh, called the macrostructure.
  - Store the high-frequency geometric details in a texture.
  - Use the texture at run time.
- This technique is generally called bump mapping.

Height Field
- A popular method to represent a high-frequency surface is a height field: a function h(x,y) that returns a height (z value) for given (x,y) coordinates.
- The height field is sampled at a 2D array of regularly spaced (x,y) coordinates, and the height values are stored in a texture called the height map.
- The height map can be drawn in gray scale: if the heights are in the range [0,255], the lowest height 0 is colored black, and the highest 255 is colored white.
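
As a concrete illustration, here is a minimal HLSL sketch of reading h(u,v) back from such a height map. The resource names, the sampler, and the maxHeight scale are our assumptions, not part of the technique itself.

    // Hypothetical resources: a gray-scale height map and a bilinear sampler.
    Texture2D heightMap : register(t0);
    SamplerState linearSampler : register(s0);

    static const float maxHeight = 1.0f;   // application-chosen height scale

    // h(u,v): the texel value in [0,1] (black = lowest, white = highest)
    // is rescaled to the application's height range.
    float SampleHeight(float2 uv)
    {
        return heightMap.SampleLevel(linearSampler, uv, 0).r * maxHeight;
    }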

Height Field for Bump Mapping
- Bump mapping was originally proposed by J. Blinn in 1978.
- Conceptually, each surface point of the macrostructure is displaced in the direction of its surface normal.
- The displacement amount is fetched from the height map using the point's texture coordinates (u,v).
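
In code form, the displacement rule reads as follows. This is a sketch that reuses the hypothetical SampleHeight() above, with p a surface point and n its unit surface normal.

    // Conceptual displacement: move p along its unit normal n by the
    // height fetched at p's texture coordinates (u,v).
    float3 Displace(float3 p, float3 n, float2 uv)
    {
        return p + SampleHeight(uv) * n;
    }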

Height Field for Bump Mapping (cont’d)
Three algorithms that handle the height map differently:
- Normal mapping
  - The oldest but still the most popular.
  - It pre-computes a special texture, the normal map, from the height map.
  - At run time, the normal map is used to perturb the surface normals of the macrostructure so as to ‘mimic’ the geometric bumpiness.
  - Implemented in the fragment shader.
- Parallax mapping
  - It takes the height map as input and runs a simplified ray-tracing algorithm (see the sketch after this list).
  - Unlike normal mapping, it makes the bumps appear to occlude each other.
- Displacement mapping
  - It tessellates the macrostructure and then displaces its vertices using the height map.
  - It is emerging as an attractive algorithm thanks to the tessellation support in Shader Model 5.
  - Implemented in the domain shader with the aid of the tessellator.
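
For flavor, here is a minimal sketch of the simplest parallax-mapping variant: a single offset step rather than a full ray march, so the book's exact algorithm may differ. It reuses the hypothetical heightMap and linearSampler declared earlier; viewTS is the tangent-space view vector pointing toward the camera, and heightScale and bias are application-chosen parameters.

    // Shift the texture coordinates along the view direction so that high
    // points appear to shift toward the viewer and occlude low ones.
    float2 ParallaxOffset(float2 uv, float3 viewTS, float heightScale, float bias)
    {
        float h = heightMap.Sample(linearSampler, uv).r * heightScale + bias;
        return uv + h * viewTS.xy / viewTS.z;
    }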

Normal Mapping
- Recall the interaction among light sources, surfaces, and the camera/eye: why is one area lit and another dark?
- Surface normals play key roles in lighting.

Normal Mapping (cont’d)
- Use a macrostructure.
- Store the surface normals of the high-frequency surface in a texture called the normal map.
- At run time, use the normal map for lighting.
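
A normal has components in [-1,1] while a texel stores values in [0,1], so normal maps conventionally store (n+1)/2 and the shader decodes it back. A minimal sketch, with the resource names as assumptions:

    Texture2D normalMap : register(t1);

    // Decode a stored normal: [0,1] per channel back to [-1,1].
    float3 FetchNormal(float2 uv)
    {
        float3 n = normalMap.Sample(linearSampler, uv).rgb;
        return normalize(n * 2.0f - 1.0f);
    }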

Normal Map
- Simple image-editing operations can create a gray-scale image (a height map) from an image texture (from (a) to (b)).
- The next step, from (b) to (c), is done automatically.

Normal Map (cont’d)
- Creation of a normal map from a height map (see the sketch below).
- Visualization of a normal map.
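
The automatic step can be sketched as a Shader Model 5 compute shader that estimates the height field's partial derivatives by central differences. The resource names, the bumpiness scale, and the omitted border handling are our assumptions.

    Texture2D<float>    heightIn  : register(t0);
    RWTexture2D<float4> normalOut : register(u0);

    static const float scale = 8.0f;   // exaggerates the bumpiness

    [numthreads(8, 8, 1)]
    void BuildNormalMap(uint3 id : SV_DispatchThreadID)
    {
        // Central differences along u and v (border clamping omitted).
        float du = heightIn[id.xy + uint2(1, 0)] - heightIn[id.xy - uint2(1, 0)];
        float dv = heightIn[id.xy + uint2(0, 1)] - heightIn[id.xy - uint2(0, 1)];

        // The surface z = h(x,y) has normal (-dh/dx, -dh/dy, 1), up to scale.
        float3 n = normalize(float3(-du * scale, -dv * scale, 1.0f));

        // Range-compress [-1,1] into [0,1] for storage in the texture.
        normalOut[id.xy] = float4(n * 0.5f + 0.5f, 1.0f);
    }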

Normal Mapping
- The height field (height map) could, in principle, be handled by the fragment shader.
  - The macrostructure is rasterized. When the fragment shader processes a fragment at p, the point actually visible from the viewpoint is (u',v',h(u',v')).
  - The visible point could be computed by ray tracing, and then both the normal map and the image texture would be filtered using (u',v').
  - However, ray tracing is (or at least was) not easy to implement in a fragment shader.
- Instead, assume that, compared to the extent of the macrostructure, the height values of the height field are negligibly small.
  - Then the visible point is approximated by p itself, and p's texture coordinates (u,v) can be used to access the textures.
  - With this approximation, we no longer need the height map at run time; the normal map alone suffices.

Normal Mapping (cont’d)
- The normal at (u,v) is obtained by filtering the normal map through bilinear interpolation.
- Recall the diffuse reflection term max(n·l, 0) s_d⊗m_d.
  - The normal n is fetched from the normal map.
  - m_d is fetched from the image texture.


Tangent-space Normal Mapping
- So far, it has been implicitly assumed that the normal map stores perturbed instances of u_z, the world-space z-axis. This is problematic.
- Recall that texturing is described as wrapping a texture onto an object surface. When the normal map is wrapped, n(u_p,v_p) replaces n_p, and n(u_q,v_q) replaces n_q.
- So the normals of the normal map should be considered perturbed instances of the surface normals, not of u_z.

Tangent-space Normal Mapping (cont’d)
- For a surface point, consider a tangent space whose z-axis corresponds to the surface normal.
- Assuming a tangent space is defined for each surface point to be normal-mapped, the normal fetched from the normal map is taken as defined in the tangent space of that point, not in the world space. In this respect, it is called a tangent-space normal.

Tangent-space Normal Mapping (cont’d)
The basis of the tangent space {T,B,N}:
- Vertex normal N: defined per vertex at the modeling stage.
- Tangent T: needs to be computed.
- Binormal B: needs to be computed.
- Utility functions such as D3DXComputeTangentFrameEx return the TBN vectors at each vertex of a triangle mesh.
- With the tangent space in place, we can take the normal from the normal map as is.

Tangent-space Normal Mapping (cont’d)
- Consider the diffuse term of the Phong lighting model.
- A light source is defined in the world space, and so is l. In contrast, n fetched from the normal map is defined in the tangent space. To resolve this inconsistency, l has to be redefined in the tangent space.
- Typically, the per-vertex TBN basis is pre-computed, stored in the vertex buffer, and passed to the vertex shader. The vertex shader then constructs a matrix that rotates the world-space light vector into the per-vertex tangent space (see the sketch below).
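
A sketch of such a vertex shader in HLSL. The constant buffer layout, semantics, and variable names are assumptions, and the TBN vectors are taken to be in world space (or in an object space that coincides with it).

    cbuffer PerFrame : register(b0)
    {
        float4x4 worldViewProj;
        float3   lightVecWS;    // world-space light vector l
    };

    struct VSInput
    {
        float3 pos : POSITION;
        float3 T   : TANGENT;
        float3 B   : BINORMAL;
        float3 N   : NORMAL;
        float2 uv  : TEXCOORD0;
    };

    struct VSOutput
    {
        float4 posH    : SV_Position;
        float3 lightTS : TEXCOORD0;   // light vector in tangent space
        float2 uv      : TEXCOORD1;
    };

    VSOutput VS(VSInput v)
    {
        VSOutput o;
        o.posH = mul(float4(v.pos, 1.0f), worldViewProj);

        // With orthonormal T, B, N as rows, this matrix rotates a
        // world-space vector into the tangent space: (T.l, B.l, N.l).
        float3x3 toTangent = float3x3(v.T, v.B, v.N);
        o.lightTS = mul(toTangent, lightVecWS);

        o.uv = v.uv;
        return o;
    }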


Tangent-space Normal Mapping (cont’d)
- The tangent-space light vector l and the texture coordinates (u,v) for referencing the normal map are passed to the rasterizer and interpolated.
- The fragment shader computes lighting using the interpolated light vector l and the normal n retrieved from the normal map (see the sketch below).
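
A matching fragment (pixel) shader sketch, continuing the assumed VSOutput above; s_d is the light's diffuse color, and the texture/sampler names are ours.

    Texture2D imageTex  : register(t0);
    Texture2D normalMap : register(t1);
    SamplerState samp   : register(s0);

    static const float3 s_d = float3(1.0f, 1.0f, 1.0f);   // diffuse light color

    float4 PS(VSOutput i) : SV_Target
    {
        float3 l = normalize(i.lightTS);   // re-normalize after interpolation
        float3 n = normalize(normalMap.Sample(samp, i.uv).rgb * 2.0f - 1.0f);
        float3 m_d = imageTex.Sample(samp, i.uv).rgb;   // diffuse reflectance

        // Diffuse term max(n.l, 0) s_d (x) m_d, evaluated in tangent space.
        return float4(max(dot(n, l), 0.0f) * s_d * m_d, 1.0f);
    }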


Computing the Tangent Space
Why is it crucial to correctly compute T and B?

Computing the Tangent Space (cont’d)
- The x-, y-, and z-axes of the normal-map space correspond to the T-, B-, and N-axes of the tangent space, respectively.
- The texture axis u is identical to the T-axis, and similarly the v-axis is identical to the B-axis. Therefore, by analyzing the texture coordinates (u_i,v_i) assigned to the vertices, we can determine the directions of the T- and B-axes.
- T_i and B_i are computed for every triangle i sharing p_0. All T_i's and B_i's are summed to define T' and B', respectively.
- Finally, the Gram-Schmidt algorithm is invoked to convert {T',B',N} into an orthonormal basis (see the sketch below).
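
A sketch of the per-triangle computation and the final orthonormalization, written in HLSL syntax for consistency (utilities such as D3DXComputeTangentFrameEx perform this work for you); the function names are ours.

    // Solve e1 = d1.x*T + d1.y*B and e2 = d2.x*T + d2.y*B for T and B,
    // where e1, e2 are the triangle's edge vectors and d1, d2 the
    // corresponding uv deltas.
    void ComputeTB(float3 p0, float3 p1, float3 p2,
                   float2 uv0, float2 uv1, float2 uv2,
                   out float3 T, out float3 B)
    {
        float3 e1 = p1 - p0, e2 = p2 - p0;
        float2 d1 = uv1 - uv0, d2 = uv2 - uv0;

        float inv = 1.0f / (d1.x * d2.y - d2.x * d1.y);
        T = (d2.y * e1 - d1.y * e2) * inv;
        B = (d1.x * e2 - d2.x * e1) * inv;
    }

    // Gram-Schmidt: turn the accumulated {T', B', N} into an orthonormal
    // basis, keeping the vertex normal N fixed.
    void Orthonormalize(float3 N, inout float3 T, inout float3 B)
    {
        T = normalize(T - dot(T, N) * N);
        B = normalize(B - dot(B, N) * N - dot(B, T) * T);
    }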