Texture Mapping (cont.) Jian Huang, CS 594, updated in Fall’08

OpenGL functions
During initialization, read in or create the texture image and place it into the OpenGL state:
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0,
               GL_RGB, GL_UNSIGNED_BYTE, imageData);
Before rendering your textured object, enable texture mapping and tell the system to use this particular texture:
  glBindTexture(GL_TEXTURE_2D, 13);
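A slightly fuller initialization sequence, as a minimal sketch: it generates a texture name (replacing the hard-coded 13 above) and sets filtering parameters, assuming an RGB image already loaded into imageData:

  GLuint tex;
  glGenTextures(1, &tex);                 /* allocate a texture name */
  glBindTexture(GL_TEXTURE_2D, tex);      /* make it the current 2D texture */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imageWidth, imageHeight, 0,
               GL_RGB, GL_UNSIGNED_BYTE, imageData);
  /* ... later, at render time: */
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, tex);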

OpenGL functions
During rendering, give the Cartesian coordinates and the texture coordinates for each vertex:
  glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex3f( 0.0,  0.0, 0.0);
    glTexCoord2f(1.0, 0.0); glVertex3f(10.0,  0.0, 0.0);
    glTexCoord2f(1.0, 1.0); glVertex3f(10.0, 10.0, 0.0);
    glTexCoord2f(0.0, 1.0); glVertex3f( 0.0, 10.0, 0.0);
  glEnd();

OpenGL functions
Automatic texture coordinate generation:
  void glTexGenf(GLenum coord, GLenum pname, GLfloat param)
–coord: GL_S, GL_T, GL_R, GL_Q
–pname: GL_TEXTURE_GEN_MODE, GL_OBJECT_PLANE, GL_EYE_PLANE (the plane pnames take a four-component plane equation via the vector form glTexGenfv)
–param (for GL_TEXTURE_GEN_MODE): GL_OBJECT_LINEAR, GL_EYE_LINEAR, GL_SPHERE_MAP
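For example, a minimal sketch enabling sphere-map generation for s and t (glTexGeni is the integer variant, idiomatic for setting an enum mode):

  glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
  glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
  glEnable(GL_TEXTURE_GEN_S);   /* generate s automatically per vertex */
  glEnable(GL_TEXTURE_GEN_T);   /* generate t automatically per vertex */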

What happens outside the 0–1 range?
(u,v) should be in the range 0–1. What happens when you request a coordinate outside that range?
–Tile (GL_REPEAT in OpenGL): the integer part of the value is dropped, and the image repeats itself across the surface
–Mirror: the image repeats itself but is mirrored (flipped) on every other repetition
–Clamp to edge: values outside the range are clamped to it
–Clamp to border: everything outside the range is rendered with a separately defined border color
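These correspond directly to OpenGL wrap-mode parameters; a sketch (GL_CLAMP_TO_EDGE needs GL 1.2, GL_MIRRORED_REPEAT GL 1.4):

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);           /* tile   */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);  /* mirror */
  /* or GL_CLAMP_TO_EDGE / GL_CLAMP_TO_BORDER; the border color is set with: */
  glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);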

Methods for modifying the surface
After a texture value is retrieved (and possibly further transformed), the resulting values are used to modify one or more surface attributes. These are called combine functions or texture blending operations:
–Replace: replace the surface color with the texture color
–Decal: replace the surface color with the texture color, blending the two with the texture's alpha value; the alpha component in the framebuffer is not modified
–Modulate: multiply the surface color by the texture color (gives a shaded and textured surface)
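In fixed-function OpenGL these are selected per texture unit with the texture environment mode, e.g.:

  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  /* or GL_REPLACE, GL_DECAL, GL_BLEND */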

Multi-Pass Rendering
The final color of a pixel in the framebuffer depends on:
–the shading/illumination applied to the fragments that cover it
–how those fragments are combined/blended together
Given fixed graphics hardware, how can we get more control (programmability)?
–Example: color plate V in the book
How do we make a range of special effects available, yet tailor them to the variety of hardware?

Multi-pass Rendering
Each pass renders every pixel once. Each pass computes one piece of the lighting equation, and the framebuffer is used to store intermediate results.

Quake III Engine
Ten passes:
–1–4: accumulate bump map
–5: diffuse lighting
–6: base texture (with specular component)
–7: specular lighting
–8: emissive lighting
–9: volumetric/atmospheric effects
–10: screen flashes

AB + CD?
Using the framebuffer as intermediate storage, can multi-pass rendering implement AB + CD?
–Suppose A, B, C, D are each the result of a certain pass
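One way to reason about it, sketched below in fixed-function OpenGL: a multiplicative pass uses glBlendFunc(GL_DST_COLOR, GL_ZERO), and an additive pass uses glBlendFunc(GL_ONE, GL_ONE). The helpers (drawScene, bindTexture, bindTextures, the texture names) are hypothetical. Note that the CD product itself needs either a second texture unit or an auxiliary buffer, since blending can only combine the incoming fragment with what is already in the framebuffer:

  glDisable(GL_BLEND);                 /* pass 1: framebuffer = A   */
  bindTexture(texA); drawScene();
  glEnable(GL_BLEND);
  glBlendFunc(GL_DST_COLOR, GL_ZERO);  /* pass 2: framebuffer = A*B */
  bindTexture(texB); drawScene();
  /* compute C*D in one pass using two texture units (GL_MODULATE), */
  /* then add it to what is already in the framebuffer:             */
  glBlendFunc(GL_ONE, GL_ONE);         /* pass 3: framebuffer += C*D */
  bindTextures(texC, texD); drawScene();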

Multi-texture Environments
[Figure: a chain of texture units (texture stages) #0, #1, #2. The pre-texturing color from glColor3f(r,g,b) enters unit #0; each unit does its own lookup & filter using its own coordinates (glMultiTexCoord2f(GL_TEXTURE0/1/2, ...)) and combines the result with the incoming color according to its environment mode (GL_MODULATE, GL_DECAL, GL_BLEND, ...); the output of the last unit is the post-texturing color.]
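A minimal two-unit setup, as a sketch (assumes OpenGL 1.3 or ARB_multitexture; baseTex and lightMapTex are hypothetical texture names):

  glActiveTexture(GL_TEXTURE0);                  /* stage #0: base color */
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, baseTex);
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
  glActiveTexture(GL_TEXTURE1);                  /* stage #1: second map */
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, lightMapTex);
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
  /* per vertex, each unit gets its own coordinates: */
  glMultiTexCoord2f(GL_TEXTURE0, s0, t0);
  glMultiTexCoord2f(GL_TEXTURE1, s1, t1);
  glVertex3f(x, y, z);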

Multi-Texturing
If multi-texturing is supported, there should be at least 2 texture units. Each texture unit has its own:
–texture image
–filtering parameters
–environment application
–texture matrix stack
–automatic texture-coordinate generation (may not seem obviously useful, but it enables very creative applications)

Multi-Texturing
Apply more than one texture to each fragment; each texture may use any of the texel formats:
–alpha
–luminance
–luminance and alpha
–intensity
–RGB
–RGBA
For instance, first put a color texture on a fragment, then put an intensity map on the fragment that modulates the intensity to simulate a lighting situation (what could this be?)

Putting Things Together
With multi-pass rendering plus multi-texturing, the lighting equation can be controlled very flexibly, to render effects that graphics hardware does not produce by default. This is now common practice in the industry. Next, let's look at a few established texturing methods.

Alpha Mapping
Given a square texture, what if we only want the interesting part of it, say, a tree? Utilize the alpha channel of the texture, so that only the interesting part is non-transparent. But what about the rendering order?

Alpha Mapping
A binary mask really redefines the geometry.
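One common way to do this in fixed-function OpenGL, as a sketch: the alpha test discards fully transparent fragments without blending, which sidesteps the back-to-front ordering that alpha blending would require (for a strictly binary mask):

  glAlphaFunc(GL_GREATER, 0.5f);   /* keep only fragments with alpha > 0.5 */
  glEnable(GL_ALPHA_TEST);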

Light Mapping
Given a complicated lighting configuration, how do we accelerate rendering and still get real-time frame rates? Assuming the environment is diffuse, compute its lighting once and then use the result as a texture map.

Example

Choosing a Mapping
Problem: in a preprocessing phase, points on polygons must be associated with points in maps.
One solution:
–Find groups of polygons that are "near" co-planar and do not overlap when projected onto a plane; the result is a mapping from polygons to planes
–Combine sections of the chosen planes into larger maps
–Store texture coordinates at polygon vertices
Lighting tends to change quite slowly (except at hard shadows), so the map resolution can be poor.

Generating the Map
Problem: what value should go in each pixel of the light map?
Solution:
–Map texture pixels back into world space (using the inverse of the texture mapping)
–Take the illumination of the polygon at that point and put it in the pixel
Advantages of this approach:
–Choosing "good" planes means that texture pixels map to roughly square pieces of polygon, giving good sampling
–Not too many maps are required, and not much memory is wasted

Example

Applying Light Maps
Use multi-texturing hardware:
–First stage: apply the color texture map
–Second stage: modulate with the light map
Pre-lighting textures:
–Apply the light map to the texture maps as a pre-process
–When is this less appealing?
Multi-pass rendering:
–Same effect as multi-texturing, but modulating in the frame buffer
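The pre-process option can be sketched as a per-texel multiply on the CPU (assuming, for simplicity, that the color map and light map are same-size 8-bit RGB arrays). The downside: every surface then needs its own pre-lit copy of the texture, and the lighting can no longer change at run time:

  /* bake the light map into the color map */
  int i;
  for (i = 0; i < width * height * 3; ++i)
      litTexel[i] = (unsigned char)((colorTexel[i] * lightTexel[i]) / 255);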

Gloss Mapping
To render a tile floor that is worn in places, or a sheet of metal with some rusty spots, let's control the specular component of the lighting equation with a map.

Bump Mapping
Many textures are the result of small perturbations in the surface geometry. Modeling these changes directly would result in an explosion in the number of geometric primitives. Bump mapping instead alters the lighting across a polygon to provide the illusion of surface detail.

Bump Mapping This modifies the surface normals. More on this later.

Bump Mapping

Consider the lighting for a modeled surface.

Bump Mapping
We can model this as deviations from some base surface. The question is then how these deviations change the lighting.

Bump Mapping
Assumption: small deviations in the normal direction from the surface:
  X' = X + B N
where B is defined as a 2D function parameterized over the surface: B = f(u,v)

Bump Mapping
Step 1: put everything into the same coordinate frame as B(u,v).
–x(u,v), y(u,v), z(u,v): given for parametric surfaces, and easy to derive for other analytical surfaces
–Write the surface point as O(u,v)

Bump Mapping
Define the tangent plane to the surface at a point (u,v) by using the two partial-derivative vectors O_u and O_v. The normal is then given by:
  N = O_u × O_v

Bump Mapping
The new surface positions are then given by:
  O'(u,v) = O(u,v) + B(u,v) N̂,  where N̂ = N / |N|
Differentiating leads to:
  O'_u = O_u + B_u N̂ + B (N̂)_u ≈ O_u + B_u N̂
  O'_v = O_v + B_v N̂ + B (N̂)_v ≈ O_v + B_v N̂
if B is small.

Bump Mapping
This leads to a new normal:
  N'(u,v) = O'_u × O'_v
          = O_u × O_v + B_u (N̂ × O_v) − B_v (N̂ × O_u) + B_u B_v (N̂ × N̂)
          = N + B_u (N̂ × O_v) − B_v (N̂ × O_u)     (since N̂ × N̂ = 0)
          = N + D

Bump Mapping
For efficiency, B_u and B_v can be stored in a 2-component texture map. The cross products are geometry terms only. N' will of course need to be normalized after the calculation and before lighting; this floating-point square root and division makes it difficult to embed into fixed-function hardware.
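A minimal sketch of the perturbation above in C (assumes N is already unit length; Bu and Bv would be fetched from the 2-component map):

  #include <math.h>

  static void cross(const float a[3], const float b[3], float out[3]) {
      out[0] = a[1]*b[2] - a[2]*b[1];
      out[1] = a[2]*b[0] - a[0]*b[2];
      out[2] = a[0]*b[1] - a[1]*b[0];
  }

  /* N' = N + Bu (N x Ov) - Bv (N x Ou), then normalize */
  void bump_normal(const float N[3], const float Ou[3], const float Ov[3],
                   float Bu, float Bv, float Nout[3]) {
      float NxOv[3], NxOu[3], len;
      int i;
      cross(N, Ov, NxOv);
      cross(N, Ou, NxOu);
      for (i = 0; i < 3; ++i)
          Nout[i] = N[i] + Bu * NxOv[i] - Bv * NxOu[i];
      len = sqrtf(Nout[0]*Nout[0] + Nout[1]*Nout[1] + Nout[2]*Nout[2]);
      for (i = 0; i < 3; ++i)        /* the costly normalization step */
          Nout[i] /= len;
  }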

Displacement Mapping Modifies the surface position in the direction of the surface normal.

Displacement Mapping
Bump mapping is limited in how much change it can fake. If the desired amount of change is too large for bump mapping, use displacement mapping instead: actually modify the surface geometry and re-calculate the normals. This is quite expensive.

Environment Mapping
Environment mapping produces reflections on shiny objects. Texture is transferred from the environment map onto the object in the direction of the reflected ray:
  R = 2(N·V)N − V
What is in the map?
[Figure: viewer, object, reflected ray, environment map]
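A sketch of the reflection formula in C (assumes N and V are unit vectors, with V pointing from the surface toward the viewer):

  /* R = 2 (N.V) N - V */
  void reflect_view(const float N[3], const float V[3], float R[3]) {
      float d = 2.0f * (N[0]*V[0] + N[1]*V[1] + N[2]*V[2]);
      int i;
      for (i = 0; i < 3; ++i)
          R[i] = d * N[i] - V[i];
  }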

Approximations Made
The map should contain a view of the world with the point of interest on the object as the eye:
–We can't store a separate map for each point, so one map is used with the eye at the center of the object
–This introduces distortions in the reflection, but the eye doesn't notice
–Distortions are minimized for a small object in a large room
The object will not reflect itself. The mapping can be computed at each pixel, or only at the vertices.

Environment Maps
The environment map may take one of several forms:
–Cubic mapping
–Spherical mapping (two variants)
–Parabolic mapping
The form describes the shape of the surface on which the map "resides", and determines how the map is generated and how it is indexed. What are some of the issues in choosing the map?

Example

Cubic Mapping
The map resides on the surfaces of a cube around the object; typically, the faces of the cube are aligned with the coordinate axes. To generate the map:
–For each face of the cube, render the world from the center of the object with the cube face as the image plane (the rendering can be arbitrarily complex, since it's done off-line)
–Or, take 6 photos of a real environment with a camera in the object's position; in practice, take many more photos from different places the object might be, and warp them to approximate the map for all intermediate points
Remember The Abyss and Terminator 2?

Cubic Map Example

Indexing Cubic Maps
Assume you have R, the cube's faces are aligned with the coordinate axes, and each face has texture coordinates in [0,1]×[0,1]:
–How do you decide which face to use? (one common answer is sketched below)
–How do you decide which texture coordinates to use?
What is the problem with cubic maps when texture coordinates are only computed at vertices?
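One common answer, as a sketch: pick the face from the reflection vector's largest-magnitude component, then divide the other two components by that magnitude. The sign and axis conventions below follow the usual OpenGL cube-map layout:

  #include <math.h>

  /* face: 0..5 = +X,-X,+Y,-Y,+Z,-Z; (s,t) in [0,1] */
  void cube_index(const float R[3], int *face, float *s, float *t) {
      float ax = fabsf(R[0]), ay = fabsf(R[1]), az = fabsf(R[2]);
      float ma, sc, tc;
      if (ax >= ay && ax >= az) {           /* X face is dominant */
          ma = ax; *face = (R[0] > 0) ? 0 : 1;
          sc = (R[0] > 0) ? -R[2] : R[2];  tc = -R[1];
      } else if (ay >= az) {                /* Y face is dominant */
          ma = ay; *face = (R[1] > 0) ? 2 : 3;
          sc = R[0];  tc = (R[1] > 0) ? R[2] : -R[2];
      } else {                              /* Z face is dominant */
          ma = az; *face = (R[2] > 0) ? 4 : 5;
          sc = (R[2] > 0) ? R[0] : -R[0];  tc = -R[1];
      }
      *s = 0.5f * (sc / ma + 1.0f);         /* remap [-1,1] to [0,1] */
      *t = 0.5f * (tc / ma + 1.0f);
  }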

Lat/Long Mapping
The original algorithm (Blinn and Newell, 1976) placed the map on a sphere centered on the object. The mapping functions treat s and t as longitude and latitude on the sphere. What is bad about this method?
–Sampling
–Map generation
–Complex texture coordinate computations

Sphere Mapping
Again the map lives on a sphere, but now the coordinate mapping is simplified. To generate the map:
–Take a map point (s,t), cast a ray onto a sphere in the −z direction, and record what is reflected
–This is equivalent to photographing a reflective sphere with an orthographic camera (long lens, big distance), which again makes the method suitable for film special effects

A Sphere Map

Indexing Sphere Maps
Given the reflection vector R, the standard indexing (the one OpenGL's GL_SPHERE_MAP generation uses) is:
  m = 2 sqrt(R_x² + R_y² + (R_z + 1)²)
  s = R_x / m + 1/2,  t = R_y / m + 1/2
This is implemented in hardware. Problems:
–Highly non-uniform sampling
–Highly non-linear mapping
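The same indexing as a small sketch in C:

  #include <math.h>

  /* OpenGL-style sphere-map coordinates from a reflection vector R */
  void sphere_index(const float R[3], float *s, float *t) {
      float m = 2.0f * sqrtf(R[0]*R[0] + R[1]*R[1] +
                             (R[2] + 1.0f) * (R[2] + 1.0f));
      *s = R[0] / m + 0.5f;
      *t = R[1] / m + 0.5f;
  }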

Example

Parabolic Mapping
Assume the map resides on a parabolic surface; two surfaces, facing each other. This improves on sphere maps:
–Texture coordinate generation is a near-linear process
–Sampling is more uniform
–The result is more view-independent
However, it requires multiple passes to implement, so it is not generally used.

Partially Reflective Objects
Use multi-texturing hardware:
–The first stage applies the color texture
–The second stage does environment mapping, using an alpha blend with the existing color
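A hedged sketch of the second stage in fixed-function OpenGL, assuming the environment texture is RGBA with its alpha channel encoding reflectivity (envTex is a hypothetical texture name): GL_DECAL interpolates the incoming color with the texture color by the texture's alpha:

  glActiveTexture(GL_TEXTURE1);
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, envTex);   /* sphere- or cube-mapped environment */
  glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
  /* result: C = C_prev * (1 - A_env) + C_env * A_env */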