Presentation transcript: "Last Time: Midterm, Shading, Light Sources, Shading Interpolation"

1 Last Time
- Midterm
- Shading
  - Point light sources
  - Directional light sources
  - Spot light sources
- Shading interpolation

2 This Week
- Shading interpolation
  - Barycentric coordinates for triangles
- Texture mapping
- Texture anti-aliasing
  - Texture boundaries
- Modeling introduction
- Homework 5, due 11/17

3 Flat Shading
- Compute shading at a representative point and apply it to the whole polygon
  - OpenGL uses one of the vertices
- Advantages: fast, one shading computation per polygon
- Disadvantages: inaccurate. What are the artifacts?

4 Gouraud Shading
- Shade each vertex using its own location and normal
- Linearly interpolate the color across the face
- Advantages:
  - Fast: incremental calculations when rasterizing
  - Much smoother: uses the same normal every time a vertex is used for a face
- Disadvantages: What are the artifacts? Is it accurate?

5 Phong Interpolation
- Interpolate normals across faces
- Shade each pixel individually
- Advantages: high quality, narrow specularities
- Disadvantages:
  - Expensive
  - Still an approximation for most surfaces
- Not to be confused with Phong's specularity model

6 (image-only slide)

7 Shading and OpenGL
- OpenGL defines two shading models, which control how colors are assigned to pixels:
  - glShadeModel(GL_SMOOTH) interpolates between the colors at the vertices (the default: Gouraud shading)
  - glShadeModel(GL_FLAT) uses a constant color across the polygon
- Phong shading requires significantly more programming effort; we may have a short discussion of it later
  - It also requires fragment shaders on programmable graphics hardware
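
A minimal sketch (not from the slides; assumes a current legacy fixed-function OpenGL context) of selecting between the two models:

    #include <GL/gl.h>

    /* Choose between the two built-in shading models. Lighting is
       assumed to be configured elsewhere. */
    void choose_shading_model(int smooth)
    {
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        if (smooth)
            glShadeModel(GL_SMOOTH);  /* Gouraud: interpolate vertex colors */
        else
            glShadeModel(GL_FLAT);    /* one color per polygon */
    }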

8 The Current Generation
- Current hardware allows you to break from the standard illumination model
- Programmable vertex shaders and fragment shaders let you write a small program that determines how the color of a vertex or pixel is computed
  - Your program has access to the surface normal and position, plus anything else you care to give it (such as the light)
  - You can add, subtract, take dot products, and so on
- Fragment shaders are most useful for lighting because they operate on every pixel

9 Original and Modified Pipeline
- Replace transform and lighting with a vertex shader
  - The vertex shader must now do transform and lighting, but it can also do more
- Replace the texture stages with a fragment (pixel) shader
  - Previously, texture stages were the only per-pixel operations
  - The fragment shader must now do texturing

10 The Full Story
- We have only touched on the complexities of illuminating surfaces
- The common model is hopelessly inadequate for accurate lighting (but it's fast and simple)
- Consider two sub-problems of illumination:
  - Where does the light go? Light transport
  - What happens at surfaces? Reflectance models
- Other algorithms address the transport problem, the reflectance problem, or both
- Much later in class, or a separate course (CS 779)

11 Mapping Techniques
- Consider the problem of rendering a soup can
  - The geometry is very simple: a cylinder
  - But the color changes rapidly, with sharp edges
- With the local shading model so far, the only place to specify color is at the vertices
  - To do a soup can this way, we would need thousands of polygons for a simple shape
  - The same goes for an orange: a simple shape, but complex normal vectors
- Solution: mapping techniques use simple geometry modified by a detail map of some type

12 Texture Mapping
- The soup tin is easily described by pasting a label on the plain cylinder
- Texture mapping associates the color of a point with the color in an image: the texture
  - Soup tin: each point on the cylinder gets the label's color
- The question to address: which point of the texture do we use for a given point on the surface?
  - Establish a mapping from surface points to image points
  - Different mappings are common for different shapes
  - For now, we will just look at triangles (polygons)

13 Example Mappings (image-only slide)

14 Basic Mapping
- The texture lives in a 2D space
  - Parameterize points in the texture with two coordinates: (s,t)
  - These are just what we would call (x,y) for an image, but we want to avoid confusion with the world (x,y,z)
- Define the mapping from (x,y,z) in world space to (s,t) in texture space
- To find the color in the texture, take an (x,y,z) point on the surface, map it into texture space, and use the result to look up the color of the texture
  - Samples in a texture are called texels, to distinguish them from pixels in the final image
- With polygons:
  - Specify (s,t) coordinates at the vertices
  - Interpolate (s,t) for other points based on the given vertices

15 Texture Interpolation
- Specify where the vertices in world space map to in texture space
  - A texture coordinate is the location in texture space that corresponds to a vertex
- Linearly interpolate the mapping for other points in world space
  - Straight lines in world space go to straight lines in texture space
(Figure: a texture map with s and t axes giving texture coordinates, and a triangle in world space whose vertices carry both (x,y) vertex coordinates and (s,t) texture coordinates)

16 Interpolating Coordinates
(Figure: a triangle with vertices labeled (x1, y1), (s1, t1); (x2, y2), (s2, t2); (x3, y3), (s3, t3))

17 Barycentric Coordinates
- An alternative way of describing points in triangles
- Barycentric coordinates can be used to interpolate texture coordinates
  - This gives the same result as the previous slide
- Method in the textbook (Shirley)
(Figure: a point x inside the triangle with vertices x1, x2, x3)
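
To make the interpolation concrete, here is a minimal sketch (illustrative, not the textbook's code; all names are hypothetical) that computes barycentric weights for a point inside a 2D triangle and reuses the same weights to interpolate the vertex (s,t) coordinates:

    #include <stdio.h>

    typedef struct { float x, y; } Vec2;

    /* Barycentric weights (a,b,c) of point p in triangle (p1,p2,p3):
       p = a*p1 + b*p2 + c*p3 with a + b + c = 1. */
    static void barycentric(Vec2 p, Vec2 p1, Vec2 p2, Vec2 p3,
                            float *a, float *b, float *c)
    {
        float det = (p2.y - p3.y) * (p1.x - p3.x)
                  + (p3.x - p2.x) * (p1.y - p3.y);
        *a = ((p2.y - p3.y) * (p.x - p3.x) + (p3.x - p2.x) * (p.y - p3.y)) / det;
        *b = ((p3.y - p1.y) * (p.x - p3.x) + (p1.x - p3.x) * (p.y - p3.y)) / det;
        *c = 1.0f - *a - *b;
    }

    int main(void)
    {
        Vec2 p1 = {0, 0}, p2 = {4, 0}, p3 = {0, 4};  /* triangle in screen space */
        Vec2 t1 = {0, 0}, t2 = {1, 0}, t3 = {0, 1};  /* (s,t) at each vertex */
        Vec2 p  = {1, 1};                            /* pixel to texture */
        float a, b, c;
        barycentric(p, p1, p2, p3, &a, &b, &c);
        float s = a * t1.x + b * t2.x + c * t3.x;    /* the same weights */
        float t = a * t1.y + b * t2.y + c * t3.y;    /* interpolate (s,t) */
        printf("(s,t) = (%g, %g)\n", s, t);          /* prints (0.25, 0.25) */
        return 0;
    }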

18 Steps in Texture Mapping
- Polygons (triangles) are specified with texture coordinates at the vertices
  - A modeling step, though there are ways to automate it for common shapes
- When rasterizing, interpolate the texture coordinates to get the texture coordinate at the current pixel (previous slides)
- Look up the texture map using those coordinates
  - Simplest approach: round the texture coordinates to integers and index the image
- Take the color from the map and put it in the pixel
  - There are many ways to put it into a pixel (more later)

19 Pipelines and Texture Mapping
- Texture mapping is done in canonical screen space as the polygon is rasterized
- When describing a scene, you assume that texture interpolation will be done in world space
- Something goes wrong...
(Figure: a textured square rendered two ways: interpolation then projection, vs. projection then interpolation)

20 Perspective Correct Mapping
- Which property of perspective projection means that the "wrong thing" happens if we apply the simple interpolation from the previous slides?
- Perspective correct texture mapping does the right thing, but at a cost:
  - Interpolate the homogeneous coordinate w as well, and divide it out just before indexing the texture
- Is it a problem with orthographic viewing?
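
A minimal sketch (illustrative, with hypothetical names) of the idea along a single edge: interpolate s/w, t/w, and 1/w linearly in screen space, then divide at each sample:

    typedef struct { float s, t, w; } TexVertex;

    /* Perspective-correct texture coordinate at fraction f along the
       edge from v0 to v1 (f in [0,1]). */
    static void perspective_lerp(TexVertex v0, TexVertex v1, float f,
                                 float *s, float *t)
    {
        float s_over_w   = (1 - f) * (v0.s / v0.w) + f * (v1.s / v1.w);
        float t_over_w   = (1 - f) * (v0.t / v0.w) + f * (v1.t / v1.w);
        float one_over_w = (1 - f) * (1.0f / v0.w) + f * (1.0f / v1.w);
        *s = s_over_w / one_over_w;  /* dividing by the interpolated 1/w  */
        *t = t_over_w / one_over_w;  /* undoes the perspective distortion */
    }

Under orthographic viewing, w is constant across the polygon, so this reduces to ordinary linear interpolation.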

21 Basic OpenGL Texturing (cont)
- Enable texturing: glEnable(GL_TEXTURE_2D)
- State how the texture will be used: glTexEnvf(...)
- Texturing is done after lighting
- You're ready to go...
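
Putting the calls together, a minimal sketch (legacy OpenGL; the current GL context and the 64x64 RGB image in `pixels` are assumptions of this example):

    #include <GL/gl.h>

    void setup_texture(const unsigned char *pixels)  /* 64*64*3 bytes */
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       /* tightly packed rows */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glEnable(GL_TEXTURE_2D);
    }

    void draw_textured_quad(void)
    {
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);   /* (s,t) before each */
        glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);   /* vertex it applies to */
        glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
        glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
        glEnd();
    }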

22 Nasty Details
- A large range of functions control the layout of texture data
  - You must state how the data in your image is arranged
    - E.g., glPixelStorei(GL_UNPACK_ALIGNMENT, 1) tells OpenGL not to skip bytes at the end of a row
  - You must state how you want the texture to be put in memory: how many bits per "pixel", which channels, ...
- Texture dimensions must traditionally be powers of 2 (width and height need not be equal)
  - Common sizes are 32x32, 64x64, and 256x256
  - Smaller textures use less memory, and there is a finite amount of texture memory on graphics cards
- Some extensions to OpenGL allow arbitrary texture sizes

23 Controlling Different Parameters
- The "pixels" in the texture map may be interpreted as many different things, for example:
  - As colors in RGB or RGBA format
  - As grayscale intensity
  - As alpha values only
- The data can be applied to the polygon in many different ways:
  - Replace: replace the polygon color with the texture color
  - Modulate: multiply the polygon color by the texture color or intensity
  - Similar to compositing: composite the texture with the base color using an operator

24 Example: Diffuse Shading and Texture
- Say you want an object textured, with the texture appearing to be diffusely lit
- Problem: the texture is applied after lighting, so how do you adjust the texture's brightness?
- Solution:
  - Make the polygon white and light it normally
  - Use glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)
  - Use GL_RGB for the internal format
  - The texture color is then multiplied by the surface (fragment) color, and alpha is taken from the fragment
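
A minimal sketch of the setup described above (fixed-function pipeline assumed):

    #include <GL/gl.h>

    /* White material so lighting computes pure brightness; MODULATE then
       multiplies the texture color by that lit fragment color. */
    void lit_texture_setup(void)
    {
        GLfloat white[] = {1.0f, 1.0f, 1.0f, 1.0f};
        glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, white);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }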

25 Specular Color
- Typically, texture mapping happens after lighting (more useful in general)
- Recall plastic surfaces and specularities: the highlight should be the color of the light
- But if texturing happens after lighting, the color of the specularity will be modified by the texture: the wrong thing
- OpenGL lets you apply the specular lighting after the texture
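
The call that enables this (OpenGL 1.2 and later; a hedged sketch, and on some platforms the tokens require glext.h):

    #include <GL/gl.h>

    /* Compute a separate specular color and add it after texturing. */
    void specular_after_texture(void)
    {
        glLightModeli(GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR);
    }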

26 Some Other Uses
- There is a "decal" mode for textures, which replaces the surface color with the texture color, as if you stuck on a decal
  - But texturing happens after lighting, so the lighting information is lost
- However, you can use the texture itself to store lighting information and generate better-looking lighting
  - Put the color information in the polygon, and use the texture for the brightness information
  - These are called "light maps"
  - Normally this uses multiple texture layers: one for color, one for light

27 Texture Recap
(Figure: a triangle in an 8x8 texture map I(s,t), with s and t axes, and the same triangle in world space; an interpolated (s,t) requires a texture sample from I(s,t))
- We must reconstruct the texture image at the point (s,t)
- Time to apply the theory of sampling and reconstruction

28 Textures and Aliasing
- Textures are subject to aliasing:
  - A polygon pixel maps into a texture image, essentially sampling the texture at a point
  - The situation is essentially an image warp, with the warp defined by the mapping and projection
- Standard approaches:
  - Pre-filtering: filter the texture down before applying it
    - Useful when the texture has multiple texels per output image pixel
  - Post-filtering: take multiple samples from the texture and filter them before applying them to the polygon fragment
    - Useful in all situations

29 Point Sampled Texture Aliasing
(Figure: a texture map; the polygon far from the viewer in perspective projection; the rasterized and textured result)
- Note that the back row is a very poor representation of the true image

30 Mipmapping (Pre-filtering)
- If a textured object is far away, one screen pixel (on an object) may map to many texture pixels
  - The problem: how to combine them
- A mipmap is a low-resolution version of a texture
  - The texture is filtered down as a pre-processing step: gluBuild2DMipmaps(...)
  - When the textured object is far away, use the mipmap chosen so that one image pixel maps to at most four mipmap pixels
- A full set of mipmaps requires at most 1/3 more storage than the original texture: each level is 1/4 the size of the previous one, so the levels sum to 1 + 1/4 + 1/16 + ... = 4/3 of the base texture in the limit
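
A minimal sketch, assuming a bound texture object and a 64x64 RGB image (GLU builds and uploads the whole pyramid):

    #include <GL/glu.h>

    /* Build the full mipmap pyramid from a 64x64 RGB image (an assumed
       input size) and select a mipmapped minification filter. */
    void setup_mipmaps(const unsigned char *pixels)
    {
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 64, 64,
                          GL_RGB, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);  /* trilinear filtering */
    }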

31 Many Texels for Each Pixel
(Figure: a texture map with pixels drawn on it; some pixels cover many texture elements (texels). The polygon is far from the viewer in perspective projection)

32 Mipmaps
(Figure: mipmap levels used for far, middle, and near objects)

33 Mipmap Math
- Define a scale factor, ρ = texels/pixel
  - A texel is a pixel from a texture
  - ρ is actually the maximum of the scale factors in x and y
  - The scale factor may vary over a polygon; it can be derived from the transformation matrices
- Define λ = log2 ρ
  - λ tells you which mipmap level to use
  - Level 0 is the original texture, level 1 is the next smallest texture, and so on
  - If λ < 0, then multiple pixels map to one texel: magnification
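
A minimal sketch (illustrative only) of the level computation, assuming the per-axis texel-to-pixel scales have already been derived from the transformation matrices:

    #include <math.h>

    /* Mipmap level selection: rho is the larger texels-per-pixel scale
       of the two screen axes, lambda = log2(rho). */
    static float mipmap_lambda(float rho_x, float rho_y)
    {
        float rho = fmaxf(rho_x, rho_y);
        return log2f(rho);  /* lambda < 0 means magnification */
    }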

34 Post-Filtering
- You tell OpenGL what sort of post-filtering to do
- Magnification: when λ < 0, the image pixel is smaller than the texel
  - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, type)
  - type is GL_LINEAR or GL_NEAREST
- Minification: when λ > 0, the image pixel is bigger than the texel: GL_TEXTURE_MIN_FILTER. You can choose to:
  - Take the nearest texel in the base texture: GL_NEAREST
  - Linearly interpolate the nearest 4 texels in the base texture: GL_LINEAR
  - Take the nearest mipmap, then take the nearest texel or interpolate within it: GL_NEAREST_MIPMAP_NEAREST or GL_LINEAR_MIPMAP_NEAREST
  - Blend the two nearest mipmaps, using the nearest texel or interpolated texels within each: GL_NEAREST_MIPMAP_LINEAR or GL_LINEAR_MIPMAP_LINEAR

35 Filtering Example
- Example: (s,t) = (0.12, 0.1), ρ = 1.4, λ = 0.49, with mipmap levels 0, 1, 2
- NEAREST_MIPMAP_NEAREST: nearest level, nearest texel: level 0, texel (0,0)
- LINEAR_MIPMAP_NEAREST: nearest level, interpolated: a combination of level 0 texels (0,0), (1,0), (1,1), (0,1)
- NEAREST_MIPMAP_LINEAR: nearest texel in each of the two nearest levels, blended: level 0 texel (0,0) * 0.51 + level 1 texel (0,0) * 0.49
- LINEAR_MIPMAP_LINEAR: a combination of levels 0 and 1, 4 texels from each level, using 8 texels in all
- (Naming convention: the first word is the filter within a level; the word after MIPMAP is the filter across levels)
(Figure: the sample point shown in mipmap levels 0, 1, and 2)

36 Boundaries
- You can control what happens if a point maps to a texture coordinate outside the texture image
- All texture images are assumed to go from (0,0) to (1,1) in texture space, which makes the mapping independent of the texture size
- The problem is how to extend the image to make an infinite space:
  - Repeat: assume the texture is tiled
    - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)
  - Clamp to edge: the texture coordinates are clamped to the valid range, then used
    - glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
  - You can specify a special border color:
    - glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, color), where color points to four floats (R,G,B,A)

37 Repeat
(Figure: the texture tiled outside the (0,0) to (1,1) region)

38 Clamp
(Figure: the texture's edge texels extended outside the (0,0) to (1,1) region)

39 Border Color
(Figure: the border color shown outside the (0,0) to (1,1) region)

40 Other Texture Stuff
- The texture must be in fast memory; it is accessed for every pixel drawn
  - If you exceed it, performance will degrade horribly
  - Skilled artists can pack textures for different objects into one image
- Texture memory is typically limited, so a range of functions are available to manage it
- Specifying texture coordinates can be annoying, so there are functions to automate it
- Sometimes you want to apply multiple textures to the same point: multitexturing is now in most new hardware

41 Yet More Texture Stuff
- There is a texture matrix: apply a matrix transformation to texture coordinates before indexing the texture
- There are "image processing" operations that can be applied to the pixels coming out of the texture
- There are 1D and 3D textures
  - Mapping works essentially the same way
  - 3D textures are very memory intensive, and how they are used is very application dependent
  - 1D saves memory if the texture is inherently 1D, like stripes

42 Procedural Texture Mapping
- Instead of looking up an image, pass the texture coordinates to a function that computes the texture value on the fly
  - RenderMan, the Pixar rendering language, does this
  - Available in a limited form with fragment shaders on current generation hardware
- Advantages:
  - Near-infinite resolution with small storage cost
  - The idea works for many other things
- Disadvantage: slow in most cases
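
A minimal sketch (an illustrative function of my choosing, not RenderMan code) of a procedural texture: a checkerboard computed directly from (s,t):

    #include <math.h>

    /* The "texture" is computed from (s,t) rather than stored. n is the
       number of squares per side; s and t are assumed to lie in [0,1]. */
    static int checker(float s, float t, int n)
    {
        int i = (int)floorf(s * (float)n);
        int j = (int)floorf(t * (float)n);
        return (i + j) % 2;  /* 0 or 1: which of the two colors to use */
    }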

43 Other Types of Mapping
- Environment mapping looks up incoming illumination in a map
  - Simulates reflections from shiny surfaces
- Bump mapping computes an offset to the normal vector at each rendered pixel
  - No need to put bumps in the geometry, but the silhouette looks wrong
- Displacement mapping adds an offset to the surface at each point
  - Like putting bumps on the geometry, but simpler to model
- All are available in software renderers, such as RenderMan-compliant renderers
- All are becoming available in hardware

44 The Story So Far
- We've looked at images and image manipulation
- We've looked at rendering from polygons
- Next major section: modeling

45 Shape Modeling
- Modeling is the process of describing an object
- Sometimes the description is an end in itself
  - E.g., computer aided design (CAD) and computer aided manufacturing (CAM)
  - The model is an exact description
  - Why? For drawing, sampling, and analysis
- More typically in graphics, the model is then used for rendering (we will work on this assumption)
  - The model only exists to produce a picture
  - It can be an approximation, as long as the visual result is good
  - The computer graphics motto: "If it looks right, it is right"
  - This doesn't work for CAD
- Why is this hard?
  - Shapes can be arbitrary and complex: hard to describe
  - Conflicting goals: concise, intuitive, expressive, analyzable, ...

46 What is a Shape?
- A mathematical definition is elusive
- A set of points
  - Potentially (usually) infinite
  - "Lives" in some bigger space (e.g., 2D or 3D)
- Many ways to describe sets:
  - A set inclusion test (implicit representation)
  - A procedure for generating elements of the set
  - An explicit mapping from a known set
- What are some things to think about when choosing a representation?

47 Choosing a Representation
- How well does it represent the objects of interest?
- How easy is it to render (or convert to polygons)?
- How compact is it (how cheap to store and transmit)?
- How easy is it to create?
  - By hand, procedurally, by fitting to measurements, ...
- How easy is it to interact with?
  - Modifying it, animating it
- How easy is it to perform geometric computations?
  - Distance, intersection, normal vectors, curvature, ...

48 Some Kinds of Shapes
- Curves: 1D parameter space
  - Like what you draw with a pen
- Surfaces / areas: 2D parameter space
  - The insides of things that take up surface, bounded by a curve
- Solids / volumes: 3D parameter space
  - The insides of things that take up volume
  - An alternative definition: a set with the same dimension as the embedding space (e.g., an area in 2D)

49 Categorizing Modeling Techniques
- Surface vs. volume
  - Sometimes we only care about the surface: rendering and geometric computations
  - Sometimes we want to know about the volume: medical data with information attached to the space
  - Some representations are best thought of as defining the space filled, rather than the surface around the space
- Parametric vs. implicit
  - Parametric generates all the points on a surface (or volume) by "plugging in a parameter", e.g. (sinθ cosφ, sinθ sinφ, cosθ)
  - Implicit models tell you if a point is on (or in) the surface (or volume), e.g. x^2 + y^2 + z^2 - 1 = 0
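
A minimal sketch (illustrative) contrasting the two descriptions for the unit sphere, matching the formulas above:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Parametric: plug in (theta, phi), get a point ON the unit sphere. */
    static Vec3 sphere_point(float theta, float phi)
    {
        Vec3 p = { sinf(theta) * cosf(phi),
                   sinf(theta) * sinf(phi),
                   cosf(theta) };
        return p;
    }

    /* Implicit: classify a point; 0 on the surface, <0 inside, >0 outside. */
    static float sphere_test(Vec3 p)
    {
        return p.x * p.x + p.y * p.y + p.z * p.z - 1.0f;
    }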

50 Boundary vs. Solid Representations
- B-rep: boundary representation
  - Sometimes we only care about the surface: rendering opaque objects and geometric computations
- Solid modeling
  - Some representations are best thought of as defining the space filled, rather than the surface around the space
  - Medical data with information attached to the space
  - Transparent objects with internal structure
  - Taking cuts out of an object: "What will I see if I break this object?"

51 Parametric vs. Implicit vs. Procedural
- Parametric generates all the points on a surface (or volume) by "plugging in a parameter", e.g. the sphere parameterization shown earlier
- Implicit models use an equation that is 0 if the point is on the surface
  - Essentially a function to test the status of a point
- Procedural: a procedure is used to answer any question you might have about the surface
  - E.g., where does a given ray hit the surface?

52 Parameterization
- Parameterization is the process of associating a set of parameters with every point on an object
  - For instance, a line is easily parameterized by a single value
    - Actually, the barycentric parameterization for a line
  - Triangles can be parameterized by their barycentric coordinates
  - Polygons can be parameterized by defining a 2D space in the plane of the polygon, and using the 2D coordinates to give you 3D
- Several properties of a parameterization are important:
  - The smoothness of the mapping from parameter space to 3D points
  - The ease with which the parameter mapping can be inverted
  - Many more
- We care about parameterizations for several reasons
  - Texture mapping is the most obvious one you have seen so far

53 Parameterization Parameterization is the process of associating a set of parameters with every point on an object For instance, a line is easily parameterized by a single value Triangles in 2D can be parameterized by their barycentric coordinates Triangles in 3D can be parameterized by 3 vertices and the barycentric coordinates (need both to locate a point in 3D space) Several properties of a parameterization are important: The smoothness of the mapping from parameter space to 3D points The ease with which the parameter mapping can be inverted Many more We care about parameterizations for several reasons Texture mapping is the most obvious one you have seen so far; require (s,t) parameters for every point in a triangle 11/10/09 © NTUST

54 Techniques We Will Examine
- Curves
- Polygon meshes
  - Surface representation, parametric representation
- Prototype instancing and hierarchical modeling
  - Surface or volume, parametric
- Volume enumeration schemes
  - Volume, parametric or implicit
- Parametric curves and surfaces
  - Surface, parametric
- Subdivision curves and surfaces
- Procedural models

55 What are Parametric Curves?
- Define a parameter space
  - 1D for curves
  - 2D for surfaces
- Define a mapping from parameter space to 3D points
  - A function that takes parameter values and gives back 3D points
- The result is a parametric curve or surface
(Figure: the mapping F: t → (x,y) takes the unit interval to the curve (Fx(t), Fy(t)))
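
A minimal sketch (an illustrative choice of F, not from the slides) of such a mapping: one parameter t traces out a 2D curve, here a unit circle:

    #include <math.h>

    typedef struct { float x, y; } Point2;

    /* The mapping F: t -> (Fx(t), Fy(t)) for t in [0,1]. */
    static Point2 circle_curve(float t)
    {
        const float two_pi = 6.2831853f;
        Point2 p = { cosf(two_pi * t), sinf(two_pi * t) };
        return p;
    }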

56 Why Parametric Curves?
- Parametric curves are intended to provide the generality of polygon meshes, but with fewer parameters for smooth surfaces
  - Polygon meshes have at least as many parameters as there are vertices
- Fewer parameters make it faster to create a curve, and easier to edit an existing curve
- Normal vectors and texture coordinates can be easily defined everywhere
- Parametric curves are easier to animate than polygon meshes

57 Local vs. Global Properties
- Local: what you can tell at a point
  - Position, direction (tangent)
- Global: need to look at the whole curve
  - Closed, consistent
- Tangent vector: the direction of motion
  - The first derivative with respect to "time" (the parameter)

