
1 CS 376 Introduction to Computer Graphics 04 / 13 / 2007 Instructor: Michael Eckmann

2 Michael Eckmann - Skidmore College - CS 376 - Spring 2007 Today's Topics
–Questions?
–continue going over the pseudocode for ray tracing
–PPM image file format
–start discussion of texture mapping

3 Adding Surface Detail Until now, we've only discussed surfaces as having a single color (and some properties, such as how diffuse and specular they are). That drastically limits the kinds of detail we can reproduce from real objects in the real world (at least without going through tons of work). Texture mapping is a technique that can provide surface detail without a big hit in performance. For instance, a brick wall can be modelled as one big rectangle, but instead of saying it has one color, we specify its “color” as some pre-stored image of a brick wall. The image of the brick wall is the texture map.

4 Texture mapping Texture mapping can be done in n dimensions (usually 1, 2 or 3); the example of a brick wall image is 2d. An n-dimensional texture is usually described by n coordinates, each ranging from 0 to 1.0. Each value at a particular set of coordinates is a color. For a linear pattern (a 1-dimensional texture) we could store a 1d array of colors. Let's say our texture is a list of 6 colors. Our coordinate ranges from s=0 to 1.0, but since there are only 6 colors, the 0th color would be at s=0, the next at s=0.2, the next at s=0.4, and so on until the last, at s=1.0. To map this linear pattern into our scene somewhere, we would specify an s value for one point in our scene and another s value for a different point, and linearly interpolate between them to get a multicolored line between them.
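Here is a minimal Python sketch of the 1d case; the six color names, the nearest-texel rounding, and all function names are illustrative assumptions, not from the slides:

```python
# A 1d texture: a list of 6 texels (colors), indexed by s in [0, 1].
texture_1d = ["red", "orange", "yellow", "green", "blue", "violet"]

def sample_1d(texture, s):
    """Map s in [0, 1] to the nearest texel: s=0 -> texel 0, s=1.0 -> last."""
    i = round(s * (len(texture) - 1))
    return texture[i]

def textured_line(s_start, s_end, num_pixels, texture):
    """Color the pixels of a line by linearly interpolating s along it
    (assumes num_pixels >= 2)."""
    colors = []
    for p in range(num_pixels):
        f = p / (num_pixels - 1)               # fraction along the line, 0..1
        s = s_start + f * (s_end - s_start)    # interpolated texture coordinate
        colors.append(sample_1d(texture, s))
    return colors

# A longer line gives each texel more pixels; a shorter one fewer.
print(textured_line(0.0, 1.0, 12, texture_1d))
```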

5 Texture mapping The length of the line in the scene determines how many pixels get each color. Example on the board. For a 2-dimensional texture we would define the texture with coordinates s,t, each in the range 0 to 1.0. 4 points on the surface of an object are specified as the points where the s,t coordinates are (0, 0), (0, 1.0), (1.0, 0), (1.0, 1.0). The area in between these four corners is linearly interpolated to map the texture to the object (like Gouraud shading).
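A sketch of that corner-based interpolation in Python; the example uses the slide's four corner assignments, while the function name and the bilinear weighting scheme are illustrative assumptions:

```python
def bilinear_st(corners, alpha, beta):
    """Interpolate (s, t) inside a quad from its four corner assignments.
    corners holds the (s, t) pairs at the quad's corners, in the order
    (0,0), (1,0), (0,1), (1,1); (alpha, beta) is the point's fractional
    position within the quad."""
    (s00, t00), (s10, t10), (s01, t01), (s11, t11) = corners
    w00 = (1 - alpha) * (1 - beta)
    w10 = alpha * (1 - beta)
    w01 = (1 - alpha) * beta
    w11 = alpha * beta
    s = w00 * s00 + w10 * s10 + w01 * s01 + w11 * s11
    t = w00 * t00 + w10 * t10 + w01 * t01 + w11 * t11
    return s, t

# The slide's corner assignment:
corners = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0))
print(bilinear_st(corners, 0.5, 0.25))   # -> (0.5, 0.25)
```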

6 Texture mapping We have the option of
–starting in texture space, mapping that to a surface in object space, and then projecting the textured object into image space,
–or
–starting in image space, mapping to the surface, then determining the texture that that piece of the surface maps to in texture space.
–We typically use the second method, because when we start by mapping from texture space we'll often end up with a piece of texture not mapping to exactly one whole pixel (it'll cover either part of a pixel or more than one pixel).
–It's generally easier to start mapping from pixels to the surface and then to the texture --- due to fewer computations and the added possibility of adding antialiasing.

7 Texture mapping So if we use this method:
–start in image space and map to the surface, then determine the texture that that piece of the surface maps to in texture space,
there are a few issues:
–we have a finite number of colors in our texture (these are called texels); for example, the 6 colors stored in our linear array example were 6 texels, and a 2d checkerboard might contain 64 texels.
–a pixel is most certainly not always going to map exactly to the center of a texel, so we have the option of just taking the nearest texel color or averaging the texels (e.g. for a 2d texture, average the 4 nearest texels into one). Both options are sketched below.
–The averaging will reduce aliasing.
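Both options might look like this in Python; the texel layout (rows of RGB tuples), the minimum 2 x 2 texture size, and all names are assumptions for illustration:

```python
def fetch_nearest(tex, s, t):
    """tex is a grid of texels (rows of RGB tuples); take the nearest one."""
    rows, cols = len(tex), len(tex[0])
    i = min(int(t * rows), rows - 1)
    j = min(int(s * cols), cols - 1)
    return tex[i][j]

def fetch_averaged(tex, s, t):
    """Average the 4 texels nearest (s, t) with bilinear weights
    (assumes the texture is at least 2 x 2 texels)."""
    rows, cols = len(tex), len(tex[0])
    x, y = s * cols - 0.5, t * rows - 0.5     # continuous texel coordinates
    j0 = max(0, min(cols - 2, int(x)))
    i0 = max(0, min(rows - 2, int(y)))
    fx = min(max(x - j0, 0.0), 1.0)           # fractional position between
    fy = min(max(y - i0, 0.0), 1.0)           # the two texel centers
    def lerp(a, b, f):
        return tuple(ac + f * (bc - ac) for ac, bc in zip(a, b))
    top = lerp(tex[i0][j0],     tex[i0][j0 + 1],     fx)
    bot = lerp(tex[i0 + 1][j0], tex[i0 + 1][j0 + 1], fx)
    return lerp(top, bot, fy)
```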

8 [diagram: a pixel's area mapped into texture space as a rotated square over the texel grid]

9 Texture mapping From the diagram on the last slide, that rotated square (the texture-mapped pixel area) in texture space will certainly fall on either
–more than one texel, or
–part of one texel, or
–parts of more than one texel.
A solution is to sum the color values of all the texels that the texture-mapped pixel area covers, weighted by the proportion of each texel that it overlaps. If the texture-mapped pixel area falls (fully or partly) outside the texture array, then treat the outside area of the texture either as
–some background color, or
–more usually, as the texture area repeated (example on the board).
Linear interpolation is a problem for perspectively projected surfaces, because when we do the linear interpolation in image space the interpolated value doesn't match the correct place in the texture map for the world object (example in the handout). Improvements come from
–decomposing polygons into smaller ones (triangles), or
–most accurately, performing the perspective division during interpolation, as sketched below.
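A sketch of that perspective division during interpolation, relying on the standard fact that s/w and 1/w (w = depth) interpolate linearly in screen space; the function name and example values are assumptions:

```python
def perspective_correct_s(s0, w0, s1, w1, f):
    """Texture coordinate at fraction f along a screen-space span.
    A plain screen-space lerp of s is wrong under perspective; instead
    lerp s/w and 1/w (both ARE linear in screen space), then divide.
    w0, w1 are the depths of the two endpoints."""
    s_over_w   = (s0 / w0) + f * ((s1 / w1) - (s0 / w0))
    one_over_w = (1 / w0)  + f * ((1 / w1)  - (1 / w0))
    return s_over_w / one_over_w

# Endpoints at depths 1 and 3: the screen-space midpoint shows s = 0.25,
# not 0.5, because the far half of the surface is compressed on screen.
print(perspective_correct_s(0.0, 1.0, 1.0, 3.0, 0.5))   # 0.25
```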

10 Texture mapping Texture mapping can be used to substitute for or scale/alter any or all of the surface's material properties
–e.g. color, diffuse reflection coefficient, specular reflection coefficient, etc.
We just discussed texture maps that were 1d or 2d. Texture maps can be n-dimensional. How do you think a 3d texture map would be used?

11 Environment mapping Environment mapping is texture mapping via an intermediate surface. Usually the intermediate surface is a sphere. On the inside of the hollow sphere is a projected view of the environment (as seen from the center of the sphere). It is basically a panoramic image of the scene projected onto a sphere. The environment map contains colors which can take into account lights in the environment, reflections from other objects, etc. This projected view that is used as the environment map is generated ONCE. Environment mapping is described in our text as something like “a poor man's ray tracer”. When we generate an image USING the environment map, the result will be similar to, but not as accurate as, the ray-traced image, but it can be generated quickly (cheaper than ray tracing).

12 Environment mapping To generate the image that is projected onto the inside of the sphere to be used as our environment map, we can
–ray trace (or use some other method) onto the sphere instead of onto a plane,
–take an image of the environment with a camera with a wide-angle lens, which can be used as the projected view on the inside of the sphere,
–or alternatively map any image onto the inside of the sphere if realism is not desired.
To use the environment map to determine the color of some point on a surface (see drawing on board):
–we map the pixel onto the surface by following a ray through the pixel into our world.
–Then the reflected ray (on the opposite side of the normal vector at that intersection point) is cast, possibly hits other surfaces, and then
–hits the inside of our environment-mapping sphere.
–The color at that place (or the average of colors at that place) on the inside of the sphere is the color we choose to draw.
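A sketch of the lookup step: reflect the ray about the surface normal, then convert the reflected direction to (s, t) on the sphere map. The latitude/longitude parameterization is an assumption (the slides don't fix one), as are the names:

```python
import math

def reflect(d, n):
    """Reflect incoming direction d about the unit normal n: r = d - 2(d.n)n."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dn * ni for di, ni in zip(d, n))

def sphere_map_st(r):
    """Turn a unit reflected direction into (s, t) on the sphere map,
    using a latitude/longitude parameterization."""
    x, y, z = r
    s = (math.atan2(z, x) + math.pi) / (2 * math.pi)   # longitude -> [0, 1]
    t = math.acos(max(-1.0, min(1.0, y))) / math.pi    # latitude  -> [0, 1]
    return s, t

# A ray looking down -z hits a surface whose normal is tilted 45 degrees:
r = reflect((0.0, 0.0, -1.0), (0.0, 0.7071, -0.7071))
print(sphere_map_st(r))
```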

13 [diagram: a pixel's reflected rays hitting an area of the environment sphere, labelled “Pixel Projection onto Environment Map”]

14 Environment mapping From the picture on the last slide, we can see that a pixel maps to an area on the inside of the sphere (labelled on the diagram as “Pixel Projection onto Environment Map”) --- the resulting color is an average of the colors in that area. More generally, the environment map is on a closed 3d surface, not necessarily a sphere; e.g. a cube could be used. If it is on a cube, we would initially generate 6 images that are mapped onto the insides of the six faces of the cube (face selection is sketched below). All forms of environment mapping can simulate the handling of specular and diffuse reflections.
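For the cube case, picking which of the 6 faces a reflected direction hits might look like this; the face names and orientation conventions are assumptions:

```python
def cube_map_face(r):
    """Choose which of the 6 cube faces the direction r hits: the face of
    the largest-magnitude component; dividing the other two components by
    that magnitude gives coordinates in [-1, 1] on the face."""
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return ("+x" if x > 0 else "-x"), (y / ax, z / ax)
    if ay >= ax and ay >= az:
        return ("+y" if y > 0 else "-y"), (x / ay, z / ay)
    return ("+z" if z > 0 else "-z"), (x / az, y / az)

print(cube_map_face((0.2, -0.9, 0.3)))   # ('-y', (0.222..., 0.333...))
```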

15 Environment mapping Advantages of environment mapping
–create the environment map once, then use it for different views
–it is fast (can be used in interactive systems at frame rates)
–it simulates specular and diffuse reflections (if included in the map)
–when the viewpoint (eye/CoP) changes, ray tracing requires a full re-execution of the ray tracer; with environment mapping, we still use the same environment map but can quickly generate a complex-looking image (lots of reflections, etc.) by casting rays from the new CoP

16 Environment mapping Disadvantages of environment mapping
–problems for concave reflective objects (because inter-reflections are position dependent)
–Also, since we use the reflected ray off the surface to determine the intersection with the sphere, more distortion occurs when the object we're rendering is farther from the center of the sphere (because the center was the CoP used to create the environment map). Distortions occur because we'll be picking the wrong (but hopefully, most of the time, close) area on the inside of the sphere to get the color for the surface.

17 Limitations of texture and environment mapping One could create a texture map/environment map of the look of a bumpy surface and then use that to color an object. But what's a limitation of these methods?
–what if lighting in the scene changes?
–what if the viewpoint changes?

18 Bump mapping Texture mapping and environment mapping cannot handle the view-dependent and lighting-dependent nature of shadows and other effects seen on bumpy surfaces like raisins, oranges, etc. To achieve details based on lighting calculations, bump mapping can be used. Bump mapping takes a simple, smooth mathematical surface like a sphere or other curved surface as its model, but renders it so that it does not appear smooth --- before the lighting calculations are done, the surface normals are perturbed across the surface. The normal at a particular surface point is the key to lighting that point of the surface. Therefore, changing the surface normals causes the surface to be rendered as not smooth. Let's see some examples of images created with bump mapping techniques.

19 Bump mapping The sphere or other mathematically described parametric surface that we are rendering will be defined as a function P(u,v), where u and v are the parameters and P(u,v) gives the positions on the surface. The actual normal (before bump mapping) at a point P(u,v) on the surface can be calculated as the cross product of the vectors representing the slopes in the u and v directions. The slopes are the partial derivatives of P with respect to u and v (the parameters). I'll draw a picture on the board for this (unfortunately there is no diagram in the book to describe this). Notation:
P_u is the partial derivative of P with respect to u
P_v is the partial derivative of P with respect to v
the normal N is: N = P_u x P_v
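A small Python sketch of N = P_u x P_v, using central differences as a numerical stand-in for the analytic partials; the sphere parameterization and all names are assumptions:

```python
import math

def cross(a, b):
    """Cross product of two 3d vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sphere_point(u, v, R=1.0):
    """P(u, v) on a sphere of radius R: u = longitude, v = colatitude."""
    return (R * math.sin(v) * math.cos(u),
            R * math.sin(v) * math.sin(u),
            R * math.cos(v))

def surface_normal(P, u, v, h=1e-5):
    """N = P_u x P_v, with the partials taken by central differences."""
    Pu = tuple((a - b) / (2 * h) for a, b in zip(P(u + h, v), P(u - h, v)))
    Pv = tuple((a - b) / (2 * h) for a, b in zip(P(u, v + h), P(u, v - h)))
    return cross(Pu, Pv)

# On a sphere the normal is radial: N is parallel to P(u, v), up to
# sign and scale.
print(surface_normal(sphere_point, 0.7, 1.1))
print(sphere_point(0.7, 1.1))
```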

20 Bump mapping N = P_u x P_v; n = N / |N| (make n a unit vector). We use a bump function B(u,v) that generates relatively small values that will be added to our points from P(u,v). We compute a perturbed normal for the point by
–first adding B(u,v) to P in the direction of n to get P':
P'(u,v) = P(u,v) + B(u,v) n
–then computing N' (the normal at P') as N' = P'_u x P'_v
–now we have to figure out what P'_u and P'_v are.
Note: the use of ' here is NOT notation for derivative.

21 Bump mapping We know that P'(u,v) = P(u,v) + B(u,v) n, so
P'_u = partial derivative with respect to u of (P + B n) = P_u + B_u n + B n_u
and
P'_v = partial derivative with respect to v of (P + B n) = P_v + B_v n + B n_v
We assume that the magnitude of B is small, so we can ignore the last term in both equations to get the approximations
P'_u ≈ P_u + B_u n and P'_v ≈ P_v + B_v n
(a numeric check of this approximation is sketched below).
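A numeric check of that approximation on the unit sphere, where the unit normal n equals the position P; the bump function and all names are illustrative assumptions:

```python
import math

def sphere_point(u, v):
    """Unit sphere: u = longitude, v = colatitude (as in the earlier sketch)."""
    return (math.sin(v) * math.cos(u), math.sin(v) * math.sin(u), math.cos(v))

def bump(u, v, amp=0.01):
    """A small illustrative bump function B(u, v)."""
    return amp * math.sin(5 * u) * math.sin(5 * v)

def P_prime(u, v):
    """P'(u, v) = P(u, v) + B(u, v) n, with n = P on the unit sphere."""
    return tuple(pc + bump(u, v) * pc for pc in sphere_point(u, v))

def partial_u(F, u, v, h=1e-5):
    """Central-difference partial of a vector function with respect to u."""
    return tuple((a - b) / (2 * h) for a, b in zip(F(u + h, v), F(u - h, v)))

u, v = 0.7, 1.1
Bu = (bump(u + 1e-5, v) - bump(u - 1e-5, v)) / 2e-5
n = sphere_point(u, v)                        # unit sphere: normal = position
exact  = partial_u(P_prime, u, v)             # P_u + B_u n + B n_u
approx = tuple(pu + Bu * nc
               for pu, nc in zip(partial_u(sphere_point, u, v), n))
print(exact)
print(approx)   # differs only by the dropped B n_u term, which is O(amp)
```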

22 Bump mapping We need to calculate N' = P'_u x P'_v, so
N' = (P_u + B_u n) x (P_v + B_v n)
which, after we do the math, is
(P_u x P_v) + B_v (P_u x n) + B_u (n x P_v) + B_u B_v (n x n)
Since n x n = 0, we get a good approximation for N':
N' ≈ (P_u x P_v) + B_v (P_u x n) + B_u (n x P_v)
Then we should normalize (make unit magnitude) the N'.
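That final formula as a Python sketch, including the normalization; the vector helpers and the flat-patch example values are assumptions for illustration:

```python
import math

def cross(a, b):
    """Cross product of two 3d vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def perturbed_normal(Pu, Pv, n, Bu, Bv):
    """Bump-mapped normal per the slide's approximation:
    N' ~= (P_u x P_v) + B_v (P_u x n) + B_u (n x P_v), then normalized.
    Pu, Pv: surface tangents; n: unit normal; Bu, Bv: partials of B."""
    terms = [cross(Pu, Pv),
             tuple(Bv * c for c in cross(Pu, n)),
             tuple(Bu * c for c in cross(n, Pv))]
    N = tuple(sum(cs) for cs in zip(*terms))
    length = math.sqrt(sum(c * c for c in N))
    return tuple(c / length for c in N)

# Flat patch example: tangents along x and y, normal along z; the bump
# gradient tilts the normal away from (0, 0, 1).
print(perturbed_normal((1, 0, 0), (0, 1, 0), (0, 0, 1), 0.1, -0.2))
```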

23 Bump mapping Now that we have the N' (the perturbed normal) at the point, what do we do?

24 Ray Tracing / Radiosity Ray tracing is a type of direct illumination method in image space. It is direct illumination in image space because the scene we're rendering is made up of surfaces and lights, and we compute the colors of the pixels one at a time.
–If the view moves --- we have to ray trace again
–If the world moves --- we have to ray trace again
Ray tracing results in some realism, but with a few drawbacks:
+ Handles both diffuse and specular reflections as well as refractions
–Compute intensive
–Shadows are too crisp
Radiosity is a type of global illumination method that works in object space.
+ If the view moves, we DO NOT have to rerun the radiosity algorithm
–Only diffuse, no specular reflection (therefore no mirrorlike surfaces)
+ Shadows are softer, more realistic
+/- Color bleeds from surfaces to nearby surfaces
Radiosity and ray tracing can be combined to produce a more realistic image than either one separately. We'll discuss radiosity next time.

