CS 376 Introduction to Computer Graphics 04 / 11 / 2007 Instructor: Michael Eckmann
Michael Eckmann - Skidmore College - CS 376 - Spring 2007

Today's Topics
– Questions?
– Continue going over the pseudocode for ray tracing
– The PPM image file format
– Start discussion of texture mapping
Ray Tracing

Let's take a look at the handout, which shows pseudocode for a recursive ray tracing algorithm, and see what it entails. We'll have all our objects defined in the world (assume only spheres and polygons) with world coordinates. The first part shows that we decide on a CoP and a view plane, then go through the image to be created one pixel at a time, left to right within a scan line and top to bottom across scan lines. For each pixel we:
– Determine the ray through the center of the pixel starting at the CoP (how do we determine this?)
– Call RT_trace, passing in that ray and the number 1 (for the current depth in the ray tree)
– RT_trace will return the color that the pixel should be

RT_trace first determines which is the closest object of intersection (how would we do this?)
– If the ray doesn't intersect any object, then return the background color (whatever you decide it to be); otherwise...
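The first step above — the ray through a pixel center — can be sketched as follows. The camera parameterization here (vp_origin as the view plane's upper-left corner, vp_u and vp_v spanning its full width and height) is an illustrative assumption, not taken from the handout:

```python
import numpy as np

def primary_ray(cop, vp_origin, vp_u, vp_v, i, j, width, height):
    """Ray from the CoP through the center of pixel (i, j).

    Hypothetical camera model: vp_origin is the world-space position of the
    view plane's upper-left corner; vp_u and vp_v are vectors spanning the
    plane's full width and height.
    """
    # The +0.5 offsets make the ray pass through the pixel *center*.
    s = (i + 0.5) / width
    t = (j + 0.5) / height
    point_on_plane = vp_origin + s * vp_u + t * vp_v
    direction = point_on_plane - cop
    # Return origin and unit-length direction.
    return cop, direction / np.linalg.norm(direction)
```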
Ray Tracing

RT_trace first determines which is the closest object of intersection (how would we do this?)
– If the ray doesn't intersect any object, then return the background color (whatever you decide it to be); otherwise,
– call RT_shade with the closest object intersected, the ray, the point of intersection, the normal at that intersection (calculate this), and the depth (which is 1 the first time). How do we compute the point of intersection with the object? How do we compute the normal there?

In RT_shade we set the color initially to be some ambient term (based on an ambient coefficient for the object, the object's color, and the ambient light defined in our world). Go through all the lights in our world and determine the shadow rays one at a time. It then says "if the dot product of the normal and the direction to the light is positive". Can anyone describe what that tells us?
– If a shadow ray is blocked by an opaque surface, ignore it.
– If a shadow ray goes through a transparent surface, reduce the amount of light transmitted by some factor (k_t) associated with the object (see the Basic Transparency Model discussion on pages 578-579 in the text).
– If a shadow ray doesn't intersect anything, just attenuate the light based on the distance. Use this attenuated light and the surface's diffuse property to compute the diffuse term.
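For the closest-intersection question above, a minimal sketch for spheres: solve the quadratic for each object and keep the smallest positive t. The point of intersection is then origin + t*direction, and for a sphere the normal is (point - center) / radius. The dictionary-based sphere representation is just an illustrative choice:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive t where origin + t*direction hits the sphere, else None.

    Assumes direction is a unit-length 3-tuple; solves |o + t*d - c|^2 = r^2.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c            # the 'a' coefficient is 1 for a unit direction
    if disc < 0:
        return None                    # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0   # nearer root first
    if t > 1e-9:
        return t
    t = (-b + math.sqrt(disc)) / 2.0   # the ray may start inside the sphere
    return t if t > 1e-9 else None

def closest_hit(origin, direction, spheres):
    """Test every object and keep the smallest positive t (the nearest hit)."""
    best = None
    for s in spheres:
        t = intersect_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    return best
```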
Ray Tracing

Then add each of the shadow ray / diffuse contributions to the color. RT_shade then continues as long as depth < MaxDepth. Depth is initially 1, and MaxDepth can be set to something like 4.
– Recursively call RT_trace for the reflection ray (and depth+1) if the surface is specularly reflective: scale the color returned by the specular coefficient of the surface and add this to the color of the pixel we're calculating.
– Recursively call RT_trace for the refraction ray (and depth+1) if the surface is transparent: scale the color returned by the transmission coefficient of the surface and add this to the color of the pixel we're calculating.

Return the color.
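The recursive control flow above can be sketched as follows. This is only a skeleton: the scene interface (closest_intersection, ambient_term, diffuse_term, reflect, refract) and the per-object coefficients k_s / k_t are assumed names standing in for the details the pseudocode elides:

```python
MAX_DEPTH = 4
BACKGROUND = (0.0, 0.0, 0.0)

def rt_trace(ray, depth, scene):
    """Return the color seen along ray; depth starts at 1 for primary rays."""
    hit = scene.closest_intersection(ray)   # hypothetical scene interface
    if hit is None:
        return BACKGROUND
    return rt_shade(hit, ray, depth, scene)

def rt_shade(hit, ray, depth, scene):
    # Ambient term first, then one diffuse contribution per shadow ray
    # (blocking/attenuation details are hidden inside diffuse_term here).
    color = scene.ambient_term(hit)
    for light in scene.lights:
        color = add(color, scene.diffuse_term(hit, light))
    if depth < MAX_DEPTH:
        obj = hit.obj
        if obj.k_s > 0:   # specularly reflective: recurse on the reflected ray
            refl = scene.reflect(ray, hit)
            color = add(color, scale(rt_trace(refl, depth + 1, scene), obj.k_s))
        if obj.k_t > 0:   # transparent: recurse on the refracted ray
            refr = scene.refract(ray, hit)
            color = add(color, scale(rt_trace(refr, depth + 1, scene), obj.k_t))
    return color

def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(c, k): return tuple(k * x for x in c)
```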
Ray Tracing

A few comments about ray tracing:
– To store which object is closest for a given ray through a pixel, we could use an altered form of the z-buffer method. How did the z-buffer method (a.k.a. depth buffer) work again? It stored the color of the nearest surface for each pixel in the frame buffer and its distance in a z-buffer. Instead of storing the distance in a z-buffer, store which object is closest in an item buffer.
– Instead of building the ray tree to the same depth for every ray (that is, the same number of levels of recursion), determine at each level whether the expected maximum contribution (to the color of the pixel) of the ray to be cast will be above some threshold. If it is deemed insignificant, don't cast it; if it has the potential to be significant, cast it. For example, a primary ray hits an object, which casts a reflection ray; that reflection ray hits an object and casts another reflection ray, which in turn hits an object that could cast a third reflection ray. The surface that the second reflection ray hits might have a very small specular coefficient, so its contribution to the pixel's color is insignificant; in that case don't cast the third ray (and you save the computation for that ray as well as for any rays it would have spawned, and so on). Ideally, only the rays that count get recursed to further depths.
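The adaptive cutoff above can be sketched by threading a running weight — the product of the coefficients accumulated so far, an upper bound on what the ray can still contribute — through the recursion. The scene interface and the 1% cutoff value are illustrative assumptions; only the reflection branch is shown:

```python
CUTOFF = 0.01  # below ~1% possible contribution, a ray can't matter visibly

def rt_trace_adaptive(ray, weight, scene, depth, max_depth=4):
    """Like RT_trace, but 'weight' is the product of reflection coefficients
    accumulated so far -- an upper bound on this ray's contribution."""
    hit = scene.closest_intersection(ray)
    if hit is None:
        return (0.0, 0.0, 0.0)
    color = scene.local_shading(hit)     # ambient + diffuse terms, elided
    if depth < max_depth:
        w = weight * hit.obj.k_s
        if w >= CUTOFF:                  # only recurse if it could still matter
            refl = scene.reflect(ray, hit)
            child = rt_trace_adaptive(refl, w, scene, depth + 1, max_depth)
            color = tuple(c + hit.obj.k_s * r for c, r in zip(color, child))
    return color
```

With a specular coefficient of 0.05, the second bounce already has a possible contribution of only 0.05 × 0.05 = 0.0025, so it is never cast even though MaxDepth would allow it.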
Simple Image Format: PPM

PPM is an image file format that is easy to write from a program and easy to read into a program. It comes in two forms: raw and plain.
– The first line of the file is a "magic number" consisting of two characters: P3 for plain, P6 for raw.
– The second line contains two integers separated by whitespace giving the width and height (in that order) of the image in pixels.
– The next line contains one number representing the maximum value for a color component (e.g. 255 if you wish to describe each RGB value with a number between 0 and 255, i.e. 1 byte).

Although I describe these things as being on separate lines, all that is really required is whitespace between the parts that I say should be on separate lines.
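For example, a complete plain-format (P3) file for a 2×2 image — red and green on the top row, blue and white on the bottom — looks like this:

```
P3
2 2
255
255 0 0   0 255 0
0 0 255   255 255 255
```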
Simple Image Format: PPM

Next, the file contains height lines, each of which contains width R, G, B triples, which are:
– 3 ASCII decimal values between 0 and the max value, each separated by whitespace, for the plain format
– 3 binary values (1 byte each if the max value < 256, 2 bytes each otherwise) for the raw format

If you wish to write raw-format files using one byte per color value, use type byte in Java. If you wish to write plain format, write the color values to the file as plain text (e.g. a red value of 210 would appear in the file as the 3 characters '2', '1', and '0'). Let's look at example PPM files. They can be displayed in most image viewers and can be converted to other formats.
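As a sketch in Python rather than Java, a raw (P6) writer with a max value of 255 is only a few lines — an ASCII header followed by 3 raw bytes per pixel:

```python
def write_ppm_raw(filename, width, height, pixels):
    """Write a raw (P6) PPM file with max color value 255.

    'pixels' is a list of (r, g, b) integer tuples, one per pixel, in
    reading order (left to right, top to bottom), each component in 0..255.
    """
    with open(filename, "wb") as f:
        # ASCII header: magic number, dimensions, max value.
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        # Raster: 3 raw bytes per pixel, no separators.
        f.write(bytes(v for p in pixels for v in p))
```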
Suggestions for the Ray Tracer Program

You will be required to save your ray traced images to a file (PPM is suggested since it is easy to write). You will also be required to show your ray traced image on screen in OpenGL. Since you are computing the color of one pixel at a time, you can simply draw each pixel as a polygon with four vertices.
Adding Surface Detail

Until now, we've only discussed surfaces as having a single color (and some properties such as how diffuse and specular they are). That drastically limits the kinds of detail we can reproduce from real objects in the real world (at least without going through tons of work). Texture mapping is a technique that can provide surface detail without a big hit in performance. For instance, a brick wall can be modelled as one big rectangle, but instead of saying it has one color, we specify its "color" as some pre-stored image of a brick wall. The image of the brick wall is the texture map.
Texture Mapping

Texture mapping can be done in n dimensions (usually 1, 2, or 3); the brick wall image is a 2D example. An n-dimensional texture is usually described by n coordinates, each ranging from 0 to 1.0. The value at a particular set of coordinates is a color. For a linear pattern (a 1-dimensional texture) we could store a 1D array of colors. Let's say our texture is a list of 6 colors. Our coordinate ranges from s = 0 to 1.0, but since there are only 6 colors, the 0th color would be at s = 0, the next at s = 0.2, the next at s = 0.4, and so on until the last at s = 1.0. To map this linear pattern into our scene somewhere, we would specify an s value for one point in our scene and another s value for a different point, and linearly interpolate between them to get a multicolored line between them.
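The 1D lookup above — texel i sitting at s = i / (n - 1) — can be sketched as a nearest-texel sample (the function name is just illustrative):

```python
def sample_1d(texture, s):
    """Nearest-texel lookup into a 1-D texture for s in [0, 1].

    With n texels, texel i sits at s = i / (n - 1), matching the slide's
    example of 6 colors at s = 0, 0.2, 0.4, ..., 1.0.
    """
    n = len(texture)
    i = int(round(s * (n - 1)))  # snap to the nearest stored color
    return texture[i]
```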
Texture Mapping

The length of the line in the scene determines how many pixels get each color. Example on the board. For a 2-dimensional texture we would define a texture with coordinates s, t, each in the range 0 to 1.0. Four points on the surface of an object are specified as the points where the s, t coordinates are (0, 0), (0, 1.0), (1.0, 0), and (1.0, 1.0). The area between these four corners is linearly interpolated to map the texture to the object (like Gouraud shading).
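The Gouraud-style interpolation between the four corners can be sketched as a bilinear blend of the corner coordinates. The (u, v) parameterization locating a point within the quad is an illustrative assumption:

```python
def bilerp_st(corners, u, v):
    """Bilinearly interpolate (s, t) texture coordinates across a quad.

    corners = ((s00, t00), (s10, t10), (s01, t01), (s11, t11)) are the
    coordinates assigned at the four corners; (u, v) in [0,1]^2 locates
    the point within the quad, as in Gouraud color interpolation.
    """
    (s00, t00), (s10, t10), (s01, t01), (s11, t11) = corners
    s = (1-u)*(1-v)*s00 + u*(1-v)*s10 + (1-u)*v*s01 + u*v*s11
    t = (1-u)*(1-v)*t00 + u*(1-v)*t10 + (1-u)*v*t01 + u*v*t11
    return s, t
```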
Texture Mapping

We have the option of either:
– starting in texture space, mapping that to a surface in object space, and then projecting the textured object into image space, or
– starting in image space, mapping to the surface, and then determining the texture that that piece of the surface maps to in texture space.

We typically use the second method, because when mapping from texture space we'll often end up with a piece of texture not mapping to exactly one whole pixel (it will cover either part of a pixel or more than one pixel). It's generally easier to start by mapping from pixels to the surface and then to the texture, due to fewer computations and the added possibility of antialiasing.
Texture Mapping

So if we use this method (start in image space, map to the surface, then determine the texture that that piece of the surface maps to in texture space), there are a few issues:
– We have a finite number of colors in our texture; these are called texels. For example, the 6 colors stored in our linear array example were 6 texels, and a 2D checkerboard might contain 64 texels.
– A pixel is certainly not always going to map exactly to the center of a texel, so we have the option of just taking the nearest texel's color or averaging texels (e.g. for a 2D texture, average the 4 nearest texels into one).
– The averaging will reduce aliasing.
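The two options above — nearest texel versus averaging the 4 nearest — can be sketched for a 2D texture of scalar texel values (the grid representation and function name are illustrative):

```python
def sample_2d(tex, s, t, average=True):
    """Sample a 2-D texture (a rows x cols grid of scalar texel values).

    average=False: take the single nearest texel.
    average=True:  blend the 4 surrounding texels by bilinear weights,
                   which reduces aliasing.
    """
    rows, cols = len(tex), len(tex[0])
    x = s * (cols - 1)                  # continuous texel coordinates
    y = t * (rows - 1)
    if not average:
        return tex[int(round(y))][int(round(x))]
    x0, y0 = int(x), int(y)             # upper-left of the 4 neighbors
    x1, y1 = min(x0 + 1, cols - 1), min(y0 + 1, rows - 1)
    fx, fy = x - x0, y - y0             # fractional position between texels
    return ((1-fx)*(1-fy)*tex[y0][x0] + fx*(1-fy)*tex[y0][x1]
            + (1-fx)*fy*tex[y1][x0] + fx*fy*tex[y1][x1])
```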
Texture Mapping

From the diagram on the last slide, the rotated square (the texture-mapped pixel area) in texture space will certainly fall on either:
– more than one texel, or
– part of one texel, or
– parts of more than one texel.

A solution is to sum the color values of all the texels that the texture-mapped pixel area covers, weighted by the proportion of each texel that it overlaps. If the texture-mapped pixel area falls (fully or partly) outside the texture array, then treat the outside area of the texture either as:
– some background color, or
– more usually, as the texture area repeated (example on the board).

Linear interpolation is a problem for perspectively projected surfaces, because when we do the linear interpolation in image space the interpolated value doesn't match the correct place in the texture map for the world object (example in the handout). Improvements come from:
– decomposing polygons into smaller ones (triangles), or
– most accurately, performing the perspective division during interpolation.
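The last point — performing the perspective division during interpolation — can be sketched for one coordinate between two projected vertices: interpolate s/w and 1/w linearly in screen space, then divide (w here is the divisor from the perspective divide; the function name is illustrative):

```python
def perspective_correct_s(s0, w0, s1, w1, alpha):
    """Perspective-correct interpolation of a texture coordinate.

    A naive screen-space lerp of s is wrong under perspective; instead
    lerp s/w and 1/w (linear in screen space) and take the quotient.
    alpha in [0, 1] is the screen-space interpolation parameter.
    """
    s_over_w   = (1 - alpha) * (s0 / w0) + alpha * (s1 / w1)
    one_over_w = (1 - alpha) * (1 / w0)  + alpha * (1 / w1)
    return s_over_w / one_over_w
```

At the screen-space midpoint between a near vertex (w = 1) and a far vertex (w = 3), this gives s = 0.25 rather than the naive 0.5 — the texture is compressed toward the far vertex, as perspective demands.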
Texture Mapping

Texture mapping can be used to substitute for, or to scale or alter, any or all of a surface's material properties, e.g. color, diffuse reflection coefficient, specular reflection coefficient, etc. We just discussed texture maps that were 1D or 2D, but texture maps can be n-dimensional. How do you think a 3D texture map would be used?
Texture Mapping

We'll discuss more about texture mapping, as well as environment mapping and bump mapping, next time. We'll also discuss the radiosity method of image creation and compare and contrast it with ray tracing.