Introduction to Raytracing Raytracing Introduction (from Wikipedia) In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The ray tracing algorithm builds an image by firing rays into a scene. Typically, each ray must be tested for intersection with some subset of all the objects in the scene.

Introduction to Raytracing Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel.

Introduction to Raytracing The objective of a ray tracing program is to render a photo-realistic image of a virtual scene in 3-dimensional space.

Introduction to Raytracing There are three major elements involved in the process:
1. The viewpoint: the location in 3-D space at which the viewer of the scene is located.
2. The window: a virtual window through which the viewer observes the scene. The window can be viewed as a discrete 2-D pixel array (pixmap). The ray tracing procedure computes the color of each pixel; when all pixels have been computed, the pixmap is written out as a .ppm file.
3. The scene: the objects and the light sources.
(Diagram: the viewpoint looks through the window at the objects and light sources that make up the scene.)

Coordinate Systems Two coordinate systems will be involved, and it will be necessary to map between them:
1. Window coordinates: the coordinates of individual pixels in the virtual window. These are two-dimensional (x, y) integer values.
– e.g., if a 400-column x 300-row image is being created, then the window's x coordinates range from 0 to 399 and its y coordinates range from 0 to 299.
– In the ray tracing algorithm, a ray will be fired through each pixel in the window. The color of each pixel will be determined by the color of the object(s) the ray hits.
2. World coordinates: the "natural" coordinates of the scene, measured in feet, meters, etc. Since world coordinates describe the entire scene, these are three-dimensional (x, y, z) floating point numbers.

Assumptions For the sake of simplicity, we will assume that:
– the screen lies in the z = 0.0 plane
– the center of the window has world coordinates (0.0, 0.0, 0.0)
– the lower left corner of the window has window (pixel) coordinates (0, 0)
– the viewpoint has a positive z coordinate
– all objects have negative z coordinates.

Coordinate Systems "window" and coordinate systems

Translating from Pixel to World Coordinates In building a PPM image it is useful to think in terms of a screen with rows of pixels. Each pixel has a row index and a column index; the screen is 2-D, as is this pixel coordinate system. In the virtual world it is useful to think in terms of a 3-D coordinate system that is measured in some units other than pixels. We will call this the "world" coordinate system. Every pixel has screen coordinates (the column and row index) and world coordinates. As we will see, we need some means to convert between the two coordinate systems. We can do this by noting the relationship between the screen's width and height in pixels and its width and height in world coordinates.

Translating from Pixel to World Coordinates As stated earlier, we will arbitrarily decide that the lower left pixel of the image is at screen coordinates (0, 0), and the CENTER of the image is at world coordinates (0.0, 0.0, 0.0). For anything on the viewpoint side of the screen, the z world coordinate will be positive, and for objects behind the screen the z coordinate is negative. These choices are intuitively natural.

Translating from Pixel to World Coordinates To map a pixel's column index to the pixel's x coordinate in world coordinates we can use the following:

    point_t world;
    window_t *window = scene->window;
    ...
    world.x = ((double)(column) / (double)(scene->picture->columns - 1)) * window->windowWidth;
    world.x -= window->windowWidth / 2.0;

Here we assume "scene" points to the initialized scene_t structure. The ratio (double)(column) / (double)(scene->picture->columns - 1) computes the fraction of the window width that the current column index represents. When we multiply this fraction by the width of the window in world coordinates, we get the distance from the left edge of the window in world coordinates. Finally, since the x coordinate of the middle of the window is 0, we subtract away half of the window width (i.e., the x world coordinate is negative for a pixel in the left half of the window and positive in the right half). The computation of the y coordinate is similar, as sketched below. For all pixels the z coordinate is 0.
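
The y coordinate is computed the same way. Here is a minimal sketch of a complete pixel-to-world helper, assuming the picture also carries a rows count and the window a windowHeight field; the helper's name, pixelToWorld(), is an assumption and not part of the course code:

    /* Sketch: map a pixel's (column, row) indices to world coordinates.
     * Field names not quoted elsewhere in these notes are assumptions. */
    point_t pixelToWorld(scene_t *scene, int column, int row)
    {
        window_t *window = scene->window;
        point_t world;

        /* fraction of the way across the window, times its world width,
           then shifted so the window's center is at x = 0 */
        world.x = ((double)column / (double)(scene->picture->columns - 1)) * window->windowWidth;
        world.x -= window->windowWidth / 2.0;

        /* same idea vertically: pixel row 0 is the bottom of the window */
        world.y = ((double)row / (double)(scene->picture->rows - 1)) * window->windowHeight;
        world.y -= window->windowHeight / 2.0;

        /* the window lies in the z = 0 plane */
        world.z = 0.0;
        return world;
    }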

Translating from Pixel to World Coordinates Example: Suppose the window is 640 pixels wide x 480 pixels high, and the dimensions of the window in world coordinates are 8 ft wide by 6 ft high. Find the world coordinates of the pixel at column 100, row 40. Solution:
– Compute the fraction of the complete x size that must be traversed to reach column 100: 100/(640 - 1) = 100/639.
– Therefore, world.x = 100/639 * 8 ft ≈ 1.25 ft, and world.x = world.x - 8/2.0 ≈ 1.25 - 4.0 = -2.75.
– Similarly, world.y = 40/479 * 6 ft ≈ 0.5 ft, and world.y = world.y - 6/2.0 ≈ 0.5 - 3.0 = -2.5.
– world.z = 0. Why? (Because the screen lies in the z = 0 plane, thus the z coordinate of every point in the window is 0.0!)
– So the world coordinates of the pixel are approximately (-2.75, -2.5, 0.0).
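
The hypothetical pixelToWorld() sketch above reproduces these numbers:

    /* With a 640 x 480 pixel window that measures 8 ft x 6 ft in world
     * coordinates (the example above), the sketch gives: */
    point_t p = pixelToWorld(scene, 100, 40);
    /* p.x = 100/639 * 8 - 4  which is approximately -2.75
       p.y =  40/479 * 6 - 3  which is approximately -2.5
       p.z = 0.0 */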

Computing the Number of Rows in the Output Image We will find that the input data will provide us with the scene dimensions, and the number of pixel columns. From this we can compute the number of pixel rows in the output image. So in the scene_t, the fields sceneHeight and sceneWidth will have known values. Also in the scene_t the pixelColumns field has a known value.

Computing the Number of Rows in the Output Image Note that the following ratios are equal:

    rows / pixelColumns = windowHeight / windowWidth

so we can compute rows as:

    rows = (windowHeight / windowWidth) * pixelColumns

We will see later that the value of rows in the image_t will be computed in a procedure called "completeScene()", and the values of the rows and columns fields will be used to malloc space for the output image.
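
A minimal sketch of what completeScene() might look like; aside from the rows formula and the names quoted in these notes (rows, columns, windowWidth, windowHeight), the struct layout, and in particular the pixels array, is an assumption:

    #include <assert.h>
    #include <stdlib.h>

    /* Sketch: fill in the derived fields of the scene before tracing. */
    void completeScene(scene_t *scene)
    {
        window_t *window  = scene->window;
        image_t  *picture = scene->picture;

        /* the image must have the same aspect ratio as the window */
        picture->rows = (int)((window->windowHeight / window->windowWidth) *
                              picture->columns);

        /* one pixel_t per pixel in the output image */
        picture->pixels = malloc(picture->rows * picture->columns * sizeof(pixel_t));
        assert(picture->pixels != NULL);
    }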

The genRay() function The genRay() function creates a unit vector from the viewpoint to a pixel. We will use this function to create a unit vector from the viewpoint towards every pixel, and then ask the question “did we hit anything?”

The genRay() function The function has the prototype:

    vector_t genRay(scene_t *scene, int column, int row);

where scene points to the scene_t, and column and row are the pixel's pixel coordinates.

The genRay() function The function should return a unit vector that points from the viewpoint toward the pixel at pixel coordinates (column, row). The procedure should (see the sketch below):
1. convert the pixel coordinates to their corresponding world coordinates. The procedure for computing x was described in the previous section; the procedure for computing y is similar. For all pixels z will be 0 (why?).
2. generate a vector from the viewpoint coordinates to the pixel's world coordinates. Recall that this is readily done using the ray() function. The resulting vector will not necessarily be a unit vector.
3. convert the vector to a unit vector.
4. return the unit vector.
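
A minimal sketch of genRay(), reusing the hypothetical pixelToWorld() helper from above. ray() is the lab 3 vector-library function named in these notes; unitize() and the window's viewPoint field are assumed names for "make a unit vector" and the stored viewpoint:

    /* Sketch: build a unit vector from the viewpoint through pixel (column, row). */
    vector_t genRay(scene_t *scene, int column, int row)
    {
        point_t  pixel = pixelToWorld(scene, column, row);      /* step 1 */
        vector_t dir   = ray(scene->window->viewPoint, pixel);  /* step 2 */
        return unitize(dir);                                    /* steps 3 and 4 */
    }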

Additional ray tracer data types In the ray tracer we have many instances of wanting to use vectors. In lab 3 we created a library of functions that manipulate vectors. For the sake of readability in our code we will use typedefs to define additional data types that are all equivalent to a vector_t, e.g.:

    typedef vector_t point_t;
    typedef vector_t intensity_t;

Additional ray tracer data types The pixel_t defines the (r, g, b) fields of a pixel as we have used in previous homeworks, i.e.:

    typedef struct pixel_type
    {
        unsigned char r;
        unsigned char g;
        unsigned char b;
    } pixel_t;

Note that for an intensity_t the RGB values correspond to the x, y, and z components of the tuple_t structure, which of course are 3 double precision numbers. We will see later how the use of an intensity_t differs from a pixel_t, which is 3 eight-bit integer values ranging from 0 to 255.
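
One place the distinction matters is when a computed intensity is finally stored in the image. A minimal sketch of that conversion, assuming each intensity channel lies in 0.0 to 1.0 and clamping anything larger; the clamping policy and the function name are assumptions:

    /* Sketch: convert a floating point intensity into an 8-bit pixel. */
    static pixel_t intensityToPixel(intensity_t i)
    {
        pixel_t p;
        p.r = (unsigned char)(255.0 * (i.x > 1.0 ? 1.0 : i.x));
        p.g = (unsigned char)(255.0 * (i.y > 1.0 ? 1.0 : i.y));
        p.b = (unsigned char)(255.0 * (i.z > 1.0 ? 1.0 : i.z));
        return p;
    }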

The ray tracing algorithm Phase 1: Initialization
– Acquire the input SDL filename and the output ppm filename from the command line
– Process the scene description language (SDL) file
– Print the scene data to stderr
Phase 2: The ray tracing procedure. For each pixel in the screen image:
– initialize the color of the pixel to (0, 0, 0)
– compute the world coordinates of the pixel
– compute the direction in 3-D space of a ray from the viewpoint through the pixel
– identify the first (closest) object hit by the ray
– compute the color of the pixel based upon the illumination of the object(s)
A sketch of this per-pixel loop appears below.
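
A minimal sketch of the Phase 2 loop, built from the hypothetical helpers sketched earlier. makePicture(), findClosestObject(), computeColor(), object_t, and the flat pixels array are placeholder names, not definitions from the course:

    /* Sketch: trace one ray per pixel and store the result in the image. */
    void makePicture(scene_t *scene)
    {
        for (int row = 0; row < scene->picture->rows; row++)
        {
            for (int column = 0; column < scene->picture->columns; column++)
            {
                intensity_t color = {0.0, 0.0, 0.0};            /* start black  */
                vector_t    dir   = genRay(scene, column, row); /* eye ray      */
                object_t   *hit   = findClosestObject(scene, dir);
                if (hit != NULL)
                    color = computeColor(scene, hit, dir);      /* illumination */
                scene->picture->pixels[row * scene->picture->columns + column] =
                    intensityToPixel(color);
            }
        }
    }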

The ray tracing algorithm Phase 3: Writing out the image_t as a .ppm file
– write the .ppm header to the output ppm file
– write the image data to the output ppm file
A sketch of this step appears below.
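
A minimal sketch of the Phase 3 step, assuming a binary (P6) PPM and the picture fields used earlier; the function name and the FILE * argument are assumptions:

    #include <stdio.h>

    /* Sketch: write the finished image as a binary (P6) PPM file.
     * Note that PPM stores the top row of the image first, so depending on
     * how rows are ordered in pixels[], a vertical flip may be needed here. */
    void writePPM(FILE *out, image_t *picture)
    {
        /* header: magic number, width, height, maximum channel value */
        fprintf(out, "P6\n%d %d\n255\n", picture->columns, picture->rows);

        /* pixel data: 3 bytes per pixel, row by row */
        fwrite(picture->pixels, sizeof(pixel_t),
               (size_t)picture->rows * picture->columns, out);
    }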

Data Structures - the big picture WARNING: Some elements of the definitions have been abbreviated and/or assume the use of the typedef construct. See the examples on other pages for these details.
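
For orientation, here is one possible shape of those structures, pieced together from the fields referenced elsewhere in these notes. Any field or type not explicitly quoted earlier (viewPoint, pixels, the object and light lists, list_t) is an assumption, not the course's actual definition:

    /* Sketch of the major ray tracer structures (abbreviated, assumed layout). */
    typedef struct window_type
    {
        double  windowWidth;    /* world-coordinate width of the window   */
        double  windowHeight;   /* world-coordinate height of the window  */
        point_t viewPoint;      /* location of the viewer (assumed name)  */
    } window_t;

    typedef struct image_type
    {
        int      columns;       /* pixel columns, from the input SDL      */
        int      rows;          /* computed in completeScene()            */
        pixel_t *pixels;        /* rows * columns pixels (assumed name)   */
    } image_t;

    typedef struct scene_type
    {
        window_t *window;       /* the virtual window and viewpoint       */
        image_t  *picture;      /* the output image                       */
        list_t   *objects;      /* the objects in the scene (assumed)     */
        list_t   *lights;       /* the light sources (assumed)            */
    } scene_t;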

The main function A properly designed and constructed program is necessarily modular in nature. Modularity is somewhat automatically enforced in O-O languages, but new C programmers often revert to an ugly pack-it-all-into-one-main-function approach. To discourage this in the program, the design will avoid:
1. functions that are too long (greater than 30 lines)
2. nesting of code greater than 2 levels deep
3. lines that are too long (greater than 80 characters)

The main function Here is the main function for the final version of the ray tracer:

    int main(int argc, char *argv[])
    {
        open the scene description language (SDL) input file and the output image PPM file
        process the scene description language file and build the scene data structures
        print the scene data (for debugging purposes)
        create the image
        output the ppm image
        return 0;
    }
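
A minimal sketch of how that skeleton might look once filled in. Apart from completeScene(), which these notes name, every helper name and the argument conventions here are assumptions:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3)
        {
            fprintf(stderr, "usage: %s input.sdl output.ppm\n", argv[0]);
            return EXIT_FAILURE;
        }

        FILE *sdl = fopen(argv[1], "r");    /* scene description input     */
        FILE *ppm = fopen(argv[2], "w");    /* PPM image output            */

        scene_t *scene = processSDL(sdl);   /* build the scene structures  */
        completeScene(scene);               /* derive rows, malloc image   */
        printScene(stderr, scene);          /* debugging output            */
        makePicture(scene);                 /* the per-pixel tracing loop  */
        writePPM(ppm, scene->picture);      /* write the image             */

        return 0;
    }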