Hank Childs, University of Oregon Oct. 24th, 2014 CIS 441/541: Introduction to Computer Graphics Lecture 8: Arbitrary Camera Positions

What am I missing?  1C redux submission  1D redux  ???

Upcoming Schedule  IEEE Vis Nov 8-13  Class canceled Nov 12  Class held Nov 14  SuperComputer Nov  Midterm Nov 19  Class held Nov 21  HiPC Dec. 10+  Get Final projects in on time  Final late policy: 20% off for 1 day late, 50% off thereafter  OH on Weds 10/29: 3:30-4:30

Project #1E (7%), Due Mon 10/27  Goal: add Phong shading  Extend your project1D code  File proj1e_geometry.vtk available on web (9MB)  File “reader1e.cxx” has code to read triangles from file.  No Cmake, project1e.cxx

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

World Space  World Space is the space defined by the user’s coordinate system.  This space contains the portion of the scene that is transformed into image space by the camera transform.  It is normally left to the user to specify the bounds on this space.

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height Camera Transform

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height View Transform

World Space  World Space is the space defined by the user’s coordinate system.  This space contains the portion of the scene that is transformed into image space by the camera transform.  It is normally left to the user to specify the bounds on this space.

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

How do we specify a camera? The “viewing pyramid” or “view frustum”. Frustum: In geometry, a frustum (plural: frusta or frustums) is the portion of a solid (normally a cone or pyramid) that lies between two parallel planes cutting it.

What is the up axis?  Up axis is the direction from the base of your nose to your forehead

What is the up axis?  Up axis is the direction from the base of your nose to your forehead  (if you lie down while watching TV, the screen is sideways)

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

Image Space  Image Space is the three-dimensional coordinate system that contains screen space.  It is the space where the camera transformation directs its output.  The bounds of Image Space are a 3-dimensional cube: {(x,y,z) : −1 ≤ x ≤ 1, −1 ≤ y ≤ 1, −1 ≤ z ≤ 1}

Image Space Diagram  [figure: a cube bounded by X = -1..+1, Y = -1..+1, Z = -1..+1, with the Up direction labeled]

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

Screen Space  Screen Space is the intersection of the xy-plane with Image Space.  Points in image space are mapped into screen space by a parallel projection onto the plane z = 0.  Example:  a point (0, 0, z) in image space will project to the center of the display screen

Screen Space Diagram  [figure: the square -1 <= X <= +1, -1 <= Y <= +1]

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

Device Space  Device Space is the lowest level coordinate system and is the closest to the hardware coordinate systems of the device itself.  Device space is usually defined to be the n × m array of pixels that represent the area of the screen.  A coordinate system is imposed on this space by labeling the lower-left-hand corner of the array as (0,0), with each pixel having unit length and width.

Device Space With Depth Information  Extends Device Space to three dimensions by adding z-coordinate of image space.  Coordinates are (x, y, z) with 0 ≤ x ≤ n, 0 ≤ y ≤ m, z arbitrary (but typically -1 ≤ z ≤ +1)

Easiest Transform World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height

Image Space to Device Space  (x, y, z) → (x', y', z'), where  x' = n*(x+1)/2  y' = m*(y+1)/2  z' = z  (for an n x m image)  Matrix (points are row vectors [x y z 1], left multiplied):

[n/2    0    0   0]
[ 0    m/2   0   0]
[ 0     0    1   0]
[n/2   m/2   0   1]
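A minimal sketch of building that matrix in code, using the Matrix class from the Matrix.cxx slide later in these notes. The free-function name GetDeviceTransform and its n, m parameters are my own; in project 1F this logic would live in the Camera::DeviceTransform stub.

Matrix GetDeviceTransform(int n, int m)
{
    Matrix d;
    for (int i = 0 ; i < 4 ; i++)
        for (int j = 0 ; j < 4 ; j++)
            d.A[i][j] = 0.0;
    d.A[0][0] = n / 2.0;    // x' = n*(x+1)/2
    d.A[1][1] = m / 2.0;    // y' = m*(y+1)/2
    d.A[2][2] = 1.0;        // z' = z
    d.A[3][0] = n / 2.0;    // translation goes in the bottom row
    d.A[3][1] = m / 2.0;    //   (row-vector convention)
    d.A[3][3] = 1.0;
    return d;
}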

Homogeneous Coordinates  Defined: a system of coordinates used in projective geometry, as Cartesian coordinates are used in Euclidean geometry  Primary uses:  4 × 4 matrices to represent general 3-dimensional transformations  a simplified representation of mathematical functions (the rational form)  We only care about the first  I don’t really even know what the second use means

Interpretation of Homogeneous Coordinates  4D points: (x, y, z, w)  Our typical frame: (x, y, z, 1)

Interpretation of Homogeneous Coordinates  4D points: (x, y, z, w)  Our typical frame: (x, y, z, 1) Our typical frame in the context of 4D points So how to treat points not along the w=1 line?

Projecting back to w=1 line  Let P = (x, y, z, w) be a 4D point with w != 1  Goal: find P’ = (x’, y’, z’, 1) such that P projects to P’  (We have to define what it means to project)  Idea for projection:  Draw line from P to origin.  If Q is along that line (and Q.w == 1), then Q is a projection of P

Projecting back to w==1 line  Idea for projection:  Draw line from P to origin.  If Q is along that line (and Q.w == 1), then Q is a projection of P

So what is Q?  Similar triangles argument:  x’ = x/w  y’ = y/w  z’ = z/w
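In code, the divide-by-w step is tiny; a minimal sketch (the function name and the 4-element array representation of a point are mine, not from the project code):

// Project a 4D point back to the w = 1 line: (x, y, z, w) -> (x/w, y/w, z/w, 1).
void DivideByW(double *pt)
{
    double w = pt[3];
    pt[0] /= w;
    pt[1] /= w;
    pt[2] /= w;
    pt[3] = 1.0;
}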

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O -Z  Need to construct a Camera Frame  Need to construct a matrix to transform points from Cartesian Frame to Camera Frame  Transform triangle by transforming its three vertices

Camera frame construction  Must choose (v1,v2,v3,O)  O = camera position  v3 = O-focus  v2 = up  v1 = up x (O-focus) Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O -Z

But wait … what if dot(v2,v3) != 0?  We can get around this with two cross products:  v1 = Up x (O-focus)  v2 = (O-focus) x v1

Camera frame summarized  O = camera position  v1 = Up x (O-focus)  v2 = (O-focus) x v1  v3 = O-focus
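A minimal sketch of the frame construction in C++. The helper names are mine, and the normalization at the end is an assumption based on the worked example on the upcoming slides, which uses unit-length v1, v2, v3.

#include <cmath>

// Cross product and normalization helpers (assumed, not part of the 1F starter code).
static void Cross(const double a[3], const double b[3], double out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

static void Normalize(double v[3])
{
    double len = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    for (int i = 0 ; i < 3 ; i++)
        v[i] /= len;
}

// v3 = O - focus, v1 = up x (O - focus), v2 = (O - focus) x v1, origin = O.
void CalculateCameraFrame(const double position[3], const double focus[3],
                          const double up[3],
                          double v1[3], double v2[3], double v3[3])
{
    for (int i = 0 ; i < 3 ; i++)
        v3[i] = position[i] - focus[i];
    Cross(up, v3, v1);
    Cross(v3, v1, v2);
    Normalize(v1);
    Normalize(v2);
    Normalize(v3);
}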

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O -Z  Need to construct a Camera Frame  ✔  Need to construct a matrix to transform points from Cartesian Frame to Camera Frame  Transform triangle by transforming its three vertices

The Two Frames of the Camera Transform  Our two frames:  Cartesian:  x = (1,0,0), y = (0,1,0), z = (0,0,1), origin (0,0,0)  Camera:  v1 = up x (O-focus)  v2 = (O-focus) x v1  v3 = (O-focus)  origin O

Solving the Camera Transform

[e1,1 e1,2 e1,3 0]   [v1.x v2.x v3.x 0]
[e2,1 e2,2 e2,3 0] = [v1.y v2.y v3.y 0]
[e3,1 e3,2 e3,3 0]   [v1.z v2.z v3.z 0]
[e4,1 e4,2 e4,3 1]   [v1·t v2·t v3·t 1]

Where t = (0,0,0) - O

How do we know?: Cramer’s Rule + simplifications
Want to derive?: Ken Joy’s notes, GraphicsNotes/Camera-Transform/Camera-Transform.html
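A minimal sketch of filling in that solution with the Matrix class from the 1F starter code. Dot is an assumed helper, and in the actual project this logic would be the body of Camera::CameraTransform; here it takes the frame vectors directly.

static double Dot(const double a[3], const double b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Cartesian-frame -> camera-frame matrix, for row vectors [x y z 1] * M.
Matrix BuildCameraTransform(const double v1[3], const double v2[3],
                            const double v3[3], const double O[3])
{
    double t[3] = { -O[0], -O[1], -O[2] };   // t = (0,0,0) - O
    Matrix m;
    for (int i = 0 ; i < 3 ; i++)
    {
        m.A[i][0] = v1[i];   // column 0 is v1 (with v1.t in the bottom row)
        m.A[i][1] = v2[i];   // column 1 is v2
        m.A[i][2] = v3[i];   // column 2 is v3
        m.A[i][3] = 0.0;
    }
    m.A[3][0] = Dot(v1, t);
    m.A[3][1] = Dot(v2, t);
    m.A[3][2] = Dot(v3, t);
    m.A[3][3] = 1.0;
    return m;
}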

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

Let’s do an example

[e1,1 e1,2 e1,3 0]   [v1.x v2.x v3.x 0]
[e2,1 e2,2 e2,3 0] = [v1.y v2.y v3.y 0]
[e3,1 e3,2 e3,3 0]   [v1.z v2.z v3.z 0]
[e4,1 e4,2 e4,3 1]   [v1·t v2·t v3·t 1]

Where t = (0,0,0) - O

Camera frame:
V1 = (0.6, 0, 0.8)
V2 = (0, 1, 0)
V3 = (-0.8, 0, 0.6)
O = (2,2,2)

Let’s do an example

[e1,1 e1,2 e1,3 0]   [v1.x v2.x v3.x 0]   [ 0.6   0  -0.8  0]
[e2,1 e2,2 e2,3 0] = [v1.y v2.y v3.y 0] = [ 0     1   0    0]
[e3,1 e3,2 e3,3 0]   [v1.z v2.z v3.z 0]   [ 0.8   0   0.6  0]
[e4,1 e4,2 e4,3 1]   [v1·t v2·t v3·t 1]   [-2.8  -2   0.4  1]

Where t = (0,0,0) - O = (-2,-2,-2)

Camera frame:
V1 = (0.6, 0, 0.8)
V2 = (0, 1, 0)
V3 = (-0.8, 0, 0.6)
O = (2,2,2)

Example continued  Convert Cartesian frame coordinates (1,2,3) to Camera frame coordinates

[ 0.6   0  -0.8  0]
[ 0     1   0    0]
[ 0.8   0   0.6  0]
[-2.8  -2   0.4  1]

Camera frame:
V1 = (0.6, 0, 0.8)
V2 = (0, 1, 0)
V3 = (-0.8, 0, 0.6)
O = (2,2,2)

Example continued  Convert Cartesian frame coordinates (1,2,3) to Camera frame coordinates

              [ 0.6   0  -0.8  0]
(1, 2, 3, 1)  [ 0     1   0    0]  = (0.2, 0, 1.4, 1)
              [ 0.8   0   0.6  0]
              [-2.8  -2   0.4  1]

These are the coordinates of (1,2,3) in the Camera Frame

Camera frame:
V1 = (0.6, 0, 0.8)
V2 = (0, 1, 0)
V3 = (-0.8, 0, 0.6)
O = (2,2,2)

Example continued Camera frame: V1 = (0.6, 0, 0.8) V2 = (0, 1, 0) V3 = (-0.8, 0, 0.6) O = (2,2,2)  Let’s convert (0.2, 0, 1.4) back to the Cartesian frame.  Do we get (1,2,3)?
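The slide leaves the check as an exercise; here is one way to do it (this check is mine, not from the original deck). Camera-frame coordinates (a, b, c) go back to the Cartesian frame as a·V1 + b·V2 + c·V3 + O:

$$
0.2\,(0.6, 0, 0.8) + 0\,(0, 1, 0) + 1.4\,(-0.8, 0, 0.6) + (2, 2, 2)
= (0.12, 0, 0.16) + (-1.12, 0, 0.84) + (2, 2, 2) = (1, 2, 3)
$$

So yes, we recover (1,2,3).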

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

Our goal World space: Triangles in native Cartesian coordinates Camera located anywhere O Camera space: Camera located at origin, looking down -Z Triangle coordinates relative to camera frame O Image space: All viewable objects within -1 <= x,y,z <= +1 x y z Screen space: All viewable objects within -1 <= x, y <= +1 Device space: All viewable objects within 0<=x<=width, 0 <=y<=height View Transform

View Transformation  [figure: image space, with v running from -1 to +1 and w from -1 to +1]  The viewing transformation is not a combination of simple translations, rotations, scales or shears: it is more complex.

Derivation of Viewing Transformation  I personally don’t think it is a good use of class time to derive this matrix.  Well derived at:  /Viewing-Transformation/Viewing-Transformation.html

The View Transformation  Input parameters: (α, n, f)  Transforms view frustum to image space cube  View frustum: bounded by viewing pyramid and planes z=-n and z=-f  Image space cube: -1 <= u,v,w <= 1  Cotangent = 1/tangent

[cot(α/2)     0          0          0]
[   0      cot(α/2)      0          0]
[   0         0      (f+n)/(f-n)   -1]
[   0         0       2fn/(f-n)     0]
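A minimal sketch of the Camera::ViewTransform stub, filled in directly from this matrix. It assumes the Camera and Matrix declarations from the project slides later in these notes, and that Camera::angle is stored in radians (cot(α/2) needs radians).

#include <cmath>

Matrix Camera::ViewTransform(void)
{
    Matrix v;
    for (int i = 0 ; i < 4 ; i++)
        for (int j = 0 ; j < 4 ; j++)
            v.A[i][j] = 0.0;
    double cot = 1.0 / tan(angle / 2.0);       // cotangent = 1/tangent
    v.A[0][0] = cot;
    v.A[1][1] = cot;
    v.A[2][2] = (far + near) / (far - near);
    v.A[2][3] = -1.0;
    v.A[3][2] = (2.0 * far * near) / (far - near);
    return v;
}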

Let’s do an example  Input parameters: (α, n, f) = (90, 5, 10)

[cot(α/2)     0          0          0]
[   0      cot(α/2)      0          0]
[   0         0      (f+n)/(f-n)   -1]
[   0         0       2fn/(f-n)     0]

Let’s do an example  Input parameters: (α, n, f) = (90, 5, 10)

[1  0   0   0]
[0  1   0   0]
[0  0   3  -1]
[0  0  20   0]

Let’s do an example  Input parameters: (α, n, f) = (90, 5, 10)

[1  0   0   0]
[0  1   0   0]
[0  0   3  -1]
[0  0  20   0]

Let’s multiply some points:
(0,7,-6,1)
(0,7,-8,1)

Let’s do an example  Input parameters: (α, n, f) = (90, 5, 10)

[1  0   0   0]
[0  1   0   0]
[0  0   3  -1]
[0  0  20   0]

Let’s multiply some points:
(0,7,-6,1) = (0, 7, 2, 6) = (0, 1.16, 0.33)
(0,7,-8,1) = (0, 7, -4, 8) = (0, 0.88, -0.5)

Let’s do an example  Input parameters: (α, n, f) = (90, 5, 10)

[1  0   0   0]
[0  1   0   0]
[0  0   3  -1]
[0  0  20   0]

More points:
(0,7,-4,1) = (0, 7, 8, 4) = (0, 1.75, 2)
(0,7,-5,1) = (0, 7, 5, 5) = (0, 1.4, 1)
(0,7,-6,1) = (0, 7, 2, 6) = (0, 1.16, 0.33)
(0,7,-8,1) = (0, 7, -4, 8) = (0, 0.88, -0.5)
(0,7,-10,1) = (0, 7, -10, 10) = (0, 0.7, -1)
(0,7,-11,1) = (0, 7, -13, 11) = (0, 0.63, -1.18)

View Transformation  [figure: image space, with v running from -1 to +1 and w from -1 to +1]  The viewing transformation is not a combination of simple translations, rotations, scales or shears: it is more complex.

View Transformation  [figure: the points below plotted in the v-w plane, with v and w each running from -1 to +1]

More points:
(0,7,-4,1) = (0, 7, 8, 4) = (0, 1.75, 2)
(0,7,-5,1) = (0, 7, 5, 5) = (0, 1.4, 1)
(0,7,-6,1) = (0, 7, 2, 6) = (0, 1.16, 0.33)
(0,7,-8,1) = (0, 7, -4, 8) = (0, 0.88, -0.5)
(0,7,-10,1) = (0, 7, -10, 10) = (0, 0.7, -1)
(0,7,-11,1) = (0, 7, -13, 11) = (0, 0.63, -1.18)

Note there is a non-linear relationship in Z.

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

How do we transform?  For a camera C,  Calculate Camera Frame  From Camera Frame, calculate Camera Transform  Calculate View Transform  Calculate Device Transform  Compose 3 Matrices into 1 Matrix (M)  For each triangle T, apply M to each vertex of T, then apply rasterization/zbuffer/Phong shading
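A hedged end-to-end sketch of that recipe, using the helpers sketched earlier (CalculateCameraFrame, BuildCameraTransform, GetDeviceTransform are mine; in the real project this logic lives in the Camera methods). The one real assumption is the argument order of ComposeMatrices, taken here to mean "apply the first matrix, then the second"; verify with Matrix::Print() as the pitfalls slide suggests.

void TransformVertexToDeviceSpace(Camera &c, int width, int height,
                                  const double worldPt[3], double devicePt[3])
{
    double v1[3], v2[3], v3[3];
    CalculateCameraFrame(c.position, c.focus, c.up, v1, v2, v3);

    Matrix camera = BuildCameraTransform(v1, v2, v3, c.position);
    Matrix view   = c.ViewTransform();
    Matrix device = GetDeviceTransform(width, height);

    // Row vectors: world point is multiplied by camera, then view, then device.
    Matrix M = Matrix::ComposeMatrices(Matrix::ComposeMatrices(camera, view), device);

    double in[4] = { worldPt[0], worldPt[1], worldPt[2], 1.0 };
    double out[4];
    M.TransformPoint(in, out);

    for (int i = 0 ; i < 3 ; i++)
        devicePt[i] = out[i] / out[3];   // don't forget to divide by w
}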

Outline  Review from last time  Camera Transform example  View Transform  Putting it all together  Project 1F Note: Ken Joy’s graphics notes are fantastic

Project #1F (8%), Due Nov 5th  Goal: add arbitrary camera positions  Extend your project1E code  Re-use: proj1e_geometry.vtk available on web (9MB), “reader1e.cxx”, “shading.cxx”.  No Cmake, project1F.cxx  New: Matrix.cxx, Camera.cxx

Project #1F, expanded

Matrix.cxx: complete
Methods:

class Matrix
{
  public:
    double          A[4][4];

    void            TransformPoint(const double *ptIn, double *ptOut);
    static Matrix   ComposeMatrices(const Matrix &, const Matrix &);
    void            Print(ostream &o);
};
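Matrix.cxx is provided, so TransformPoint is not something you write, but a sketch of what it presumably does (assuming the row-vector, left-multiplied convention from the pitfalls slide) helps when debugging transposed matrices:

// ptIn and ptOut are 4D points; computes [x y z w] * A.
void Matrix::TransformPoint(const double *ptIn, double *ptOut)
{
    for (int j = 0 ; j < 4 ; j++)
    {
        ptOut[j] = 0.0;
        for (int i = 0 ; i < 4 ; i++)
            ptOut[j] += ptIn[i] * A[i][j];
    }
}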

Project #1F, expanded

Camera.cxx: you work on this

class Camera
{
  public:
    double          near, far;
    double          angle;
    double          position[3];
    double          focus[3];
    double          up[3];

    Matrix          ViewTransform(void) {;};
    Matrix          CameraTransform(void) {;};
    Matrix          DeviceTransform(void) {;};
    // Will probably need something for calculating Camera Frame as well
};

Also: GetCamera(int frame, int nFrames)

Project #1F, deliverables  Same as usual, but times 4  4 images, corresponding to GetCamera(0, 1000) GetCamera(250,1000) GetCamera(500,1000) GetCamera(750,1000)  If you want:  Generate all thousand images, make a movie Can discuss how to make a movie if there is time

Project #1F, game plan

vector<Triangle> t = GetTriangles();
AllocateScreen();
for (int i = 0 ; i < 1000 ; i++)
{
    InitializeScreen();
    Camera c = GetCamera(i, 1000);
    TransformTrianglesToDeviceSpace();
        // involves setting up and applying matrices
        // … if you modify vector t, remember to undo it later
    RenderTriangles();
    SaveImage();
}

Correct answers given for GetCamera(0, 1000)

Camera Frame: U = 0, ,
Camera Frame: V = , ,
Camera Frame: W = , ,
Camera Frame: O = 40, 40, 40

Camera Transform
( )
( )
( )
( )

View Transform
( )
( )
( )
( )

Transformed , , , 1 to 0, 0, 1
Transformed , , , 1 to 0, 0, -1

Project #1F, pitfalls  All vertex multiplications use 4D points. Make sure you send in 4D points for input and output, or you will get weird memory errors.  Make sure you divide by w.  Your Phong lighting assumed a view of (0,0,-1). The view will now be changing with each render and you will need to incorporate that view direction in your rendering.
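For the third pitfall, one common approach (names and details are mine, not from the starter code) is to compute a per-vertex view direction from the camera position while the vertex is still in world space; whether you use this vector or its negation depends on how your 1E specular term treated (0, 0, -1), so check against that case.

#include <cmath>

// Unit vector from a world-space vertex toward the camera position.
void CalculateViewDirection(const double cameraPos[3], const double vertex[3],
                            double viewDir[3])
{
    double len = 0.0;
    for (int i = 0 ; i < 3 ; i++)
    {
        viewDir[i] = cameraPos[i] - vertex[i];
        len += viewDir[i] * viewDir[i];
    }
    len = sqrt(len);
    for (int i = 0 ; i < 3 ; i++)
        viewDir[i] /= len;
}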

Project #1F, pitfalls  People often get a matrix confused with its transpose. Use the method Matrix::Print() to make sure the matrix you are setting up is what you think it should be. Also, remember the points are left multiplied, not right multiplied.  Regarding multiple renderings:  Don’t forget to initialize the screen between each render  If you modify the triangle in place to render, don’t forget to switch it back at the end of the render

Project #1F (8%), Due Nov 5th  Goal: add arbitrary camera positions