Lecture 9: Projection Li Zhang Spring 2008


CS559: Computer Graphics, Lecture 9: Projection. Li Zhang, Spring 2008.

Today: Projection. Reading: Shirley ch. 7. Last time, we talked about how to sample a continuous light pattern into a discrete image, and how to resample a discrete image at a different resolution or sampling rate. This lecture, we'll look at projection.

Geometric interpretation of 3D rotations. We can construct a rotation matrix from 3 orthonormal vectors: the rows of the matrix are the 3 unit vectors of the new coordinate frame. Effectively, each row computes the projection of a point onto one axis of the new coordinate frame.
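As a concrete sketch of this interpretation (not from the slides), the numpy snippet below builds a rotation matrix whose rows are the unit axes of a new frame; multiplying a point by it projects the point onto those axes. The particular frame (a 30-degree rotation about z) is an arbitrary illustration.

```python
import numpy as np

# Hypothetical orthonormal frame: u, v, w are the new coordinate axes,
# expressed in the old frame (here, a 30-degree rotation about z).
theta = np.radians(30)
u = np.array([np.cos(theta), np.sin(theta), 0.0])
v = np.array([-np.sin(theta), np.cos(theta), 0.0])
w = np.array([0.0, 0.0, 1.0])

# Rows of the rotation matrix are the three unit vectors of the new frame.
R = np.vstack([u, v, w])

# Each coordinate of R @ p is the projection of p onto one new axis.
p = np.array([1.0, 0.0, 0.0])
p_new = R @ p
assert np.allclose(p_new, [np.cos(theta), -np.sin(theta), 0.0])
# Because the rows are orthonormal, R @ R.T is the identity.
assert np.allclose(R @ R.T, np.eye(3))
```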

The 3D synthetic camera model. Given a 3D model and a virtual camera, we simulate how light goes through the camera to form an image. The synthetic camera model involves two components, specified independently: objects (a.k.a. geometry) and a viewer (a.k.a. camera).

Imaging with the synthetic camera. In graphics, we don't need to put film in the box; we can imagine putting it in front of the camera. What's the difference? The image is no longer upside down, and the change is conceptually trivial. To render: connect each point on the model to the COP, find the intersection with the image plane, and draw the color there. For modeling, creating every repeated shape (e.g., every pillar) from scratch is a waste of time; instead we create one and move it around, and in general we have a library of objects. That's the modeling part; we'll look at it later on. The image is rendered onto an image plane or projection plane (usually in front of the camera). Rays emanate from the center of projection (COP) at the center of the lens (or pinhole). The image of an object point P is at the intersection of the ray through P and the image plane.

Specifying a viewer. Camera specification requires four kinds of parameters: Position: the COP. Orientation: rotations about axes with origin at the COP. Focal length: determines the size of the image on the film plane, or the field of view. Film plane: its width and height. In this lecture, we'll be focusing on the camera part.

3D Geometry Pipeline. Model Space (Object Space) -> World Space, via rotation, translation, and resizing. We have a library of objects, each in its own coordinate system (e.g., a unit cube or unit sphere).

3D Geometry Pipeline. Model Space (Object Space) -> World Space (rotation, translation, resizing) -> Eye Space (View Space) (rotation, translation). In eye space we set up the frustum, field of view, and clipping planes; occlusion and visibility changes are handled here. Converging projection rays are OK, but not the most convenient to implement; instead, we apply a projective transform, scale, and translation.

3D Geometry Pipeline (cont'd). Eye Space -> Normalized Projection Space (project; everything here is still 3D) -> Screen Space (2D) -> Image Space / Raster Space (pixels), via a scale and translation. The continuous light pattern hitting the sensor plane is sampled to form a digital image.

World -> eye transformation. Let's look at how we would compute the world -> eye transformation. This is a simple exercise using the last slide of Monday's lecture (or the first slide of today's), and a good test of whether you really understand the geometric interpretation of rotation: subtracting the eye position (P - e) is a translation, and taking dot products with the eye axes is a rotation.

How to specify the eye coordinate frame? Give the eye location e, the target position p, and the upward direction u (which fixes which direction is vertical, i.e., the up vector). We use the negative-z convention: the camera looks down -z. One advantage of this specification is that there is no need to have u · ze = 0 (u need not be exactly perpendicular to the view direction). OpenGL has a helper command: gluLookAt (eyex, eyey, eyez, px, py, pz, upx, upy, upz).
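A minimal numpy sketch of what a gluLookAt-style world->eye matrix computes, assuming the construction above (subtract e, then dot with the eye axes); the `look_at` helper name and the sample camera below are made up for illustration, not OpenGL's actual code.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a 4x4 world->eye matrix in the style of gluLookAt (sketch)."""
    eye, target, up = map(np.asarray, (eye, target, up))
    ze = eye - target                        # camera looks down -z
    ze = ze / np.linalg.norm(ze)
    xe = np.cross(up, ze)                    # up need not be orthogonal to ze
    xe = xe / np.linalg.norm(xe)
    ye = np.cross(ze, xe)
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = xe, ye, ze   # rotation: rows are eye axes
    M[:3, 3] = -M[:3, :3] @ eye                 # translation: P - e, then rotate
    return M

# The eye itself maps to the origin of eye space.
M = look_at([0, 0, 5], [0, 0, 0], [0, 1, 0])
assert np.allclose(M @ np.array([0.0, 0.0, 5.0, 1.0]), [0, 0, 0, 1])
# A point in front of the camera lands on the negative z axis.
assert np.allclose(M @ np.array([0.0, 0.0, 0.0, 1.0]), [0, 0, -5, 1])
```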

Projections Projections transform points in n-space to m-space, where m<n. In 3-D, we map points from 3-space to the projection plane (PP) (a.k.a., image plane) along rays emanating from the center of projection (COP): There are two basic types of projections: Perspective – distance from COP to PP finite Parallel – distance from COP to PP infinite

Parallel projections. For parallel projections, we specify a direction of projection (DOP) instead of a COP. We can write orthographic projection onto the z = 0 plane as a simple matrix, such that x' = x, y' = y, and the z value is dropped. Normally, though, we do not drop the z value right away. Why not? (We still need z for depth ordering.) You may wonder why we bother introducing so many zeros to form a matrix; you will see the unification reason later.
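The "drop z" form of the orthographic matrix can be written out directly; this is a small numpy illustration of the slide's matrix, with an arbitrary sample point.

```python
import numpy as np

# Orthographic projection onto the z = 0 plane: x' = x, y' = y, z dropped.
# (In practice we keep z for depth ordering; this shows the "drop z" form.)
ortho = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
], dtype=float)

p = np.array([3.0, 4.0, -7.0, 1.0])
assert np.allclose(ortho @ p, [3.0, 4.0, 0.0, 1.0])  # x, y unchanged, z zeroed
```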

Properties of parallel projection. Parallel projections are actually a kind of affine transformation: parallel lines remain parallel, ratios are preserved, and angles are not (in general) preserved. The results are not realistic looking, but are good for exact measurements; parallel projection is most often used in CAD, architectural drawings, etc., where taking exact measurements is important. (Example: Scanalyze.)

Perspective effect: distant objects are smaller, and parallel lines meet.

Perspective Illusion

Derivation of perspective projection. Consider the projection of a point onto the projection plane. By similar triangles, we can compute how much the x and y coordinates are scaled: y/z = y'/(-d), so y' = -(y/z) d and x' = -(x/z) d. Distant things look smaller.
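The similar-triangles relation can be checked numerically. The `project` helper below is a hypothetical sketch, assuming d is the focal distance and z is negative in eye space.

```python
import numpy as np

# Similar triangles: a point (x, y, z) in eye space (z < 0) projects onto
# the plane z = -d at x' = -d * x / z, y' = -d * y / z.
def project(x, y, z, d):
    return (-d * x / z, -d * y / z)

d = 1.0
# A point twice as far away projects to an image half the size.
x1, y1 = project(2.0, 2.0, -4.0, d)
x2, y2 = project(2.0, 2.0, -8.0, d)
assert np.isclose(x1, 0.5)
assert np.isclose(x2, 0.25)
```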

Homogeneous coordinates and perspective projection. Now we can rewrite the perspective projection as a matrix equation. Points with x/z (and y/z) constant map to the same point. What's important here is that we can represent a nonlinear operation using homogeneous coordinates. Compared to the orthographic projection matrix, only the last row is different: it is no longer (0, 0, 0, 1). This lets us decompose a complex task into simpler ones. Losing z entirely is not the best choice.
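A sketch of the homogeneous form, assuming the simple perspective matrix whose only change from the identity is the last row; dividing by w recovers x' = -d x/z. The sample points are arbitrary.

```python
import numpy as np

# Simple perspective matrix: only the last row differs from the identity.
d = 1.0
P = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, -1 / d, 0],
], dtype=float)

p = np.array([2.0, 2.0, -4.0, 1.0])
clip = P @ p               # homogeneous result: [2, 2, -4, 4]
ndc = clip / clip[3]       # divide by w: the nonlinear step
assert np.allclose(ndc, [0.5, 0.5, -1.0, 1.0])  # lands on the z = -d plane

# All points along a ray through the COP (x/z, y/z constant) map to the
# same image point.
q = np.array([4.0, 4.0, -8.0, 1.0])
assert np.allclose((P @ q) / (P @ q)[3], ndc)
```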

Vanishing points. What happens to two parallel lines that are not parallel to the projection plane? Think of train tracks receding into the horizon... The equation for a line is p + t v in homogeneous coordinates, where p is a point on the line, v is the directional vector (the direction of the rail, with homogeneous coordinate 0), and t is the step. After the perspective transformation, we get the projected line.

Vanishing points (cont'd). What happens to the line l = q + t v? Dividing by w, and letting t go to infinity, we get a point: x' = -(vx/vz) d, y' = -(vy/vz) d. Each set of parallel lines (same v) intersects at a vanishing point on the PP. Q: How many vanishing points are there? A: An infinite number, one for each direction v.
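The limit can be demonstrated numerically: two made-up parallel lines sharing direction v project, for very large t, to (approximately) the same point (-d vx/vz, -d vy/vz). The specific lines below are illustrations only.

```python
import numpy as np

# Two parallel lines p + t*v and q + t*v converge, under perspective
# projection, to the same vanishing point as t -> infinity.
d = 1.0
v = np.array([1.0, 0.0, -1.0])           # shared direction (the "rails")
p = np.array([0.0, -1.0, -1.0])
q = np.array([0.0,  1.0, -1.0])

def project(pt):
    x, y, z = pt
    return np.array([-d * x / z, -d * y / z])

t = 1e9                                   # a point very far along each line
vanish = np.array([-d * v[0] / v[2], -d * v[1] / v[2]])
assert np.allclose(project(p + t * v), vanish, atol=1e-6)
assert np.allclose(project(q + t * v), vanish, atol=1e-6)
```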

Vanishing points

General Projective Transformation The perspective projection is an example of a projective transformation.

Properties of perspective projections. Here are some properties of projective transformations: lines map to lines; parallel lines do not necessarily remain parallel; ratios are not preserved. One of the advantages of perspective projection is that size varies inversely with distance, so it looks realistic. A disadvantage is that we can't judge distances as exactly as we can with parallel projections. (Depth cues: stereo, occlusions, perspective (comparative size), focus, motion parallax; auxiliary: sound.)

3D Geometry Pipeline (recap). Model Space (Object Space) -> World Space (rotation, translation, resizing) -> Eye Space (View Space) (rotation, translation). In eye space we set up the frustum, field of view, and clipping planes; occlusion and visibility changes are handled here. Converging projection rays are OK, but not the most convenient to implement; instead, we apply a projective transform, scale, and translation.

Clipping and the viewing frustum. The center of projection and the portion of the projection plane that maps to the final image form an infinite pyramid; the sides of the pyramid are clipping planes. Frequently, additional clipping planes are inserted to restrict the range of depths; these are called the near and far (or the hither and yon) clipping planes. All of the clipping planes together bound the viewing frustum. The basic equations we have seen give a flavor of what happens, but they are insufficient for all applications: they do not get us to a canonical view volume, and they make assumptions about the viewing conditions. To get to a canonical volume, we need a perspective volume …

Clipping and the viewing frustum Notice that it doesn’t really matter where the image plane is located, once you define the view volume You can move it forward and backward along the z axis and still get the same image, only scaled Usually d = near

Clipping Planes. The left/right/top/bottom planes are defined according to where they cut the near clip plane. [Figure: view volume along -zv, bounded by the left and right clip planes at x = l and x = r on the near clip plane (z = n) and by the far clip plane (z = f); the image size is set by where the planes cut the near plane.] This needs 6 parameters (l, r, b, t, n, f). Alternatively, define the left/right and top/bottom clip planes by the field of view.

Field of View. This assumes a symmetric view volume. [Figure: the same view volume along -zv, with the FOV angle at the apex determining the image size on the near clip plane.] This needs 4 parameters: near, far, fov, and aspect ratio.

Focal Distance to FOV. You must have the image size to do this conversion. Why? With the same d but a different image size, you get a different FOV. From the geometry, tan(FOV/2) = (height/2) / d.
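The conversion can be sketched as two small helper functions (hypothetical names, not from the slides), assuming the symmetric-volume relation tan(FOV/2) = (height/2)/d.

```python
import math

# FOV and focal distance are interchangeable only once the image size is
# fixed: tan(FOV/2) = (height/2) / d.
def fov_from_focal(height, d):
    return 2.0 * math.atan((height / 2.0) / d)

def focal_from_fov(height, fov):
    return (height / 2.0) / math.tan(fov / 2.0)

# Same d, larger image -> larger FOV.
assert fov_from_focal(2.0, 1.0) > fov_from_focal(1.0, 1.0)
# The two conversions invert each other.
d = 1.7
assert math.isclose(focal_from_fov(2.0, fov_from_focal(2.0, d)), d)
```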

Perspective Parameters. We have seen three different ways to describe a perspective camera: clipping planes, field of view, and focal distance. The most general is clipping planes: they directly describe the region of space you are viewing. For most graphics applications, field of view is the most convenient. You can convert from one description to another.

Perspective Projection Matrices. We want a matrix that will take points in our perspective view volume and transform them into the orthographic view volume. This matrix will go in our pipeline before an orthographic projection matrix. [Figure: frustum with near-plane corners (l,b,n), (r,t,n) and far-plane corner (r,t,f), mapped to a box.] Parallel projection is easier than perspective, and clipping is more efficient.

Mapping Lines. We want to map all the lines through the center of projection to parallel lines. This converts the perspective case to the orthographic case, so we can use all our existing methods. The relative intersection points of lines with the near clip plane should not change. The matrix that does this looks like the matrix for our simple perspective case. [Figure: rays through the COP in the y/-z plane, between the near plane n and the far plane f.]

How to get this mapping? [Figure: the y/-z diagram with near plane n and far plane f.]

Properties of this mapping. If z = n, then M(x, y, z, 1) = (x, y, z, 1): the near plane is unchanged. If x/z = c1 and y/z = c2, then x' = n·c1 and y' = n·c2: converging rays are bent into parallel rays. If z1 < z2, then z1' < z2': z ordering is preserved.
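The three properties can be verified numerically against a Shirley-style perspective matrix; the specific n, f values below are arbitrary illustrations, and `apply` is a made-up helper that performs the homogeneous divide.

```python
import numpy as np

# Shirley-style perspective matrix that maps the viewing frustum to an
# orthographic (box-shaped) volume; n and f are the near/far plane z values
# (negative, since the camera looks down -z).
n, f = -1.0, -10.0
M = np.array([
    [n, 0, 0, 0],
    [0, n, 0, 0],
    [0, 0, n + f, -f * n],
    [0, 0, 1, 0],
], dtype=float)

def apply(p):
    q = M @ np.append(p, 1.0)
    return q[:3] / q[3]          # homogeneous divide

# Property 1: points on the near plane are unchanged.
assert np.allclose(apply([0.3, -0.2, n]), [0.3, -0.2, n])
# Property 2: converging rays (constant x/z) become parallel: x' = n * (x/z).
assert np.isclose(apply([1.0, 0.0, -2.0])[0], n * (1.0 / -2.0))
# Property 3: depth ordering is preserved (z1 < z2 implies z1' < z2').
z1p = apply([0.0, 0.0, -8.0])[2]
z2p = apply([0.0, 0.0, -2.0])[2]
assert z1p < z2p
```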

General Perspective

To Canonical View Volume. Perspective projection transforms a pyramid volume to an orthographic view volume; a further scale and translation map it to the canonical view volume [-1, 1]^3. Parallel projection is easier than perspective, and clipping is more efficient.

Orthographic View to Canonical Matrix
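A sketch of the orthographic-view-to-canonical matrix (scale each axis, then translate to center at the origin), assuming the standard mapping of [l,r] x [b,t] x [f,n] to [-1,1]^3; the numeric bounds below are made up for illustration.

```python
import numpy as np

# Map the box [l,r] x [b,t] x [f,n] to the canonical view volume [-1,1]^3.
l, r, b, t, n, f = -2.0, 2.0, -1.0, 1.0, -1.0, -10.0
O = np.array([
    [2 / (r - l), 0, 0, -(r + l) / (r - l)],
    [0, 2 / (t - b), 0, -(t + b) / (t - b)],
    [0, 0, 2 / (n - f), -(n + f) / (n - f)],
    [0, 0, 0, 1],
], dtype=float)

# Opposite corners of the view volume land on the canonical corners.
assert np.allclose(O @ np.array([l, b, f, 1.0]), [-1, -1, -1, 1])
assert np.allclose(O @ np.array([r, t, n, 1.0]), [1, 1, 1, 1])
```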

Complete Perspective Projection After applying the perspective matrix, we map the orthographic view volume to the canonical view volume:

3D Geometry Pipeline (recap). Model Space (Object Space) -> World Space (rotation, translation, resizing) -> Eye Space (View Space) (rotation, translation). In eye space we set up the frustum, field of view, and clipping planes. Converging projection rays are OK, but not the most convenient to implement; instead, we apply a projective transform, scale, and translation.

3D Geometry Pipeline (cont'd). Normalized Projection Space -> Screen Space (2D) -> Image Space / Raster Space (pixels): project, then scale and translate. The continuous light pattern hitting the sensor plane is sampled to form a digital image.

Canonical -> Window Transform. Map the canonical square, with corners (-1,-1) and (1,1), to the window rectangle, with corners (xmin, ymin) and (xmax, ymax).
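The canonical-to-window map can be written as a 2D homogeneous matrix: scale by half the window size, then translate to the window center. The 640x480 window below is an arbitrary example.

```python
import numpy as np

# Canonical -> window: map [-1,1]^2 to [xmin,xmax] x [ymin,ymax].
xmin, ymin, xmax, ymax = 0.0, 0.0, 640.0, 480.0
W = np.array([
    [(xmax - xmin) / 2, 0, (xmax + xmin) / 2],
    [0, (ymax - ymin) / 2, (ymax + ymin) / 2],
    [0, 0, 1],
], dtype=float)

# Corners map to corners, and the center maps to the window center.
assert np.allclose(W @ np.array([-1.0, -1.0, 1.0]), [xmin, ymin, 1])
assert np.allclose(W @ np.array([1.0, 1.0, 1.0]), [xmax, ymax, 1])
assert np.allclose(W @ np.array([0.0, 0.0, 1.0]), [320.0, 240.0, 1.0])
```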

Mapping of z is nonlinear. Many mappings have been proposed, and all have nonlinearities. Advantage: handles a wide range of depths (10 cm to 100 m). Disadvantage: depth resolution is not uniform, with more near the near plane and less further away. Common mistake: setting near = 0 and far = infinity. Don't do this: you can't set near = 0, because you lose depth resolution.