CSC4820/6820 Computer Graphics Algorithms Ying Zhu Georgia State University View & Projection

Outline
Camera and view transformation
View volume and projection transformation
Orthographic projection
Perspective projection
Perspective division
Viewport transformation

3D Scenes and Camera
In computer graphics, we often use terms that are analogous to theatre or photography. A virtual world is a "scene", objects in the scene are "actors", and there is a "camera" that specifies the viewing position and viewing parameters. Viewing and projection are about how to position the camera and project the 3D scene onto a 2D screen.

Viewing
In modeling tools you can create multiple cameras, but in OpenGL there is only one camera. The camera can be animated too. The 3D scene is rendered from the viewpoint of the camera, so it is important to make sure your objects are visible through the camera. Note the difference between a camera in computer graphics and a real camera.

Camera in computer graphics
Computer graphics uses the pinhole camera model. This results in perfectly sharp images: no depth of field or motion blur. Real cameras use lenses with variable aperture sizes. This causes depth of field: out-of-focus objects appear blurry.

Depth of field
Depth of field is an important part of the storytelling process: it directs the viewer's eyes to a certain area of the scene. Depth of field can be faked in CG, but it requires extra work.

Motion blur
The camera in computer graphics does not generate motion blur either. Again, it can be faked in CG, but performance may suffer.

Aspects of viewing
There are three aspects of the viewing process:
Positioning the camera: setting the model-view matrix
Selecting a lens: setting the projection matrix
Clipping: setting the view volume

Positioning the camera
Each camera has a location and a direction. The purpose of the viewing transformation is to position the camera. In OpenGL, by default the camera is located at the origin and points in the negative z direction.
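For example, the default camera corresponds to the following gluLookAt call, which multiplies an identity matrix onto the model-view matrix (a minimal sketch; the arguments are just the defaults described above):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Equivalent to the default camera: eye at the origin, looking down -Z, +Y up.
       This particular gluLookAt call produces the identity matrix. */
    gluLookAt(0.0, 0.0, 0.0,   /* eye position */
              0.0, 0.0, -1.0,  /* look-at point */
              0.0, 1.0, 0.0);  /* up vector */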

Moving the camera
No matter where the camera is initially, it eventually needs to be transformed to the origin, facing the -Z axis, in order to make the subsequent stages of the graphics pipeline very efficient. But the picture taken by the (virtual) camera should be the same before and after the camera transformation. How is this achieved? Construct a view transformation matrix that transforms the camera to the origin, facing -Z, and apply this view transformation matrix to every object in the virtual scene. This view transformation is not visible.

What does gluLookAt() do?
gluLookAt(eyex, eyey, eyez, atx, aty, atz, upx, upy, upz) is equivalent to:
glMultMatrixf(M); // post-multiply M with the current model-view matrix
glTranslated(-eyex, -eyey, -eyez);
where M is the rotation matrix built from the camera's unit basis vectors u, v, and n (see the sketch below).
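A minimal sketch of how that rotation matrix can be computed from the gluLookAt parameters. It follows the slide's u, v, n notation and is not the actual GLU source:

    #include <math.h>

    static void cross3(const float a[3], const float b[3], float r[3]) {
        r[0] = a[1]*b[2] - a[2]*b[1];
        r[1] = a[2]*b[0] - a[0]*b[2];
        r[2] = a[0]*b[1] - a[1]*b[0];
    }
    static void normalize3(float v[3]) {
        float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] /= len; v[1] /= len; v[2] /= len;
    }

    /* Builds the rotation part M of gluLookAt, column-major, ready for glMultMatrixf(M). */
    void lookAtRotation(const float eye[3], const float at[3], const float up[3],
                        float M[16])
    {
        float n[3] = { eye[0]-at[0], eye[1]-at[1], eye[2]-at[2] };  /* camera backward */
        float u[3], v[3];
        normalize3(n);
        cross3(up, n, u);  normalize3(u);   /* camera right: u = up x n */
        cross3(n, u, v);                    /* camera up:    v = n x u  */
        /* Rows of M are u, v, n; OpenGL stores matrices column by column. */
        M[0]=u[0]; M[4]=u[1]; M[8] =u[2]; M[12]=0.0f;
        M[1]=v[0]; M[5]=v[1]; M[9] =v[2]; M[13]=0.0f;
        M[2]=n[0]; M[6]=n[1]; M[10]=n[2]; M[14]=0.0f;
        M[3]=0.0f; M[7]=0.0f; M[11]=0.0f; M[15]=1.0f;
    }

gluLookAt(eyex, ..., upz) then behaves like glMultMatrixf(M) followed by glTranslated(-eyex, -eyey, -eyez).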

The bottom line
Understanding the transformations (and matrix multiplications) behind the OpenGL calls can save you a lot of trouble. Don't blindly follow the code patterns in sample programs. Understand the relationship among the current model-view matrix, camera transformations, model transformations, and the OpenGL matrix stack. One common ordering is sketched below.
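This sketch assumes a GLUT-based, double-buffered program; the camera position and object transforms are example values, and drawObject() is a hypothetical drawing routine. It shows one reasonable pattern: load the identity, apply the camera (view) transformation first, then the per-object model transformations, so vertices end up transformed by V x M:

    #include <GL/glut.h>

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        /* Camera (view) transformation: multiplied onto the stack first */
        gluLookAt(0.0, 2.0, 5.0,   /* eye (example values) */
                  0.0, 0.0, 0.0,   /* look-at point */
                  0.0, 1.0, 0.0);  /* up vector */

        /* Model transformations for one object: multiplied after the view */
        glPushMatrix();
        glTranslatef(1.0f, 0.0f, 0.0f);
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
        /* drawObject();  -- hypothetical drawing routine */
        glPopMatrix();

        glutSwapBuffers();  /* assuming a double-buffered GLUT window */
    }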

Projection transformation
The viewing transformation moves the camera to the origin, with the look-at direction along the negative Z axis. Before a scene can actually be rendered, all relevant objects in the scene must be projected onto some kind of plane or into some kind of simple volume; after that, clipping and rendering are performed. The projection transformation maps all 3D objects onto the viewing plane to create a 2D image.

Projection transformation
The previous section described how to compose the desired model-view matrix so that the correct modeling and viewing transformations are applied. Now we explain how to define the desired projection matrix, which is also used to transform the vertices in your scene.

View volume (frustum)
We need to define a view volume for projection. Everything inside the view volume will be projected onto the 2D plane; everything outside the view volume will be "clipped" and ignored. The job of the projection transformation is to transform the view volume (and everything in it) into a canonical view volume: a cube centered at the origin, with minimum corner (-1, -1, -1) and maximum corner (1, 1, 1).

Canonical view volume
The coordinates in this volume are called Normalized Device Coordinates (NDC). The reason for transforming into the canonical view volume is that clipping can be performed more efficiently there, especially in hardware implementations. The canonical view volume can also be conveniently mapped to the 2D window by the viewport transformation.

Projection transformation
There are two basic projections in OpenGL. Orthographic projections are useful in applications where relative length judgments are important; they can also yield simplifications where perspective would be too expensive, e.g. in some medical visualization applications. Perspective projections are used in most interactive 3D applications.

Orthographic projection
Projection lines are orthogonal to the projection surface.

Orthographic projection
The projection plane is parallel to a principal face of the object, usually forming front, top, and side views. In CAD and architecture, we often display three multiview orthographic projections (front, top, side) plus an isometric view (which is not a multiview orthographic view).

Orthographic projection
Preserves both distances and angles for faces parallel to the projection plane, so shapes are preserved and the views can be used for measurements, e.g. building plans and manuals. However, you cannot see what the object really looks like, because many surfaces are hidden from view; often we add an isometric view.

Orthographic projection in OpenGL
Specify an orthographic view volume with glOrtho(left, right, bottom, top, near, far). Anything outside the view volume is clipped. near and far are distances from the camera, as shown in the sketch below.
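A minimal usage sketch; the numeric bounds are just example values:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0,     /* left, right   */
            -1.5, 1.5,     /* bottom, top   */
             0.1, 100.0);  /* near, far: distances in front of the camera */
    glMatrixMode(GL_MODELVIEW);  /* switch back before applying viewing/model transforms */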

Orthographic projection in OpenGL
The orthographic view volume is a rectangular box that lies along the negative Z axis. Remember that the projection transformation happens after the view transformation; at this point, the camera is at the origin, pointing down the negative Z axis.

Orthographic projection
The matrix created by glOrtho() transforms the orthographic view volume into the canonical view volume: a cube that extends from -1 to 1 in x, y, and z, often called clip space. The next steps are clipping, perspective division, and the viewport transformation. We'll discuss clipping later.
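For reference, with l, r, b, t, n, f abbreviating the six glOrtho parameters, the matrix that performs this mapping is:

\begin{bmatrix}
2/(r-l) & 0 & 0 & -(r+l)/(r-l) \\
0 & 2/(t-b) & 0 & -(t+b)/(t-b) \\
0 & 0 & -2/(f-n) & -(f+n)/(f-n) \\
0 & 0 & 0 & 1
\end{bmatrix}

It maps x in [l, r], y in [b, t], and z in [-n, -f] to the range [-1, 1].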

Perspective projection
Projectors converge at the center of projection.

Advantages & disadvantages
Objects farther from the viewer are projected smaller than same-sized objects closer to the viewer, which makes images look realistic. However, equal distances along a line are not projected into equal distances, and angles are preserved only in planes parallel to the projection plane. Perspective projection is more difficult to construct by hand than orthographic projection (but not more difficult by computer).

Perspective projection
Unlike the orthographic view volume, the perspective view volume is a truncated pyramid (a frustum). The perspective view volume also lies along the negative Z axis. Remember that the projection transformation happens after the view transformation; at this point, the camera is at the origin, pointing down the negative Z axis.

Perspective projection
The job of perspective projection is to transform this view volume (and everything in it) into the canonical view volume. Everything outside of this volume will be clipped and ignored.

Perspective projection
Unfortunately, no 4x4 matrix multiplication alone can achieve the perspective transformation, because it is not a linear operation. But there is a workaround called perspective division: use a 4x4 matrix to generate homogeneous points whose 4th component is no longer 1, then divide the first three components by the 4th component (w). This is why homogeneous coordinates are used in 3D graphics: they allow matrix multiplications to be used in the projection transformation. A small example follows below.
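A small worked example, using a simple perspective matrix with the projection plane at z = -d (an illustration of the idea, not the exact glFrustum matrix):

\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/d & 0 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x \\ y \\ z \\ -z/d \end{bmatrix}
\;\xrightarrow{\ \text{divide by } w\ }\;
\begin{bmatrix} -dx/z \\ -dy/z \\ -d \\ 1 \end{bmatrix}

For d = 1 and p = (2, 1, -4, 1), the matrix gives (2, 1, -4, 4); dividing by w = 4 yields (0.5, 0.25, -1, 1), so the point farther from the camera is projected closer to the center.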

What does glFrustum() do?
Internally, OpenGL creates the matrix shown below, where
A = (right + left) / (right - left)
B = (top + bottom) / (top - bottom)
C = -(zFar + zNear) / (zFar - zNear)
D = (-2 * zFar * zNear) / (zFar - zNear)
The current projection matrix is multiplied by this matrix, and the result replaces the current projection matrix.
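Written out (from the standard glFrustum definition, with n = zNear), the matrix is:

\begin{bmatrix}
2n/(right-left) & 0 & A & 0 \\
0 & 2n/(top-bottom) & B & 0 \\
0 & 0 & C & D \\
0 & 0 & -1 & 0
\end{bmatrix}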

What does gluPerspective() do?
Internally, OpenGL creates the matrix shown below. The current projection matrix is multiplied by this matrix, and the result replaces the current projection matrix.
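From the standard gluPerspective definition, with f = cot(fovy / 2), the matrix is:

\begin{bmatrix}
f/aspect & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & (zFar+zNear)/(zNear-zFar) & (2 \cdot zFar \cdot zNear)/(zNear-zFar) \\
0 & 0 & -1 & 0
\end{bmatrix}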

Aspect ratio
The aspect ratio in gluPerspective should match the aspect ratio of the associated viewport. Normally you call gluPerspective(fov, w/h, near, far), where w is the width of your window and h is the height of your window. You want the aspect ratio of your perspective view volume to match the aspect ratio of your window; otherwise, your final image will look distorted. For example, if you set the aspect ratio to a constant value, the image will look distorted when you resize the window. A typical reshape handler is sketched below.
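This sketch assumes a GLUT window; the field of view and clip distances are example values:

    #include <GL/glut.h>

    void reshape(int w, int h)
    {
        if (h == 0) h = 1;  /* avoid division by zero when the window is collapsed */

        /* Map the viewport to the whole window */
        glViewport(0, 0, (GLsizei)w, (GLsizei)h);

        /* Rebuild the projection with the window's current aspect ratio */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0,                        /* vertical field of view in degrees */
                       (GLdouble)w / (GLdouble)h,   /* aspect ratio follows the window */
                       0.1, 100.0);                 /* near and far clip distances */

        glMatrixMode(GL_MODELVIEW);
    }
    /* Registered with: glutReshapeFunc(reshape); */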

Perspective projection
The matrix created by glFrustum() or gluPerspective(), followed by perspective division, transforms the perspective view volume into the canonical view volume: a cube that extends from -1 to 1 in x, y, and z. The output of the matrix (before the division) is often called clip space. The next steps are clipping, perspective division, and the viewport transformation. We'll discuss clipping later.

The journey of a vertex so far
The vertex is transformed first by the model-view matrix and then by the projection matrix:
p' = P x V x M x p
where M is the model transformation (e.g. glTranslatef(), glRotatef(), ...), V is the view transformation (e.g. gluLookAt()), and P is the projection (e.g. gluPerspective()).

After the perspective projection
As a result of these transformations, vertex p is transformed to p'. After the perspective projection, the 4th element of the homogeneous coordinates of p' may no longer be 1. We have to homogenize it to get the real coordinates: divide every element by the 4th element (called w). The result is in Normalized Device Coordinates.

Perspective division
This operation is called perspective division. The homogenized coordinates are called Normalized Device Coordinates: these are the coordinates of point p in the canonical view volume. Now we are ready to map this point to the window.

Viewport transformation
The viewport transformation transforms x and y from normalized device coordinates to window coordinates. Note that the z value of the normalized device coordinates is not transformed in this stage, because the z axis is orthogonal to the window (a 2D plane) and z values have no effect in the mapping; remember that the eye is looking down the negative Z axis. But this z value is not lost: it is passed down to the rasterizer stage, where it is used in scan conversion and the depth buffer test.

Viewport transformation in OpenGL
glViewport(GLint x, GLint y, GLsizei width, GLsizei height)
x, y: specify the lower left corner of the viewport rectangle, in pixels. The initial value is (0, 0).
width, height: specify the width and height of the viewport.

What does glViewport() do?
Let (xnd, ynd) be the normalized device coordinates. Then the window coordinates (xw, yw) are computed as follows:
xw = (xnd + 1)(width / 2) + x
yw = (ynd + 1)(height / 2) + y
This mapping can be expressed as a matrix multiplication too. Now we know where to place this particular vertex in the final 2D image.
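A tiny illustrative helper (hypothetical, not part of OpenGL) that applies the same formulas:

    /* Hypothetical helper: maps NDC (xnd, ynd) to window coordinates,
       using the same parameters passed to glViewport(x, y, width, height). */
    void ndcToWindow(double xnd, double ynd,
                     int x, int y, int width, int height,
                     double *xw, double *yw)
    {
        *xw = (xnd + 1.0) * (width  / 2.0) + x;
        *yw = (ynd + 1.0) * (height / 2.0) + y;
    }
    /* Example: with glViewport(0, 0, 800, 600), NDC (0, 0) maps to the window center (400, 300). */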

glViewport()
glViewport(GLint x, GLint y, GLsizei width, GLsizei height)
x and y are normally set to (0, 0), and width and height are normally set to the window width and height, which means your 2D image size matches your window size. You can make your image smaller or bigger than the window by adjusting those values, e.g. to put multiple images in one window, as sketched below.
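A minimal sketch of two viewports placing two images side by side in one window; the window size and the drawScene() routine are assumptions for illustration:

    int winWidth = 800, winHeight = 600;  /* assumed window size */

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* Left half of the window */
        glViewport(0, 0, winWidth / 2, winHeight);
        /* drawScene();  -- hypothetical, e.g. front view */

        /* Right half of the window */
        glViewport(winWidth / 2, 0, winWidth / 2, winHeight);
        /* drawScene();  -- hypothetical, e.g. side view after changing the camera */

        glutSwapBuffers();  /* assuming a double-buffered GLUT window */
    }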

The journey of a vertex

Why do we spend so much time on transformations?
Because that's where many people find OpenGL programs hard to understand. You need to know these low-level details when you write vertex shader programs: your vertex shader, not OpenGL, is supposed to handle the model, view, and projection transformations. In your vertex shader, you'll have to deal directly with transformation matrices and matrix multiplications.

Summary
Viewing and projection are about how to position the camera and project the 3D scene onto a 2D screen. In OpenGL, the view transformation is performed by manipulating the model-view matrix, the projection transformation is performed by manipulating the projection matrix, and the viewport transformation is performed by glViewport().