Rasterizer Overview References:

“3D Math Primer for Graphics and Game Development”, section 15.3
http://www.songho.ca/opengl/gl_projectionmatrix.html

Goals Input (the scene):
- 0 or more renderables. Rasterizers only deal with triangle meshes, but you can approximate parametric objects. Parametrically, a sphere is a 3d point for the center and a radius; an approximation might be a triangle mesh generated from that description.
- Materials, which can be “applied” to renderables. Lighting is exactly the same as in the raytracer. DirectX and OpenGL let you program materials (shaders).
- 0 or more light sources (just like the raytracer).
- A camera (just like the raytracer).

Goals, cont. Output: A 2d image (just like the raytracer) The method used to achieve this goal is drastically different, however.

Spaces A space (when used in the context of a rasterizer) is a set of transformations which have been applied to all objects. We’ll look at (and define) several spaces:
- Model space
- World space
- View space
- Clip space
- Screen space
There will be a matrix for each step:
- Model=>World matrix
- World=>View matrix
- View=>Clip matrix (there’s an extra step here besides the matrix multiplication)
- Clip=>Screen matrix
The process of applying matrices in order is sometimes called a pipeline. If we’re looking at a vertex in View space, we know that the Model=>World and World=>View matrices have been applied to it. In fact, we can (and do) concatenate Model=>World and World=>View into one matrix (Model=>View).
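The concatenation idea above can be sketched in a few lines. This is a minimal illustration, not the course’s code: it assumes numpy, 4x4 matrices, column vectors, and the helper names (`translate`, `scale`) are mine.

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    """4x4 non-uniform scale matrix."""
    return np.diag([sx, sy, sz, 1.0])

# Concatenate Model=>World and World=>View into one Model=>View matrix.
# With column vectors, matrices apply right-to-left (rightmost first).
model_to_world = translate(5, 0, 0) @ scale(2, 2, 2)
world_to_view = translate(0, 0, -10)
model_to_view = world_to_view @ model_to_world

# A model-space vertex (homogeneous, w=1) run through the combined matrix
# lands in the same place as applying the two matrices one at a time:
v = np.array([1.0, 0.0, 0.0, 1.0])
print(model_to_view @ v)   # x=7, y=0, z=-10, w=1
```

Concatenating once and reusing the product for every vertex is exactly why the pipeline is built out of matrices.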

Rasterizer overview Transform (with a matrix) the vertices of all meshes from Model to Screen space. At this point the x and y values of each point represent pygame pixel coordinates. The z value is the depth – it is used when doing filled-polygon rasterization (not when doing wireframe). In screen space, we use the face list of the triangle mesh…
- …to draw polygon outlines (in wireframe mode)
- …to draw filled polygons that aren’t obscured by other polygons (in filled-poly mode)
So the process of rasterization is mainly just building matrices and transforming the vertices of meshes by those matrices.

Model Space This is the object as it is modeled in Blender, Maya, text files, etc. Note: all points in the mesh are defined relative to a modeling origin and modeling coordinate axes.

World Space Each model in a scene will have a Model=>World transformation matrix which defines where/how to place this object in the world. Example: the scene shown uses 4 copies of the same monkey mesh.
- Copy #1 has an identity M=>W matrix.
- Copy #2 has a rotate/scale/translate M=>W matrix.
- Copies #3 and #4 have scale/translate M=>W matrices.

View Space All objects (meshes, lights, camera) are translated and rotated so that:
- The camera lies at the origin.
- The camera’s local axes line up with the world axes.
Some notation:
- Near: the distance from the camera to the view plane (just as in the raytracer)
- ViewAngle: the angle made by the camera and the top-middle and bottom-middle of the view screen (just as in the raytracer)
- Far: the maximum distance from the camera that we’ll render (new)
- View Frustum: the pyramidal volume that will be rendered.

Clip Space “Squashes” the view frustum into a box. More importantly, objects farther away (closer to the far plane) are shrunk more than those closer to the near plane – i.e. perspective. Note: the output ranges are arbitrary – you could use others. In clip space, you usually cull (remove) any (parts of) triangles outside of the frustum (which is now a box). This is easy because you just look at the components of a vert: if x is outside the range -aspect…+aspect, or y is outside the range -1…+1, or z is outside the range 0…1, clip it.
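The per-vertex test just described is a handful of comparisons. A minimal sketch (the function name and parameters are mine, not the course’s API):

```python
def outside_clip_volume(x, y, z, aspect):
    """True if a clip-space vertex falls outside the view volume
    described above: x in [-aspect, +aspect], y in [-1, 1], z in [0, 1]."""
    return (x < -aspect or x > aspect or
            y < -1.0 or y > 1.0 or
            z < 0.0 or z > 1.0)

# A vertex behind the far plane (z > 1) gets culled:
print(outside_clip_volume(0.0, 0.0, 1.5, aspect=4 / 3))   # True
# A vertex inside the volume does not:
print(outside_clip_volume(0.0, 0.0, 0.5, aspect=4 / 3))   # False
```

A full clipper also has to split triangles that straddle a boundary; this only classifies single vertices.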

Screen Space Clip space coordinates will obey these “rules”:
- -aspectRatio <= X <= +aspectRatio
- -1 <= Y <= 1
- 0 <= Z <= 1
In screen space, we warp in the x and y directions to match the (pygame) window size (e.g. for an 800x600 window, pixel coordinates run 0…799 and 0…599). The z value is used if doing filled-polygon rendering; you need this information to determine which triangle is in front of which.

Model=>World Matrix Simple: any transform you want!
- Translate
- Rotate
- Scale
- A composite matrix
- …

World=>View Matrix Two steps: 1) Translate the camera to the origin. 2) Rotate the camera’s local axes to match up with the world axes. Concatenate them together. Hint: we looked at a matrix just like this (in the matrix composition notes).
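The two steps can be sketched directly. This is an assumed minimal form (numpy, column vectors, and the function name are mine); it takes the camera’s orthonormal local axes as given rather than deriving them:

```python
import numpy as np

def world_to_view(eye, right, up, forward):
    """World=>View: translate the camera position (eye) to the origin,
    then rotate the camera's local axes onto the world axes."""
    t = np.eye(4)
    t[:3, 3] = -np.asarray(eye, dtype=float)   # step 1: translate
    r = np.eye(4)
    r[0, :3] = right                           # step 2: the rotation's rows
    r[1, :3] = up                              # are the camera's local axes
    r[2, :3] = forward
    return r @ t                               # concatenated into one matrix

# Camera sitting at (0, 0, 5) with axes already aligned to the world's:
m = world_to_view([0, 0, 5], [1, 0, 0], [0, 1, 0], [0, 0, 1])
# The world origin ends up at z = -5 in view space (5 units ahead):
print(m @ np.array([0.0, 0.0, 0.0, 1.0]))
```

Using the axes as rotation rows works because the rotation matrix that maps the camera’s axes onto the world axes is the transpose (inverse) of the one built from those axes as columns.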

View=>Clip Space The only really complicated part of the whole process. You can find a detailed description / derivation (which is very readable) here: http://www.songho.ca/opengl/gl_projectionmatrix.html

View=>Clip, cont. (I’m assuming here we’re doing a perspective projection – an orthographic projection would have a different transform matrix.) In this step, we are projecting points – in other words, introducing the perspective effect. Points farther away are “moved” towards the vanishing point. Example: the rails of a train track are parallel, but if you view along them, points farther along the rails are “pulled” towards the middle (the vanishing point).

View=>Clip, cont. Looking at this in a slightly different way (top-down view)… (imagine each point is one point on a rail)

View=>Clip, cont. So how do we do this? Imagine a dotted line which goes from a point to the camera. Find the y-value where this line intersects the near plane. This is the projected y-coordinate.

View=>Clip, cont. We’ll use the law of “Similar Triangles”. The triangle formed by the camera and the projected point on the near plane is similar to the triangle formed by the camera and the original point PV, so:

PC.y = -near * PV.y / PV.z    [1]

(PV.z is negative for points in front of the camera, so PC.y has the same sign as PV.y.) Using similar logic:

PC.x = -near * PV.x / PV.z    [2]

View=>Clip, cont. Now we can attempt to start the matrix for the View=>Clip transform. We want to end up with formulae [1] and [2] from the last slide… …but they involve a division by PV.z – PROBLEM! Why? Because we’d have to come up with a different matrix for each point! (Element 0,0 = Element 1,1 = -near/PV.z.) This would be a huge amount of overhead, and would defeat the purpose of the “pipeline” in the rasterizer. We need something that’s more general.

View=>Clip, cont. We can use a trick… We’ll make the w component of the transformed vector equal to PV.z. After transforming, we’ll divide everything by the w component – homogeneous divide. Now, we can really start our matrix…

View=>Clip, cont. We want the new w-value to be equal to PV.z… …The bottom row of the matrix controls this, which means we need the bottom row to be [0 0 1 0]. Since we’re going to be doing a homogeneous divide, we need the new x- and y-values (before the divide) to be -near*PV.x and -near*PV.y… …in order to get formulae [1] and [2] after dividing by w. That makes elements (0,0) and (1,1) both equal to -near.

View=>Clip, cont. Now for the 3rd row (which determines the new z-value). I’m making the (arbitrary) choice that I want my z (depth) values to be in the range 0…1: 0 = on the near plane, 1 = on the far plane. First observation: the depth is completely controlled by the z-coordinate in View space (the x and y don’t affect it). This means i and j (the row’s first two entries) are 0…so now the third row is [0 0 k l].

View=>Clip, cont. Now for “k” & “l”:
- When PV.z is -near, we should get 0.0 for PC.z.
- When PV.z is -far, we should get 1.0 for PC.z.
Remember that we’ll be dividing the z-value by the w-value (during the homogeneous divide), which is PV.z. Applying the normal matrix multiplication rules and anticipating the homogeneous divide, these are our desired values:

(k * (-near) + l) / (-near) = 0.0    [3]
(k * (-far) + l) / (-far) = 1.0      [4]

View=>Clip, cont. Solve [3] for l (multiply both sides by the denominator, -near):

l = k * near    [5]

Substitute the l in [5] into [4], multiply both sides by -far, and factor out the k:

-k*far + k*near = -far
k * (near - far) = -far
k = far / (far - near)    [6]

Substitute the k in [6] into [5] and multiply the near term through:

l = near * far / (far - near)    [7]
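Putting [1], [2], [6], and [7] together gives the whole View=>Clip step. This is a sketch under the slide’s conventions (camera looking down -z, depth in 0…1) with an assumed simplification: no ViewAngle/aspect scaling of x and y, which a production projection matrix would include. The numpy code and function names are mine.

```python
import numpy as np

def view_to_clip(near, far):
    """View=>Clip matrix using the k and l derived above."""
    k = far / (far - near)              # [6]
    l = near * far / (far - near)       # [7]
    m = np.zeros((4, 4))
    m[0, 0] = -near                     # x' = -near * x, per [2]
    m[1, 1] = -near                     # y' = -near * y, per [1]
    m[2, 2] = k                         # z' = k*z + l
    m[2, 3] = l
    m[3, 2] = 1.0                       # w' = z (the trick enabling the divide)
    return m

def project(m, point):
    """Apply the matrix, then divide by w (the homogeneous divide)."""
    q = m @ np.append(np.asarray(point, dtype=float), 1.0)
    return q[:3] / q[3]

m = view_to_clip(near=1.0, far=10.0)
print(project(m, [0.0, 2.0, -1.0]))    # on the near plane: depth 0.0
print(project(m, [0.0, 2.0, -10.0]))   # on the far plane:  depth 1.0
```

Note how the point on the far plane has its y shrunk (2.0 becomes 0.2) while the near-plane point keeps y = 2.0: that shrinking is the perspective effect.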

View=>Clip Matrix, cont. Afterwards, the w component of the transformed vector will be the z-component (in view space). You need to do a homogeneous divide: dividing all 4 components of the new vector by the w component. Afterwards w will equal 1.0. You can’t do this step any other way (it would be nice if we could do it with a matrix). Since we can’t, this is the one step in the “transform pipeline” that isn’t done with a matrix.

Clip space operations If we really wanted to optimize the rasterizer, we’d now remove a vertex (which is now in Clip Space) if any of these apply:
- Its x value is outside the range -aspectRatio…+aspectRatio
- Its y value is outside the range -1…+1
- Its z value is outside the range 0…1
Note: you might have to add new vertices in their place (when a triangle straddles a boundary)…

Clip Space operations, cont. (This usually only applies to filled-poly rasterization.) [ADD DISCUSSION OF BACKFACE CULLING]
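The slide leaves backface culling for later; as a placeholder, here is one common approach, offered as a sketch rather than the course’s method (the function names are mine). Assuming front faces wind counter-clockwise after projection, a triangle whose projected vertices have non-positive signed area faces away from the camera and can be skipped:

```python
def signed_area_2d(a, b, c):
    """Twice the signed area of triangle abc ((x, y) pairs).
    Positive when a, b, c wind counter-clockwise."""
    return ((b[0] - a[0]) * (c[1] - a[1]) -
            (c[0] - a[0]) * (b[1] - a[1]))

def is_backface(a, b, c):
    """Assuming counter-clockwise front faces, a non-positive
    signed area means the triangle faces away from the camera."""
    return signed_area_2d(a, b, c) <= 0

print(is_backface((0, 0), (1, 0), (0, 1)))   # CCW: front face -> False
print(is_backface((0, 0), (0, 1), (1, 0)))   # CW: back face   -> True
```

Culling backfaces before rasterizing typically discards roughly half the triangles of a closed mesh before any per-pixel work happens.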

Clip=>Screen The z-values of our vertices are now done: they’re in the range 0…1 (0 = close, 1 = far). We need to change the range of values for x and y. Currently, x is in the range -aspect…+aspect and y is in the range -1…+1. We want to transform the x,y values to lie in this range:
- X between 0…pixelWidth (of the window)
- Y between 0…pixelHeight (of the window)
Additionally, most windows have a +y axis going down, so we’ll need to account for that.

Clip=>Screen, cont. 3 steps:
Step 1: Invert the y-values (a y-axis invert matrix).
Step 2: Scale:
- X by pixelWidth/(2*aspect) (they’ll then lie in the range -pixelW/2…+pixelW/2)
- Y by pixelHeight/2 (they’ll then lie in the range -pixelH/2…+pixelH/2)
- Z by 1 (so they’ll still lie in the range 0…1)
Step 3: Translate:
- X by pixelW/2 (they’ll then lie in the range 0…pixelW)
- Y by pixelH/2 (they’ll then lie in the range 0…pixelH)
- Z by 0 (won’t change them)
Concatenate these 3 steps into 1 matrix and you’re done!
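The three steps above concatenate into one matrix like so (a sketch; numpy and the function name are assumptions, not the course’s code):

```python
import numpy as np

def clip_to_screen(pixel_w, pixel_h, aspect):
    """Concatenate the three Clip=>Screen steps into one matrix."""
    invert_y = np.diag([1.0, -1.0, 1.0, 1.0])               # step 1
    scale = np.diag([pixel_w / (2 * aspect),                # step 2
                     pixel_h / 2, 1.0, 1.0])
    translate = np.eye(4)
    translate[:2, 3] = [pixel_w / 2, pixel_h / 2]           # step 3
    return translate @ scale @ invert_y   # rightmost applies first

m = clip_to_screen(800, 600, aspect=800 / 600)
# The clip-space top-left corner (-aspect, +1) maps to pixel (0, 0):
print(m @ np.array([-800 / 600, 1.0, 0.0, 1.0]))
```

A quick sanity check on the corners (top-left to (0, 0), bottom-right to (pixelW, pixelH)) is the easiest way to catch a flipped axis or a misplaced translate.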

Wireframe rendering Now each vertex in Screen space is a pygame coordinate (just look at the .x and .y part) For each face you want to draw, draw an outlined polygon (pygame.draw.polygon) that connects the 3-n points in the polygon.

Filled Polygon rasterization A little trickier because we now need to look at depth. (I’m using the z-buffer algorithm here.) Step 1: Create a buffer (an image) – the depth buffer. This only needs one channel (i.e. it’s a grayscale image). This will be used to store the depth of the closest “thing” drawn so far at each pixel. Initialize it to 1 (the farthest z-value).
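The depth buffer and its per-pixel test can be sketched as follows (a minimal illustration using plain lists; the function names are mine, and a real implementation would use a grayscale surface or array):

```python
def make_depth_buffer(width, height):
    """One float per pixel, initialized to 1.0 (the farthest depth)."""
    return [[1.0] * width for _ in range(height)]

def try_plot(depth_buffer, x, y, depth):
    """Return True (and record the depth) only if this fragment is
    closer than whatever has been drawn at (x, y) so far."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        return True     # caller should draw this pixel
    return False        # hidden behind an earlier, closer polygon

buf = make_depth_buffer(4, 4)
print(try_plot(buf, 1, 1, 0.5))   # True: closer than the initial 1.0
print(try_plot(buf, 1, 1, 0.8))   # False: hidden behind the 0.5 fragment
```

Because the test is per-pixel, polygons can be drawn in any order and partial overlaps resolve correctly, which is the whole appeal of the z-buffer algorithm.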

Filled Polygon rasterization, cont. Step2: Rasterize each polygon. Convert the 3-n “corner” points into a set of pixel coordinates.

Filled Polygon Rasterization, cont. Step 3: For each pixel, we need to compute a depth value. We’ll use this in the next stage to decide if (parts of) this polygon are hidden by other polygons. We already have it for the corner pixels… …but we also need it for the “inside” pixels too. To solve this, we can use the barycentric coordinates of those inside pixels.

Interlude: Barycentric coordinates Given a triangle P0 P1 P2, any point inside it can be described by three weights (one per corner) that sum to 1. Example barycentric coordinates (relative to [P0, P1, P2]):
- A (very close to P0) = [0.9, 0.05, 0.05]
- B (very close to P1) = [0.05, 0.9, 0.05]
- C (very close to P2) = [0.05, 0.05, 0.9]
- D (midpoint of edge P0P1) = [0.5, 0.5, 0]
- E (near the center) = [0.33, 0.33, 0.34]

Interlude: Barycentric coordinates, cont. One technique for finding these barycentric coordinates involves calculating the areas of triangles. Split triangle P0P1P2 at the query point Q into three sub-triangles: ΔQP1P2, ΔP0QP2, and ΔP0P1Q. Each barycentric coordinate is the area of the sub-triangle opposite that corner divided by the whole area:

b0 = Area(ΔQP1P2) / Area(ΔP0P1P2)
b1 = Area(ΔP0QP2) / Area(ΔP0P1P2)
b2 = Area(ΔP0P1Q) / Area(ΔP0P1P2)
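The area-ratio technique can be sketched directly, computing each 2D triangle area from a cross product (the function names are mine):

```python
def tri_area(p, q, r):
    """Area of 2D triangle pqr, via half the cross product magnitude."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) -
               (r[0] - p[0]) * (q[1] - p[1])) / 2

def barycentric(q, p0, p1, p2):
    """Barycentric coordinates of q in triangle p0 p1 p2: each weight
    is the area of the sub-triangle opposite that corner, divided by
    the whole triangle's area."""
    whole = tri_area(p0, p1, p2)
    b0 = tri_area(q, p1, p2) / whole    # weight of p0
    b1 = tri_area(p0, q, p2) / whole    # weight of p1
    b2 = tri_area(p0, p1, q) / whole    # weight of p2
    return (b0, b1, b2)

# The centroid of any triangle has coordinates (1/3, 1/3, 1/3):
print(barycentric((1 / 3, 1 / 3), (0, 0), (1, 0), (0, 1)))
```

Note that using `abs` means points outside the triangle also produce positive weights (summing to more than 1); a rasterizer that clips first never feeds such points in.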

Filled Polygon rasterization, cont. Step 3, cont: Recall, we’re trying to get a depth value at some pixel P. We’ll use the depth values at the 3 (or more) corner points to compute this. Example corners: C0 (depth=0.5), C1 (depth=0.1), C2 (depth=0.8); P (depth=?).

Filled Polygon rasterization, cont. Compute the barycentric coordinates [b0, b1, b2] of P within triangle C0C1C2, then interpolate:

depth(P) = b0 * depth(C0) + b1 * depth(C1) + b2 * depth(C2)

With the example corners above: depth(P) = b0*0.5 + b1*0.1 + b2*0.8.
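The interpolation step is a weighted sum; a minimal sketch (the function name is mine):

```python
def interpolate(bary, values):
    """Weighted sum of per-corner values using barycentric weights."""
    return sum(b * v for b, v in zip(bary, values))

# Using the slide's corner depths (C0=0.5, C1=0.1, C2=0.8) and a pixel
# at the triangle's centroid (weights 1/3 each):
print(interpolate((1 / 3, 1 / 3, 1 / 3), (0.5, 0.1, 0.8)))   # ~0.467
```

A pixel nearer to C1 would weight 0.1 more heavily and come out closer to the camera, which is exactly the behavior the depth test needs.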

Filled Polygon Rasterization, cont. The same barycentric interpolation works for any per-vertex quantity, not just depth: colors, texture coordinates, and normals can all be interpolated across the triangle with the same weights. For example, if C0 is red, C1 is green, and C2 is blue, a pixel at the centroid gets an equal blend of all three.