Week 15 - Monday

 What did we talk about last time?  Future of graphics  Hardware developments  Game evolution  Current research

 If the R, G, and B values happen to be the same, the color is a shade of gray  255, 255, 255 = White  128, 128, 128 = Gray  0, 0, 0 = Black  To convert a color to a shade of gray, use the following formula:  Value = 0.3R + 0.59G + 0.11B  Based on the way the human eye perceives colors as light intensities
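A minimal C# sketch of this conversion (the method name ToGray and the use of plain byte channels are illustrative assumptions, not course code):

    using System;

    static byte ToGray(byte r, byte g, byte b)
    {
        // Weighted sum reflecting the eye's sensitivity to each channel
        double value = 0.3 * r + 0.59 * g + 0.11 * b;
        return (byte)Math.Clamp(value, 0.0, 255.0);
    }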

 We can adjust the brightness of a picture by multiplying each pixel's R, G, and B values by a scalar b  b ∈ [0,1) darkens  b ∈ (1,∞) brightens  We can adjust the contrast of a picture by multiplying each pixel's R, G, and B values by a scalar c and then adding 128(1 - c), so that the adjustment pivots around the midpoint: value' = c(value - 128) + 128  c ∈ [0,1) decreases contrast  c ∈ (1,∞) increases contrast  After adjustments, values must be clamped to the range [0, 255] (or whatever the range is)
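A hedged sketch of both adjustments on a single channel value (the pivot at 128 is the midpoint assumption described above):

    using System;

    static byte AdjustBrightness(byte v, double b) =>
        (byte)Math.Clamp(v * b, 0.0, 255.0);                    // b < 1 darkens, b > 1 brightens

    static byte AdjustContrast(byte v, double c) =>
        (byte)Math.Clamp(c * (v - 128.0) + 128.0, 0.0, 255.0);  // pivots around 128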

 HSV  Hue (which color)  Saturation (how colorful the color is)  Value (how bright the color is)  Hue is represented as an angle between 0° and 360°  Saturation and value are often given between 0 and 1  Saturation in HSV is not the same as in HSL

 LoadContent() method  Update() method  Draw() method  Texture2D objects  Sprites

 What do we have?  Virtual camera (viewpoint)  3D objects  Light sources  Shading  Textures  What do we want?  2D image

 For reasons of API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline  This pipeline contains three conceptual stages:  Application: produces material to be rendered  Geometry: decides what, how, and where to render  Rasterizer: renders the final image

 The application stage is the stage completely controlled by the programmer  As the application develops, many implementation changes may be made to improve performance  The output of the application stage is rendering primitives:  Points  Lines  Triangles

 Reading input  Managing non-graphical output  Texture animation  Animation via transforms  Collision detection  Updating the state of the world in general

 The Application Stage also handles a lot of acceleration  Most of this acceleration is telling the renderer what NOT to render  Acceleration algorithms  Hierarchical view frustum culling  BSP trees  Quadtrees  Octrees

 The output of the Application Stage is polygons  The Geometry Stage processes these polygons using the following pipeline: Model and View Transform → Vertex Shading → Projection → Clipping → Screen Mapping

 Each 3D model has its own coordinate system called model space  When combining all the models in a scene together, the models must be converted from model space to world space  After that, we still have to account for the position of the camera

 We transform the models into camera space or eye space with a view transform  Then, the camera will sit at (0,0,0), looking into negative z
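Using System.Numerics as a stand-in for the course's SharpDX types, a view transform could be built like this (the camera position, target, and up vector are made-up values):

    using System.Numerics;

    // Build a view matrix; after applying it, the scene is expressed in camera
    // space, with the camera conceptually sitting at the origin
    Matrix4x4 view = Matrix4x4.CreateLookAt(
        new Vector3(0, 5, 10),   // camera position in world space
        Vector3.Zero,            // point the camera looks at
        Vector3.UnitY);          // up direction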

 Figuring out the effect of light on a material is called shading  This involves computing a (sometimes complex) shading equation at different points on an object  Typically, information is computed on a per-vertex basis and may include:  Location  Normals  Colors

 Projection transforms the view volume into a standardized unit cube  Vertices then have a 2D location and a z-value  There are two common forms of projection:  Orthographic: Parallel lines stay parallel, objects do not get smaller in the distance  Perspective: The farther away an object is, the smaller it appears

 Clipping processes polygons based on their location relative to the view volume  A polygon completely inside the view volume is unchanged  A polygon completely outside the view volume is ignored (not rendered)  A polygon partially inside is clipped  New vertices on the boundary of the volume are created  Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time

 Screen-mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates  SharpDX conforms to the Windows standard of pixel (0,0) being in the upper left of the screen  OpenGL conforms to the Cartesian system with pixel (0,0) in the lower left of the screen

 Backface culling removes all polygons that are not facing toward the screen  A simple dot product is all that is needed  This step is done in hardware in SharpDX and OpenGL  You just have to turn it on  Beware: If you screw up your normals, polygons could vanish
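A sketch of that dot-product test (the sign convention here assumes viewDir points from the camera toward the triangle and that front faces have normals pointing back toward the camera):

    using System.Numerics;

    static bool IsBackFacing(Vector3 faceNormal, Vector3 viewDir) =>
        // The face points away from the camera when the normal and the view
        // direction agree in direction
        Vector3.Dot(faceNormal, viewDir) >= 0;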

 The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in the screen space  Doing so is called:  Rasterization  Scan Conversion  Note that the word pixel is actually short for "picture element"

 As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages: Triangle Setup → Triangle Traversal → Pixel Shading → Merging

 Setup  Data for each triangle is computed  This could include normals  Traversal  Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel  The properties of this fragment are created by interpolating data from the vertices  Both setup and traversal are done with fixed-operation (non-customizable) hardware

 This is where the magic happens  Given the data from the other stages, per-pixel shading (coloring) happens here  This stage is programmable, allowing for many different shading effects to be applied  Perhaps the most important effect is texturing or texture mapping

 Texturing is gluing a (usually) 2D image onto a polygon  To do so, we map texture coordinates onto polygon coordinates  Pixels in a texture are called texels  This is fully supported in hardware  Multiple textures can be applied in some cases

 The final screen data containing the colors for each pixel is stored in the color buffer  The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel  Deeply linked with merging is visibility: The final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)  The Z-buffer is often used for this
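A sketch of the per-fragment depth test at the heart of Z-buffer merging (the flat arrays indexed by pixel are an assumption for illustration):

    static void MergeFragment(int pixel, float depth, uint color,
                              float[] depthBuffer, uint[] colorBuffer)
    {
        // Keep the fragment only if it is closer than what is already stored
        if (depth < depthBuffer[pixel])
        {
            depthBuffer[pixel] = depth;
            colorBuffer[pixel] = color;
        }
    }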

 Modern GPUs are generally responsible for the Geometry and Rasterizer Stages of the overall rendering pipeline  The following shows the color-coded functional stages inside those stages  Red is fully programmable  Purple is configurable  Blue is not programmable at all Vertex Shader → Geometry Shader → Clipping → Screen Mapping → Triangle Setup → Triangle Traversal → Pixel Shader → Merger

 You can do all kinds of interesting things with programmable shading, but the technology is still evolving  Modern shader stages such as Shader Model 4.0 and 5.0 use a common-shader core  Strange as it may seem, this means that vertex, pixel, and geometry shaders use the same language

 Supported in hardware by all modern GPUs  For each vertex, it modifies, creates, or ignores:  Color  Normal  Texture coordinates  Position  It must also transform vertices from model space to homogeneous clip space  Vertices cannot be created or destroyed, and results cannot be passed from vertex to vertex  Massive parallelism is possible

 Newest shader added to the family, and optional  Comes right after the vertex shader  Input is a single primitive  Output is zero or more primitives  The geometry shader can be used to:  Tessellate simple meshes into more complex ones  Make limited copies of primitives  Stream output is possible

 Clipping and triangle setup are fixed-function  Everything else in determining the final color of the fragment is done here  We say "fragment" because we aren't actually shading a full pixel, just the particular piece of a triangle that covers a pixel  A lot of the work is based on the lighting model  The pixel shader cannot look at neighboring pixels  Except that some information about the gradient can be given  Multiple render targets mean that many different colors for a single fragment can be computed and stored in different buffers

 Fragment colors are combined into the frame buffer  This is where stencil and Z-buffer operations happen  It's not fully programmable, but there are a number of settings that can be used  Multiplication  Addition  Subtraction  Min/max

 We will be interested in a number of operations on vectors, including:  Addition  Scalar multiplication  Dot product  Norm  Cross product

 A vector can either be a point in space or an arrow (direction and distance)  The norm of a vector is its distance from the origin (or the length of the arrow)  In R^2 and R^3, the dot product follows: u · v = ||u|| ||v|| cos θ, where θ is the smallest angle between u and v

 The cross product of two vectors finds a vector that is orthogonal to both  For 3D vectors u and v in an orthonormal basis, the cross product is w = u × v = (u_y v_z - u_z v_y, u_z v_x - u_x v_z, u_x v_y - u_y v_x)

 Also:  w ⊥ u and w ⊥ v  u, v, and w form a right-handed system
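All of these operations exist in System.Numerics (shown here as a stand-in for the course's math types):

    using System.Numerics;

    Vector3 u = new Vector3(1, 0, 0);
    Vector3 v = new Vector3(0, 1, 0);

    float dot = Vector3.Dot(u, v);        // ||u|| ||v|| cos θ, here 0
    float norm = u.Length();              // distance from the origin, here 1
    Vector3 w = Vector3.Cross(u, v);      // (0, 0, 1), orthogonal to both u and v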

 We will be interested in a number of operations on matrices, including:  Addition  Scalar multiplication  Transpose  Trace  Matrix-matrix multiplication  Determinant  Inverse

 The determinant is a measure of the magnitude of a square matrix  We'll focus on determinants for 2 x 2 and 3 x 3 matrices
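The closed forms are short enough to write directly; a sketch (using cofactor expansion along the first row for the 3 x 3 case):

    static float Det2(float a, float b,
                      float c, float d) => a * d - b * c;

    static float Det3(float a, float b, float c,
                      float d, float e, float f,
                      float g, float h, float i) =>
        a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);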

 For a square matrix M where |M| ≠ 0, there is a multiplicative inverse M^-1 such that MM^-1 = I  For cases up to 4 x 4, we can use the adjoint: M^-1 = adj(M) / |M|  Properties of the inverse:  (M^-1)^T = (M^T)^-1  (MN)^-1 = N^-1 M^-1

 A square matrix is orthogonal if and only if its transpose is its inverse  MM^T = M^T M = I  Lots of special things are true about an orthogonal matrix M  |M| = ±1  M^-1 = M^T  M^T is also orthogonal  ||Mu|| = ||u||  Mu ⊥ Mv iff u ⊥ v  If M and N are orthogonal, so is MN  An orthogonal matrix is equivalent to an orthonormal basis of vectors lined up together

 We add an extra value to our vectors  It's a 0 if it's a direction  It's a 1 if it's a point  Now we can do a rotation, scale, or shear with a 4 x 4 matrix whose extra row and column are (0, 0, 0, 1), leaving the linear 3 x 3 part in the upper left

 Then, we multiply by a translation matrix (which doesn't affect a direction)  A 3 x 3 matrix cannot translate a vector
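The w component is what makes the difference; a sketch using System.Numerics (which stores translation in the fourth row because it multiplies row vectors):

    using System.Numerics;

    Matrix4x4 t = Matrix4x4.CreateTranslation(5, 0, 0);

    // w = 1: a point, so the translation applies -> (6, 2, 3, 1)
    Vector4 point = Vector4.Transform(new Vector4(1, 2, 3, 1), t);

    // w = 0: a direction, so the translation is ignored -> (1, 2, 3, 0)
    Vector4 direction = Vector4.Transform(new Vector4(1, 2, 3, 0), t);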

 Explicit form (works for 2D and 3D lines):  r(t) = o + td  o is a point on the line and d is its direction vector  Implicit form (2D lines only):  p is on L if and only if n · p + c = 0  If p and q are both points on L, then we can describe L with  n · (p - q) = 0  Thus, n is perpendicular to L  n = (-(p_y - q_y), (p_x - q_x)) = (a, b)

 Once we are in 3D, we have to talk about planes as well  The explicit form of a plane is similar to a line:  p(u,v) = o + us + vt  o is a point on the plane  s and t are vectors that span the plane  s × t is the normal of the plane

 Adding a vector after a linear (3 x 3) transform makes an affine transform  Affine transforms can be stored in a 4 x 4 matrix using homogeneous notation  Affine transforms:  Translation  Rotation  Scaling  Reflection  Shearing

Notation       Name                       Characteristics
T(t)           Translation matrix         Moves a point (affine)
R              Rotation matrix            Rotates points (orthogonal and affine)
S(s)           Scaling matrix             Scales along the x, y, and z axes according to s (affine)
H_ij(s)        Shear matrix               Shears component i by factor s with respect to component j
E(h,p,r)       Euler transform            Orients by Euler angles head (yaw), pitch, and roll (orthogonal and affine)
P_o(s)         Orthographic projection    Parallel projects onto a plane or volume (affine)
P_p(s)         Perspective projection     Projects with perspective onto a plane or a volume
slerp(q,r,t)   Slerp transform            Interpolates quaternions q and r with parameter t

 Move a point from one place to another by vector t = (t_x, t_y, t_z)  We can represent this with translation matrix T(t), which carries t in its last column:
    T(t) = | 1 0 0 t_x |
           | 0 1 0 t_y |
           | 0 0 1 t_z |
           | 0 0 0 1   |

 Scaling is easy and can be done for all axes at the same time with matrix S(s), which carries the scale factors on its diagonal:
    S(s) = | s_x 0   0   0 |
           | 0   s_y 0   0 |
           | 0   0   s_z 0 |
           | 0   0   0   1 |
 If s_x = s_y = s_z, the scaling is called uniform or isotropic, and nonuniform or anisotropic otherwise

 Usually all the rotations are multiplied together before translations  But if you want to rotate around a point  Translate so that that point lies at the origin  Perform rotations  Translate back
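A sketch of that translate-rotate-translate pattern in System.Numerics (note the row-vector convention, so the leftmost matrix is applied first; the point and angle are made-up values):

    using System;
    using System.Numerics;

    Vector3 p = new Vector3(2, 3, 0);

    // Rotate 90 degrees about the z axis around p rather than the origin
    Matrix4x4 m = Matrix4x4.CreateTranslation(-p)
                * Matrix4x4.CreateRotationZ(MathF.PI / 2)
                * Matrix4x4.CreateTranslation(p);

    // The overload Matrix4x4.CreateRotationZ(angle, centerPoint) packages up
    // the same idea in one call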

 A shearing transform distorts one dimension in terms of another with parameter s  Thus, there are six shearing matrices: H_xy(s), H_xz(s), H_yx(s), H_yz(s), H_zx(s), and H_zy(s)  Here's an example, H_xz(s), which adds s·z to the x coordinate:
    H_xz(s) = | 1 0 s 0 |
              | 0 1 0 0 |
              | 0 0 1 0 |
              | 0 0 0 1 |

 A rigid-body transform preserves lengths, angles, and handedness  We can write any rigid-body transform X as a rotation matrix R multiplied by a translation matrix T(t): X = T(t)R

 This example from the book shows how the same sets of transforms, applied in different orders, can have different outcomes

 The matrix used to transform points will not always work on surface normals  Rotation is fine  Uniform scaling can stretch the normal (which should be unit length)  Non-uniform scaling distorts the normal  Transforming by the transpose of the adjoint always gives the correct answer  In practice, the transpose of the inverse is usually used
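A sketch of computing such a normal matrix with System.Numerics (it assumes the model matrix is invertible, and uses the full 4 x 4 for simplicity):

    using System;
    using System.Numerics;

    static Matrix4x4 NormalMatrix(Matrix4x4 model)
    {
        if (!Matrix4x4.Invert(model, out Matrix4x4 inverse))
            throw new ArgumentException("Model matrix is not invertible");
        // Transpose of the inverse: handles rotation and non-uniform scaling
        return Matrix4x4.Transpose(inverse);
    }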

 For normals and other things, we need to be able to compute inverses  The inverse of a rigid-body transform X is X^-1 = (T(t)R)^-1 = R^-1 T(t)^-1 = R^T T(-t)  For a concatenation of simple transforms with known parameters, the inverse can be found by inverting the parameters and reversing the order: ▪ If M = T(t)R(φ) then M^-1 = R(-φ)T(-t)  For orthogonal matrices, M^-1 = M^T  If nothing is known, use the adjoint method

 We can describe orientations from some default orientation using the Euler transform  The default is usually looking down the -z axis with "up" as positive y  The new orientation is:  E(h, p, r) = R_z(r)R_x(p)R_y(h)  h is head (or yaw), like shaking your head "no"  p is pitch, like nodding your head back and forth  r is roll… the third dimension

 Quaternions are a compact way to represent orientations  Pros:  Compact (only four values needed)  Do not suffer from gimbal lock  Are easy to interpolate between  Cons:  Are confusing  Use three imaginary numbers  Have their own set of operations

 Multiplication  Addition  Conjugate  Norm  Identity
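System.Numerics.Quaternion (a stand-in for the course's quaternion type) supports all of these operations; a quick sketch:

    using System.Numerics;

    Quaternion q = Quaternion.CreateFromAxisAngle(Vector3.UnitY, 0.5f);
    Quaternion r = Quaternion.CreateFromAxisAngle(Vector3.UnitX, 1.0f);

    Quaternion product = q * r;                         // multiplication
    Quaternion sum = q + r;                             // addition
    Quaternion conjugate = Quaternion.Conjugate(q);     // conjugate
    float norm = q.Length();                            // norm
    Quaternion identity = Quaternion.Identity;          // identity
    Quaternion halfway = Quaternion.Slerp(q, r, 0.5f);  // the slerp from the table above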

 If we animate by moving rigid bodies around each other, joints won't look natural  To fix this, we define bones and skin and have the rigid bone changes dictate blended changes in the skin

 Morphing interpolates between two complete 3D models  Vertex correspondence ▪ What if there is not a 1-to-1 correspondence between vertices?  Interpolation ▪ How do we combine the two models?  If there is a 1-to-1 correspondence, we use a parameter s ∈ [0,1] to indicate where we are between the models and then find the new location m = (1 - s)p_0 + sp_1 based on the two locations p_0 and p_1  Morph targets is another technique that adds weighted poses to a neutral model
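With a 1-to-1 correspondence, the interpolation is just a per-vertex lerp; a sketch:

    using System.Numerics;

    static Vector3[] Morph(Vector3[] p0, Vector3[] p1, float s)
    {
        // Assumes p0 and p1 have the same length (1-to-1 vertex correspondence)
        var m = new Vector3[p0.Length];
        for (int i = 0; i < p0.Length; i++)
            m[i] = Vector3.Lerp(p0[i], p1[i], s);   // (1 - s) p0[i] + s p1[i]
        return m;
    }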

 An orthographic projection maintains the property that parallel lines are still parallel after projection  The most basic orthographic projection matrix simply removes all the z values  This projection is not ideal because z values are lost  Things behind the camera are in front  z-buffer algorithms don't work

 To maintain relative depths and allow for clipping, we usually set up a canonical view volume based on (l,r,b,t,n,f)  These letters simply refer to the six bounding planes of the cube  Left  Right  Bottom  Top  Near  Far  Here is the (OpenGL) matrix that translates all points and scales them into the canonical view volume:
    | 2/(r-l)  0        0        -(r+l)/(r-l) |
    | 0        2/(t-b)  0        -(t+b)/(t-b) |
    | 0        0        2/(f-n)  -(f+n)/(f-n) |
    | 0        0        0        1            |
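For comparison, System.Numerics builds a similar off-center orthographic matrix, though it targets the Direct3D-style volume with z in [0, 1] rather than OpenGL's cube (the bounds are made-up values):

    using System.Numerics;

    Matrix4x4 ortho = Matrix4x4.CreateOrthographicOffCenter(
        -10f, 10f,     // left, right
        -10f, 10f,     // bottom, top
        0.1f, 100f);   // near, far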

 A perspective projection does not preserve parallel lines  Lines that are farther from the camera will appear smaller  Thus, a view frustum must be normalized to a canonical view volume  Because points actually move (in x and y) based on their z distance, there is a distorting term in the w row of the projection matrix

 The SharpDX projection matrix is different from the OpenGL one because it maps z into [0,1] rather than [-1,1]
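The exact SharpDX matrix isn't reproduced here, but System.Numerics follows the same Direct3D-style convention with z mapped into [0, 1]; a sketch with made-up parameters:

    using System;
    using System.Numerics;

    Matrix4x4 proj = Matrix4x4.CreatePerspectiveFieldOfView(
        MathF.PI / 4,   // vertical field of view in radians
        16f / 9f,       // aspect ratio
        0.1f,           // near plane distance
        100f);          // far plane distance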

 Review up to Exam 2

 Finish Project 4  Due on Friday before midnight