
1 Week 15 - Monday

2
- What did we talk about last time?
  - Future of graphics
  - Hardware developments
  - Game evolution
  - Current research

8
- If the R, G, B values happen to be the same, the color is a shade of gray
  - 255, 255, 255 = White
  - 128, 128, 128 = Gray
  - 0, 0, 0 = Black
- To convert a color to a shade of gray, use the following formula:
  - Value = 0.3R + 0.59G + 0.11B
- Based on the way the human eye perceives colors as light intensities
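As a concrete illustration, here is a minimal C# sketch of that formula. The class and method names are my own, not from the course:

```csharp
using System;

class Grayscale
{
    // Weighted sum from the slide: Value = 0.3R + 0.59G + 0.11B.
    // Green dominates because the eye is most sensitive to it.
    static byte ToGray(byte r, byte g, byte b)
    {
        double value = 0.3 * r + 0.59 * g + 0.11 * b;
        return (byte)(value + 0.5); // round to nearest
    }

    static void Main()
    {
        Console.WriteLine(ToGray(255, 255, 255)); // 255: white stays white
        Console.WriteLine(ToGray(0, 128, 255));   // blends to a mid gray (104)
    }
}
```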

9
- We can adjust the brightness of a picture by multiplying each pixel's R, G, and B values by a scalar b
  - b ∈ [0, 1) darkens
  - b ∈ (1, ∞) brightens
- We can adjust the contrast of a picture by multiplying each pixel's R, G, and B values by a scalar c and then adding -128c + 128 to the value
  - c ∈ [0, 1) decreases contrast
  - c ∈ (1, ∞) increases contrast
- After adjustments, values must be clamped to the range [0, 255] (or whatever the range is)
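A small sketch of both adjustments with the clamping step, assuming 8-bit channels (the helper names are mine):

```csharp
using System;

class Adjustments
{
    // Clamp an adjusted channel value back into the displayable range.
    static byte Clamp(double v) => (byte)Math.Max(0, Math.Min(255, v));

    // Brightness: multiply each channel by b (b < 1 darkens, b > 1 brightens).
    static byte Brightness(byte channel, double b) => Clamp(channel * b);

    // Contrast: scale by c, then add -128c + 128 so that mid-gray (128)
    // stays fixed while other values spread away from (or squeeze toward) it.
    static byte Contrast(byte channel, double c) => Clamp(channel * c - 128 * c + 128);

    static void Main()
    {
        Console.WriteLine(Brightness(100, 1.5)); // 150
        Console.WriteLine(Contrast(100, 2.0));   // 2*100 - 256 + 128 = 72
        Console.WriteLine(Contrast(128, 2.0));   // mid-gray is unchanged: 128
    }
}
```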

10
- HSV
  - Hue (which color)
  - Saturation (how colorful the color is)
  - Value (how bright the color is)
- Hue is represented as an angle between 0° and 360°
- Saturation and value are often given between 0 and 1
- Saturation in HSV is not the same as in HSL
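A sketch of an RGB-to-HSV conversion consistent with these ranges (hue in degrees, saturation and value in [0, 1]); the function name is my own:

```csharp
using System;

class HsvConversion
{
    // Convert RGB in [0, 255] to (hue in degrees, saturation, value).
    static (double H, double S, double V) RgbToHsv(byte r, byte g, byte b)
    {
        double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
        double max = Math.Max(rf, Math.Max(gf, bf));
        double min = Math.Min(rf, Math.Min(gf, bf));
        double delta = max - min;

        double h = 0; // hue is undefined for grays; use 0 by convention
        if (delta > 0)
        {
            if (max == rf)      h = 60 * (((gf - bf) / delta) % 6);
            else if (max == gf) h = 60 * ((bf - rf) / delta + 2);
            else                h = 60 * ((rf - gf) / delta + 4);
            if (h < 0) h += 360;
        }

        double s = max == 0 ? 0 : delta / max; // how colorful
        double v = max;                        // how bright
        return (h, s, v);
    }

    static void Main()
    {
        Console.WriteLine(RgbToHsv(255, 0, 0));     // (0, 1, 1): pure red
        Console.WriteLine(RgbToHsv(128, 128, 128)); // (0, 0, ~0.5): gray
    }
}
```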

11
- LoadContent() method
- Update() method
- Draw() method
- Texture2D objects
- Sprites

13
- What do we have?
  - Virtual camera (viewpoint)
  - 3D objects
  - Light sources
  - Shading
  - Textures
- What do we want?
  - 2D image

14
- For API design, practical top-down problem solving, hardware design, and efficiency, rendering is described as a pipeline
- This pipeline contains three conceptual stages:
  - Application: produces material to be rendered
  - Geometry: decides what, how, and where to render
  - Rasterizer: renders the final image

15
- The application stage is the stage completely controlled by the programmer
- As the application develops, many changes of implementation may be made to improve performance
- The output of the application stage is rendering primitives:
  - Points
  - Lines
  - Triangles

16
- Reading input
- Managing non-graphical output
- Texture animation
- Animation via transforms
- Collision detection
- Updating the state of the world in general

17
- The Application Stage also handles a lot of acceleration
- Most of this acceleration is telling the renderer what NOT to render
- Acceleration algorithms:
  - Hierarchical view frustum culling
  - BSP trees
  - Quadtrees
  - Octrees

18
- The output of the Application Stage is polygons
- The Geometry Stage processes these polygons using the following pipeline:
  Model and View Transform → Vertex Shading → Projection → Clipping → Screen Mapping

19
- Each 3D model has its own coordinate system called model space
- When combining all the models in a scene together, the models must be converted from model space to world space
- After that, we still have to account for the position of the camera

20
- We transform the models into camera space or eye space with a view transform
- Then, the camera sits at (0, 0, 0), looking down the negative z axis

21
- Figuring out the effect of light on a material is called shading
- This involves computing a (sometimes complex) shading equation at different points on an object
- Typically, information is computed on a per-vertex basis and may include:
  - Location
  - Normals
  - Colors

22
- Projection transforms the view volume into a standardized unit cube
- Vertices then have a 2D location and a z-value
- There are two common forms of projection:
  - Orthographic: parallel lines stay parallel, and objects do not get smaller in the distance
  - Perspective: the farther away an object is, the smaller it appears

23
- Clipping processes the polygons based on their location relative to the view volume
  - A polygon completely inside the view volume is unchanged
  - A polygon completely outside the view volume is ignored (not rendered)
  - A polygon partially inside is clipped
    - New vertices on the boundary of the volume are created
- Since everything has been transformed into a unit cube, dedicated hardware can do the clipping in exactly the same way, every time

24
- Screen mapping transforms the x and y coordinates of each polygon from the unit cube to screen coordinates
- SharpDX conforms to the Windows standard of pixel (0, 0) being in the upper left of the screen
- OpenGL conforms to the Cartesian system with pixel (0, 0) in the lower left of the screen

25
- Backface culling removes all polygons that are not facing toward the screen
- A simple dot product is all that is needed, as in the sketch below
- This step is done in hardware in SharpDX and OpenGL
  - You just have to turn it on
- Beware: if you screw up your normals, polygons could vanish
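A rough software sketch of that test. The Vec3 helper struct and all names here are my own (the real culling is a hardware setting you enable through the API); it assumes counterclockwise winding for front faces and a view vector pointing from the eye toward the triangle:

```csharp
// A tiny vector type, reused by several of the sketches below.
struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }

    public static Vec3 operator +(Vec3 a, Vec3 b) => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    public static double Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
    public static Vec3 Cross(Vec3 a, Vec3 b) =>
        new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
}

static class BackfaceCulling
{
    // A triangle faces away from the viewer when its outward normal points
    // roughly along the view direction (positive dot product). Assumes
    // counterclockwise vertex winding for front-facing triangles.
    public static bool IsBackfacing(Vec3 a, Vec3 b, Vec3 c, Vec3 viewDir)
    {
        Vec3 normal = Vec3.Cross(b - a, c - a);
        return Vec3.Dot(normal, viewDir) > 0;
    }
}
```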

26
- The goal of the Rasterizer Stage is to take all the transformed geometric data and set colors for all the pixels in the screen space
- Doing so is called:
  - Rasterization
  - Scan conversion
- Note that the word pixel is actually short for "picture element"

27
- As you should expect, the Rasterizer Stage is also divided into a pipeline of several functional stages:
  Triangle Setup → Triangle Traversal → Pixel Shading → Merging

28
- Setup
  - Data for each triangle is computed
  - This could include normals
- Traversal
  - Each pixel whose center is overlapped by a triangle must have a fragment generated for the part of the triangle that overlaps the pixel
  - The properties of this fragment are created by interpolating data from the vertices, as sketched below
- These are done with fixed-operation (non-customizable) hardware
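A sketch of that interpolation step, assuming the fragment's barycentric weights have already been computed during traversal (the names are mine, and the hardware does this with fixed-function logic):

```csharp
static class TriangleTraversal
{
    // Interpolate a per-vertex attribute (here stored in the Vec3 struct
    // from the backface-culling sketch, e.g. a color or normal) across a
    // triangle. w0, w1, w2 are barycentric weights and sum to 1.
    public static Vec3 InterpolateAttribute(Vec3 a0, Vec3 a1, Vec3 a2,
                                            double w0, double w1, double w2)
    {
        return new Vec3(w0 * a0.X + w1 * a1.X + w2 * a2.X,
                        w0 * a0.Y + w1 * a1.Y + w2 * a2.Y,
                        w0 * a0.Z + w1 * a1.Z + w2 * a2.Z);
    }
}
```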

29
- This is where the magic happens
- Given the data from the other stages, per-pixel shading (coloring) happens here
- This stage is programmable, allowing for many different shading effects to be applied
- Perhaps the most important effect is texturing or texture mapping

30
- Texturing is gluing a (usually) 2D image onto a polygon
- To do so, we map texture coordinates onto polygon coordinates
- Pixels in a texture are called texels
- This is fully supported in hardware
- Multiple textures can be applied in some cases

31
- The final screen data containing the colors for each pixel is stored in the color buffer
- The merging stage is responsible for merging the colors from each of the fragments from the pixel shading stage into a final color for a pixel
- Deeply linked with merging is visibility: the final color of the pixel should be the one corresponding to a visible polygon (and not one behind it)
- The Z-buffer is often used for this

33
- Modern GPUs are generally responsible for the Geometry and Rasterizer Stages of the overall rendering pipeline
- The following shows the color-coded functional stages inside those stages
  - Red is fully programmable
  - Purple is configurable
  - Blue is not programmable at all
  Vertex Shader → Geometry Shader → Clipping → Screen Mapping → Triangle Setup → Triangle Traversal → Pixel Shader → Merger

34
- You can do all kinds of interesting things with programmable shading, but the technology is still evolving
- Modern shader stages such as Shader Model 4.0 and 5.0 use a common-shader core
- Strange as it may seem, this means that vertex, pixel, and geometry shaders use the same language

35
- Supported in hardware by all modern GPUs
- For each vertex, it modifies, creates, or ignores:
  - Color
  - Normal
  - Texture coordinates
  - Position
- It must also transform vertices from model space to homogeneous clip space
- Vertices cannot be created or destroyed, and results cannot be passed from vertex to vertex
  - Massive parallelism is possible

36
- Newest shader added to the family, and optional
- Comes right after the vertex shader
- Input is a single primitive
- Output is zero or more primitives
- The geometry shader can be used to:
  - Tessellate simple meshes into more complex ones
  - Make limited copies of primitives
- Stream output is possible

37
- Clipping and triangle setup are fixed in function
- Everything else in determining the final color of the fragment is done here
  - Because we aren't actually shading a full pixel, just a particular fragment of a triangle that covers a pixel
- A lot of the work is based on the lighting model
- The pixel shader cannot look at neighboring pixels
  - Except that some information about gradients can be given
- Multiple render targets mean that many different colors for a single fragment can be made and stored in different buffers

38
- Fragment colors are combined into the frame buffer
- This is where stencil and Z-buffer operations happen
- It's not fully programmable, but there are a number of settings that can be used:
  - Multiplication
  - Addition
  - Subtraction
  - Min/max
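As a software illustration of one common merger configuration, here is standard "over" alpha blending, which combines the multiplication and addition settings above. Real hardware blending is configured through API state, not written as code like this, and the names are mine:

```csharp
static class Merger
{
    // "Over" alpha blending: out = src * alpha + dst * (1 - alpha),
    // where dst is the color already in the frame buffer.
    public static (double R, double G, double B) AlphaBlend(
        (double R, double G, double B) src, double alpha,
        (double R, double G, double B) dst)
    {
        return (src.R * alpha + dst.R * (1 - alpha),
                src.G * alpha + dst.G * (1 - alpha),
                src.B * alpha + dst.B * (1 - alpha));
    }
}
```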

40
- We will be interested in a number of operations on vectors, including:
  - Addition
  - Scalar multiplication
  - Dot product
  - Norm
  - Cross product

41
- A vector can either be a point in space or an arrow (direction and distance)
- The norm of a vector is its distance from the origin (or the length of the arrow)
- In R^2 and R^3, the dot product follows u · v = ||u|| ||v|| cos θ, where θ is the smallest angle between u and v
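Rearranging that identity recovers the angle from the dot product and the norms; a small sketch reusing the Vec3 struct from earlier (the clamp is mine, to guard against floating-point drift):

```csharp
using System;

static class VectorAngles
{
    // Solve u . v = ||u|| ||v|| cos(theta) for the smallest angle theta.
    public static double AngleBetween(Vec3 u, Vec3 v)
    {
        double normU = Math.Sqrt(Vec3.Dot(u, u)); // ||u||
        double normV = Math.Sqrt(Vec3.Dot(v, v)); // ||v||
        double cos = Vec3.Dot(u, v) / (normU * normV);
        // Clamp in case rounding pushes cos just outside [-1, 1].
        return Math.Acos(Math.Clamp(cos, -1.0, 1.0)); // radians
    }
}
```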

42
- The cross product of two vectors finds a vector that is orthogonal to both
- For 3D vectors u and v in an orthonormal basis, the cross product w = u × v is:
  w = (u_y v_z − u_z v_y, u_z v_x − u_x v_z, u_x v_y − u_y v_x)

43
- Also:
  - w ⊥ u and w ⊥ v
  - u, v, and w form a right-handed system
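A quick check of both properties, reusing the Vec3 struct from the backface-culling sketch:

```csharp
using System;

class CrossCheck
{
    static void Main()
    {
        Vec3 u = new Vec3(1, 0, 0);
        Vec3 v = new Vec3(0, 1, 0);
        Vec3 w = Vec3.Cross(u, v);         // (0, 0, 1): x cross y = z, right-handed
        Console.WriteLine(Vec3.Dot(w, u)); // 0: w is orthogonal to u
        Console.WriteLine(Vec3.Dot(w, v)); // 0: w is orthogonal to v
    }
}
```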

44
- We will be interested in a number of operations on matrices, including:
  - Addition
  - Scalar multiplication
  - Transpose
  - Trace
  - Matrix-matrix multiplication
  - Determinant
  - Inverse

46
- The determinant is a measure of the magnitude of a square matrix
- We'll focus on determinants for 2 x 2 and 3 x 3 matrices
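A sketch of both cases in C# (the function names are mine):

```csharp
static class Determinants
{
    // Determinant of a 2 x 2 matrix [[a, b], [c, d]]: ad - bc.
    public static double Det2(double a, double b, double c, double d) => a * d - b * c;

    // Determinant of a 3 x 3 matrix by cofactor expansion along the first row.
    public static double Det3(double[,] m) =>
          m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
        - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
        + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);
}
```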

48
- For a square matrix M where |M| ≠ 0, there is a multiplicative inverse M^-1 such that MM^-1 = I
- For cases up to 4 x 4, we can use the adjoint: M^-1 = adj(M) / |M|
- Properties of the inverse:
  - (M^-1)^T = (M^T)^-1
  - (MN)^-1 = N^-1 M^-1
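Here is the adjoint method worked out for the 2 x 2 case, as a sketch (names mine):

```csharp
using System;

static class Inverses
{
    // 2 x 2 inverse via the adjoint: M^-1 = adj(M) / |M|.
    // For M = [[a, b], [c, d]], adj(M) = [[d, -b], [-c, a]].
    public static double[,] Inverse2(double a, double b, double c, double d)
    {
        double det = a * d - b * c;
        if (det == 0)
            throw new InvalidOperationException("|M| = 0: no inverse exists.");
        return new double[,] { { d / det, -b / det }, { -c / det, a / det } };
    }
}
```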

49
- A square matrix is orthogonal if and only if its transpose is its inverse
  - MM^T = M^T M = I
- Lots of special things are true about an orthogonal matrix M:
  - |M| = ±1
  - M^-1 = M^T
  - M^T is also orthogonal
  - ||Mu|| = ||u||
  - Mu ⊥ Mv iff u ⊥ v
  - If M and N are orthogonal, so is MN
- An orthogonal matrix is equivalent to an orthonormal basis of vectors lined up together

50
- We add an extra value to our vectors
  - It's a 0 if it's a direction
  - It's a 1 if it's a point
- Now we can do a rotation, scale, or shear with a matrix (with an extra row and column)

51
- Then, we multiply by a translation matrix (which doesn't affect a direction)
- A 3 x 3 matrix cannot translate a vector
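A small sketch showing exactly why the extra w component makes this work: a 4 x 4 translation moves a point (w = 1) but leaves a direction (w = 0) alone. All names here are mine:

```csharp
using System;

class HomogeneousDemo
{
    // Multiply a 4 x 4 matrix by a homogeneous vector (x, y, z, w).
    static double[] Transform(double[,] m, double[] v)
    {
        var result = new double[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                result[row] += m[row, col] * v[col];
        return result;
    }

    static void Main()
    {
        // Translation by t = (5, 0, 0): identity plus t in the last column.
        double[,] T =
        {
            { 1, 0, 0, 5 },
            { 0, 1, 0, 0 },
            { 0, 0, 1, 0 },
            { 0, 0, 0, 1 },
        };
        double[] point     = { 1, 2, 3, 1 }; // w = 1: a point
        double[] direction = { 1, 2, 3, 0 }; // w = 0: a direction

        Console.WriteLine(string.Join(", ", Transform(T, point)));     // 6, 2, 3, 1
        Console.WriteLine(string.Join(", ", Transform(T, direction))); // 1, 2, 3, 0
    }
}
```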

52
- Explicit form (works for 2D and 3D lines):
  - r(t) = o + td
  - o is a point on the line and d is its direction vector
- Implicit form (2D lines only):
  - p is on L if and only if n · p + c = 0
  - If p and q are both points on L, then we can describe L with n · (p − q) = 0
  - Thus, n is perpendicular to L
  - n = (−(p_y − q_y), p_x − q_x) = (a, b)

53
- Once we are in 3D, we have to talk about planes as well
- The explicit form of a plane is similar to a line:
  - p(u, v) = o + us + vt
  - o is a point on the plane
  - s and t are vectors that span the plane
  - s × t is the normal of the plane

55
- Adding a vector after a linear (3 x 3) transform makes an affine transform
- Affine transforms can be stored in a 4 x 4 matrix using homogeneous notation
- Affine transforms:
  - Translation
  - Rotation
  - Scaling
  - Reflection
  - Shearing

56
Notation        Name                      Characteristics
T(t)            Translation matrix        Moves a point (affine)
R               Rotation matrix           Rotates points (orthogonal and affine)
S(s)            Scaling matrix            Scales along the x, y, and z axes according to s (affine)
H_ij(s)         Shear matrix              Shears component i by factor s with respect to component j
E(h, p, r)      Euler transform           Orients by the Euler angles head (yaw), pitch, and roll (orthogonal and affine)
P_o(s)          Orthographic projection   Parallel projects onto a plane or volume (affine)
P_p(s)          Perspective projection    Projects with perspective onto a plane or a volume
slerp(q, r, t)  Slerp transform           Interpolates quaternions q and r with parameter t

57
- Move a point from one place to another by vector t = (t_x, t_y, t_z)
- We can represent this with translation matrix T:
         [ 1 0 0 t_x ]
  T(t) = [ 0 1 0 t_y ]
         [ 0 0 1 t_z ]
         [ 0 0 0  1  ]

59
- Scaling is easy and can be done for all axes at the same time with matrix S:
         [ s_x  0   0  0 ]
  S(s) = [  0  s_y  0  0 ]
         [  0   0  s_z 0 ]
         [  0   0   0  1 ]
- If s_x = s_y = s_z, the scaling is called uniform or isotropic, and nonuniform or anisotropic otherwise

60
- Usually all the rotations are multiplied together before translations
- But if you want to rotate around a point:
  - Translate so that that point lies at the origin
  - Perform the rotations
  - Translate back
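A sketch of that translate-rotate-translate recipe for a rotation about the z axis, reusing the Vec3 struct from earlier (in matrix form this is the product T(p) R_z(angle) T(-p); the function names are mine):

```csharp
using System;

static class RotateAboutPoint
{
    // Rotate point q around pivot p by angle radians about the z axis.
    public static Vec3 RotateAround(Vec3 q, Vec3 p, double angle)
    {
        Vec3 local = q - p;                       // translate pivot to origin
        double cos = Math.Cos(angle), sin = Math.Sin(angle);
        Vec3 rotated = new Vec3(cos * local.X - sin * local.Y,
                                sin * local.X + cos * local.Y,
                                local.Z);         // rotate about z
        return rotated + p;                       // translate back
    }
}
```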

61
- A shearing transform distorts one dimension in terms of another with parameter s
- Thus, there are six shearing matrices: H_xy(s), H_xz(s), H_yx(s), H_yz(s), H_zx(s), and H_zy(s)
- Here's an example of H_xz(s), which adds s times the z coordinate to the x coordinate:
            [ 1 0 s 0 ]
  H_xz(s) = [ 0 1 0 0 ]
            [ 0 0 1 0 ]
            [ 0 0 0 1 ]

62
- A rigid-body transform preserves lengths, angles, and handedness
- We can write any rigid-body transform X as a rotation matrix R multiplied by a translation matrix T(t): X = T(t)R

63
- This example from the book shows how the same set of transforms, applied in different orders, can have different outcomes

64
- The matrix used to transform points will not always work on surface normals
  - Rotation is fine
  - Uniform scaling can stretch the normal (which should be unit length)
  - Nonuniform scaling distorts the normal
- Transforming by the transpose of the adjoint always gives the correct answer
- In practice, the transpose of the inverse is usually used
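A sketch of the special case that makes the inverse-transpose intuitive: for a pure scale S(s_x, s_y, s_z), the inverse-transpose is just S(1/s_x, 1/s_y, 1/s_z), so the normal is scaled by the reciprocals and then renormalized. Reuses the Vec3 struct from earlier; the function name is mine, and a general implementation would invert and transpose the full 3 x 3 matrix:

```csharp
using System;

static class NormalTransform
{
    // Transform a normal under a nonuniform scale S(sx, sy, sz) using the
    // reciprocal scales (the inverse-transpose of a diagonal matrix), then
    // renormalize so the normal is unit length again.
    public static Vec3 TransformNormal(Vec3 n, double sx, double sy, double sz)
    {
        Vec3 t = new Vec3(n.X / sx, n.Y / sy, n.Z / sz);
        double len = Math.Sqrt(Vec3.Dot(t, t));
        return new Vec3(t.X / len, t.Y / len, t.Z / len);
    }
}
```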

65
- For normals and other things, we need to be able to compute inverses
- The inverse of a rigid-body transform X is X^-1 = (T(t)R)^-1 = R^-1 T(t)^-1 = R^T T(−t)
- For a concatenation of simple transforms with known parameters, the inverse can be done by inverting the parameters and reversing the order:
  - If M = T(t)R(φ), then M^-1 = R(−φ)T(−t)
- For orthogonal matrices, M^-1 = M^T
- If nothing is known, use the adjoint method

66
- We can describe orientations from some default orientation using the Euler transform
- The default is usually looking down the −z axis with "up" as positive y
- The new orientation is E(h, p, r) = R_z(r) R_x(p) R_y(h)
  - h is head (or yaw), like shaking your head "no"
  - p is pitch, like nodding your head back and forth
  - r is roll… the third dimension

67
- Quaternions are a compact way to represent orientations
- Pros:
  - Compact (only four values needed)
  - Do not suffer from gimbal lock
  - Are easy to interpolate between
- Cons:
  - Are confusing
  - Use three imaginary numbers
  - Have their own set of operations

68
- Multiplication
- Addition
- Conjugate
- Norm
- Identity

69
- If we animate by moving rigid bodies around each other, joints won't look natural
- To fix that, we define bones and skin, and have the rigid bone changes dictate blended changes in the skin

70
- Morphing interpolates between two complete 3D models
  - Vertex correspondence: what if there is not a one-to-one correspondence between vertices?
  - Interpolation: how do we combine the two models?
- If there's a one-to-one correspondence, we use parameter s ∈ [0, 1] to indicate where we are between the models and then find the new location m based on the two locations p_0 and p_1: m = (1 − s)p_0 + s p_1, as sketched below
- Morph targets is another technique that adds in weighted poses to a neutral model
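A sketch of that per-vertex blend, reusing the Vec3 struct from earlier (the function name is mine):

```csharp
static class Morphing
{
    // Linear blend of two corresponding vertex locations:
    // m = (1 - s) p0 + s p1, with s in [0, 1].
    public static Vec3 Morph(Vec3 p0, Vec3 p1, double s)
    {
        return new Vec3((1 - s) * p0.X + s * p1.X,
                        (1 - s) * p0.Y + s * p1.Y,
                        (1 - s) * p0.Z + s * p1.Z);
    }
}
```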

71
- An orthographic projection maintains the property that parallel lines are still parallel after projection
- The most basic orthographic projection matrix simply removes all the z values
- This projection is not ideal because z values are lost
  - Things behind the camera are in front
  - z-buffer algorithms don't work

72
- To maintain relative depths and allow for clipping, we usually set up a canonical view volume based on (l, r, b, t, n, f)
- These letters simply refer to the six bounding planes of the cube:
  - Left
  - Right
  - Bottom
  - Top
  - Near
  - Far
- The (OpenGL) matrix that translates all points and scales them into the canonical view volume is sketched below
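A sketch that builds the conventional OpenGL orthographic matrix from those six planes (the function name is mine; this is the standard form that maps the box into [-1, 1]^3):

```csharp
static class Projections
{
    // OpenGL-style orthographic projection: translate the box bounded by
    // l, r, b, t, n, f to the origin and scale it into [-1, 1]^3.
    // The z row is negated because the camera looks down -z.
    public static double[,] Orthographic(double l, double r, double b,
                                         double t, double n, double f)
    {
        return new double[,]
        {
            { 2 / (r - l), 0,           0,            -(r + l) / (r - l) },
            { 0,           2 / (t - b), 0,            -(t + b) / (t - b) },
            { 0,           0,           -2 / (f - n), -(f + n) / (f - n) },
            { 0,           0,           0,            1                  },
        };
    }
}
```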

73
- A perspective projection does not preserve parallel lines
- Lines that are farther from the camera will appear smaller
- Thus, a view frustum must be normalized to a canonical view volume
- Because points actually move (in x and y) based on their z distance, there is a distorting term in the w row of the projection matrix

74
- The SharpDX projection matrix is different from the OpenGL one because it only uses [0, 1] for z

77
- Review up to Exam 2

78
- Finish Project 4
  - Due on Friday before midnight

