
1 Gaming Overview, Week 4: 3D Game Engine Technology
Adapted from Ben Lok @UFL slides

2 The job of the rendering engine: to make a 2D image appear as 3D!
The input is 3D model data; the output is 2D images (the screen). Yet we want to show a 3D world! How can we do this? We can include 'cues' in the image that give our brain 3D information about the scene. These cues are visual depth cues.

3 Visual Depth Cues
Monoscopic depth cues (single 2D image)
Stereoscopic depth cues (two 2D images)
Motion depth cues (series of 2D images)
Physiological depth cues (body cues)

4 Monoscopic Depth Cues
Interposition: an object that occludes another is closer.
Shading: gives shape information; shadows are included here.
Size: usually, the larger object is closer.
Linear perspective: parallel lines converge at a single point.
Surface texture gradient: more detail for closer objects.
Height in the visual field: the higher an object is (vertically), the farther away it is.
Atmospheric effects: farther objects are blurrier.
Brightness: farther objects are dimmer.


7 Viewing a 3D World
We have a model in this world and would like to view it from a new position; we'll call this new position the camera or eyepoint. Our job is to figure out what the model looks like on the display plane. (Figure: +X, +Y, +Z axes.)

8 Parallel Projection (figure: +X, +Y, +Z axes)

9 Perspective Projection (figure: +X, +Y, +Z axes)
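The heart of perspective projection is the divide by depth: a point's screen position shrinks toward the center of the image as it moves away from the camera. A minimal sketch in C (not from the original deck), assuming the camera sits at the origin looking down +z with the display plane at z = d:

```c
#include <stdio.h>

/* Perspective-projection sketch: camera at the origin looking down +z,
   display plane at z = d. Dividing by z is what makes distant objects
   appear smaller (the "perspective divide"). Names are illustrative. */
typedef struct { double x, y, z; } Vec3;
typedef struct { double x, y; }    Vec2;

Vec2 project(Vec3 p, double d) {
    Vec2 s = { p.x * d / p.z, p.y * d / p.z };
    return s;
}

int main(void) {
    Vec3 near_pt = { 1, 1, 2 }, far_pt = { 1, 1, 8 };
    Vec2 a = project(near_pt, 1.0), b = project(far_pt, 1.0);
    /* the same offset from the axis projects smaller when farther away */
    printf("near: (%.3f, %.3f)  far: (%.3f, %.3f)\n", a.x, a.y, b.x, b.y);
    return 0;
}
```

A parallel projection, by contrast, simply drops the z coordinate: no divide, so distance does not change apparent size.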

10 Coordinate Systems
Object coordinates
World coordinates
Eye coordinates

11 Object Coordinates

12 World Coordinates

13 Screen Coordinates

14 Transformations
Rigid body transformations: transformations that do not distort the object, i.e. the ones you use when you place your model in the scene.
Translate: if you translate a rectangle, it is still a rectangle.
Scale: if you scale a rectangle, it is still a rectangle.
Rotate: if you rotate a rectangle, it is still a rectangle.

15 Translation
Translation: repositioning an object along a straight-line path (by the translation distances tx, ty) from one coordinate location to another:
x' = x + tx, y' = y + ty

16 Applying to Triangles: translate each vertex by (tx, ty).

17 Scale
Scale alters the size of an object, scaling about a fixed point:
x' = sx · x, y' = sy · y (scaling about the origin)

18 Rotation
Example: rotate P = (4, 4) by θ = 45 degrees about the origin:
x' = x cos θ − y sin θ, y' = x sin θ + y cos θ

19 Rotations
Example: rotate the vertices V(-0.6, 0), V(0, -0.6), V(0.6, 0.6) by -30 degrees.

20 Combining Transformations
Note there are two ways to combine rotation and translation: rotate then translate, or translate then rotate. Why do they differ? See the sketch below.
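A small C sketch (illustrative, not from the deck) showing that the two orders give different results, because the transforms do not commute:

```c
#include <math.h>
#include <stdio.h>

/* Rotate-then-translate vs. translate-then-rotate on the same point. */
int main(void) {
    const double PI = 3.14159265358979323846;
    double x = 1.0, y = 0.0;          /* a point on the x-axis   */
    double th = PI / 2.0;             /* rotate by 90 degrees    */
    double tx = 2.0, ty = 0.0;        /* translate 2 units in x  */

    /* rotate about the origin first, then translate */
    double rx = x*cos(th) - y*sin(th), ry = x*sin(th) + y*cos(th);
    printf("rotate, then translate: (%.1f, %.1f)\n", rx + tx, ry + ty);  /* (2.0, 1.0) */

    /* translate first, then rotate about the origin */
    double px = x + tx, py = y + ty;
    printf("translate, then rotate: (%.1f, %.1f)\n",
           px*cos(th) - py*sin(th), px*sin(th) + py*cos(th));            /* (0.0, 3.0) */
    return 0;
}
```

Translating first moves the point farther from the origin, so the subsequent rotation sweeps it through a larger arc.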

21 How would we get:

22 How would we get:

23 Coordinate Hierarchy

24 Transformation Hierarchies
For example:

25 Transformation Hierarchies
Transformations can be specified relative to one another.

26 More Complex Models

27 Vertices, Lines, and Polygons
So, we know how to move, scale, and rotate the points (vertices) of our model (object coordinates -> world coordinates). We know how these points relate to locations on the screen (world coordinates -> screen coordinates). How do we connect the dots (draw the edges) and color in the areas (fill the polygons)?

28 Draw a line from (0,0) to (4,2)
How do we choose between pixel (1,0) and pixel (1,1)? What would be a good heuristic? A sketch of one answer follows.
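One reasonable heuristic: step one unit along the major axis and pick the pixel whose center is closest to the true line. A minimal DDA sketch in C (illustrative; production rasterizers usually use Bresenham's integer-only variant):

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* DDA line sketch: step along the major axis, round the minor
   coordinate to the nearest pixel center. For (0,0)->(4,2), x=1 gives
   true y=0.5: an exact tie between (1,0) and (1,1); lround() breaks
   the tie upward, picking (1,1) -- a different rule could pick (1,0). */
void draw_line(int x0, int y0, int x1, int y1) {
    int steps = abs(x1 - x0) > abs(y1 - y0) ? abs(x1 - x0) : abs(y1 - y0);
    if (steps == 0) { printf("set_pixel(%d, %d)\n", x0, y0); return; }
    double dx = (x1 - x0) / (double)steps;
    double dy = (y1 - y0) / (double)steps;
    double x = x0, y = y0;
    for (int i = 0; i <= steps; i++) {
        printf("set_pixel(%ld, %ld)\n", lround(x), lround(y));
        x += dx;
        y += dy;
    }
}

int main(void) { draw_line(0, 0, 4, 2); return 0; }
```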

29 Let's draw a triangle with vertices (0,0), (4,0), and (4,2).

30 A Slight Translation

31 Filling in the Polygons
What is area filling? Scan conversion algorithms.

32 Wireframe vs. Filled Area

33 Scan Conversion
Scan conversion is converting a picture definition, or a model, into pixel intensity values. We want to scan convert polygons.


35 Area Filling
We want to find which pixels on a scan line are "inside" the polygon. Note: the scan line intersects the polygon an EVEN number of times, so we simply fill the interval between each odd-numbered intersection and the following even-numbered one. What happens if the scan line exactly intersects a vertex? A sketch follows.
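A minimal even-odd scan-line fill sketch in C (names and the fixed-size buffer are illustrative). The vertex question is handled by treating each edge as a half-open span in y, a common convention:

```c
#include <stdio.h>
#include <stdlib.h>

/* Even-odd fill for a single scan line y. Each edge covers the
   half-open range [ymin, ymax): a scan line through a shared vertex is
   counted once on a monotone chain and zero or two times at a local
   max/min, so the intersection count stays even. Horizontal edges
   (ymin == ymax) are skipped. */
typedef struct { double x0, y0, x1, y1; } Edge;

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

void fill_scanline(const Edge *e, int n, double y) {
    double xs[64];                          /* x-intersections found  */
    int k = 0;
    for (int i = 0; i < n; i++) {
        double ymin = e[i].y0 < e[i].y1 ? e[i].y0 : e[i].y1;
        double ymax = e[i].y0 < e[i].y1 ? e[i].y1 : e[i].y0;
        if (y >= ymin && y < ymax)          /* half-open: even count  */
            xs[k++] = e[i].x0 + (y - e[i].y0) * (e[i].x1 - e[i].x0)
                                            / (e[i].y1 - e[i].y0);
    }
    qsort(xs, k, sizeof xs[0], cmp_double);
    for (int i = 0; i + 1 < k; i += 2)      /* fill odd -> even pairs */
        printf("fill y=%g from x=%g to x=%g\n", y, xs[i], xs[i + 1]);
}
```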


37 Area-Filling Triangles
How is this easier? What would you do for the cases shown in the figure?

38 What do we have here?
You know how to transform points and draw triangles. What don't we know? With overlapping polygons, which pixels do we see?

39 Goal of Visible Surface Determination
To draw only the surfaces (triangles) that are visible, given a viewpoint and a view direction.

40 Three Reasons Not to Draw Something
1. It isn't in the view frustum.
2. It is "back facing".
3. Something is in front of it (occlusion).
We need to do this computation quickly. How quickly?

41 Surface Normal
Surface normal: the vector perpendicular to the surface. Three non-collinear points (the vertices of a triangle) also describe a plane; the normal is the vector perpendicular to this plane. A sketch of computing it follows.
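A small C sketch (illustrative) of the standard construction: the cross product of two edge vectors is perpendicular to the plane through the three points.

```c
#include <stdio.h>

/* Cross product of two edge vectors gives the triangle's normal.
   Vertex order decides which way it points (see the next slides). */
typedef struct { double x, y, z; } Vec3;

Vec3 triangle_normal(Vec3 a, Vec3 b, Vec3 c) {
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };   /* edge a -> b */
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };   /* edge a -> c */
    Vec3 n = { u.y * v.z - u.z * v.y,               /* n = u x v   */
               u.z * v.x - u.x * v.z,
               u.x * v.y - u.y * v.x };
    return n;   /* not unit length; normalize if the lighting needs it */
}

int main(void) {
    Vec3 a = {0,0,0}, b = {1,0,0}, c = {0,1,0};     /* CCW in the xy-plane */
    Vec3 n = triangle_normal(a, b, c);
    printf("normal = (%g, %g, %g)\n", n.x, n.y, n.z);  /* (0, 0, 1) */
    return 0;
}
```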

42 Normals

43 Vertex Order
Vertex order matters. We usually agree that counterclockwise order determines which "side" of a triangle is labelled the "front". Think: right-handed coordinate system.

44 What do the normals tell us?
Q: How can we use normals to tell us which “face” of a triangle we see?

45 Examine the angle between the normal and the view direction
Front-facing if V · N < 0 (the angle between them is greater than 90 degrees).

46 Backface Culling
Before scan converting a triangle, determine whether it is facing you: compute the dot product between the view vector V and the triangle normal N. In eye space this simplifies to examining only the z component of the normal: if Nz < 0, the triangle is front facing and you should scan convert it. What surface visibility problems does this solve? Not solve? A sketch follows.
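Both tests as a C sketch (illustrative). Which sign of Nz counts as "front" depends on your eye-space axis convention; the slide's convention is Nz < 0.

```c
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* General test: front-facing when the angle between the view vector
   and the normal exceeds 90 degrees, i.e. V . N < 0. */
bool front_facing(Vec3 view, Vec3 normal) {
    return dot(view, normal) < 0.0;
}

/* Eye-space shortcut from the slide: only the z component matters. */
bool front_facing_eye_space(Vec3 normal) {
    return normal.z < 0.0;
}
```

For a closed, convex object this culls every hidden triangle; with concave or multiple objects, front-facing triangles can still occlude one another, which is what the next slides address.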

47 Multiple Objects
If we want to draw multiple objects, we can sort in z. What are the advantages? Disadvantages? This is called the Painter's Algorithm (or splatting).

48 Side View

49 Side View - What is a solution?

50 Even Worse… Why?

51 Painter's Algorithm
Pros: no extra memory; relatively fast; easy to understand and implement.
Cons: precision issues (and additional work to handle them); the sort stage; intersecting objects.

52 Depth Buffers Goal: We want to only draw something if it appears in front of what is already drawn. What does this require? Can we do this on a per object basis?

53 Depth Buffers
We can't do it object based; it must be image based. What do we know about the (x, y, z) points where the objects overlap? Remember, our "eye" or "camera" is at the origin of our view coordinates. What does that mean we need to store?

54 Side View

55 Depth or Z-Buffer Requirements
We need an additional value for each pixel that stores its depth. What is the data type for the depth value? How much memory does this require? (The PlayStation 1 had 2 MB total; the first 512 x 512 framebuffer cost $50,000.) This is called depth buffering, or z-buffering.

56 Depth Buffer Algorithm
Begin frame:
  Clear color.
  Clear depth to z = z_max.
  Draw triangles; when scan converting, if z_new at a pixel < the z stored at that pixel, set the pixel's color and stored z to the new values.
What does it mean if z_new > the stored z? Why do we clear the depth buffer? Now we see why it is sometimes called the z-buffer. A sketch follows.
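A minimal z-buffer sketch in C (buffer sizes, types, and names are illustrative):

```c
#include <float.h>
#include <stddef.h>
#include <stdint.h>

/* Each pixel keeps the depth of the nearest fragment drawn so far; a
   new fragment wins only if it is closer than what is already stored. */
#define W 640
#define H 480

static uint32_t color_buf[W * H];
static float    depth_buf[W * H];

void clear_frame(void) {
    for (size_t i = 0; i < (size_t)W * H; i++) {
        color_buf[i] = 0;          /* background color        */
        depth_buf[i] = FLT_MAX;    /* "infinitely far" z_max  */
    }
}

void plot(int x, int y, float z, uint32_t c) {
    size_t i = (size_t)y * W + x;
    if (z < depth_buf[i]) {        /* closer than what's there?      */
        depth_buf[i] = z;
        color_buf[i] = c;
    }                              /* else: the fragment is hidden   */
}
```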

57 Computing z_new per pixel
Q: We can compute z_nsc (z in normalized screen coordinates) at the vertices, but what is z_nsc as we scan convert? A: We interpolate z_nsc while we scan convert, too!

58 Z-Buffer Precision
What does the number of bits per depth buffer element mean? The mapping of z from eye space to normalized screen space is not linear: we do not have the same precision across z (we divided by z). In fact, half of our precision lies between z = 0 and z = 0.5. What does this mean? What happens if we do NOT have enough precision?

59 Z-Fighting
If we do not have enough precision in the depth buffer, we cannot determine which fragment should be "in front". What does this mean for the near and far planes? We want them to bound our visible volume as tightly as possible.

60 Don't Forget
Even in 1994, memory wasn't cheap: 1024 x 768 x 16 bits = 1.6 MB of additional memory, which is why depth buffers weren't common until relatively recently. Since we have to draw every triangle, fill rate goes UP; current graphics cards approach a 1 gigapixel fill rate. This is an image space algorithm.

61 Depth Buffer Algorithm
Pros: easy to understand and implement; a per-pixel "correct" answer; no preprocess; draw objects in any order; no need to subdivide objects.
Cons: z precision; additional memory; z-fighting.

62 What we know We already know how to render the world from a viewpoint.
Why doesn’t this look 3D? Lighting and shading!

63 How do we know what color each pixel gets?
Lighting; lighting models (ambient, diffuse, specular); surface rendering methods.

64 "Lighting"
Two components:
Lighting model (or shading model): how we calculate the intensity at a point on the surface.
Surface rendering method: how we calculate the intensity at each pixel.

65 Jargon
Illumination: the transport of light from a source to a point via direct and indirect paths.
Lighting: computing the luminous intensity for a specified 3D point, given a viewpoint.
Shading: assigning colors to pixels.
Illumination models: empirical (approximations to observed light properties) and physically based (applying the physics of light and its interactions with matter).

66 Lighting in general What factors play a part in how an object is “lit”? Let’s examine different items here…

67 Two Components
Light source properties: color (wavelength(s) of light), shape, direction.
Object properties: material, geometry, absorption.

68 Light Source Properties
Color: we usually assume the light has one wavelength.
Shape: a point light source approximates the light source as a 3D point in space; light rays emanate in all directions. Good for light sources that are small (compared to the scene) or far away.

69 Distributed Lights
Light source shape, continued. A distributed light source approximates the light source as a 3D object; light rays usually emanate in specific directions. Good for larger light sources, e.g. area light sources.

70 Light Source Direction
In computer graphics, we usually treat lights as rays emanating from a source. The direction of these rays can be either omni-directional (point light sources) or directional (spotlights).

71 Light Position
We can specify a light in one of two ways. The first is with an x, y, and z coordinate; these are called positional lights. What are some examples? Q: Should the sun be represented as a positional light? A: No! If a light is significantly far away, we can represent it with only a direction vector; these are called directional lights. How does this help?

72 Contributions from Lights
We will break down what a light does to an object into three different components. This APPROXIMATES what a light does; actually computing the rays is too expensive to do in real time.
Light at a pixel from a light = ambient + diffuse + specular contributions:
I_light = I_ambient + I_diffuse + I_specular

73 Ambient Term - Background Light
The ambient term is a HACK! It represents the approximate contribution of the light to the general scene, regardless of the locations of the light and the object: indirect reflections that are too complex to completely and accurately compute. I_ambient = color.

74 Diffuse Term Contribution that a light has on the surface, regardless of viewing direction. Diffuse surfaces, on a microscopic level, are very rough. This means that a ray of light coming in has an equal chance of being reflected in any direction. What are some ideal diffuse surfaces?

75 Lambert's Cosine Law
Diffuse surfaces follow Lambert's cosine law: the reflected energy from a small surface area in a particular direction is proportional to the cosine of the angle between that direction and the surface normal. Think about surface area and the number of rays.

76 Specular Reflection Specular contribution can be thought of as the “shiny highlight” of a plastic object. On a microscopic level, the surface is very smooth. Almost all light is reflected. What is an ideal purely specular reflector? What does this term depend on? Viewing Direction Normal of the Surface

77 Snell's Law
Specular reflection applies Snell's law. The incoming ray, the surface normal, and the reflected ray all lie in a common plane. The angle that the reflected ray forms with the surface normal is determined by the angle that the incoming ray forms with the surface normal and the relative speeds of light in the media in which the incident and reflected rays propagate. For ideal specular reflection we assume θ_l = θ_r (the angle of incidence equals the angle of reflection).

78 Snell's Law is for IDEAL Surfaces
Think about the amount of light reflected at different angles. (Figure: vectors N, R, L, V.)

79 Different for shiny vs. dull objects

80 Snell's Law is for IDEAL Surfaces
Think about the amount of light reflected at different angles. (Figure: vectors N, R, L, V.)

81 Phong Model
The Phong reflection model is an approximation that sets the intensity of specular reflection proportional to (cos φ)^shininess, where φ is the angle between the reflection direction and the view direction. What does the value of shininess mean? How do we represent shiny or dull surfaces using the Phong model?

82 Effect of the shininess value

83 Combining the Terms
Ambient: the combination of light reflections from various surfaces that produces a uniform illumination; background light.
Diffuse: uniform scattering of light rays on a surface, proportional to the "amount of light" that hits the surface; depends on the surface normal and the light vector.
Specular: light that gets mirror-reflected; depends on the light ray, the viewing angle, and the surface normal.

84 Ambient + Diffuse + Specular

85 Lighting Equation
I = Σ over lights l of [ k_ambient · I_l_ambient + k_diffuse · I_l_diffuse · (N · L) + k_specular · I_l_specular · (R · V)^shininess ]
where:
I_l_ambient = light source l's ambient component
I_l_diffuse = light source l's diffuse component
I_l_specular = light source l's specular component
k_ambient = surface material ambient reflectivity
k_diffuse = surface material diffuse reflectivity
k_specular = surface material specular reflectivity
shininess = specular reflection parameter (1 -> dull, large -> very shiny)
N = surface normal, L = direction to the light, R = reflection direction, V = view direction
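A C sketch of one light's contribution at a surface point, following the equation above (not the deck's code; parameter names mirror the slide's notation, and all vectors are assumed unit length):

```c
#include <math.h>

/* I = k_a*I_a + k_d*I_d*(N.L) + k_s*I_s*(R.V)^shininess for one light.
   L points toward the light, V toward the viewer. */
typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double light_point(Vec3 N, Vec3 L, Vec3 V,
                   double Ia, double Id, double Is,   /* light components */
                   double ka, double kd, double ks,   /* reflectivities   */
                   double shininess) {
    double ndotl = dot(N, L);
    if (ndotl <= 0.0)
        return ka * Ia;           /* light behind surface: ambient only */

    /* reflect L about N: R = 2(N.L)N - L */
    Vec3 R = { 2*ndotl*N.x - L.x, 2*ndotl*N.y - L.y, 2*ndotl*N.z - L.z };
    double rdotv = dot(R, V);
    double spec  = rdotv > 0.0 ? pow(rdotv, shininess) : 0.0;

    return ka*Ia + kd*Id*ndotl + ks*Is*spec;
}
```

Summing this over all lights gives the total intensity at the point; doing it per color channel gives RGB.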

86 Attenuation & Spotlights
One factor we have not yet taken into account: a light source contributes a higher incident intensity to closer surfaces. The energy from a point light source falls off proportionally to 1/d². Using exactly 1/d², however, makes it difficult to light things correctly; think about d = 1 versus d = 2. Why? What happens if we don't attenuate at all? How do we do spotlights?

87 Full Illumination Model

88 Lighting and Shading When do we do the lighting equation?
Does lighting calculate the pixel colors?

89 Shading
Shading is how we "color" a triangle:
Constant shading
Gouraud shading
Phong shading

90 Constant Shading
Constant intensity, or flat, shading: one color for the entire triangle. Fast; good for some objects. What happens if triangles are small? Sudden intensity changes at borders.

91 Gouraud Shading
Intensity interpolation shading: calculate lighting at the vertices, then interpolate the colors as you scan convert.

92 Gouraud Shading
Relatively fast: only three lighting calculations per triangle, and no sudden intensity changes. What can it not do? What are some approaches to fix this? Question: what is the normal at a vertex?

93 Phong Shading Interpolate the normal, since that is the information that represents the “curvature” Linearly interpolate the vertex normals. For each pixel, as you scan convert, calculate the lighting per pixel. True “per pixel” lighting Not done by most hardware/libraries/etc

94 Shading Techniques
Constant shading: one lighting calculation (pick a vertex) per triangle; color the entire triangle the same color.
Gouraud shading: three lighting calculations (the vertices) per triangle; linearly interpolate the colors as you scan convert.
Phong shading: while you scan convert, linearly interpolate the normals; with the interpolated normal at each pixel, calculate the lighting at each pixel.
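A C sketch of the Gouraud step for one scan-line span (illustrative; a real rasterizer first interpolates along the triangle edges to get the span endpoints):

```c
#include <stdio.h>

typedef struct { double r, g, b; } Color;

/* Gouraud: c0 and c1 were computed by the lighting equation at the
   vertices; per pixel we only blend them linearly. Phong shading would
   instead interpolate the NORMAL here and run the lighting equation at
   every pixel. */
void shade_span(int y, int x0, int x1, Color c0, Color c1) {
    for (int x = x0; x <= x1; x++) {
        double t = (x1 == x0) ? 0.0 : (double)(x - x0) / (x1 - x0);
        Color c = { c0.r + t * (c1.r - c0.r),
                    c0.g + t * (c1.g - c0.g),
                    c0.b + t * (c1.b - c0.b) };
        printf("pixel (%d,%d) color (%.2f, %.2f, %.2f)\n",
               x, y, c.r, c.g, c.b);
    }
}
```

This is why Gouraud can miss a specular highlight that falls strictly inside a triangle: no vertex ever sees it, so no interpolated color contains it.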

95 How do we do this?

96 …or this?

97 …using only triangles?
Using only triangles to model everything is hard; think about a label on a soup can. Instead of interpolating colors, map a texture pattern.

98 Texture Patterns
Image textures, or procedural textures.

99 Let’s look at a game What effects do we see?

100 Transparencies
The alpha channel can stencil out textures: per pixel, set alpha = 1 or alpha = 0. Where alpha = 0, the pixel is not drawn.

101 Combining Lighting + Texturing
Notice that there is no lighting involved in texture mapping! They are independent operations which MAY (you decide) be combined; it all depends on how you "apply" the texture to the underlying triangle.

102 Combining Lighting + Texturing
C_T = texture color, C_C = base (lit) triangle color.
Replace: C_F = C_T
Blend: C_F = C_T · C_C
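Both combine modes as a per-channel C sketch (illustrative names):

```c
typedef struct { double r, g, b; } Color;

/* Replace ignores the lit triangle color entirely: CF = CT. */
Color tex_replace(Color ct, Color cc) {
    (void)cc;           /* lighting is discarded */
    return ct;
}

/* Blend (modulate) multiplies texture and base color: CF = CT * CC,
   so the lighting computed for the triangle still shows through. */
Color tex_blend(Color ct, Color cc) {
    Color cf = { ct.r * cc.r, ct.g * cc.g, ct.b * cc.b };
    return cf;
}
```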

103 What else does the engine need to do?
NOT an exhaustive list!
Load models: model acquisition; surfaces/curves/NURBS.
Fast performance: simplification/level of detail; culling/cells and portals/BSP.
Advanced rendering: lighting/shaders; non-photorealistic rendering; effects.
Interactive techniques/user interfaces.
Game logic/scripting/artificial intelligence.
Physical properties: collisions, gravity, etc.
Load/save states.

104 Global Illumination
Radiosity; radiosity as textures (light maps).
Bidirectional Reflectance Distribution Function (BRDF).
Raytracing: treats light as rays, which doesn't capture everything.

105 Advanced Effects
Cloth, liquids, fire, hair/fur, skin, grass. What are the common denominators here?

106 Performance
Massive models (models of 100,000,000 triangles):
Replace geometry with images; warp images.
Occlusion culling/BSP trees.
Cell and portal culling.
Level of detail.


108 Simplification/Level of Detail
Objects farther away can be represented with less detail. How do we "remove" triangles? What are the advantages and disadvantages? Can we do this automatically?

109 BSP Trees (Fuchs et al., 1980)
Binary Space Partitioning: used by Doom and most games before cheap framebuffers. Given a world, we want to build a data structure that, given any viewpoint, can return a sorted list of objects. What assumptions are we making? Note: what happens in those "old" games like Doom?

110 BSP Trees
Two stages:
Preprocess: what we do "offline".
Runtime: what we do per frame.
Draw parallels to Doom. Since this is easier in 2D, note that all the "old" FPS games are really 2D.


113 BSP Algorithm
For a viewpoint, determine where it sits in the tree, then draw the objects on the "other half of the tree" first:
farside.draw(viewpoint)
nearside.draw(viewpoint)
Intuition: we draw things farther away first. Is this an image space or object space algorithm? A sketch follows.
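A back-to-front traversal sketch in C (the node layout and the commented-out draw_polygons() hook are illustrative, not the deck's code):

```c
#include <stddef.h>

/* Each node splits space with a plane; we recurse into the half NOT
   containing the eye first, so nearer geometry is drawn last, on top:
   the Painter's ordering falls out of the tree for free. */
typedef struct BspNode {
    double a, b, c, d;              /* splitting plane ax + by + cz + d = 0 */
    struct BspNode *front, *back;
    /* polygons lying in this node's plane would be stored here */
} BspNode;

static double side_of(const BspNode *n, double ex, double ey, double ez) {
    return n->a * ex + n->b * ey + n->c * ez + n->d;
}

void draw(const BspNode *n, double ex, double ey, double ez) {
    if (n == NULL) return;
    if (side_of(n, ex, ey, ez) >= 0.0) {  /* eye is on the front side  */
        draw(n->back, ex, ey, ez);        /* far side first            */
        /* draw_polygons(n); */           /* then this node's geometry */
        draw(n->front, ex, ey, ez);       /* near side last            */
    } else {                              /* eye is on the back side   */
        draw(n->front, ex, ey, ez);
        /* draw_polygons(n); */
        draw(n->back, ex, ey, ez);
    }
}
```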

114 BSP Trees
Pros: the preprocess step means fast determination of what we can and can't see; works in 3D (Quake 1); provides the draw order for the Painter's algorithm.
Cons: still has intersecting-object problems; requires a static scene.

115 Determining if something is viewable
View frustum culling.
Cells and portals: definitions (cell, portal); preprocess step; runtime computation. Where do we see it? Quake 3.


119 Collision Detection
Determining intersections between models, and the resolution of collisions: Where is the intersection? The normals of the surfaces; the depth of intersection; multiple collisions; collisions over time; vector collisions.

120 Shaders and Non-Photorealistic Rendering
Cartoons; pen/pencil; paints; art drawing styles.

