1 Last Time Clipping Homework 4, due Nov 2 in class
Why we care Sutherland-Hodgman Cohen-Sutherland Intuition for Liang-Barsky Homework 4, due Nov 2 in class Midterm, Nov 2 in class 10/27/09 © NTUST

2 Midterm Info November 2 in class
All material listed in online “to-know” list Allowed: Pencil/pen, ruler One sheet of letter (standard) sized paper, with anything on it, both sides Nothing to increase the surface area Not allowed: Calculator, anything else 10/27/09 © NTUST

3 This Week Liang-Barsky Details Weiler-Atherton clipping algorithm
Drawing points and lines Visibility Z-Buffer and transparency A-buffer Area subdivision BSP Trees Exact Cell-Portal Lighting and Shading – Part 1 Project 2 is posted and due Nov 18 10/27/09 © NTUST

4 General Liang-Barsky Liang-Barsky works for any convex clip region
E.g. Perspective view volume in world or view coordinates Require a way to perform steps 1 and 2 Compute intersection t for all clip lines/planes Label them as entering or exiting [Figure: view volume with Near, Far, Left and Right clip planes] 10/27/09 © NTUST

5 In View Space For Project 2, you need to clip edges to a view frustum in world space Situation is: [Figure: eye e, frustum bounded by xleft and xright, segment with endpoints x1 and x2] 10/27/09 © NTUST

6 First Step Compute inside/outside for endpoints of the line segment
Determine which side of each clip plane the segment endpoints lie on Use the cross product What do we know if (x1 - e) × (xleft - e) > 0 ? Other cross products give other information What can we say if both segment endpoints are outside one clip plane? Stop here if we can, otherwise… 10/27/09 © NTUST
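A minimal sketch of this side test, assuming 2D points and a counter-clockwise sign convention; the function names are illustrative, not from any project code:

```cpp
struct Vec2 { double x, y; };

// z-component of the 2D cross product a x b; its sign says on which side
// of direction a the vector b lies.
double cross2(const Vec2& a, const Vec2& b) {
    return a.x * b.y - a.y * b.x;
}

// Is endpoint p on the inside of the clip line through the eye e toward
// xleft? This is the (x1 - e) x (xleft - e) > 0 test from the slide; which
// sign counts as "inside" depends on the chosen winding.
bool insideLeftClipLine(const Vec2& p, const Vec2& e, const Vec2& xleft) {
    Vec2 pe{ p.x - e.x,     p.y - e.y };
    Vec2 le{ xleft.x - e.x, xleft.y - e.y };
    return cross2(pe, le) > 0.0;
}
```

If both endpoints test outside the same clip line, the segment can be trivially rejected, which is the early-out the slide mentions.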

7 Finding Parametric Intersection
Left clip edge: x = e + (xleft - e) t Line: x = x1 + (x2 - x1) s Solve simultaneous equations in t and s: Use endpoint inside/outside information to label as entering or leaving Now we have general Liang-Barsky case 10/27/09 © NTUST
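Writing the two parametric forms out, the intersection is a small 2x2 linear system; d_c and d_s below are shorthand introduced here for the clip-edge and segment directions:

$$
e + (x_{\text{left}} - e)\,t \;=\; x_1 + (x_2 - x_1)\,s,
\qquad d_c = x_{\text{left}} - e,\quad d_s = x_2 - x_1
$$

$$
\begin{pmatrix} d_{c,x} & -d_{s,x} \\ d_{c,y} & -d_{s,y} \end{pmatrix}
\begin{pmatrix} t \\ s \end{pmatrix}
=
\begin{pmatrix} x_{1,x} - e_x \\ x_{1,y} - e_y \end{pmatrix}
$$

Solve by Cramer's rule; the hit lies on the segment only when $0 \le s \le 1$, and on the semi-infinite clip edge only when $t \ge 0$.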

8 General Clipping Liang-Barsky can be generalized to clip line segments to arbitrary polygonal clip regions Consider clip edges as non-infinite segments Look at all intersecting ts between 0 and 1 Clipping general polygons against general clip regions is quite hard: Weiler-Atherton algorithm Start with polygons as lists of vertices Replace crossing points with vertices Double all edges and form linked lists of edges Adjust links at crossing vertices Enumerate polygon patches 10/27/09 © NTUST

9 Weiler-Atherton – Form Lists
Original Polygons Doubly Linked Lists - outside and inside lists 10/27/09 © NTUST

10 Weiler-Atherton – Find Crossings
Crossing vertices added – links re-written 10/27/09 © NTUST

11 Weiler-Atherton – Enumerate
[Figure: the enumerated patches, labeled “Clip and Poly”, “Poly not Clip”, “Clip not Poly” and “Not clip not poly”] Every link used once 10/27/09 © NTUST

12 Where We Stand
At this point we know how to: Convert points from local to window coordinates Clip polygons and lines to the view volume Next thing: Determine which pixels are covered by any given point, line or polygon Anti-alias (those slides are in the extra lecture notes, but we won't discuss them because they are too low level) Determine which polygon is in front 10/27/09 © NTUST

13 Visibility Given a set of polygons, which is visible at each pixel? (in front, etc.). Also called hidden surface removal Very large number of different algorithms known. Two main classes: Object precision: computations that operate on primitives Image precision: computations at the pixel level All the spaces in the viewing pipeline maintain depth, so we can work in any space World, View and Canonical Screen spaces might be used Depth can be updated on a per-pixel basis as we scan convert polygons or lines Actually, run Bresenham-like algorithm on z and w before perspective divide 10/27/09 © NTUST

14 Visibility Issues Efficiency – it is slow to overwrite pixels, or rasterize things that cannot be seen Accuracy - answer should be right, and behave well when the viewpoint moves Must have technology that handles large, complex rendering databases In many complex environments, few things are visible How much of the real world can you see at any moment? Complexity - object precision visibility may generate many small pieces of polygon 10/27/09 © NTUST

15 Painters Algorithm (Image Precision)
Choose an order for the polygons based on some choice (e.g. depth to a point on the polygon) Render the polygons in that order, deepest one first This renders nearer polygons over further Difficulty: works for some important geometries (2.5D - e.g. VLSI) doesn't work in this form for most geometries - need at least better ways of determining ordering [Figures: a configuration where the ordering fails; which point to use for choosing the ordering?] 10/27/09 © NTUST
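A minimal painter's-algorithm sketch, assuming each polygon carries one representative depth and that a draw callback overwrites pixels; both types are illustrative:

```cpp
#include <algorithm>
#include <vector>

// Illustrative type: a real polygon would also carry vertices, color, etc.
struct Polygon { double depth; };   // representative depth used for ordering

// Sort deepest-first and draw in that order, so nearer polygons overwrite
// farther ones. As noted above, this fails for cyclic overlaps and for
// interpenetrating polygons.
void painterRender(std::vector<Polygon>& polys,
                   void (*drawPolygon)(const Polygon&)) {
    std::sort(polys.begin(), polys.end(),
              [](const Polygon& a, const Polygon& b) { return a.depth > b.depth; });
    for (const Polygon& p : polys)
        drawPolygon(p);
}
```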

16 The Z-buffer (1) (Image Precision)
For each pixel on screen, have at least two buffers Color buffer stores the current color of each pixel The thing to ultimately display Z-buffer stores at each pixel the depth of the nearest thing seen so far Also called the depth buffer Initialize this buffer to a value corresponding to the furthest point (z=1.0 for canonical and window space) As a polygon is filled in, compute the depth value of each pixel that is to be filled if depth < z-buffer depth, fill in pixel color and new depth else disregard 10/27/09 © NTUST
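A sketch of the per-pixel test, assuming depths in [0, 1] with 1.0 at the far plane as on the slide; the buffer layout is an illustrative choice:

```cpp
#include <cstdint>
#include <vector>

struct FrameBuffers {
    int width, height;
    std::vector<uint32_t> color;   // packed RGBA per pixel
    std::vector<float>    depth;   // nearest depth seen so far per pixel

    FrameBuffers(int w, int h)
        : width(w), height(h), color(w * h, 0u), depth(w * h, 1.0f) {}

    // Called for every pixel a polygon covers during rasterization.
    void plot(int x, int y, float z, uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {        // nearer than anything drawn so far
            depth[i] = z;
            color[i] = rgba;
        }                          // else hidden: disregard
    }
};
```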

17 The Z-buffer (2) Advantages: Disadvantages:
Simple and now ubiquitous in hardware A z-buffer is part of what makes a graphics card “3D” Computing the required depth values is simple Disadvantages: Over-renders – rasterizes polygons even if they are not visible Depth quantization errors can be annoying Can’t easily do transparency or filter-based anti-aliasing (Requires keeping information about partially covered polygons) 10/27/09 © NTUST

18 Visibility Recap You are given a set of polygons to draw and you need to figure out which one is visible at every pixel Issues include: Efficiency – it is slow to overwrite pixels, or scan convert things that cannot be seen Accuracy – answer should be right, and behave well when the viewpoint moves Complexity – object precision visibility may generate many small pieces of polygon 10/27/09 © NTUST

19 Z-Buffer and Transparency
Say you want to render transparent surfaces (alpha not 1) with a z-buffer Must render in back to front order Otherwise, would have to store at least the first opaque polygon behind transparent one [Figure: a partially transparent polygon in front of two opaque polygons; the opaque ones are drawn 1st and 2nd, the transparent one 3rd, which requires recalling the color and depth of the 1st and 2nd] 10/27/09 © NTUST

20 OpenGL Depth Buffer OpenGL defines a depth buffer as its visibility algorithm To enable depth testing: glEnable(GL_DEPTH_TEST) To clear the depth buffer: glClear(GL_DEPTH_BUFFER_BIT) To clear color and depth: glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) The number of bits used for the depth values can be specified (windowing system dependent, and hardware may impose limits based on available memory) The comparison function can be specified: glDepthFunc(…) Sometimes want to draw furthest thing, or equal to depth in buffer 10/27/09 © NTUST
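A typical call sequence using only the calls named above; GL headers, context creation and depth-buffer allocation are platform dependent and omitted here:

```cpp
// one-time setup
glEnable(GL_DEPTH_TEST);      // turn depth testing on
glDepthFunc(GL_LESS);         // keep the nearer fragment (the default);
                              // other functions such as GL_LEQUAL can be chosen

// each frame
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // reset color and depth
// ... draw the scene ...
```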

21 The A-buffer (Image Precision)
Handles transparent surfaces and filter anti-aliasing At each pixel, maintain a pointer to a list of polygons sorted by depth, and a sub-pixel coverage mask for each polygon Coverage mask: Matrix of bits saying which parts of the pixel are covered Algorithm: Drawing pass (do not directly display the result) if polygon is opaque and covers pixel, insert into list, removing all polygons farther away if polygon is transparent or only partially covers pixel, insert into list, but don’t remove farther polygons 10/27/09 © NTUST
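One possible shape for the per-pixel storage, as a sketch; the field names and the 32-bit mask size are illustrative assumptions:

```cpp
#include <cstdint>
#include <vector>

struct Fragment {
    float    depth;
    float    r, g, b, a;        // color with alpha (a < 1 means transparent)
    uint32_t coverageMask;      // one bit per sub-pixel sample
    bool     opaque;
};

struct APixel {
    std::vector<Fragment> frags;   // kept sorted front-to-back by depth

    // Drawing pass: insert in depth order; an opaque, fully covering
    // fragment removes everything behind it, as described above.
    void insert(const Fragment& f) {
        auto it = frags.begin();
        while (it != frags.end() && it->depth < f.depth) ++it;
        it = frags.insert(it, f);
        if (f.opaque && f.coverageMask == 0xFFFFFFFFu)
            frags.erase(it + 1, frags.end());
    }
};
```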

22 The A-buffer (2) Algorithm: Rendering pass Advantage: Disadvantages:
At each pixel, traverse buffer using polygon colors and coverage masks to composite: Advantage: Can do more than Z-buffer Coverage mask idea can be used in other visibility algorithms Disadvantages: Not in hardware, and slow in software Still at heart a z-buffer: Over-rendering and depth quantization problems But, used in high quality rendering tools [Figure: sub-pixel coverage masks composited with the “over” operator] 10/27/09 © NTUST

23 Area Subdivision Exploits area coherence: Small areas of an image are likely to be covered by only one polygon The practical truth of this assertion varies over the years (it’s currently going from mostly false to more true) Three easy cases for determining what’s in front in a given region: a polygon is completely in front of everything else in that region no surfaces project to the region only one surface is completely inside the region, overlaps the region, or surrounds the region 10/27/09 © NTUST

24 Warnock’s Area Subdivision (Image Precision)
Start with whole image If one of the easy cases is satisfied (previous slide), draw what’s in front Otherwise, subdivide the region and recurse If region is single pixel, choose surface with smallest depth Advantages: No over-rendering Anti-aliases well - just recurse deeper to get sub-pixel information Disadvantage: Tests are quite complex and slow 10/27/09 © NTUST
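A recursive outline of the subdivision; the classification tests are the hard (and slow) part and appear here only as hypothetical helper declarations:

```cpp
#include <vector>

struct Polygon;                         // geometry type, details omitted
struct Region { int x, y, w, h; };      // axis-aligned screen region

// Hypothetical classification helpers corresponding to the easy cases.
bool regionIsEmpty(const Region&, const std::vector<const Polygon*>&);
bool onePolygonInFront(const Region&, const std::vector<const Polygon*>&);
bool singleSurface(const Region&, const std::vector<const Polygon*>&);
void drawFront(const Region&, const std::vector<const Polygon*>&);
void drawSingle(const Region&, const std::vector<const Polygon*>&);
void drawNearestAtPixel(const Region&, const std::vector<const Polygon*>&);

void warnock(const Region& r, const std::vector<const Polygon*>& polys) {
    if (r.w <= 0 || r.h <= 0) return;                         // degenerate region
    if (regionIsEmpty(r, polys)) return;                      // nothing projects here
    if (onePolygonInFront(r, polys)) { drawFront(r, polys);  return; }
    if (singleSurface(r, polys))     { drawSingle(r, polys); return; }
    if (r.w == 1 && r.h == 1)        { drawNearestAtPixel(r, polys); return; }

    // No easy case: split into four quadrants and recurse.
    int hw = r.w / 2, hh = r.h / 2;
    warnock({r.x,      r.y,      hw,       hh      }, polys);
    warnock({r.x + hw, r.y,      r.w - hw, hh      }, polys);
    warnock({r.x,      r.y + hh, hw,       r.h - hh}, polys);
    warnock({r.x + hw, r.y + hh, r.w - hw, r.h - hh}, polys);
}
```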

25 Warnock’s Algorithm Regions labeled with case used to classify them:
One polygon in front Empty One polygon inside, surrounding or intersecting Small regions not labeled Note it's a rendering algorithm and an HSR algorithm at the same time Assuming you can draw squares [Figure: subdivided image with each region labeled 1, 2 or 3 by the case used to classify it] 10/27/09 © NTUST

26 BSP-Trees (Object Precision)
Construct a binary space partition tree Tree gives a rendering order A list-priority algorithm Tree splits 3D world with planes The world is broken into convex cells Each cell is the intersection of all the half-spaces of splitting planes on tree path to the cell Also used to model the shape of objects, and in other visibility algorithms BSP visibility in games does not necessarily refer to this algorithm 10/27/09 © NTUST

27 BSP-Tree Example
[Figure: a scene split by polygons A, B and C into cells 1-4, and the corresponding BSP tree with front (+) and back (-) branches] 10/27/09 © NTUST

28 Building BSP-Trees Choose polygon (arbitrary)
Split its cell using plane on which polygon lies May have to chop polygons in two (Clipping!) Continue until each cell contains only one polygon fragment Splitting planes could be chosen in other ways, but there is no efficient optimal algorithm for building BSP trees Optimal means minimum number of polygon fragments in a balanced tree 10/27/09 © NTUST
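A build sketch under these rules; Polygon, planeOf and splitPolygon are illustrative stand-ins, with splitPolygon doing the actual clipping of polygons that straddle the plane:

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Vec3    { float x, y, z; };
struct Polygon { std::vector<Vec3> verts; };
struct Plane   { Vec3 n; float d; };             // plane n.p + d = 0

// Hypothetical helpers: the supporting plane of a polygon, and splitting a
// polygon against a plane into front/back fragments (may chop it in two).
Plane planeOf(const Polygon& poly);
void  splitPolygon(const Polygon& poly, const Plane& plane,
                   std::vector<Polygon>& front, std::vector<Polygon>& back);

struct BspNode {
    Polygon poly;                                // polygon lying in the split plane
    std::unique_ptr<BspNode> front, back;
};

// Pick the first polygon as the splitter (an arbitrary choice, as the slide
// says), partition the rest against its plane, and recurse.
std::unique_ptr<BspNode> buildBsp(std::vector<Polygon> polys) {
    if (polys.empty()) return nullptr;
    auto node = std::make_unique<BspNode>();
    node->poly = polys.front();
    Plane split = planeOf(node->poly);

    std::vector<Polygon> front, back;
    for (auto it = polys.begin() + 1; it != polys.end(); ++it)
        splitPolygon(*it, split, front, back);

    node->front = buildBsp(std::move(front));
    node->back  = buildBsp(std::move(back));
    return node;
}
```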

29 Building Example We will build a BSP tree, in 2D, for a 3 room building Ignoring doors Splitting edge order is shown “Back” side of edge is side with the number 5 2 3 4 1 6 10/27/09 © NTUST

30 Building Example (Done)
[Figure: the finished BSP tree; splitting chopped some edges into fragments labeled 3a/3b, 4a/4b and 5a/5b] 10/27/09 © NTUST

31 BSP-Tree Rendering Observation: Things on the opposite side of a splitting plane from the viewpoint cannot obscure things on the same side as the viewpoint Rendering algorithm is recursive descent of the BSP Tree At each node (for back to front rendering): Recurse down the side of the sub-tree that does not contain the viewpoint Test viewpoint against the split plane to decide which tree Draw the polygon in the splitting plane Paint over whatever has already been drawn Recurse down the side of the tree containing the viewpoint 10/27/09 © NTUST

32 Using a BSP-Tree Observation: Things on the opposite side of a splitting plane from the viewpoint cannot obscure things on the same side as the viewpoint A statement about rays – a ray must hit something on this side of the split plane before it hits the split plane and before it hits anything on the back side NOT a statement about distance – things on the far side of the plane can be closer than things on the near side Gives a relative ordering of the polygons, not absolute in terms of depth or any other quantity Split plane 10/27/09 © NTUST

33 Rendering Example [Figure: the BSP tree from the building example with the eye position marked]
Back-to-front rendering order is 3a,4a,6,1,4b,5a,2,3b,5b 10/27/09 © NTUST

34 BSP-Tree Rendering (2) Advantages: Disadvantages:
One tree works for any viewing point Filter anti-aliasing and transparency work Have back to front ordering for compositing Can also render front to back, and avoid drawing back polygons that cannot contribute to the view Uses two trees – an extra one that subdivides the window Major innovation in Quake Disadvantages: Can be many small pieces of polygon Over-rendering 10/27/09 © NTUST

35 Exact Visibility An exact visibility algorithm tells you what is visible and only what is visible No over-rendering Warnock’s algorithm is an example Difficult to achieve efficiently in practice Small detail objects in an environment make it particularly difficult But, in mazes and other simple environments, exact visibility is extremely efficient 10/27/09 © NTUST

36 Cells and Portals Assume the world can be broken into cells
Simple shapes Rooms in a building, for instance Define portals to be the transparent boundaries between cells Doorways between rooms, windows, etc In a world like this, can determine exactly which parts of which rooms are visible Then render visible rooms plus contents 10/27/09 © NTUST

37 Cell-Portal Example (1)
View 10/27/09 © NTUST

38 Cell and Portal Visibility
Start in the cell containing the viewer, with the full viewing frustum Render the walls of that room and its contents Recursively clip the viewing frustum to each portal out of the cell, and call the algorithm on the cell beyond the portal 10/27/09 © NTUST
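A sketch of that recursion; all types and helpers here are illustrative stand-ins rather than Project 2's actual interface:

```cpp
#include <vector>

struct Frustum { /* e.g. left and right clip lines in the 2.5D case */ };
struct Cell;                                       // forward declaration
struct Portal { Cell* neighbor = nullptr; /* portal geometry omitted */ };
struct Cell   { std::vector<Portal> portals; /* walls and contents omitted */ };

void drawCellContents(const Cell& cell, const Frustum& f);   // clip + draw walls
// Clip the frustum to a portal; returns false when the portal is not visible.
bool clipFrustumToPortal(const Frustum& in, const Portal& p, Frustum& out);

// Start in the cell containing the viewer with the full frustum, then recurse
// through each portal with the frustum narrowed to that portal.
void renderCell(const Cell& cell, const Frustum& frustum) {
    drawCellContents(cell, frustum);
    for (const Portal& p : cell.portals) {
        Frustum narrowed;
        if (clipFrustumToPortal(frustum, p, narrowed))
            renderCell(*p.neighbor, narrowed);
    }
}
```

In practice the recursion terminates because the frustum only ever shrinks as it is clipped to successive portals, so eventually every remaining portal falls outside it.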

39 Cell-Portal Example (2)
View 10/27/09 © NTUST

40 Cell-Portal Example (3)
View 10/27/09 © NTUST

41 Cell-Portal Example (4)
View 10/27/09 © NTUST

42 Cell-Portal Example (5)
View 10/27/09 © NTUST

43 Cell-Portal Example (6)
View 10/27/09 © NTUST

44 Cell-Portal Operations
Must clip polygons to the current view frustum (not the original one) Can be done with additional hardware clipping planes, if you have them Must clip the view frustum to the portal Easiest to clip portal to frustum, then set frustum to exactly contain clipped portal In Project 2, you implement these things in software, for a 2.5d environment 10/27/09 © NTUST

45 Cell-Portal Properties
Advantages Extremely efficient - only looks at cells that are actually visible: visibility culling Easy to modify for approximate visibility - render all of partially visible cells, let depth buffer clean up Can handle mirrors as well - flip world about the mirror and pretend mirror is a portal Disadvantages Restricted to environments with good cell/portal structure 10/27/09 © NTUST

46 Project 2 Intro You are given the following:
Rooms, defined in 2D by the edges that surround the room The height of the ceiling Each edge is marked opaque or clear For each clear edge, there is a pointer to the thing on the other side You know where the viewer is and what the field of view is The viewer is given as (cx,cy,cz) position The view frustum is given as a viewing angle and an angle for the field of view [Figure: 2D floor plan with the viewer at (cx,cy,cz) marked with an X] 10/27/09 © NTUST

47 Project 2 (2) Represent the frustum as a left and right clipping line
You don’t have to worry about the top and bottom Each clip line starts at the viewer’s position and goes to infinity in the viewing direction Write a procedure that clips an edge to the view frustum This takes a frustum and returns the endpoints of the clipped edge, or a flag to indicate that the edge is not visible 10/27/09 © NTUST

48 Project 2 (3) Write a procedure that takes a room and a frustum, and draws the room Clip each edge to the frustum If the edge is visible, draw the wall that the edge represents Create the 3D wall from the 2D piece of edge Project the vertices Draw the polygon in 2D If the edge is clear, recurse into the neighboring room Draw the floor and ceiling first, because they will be behind everything 10/27/09 © NTUST

49 Where We Stand
So far we know how to: Transform between spaces Draw polygons Decide what's in front Next: Deciding a pixel's intensity and color 10/27/09 © NTUST

50 Normal Vectors The intensity of a surface depends on its orientation with respect to the light and the viewer The surface normal vector describes the orientation of the surface at a point Mathematically: Vector that is perpendicular to the tangent plane of the surface What’s the problem with this definition? Just “the normal vector” or “the normal” Will use n or N to denote Normals are either supplied by the user or automatically computed 10/27/09 © NTUST

51 Transforming Normal Vectors
Normal vectors are directions To transform a normal, multiply it by the inverse transpose of the transformation matrix Recall, rotation matrices are their own inverse transpose Don’t include the translation! Use (nx,ny,nz,0) for homogeneous coordinates 10/27/09 © NTUST
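Written out: if points transform as $\mathbf{p}' = M\mathbf{p}$, normals transform as

$$ \mathbf{n}' = \left(M^{-1}\right)^{\mathsf T}\mathbf{n}, $$

which keeps the normal perpendicular to transformed tangents: for any tangent $\mathbf{t}$ with $\mathbf{n}^{\mathsf T}\mathbf{t}=0$,

$$ (\mathbf{n}')^{\mathsf T}(M\mathbf{t}) = \mathbf{n}^{\mathsf T}M^{-1}M\,\mathbf{t} = \mathbf{n}^{\mathsf T}\mathbf{t} = 0. $$

For a pure rotation $R$, $(R^{-1})^{\mathsf T} = R$, so normals rotate just like points; using $(n_x, n_y, n_z, 0)$ in homogeneous coordinates drops the translation automatically.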

52 Local Shading Models Local shading models provide a way to determine the intensity and color of a point on a surface The models are local because they don’t consider other objects We use them because they are fast and simple to compute They do not require knowledge of the entire scene, only the current piece of surface. Why is this good for hardware? For the moment, assume: We are applying these computations at a particular point on a surface We have a normal vector for that point 10/27/09 © NTUST

53 Local Shading Models What they capture: What they don’t do:
Direct illumination from light sources Diffuse and Specular reflections (Very) Approximate effects of global lighting What they don’t do: Shadows Mirrors Refraction Lots of other stuff … 10/27/09 © NTUST

54 “Standard” Lighting Model
Consists of three terms linearly combined: Diffuse component for the amount of incoming light from a point source reflected equally in all directions Specular component for the amount of light from a point source reflected in a mirror-like fashion Ambient term to approximate light arriving via other surfaces 10/27/09 © NTUST

55 Diffuse Illumination Incoming light, Ii, from direction L, is reflected equally in all directions No dependence on viewing direction Amount of light reflected depends on: Angle of surface with respect to light source Actually, determines how much light is collected by the surface, to then be reflected Diffuse reflectance coefficient of the surface, kd Don’t want to illuminate back side. Use 10/27/09 © NTUST

56 Diffuse Example Where is the light?
Which point is brightest (how is the normal at the brightest point related to the light)? 10/27/09 © NTUST

57 Illustrating Shading Models
Show the polar graph of the amount of light leaving for a given incoming direction: Show the intensity of each point on a surface for a given light position or direction [Figures: a polar plot and a shaded sphere, each asking “Diffuse?”] 10/27/09 © NTUST

58 Specular Reflection (Phong Reflectance Model)
[Figure: mirror direction R and viewing direction V at a surface point] Incoming light is reflected primarily in the mirror direction, R Perceived intensity depends on the relationship between the viewing direction, V, and the mirror direction Bright spot is called a specularity Intensity controlled by: The specular reflectance coefficient, ks The Phong Exponent, p, controls the apparent size of the specularity Higher p, smaller highlight 10/27/09 © NTUST
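The corresponding Phong specular term, in a standard form (all vectors unit length):

$$ I_s \;=\; k_s\, I_i\, \bigl(\max(0,\ \mathbf{r}\cdot\mathbf{v})\bigr)^{p}, \qquad \mathbf{r} \;=\; 2(\mathbf{n}\cdot\mathbf{l})\,\mathbf{n} - \mathbf{l} $$

Larger $p$ concentrates the reflection around the mirror direction, giving a smaller, tighter highlight.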

59 Specular Example 10/27/09 © NTUST

60 Illustrating Shading Models
Show the polar graph of the amount of light leaving for a given incoming direction: Show the intensity of each point on a surface for a given light position or direction [Figures: a polar plot and a shaded sphere, each asking “Specular?”] 10/27/09 © NTUST

61 Specular Reflection Improvement
[Figure: normal N, halfway vector H and view direction V at a surface point] Compute based on normal vector and “halfway” vector, H Always positive when the light and eye are above the tangent plane Not quite the same result as the other formulation (need 2H) 10/27/09 © NTUST
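The halfway-vector (Blinn) form, written out:

$$ \mathbf{h} \;=\; \frac{\mathbf{l} + \mathbf{v}}{\lVert\mathbf{l} + \mathbf{v}\rVert}, \qquad I_s \;=\; k_s\, I_i\, \bigl(\max(0,\ \mathbf{n}\cdot\mathbf{h})\bigr)^{p} $$

The angle between $\mathbf{n}$ and $\mathbf{h}$ is roughly half the angle between $\mathbf{r}$ and $\mathbf{v}$, so the same exponent $p$ gives a broader highlight than the mirror-direction form.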

62 Putting It Together Global ambient intensity, Ia:
Gross approximation to light bouncing off all other surfaces Modulated by ambient reflectance ka Just sum all the terms If there are multiple lights, sum contributions from each light Several variations, and approximations … 10/27/09 © NTUST
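Summed up, one common way to write the whole model for a single color channel is:

$$ I \;=\; k_a\, I_a \;+\; \sum_{\text{lights}} I_i\Bigl( k_d\,\max(0,\ \mathbf{n}\cdot\mathbf{l}) \;+\; k_s\,\bigl(\max(0,\ \mathbf{r}\cdot\mathbf{v})\bigr)^{p}\Bigr) $$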

63 Color Do everything for three colors, r, g and b
Note that some terms (the expensive ones) are constant Using only three colors is an approximation, but few graphics practitioners realize it k terms depend on wavelength, should compute for continuous spectrum Aliasing in color space Better results use 9 color samples 10/27/09 © NTUST

64 Approximations for Speed
The viewer direction, V, and the light direction, L, depend on the surface position being considered, x Distant light approximation: Assume L is constant for all x Good approximation if light is distant, such as sun Distant viewer approximation Assume V is constant for all x Rarely good, but only affects specularities 10/27/09 © NTUST

