Last Time Clipping: why we care; Sutherland-Hodgman; Cohen-Sutherland; intuition for Liang-Barsky. Homework 4, due Nov 2 in class. Midterm, Nov 2 in class. 10/27/09 © NTUST

Midterm Info November 2 in class All material listed in online “to-know” list Allowed: Pencil/pen, ruler One sheet of letter (standard) sized paper, with anything on it, both sides Nothing to increase the surface area Not allowed: Calculator, anything else 10/27/09 © NTUST

This Week Liang-Barsky details; Weiler-Atherton clipping algorithm; drawing points and lines; visibility (Z-buffer and transparency, A-buffer, area subdivision, BSP trees, exact cell-portal); lighting and shading – part 1. Project 2 is posted and is due Nov 18. 10/27/09 © NTUST

General Liang-Barsky Liang-Barsky works for any convex clip region, e.g. the perspective view volume in world or view coordinates. It requires a way to perform steps 1 and 2: compute the intersection t for all clip lines/planes, and label each intersection as entering or exiting. (Figure: view volume bounded by Near, Far, Left and Right planes.) 10/27/09 © NTUST
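As a concrete illustration (a minimal sketch, not code from the course), here is the general Liang-Barsky idea for a convex clip region: compute a parametric t for every clip plane, classify each as entering or exiting, and keep the segment between the largest entering t and the smallest exiting t. The Vec3/Plane types and helpers are hypothetical.

    // Sketch only: clip segment p0->p1 against a convex region given by planes
    // whose normals point toward the inside (inside where dot(n,p) + d >= 0).
    struct Vec3  { float x, y, z; };
    struct Plane { Vec3 n; float d; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    bool clipSegment(Vec3 p0, Vec3 p1, const Plane* planes, int count,
                     float& tEnter, float& tLeave)
    {
        tEnter = 0.0f; tLeave = 1.0f;
        Vec3 d = sub(p1, p0);                    // segment direction
        for (int i = 0; i < count; ++i) {
            float f0    = dot(planes[i].n, p0) + planes[i].d;   // signed distance of p0
            float denom = dot(planes[i].n, d);
            if (denom == 0.0f) {                 // segment parallel to this plane
                if (f0 < 0.0f) return false;     // entirely outside it
                continue;
            }
            float t = -f0 / denom;               // parameter of the crossing
            if (denom > 0.0f) {                  // crossing from outside to inside: entering
                if (t > tEnter) tEnter = t;
            } else {                             // crossing from inside to outside: exiting
                if (t < tLeave) tLeave = t;
            }
        }
        return tEnter <= tLeave;                 // visible piece is [tEnter, tLeave]
    }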

In View Space For Project 2, you need to clip edges to a view frustum in world space. The situation is: an edge with endpoints x1 and x2, and a frustum defined by the eye, e, and clip lines through xleft and xright. (Figure: 2D top-down view of the frustum and the edge.) 10/27/09 © NTUST

First Step Compute inside/outside for the endpoints of the line segment: determine which side of each clip plane the segment endpoints lie on, using the cross product. What do we know if (x1 - e) × (xleft - e) > 0? Other cross products give other information. What can we say if both segment endpoints are outside one clip plane? Stop here if we can, otherwise… 10/27/09 © NTUST
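A minimal 2D sketch of that side test (hypothetical names): the sign of the scalar cross product of (p - e) with (xleft - e) says which side of the left clip line the point p is on, and if both endpoints are strictly on the outside of the same clip line the edge can be trivially rejected. Which sign counts as "outside" depends on how the frustum is set up, so it is passed in here.

    // Sketch only: which side of the clip line through e and c does p lie on?
    // Returns the z component of (p - e) x (c - e); its sign gives the side.
    struct Vec2 { float x, y; };

    float side(const Vec2& p, const Vec2& e, const Vec2& c)
    {
        return (p.x - e.x) * (c.y - e.y) - (p.y - e.y) * (c.x - e.x);
    }

    // Trivial reject: both endpoints strictly on the outside of the same clip line.
    bool bothOutside(const Vec2& p1, const Vec2& p2,
                     const Vec2& e, const Vec2& c, float outsideSign)
    {
        return side(p1, e, c) * outsideSign > 0.0f &&
               side(p2, e, c) * outsideSign > 0.0f;
    }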

Finding Parametric Intersection Left clip edge: x = e + (xleft - e) t. Line: x = x1 + (x2 - x1) s. Solve the simultaneous equations in t and s (a 2×2 linear system). Use the endpoint inside/outside information to label the intersection as entering or leaving. Now we have the general Liang-Barsky case. 10/27/09 © NTUST
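Assuming everything lives in 2D (the top-down world of Project 2), a minimal sketch of that solve using the scalar cross product (hypothetical helpers, no robust handling of near-parallel lines):

    // Sketch only: intersect clip line  x = e + (xleft - e) t  with the
    // segment  x = x1 + (x2 - x1) s.  cross() is the scalar 2D cross product.
    struct Vec2 { float x, y; };
    Vec2  sub(const Vec2& a, const Vec2& b)   { return {a.x - b.x, a.y - b.y}; }
    float cross(const Vec2& a, const Vec2& b) { return a.x * b.y - a.y * b.x; }

    // Returns false if the clip line and the segment are parallel.
    bool intersect(Vec2 e, Vec2 xleft, Vec2 x1, Vec2 x2, float& t, float& s)
    {
        Vec2 a = sub(xleft, e);      // direction of the clip line
        Vec2 b = sub(x2, x1);        // direction of the segment
        Vec2 c = sub(x1, e);
        float denom = cross(a, b);
        if (denom == 0.0f) return false;
        t = cross(c, b) / denom;     // parameter along the clip line
        s = cross(c, a) / denom;     // parameter along the segment (keep 0 <= s <= 1)
        return true;
    }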

General Clipping Liang-Barsky can be generalized to clip line segments to arbitrary polygonal clip regions: consider clip edges as non-infinite segments and look at all intersection t values between 0 and 1. Clipping general polygons against general clip regions is quite hard: the Weiler-Atherton algorithm. Start with polygons as lists of vertices, replace crossing points with vertices, double all edges and form linked lists of edges, adjust links at crossing vertices, and enumerate polygon patches. 10/27/09 © NTUST

Weiler-Atherton – Form Lists Original Polygons Double Linked Lists - outside and inside lists 10/27/09 © NTUST

Weiler-Atherton – Find Crossings Crossing vertices added – links re-written 10/27/09 © NTUST

Weiler-Atherton – Enumerate “Clip and Poly” “Poly not Clip” “Not clip not poly” “Clip not Poly” Every link used once 10/27/09 © NTUST

Where We Stand At this point we know how to: convert points from local to window coordinates, and clip polygons and lines to the view volume. Next things: determine which pixels are covered by any given point, line or polygon; anti-alias (the slides are in the extra lecture notes, but we won't discuss it because it is too low level); and determine which polygon is in front. 10/27/09 © NTUST

Visibility Given a set of polygons, which is visible at each pixel? (in front, etc.). Also called hidden surface removal Very large number of different algorithms known. Two main classes: Object precision: computations that operate on primitives Image precision: computations at the pixel level All the spaces in the viewing pipeline maintain depth, so we can work in any space World, View and Canonical Screen spaces might be used Depth can be updated on a per-pixel basis as we scan convert polygons or lines Actually, run Bresenham-like algorithm on z and w before perspective divide 10/27/09 © NTUST

Visibility Issues Efficiency – it is slow to overwrite pixels, or rasterize things that cannot be seen Accuracy - answer should be right, and behave well when the viewpoint moves Must have technology that handles large, complex rendering databases In many complex environments, few things are visible How much of the real world can you see at any moment? Complexity - object precision visibility may generate many small pieces of polygon 10/27/09 © NTUST

Painter's Algorithm (Image Precision) Choose an order for the polygons based on some criterion (e.g. depth to a point on the polygon), then render the polygons in that order, deepest one first. This renders nearer polygons over further ones. Difficulty: it works for some important geometries (2.5D – e.g. VLSI), but doesn't work in this form for most geometries – we need at least better ways of determining the ordering. (Figures: a configuration where the ordering fails; which point on a polygon should be used for choosing the ordering?) 10/27/09 © NTUST

The Z-buffer (1) (Image Precision) For each pixel on screen, have at least two buffers Color buffer stores the current color of each pixel The thing to ultimately display Z-buffer stores at each pixel the depth of the nearest thing seen so far Also called the depth buffer Initialize this buffer to a value corresponding to the furthest point (z=1.0 for canonical and window space) As a polygon is filled in, compute the depth value of each pixel that is to be filled if depth < z-buffer depth, fill in pixel color and new depth else disregard 10/27/09 © NTUST
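A minimal sketch of that per-pixel test during polygon fill (hypothetical framebuffer arrays; the fragment's depth is assumed to be already interpolated):

    // Sketch only: z-buffer update for one candidate fragment at (x, y).
    // colorBuf and zBuf are hypothetical W x H arrays; zBuf is initialized to 1.0 (far).
    const int W = 640, H = 480;
    float        zBuf[W * H];
    unsigned int colorBuf[W * H];

    void writeFragment(int x, int y, float depth, unsigned int color)
    {
        int i = y * W + x;
        if (depth < zBuf[i]) {       // nearer than anything seen so far at this pixel
            zBuf[i]     = depth;     // keep the new depth
            colorBuf[i] = color;     // and the new color
        }                            // else: discard the fragment
    }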

The Z-buffer (2) Advantages: simple and now ubiquitous in hardware (a z-buffer is part of what makes a graphics card "3D"); computing the required depth values is simple. Disadvantages: over-renders – rasterizes polygons even if they are not visible; depth quantization errors can be annoying; can't easily do transparency or filter-based anti-aliasing (requires keeping information about partially covered polygons). 10/27/09 © NTUST

Visibility Recap You are given a set of polygons to draw and you need to figure out which one is visible at every pixel Issues include: Efficiency – it is slow to overwrite pixels, or scan convert things that cannot be seen Accuracy – answer should be right, and behave well when the viewpoint moves Complexity – object precision visibility may generate many small pieces of polygon 10/27/09 © NTUST

Z-Buffer and Transparency Say you want to render transparent surfaces (alpha not 1) with a z-buffer. You must render in back-to-front order; otherwise, you would have to store at least the first opaque polygon behind the transparent one. (Figure: a partially transparent surface in front of two opaque ones – drawing it 3rd needs the depths of the 1st and 2nd; drawing it 1st or 2nd means its color and depth must be recalled.) 10/27/09 © NTUST

OpenGL Depth Buffer OpenGL defines a depth buffer as its visibility algorithm. To enable depth testing: glEnable(GL_DEPTH_TEST). To clear the depth buffer: glClear(GL_DEPTH_BUFFER_BIT). To clear color and depth: glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT). The number of bits used for the depth values can be specified (windowing system dependent, and hardware may impose limits based on available memory). The comparison function can be specified with glDepthFunc(…) – sometimes you want to draw the furthest thing, or things equal to the depth in the buffer. 10/27/09 © NTUST
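For reference, a minimal fragment using the calls mentioned above (classic fixed-function style; the GL_LEQUAL depth function is just one common choice, not the only one):

    #include <GL/gl.h>   // or the platform's OpenGL header

    void setupDepth()
    {
        glEnable(GL_DEPTH_TEST);      // turn on depth testing
        glClearDepth(1.0);            // value written on clear (the far plane)
        glDepthFunc(GL_LEQUAL);       // pass if the incoming depth <= stored depth
    }

    void startFrame()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // clear color and depth together
    }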

The A-buffer (Image Precision) Handles transparent surfaces and filter anti-aliasing At each pixel, maintain a pointer to a list of polygons sorted by depth, and a sub-pixel coverage mask for each polygon Coverage mask: Matrix of bits saying which parts of the pixel are covered Algorithm: Drawing pass (do not directly display the result) if polygon is opaque and covers pixel, insert into list, removing all polygons farther away if polygon is transparent or only partially covers pixel, insert into list, but don’t remove farther polygons 10/27/09 © NTUST
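One way to picture the per-pixel data (a hedged sketch, not the course's data structure): each pixel keeps a depth-sorted list of fragments, each carrying a color, a depth and a sub-pixel coverage mask.

    // Sketch only: per-pixel fragment list for an A-buffer-style algorithm.
    #include <cstdint>
    #include <list>
    #include <vector>

    struct Fragment {
        float    depth;        // depth of this polygon piece at the pixel
        float    color[4];     // RGBA; alpha < 1 marks a transparent fragment
        uint32_t coverage;     // 32-bit mask, e.g. an 8x4 grid of sub-pixels
    };

    struct APixel {
        std::list<Fragment> frags;   // kept sorted front-to-back during the drawing pass
    };

    // The whole buffer is just one APixel per pixel.
    std::vector<APixel> abuffer(640 * 480);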

The A-buffer (2) Algorithm, rendering pass: at each pixel, traverse the buffer, using the polygon colors and coverage masks to composite with the "over" operator. Advantages: can do more than the Z-buffer; the coverage mask idea can be used in other visibility algorithms. Disadvantages: not in hardware, and slow in software; still at heart a z-buffer, so over-rendering and depth quantization problems remain. But it is used in high quality rendering tools. 10/27/09 © NTUST

Area Subdivision Exploits area coherence: Small areas of an image are likely to be covered by only one polygon The practical truth of this assertion varies over the years (it’s currently going from mostly false to more true) Three easy cases for determining what’s in front in a given region: a polygon is completely in front of everything else in that region no surfaces project to the region only one surface is completely inside the region, overlaps the region, or surrounds the region 10/27/09 © NTUST

Warnock’s Area Subdivision (Image Precision) Start with whole image If one of the easy cases is satisfied (previous slide), draw what’s in front Otherwise, subdivide the region and recurse If region is single pixel, choose surface with smallest depth Advantages: No over-rendering Anti-aliases well - just recurse deeper to get sub-pixel information Disadvantage: Tests are quite complex and slow 10/27/09 © NTUST
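The recursion itself is short; a hedged sketch follows, where classify(), drawFront() and drawClosest() are hypothetical stand-ins for the hard parts (the region-versus-polygon tests and the actual drawing):

    // Sketch only: Warnock-style area subdivision over an axis-aligned region.
    struct Region { int x0, y0, x1, y1; };        // half-open pixel rectangle

    enum class Case { Easy, Hard };               // Easy = one of the three simple cases
    Case classify(const Region&);                 // hypothetical: region vs. polygon tests
    void drawFront(const Region&);                // hypothetical: draw what's in front
    void drawClosest(int x, int y);               // hypothetical: surface with smallest depth

    void warnock(const Region& r)
    {
        if (classify(r) == Case::Easy) { drawFront(r); return; }
        if (r.x1 - r.x0 <= 1 && r.y1 - r.y0 <= 1) {          // down to a single pixel
            drawClosest(r.x0, r.y0);
            return;
        }
        int mx = (r.x0 + r.x1) / 2, my = (r.y0 + r.y1) / 2;  // split into four quadrants
        Region quads[4] = {{r.x0, r.y0, mx, my}, {mx, r.y0, r.x1, my},
                           {r.x0, my, mx, r.y1}, {mx, my, r.x1, r.y1}};
        for (const Region& q : quads)
            if (q.x1 > q.x0 && q.y1 > q.y0)                  // skip degenerate quadrants
                warnock(q);
    }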

Warnock's Algorithm Regions are labeled with the case used to classify them: one polygon in front, empty, or one polygon inside, surrounding or intersecting; small regions are not labeled. Note it's a rendering algorithm and a HSR algorithm at the same time, assuming you can draw squares. (Figure: a subdivided image with regions labeled 1–3 by case.) 10/27/09 © NTUST

BSP-Trees (Object Precision) Construct a binary space partition tree Tree gives a rendering order A list-priority algorithm Tree splits 3D world with planes The world is broken into convex cells Each cell is the intersection of all the half-spaces of splitting planes on tree path to the cell Also used to model the shape of objects, and in other visibility algorithms BSP visibility in games does not necessarily refer to this algorithm 10/27/09 © NTUST

BSP-Tree Example (Figure: a 2D scene split by planes A, B and C into numbered regions 1–4, shown next to the corresponding BSP tree; – and + mark the back and front sides of each splitting plane.) 10/27/09 © NTUST

Building BSP-Trees Choose polygon (arbitrary) Split its cell using plane on which polygon lies May have to chop polygons in two (Clipping!) Continue until each cell contains only one polygon fragment Splitting planes could be chosen in other ways, but there is no efficient optimal algorithm for building BSP trees Optimal means minimum number of polygon fragments in a balanced tree 10/27/09 © NTUST
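A hedged sketch of the construction (splitPolygon() is a hypothetical helper that clips a polygon by the splitter's plane and appends the back-side and front-side pieces; its body is not shown):

    // Sketch only: building a BSP tree from a list of polygons.
    #include <memory>
    #include <vector>

    struct Vec3    { float x, y, z; };
    struct Polygon { std::vector<Vec3> verts; };

    // Hypothetical helper: clip poly by the plane containing splitter and append
    // the resulting back-side and front-side pieces to backOut / frontOut.
    void splitPolygon(const Polygon& poly, const Polygon& splitter,
                      std::vector<Polygon>& backOut, std::vector<Polygon>& frontOut);

    struct BSPNode {
        Polygon splitter;                        // polygon lying in this node's split plane
        std::unique_ptr<BSPNode> back, front;
    };

    std::unique_ptr<BSPNode> buildBSP(std::vector<Polygon> polys)
    {
        if (polys.empty()) return nullptr;
        auto node = std::make_unique<BSPNode>();
        node->splitter = polys.front();          // arbitrary choice of splitting polygon
        std::vector<Polygon> backList, frontList;
        for (size_t i = 1; i < polys.size(); ++i)        // partition the remaining polygons
            splitPolygon(polys[i], node->splitter, backList, frontList);
        node->back  = buildBSP(std::move(backList));     // recurse until cells hold one fragment
        node->front = buildBSP(std::move(frontList));
        return node;
    }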

Building Example We will build a BSP tree, in 2D, for a 3-room building, ignoring doors. The splitting edge order is shown; the "back" side of each edge is the side with the number. (Figure: floor plan with splitting edges numbered 1–6.) 10/27/09 © NTUST

Building Example (Done) (Figure: the finished BSP tree for the building; edges 3, 4 and 5 have been split into pieces 3a/3b, 4a/4b and 5a/5b, with – and + branches for the back and front sides of each splitting edge.) 10/27/09 © NTUST

BSP-Tree Rendering Observation: things on the opposite side of a splitting plane from the viewpoint cannot obscure things on the same side as the viewpoint. The rendering algorithm is a recursive descent of the BSP tree. At each node (for back-to-front rendering): test the viewpoint against the split plane to decide which sub-tree is which; recurse down the side that does not contain the viewpoint; draw the polygon in the splitting plane (painting over whatever has already been drawn); then recurse down the side containing the viewpoint. 10/27/09 © NTUST
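That recursion is compact; a hedged sketch for back-to-front drawing (assumes the Vec3, Polygon and BSPNode types from the building sketch above, plus hypothetical inFront() and drawPolygon() helpers):

    // Sketch only: back-to-front traversal of a BSP tree for a given eye position.
    bool inFront(const Vec3& eye, const Polygon& splitter);  // hypothetical: which side is the eye on?
    void drawPolygon(const Polygon& p);                      // hypothetical: paint over what is there

    void renderBackToFront(const BSPNode* node, const Vec3& eye)
    {
        if (!node) return;
        if (inFront(eye, node->splitter)) {                  // eye on the front side of the split plane
            renderBackToFront(node->back.get(), eye);        // far side first
            drawPolygon(node->splitter);                     // then the polygon in the splitting plane
            renderBackToFront(node->front.get(), eye);       // near side last
        } else {                                             // eye on the back side
            renderBackToFront(node->front.get(), eye);
            drawPolygon(node->splitter);
            renderBackToFront(node->back.get(), eye);
        }
    }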

Using a BSP-Tree Observation: Things on the opposite side of a splitting plane from the viewpoint cannot obscure things on the same side as the viewpoint A statement about rays – a ray must hit something on this side of the split plane before it hits the split plane and before it hits anything on the back side NOT a statement about distance – things on the far side of the plane can be closer than things on the near side Gives a relative ordering of the polygons, not absolute in terms of depth or any other quantity Split plane 10/27/09 © NTUST

Rendering Example (Figure: the building, its BSP tree, and the eye position.) The back-to-front rendering order is 3a, 4a, 6, 1, 4b, 5a, 2, 3b, 5b. 10/27/09 © NTUST

BSP-Tree Rendering (2) Advantages: one tree works for any viewing point; filter anti-aliasing and transparency work, because we have a back-to-front ordering for compositing; can also render front to back and avoid drawing back polygons that cannot contribute to the view (this uses two trees – an extra one that subdivides the window – and was a major innovation in Quake). Disadvantages: can produce many small pieces of polygon; over-rendering. 10/27/09 © NTUST

Exact Visibility An exact visibility algorithm tells you what is visible and only what is visible No over-rendering Warnock’s algorithm is an example Difficult to achieve efficiently in practice Small detail objects in an environment make it particularly difficult But, in mazes and other simple environments, exact visibility is extremely efficient 10/27/09 © NTUST

Cells and Portals Assume the world can be broken into cells Simple shapes Rooms in a building, for instance Define portals to be the transparent boundaries between cells Doorways between rooms, windows, etc In a world like this, can determine exactly which parts of which rooms are visible Then render visible rooms plus contents 10/27/09 © NTUST

Cell-Portal Example (1) View 10/27/09 © NTUST

Cell and Portal Visibility Start in the cell containing the viewer, with the full viewing frustum Render the walls of that room and its contents Recursively clip the viewing frustum to each portal out of the cell, and call the algorithm on the cell beyond the portal 10/27/09 © NTUST
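A hedged sketch of that recursion (all types are hypothetical; clipFrustumToPortal() marks the frustum empty when the portal is not visible, which is what makes the recursion terminate):

    // Sketch only: recursive cell-and-portal rendering.
    #include <vector>

    struct Frustum { bool empty = false; /* plus the clip planes bounding the frustum */ };
    struct Cell;                                   // forward declaration

    struct Portal {
        Cell* neighbor;                            // cell on the other side of this portal
        /* portal geometry */
    };

    struct Cell {
        std::vector<Portal> portals;
        /* walls and contents */
    };

    void    drawCellContents(const Cell&, const Frustum&);        // hypothetical: walls + objects
    Frustum clipFrustumToPortal(const Frustum&, const Portal&);   // hypothetical: shrink the frustum

    void renderCell(const Cell& cell, const Frustum& frustum)
    {
        drawCellContents(cell, frustum);           // render this cell clipped to the current frustum
        for (const Portal& p : cell.portals) {
            Frustum smaller = clipFrustumToPortal(frustum, p);
            if (!smaller.empty)                    // portal visible through the current frustum
                renderCell(*p.neighbor, smaller);  // recurse into the neighboring cell
        }
    }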

Cell-Portal Example (2) View 10/27/09 © NTUST

Cell-Portal Example (3) View 10/27/09 © NTUST

Cell-Portal Example (4) View 10/27/09 © NTUST

Cell-Portal Example (5) View 10/27/09 © NTUST

Cell-Portal Example (6) View 10/27/09 © NTUST

Cell-Portal Operations Must clip polygons to the current view frustum (not the original one) Can be done with additional hardware clipping planes, if you have them Must clip the view frustum to the portal Easiest to clip portal to frustum, then set frustum to exactly contain clipped portal In Project 2, you implement these things in software, for a 2.5d environment 10/27/09 © NTUST

Cell-Portal Properties Advantages Extremely efficient - only looks at cells that are actually visible: visibility culling Easy to modify for approximate visibility - render all of partially visible cells, let depth buffer clean up Can handle mirrors as well - flip world about the mirror and pretend mirror is a portal Disadvantages Restricted to environments with good cell/portal structure 10/27/09 © NTUST

Project 2 Intro You are given the following: rooms, defined in 2D by the edges that surround the room, plus the height of the ceiling. Each edge is marked opaque or clear, and for each clear edge there is a pointer to the thing on the other side. You know where the viewer is and what the field of view is: the viewer is given as a (cx,cy,cz) position, and the view frustum is given as a viewing angle and an angle for the field of view. (Figure: floor plan with the viewer marked X at (cx,cy,cz).) 10/27/09 © NTUST

Project 2 (2) Represent the frustum as a left and right clipping line You don’t have to worry about the top and bottom Each clip line starts at the viewer’s position and goes to infinity in the viewing direction Write a procedure that clips an edge to the view frustum This takes a frustum and returns the endpoints of the clipped edge, or a flag to indicate that the edge is not visible 10/27/09 © NTUST

Project 2 (3) Write a procedure that takes a room and a frustum, and draws the room Clip each edge to the frustum If the edge is visible, draw the wall that the edge represents Create the 3D wall from the 2d piece of edge Project the vertices Draw the polygon in 2D If the edge is clear, recurse into neighboring polygon Draw the floor and ceiling first, because they will be behind everything 10/27/09 © NTUST

Where We Stand So far we know how to: transform between spaces, draw polygons, and decide what's in front. Next: deciding a pixel's intensity and color. 10/27/09 © NTUST

Normal Vectors The intensity of a surface depends on its orientation with respect to the light and the viewer The surface normal vector describes the orientation of the surface at a point Mathematically: Vector that is perpendicular to the tangent plane of the surface What’s the problem with this definition? Just “the normal vector” or “the normal” Will use n or N to denote Normals are either supplied by the user or automatically computed 10/27/09 © NTUST

Transforming Normal Vectors Normal vectors are directions To transform a normal, multiply it by the inverse transpose of the transformation matrix Recall, rotation matrices are their own inverse transpose Don’t include the translation! Use (nx,ny,nz,0) for homogeneous coordinates 10/27/09 © NTUST
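One hedged way to apply that rule in code, with hypothetical minimal types (real code would use its matrix library's inverse and transpose): the cofactor matrix of the upper-left 3×3 block equals det(M) times the inverse transpose, so applying it and renormalizing gives the correctly transformed normal direction, and the translation never enters.

    // Sketch only: transform a normal by the inverse transpose of the upper-left 3x3
    // of the model transform.  m[row][col]; hypothetical minimal types.
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Mat3 { float m[3][3]; };

    Vec3 transformNormal(const Mat3& M, const Vec3& n)
    {
        Mat3 c;   // cofactor matrix = det(M) * inverse-transpose(M)
        c.m[0][0] = M.m[1][1]*M.m[2][2] - M.m[1][2]*M.m[2][1];
        c.m[0][1] = M.m[1][2]*M.m[2][0] - M.m[1][0]*M.m[2][2];
        c.m[0][2] = M.m[1][0]*M.m[2][1] - M.m[1][1]*M.m[2][0];
        c.m[1][0] = M.m[0][2]*M.m[2][1] - M.m[0][1]*M.m[2][2];
        c.m[1][1] = M.m[0][0]*M.m[2][2] - M.m[0][2]*M.m[2][0];
        c.m[1][2] = M.m[0][1]*M.m[2][0] - M.m[0][0]*M.m[2][1];
        c.m[2][0] = M.m[0][1]*M.m[1][2] - M.m[0][2]*M.m[1][1];
        c.m[2][1] = M.m[0][2]*M.m[1][0] - M.m[0][0]*M.m[1][2];
        c.m[2][2] = M.m[0][0]*M.m[1][1] - M.m[0][1]*M.m[1][0];

        Vec3 t = { c.m[0][0]*n.x + c.m[0][1]*n.y + c.m[0][2]*n.z,
                   c.m[1][0]*n.x + c.m[1][1]*n.y + c.m[1][2]*n.z,
                   c.m[2][0]*n.x + c.m[2][1]*n.y + c.m[2][2]*n.z };
        float len = std::sqrt(t.x*t.x + t.y*t.y + t.z*t.z);
        return { t.x/len, t.y/len, t.z/len };    // renormalize; the overall det(M) scale drops out
    }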

Local Shading Models Local shading models provide a way to determine the intensity and color of a point on a surface The models are local because they don’t consider other objects We use them because they are fast and simple to compute They do not require knowledge of the entire scene, only the current piece of surface. Why is this good for hardware? For the moment, assume: We are applying these computations at a particular point on a surface We have a normal vector for that point 10/27/09 © NTUST

Local Shading Models What they capture: direct illumination from light sources; diffuse and specular reflections; (very) approximate effects of global lighting. What they don't do: shadows, mirrors, refraction, and lots of other stuff… 10/27/09 © NTUST

“Standard” Lighting Model Consists of three terms linearly combined: Diffuse component for the amount of incoming light from a point source reflected equally in all directions Specular component for the amount of light from a point source reflected in a mirror-like fashion Ambient term to approximate light arriving via other surfaces 10/27/09 © NTUST

Diffuse Illumination Incoming light, Ii, from direction L, is reflected equally in all directions: no dependence on viewing direction. The amount of light reflected depends on: the angle of the surface with respect to the light source (actually, this determines how much light is collected by the surface, to then be reflected), and the diffuse reflectance coefficient of the surface, kd. We don't want to illuminate the back side, so use max(N·L, 0) to clamp the contribution to zero when the light is behind the surface. 10/27/09 © NTUST
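A minimal sketch of the diffuse term (hypothetical vector helpers; N and L are assumed to be unit vectors):

    // Sketch only: diffuse term  Id = kd * Ii * max(N.L, 0).
    #include <algorithm>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    float diffuse(const Vec3& N, const Vec3& L, float kd, float Ii)
    {
        float ndotl = std::max(dot(N, L), 0.0f);   // clamp: no light from behind the surface
        return kd * Ii * ndotl;
    }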

Diffuse Example Where is the light? Which point is brightest (how is the normal at the brightest point related to the light)? 10/27/09 © NTUST

Illustrating Shading Models Show the polar graph of the amount of light leaving for a given incoming direction, and show the intensity of each point on a surface for a given light position or direction. What do these look like for a diffuse surface? 10/27/09 © NTUST

Specular Reflection (Phong Reflectance Model) Incoming light is reflected primarily in the mirror direction, R. Perceived intensity depends on the relationship between the viewing direction, V, and the mirror direction. The bright spot is called a specularity. Intensity is controlled by: the specular reflectance coefficient, ks, and the Phong exponent, p, which controls the apparent size of the specularity (higher p, smaller highlight). (Figure: the mirror direction R and viewing direction V at the surface.) 10/27/09 © NTUST

Specular Example 10/27/09 © NTUST

Illustrating Shading Models Show the polar graph of the amount of light leaving for a given incoming direction, and show the intensity of each point on a surface for a given light position or direction. What do these look like for a specular surface? 10/27/09 © NTUST

Specular Reflection Improvement Compute based on the normal vector and the "halfway" vector, H, the normalized average of L and V. N·H is always positive when the light and eye are above the tangent plane. This is not quite the same result as the other formulation, because the angle between N and H is half the angle between R and V. (Figure: the H, N and V vectors at the surface.) 10/27/09 © NTUST

Putting It Together Global ambient intensity, Ia: a gross approximation to light bouncing around off all other surfaces, modulated by the ambient reflectance ka. Just sum all the terms. If there are multiple lights, sum the contributions from each light. There are several variations and approximations… 10/27/09 © NTUST
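Putting the three terms into one hedged sketch, per color channel (names like Light and Material are hypothetical; the Blinn half-vector N·H form from the earlier slide could replace the R·V term):

    // Sketch only: I = ka*Ia + sum over lights of ( kd*Ii*max(N.L,0) + ks*Ii*max(R.V,0)^p ).
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  scale(const Vec3& v, float s)     { return {v.x*s, v.y*s, v.z*s}; }
    Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    struct Light    { Vec3 L; float Ii; };        // direction to the light (unit) and intensity
    struct Material { float ka, kd, ks, p; };     // reflectances and Phong exponent

    float shade(const Vec3& N, const Vec3& V, const Material& m,
                float Ia, const std::vector<Light>& lights)
    {
        float I = m.ka * Ia;                      // ambient approximation
        for (const Light& light : lights) {
            float ndotl = dot(N, light.L);
            if (ndotl <= 0.0f) continue;          // light is behind the surface
            Vec3  R = sub(scale(N, 2.0f * ndotl), light.L);   // mirror direction R = 2(N.L)N - L
            float rdotv = std::max(dot(R, V), 0.0f);
            I += light.Ii * (m.kd * ndotl + m.ks * std::pow(rdotv, m.p));
        }
        return I;                                 // repeat for each color channel (r, g, b)
    }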

Color Do everything for three colors, r, g and b. Note that some terms (the expensive ones) are constant across the channels. Using only three colors is an approximation, but few graphics practitioners realize it: the k terms depend on wavelength and should really be computed for a continuous spectrum, otherwise we alias in color space; better results use 9 color samples. 10/27/09 © NTUST

Approximations for Speed The viewer direction, V, and the light direction, L, depend on the surface position being considered, x Distant light approximation: Assume L is constant for all x Good approximation if light is distant, such as sun Distant viewer approximation Assume V is constant for all x Rarely good, but only affects specularities 10/27/09 © NTUST