1
Visible Surface Determination (VSD)
To render or not to render, that is the question...
2
What is it?
Given a set of 3-D objects and a view specification (camera), determine which edges or surfaces of the objects are visible.
Why might objects not be visible? Occlusion vs. clipping:
- Clipping works at the object level (clip against the view volume)
- Occlusion works at the scene level (compare depth against other edges/surfaces)
Also called Hidden Surface Removal (HSR).
We begin with some history of previously used VSD algorithms.
3
Object-Precision Algorithms
Sutherland categorized algorithms by whether they work on objects in the world (object precision) or with projections of objects in screen coordinates (image precision), referring back to the world when z is needed.
Roberts '63: hidden line (edge) removal
- Compare each edge with every object and eliminate invisible edges or parts of edges
- Complexity: O(n^2), since each object must be compared with all edges
A similar approach for hidden surfaces:
- Each polygon is clipped by the projections of all other polygons in front of it
- Invisible surfaces are eliminated and visible sub-polygons are created
- SLOW, ugly special cases, polygons only
4
Hardware Scan Conversion: VSD (1/4)
Perform backface culling: if the normal faces in the same direction as the LOS (line of sight), it's a back face.
- If LOS · n_pol > 0, the polygon is invisible; discard it
- If LOS · n_pol < 0, the polygon may be visible (if not, it is occluded)
[Figure: canonical perspective-transformed view volume, from (-1, 1, -1) to (1, -1, 0), containing a cube]
Finally, clip against the normalized view volume: (-1 < x < 1), (-1 < y < 1), (-1 < z < 0)
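As an illustration, a minimal sketch of the dot-product test above; the Vec3 type and the function names are assumptions for this example, not part of the original pipeline:

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Back face if the line of sight and the polygon normal point the same way.
bool isBackFace(const Vec3& lineOfSight, const Vec3& polygonNormal) {
    return dot(lineOfSight, polygonNormal) > 0.0;
}

int main() {
    Vec3 los    = {0.0, 0.0, -1.0};  // looking down -z
    Vec3 away   = {0.0, 0.0, -1.0};  // normal pointing away from the viewer
    Vec3 toward = {0.0, 0.0,  1.0};  // normal pointing toward the viewer
    std::printf("away: %d  toward: %d\n",
                isBackFace(los, away), isBackFace(los, toward));
}
```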
5
Hardware Scan Conversion: VSD (2/4)
Still need to determine object occlusion (point by point). How do we determine which point is closest, i.e. that P2 is closer than P1?
- In the perspective view volume, we would have to compute the projector through each point and find which point is closest along it
- The perspective transformation causes projectors to become parallel, simplifying the depth comparison to a z-comparison
The Z-Buffer Algorithm:
- The z-buffer holds a scalar value for each screen pixel, initialized to the far plane's z
- As each object is rendered, the z value of each of its sample points is compared to the z value at the same (x, y) location in the z-buffer
- If the new point's z value is less than the previous one (i.e. closer to the eye), its z value is placed in the z-buffer and its color is placed in the frame buffer at the same (x, y); otherwise the previous z value and frame buffer color are unchanged
Depth can be stored as integers or floats; z-compression is a problem either way. Integer depth is still used in OpenGL.
6
Z-Buffer Algorithm

```
void zBuffer() {
    int x, y;
    for (y = 0; y < YMAX; y++)
        for (x = 0; x < XMAX; x++) {
            WritePixel(x, y, BACKGROUND_VALUE);
            WriteZ(x, y, 1);
        }
    for each polygon {
        for each pixel in polygon's projection {
            double pz = Z-value at pixel (x, y);  // from plane equation
            if (pz < ReadZ(x, y)) {
                // New point is closer to front of view
                WritePixel(x, y, color at pixel (x, y));
                WriteZ(x, y, pz);
            }
        }
    }
}
```

Draw every polygon that we can't trivially reject (totally outside the view volume). If we find a piece (one or more pixels) of a polygon that is closer to the front, we paint over whatever was behind it. Use the plane equation for z = f(x, y).
7
Hardware Scan Conversion: VSD (3/4)
Requires two "buffers":
- Intensity buffer: our familiar RGB pixel buffer, initialized to the background color
- Depth ("Z") buffer: depth of the scene at each pixel, initialized to 255
Polygons are scan-converted in arbitrary order. When pixels overlap, use the Z-buffer to decide which polygon "gets" that pixel.
[Figure: integer Z-buffer example with near = 0, far = 255, showing two polygons' depth values being combined into the buffer]
8
Hardware Scan Conversion: VSD (4/4)
After our scene gets projected onto the film plane, we know the depths only at the locations in the depth buffer that our vertices got mapped to. So how do we efficiently fill in all the "in between" z-buffer information? Simple answer: incrementally!
Remember scan conversion/polygon filling? As we move along the y-axis, we track the x position where each edge intersects the scan line. Do the same for the z coordinate, with the y-z slope instead of the y-x slope.
Knowing z1, z2, and z3 we can calculate za and zb for each edge, and then incrementally calculate zp as we scan. This is similar to the interpolation used to calculate color per pixel (Gouraud shading).
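A minimal sketch of the incremental span interpolation just described, under the assumption that za and zb have already been computed for the two edges crossing the current scan line:

```cpp
#include <vector>

// z values across one span of a scan line, from (xa, za) to (xb, zb).
// One addition per pixel; the same trick is applied per scan line to
// step za and zb down each edge using the y-z slope.
std::vector<double> zAcrossSpan(int xa, int xb, double za, double zb) {
    std::vector<double> zs;
    double dz = (xb > xa) ? (zb - za) / (xb - xa) : 0.0;  // per-pixel z step
    double zp = za;
    for (int x = xa; x <= xb; ++x) {
        zs.push_back(zp);  // zp is the depth tested against the z-buffer at x
        zp += dz;          // incremental update, no plane re-evaluation
    }
    return zs;
}
```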
9
Advantages of Z-buffer
- Dirt-cheap and fast to implement in hardware, despite its brute-force nature and potentially many passes over each pixel
- Requires no pre-processing; polygons can be treated in any order!
- Allows incremental additions to the image: store both the frame buffer and the z-buffer, and scan-convert the new polygons
- Coherence and per-pixel polygon IDs are lost, so we can't do incremental deletes of obsolete information
- The technique extends to other surface descriptions that have (relatively) cheap z = f(x, y) computations (preferably incremental)
10
Disadvantages of Z-Buffer
Perspective foreshortening (see slides in Viewing III) causes compression along the z-axis in post-perspective space:
- Objects far away from the camera have z-values very close to each other
- Depth information loses precision rapidly
- Leads to z-ordering bugs called z-fighting
[Figure: z vs. x before the perspective transform (depths spread between near and far) and after (depths compressed toward 1)]
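To see this compression numerically, a small sketch assuming the standard OpenGL-style perspective depth mapping to [-1, 1]; the near/far values are arbitrary choices for the example:

```cpp
#include <cstdio>

// NDC depth for a point at eye-space distance d, using the standard
// OpenGL perspective mapping with near plane n and far plane f.
double ndcDepth(double d, double n, double f) {
    return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d);
}

int main() {
    double n = 0.1, f = 1000.0;
    // Near the camera, one unit of eye-space depth spans a wide NDC range...
    std::printf("near: %.6f -> %.6f\n", ndcDepth(1.0, n, f), ndcDepth(2.0, n, f));
    // ...far away, the same unit is squeezed into almost nothing (z-fighting).
    std::printf("far:  %.6f -> %.6f\n", ndcDepth(500.0, n, f), ndcDepth(501.0, n, f));
}
```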
11
Z-Fighting (1/2)
Z-fighting occurs when two primitives have similar values in the z-buffer.
- Coplanar polygons (two polygons that occupy the same space)
- One is arbitrarily chosen over the other, but z varies across the polygons and binning will cause artifacts, as shown on the next slide
- Behavior is deterministic: the same camera position gives the same z-fighting pattern
[Figure: examples, including two intersecting cubes]
12
Z-Fighting (2/2)
[Figure: eye at origin, looking down the z-axis. The red and blue lines represent cross-sections of the red and blue coplanar polygons from the previous slide, plotted against the x axis of the image (each column is a pixel) and the z-value bins. Where both fall into the same bin, blue, which is drawn after red, ends up in front of red(1); elsewhere red stays in front of blue.]
(1) Using the glDepthFunc(GL_LEQUAL) depth mode, which will overwrite a fragment if the z-value is <= the value in the z-buffer.
13
Visibility Determination Techniques
- Back-face detection
- The depth-sort method
- The depth-buffer method
- The A-buffer method
- The scan-line method
- Area subdivision
- The ray-casting method
- Bounding volumes
14
Visibility Determination
Often referred to as either:
- Visible line/surface detection: identify lines/surfaces which are visible
- Hidden line/surface elimination: remove lines/surfaces which are not visible
The two can have subtle differences.
15
Visibility Determination
Two approaches, or some combination of both:
- Object-space: compares objects and parts of objects to each other within the scene to determine which surfaces are visible
- Image-space: visibility decided point by point at each pixel position on the projection plane
16
Back-face Detection
A fast and simple object-space approach for determining the back faces of a polyhedron.
A polygon is a back face if the polygon normal n and the viewing vector v satisfy n · v >= 0; testing the sign of the dot product is easier than computing the cosine of the angle between the vectors.
17
Back-face Detection
The test is even simpler if applied after the transformation to normalised device coordinates, with the viewing direction along the z-axis. If the polygon lies in the plane Ax + By + Cz + D = 0, then in NDC we only need to check the z-component of its normal (i.e. the coefficient C).
18
Back-face Detection
Right-hand system:
- Polygon is a back face if C <= 0
- Default: cull clockwise winding
Left-hand system:
- Polygon is a back face if C >= 0
- Default: cull counter-clockwise winding
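A minimal sketch of this sign test, with a hypothetical helper that computes C directly from three projected vertices (types and names are assumptions for the example):

```cpp
struct Vec3 { double x, y, z; };

// Plane coefficient C (the z-component of the polygon normal) from three
// vertices in normalised device coordinates, listed in winding order.
double planeC(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    return (v1.x - v0.x) * (v2.y - v0.y) - (v1.y - v0.y) * (v2.x - v0.x);
}

// Right-hand system: back face if C <= 0 (left-hand: flip the comparison).
bool isBackFaceRH(const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    return planeC(v0, v1, v2) <= 0.0;
}
```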
19
Back-face Detection
Back-face culling: removal of back-facing polygons. Usually happens before the clipping step in a rendering pipeline.
Does this solve all visibility problems? No. Given these "front-facing" polygons, which sections are visible to the viewer?
20
Depth-sort Method
Can be considered an object-space approach.
General idea:
- Sort polygons in order of decreasing depth (i.e. furthest to nearest)
- Scan-convert polygons in order, starting from the furthest (greatest depth) polygon
- Nearer polygons will overwrite further polygons
This back-to-front rendering is often referred to as the "painter's algorithm".
21
Painter's Algorithm - Image Precision
The back-to-front algorithm was used in the first hardware-rendered scene, the 1967 GE Flight Simulator by Schumacher et al., using a video drum to hold the scene.
Create a drawing order in which each polygon overwrites the previous ones; this guarantees correct visibility at any pixel resolution.
The strategy is to work back to front: find a way to sort polygons by depth (z), then draw them in that order (a sketch follows this slide).
- Do a rough sort of the polygons by the smallest (farthest) z-coordinate in each polygon
- Scan-convert the most distant polygon first, then work forward towards the viewpoint (the "painter's algorithm")
- See the 3D depth-sort algorithm by Newell, Newell, and Sancha
Can this back-to-front strategy always be done? Problem: two polygons partially occluding each other; we need to split polygons.
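A minimal sketch of the rough sort-and-paint loop, assuming larger z means nearer the viewer and that no polygons need splitting; the Polygon fields and drawPolygon are placeholders, not from the slides:

```cpp
#include <algorithm>
#include <vector>

struct Polygon {
    double farthestZ;  // smallest (farthest) z among this polygon's vertices
    // vertices, colour, ... omitted
};

void drawPolygon(const Polygon&) { /* scan conversion would go here */ }

// Rough back-to-front sort, then paint: nearer polygons overwrite farther
// ones wherever they overlap on screen.
void painterDraw(std::vector<Polygon>& polys) {
    std::sort(polys.begin(), polys.end(),
              [](const Polygon& a, const Polygon& b) {
                  return a.farthestZ < b.farthestZ;  // farthest drawn first
              });
    for (const Polygon& p : polys)
        drawPolygon(p);
}
```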
22
Depth-sort Method
Obviously, depth sorting has to occur before rasterisation.
23
Depth-sort Method
This is easy if all the polygons' depths do not overlap. But how do we sort polygons with overlapping depths? For polygons with overlapping depth, we need to make additional comparisons.
24
Depth-sort Method
Check whether the polygons overlap in their x and y extents, assuming normalised device coordinates where the viewing direction is along the z-axis (which is usually the case). If the x and y extents do not overlap, the order does not matter; a sketch of this test follows.
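A minimal sketch of the extent test; the Extent struct is an assumption for the example:

```cpp
struct Extent { double xmin, xmax, ymin, ymax; };

// If the screen-space extents of two polygons do not overlap in x or y,
// their relative order in the depth sort does not matter.
bool extentsOverlap(const Extent& a, const Extent& b) {
    bool xOverlap = a.xmin <= b.xmax && b.xmin <= a.xmax;
    bool yOverlap = a.ymin <= b.ymax && b.ymin <= a.ymax;
    return xOverlap && yOverlap;
}
```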
25
Depth-sort Method
If the previous test failed, test whether all the vertices of one polygon lie on the same side of the plane defined by the other polygon, ordering the two relative to the viewing position; a sketch of the plane-side test follows.
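A minimal sketch of the plane-side test; the Plane and Vec3 types and the tolerance are assumptions:

```cpp
#include <vector>

struct Vec3  { double x, y, z; };
struct Plane { double a, b, c, d; };  // ax + by + cz + d = 0

// True if every vertex lies on one side of the plane (within a small
// tolerance), so the two polygons can be ordered without splitting.
bool allOnOneSide(const std::vector<Vec3>& verts, const Plane& p) {
    const double eps = 1e-9;
    bool anyPos = false, anyNeg = false;
    for (const Vec3& v : verts) {
        double s = p.a * v.x + p.b * v.y + p.c * v.z + p.d;
        if (s >  eps) anyPos = true;
        if (s < -eps) anyNeg = true;
    }
    return !(anyPos && anyNeg);
}
```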
26
Depth-sort Method
This still leaves two troublesome situations:
- Cyclic overlap
- Piercing polygons
27
Depth-sort Method
In such cases, we have to resort to splitting polygons.
Does this solve all visibility problems? Yes. But depth sorting is somewhat laborious, particularly if we have to split polygons.
28
Depth-buffer Method
A commonly used image-space approach.
- Compares surface depth values for each pixel position on the projection plane
- Also referred to as the z-buffer method, since object depth is usually measured along the z-axis
- Easy to implement, in either hardware or software
29
Depth-buffer Method
30
Depth-buffer Method
Other than the colour buffer, this requires another buffer (as the name already implies) to store depth information: the depth buffer.
- Has the same resolution as the frame buffer and stores floating-point numbers
- Typically carried out in normalised coordinates, so depth values range from 0.0 (near clipping plane) to 1.0 (far clipping plane)
31
Depth-buffer Method
Algorithm:
1. Initialise the depth buffer and frame buffer for all positions:
   depthBuff(x, y) = 1.0
   frameBuff(x, y) = backgroundColour
2. For each polygon in the scene (one at a time, in any order):
   For each projected (x, y) pixel position of the polygon, calculate the depth z (if not already known).
   If z < depthBuff(x, y), compute the surface colour at that position and set:
   depthBuff(x, y) = z
   frameBuff(x, y) = surfColour(x, y)
32
Depth-buffer Method
33
Depth-buffer Method
- Polygons can be processed one at a time, in any order, and each polygon is processed one scan line at a time
- Compatible with the pipeline architecture, as it is performed polygon by polygon
- Does this solve all visibility problems? Yes
- The extra buffer memory used to be a problem when memory was expensive
34
A-Buffer Method
An extension of the idea behind the depth buffer, and can be implemented using similar methods.
The drawback of the depth-buffer method is that it identifies only one visible surface at each pixel position, so it can only deal with opaque surfaces (i.e. no transparency).
35
A-Buffer Method
Each scan line is processed to determine how much of each surface covers each pixel position across the individual scan lines. But more surface information is stored, e.g.:
- RGB colour intensities
- Opacity parameter (% of transparency)
- Depth
- % of area coverage
- Surface identifier, etc.
36
A-Buffer Method
The pixel colour is computed as a combination of different surface colours for transparency or antialiasing effects, using information like the opacity factors: the colour at each pixel is calculated as an average of the contributions from overlapping surfaces. Obviously more computation is required compared to the depth-buffer test. The accumulation buffer implemented in rendering packages is a simplification of the A-buffer.
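A possible sketch of resolving one pixel from its stored fragments, using simple front-to-back compositing with the opacity factors (an illustrative assumption; real A-buffer implementations vary):

```cpp
#include <algorithm>
#include <vector>

struct Fragment {
    double r, g, b;   // surface colour
    double opacity;   // 1.0 = fully opaque
    double depth;     // smaller = nearer the view plane
};

// Sort fragments front to back and accumulate weighted contributions
// until the pixel is effectively opaque.
void resolvePixel(std::vector<Fragment> frags, double out[3]) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });
    double acc = 0.0;
    out[0] = out[1] = out[2] = 0.0;
    for (const Fragment& f : frags) {
        double w = f.opacity * (1.0 - acc);  // this surface's contribution
        out[0] += w * f.r; out[1] += w * f.g; out[2] += w * f.b;
        acc += w;
        if (acc >= 0.999) break;             // effectively opaque, stop
    }
}
```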
37
Scan-line Method
An image-space approach that identifies visible surfaces by computing and comparing depth values along scan lines. Although it has elements in common with the z-buffer algorithm, it is fundamentally different: it works one scan line at a time, and all polygon surface projections intersecting that line are examined (instead of one polygon at a time). Before the cost of memory fell, this was the most efficient visibility-determination approach, since no depth buffer is required.
38
Scan-line Method
Basic idea of the approach:
- As each scan line is processed, all polygon surface projections intersecting the line are examined
- At each pixel position on the scan line, depth calculations are performed to determine which surface is nearest to the view plane
- When the visible surface has been determined for a pixel, the surface colour for that position is entered into the frame buffer
39
Scan-line Method
Unlike the depth-buffer method, there is no over-rendering: each pixel is only drawn once.
40
Area-subdivision Method
This hidden-surface-removal technique is essentially an image-space method, but object-space operations can be used to accomplish depth ordering of surfaces. It takes advantage of area coherence: small areas of an image are likely to be covered by only a single polygon.
41
Area-subdivision Method
Applied by successively dividing the total view plane into smaller and smaller rectangles until each rectangle area contains either:
- The projection of part of a single visible surface
- No surface projections
- Or the area has been reduced to the size of a pixel
Starting with the total view, apply tests to determine whether the area should be subdivided.
42
Area-subdivision Method
The basic principle: if the area is sufficiently complex, then subdivide. An easy approach is to successively divide the area into four equal parts at each step (the same as a quadtree); a 1024 x 1024 viewing area could be subdivided ten times before a subarea is reduced to the size of a single pixel. A sketch of this recursive subdivision follows.
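A minimal sketch of the subdivision loop, with placeholder stand-ins for the stopping tests (the real tests are the three conditions on a later slide) and for the final fill:

```cpp
#include <cstdio>

struct Area { int x, y, w, h; };

// Placeholder rule standing in for the real stopping conditions.
bool isSimple(const Area& a) { return a.w * a.h <= 16; }

// Placeholder for filling the area with the single visible surface.
void renderArea(const Area& a) {
    std::printf("fill %dx%d at (%d,%d)\n", a.w, a.h, a.x, a.y);
}

// Recursive area subdivision into four quadrants (quadtree order),
// stopping when the area is simple enough or one pixel in size.
void subdivide(const Area& a) {
    if (isSimple(a) || (a.w <= 1 && a.h <= 1)) { renderArea(a); return; }
    int hw = a.w / 2, hh = a.h / 2;
    subdivide({a.x,      a.y,      hw,       hh});
    subdivide({a.x + hw, a.y,      a.w - hw, hh});
    subdivide({a.x,      a.y + hh, hw,       a.h - hh});
    subdivide({a.x + hw, a.y + hh, a.w - hw, a.h - hh});
}
```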
43
Area-subdivision Method
Four surface classifications:
- Surrounding surface: surface completely encloses the area
- Overlapping surface: surface partly inside and partly outside the area
- Inside surface: surface completely inside the area
- Outside surface: surface completely outside the area
44
Area-subdivision Method
Four surface classifications (cont.)
45
Area-subdivision Method
No further subdivision is required for a specified area if one of the following conditions is true:
- Condition 1: the area has no inside, overlapping, or surrounding surfaces (i.e. all surfaces are outside the area)
- Condition 2: the area has only one inside, overlapping, or surrounding surface
- Condition 3: the area has one surrounding surface that obscures all other surfaces within the area boundaries
46
Area-subdivision Method
47
Ray-casting Method
Consider the line of sight from a pixel position on the view plane through the scene, and determine which objects in the scene (if any) intersect this line. After all ray-surface intersection calculations are performed, the visible surface is the one whose intersection point is closest to the view plane.
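A minimal sketch of the per-pixel closest-intersection search; the Ray and Surface types and the intersect interface are assumptions for the example:

```cpp
#include <limits>
#include <vector>

struct Ray { /* origin and direction through one pixel, omitted */ };

struct Surface {
    // Stub interface: returns the parametric distance t of the nearest
    // hit along the ray, or a negative value if the ray misses.
    double intersect(const Ray&) const { return -1.0; }
};

// The visible surface for this pixel is the one whose intersection
// point is closest to the view plane (smallest positive t).
const Surface* visibleSurface(const Ray& ray, const std::vector<Surface>& scene) {
    const Surface* nearest = nullptr;
    double tNearest = std::numeric_limits<double>::infinity();
    for (const Surface& s : scene) {
        double t = s.intersect(ray);
        if (t > 0.0 && t < tNearest) { tNearest = t; nearest = &s; }
    }
    return nearest;  // nullptr if the ray hits nothing
}
```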
48
Bounding Volumes
Used to accelerate view-frustum culling. The basic idea:
- Put objects in a bounding volume (typically either a bounding box or a bounding sphere, as these are the easiest to test)
- Check whether the bounding volume is within the viewing frustum
- If not, eliminate or cull the entire object from processing
A sketch of a bounding-sphere version of this test follows.
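A minimal sketch of a bounding-sphere test against six frustum planes; the types and the inward-pointing, normalised plane convention are assumptions:

```cpp
struct Vec3   { double x, y, z; };
struct Plane  { double a, b, c, d; };  // normalised normal, pointing into the frustum
struct Sphere { Vec3 center; double radius; };

// Signed distance from a point to a plane whose normal points inward.
double signedDist(const Plane& p, const Vec3& v) {
    return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
}

// If the bounding sphere lies entirely on the outside of any one plane,
// the whole object can be skipped; otherwise it is possibly visible.
bool outsideFrustum(const Sphere& s, const Plane frustum[6]) {
    for (int i = 0; i < 6; ++i)
        if (signedDist(frustum[i], s.center) < -s.radius)
            return true;   // entirely behind this plane: cull
    return false;          // possibly visible: process the object
}
```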
49
Bounding Volumes
If the entire bounding volume lies outside the viewing frustum, the object is not visible and can be culled.
50
Thank you.