Simplification
Jarek Rossignac, GVU Center, Georgia Institute of Technology
[Title figure: low LOD, 70,000 triangles]

This course covers the principles and specific techniques for compressing digital representations of 3D shapes. The importance of these techniques is rapidly increasing with the complexity of 3D databases, with their popularity in many application areas, and with the growing need to access these models over the Internet. Although the early techniques covered in this course already exhibit close to two orders of magnitude compression ratios over traditional representations of 3D shapes, research in the area is progressing rapidly.

This lecture will provide the motivation for 3D compression in the context of multi-resolution rendering of remote 3D databases. After defining the compression problem at a broad level, we will focus on a specific incarnation: the bit-efficient coding of the triangle/vertex incidence graph for triangle meshes that approximate manifolds with boundaries. We argue that this fundamental tool lies at the heart of most compression and progressive transmission approaches. We provide a unifying overview of previously reported work in this area that should help the reader understand the benefits and limitations of the various approaches presented in detail in the remainder of the course. We also present a new, simple scheme for the loss-less compression of the incidence graphs of triangle meshes, which requires only 2 bits per triangle for simple topologies.
Loss-less or lossy compression?
Loss-less compression
- Quantize parameters (coordinates) based on application needs
  - Finite-precision measurements, design, computation
  - Limited accuracy needs in some applications
- Encode the quantized locations and the exact incidence

Lossy compression
- Encode approximations of the surface using a different representation
- How to measure the error to ensure that the tolerance is not exceeded?

[Diagram: quantize vertices (lossy) → binary format → compress → decompress (loss-less)]

We will primarily focus on loss-less compression, so that the decompressed model is identical to the one used as input for compression. Note however that in many cases a lossy preprocessing step may significantly reduce the bit count while preserving the desired accuracy. Some models may be inaccurate by construction (input discretization, numeric round-off errors, design tolerances). The application requirements may also be loose. For example, a tenth of an inch of accuracy for a small part in a car engine may be sufficient for many graphic applications; thus the coordinates of its vertices can be normalized, cast to integers, and truncated to their 6 most significant bits. The normalization is simply a change of coordinate system that maps the range of points with coordinates between 0 and 2^6 to the smallest axis-aligned box around the part. Lossy compression will also be discussed in this course. Essentially, it is based on the substitution of an alternate representation (different faces or even surface types). The major difficulty with lossy compression is the measurement of the error between the original shape and the one resulting from the decompression process.
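As an illustration of the quantization step described in the notes above, here is a minimal sketch (Python, not part of the original course material) that normalizes vertex coordinates to the model's axis-aligned bounding box and keeps only a few most significant bits per coordinate. Function and parameter names are illustrative.

```python
import numpy as np

def quantize_vertices(vertices, bits=6):
    """Normalize vertex coordinates to the axis-aligned bounding box and snap
    them to a (2^bits)^3 integer grid -- the lossy preprocessing step."""
    v = np.asarray(vertices, dtype=np.float64)
    lo, hi = v.min(axis=0), v.max(axis=0)
    extent = np.where(hi > lo, hi - lo, 1.0)     # guard against a flat box
    levels = (1 << bits) - 1                     # integer range 0 .. 2^bits - 1
    q = np.round((v - lo) / extent * levels).astype(np.int64)
    return q, lo, extent / levels                # ints + data needed to de-quantize

def dequantize_vertices(q, lo, step):
    """Map the quantized integers back to model space."""
    return lo + q * step
```

With bits=6 each coordinate costs 6 bits instead of 32, at the price of a quantization error of at most half a grid step per axis.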
Triangle count reduction techniques (LOD)
- Quantize & cluster vertex data (Rossignac & Borrel '92)
  - Remove degenerate triangles (that have coincident vertices)
  - Adapted by P. Lindstrom for out-of-core simplification
- Repeatedly collapse the best edge (Ronfard & Rossignac '96)
  - While minimizing a maximum error bound
  - Adapted by M. Garland for least-squares error
Vertex clustering (Rossignac-Borrel)
- Subdivide the box around the object into a grid of cells
- Coalesce the vertices in each cell into one "attractor"
- Remove degenerate triangles (more than one vertex in the same cell)
  - Not needed for a dangling edge or vertex
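A minimal sketch of grid-based vertex clustering in this spirit, assuming an indexed triangle list as input. Here the "attractor" is simply the centroid of the clustered vertices (the original method can pick a more careful representative), and degenerate triangles are discarded rather than kept as dangling edges or points; names and the grid_res parameter are illustrative.

```python
import numpy as np

def cluster_vertices(vertices, triangles, grid_res=16):
    """Uniform-grid vertex clustering: vertices falling in the same cell are
    coalesced into one representative; triangles with two or three vertices in
    the same cell become degenerate and are dropped."""
    v = np.asarray(vertices, dtype=np.float64)
    lo, hi = v.min(axis=0), v.max(axis=0)
    extent = np.where(hi > lo, hi - lo, 1.0)
    cell = np.minimum(((v - lo) / extent * grid_res).astype(np.int64), grid_res - 1)

    # Map each occupied cell to a new vertex index.
    cell_ids, remap = {}, np.empty(len(v), dtype=np.int64)
    for i, c in enumerate(map(tuple, cell)):
        remap[i] = cell_ids.setdefault(c, len(cell_ids))

    # One "attractor" per cell: the centroid of its clustered vertices.
    new_v = np.zeros((len(cell_ids), 3))
    counts = np.zeros(len(cell_ids))
    np.add.at(new_v, remap, v)
    np.add.at(counts, remap, 1.0)
    new_v /= counts[:, None]

    # Keep only the triangles whose three vertices land in three distinct cells.
    t = remap[np.asarray(triangles, dtype=np.int64)]
    keep = (t[:, 0] != t[:, 1]) & (t[:, 1] != t[:, 2]) & (t[:, 0] != t[:, 2])
    return new_v, t[keep]
```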
Rossignac & Borrel '93 (figures)
Improving on Vertex Clustering
Advantages
- Trivial to implement, fast
- Works on any mesh or triangle soup
- Guaranteed Hausdorff error, bounded by the diagonal of a cell
- Reduces topology: removes holes, never creates one; merges connected shells (components), never splits them

Drawbacks
- Produces sub-optimal results: too much error for a given triangle-count reduction
- Prevents the merging of distant vertices on flat portions of the surface

Fix: limit vertex moves by the resulting error (not a fixed grid)
Simplification through edge collapse
As noted earlier, an upgrade that we wish to compress may represent an entire mesh or a small feature. In the context of the progressive transmission of feature upgrades, a feature may be defined as a connected component of the modified parts of the geometry of the object. Consider, for example, the most popular simplification step: the edge collapse, which selects one edge at a time and collapses it, removing two triangles and stretching the adjacent triangles to cover the empty space thus created. Note that the result of a sequence of edge-collapse operations (bottom right) is independent of the order in which the operations were carried out. One can identify (bottom center) all the edges in the original model (bottom left) that should be collapsed during a simplification process that transforms one level of detail into a subsequent, lower level. The star of these edges (the triangles incident upon them) defines the part of the geometry that is altered by the simplification process. The connected components of this area are good candidates for features. A finer decomposition may be defined as the connected components of the difference between the mesh and the triangles and edges that have not been affected by the simplification process.
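A minimal sketch of the edge-collapse operation described above, assuming an indexed triangle list: every reference to vertex b is redirected to vertex a, vertex a is optionally moved (for example to the edge midpoint), and the triangles that become degenerate (normally the two sharing the edge) are dropped. The function name and signature are illustrative.

```python
def collapse_edge(vertices, triangles, a, b, new_position=None):
    """Collapse edge (a, b): redirect references to b onto a, optionally move a,
    and drop the triangles that now reference the same vertex twice."""
    if new_position is not None:
        vertices[a] = new_position
    new_triangles = []
    for t in triangles:
        t = [a if v == b else v for v in t]
        if len(set(t)) == 3:              # keep only non-degenerate triangles
            new_triangles.append(t)
    return vertices, new_triangles
```

For a midpoint collapse one would pass, for instance, new_position = (vertices[a] + vertices[b]) / 2 when the vertices are stored in a NumPy array.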
How to decide which edges to collapse?
- Minimize the error between the original and the resulting LOD
  - How to compute/estimate the error?
  - Performance
- Geometric proximity: clustering of vertices (pessimistic)
  - Rossignac & Borrel: quantizing the vertices identifies the candidate edges
  - The error is bounded by the quantization error
  - Fast, easy, robust, but sub-optimal results
- Collapse edges
  - Longer edges in almost planar regions first
  - Estimate the error as the max distance to the supporting planes (Ronfard & Rossignac)
    - Must keep a list of all the planes supporting triangles incident on the contracted edges
  - Use a sum of squares instead of the max (Heckbert & Garland): faster, no bound
    - L2 norm; only requires adding 4x4 matrices when clusters are merged
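The "repeatedly collapse the best edge" strategy can be driven by a priority queue of candidate edges keyed by estimated error. The sketch below is a hedged illustration, not the authors' implementation: edge_error stands in for either metric above, and collapse_edge can be the sketch from the previous slide. It rescans the triangle list for simplicity; a real implementation would maintain adjacency incrementally.

```python
import heapq

def edges_of(triangles):
    """All unique undirected edges of an indexed triangle list."""
    return {tuple(sorted((t[i], t[(i + 1) % 3]))) for t in triangles for i in range(3)}

def simplify(vertices, triangles, edge_error, collapse_edge, target_triangles):
    """Greedily collapse the candidate edge with the smallest estimated error
    until the triangle budget is met."""
    heap = [(edge_error(vertices, triangles, e), e) for e in edges_of(triangles)]
    heapq.heapify(heap)
    while len(triangles) > target_triangles and heap:
        _, (a, b) = heapq.heappop(heap)
        if (a, b) not in edges_of(triangles):      # stale entry: edge already gone
            continue
        vertices, triangles = collapse_edge(vertices, triangles, a, b)
        for e in edges_of(triangles):              # re-score the edges around a
            if a in e:
                heapq.heappush(heap, (edge_error(vertices, triangles, e), e))
    return vertices, triangles
```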
Distance and quadratic error
Point-plane distance
- Point P = (x, y, z); plane containing point Qm, with unit normal Nm
- Distance: |QmP · Nm|, where QmP is the vector from Qm to P
- Can compute the max (conservative, Ronfard & Rossignac) or the sum (cheap, Heckbert & Garland) of (QmP · Nm)² over the planes of all the triangles Tm incident upon the vertices merged at P

Distance squared:
(QmP · Nm)² = am x² + bm y² + cm z² + dm xy + em yz + fm zx + gm x + hm y + im z + jm

Sum of distances squared:
(QmP · Nm)² + (QnP · Nn)² = (am+an) x² + (bm+bn) y² + (cm+cn) z² + (dm+dn) xy + (em+en) yz + (fm+fn) zx + (gm+gn) x + (hm+hn) y + (im+in) z + (jm+jn)

As vertices are merged recursively:
- With the max, you need to remember all the planes
- With the sum, you just add the coefficients
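The sum-of-squares variant can be accumulated as one symmetric 4x4 matrix per vertex (a rank-one matrix per supporting plane, added when clusters merge), which is equivalent to adding the coefficients am..jm above. A minimal sketch with illustrative names:

```python
import numpy as np

def plane_quadric(q, n):
    """Rank-one 4x4 quadric for the plane through point q with unit normal n.
    For homogeneous p = (x, y, z, 1), p^T K p is the squared point-plane distance."""
    n = np.asarray(n, dtype=np.float64)
    d = -np.dot(n, np.asarray(q, dtype=np.float64))   # plane equation: n·x + d = 0
    plane = np.append(n, d)                           # coefficients (a, b, c, d)
    return np.outer(plane, plane)

def quadric_error(K, p):
    """Sum of squared distances from point p to every plane accumulated in K."""
    ph = np.append(np.asarray(p, dtype=np.float64), 1.0)
    return float(ph @ K @ ph)

# Merging two vertices under the "sum" metric only requires adding their
# quadrics: K_merged = K_a + K_b.  The "max" metric would instead require
# keeping the full list of supporting planes.
```

For example, quadric_error(plane_quadric(Qm, Nm) + plane_quadric(Qn, Nn), P) evaluates (QmP · Nm)² + (QnP · Nn)², matching the expansion above.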
Ronfard & Rossignac, EG '96 (figures)
Shape complexity: optimal bit allocation in 3D compression
King & Rossignac, Computational Geometry: Theory & Applications '99
- Approximate E_T by K/T
  - Assumes a uniform error distribution (all edge collapses increase E_T)
  - Assumes smooth shapes with no features smaller than the tessellation
- Use the integral of curvature to estimate K
  - The K estimate is computed efficiently using a sphere fit for each edge
  - The formula is derived for objects made of relatively large spherical caps
  - Yields a crude estimate for doubly curved surfaces (saddle points...)
[Plot: error E_T versus triangle count T, with the K/T fit]
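A tiny sketch of how the E_T ≈ K/T model can be used once K is available. Instead of the curvature-integral/sphere-fit estimate described on the slide, this example simply calibrates K from one measured LOD, an assumption made here for brevity.

```python
def estimate_K(triangle_count, measured_error):
    """Fit the single constant of the model E_T ~ K / T from one measured LOD.
    (Assumption: here K is calibrated from a measurement rather than estimated
    from the integral of curvature as in King & Rossignac.)"""
    return measured_error * triangle_count

def triangles_for_error(K, target_error):
    """Invert E_T ~ K / T: the triangle budget predicted to reach the target error."""
    return int(round(K / target_error))

# Example: an LOD with 5,000 triangles measured at error 0.02 gives K = 100,
# so reaching error 0.005 is predicted to require about 20,000 triangles.
```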