Scale-Space Representation for Matching of 3D Models Dmitriy Bespalov Department of Computer Science College of Engineering Drexel University 3141 Chestnut Street Philadelphia, PA 19104
Overview of the Thesis Introduction Approach Experimental results Overview of feature decomposition Variations of feature decomposition Matching Experimental results Conclusions, contributions and future work
Related Work [Figure: a query model is matched against a database to return similar models]
Goals of this Work Feature extraction technique Models in polyhedral representation Using local information Tolerance to noise
CAD vs. Shape Representation CAD representation: topologically and geometrically consistent; implicit surfaces, analytic surfaces, NURBS, etc. Shape representation: approximate and error-prone; mesh or point cloud; can be produced with laser scanners. Converting from a shape representation to a CAD representation is hard; converting from CAD to a shape representation is easy.
What is a Scale-Space Representation? Commonly used for coarse-to-fine representations of an object; very popular in computer vision. Constructed via spatial filters: Gaussian pyramids, wavelets, etc. Basic idea: at each scale, topologically relevant components decompose the object into so-called salient parts. Recursive application of this paradigm creates the object's scale-space hierarchy.
Side Note: comparing features extracted by this technique with traditional CAD/CAM features
Algorithm Overview (I) 1. Obtain mesh representation M. 2. Define a measurement function: assign a distance measure to every pair of points or triangles.
Algorithm Overview (II) 3. Decompose M into relevant components using a singular value decomposition of the distance matrix D. Note: this creates a clustering based on the angle between the vector Ot_i and the basis vectors (c_k, c_{k-1}).
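The SVD-based bisection step can be sketched as follows. This is a minimal stand-in, assuming a precomputed pairwise distance matrix, and it splits by the sign of the second left singular vector, a common simplification of the angle-based criterion on the slide:

```python
import numpy as np

def svd_bisect(D):
    """Split the items behind distance matrix D into two clusters.

    Minimal spectral bisection: take the SVD of D and split by the
    sign of the second left singular vector (a simple stand-in for
    the angle-based clustering described in the thesis).
    """
    U, s, Vt = np.linalg.svd(D)
    return U[:, 1] >= 0   # boolean cluster labels, one per row of D
```

On a distance matrix with two well-separated groups, the components of the second singular vector carry opposite signs for the two groups, so the sign test recovers the split.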
Algorithm Overview (III) Recursive feature decomposition using two principal components creates binary feature trees. Use leaf nodes as features.
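The recursive construction can be sketched generically; `bisect` and `stop` are hypothetical callbacks standing in for the SVD-based split and the bisection-quality test described later:

```python
def feature_tree(items, bisect, stop):
    """Recursively bisect `items` into a binary feature tree.

    `bisect(items) -> (left, right)` performs one split;
    `stop(items) -> bool` is the quality test that ends recursion.
    Leaves of the returned nested-tuple tree are the features.
    """
    if stop(items):
        return items                      # leaf: one extracted feature
    left, right = bisect(items)
    return (feature_tree(left, bisect, stop),
            feature_tree(right, bisect, stop))
```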
Algorithmic Complexity Bisection process: the SVD takes O(n³). The polyhedral representation forms a 3D lattice; if only neighboring vertices are used to construct the distance matrix, the matrix is sparse and the SVD is faster, taking O(n²).
Variations of Feature Decomposition (I) Use various distance measures to tune the nature of the extracted features. Global distance functions (global distance, geodesic distance) yield global feature extraction; local distance functions (angular shortest path, max-angle on angular shortest path) yield local feature extraction.
Variations of Feature Decomposition (II) Geodesic Distance Function
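A common way to approximate geodesic distance on a mesh is Dijkstra's algorithm over the edge graph; a minimal sketch, assuming edges are given as (u, v, length) triples:

```python
import heapq

def geodesic_distances(n_vertices, edges, source):
    """Approximate geodesic distance from `source` to every vertex by
    running Dijkstra over the mesh edge graph (edge weight = length)."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v, length in edges:
        adj[u].append((v, length))
        adj[v].append((u, length))
    dist = {v: float("inf") for v in range(n_vertices)}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

On dense meshes this graph distance converges toward the true surface geodesic.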
Variations of Feature Decomposition (III) Angular Shortest Path
Variations of Feature Decomposition (IV) Max-Angle on Angular Shortest Path
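Both angular measures can be computed with one Dijkstra-style pass over the face-adjacency graph; a sketch, assuming `adj` maps each face to its (neighbor, dihedral angle) pairs:

```python
import heapq

def angular_path_costs(adj, source):
    """Shortest paths over a face-adjacency graph where each edge
    weight is the dihedral angle between the two faces.

    Returns, per face: the sum of angles along the cheapest path
    (angular shortest path) and the largest single angle on that
    path (max-angle on angular shortest path).
    """
    INF = float("inf")
    total = {f: INF for f in adj}
    worst = {f: 0.0 for f in adj}
    total[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, f = heapq.heappop(heap)
        if d > total[f]:
            continue                      # stale heap entry
        for g, angle in adj[f]:
            if d + angle < total[g]:
                total[g] = d + angle
                worst[g] = max(worst[f], angle)
                heapq.heappush(heap, (d + angle, g))
    return total, worst
```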
Controlling Feature Decomposition Which "feature" is better? The decomposition process must be controlled to obtain the desired features rather than arbitrary ones. [Figure: examples of desired vs. arbitrary decompositions]
Controlling Feature Decomposition: When to Stop? Depends on the distance measure used At each step of decomposition: decide whether to stop Assign “quality” measure to each bisection
Controlling Global Feature Decomposition f measures the "quality" of a bisection. Assume decomposition of M1 into M2 and M3; bisect M1 into M2 and M3 if f(M1) < 0.5.
Controlling Local Feature Decomposition The "quality" of a bisection is angle-based. Assume decomposition of M1 into M2 and M3; bisect M1 into M2 and M3 if the angular distance between components M2 and M3 is large, i.e., the angle across the border of the cluster is maximal on the path between most pairs of faces in M2 and M3.
Matching for Feature Extraction [Diagram: features are extracted from two models, producing a set of features for each] How do we match the extracted features?
Matching for Global Feature Extraction Decomposition trees are nearly balanced. Compare decomposition trees (bottom-up dynamic programming) using sub-tree edit distances. Calculate model similarity based on the overall similarity of matched components.
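A simplified bottom-up comparison in this spirit (not the thesis's exact sub-tree edit distance) might look like this; trees are nested 2-tuples with feature leaves, and `leaf_sim` is an assumed feature-similarity callback:

```python
def tree_similarity(a, b, leaf_sim):
    """Bottom-up similarity in [0, 1] between two binary
    decomposition trees.

    A tree is either a leaf (any non-tuple value) or a 2-tuple of
    subtrees. `leaf_sim(x, y) -> [0, 1]` compares two features.
    Two internal nodes score as the best pairing of their children,
    computed recursively from the leaves up.
    """
    a_leaf, b_leaf = not isinstance(a, tuple), not isinstance(b, tuple)
    if a_leaf and b_leaf:
        return leaf_sim(a, b)
    if a_leaf or b_leaf:
        return 0.0                        # structural mismatch
    straight = (tree_similarity(a[0], b[0], leaf_sim)
                + tree_similarity(a[1], b[1], leaf_sim))
    crossed = (tree_similarity(a[0], b[1], leaf_sim)
               + tree_similarity(a[1], b[0], leaf_sim))
    return max(straight, crossed) / 2.0
```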
Matching for Local Feature Extraction Decomposition trees cannot be used. Compare feature graphs (leaves of decomposition trees). Sub-graph isomorphism is used to assess similarity: a hill-climbing algorithm with random restarts.
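The hill-climbing matcher with random restarts can be sketched as follows; the pairwise similarity `sim`, the restart and step counts, and the swap-based move set are illustrative assumptions, not the thesis's exact search:

```python
import random

def match_graphs(feats_a, feats_b, sim, restarts=10, steps=100, seed=0):
    """Approximate feature matching by hill climbing with random
    restarts: search over injective assignments of features in A to
    features in B, maximizing the summed pairwise similarity `sim`.
    """
    rng = random.Random(seed)
    n = min(len(feats_a), len(feats_b))

    def score(perm):
        return sum(sim(feats_a[i], feats_b[perm[i]]) for i in range(n))

    best, best_score = None, float("-inf")
    for _ in range(restarts):
        perm = rng.sample(range(len(feats_b)), n)    # random start
        cur = score(perm)
        for _ in range(steps):                       # local swap moves
            i, j = rng.randrange(n), rng.randrange(n)
            perm[i], perm[j] = perm[j], perm[i]
            new = score(perm)
            if new >= cur:
                cur = new                            # keep the swap
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo
        if cur > best_score:
            best, best_score = list(perm), cur
    return best, best_score
```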
Experimental Results for Global Feature Extraction Perform retrieval experiments LEGO dataset CAD dataset Functional classification Manufacturing classification
Retrieval Experiments k-nearest-neighbor classification (kNN); recall and precision measures; precision-against-recall graphs. Relevant models: the number of models in the same category as the query model. Retrieved models: the number of models returned by a query. Retrieved-and-relevant models: the number of models returned that also fall into the same category as the query model.
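With those definitions, precision and recall for a single query reduce to two set ratios:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    `retrieved`: ids of the models returned by the query;
    `relevant`:  ids of the models in the query model's category.
    """
    retrieved_set, relevant_set = set(retrieved), set(relevant)
    hits = len(retrieved_set & relevant_set)   # retrieved AND relevant
    precision = hits / len(retrieved_set) if retrieved_set else 0.0
    recall = hits / len(relevant_set) if relevant_set else 0.0
    return precision, recall
```

Sweeping the number of returned models k and plotting the resulting (recall, precision) pairs gives the precision-against-recall graphs used below.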
Techniques Used in Evaluation Shape distributions (SD) Shape distributions with point pair classifications (SD-Class) Reeb graph comparison (Reeb) Shape distributions with weights learning (SD-Learn) Global Feature Extraction comparison (Scale-Space)
LEGO Classification X-Shape Axles Cylindrical Parts Wheels-Gears Plates
Functional Classification Springs Screws Gears Nuts Brackets Housings Linkage arms Functional Classification
Manufacturing Classification Cast-then-machined: Prismatic Machined:
Experimental Results for Local Feature Extraction Feature decomposition on CAD data Feature decomposition on partial and scanned data Perform retrieval experiments Functional classification on CAD dataset Retrieval on partial and scanned data
Experimental Results: CAD Data
Experimental Results: Partial Data
Experimental Results: Scanned Data From Exact Representation 360° Scan Single Scan
Experimental Results: Scanned Data From Exact Representation 360° Scan Single Scan An example of one-to-many correspondence
Experimental Results: Scanned Data From Exact Representation 360° Scan Single Scan An example of one-to-one correspondence
Experimental Results: Scanned Data From Exact Representation 360° Scan Single Scan An example of many-to-many correspondence
Retrieval Using Functional Classification Techniques used: Reeb graph comparison (Reeb) Global Feature Extraction (Scale-Space) Local Feature Extraction (Local Scale-Space)
Retrieval Using Functional Classification
Retrieval on Partial and Scanned Data Construct feature graphs for CAD dataset Obtain several scanned and partial models For every scanned or partial model: Compare with every model in CAD dataset Sort by distance Return k nearest models
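The retrieval loop above is a straightforward k-nearest-neighbor query; `distance` stands in for the feature-graph comparison score between two models:

```python
def retrieve(query, database, distance, k):
    """Return the k database models nearest to the query under
    `distance` (e.g. a feature-graph matching score), nearest first."""
    ranked = sorted(database, key=lambda model: distance(query, model))
    return ranked[:k]
```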
Retrieval on Partial and Scanned Data Partial Data From Exact Representation 360° Scan Single Scan query CAD Database
Retrieval on Partial and Scanned Data Full Scan Partial Scan Partial CAD
Retrieval on Partial and Scanned Data
Models returned   Correct queries
5                 3 / 9
10                5 / 9
15                7 / 9
20                9 / 9
Summary of Experimental Results Feature extraction is acceptable; matching could be improved. Matching for global feature extraction: the comparison of feature pairs is weak, borrowed from the Reeb graph technique. Matching for local feature extraction: no comparison of feature pairs, no many-to-many matching, no handling of noise features.
Conclusions & Contributions Parameterizable feature extraction for CAD Features depend only on distance measure Applicable to partial and scanned data Query CAD database with scanned data Attempted to address matching problem
Future Work Introduce matching for feature graphs: better comparison of feature pairs, handling of many-to-many matching, identification of noise features. Develop various distance measures that resemble traditional CAD features. Approximate B-Rep from polyhedral models.
Q&A Sponsored by:
The Eckart-Young Theorem: Given an n by m matrix X of rank r ≤ m ≤ n and its singular value decomposition X = ULV', where U is an n by m matrix with U'U = Im, L is the m by m diagonal matrix of singular values, and V is an m by m matrix with V'V = VV' = Im, with the singular values arranged in decreasing order λ1 ≥ λ2 ≥ ... ≥ λm ≥ 0, then for any s ≤ r the n by m matrix B of rank s that minimizes the sum of squared errors between the elements of X and the corresponding elements of B is B = ULsV', where Ls is the m by m diagonal matrix with diagonal entries λ1 ≥ λ2 ≥ ... ≥ λs > λs+1 = λs+2 = ... = λm = 0.
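The theorem is easy to exercise numerically: truncating the SVD after the s largest singular values gives the best rank-s approximation in the least-squares sense. A minimal sketch with NumPy:

```python
import numpy as np

def best_rank_s(X, s):
    """Best rank-s least-squares approximation of X (Eckart-Young):
    keep the s largest singular values and zero out the rest."""
    U, L, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :s] * L[:s]) @ Vt[:s]
```

For example, for a diagonal-like matrix the rank-1 truncation keeps only the component belonging to the largest singular value.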