Partial Shape Matching
Outline:
–Motivation
–Sum of Squared Distances
Motivation
We have seen a large number of different shape descriptors:
–Shape Distributions
–Extended Gaussian Images
–Shape Histograms
–Gaussian EDT
–Wavelets
–Spherical Parameterizations
–Spherical Extent Functions
–Light Field Descriptors
–Shock Graphs
–Reeb Graphs
Representation Theory
Challenge
Partial shape matching problem: given a part S of a model and a whole model M, determine whether the part is a subset of the whole, S ⊂ M.
Difficulty
For whole-object matching, we would associate a shape descriptor v_M to every model M and would define the measure of similarity between models M and N as the distance between their descriptors: D(M,N) = ||v_M − v_N||
Difficulty
For partial object matching, we cannot use the same approach:
–Vector differences are symmetric, but subset matching is not.
–If S ⊂ M, then we would like to have v_S ≠ v_M (the shapes differ) and yet D(S,M) = ||v_S − v_M|| = 0, which means that we cannot use difference norms to measure similarity.
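The symmetry obstruction above can be checked numerically; the descriptor vectors below are made-up stand-ins for any fixed-length shape descriptors:

```python
import numpy as np

# Hypothetical descriptor vectors for a part S and a whole M
# (illustrative values only; any fixed-length descriptors behave the same).
v_s = np.array([1.0, 0.0, 2.0])
v_m = np.array([1.0, 3.0, 2.0])

# Difference norms are symmetric: D(S,M) == D(M,S) ...
assert np.linalg.norm(v_s - v_m) == np.linalg.norm(v_m - v_s)
# ... and a norm vanishes only when the vectors coincide, so asking for
# D(S,M) = 0 while v_s != v_m is impossible under a difference norm.
assert np.linalg.norm(v_s - v_s) == 0.0
```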
Motivation
We have seen a number of different ways of addressing the alignment problem:
–Center of Mass Normalization
–Scale Normalization
–PCA Alignment
–Translation Invariance
–Rotation Invariance
Motivation
Most of these methods will give very different descriptors if only part of the model is given:
–The center of mass, variance, and principal axes of a part of the model will not be the same as those of the whole.
Motivation
Most of these methods will give very different descriptors if only part of the model is given:
–Changing the values of a function will change the (non-constant) frequency distribution in non-trivial ways.
Outline:
–Motivation
–Sum of Squared Distances
Goal
Design a new paradigm for shape matching that associates a simple structure to each shape, M → v_M, such that if S ⊂ M, then:
–v_S ≠ v_M (unless S = M)
–but D(S,M) = 0
That is, we would like to define a measure of similarity that answers the question: “How close is S to being a subset of M?”
Key Idea
Instead of using the norm of the difference, use the dot product: D(S,M) = ⟨v_S, v_M⟩. Then S is a subset of M if v_S is orthogonal to v_M. To do this, we have to define different descriptors for a model depending on whether it is the target or the query.
Implementation
For a model M, represent the model by two different 3D functions:
–Raster_M, the rasterization, equal to 1 at the boundary points of M and 0 everywhere else
–EDT_M, the Euclidean Distance Transform, whose value at each point is the distance to the nearest boundary point of M
Implementation
Then Raster_M is non-zero only on the boundary points of the model, and EDT_M is non-zero everywhere else. Consequently we have: Raster_M(p) · EDT_M(p) = 0 at every point p, and hence: D(M,M) = ⟨Raster_M, EDT_M²⟩ = 0.
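A minimal sketch of this orthogonality, in 2D for brevity, using SciPy's `scipy.ndimage.distance_transform_edt` (which measures the distance from each cell to the nearest zero entry):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A small 2D stand-in for the 3D rasterization: Raster_M is 1 on the
# boundary cells of a solid square and 0 everywhere else.
grid = np.zeros((8, 8))
grid[2:6, 2:6] = 1                 # a solid square "model"
interior = np.zeros_like(grid)
interior[3:5, 3:5] = 1
raster_m = grid - interior         # 1 only on the boundary ring

# distance_transform_edt measures distance to the nearest zero entry,
# so feed it the complement of the boundary mask to get EDT_M.
edt_m = distance_transform_edt(1 - raster_m)

# The two functions have disjoint supports, hence a zero dot product:
assert np.dot(raster_m.ravel(), (edt_m ** 2).ravel()) == 0.0
```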
Implementation
Moreover, if S ⊂ M, then we still have: Raster_S(p) · EDT_M(p) = 0 at every point p, so that: D(S,M) = ⟨Raster_S, EDT_M²⟩ = 0.
What is the value of D(S,M)?
Since Raster_S is equal to 1 for points that lie on S and equal to 0 everywhere else: D(S,M) = ⟨Raster_S, EDT_M²⟩ = Σ_{p∈S} EDT_M²(p)
What is the value of D(S,M)?
So the distance between S and M is equal to the sum of squared distances from points on S to the nearest points on M.
What is the value of D(S,M)?
Note that if we rasterize the models into an n×n×n voxel grid, then a brute-force computation would compute the sum of the squared distances for each of the O(n²) points on the query by testing against each of the O(n²) points on the target for the minimum distance, giving a total running time of O(n⁴). By pre-computing the EDT, we reduce the computation to O(n²) operations.
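The complexity trade-off can be illustrated by comparing the two computations on a tiny grid (2D here; the function names are illustrative):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Brute force: for every query point, scan every target point.
def brute_force(points_s, points_m):
    total = 0.0
    for p in points_s:                                   # O(n^2) query points ...
        total += min(np.sum((p - q) ** 2) for q in points_m)  # ... times O(n^2) targets
    return total

# With a precomputed EDT, each query voxel is read exactly once.
def with_edt(raster_s, raster_m):
    edt_m = distance_transform_edt(1 - raster_m)
    return float(np.sum(raster_s * edt_m ** 2))

m = np.zeros((6, 6)); m[1, 1:5] = 1; m[4, 1:5] = 1
s = np.zeros((6, 6)); s[1, 1:5] = 1; s[2, 2] = 1

pm = np.argwhere(m > 0).astype(float)
ps = np.argwhere(s > 0).astype(float)
assert np.isclose(brute_force(ps, pm), with_edt(s, m))   # same answer, fewer operations
```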
Advantages
Model similarity is defined in terms of the dot product:
–We can still use SVD for efficiency and compression (since rotations do not change the dot product).
–We can still use fast correlation methods (translation, rotation, axial flip), but now we want to find the transformation minimizing the correlation.
Advantages
We can use a symmetric version of this for whole-object matching.
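The slides do not pin down the exact symmetrization; one natural choice, sketched below, is D_sym(M,N) = D(M,N) + D(N,M), which vanishes only when the two boundaries coincide:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# One-directional partial-match measure: <Raster_A, EDT_B^2>.
def d_partial(raster_a, raster_b):
    edt_b = distance_transform_edt(1 - raster_b)
    return float(np.sum(raster_a * edt_b ** 2))

# A hypothetical symmetrization for whole-object matching.
def d_symmetric(raster_m, raster_n):
    return d_partial(raster_m, raster_n) + d_partial(raster_n, raster_m)

a = np.zeros((6, 6)); a[2, 1:5] = 1   # a horizontal segment
b = np.zeros((6, 6)); b[3, 1:5] = 1   # the same segment, shifted down

assert d_symmetric(a, b) == d_symmetric(b, a)   # symmetric by construction
assert d_symmetric(a, a) == 0.0                 # zero only on a perfect match
```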
Advantages
We can perform importance matching by assigning a value larger than 1 to sub-regions of the rasterization.
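Importance weighting amounts to scaling entries of the query rasterization, so mismatches in the emphasized region cost proportionally more; the weight values below are illustrative, not prescribed by the slides:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Weighted partial-match measure: <Weights_S, EDT_M^2>, where Weights_S
# agrees with Raster_S except on emphasized sub-regions.
def weighted_distance(weights_s, raster_m):
    edt_m = distance_transform_edt(1 - raster_m)
    return float(np.sum(weights_s * edt_m ** 2))

s = np.zeros((6, 6)); s[2, 1:5] = 1   # query boundary
m = np.zeros((6, 6)); m[3, 1:5] = 1   # target boundary, shifted down

w = s.copy()
w[2, 1] = 3.0   # triple the importance of one boundary voxel

# Emphasizing a mismatched region can only increase the penalty:
assert weighted_distance(w, m) >= weighted_distance(s, m)
```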
Disadvantage
Aside from using fast Fourier / Spherical-Harmonic / Wigner-D transforms, we still have no good way to address the alignment problem.