Slide 1: Sub-sampling for Efficient Spectral Mesh Processing
Rong Liu, Varun Jain and Hao Zhang
GrUVi Lab, Simon Fraser University, Burnaby, Canada
CGI'06, Hangzhou, China, June 26, 2006

Slide 2: Roadmap
- Background
- Nyström Method
- Kernel PCA (KPCA)
- Measuring Nyström Quality using KPCA
- Sampling Schemes
- Applications
- Conclusion and Future Work

Slide 3: Roadmap (outline repeated; next: Background)

Slide 4: Spectral Applications
All of these build on an "affinity matrix" W and its eigen-decomposition:
- spectral clustering [Ng et al., 02]
- spectral mesh compression [Karni and Gotsman, 00]
- watermarking [Ohbuchi et al., 01]
- spectral mesh segmentation [Liu and Zhang, 04]
- face recognition in eigenspace [Turk, 01]
- spectral mesh correspondence [Jain and Zhang, 06]
- texture mapping using MDS [Zigelman et al., 02]

Slide 5: Spectral Embedding
Given n points (here in dimension 2), build the affinity matrix W, where entry W_ij (e.g., 0.56) encodes the affinity between points i and j. Eigen-decompose W = E \Lambda E^T; row i of E then gives the coordinates of point i in the embedding space of dimension n.
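To make the construction concrete, here is a minimal NumPy sketch (my illustration, not from the slides) that builds a Gaussian affinity matrix from a small 2D point set and reads the embedding off the rows of the eigenvector matrix; the kernel width sigma is an assumed, illustrative parameter.

```python
import numpy as np

def spectral_embedding(points, sigma=1.0):
    """Embed n points using the eigenvectors of a Gaussian affinity matrix.

    points : (n, d) array of coordinates.
    sigma  : kernel width (illustrative choice, not prescribed by the slides).
    Returns an (n, n) array whose row i holds the spectral coordinates of point i.
    """
    # Pairwise squared Euclidean distances.
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Gaussian affinities: W_ij = exp(-d_ij^2 / (2 sigma^2)).
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Full eigen-decomposition W = E Lambda E^T, the O(n^3) step the paper avoids.
    lam, E = np.linalg.eigh(W)
    order = np.argsort(lam)[::-1]      # leading eigenvectors first
    return E[:, order]

pts = np.random.rand(5, 2)             # n = 5 points in dimension 2
print(spectral_embedding(pts).shape)   # (5, 5): embedding space of dimension n
```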

Slide 6: Bottlenecks
Computation of W costs O(n^2): apply sub-sampling to compute only a partial W.
Eigenvalue decomposition of W costs O(n^3): apply the Nyström method to approximate the eigenvectors of W.
Key question: how to sample so that Nyström works better?

Slide 7: Roadmap (outline repeated; next: Nyström Method)

Slide 8: Sub-sampling
Compute partial affinities: split the n points as Z = X ∪ Y, where X holds the l sample points. Compute only the affinities within X and the affinities between X and Y, i.e. only the first l rows of W. Complexity drops from O(n^2) to O(l·n).
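As a sketch of this step (again my own illustration; sample_idx and sigma are assumed names/parameters), only the l sampled rows of W are ever formed:

```python
import numpy as np

def partial_affinities(points, sample_idx, sigma=1.0):
    """Compute only the sampled rows of the Gaussian affinity matrix.

    Returns A (l x l): affinities within the sample set X, and
            B (l x m): affinities between X and the remaining points Y,
    at O(l*n) cost instead of forming the full O(n^2) matrix.
    """
    sample_idx = np.asarray(sample_idx)
    rest_idx = np.setdiff1d(np.arange(len(points)), sample_idx)
    X, Y = points[sample_idx], points[rest_idx]

    def gauss(P, Q):
        d2 = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    return gauss(X, X), gauss(X, Y)    # blocks A and B
```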

Slide 9: Nyström Method [Williams and Seeger, 2001]
Approximate eigenvectors: write W in block form,
W = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix}, with A = U \Lambda U^T.
The approximate eigenvectors of W are then
\bar{U} = \begin{pmatrix} U \\ B^T U \Lambda^{-1} \end{pmatrix}.
Complexity drops from O(n^3) to O(l^2·n).
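A minimal sketch of this extension formula, reusing the A and B blocks from the previous sketch (variable names are mine; the code assumes A is well-conditioned, since near-zero eigenvalues would need regularization in practice):

```python
import numpy as np

def nystrom_eigenvectors(A, B):
    """Nystrom approximation of the eigenvectors of W = [[A, B], [B^T, C]].

    A : (l, l) affinities among the samples; B : (l, m) cross affinities.
    Returns U_bar (n, l) of approximate eigenvectors and lam (l,) eigenvalues,
    at O(l^2 * n) cost instead of the O(n^3) full decomposition.
    """
    lam, U = np.linalg.eigh(A)               # A = U Lambda U^T
    order = np.argsort(lam)[::-1]            # leading eigenpairs first
    lam, U = lam[order], U[:, order]
    # Extend to the unsampled points: U_bar = [ U ; B^T U Lambda^{-1} ].
    U_bar = np.vstack([U, B.T @ U / lam])
    return U_bar, lam
```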

Slide 10: Schur Complement
The Nyström approximation implicitly reconstructs
\bar{W} = \bar{U} \Lambda \bar{U}^T = \begin{pmatrix} A & B \\ B^T & B^T A^{-1} B \end{pmatrix},
whereas the true matrix is W = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix}.
The approximation error is therefore measured by the Schur complement,
SC = \| C - B^T A^{-1} B \|_F.
Practically, SC is not useful for measuring the quality of a sample set.
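For completeness, a tiny sketch of this quantity; note that it needs the full block C, which is exactly what sub-sampling avoids computing, so it only serves as an offline check on small problems:

```python
import numpy as np

def schur_complement_error(A, B, C):
    """Frobenius norm of the Nystrom error on the unsampled block:
    SC = || C - B^T A^{-1} B ||_F."""
    recon = B.T @ np.linalg.solve(A, B)   # B^T A^{-1} B without forming A^{-1}
    return np.linalg.norm(C - recon, ord="fro")
```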

Slide 11: Roadmap (outline repeated; next: Kernel PCA)

Slide 12: PCA and KPCA [Schölkopf et al., 1998]
PCA diagonalizes the covariance matrix C of the data X (here in dimension 2). Kernel PCA maps X into a high-dimensional (possibly infinite-dimensional) feature space via φ and performs PCA on the covariance matrix of φ(X). The mapped data φ(X) is defined only implicitly through a kernel matrix K, where K_ij = ⟨φ(x_i), φ(x_j)⟩.

Slide 13: Training Set for KPCA
Partition the kernel matrix according to a training set:
K = \begin{pmatrix} L & M \\ M^T & N \end{pmatrix}, with L = E \Lambda E^T.
The KPCA coordinates of all points, computed from the training block alone, are
\bar{E} = \begin{pmatrix} E \\ M^T E \Lambda^{-1} \end{pmatrix} \cdot \Lambda^{-1/2}.

Slide 14: Nyström Method and KPCA
Nyström: W = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix}, A = U \Lambda U^T, \bar{U} = \begin{pmatrix} U \\ B^T U \Lambda^{-1} \end{pmatrix}.
KPCA with a training set: K = \begin{pmatrix} L & M \\ M^T & N \end{pmatrix}, L = E \Lambda E^T, \bar{E} = \begin{pmatrix} E \\ M^T E \Lambda^{-1} \end{pmatrix} \cdot \Lambda^{-1/2}.
Up to the final scaling by \Lambda^{-1/2}, the two constructions are identical: the Nyström samples play the role of the KPCA training set.

Slide 15: Roadmap (outline repeated; next: Measuring Nyström Quality using KPCA)

Slide 16: When Does Nyström Work Well?
Equivalently: when does the KPCA training set work well? The training set should minimize the distance between the data points and the subspace spanned by the training points (in feature space).

Slide 17: Objective Function
The subspace-distance criterion to minimize is converted into an equivalent objective Γ to maximize, which can be evaluated from the sampled blocks of W = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} alone. (The explicit formulas appeared as figures on the slide.)

Slide 18: Comparing Γ and SC
Given two sampling sets S_1 and S_2:
1. Test data are generated from a Gaussian distribution;
2. The test is repeated 100 times;
3. The rankings produced by Γ and by SC disagree in only 4% of the trials.

Slide 19: Roadmap (outline repeated; next: Sampling Schemes)

Slide 20: How to Sample: Greedy Scheme
Greedy sampling scheme: maximize Γ by growing the sample set one point at a time; at each step the blocks A and B of W = \begin{pmatrix} A & B \\ B^T & C \end{pmatrix} grow accordingly, and the candidate that increases Γ the most is added.
Best-candidate sampling scheme: instead of scanning all remaining points, evaluate only a random subset of candidates. A subset of size 90 (≈ log(0.01)/log(0.95)) suffices to find, with probability 99%, a candidate among the best 5%, regardless of the problem size.
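A quick Monte Carlo check of that best-candidate argument (my own sketch, not from the paper): with 90 uniformly random candidates, the chance that at least one falls in the top 5% of the population is 1 - 0.95^90 ≈ 0.99, independent of the population size.

```python
import numpy as np

def best_candidate_hit_rate(n=100_000, subset=90, top_frac=0.05, trials=2_000, seed=0):
    """Estimate P(at least one of `subset` random candidates ranks in the best `top_frac`)."""
    rng = np.random.default_rng(seed)
    threshold = int(n * top_frac)            # ranks below this count as "among the best"
    hits = 0
    for _ in range(trials):
        picks = rng.choice(n, size=subset, replace=False)   # candidate ranks
        hits += int(picks.min() < threshold)
    return hits / trials

print(best_candidate_hit_rate())   # ~0.99 empirically
print(1 - 0.95 ** 90)              # ~0.9901 in closed form
```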

Slide 21: Properties of Γ
Γ takes values in (0, m), where m is the number of columns of B. Maximizing Γ leads to maximizing 1^T (A^{-1} 1), where:
1. A is symmetric;
2. the diagonal entries of A are 1;
3. the off-diagonal entries of A lie in (0, 1).
It can be shown that the maximum is attained when the columns of A are the canonical basis vectors of Euclidean space, i.e. when A is the identity.
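A small numerical illustration of that property (the example values are mine): 1^T A^{-1} 1 is largest when the off-diagonal affinities between samples are near zero, i.e. when the chosen samples barely overlap.

```python
import numpy as np

def score(A):
    """Compute 1^T A^{-1} 1 for a sample affinity block A."""
    ones = np.ones(A.shape[0])
    return ones @ np.linalg.solve(A, ones)

# Two 3x3 blocks with unit diagonal and off-diagonals in (0, 1).
A_near_identity = np.full((3, 3), 0.01) + 0.99 * np.eye(3)   # nearly canonical columns
A_correlated    = np.full((3, 3), 0.80) + 0.20 * np.eye(3)   # strongly overlapping samples

print(score(A_near_identity))   # ~2.94, close to the maximum of 3
print(score(A_correlated))      # ~1.15, far from the maximum
```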

Slide 22: How to Sample: Farthest Point Scheme
For the columns of A to be close to the canonical basis, its off-diagonal entries (the affinities between samples) should be close to zero. Since affinities decay with distance, the distance between each pair of samples should be as large as possible: the samples should be mutually farthest away from each other.
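A standard farthest-point sampling sketch (the slides give no code; the distance matrix D could hold Euclidean or geodesic distances, and the choice of the first sample is arbitrary here):

```python
import numpy as np

def farthest_point_sampling(D, l, first=0):
    """Pick l mutually far-apart samples from a pairwise distance matrix D.

    D     : (n, n) symmetric distances.
    l     : number of samples to select.
    first : index of the initial sample (assumed arbitrary).
    Each new sample maximizes its distance to the closest already-chosen sample.
    """
    samples = [first]
    min_dist = D[first].copy()        # distance of every point to the sample set
    for _ in range(l - 1):
        nxt = int(np.argmax(min_dist))
        samples.append(nxt)
        min_dist = np.minimum(min_dist, D[nxt])
    return samples
```

Note that only the rows of D belonging to chosen samples are ever read, so the full distance matrix never needs to be built and the cost stays at O(l·n).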

Slide 23: Farthest Point Sampling Scheme (illustration)

Slide 24: Roadmap (outline repeated; next: Applications)

Slide 25: Mesh Correspondence
For each mesh: M^(i) → geodesic distance matrix D^(i) → affinity matrix W^(i) → spectral embedding E \Lambda^{-1/2} → embedded mesh \hat{M}^(i), for i = 1, 2. The correspondence is then computed between the two embedded meshes.

6/26/2006CGI'06, Hangzhou China26 without sampling farthest point sampling random sampling (vertices sampled: 10, total vertices: 250)

Slide 27: The same comparison on a larger mesh (vertices sampled: 10, total vertices: 2000).

Slide 28: Correspondence error against mesh size
A series of simplified ("slimmed") meshes is matched against the original mesh. The correspondence error at a vertex is defined as the geodesic distance between the matched point and the ground-truth matching point.

Slide 29: Mesh Segmentation
Pipeline: mesh M → face distance matrix D → affinity matrix W → spectral embedding E \Lambda^{-1/2} of the faces, followed by clustering.

6/26/2006CGI'06, Hangzhou China30 (b, d) obtained using farthest point sampling (a, c) obtained using random sampling faces sampled: 10 number in brackets: value of Γ

Slide 31: Without sampling, it takes 30 s to handle a mesh with 4000 faces (2.2 GHz processor, 1 GB RAM).

Slide 32: Roadmap (outline repeated; next: Conclusion and Future Work)

Slide 33: Conclusion
- The Nyström approximation can be viewed as Kernel PCA that uses the sample points as its training data.
- The objective function Γ effectively quantifies the quality of a sample set.
- Γ leads to two sampling schemes: a greedy scheme and a farthest point scheme.
- Farthest point sampling outperforms random sampling.

Slide 34: Future Work
- Study the influence of kernel functions on the Nyström method.
- Further improve the sampling scheme.

Slide 35: Thank you! Questions?

Slide 36: Mesh Correspondence (details)
1. Given two models M^(1) and M^(2), build the geodesic distance matrices D^(1) and D^(2), where D_ij encodes the geodesic distance between vertices i and j.
2. Convert D^(1) → W^(1) and D^(2) → W^(2) using a Gaussian kernel.
3. Compute the eigenvalue decomposition of W^(1) and W^(2), and use the corresponding eigenvectors to define the spectral-embedded models \hat{M}^(1) and \hat{M}^(2). The embedding handles bending, uniform scaling and rigid-body transformation.
4. Compute the correspondence between \hat{M}^(1) and \hat{M}^(2) (see the sketch below).
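A condensed sketch of steps 2-4 under simplifying assumptions of my own: the geodesic distance matrices are taken as given, the kernel width defaults to the mean distance, nearest-neighbour matching in the embedding stands in for the actual correspondence computation, and eigenvector sign/order ambiguities (which a full implementation must resolve) are ignored.

```python
import numpy as np

def spectral_embed(D, k=6, sigma=None):
    """Steps 2-3: embed a mesh from its geodesic distance matrix D using a
    Gaussian kernel and the leading k eigenvectors scaled by Lambda^{-1/2}."""
    if sigma is None:
        sigma = D.mean()                      # illustrative default
    W = np.exp(-D ** 2 / (2.0 * sigma ** 2))  # D -> W via Gaussian kernel
    lam, E = np.linalg.eigh(W)
    order = np.argsort(lam)[::-1][:k]
    lam_k = np.clip(lam[order], 1e-12, None)  # guard against tiny/negative eigenvalues
    return E[:, order] / np.sqrt(lam_k)       # rows = embedded vertices

def correspond(D1, D2, k=6):
    """Step 4: match each vertex of mesh 1 to its nearest neighbour
    in the spectral embedding of mesh 2."""
    X1, X2 = spectral_embed(D1, k), spectral_embed(D2, k)
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.argmin(d2, axis=1)              # index into mesh 2 for each vertex of mesh 1
```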

Slide 37: Mesh Segmentation (details)
1. Given a model M, define distances between each pair of faces; store them in a matrix D.
2. Convert D → W.
3. Compute the eigenvalue decomposition of W, and use the eigenvectors to spectrally embed the faces.
4. Cluster the embedded faces with k-means; each cluster corresponds to a segment of the original model (see the sketch below).
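A minimal sketch of steps 2-4, assuming the face distance matrix D is already available; the Gaussian kernel for D → W and scipy's kmeans2 for the clustering are my assumptions, not necessarily the paper's exact choices.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_segment(D, n_segments, k=6, sigma=None):
    """Segment a mesh's faces given their pairwise distance matrix D.

    Returns one segment label per face.
    """
    if sigma is None:
        sigma = D.mean()                          # illustrative kernel width
    W = np.exp(-D ** 2 / (2.0 * sigma ** 2))      # step 2: D -> W
    lam, E = np.linalg.eigh(W)                    # step 3: eigen-decomposition
    order = np.argsort(lam)[::-1][:k]
    embedded = E[:, order]                        # rows = spectrally embedded faces
    _, labels = kmeans2(embedded, n_segments, minit="++")   # step 4: k-means clustering
    return labels
```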

Slide 38: Γ and the Schur Complement
Maximize Γ: given any two sampling sets S_1 and S_2, S_1 is superior to S_2 iff Γ(S_1) > Γ(S_2). Efficient to compute.
Minimize the Schur complement, SC = \| C - B^T A^{-1} B \|_F: S_1 is superior to S_2 iff SC(S_1) < SC(S_2). Very expensive to compute.