Manifold learning: Locally Linear Embedding
Jieping Ye
Department of Computer Science and Engineering, Arizona State University

Review: MDS
Metric scaling (classical MDS)
– assumes D is the squared-distance matrix.
Non-metric scaling
– deals with more general dissimilarity measures.
If the resulting Gram matrix is not positive semi-definite, two common fixes:
(1) Add a large constant to its diagonal.
(2) Replace it with its nearest positive semi-definite matrix by setting all negative eigenvalues to zero.
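Not part of the original slides: a minimal NumPy sketch of classical MDS under the slide's assumption that D is the squared-distance matrix. The function name classical_mds and the clipping of negative eigenvalues (fix (2) above) are illustrative choices.

```python
import numpy as np

def classical_mds(D_sq, d):
    """Classical (metric) MDS from an (N, N) squared-distance matrix."""
    N = D_sq.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N                 # centering matrix
    B = -0.5 * J @ D_sq @ J                             # doubly-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)                # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:d]                 # d largest eigenvalues
    scale = np.sqrt(np.clip(eigvals[idx], 0.0, None))   # clip negatives (non-Euclidean input)
    return eigvecs[:, idx] * scale                      # (N, d) embedding coordinates
```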

Review: Isomap
Construct the neighbourhood graph G.
For each pair of points in G, compute shortest-path distances ---- geodesic distances.
Apply classical MDS with the geodesic distances.
– Euclidean distance → geodesic distance
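A sketch of the Isomap pipeline just reviewed, reusing the classical_mds helper from the previous sketch; the k-NN graph and Dijkstra shortest paths come from scikit-learn and SciPy, and the choice of k is an assumption.

```python
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap(X, k, d):
    """Isomap: neighbourhood graph -> geodesic distances -> classical MDS."""
    # 1. Construct the k-nearest-neighbour graph, edges weighted by Euclidean distance.
    G = kneighbors_graph(X, n_neighbors=k, mode='distance')
    # 2. Shortest-path (Dijkstra) distances approximate geodesic distances.
    geo = shortest_path(G, method='D', directed=False)
    # 3. Classical MDS on the squared geodesic distances (graph must be connected).
    return classical_mds(geo ** 2, d)
```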

Review: Isomap
Isomap is still a global approach
– all pairwise distances are considered.
The resulting distance matrix is dense
– Isomap does not scale to large datasets.
– Landmark Isomap was proposed to overcome this problem.
LLE
– a local approach.
– The resulting matrix is sparse: efficient sparse matrix solvers can be applied.

Outline of lecture
Locally Linear Embedding (LLE)
– Intuition
– Least squares problem
– Eigenvalue problem
Summary

LLE
Nonlinear Dimensionality Reduction by Locally Linear Embedding
– Roweis and Saul
– Science 290 (2000): 2323–2326

Characteristics of a Manifold
[Figure: the manifold M sits in R^n; a point z on M is assigned a coordinate x in a low-dimensional space (R^2 in the figure) by a local chart.]
x: coordinate for z.
Locally the manifold is a linear patch.
Key: how to combine all local patches together?

LLE: Intuition
Assumption: the manifold is approximately “linear” when viewed locally, that is, in a small neighborhood
– the approximation error, ε(W), can be made small.
The locality of each neighborhood is enforced by the constraint W_ij = 0 if z_j is not a neighbor of z_i.
A good projection should preserve this local geometric property as much as possible.

LLE: Intuition
We expect each data point and its neighbors to lie on or close to a locally linear patch of the manifold.
Each point can then be written as a linear combination of its neighbors.
The weights are chosen to minimize the reconstruction error.
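For reference, the reconstruction error mentioned above is the standard LLE cost of Roweis and Saul, written out in LaTeX:

```latex
\varepsilon(W) \;=\; \sum_{i} \Bigl\| x_i - \sum_{j} W_{ij}\, x_j \Bigr\|^{2}
```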

LLE: Intuition
The weights that minimize the reconstruction errors are invariant to rotation, rescaling and translation of the data points.
– Invariance to translation is enforced by adding the constraint that the weights sum to one.
– The weights characterize the intrinsic geometric properties of each neighborhood.
The same weights that reconstruct the data points in D dimensions should reconstruct them on the manifold in d dimensions.
– Local geometry is preserved.

Low-dimensional embedding: use the same weights W computed in the original space. The i-th row of W holds the reconstruction weights for point i, and the embedding Y minimizes Φ(Y) = Σ_i ||y_i − Σ_j W_ij y_j||².

Locally Linear Embedding (LLE)
Assumption: the manifold is approximately “linear” when viewed locally, that is, in a small neighborhood
– the approximation error, ε(W), can be made small.
Meaning of W: a linear representation of every data point by its neighbors
– this is an intrinsic geometrical property of the manifold.
A good projection should preserve this geometric property as much as possible.

Constrained least squares problem
Compute the optimal weights for each point individually: minimize ε_i(W) = ||x_i − Σ_j W_ij x_j||² subject to Σ_j W_ij = 1, where the sum runs over the neighbors of x_i and W_ij = 0 for all non-neighbors of x_i.
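Not part of the original slides: a minimal NumPy sketch of this constrained least-squares solve for a single point, following the local-Gram-matrix solution of Roweis and Saul; the regularization term reg is an added assumption for nearly singular neighborhoods.

```python
import numpy as np

def lle_weights_for_point(x, neighbors, reg=1e-3):
    """Reconstruction weights for one point x given its k neighbors (k, D) array.

    Minimizes ||x - sum_j w_j * neighbors[j]||^2 subject to sum_j w_j = 1.
    """
    k = len(neighbors)
    Z = neighbors - x                        # shift so x sits at the origin, shape (k, D)
    C = Z @ Z.T                              # local Gram matrix, shape (k, k)
    C += reg * np.trace(C) * np.eye(k) / k   # regularize when C is singular (e.g. k > D)
    w = np.linalg.solve(C, np.ones(k))       # unnormalized solution of C w = 1
    return w / w.sum()                       # enforce the sum-to-one constraint
```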

Finding a Map to a Lower Dimensional Space
Y_i in R^d: projected vector for X_i.
Use the same weights W computed above.
The geometrical property is best preserved if the embedding error Φ(Y) = Σ_i ||Y_i − Σ_j W_ij Y_j||² is small.
Y is given by the eigenvectors corresponding to the lowest d non-zero eigenvalues of the matrix M = (I − W)ᵀ(I − W).

Eigenvalue problem
The embedding cost can be rewritten as Φ(Y) = Σ_ij M_ij (Y_i · Y_j) = tr(Yᵀ M Y), where M = (I − W)ᵀ(I − W).

A simpler approach The following is a more direct and simpler derivation for Y:

Eigenvalue problem
Add the following constraints: Σ_i Y_i = 0 and (1/N) Σ_i Y_i Y_iᵀ = I, which fix the translation and scale of the embedding.
The optimal d-dimensional embedding is given by the bottom 2nd to (d+1)-th eigenvectors of the matrix M = (I − W)ᵀ(I − W). (Note that 0 is its smallest eigenvalue, whose constant eigenvector is discarded.)
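A minimal sketch of this embedding step: build M = (I − W)ᵀ(I − W), take its eigenvectors, and discard the constant eigenvector with eigenvalue 0. A dense eigensolver is used only for clarity; as noted in the Isomap/LLE comparison, sparse solvers are preferable in practice.

```python
import numpy as np

def lle_embedding(W, d):
    """Embedding step of LLE: bottom 2nd to (d+1)-th eigenvectors of M.

    W is the (N, N) weight matrix with rows summing to one.
    Returns the (N, d) low-dimensional coordinates Y.
    """
    N = W.shape[0]
    I_minus_W = np.eye(N) - W
    M = I_minus_W.T @ I_minus_W
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, 1:d + 1]             # skip the constant eigenvector (eigenvalue 0)
```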

The LLE algorithm
1. Find the k nearest neighbors of each data point.
2. Solve the constrained least squares problem for the reconstruction weights W.
3. Compute the embedding from the bottom eigenvectors of M = (I − W)ᵀ(I − W), as in the sketch below.
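Tying the sketches above together, a minimal end-to-end LLE sketch; the neighbor search uses scikit-learn's NearestNeighbors, and lle_weights_for_point and lle_embedding are the illustrative helpers defined earlier, not the authors' reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle(X, k, d):
    """Locally Linear Embedding of data X (N, D) into d dimensions."""
    N = X.shape[0]
    # Step 1: find the k nearest neighbors of each point (excluding the point itself).
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    # Step 2: solve the constrained least-squares problem for each point's weights.
    W = np.zeros((N, N))
    for i in range(N):
        neighbors = idx[i, 1:]             # drop the point itself
        W[i, neighbors] = lle_weights_for_point(X[i], X[neighbors])
    # Step 3: embed via the bottom eigenvectors of (I - W)^T (I - W).
    return lle_embedding(W, d)
```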

Examples
Images of faces mapped into the embedding space described by the first two coordinates of LLE. Representative faces are shown next to circled points. The bottom images correspond to points along the top-right path (linked by a solid line), illustrating one particular mode of variability in pose and expression.

Some Limitations of LLE
LLE requires densely sampled data points on the manifold for good estimation.
A good neighborhood seems essential to its success.
– How to choose k?
– Too few neighbors: the estimated tangent space becomes rank deficient, leading to over-fitting.
– Too many neighbors: the tangent space will not match the local geometry well.
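One practical way to probe the choice of k mentioned above (an illustrative sketch, not from the slides): scikit-learn's LocallyLinearEmbedding exposes a reconstruction_error_ attribute that can be compared across neighborhood sizes.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500, random_state=0)
for k in (5, 10, 20, 40):
    emb = LocallyLinearEmbedding(n_neighbors=k, n_components=2)
    Y = emb.fit_transform(X)
    print(f"k={k:3d}  reconstruction error = {emb.reconstruction_error_:.3e}")
```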

Experiment on LLE

Applications
– Application of Locally Linear Embedding to the Spaces of Social Interactions and Signed Distance Functions
– Hyperspectral Image Processing Using Locally Linear Embedding
– Feature Dimension Reduction For Microarray Data Analysis Using Locally Linear Embedding
– Locally linear embedding algorithm: extensions and applications

Next class
Topics
– Laplacian Eigenmaps
Readings
– Laplacian Eigenmaps for Dimensionality Reduction and Data Representation – Belkin and Niyogi