1
Manifold Clustering with Applications to Computer Vision and Diffusion Imaging
René Vidal, Center for Imaging Science, Institute for Computational Medicine, Johns Hopkins University. Title: Segmentation of Dynamic Scenes and Textures. Abstract: Dynamic scenes are video sequences containing multiple objects moving in front of dynamic backgrounds, e.g. a bird floating on water. One can model such scenes as the output of a collection of dynamical models exhibiting discontinuous behavior both in space, due to the presence of multiple moving objects, and in time, due to the appearance and disappearance of objects. Segmentation of dynamic scenes is then equivalent to the identification of this mixture of dynamical models from the image data. Unfortunately, although the identification of a single dynamical model is a well understood problem, the identification of multiple hybrid dynamical models is not. Even in the case of static data, e.g. a set of points living in multiple subspaces, data segmentation is usually thought of as a "chicken-and-egg" problem. This is because in order to estimate a mixture of models one needs to first segment the data, and in order to segment the data one needs to know the model parameters. Therefore, static data segmentation is usually solved by alternating data clustering and model fitting using, e.g., the Expectation Maximization (EM) algorithm. Our recent work on Generalized Principal Component Analysis (GPCA) has shown that the "chicken-and-egg" dilemma can be tackled using algebraic geometric techniques. In the case of data living in a collection of (static) subspaces, one can segment the data by fitting a set of polynomials to all data points (without first clustering the data) and then differentiating these polynomials to obtain the model parameters for each group. In this talk, we will present ongoing work addressing the extension of GPCA to time-series data living in a collection of multiple moving subspaces. The approach combines classical GPCA with newly developed recursive hybrid system identification algorithms. We will also present applications of DGPCA in image/video segmentation, 3-D motion segmentation, dynamic texture segmentation, and heart motion analysis.
2
The Center for Imaging Science
Established 1998. The Center for Imaging Science (CIS) was established in July 1998 as a research center at The Johns Hopkins University, Whiting School of Engineering, under the direction of Professor Michael I. Miller. CIS is an interdepartmental center whose faculty hold their principal appointments in the Departments of Biomedical Engineering, Computer Science, Electrical and Computer Engineering, and Mathematical Sciences.
3
Research areas in CIS Computational anatomy Computational vision
Computational anatomy: measuring the thickness, volume, and shape of structures and their variation in health and disease (Barta, Miller, Ratnanather, Younes, Vidal, Winslow). Examples: analysis of the nature of cardiac shape, fiber, and laminar-structure remodeling in heart disease; analysis of the hippocampus to study neurological disorders (Alzheimer's, schizophrenia, Huntington's). Computational vision: recognizing faces, objects, materials, scenes, and actions in images and videos (Geman, Hager, Vidal).
4
Vision, Dynamics and Learning Lab @ JHU
Biomedical imaging: image segmentation, fiber tracking, diffusion MRI. Computer vision: motion segmentation, dynamic textures, action recognition, camera networks. Signal processing: compressed sensing, consensus on manifolds. Dynamical systems: hybrid system identification, kernels on dynamical systems. Machine learning: generalized PCA, manifold clustering.
5
Data segmentation and clustering
Given a set of points, separate them into multiple groups. Discriminative methods: learn the boundary between groups. Generative methods: learn a mixture model using, e.g., Expectation Maximization.
6
Dimensionality reduction and clustering
In many problems the data are high-dimensional; the dimensionality can be reduced using, e.g., Principal Component Analysis. Applications: image compression; recognition, e.g., faces (eigenfaces); image segmentation by intensity (black-white) or by texture.
7
Segmentation problems in dynamic vision
Segmentation of video and dynamic textures Segmentation of rigid-body motions
8
Segmentation problems in dynamic vision
Segmentation of rigid-body motions from dynamic textures
9
Clustering data on non-Euclidean spaces
Mixtures of linear spaces, mixtures of algebraic varieties, mixtures of Lie groups. These are "chicken-and-egg" problems: given the segmentation, estimate the models; given the models, segment the data. How to initialize? Need to combine algebra/geometry, dynamics, and statistics.
10
Talk outline Part I: Clustering Linear Manifolds
Generalized Principal Component Analysis (GPCA) (Vidal-Ma-Sastry '03, '04, '05) Part II: Clustering Nonlinear Manifolds Linear/nonlinear manifolds of Euclidean space (Goh-Vidal '07) Submanifolds of a Riemannian manifold (Goh-Vidal '08) Part III: Applications Segmentation of video shots Segmentation of rigid-body motions (Vidal-Tron-Hartley '08) Segmentation of dynamic textures (Ghoreyshi '06, Vidal '06) Segmentation of diffusion tensor images (Goh-Vidal '08) Segmentation of probability density functions (Goh-Vidal '08)
11
Principal Component Analysis (PCA)
Given a set of points x1, x2, …, xN. Geometric PCA: find a subspace S passing through them. Statistical PCA: find the projection directions that maximize the variance. Solution (Beltrami 1873, Jordan 1874, Hotelling '33, Eckart-Householder-Young '36). Applications: data compression, regression, computer vision (eigenfaces), pattern recognition, genomics. As we all know, the Principal Component Analysis problem refers to the problem of estimating a SINGLE subspace from sample data points. Although there are various ways of solving PCA, a simple solution consists of building a matrix with all the data points, computing its SVD, and then extracting a basis for the subspace from the columns of the U matrix, and the dimension of the subspace from the rank of the data matrix (the number of nonzero singular values). There is no question that PCA is one of the most popular techniques for dimensionality reduction in various engineering disciplines. In computer vision, in particular, a successful application has been found in face recognition under the name of eigenfaces.
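As a concrete illustration of the SVD-based solution described in the notes above, here is a minimal NumPy sketch (the function name and the toy data are ours, not from the talk):

```python
import numpy as np

def pca_basis(X, d):
    """Fit a d-dimensional subspace to the columns of X (one data point per column).

    Returns an orthonormal basis for the subspace (geometric PCA on centered data).
    """
    Xc = X - X.mean(axis=1, keepdims=True)         # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :d]                                # top-d left singular vectors span S

# Example: 200 noisy points near a 2-D subspace of R^5
rng = np.random.default_rng(0)
B = np.linalg.qr(rng.standard_normal((5, 2)))[0]   # ground-truth basis
X = B @ rng.standard_normal((2, 200)) + 0.01 * rng.standard_normal((5, 200))
print(pca_basis(X, 2).shape)                       # (5, 2)
```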
12
Extensions of PCA Higher order SVD (Tucker’66, Davis’02)
Independent Component Analysis (Comon '94). Probabilistic PCA (Tipping-Bishop '99): identify a subspace from noisy data; Gaussian noise: standard PCA; noise in the exponential family (Collins et al. '01). Nonlinear dimensionality reduction: multidimensional scaling (Torgerson '58); locally linear embedding (Roweis-Saul '00); Isomap (Tenenbaum '00). Nonlinear PCA (Schölkopf-Smola-Müller '98): identify a nonlinear manifold by applying PCA to the data embedded in a high-dimensional space. Principal curves and principal geodesic analysis (Hastie-Stuetzle '89, Tibshirani '92, Fletcher '04). There have been various attempts to generalize PCA in different directions. For example, Probabilistic PCA considers the case of noisy data and tries to estimate THE subspace in a maximum-likelihood sense. For the case of noise in the exponential family, Collins has shown that this can be done using convex optimization techniques. Other extensions consider the case of data lying on a manifold, the so-called nonlinear PCA or kernel PCA. This problem is usually solved by embedding the data into a higher-dimensional space and then assuming that the embedded data DOES live in a linear subspace. Of course the correct embedding to use depends on the problem at hand, and learning the embedding is currently a topic of research in the machine learning community. A third extension considers the case of identifying multiple subspaces at the same time. It is this last extension that I will talk about under the name of Generalized PCA.
13
Generalized Principal Component Analysis
Given a set of points lying in multiple subspaces, identify: the number of subspaces and their dimensions; a basis for each subspace; the segmentation of the data points. "Chicken-and-egg" problem: given the segmentation, estimate the subspaces; given the subspaces, segment the data. Given a set of data points lying on a collection of linear subspaces, without knowing which point corresponds to which subspace and without even knowing the number of subspaces, estimate the number of subspaces, a basis for each subspace, and the segmentation of the data. As stated, the GPCA problem is one of those challenging CHICKEN-AND-EGG problems in the following sense. If we knew the segmentation of the data points, and hence the number of subspaces, then we could solve the problem by applying standard PCA to each group. On the other hand, if we had a basis for each subspace, we could easily assign each point to each cluster. The main challenge is that we neither know the segmentation of the data points nor a basis for each subspace, and the question is how to estimate everything from the data only. Previous geometric approaches to GPCA have been proposed in the context of motion segmentation, and under the assumption that the subspaces do NOT intersect. Such approaches are based on first clustering the data, using for example K-means or spectral clustering techniques, and then applying standard PCA to each group. Statistical methods, on the other hand, model the data as a mixture model whose parameters are the mixing proportions and the basis for each subspace. The estimation of the mixture is a (usually non-convex) optimization problem that can be solved with various techniques. One of the most popular ones is the expectation maximization (EM) algorithm, which alternates between the clustering of the data and the estimation of the subspaces. Unfortunately, the convergence of such iterative algorithms depends on good initialization. At present, initialization is mostly done either at random, using another iterative algorithm such as K-means, or else in an ad-hoc fashion depending on the particular application. It is here where our main motivation comes into the picture. We would like to know if we can find a way of initializing iterative algorithms in an algebraic fashion. For example, if you look at the two lines in this picture, there has to be a way of estimating those two lines directly from the data in a one-shot solution, at least in the ABSENCE of noise.
14
Applications of GPCA
Geometry: vanishing points. Image compression. Segmentation: intensity (black-white), texture, motion (2-D, 3-D), video (host-guest). Recognition: faces (eigenfaces), man-woman, human gaits, dynamic textures (water-bird). Biomedical imaging. Hybrid systems identification. One of the reasons we are interested in GPCA is because there are various problems in computer vision that have to do with the simultaneous estimation of multiple models from visual data. Consider for example segmenting an image into different regions based on intensity, texture, or motion information. Consider also the recognition of various static and dynamic processes, such as human faces or human gaits, from visual data. Although in this talk I will only consider the first class of problems, it turns out that, at least from a mathematical perspective, all the above problems can be converted into the following generalization of principal component analysis, which we conveniently refer to as GPCA.
15
Prior work on subspace clustering
Iterative algorithms: K-subspaces (Kambhatla-Leen '94, Ho et al. '03); RANSAC, subspace selection and growing (Leonardis et al. '02). Probabilistic approaches: mixtures of PPCA (Tipping-Bishop '99, Gruber-Weiss '04); Multi-Stage Learning (Kanatani '04); Agglomerative Lossy Compression (Ma et al. '07). Initialization: geometric approaches, 2 planes in R3 (Shizawa-Mase '91); factorization-based approaches for independent subspaces of equal dimension (Boult-Brown '91, Costeira-Kanade '98, Gear '98, Kanatani '01); spectral-clustering-based approaches (Zelnik-Manor '03, Yan-Pollefeys '06, Fan '06, Chen-Lerman '08).
16
Basic ideas behind GPCA
Towards an analytic solution to subspace clustering: Can we estimate ALL models simultaneously using ALL data? When can we do so analytically? In closed form? Is there a formula for the number of models? Will consider the most general case: subspaces of unknown and possibly different dimensions; subspaces that may intersect arbitrarily (not only at the origin). GPCA is an algebraic-geometric approach to data segmentation: number of subspaces = degree of a polynomial; subspace basis = derivatives of a polynomial. Subspace clustering is algebraically equivalent to polynomial fitting and polynomial differentiation. We are therefore interested in finding an analytic solution to the above segmentation problem. In particular, we would like to know if we can estimate all the subspaces simultaneously from all the data without having to iterate between data clustering and subspace estimation. Furthermore, we would like to know if we can do so in an ANALYTIC fashion, and even more, we would like to understand under what conditions we can do it in closed form and whether there is a formula for the number of subspaces. It turns out that all these questions can be answered using tools from algebraic geometry. Since at first data segmentation and algebraic geometry may seem to be totally disconnected, let me first tell you what the connection is. We will model the number of subspaces as the degree of a polynomial and the subspace parameters (basis) as the roots of a polynomial, or more precisely as the factors of the polynomial. With this algebraic-geometric interpretation in mind, we will be able to prove the following facts about the GPCA problem. First, there are some geometric constraints that do not depend on the segmentation of the data; therefore, the chicken-and-egg dilemma can be resolved, since the segmentation of the data can be algebraically eliminated. Second, one can prove that the problem has a unique solution, which can be obtained in closed form if and only if the number of subspaces is less than or equal to four. And third, the solution can be computed using linear algebraic techniques. In the presence of noise, one can use the algebraic solution to GPCA to initialize any iterative algorithm, for example EM. In the case of zero-mean Gaussian noise, we will show that the E-step can be algebraically eliminated, since we will derive an objective function that depends on the motion parameters only.
17
Representing one subspace
One plane: {x : b^T x = 0}. One line: {x : c1^T x = 0, c2^T x = 0}. One subspace can be represented with a set of linear equations, i.e., a set of polynomials of degree 1.
18
Representing n subspaces
Two planes: x lies in S1 ∪ S2 if and only if (b1^T x)(b2^T x) = 0. One plane and one line: Plane: {x : b^T x = 0}; Line: {x : c1^T x = 0, c2^T x = 0}; their union is given by (b^T x)(c1^T x) = 0 and (b^T x)(c2^T x) = 0. A union of n subspaces can be represented with a set of homogeneous polynomials of degree n (De Morgan's rule).
19
Fitting polynomials to data points
Polynomials can be written linearly in terms of the vector of coefficients c by using the polynomial embedding (Veronese map) ν_n: p(x) = c^T ν_n(x). The coefficients of the polynomials can then be computed from the null space of the embedded data matrix [ν_n(x1), …, ν_n(xN)]^T (N = #data points); solve using least squares.
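A hedged sketch of this step for the simplest case, two lines in R^2 (the helper name `veronese` and the toy data are ours; real GPCA implementations handle noise and general dimensions more carefully):

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    """Degree-n Veronese embedding of the columns of X (K x N): one row per monomial."""
    K, N = X.shape
    monos = list(combinations_with_replacement(range(K), n))
    V = np.ones((len(monos), N))
    for r, idx in enumerate(monos):
        for i in idx:
            V[r] *= X[i]
    return V                                   # M_n(K) x N

# Two lines in R^2: x2 = 0 and x1 = x2  ->  p(x) = x2 (x1 - x2) = 0
rng = np.random.default_rng(1)
t = rng.standard_normal(100)
X = np.hstack([np.vstack([t[:50], np.zeros(50)]),    # line x2 = 0
               np.vstack([t[50:], t[50:]])])         # line x1 = x2
Vn = veronese(X, 2)
# coefficient vector c of the fitting polynomial = null vector of the embedded data
_, _, Wt = np.linalg.svd(Vn.T)
c = Wt[-1]                 # up to scale/sign: (0, 1, -1), i.e. x1*x2 - x2^2
print(np.round(c / np.abs(c).max(), 3))
```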
20
Finding a basis for each subspace
Case of hyperplanes: only one polynomial; the number of subspaces n is the degree of the polynomial, and the basis vectors are the normal vectors. Polynomial Factorization (GPCA-PFA) [CVPR 2003]: find the roots of a polynomial of degree n in one variable, then solve linear systems in n variables; the solution is obtained in closed form for n ≤ 4. Problems: computing roots may be sensitive to noise; the estimated polynomial may not factor perfectly with noisy data; the method cannot be applied to subspaces of different dimensions. Moreover, the polynomials are estimated only up to a change of basis, hence they may not factor, even with perfect data.
21
Finding a basis for each subspace
Polynomial Differentiation (GPCA-PDA) [CVPR'04]: the gradient of a fitting polynomial, evaluated at a point of subspace i, is a normal vector to subspace i. Hence, to learn a mixture of subspaces we just need one positive example per class.
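Continuing the toy example, a sketch of the differentiation step; it reuses the `veronese` helper and the coefficient vector `c` from the previous sketch, and uses a numerical gradient for simplicity (an analytic gradient of the monomials is what one would use in practice):

```python
import numpy as np

def poly_grad(c, x, n, eps=1e-6):
    """Numerical gradient of p(x) = c^T veronese_n(x); reuses `veronese` from the
    sketch above. At a point on subspace i, the gradient is normal to subspace i."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x); e[k] = eps
        g[k] = (c @ veronese((x + e)[:, None], n)[:, 0]
                - c @ veronese((x - e)[:, None], n)[:, 0]) / (2 * eps)
    return g

# Two-line example: c ~ (0, 1, -1), i.e. p(x) = x1*x2 - x2^2
c = np.array([0.0, 1.0, -1.0])
print(poly_grad(c, np.array([1.0, 0.0]), 2))   # ~(0, 1): normal to the line x2 = 0
print(poly_grad(c, np.array([1.0, 1.0]), 2))   # ~(1, -1): normal to the line x1 = x2
```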
22
Choosing one point per subspace
With noise and outliers, the polynomials may not define a perfect union of subspaces, but the normals can still be estimated correctly by choosing the evaluation points optimally. The difficulty: how to measure the distance to the closest subspace without knowing the segmentation?
23
GPCA for hyperplane segmentation
The coefficients of the polynomial can be computed from the null space of the embedded data matrix; solve using least squares (N = #data points). The number of subspaces can be computed from the rank of the embedded data matrix. The normals to the subspaces can be computed from the derivatives of the polynomial.
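A sketch of the rank-based count of the number of hyperplanes, reusing `veronese` and the two-line data `X` from the earlier sketches; the threshold `tol` is an assumed noise parameter:

```python
import numpy as np

def count_hyperplanes(X, max_n=5, tol=1e-8):
    """Estimate the number of hyperplanes as the smallest Veronese degree i at which
    the embedded data matrix becomes rank deficient (a fitting polynomial appears)."""
    for i in range(1, max_n + 1):
        Vi = veronese(X, i)                          # M_i x N embedded data
        s = np.linalg.svd(Vi, compute_uv=False)
        if s[-1] < tol * s[0]:                       # a null direction appeared
            return i
    return None

# The two-line data above admit no linear fitting polynomial, but one quadratic:
print(count_hyperplanes(X))                          # -> 2
```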
24
GPCA for subspaces of different dimensions
There are multiple polynomials fitting the data, and the derivative of each polynomial gives a different normal vector. A basis for each subspace can be obtained by applying PCA to the normal vectors.
25
Dealing with high-dimensional data
Minimum number of points: K = dimension of the ambient space, n = number of subspaces. In practice the dimension ki of each subspace is much smaller than K. The number and dimensions of the subspaces are preserved by a generic linear projection onto a lower-dimensional subspace. Outliers can be removed by robustly fitting the subspace. Open problem: how to choose the projection? PCA?
26
GPCA with spectral clustering
Build a similarity matrix between pairs of points and use its eigenvectors to cluster the data. How to define a similarity for subspaces? Want points in the same subspace to be close and points in different subspaces to be far. Use GPCA to get a basis at each point; distance: subspace angles.
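A possible implementation of this similarity, assuming a local basis has already been attached to every point (e.g., from the GPCA gradients above); the Gaussian form and the scale `sigma` are our choices, not prescribed by the talk:

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.cluster import SpectralClustering

def subspace_affinity(basis_i, basis_j, sigma=0.1):
    """Similarity from the principal angles between the subspaces at two points."""
    theta = subspace_angles(basis_i, basis_j)
    return np.exp(-np.sum(np.sin(theta) ** 2) / sigma)

def cluster(bases, n_clusters):
    """bases[p] = orthonormal basis (K x k) attached to point p."""
    N = len(bases)
    A = np.array([[subspace_affinity(bases[i], bases[j]) for j in range(N)]
                  for i in range(N)])
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)
```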
27
Extensions of GPCA to nonlinear manifolds
Segmentation of quadratic forms (Rao-Yang-Ma '05, '06). Segmentation of bilinear surfaces (Vidal-Ma-Soatto-Sastry '03, '06). Segmentation of mixed linear and bilinear surfaces (Singaraju-Vidal '06). Segmentation of trilinear surfaces (Hartley-Vidal '05, '08).
28
Part II: Nonlinear Manifold Clustering
René Vidal, Center for Imaging Science, Institute for Computational Medicine, Johns Hopkins University
29
Locally Linear Manifold Clustering (LLMC)
Nonlinear manifolds in a Euclidean space; nonlinear sub-manifolds of a Riemannian space. Goals: develop a framework for simultaneous clustering and dimensionality reduction; reduce manifold clustering to central clustering. Contributions: extend NLDR from one sub-manifold to multiple sub-manifolds; extend NLDR from Euclidean spaces to Riemannian spaces; show that when the sub-manifolds are separated, all points in one sub-manifold are mapped to a single point.
30
Nonlinear dimension reduction & clustering
Global techniques Isomap (Tenenbaum et al. ‘00) Kernel PCA (Schölkopf-Smola’98) Local techniques Locally Linear Embedding (LLE) (Roweis-Saul ’00) Laplacian Eigenmaps (LE) (Belkin-Niyogi ‘02) Hessian LLE (HLLE) (Donoho-Grimes ‘03) Local Tangent Space Alignment (Zha-Zhang’05) Maximum Variance Unfolding (Weinberger-Saul ‘04) Conformal Eigenmaps (Sha-Saul’05) Clustering based on geometry LLE+Spectral clustering (Polito-Perona ’02) Spectral embedding and clustering (Brand-Huang’03) Isomap+EM (Souvenir-Pless’05) Clustering based on dimension Fractal dimension (Barbara-Chen’00) Tensor voting (Mordohai-Medioni’05) Dimension induced clustering (Gionis et al. ’05) Translated Poisson mixtures (Haro et al.’08)
31
Locally linear embedding (LLE)
Find the k-nearest neighbors of each data point according to the Euclidean distance. Compute a matrix W that represents each local neighborhood as the affine subspace spanned by a point and its k-nearest neighbors: find the weights w_ij that minimize the reconstruction error ||x_i − Σ_j w_ij x_j||^2, subject to Σ_j w_ij = 1. Solve a sparse eigenvalue problem on the matrix M = (I − W)^T (I − W); the first eigenvector is the constant vector corresponding to eigenvalue 0, and the following eigenvectors give the low-dimensional embedding.
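A compact sketch of the three LLE steps just listed, in NumPy/SciPy (the regularization constant and variable names are ours):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

def lle(X, k, d):
    """Plain LLE on the rows of X (N x D): kNN by Euclidean distance, reconstruction
    weights by constrained least squares, embedding from the bottom eigenvectors
    of M = (I - W)^T (I - W), discarding the constant one."""
    N = X.shape[0]
    nbrs = cKDTree(X).query(X, k + 1)[1][:, 1:]      # drop the point itself
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[nbrs[i]] - X[i]                        # neighbors in local coordinates
        G = Z @ Z.T
        G += 1e-9 * np.trace(G) * np.eye(k)          # regularize degenerate neighborhoods
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()                  # affine (sum-to-one) weights
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = eigh(M)
    return vecs[:, 1:d + 1]                          # skip the eigenvalue-0 eigenvector
```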
32
Locally linear manifold clustering
Nonlinear manifolds: if the manifolds are k-separated (no point has a nearest neighbor on another manifold), M is block-diagonal, and the vectors in its null space are membership indicators, whose i-th entry equals 1 if point i belongs to group j and 0 otherwise. If the manifolds are not k-separated, or for mixtures of linear and nonlinear manifolds, the null-space vectors are no longer exact indicators; still, if U is a basis for the null space of M, the membership vectors can be recovered by centrally clustering the rows of U.
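A sketch of the resulting clustering step, assuming the matrix `M` from the LLE sketch above; for k-separated manifolds the bottom eigenvectors of M span the indicator vectors, so central clustering (here k-means) on them recovers the segmentation:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def llmc_labels(M, n_manifolds):
    """LLMC clustering step: take the n_manifolds bottom eigenvectors of M
    (spanning the null space in the separated case) and cluster their rows."""
    _, vecs = eigh(M, subset_by_index=[0, n_manifolds - 1])
    return KMeans(n_clusters=n_manifolds, n_init=10).fit_predict(vecs)
```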
33
Extending LLE to Riemannian manifolds
Manifold geometry is essential only in the first two steps of each algorithm. How to select the k-nearest neighbors? By using the Riemannian distance in place of the Euclidean one. How to compute the matrix representing the local geometry?
34
Extending LLE to Riemannian manifolds
LLE involves writing each data point as a linear combination of its neighbors. In the Euclidean case this reduces to a least-squares problem; in the Riemannian case it becomes an interpolation problem on the manifold. How should the data points be interpolated? What cost function should be minimized?
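One natural answer, sketched below in the notation of the Euclidean case, is to measure the reconstruction error in the tangent space at each point through the logarithm map (a sketch of the idea, not necessarily the exact cost used in the talk):

```latex
% Euclidean LLE weights:  min_{w_i} || x_i - sum_j w_ij x_j ||^2,  sum_j w_ij = 1.
% Riemannian version: since log_{x_i}(x_i) = 0, the error lives in the tangent space at x_i:
\min_{w_i}\;\Big\|\sum_{j\in\mathcal{N}(i)} w_{ij}\,\log_{x_i}(x_j)\Big\|_{x_i}^{2}
\qquad\text{s.t.}\qquad \sum_{j\in\mathcal{N}(i)} w_{ij}=1.
```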
35
Part III: Applications in Computer Vision and Diffusion Tensor Imaging
René Vidal, Center for Imaging Science, Institute for Computational Medicine, Johns Hopkins University
36
3-D motion segmentation problem
Given a set of point correspondences in multiple views, determine: the number of motion models; the motion model (affine, homography, fundamental matrix, trifocal tensor); the segmentation, i.e., the model to which each pixel belongs. This problem is equivalent to a manifold clustering problem. Affine projection: the 2F-dimensional point trajectories live in linear subspaces of dimension 2, 3, or 4. Perspective projection: two views are related by a bilinear constraint, and the 2F-dimensional trajectories live in a 3-dimensional manifold.
37
Single-body factorization
Structure = 3-D surface; motion = camera position and orientation. Affine camera model: p = point index, f = frame index, P = #points, F = #frames. The motion of one rigid body lives in a 4-D subspace (Boult and Brown '91, Tomasi and Kanade '92).
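A minimal sketch of the rank-4 factorization this slide relies on (the function name is ours; the 4x4 gauge ambiguity between motion and structure is ignored here):

```python
import numpy as np

def factorize(W, rank=4):
    """Tomasi-Kanade-style factorization sketch: the 2F x P measurement matrix of a
    single rigid body under an affine camera has rank <= 4, so a truncated SVD
    splits it into motion (2F x 4) and structure (4 x P), up to a 4x4 ambiguity."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])              # motion factor
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]       # structure factor, W ~ M @ S
    return M, S
```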
38
Hopkins 155 motion segmentation database
Collected 155 sequences: 120 with 2 motions, 35 with 3 motions. Types of sequences: checkerboard sequences (mostly full-dimensional and independent motions); traffic sequences (mostly degenerate (linear, planar) and partially dependent motions); articulated sequences (mostly full-dimensional and partially dependent motions). Point correspondences: in a few cases provided by Kanatani & Pollefeys; in most cases extracted semi-automatically with OpenCV.
39
Experimental results: Hopkins 155 database
2 motions: 120 sequences, 266 points, 30 frames. 3 motions: 35 sequences, 398 points, 29 frames.
40
Modeling dynamic textures
Examples of dynamic textures. Model the temporal evolution as the output of a linear dynamical system (LDS) (Soatto et al. '01): dynamics z_{t+1} = A z_t + v_t; images/appearance y_t = C z_t + w_t.
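For concreteness, a toy generator for this model (all names and the isotropic-noise assumption are ours):

```python
import numpy as np

def simulate_lds(A, C, z0, T, q=0.01, r=0.01, seed=0):
    """Sample T frames from z_{t+1} = A z_t + v_t, y_t = C z_t + w_t with isotropic
    Gaussian noises of variance q and r (a toy dynamic-texture generator)."""
    rng = np.random.default_rng(seed)
    z, Y = z0, []
    for _ in range(T):
        Y.append(C @ z + np.sqrt(r) * rng.standard_normal(C.shape[0]))
        z = A @ z + np.sqrt(q) * rng.standard_normal(z.shape)
    return np.array(Y)      # T x (#pixels), one vectorized image per row
```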
41
Segmentation of dynamic textures
Model the intensity at each pixel as the output of an AR model. Define regressors and parameters: regressors from the same texture (e.g., water vs. steam) live in a hyperplane, so multiple dynamic textures live in multiple hyperplanes, and the regressors can be clustered using GPCA.
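A hedged sketch of how such regressors might be assembled from a grayscale video array (the helper name and the exact stacking convention are our assumptions):

```python
import numpy as np

def regressors(video, k):
    """Stack, for every pixel and time t >= k, the vector [y_t, y_{t-1}, ..., y_{t-k}]
    of its intensities. Under the AR model y_t = a_1 y_{t-1} + ... + a_k y_{t-k},
    each such vector is orthogonal to (1, -a_1, ..., -a_k): pixels sharing a texture
    share a hyperplane, so the columns can be fed to the GPCA sketches above."""
    T = video.shape[0]
    Y = video.reshape(T, -1)                             # T x (#pixels)
    cols = [np.stack([Y[t - i] for i in range(k + 1)], axis=0)
            for t in range(k, T)]                        # each (k+1) x #pixels
    return np.concatenate(cols, axis=1)                  # (k+1) x ((T-k) * #pixels)
```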
42
Segmentation of dynamic textures
43
Diffusion weighted MR imaging (DW-MRI)
DW-MRI is a 3-D imaging technique that measures the diffusion of water molecules in living tissues. The direction of maximum diffusion is indicative of the anisotropy of the medium, so it can be used to measure local fiber orientation in anisotropic living tissues. DW-MRI provides a tremendous opportunity to develop quantitative tools that can help the development of precise biological models of brain development, multiple sclerosis, Alzheimer's disease, schizophrenia, autism, and reading disability. (Hagmann et al., NeuroImage, 2004)
44
Segmentation of diffusion images: SPSD(3)
Diffusion at each voxel is represented by a 3x3 symmetric positive semi-definite tensor. Required Riemannian ingredients: a metric, the exponential map, the logarithm map, and geodesic interpolation.
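As an illustration, here is a sketch of these ingredients for the affine-invariant metric on SPD(3), one common choice in the diffusion-tensor literature (the talk's exact metric may differ; function names are ours):

```python
import numpy as np

def _apply(S, f):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def spd_log(P, Q):
    """Logarithm map at P: the tangent vector at P pointing toward Q."""
    Ph, Pih = _apply(P, np.sqrt), _apply(P, lambda w: 1.0 / np.sqrt(w))
    return Ph @ _apply(Pih @ Q @ Pih, np.log) @ Ph

def spd_exp(P, V):
    """Exponential map at P: maps a tangent vector V back to the manifold."""
    Ph, Pih = _apply(P, np.sqrt), _apply(P, lambda w: 1.0 / np.sqrt(w))
    return Ph @ _apply(Pih @ V @ Pih, np.exp) @ Ph

def spd_dist(P, Q):
    """Geodesic distance between two diffusion tensors."""
    Pih = _apply(P, lambda w: 1.0 / np.sqrt(w))
    w = np.linalg.eigvalsh(Pih @ Q @ Pih)
    return np.sqrt(np.sum(np.log(w) ** 2))

def spd_geodesic(P, Q, t):
    """Geodesic interpolation: the point a fraction t of the way from P to Q."""
    return spd_exp(P, t * spd_log(P, Q))
```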
45
Segmentation of diffusion images: SPSD(3)
Segmentation of the cingulum and the corpus callosum. Segmentation of: the ventricles (isotropic CSF); the genu of the corpus callosum (very anisotropic); the forceps minor (fibers diverge and mingle with other fiber bundles).
46
Segmentation of probability density function
Square-root representation of probability density functions: under the Fisher-Rao metric, the geodesic distance, exponential map, and logarithm map all have closed forms. Texture database: 3 classes, 92 images in each class.
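A sketch of these closed forms for densities discretized on a grid of step `dx` (the square-root map sends densities to the unit Hilbert sphere, so the maps below are just great-circle geometry; names are ours):

```python
import numpy as np

def sqrt_rep(p, dx):
    """Square-root representation: psi = sqrt(p) has unit L2 norm, so densities
    become points on the unit Hilbert sphere."""
    psi = np.sqrt(p)
    return psi / np.sqrt(np.sum(psi ** 2) * dx)      # guard against discretization error

def fr_dist(psi1, psi2, dx):
    """Fisher-Rao geodesic distance = great-circle distance on the sphere."""
    return np.arccos(np.clip(np.sum(psi1 * psi2) * dx, -1.0, 1.0))

def fr_log(psi1, psi2, dx):
    """Logarithm map on the sphere: the tangent vector at psi1 pointing to psi2."""
    c = np.clip(np.sum(psi1 * psi2) * dx, -1.0, 1.0)
    u = psi2 - c * psi1
    nu = np.sqrt(np.sum(u ** 2) * dx)
    return np.arccos(c) * u / nu if nu > 1e-12 else np.zeros_like(psi1)

def fr_exp(psi, v, dx):
    """Exponential map: follow the great circle from psi in direction v."""
    nv = np.sqrt(np.sum(v ** 2) * dx)
    return psi if nv < 1e-12 else np.cos(nv) * psi + np.sin(nv) * v / nv
```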
47
Summary GPCA: an algorithm for clustering subspaces of a Euclidean space, of unknown and possibly different dimensions and with arbitrary intersections: project the data onto a low-dimensional subspace; fit polynomials to the projected data; differentiate the polynomials to obtain a basis for each subspace. LLMC: an algorithm for clustering sub-manifolds of a Riemannian space: extends NLDR from one sub-manifold to multiple sub-manifolds; extends NLDR from Euclidean spaces to Riemannian spaces; reduces manifold clustering to a central clustering problem. Applications to many problems in computer vision: segmentation of video clips, rigid-body motions, and dynamic textures; segmentation of diffusion tensor images and probability density functions.
48
References: Springer-Verlag 2008
49
Slides, MATLAB code, papers
50
Vision Lab @ Johns Hopkins University
For more information: Vision Lab, Johns Hopkins University. Thank you! Funding: NSF CNS, JHU APL, NIH R01 HL082729, WSE-APL, NIH-NHLBI, NSF CNS, NSF IIS, ARL Robotics-CTA, ONR N