Classification on Manifolds
Suman K. Sen, joint work with Dr. J. S. Marron and Dr. Mark Foskey
Outline
Introduction
Introduction to M-reps
Goal
Classification
Popular methods of classification
Why do M-reps need special treatment?
Proposed Approach
Results
Future Research
Two groups: patients (56) and controls (26). Can we say anything about the shape variation between the two groups? Styner et al. (2004): m-rep model of the hippocampus.
Hippocampus
Visualization: Change of shape along the separating direction
Blue: correctly classified; magenta: misclassified. The +'s and o's are the two classes.
X-axis: scores along the separating direction. Right panel: projected model.
DWD in Face Recognition (cont.)
Registered data: shifts and scale chosen manually to align the eyes and mouth.
Still large variation. Can you see males vs. females?
DWD in Face Recognition (cont.)
DWD direction: good separation, and the images "make sense".
Garbage at the ends? (Extrapolation effects?)
Visualization: Change of shape along the separating direction
The separation is better; the shape change is a flattening at the top.
M-reps: Medial Atom
M-reps
Definitions
Geodesic: a curve locally minimizing the distance between points.
Exponential map Exp_p(X): maps a tangent vector X ∈ T_pM onto the manifold M along geodesics.
ExpMaps and LogMaps
For X ∈ T_pM there exists a geodesic γ(t) with X as its initial velocity: γ(t) = Exp_p(tX).
||∂γ/∂t||(t) = ||X||, so the map preserves distance along the geodesic.
LogMap: the inverse of the ExpMap, with d(x, y) = ||Log_x(y)|| = ||Log_y(x)||.
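These maps have closed forms on simple manifolds. Below is a minimal sketch for the unit sphere S^2 (one component of the m-rep atom space), assuming NumPy; the function names are illustrative, not from the authors' code.

```python
import numpy as np

def exp_map_sphere(p, X):
    """Exp_p(X): shoot a geodesic from the unit vector p with initial
    velocity X, a tangent vector at p (so X . p = 0)."""
    norm_X = np.linalg.norm(X)
    if norm_X < 1e-12:
        return p.copy()
    return np.cos(norm_X) * p + np.sin(norm_X) * X / norm_X

def log_map_sphere(p, q):
    """Log_p(q): the tangent vector at p whose geodesic reaches q;
    its length is the geodesic distance d(p, q)."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - cos_t * p              # component of q orthogonal to p
    return theta * v / np.linalg.norm(v)

# Round trip: Exp_p(Log_p(q)) = q, and d(p, q) = ||Log_p(q)||.
p = np.array([0.0, 0.0, 1.0])
q = np.array([1.0, 0.0, 0.0])
assert np.allclose(exp_map_sphere(p, log_map_sphere(p, q)), q)
```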
Literature Review on M-reps
The medial locus was proposed by Blum (1967); its properties were studied in 2D by Blum and Nagel (1978) and in 3D by Nackman and Pizer (1985).
Pizer et al. (1999) describe discrete M-reps; Yushkevich et al. (2003) describe continuous M-reps; Terriberry and Gerig (2006) treat continuous M-reps with branching.
Classification
X_i: attributes describing individuals (i = 1, …, n); n: number of individuals.
Y_i: class label ∈ {1, 2, …, K}; K: number of classes.
NOTE: we work with only two groups, and for mathematical convenience Y ∈ {1, -1}.
Goal of classification: given a set of (X_i, Y_i), find a rule f(x) that assigns a new individual to a group on the basis of its attributes X.
Classification: Popular Methods
Mean Difference (MD): assigns a new observation to the class whose mean is closest to it.
Fisher (1936): an improvement over MD; the optimal rule when the two classes are normally distributed with the same covariance matrix. Now called Fisher Linear Discrimination (FLD).
Vapnik (1982, 1995): Support Vector Machine (SVM); also see Burges (1998).
Marron et al. (2004): Distance Weighted Discrimination (DWD); unlike SVM it does not suffer from "data piling" and generalizes better in High Dimension Low Sample Size (HDLSS) situations.
Kernel embedding: linear classification done after embedding the data in a higher-dimensional space; see Schölkopf and Smola (2002). A list of publications on kernel methods: http://www.kernel-machines.org/publications.html
Classification
These methods give us:
a) a separating plane,
b) a normal vector (the separating direction),
c) projections of the data onto the separating direction.
A Different Approach on Manifolds?
On a manifold it is difficult to describe separating surfaces and there are no inner products, but distances are easy to calculate.
Approaches generally taken, and their drawbacks:
1. Flatten the manifold, e.g., a cylinder (R × S^1). Drawback: the geodesic distance is not used.
2. Do Euclidean statistics on the tangent plane at the overall mean (Fletcher et al., 2003, 2004). Drawback: the choice of the base point at which the tangent plane is constructed matters.
3. Treat the points as if the data were embedded in a higher-dimensional Euclidean space R^d. Drawback: the separating surface is not contained in the manifold; moreover, the projections of the data onto the separating direction are not interpretable.
Importance of Geodesic Distance
Choice of Base Point
Black and blue points represent the two groups. The figure shows that the choice of base point has a significant effect.
Meaningful Projections
It's Important to Work on the Manifold
Proposed Approach on Manifolds
Key concept: control points (each representative of a class). Use the distances from the control points.
Goal: find "good" control points. For example, on the sphere, the control points corresponding to the red boundary separate the data better.
Decision Function
f(x) = d^2(c_{-1}, x) - d^2(c_1, x)
If f(x) > 0, assign x to class 1; else assign x to class -1.
If y f(x) > 0 the decision is correct; if y f(x) < 0 it is wrong (y is the class label, +1 or -1).
NOTE: H = {x : f(x) = 0} is the separating boundary.
[Figure: level sets f(x) = 0 (the boundary H) and f(x) = 1, with the control points c_1 and c_{-1}.]
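As a concrete illustration, here is this decision rule on S^2, reusing the sphere helpers sketched earlier; c_pos and c_neg stand for the control points c_1 and c_{-1} (illustrative names, not the authors' code).

```python
import numpy as np

def geodesic_dist(p, q):
    """Great-circle distance d(p, q) = ||Log_p(q)|| between unit vectors."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def decision_function(x, c_pos, c_neg):
    """f(x) = d^2(c_{-1}, x) - d^2(c_1, x); positive favors class +1."""
    return geodesic_dist(c_neg, x) ** 2 - geodesic_dist(c_pos, x) ** 2

def classify(x, c_pos, c_neg):
    """Assign x to class 1 if f(x) > 0, else to class -1."""
    return 1 if decision_function(x, c_pos, c_neg) > 0 else -1
```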
Proposed Methods
1) Geodesic Mean Difference (GMD): analogous to the mean difference method; the two control points are the geodesic means of the two classes.
2) Iterative Tangent Plane SVM (ITanSVM): standard SVM on the tangent plane, with the base point carefully chosen through iterative steps.
3) Manifold SVM (MSVM): a generalization of the SVM criterion to manifolds.
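For GMD, the control points are the geodesic (Karcher/Fréchet) means of the two classes. A standard way to compute such a mean is to alternate log-map averaging with an exp-map update; a minimal sketch on S^2, reusing the helpers above (points is an (n, 3) array of unit vectors):

```python
import numpy as np

def geodesic_mean(points, n_iter=100, tol=1e-10):
    """Karcher mean on the sphere: repeatedly average the log-mapped
    points in the tangent plane at the current estimate, then map the
    average back to the sphere with the exponential map."""
    mu = points[0]
    for _ in range(n_iter):
        avg = np.mean([log_map_sphere(mu, q) for q in points], axis=0)
        if np.linalg.norm(avg) < tol:     # gradient is ~0: converged
            break
        mu = exp_map_sphere(mu, avg)
    return mu

# GMD control points: one geodesic mean per class (labels are +/-1).
# c_pos = geodesic_mean(points[labels == 1])
# c_neg = geodesic_mean(points[labels == -1])
```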
ITanSVM: The Algorithm
1) Calculate the geodesic mean of each of the two classes, then compute the mean (c) of those two means; construct the tangent plane at c.
2) Compute the SVM decision line on the tangent plane.
3) Find the pair of points such that a) the SVM line is the perpendicular bisector of the segment joining them, and b) the distance from the new points to the old points is minimal.
4) Map these two new points back to the manifold.
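Putting the four steps together, here is one way the loop could look on S^2. This is a sketch under my own reading of step 3 (reflecting across the SVM line gives the closest symmetric pair in closed form), reusing the sphere helpers above plus scikit-learn's linear SVM; none of the names come from the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def tangent_basis(p):
    """Orthonormal basis (2 x 3 matrix) of the tangent plane T_p S^2."""
    a = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(a, p)) > 0.9:          # p nearly parallel to a: switch axis
        a = np.array([0.0, 1.0, 0.0])
    e1 = a - np.dot(a, p) * p
    e1 /= np.linalg.norm(e1)
    return np.stack([e1, np.cross(p, e1)])

def reflect(z, w, b):
    """Reflect z across the line {z : w.z + b = 0} in the tangent plane."""
    return z - 2.0 * (np.dot(w, z) + b) / np.dot(w, w) * w

def itan_svm(points, labels, n_iter=10):
    """points: (n, 3) unit vectors; labels: array of +/-1."""
    # Step 1: base point c = geodesic midpoint of the two class means.
    m_pos = geodesic_mean(points[labels == 1])
    m_neg = geodesic_mean(points[labels == -1])
    c = exp_map_sphere(m_pos, 0.5 * log_map_sphere(m_pos, m_neg))
    c_pos, c_neg = m_pos, m_neg
    for _ in range(n_iter):
        B = tangent_basis(c)
        Z = np.array([B @ log_map_sphere(c, x) for x in points])
        # Step 2: linear SVM in the tangent-plane coordinates.
        svm = SVC(kernel="linear").fit(Z, labels)
        w, b = svm.coef_[0], svm.intercept_[0]
        # Step 3: pair of points whose perpendicular bisector is the SVM
        # line and which stay closest to the old control points; since
        # reflection is an isometry, the minimizer has a closed form.
        u_pos = B @ log_map_sphere(c, c_pos)
        u_neg = B @ log_map_sphere(c, c_neg)
        z_pos = 0.5 * (u_pos + reflect(u_neg, w, b))
        z_neg = reflect(z_pos, w, b)
        # Step 4: map the new control points back to the sphere, and
        # recenter the tangent plane between them for the next pass.
        c_pos = exp_map_sphere(c, B.T @ z_pos)
        c_neg = exp_map_sphere(c, B.T @ z_neg)
        c = exp_map_sphere(c_pos, 0.5 * log_map_sphere(c_pos, c_neg))
    return c_pos, c_neg
```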
Manifold SVM (MSVM): The Setup
Decision function f(x) = d^2(c_{-1}, x) - d^2(c_1, x), as before; it measures how far a point x_i sits from the separating boundary H determined by c_1 and c_{-1}.
Goal: find c_1 and c_{-1} that maximize the minimum distance of the training points to H (one way of looking at SVM that generalizes to manifolds).
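In that spirit, the quantity being maximized can be sketched as the worst-case signed margin over the training set. Optimizing it over the pair (c_1, c_{-1}) is the hard part; the toy search below (random tangent perturbations, keeping improvements) is only an illustration of the criterion, not the authors' optimizer. It reuses decision_function, geodesic_mean, and exp_map_sphere from the earlier sketches.

```python
import numpy as np

def min_margin(c_pos, c_neg, points, labels):
    """Smallest signed margin min_i y_i f(x_i); larger means the worst
    training point sits further on the correct side of H."""
    return min(y * decision_function(x, c_pos, c_neg)
               for x, y in zip(points, labels))

def msvm_random_search(points, labels, n_steps=2000, scale=0.1, seed=0):
    """Toy optimizer: perturb the control points along random tangent
    directions and keep any move that improves the minimum margin."""
    rng = np.random.default_rng(seed)
    c_pos = geodesic_mean(points[labels == 1])    # warm start at GMD
    c_neg = geodesic_mean(points[labels == -1])
    best = min_margin(c_pos, c_neg, points, labels)
    for _ in range(n_steps):
        def perturb(c):
            v = rng.normal(size=3)
            v -= np.dot(v, c) * c                 # project onto T_c S^2
            return exp_map_sphere(c, scale * v)
        cand = (perturb(c_pos), perturb(c_neg))
        m = min_margin(*cand, points, labels)
        if m > best:
            (c_pos, c_neg), best = cand, m
    return c_pos, c_neg
```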
SVM
The separating rule (w, b) between the two groups: w · x + b = 0.
Results: Hippocampi Data
Separation shown by the different methods: GMD, MSVM, TSVM, ITanSVM.
Results: Generated Ellipsoids
25 randomly distorted (bent, twisted, tapered) ellipsoids, in two groups: 11 with a negative twist parameter and 14 with a positive twist parameter.
Future Research
Extend DWD to manifold data. Recall Marron et al. (2004): Distance Weighted Discrimination (DWD); unlike SVM it does not suffer from "data piling" and generalizes better in HDLSS situations.
Future Research
Apply the methods to Diffusion Tensor Imaging data (at each voxel the observation is a 3×3 positive-definite matrix). Develop MSVM for the multi-category case.
42 THANKS
Results: Hippocampi Data: d(c_1, c_{-1}) vs. λ