1
Object Orie'd Data Analysis, Last Time
Kernel Embedding – use linear methods in a non-linear way
Support Vector Machines – completely non-Gaussian classification
Distance Weighted Discrimination
– HDLSS improvement of SVM
– Used in microarray data combination
– Face Data, Male vs. Female
2
Support Vector Machines
Forgotten last time, Important Extension: Multi-Class SVMs
Hsu & Lin (2002); Lee, Lin & Wahba (2002)
Defined for "implicit" version
"Direction Based" variation???
3
Distance Weighted Discrim'n
2-d Visualization: pushes plane away from data; all points have some influence
4
Distance Weighted Discrim'n: Maximal Data Piling
5
HDLSS Discrim'n Simulations
Main idea: comparison of
– SVM (Support Vector Machine)
– DWD (Distance Weighted Discrimination)
– MD (Mean Difference, a.k.a. Centroid)
Linear versions, across dimensions
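A minimal sketch of this kind of comparison, in Python: a linear SVM vs. the Mean Difference (centroid) rule on spherical Gaussian classes, across a range of dimensions. The mean shift of 2.2, the sample sizes, and the use of scikit-learn's LinearSVC are illustrative assumptions, not values from the slides; DWD is omitted because no standard Python implementation is assumed here.

```python
# Sketch only: compare linear SVM and Mean Difference (centroid) rules on
# spherical Gaussian classes whose means differ in dimension 1 only.
# Shift size and sample sizes are illustrative, not taken from the slides.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def two_gaussian_classes(n_per_class, d, shift=2.2):
    """Two spherical Gaussian classes, means differing in dim 1 only."""
    X0 = rng.standard_normal((n_per_class, d))
    X1 = rng.standard_normal((n_per_class, d))
    X1[:, 0] += shift
    return np.vstack([X0, X1]), np.repeat([0, 1], n_per_class)

def mean_difference_error(Xtr, ytr, Xte, yte):
    """Classify by projecting onto the difference of the class means."""
    m0, m1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    w, midpoint = m1 - m0, (m0 + m1) / 2
    pred = ((Xte - midpoint) @ w > 0).astype(int)
    return np.mean(pred != yte)

for d in [10, 100, 1000]:
    Xtr, ytr = two_gaussian_classes(25, d)
    Xte, yte = two_gaussian_classes(500, d)
    svm = LinearSVC(C=1.0, max_iter=10000).fit(Xtr, ytr)
    err_svm = np.mean(svm.predict(Xte) != yte)
    err_md = mean_difference_error(Xtr, ytr, Xte, yte)
    print(f"d={d:5d}  SVM error={err_svm:.3f}  MD error={err_md:.3f}")
```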
6
HDLSS Discrim'n Simulations
Overall Approach: study different known phenomena
– Spherical Gaussians
– Outliers
– Polynomial Embedding
Common sample sizes, but wide range of dimensions
7
HDLSS Discrim'n Simulations: Spherical Gaussians
8
HDLSS Discrim'n Simulations
Spherical Gaussians:
– Same setup as before, means shifted in dim 1 only
– All methods pretty good
– Harder problem for higher dimension
– SVM noticeably worse
– MD best (likelihood method)
– DWD very close to MD
– Methods converge for higher dimension??
9
HDLSS Discrim'n Simulations: Outlier Mixture
10
HDLSS Discrim'n Simulations
Outlier Mixture:
– 80% dim. 1, other dims 0
– 20% dim. 1 ±100, dim. 2 ±500, others 0
– MD is a disaster, driven by outliers
– SVM & DWD are both very robust
– SVM is best
– DWD very close to SVM (insig't difference)
– Methods converge for higher dimension??
– Ignore RLR (a mistake)
11
HDLSS Discrim'n Simulations: Wobble Mixture
12
HDLSS Discrim'n Simulations
Wobble Mixture:
– 80% dim. 1, other dims 0
– 20% dim. 1 ±0.1, rand dim ±100, others 0
– MD still very bad, driven by outliers
– SVM & DWD are both very robust
– SVM loses (affected by margin push)
– DWD slightly better (by w'ted influence)
– Methods converge for higher dimension??
– Ignore RLR (a mistake)
13
HDLSS Discrim'n Simulations: Nested Spheres
14
HDLSS Discrim'n Simulations
Nested Spheres:
– 1st d/2 dim's: Gaussian with variance 1 or C
– 2nd d/2 dim's: the squares of the 1st dim's (as for 2nd degree polynomial embedding)
– Each method best somewhere
– MD best in highest d (data non-Gaussian)
– Methods not comparable (realistic)
– Methods converge for higher dimension??
– HDLSS space is a strange place
– Ignore RLR (a mistake)
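A sketch of how the nested-spheres data could be generated; the variance C of the second class is not given on the slide, so the value 4 below is just a placeholder.

```python
# Sketch only: nested-spheres design. First d/2 coordinates are mean-zero
# Gaussians (variance 1 for one class, variance C for the other); the second
# d/2 coordinates are the squares of the first, i.e. a degree-2 polynomial
# embedding. C = 4 is an assumed placeholder value.
import numpy as np

rng = np.random.default_rng(1)

def nested_spheres_class(n, d, var):
    """One class of the nested-spheres example; d must be even."""
    half = d // 2
    Z = rng.normal(scale=np.sqrt(var), size=(n, half))  # first d/2 dims
    return np.hstack([Z, Z ** 2])                        # second d/2 dims: squares

X_inner = nested_spheres_class(25, 40, var=1.0)
X_outer = nested_spheres_class(25, 40, var=4.0)
```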
15
HDLSS Discrim'n Simulations
Conclusions:
– Everything (sensible) is best sometimes
– DWD often very near best
– MD weak beyond Gaussian
Caution about simulations (and examples): very easy to cherry pick best ones
Good practice in Machine Learning: "Ignore method proposed, but read paper for useful comparison of others"
16
HDLSS Discrim'n Simulations
Caution: there are additional players, e.g. Regularized Logistic Regression also looks very competitive
Interesting Phenomenon: all methods come together in very high dimensions???
17
HDLSS Asymptotics: Simple Paradoxes, I
For a d-dim'al Standard Normal dist'n, Z ~ N_d(0, I_d):
Euclidean distance to origin (as d → ∞): ‖Z‖ = √d + O_P(1)
- Data lie roughly on surface of sphere of radius √d
- Yet origin is point of highest density???
- Paradox resolved by: density w.r.t. Lebesgue Measure
18
HDLSS Asymptotics: Simple Paradoxes, II
For Z₁, Z₂ indep. d-dim'al Standard Normal dist'ns:
Euclidean distance between Z₁ and Z₂ (as d → ∞): ‖Z₁ − Z₂‖ = √(2d) + O_P(1)
Distance tends to non-random constant
Can extend to Z₁, …, Zₙ (all pairwise distances ≈ √(2d))
Where do they all go??? (we can only perceive 3 dim'ns)
19
HDLSS Asymptotics: Simple Paradoxes, III
For Z₁ indep. of Z₂, d-dim'al Standard Normal dist'ns:
High dim'al angles (as d → ∞): angle(Z₁, Z₂) ≈ 90°
- Everything is orthogonal???
- Where do they all go??? (again our perceptual limitations)
- Again 1st order structure is non-random
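These three paradoxes are easy to check numerically; a quick sketch (not from the slides):

```python
# Sketch only: for d-dimensional standard normal vectors, the norm concentrates
# near sqrt(d), pairwise distances near sqrt(2d), and pairwise angles near 90 degrees.
import numpy as np

rng = np.random.default_rng(2)
for d in [10, 1000, 100000]:
    Z1, Z2 = rng.standard_normal(d), rng.standard_normal(d)
    norm = np.linalg.norm(Z1)
    dist = np.linalg.norm(Z1 - Z2)
    cosang = Z1 @ Z2 / (np.linalg.norm(Z1) * np.linalg.norm(Z2))
    print(f"d={d:6d}  ||Z||/sqrt(d)={norm / np.sqrt(d):.3f}  "
          f"||Z1-Z2||/sqrt(2d)={dist / np.sqrt(2 * d):.3f}  "
          f"angle={np.degrees(np.arccos(cosang)):.1f} deg")
```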
20
HDLSS Asy's: Geometrical Representation, I
Assume Z₁, …, Zₙ ~ N_d(0, I_d), let d → ∞
Study Subspace Generated by Data
– Hyperplane through 0, of dimension n
– Points are "nearly equidistant to 0", & dist ≈ √d
– Within plane, can "rotate towards Unit Simplex"
– All Gaussian data sets are "near Unit Simplex Vertices"!!!
– "Randomness" appears only in rotation of simplex
Hall, Marron & Neeman (2005)
21
HDLSS Asy's: Geometrical Representation, II
Assume Z₁, …, Zₙ ~ N_d(0, I_d), let d → ∞
Study Hyperplane Generated by Data
– (n − 1)-dimensional hyperplane
– Points are pairwise equidistant, dist ≈ √(2d)
– Points lie at vertices of "regular n-hedron"
– Again "randomness in data" is only in rotation
– Surprisingly rigid structure in data?
22
HDLSS Asy's: Geometrical Representation, III
Simulation View: shows "rigidity after rotation"
23
HDLSS Asy's: Geometrical Representation, III
Straightforward Generalizations:
– non-Gaussian data: only need moments
– non-independent: use "mixing conditions"
– Mild eigenvalue condition on theoretical covariance
(with J. Ahn, K. Muller & Y. Chi)
All based on simple "Laws of Large Numbers"
24
HDLSS Asy's: Geometrical Representation, IV
Explanation of Observed (Simulation) Behavior: "everything similar for very high d"
– 2 popn's are 2 simplices (i.e. regular n-hedrons)
– All are same distance from the other class
– i.e. everything is a support vector
– i.e. all sensible directions show "data piling"
– so "sensible methods are all nearly the same"
– Including 1-NN
25
HDLSS Asy's: Geometrical Representation, V
Further Consequences of Geometric Representation
1. Inefficiency of DWD for uneven sample size (motivates weighted version, work in progress)
2. DWD more stable than SVM (based on deeper limiting distributions) (reflects intuitive feeling about sampling variation) (something like mean vs. median)
3. 1-NN rule inefficiency is quantified.
26
The Future of Geometrical Representation?
– HDLSS version of "optimality" results?
– "Contiguity" approach? Params depend on d?
– Rates of convergence?
– Improvements of DWD? (e.g. other functions of distance than inverse)
It is still early days …
27
NCI 60 Data
Recall from Sept. 6 & 8: NCI 60 Cell Lines
Interesting benchmark, since same cells
Data web available: http://discover.nci.nih.gov/datasetsNature2000.jsp
Both cDNA and Affymetrix platforms
28
NCI 60: Fully Adjusted Data, Melanoma Cluster BREAST.MDAMB435 BREAST.MDN MELAN.MALME3M MELAN.SKMEL2 MELAN.SKMEL5 MELAN.SKMEL28 MELAN.M14 MELAN.UACC62 MELAN.UACC257
29
NCI 60: Fully Adjusted Data, Leukemia Cluster LEUK.CCRFCEM LEUK.K562 LEUK.MOLT4 LEUK.HL60 LEUK.RPMI8266 LEUK.SR
30
NCI 60: Views using DWD Dir'ns (focus on biology)
31
Real Clusters in NCI 60 Data? From Sept. 8:
Simple Visual Approach:
– Randomly relabel data (Cancer Types)
– Recompute DWD dir'ns & visualization
– Get heuristic impression from this
– Some types appeared signif'ly different, others did not
Deeper Approach: Formal Hypothesis Testing
32
HDLSS Hypothesis Testing
Approach: DiProPerm Test (Direction – Projection – Permutation)
Ideas:
– Find an appropriate Direction vector
– Project data into that 1-d subspace
– Construct a 1-d test statistic
– Analyze significance by Permutation
33
HDLSS Hypothesis Testing – DiProPerm test
DiProPerm Test Context:
Given 2 sub-populations, X & Y
Are they from the same distribution? Or significantly different?
H₀: L(X) = L(Y) vs. H₁: L(X) ≠ L(Y)
34
HDLSS Hypothesis Testing – DiProPerm test
Reasonable Direction vectors:
– Mean Difference
– SVM
– Maximal Data Piling
– DWD (used in the following)
– Any good discrimination direction…
35
HDLSS Hypothesis Testing – DiProPerm test
Reasonable Projected 1-d statistics:
– Two sample t-test (used here)
– Chi-square test for different variances
– Kolmogorov-Smirnov
– Any good distributional test…
36
HDLSS Hypothesis Testing – DiProPerm test
DiProPerm Test Steps:
1. For original data: find Direction vector, Project data, compute True Test Statistic
2. For (many) random relabellings of data: find Direction vector, Project data, compute Perm'd Test Stat
3. Compare: True Stat among population of Perm'd Stats; quantile gives p-value
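A minimal sketch of these steps, with simple stand-ins: the mean-difference direction instead of DWD (for which no standard Python implementation is assumed) and the two-sample t statistic, with significance from random relabellings.

```python
# Sketch only: DiProPerm with the mean-difference Direction, t-statistic
# Projection, and Permutation (random relabelling) p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def diproperm(X, Y, n_perm=1000):
    def stat(A, B):
        w = A.mean(axis=0) - B.mean(axis=0)                   # Direction: mean difference
        w /= np.linalg.norm(w)
        return abs(stats.ttest_ind(A @ w, B @ w).statistic)   # Projection + t stat

    t_true = stat(X, Y)
    Z, n_x = np.vstack([X, Y]), len(X)
    t_perm = np.empty(n_perm)
    for b in range(n_perm):                                   # Permutation: relabel at random
        idx = rng.permutation(len(Z))
        t_perm[b] = stat(Z[idx[:n_x]], Z[idx[n_x:]])
    return t_true, np.mean(t_perm >= t_true)                  # true stat and p-value

# Toy usage: two HDLSS samples with a small mean shift in every coordinate
X = rng.standard_normal((20, 500)) + 0.2
Y = rng.standard_normal((20, 500))
print(diproperm(X, Y))
```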
37
HDLSS Hypothesis Testing – DiProPerm test
Remarks:
– Generally can't use standard null dist'ns, e.g. Student's t-table for the t-statistic
– Because Direction and Projection give a nonstandard context, i.e. violate traditional assumptions
– E.g. DWD finds separating directions, giving a completely invalid test
– This motivates the Permutation approach
38
Improved Statistical Power - NCI 60 Melanoma
39
Improved Statistical Power - NCI 60 Leukemia
40
Improved Statistical Power - NCI 60 NSCLC
41
Improved Statistical Power - NCI 60 Renal
42
Improved Statistical Power - NCI 60 CNS
43
Improved Statistical Power - NCI 60 Ovarian
44
Improved Statistical Power - NCI 60 Colon
45
Improved Statistical Power - NCI 60 Breast
46
Improved Statistical Power - Summary

Type        cDNA-t   Affy-t   Comb-t   Affy-P   Comb-P
Melanoma    36.8     39.9     51.8     –        e-70
Leukemia    18.3     23.8     27.5     0.12     0.00001
NSCLC       17.3     25.1     23.5     0.18     0.02
Renal       15.6     20.1     22.0     0.54     0.04
CNS         13.4     18.6     18.9     0.62     0.21
Ovarian     11.2     20.8     17.0     0.21     0.27
Colon       10.3     17.4     16.3     0.74     0.58
Breast      13.8     19.6     19.3     0.51     0.16
47
HDLSS Hypothesis Testing – DiProPerm test
Many Open Questions on DiProPerm Test:
– Which Direction is "best"?
– Which 1-d Projected test statistic?
– Permutation vs. altern'es (bootstrap?)???
– How do these interact?
– What are the asymptotic properties?
48
Independent Component Analysis
Idea: find dir'ns that maximize indepen'ce
Motivating Context: Signal Processing, Blind Source Separation
References: Cardoso (1989); Cardoso & Souloumiac (1993); Lee (1998); Hyvärinen and Oja (1999); Hyvärinen, Karhunen and Oja (2001)
49
Independent Component Analysis
ICA, motivating example: Cocktail party problem
Hear several simultaneous conversations, would like to "separate them"
Model for "conversations": time series s₁(t) and s₂(t)
50
Independent Component Analysis Cocktail Party Problem
51
Independent Component Analysis
ICA, motivating example: Cocktail party problem
What the ears hear:
Ear 1: mixed version of the signals: x₁(t) = a₁₁ s₁(t) + a₁₂ s₂(t)
Ear 2: a second mixture: x₂(t) = a₂₁ s₁(t) + a₂₂ s₂(t)
52
Independent Component Analysis What the ears hear: Mixed versions
53
Independent Component Analysis
Goal: recover "signal" S from "data" X, for unknown "mixture matrix" A, where X(t) = A S(t) for all t
Goal is to find "separating weights" W, so that S(t) = W X(t) for all t
Problem: W = A⁻¹ would be fine, but A is unknown
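A small sketch of this model for the two-ear cocktail-party case; the sources and mixing matrix below are illustrative, not the ones used in the slides' example.

```python
# Sketch only: X = A S with two sources and two "ears"; if A were known,
# W = A^{-1} would recover the sources exactly, but A is unknown in practice.
import numpy as np

t = np.linspace(0, 1, 1000)
S = np.vstack([np.sign(np.sin(2 * np.pi * 5 * t)),   # source 1: square-ish wave
               np.sin(2 * np.pi * 11 * t)])           # source 2: sinusoid
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                            # unknown mixing matrix
X = A @ S                                             # what the two ears hear

W = np.linalg.inv(A)                                  # ideal separating weights
assert np.allclose(W @ X, S)
```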
54
Independent Component Analysis Solution 1: PCA
55
Independent Component Analysis Solution 2: ICA
56
Independent Component Analysis
"Solutions" for Cocktail Party example:
Approach 1: PCA (on "population of 2-d vectors") – directions of greatest variability do not solve this problem
Approach 2: ICA (will describe method later) – independent component directions do solve the problem (modulo "sign changes" and "identification")
57
Independent Component Analysis
Relation to FDA: recall the "data matrix" X (d rows × n columns)
Signal Processing: focus on rows (d time series, each of length n)
Functional Data Analysis: focus on columns (n data vectors)
Note: same 2 different viewpoints as dual problems in PCA
58
Independent Component Analysis FDA Style Scatterplot View - Signals
59
Independent Component Analysis FDA Style Scatterplot View - Data
60
Independent Component Analysis
FDA Style Scatterplot View:
Scatterplots give a hint of how blind recovery is possible
Affine transformation A stretches indep't signals into dependent ones
Inversion is key to ICA (even when A is unknown)
61
Independent Component Analysis
Why not PCA? Finds direction of greatest variability – wrong direction for signal separation
62
Independent Component Analysis
ICA Step 1: "sphere the data" (i.e. find a linear transformation to make mean = 0, cov = I)
i.e. work with Z = Σ^(−1/2)(X − X̄), where Σ is the sample covariance
Requires Σ of full rank (at least d ≤ n, i.e. no HDLSS)
Search for independence beyond linear and quadratic structure
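A minimal sketch of the sphering step (data matrix with variables in rows, as in the signal-processing view; assumes the sample covariance has full rank):

```python
# Sketch only: subtract the mean and multiply by an inverse square root of the
# sample covariance, so the transformed data have mean 0 and identity covariance.
import numpy as np

def sphere(X):
    """Whiten a d x n data matrix X (rows = variables, columns = observations)."""
    Xc = X - X.mean(axis=1, keepdims=True)     # center each variable
    cov = Xc @ Xc.T / Xc.shape[1]              # d x d sample covariance
    evals, evecs = np.linalg.eigh(cov)         # full rank needed: all evals > 0
    cov_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return cov_inv_sqrt @ Xc                   # mean 0, covariance I
```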
63
Independent Component Analysis
ICA Step 2: find directions that make (sphered) data as independent as possible
Worst case: Gaussian – sphered data are already independent
Interesting "converse application" of C.L.T.: for X₁ and X₂ independent (& non-Gaussian), a linear combination a₁X₁ + a₂X₂ is "more Gaussian" than X₁ or X₂, so maximal independence comes from least Gaussian directions
64
Independent Component Analysis
ICA Step 2: find dir'ns that make (sphered) data as independent as possible
Recall "independence" means: joint distribution is product of marginals
In cocktail party example: happens only when rotated so support is parallel to axes; otherwise have blank areas, while marginals are non-zero
65
Independent Component Analysis
Parallel Idea (and key to algorithm): find directions that maximize non-Gaussianity
Reason: starting from independent coordinates, most projections are Gaussian (since a projection is a "linear combo")
Mathematics behind this: Diaconis and Freedman (1984)
66
Independent Component Analysis
Worst case for ICA: Gaussian marginals
Then sphered data are independent, so have independence in all directions
Thus can't find useful directions
Gaussian distribution is characterized by: independent & spherically symmetric
67
Independent Component Analysis
Criteria for non-Gaussianity / independence:
– kurtosis (E X⁴ − 3, the 4th order cumulant)
– negative entropy
– mutual information
– nonparametric maximum likelihood
– "infomax" in neural networks
– interesting connections between these
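As a concrete instance of the first criterion, a small sketch computing the excess kurtosis of a projection of sphered data (zero for Gaussian directions, so large absolute values flag candidate independent components):

```python
# Sketch only: kurtosis criterion, |E u^4 - 3| for a standardized 1-d projection.
import numpy as np

def excess_kurtosis(u):
    """4th-order cumulant of a 1-d sample (0 for Gaussian data)."""
    u = (u - u.mean()) / u.std()
    return np.mean(u ** 4) - 3.0

def projected_kurtosis(Z, direction):
    """|excess kurtosis| of sphered data Z (d x n) projected on a direction."""
    direction = direction / np.linalg.norm(direction)
    return abs(excess_kurtosis(direction @ Z))
```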
68
Independent Component Analysis
Matlab Algorithm (optimizing any of the above): FastICA
http://www.cis.hut.fi/projects/ica/fastica/
Numerical gradient search method
Can find directions iteratively, or by simultaneous optimization
Appears fast, with good defaults
Should we worry about local optima???
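The link above is to the Matlab package; as a stand-in for readers working in Python, scikit-learn ships a FastICA implementation of the same algorithm. A self-contained toy run on two mixed signals (illustrative sources and mixing matrix, not the slides' example):

```python
# Sketch only: FastICA via scikit-learn on a toy two-signal mixture.
# Recovery is only up to row order and scale (the identifiability issues below).
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
S = np.vstack([np.sign(np.sin(2 * np.pi * 5 * t)),    # illustrative sources
               np.sin(2 * np.pi * 11 * t)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S            # two mixed "ear" signals

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X.T).T                      # estimated sources, 2 x n
A_hat = ica.mixing_                                   # estimated mixing matrix
```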
69
Independent Component Analysis
Notational summary:
1. First sphere the data: Z = Σ^(−1/2)(X − X̄)
2. Apply ICA: find a rotation to make the rows of the rotated Z independent
3. Can transform back to the original data scale (multiply by Σ^(1/2))
70
Independent Component Analysis
Identifiability problem 1: generally can't order the rows of S (& A)
Since, for a permutation matrix P (pre-multiplication by P swaps rows; post-multiplication by P swaps columns),
X(t) = A S(t) = (A Pᵀ)(P S(t)) for each col'n t, i.e. X = (A Pᵀ)(P S)
So P S and A Pᵀ are also solutions
71
Independent Component Analysis
Identifiability problem 1: Row Order
Saw this in Cocktail Party Example
FastICA: orders by non-Gaussian-ness?
72
Independent Component Analysis
Identifiability problem 2: can't find the scale of the elements of S
Since, for a (full rank) diagonal matrix D (pre-mult'n by D is scalar mult'n of rows; post-mult'n by D is scalar mult'n of col's),
X(t) = A S(t) = (A D⁻¹)(D S(t)) for each col'n t
So D S and A D⁻¹ are also solutions
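A tiny numeric check of this scale ambiguity (A, S, and D below are arbitrary illustrative matrices):

```python
# Sketch only: for any full-rank diagonal D, (A D^{-1})(D S) reproduces X exactly,
# so the scale of the rows of S cannot be identified from X alone.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[1.0, 0.6], [0.4, 1.0]])
S = rng.standard_normal((2, 5))
D = np.diag([3.0, -0.5])

X = A @ S
assert np.allclose((A @ np.linalg.inv(D)) @ (D @ S), X)
```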
73
Independent Component Analysis
Identifiability problem 2: Signal Scale
Not so clear in Cocktail Party Example
74
Independent Component Analysis
Signal Processing scale identification (Hyvärinen and Oja):
Choose scale so each signal has unit average energy: (1/n) Σₜ sᵢ(t)² = 1
Preserves energy along rows of data matrix
Explains the same scales in the Cocktail Party Example
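A one-function sketch of that convention: rescale each estimated source so its average energy is 1.

```python
# Sketch only: unit-average-energy scaling, (1/n) * sum_t s_i(t)^2 = 1 per row.
import numpy as np

def unit_average_energy(S_hat):
    """Rescale each row of S_hat (sources x time) to unit average energy."""
    energy = np.mean(S_hat ** 2, axis=1, keepdims=True)
    return S_hat / np.sqrt(energy)
```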
75
Independent Component Analysis
Would like to do: more toy examples illustrating how non-Gaussianity works
Like to see some? Check out old course notes:
http://www.stat.unc.edu/postscript/papers/marron/Teaching/CornellFDA/Lecture03-11-02/FDA03-11-02.pdf
http://www.stat.unc.edu/postscript/papers/marron/Teaching/CornellFDA/Lecture03-25-02/FDA03-25-02.pdf
76
Independent Component Analysis
One more "Would like to do": ICA testing of multivariate Gaussianity
Usual approaches: 1-d tests on marginals
New idea: use ICA to find "least Gaussian directions", and base a test on those
Koch, Marron and Chen (2004)
77
Unfortunately Not Covered
DWD & Micro-array Outcomes Data
Windup from FDA04-22-02.doc
– General Conclusion
– Validation