1
Object Orie’d Data Analysis, Last Time
Kernel Embedding
  Embed data in higher dimensional manifold
  Gives greater flexibility to linear methods
Support Vector Machines
  Aimed at very non-Gaussian data
  E.g. from Kernel Embedding
Distance Weighted Discrimination
  HDLSS improvement of SVM
2
Support Vector Machines
Graphical View, using Toy Example:
  Find separating plane
  To maximize distances from data to plane
  In particular the smallest distance
  Data points closest are called support vectors
  Gap between is called margin
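A minimal sketch of this picture, not from the slides: scikit-learn's linear SVC on an invented 2-d toy example, reading off the support vectors and the margin (for a linear SVM the gap width is 2/||w||).

```python
import numpy as np
from sklearn.svm import SVC

# Invented 2-d toy data: two roughly separated point clouds
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-2, 0], 0.7, size=(20, 2)),
               rng.normal([+2, 0], 0.7, size=(20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

# Nearly hard-margin linear SVM (large C): maximize the smallest
# distance from the data to the separating plane
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.coef_[0]
print("support vectors:", clf.support_vectors_)   # the closest data points
print("margin (gap width):", 2 / np.linalg.norm(w))
```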
3
Support Vector Machines
Graphical View, using Toy Example:
4
Support Vector Machines
Forgotten last time, Important Extension: Multi-Class SVMs
  Hsu & Lin (2002)
  Lee, Lin & Wahba (2002)
  Defined for “implicit” version
  “Direction Based” variation???
5
Support Vector Machines
Also forgotten last time, toy examples illustrating:
  Explicit vs. Implicit Kernel Embedding
  As well as the effect of the window width σ on Gaussian kernel embedding
6
SVMs, Comput’n & Embedding
For an “Embedding Map”, e.g.
Explicit Embedding:
  Maximize:
  Get classification function:
  Straightforward application of embedding
  But loses inner product advantage
7
SVMs, Comput’n & Embedding
Implicit Embedding:
  Maximize:
  Get classification function:
  Still defined only via inner products
  Retains optimization advantage
  Thus used very commonly
Comparison to explicit embedding? Which is “better”???
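The maximization and classification function on these two slides were images and did not survive transcription. A standard reconstruction in the usual SVM dual notation (which may differ from the slides' own symbols): for an embedding map Φ,

```latex
% Explicit embedding: run the linear SVM on the mapped data \Phi(x_i)
\max_{\alpha}\ \sum_i \alpha_i
  - \tfrac12 \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, \Phi(x_i)^{\top}\Phi(x_j),
\qquad 0 \le \alpha_i \le C, \quad \sum_i \alpha_i y_i = 0,
\qquad
f(x) = \operatorname{sign}\!\Big( \sum_i \alpha_i y_i\, \Phi(x_i)^{\top}\Phi(x) + b \Big).

% Implicit embedding: the same problem written only through the kernel
% K(x, x') = \Phi(x)^{\top}\Phi(x'), so \Phi itself is never computed
\max_{\alpha}\ \sum_i \alpha_i
  - \tfrac12 \sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j),
\qquad
f(x) = \operatorname{sign}\!\Big( \sum_i \alpha_i y_i\, K(x_i, x) + b \Big).
```

Both forms depend on the data only through inner products of embedded points, which is why the implicit (kernel) version retains the optimization advantage.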
8
Support Vector Machines
Target Toy Data set:
9
Support Vector Machines
Explicit Embedding, window σ = 0.1:
10
Support Vector Machines
Explicit Embedding, window σ = 1:
11
Support Vector Machines
Explicit Embedding, window σ = 10:
12
Support Vector Machines
Explicit Embedding, window σ = 100:
13
Support Vector Machines
Notes on Explicit Embedding:
  σ too small: poor generalizability
  σ too big: miss important regions
  Classical lessons from kernel smoothing
  Surprisingly large “reasonable region”
  I.e. parameter less critical (sometimes?)
  Also explore projections (in kernel space)
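A sketch of one way to run the explicit vs. implicit comparison above; the ring-shaped stand-in for the target toy data, the use of the data points themselves as embedding centers, and the correspondence gamma = 1/(2σ²) are my assumptions, not details given on the slides.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

def gauss_embed(X, centers, sigma):
    """Explicit Gaussian embedding: one feature per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Invented stand-in for the "target" toy data: a central blob inside a ring
rng = np.random.default_rng(1)
inner = rng.normal(0.0, 0.5, size=(50, 2))
theta = rng.uniform(0, 2 * np.pi, 50)
outer = np.c_[3 * np.cos(theta), 3 * np.sin(theta)] + rng.normal(0, 0.3, (50, 2))
X = np.vstack([inner, outer])
y = np.array([0] * 50 + [1] * 50)

for sigma in [0.1, 1, 10, 100]:
    # Explicit: embed (data points used as centers), then an ordinary linear SVM
    Z = gauss_embed(X, X, sigma)
    explicit = LinearSVC(C=1.0, max_iter=10000).fit(Z, y)
    # Implicit: RBF-kernel SVM; gamma = 1/(2*sigma^2) gives the same kernel
    implicit = SVC(kernel="rbf", C=1.0, gamma=1 / (2 * sigma ** 2)).fit(X, y)
    print(sigma, explicit.score(Z, y), implicit.score(X, y))
```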
14
Support Vector Machines
Kernel space projection, window σ = 0.1:
15
Support Vector Machines
Kernel space projection, window σ = 1:
16
Support Vector Machines
Kernel space projection, window σ = 10:
17
Support Vector Machines
Kernel space projection, window σ = 100:
18
Support Vector Machines
Kernel space projection, window σ = 100:
19
Support Vector Machines
Notes on Kernel space projection:
  σ too small: great separation
    But recall, poor generalizability
  σ too big: no longer separable
  As above: classical lessons from kernel smoothing
  Surprisingly large “reasonable region”
  I.e. parameter less critical (sometimes?)
  Also explore projections (in kernel space)
20
Support Vector Machines
Implicit Embedding, window σ = 0.1:
21
Support Vector Machines
Implicit Embedding, window σ = 0.5:
22
Support Vector Machines
Implicit Embedding, window σ = 1:
23
Support Vector Machines
Implicit Embedding, window σ = 10:
24
Support Vector Machines
Notes on Implicit Embedding:
  Similar Large vs. Small lessons
  Range of “reasonable results” seems to be smaller
    (note different range of windows)
  Much different “edge” behavior
    Interesting topic for future work…
25
Distance Weighted Discrim’n
2-d Visualization:
  Pushes Plane Away From Data
  All Points Have Some Influence
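The “all points have some influence” behavior comes from the DWD criterion, which replaces the SVM's smallest distance with a sum of reciprocal (slack-adjusted) distances; roughly, in the spirit of Marron, Todd and Ahn (2007), with notation possibly differing from the slides:

```latex
\min_{w,\,b,\,\xi}\ \sum_i \frac{1}{r_i} + C \sum_i \xi_i
\quad \text{subject to} \quad
r_i = y_i \left( w^{\top} x_i + b \right) + \xi_i \ \ge\ 0,
\qquad \xi_i \ge 0, \qquad \|w\| \le 1 .
```

Because every residual r_i enters through 1/r_i, every data point pulls on the separating plane, with points close to it pulling hardest; this is what pushes the plane away from all of the data rather than only from the support vectors.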
26
Distance Weighted Discrim’n
References for more on DWD:
  Current paper: Marron, Todd and Ahn (2007)
  Links to more papers: Ahn (2007)
  JAVA Implementation of DWD: caBIG (2006)
  SDPT3 Software: Toh (2007)
27
Batch and Source Adjustment
Recall from Class Notes 8/28/07
For Stanford Breast Cancer Data (C. Perou)
  Analysis in Benito et al. (2004), Bioinformatics, 20
  Adjust for Source Effects
    Different sources of mRNA
  Adjust for Batch Effects
    Arrays fabricated at different times
28
Source Batch Adj: Biological Class Col. & Symbols
29
Source Batch Adj: Source Colors
30
Source Batch Adj: PC 1-3 & DWD direction
31
Source Batch Adj: DWD Source Adjustment
32
Source Batch Adj: Source Adj’d, PCA view
33
Source Batch Adj: Source & Batch Adj’d, Adj’d PCA
34
Why not adjust using SVM?
Major Problem: Projected Distributional Shape
  Triangular distributions (oppositely skewed)
  Does not allow sensible rigid shift
35
Why not adjust using SVM?
Nicely Fixed by DWD
  Projected distributions near Gaussian
  Sensible to shift
36
Why not adjust by means?
  DWD is complicated: value added?
  Xuxin Liu example…
  Key is sizes of biological subtypes
  Differing ratio trips up the mean
  But DWD more robust (although still not perfect)
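A toy sketch (all numbers invented, and the discriminating direction is a placeholder since DWD itself is not implemented here) of why a straight mean shift is tripped up when the subtype ratio differs between batches, while shifting rigidly along a separating direction is not:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_batch(n_subtype_a, n_subtype_b, batch_offset):
    """Two biological subtypes (means 0 and 4 in gene 1) plus a batch offset."""
    a = rng.normal([0.0, 0.0], 0.5, size=(n_subtype_a, 2))
    b = rng.normal([4.0, 0.0], 0.5, size=(n_subtype_b, 2))
    return np.vstack([a, b]) + batch_offset

# Same batch-effect direction (gene 2), but different subtype ratios (80/20 vs 20/80)
batch1 = make_batch(80, 20, batch_offset=np.array([0.0, 0.0]))
batch2 = make_batch(20, 80, batch_offset=np.array([0.0, 3.0]))

# Mean adjustment: subtract each batch's overall mean.
# The subtype imbalance leaks biology (gene 1) into the "batch" correction.
print("mean adjustment removes:", batch2.mean(axis=0) - batch1.mean(axis=0))

# Direction-based adjustment (DWD-style idea): shift each batch rigidly
# along the separating direction only.
w = np.array([0.0, 1.0])          # stand-in for a fitted DWD direction
shift = (batch2 @ w).mean() - (batch1 @ w).mean()
batch2_adjusted = batch2 - shift * w
print("direction-based adjustment removes:", shift * w)
```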
37
Why not adjust by means?
  Next time: work in before and after slides, like those from DWDnormPreso.ppt
  in Research/Bioinf/caBIG
38
Twiddle ratios of subtypes
39
Why not adjust by means?
  DWD robust against non-proportional subtypes…
  Mathematical Statistical Question: Is there mathematics behind this?
  (will answer next time…)
40
DWD in Face Recognition
Face Images as Data (with M. Benito & D. Peña)
  Male – Female Difference?
  Discrimination Rule?
  Represented as long vector of pixel gray levels
  Registration is critical
41
DWD in Face Recognition, (cont.)
Registered Data
  Shifts and scale manually chosen
  To align eyes and mouth
  Still large variation
  See males vs. females???
42
DWD in Face Recognition, (cont.)
DWD Direction
  Good separation
  Images “make sense”
  Garbage at ends? (extrapolation effects?)
43
DWD in Face Recognition, (cont.)
Unregistered Version
  Much blurrier
  Since features don’t properly line up
  Nonlinear variation
  But DWD still works
  Can see M-F differ’ce?
44
DWD in Face Recognition, (cont.)
Interesting summary:
  Jump between means (in DWD direction)
  Clear separation of Maleness vs. Femaleness
45
DWD in Face Recognition, (cont.)
Fun Comparison:
  Jump between means (in SVM direction)
  Also distinguishes Maleness vs. Femaleness
  But not as well as DWD
46
DWD in Face Recognition, (cont.)
Analysis of difference: Project onto normals
  SVM has “small gap” (feels noise artifacts?)
  DWD “more informative” (feels real structure?)
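A sketch of the “project onto normals” comparison; the face data are a random placeholder and, since DWD is not available in scikit-learn, a mean-difference direction stands in for the DWD normal, so only the bookkeeping of the comparison is shown.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder "face" data: rows are images flattened to pixel gray-level vectors
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 64 * 64))            # 40 fake 64x64 images
y = np.array([0] * 20 + [1] * 20)             # male / female labels

def project_on_normal(X, w):
    """1-d scores: project each image vector onto the unit normal w."""
    w = w / np.linalg.norm(w)
    return X @ w

# SVM normal direction
w_svm = LinearSVC(C=1.0, max_iter=10000).fit(X, y).coef_[0]
scores_svm = project_on_normal(X, w_svm)

# A DWD normal would come from a DWD solver (e.g. the SDPT3-based code
# referenced above); a mean-difference stand-in keeps this sketch runnable
w_dwd_placeholder = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
scores_dwd = project_on_normal(X, w_dwd_placeholder)

# Compare the gap between projected class means, in overall sd units
for name, s in [("SVM", scores_svm), ("DWD (placeholder)", scores_dwd)]:
    gap = s[y == 1].mean() - s[y == 0].mean()
    print(name, "standardized gap:", gap / s.std())
```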
47
DWD in Face Recognition, (cont.)
Current Work:
  Focus on “drivers” (regions of interest)
  Relation to Discr’n?
  Which is “best”?
  Lessons for human perception?
48
Outcomes Data Breast Cancer Study (C. M. Perou):
Outcome of interest = death or survival
Connection with gene expression?
Approach:
  Treat death vs. survival during study as “classes”
  Find “direction that best separates the classes”
49
Outcomes Data: Find “direction that best separates the classes”
50
Outcomes Data: Find “direction that best separates the classes”
51
Outcomes Data Find “direction that best separates classes”
DWD Projection vs. SVM Projection
Notes:
  SVM is “better separated”? (recall “data piling” problems…)
  DWD gives “more spread between sub-populations”??? (perhaps “more stable”?)
52
Outcomes Data: Which is “better”?
Approach:
  Find “genes of interest”
  To maximize loadings of direction vectors (reflects pointing in gene direction)
  Show intensity plot (of gene expression)
  Using top 20 genes in each direction
53
Outcomes Data Which is “better”? Study with gene intensity plot
Order cases by DWD score (proj’n)
Order genes by DWD loading (vector entry)
Reduce to top & bottom 20
Color map shows gene expression
Shows genes that drive classification
Gene names also available
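A sketch of this ordering recipe; the expression matrix is an invented placeholder and, since a DWD fit is not available here, a mean-difference direction stands in for the DWD loadings.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder gene-expression matrix: rows = genes, columns = cases
rng = np.random.default_rng(4)
n_genes, n_cases = 500, 60
expr = rng.normal(size=(n_genes, n_cases))
y = np.array([0] * 30 + [1] * 30)             # survival vs. death "classes"

# Stand-in for the DWD direction (one loading per gene); a real analysis
# would take this from a DWD fit on the cases
direction = expr[:, y == 1].mean(axis=1) - expr[:, y == 0].mean(axis=1)

# Order cases by their score (projection of each case onto the direction)
scores = direction @ expr
case_order = np.argsort(scores)

# Order genes by loading, keep the top and bottom 20
gene_order = np.argsort(direction)
keep = np.r_[gene_order[:20], gene_order[-20:]]

# Color map of expression for the selected genes, cases in score order
plt.imshow(expr[np.ix_(keep, case_order)], aspect="auto", cmap="RdYlGn_r")
plt.xlabel("cases (ordered by score)")
plt.ylabel("top & bottom 20 genes (ordered by loading)")
plt.show()
```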
54
Outcomes Data: Which is “better”? DWD direction
55
Outcomes Data: Which is “better”? SVM direction
56
Outcomes Data Which is “better”?
DWD finds genes showing better separation
SVM genes are less informative
57
Outcomes Data: How about the Centroid (Mean Diff’nce) Method?
58
Outcomes Data: How about the Centroid (Mean Diff’nce) Method?
59
Outcomes Data: Compare to DWD direction
60
Outcomes Data: How about the Centroid (Mean Diff’nce) Method?
  Best yet, in terms of red – green plot?
  Projections unacceptably mixed?
  These are two different goals…
  Try for trade-off? Scale space approach???
  Interesting philosophical point: very simple things often “best”
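For concreteness, the Centroid (Mean Difference) direction is simply the unit vector joining the two class means; a minimal sketch on invented data, for comparison with the fitted SVM and DWD directions above:

```python
import numpy as np

def mean_difference_direction(X, y):
    """Centroid (Mean Difference) method: unit vector joining the class means."""
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return w / np.linalg.norm(w)

# Toy usage on made-up data; scores are projections onto the direction
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1, (30, 100)), rng.normal(0.5, 1, (30, 100))])
y = np.array([0] * 30 + [1] * 30)
w_md = mean_difference_direction(X, y)
scores = X @ w_md
```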
61
Outcomes Data Weakness of above analysis:
Some with “genes prone to disease” have not died yet
Perhaps can see in DWD expression plot?
Better analysis: more sophisticated survival methods
Work in progress w/ Brent Johnson, Danyu Li, Helen Zhang
62
Distance Weighted Discrim’n
2-d Visualization:
  Pushes Plane Away From Data
  All Points Have Some Influence
63
Distance Weighted Discrim’n
Maximal Data Piling [figure: BatchEffect1fig4Big.ps]
64
HDLSS Discrim’n Simulations
Main idea: Comparison of
  SVM (Support Vector Machine)
  DWD (Distance Weighted Discrimination)
  MD (Mean Difference, a.k.a. Centroid)
Linear versions, across dimensions
65
HDLSS Discrim’n Simulations
Overall Approach:
  Study different known phenomena
    Spherical Gaussians
    Outliers
    Polynomial Embedding
  Common sample sizes
  But wide range of dimensions
66
HDLSS Discrim’n Simulations
Spherical Gaussians: [figure: DWD1figEBig.ps]
67
HDLSS Discrim’n Simulations
Spherical Gaussians:
  Same setup as before
  Means shifted in dim 1 only
  All methods pretty good
  Harder problem for higher dimension
  SVM noticeably worse
  MD best (likelihood method)
  DWD very close to MD
  Methods converge for higher dimension??
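A sketch of this kind of simulation; the mean shift, sample sizes and dimensions are invented rather than the values used on the slides, and DWD is omitted because it needs a dedicated solver (e.g. the SDPT3-based code cited earlier).

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)

def spherical_gaussian_trial(d, n=25, shift=2.0):
    """Two spherical Gaussian classes whose means differ in dim 1 only."""
    mu = np.zeros(d); mu[0] = shift
    Xtr = np.vstack([rng.normal(0, 1, (n, d)), rng.normal(0, 1, (n, d)) + mu])
    Xte = np.vstack([rng.normal(0, 1, (200, d)), rng.normal(0, 1, (200, d)) + mu])
    ytr = np.r_[np.zeros(n), np.ones(n)]
    yte = np.r_[np.zeros(200), np.ones(200)]

    # Linear SVM test error
    svm = LinearSVC(C=1.0, max_iter=10000).fit(Xtr, ytr)
    err_svm = 1 - svm.score(Xte, yte)

    # Mean Difference (Centroid) rule: threshold the projection onto the
    # mean-difference direction at the midpoint between the class means
    w = Xtr[ytr == 1].mean(axis=0) - Xtr[ytr == 0].mean(axis=0)
    cut = (Xtr[ytr == 1].mean(axis=0) + Xtr[ytr == 0].mean(axis=0)) @ w / 2
    err_md = np.mean((Xte @ w > cut) != yte)
    return err_svm, err_md

for d in [10, 100, 1000]:   # HDLSS: dimension far exceeds sample size
    print(d, spherical_gaussian_trial(d))
```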
68
HDLSS Discrim’n Simulations
Outlier Mixture: [figure: DWD1figFBig.ps]
69
HDLSS Discrim’n Simulations
Outlier Mixture:
  80% dim. 1 …, other dims 0
  20% dim. 1 ±100, dim. 2 ±500, others 0
  MD is a disaster, driven by outliers
  SVM & DWD are both very robust
  SVM is best
  DWD very close to SVM (insignificant difference)
  Methods converge for higher dimension??
  Ignore RLR (a mistake)
70
HDLSS Discrim’n Simulations
Wobble Mixture: [figure: DWD1figGBig.ps]
71
HDLSS Discrim’n Simulations
Wobble Mixture:
  80% dim. 1 …, other dims 0
  20% dim. 1 ±0.1, rand dim ±100, others 0
  MD still very bad, driven by outliers
  SVM & DWD are both very robust
  SVM loses (affected by margin push)
  DWD slightly better (by weighted influence)
  Methods converge for higher dimension??
  Ignore RLR (a mistake)
72
HDLSS Discrim’n Simulations
Nested Spheres: [figure: DWD1figHBig.ps]
73
HDLSS Discrim’n Simulations
Nested Spheres:
  1st d/2 dims: Gaussian with variance 1 or C
  2nd d/2 dims: the squares of the 1st dims (as for 2nd degree polynomial embedding)
  Each method best somewhere
  MD best in highest d (data non-Gaussian)
  Methods not comparable (realistic)
  Methods converge for higher dimension??
  HDLSS space is a strange place
  Ignore RLR (a mistake)
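A sketch of generating nested-spheres data as described above; the value of C and the sample size are assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(7)

def nested_spheres(d, n=25, C=4.0):
    """First d/2 dims: Gaussian with variance 1 (class 0) or C (class 1);
    second d/2 dims: the squares of the first, as in a degree-2 embedding."""
    half = d // 2
    g0 = rng.normal(0, 1.0, (n, half))            # variance 1
    g1 = rng.normal(0, np.sqrt(C), (n, half))     # variance C (value assumed)
    X0 = np.hstack([g0, g0 ** 2])
    X1 = np.hstack([g1, g1 ** 2])
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

X, y = nested_spheres(d=100)
```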
74
HDLSS Discrim’n Simulations
Conclusions:
  Everything (sensible) is best sometimes
  DWD often very near best
  MD weak beyond Gaussian
Caution about simulations (and examples):
  Very easy to cherry pick best ones
  Good practice in Machine Learning:
    “Ignore method proposed, but read paper for useful comparison of others”
75
HDLSS Discrim’n Simulations
Caution: There are additional players
  E.g. Regularized Logistic Regression also looks very competitive
Interesting Phenomenon:
  All methods come together in very high dimensions???
76
HDLSS Discrim’n Simulations
Can we say more about:
  All methods come together in very high dimensions???
Mathematical Statistical Question:
  Mathematics behind this???
  (will answer next time)