PCA as Optimization (Cont.) Recall Toy Example: Empirical (Sample) Eigenvectors & Theoretical (Distribution) Eigenvectors are Different!

Connect Math to Graphics (Cont.) 2-d Toy Example PC1 Projections Best 1-d Approximations of Data

Connect Math to Graphics (Cont.) 2-d Toy Example PC2 Projections (= PC1 Resid's) 2nd Best 1-d Approximations of Data

PCA Redist'n of Energy (Cont.)

Eigenvalues Provide Atoms of SS Decompos'n. Useful Plots are: Power Spectrum: $\lambda_j$ vs. $j$; log Power Spectrum: $\log \lambda_j$ vs. $j$; Cumulative Power Spectrum: $\sum_{j' \le j} \lambda_{j'}$ vs. $j$. Note PCA Gives SS's for Free (As Eigenval's), But Watch Factors of $n$ (MLE) vs. $n-1$ (unbiased)

PCA Redist'n of Energy (Cont.) Note, have already considered some of these Useful Plots: Power Spectrum, Cumulative Power Spectrum. Common Terminology: Power Spectrum is Called “Scree Plot”: Kruskal (1964) (all but the name “scree”), Cattell (1966) (1st appearance of the name???)
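These plots are quick to produce once the eigenvalues are in hand. A minimal numpy/matplotlib sketch (the toy data and variable names are mine, not from the slides); the factor-of-$n$ choice in the covariance is made explicit:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 200)) * np.array([3.0, 2.0, 1.5, 1.0, 0.5])[:, None]  # d=5, n=200; columns as data objects
Xc = X - X.mean(axis=1, keepdims=True)                       # mean center
evals = np.linalg.eigvalsh(Xc @ Xc.T / Xc.shape[1])[::-1]    # eigenvalues, decreasing ("MLE" factor of n)

j = np.arange(1, len(evals) + 1)
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].plot(j, evals, "o-");            axes[0].set_title("Power Spectrum (scree plot)")
axes[1].plot(j, np.log(evals), "o-");    axes[1].set_title("log Power Spectrum")
axes[2].plot(j, np.cumsum(evals), "o-"); axes[2].set_title("Cumulative Power Spectrum")
plt.show()
```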

Correlation PCA A related (& better known?) variation of PCA: Replace cov. matrix with correlation matrix, i.e. do eigen analysis of $D^{-1/2} \hat{\Sigma} D^{-1/2}$, where $D = \operatorname{diag}(\hat{\sigma}_1^2, \ldots, \hat{\sigma}_d^2)$ is the diagonal matrix of sample variances

Correlation PCA Toy Example Contrasting Cov vs. Corr. 1st Comp: Nearly Flat. 2nd Comp: Contrast. 3rd & 4th: Very Small

Correlation PCA Toy Example Contrasting Cov vs. Corr. Use Rescaled Axes (Typical Default) All Comp’s Visible Good or Bad ???

Correlation PCA Toy Example Contrasting Cov vs. Corr. Correlation “Whitened” Version All Much Different Which Is Right ???
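A small sketch contrasting the two analyses (the toy data are mine): with wildly different variable scales, covariance PCA is dominated by the largest-scale variable, while correlation PCA puts all variables on equal footing:

```python
import numpy as np

rng = np.random.default_rng(1)
scales = np.array([100.0, 10.0, 1.0, 0.1])[:, None]       # wildly different measurement units
X = rng.normal(size=(4, 200)) * scales                    # d=4, n=200; columns as data objects
Xc = X - X.mean(axis=1, keepdims=True)

cov = Xc @ Xc.T / X.shape[1]                              # covariance matrix
d_inv = np.diag(1.0 / np.sqrt(np.diag(cov)))
corr = d_inv @ cov @ d_inv                                # correlation matrix D^{-1/2} cov D^{-1/2}

print("cov  PCA eigenvalues:", np.round(np.linalg.eigvalsh(cov)[::-1], 2))   # dominated by one variable
print("corr PCA eigenvalues:", np.round(np.linalg.eigvalsh(corr)[::-1], 2))  # all variables comparable
```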

PCA vs. SVD 2-d Toy Example Direction of “Maximal Variation”??? PC1 Solution (Mean Centered) Very Good!

PCA vs. SVD 2-d Toy Example Direction of “Maximal Variation”??? SV1 Solution (Origin Centered) Poor Rep’n

PCA vs. SVD Sometimes “SVD Analysis of Data” = Uncentered PCA Investigate with Similar Toy Example: Conclusions: PCA Generally Better Unless “Origin Is Important” Deeper Look: Zhang et al (2007)
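A hedged numerical illustration of this comparison (the toy cloud is mine): the only difference between the two analyses is whether the data matrix is mean centered before the SVD:

```python
import numpy as np

rng = np.random.default_rng(2)
# 2-d toy cloud centered far from the origin, elongated along the x-axis
X = np.array([[3.0], [4.0]]) + rng.normal(size=(2, 100)) * np.array([[1.0], [0.2]])

# PC1: SVD of MEAN-CENTERED data = direction of maximal variation about the mean
U_pc, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
print("PC1:", np.round(U_pc[:, 0], 2))   # ~ (+-1, 0): the within-cloud spread

# SV1: SVD of RAW data = direction of maximal variation about the ORIGIN
U_sv, _, _ = np.linalg.svd(X, full_matrices=False)
print("SV1:", np.round(U_sv[:, 0], 2))   # ~ (+-0.6, +-0.8): mostly points at the mean
```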

Different Views of PCA Toy Example Comparison of Fit Lines: PC1, Regression of Y on X, Regression of X on Y

Different Views of PCA Projected Residuals

Different Views of PCA Vertical Residuals (X predicts Y)

Different Views of PCA Solves several optimization problems:
1. Direction to maximize SS of 1-d proj'd data
2. Direction to minimize SS of residuals (same, by Pythagorean Theorem)
3. “Best fit line” to data in “orthogonal sense” (vs. regression of Y on X = vertical sense & regression of X on Y = horizontal sense)
Use one that makes sense…
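A minimal sketch of the three fit lines on simulated data (names and data are mine): for positively correlated data the PC1 (orthogonal-fit) slope lands between the two regression slopes:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.5, size=200)
xc, yc = x - x.mean(), y - y.mean()

# Regression of Y on X (minimizes vertical residuals)
b_yx = (xc @ yc) / (xc @ xc)

# Regression of X on Y (minimizes horizontal residuals), expressed as a slope in (x, y)
b_xy = (yc @ yc) / (xc @ yc)

# PC1 (minimizes orthogonal residuals): leading eigenvector of the 2x2 covariance
evals, evecs = np.linalg.eigh(np.cov(x, y))
v = evecs[:, -1]                       # eigenvector with the largest eigenvalue
b_pc1 = v[1] / v[0]

print(f"Y-on-X slope {b_yx:.3f} <= PC1 slope {b_pc1:.3f} <= X-on-Y slope {b_xy:.3f}")
```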

PCA Data Represent'n (Cont.)

PCA Data Represent'n (Cont.) Choice of rank in Reduced Rank Represent'n: Generally Very Slippery Problem. Not Recommended: o Arbitrary Choice o E.g. % Variation Explained: 90%? 95%?

PCA Data Represent'n (Cont.) Choice of rank in Reduced Rank Represent'n: Generally Very Slippery Problem. SCREE Plot (Kruskal 1964): Find Knee in Power Spectrum

PCA Data Represent'n (Cont.) SCREE Plot Drawbacks: What is a Knee? What if There are Several? Knees Depend on Scaling (Power? log?) Personal Suggestions: Find Auxiliary Cutoffs (Inter-Rater Variation), Use the Full Range (a la Scale Space)

PCA Simulation Idea: given Mean Vector $\bar{X}$, Eigenvectors $u_1, \ldots, u_k$, Eigenvalues $\lambda_1, \ldots, \lambda_k$, Simulate data from the Corresponding Normal Distribution. Approach: Invert PCA Data Represent'n: $X = \bar{X} + \sum_{j=1}^{k} \sqrt{\lambda_j}\, Z_j u_j$, where $Z_1, \ldots, Z_k$ are i.i.d. $N(0, 1)$
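A sketch of this simulation in numpy (the mean, eigenvectors, and eigenvalues are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
mean = np.array([1.0, 2.0])                         # given mean vector
theta = np.pi / 6
B = np.array([[np.cos(theta), -np.sin(theta)],      # given eigenvectors (orthonormal columns)
              [np.sin(theta),  np.cos(theta)]])
lam = np.array([4.0, 0.25])                         # given eigenvalues

# Invert the PCA representation: X = mean + B diag(sqrt(lambda)) Z, with Z ~ N(0, I)
n = 2000
Z = rng.normal(size=(2, n))
X = mean[:, None] + B @ (np.sqrt(lam)[:, None] * Z)  # columns are simulated data objects

# Sanity check: sample covariance should be close to B diag(lambda) B^T
print(np.round(np.cov(X), 2))
print(np.round(B @ np.diag(lam) @ B.T, 2))
```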

PCA & Graphical Displays Small caution on PC directions & plotting: PCA directions (may) have a sign flip. Mathematically, no difference. Numerically, an artifact of round-off. Can have large graphical impact

PCA & Graphical Displays Toy Example (2 colored “clusters” in data)

PCA & Graphical Displays Toy Example (1 point moved)

PCA & Graphical Displays Toy Example (1 point moved) Important Point: Constant Axes

PCA & Graphical Displays Original Data (arbitrary PC flip)

PCA & Graphical Displays Point Moved Data (arbitrary PC flip) Much Harder To See Moving Point
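One way to tame arbitrary sign flips is a deterministic sign convention. This sketch (my suggestion, not from the slides) flips each direction so its largest-magnitude loading is positive, so that reruns and slightly perturbed data plot consistently:

```python
import numpy as np

def fix_signs(U):
    """Deterministic sign convention for eigenvector columns:
    flip each column so its largest-magnitude entry is positive."""
    idx = np.argmax(np.abs(U), axis=0)             # row of the dominant loading, per column
    signs = np.sign(U[idx, np.arange(U.shape[1])])
    return U * signs

# Same subspace, opposite arbitrary signs: identical after the convention
U1 = np.array([[0.8, -0.6], [0.6, 0.8]])
print(np.allclose(fix_signs(U1), fix_signs(-U1)))  # True
```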

PCA & Graphical Displays

Alternate PCA Computation

Review of Linear Algebra (Cont.) Recall SVD Full Representation: $X = U S V^T$, with $U$ ($d \times d$), $S$ ($d \times n$), $V$ ($n \times n$) (graphics display assumes $d > n$)

Review of Linear Algebra (Cont.) Recall SVD Reduced Representation: $X = U S V^T$, keeping only the first $n$ columns of $U$ and rows of $S$, so $U$ is $d \times n$, $S$ is $n \times n$ diagonal, $V$ is $n \times n$

Review of Linear Algebra (Cont.)

Alternate PCA Computation Singular Value Decomposition, Computational Advantage (for Rank $r$): Use Compact Form, only need to find: $r$ e-vec's, $r$ s-val's, $r$ sets of scores. Other Components not Useful. So can be much faster for $r \ll d$
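A sketch of the compact-form computation in the HDLSS setting $d \gg n$ (the sizes are illustrative): numpy's economy SVD returns only the $n$ useful components, avoiding any $d \times d$ eigen-decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 10_000, 50                                    # HDLSS: d >> n
X = rng.normal(size=(d, n))                          # columns as data objects
Xc = X - X.mean(axis=1, keepdims=True)

# Economy ("compact") SVD: only n singular triples, no d x d eigen-decomposition
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)    # U: d x n, s: (n,), Vt: n x n

eigenvalues = s**2 / n        # PCA eigenvalues ("MLE" factor of n)
directions = U                # PC direction vectors (one column per component)
scores = np.diag(s) @ Vt      # PC scores; column i holds the scores of object i
print(eigenvalues[:3])
```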

Alternate PCA Computation Another Variation: Dual PCA Idea: Useful to View Data Matrix as Both Col’ns as Data Objects & Rows as Data Objects

Alternate PCA Computation Aside on Row-Column Data Object Choice: Personal Choice (~ Matlab, Linear Algebra): Col'ns as Data Objects. Another Choice (~ SAS, R, Linear Models): Rows as Data Objects, Col'ns as Variables. Careful When Discussing!

Alternate PCA Computation Dual PCA Computation: Same as above, but replace the (primal) matrix $\tilde{X} \tilde{X}^T$ with the (dual) matrix $\tilde{X}^T \tilde{X}$. So can almost replace the eigen-decomposition of $\tilde{X} \tilde{X}^T$ with that of $\tilde{X}^T \tilde{X}$. Then use SVD, $\tilde{X} = U S V^T$, to get: $\tilde{X} \tilde{X}^T = U S^2 U^T$ and $\tilde{X}^T \tilde{X} = V S^2 V^T$. Note: Same (nonzero) Eigenvalues
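A quick numerical check of the "same eigenvalues" fact (the toy matrix is mine): the nonzero eigenvalues of $\tilde{X} \tilde{X}^T$ and $\tilde{X}^T \tilde{X}$ agree, since both equal squared singular values:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(8, 5))                      # d=8, n=5; columns as data objects
Xc = X - X.mean(axis=1, keepdims=True)

ev_primal = np.linalg.eigvalsh(Xc @ Xc.T)[::-1]  # d x d (primal) eigenvalues
ev_dual = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]    # n x n (dual) eigenvalues

# Nonzero eigenvalues agree: both are the squared singular values of Xc
print(np.round(ev_primal[:5], 4))
print(np.round(ev_dual, 4))
```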

Alternate PCA Computation

Functional Data Analysis Recall from 1st Class Meeting: Spanish Mortality Data

Functional Data Analysis Interesting Data Set: Mortality Data For Spanish Males (thus can relate to history) Each curve is a single year x coordinate is age Note: Choice made of Data Object (could also study age as curves, x coordinate = time)

Important Issue: What are the Data Objects? Curves (years) : Mortality vs. Age Curves (Ages) : Mortality vs. Year Note: Rows vs. Columns of Data Matrix Functional Data Analysis

Mortality Time Series Recall Improved Coloring: Rainbow Representing Year: Magenta = 1908 Red = 2002

Object Space View of Projections Onto PC1 Direction Main Mode Of Variation: Constant Across Ages Mortality Time Series

Shows Major Improvement Over Time (medical technology, etc.) And Change In Age Rounding Blips Mortality Time Series

Object Space View of Projections Onto PC2 Direction 2nd Mode Of Variation: Difference Between & Rest Mortality Time Series

Scores Plot Feature (Point Cloud) Space View Connecting Lines Highlight Time Order Good View of Historical Effects Mortality Time Series

Demography Data Dual PCA Idea: Rows and Columns trade places. Terminology comes from optimization: Insights come from studying “primal” & “dual” problems

Primal / Dual PCA Consider “Data Matrix” $X = \big[ X_1, \ldots, X_n \big]$. Primal Analysis: Columns are data vectors. Dual Analysis: Rows are data vectors

Demography Data Recall Primal - Raw Data Rainbow Color Scheme Allowed Good Interpretation

Demography Data Dual PCA - Raw Data Hot Metal Color Scheme To Help Keep Primal & Dual Separate

Demography Data Color Code (Ages)

Demography Data Dual PCA - Raw Data Note: Flu Pandemic

Demography Data Dual PCA - Raw Data Note: Flu Pandemic & Spanish Civil War

Demography Data Dual PCA - Raw Data Curves Indexed By Ages 1-95

Demography Data Dual PCA - Raw Data 1 st Year of Life Is Dangerous

Demography Data Dual PCA - Raw Data 1 st Year of Life Is Dangerous Later Childhood Years Much Improved

Demography Data Dual PCA

Demography Data Dual PCA Years on Horizontal Axes

Demography Data Dual PCA Note: Hard To See / Interpret Smaller Effects (Lost in Scaling)

Demography Data Dual PCA Choose Axis Limits To Maximize Visible Variation

Demography Data Dual PCA Mean Shows Some History Flu Pandemic Civil War

Demography Data Dual PCA PC1 Shows Mortality Increases With Age

Demography Data Dual PCA PC2 Shows Improvements Strongest For Young

Demography Data Dual PCA PC3 Shows Automobile Effects

Demography Data Dual PCA Scores Linear Connections Highlight Age Ordering

Demography Data Dual PCA Scores Note PC2 & PC1 Together Show Mortality vs. Age

Demography Data Dual PCA Scores PC2 Captures “Age Rounding”

Demography Data

Primal / Dual PCA Which is “Better”? Same Info, Displayed Differently Here: Prefer Primal, As Indicated by Graphics Quality

Primal / Dual PCA Which is “Better”? In General: Either Can Be Best Try Both and Choose Or Use “Natural Choice” of Data Object

Primal / Dual PCA Important Early Version: BiPlot Display Overlay Primal & Dual PCAs Not Easy to Interpret Gabriel, K. R. (1971)

Return to Big Picture Main statistical goals of OODA: Understanding population structure – Low dim'l Projections, PCA … Classification (i.e. Discrimination) – Understanding 2+ populations Time Series of Data Objects – Chemical Spectra, Mortality Data “Vertical Integration” of Data Types

Classification - Discrimination Background: Two Class (Binary) version: Using “training data” from Class +1 and Class -1, Develop a “rule” for assigning new data to a Class. Canonical Example: Disease Diagnosis. New Patients are “Healthy” or “Ill”; Determine based on measurements

Classification - Discrimination Important Distinction: Classification vs. Clustering. Classification: Class labels are known; Goal: understand differences. Clustering: Class labels unknown; Goal: Find them (grouping similar data). Both are about clumps of similar data, but with much different goals

Classification - Discrimination Important Distinction: Classification vs. Clustering Useful terminology: Classification: supervised learning Clustering: unsupervised learning

Classification - Discrimination Terminology: For statisticians, these are synonyms For biologists, classification means: Constructing taxonomies And sorting organisms into them (maybe this is why discrimination was used, until politically incorrect … )

Classification (i.e. discrimination) There are a number of: Approaches Philosophies Schools of Thought Too often cast as: Statistics vs. EE - CS

Classification (i.e. discrimination) EE – CS variations: Pattern Recognition Artificial Intelligence Neural Networks Data Mining Machine Learning

Classification (i.e. discrimination) Differing Viewpoints: Statistics: Model Classes with Probability Distribut'ns, Use to study class diff's & find rules. EE – CS: Data are just Sets of Numbers, Rules distinguish between these. Current thought: combine these

Classification (i.e. discrimination) Important Overview Reference: Duda, Hart and Stork (2001) Too much about neural nets??? Pizer disagrees … Update of Duda & Hart (1973)

Classification (i.e. discrimination) For a more classical statistical view: McLachlan (2004). Gaussian Likelihood theory, etc. Not well tuned to HDLSS data

Classification Basics Personal Viewpoint: Point Clouds

Classification Basics Simple and Natural Approach: Mean Difference, a.k.a. Centroid Method. Find “skewer through two meatballs”

Classification Basics For Simple Toy Example: Project On MD & split at center

Classification Basics Why not use PCA? Reasonable Result? Doesn't use class labels… Good? Bad?

Classification Basics Harder Example (slanted clouds):

Classification Basics PCA for slanted clouds: PC1 terrible, PC2 better? Still misses right dir'n. Doesn't use Class Labels

Classification Basics Mean Difference for slanted clouds: A little better? Still misses right dir'n. Want to account for covariance

Classification Basics Mean Difference & Covariance, Simplest Approach: Rescale (standardize) coordinate axes, i.e. replace (full) data matrix $X$ by $D^{-1/2} X$, where $D$ is the diagonal matrix of sample variances. Then do Mean Difference. Called “Naïve Bayes Approach”
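A minimal sketch of this rescale-then-mean-difference rule (function and data names are mine, assuming pooled per-coordinate standard deviations):

```python
import numpy as np

def naive_bayes_rule(X_plus, X_minus):
    """Rescale axes by pooled per-coordinate standard deviations, then use the
    mean (centroid) difference, split at the midpoint. Inputs are d x n_j."""
    sd = np.hstack([X_plus, X_minus]).std(axis=1, keepdims=True)
    Yp, Ym = X_plus / sd, X_minus / sd                # standardized coordinates
    w = Yp.mean(axis=1) - Ym.mean(axis=1)             # mean-difference direction
    mid = (Yp.mean(axis=1) + Ym.mean(axis=1)) / 2     # split at the midpoint
    return lambda x: np.sign(w @ (x.ravel() / sd.ravel() - mid))  # +1 or -1

# Toy usage
rng = np.random.default_rng(7)
Xp = rng.normal(loc=[[2.0], [0.0]], size=(2, 100))
Xm = rng.normal(loc=[[-2.0], [0.0]], size=(2, 100))
rule = naive_bayes_rule(Xp, Xm)
print(rule(np.array([1.5, 0.3])))                     # expect +1.0
```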

Classification Basics Naïve Bayes Reference: Domingos & Pazzani (1997) Most sensible contexts: Non-comparable data, E.g. different units

Classification Basics Problem with Naïve Bayes: Only adjusts Variances, Not Covariances. Doesn't solve this problem

Classification Basics Better Solution: Fisher Linear Discrimination. Gets the right dir'n. How does it work?

Fisher Linear Discrimination Other common terminology (for FLD): Linear Discriminant Analysis (LDA) Original Paper: Fisher (1936)

Fisher Linear Discrimination Careful development: Useful notation (data vectors of length $d$): Class +1: $X_1^{(+1)}, \ldots, X_{n_{+1}}^{(+1)}$; Class -1: $X_1^{(-1)}, \ldots, X_{n_{-1}}^{(-1)}$. Centerpoints: $\bar{X}^{(+1)} = \frac{1}{n_{+1}} \sum_{i=1}^{n_{+1}} X_i^{(+1)}$ and $\bar{X}^{(-1)} = \frac{1}{n_{-1}} \sum_{i=1}^{n_{-1}} X_i^{(-1)}$

Fisher Linear Discrimination Covariances, for $j = +1, -1$ (outer products): $\hat{\Sigma}^{(j)} = \tilde{X}^{(j)} \big(\tilde{X}^{(j)}\big)^T$, Based on centered, normalized data matrices: $\tilde{X}^{(j)} = \frac{1}{\sqrt{n_j}} \big[ X_1^{(j)} - \bar{X}^{(j)}, \ldots, X_{n_j}^{(j)} - \bar{X}^{(j)} \big]$. Note: use “MLE” version of estimated covariance matrices, for simpler notation

Fisher Linear Discrimination Major Assumption: Class covariances are the same (or “similar”). Like this: Not this:

Fisher Linear Discrimination Good estimate of (common) within class cov? Pooled (weighted average) within class cov: $\hat{\Sigma}_w = \frac{n_{+1} \hat{\Sigma}^{(+1)} + n_{-1} \hat{\Sigma}^{(-1)}}{n_{+1} + n_{-1}}$, based on the combined, class-by-class centered data matrix $\tilde{X} = \frac{1}{\sqrt{n}} \big[ X_1^{(+1)} - \bar{X}^{(+1)}, \ldots, X_{n_{-1}}^{(-1)} - \bar{X}^{(-1)} \big]$ (so $\hat{\Sigma}_w = \tilde{X} \tilde{X}^T$). Note: Different Means

Fisher Linear Discrimination Note: $\hat{\Sigma}_w$ is similar to $\hat{\Sigma}$ from before, I.e. the covariance matrix ignoring class labels. Important Difference: Class by Class Centering. Will be important later

Fisher Linear Discrimination Simple way to find “correct cov. adjustment”: Individually transform subpopulations so “spherical” about their means. For $j = +1, -1$ define $Y_i^{(j)} = \hat{\Sigma}_w^{-1/2} X_i^{(j)}$. Note: This spheres the data, in the sense that the transformed within-class covariance is $\hat{\Sigma}_w^{-1/2} \hat{\Sigma}_w \hat{\Sigma}_w^{-1/2} = I$

Fisher Linear Discrimination Then: In Transformed Space, Best separating hyperplane is Perpendicular bisector of line between means

Fisher Linear Discrimination In Transformed Space, Separating Hyperplane has: Transformed Normal Vector: $n_{TR} = \hat{\Sigma}_w^{-1/2} \big( \bar{X}^{(+1)} - \bar{X}^{(-1)} \big)$. Transformed Intercept: $x_{TR,0} = \hat{\Sigma}_w^{-1/2}\, \frac{\bar{X}^{(+1)} + \bar{X}^{(-1)}}{2}$. Sep. Hyperp. has Equation: $\big\{ x : \langle x - x_{TR,0},\, n_{TR} \rangle = 0 \big\}$

Fisher Linear Discrimination Thus discrimination rule is: Given a new data vector $X_0$, Choose Class +1 when: $\langle \hat{\Sigma}_w^{-1/2} X_0 - x_{TR,0},\, n_{TR} \rangle > 0$, i.e. (transforming back to original space, using $\langle A x, y \rangle = \langle x, A y \rangle$ for symmetric and invertible $A$): Choose Class +1 when $\Big\langle X_0 - \frac{\bar{X}^{(+1)} + \bar{X}^{(-1)}}{2},\ \hat{\Sigma}_w^{-1} \big( \bar{X}^{(+1)} - \bar{X}^{(-1)} \big) \Big\rangle > 0$

Fisher Linear Discrimination So (in orig'l space) have separ'ting hyperplane with: Normal vector: $n = \hat{\Sigma}_w^{-1} \big( \bar{X}^{(+1)} - \bar{X}^{(-1)} \big)$. Intercept: $x_0 = \frac{\bar{X}^{(+1)} + \bar{X}^{(-1)}}{2}$
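Putting the pieces together, a sketch of the resulting FLD rule (helper names are mine), using the normal vector and intercept just derived; on the slanted-cloud toy setting it accounts for the covariance that Mean Difference misses:

```python
import numpy as np

def fld_rule(X_plus, X_minus):
    """Fisher Linear Discrimination per the slides' formulas: normal vector
    Sigma_w^{-1} (mean_+ - mean_-), intercept at the midpoint of the means.
    Inputs are d x n_j data matrices, columns as data objects."""
    mp, mm = X_plus.mean(axis=1), X_minus.mean(axis=1)
    # Pooled within-class ("MLE") covariance: class-by-class centering
    Cp = (X_plus - mp[:, None]) @ (X_plus - mp[:, None]).T
    Cm = (X_minus - mm[:, None]) @ (X_minus - mm[:, None]).T
    Sw = (Cp + Cm) / (X_plus.shape[1] + X_minus.shape[1])
    w = np.linalg.solve(Sw, mp - mm)          # normal vector in original space
    x0 = (mp + mm) / 2                        # intercept (midpoint of the means)
    return lambda x: np.sign(w @ (x - x0))    # +1 => choose Class +1

# Slanted-cloud toy example, where the plain Mean Difference misses the direction
rng = np.random.default_rng(8)
A = np.array([[1.0, 0.9], [0.0, 0.5]])        # induces strong within-class covariance
Xp = A @ rng.normal(size=(2, 200)) + np.array([[1.0], [1.0]])
Xm = A @ rng.normal(size=(2, 200)) - np.array([[1.0], [1.0]])
rule = fld_rule(Xp, Xm)
acc = np.mean([rule(x) == 1 for x in Xp.T] + [rule(x) == -1 for x in Xm.T])
print(f"training accuracy: {acc:.2f}")
```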

Fisher Linear Discrimination Relationship to Mahalanobis distance. Idea: For $x_1, x_2 \sim N(\mu, \Sigma)$, a natural distance measure is: $d_M(x_1, x_2) = \big( (x_1 - x_2)^T \Sigma^{-1} (x_1 - x_2) \big)^{1/2}$. “Unit free”, i.e. “standardized”; essentially mods out covariance structure. Same as Euclidean distance applied to $\Sigma^{-1/2} x_1$ & $\Sigma^{-1/2} x_2$. Same as key transformation for FLD. I.e. FLD is mean difference in Mahalanobis space
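A one-screen check of this equivalence (the toy covariance is mine): the Mahalanobis distance matches the Euclidean distance after the $\Sigma^{-1/2}$ transformation used for FLD:

```python
import numpy as np

rng = np.random.default_rng(9)
Sigma = np.array([[2.0, 1.2], [1.2, 1.0]])          # a positive definite covariance
x1, x2 = rng.normal(size=2), rng.normal(size=2)

# Mahalanobis distance: ((x1 - x2)^T Sigma^{-1} (x1 - x2))^{1/2}
diff = x1 - x2
d_mahal = np.sqrt(diff @ np.linalg.solve(Sigma, diff))

# Same as Euclidean distance after the whitening transform Sigma^{-1/2} x
evals, evecs = np.linalg.eigh(Sigma)
Sigma_inv_half = evecs @ np.diag(evals**-0.5) @ evecs.T
d_eucl = np.linalg.norm(Sigma_inv_half @ x1 - Sigma_inv_half @ x2)

print(np.isclose(d_mahal, d_eucl))   # True
```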

Participant Presentation Sherif Faraq Rational Design of ED Free Chemicals