Stats 346.3 / Stats 848.3: Multivariate Data Analysis

Instructor: W.H. Laverty
Office: 235 McLean Hall
Phone:
Lectures: M W F 12:30 pm - 1:20 pm, Biol 123
Evaluation: Assignments and term tests - 40%; Final Examination - 60%

Dates for midterm tests:
1. Friday, February 06
2. Friday, March 20
Each test and the Final Exam are open book: students are allowed to bring in notes, texts, formula sheets, and calculators (laptop computers).

Text: Stat 346 - Multivariate Statistical Methods, Donald Morrison. Not required - I will give a list of other useful texts that will be in the library.

Bibliography
1. Cooley, W.W., and Lohnes, P.R. (1962). Multivariate Procedures for the Behavioural Sciences, Wiley, New York.
2. Fienberg, S. (1980). Analysis of Cross-Classified Data, MIT Press, Cambridge, Mass.
3. Fingleton, B. (1984). Models for Category Counts, Cambridge University Press.
4. Johnson, R.A., and Wichern, D.W. Applied Multivariate Statistical Analysis, Prentice Hall.
5. Morrison, D.F. (1976). Multivariate Statistical Methods, McGraw-Hill, New York.
6. Seal, H. (1968). Multivariate Statistical Analysis for Biologists, Methuen, London.
7. Agresti, A. (1990). Categorical Data Analysis, Wiley, New York.

The lectures will be given in PowerPoint. They are now posted on the Stats 346 web page.

Course Outline

Introduction

Review of Linear Algebra and Matrix Analysis (Chapter 2)
Review of Linear Statistical Theory (Chapter 1)

Multivariate Normal distribution (Chapter 3)
- Multivariate data plots
- Correlation - sample estimates and tests
- Canonical correlation

Mean Vectors and Covariance matrices (Chapter 4)
- Single sample procedures
- Two sample procedures
- Profile analysis

Multivariate Analysis of Variance (MANOVA) (Chapter 5)

Classification and Discrimination (Chapter 6)
- Discriminant Analysis
- Logistic Regression (if time permits)
- Cluster Analysis

The structure of correlation (Chapter 9)
- Principal Components Analysis (PCA)
- Factor Analysis

Multivariate Multiple Regression (if time permits) - References: TBA

Discrete Multivariate Analysis (if time permits) - References: TBA

Introduction

Multivariate Data
We have collected data for each case in the sample or population on not just one variable but on several variables X₁, X₂, …, Xₚ. This is the usual situation - very rarely do you collect data on a single variable.
The variables may be:
1. Discrete (categorical)
2. Continuous (numerical)
The variables may also be:
1. Dependent (response variables)
2. Independent (predictor variables)

A chart illustrating Statistical Procedures, organized by the type of dependent and independent variables:

Dependent variables categorical:
- Independent categorical: multiway frequency analysis (log-linear model)
- Independent continuous: discriminant analysis

Dependent variables continuous:
- Independent categorical: ANOVA (single dep. var.); MANOVA (multiple dep. vars.)
- Independent continuous: multiple regression (single dep. var.); multivariate multiple regression (multiple dep. vars.)
- Independent continuous & categorical: ANACOVA (single dep. var.); MANACOVA (multiple dep. vars.)

Dependent variables continuous & categorical:
- ??

Multivariate Techniques
Multivariate techniques can be classified as follows:
1. Techniques that are direct analogues of univariate procedures - univariate techniques that are generalized to the multivariate situation, e.g. the two independent sample t test, generalized to Hotelling's T² test; ANOVA (Analysis of Variance), generalized to MANOVA (Multivariate Analysis of Variance).

2. Techniques that are purely multivariate procedures: correlation, partial correlation, multiple correlation, canonical correlation; Principal Components Analysis, Factor Analysis. These are techniques for studying complicated correlation structure amongst a collection of variables.

3. Techniques for which a univariate procedure could exist, but which become much more interesting in the multivariate setting. Cluster analysis and classification - here we try to identify subpopulations from the data. Discriminant analysis - here we attempt to use a collection of variables to identify the unknown population of which a case is a member.

An Example: A survey was given to 132 students (male = 35, female = 97). They rated, on a Likert scale of 1 to 5, their agreement with each of 40 statements. All statements are related to the Meaning of Life.

Questions and Statements

Statements - continued

Cluster Analysis of n = 132 university students using responses from Meaning of Life questionnaire (40 questions)

Discriminant Analysis of n = 132 university students into the three identified populations

A Review of Linear Algebra With some Additions

Matrix Algebra
Definition: An n × m matrix, A, is a rectangular array of elements with n rows and m columns:
A = (a_ij), i = 1, …, n; j = 1, …, m; dimensions = n × m.

Definition: A vector, v, of dimension n is an n × 1 matrix - a rectangular array of elements in a single column. Vectors will be column vectors (they may also be row vectors).

A vector, v, of dimension n can be thought of as a point in n-dimensional space.

[Figure: a vector plotted as a point in three-dimensional space, with coordinate axes v₁, v₂, v₃]

Matrix Operations
Addition: Let A = (a_ij) and B = (b_ij) denote two n × m matrices. Then the sum is the matrix
A + B = (a_ij + b_ij).
The dimensions of A and B are required to be both n × m.

Scalar Multiplication: Let A = (a_ij) denote an n × m matrix and let c be any scalar. Then cA is the matrix
cA = (c a_ij).

Addition for vectors [Figure: geometric illustration of vector addition in three-dimensional space]

Scalar multiplication for vectors [Figure: geometric illustration of scalar multiplication in three-dimensional space]

Matrix multiplication: Let A = (a_ij) denote an n × m matrix and B = (b_jl) denote an m × k matrix. Then the n × k matrix C = (c_il), where
c_il = Σ_{j=1..m} a_ij b_jl,
is called the product of A and B and is denoted by A·B.

In the case that A = (a_ij) is an n × m matrix and B = v = (v_j) is an m × 1 vector, then w = A·v = (w_i), where
w_i = Σ_{j=1..m} a_ij v_j,
is an n × 1 vector. [Figure: v and w = A·v plotted in three-dimensional space]
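
As a quick numerical check of the dimension rules above, here is a minimal NumPy sketch (the matrix entries are arbitrary illustrative values):

```python
import numpy as np

# A is 2 x 3 (n = 2, m = 3) and B is 3 x 2 (m = 3, k = 2),
# so the product C = A.B is 2 x 2 (n x k).
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
C = A @ B
print(C)          # [[ 4.  5.]
                  #  [10. 11.]]

# Matrix-vector product: v is m x 1, w = A.v is n x 1,
# with w_i = sum_j a_ij v_j.
v = np.array([1., 1., 1.])
print(A @ v)      # [ 6. 15.]
```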

Definition: An n × n identity matrix, I, is the square matrix with ones on the diagonal and zeros elsewhere:
I = diag(1, 1, …, 1).
Note: 1. AI = A  2. IA = A.

Definition (the inverse of an n × n matrix): Let A denote an n × n matrix and let B denote an n × n matrix such that
AB = BA = I.
If the matrix B exists then A is called invertible; B is called the inverse of A and is denoted by A⁻¹.

The Woodbury Theorem
(A + BCD)⁻¹ = A⁻¹ - A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹,
where the inverses A⁻¹, C⁻¹ and (C⁻¹ + DA⁻¹B)⁻¹ exist.

Proof: Let H = A⁻¹ - A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹. Then all we need to show is that
H(A + BCD) = (A + BCD)H = I.
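
The identity is also easy to verify numerically. A minimal NumPy sketch, with dimensions and entries chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# A is n x n, B is n x k, C is k x k, D is k x n, so A + BCD is defined.
n, k = 5, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible (well-conditioned)
B = rng.standard_normal((n, k))
C = np.diag([2.0, 0.5])
D = rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
Cinv = np.linalg.inv(C)

# Woodbury: (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
woodbury = Ainv - Ainv @ B @ np.linalg.inv(Cinv + D @ Ainv @ B) @ D @ Ainv
print(np.allclose(woodbury, np.linalg.inv(A + B @ C @ D)))   # True
```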

The Woodbury theorem can be used to find the inverse of some patterned matrices. Example: for a suitable n × n patterned matrix, write it in the form A + BCD with A easily invertible, apply the Woodbury theorem to obtain the inverse in closed form, and then verify the result directly - e.g. by checking the n = 2 case, where multiplying the matrix by the calculated inverse gives I.
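
As one concrete illustration of the technique (an assumed example, not necessarily the matrix worked out on the slides), consider the n × n compound-symmetry matrix with diagonal entries a and off-diagonal entries b. It can be written as (a - b)I + b·1·1′, i.e. A + BCD with A = (a - b)I, B = 1 (n × 1), C = b (1 × 1) and D = 1′, so the Woodbury theorem gives a closed-form inverse. A minimal NumPy sketch with a = 3, b = 1, n = 4 chosen arbitrarily:

```python
import numpy as np

# Compound-symmetry matrix: diagonal a, off-diagonal b.
# Written as (a - b) I + b * ones @ ones.T, the Woodbury theorem gives
# M^{-1} = 1/(a - b) * [ I - b/(a + (n-1)b) * ones @ ones.T ].
a, b, n = 3.0, 1.0, 4
ones = np.ones((n, 1))
M = (a - b) * np.eye(n) + b * (ones @ ones.T)

Minv = (np.eye(n) - (b / (a + (n - 1) * b)) * (ones @ ones.T)) / (a - b)

print(np.allclose(Minv, np.linalg.inv(M)))   # True
print(np.allclose(M @ Minv, np.eye(n)))      # True
```

For n = 2 the formula reduces to the familiar 2 × 2 inverse, which is one way to verify the calculation by hand.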

Block Matrices
Let the n × m matrix A be partitioned into sub-matrices A₁₁, A₁₂, A₂₁, A₂₂:
A = [A₁₁ A₁₂; A₂₁ A₂₂].
Similarly partition the m × k matrix B:
B = [B₁₁ B₁₂; B₂₁ B₂₂].

Product of Blocked Matrices
Then, provided the partitions are conformable,
A·B = [A₁₁B₁₁ + A₁₂B₂₁  A₁₁B₁₂ + A₁₂B₂₂; A₂₁B₁₁ + A₂₂B₂₁  A₂₁B₁₂ + A₂₂B₂₂].

The Inverse of Blocked Matrices
Let the n × n matrix A be partitioned into sub-matrices A₁₁, A₁₂, A₂₁, A₂₂, and similarly partition the n × n matrix B. Suppose that B = A⁻¹.

Forming the product of the blocked matrices, A·B = I gives:
(1) A₁₁B₁₁ + A₁₂B₂₁ = I
(2) A₁₁B₁₂ + A₁₂B₂₂ = 0
(3) A₂₁B₁₁ + A₂₂B₂₁ = 0
(4) A₂₁B₁₂ + A₂₂B₂₂ = I

Hence, from (3), B₂₁ = -A₂₂⁻¹A₂₁B₁₁. Substituting into (1),
(A₁₁ - A₁₂A₂₂⁻¹A₂₁)B₁₁ = I, so B₁₁ = (A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹.

Hence, using the Woodbury Theorem,
B₁₁ = A₁₁⁻¹ + A₁₁⁻¹A₁₂(A₂₂ - A₂₁A₁₁⁻¹A₁₂)⁻¹A₂₁A₁₁⁻¹
or B₁₁ = (A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹. Similarly,
B₂₂ = (A₂₂ - A₂₁A₁₁⁻¹A₁₂)⁻¹.

From (2), B₁₂ = -A₁₁⁻¹A₁₂B₂₂ = -A₁₁⁻¹A₁₂(A₂₂ - A₂₁A₁₁⁻¹A₁₂)⁻¹, and similarly
B₂₁ = -A₂₂⁻¹A₂₁B₁₁ = -A₂₂⁻¹A₂₁(A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹.

Summarizing: let A = [A₁₁ A₁₂; A₂₁ A₂₂] and suppose that A⁻¹ = B = [B₁₁ B₁₂; B₂₁ B₂₂]. Then
B₁₁ = (A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹
B₁₂ = -(A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹A₁₂A₂₂⁻¹
B₂₁ = -A₂₂⁻¹A₂₁(A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹
B₂₂ = A₂₂⁻¹ + A₂₂⁻¹A₂₁(A₁₁ - A₁₂A₂₂⁻¹A₂₁)⁻¹A₁₂A₂₂⁻¹ = (A₂₂ - A₂₁A₁₁⁻¹A₁₂)⁻¹
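
These formulas can be verified numerically. A minimal NumPy sketch on a random invertible matrix, with an arbitrary 2 + 3 partition:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 5, 2
A = np.eye(n) + 0.2 * rng.standard_normal((n, n))   # invertible (well-conditioned)
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]

A22inv = np.linalg.inv(A22)
B11 = np.linalg.inv(A11 - A12 @ A22inv @ A21)   # Schur complement inverse
B12 = -B11 @ A12 @ A22inv
B21 = -A22inv @ A21 @ B11
B22 = A22inv + A22inv @ A21 @ B11 @ A12 @ A22inv

B = np.block([[B11, B12], [B21, B22]])
print(np.allclose(B, np.linalg.inv(A)))   # True
```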

Example: Let A be a given partitioned matrix; find A⁻¹ = B using the formulas above.

The transpose of a matrix
Consider the n × m matrix A = (a_ij). Then the m × n matrix A′ = (a_ji) (also denoted by Aᵀ) is called the transpose of A.

Symmetric Matrices
An n × n matrix, A, is said to be symmetric if A′ = A.
Note: (A′)′ = A, (A + B)′ = A′ + B′, (AB)′ = B′A′, (A⁻¹)′ = (A′)⁻¹.

The trace and the determinant of a square matrix
Let A denote the n × n matrix (a_ij). Then the trace of A is
tr(A) = Σ_{i=1..n} a_ii.

Also, the determinant of A is
|A| = Σ_{j=1..n} a_ij A_ij (expansion along row i),
where A_ij = (-1)^(i+j) M_ij is the cofactor of a_ij and M_ij is the minor (the determinant of the matrix obtained by deleting row i and column j of A).

Some properties:
tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA),
|AB| = |A||B|, |A′| = |A|, |A⁻¹| = 1/|A|.
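
A minimal NumPy check of these trace and determinant properties on random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))   # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))             # True
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))          # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))         # True
```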

Some additional Linear Algebra

Inner product of vectors
Let x and y denote two p × 1 vectors. Then the inner product of x and y is
x′y = x₁y₁ + x₂y₂ + … + xₚyₚ.

Note: Let x and y denote two p × 1 vectors. Then the length of x is |x| = √(x′x), and the cosine of the angle θ between x and y is cos θ = x′y / (|x||y|).

Note: Let x and y denote two p × 1 vectors. Then x and y are orthogonal (perpendicular) if the angle between them is 90°, i.e. if x′y = 0.

Special Types of Matrices
1. Orthogonal matrices
- A matrix P is orthogonal if P′P = PP′ = I.
- In this case P⁻¹ = P′.
- Also the rows (columns) of P have length 1 and are orthogonal to each other.

Suppose P is an orthogonal matrix, and let x and y denote p × 1 vectors. Then
(Px)′(Py) = x′P′Py = x′y,
so orthogonal transformations preserve lengths and angles - e.g. rotations about the origin and reflections.
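
A minimal NumPy sketch illustrating this with a 2 × 2 rotation matrix (an assumed example):

```python
import numpy as np

# Rotation of the plane by angle theta: an orthogonal matrix, P'P = I.
theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(P.T @ P, np.eye(2)))                        # True

x = np.array([3.0, 4.0])
y = np.array([1.0, -2.0])
# Lengths and inner products (hence angles) are preserved.
print(np.isclose(np.linalg.norm(P @ x), np.linalg.norm(x)))   # True
print(np.isclose((P @ x) @ (P @ y), x @ y))                   # True
```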

Example: The following matrix P is orthogonal:

Special Types of Matrices (continued)
2. Positive definite matrices
- A symmetric matrix, A, is called positive definite if x′Ax > 0 for all x ≠ 0.
- A symmetric matrix, A, is called positive semi-definite if x′Ax ≥ 0 for all x.

If the matrix A is positive definite then A is nonsingular, |A| > 0, and A⁻¹ is also positive definite.

Theorem: The symmetric matrix A is positive definite if all of its leading principal minors are positive:
|A₁| > 0, |A₂| > 0, …, |Aₙ| = |A| > 0,
where A_i is the upper-left i × i submatrix of A.

Special Types of Matrices (continued)
3. Idempotent matrices
- A symmetric matrix, E, is called idempotent if E² = EE = E.
- Idempotent matrices project vectors onto a linear subspace.
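
A familiar idempotent matrix is the projection ("hat") matrix E = X(X′X)⁻¹X′ from least squares. A minimal NumPy sketch (X is an arbitrary random design matrix, used here only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# E = X (X'X)^{-1} X' projects onto the column space of X.
X = rng.standard_normal((6, 2))
E = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(E @ E, E))             # idempotent: E^2 = E
print(np.allclose(E.T, E))               # symmetric

# Projecting a vector twice changes nothing after the first projection.
y = rng.standard_normal(6)
print(np.allclose(E @ (E @ y), E @ y))   # True
```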

Definition: Let A be an n × n matrix, and suppose that λ and x ≠ 0 satisfy
Ax = λx.
Then λ is called an eigenvalue of A, and x is called an eigenvector of A associated with λ.

Note: Ax = λx holds for some x ≠ 0 if and only if (A - λI)x = 0 has a non-zero solution, i.e. if and only if |A - λI| = 0.

|A - λI| = polynomial of degree n in λ. Hence there are n possible eigenvalues λ₁, …, λₙ.

Theorem: If the matrix A is symmetric then the eigenvalues of A, λ₁, …, λₙ, are real.
Theorem: If the matrix A is positive definite then the eigenvalues of A, λ₁, …, λₙ, are positive.
Proof: Let λ be an eigenvalue and x a corresponding eigenvector of A. A is positive definite, so x′Ax = λx′x > 0; since x′x > 0, it follows that λ > 0.

Theorem: If the matrix A is symmetric, with eigenvalues λ₁, …, λₙ and corresponding eigenvectors x₁, …, xₙ, and if λ_i ≠ λ_j, then x_i′x_j = 0.
Proof: Note that λ_i x_j′x_i = x_j′Ax_i = (Ax_j)′x_i = λ_j x_j′x_i, so (λ_i - λ_j)x_j′x_i = 0; since λ_i ≠ λ_j, we get x_i′x_j = 0.

Theorem: If the matrix A is symmetric, with distinct eigenvalues λ₁, …, λₙ and corresponding eigenvectors x₁, …, xₙ, assume the eigenvectors are scaled to unit length (x_i′x_i = 1). Then
A = λ₁x₁x₁′ + … + λₙxₙxₙ′ = PΛP′,
where P = [x₁, …, xₙ] and Λ = diag(λ₁, …, λₙ).

Proof: Note that P′P = I (the columns of P are orthonormal), so P is an orthogonal matrix.

Therefore AP = [Ax₁, …, Axₙ] = [λ₁x₁, …, λₙxₙ] = PΛ; thus A = APP′ = PΛP′.

Comment: The previous result is also true if the eigenvalues are not distinct. Namely, if the matrix A is symmetric with eigenvalues λ₁, …, λₙ and corresponding eigenvectors x₁, …, xₙ of unit length, then A = PΛP′ = λ₁x₁x₁′ + … + λₙxₙxₙ′.
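
A minimal NumPy check of the decomposition A = PΛP′ on a random symmetric matrix (np.linalg.eigh returns real eigenvalues and unit-length eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(4)

M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                 # a random symmetric matrix

lam, P = np.linalg.eigh(A)        # eigenvalues and orthonormal eigenvectors
print(np.allclose(P.T @ P, np.eye(4)))          # P is orthogonal
print(np.allclose(P @ np.diag(lam) @ P.T, A))   # A = P Lambda P'

# Equivalently, A = sum_i lambda_i x_i x_i'
A_sum = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(4))
print(np.allclose(A_sum, A))                    # True
```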

An algorithm for computing eigenvectors and eigenvalues of positive definite matrices
Generally, to compute the eigenvalues of a matrix we need to first solve the equation
|A - λI| = 0 (a polynomial of degree n in λ)
for all values of λ, and then solve the equation (A - λI)x = 0 for each eigenvector x.

Recall that if A is positive definite then A = PΛP′ = λ₁x₁x₁′ + … + λₙxₙxₙ′, with λ₁ > λ₂ > … > λₙ > 0 (assuming distinct eigenvalues). It can be shown that
A^m = PΛ^mP′ = λ₁^m x₁x₁′ + … + λₙ^m xₙxₙ′
and that
(1/λ₁^m) A^m → x₁x₁′ as m → ∞.

Thus for large values of m, A^m ≈ λ₁^m x₁x₁′.
The algorithm:
1. Compute powers of A: A², A⁴, A⁸, A¹⁶, …
2. Rescale (so that the largest element is 1, say).
3. Continue until there is no change; the resulting matrix will be proportional to x₁x₁′.
4. Find x₁ (e.g. normalize a column of the limit to unit length).
5. Find λ₁ (e.g. λ₁ = x₁′Ax₁).

To find the remaining eigenvalues and eigenvectors:
6. Repeat steps 1 to 5 with the matrix A - λ₁x₁x₁′ to find λ₂ and x₂.
7. Continue to find λ₃, x₃, …, λₙ, xₙ.
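
A minimal Python sketch of this repeated-squaring algorithm (the 3 × 3 test matrix and the fixed number of squarings are arbitrary choices):

```python
import numpy as np

def dominant_eigenpair(A, n_squarings=20):
    """Approximate the largest eigenvalue and its unit eigenvector of a
    positive definite matrix A by repeated squaring and rescaling,
    using A^m / lambda_1^m -> x1 x1' as m grows."""
    B = A.copy()
    for _ in range(n_squarings):
        B = B @ B                      # square: effective power doubles
        B = B / np.abs(B).max()        # rescale so the largest element is 1
    # B is now proportional to x1 x1'; any nonzero column is proportional to x1.
    j = np.argmax(np.abs(B).sum(axis=0))
    x1 = B[:, j] / np.linalg.norm(B[:, j])
    lam1 = x1 @ A @ x1                 # Rayleigh quotient recovers lambda_1
    return lam1, x1

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # a small positive definite matrix

lam1, x1 = dominant_eigenpair(A)
print(np.allclose(A @ x1, lam1 * x1))  # True: (lambda_1, x1) is an eigenpair

# Deflation (step 6): repeat on A - lambda_1 x1 x1' for the next eigenpair.
lam2, x2 = dominant_eigenpair(A - lam1 * np.outer(x1, x1))
print(np.allclose(A @ x2, lam2 * x2))  # True
```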

Example A =

Differentiation with respect to a vector, matrix

Differentiation with respect to a vector
Let x = (x₁, …, xₚ)′ denote a p × 1 vector and let f(x) denote a function of the components of x. Then the derivative of f with respect to x is the p × 1 vector
∂f/∂x = (∂f/∂x₁, …, ∂f/∂xₚ)′.

Rules:
1. Suppose f(x) = a′x = x′a; then ∂f/∂x = a.

2. Suppose f(x) = x′Ax; then ∂f/∂x = (A + A′)x = 2Ax when A is symmetric.

Example
1. Determine when f(x) is a maximum or minimum. Solution: set ∂f/∂x = 0 and solve for x, using the rules above.

2. Determine when f(x) = x′Ax is a maximum subject to x′x = 1; form g(x) = x′Ax - λ(x′x - 1), where λ is the Lagrange multiplier.
Solution: Assume A is a positive definite matrix. Setting ∂g/∂x = 2Ax - 2λx = 0 gives Ax = λx. This shows that x is an eigenvector of A, and at such an x, f(x) = x′Ax = λx′x = λ. Thus the maximizing x is the eigenvector of A associated with the largest eigenvalue, λ₁.
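
A quick numerical illustration: over random unit vectors, x′Ax never exceeds the largest eigenvalue, and the bound is attained at the corresponding eigenvector. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)          # a random positive definite matrix

lam, P = np.linalg.eigh(A)       # eigenvalues in ascending order

# Evaluate x'Ax over many random unit vectors: never exceeds lambda_max ...
X = rng.standard_normal((4, 1000))
X = X / np.linalg.norm(X, axis=0)
quad = np.sum(X * (A @ X), axis=0)
print(bool(quad.max() <= lam[-1] + 1e-9))   # True

# ... and lambda_max is attained at the corresponding eigenvector.
x1 = P[:, -1]
print(np.isclose(x1 @ A @ x1, lam[-1]))     # True
```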

Differentiation with respect to a matrix
Let X denote a q × p matrix and let f(X) denote a function of the components x_ij of X. Then the derivative of f with respect to X is the q × p matrix
∂f/∂X = (∂f/∂x_ij).

Example: Let X denote a p × p matrix and let f(X) = ln |X|.
Solution: ∂|X|/∂x_ij = X_ij, where the X_ij are cofactors, so
∂ln|X|/∂x_ij = X_ij/|X| = the (i,j)th element of (X⁻¹)′,
i.e. ∂ln|X|/∂X = (X⁻¹)′, which equals X⁻¹ when X is symmetric.

Example: Let X and A denote p × p matrices, and let f(X) = tr(AX).
Solution: tr(AX) = Σ_i Σ_j a_ij x_ji, so ∂f/∂x_ij = a_ji; hence ∂tr(AX)/∂X = A′.

Differentiation of a matrix of functions
Let U = (u_ij) denote a q × p matrix of functions of x. Then
dU/dx = (du_ij/dx).

Rules:
1. d(UV)/dx = (dU/dx)V + U(dV/dx)
2. d(U⁻¹)/dx = -U⁻¹(dU/dx)U⁻¹
3. d ln|U|/dx = tr(U⁻¹ dU/dx)

Proof (of rule 2): UU⁻¹ = I, so differentiating, (dU/dx)U⁻¹ + U d(U⁻¹)/dx = 0; solving gives d(U⁻¹)/dx = -U⁻¹(dU/dx)U⁻¹.

The Generalized Inverse of a matrix

Recall: B (denoted by A⁻¹) is called the inverse of A if AB = BA = I.
A⁻¹ does not exist for all matrices A; A⁻¹ exists only if A is a square matrix and |A| ≠ 0.
If A⁻¹ exists then the system of linear equations Ax = b has the unique solution x = A⁻¹b.

Definition: B (denoted by A⁻) is called the generalized inverse (Moore-Penrose inverse) of A if
1. ABA = A
2. BAB = B
3. (AB)′ = AB
4. (BA)′ = BA
Note: A⁻ is unique.
Proof: Let B₁ and B₂ both satisfy
1. AB_iA = A  2. B_iAB_i = B_i  3. (AB_i)′ = AB_i  4. (B_iA)′ = B_iA

Hence
B₁ = B₁AB₁ = B₁AB₂AB₁ = B₁(AB₂)′(AB₁)′ = B₁B₂′A′B₁′A′ = B₁B₂′(AB₁A)′ = B₁B₂′A′ = B₁AB₂
and
B₁AB₂ = B₁AB₂AB₂ = (B₁A)(B₂A)B₂ = (B₁A)′(B₂A)′B₂ = A′B₁′A′B₂′B₂ = (AB₁A)′B₂′B₂ = A′B₂′B₂ = (B₂A)′B₂ = B₂AB₂ = B₂,
so B₁ = B₂ and the generalized inverse is unique.

The general solution of a system of equations: the general solution of Ax = b is
x = A⁻b + (I - A⁻A)z, where z is arbitrary.

Suppose a solution exists, i.e. the system Ax = b is consistent, so that b = Aw for some w. Let x = A⁻b + (I - A⁻A)z. Then
Ax = AA⁻b + (A - AA⁻A)z = AA⁻Aw = Aw = b,
so x solves the system for every choice of z.
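
A minimal NumPy check of the general solution formula, using np.linalg.pinv for A⁻ and a deliberately rank-deficient but consistent system:

```python
import numpy as np

rng = np.random.default_rng(6)

# A 4 x 6 matrix of rank at most 3, with b in the column space of A.
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 6))
b = A @ rng.standard_normal(6)        # consistent by construction

Aplus = np.linalg.pinv(A)             # Moore-Penrose inverse

# Every x = A- b + (I - A- A) z solves A x = b, for arbitrary z.
for _ in range(3):
    z = rng.standard_normal(6)
    x = Aplus @ b + (np.eye(6) - Aplus @ A) @ z
    print(np.allclose(A @ x, b))      # True each time
```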

Calculation of the Moore-Penrose g-inverse
Let A be a p × q matrix of rank q < p (full column rank); then A⁻ = (A′A)⁻¹A′.
Proof: A⁻A = (A′A)⁻¹A′A = I, thus AA⁻A = A, A⁻AA⁻ = A⁻, and (A⁻A)′ = A⁻A; also
(AA⁻)′ = (A(A′A)⁻¹A′)′ = A(A′A)⁻¹A′ = AA⁻.

Let B be a p × q matrix of rank p < q (full row rank); then B⁻ = B′(BB′)⁻¹.
Proof: BB⁻ = BB′(BB′)⁻¹ = I, thus BB⁻B = B, B⁻BB⁻ = B⁻, and (BB⁻)′ = BB⁻; also
(B⁻B)′ = (B′(BB′)⁻¹B)′ = B′(BB′)⁻¹B = B⁻B.

Let C be a p × q matrix of rank k < min(p, q); then C = AB, where A is a p × k matrix of rank k and B is a k × q matrix of rank k (a full-rank factorization), and
C⁻ = B⁻A⁻ = B′(BB′)⁻¹(A′A)⁻¹A′.
Proof: CC⁻ = ABB′(BB′)⁻¹(A′A)⁻¹A′ = A(A′A)⁻¹A′ is symmetric, as well as C⁻C = B′(BB′)⁻¹B; moreover CC⁻C = A(A′A)⁻¹A′AB = AB = C and C⁻CC⁻ = C⁻.
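
These three formulas can be checked numerically. A minimal NumPy sketch using an assumed random full-rank factorization C = AB:

```python
import numpy as np

rng = np.random.default_rng(7)

# C = A B is p x q of rank k, with A (p x k) and B (k x q) of full rank k.
p, q, k = 5, 6, 2
A = rng.standard_normal((p, k))
B = rng.standard_normal((k, q))
C = A @ B

Aplus = np.linalg.inv(A.T @ A) @ A.T        # full column rank: A- = (A'A)^{-1} A'
Bplus = B.T @ np.linalg.inv(B @ B.T)        # full row rank:    B- = B'(BB')^{-1}

Cplus = Bplus @ Aplus                       # general case:     C- = B- A-
print(np.allclose(Cplus, np.linalg.pinv(C)))    # True

# The four Moore-Penrose conditions:
print(np.allclose(C @ Cplus @ C, C))            # 1. ABA = A
print(np.allclose(Cplus @ C @ Cplus, Cplus))    # 2. BAB = B
print(np.allclose((C @ Cplus).T, C @ Cplus))    # 3. (AB)' = AB
print(np.allclose((Cplus @ C).T, Cplus @ C))    # 4. (BA)' = BA
```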

References
1. Searle, Shayle R., Matrix Algebra Useful for Statistics.
2. Carroll, J. Douglas, and Green, Paul E., Mathematical Tools for Applied Multivariate Analysis.