
Stats 443.3 & 851.3 Linear Models.


1 Stats 443.3 & 851.3 Linear Models

2 Instructor: W.H. Laverty
Office: 235 McLean Hall
Phone:
Lectures: M W F 9:30 am – 10:20 am, Geol 269
Lab: Tuesday, 2:30 pm – 3:30 pm
Evaluation: Assignments, Term tests - 40%; Final Examination - 60%

3 The lectures will be given in PowerPoint.

4 Course Outline

5 Introduction

6 Review of Linear Algebra and Matrix Analysis

7 Review of Probability Theory and Statistical Theory

8 Multivariate Normal Distribution

9 The General Linear Model: Theory and Application

10 Special Applications of the General Linear Model
Analysis of Variance models, Analysis of Covariance models

11 A chart illustrating Statistical Procedures

Categorical dependent variable:
- Categorical independent variables: Multiway Frequency Analysis (Log Linear Model)
- Continuous independent variables: Discriminant Analysis
- Continuous & categorical independent variables: ??

Continuous dependent variable(s):
- Categorical independent variables: ANOVA (single dep. var.), MANOVA (multiple dep. var.)
- Continuous independent variables: MULTIPLE REGRESSION (single dep. var.), MULTIVARIATE regression (multiple dep. var.)
- Continuous & categorical independent variables: ANACOVA (single dep. var.), MANACOVA (multiple dep. var.)

12 A Review of Linear Algebra
With some Additions

13 Matrix Algebra Definition
An n × m matrix, A, is a rectangular array of elements with
n = # of rows
m = # of columns
dimensions = n × m

14 Definition A vector, v, of dimension n is an n × 1 matrix (a rectangular array with a single column of n elements). Vectors will be taken to be column vectors (they may also be written as row vectors).

15 A vector, v, of dimension n
can be thought of as a point in n-dimensional space.

16 [Figure: the vector v plotted as a point in 3-dimensional space, with coordinates v1, v2, v3]

17 Matrix Operations: Addition
Let A = (aij) and B = (bij) denote two n × m matrices. Then the sum, A + B, is the n × m matrix (aij + bij). The dimensions of A and B are both required to be n × m.

18 Scalar Multiplication
Let A = (aij) denote an n × m matrix and let c be any scalar. Then cA is the matrix (c aij).

19 Addition for vectors
[Figure: two vectors and their sum plotted in 3-dimensional space, axes v1, v2, v3]

20 Scalar Multiplication for vectors

21 Matrix multiplication
Let A = (aij) denote an n × m matrix and B = (bjl) denote an m × k matrix. Then the n × k matrix C = (cil), where
cil = Σ (j = 1 to m) aij bjl,
is called the product of A and B and is denoted by A∙B.

22 In the case that A = (aij) is an n × m matrix and B = v = (vj) is an m × 1 vector,
then w = A∙v = (wi), where wi = Σ (j = 1 to m) aij vj, is an n × 1 vector.
[Figure: v and its image w = A∙v plotted in 3-dimensional space]
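As a quick numerical illustration (not part of the slides), the product formula cil = Σj aij bjl and the matrix-vector product can be checked directly; the matrices here are small illustrative values:

```python
import numpy as np

# A is n x m, B is m x k; the product C = A.B is n x k with
# c_il = sum over j of a_ij * b_jl
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3 x 2
C = A @ B                         # 2 x 2

# Matrix-vector product: w = A.v is n x 1 when v is m x 1
v = np.array([1.0, 1.0, 1.0])
w = A @ v

assert np.allclose(C, [[4.0, 5.0], [10.0, 11.0]])
assert np.allclose(w, [6.0, 15.0])
```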

23 Definition An n × n identity matrix, I, is the square matrix with 1s on the diagonal and 0s elsewhere:
I = (δij), where δij = 1 if i = j and δij = 0 if i ≠ j.
Note: AI = A and IA = A.

24 Definition (The inverse of an n × n matrix)
Let A denote an n × n matrix and let B denote an n × n matrix such that AB = BA = I. If such a matrix B exists, then A is called invertible. B is called the inverse of A and is denoted by A-1.

25 Note: Let A and B be two matrices whose inverses exist, and let C = AB. Then the inverse of the matrix C exists and C-1 = B-1A-1.
Proof: C[B-1A-1] = [AB][B-1A-1] = A[BB-1]A-1 = A[I]A-1 = AA-1 = I.

26 The Woodbury Theorem
(A + BCD)-1 = A-1 − A-1B(C-1 + DA-1B)-1DA-1,
where the inverses A-1 and C-1 are assumed to exist.

27 Proof: Let H = A-1 − A-1B(C-1 + DA-1B)-1DA-1. Then all we need to show is that H(A + BCD) = (A + BCD)H = I.

28 (A + BCD)H = (A + BCD)A-1 − (A + BCD)A-1B(C-1 + DA-1B)-1DA-1
= I + BCDA-1 − (B + BCDA-1B)(C-1 + DA-1B)-1DA-1
= I + BCDA-1 − BC(C-1 + DA-1B)(C-1 + DA-1B)-1DA-1
= I + BCDA-1 − BCDA-1 = I.

29 The Woodbury theorem can be used to find the inverse of some pattern matrices.
Example: Find the inverse of the n × n matrix with common diagonal element a and common off-diagonal element b.

30 Write the matrix as (a − b)I + b11', where 1 denotes the n × 1 vector of ones. It then has the form A + BCD, where
A = (a − b)I, B = 1, C = b (a 1 × 1 matrix), D = 1',
hence A-1 = (a − b)-1I and C-1 = b-1.

31 Thus, provided a ≠ b and a + (n − 1)b ≠ 0, the required inverses exist. Now using the Woodbury theorem,
[(a − b)I + b11']-1 = (a − b)-1I − (a − b)-1 1 [b-1 + 1'(a − b)-1 1]-1 1' (a − b)-1,
and 1'(a − b)-1 1 = n/(a − b).

32 Thus
[(a − b)I + b11']-1 = (1/(a − b)) I − (b/k) 11'

33 where
k = (a − b)(a − b + nb) = (a − b)(a + (n − 1)b).

34 Note: for n = 2 the matrix is
[ a b ]
[ b a ]
and the formula gives the familiar inverse
(1/(a² − b²)) [ a −b ]
              [ −b a ]

35 Also, the diagonal elements of the inverse are 1/(a − b) − b/k = (a + (n − 2)b)/k and the off-diagonal elements are −b/k, with k = a² + (n − 2)ab − (n − 1)b².

36 Now, multiplying the original matrix by the proposed inverse, each diagonal element of the product is
[a(a + (n − 2)b) − (n − 1)b²]/k = k/k = 1,

37 and each off-diagonal element is
[−ab + b(a + (n − 2)b) − (n − 2)b²]/k = 0/k = 0.
This verifies that we have calculated the inverse.
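The Woodbury identity and the pattern-matrix inverse can be checked numerically; a, b, and n below are illustrative values, not from the slides:

```python
import numpy as np

# Verify the Woodbury identity
#   (A + B C D)^-1 = A^-1 - A^-1 B (C^-1 + D A^-1 B)^-1 D A^-1
# on the pattern matrix (a - b) I + b 1 1' (diagonal a, off-diagonal b).
n, a, b = 4, 3.0, 1.0
one = np.ones((n, 1))
M = (a - b) * np.eye(n) + b * (one @ one.T)

Ainv = np.eye(n) / (a - b)          # ((a - b) I)^-1
Cinv = np.array([[1.0 / b]])        # C = b is 1 x 1, so C^-1 = 1/b
woodbury = Ainv - Ainv @ one @ np.linalg.inv(Cinv + one.T @ Ainv @ one) @ one.T @ Ainv

# Agrees with a direct inverse, and with the closed form
# (1/(a-b)) I - (b/k) 1 1' with k = (a-b)(a-b+nb)
k = (a - b) * (a - b + n * b)
assert np.allclose(woodbury, np.linalg.inv(M))
assert np.allclose(woodbury, np.eye(n) / (a - b) - (b / k) * np.ones((n, n)))
```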

38 Block Matrices Let the n × m matrix A be partitioned into sub-matrices A11, A12, A21, A22:
A = [ A11 A12 ]
    [ A21 A22 ]
Similarly partition the m × k matrix B into sub-matrices B11, B12, B21, B22, so that the products below are defined.

39 Product of Blocked Matrices
Then
AB = [ A11B11 + A12B21   A11B12 + A12B22 ]
     [ A21B11 + A22B21   A21B12 + A22B22 ]

40 The Inverse of Blocked Matrices
Let the n × n matrix A be partitioned into sub-matrices A11, A12, A21, A22, and similarly partition the n × n matrix B into sub-matrices B11, B12, B21, B22. Suppose that B = A-1.

41 Product of Blocked Matrices
Then AB = I gives the four equations
A11B11 + A12B21 = I   (1)
A11B12 + A12B22 = 0   (2)
A21B11 + A22B21 = 0   (3)
A21B12 + A22B22 = I   (4)

42 Hence, from (3), B21 = −A22-1A21B11. Substituting into (1),
(A11 − A12A22-1A21)B11 = I.

43 Hence B11 = (A11 − A12A22-1A21)-1, or, using the Woodbury Theorem,
B11 = A11-1 + A11-1A12(A22 − A21A11-1A12)-1A21A11-1.
Similarly, B22 = (A22 − A21A11-1A12)-1.

44 From (3), B21 = −A22-1A21B11, and similarly, from (2), B12 = −A11-1A12B22.

45 Summarizing: let
A = [ A11 A12 ]
    [ A21 A22 ]
and suppose that A-1 = B. Then
B11 = (A11 − A12A22-1A21)-1
B22 = (A22 − A21A11-1A12)-1
B12 = −A11-1A12B22
B21 = −A22-1A21B11
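The blocked-inverse formulas can be verified numerically; the 4 × 4 matrix below is an illustrative invertible example, not the one from the slides:

```python
import numpy as np

# Build a symmetric positive definite 4 x 4 matrix and partition it
# into four 2 x 2 blocks.
A = np.array([[4.0, 1.0, 0.0, 1.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 3.0]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

# The four blocks of B = A^-1, per the summary formulas
B11 = np.linalg.inv(A11 - A12 @ np.linalg.inv(A22) @ A21)
B22 = np.linalg.inv(A22 - A21 @ np.linalg.inv(A11) @ A12)
B12 = -np.linalg.inv(A11) @ A12 @ B22
B21 = -np.linalg.inv(A22) @ A21 @ B11
B = np.block([[B11, B12], [B21, B22]])

assert np.allclose(B, np.linalg.inv(A))
assert np.allclose(B @ A, np.eye(4))
```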

46 Example: For a given matrix A, find A-1 = B.

47

48 The transpose of a matrix
Consider the n × m matrix A = (aij). Then the m × n matrix A' = (aji) (also denoted by AT) is called the transpose of A.

49 Symmetric Matrices An n × n matrix, A, is said to be symmetric if A' = A.
Note: (A')' = A, (A + B)' = A' + B', and (AB)' = B'A'.

50 The trace and the determinant of a square matrix
Let A denote the n × n matrix (aij). Then the trace of A is
tr(A) = a11 + a22 + ⋯ + ann = Σ aii,
and the determinant of A is denoted by det(A) = |A|.

51 Also, expanding along the ith row,
|A| = Σ (j = 1 to n) aij Aij,
where Aij = (−1)^(i+j) Mij is the (i, j) cofactor of A and Mij is the determinant of the matrix obtained by deleting row i and column j of A.

52 Some properties
tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA),
|AB| = |A||B|, |A'| = |A|, |A-1| = 1/|A|.

53 Some additional Linear Algebra

54 Inner product of vectors
Let u and v denote two p × 1 vectors. Then the inner product of u and v is
u'v = u1v1 + u2v2 + ⋯ + upvp = v'u.

55 Note: Let u and v denote two p × 1 vectors. Then
u'v = ‖u‖‖v‖ cos θ,
where ‖u‖ = √(u'u) and ‖v‖ = √(v'v) are the lengths of the vectors and θ is the angle between them.

56 Note: Let u and v denote two p × 1 vectors. Then u and v are orthogonal (perpendicular) if and only if u'v = 0.

57 Special Types of Matrices
Orthogonal matrices: A matrix P is orthogonal if P'P = PP' = I. In this case P-1 = P'. Also, the rows (and columns) of P have length 1 and are orthogonal to each other.

58 Suppose P is an orthogonal matrix
and let u and v denote p × 1 vectors. Then
‖Pv‖ = ‖v‖ and (Pu)'(Pv) = u'v.
Orthogonal transformations preserve lengths and angles: rotations about the origin and reflections.
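A small numerical check (not from the slides): a rotation matrix is orthogonal and preserves lengths and inner products; the angle and vectors are illustrative values.

```python
import numpy as np

# A 2 x 2 rotation matrix is orthogonal: P'P = I, so P^-1 = P'.
theta = 0.3
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

assert np.allclose(P.T @ P, np.eye(2))                       # P'P = I
assert np.isclose(np.linalg.norm(P @ v), np.linalg.norm(v))  # length preserved
assert np.isclose((P @ u) @ (P @ v), u @ v)                  # inner product (angle) preserved
```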

59 Example The following matrix P is orthogonal

60 Special Types of Matrices (continued)
Positive definite matrices: A symmetric matrix, A, is called positive definite if x'Ax > 0 for all x ≠ 0. A symmetric matrix, A, is called positive semi-definite if x'Ax ≥ 0 for all x.

61 If the matrix A is positive definite then

62 Theorem: The symmetric matrix A is positive definite if and only if the determinant of every leading principal (upper-left k × k) submatrix of A is positive, for k = 1, 2, …, n.
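The positive-definiteness tests can be checked numerically; the matrix below is an illustrative example (for a symmetric matrix, all eigenvalues positive is equivalent to x'Ax > 0 for all x ≠ 0, and the leading-principal-minor test gives the same answer):

```python
import numpy as np

# An illustrative symmetric matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Test 1: all eigenvalues of the symmetric matrix are positive
eig_test = bool(np.all(np.linalg.eigvalsh(A) > 0))

# Test 2: all leading principal minors are positive
minor_test = all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

assert eig_test and minor_test
```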

63 Example

64 Special Types of Matrices (continued)
Idempotent matrices: A symmetric matrix, E, is called idempotent if E² = EE = E. Idempotent matrices project vectors onto a linear subspace.

65 Example

66 Example (continued)
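A standard illustration (the slides' own example is not preserved here) is the matrix E = X(X'X)-1X', which is symmetric, idempotent, and projects vectors onto the column space of X; the matrix X below is an illustrative full-rank example:

```python
import numpy as np

# E = X (X'X)^-1 X' projects onto the column space of X.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
E = X @ np.linalg.inv(X.T @ X) @ X.T

assert np.allclose(E @ E, E)   # idempotent: E E = E
assert np.allclose(E.T, E)     # symmetric
assert np.allclose(E @ X, X)   # vectors already in the subspace are unchanged
```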

67 Vector subspaces of Rⁿ

68 Let n denote all n-dimensional vectors (n-dimensional Euclidean space).
Let M denote any subset of n. Then M is a vector subspace of n if: M If M and M then M If M then M

69 Example 1 of a vector subspace: Let M = {x : a'x = 0}, where a is any fixed n-dimensional vector. Then M is a vector subspace of Rⁿ.
Note: M is an (n − 1)-dimensional plane through the origin.

70 Proof: a'0 = 0, so 0 ∈ M. If u ∈ M and v ∈ M, then a'(u + v) = a'u + a'v = 0 + 0 = 0, and if v ∈ M then a'(cv) = c(a'v) = 0. Hence u + v ∈ M and cv ∈ M.

71 Projection onto M: Let x be any vector in Rⁿ. Then
x − (a'x / a'a) a ∈ M
is the projection of x onto M, obtained by subtracting from x its component along a.

72 Example 2 of a vector subspace: Let M = {x : x = c1v1 + c2v2 + ⋯ + cpvp}, where v1, …, vp are p fixed n-dimensional vectors. Then M is a vector subspace of Rⁿ.
M is called the vector space spanned by the p n-dimensional vectors v1, …, vp. M is the plane of smallest dimension through the origin that contains the vectors v1, …, vp.

73 Eigenvectors, Eigenvalues of a matrix

74 Definition Let A be an n × n matrix, and let λ and v ≠ 0 satisfy
Av = λv.
Then λ is called an eigenvalue of A, and v is called an eigenvector of A associated with λ.

75 Note: Av = λv can be rewritten as (A − λI)v = 0, which has a non-zero solution v if and only if |A − λI| = 0.

76 |A − λI| = polynomial of degree n in λ.
Hence there are n possible eigenvalues λ1, …, λn.

77 Theorem: If the matrix A is symmetric, then the eigenvalues of A, λ1, …, λn, are real.
Theorem: If the matrix A is positive definite, then the eigenvalues of A, λ1, …, λn, are positive.
Proof: A is positive definite if x'Ax > 0 for all x ≠ 0. Let λ and v be an eigenvalue and corresponding eigenvector of A. Then 0 < v'Av = λv'v, and since v'v > 0, it follows that λ > 0.

78 Theorem: If the matrix A is symmetric and the eigenvalues of A are λ1, …, λn, with corresponding eigenvectors v1, …, vn, and if λi ≠ λj, then vi'vj = 0.
Proof: Note that λi vj'vi = vj'Avi = (Avj)'vi = λj vj'vi. Hence (λi − λj)vj'vi = 0, and since λi ≠ λj, vj'vi = 0.

79 Theorem: If the matrix A is symmetric with distinct eigenvalues λ1, …, λn and corresponding eigenvectors v1, …, vn, assume the eigenvectors are scaled to have unit length (vi'vi = 1). Then
A = λ1v1v1' + ⋯ + λnvnvn' = PDP',
where P = [v1, …, vn] and D = diag(λ1, …, λn).

80 Proof: Let P = [v1, …, vn] and D = diag(λ1, …, λn). Note that
AP = [Av1, …, Avn] = [λ1v1, …, λnvn] = PD,
and that P'P = I (the columns of P are orthonormal); P is called an orthogonal matrix.

81 Therefore AP = PD and P-1 = P', thus
A = PDP-1 = PDP' = λ1v1v1' + ⋯ + λnvnvn'.

82 Comment: The previous result is also true if the eigenvalues are not distinct. Namely, if the matrix A is symmetric with eigenvalues λ1, …, λn and corresponding eigenvectors v1, …, vn of unit length, then A = PDP' = λ1v1v1' + ⋯ + λnvnvn'.
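The decomposition A = PDP' can be demonstrated with a library eigensolver (not part of the slides); the symmetric matrix below is an illustrative example:

```python
import numpy as np

# eigh returns the eigenvalues and orthonormal eigenvectors of a
# symmetric matrix; the columns of P are the eigenvectors.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])
lams, P = np.linalg.eigh(A)
D = np.diag(lams)

assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
assert np.allclose(P @ D @ P.T, A)       # spectral decomposition A = P D P'
```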

83 An algorithm for computing eigenvectors and eigenvalues of positive definite matrices
Generally, to compute the eigenvalues of a matrix we first solve the equation
|A − λI| = 0 (a polynomial equation of degree n in λ)
for all values of λ. Then, for each eigenvalue λ, we solve the equation (A − λI)v = 0 for the eigenvector v.

84 Recall that if A is positive definite, then
A = λ1v1v1' + λ2v2v2' + ⋯ + λnvnvn', with λ1 ≥ λ2 ≥ ⋯ ≥ λn > 0.
It can be shown that
A^m = λ1^m v1v1' + ⋯ + λn^m vnvn'
and that, when λ1 > λ2,
(1/λ1^m) A^m → v1v1' as m → ∞.

85 Thus for large values of m, A^m ≈ λ1^m v1v1'.
The algorithm:
1. Compute powers of A: A², A⁴, A⁸, A¹⁶, …
2. Rescale each power (so that the largest element is 1, say).
3. Continue until there is no change.
4. The resulting matrix will be proportional to v1v1'.
5. Find v1 (normalize a column to have length 1) and λ1 = v1'Av1.

86 To find λ2 and v2:
Repeat steps 1 to 5 with the matrix A − λ1v1v1' (whose largest eigenvalue is now λ2). Continue in this way to find λ3, v3, etc.

87 Example A =
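The squaring-and-rescaling algorithm above can be sketched in a few lines; the matrix A here is an illustrative positive definite matrix, since the example matrix from the slides is not reproduced in this transcript:

```python
import numpy as np

# Illustrative positive definite matrix (eigenvalues (7 ± sqrt(5))/2)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Steps 1-3: repeatedly square and rescale so the largest element is 1
M = A.copy()
for _ in range(20):
    M = M @ M                   # A^2, A^4, A^8, ...
    M = M / np.abs(M).max()     # rescale

# Step 4-5: the limit is proportional to v1 v1', so any column is
# proportional to v1; the Rayleigh quotient recovers lambda_1.
v1 = M[:, 0] / np.linalg.norm(M[:, 0])
lam1 = v1 @ A @ v1

# Compare against a library eigensolver
lams, vecs = np.linalg.eigh(A)
assert np.isclose(lam1, lams[-1])
assert np.isclose(abs(v1 @ vecs[:, -1]), 1.0)
```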

88 Differentiation with respect to a vector, matrix

89 Differentiation with respect to a vector
Let x = (x1, …, xp)' denote a p × 1 vector, and let f(x) denote a function of the components of x. Then the derivative of f with respect to x is the p × 1 vector
∂f/∂x = (∂f/∂x1, …, ∂f/∂xp)'.

90 Rules
1. Suppose f(x) = a'x = x'a, where a is a p × 1 vector of constants. Then ∂f/∂x = a.

91 2. Suppose f(x) = x'Ax, where A is a p × p symmetric matrix of constants. Then ∂f/∂x = 2Ax.

92 Example 1: Determine when f(x) is a maximum or minimum.
Solution: Set ∂f/∂x = 0 and solve for x.

93 2. Determine when f(x) = x'Ax is a maximum subject to x'x = 1. Assume A is a positive definite matrix.
Solution: Maximize x'Ax − λ(x'x − 1), where λ is the Lagrange multiplier. Differentiating with respect to x and setting the result to zero gives 2Ax − 2λx = 0, i.e. Ax = λx. This shows that x is an eigenvector of A. Since x'Ax = λx'x = λ at such a point, x is the eigenvector of A associated with the largest eigenvalue, λ.

94 Differentiation with respect to a matrix
Let X denote a q × p matrix, and let f(X) denote a function of the components xij of X. Then the derivative of f with respect to X is the q × p matrix
∂f/∂X = (∂f/∂xij).

95 Example: Let X denote a p × p matrix and let f(X) = ln |X|.
Solution: ∂|X|/∂xij = Xij, where the Xij are the cofactors of X. Note that X-1 = (1/|X|)(Xij)', so Xij/|X| is the (j, i)th element of X-1. Hence
∂ ln |X|/∂xij = Xij/|X| and ∂ ln |X|/∂X = (X-1)' = (X')-1.

96 Example: Let X and A denote p × p matrices and let f(X) = tr(AX).
Solution: tr(AX) = Σi Σj aij xji, so ∂ tr(AX)/∂xij = aji and ∂ tr(AX)/∂X = A'.
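Both matrix-derivative results can be checked with central finite differences (a numerical sketch, not from the slides; X and A are illustrative matrices):

```python
import numpy as np

def num_grad(f, X, h=1e-6):
    """Central finite-difference gradient of a scalar function of a matrix."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X)
            E[i, j] = h
            G[i, j] = (f(X + E) - f(X - E)) / (2 * h)
    return G

X = np.array([[2.0, 0.5],
              [0.3, 1.5]])   # |X| = 2.85 > 0, so ln|X| is defined
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# d ln|X| / dX = (X^-1)'
assert np.allclose(num_grad(lambda M: np.log(np.linalg.det(M)), X),
                   np.linalg.inv(X).T, atol=1e-5)
# d tr(AX) / dX = A'
assert np.allclose(num_grad(lambda M: np.trace(A @ M), X), A.T, atol=1e-5)
```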

97 Differentiation of a matrix of functions
Let U = (uij) denote a q × p matrix whose elements are functions of x. Then the derivative of U with respect to x is the q × p matrix
dU/dx = (duij/dx).

98 Rules:
1. d(U + V)/dx = dU/dx + dV/dx
2. d(UV)/dx = (dU/dx)V + U(dV/dx)
3. dU-1/dx = −U-1(dU/dx)U-1

99 Proof of 1: The (i, j) element of U + V is uij + vij, and d(uij + vij)/dx = duij/dx + dvij/dx.

100 Proof of 2: The (i, j) element of UV is Σk uikvkj, and by the product rule
d(Σk uikvkj)/dx = Σk (duik/dx)vkj + Σk uik(dvkj/dx),
which is the (i, j) element of (dU/dx)V + U(dV/dx).

101 Proof of 3: Differentiating UU-1 = I gives (dU/dx)U-1 + U(dU-1/dx) = 0, hence
dU-1/dx = −U-1(dU/dx)U-1.

102 The Generalized Inverse of a matrix

103 Recall that B (denoted by A-1) is called the inverse of A if AB = BA = I.
A-1 does not exist for all matrices A: A-1 exists only if A is a square matrix and |A| ≠ 0.
If A-1 exists, then the system of linear equations Ax = b has the unique solution x = A-1b.

104 Definition: B (denoted by A-) is called the generalized inverse (Moore-Penrose inverse) of A if
1. ABA = A
2. BAB = B
3. (AB)' = AB
4. (BA)' = BA
Note: A- is unique.
Proof: Let B1 and B2 both satisfy
1. ABiA = A  2. BiABi = Bi  3. (ABi)' = ABi  4. (BiA)' = BiA.

105 Hence
B1 = B1AB1 = B1AB2AB1 = B1(AB2)'(AB1)' = B1B2'A'B1'A' = B1B2'A'
= B1AB2 = B1AB2AB2 = (B1A)(B2A)B2 = (B1A)'(B2A)'B2 = A'B1'A'B2'B2
= A'B2'B2 = (B2A)'B2 = B2AB2 = B2.

The general solution of the system of equations Ax = b is
x = A-b + (I − A-A)z, where z is arbitrary.

106 Suppose a solution exists, i.e. Ax = b is consistent (b = Aw for some w). Let x = A-b + (I − A-A)z. Then
Ax = AA-b + (A − AA-A)z = AA-Aw = Aw = b,
so every vector of this form is a solution.

107 Calculation of the Moore-Penrose g-inverse
Let A be a p × q matrix of rank q < p. Then A- = (A'A)-1A'.
Proof: A-A = (A'A)-1A'A = I, thus AA-A = A and A-AA- = A-. Also
(A-A)' = I' = I = A-A and (AA-)' = [A(A'A)-1A']' = A(A'A)-1A' = AA-.

108 Let B be a p × q matrix of rank p < q. Then B- = B'(BB')-1.
Proof: BB- = BB'(BB')-1 = I, thus BB-B = B and B-BB- = B-. Also
(BB-)' = I' = I = BB- and (B-B)' = [B'(BB')-1B]' = B'(BB')-1B = B-B.

109 Let C be a p × q matrix of rank k < min(p, q). Then C = AB, where A is a p × k matrix of rank k and B is a k × q matrix of rank k, and
C- = B-A- = B'(BB')-1(A'A)-1A'.
Proof: CC- = ABB'(BB')-1(A'A)-1A' = A(A'A)-1A' is symmetric, as well as
C-C = B'(BB')-1(A'A)-1A'AB = B'(BB')-1B.
Then CC-C = A(A'A)-1A'AB = AB = C and C-CC- = B'(BB')-1BB'(BB')-1(A'A)-1A' = C-.
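The four defining conditions can be verified numerically for a full-column-rank matrix (an illustrative example, agreeing with the library's Moore-Penrose inverse):

```python
import numpy as np

# For a full-column-rank A, the Moore-Penrose inverse is A- = (A'A)^-1 A'.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # 3 x 2, rank 2
Aminus = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A @ Aminus @ A, A)             # 1. A B A = A
assert np.allclose(Aminus @ A @ Aminus, Aminus)   # 2. B A B = B
assert np.allclose((A @ Aminus).T, A @ Aminus)    # 3. (AB)' = AB
assert np.allclose((Aminus @ A).T, Aminus @ A)    # 4. (BA)' = BA
assert np.allclose(Aminus, np.linalg.pinv(A))     # agrees with the library g-inverse
```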

