Statistical Computing and Simulation, Department of Statistics, National Chengchi University (政治大學統計系), 余清祥, March 29 to April 14, 2004. Weeks 7 and 9: Matrix Computation

Matrix Computation (矩陣運算)
Matrix computation plays a very important role in statistical analysis, including the following methods:
Multiple Regression
Generalized Linear Models
Multivariate Analysis
Time Series
Other topics (random variables)

Multiple Regression
In a multiple regression problem, we want to approximate the vector Y by fitted values Ŷ = Xβ̂ (a linear function of a set of p predictors), i.e., β̂ solves the normal equations X'Xβ̂ = X'Y, so that β̂ = (X'X)^{-1}X'Y (if the inverse exists). Note: the normal equations are rarely solved by inverting X'X directly, unless it is necessary.

In other words, we need to be familiar with matrix computation, such as matrix products and matrix inverses, i.e., X'Y and (X'X)^{-1}X'Y. If the left-hand-side matrix of the normal equations is upper (or lower) triangular, then the estimate can be solved easily by back (or forward) substitution.
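As an illustration of why triangular systems are easy to solve, here is a minimal R/S-style sketch (the 3 × 3 system below is made up for demonstration); backsolve() performs the back-substitution:

  # Solve R b = z, where R is upper triangular
  R <- matrix(c(2, 1, 3,
                0, 4, 1,
                0, 0, 5), nrow = 3, byrow = TRUE)
  z <- c(10, 9, 5)
  b <- backsolve(R, z)   # back-substitution, about p^2/2 multiplications
  R %*% b                # reproduces z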

Gauss Elimination
Gaussian elimination is a process of row operations, such as adding multiples of one row to another and interchanging rows, that produces a triangular system in the end. Gaussian elimination can also be used to construct the matrix inverse.

An Example of Gauss Elimination (the worked numerical example appeared as a figure on the original slide):
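As a rough substitute for that example (not the slide's own numbers; the 3 × 3 system below is made up), Gaussian elimination with partial pivoting can be sketched in R as:

  gauss_elim <- function(A, b) {
    n  <- nrow(A)
    Ab <- cbind(A, b)                             # augmented matrix [A | b]
    for (k in 1:(n - 1)) {
      p <- which.max(abs(Ab[k:n, k])) + k - 1     # partial pivoting
      if (p != k) Ab[c(k, p), ] <- Ab[c(p, k), ]  # interchange rows
      for (i in (k + 1):n) {
        m <- Ab[i, k] / Ab[k, k]
        Ab[i, ] <- Ab[i, ] - m * Ab[k, ]          # eliminate below the pivot
      }
    }
    backsolve(Ab[, 1:n], Ab[, n + 1])             # back-substitution
  }

  A <- matrix(c(2, 1, 1,
                4, -6, 0,
                -2, 7, 2), 3, 3, byrow = TRUE)
  b <- c(5, -2, 9)
  gauss_elim(A, b)   # should agree with solve(A, b)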

The idea of Gaussian elimination is fine, but it would require a lot of computation. For example, let X be an n × p matrix. We need to compute X'X first and then reduce (X'X) to an upper triangular matrix. Forming X'X requires on the order of n × p² multiplications, and the elimination step requires about another p × p(p-1)/2 ≈ O(p³) multiplications.

QR Decomposition
If we can find matrices Q and R such that X = QR, where Q has orthonormal columns (Q'Q = I) and R is upper triangular, then the normal equations X'Xβ̂ = X'Y reduce to Rβ̂ = Q'Y, which can be solved by back-substitution.
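A minimal sketch of QR-based regression using base R's qr() (the note on the next slide points to the corresponding S-Plus function); the simulated data are only for illustration:

  set.seed(1)
  n <- 100; p <- 3
  X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))   # design matrix with intercept
  y <- drop(X %*% c(2, -1, 0.5) + rnorm(n))

  qrX  <- qr(X)                                      # X = QR
  bhat <- backsolve(qr.R(qrX), t(qr.Q(qrX)) %*% y)   # solve R b = Q'y
  cbind(bhat, coef(lm(y ~ X - 1)))                   # agrees with lm()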

Gram-Schmidt Algorithm
The Gram-Schmidt algorithm is a famous algorithm for doing the QR decomposition. Algorithm (Q is n × p and R is p × p; on exit, x holds Q):

  for (j in 1:p) {
    r[j, j] <- sqrt(sum(x[, j]^2))
    x[, j]  <- x[, j] / r[j, j]
    if (j < p) {
      for (k in (j + 1):p) {
        r[j, k] <- sum(x[, j] * x[, k])
        x[, k]  <- x[, k] - x[, j] * r[j, k]
      }
    }
  }

Note: Check the function "qr" in S-Plus.
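Packaging the loop above into a self-contained function (the wrapper name gram_schmidt and the test matrix are ours, for illustration only) makes it easy to check that Q'Q = I and that QR reproduces X:

  gram_schmidt <- function(x) {
    p <- ncol(x)
    r <- matrix(0, p, p)
    for (j in 1:p) {
      r[j, j] <- sqrt(sum(x[, j]^2))
      x[, j]  <- x[, j] / r[j, j]
      if (j < p) for (k in (j + 1):p) {
        r[j, k] <- sum(x[, j] * x[, k])
        x[, k]  <- x[, k] - x[, j] * r[j, k]
      }
    }
    list(Q = x, R = r)
  }

  X  <- matrix(rnorm(20), 5, 4)
  gs <- gram_schmidt(X)
  max(abs(gs$Q %*% gs$R - X))          # reconstruction error, near machine precision
  max(abs(crossprod(gs$Q) - diag(4)))  # Q'Q should be the identity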

Notes: (1) Since R is upper triangular, its inverse, and hence the solution of Rβ̂ = Q'Y, is easy to obtain by back-substitution. (2) The idea of the preceding algorithm is to orthogonalize the columns of X one at a time: column j is normalized to unit length, and its component is then subtracted from every later column.

Notes: (continued) (3) If X is not of full rank, one of the columns will become very close to 0 after orthogonalization. Thus r[j, j] ≈ 0, and so there will be a divide-by-zero error. (4) If we apply the Gram-Schmidt algorithm to the augmented matrix [X, y], the last column will become (a scaled version of) the residuals of the regression of y on X.

Other orthogonalization methods:
Householder transformation: a computationally efficient and numerically stable method for QR decomposition. The Householder transformations constructed are n × n matrices, and the orthogonal factor Q is the product of p such matrices.
Givens rotation: the matrix X is reduced to upper triangular form by making exactly one subdiagonal element equal to zero at each step.

Cholesky Decomposition
If A is positive semi-definite, there exists an upper triangular matrix D such that A = D'D; the elements of D can be computed recursively, since d_rc = 0 for r > c.

Thus, the elements of D are equal to
d_11 = sqrt(a_11), d_1c = a_1c / d_11 for c > 1, and for r = 2, …, p:
d_rr = sqrt(a_rr - sum_{k<r} d_kr²), d_rc = (a_rc - sum_{k<r} d_kr d_kc) / d_rr for c > r.
Note: A p × p matrix A is "positive semi-definite" if for all vectors v in R^p we have v'Av ≥ 0. If, in addition, v'Av = 0 only when v = 0, then A is "positive definite."

Applications of Cholesky decomposition:
Simulation of correlated random variables (and also multivariate distributions).
Regression.
Determinant of a symmetric matrix: det(A) = det(D'D) = prod_r d_rr².
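A hedged sketch of the first and third applications, using base R's chol() (the 2 × 2 covariance matrix below is made up):

  Sigma <- matrix(c(1, 0.8,
                    0.8, 1), 2, 2)       # target covariance
  D <- chol(Sigma)                       # upper triangular, Sigma = D'D
  Z <- matrix(rnorm(2 * 10000), ncol = 2)
  X <- Z %*% D                           # rows of X have covariance Sigma
  cov(X)                                 # close to Sigma
  prod(diag(D))^2                        # det(Sigma) via the Cholesky factor
  det(Sigma)                             # check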

Sweep Operator
The normal equations are not solved directly in the preceding methods. But sometimes we need to compute the sums-of-squares and cross-products (SSCP) matrix. This is particularly useful in stepwise regression, since we need to compute the residual sum of squares (RSS) before and after a certain variable is added or removed.

The Sweep algorithm is also known as the "Gauss-Jordan" algorithm. Consider the SSCP matrix

  A = [ X'X   X'y
        y'X   y'y ],

where X is n × p and y is n × 1. Applying the Sweep operator to columns 1 through p of A results in the matrix

  [ (X'X)^{-1}        (X'X)^{-1}X'y
    y'X(X'X)^{-1}     y'y - y'X(X'X)^{-1}X'y ]

(up to sign changes on some blocks, depending on the sweep convention used).

Details of the Sweep algorithm (sweeping on the pivot a_11):
Step 1: divide Row 1 by a_11.
Step 2: replace Row 2 by Row 2 minus Row 1 times a_21, so that the (2,1) entry becomes zero; the remaining rows are handled in the same way.

Notes: (1) If you apply the Sweep operator to columns i_1, i_2, …, i_k, you will receive the results from regressing y on x_{i_1}, …, x_{i_k}: the corresponding elements in the last column will be the estimated regression coefficients, the (p+1, p+1) element will contain the RSS, and so forth.

Notes: (continued) (2) The Sweep operator has a simple inverse; the two together make it very easy to do stepwise regression. The SSCP matrix is symmetric, and any application of Sweep or its inverse results in a symmetric matrix, so one may take advantage of symmetric storage.

Sweep Algorithm Sweep  function(A, k) { n  nrow(A) for (i in (1:n)[-k]) A[i,j]  A[i,j] – A[i,k]*A[k,j]/A[k,k] # sweep if A[k,k] > 0 # inverse if A[k,k] < 0 A[-k,k]  A[-k,k]/abs(A[k,k]) A[k,-k]  A[k,-k]/abs(A[k,k]) return(A) }

Generalized Least Squares (GLS)
Consider the model y = Xβ + ε, where Var(ε) = σ²V. Consider the Cholesky decomposition of V, V = D'D, and let y* = (D')^{-1}y and X* = (D')^{-1}X. Then y* = X*β + ε* with Var(ε*) = σ²I, and we may proceed as before. A special case (WLS): V = diag{v_1, …, v_n}.
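A minimal GLS sketch under these assumptions, in base R (the AR(1)-type correlation matrix and the simulated data are made up; backsolve(..., transpose = TRUE) applies (D')^{-1}):

  set.seed(1)
  n <- 100
  X <- cbind(1, rnorm(n))
  V <- 0.7^abs(outer(1:n, 1:n, "-"))     # a known correlation structure
  D <- chol(V)                           # V = D'D, D upper triangular
  e <- drop(crossprod(D, rnorm(n)))      # errors with covariance V
  y <- drop(X %*% c(1, 2)) + e

  ystar <- backsolve(D, y, transpose = TRUE)   # y* = (D')^{-1} y
  Xstar <- backsolve(D, X, transpose = TRUE)   # X* = (D')^{-1} X
  coef(lm(ystar ~ Xstar - 1))                  # GLS estimate of beta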

Singular Value Decomposition
In the regression problem, we can write X = UDV' (the singular value decomposition of X), or equivalently, X'X = VD'DV', so that the least squares estimate can be written as β̂ = V(D'D)^{-1}D'U'Y.

The two orthogonal matrices U and V are associated with the following result, the Singular Value Decomposition: Let X be an arbitrary n × p matrix with n ≥ p. Then there exist orthogonal matrices U (n × n) and V (p × p) such that X = UDV', where D is an n × p matrix whose only nonzero elements are the singular values d_1 ≥ d_2 ≥ … ≥ d_p ≥ 0 on its leading diagonal.
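A small sketch using base R's svd() (simulated data; the comparison with lm() is only a sanity check):

  set.seed(1)
  n <- 100; p <- 3
  X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
  y <- drop(X %*% c(1, -2, 0.5) + rnorm(n))

  s <- svd(X)                               # thin SVD: X = U D V', with U of size n x p here
  bhat <- s$v %*% ((t(s$u) %*% y) / s$d)    # beta-hat = V D^{-1} U'y
  cbind(bhat, coef(lm(y ~ X - 1)))          # agrees with lm()
  s$d                                       # near-zero singular values would flag rank deficiency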

Eigenvalues, Eigenvectors, and Principal Component Analysis
The notion of principal components refers to a collection of uncorrelated r.v.'s formed by linear combinations of a set of possibly correlated r.v.'s. Idea: start from eigenvalues and eigenvectors, i.e., if x and λ are an eigenvector and eigenvalue of a symmetric positive semi-definite matrix A, then Ax = λx.

If A is a symmetric positive semi-definite matrix (p × p), then we can find an orthogonal matrix Γ such that A = Γ'ΛΓ, where Λ = diag(λ_1, …, λ_p). Note: This is called the spectral decomposition of A. The ith row of Γ is the eigenvector of A which corresponds to the ith eigenvalue λ_i.
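A small sketch with base R's eigen() (the symmetric matrix below is made up), following the slide's convention that the rows of Γ are the eigenvectors:

  A <- matrix(c(4, 2, 0,
                2, 3, 1,
                0, 1, 2), 3, 3)
  e <- eigen(A, symmetric = TRUE)
  Gamma  <- t(e$vectors)                        # rows are eigenvectors
  Lambda <- diag(e$values)
  max(abs(t(Gamma) %*% Lambda %*% Gamma - A))   # A = Gamma' Lambda Gamma, up to rounding
  # For data with covariance A, the principal component loadings are these eigenvectors.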

Write β̂ = VD⁺U'Y, where D⁺ denotes the pseudo-inverse of D (transpose D and replace each nonzero singular value d_i by 1/d_i). We should note that when X is not of full rank the normal equations do not have a unique solution; β̂ = VD⁺U'Y is the solution of the normal equations for which ||β̂|| is minimized.

Note: There are some handouts from the references regarding matrix computation:
"Numerical Linear Algebra" notes from the Internet
Chapters 3 to 6 in Monahan (2001)
Chapter 3 in Thisted (1988)
Students are required to read and understand all these materials, in addition to the PowerPoint notes.