Linear Systems, Mainly LU Decomposition


Linear Systems, Mainly LU Decomposition
The chapter numbers refer to the chapters of "Numerical Recipes".

Linear Systems
A set of M linear equations in N unknowns; in matrix form: A x = b, where A is the M x N matrix of coefficients, x the vector of unknowns, and b the right-hand side.
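Written out componentwise (a standard expansion, not reproduced in the transcript), the system of M equations in N unknowns reads

\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1N}x_N &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2N}x_N &= b_2\\
&\;\;\vdots\\
a_{M1}x_1 + a_{M2}x_2 + \cdots + a_{MN}x_N &= b_M
\end{aligned}
\]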

Existence of Solutions (M = number of equations, N = number of unknowns)
- M = N (square matrix) and A nonsingular: a unique solution exists.
- M < N (more variables than equations): the solution is not unique; one needs the null space of A, i.e. all x with A x = 0 (found e.g. by SVD).
- M > N (more equations than variables): one can ask for the least-squares solution, e.g. from the normal equations (A^T A) x = A^T b.

Some Concepts in Linear Algebra
- Vector space and its dimension.
- Null space of a matrix A: the set of all x such that A x = 0.
- Range of A: the set of all vectors of the form A x.
- Rank of A: the maximum number of linearly independent rows (or, equivalently, columns).
- Rank-nullity: rank(A) + dim(null space of A) = number of columns of A.

Computational Tasks and Pitfalls
- Solve A x = b.
- Find A^-1.
- Compute det(A).
Pitfalls: round-off error can make the system effectively singular, and numerical instability can make the answer wrong.

Gauss Elimination
Basic facts about linear equations:
- Interchanging any two rows of A (and the corresponding elements of b) does not change the solution x.
- Replacing a row by a linear combination of itself and any other row does not change x.
- Interchanging two columns of A permutes the order of the unknowns in x.
Example of Gauss elimination; pivoting.
Gauss elimination produces the reduced echelon form, which is helpful for finding the null space or for deciding whether the linear system is consistent.
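As a concrete illustration (a sketch of my own, not code from the slides), here is a minimal C routine for Gaussian elimination with partial pivoting followed by back substitution; the function name gauss_solve, the fixed size N, and the sample system in main are assumptions made for the example.

#include <math.h>
#include <stdio.h>

#define N 3

/* Solve A x = b by Gaussian elimination with partial pivoting.
   A and b are overwritten; the solution is returned in x.       */
static void gauss_solve(double A[N][N], double b[N], double x[N])
{
    for (int k = 0; k < N; k++) {
        /* partial pivoting: pick the row with the largest |A[i][k]| */
        int piv = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(A[i][k]) > fabs(A[piv][k])) piv = i;
        for (int j = 0; j < N; j++) {        /* swap rows k and piv */
            double t = A[k][j]; A[k][j] = A[piv][j]; A[piv][j] = t;
        }
        double tb = b[k]; b[k] = b[piv]; b[piv] = tb;

        /* eliminate column k below the diagonal */
        for (int i = k + 1; i < N; i++) {
            double m = A[i][k] / A[k][k];
            for (int j = k; j < N; j++) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    /* back substitution on the upper-triangular system */
    for (int i = N - 1; i >= 0; i--) {
        double s = b[i];
        for (int j = i + 1; j < N; j++) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
}

int main(void)
{
    double A[N][N] = {{2, 1, 1}, {4, -6, 0}, {-2, 7, 2}};
    double b[N]    = {5, -2, 9};
    double x[N];
    gauss_solve(A, b, x);
    printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);
    return 0;
}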

LU Decomposition
Factor A = L U, with L lower triangular and U upper triangular.
Then A x = (L U) x = L (U x) = b.
Let y = U x; first solve L y = b for y by forward substitution, then solve U x = y for x by backward substitution.
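A minimal sketch (mine, not from the slides) of the two triangular solves; the function name lu_solve and the convention that L and U are packed into one row-major array lu, with the unit diagonal of L not stored, are assumptions chosen to match the Crout sketch that follows the next slide.

#include <stddef.h>

/* Solve L y = b by forward substitution, then U x = y by backward
   substitution.  L is unit lower triangular (diagonal = 1), U is upper
   triangular; both are stored together in the n-by-n array lu.        */
static void lu_solve(size_t n, const double *lu, const double *b, double *x)
{
    /* forward substitution; y is built up in x */
    for (size_t i = 0; i < n; i++) {
        double s = b[i];
        for (size_t k = 0; k < i; k++)
            s -= lu[i * n + k] * x[k];
        x[i] = s;                       /* diagonal of L is 1 */
    }
    /* backward substitution */
    for (size_t i = n; i-- > 0; ) {
        double s = x[i];
        for (size_t k = i + 1; k < n; k++)
            s -= lu[i * n + k] * x[k];
        x[i] = s / lu[i * n + i];
    }
}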

Crout's Algorithm
Write A = L U with α_ij the elements of L and β_ij the elements of U.
Set α_ii = 1 for all i.
For each j = 1, 2, 3, ..., N:
(a) for i = 1, 2, ..., j:
    β_ij = a_ij - Σ_{k=1..i-1} α_ik β_kj
(b) for i = j+1, j+2, ..., N:
    α_ij = (1 / β_jj) ( a_ij - Σ_{k=1..j-1} α_ik β_kj )
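A C sketch of the update above, without pivoting (the NR routine ludcmp adds implicit partial pivoting on top of this); the name crout_lu and the in-place, row-major storage are choices made for the example.

#include <stddef.h>

/* In-place Crout LU decomposition without pivoting.  On exit,
   a[i*n+j] holds beta_ij for i <= j and alpha_ij for i > j; the unit
   diagonal of L (alpha_ii = 1) is not stored.                        */
static int crout_lu(size_t n, double *a)
{
    for (size_t j = 0; j < n; j++) {
        /* (a) rows i = 0..j: the beta_ij of U */
        for (size_t i = 0; i <= j; i++) {
            double s = a[i * n + j];
            for (size_t k = 0; k < i; k++)
                s -= a[i * n + k] * a[k * n + j];
            a[i * n + j] = s;
        }
        if (a[j * n + j] == 0.0)
            return -1;             /* zero pivot: decomposition fails */
        /* (b) rows i = j+1..n-1: the alpha_ij of L */
        for (size_t i = j + 1; i < n; i++) {
            double s = a[i * n + j];
            for (size_t k = 0; k < j; k++)
                s -= a[i * n + k] * a[k * n + j];
            a[i * n + j] = s / a[j * n + j];
        }
    }
    return 0;
}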

Order of Update in Crout’s Algorithm

Pivoting
- Pivoting is essential for stability.
- Interchange rows to get the largest (in magnitude) pivot β_jj.
- Implicit pivoting: when comparing candidates for the biggest element in a column, use values normalized so that the largest coefficient in each equation is 1.
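A hedged sketch of how implicit pivoting can be organized; the helper names row_scales and pick_pivot_row are invented for this example, and in a full Crout routine the comparison is made on the partially updated column rather than on the original entries.

#include <math.h>
#include <stddef.h>

/* scale[i] = max_j |a_ij|, computed once before the decomposition */
static void row_scales(size_t n, const double *a, double *scale)
{
    for (size_t i = 0; i < n; i++) {
        double m = 0.0;
        for (size_t j = 0; j < n; j++)
            if (fabs(a[i * n + j]) > m) m = fabs(a[i * n + j]);
        scale[i] = m;
    }
}

/* Choose the pivot row for column j by comparing the scaled
   magnitudes |a_ij| / scale[i], i.e. as if each equation had been
   normalized so that its largest coefficient is 1.                */
static size_t pick_pivot_row(size_t n, const double *a,
                             const double *scale, size_t j)
{
    size_t piv = j;
    double best = fabs(a[j * n + j]) / scale[j];
    for (size_t i = j + 1; i < n; i++) {
        double v = fabs(a[i * n + j]) / scale[i];
        if (v > best) { best = v; piv = i; }
    }
    return piv;
}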

Computational Complexity
- How many basic steps does the LU decomposition take?
- Big-O(...) notation and asymptotic performance: O(N^3).
- More precisely, how many additions and multiplications does one need to perform?
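A standard operation count (an addition of mine, not on the original slide): summing the lengths of the inner loops of Crout's algorithm gives roughly

\[
\sum_{j=1}^{N}\left(\sum_{i=1}^{j}(i-1) + \sum_{i=j+1}^{N} j\right)
\approx \frac{N^3}{6} + \frac{N^3}{6} = \frac{N^3}{3}
\]

multiplications, and about as many additions, i.e. about 2N^3/3 floating-point operations in total; each subsequent forward/backward substitution costs only O(N^2).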

Compute A^-1
Let B = A^-1; then A B = I (I is the identity matrix), i.e.
A [b_1, b_2, ..., b_N] = [e_1, e_2, ..., e_N], or A b_j = e_j for j = 1, 2, ..., N,
where b_j is the j-th column of B and e_j the j-th unit vector.
That is, to compute A^-1 we solve a linear system N times, once for each unit vector e_j, reusing the same LU decomposition of A.
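A sketch (assuming the crout_lu and lu_solve helpers from the earlier examples) of building A^-1 one column at a time from a single LU factorization.

#include <stdlib.h>

/* Invert an n-by-n matrix by solving A b_j = e_j for each unit vector
   e_j, reusing one LU decomposition; ainv is filled column by column. */
static int invert(size_t n, double *a /* destroyed */, double *ainv)
{
    if (crout_lu(n, a) != 0)
        return -1;                  /* singular (or pivoting needed) */
    double *e   = malloc(n * sizeof *e);
    double *col = malloc(n * sizeof *col);
    if (!e || !col) { free(e); free(col); return -1; }
    for (size_t j = 0; j < n; j++) {
        for (size_t i = 0; i < n; i++) e[i] = (i == j) ? 1.0 : 0.0;
        lu_solve(n, a, e, col);     /* col = j-th column of A^-1 */
        for (size_t i = 0; i < n; i++) ainv[i * n + j] = col[i];
    }
    free(e); free(col);
    return 0;
}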

Compute det(A)
- Definition of the determinant; properties of the determinant.
- Since det(A) = det(L U) = det(L) det(U) and det(L) = 1 (unit diagonal), we have det(A) = det(U) = Π_{j=1..N} β_jj, the product of the diagonal elements of U (times -1 for every row interchange made while pivoting).
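As a small companion sketch (again assuming the packed LU storage used above), the determinant is a product over the diagonal of U, with a sign that tracks row interchanges.

#include <stddef.h>

/* Determinant from the Crout factorization: the product of the diagonal
   entries beta_jj of U.  If row interchanges were made during pivoting,
   pass sign = (-1)^(number of interchanges); otherwise sign = 1.       */
static double lu_det(size_t n, const double *lu, int sign)
{
    double d = (double)sign;
    for (size_t j = 0; j < n; j++)
        d *= lu[j * n + j];
    return d;
}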

Use LAPACK
- LAPACK is a free, high-quality linear algebra solver package (downloadable at www.netlib.org/lapack/), much more sophisticated than the NR routines.
- Written in Fortran 77, but can be used from Fortran 90 or 95.
- Calling it from C is possible (but machine/compiler dependent).

What can LAPACK do?
- Solution of linear systems, A x = b.
- Least-squares problems, min ||A x - b||_2.
- Singular value decomposition, A = U Σ V^T.
- Eigenvalue problems, A x = λ x.
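As a hedged example of the first item, here is a minimal C program that solves A x = b with the LAPACK driver DGESV; the trailing-underscore symbol name and the link flag (e.g. cc demo.c -llapack) are platform-dependent assumptions, as the slides themselves note.

#include <stdio.h>

/* LAPACK driver for A x = b via LU with partial pivoting.  Fortran
   routines take every argument by reference; the trailing underscore
   is the usual (compiler-dependent) Fortran-to-C name mangling.     */
extern void dgesv_(int *n, int *nrhs, double *a, int *lda,
                   int *ipiv, double *b, int *ldb, int *info);

int main(void)
{
    int n = 3, nrhs = 1, lda = 3, ldb = 3, info;
    int ipiv[3];
    /* A stored column-major, as Fortran expects:
       A = [ 2  1  1 ]
           [ 4 -6  0 ]
           [-2  7  2 ]                                              */
    double a[9] = { 2, 4, -2,   1, -6, 7,   1, 0, 2 };
    double b[3] = { 5, -2, 9 };   /* right-hand side, overwritten by x */

    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    if (info == 0)
        printf("x = (%g, %g, %g)\n", b[0], b[1], b[2]);
    else
        printf("dgesv failed, info = %d\n", info);
    return 0;
}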

An Example of a LAPACK Fortran Routine

SUBROUTINE DSYEVR(JOBZ, RANGE, UPLO, N, A, LDA, VL, VU, IL, IU, ABSTOL, M, W, Z, LDZ, ISUPPZ, WORK, LWORK, IWORK, LIWORK, INFO)

DSYEVR computes selected eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues.

JOBZ (input) CHARACTER*1
  = 'N': compute eigenvalues only;
  = 'V': compute eigenvalues and eigenvectors.

RANGE (input) CHARACTER*1
  = 'A': all eigenvalues will be found;
  = 'V': all eigenvalues in the half-open interval (VL, VU] will be found;
  = 'I': the IL-th through IU-th eigenvalues will be found.
  For RANGE = 'V' or 'I' and IU - IL < N - 1, DSTEBZ and DSTEIN are called.

UPLO (input) CHARACTER*1
  = 'U': upper triangle of A is stored;
  = 'L': lower triangle of A is stored.

N (input) INTEGER
  The order of the matrix A. N >= 0.

A (input/output) DOUBLE PRECISION array, dimension (LDA, N)
  On entry, the symmetric matrix A. If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A. If UPLO = 'L', the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A. On exit, the lower triangle (if UPLO = 'L') or the upper triangle (if UPLO = 'U') of A, including the diagonal, is destroyed.

An Example of Calling LAPACK from C

1. Prototype declaration in C:
   dsyevr_(char *JOBZ, char *RANGE, char *UPLO, int *N, real *A, int *LDA, real *VL, real *VU, int *IL, int *IU, real *ABSTOL, int *M, real *W, real *Z, int *LDZ, int *ISUPPZ, real *WORK, int *LWORK, int *IWORK, int *LIWORK, int *INFO);
2. Pass everything as pointers.
3. Call the routine as dsyevr_(&JOBZ, &RANGE, &UPLO, &N, A, &LDA, ...).

See the program eigen.c for more detail.

Reading and References
- Read NR, Chap. 2.
- M. T. Heath, "Scientific Computing: An Introductory Survey".
- For a more thorough treatment of numerical linear algebra, see G. H. Golub & C. F. Van Loan, "Matrix Computations".
- See also J. Stoer & R. Bulirsch, "Introduction to Numerical Analysis", although it is a rather theoretical book.

Problems for Lecture 2 (numerical solution of linear systems)
1. Consider the following linear system:
(a) Solve the system with LU decomposition, following Crout's algorithm exactly, without pivoting.
(b) Find the inverse of the 3x3 matrix using LU decomposition.
(c) Find the determinant of the 3x3 matrix using LU decomposition.
2. Can we do an LU decomposition for every square, real matrix? What is the condition for the existence of an LU decomposition?