Matrix Factorization Lecture #7 EEE 574 Dr. Dan Tylavsky.



Matrix Factorization © Copyright 1999 Daniel Tylavsky
Let's look at solving Ax = b, where A and b are dense.
Two possible (general) approaches:
–Indirect Methods: methods which asymptotically approach (but never reach) the true solution as the number of steps increases. Ex: Point Jacobi, Gauss-Seidel, Successive Over-Relaxation, Multigrid.
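The indirect-method idea can be sketched with a minimal Gauss-Seidel iteration; this is only an illustration (the matrix, tolerance, and iteration cap are illustrative choices, not from the lecture):

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Iteratively approach the solution of Ax = b; never exact in general."""
    n = len(A)
    x = [0.0] * n                      # initial guess x0 = 0
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            # Gauss-Seidel: use already-updated components x[0..i-1]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

# Diagonally dominant example, so the iteration converges.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = gauss_seidel(A, b)   # exact solution is [1/11, 7/11]
```

Point Jacobi differs only in using `x_old[j]` instead of `x[j]` inside the inner sum.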

–Direct Methods: methods that yield the theoretically exact result in a fixed number of steps (assuming an infinitely precise computing engine). Ex:
Elimination Methods
–Gauss Elimination
–Gauss-Jordan Elimination
–Cholesky Factorization (numerically symmetric)
–LU or LDU factorization (a.k.a. Product Form of the Inverse; variations include Crout, Doolittle, Banachiewicz)
Orthogonalization Methods
–QR factorization
–Givens Rotations

Conjugate Gradient Methods: exact solution in a number of steps equal to the number of distinct eigenvalues (with various variations).
Advantages of Elimination Methods:
–No convergence criteria needed (though pivoting is required in the general, non-positive-definite case).
–Factors can be used repeatedly.
–Factors can be 'easily' modified to accommodate matrix/network changes.
–Partially factorized matrices give linear network equivalents.
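The "factors can be used repeatedly" advantage is easy to see in code: factor once, then solve cheaply for each new right-hand side. A minimal sketch (Doolittle LU, no pivoting, illustrative names and values):

```python
def lu_factor(A):
    """Return unit-lower L and upper U with A = LU (no pivoting)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]   # zero out below the pivot
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n                        # forward substitution: Ly = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n                        # back substitution: Ux = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
L, U = lu_factor(A)                      # O(n^3) work, done once
x1 = lu_solve(L, U, [3.0, 4.0])          # each solve is only O(n^2)
x2 = lu_solve(L, U, [1.0, 1.0])
```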

–Definitions:
Symmetric: A = A^T.
Positive Definite: x^T A x > 0 for all x ≠ 0 (A > 0 is shorthand notation).
Diagonally Dominant: |a_ii| ≥ Σ_{j≠i} |a_ij| for all i, with strict inequality for at least one i.
Properly Diagonally Dominant: |a_ii| > Σ_{j≠i} |a_ij| for all i.
Rank: order of the largest sub-matrix with nonzero determinant.
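The two dominance definitions translate directly into code; a small sketch with illustrative function names and an illustrative test matrix:

```python
def diagonally_dominant(A):
    """|a_ii| >= sum of off-diagonal |a_ij| in every row, strict in at least one."""
    n = len(A)
    strict = False
    for i in range(n):
        off = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off:
            return False
        if abs(A[i][i]) > off:
            strict = True
    return strict

def properly_diagonally_dominant(A):
    """|a_ii| > sum of off-diagonal |a_ij| in every row."""
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

A = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]   # every diagonal beats its row's off-diagonal sum
```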

–Properties:
A symmetric positive definite (A > 0) matrix has all positive real eigenvalues.
If A > 0, then Det(A) = |A| > 0.
A rank-1 matrix must have the outer-product form xy^T.
A symmetric A > 0 has a unique Cholesky factorization: A = U^T U, or A = U'^T D U'.
If A is properly diagonally dominant and all of its diagonal elements are positive, then A is positive definite.

–Factorizing a non-symmetric matrix.
–Claim: A can be expanded in the following form, partitioning off the first row and column (a_1* denotes the first row absent the diagonal, a_*1 the first column absent the diagonal, and A~ the trailing block):

A = [ a_11  a_1* ] = [ 1           0 ] [ a_11  a_1* ]
    [ a_*1  A~   ]   [ a_*1/a_11   I ] [ 0     A_2  ]

–To find the value of A_2, multiply out the right-hand side. Therefore:

A_2 = A~ − a_*1 a_1* / a_11

–i.e., the trailing block minus the outer-product term of the first row and first column (absent the diagonal element).
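One elimination step can be sketched directly from this formula; the reduced matrix A_2 is the trailing block minus the scaled outer product (the matrix values below are illustrative):

```python
def schur_step(A):
    """Return A_2 = A~ - a_*1 a_1* / a_11, the once-reduced (n-1)x(n-1) matrix."""
    n = len(A)
    a11 = A[0][0]
    return [[A[i][j] - A[i][0] * A[0][j] / a11 for j in range(1, n)]
            for i in range(1, n)]

A = [[2.0, 1.0, 1.0],
     [4.0, 5.0, 3.0],
     [2.0, 3.0, 4.0]]
A2 = schur_step(A)   # 2x2 reduced matrix
```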

–Do this repeatedly until we have the product form of A.

–To find A_3, multiply out the result in the same way, expanding the reduced matrix A_2 about its own first row and column.
–We can write the analogous factorization for A_2.

–Continuing on yields the product form:

A = L_1 L_2 ⋯ L_{n−1} U

–where each L_k is a unit lower-triangular elementary matrix carrying the k-th column of multipliers, and U is the remaining upper-triangular factor.
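The product form can be built and checked mechanically; a sketch that collects the elementary factors L_k and verifies that multiplying them back recovers A (matrix values are illustrative):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def product_form(A):
    """Return the elementary factors [L_1, ..., L_{n-1}] and U with A = L_1...L_{n-1} U."""
    n = len(A)
    U = [row[:] for row in A]
    factors = []
    for k in range(n - 1):
        Lk = [[float(i == j) for j in range(n)] for i in range(n)]
        for i in range(k + 1, n):
            Lk[i][k] = U[i][k] / U[k][k]        # multiplier stored in L_k
            for j in range(k, n):
                U[i][j] -= Lk[i][k] * U[k][j]   # eliminate below the pivot
        factors.append(Lk)
    return factors, U

A = [[2.0, 1.0], [4.0, 5.0]]
factors, U = product_form(A)
# Multiplying the factors back together recovers A:
B = U
for Lk in reversed(factors):
    B = matmul(Lk, B)
```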

–Elementary-Matrix-Multiplication Superposition Principle: when unit-triangular elementary matrices, each with its nonzero off-diagonal entries confined to a single column (or row), are multiplied in order, those columns (rows) simply superpose — the product is the identity with each factor's entries copied into place.

–This implies we can write the combined lower-triangular factor directly: L = L_1 L_2 ⋯ L_{n−1}, with column k below the diagonal taken from L_k.

–Taking the transpose of the elementary-matrix-multiplication superposition principle gives the same result for elementary upper-triangular (row) factors.
–Hence we can write: A = LU.
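The superposition principle is easy to verify numerically: multiplying the elementary factors in order just overlays their columns. A small sketch with illustrative multiplier values:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def elementary(n, k, col):
    """Identity with `col` placed below the diagonal in column k."""
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    for i, v in zip(range(k + 1, n), col):
        E[i][k] = v
    return E

n = 3
L1 = elementary(n, 0, [2.0, 3.0])   # multipliers in column 0
L2 = elementary(n, 1, [4.0])        # multiplier in column 1
L = matmul(L1, L2)
# The product is the identity with both columns superposed:
# [[1, 0, 0], [2, 1, 0], [3, 4, 1]]
```

Note the ordering matters: multiplying L_1 L_2 ⋯ in increasing column order is what makes the plain overlay exact.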

–Let's work through an example:
–Divide the top row by a_11.
–Compute the updated entries of the remaining rows.
–Divide the 2nd row by its updated pivot.
–Divide the 3rd row by its updated pivot.
–Compute the final updated entry.

Individual Problem: Verify that the previous answer is correct.
Note that the result isn't symmetric.
Individual Problem: Factor A = LU for:

To make this symmetric, use the L'DL'^T form.
–Let D be the diagonal matrix of pivots.
–Write A = L'DL'^T.
–This divides each pivot row and the corresponding column by the diagonal at the same time.
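The symmetric form can be sketched as a direct LDL^T factorization, which divides the pivot row and column by the diagonal in one pass (function name and matrix are illustrative):

```python
def ldlt(A):
    """Factor symmetric A = L D L^T with unit-lower L and diagonal D (no square roots)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    D = [0.0] * n
    work = [row[:] for row in A]
    for k in range(n):
        D[k] = work[k][k]                       # pivot goes to D
        for i in range(k + 1, n):
            L[i][k] = work[i][k] / D[k]         # divide the pivot column...
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # ...and update symmetrically: row and column share the divide
                work[i][j] -= L[i][k] * D[k] * L[j][k]
    return L, D

A = [[4.0, 2.0], [2.0, 5.0]]
L, D = ldlt(A)    # L = [[1, 0], [0.5, 1]], D = [4, 4]
```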

Individual Problem: Modify the LU factors of A to get the L'DL'^T form:

Data structure for storing the LU factors?
For LU factors we could use the LC-D-UR/X scheme (L stored by columns, the diagonal D, U stored by rows):

For L'DL'^T factors we could use the CR(L)-D/X scheme:
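One plausible layout consistent with the LC-D-UR naming above (L by columns, diagonal D, U by rows) can be sketched as follows; this is an assumption about the scheme's intent, not the lecture's exact data structure, and the field names are illustrative:

```python
def pack_lu(L, U):
    """Pack unit-lower L by columns, the diagonal of U as D, and strict-upper U by rows."""
    n = len(L)
    D = [U[i][i] for i in range(n)]                          # diagonal of U
    L_cols = {j: [(i, L[i][j]) for i in range(j + 1, n) if L[i][j] != 0.0]
              for j in range(n)}                             # L stored by columns
    U_rows = {i: [(j, U[i][j]) for j in range(i + 1, n) if U[i][j] != 0.0]
              for i in range(n)}                             # U stored by rows
    return L_cols, D, U_rows

L = [[1.0, 0.0], [2.0, 1.0]]
U = [[2.0, 1.0], [0.0, 3.0]]
L_cols, D, U_rows = pack_lu(L, U)   # D = [2.0, 3.0]
```

Storing L by columns and U by rows matches the access patterns of forward and back substitution, which is presumably the motivation for the scheme.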

–If we store and work through A by rows, the factorization is done in a slightly different way:
–Divide the top row by a_11.
–Compute the updated entries for the 2nd row, then divide the 2nd row by its updated pivot.
–Compute the updated entries for the 3rd row, and so on.
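The row-at-a-time organization can be sketched as Doolittle factorization computed row by row: each row of L and U is finished in one pass, using only previously completed rows (packed-storage layout and values are illustrative):

```python
def lu_by_rows(A):
    """In-place LU, row by row: L (strict lower) and U share one array."""
    n = len(A)
    LU = [row[:] for row in A]
    for i in range(n):
        for j in range(n):
            # subtract contributions of already-finished rows
            s = sum(LU[i][k] * LU[k][j] for k in range(min(i, j)))
            LU[i][j] -= s
            if j < i:
                LU[i][j] /= LU[j][j]   # multiplier: divide by earlier pivot
    return LU

A = [[2.0, 1.0], [4.0, 5.0]]
LU = lu_by_rows(A)   # packed: L = [[1,0],[2,1]], U = [[2,1],[0,3]]
```

This ordering touches A strictly by rows, which is what makes it attractive when the sparse matrix is stored row-wise.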

Individual Problem: Finish the factorization.

–Cholesky Factorization (symmetric A = LL^T).
–A can be expanded in the same repeated form as before, with each pivot split symmetrically between L and L^T, etc.
–Not used here: calculating square roots is expensive.
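For completeness, a minimal Cholesky sketch; note the square root taken at every pivot, which is the cost the slide objects to (matrix values are illustrative):

```python
import math

def cholesky(A):
    """Factor symmetric positive definite A = L L^T (lower-triangular L)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)    # one square root per pivot
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)   # L = [[2, 0], [1, 2]]
```

The L'DL'^T form of the previous slides yields the same symmetric factorization while avoiding these square roots entirely.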

The End