Solution of Sparse Linear Systems

Solution of Sparse Linear Systems: A Case Study in Computational Science & Engineering
Direct methods: systematic transformation of the system of equations into equivalent systems, until the unknown variables are easily solved for.
Iterative methods: starting with an initial "guess" for the unknown vector, successively "improve" the guess until it is "sufficiently" close to the solution (see the sketch below).
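As a concrete illustration of the iterative approach, here is a minimal sketch of Jacobi iteration, one simple "guess and improve" scheme. The slides do not name a particular method; the fixed size, iteration count, test matrix, and variable names below are assumptions made only for illustration.

    /* Jacobi iteration for Ax = b: each sweep recomputes every unknown from
       the current guess of the others (illustrative sketch only). */
    #include <stdio.h>

    #define N 3

    void jacobi(double A[N][N], double b[N], double x[N], int iters) {
        double xnew[N];
        for (int it = 0; it < iters; it++) {
            for (int i = 0; i < N; i++) {
                double s = b[i];
                for (int j = 0; j < N; j++)
                    if (j != i) s -= A[i][j] * x[j];
                xnew[i] = s / A[i][i];        /* improved guess for x_i */
            }
            for (int i = 0; i < N; i++) x[i] = xnew[i];
        }
    }

    int main(void) {
        double A[N][N] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
        double b[N] = {2, 4, 10};
        double x[N] = {0, 0, 0};              /* initial guess */
        jacobi(A, b, x, 50);
        printf("x = %g %g %g\n", x[0], x[1], x[2]);   /* approaches 1 2 3 */
        return 0;
    }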

Direct Solution of Linear Systems: Gaussian Elimination
The coefficient matrix is reduced to upper-triangular form by elementary row operations, and the unknowns are then solved by back-substitution after Gaussian elimination. (Figure: a worked example annotated with the row operations "div by 2", "*(-1)", "*(-3)".) A sketch of the elimination and back-substitution follows below.
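Below is a minimal sketch of Gaussian elimination with back-substitution for a dense system. The 3x3 matrix, the absence of pivoting, and all names are assumptions made for illustration; they are not taken from the slide's own example.

    /* Gaussian elimination + back-substitution for a dense N x N system
       Ax = b (no pivoting; illustrative sketch only). */
    #include <stdio.h>

    #define N 3

    void gauss_solve(double A[N][N], double b[N], double x[N]) {
        /* forward elimination: zero out the entries below each pivot */
        for (int k = 0; k < N - 1; k++) {
            for (int i = k + 1; i < N; i++) {
                double m = A[i][k] / A[k][k];     /* row multiplier */
                for (int j = k; j < N; j++)
                    A[i][j] -= m * A[k][j];
                b[i] -= m * b[k];
            }
        }
        /* back-substitution on the upper-triangular system */
        for (int i = N - 1; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < N; j++)
                s -= A[i][j] * x[j];
            x[i] = s / A[i][i];
        }
    }

    int main(void) {
        double A[N][N] = {{2, 1, 1}, {4, 3, 3}, {8, 7, 9}};
        double b[N] = {4, 10, 24};
        double x[N];
        gauss_solve(A, b, x);
        printf("x = %g %g %g\n", x[0], x[1], x[2]);   /* expect 1 1 1 */
        return 0;
    }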

LU Decomposition
More efficient than Gaussian elimination when solving many systems with the same coefficient matrix. First A is decomposed into the product A = LU. To solve the linear system Ax = b, we need to solve (LU)x = b. Let z = Ux; then L(Ux) = b, i.e. Lz = b, which can be solved for z by forward-substitution. Since Ux = z and z is now known, we can solve for x by back-substitution. A sketch of the two substitution sweeps follows below.
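The two substitution sweeps can be sketched as follows, assuming L and U have already been computed (L unit lower triangular, U upper triangular, dense storage). The concrete factors and names below are illustrative assumptions, not the slide's example.

    /* Solve Ax = b given A = LU: forward-substitution for Lz = b, then
       back-substitution for Ux = z (illustrative sketch). */
    #include <stdio.h>

    #define N 3

    void lu_solve(double L[N][N], double U[N][N], double b[N], double x[N]) {
        double z[N];
        for (int i = 0; i < N; i++) {          /* Lz = b, top to bottom */
            z[i] = b[i];
            for (int j = 0; j < i; j++)
                z[i] -= L[i][j] * z[j];        /* L has unit diagonal */
        }
        for (int i = N - 1; i >= 0; i--) {     /* Ux = z, bottom to top */
            x[i] = z[i];
            for (int j = i + 1; j < N; j++)
                x[i] -= U[i][j] * x[j];
            x[i] /= U[i][i];
        }
    }

    int main(void) {
        double L[N][N] = {{1, 0, 0}, {2, 1, 0}, {4, 3, 1}};
        double U[N][N] = {{2, 1, 1}, {0, 1, 1}, {0, 0, 2}};
        double b[N] = {4, 10, 24};
        double x[N];
        lu_solve(L, U, b, x);
        printf("x = %g %g %g\n", x[0], x[1], x[2]);   /* expect 1 1 1 */
        return 0;
    }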

Cholesky Factorization
If A is symmetric and positive definite, it can be factored in the form A = L L^T, where L is lower triangular. Cholesky factorization requires only around half as many arithmetic operations as LU decomposition. The forward- and back-substitution process is the same as with LU decomposition. A sketch of the factorization follows below.
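A minimal sketch of the factorization itself, assuming dense storage of a symmetric positive definite matrix; the example matrix and names are illustrative assumptions.

    /* Cholesky factorization A = L L^T (only the lower triangle of A is
       read; illustrative sketch). */
    #include <stdio.h>
    #include <math.h>

    #define N 3

    void cholesky(double A[N][N], double L[N][N]) {
        for (int i = 0; i < N; i++) {
            for (int j = 0; j <= i; j++) {
                double s = A[i][j];
                for (int k = 0; k < j; k++)
                    s -= L[i][k] * L[j][k];
                L[i][j] = (i == j) ? sqrt(s) : s / L[j][j];
            }
            for (int j = i + 1; j < N; j++)
                L[i][j] = 0.0;                 /* upper triangle of L is zero */
        }
    }

    int main(void) {
        double A[N][N] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
        double L[N][N];
        cholesky(A, L);
        for (int i = 0; i < N; i++)
            printf("%8.4f %8.4f %8.4f\n", L[i][0], L[i][1], L[i][2]);
        return 0;
    }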

Sparse Linear Systems
A significant fraction of the matrix elements are known to be zero, e.g. a matrix arising from a finite-difference discretization of a PDE: there are at most 5 non-zero elements in any row of the matrix, irrespective of the size of the matrix (number of grid points). The sparse matrix is represented in some compact form that keeps information about the non-zero elements.

Example (six grid points with a 5-point stencil; rows/columns 1-6):

     4 -1  0 -1  0  0
    -1  4 -1  0 -1  0
     0 -1  4  0  0 -1
    -1  0  0  4 -1  0
     0 -1  0 -1  4 -1
     0  0 -1  0 -1  4

Sparse Linear Systems
For a 100-by-100 grid with a finite-difference discretization using a 5-point stencil, less than 0.05% of the matrix elements are non-zero: a physical n x n grid yields an n^2 x n^2 sparse matrix, so for n = 100 there are at most 5 * 10^4 non-zeros among 10^8 entries.

Compressed Sparse Row Format
A commonly used representation for sparse matrices. For the 6x6 example matrix (rows/columns now numbered 0-5):

         0   1   2   3   4   5
    0    4  -1   0  -1   0   0
    1   -1   4  -1   0  -1   0
    2    0  -1   4   0   0  -1
    3   -1   0   0   4  -1   0
    4    0  -1   0  -1   4  -1
    5    0   0  -1   0  -1   4

the CSR arrays are:

    rb  = 0 3 7 10 13 17 20
    a   = 4 -1 -1  -1 4 -1 -1  -1 4 -1  -1 4 -1  -1 -1 4 -1  -1 -1 4
    col = 0 1 3  0 1 2 4  1 2 5  0 3 4  1 3 4 5  2 4 5

Dense MV multiply:

    for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
        y[i] += a[i][j] * x[j];

Sparse (CSR) MV multiply:

    for (i = 0; i < n; i++)
      for (j = rb[i]; j < rb[i+1]; j++)
        y[i] += a[j] * x[col[j]];
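The CSR multiply above can be made self-contained as follows, using exactly the rb, a, and col arrays of the 6x6 example; the driver and the test vector are illustrative assumptions.

    /* Sparse matrix-vector multiply y = A x with A in CSR form
       (the 6x6 example matrix from the slide). */
    #include <stdio.h>

    #define N 6
    #define NNZ 20

    int rb[N + 1] = {0, 3, 7, 10, 13, 17, 20};
    int col[NNZ]  = {0, 1, 3,  0, 1, 2, 4,  1, 2, 5,
                     0, 3, 4,  1, 3, 4, 5,  2, 4, 5};
    double a[NNZ] = {4, -1, -1,  -1, 4, -1, -1,  -1, 4, -1,
                     -1, 4, -1,  -1, -1, 4, -1,  -1, -1, 4};

    void spmv(const double x[N], double y[N]) {
        for (int i = 0; i < N; i++) {
            y[i] = 0.0;
            for (int j = rb[i]; j < rb[i + 1]; j++)
                y[i] += a[j] * x[col[j]];      /* only stored non-zeros touched */
        }
    }

    int main(void) {
        double x[N] = {1, 1, 1, 1, 1, 1};      /* y = A * ones = row sums */
        double y[N];
        spmv(x, y);
        for (int i = 0; i < N; i++) printf("%g ", y[i]);
        printf("\n");
        return 0;
    }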

Fill-in Non-Zeros
During the solution of a sparse linear system (by Gaussian elimination, LU, or Cholesky), row updates often create non-zero entries in positions that were originally zero. In the slide's example, the updates using row 1 produce fill-in non-zeros (marked F in the figure).

Effect of Reordering on Fill-in
Re-ordering the equations (rows) or unknowns (columns) can result in a significant change in the number of fill-in non-zeros, and hence in the time for matrix factorization. (Figure: a matrix that fills in under Gaussian elimination; after re-ordering its rows/columns, Gaussian elimination produces no fill-in.) A small sketch that counts fill-in for a given ordering follows below.
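The sketch below symbolically runs Gaussian elimination on a 0/1 sparsity pattern and counts the fill-in produced by a given elimination order. The dense pattern storage, the "arrow" test matrix, and all names are illustrative assumptions; the slide's own figure may use a different example.

    /* Count fill-in of symbolic Gaussian elimination for a given ordering.
       pattern[i][j] = 1 iff A_ij != 0; the diagonal is assumed non-zero. */
    #include <stdio.h>
    #include <string.h>

    #define N 6

    int count_fill(int pattern[N][N], const int order[N]) {
        int s[N][N];
        memcpy(s, pattern, sizeof s);
        int fill = 0;
        for (int step = 0; step < N; step++) {
            int p = order[step];                   /* pivot at this step */
            for (int i = step + 1; i < N; i++) {
                int r = order[i];
                if (!s[r][p]) continue;            /* row r not updated by row p */
                for (int j = step + 1; j < N; j++) {
                    int c = order[j];
                    if (s[p][c] && !s[r][c]) {     /* new non-zero created */
                        s[r][c] = 1;
                        fill++;
                    }
                }
            }
        }
        return fill;
    }

    int main(void) {
        /* "arrow" pattern: dense first row and column, otherwise diagonal */
        int arrow[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                arrow[i][j] = (i == j || i == 0 || j == 0);
        int natural[N]  = {0, 1, 2, 3, 4, 5};
        int reversed[N] = {5, 4, 3, 2, 1, 0};
        printf("dense row/col eliminated first: %d fill-ins\n",
               count_fill(arrow, natural));
        printf("dense row/col eliminated last:  %d fill-ins\n",
               count_fill(arrow, reversed));
        return 0;
    }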

Associated Graph of a Matrix
A graph-based view of a matrix's sparsity structure is extremely useful in generating low-fill re-orderings. The associated graph of a symmetric sparse matrix has a vertex corresponding to each row/column of the matrix, and an edge {i, j} corresponding to each off-diagonal non-zero entry A_ij. (Figure: the associated graph of the 6x6 example matrix, vertices 1-6.) A sketch that lists these edges from the CSR arrays follows below.
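For the 6x6 example, the edges of the associated graph can be read directly off the CSR arrays, as in the short sketch below (0-based rows printed as vertices 1-6; the arrays repeat the slide's data, the driver is an illustrative assumption).

    /* List the edges of the associated graph of a symmetric CSR matrix:
       one vertex per row/column, one edge per off-diagonal non-zero. */
    #include <stdio.h>

    #define N 6
    #define NNZ 20

    int rb[N + 1] = {0, 3, 7, 10, 13, 17, 20};
    int col[NNZ]  = {0, 1, 3,  0, 1, 2, 4,  1, 2, 5,
                     0, 3, 4,  1, 3, 4, 5,  2, 4, 5};

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = rb[i]; j < rb[i + 1]; j++)
                if (col[j] > i)                    /* print each edge once */
                    printf("edge {%d, %d}\n", i + 1, col[j] + 1);
        return 0;
    }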

Fill-in and Graph Transformation
Row i updates row j (j > i) iff A_ji is non-zero; in the associated graph this non-zero corresponds to the edge {i, j}. The update of row j by row i can create a fill-in non-zero A_jk for every non-zero A_ik. After all updates from row i, all neighbors of vertex i in the associated graph form a clique.

Fill-in and Graph Transformation
Each row's effect on fill-in generation is captured by the "clique" transformation on the associated graph. The graph view is valuable in suggesting matrix re-ordering approaches. (Figure: the sequence of elimination graphs for the 6-vertex example.)

Matrix Re-ordering: Minimum Degree
A graph-based algorithm for generating a low-fill re-ordering. The matrix permutation is viewed as a node-numbering problem on the associated graph: low-degree nodes are numbered early, so that they are removed without adding many fill-in edges. For the example in the figure, minimum degree finds a no-fill ordering. (Figure: step-by-step elimination, with the current vertex degrees d = 1, 2, 3 annotated.) A sketch of the greedy heuristic follows below.
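A minimal sketch of the greedy minimum-degree heuristic, including the clique transformation from the previous slides, is shown below. The dense 0/1 adjacency matrix, the absence of tie-breaking rules, and all names are simplifying assumptions; production codes use quotient graphs and approximate degrees.

    /* Greedy minimum-degree ordering on a symmetric sparsity pattern given
       as a 0/1 adjacency matrix (adj[i][j] = 1 iff A_ij != 0, i != j). */
    #include <stdio.h>

    #define N 6

    void minimum_degree(int adj[N][N], int order[N]) {
        int eliminated[N] = {0};
        for (int step = 0; step < N; step++) {
            int best = -1, best_deg = N + 1;
            for (int v = 0; v < N; v++) {          /* pick a min-degree vertex */
                if (eliminated[v]) continue;
                int deg = 0;
                for (int w = 0; w < N; w++)
                    if (!eliminated[w] && adj[v][w]) deg++;
                if (deg < best_deg) { best_deg = deg; best = v; }
            }
            order[step] = best;
            eliminated[best] = 1;
            /* clique transformation: connect the remaining neighbours of
               best, modelling the fill-in its elimination creates */
            for (int u = 0; u < N; u++) {
                if (eliminated[u] || !adj[best][u]) continue;
                for (int w = 0; w < N; w++) {
                    if (eliminated[w] || w == u || !adj[best][w]) continue;
                    adj[u][w] = adj[w][u] = 1;
                }
            }
        }
    }

    int main(void) {
        /* adjacency of the 6x6 example matrix (vertices 1..6 -> 0..5) */
        int adj[N][N] = {
            {0,1,0,1,0,0}, {1,0,1,0,1,0}, {0,1,0,0,0,1},
            {1,0,0,0,1,0}, {0,1,0,1,0,1}, {0,0,1,0,1,0},
        };
        int order[N];
        minimum_degree(adj, order);
        printf("elimination order:");
        for (int i = 0; i < N; i++) printf(" %d", order[i] + 1);
        printf("\n");
        return 0;
    }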

Re-ordered Matrix
(Figure: the example graph with its vertices renumbered, and the correspondingly permuted matrix.)

Matrix Re-ordering: Nested Dissection
Find a minimal vertex separator that bisects the associated graph; number the separator nodes last; apply the procedure recursively to both halves. Property: given a numbering of the nodes, fill-in A_ij (j > i) is created iff there is a path from i to j in the graph that passes only through vertices numbered lower than i. Hence there are no fill-in edges between one half of the partition and the other. (Figure: a 7x7 grid with the two halves numbered 1-21 and 22-42 and the separator numbered 43-49.) A sketch of this ordering on a grid follows below.
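A minimal sketch of nested dissection ordering for the 5-point-stencil graph of an R x C grid follows; the recursion always uses the middle row or column of the current subgrid as the separator and numbers it last. Real codes find separators with general graph partitioners; the grid-only recursion and all names here are illustrative assumptions.

    /* Nested dissection ordering of an R x C grid: number each half first,
       then the middle-row/column separator, recursively. */
    #include <stdio.h>

    #define R 7
    #define C 7

    static int next_label = 0;            /* next elimination position */
    static int perm[R * C];               /* perm[vertex] = its position */

    static int vid(int i, int j) { return i * C + j; }

    static void dissect(int r0, int r1, int c0, int c1) {
        int h = r1 - r0, w = c1 - c0;
        if (h <= 0 || w <= 0) return;
        if (h <= 2 && w <= 2) {            /* small block: number directly */
            for (int i = r0; i < r1; i++)
                for (int j = c0; j < c1; j++)
                    perm[vid(i, j)] = next_label++;
            return;
        }
        if (w >= h) {                      /* separator = middle column */
            int mid = c0 + w / 2;
            dissect(r0, r1, c0, mid);
            dissect(r0, r1, mid + 1, c1);
            for (int i = r0; i < r1; i++)
                perm[vid(i, mid)] = next_label++;   /* separator last */
        } else {                           /* separator = middle row */
            int mid = r0 + h / 2;
            dissect(r0, mid, c0, c1);
            dissect(mid + 1, r1, c0, c1);
            for (int j = c0; j < c1; j++)
                perm[vid(mid, j)] = next_label++;
        }
    }

    int main(void) {
        dissect(0, R, 0, C);
        for (int i = 0; i < R; i++) {      /* 1-based position of each node */
            for (int j = 0; j < C; j++)
                printf("%3d", perm[vid(i, j)] + 1);
            printf("\n");
        }
        return 0;
    }

For a 7x7 grid this assigns positions 1-21 to one half, 22-42 to the other, and 43-49 to the middle-column separator, matching the numbering in the figure.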

Comparison of Ordering Schemes
(Figures: number of non-zeros after fill-in, and sparse matrix factorization time, for the different ordering schemes.)