Solving Linear Systems: Iterative Methods and Sparse Systems

Solving Linear Systems: Iterative Methods and Sparse Systems COS 323

Direct vs. Iterative Methods
- So far, we have looked at direct methods for solving linear systems:
  - Predictable number of steps
  - No answer until the very end
- Alternative: iterative methods
  - Start with an approximate answer
  - Each iteration improves accuracy
  - Stop once the estimated error drops below a tolerance

Benefits of Iterative Algorithms
- Some iterative algorithms are designed for accuracy:
  - Direct methods are subject to roundoff error
  - Iterate to reduce the error to O(ε_machine), i.e. machine precision
- Some algorithms produce the answer faster
  - Most important class: sparse matrix solvers
  - Speed depends on the number of nonzero elements, not the total number of elements
- Today: iterative improvement of accuracy, and solving sparse systems (not necessarily iteratively)

Iterative Improvement
- Suppose you've solved (or think you've solved) some system Ax = b
- Can check the answer by computing the residual: r = b − A x_computed
- If r is small (compared to b), x is accurate
- What if it's not?

Iterative Improvement
- A large residual is caused by error in x: e = x_correct − x_computed
- If we knew the error, we could try to improve x: x_correct = x_computed + e
- Solve for the error:
    A x_computed = A(x_correct − e) = b − r
    A x_correct − A e = b − r
    A e = r

Iterative Improvement
- So: compute the residual, solve for e, and apply the correction to the estimate of x
- If the original system was solved using LU, this is relatively fast (relative to O(n³), that is):
  - O(n²) matrix/vector multiplication + O(n) vector subtraction to compute r
  - O(n²) forward/backsubstitution to solve for e
  - O(n) vector addition to correct the estimate of x
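As a sketch of one improvement step in C (assuming, hypothetically, an lu_solve() that reuses a precomputed LU factorization for the O(n²) forward/backsubstitution; the name and signature are illustrative, not from the slides):

    #include <stdlib.h>

    /* Hypothetical: solves A e = r by forward/backsubstitution
     * against a precomputed LU factorization (with permutation). */
    void lu_solve(const double *LU, const int *perm,
                  const double *rhs, double *out);

    /* One step of iterative improvement (sketch). */
    void improve(int n, const double *A,      /* original matrix, row-major */
                 const double *LU, const int *perm,
                 const double *b, double *x)  /* x: estimate, updated in place */
    {
        double *r = malloc(n * sizeof(double));
        double *e = malloc(n * sizeof(double));

        for (int i = 0; i < n; i++) {         /* r = b - A x: O(n^2) */
            double Axi = 0;
            for (int j = 0; j < n; j++)
                Axi += A[i*n + j] * x[j];
            r[i] = b[i] - Axi;
        }

        lu_solve(LU, perm, r, e);             /* solve A e = r: O(n^2) */

        for (int i = 0; i < n; i++)           /* x += e: O(n) */
            x[i] += e[i];

        free(r);
        free(e);
    }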

Sparse Systems
- Many applications require the solution of large linear systems (n = thousands to millions)
  - Local constraints or interactions: most entries are 0
  - Wasteful to store all n² entries
  - Difficult or impossible to use O(n³) algorithms
- Goal: solve the system with:
  - Storage proportional to the number of nonzero elements
  - Running time << n³

Special Case: Band Diagonal
- Last time: tridiagonal (or band diagonal) systems
  - Storage O(n): only the relevant diagonals
  - Time O(n): Gauss-Jordan with bookkeeping
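As a concrete sketch of that O(n) elimination for the plain tridiagonal case (the Thomas algorithm; a minimal version assuming no pivoting is needed, e.g. for a diagonally dominant matrix):

    /* Thomas algorithm: solve a tridiagonal system in O(n).
     * a[i] = subdiagonal (a[0] unused), b[i] = diagonal,
     * c[i] = superdiagonal (c[n-1] unused), d[i] = right-hand side.
     * Overwrites c and d; solution is written to x.
     * Assumes no pivoting is required (e.g., diagonally dominant). */
    void thomas(int n, const double *a, const double *b,
                double *c, double *d, double *x)
    {
        c[0] /= b[0];
        d[0] /= b[0];
        for (int i = 1; i < n; i++) {             /* forward elimination */
            double m = b[i] - a[i] * c[i-1];
            if (i < n - 1)
                c[i] /= m;
            d[i] = (d[i] - a[i] * d[i-1]) / m;
        }
        x[n-1] = d[n-1];                          /* backsubstitution */
        for (int i = n - 2; i >= 0; i--)
            x[i] = d[i] - c[i] * x[i+1];
    }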

Cyclic Tridiagonal
- Interesting extension: cyclic tridiagonal, a tridiagonal matrix plus nonzero corner entries (e.g. from periodic boundary conditions):

    | b1  c1              β  |
    | a2  b2  c2             |
    |     a3  b3  c3         |
    |         .   .   .      |
    | α              an  bn  |

- Could derive yet another special-case algorithm, but there's a better way

Updating Inverse
- Suppose we have some fast way of finding A⁻¹ for some matrix A
- Now A changes in a special way: A* = A + u vᵀ for some n×1 vectors u and v
- Goal: find a fast way of computing (A*)⁻¹
  - Eventually, a fast way of solving (A*) x = b

Sherman-Morrison Formula

    (A + u vᵀ)⁻¹ = A⁻¹ − (A⁻¹ u vᵀ A⁻¹) / (1 + vᵀ A⁻¹ u)

- Applying this to x = (A*)⁻¹ b:

    x = A⁻¹ b − (A⁻¹ u vᵀ A⁻¹ b) / (1 + vᵀ A⁻¹ u)

- So if we let y solve A y = b and z solve A z = u, then

    x = y − ( (vᵀ y) / (1 + vᵀ z) ) z

Applying Sherman-Morrison Let’s consider cyclic tridiagonal again: Take

Applying Sherman-Morrison
- Solve A y = b and A z = u using the special fast (tridiagonal) algorithm
- Applying Sherman-Morrison then takes a couple of dot products
- Total: O(n) time
- Generalization for several simultaneous rank-one corrections: the Woodbury formula
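Putting the pieces together, a sketch of the O(n) cyclic solve in C, reusing the thomas() sketch above (the choice γ = −b₁ follows the Numerical Recipes suggestion; all names here are illustrative):

    #include <stdlib.h>
    #include <string.h>

    /* Solve a cyclic tridiagonal system via Sherman-Morrison (sketch).
     * sub/diag/sup hold the tridiagonal part; alpha = bottom-left corner,
     * beta = top-right corner. thomas() overwrites its c and d arguments,
     * so scratch copies are made for each solve. */
    void cyclic_thomas(int n, const double *sub, const double *diag,
                       const double *sup, double alpha, double beta,
                       const double *rhs, double *x)
    {
        double gamma = -diag[0];                 /* free nonzero parameter */
        double *b2 = malloc(n * sizeof(double)); /* modified diagonal */
        double *c1 = malloc(n * sizeof(double));
        double *c2 = malloc(n * sizeof(double));
        double *u  = malloc(n * sizeof(double));
        double *r  = malloc(n * sizeof(double));
        double *y  = malloc(n * sizeof(double));
        double *z  = malloc(n * sizeof(double));

        memcpy(b2, diag, n * sizeof(double));
        b2[0]   -= gamma;                        /* compensate for u v^T */
        b2[n-1] -= alpha * beta / gamma;

        for (int i = 0; i < n; i++) u[i] = 0;
        u[0] = gamma;                            /* u = (gamma,0,...,0,alpha)^T */
        u[n-1] = alpha;                          /* v = (1,0,...,0,beta/gamma)^T */

        memcpy(c1, sup, n * sizeof(double));
        memcpy(r, rhs, n * sizeof(double));
        thomas(n, sub, b2, c1, r, y);            /* A y = b */

        memcpy(c2, sup, n * sizeof(double));
        thomas(n, sub, b2, c2, u, z);            /* A z = u (u is scratch) */

        double vy = y[0] + (beta / gamma) * y[n-1];   /* v^T y */
        double vz = z[0] + (beta / gamma) * z[n-1];   /* v^T z */
        double fact = vy / (1.0 + vz);
        for (int i = 0; i < n; i++)              /* Sherman-Morrison correction */
            x[i] = y[i] - fact * z[i];

        free(b2); free(c1); free(c2); free(u); free(r); free(y); free(z);
    }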

More General Sparse Matrices
- More generally, we can represent sparse matrices by noting which elements are nonzero
- Critical for Ax and Aᵀx to be efficient: proportional to the number of nonzero elements
  - We'll see an algorithm for solving Ax = b using only these two operations!

Compressed Sparse Row Format
- Three arrays:
  - values: the actual nonzero numbers in the matrix
  - cols: column of the corresponding entry in values
  - rows: index of the first entry in each row
- Example (zero-based):

    values: 3 2 3 2 5 1 2 3
    cols:   1 2 3 0 3 1 2 3
    rows:   0 3 5 5 8

Compressed Sparse Row Format
- Multiplying Ax:

    for (i = 0; i < n; i++) {
        out[i] = 0;
        for (j = rows[i]; j < rows[i+1]; j++)
            out[i] += values[j] * x[cols[j]];
    }

    values: 3 2 3 2 5 1 2 3
    cols:   1 2 3 0 3 1 2 3
    rows:   0 3 5 5 8
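For concreteness, a self-contained version of this multiply, using the example arrays from the slide (they encode a 4×4 matrix whose third row is all zeros):

    #include <stdio.h>

    /* out = A x for A in CSR format */
    static void csr_matvec(int n, const double *values, const int *cols,
                           const int *rows, const double *x, double *out)
    {
        for (int i = 0; i < n; i++) {
            out[i] = 0;
            for (int j = rows[i]; j < rows[i+1]; j++)
                out[i] += values[j] * x[cols[j]];
        }
    }

    int main(void)
    {
        /* the example matrix from the slide */
        double values[] = {3, 2, 3, 2, 5, 1, 2, 3};
        int    cols[]   = {1, 2, 3, 0, 3, 1, 2, 3};
        int    rows[]   = {0, 3, 5, 5, 8};
        double x[]      = {1, 1, 1, 1};
        double out[4];

        csr_matvec(4, values, cols, rows, x, out);
        for (int i = 0; i < 4; i++)
            printf("out[%d] = %g\n", i, out[i]);  /* expect 8 7 0 6 */
        return 0;
    }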

Solving Sparse Systems
- Transform the problem into function minimization!

    Solve Ax = b  ⇔  Minimize f(x) = xᵀAx − 2bᵀx

  (valid for symmetric positive definite A)
- To motivate this, consider the 1D case:

    f(x) = ax² − 2bx
    df/dx = 2ax − 2b = 0  ⇒  ax = b

Solving Sparse Systems
- Preferred method: conjugate gradients
- Recall: plain gradient descent has a problem…

Solving Sparse Systems
- … that's solved by conjugate gradients
- Walk along direction

    d_{k+1} = −g_{k+1} + β_k d_k

  (g is the gradient), with β_k given by the Polak–Ribière formula:

    β_k = g_{k+1}ᵀ (g_{k+1} − g_k) / (g_kᵀ g_k)

Solving Sparse Systems
- Easiest to think about A = symmetric
- First ingredient: need to evaluate the gradient; for f(x) = xᵀAx − 2bᵀx,

    ∇f(x) = 2(Ax − b)

- As advertised, this only involves A multiplied by a vector

Solving Sparse Systems
- Second ingredient: given a point x_i and a direction d_i, minimize the function in that direction:

    min over α of f(x_i + α d_i)  ⇒  α = d_iᵀ (b − A x_i) / (d_iᵀ A d_i)

Solving Sparse Systems
- So, each iteration just requires a few sparse matrix-vector multiplies (plus some dot products, etc.)
- If the matrix is n×n and has m nonzero entries, each iteration is O(max(m, n))
- Conjugate gradients may need n iterations for "perfect" convergence, but often gets a decent answer well before then
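A minimal conjugate gradient sketch in C for a symmetric positive definite matrix in CSR format, reusing csr_matvec() from above. The update β = r_newᵀr_new / rᵀr used here coincides with the Polak–Ribière formula in exact arithmetic for linear systems, since successive residuals are orthogonal:

    #include <stdlib.h>
    #include <math.h>

    /* from the earlier CSR sketch */
    void csr_matvec(int n, const double *values, const int *cols,
                    const int *rows, const double *x, double *out);

    static double dot(int n, const double *a, const double *b)
    {
        double s = 0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    /* Conjugate gradients for symmetric positive definite A (CSR).
     * x holds the initial guess on entry and the solution on exit. */
    void cg(int n, const double *values, const int *cols, const int *rows,
            const double *b, double *x, double tol, int max_iter)
    {
        double *r = malloc(n * sizeof(double));  /* residual b - Ax */
        double *d = malloc(n * sizeof(double));  /* search direction */
        double *q = malloc(n * sizeof(double));  /* A d */

        csr_matvec(n, values, cols, rows, x, q);
        for (int i = 0; i < n; i++) {
            r[i] = b[i] - q[i];
            d[i] = r[i];
        }
        double rr = dot(n, r, r);

        for (int k = 0; k < max_iter && sqrt(rr) > tol; k++) {
            csr_matvec(n, values, cols, rows, d, q); /* only use of A */
            double alpha = rr / dot(n, d, q);        /* exact line minimization */
            for (int i = 0; i < n; i++) {
                x[i] += alpha * d[i];
                r[i] -= alpha * q[i];
            }
            double rr_new = dot(n, r, r);
            double beta = rr_new / rr;   /* = Polak-Ribiere in exact arithmetic */
            for (int i = 0; i < n; i++)
                d[i] = r[i] + beta * d[i];
            rr = rr_new;
        }
        free(r); free(d); free(q);
    }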