1 Spring 2003 Prof. Tim Warburton MA557/MA578/CS557 Lecture 34

2 Today’s Class A few ways to solve Ax=b for a sparse matrix A in Matlab, followed by time to work on HW 10. On Friday 04/25/03 we will benefit from a special lecture on preconditioning for iterative methods, presented by Dr. D. M. Day of Sandia National Laboratories.

3 Recall: Summary of Temporal Implicit Schemes Backward Euler is unconditionally stable for a non-negative diffusion parameter D (i.e. stable for any dt>=0) and first order in dt. Crank-Nicolson is unconditionally stable for a non-negative diffusion parameter D (i.e. stable for any dt>=0) and second order in dt. ESDIRK4 generalizes this to fourth order in dt.
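These orders can be checked numerically on the scalar model problem y' = -D y. The update formulas below are the standard ones for these two schemes; the Python sketch is an illustration, not code from the course:

```python
import numpy as np

def backward_euler(y0, D, dt, T):
    # y_{n+1} = y_n / (1 + dt*D): unconditionally stable for D >= 0
    y = y0
    for _ in range(int(round(T / dt))):
        y = y / (1.0 + dt * D)
    return y

def crank_nicolson(y0, D, dt, T):
    # y_{n+1} = y_n * (1 - dt*D/2) / (1 + dt*D/2)
    y = y0
    for _ in range(int(round(T / dt))):
        y = y * (1.0 - 0.5 * dt * D) / (1.0 + 0.5 * dt * D)
    return y

D, T = 1.0, 1.0
exact = np.exp(-D * T)
# halving dt should roughly halve the BE error (first order)
# and quarter the CN error (second order)
err_be = [abs(backward_euler(1.0, D, dt, T) - exact) for dt in (0.1, 0.05)]
err_cn = [abs(crank_nicolson(1.0, D, dt, T) - exact) for dt in (0.1, 0.05)]
rate_be = np.log2(err_be[0] / err_be[1])
rate_cn = np.log2(err_cn[0] / err_cn[1])
```

The observed rates come out close to 1 and 2 respectively, matching the orders quoted above.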

4 Backward Euler Linear System Given C^n we wish to find a C^{n+1} which satisfies the implicit update equation. For simplicity we collect the terms into a single system matrix A. Note that A is a symmetric, positive-definite matrix.
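To illustrate the structure of such a system, here is a Python/SciPy sketch in which a standard second-difference Laplacian is assumed in place of the course's DG diffusion operator:

```python
import numpy as np
import scipy.sparse as sp

def backward_euler_matrix(N, dx, dt, D):
    """Assemble A = I + dt*D*L for one backward Euler step.

    L is a stand-in 1D Laplacian (second-difference stencil with
    Dirichlet BCs); the course code would use the DG operator here.
    """
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / dx**2
    return (sp.identity(N) + dt * D * L).tocsr()

N, dx, dt, D = 50, 0.02, 1e-3, 1.0
A = backward_euler_matrix(N, dx, dt, D)

# A inherits symmetry from L, and its eigenvalues are 1 + dt*D*lambda(L),
# all greater than 1, so A is symmetric positive definite
Ad = A.toarray()
is_symmetric = np.allclose(Ad, Ad.T)
eig_min = np.linalg.eigvalsh(Ad).min()
```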

5 Lazy Way To Build The Matrix Don’t tell anyone I told you this, but here’s an easy way to program the construction of the DG operator. The first step is to understand that if I set up a vector whose only non-zero entry is a one in the n-th entry, and then pass it to umDIFFUSIONop, then the returned vector will be the n-th column of the A matrix.
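The trick works for any linear operator. A Python sketch, with a simple second-difference map standing in for umDIFFUSIONop:

```python
import numpy as np

def operator(v):
    # stand-in for umDIFFUSIONop: any linear map will do for the demo;
    # here, a 1D second-difference applied to v
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

def build_matrix(op, n):
    # apply op to each unit vector e_j; op(e_j) is the j-th column of A
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        A[:, j] = op(e)
    return A

n = 6
A = build_matrix(operator, n)
v = np.random.default_rng(0).standard_normal(n)
```

Once built this way, multiplying by A reproduces the operator exactly, at the cost of n operator applications.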

6 Laziness cont. The next step is to exploit the sparsity of the operator. If I set one of the node values in the center white triangle to one and multiply by the Neumann DG derivative operator, then the result vector will have non-zero entries only in the red triangles and the original white triangle. If I take this result vector and premultiply by the Dirichlet DG derivative operator, then there will be non-zero entries only in the red, white and blue triangles.

7 Finding The Neighbors and Their Neighbors of an Element In umMESH.m there is code which computes the sparse connectivity matrix umSparseEtoE. For the example mesh, the matrix is shown on the slide. To find neighbors and neighbors of neighbors we consider the square of the connectivity matrix:

8 Example cont. The double connectivity matrix tells us:
Element 1 is within two elements of 2, 3, 4
Element 2 is within two elements of 1, 3, 4
Element 3 is within two elements of 1, 2, 4, 5
Element 4 is within two elements of 1, 2, 3, 5
Element 5 is within two elements of 3, 4

9 Matlab Implementation (Connectivity): Here we compute the square of the element connectivity matrix:
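The same computation can be sketched in Python/SciPy; the 5-element connectivity matrix below is an invented example, not the mesh from the slides:

```python
import numpy as np
import scipy.sparse as sp

# Stand-in element-to-element connectivity matrix (umSparseEtoE is built
# in umMESH.m). Entry (i, j) is nonzero if elements i and j share a
# face; the diagonal marks each element as connected to itself.
EtoE = sp.csr_matrix(np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1],
]))

# the nonzero pattern of EtoE^2 marks all elements within two "hops"
EtoE2 = (EtoE @ EtoE).toarray()
within_two = [set(np.nonzero(row)[0]) - {i}
              for i, row in enumerate(EtoE2)]
```

For this example, element 0 (direct neighbors 1 and 2) turns out to be within two hops of every other element, while element 4 reaches only 0, 2 and 3.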

10 Building DG Matrix: umDIFFUSIONilu.m
Line 16) Create the sparse matrix.
Lines 20-41) For each node, compute the corresponding row of the matrix.
Line 44) Compute the Cholesky factorization.
Line 47) Compute the incomplete Cholesky factorization.

11 umDIFFUSIONpartop.m In this function we apply the action of the matrix to a part of the vector.

12 First Part of the Part Matrix Multiply umDIFFUSIONpartop.m In this function we apply the Helmholtz operator to all the elements specified in the elmts argument.

13 Applying the Helmholtz Operator umDIFFUSIONpartop.m cont

14 Driver umDIFFUSIONdemo.m Driver for the implicit DG diffusion solver.

15 umDIFFUSIONrun.m Note the changes to pcg and the call to umDIFFUSIONilu.

16 In Action It takes a while to build the matrix… We can look at the sparsity pattern of the matrix:

17 Some Options For Solving Ax=b We will consider two of the many options for accelerating the solution of Ax=b:
1) Cholesky factorization of A before time stepping, followed by repeated backsolving.
2) Incomplete Cholesky factorization of A, used as a preconditioner.

18 Option 1: Cholesky factorization Use the Cholesky factorization to decompose A into the product of a lower triangular matrix C and its transpose: A = C C^T. Then every time we need to solve Ax=b we perform two backsolves: first C y = b, then C^T x = y. Each backsolve takes O(N^2) operations (N = total number of unknowns).
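The factor-once, backsolve-many pattern looks like this in a Python/SciPy sketch (a small random SPD matrix stands in for A):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)        # symmetric positive definite
b = rng.standard_normal(8)

C = cholesky(A, lower=True)        # factor once: A = C C^T

# each solve of A x = b is then two O(N^2) triangular backsolves:
y = solve_triangular(C, b, lower=True)       # C y = b
x = solve_triangular(C.T, y, lower=False)    # C^T x = y
```

In the time-stepping loop only the two backsolves are repeated; the O(N^3) factorization cost is paid once up front.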

19 Option 1: Sparsity of Cholesky Factor

20 Option 1: Sanity Check on Cholesky Factorization We can check to see how stable the computation of the Cholesky factorization was: Not bad – we lost about 4 decimal places…
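The stability check amounts to measuring the relative factorization residual. A Python sketch on a stand-in SPD matrix:

```python
import numpy as np
from scipy.linalg import cholesky

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # well-conditioned SPD test matrix

C = cholesky(A, lower=True)
# relative residual of the factorization: for a well-conditioned
# matrix this is a modest multiple of machine epsilon
rel_err = np.linalg.norm(A - C @ C.T) / np.linalg.norm(A)
```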

21 Option 1: Condition number We can use condest to estimate the condition number of the matrix. We see that the condition number is about 800, so we may well expect to lose about 3 decimal places in computing the factorization.
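The rule of thumb relating condition number to digits lost can be illustrated in Python (condest itself is a Matlab function; here a tridiagonal diffusion-type matrix stands in for the DG matrix and the exact 2-norm condition number is computed instead of a 1-norm estimate):

```python
import numpy as np

# tridiagonal second-difference matrix: a stand-in for the DG matrix
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

kappa = np.linalg.cond(A)        # 2-norm condition number
digits_lost = np.log10(kappa)    # rule of thumb: digits lost ~ log10(cond)
```

For this matrix kappa is a few thousand, so one expects to lose 3-4 decimal places, consistent with the slide's reasoning for its condition number of about 800.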

22 Option 1: Direct Solver Code Lines 31-32) Note the two backsolves.

23 Option 2: Incomplete Cholesky Preconditioner We can use an incomplete Cholesky preconditioner in the PCG algorithm. The idea is to use cholinc to compute an approximate Cholesky factorization, with a drop tolerance that determines whether entries in the factor are kept.
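SciPy has no cholinc, so this Python sketch substitutes spilu, an incomplete LU factorization with a drop tolerance, as the analogous preconditioner inside CG; the 2D Laplacian test matrix is an assumption, not the course matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# SPD test problem: 2D Laplacian on a 20x20 grid
n = 20
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# incomplete LU with a drop tolerance, playing the role of cholinc here
ilu = spla.spilu(A, drop_tol=1e-3)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

def cg_iterations(precond=None):
    """Run CG and count iterations via the callback."""
    count = [0]
    x, info = spla.cg(A, b, M=precond,
                      callback=lambda xk: count.__setitem__(0, count[0] + 1))
    assert info == 0
    return x, count[0]

x_plain, it_plain = cg_iterations()
x_prec, it_prec = cg_iterations(M)
```

The preconditioned run converges in far fewer iterations than plain CG, at the cost of one approximate factorization and one extra triangular solve per iteration.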

24 Sparsity of Incomplete Cholesky Factor Comparing cholinc(A, 1e-3) with cholinc(A, ‘0’); only non-zero entries are shown.

25 Iteration Count Per Time Step Comparing unpreconditioned and preconditioned with incomplete Cholesky:

26 Option 2: Incomplete Cholesky Preconditioner We can use an incomplete Cholesky preconditioner in the PCG algorithm. Line 30) The call to pcg uses the incomplete Cholesky factorization.

27 Alternative Iterative Schemes in Matlab BICG, BICGSTAB, CGS, GMRES, LSQR, MINRES all have the same interface. For details type: >> help gmres
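SciPy's Krylov solvers mirror this design: they share one calling convention, so switching methods is a one-word change. A sketch on a stand-in tridiagonal system:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 16
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# each solver takes (A, b) and returns (solution, info flag);
# info == 0 signals convergence
results = {}
for solver in (spla.cg, spla.bicgstab, spla.gmres, spla.minres):
    x, info = solver(A, b)
    results[solver.__name__] = (info, np.linalg.norm(A @ x - b))
```

A uniform interface like this makes it cheap to benchmark several Krylov methods on the same problem, as the Matlab family above also allows.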