1 Incorporating Iterative Refinement with Sparse Cholesky. Doron Pearl, April 2007.

2 Robustness of Cholesky. Recall the Cholesky algorithm: it performs many additions and subtractions, so cancellation and round-off errors accumulate. Sparse Cholesky with symbolic factorization provides high performance, but what about accuracy and robustness?
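A small illustrative experiment in Python with NumPy/SciPy (not from the original slides; the test matrices, sizes, and seed are invented for illustration). It solves Ax = b with a single-precision Cholesky factorization and shows the forward error growing with the condition number of A:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
for cond in (1e2, 1e4, 1e6):
    d = np.logspace(0, np.log10(cond), n)          # eigenvalues from 1 to cond
    A = (Q * d) @ Q.T                              # SPD matrix, condition number ~ cond
    x_true = rng.standard_normal(n)
    b = A @ x_true
    c = cho_factor(A.astype(np.float32))           # Cholesky in single precision
    x = cho_solve(c, b.astype(np.float32))         # triangular solves in single precision
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"cond ~ {cond:.0e}: relative forward error {err:.1e}")

Roughly, the relative error scales like the unit round-off times the condition number, which is what makes a pure single-precision factorization unreliable on ill-conditioned systems.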

3 Test case: IPM (interior-point methods). All IPM implementations involve solving a system of linear equations (ADA^T x = b) at each step. In IPM, the ADA^T matrix usually becomes ill-conditioned as the iterates approach the optimum.

4 Sparse Ax=b solvers: the landscape. Two axes: solver type, Direct (A = LU) vs. Iterative (y' = Ay), and matrix class, Nonsymmetric vs. Symmetric positive definite. Direct methods are more robust; iterative methods need less storage. Symmetric positive definite systems admit more robust solvers; nonsymmetric solvers are more general.

5 Iterative Refinement. A technique for improving a computed solution to a linear system Ax = b. The residual r is computed in higher precision, so x_2 should be more accurate: if x_1 = x + e, then r = b - Ax_1 = -Ae, so the correction d approximates -e and the update cancels most of the error.

Algorithm:
0. Solve Ax_1 = b by some method (LU or Cholesky).
1. Compute the residual r = b - Ax_1.
2. Solve for the correction d in Ad = r.
3. Update the solution: x_2 = x_1 + d.

6 Iterative Refinement (mixed precision)

1. L L^T = chol(A)      % Cholesky factorization (SINGLE), O(n^3)
2. x = L\(L^T\b)        % back solve (SINGLE), O(n^2)
3. r = b - A*x          % residual (DOUBLE), O(n^2)
4. while ( ||r|| not small enough )      % stopping criterion
   4.1 d = L\(L^T\r)    % back solve on the residual, reusing the factor (SINGLE), O(n^2)
   4.2 x = x + d        % new solution (DOUBLE), O(n^2)
   4.3 r = b - A*x      % new residual (DOUBLE), O(n^2)

COST: (SINGLE) O(n^3) + #ITER * (DOUBLE) O(n^2)

My implementation is available here:
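For concreteness, here is a minimal NumPy/SciPy sketch of the mixed-precision scheme above (the function name refine, the tolerance, and the iteration cap are illustrative assumptions, not from the slides):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def refine(A, b, tol=1e-12, max_iter=20):
    # Factor once in SINGLE precision: the O(n^3) cost is paid at low precision.
    c = cho_factor(A.astype(np.float32))
    # Initial solve in SINGLE, then promote to DOUBLE.
    x = cho_solve(c, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                          # residual in DOUBLE, O(n^2)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break                              # stopping criterion
        d = cho_solve(c, r.astype(np.float32)).astype(np.float64)  # reuse the factor
        x = x + d                              # update in DOUBLE, O(n^2)
    return x

Every pass through the loop reuses the single-precision factor, so only the O(n^3) factorization runs in single precision while the accuracy-critical O(n^2) steps (residual and update) run in double.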

7 Convergence rate of IR: [plots of error vs. iteration for n = 40 (cond. # 3.2*10^...), n = 60 (cond. # 1.6*10^...), n = 80 (cond. # 5*10^...), and n = 100 (cond. # 1.4*10^...)]

8 Convergence rate of IR: [plot of ||Err||_2 vs. iteration for N = 250, cond. # 1.9*10^...]. For N > 350, cond. # = 1.6*10^11: no convergence.

9 More Accurate: [plot comparing the accuracy of conventional Gaussian elimination with extra-precise iterative refinement]

10 Conjugate Gradient in a nutshell. An iterative method for solving Ax = b with A symmetric positive definite. It minimizes the quadratic function f(x) = 1/2 x^T A x - b^T x + c by choosing search directions that are conjugate to each other. In exact arithmetic it converges after at most n iterations. But to be efficient, CG needs a good preconditioner, and none is available for the general case.

11 Conjugate Gradient. One matrix-vector multiplication per iteration, two vector dot products per iteration, four n-vectors of working storage.

x_0 = 0, r_0 = b, p_0 = r_0
for k = 1, 2, 3, ...
    alpha_k = (r_{k-1}^T r_{k-1}) / (p_{k-1}^T A p_{k-1})    % step length
    x_k = x_{k-1} + alpha_k * p_{k-1}                        % approximate solution
    r_k = r_{k-1} - alpha_k * A p_{k-1}                      % residual
    beta_k = (r_k^T r_k) / (r_{k-1}^T r_{k-1})               % improvement
    p_k = r_k + beta_k * p_{k-1}                             % search direction
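A minimal NumPy translation of the iteration above (the function name cg and the stopping test are illustrative assumptions; A is assumed symmetric positive definite):

import numpy as np

def cg(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter if max_iter is not None else n
    x = np.zeros(n)                    # x_0 = 0
    r = b.astype(np.float64)           # r_0 = b
    p = r.copy()                       # p_0 = r_0
    rr = r @ r                         # r^T r, reused across iterations
    for _ in range(max_iter):
        Ap = A @ p                     # the one matvec per iteration
        alpha = rr / (p @ Ap)          # step length
        x += alpha * p                 # approximate solution
        r -= alpha * Ap                # residual
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol:
            break
        beta = rr_new / rr             # improvement
        p = r + beta * p               # next search direction
        rr = rr_new
    return x

Note that the costs match the slide: x, r, p, and Ap are the four working n-vectors, and each iteration performs one matrix-vector product and two dot products (p^T Ap and the new r^T r, which is carried over to the next iteration).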

12 References
- John R. Gilbert, University of California. Talk at "Sparse Matrix Days" at MIT.
- Julie Langou et al., Innovative Computing Laboratory, Computer Science Department, University of Tennessee. "Exploiting the Performance of 32 bit Floating Point Arithmetic in Obtaining 64 bit Accuracy" (2006).
- Jim Demmel, UC Berkeley. "The Future of LAPACK and ScaLAPACK".

13 Thank you for listening