Presentation transcript:

1) CG is a numerical method for solving a linear system of equations. 2) CG is used when A is a symmetric positive definite (SPD) matrix. 3) The CG method of Hestenes and Stiefel [1] is an effective and popular method for solving large, sparse SPD systems. [1] M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards, 49:409-436, 1952.
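
Below is a minimal sketch, not the presenter's code, of the Hestenes and Stiefel CG iteration for an SPD system Ax = b; the small test matrix and tolerance are made-up examples.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for Ax = b with A symmetric positive definite (illustrative sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next A-conjugate search direction
        rs_old = rs_new
    return x

# Hypothetical small SPD example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x, np.linalg.norm(A @ x - b))    # residual norm should be near zero
```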

The standard inner product is defined by ⟨x, y⟩ = xᵀy.

Preconditioner: a non-singular matrix.

The standard inner product is defined by ⟨x, y⟩ = xᵀy. For any real symmetric positive definite matrix H, the symmetric bilinear form ⟨x, y⟩_H = xᵀHy is an inner product.
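
A small numerical illustration of the two definitions above; the matrix H is an assumed SPD example, not taken from the slides.

```python
import numpy as np

def h_inner(x, y, H):
    """H-inner product <x, y>_H = x^T H y; an inner product whenever H is SPD."""
    return x @ (H @ y)

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
H = B.T @ B + np.eye(4)          # B^T B + I is symmetric positive definite

x = rng.standard_normal(4)
y = rng.standard_normal(4)

print(np.isclose(h_inner(x, y, H), h_inner(y, x, H)))  # symmetry
print(h_inner(x, x, H) > 0)                            # positivity for x != 0
```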

A matrix A is self-adjoint in the H-inner product (H-symmetric) if ⟨Ax, y⟩_H = ⟨x, Ay⟩_H for all x and y, which is equivalent to HA = AᵀH.
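
A sketch relating the algebraic condition HA = AᵀH to self-adjointness in the H-inner product; H and S below are assumed test matrices, and A = H⁻¹S is H-symmetric by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
H = B.T @ B + np.eye(4)            # assumed SPD inner-product matrix
S = rng.standard_normal((4, 4))
S = S + S.T                        # symmetric matrix
A = np.linalg.solve(H, S)          # A = H^{-1} S is H-symmetric but not symmetric

print(np.allclose(H @ A, A.T @ H))   # algebraic condition H A = A^T H

x = rng.standard_normal(4)
y = rng.standard_normal(4)
lhs = (A @ x) @ (H @ y)              # <Ax, y>_H
rhs = x @ (H @ (A @ y))              # <x, Ay>_H
print(np.isclose(lhs, rhs))          # self-adjointness in the H-inner product
```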

Saddle point problems arise in computational fluid dynamics and optimization. The coefficient matrix is symmetric indefinite; with a suitable preconditioner, the preconditioned matrix is non-symmetric but H-symmetric and positive definite.

[Table: choices of preconditioner and inner product for which the preconditioned matrix is SPD in the H-inner product.]
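
One standard instance of this pairing, sketched with made-up matrices: if A and the preconditioner M are both SPD, the preconditioned operator M⁻¹A is not symmetric in the usual sense but is self-adjoint and positive definite in the M-inner product, which is why CG still applies.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_spd(n):
    B = rng.standard_normal((n, n))
    return B.T @ B + n * np.eye(n)

A = random_spd(n)                     # SPD system matrix (assumed example)
M = random_spd(n)                     # SPD preconditioner (assumed example)
P = np.linalg.solve(M, A)             # preconditioned operator M^{-1} A

print(np.allclose(P, P.T))            # False: not symmetric in the standard inner product
print(np.allclose(M @ P, P.T @ M))    # True: M-symmetric (self-adjoint in <.,.>_M)

x = rng.standard_normal(n)
print(x @ (M @ (P @ x)) > 0)          # <Px, x>_M > 0: positive definite in <.,.>_M
```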

Method selection by matrix type: SPD → CG; symmetric (indefinite) → MINRES; non-symmetric → GMRES.
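
A hedged sketch of this selection rule using SciPy's Krylov solvers; the three test matrices are arbitrary examples, and for large problems they would normally be sparse.

```python
import numpy as np
from scipy.sparse.linalg import cg, minres, gmres

rng = np.random.default_rng(3)
n = 6
b = rng.standard_normal(n)
B = rng.standard_normal((n, n))

A_spd = B.T @ B + n * np.eye(n)    # symmetric positive definite    -> CG
A_sym = B + B.T                    # symmetric, possibly indefinite -> MINRES
A_gen = B + n * np.eye(n)          # non-symmetric                  -> GMRES

x1, info1 = cg(A_spd, b)
x2, info2 = minres(A_sym, b)
x3, info3 = gmres(A_gen, b)
print(info1, info2, info3)         # 0 from each solver indicates convergence
```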

Preconditioner and inner product: CG can be used when the preconditioned matrix is SPD in the H-inner product. Question: given a preconditioner, does there exist an inner product in which the preconditioned matrix is SPD?