Conjugate Gradient Method. Zhengru Zhang (张争茹). Office: Math Building 413 (West). 2010 Teaching Practice Week, July 12-16.



Outline
- Aim
- Method of Gaussian Elimination
- Basic Iterative Methods
- Conjugate Gradient Method: derivation, theory, algorithm
- References
- Homework & Project

Aim
Solve a linear algebraic system of the form
  a11 x1 + a12 x2 + ... + a1n xn = b1
  a21 x1 + a22 x2 + ... + a2n xn = b2
  ...
  an1 x1 + an2 x2 + ... + ann xn = bn
Using matrix notation, the system can be written as Ax = b, where A is an n x n matrix and b is an n x 1 vector.
We consider the case where A is large and sparse.

Method of Gaussian Elimination

Algorithm of Gaussian Elimination without Pivoting (LU factorization, A = LU):

U = A, L = I
for k = 1 to n-1
    for j = k+1 to n
        l_jk = u_jk / u_kk
        u_{j,k:n} = u_{j,k:n} - l_jk * u_{k,k:n}

Then solve Ly = b by forward substitution and Ux = y by back substitution.
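As a rough sketch of the factor-then-solve procedure above (the small matrix and right-hand side are illustrative choices, not from the slides), in Python:

```python
# LU factorization without pivoting, then the two triangular solves
# Ly = b (forward substitution) and Ux = y (back substitution).

def lu_factor(A):
    """Return (L, U) with A = L U; no pivoting, so every pivot u_kk must be nonzero."""
    n = len(A)
    U = [row[:] for row in A]                  # U starts as a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for j in range(k + 1, n):
            L[j][k] = U[j][k] / U[k][k]        # multiplier l_jk
            for m in range(k, n):              # u_{j,k:n} -= l_jk * u_{k,k:n}
                U[j][m] -= L[j][k] * U[k][m]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                         # forward substitution: Ly = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):             # back substitution: Ux = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
L, U = lu_factor(A)
x = lu_solve(L, U, b)
print(x)   # solves Ax = b; here the exact solution is [1, 1, 1]
```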

Operation Count of Gaussian Elimination
Gaussian elimination with back substitution has three nested loops, with about 2 flops per entry; for each k, the inner loop runs over rows k+1, ..., n. Total cost: about (2/3) n^3 flops.

Instability of Gaussian Elimination without Pivoting
Elimination breaks down when a pivot u_kk is zero, and small pivots amplify rounding errors (the example matrices A1, A2 on the original slide illustrated this).
Remedy: pivoting, either partial pivoting (row exchanges) or complete pivoting (row and column exchanges).

Algorithm of Gaussian Elimination with Partial Pivoting:

U = A, L = I, P = I
for k = 1 to n-1
    select i >= k so that |u_ik| is maximal among rows k, ..., n
    swap rows k and i of U (columns k:n), of L (columns 1:k-1), and of P
    for j = k+1 to n
        l_jk = u_jk / u_kk
        u_{j,k:n} = u_{j,k:n} - l_jk * u_{k,k:n}

This yields the factorization PA = LU.

Basic Iterative Methods
How do we construct the iterative sequence? Does it converge, and under what conditions? How fast does it converge?

Split A = D - L - U, where D is the diagonal of A and -L, -U are its strictly lower and strictly upper triangular parts.

Jacobi iteration: x^[k+1] = D^{-1}(L+U) x^[k] + D^{-1} b, with iteration matrix B = D^{-1}(L+U)
Gauss-Seidel iteration: x^[k+1] = (D-L)^{-1} U x^[k] + (D-L)^{-1} b, with B = (D-L)^{-1} U

An iterative method x^[k+1] = B x^[k] + g converges for every starting vector if and only if the spectral radius satisfies ρ(B) < 1.
Convergence rate: ||x^[k] - x*|| <= (q^k / (1 - q)) ||x^[1] - x^[0]||, where q = ||B|| < 1.
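A minimal Python sketch of the Jacobi iteration above (the strictly diagonally dominant test matrix, the tolerance, and the iteration cap are illustrative choices, not from the slides):

```python
# Jacobi iteration: x^{k+1}_i = (b_i - sum_{j != i} a_ij x^k_j) / a_ii.
# Converges here because the test matrix is strictly diagonally dominant.

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
print(x)   # converges toward the exact solution [1, 1, 1]
```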

Steepest Descent Method
Consider the case where A is symmetric positive definite.
Quadratic functional: φ(x) = x^T A x - 2 b^T x.
Solving Ax = b is equivalent to finding the minimizer of the functional φ(x).
Method of optimization: at each step, find a direction p_k and a step length α_k.

Steepest Descent Method: determining p_k and α_k
Suppose the direction p_k has been determined, and start from x_k. Let
  f(α) = φ(x_k + α p_k) = (x_k + α p_k)^T A (x_k + α p_k) - 2 b^T (x_k + α p_k)
       = α² p_k^T A p_k - 2 α r_k^T p_k + φ(x_k),
where r_k = b - A x_k is the residual. By calculus,
  f'(α) = 2 α p_k^T A p_k - 2 r_k^T p_k = 0,
so α_k = (r_k^T p_k) / (p_k^T A p_k). Then let x_{k+1} = x_k + α_k p_k.
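A small Python sketch of steepest descent with this exact line search (the 2x2 SPD test matrix and tolerances are illustrative choices, not from the slides; with p_k = r_k the step is α_k = r_k^T r_k / r_k^T A r_k):

```python
# Steepest descent for SPD Ax = b: move along the residual r = b - Ax
# with the exact line-search step alpha = (r . r) / (r . A r).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
    x = x0[:]
    for _ in range(max_iter):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # residual b - Ax
        if dot(r, r) ** 0.5 < tol:
            break
        alpha = dot(r, r) / dot(r, matvec(A, r))             # exact line search
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = steepest_descent(A, b, [0.0, 0.0])
print(x)   # converges to the exact solution [1/11, 7/11]
```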

Algorithm for Steepest Descent Method
Verify the decrease:
  φ(x_{k+1}) - φ(x_k) = φ(x_k + α_k p_k) - φ(x_k) = α_k² p_k^T A p_k - 2 α_k r_k^T p_k = -(r_k^T p_k)² / (p_k^T A p_k) <= 0.
How to determine the direction p_k? Take the negative gradient: since grad φ(x) = 2(Ax - b) = -2r, set p_k = r_k.

Algorithm Convergence
Theorem. Suppose the eigenvalues of A satisfy λ1 >= λ2 >= ... >= λn > 0. Then steepest descent satisfies
  ||x_k - x*||_A <= ((λ1 - λn) / (λ1 + λn))^k ||x_0 - x*||_A,
where ||v||_A = (v^T A v)^{1/2} and x* is the exact solution. With κ = λ1/λn, the contraction factor is (κ - 1)/(κ + 1), so convergence is slow when A is ill conditioned.

Conjugate Gradient Method: Derivation
The negative gradient direction r_k is the locally steepest descent direction, but it may not be the best global choice.
Consider a new direction: a combination of r_k and the previous direction, p_k = r_k + β p_{k-1}.
Initially, take p_0 = r_0 and x_1 = x_0 + α_0 p_0.
At step k+1, choose α and β to minimize φ(x_k + α p_k). By calculus, setting the derivatives with respect to α and β to zero determines both parameters.

The corresponding minimizers are
  α_k = (r_k^T r_k) / (p_k^T A p_k),
and β_{k-1} chosen so that p_k^T A p_{k-1} = 0 (the directions are A-conjugate), which gives
  β_{k-1} = -(r_k^T A p_{k-1}) / (p_{k-1}^T A p_{k-1}).
In summary:
  x_{k+1} = x_k + α_k p_k
  r_{k+1} = r_k - α_k A p_k
  p_{k+1} = r_{k+1} + β_k p_k, where the orthogonality of the residuals simplifies β_k to (r_{k+1}^T r_{k+1}) / (r_k^T r_k).

Algorithm for CG Method

Given x_0, set r_0 = b - A x_0 and p_0 = r_0.
For k = 0, 1, 2, ...:
  α_k = (r_k^T r_k) / (p_k^T A p_k)
  x_{k+1} = x_k + α_k p_k
  r_{k+1} = r_k - α_k A p_k
  β_k = (r_{k+1}^T r_{k+1}) / (r_k^T r_k)
  p_{k+1} = r_{k+1} + β_k p_k

The operations involved are one matrix-vector product per step, plus inner products and vector updates; α and β are obtained in a simple form.
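The algorithm above can be sketched in Python as follows (the helper functions and the small SPD test system are illustrative, not from the slides):

```python
# Conjugate gradient for SPD Ax = b, following the updates
# alpha = rs/(p.Ap), x += alpha p, r -= alpha Ap, beta = rs_new/rs, p = r + beta p.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, x0, tol=1e-12, max_iter=None):
    n = len(b)
    max_iter = max_iter or n          # in exact arithmetic CG needs at most n steps
    x = x0[:]
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        if rs_old ** 0.5 < tol:
            break
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                        # step length
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        beta = rs_new / rs_old                             # new direction weight
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = cg(A, b, [0.0, 0.0, 0.0])
print(x)   # exact solution [1, 1, 1], reached in at most 3 steps
```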

Properties of the CG Method
Orthogonality properties: the residuals are mutually orthogonal (r_i^T r_j = 0 for i ≠ j) and the search directions are A-conjugate (p_i^T A p_j = 0 for i ≠ j).
Theoretically, CG is an exact method: in exact arithmetic it terminates in at most n steps. In practice it works as an iterative method.
Convergence rate:
  ||x_k - x*||_A <= 2 ((√κ - 1) / (√κ + 1))^k ||x_0 - x*||_A, where κ = cond_2(A) = λ_max / λ_min.

References
1. 徐树方, 高立, 张平文, 《数值线性代数》 (Numerical Linear Algebra), 北京大学出版社 (Peking University Press), Beijing, 2007.
2. 袁亚湘, 孙文瑜, 《最优化理论与方法》 (Optimization: Theory and Methods), 科学出版社 (Science Press), Beijing, 2000.
3. Yousef Saad, Iterative Methods for Sparse Linear Systems, 2000.

Homework & Project (due at the end of this week)
Problem: minimize the functional E(u) = ∫ (|∇u|² + u² - 2fu) dx.
The corresponding Euler-Lagrange equation is δE/δu = -2Δu + 2u - 2f = 0, i.e. -Δu + u = f.
In one dimension:
  -u_xx + u = f, 0 < x < 1, u(0) = u(1) = 0, with f = (1 + 4π²) sin(2πx).
Solve the resulting linear system using the CG method.
Set n = 100, 200, 300, 400, 500.
Use Matlab to graph the solution (j, u_j).
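A hedged sketch of this assignment (the slides ask for Matlab; this Python version, with helper names of my own choosing, only illustrates the setup): central differences turn -u_xx + u = f into a tridiagonal system with diagonal 2/h² + 1 and off-diagonals -1/h², which CG then solves. Since the exact solution here is u(x) = sin(2πx), it provides a convenient accuracy check.

```python
# Discretize -u'' + u = f on (0,1) with u(0) = u(1) = 0 by central
# differences, then solve the tridiagonal system A u = f with CG.
import math

def solve_1d(n):
    h = 1.0 / (n + 1)
    xs = [(j + 1) * h for j in range(n)]                   # interior nodes
    f = [(1 + 4 * math.pi ** 2) * math.sin(2 * math.pi * x) for x in xs]
    diag = 2.0 / h ** 2 + 1.0                              # stencil of -u'' + u
    off = -1.0 / h ** 2

    def matvec(v):                                         # tridiagonal A * v
        return [diag * v[i]
                + (off * v[i - 1] if i > 0 else 0.0)
                + (off * v[i + 1] if i < n - 1 else 0.0)
                for i in range(n)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    u = [0.0] * n                                          # CG on A u = f
    r = f[:]                                               # r = f - A*0
    p = r[:]
    rs = dot(r, r)
    tol = 1e-10 * rs ** 0.5                                # relative tolerance
    for _ in range(10 * n):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rs / dot(p, Ap)
        u = [ui + alpha * pi for ui, pi in zip(u, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return xs, u

xs, u = solve_1d(100)
err = max(abs(ui - math.sin(2 * math.pi * x)) for x, ui in zip(xs, u))
print(err)   # discretization error, O(h^2)
```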

Homework & Project (due at the end of this week)
Solve the following two-dimensional problem using the CG method. The unknowns u_ij can be ordered, for example, lexicographically (row by row).

  -Δu + u = f, (x, y) ∈ (0,1) × (0,1)
  exact solution: u(x, y) = 100(x² - x)(y² - y)
  f = 200(y - y²) + 200(x - x²) + 100(x² - x)(y² - y)
With the standard five-point stencil (mesh size h), the coefficient matrix is block tridiagonal:
  A = tridiag(-I/h², S, -I/h²),
where S is the tridiagonal matrix with diagonal entries 4/h² + 1 and off-diagonal entries -1/h².
Set n = 20, 40, 80, 100. Find the solution and use Matlab to graph it as (i, j, u_ij).

The End