An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, by Jonathan Richard Shewchuk. Reading group presentation by David Cline, 4/16/2017.
Linear System: Ax = b, where x is the unknown vector (what we want to find), b is a known vector, and A is a known square matrix.
Matrix Multiplication: the product Ax is the vector whose i-th entry is the dot product of the i-th row of A with x, (Ax)_i = sum_j A_ij x_j.
Positive Definite Matrix: x^T A x > 0 for every nonzero vector x = [x1 x2 ... xn]^T. Also, all eigenvalues of the matrix are positive.
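As a quick illustrative check (my own numpy sketch, not from the slides; the matrix values are chosen only for demonstration), both characterizations of positive definiteness can be tested directly:

```python
import numpy as np

# A small symmetric matrix chosen only for illustration.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

# Characterization 1: x^T A x > 0 for every nonzero x (spot-check random vectors).
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert x @ A @ x > 0

# Characterization 2: all eigenvalues are positive.
assert np.all(np.linalg.eigvalsh(A) > 0)
print("A is positive definite")
```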
Quadratic form: an expression of the form f(x) = (1/2) x^T A x - b^T x + c.
Why do we care? If A is symmetric, the gradient of the quadratic form is our original system: f'(x) = Ax - b, so the gradient vanishes exactly when Ax = b.
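To make the connection concrete, here is a small numpy sketch (my own, not from the slides; the matrix and vector are made-up examples) that evaluates the quadratic form and verifies numerically that its gradient is Ax - b:

```python
import numpy as np

def f(x, A, b, c=0.0):
    """Quadratic form f(x) = 1/2 x^T A x - b^T x + c."""
    return 0.5 * x @ A @ x - b @ x + c

def grad_f(x, A, b):
    """Gradient of f when A is symmetric: f'(x) = A x - b."""
    return A @ x - b

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric, positive definite (example values)
b = np.array([1.0, 2.0])
x = np.array([1.0, -1.0])

# Finite-difference check of the analytic gradient.
eps = 1e-6
numeric = np.array([(f(x + eps * e, A, b) - f(x - eps * e, A, b)) / (2 * eps)
                    for e in np.eye(2)])
assert np.allclose(numeric, grad_f(x, A, b), atol=1e-5)
```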
Visual interpretation
Example Problem:
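The example itself was a figure on the slide; assuming it is the 2x2 sample problem used throughout Shewchuk's paper (A = [[3, 2], [2, 6]], b = [2, -8], c = 0), a quick numpy check of its solution:

```python
import numpy as np

# Assumed sample problem (the 2x2 example from Shewchuk's paper).
A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])

x = np.linalg.solve(A, b)
print(x)                          # expected: [ 2. -2.]

# x is also the minimizer of f(x) = 1/2 x^T A x - b^T x:
# the gradient A x - b vanishes there.
assert np.allclose(A @ x - b, 0.0)
```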
Visual representation: plots of f(x) and its gradient f'(x).
Solution: if A is symmetric, the gradient f'(x) = Ax - b vanishes at the solution x of the system; and since A is positive definite, that point x is the global minimum of f.
Definitions: Error: e_(i) = x_(i) - x, how far the current iterate is from the solution. Residual: r_(i) = b - A x_(i) = -f'(x_(i)), how far we are from the correct value of b. Whenever you read 'residual', think 'the direction of steepest descent'.
Method of steepest descent: start with an arbitrary point x_(0); move in the direction opposite the gradient of f, i.e. along r_(0); reach the minimum in that direction at distance alpha; repeat.
Steepest descent, mathematically: alpha_(i) = (r_(i)^T r_(i)) / (r_(i)^T A r_(i)) and x_(i+1) = x_(i) + alpha_(i) r_(i). The residual can then be computed either directly, r_(i+1) = b - A x_(i+1), or recursively, r_(i+1) = r_(i) - alpha_(i) A r_(i).
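A minimal numpy sketch of the method as described on the last two slides (my own implementation of the exact line-search step alpha = r^T r / (r^T A r)):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10000):
    """Minimize f(x) = 1/2 x^T A x - b^T x for symmetric positive definite A."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x                     # residual = direction of steepest descent
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))   # exact minimizer along r
        x = x + alpha * r                 # slide to the bottom of the parabola
    return x
```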
Steepest descent, graphically
Eigenvectors: a vector v is an eigenvector of A with eigenvalue lambda if Av = lambda v.
Steepest descent does well: it converges in one iteration if the error term is an eigenvector of A, and also in one iteration if all the eigenvalues are equal.
Steepest descent does poorly: if the error term is a mix of eigenvectors with large and small eigenvalues, steepest descent zig-zags back and forth toward the solution and takes many iterations to converge. The worst-case convergence is governed by the ratio of the largest and smallest eigenvalues of A, called the condition number: kappa = lambda_max / lambda_min.
Convergence of steepest descent: after i iterations, the "energy norm" of the error satisfies ||e_(i)||_A <= ((kappa - 1) / (kappa + 1))^i ||e_(0)||_A, where ||e||_A = sqrt(e^T A e).
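To make the bound concrete, a tiny numpy sketch (my own, sample matrix only) that computes the condition number and the worst-case energy-norm reduction factor per iteration:

```python
import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])     # sample symmetric positive definite matrix
eigs = np.linalg.eigvalsh(A)               # ascending eigenvalues
kappa = eigs[-1] / eigs[0]                 # condition number lambda_max / lambda_min
factor = (kappa - 1) / (kappa + 1)         # worst-case reduction of ||e||_A per step
print(kappa, factor)
```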
How can we speed up or guarantee convergence? Use the eigenvectors as directions: moving the right distance along each eigenvector in turn eliminates that component of the error, so the method terminates in n iterations.
Method of conjugate directions: instead of eigenvectors, which are too hard to compute, use directions that are "conjugate" or "A-orthogonal": d_(i)^T A d_(j) = 0 for i != j.
Method of conjugate directions: step along each direction to the point where the error is A-orthogonal to that direction: alpha_(i) = (d_(i)^T r_(i)) / (d_(i)^T A d_(i)), x_(i+1) = x_(i) + alpha_(i) d_(i).
How to find conjugate directions? Gram-Schmidt conjugation: start with n linearly independent vectors u_0 ... u_(n-1). For each vector, subtract the components that are not A-orthogonal to the previously processed directions: d_(i) = u_i + sum_(k<i) beta_ik d_(k), with beta_ik = -(u_i^T A d_(k)) / (d_(k)^T A d_(k)).
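A sketch of Gram-Schmidt conjugation (my own numpy code, using the coordinate axes as the starting vectors u_i, which is only one possible choice):

```python
import numpy as np

def gram_schmidt_conjugate(A, U):
    """Turn the linearly independent columns of U into A-orthogonal directions."""
    n = U.shape[1]
    D = np.zeros_like(U, dtype=float)
    for i in range(n):
        d = U[:, i].astype(float)
        for k in range(i):
            # beta_ik = -(u_i^T A d_k) / (d_k^T A d_k)
            beta = -(U[:, i] @ (A @ D[:, k])) / (D[:, k] @ (A @ D[:, k]))
            d = d + beta * D[:, k]
        D[:, i] = d
    return D

A = np.array([[3.0, 2.0], [2.0, 6.0]])
D = gram_schmidt_conjugate(A, np.eye(2))
print(D[:, 0] @ (A @ D[:, 1]))   # ~0: the directions are A-orthogonal
```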
Problem: Gram-Schmidt conjugation is slow, and we have to store all of the direction vectors we have created.
Conjugate Gradient Method: apply the method of conjugate directions, but use the residuals for the u values: u_i = r_(i).
How does this help us? It turns out that the residual r_(i) is already A-orthogonal to all of the previous search directions except d_(i-1), so we only need to make the new direction A-orthogonal to d_(i-1), and we are set.
Simplifying further: only the k = i-1 term of the Gram-Schmidt sum survives, and it reduces to beta_(i) = (r_(i)^T r_(i)) / (r_(i-1)^T r_(i-1)).
Putting it all together:
d_(0) = r_(0) = b - A x_(0) (start with steepest descent)
alpha_(i) = (r_(i)^T r_(i)) / (d_(i)^T A d_(i)) (compute the distance to the bottom of the parabola)
x_(i+1) = x_(i) + alpha_(i) d_(i) (slide down to the bottom of the parabola)
r_(i+1) = r_(i) - alpha_(i) A d_(i) (compute the steepest descent direction at the next location)
beta_(i+1) = (r_(i+1)^T r_(i+1)) / (r_(i)^T r_(i)), d_(i+1) = r_(i+1) + beta_(i+1) d_(i) (remove the part of the vector that is not A-orthogonal to d_(i))
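Putting the steps above into code, a minimal conjugate gradient sketch (my own numpy implementation following the update formulas listed on this slide):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                     # start with steepest descent: d_0 = r_0
    d = r.copy()
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ad = A @ d
        alpha = rs_old / (d @ Ad)     # distance to the bottom of the parabola
        x = x + alpha * d             # slide down to the bottom
        r = r - alpha * Ad            # residual at the next location
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs_old        # removes the part not A-orthogonal to d
        d = r + beta * d
        rs_old = rs_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradient(A, b))       # converges in at most 2 iterations here
```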
Starting and stopping: start either with a rough estimate of the solution, or the zero vector. Stop when the norm of the residual is small enough.
Benefit over steepest descent: the worst-case bound depends on sqrt(kappa) rather than kappa, ||e_(i)||_A <= 2 ((sqrt(kappa) - 1) / (sqrt(kappa) + 1))^i ||e_(0)||_A, so CG needs far fewer iterations on ill-conditioned systems.
Preconditioning: instead of Ax = b, solve M^(-1) A x = M^(-1) b, where M approximates A but is easy to invert, so that M^(-1) A has a smaller condition number than A.
Diagonal (Jacobi) preconditioning: just use the diagonal of A as M. A diagonal matrix is easy to invert, but of course it isn't the best preconditioner out there.
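As an illustration of the idea (my own numpy sketch, not from the slides), preconditioned CG only ever needs to apply M^(-1); with diagonal preconditioning that is an elementwise division:

```python
import numpy as np

def preconditioned_cg(A, b, tol=1e-10, max_iter=None):
    """CG with Jacobi (diagonal) preconditioning: M = diag(A)."""
    Minv = 1.0 / np.diag(A)          # applying M^{-1} is just elementwise division
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    z = Minv * r                     # preconditioned residual
    d = z.copy()
    rz_old = r @ z
    for _ in range(max_iter or n):
        Ad = A @ d
        alpha = rz_old / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        d = z + (rz_new / rz_old) * d
        rz_old = rz_new
    return x
```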
CG on the normal equations: if A is not symmetric, not positive-definite, or not square, we can't use CG directly to solve Ax = b. However, we can use it to solve A^T A x = A^T b, since A^T A is always symmetric, positive definite, and square. The problem we solve this way is the least-squares fit min ||Ax - b||^2, but the condition number is squared. Also note that we never actually have to form A^T A; instead we multiply by A and then by A^T.
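A sketch of CG applied to the normal equations (my own numpy code; the 3x2 example data is made up): A^T A is never formed explicitly, and each iteration multiplies by A and then by A^T.

```python
import numpy as np

def cg_normal_equations(A, b, tol=1e-10, max_iter=None):
    """Solve the least-squares problem min ||A x - b|| via CG on A^T A x = A^T b."""
    m, n = A.shape
    x = np.zeros(n)
    r = A.T @ (b - A @ x)            # residual of the normal equations
    d = r.copy()
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ad = A @ d                   # multiply by A ...
        AtAd = A.T @ Ad              # ... and then by A^T (A^T A never formed)
        alpha = rs_old / (d @ AtAd)
        x = x + alpha * d
        r = r - alpha * AtAd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d
        rs_old = rs_new
    return x

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # 3x2: not square
b = np.array([1.0, 2.0, 2.0])
print(cg_normal_equations(A, b))     # least-squares line fit coefficients
```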