Steepest Descent and Conjugate Gradients (CG)
Solving the linear equation system $Ax = b$

Problem: the dimension $n$ is too large, or there is not enough time for Gaussian elimination. Iterative methods are used to obtain an approximate solution.

Definition (iterative method): given a starting point $x_0$, take steps $x_1, x_2, \dots$ that hopefully converge to the exact solution $x$ of $Ax = b$.
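The following sketch only illustrates the general shape of such a method; the function name `iterative_solve`, the step rule `step`, and the tolerance `tol` are illustrative and not part of the original slides:

```python
import numpy as np

def iterative_solve(step, x0, tol=1e-8, max_iter=1000):
    """Generic iterative method: repeat a step rule until progress stalls.

    `step` maps the current iterate x_i to the next iterate x_{i+1}.
    """
    x = x0
    for _ in range(max_iter):
        x_next = step(x)
        if np.linalg.norm(x_next - x) < tol:   # simple progress-based stopping test
            return x_next
        x = x_next
    return x
```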
Starting issues

Solving $Ax = b$ is equivalent to minimizing the quadratic form
$$f(x) = \tfrac{1}{2}\, x^T A x - b^T x + c.$$
For this, $A$ has to be symmetric positive definite: if $A$ is symmetric, then $f'(x) = Ax - b$, and if $A$ is also positive definite, the solution of $Ax = b$ is the minimum of $f$.

Error: $e_i = x_i - x$. The norm of the error shows how far we are from the exact solution, but it cannot be computed without knowing the exact solution.

Residual: $r_i = b - Ax_i$. The residual can be calculated, and $r_i = -f'(x_i) = -Ae_i$.
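As a quick sanity check (not part of the original slides), the following sketch uses a small symmetric positive definite matrix with illustrative values and confirms that the solution of $Ax = b$ minimizes $f$ and that the residual equals the negative gradient:

```python
import numpy as np

# Small symmetric positive definite test matrix (illustrative values).
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

f = lambda x: 0.5 * x @ A @ x - b @ x      # quadratic form f(x)
grad = lambda x: A @ x - b                 # gradient f'(x)

x_exact = np.linalg.solve(A, b)            # exact solution of Ax = b
x_guess = np.array([1.0, 1.0])             # some other point

assert f(x_exact) < f(x_guess)             # the solution minimizes f
r = b - A @ x_guess                        # residual at the guess
assert np.allclose(r, -grad(x_guess))      # r_i = -f'(x_i)
```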
Steepest Descent

We are at the point $x_i$. How do we reach the solution $x$?

Idea: go in the direction in which $f$ decreases most quickly, i.e. along the negative gradient $-f'(x_i) = r_i$.

How far should we go? Choose $\alpha_i$ so that $f(x_i + \alpha_i r_i)$ is minimized; this line search gives
$$\alpha_i = \frac{r_i^T r_i}{r_i^T A r_i}.$$
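The slides' own derivation of the line search was not preserved; a short reconstruction of the standard argument is
$$\frac{d}{d\alpha} f(x_i + \alpha r_i)
  = \bigl(A(x_i + \alpha r_i) - b\bigr)^T r_i
  = -r_i^T r_i + \alpha\, r_i^T A r_i
  \overset{!}{=} 0
  \quad\Longrightarrow\quad
  \alpha_i = \frac{r_i^T r_i}{r_i^T A r_i}.$$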
One step of steepest descent can be calculated as follows:
$$r_i = b - Ax_i, \qquad \alpha_i = \frac{r_i^T r_i}{r_i^T A r_i}, \qquad x_{i+1} = x_i + \alpha_i r_i.$$

Stopping criterion: $\|r_i\| < \varepsilon \|r_0\|$ or $\|r_i\| < \varepsilon$ with a given small $\varepsilon$. It would be better to use the error instead of the residual, but the error cannot be calculated.

Method of steepest descent: starting from $x_0$, repeat the step above until the stopping criterion is satisfied.
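A minimal sketch of the method in Python (function and variable names are illustrative; the slides' own pseudocode was not preserved):

```python
import numpy as np

def steepest_descent(A, b, x0, eps=1e-8, max_iter=10_000):
    """Method of steepest descent for a symmetric positive definite A."""
    x = x0.astype(float)
    r = b - A @ x                      # residual r_0
    r0_norm = np.linalg.norm(r)
    if r0_norm == 0.0:                 # x0 already solves Ax = b
        return x
    for _ in range(max_iter):
        if np.linalg.norm(r) < eps * r0_norm:   # relative stopping criterion
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)     # exact line search along r
        x = x + alpha * r
        r = b - A @ x                  # recompute the residual
    return x

# Example with a small SPD system (illustrative values).
A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(steepest_descent(A, b, np.zeros(2)))     # approx. [ 2., -2.]
```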
As you can see, the starting point is important! If you know anything about the solution, use it to guess a good starting point. Otherwise you can choose any starting point you like, e.g. $x_0 = 0$.
Steepest Descent - Convergence

Definition (energy norm): $\|e\|_A = \sqrt{e^T A e}$.

Definition (condition number): $\kappa = \lambda_{\max} / \lambda_{\min}$, where $\lambda_{\max}$ is the largest and $\lambda_{\min}$ the smallest eigenvalue of $A$.

Convergence gets worse as the condition number gets larger.
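The slides' own convergence formula was not preserved; the usual bound for steepest descent in the energy norm is
$$\|e_i\|_A \le \left(\frac{\kappa - 1}{\kappa + 1}\right)^i \|e_0\|_A,$$
which approaches 1 per step as $\kappa$ grows.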
Conjugate Gradients

Is there a better direction?

Idea: orthogonal search directions $d_0, d_1, \dots, d_{n-1}$. We only walk once in each direction and minimize along it, so at most $n$ steps are needed to reach the exact solution. For this, $d_i$ has to be orthogonal to the new error $e_{i+1}$.
Example with the coordinate axes as orthogonal search directions: each step eliminates one component of the error.

Problem: the step size cannot be computed, because the orthogonality condition leads to
$$\alpha_i = -\frac{d_i^T e_i}{d_i^T d_i},$$
and you don't know the error $e_i$!
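A short reconstruction of the standard argument behind that formula: with $x_{i+1} = x_i + \alpha_i d_i$ the error satisfies $e_{i+1} = e_i + \alpha_i d_i$, so
$$d_i^T e_{i+1} = d_i^T e_i + \alpha_i\, d_i^T d_i = 0
\quad\Longrightarrow\quad
\alpha_i = -\frac{d_i^T e_i}{d_i^T d_i}.$$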
New idea: A-orthogonality.

Definition (A-orthogonal): $d_i$ and $d_j$ are A-orthogonal if $d_i^T A d_j = 0$ (reminder: orthogonal means $d_i^T d_j = 0$).

Now the new error $e_{i+1}$ has to be A-orthogonal to $d_i$ instead of orthogonal. With this condition the step size becomes
$$\alpha_i = \frac{d_i^T r_i}{d_i^T A d_i},$$
which can be computed!
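Again a short reconstruction of the standard argument behind the computable step size:
$$d_i^T A e_{i+1} = d_i^T A e_i + \alpha_i\, d_i^T A d_i = 0
\quad\Longrightarrow\quad
\alpha_i = -\frac{d_i^T A e_i}{d_i^T A d_i} = \frac{d_i^T r_i}{d_i^T A d_i},
\qquad\text{since } A e_i = -r_i.$$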
A set of A-orthogonal directions can be found from $n$ linearly independent vectors with conjugate Gram-Schmidt (same idea as Gram-Schmidt).

Gram-Schmidt: from linearly independent vectors $u_0, \dots, u_{n-1}$ build orthogonal directions
$$d_i = u_i - \sum_{k<i} \frac{u_i^T d_k}{d_k^T d_k}\, d_k.$$

Conjugate Gram-Schmidt: replace the ordinary inner product by the A-inner product,
$$d_i = u_i - \sum_{k<i} \frac{u_i^T A d_k}{d_k^T A d_k}\, d_k.$$

CG works by setting $u_i = r_i$, which makes conjugate Gram-Schmidt easy: all but one of the coefficients vanish, and the directions can be built with
$$d_{i+1} = r_{i+1} + \beta_{i+1} d_i, \qquad \beta_{i+1} = \frac{r_{i+1}^T r_{i+1}}{r_i^T r_i}.$$
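A small sketch of conjugate Gram-Schmidt on arbitrary linearly independent vectors (names and test values are illustrative; CG itself never forms the directions explicitly like this):

```python
import numpy as np

def conjugate_gram_schmidt(A, U):
    """Turn linearly independent columns of U into A-orthogonal directions."""
    n = U.shape[1]
    D = np.zeros_like(U, dtype=float)
    for i in range(n):
        d = U[:, i].astype(float)
        for k in range(i):
            dk = D[:, k]
            # Subtract the A-projection onto the earlier direction d_k.
            d -= (U[:, i] @ (A @ dk)) / (dk @ (A @ dk)) * dk
        D[:, i] = d
    return D

# Check A-orthogonality on a small SPD example (illustrative values).
A = np.array([[3.0, 2.0], [2.0, 6.0]])
D = conjugate_gram_schmidt(A, np.eye(2))
print(D[:, 0] @ A @ D[:, 1])   # approx. 0
```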
Method of Conjugate Gradients:
$$d_0 = r_0 = b - Ax_0,$$
$$\alpha_i = \frac{r_i^T r_i}{d_i^T A d_i}, \qquad x_{i+1} = x_i + \alpha_i d_i, \qquad r_{i+1} = r_i - \alpha_i A d_i,$$
$$\beta_{i+1} = \frac{r_{i+1}^T r_{i+1}}{r_i^T r_i}, \qquad d_{i+1} = r_{i+1} + \beta_{i+1} d_i.$$
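A minimal sketch of the method in Python (function and variable names are illustrative; the slides' own pseudocode was not preserved):

```python
import numpy as np

def conjugate_gradients(A, b, x0, eps=1e-10, max_iter=None):
    """Conjugate gradient method for a symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n             # at most n steps in exact arithmetic
    x = x0.astype(float)
    r = b - A @ x                        # initial residual
    d = r.copy()                         # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < eps:
            break
        Ad = A @ d
        alpha = rs_old / (d @ Ad)        # step size
        x += alpha * d
        r -= alpha * Ad                  # update residual without recomputing A @ x
        rs_new = r @ r
        d = r + (rs_new / rs_old) * d    # new A-orthogonal direction
        rs_old = rs_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradients(A, b, np.zeros(2)))   # approx. [ 2., -2.]
```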
Conjugate Gradients - Convergence

For steepest descent: $\|e_i\|_A \le \left(\dfrac{\kappa - 1}{\kappa + 1}\right)^i \|e_0\|_A$.

For CG: $\|e_i\|_A \le 2\left(\dfrac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^i \|e_0\|_A$.

The convergence of CG is much better!
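A quick numerical illustration (not from the slides) of how much smaller the CG contraction factor is for a badly conditioned matrix, using an illustrative condition number:

```python
import numpy as np

kappa = 1e4                                               # illustrative condition number
sd_factor = (kappa - 1) / (kappa + 1)                     # steepest descent, per step
cg_factor = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # CG, per step

print(f"steepest descent factor: {sd_factor:.6f}")   # ~0.999800
print(f"CG factor:               {cg_factor:.6f}")   # ~0.980198
# After 100 steps the guaranteed error reduction differs dramatically:
print(sd_factor**100, cg_factor**100)                # ~0.98 vs ~0.14
```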