Numerical Analysis
EE, NCKU, Tien-Hao Chang (Darby Chang)
In the previous slide
- Rootfinding: multiplicity
- Bisection method: Intermediate Value Theorem, convergence measures
- False position: yet another simple enclosure method; advantages and disadvantages in comparison with the bisection method
In this slide
- Fixed point iteration scheme: what is a fixed point? the iteration function, convergence
- Newton's method: tangent line approximation
- Secant method
Rootfinding
- Simple enclosure (bisection and false position): based on the Intermediate Value Theorem; guaranteed to converge, but the convergence rate is slow
- Fixed point iteration: based on the Mean Value Theorem; rapid convergence, but loss of guaranteed convergence
2.3 Fixed Point Iteration Schemes
There is at least one point on the graph at which the tangent line is parallel to the secant line.
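For reference, this is the usual textbook statement of the Mean Value Theorem being described (the deck itself uses a slightly different formulation, per the next slide):

```latex
\text{If } f \in C[a,b] \text{ and } f \text{ is differentiable on } (a,b),
\text{ then there exists } c \in (a,b) \text{ with }
f'(c) = \frac{f(b) - f(a)}{b - a}.
```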
Mean Value Theorem
We use a slightly different formulation.
An example of using this theorem: prove the inequality.
Fixed Points
Fixed points
Consider the function sin x. It can be thought of as moving the input value π/6 to the output value 1/2. The sine function maps zero to zero; that is, it fixes the location of 0, so x = 0 is said to be a fixed point of the function sin x.
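Stated generally (standard definition, consistent with this example):

```latex
p \text{ is a fixed point of } g \quad\Longleftrightarrow\quad g(p) = p,
\qquad \text{here } \sin 0 = 0 .
```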
Number of fixed points
According to the previous figure, a natural question is: how many fixed points does a given function have?
Only sufficient conditions
Namely, not necessary conditions: it is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point.
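For context, the sufficient conditions referred to here are usually stated as follows (standard textbook form, not quoted from the deck):

```latex
\textbf{Existence:}\ \text{if } g \in C[a,b] \text{ and } g(x) \in [a,b] \text{ for all } x \in [a,b],
\text{ then } g \text{ has at least one fixed point } p \in [a,b].
\\[4pt]
\textbf{Uniqueness:}\ \text{if, in addition, } g' \text{ exists on } (a,b) \text{ and } |g'(x)| \le k < 1
\text{ for all } x \in (a,b), \text{ then the fixed point is unique.}
```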
Fixed Point Iteration
Fixed point iteration
If it is known that a function g has a fixed point, one way to approximate the value of that fixed point is a 'fixed point iteration scheme', which can be defined as follows:
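The defining recurrence appeared as a figure on the original slide; the standard scheme is p_{n+1} = g(p_n), starting from an initial guess p_0. A minimal Python sketch (the function name, tolerance, and iteration cap are my own choices, not from the course):

```python
import math

def fixed_point_iteration(g, p0, tol=1e-8, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree within tol."""
    p = p0
    for n in range(1, max_iter + 1):
        p_next = g(p)
        if abs(p_next - p) < tol:   # stopping condition on the step size
            return p_next, n
        p = p_next
    raise RuntimeError("no convergence within max_iter iterations")

# example: cos has a fixed point near 0.739 (the solution of x = cos x)
print(fixed_point_iteration(math.cos, 1.0))
```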
In action
About fixed point iteration
Relation to rootfinding
Now we know what fixed point iteration is, but how do we apply it to rootfinding? More precisely, given a rootfinding equation f(x) = x^3 + x^2 - 3x - 3 = 0, what is its iteration function g(x)?
Iteration function
Algebraically transform to the form x = ...
f(x) = x^3 + x^2 - 3x - 3 = 0
x = x^3 + x^2 - 2x - 3
x = (x^3 + x^2 - 3) / 3
...
Every rootfinding problem can be transformed into any number of fixed point problems (fortunately or unfortunately?)
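As a quick illustration (my own experiment, not part of the deck), the two rearrangements above can be iterated a few steps from a point in (1, 2); both have the root sqrt(3) of f as a fixed point, yet they behave very differently under iteration:

```python
# two candidate iteration functions for f(x) = x^3 + x^2 - 3x - 3 (root sqrt(3) ~ 1.732)
g1 = lambda x: x**3 + x**2 - 2*x - 3
g2 = lambda x: (x**3 + x**2 - 3) / 3

for name, g in (("g1", g1), ("g2", g2)):
    p = 1.5                      # starting guess inside (1, 2)
    iterates = []
    for _ in range(5):           # a handful of iterations is enough to see the trend
        p = g(p)
        iterates.append(p)
    print(name, iterates)
```

Neither rearrangement is guaranteed to settle on the root in (1, 2), which is exactly what the analysis below examines.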
In action
Analysis
- #1: the iteration function converges, but to a fixed point outside the interval (1, 2)
- #2: fails to converge, despite attaining values quite close to #1
- #3 and #5: converge rapidly; #3 adds one correct decimal every iteration, #5 doubles the number of correct decimals every iteration
- #4: converges, but very slowly
Convergence
This analysis suggests a natural question: when, and how fast, does fixed point iteration converge? The existence of the fixed point of g is justified by our previous theorem.
This demonstrates the importance of the parameter k:
- when k → 0, convergence is rapid
- when k → 1, convergence is dramatically slow
- k = 1/2 is roughly the same as the bisection method
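Here k is the bound on |g'| from the convergence theorem; a standard consequence (textbook form, not quoted from the slides) is the linear error bound, which explains the behavior above:

```latex
|p_{n+1} - p| \;\le\; k\,|p_n - p|
\qquad\Longrightarrow\qquad
|p_n - p| \;\le\; k^{\,n}\,|p_0 - p| .
```

Each iteration shrinks the error by roughly a factor of k; with k = 1/2 the error is halved per step, which is essentially the bisection rate.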
Fixed Point Iteration Schemes: Order of Convergence
All about the derivatives g^(k)(p).
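The standard result behind this slide (textbook form, with α ≥ 2 playing the role of the k in g^(k)(p)): if g is smooth enough near the fixed point p and

```latex
g'(p) = g''(p) = \dots = g^{(\alpha-1)}(p) = 0, \qquad g^{(\alpha)}(p) \neq 0,
```

then for p_0 sufficiently close to p the iteration p_{n+1} = g(p_n) converges to p with order α, and

```latex
\lim_{n \to \infty} \frac{|p_{n+1} - p|}{|p_n - p|^{\alpha}} = \frac{|g^{(\alpha)}(p)|}{\alpha!} .
```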
Stopping condition
Two steps
The first step
The second step
2.4 Newton’s Method
Newton's Method
Definition
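The definition itself was shown as a figure; the standard form, obtained from the tangent line approximation, is p_{n+1} = p_n - f(p_n)/f'(p_n). A minimal Python sketch (the function name and tolerance are my own):

```python
def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method: follow the tangent line at p_n to its x-intercept."""
    p = p0
    for n in range(1, max_iter + 1):
        p_next = p - f(p) / fprime(p)   # x-intercept of the tangent line at p
        if abs(p_next - p) < tol:
            return p_next, n
        p = p_next
    raise RuntimeError("Newton's method did not converge")

# example: the root sqrt(3) of f(x) = x^3 + x^2 - 3x - 3, starting in (1, 2)
f = lambda x: x**3 + x**2 - 3*x - 3
fp = lambda x: 3*x**2 + 2*x - 3
print(newton(f, fp, 1.5))
```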
In action
In the previous example
- Newton's method used 8 function evaluations
- The bisection method requires 36 evaluations starting from (1, 2)
- False position requires 31 evaluations starting from (1, 2)
Initial guess
Are these comparisons fair?
- p0 = 0.48 converges after 5 iterations
- p0 = 0.4 fails to converge even after 5000 iterations
- p0 = 0 converges after 42 iterations
p0 in Newton's method
- Not guaranteed to converge (p0 = 0.4 fails to converge)
- May converge to a value very far from p0 (p0 = 0 is such a case)
- Heavily dependent on the choice of p0
Convergence Analysis for Newton's Method
The simplest plan of attack is to apply the general fixed point iteration convergence theorem
Analysis strategy
To do this, it must be shown that there exists an interval I, containing the root p, on which the hypotheses of that theorem are satisfied.
Newton's Method Guaranteed to Converge?
Why does Newton's method sometimes fail to converge? This theorem guarantees that δ exists, but it may be very small.
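For reference, the theorem being alluded to is usually stated like this (standard textbook form; the δ on the slide is the radius of the interval):

```latex
\text{Let } f \in C^2[a,b] \text{ and } p \in (a,b) \text{ with } f(p) = 0 \text{ and } f'(p) \neq 0.
\text{ Then there exists } \delta > 0 \text{ such that Newton's method converges to } p
\text{ for any starting point } p_0 \in [p - \delta,\, p + \delta].
```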
Oh no! After these annoying analyses, Newton's method is still not guaranteed to converge!?
Don't worry
Actually, there is an intuitive remedy: combine Newton's method with the bisection method. Try Newton's method first; if an approximation falls outside the current interval, apply a bisection step instead to obtain a better guess. (Can you write an algorithm for this method? A sketch follows.)
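A minimal sketch of that hybrid, assuming f(a) and f(b) have opposite signs; the structure, names, and tolerances are my own and not the course's reference algorithm:

```python
def newton_bisection(f, fprime, a, b, tol=1e-10, max_iter=100):
    """Safeguarded Newton: fall back to a bisection step whenever the
    Newton step would leave the current bracketing interval [a, b]."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a                                  # start from one end of the bracket
    for _ in range(max_iter):
        dfp = fprime(p)
        if dfp != 0:
            p_new = p - f(p) / dfp         # tentative Newton step
        else:
            p_new = (a + b) / 2            # derivative vanished: bisect instead
        if not (a < p_new < b):            # Newton step left the bracket: bisect instead
            p_new = (a + b) / 2
        if abs(p_new - p) < tol:
            return p_new
        # shrink the bracket so that it still encloses a root
        if f(a) * f(p_new) < 0:
            b = p_new
        else:
            a = p_new
        p = p_new
    raise RuntimeError("did not converge within max_iter iterations")

# example: the root sqrt(3) of f(x) = x^3 + x^2 - 3x - 3 on [1, 2]
print(newton_bisection(lambda x: x**3 + x**2 - 3*x - 3,
                       lambda x: 3*x**2 + 2*x - 3, 1.0, 2.0))
```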
Newton's Method: Convergence Analysis
At least quadratic, because g'(p) = 0 (which follows from f(p) = 0).
Stopping condition
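The calculation behind g'(p) = 0 (standard, since Newton's method is fixed point iteration with g(x) = x - f(x)/f'(x)):

```latex
g(x) = x - \frac{f(x)}{f'(x)}, \qquad
g'(x) = 1 - \frac{f'(x)^2 - f(x)\,f''(x)}{f'(x)^2} = \frac{f(x)\,f''(x)}{f'(x)^2},
```

so g'(p) = 0 whenever f(p) = 0 and f'(p) ≠ 0, and the order-of-convergence result above gives at least quadratic convergence.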
Recall that
Is Newton's method always faster?
In action
2.5 Secant Method
Secant method
Because Newton's method needs 2 function evaluations per iteration (f and f') and requires the derivative, the secant method was introduced. It is a variation on either false position or Newton's method:
- only 1 additional function evaluation per iteration
- does not require the derivative
Let's see the figure first.
Secant method
The secant method is a variation on either false position or Newton's method:
- 1 additional function evaluation per iteration
- does not require the derivative
- does not maintain an interval
- p_{n+1} is calculated from p_n and p_{n-1}
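The standard update replaces f'(p_n) in Newton's formula with the slope of the secant line through the last two iterates, p_{n+1} = p_n - f(p_n)(p_n - p_{n-1}) / (f(p_n) - f(p_{n-1})). A minimal Python sketch (names and tolerance are my own):

```python
def secant(f, p0, p1, tol=1e-10, max_iter=50):
    """Secant method: Newton's method with f' replaced by a finite-difference slope."""
    f0, f1 = f(p0), f(p1)
    for n in range(2, max_iter + 1):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # secant update from the last two iterates
        if abs(p2 - p1) < tol:
            return p2, n
        p0, f0 = p1, f1                         # only one new f evaluation per iteration
        p1, f1 = p2, f(p2)
    raise RuntimeError("secant method did not converge")

# example: the root sqrt(3) of f(x) = x^3 + x^2 - 3x - 3, starting from p0 = 1, p1 = 2
print(secant(lambda x: x**3 + x**2 - 3*x - 3, 1.0, 2.0))
```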