Numerical Analysis
EE, NCKU
Tien-Hao Chang (Darby Chang)
In the previous slide
- Rootfinding
  - bisection method
  - false position
  - multiplicity
- Bisection method
  - Intermediate Value Theorem
  - convergence measures
- False position
  - yet another simple enclosure method
  - advantages and disadvantages compared with the bisection method
In this slide
- Fixed point iteration scheme
  - what is a fixed point?
  - iteration function
  - convergence
- Newton's method
  - tangent line approximation
- Secant method
Rootfinding
- Simple enclosure
  - Intermediate Value Theorem
  - guaranteed to converge
  - convergence rate is slow
  - bisection and false position
- Fixed point iteration
  - Mean Value Theorem
  - rapid convergence
  - loss of guaranteed convergence
2.3 Fixed Point Iteration Schemes
There is at least one point on the graph at which the tangent line is parallel to the secant line
Mean Value Theorem
- We use a slightly different formulation
- An example of using this theorem: proving an inequality
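For reference, the standard statement of the Mean Value Theorem (a sketch; the course's "slightly different formulation" may restate it, e.g. as an inequality bound):

```latex
% Mean Value Theorem (standard form):
% if f is continuous on [a, b] and differentiable on (a, b),
% then there exists c in (a, b) such that
\[
  f'(c) = \frac{f(b) - f(a)}{b - a},
\]
% i.e., the tangent line at c is parallel to the secant line
% through (a, f(a)) and (b, f(b)), the picture described above.
```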
Fixed Points
Fixed points
- Consider the function sin x
  - thought of as moving the input value π/6 to the output value 1/2
- The sine function maps zero to zero
  - the sine function fixes the location of 0
  - x = 0 is said to be a fixed point of the function sin x
Number of fixed points
- According to the previous figure, an obvious question is: how many fixed points does a given function have?
Only sufficient conditions
- namely, not necessary conditions
- it is possible for a function to violate one or more of the hypotheses, yet still have a (possibly unique) fixed point
Fixed Point Iteration
Fixed point iteration
- If a function g is known to have a fixed point, one way to approximate its value is a 'fixed point iteration scheme'
- It can be defined as follows: choose a starting value p0 and compute p_{n+1} = g(p_n) for n = 0, 1, 2, ...
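A minimal Python sketch of such a scheme (the helper name, the tolerance, and the cos example are illustrative assumptions, not taken from the slides):

```python
import math

def fixed_point_iteration(g, p0, tol=1e-10, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree."""
    p = p0
    for n in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:    # stop when iterates stabilize
            return p_next, n + 1
        p = p_next
    raise RuntimeError("fixed point iteration did not converge")

# g(x) = cos(x) has a unique fixed point near 0.7390851332
print(fixed_point_iteration(math.cos, 1.0))
```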
In action
About fixed point iteration
Relation to rootfinding
- Now we know what fixed point iteration is, but how do we apply it to rootfinding?
- More precisely, given a rootfinding equation, f(x) = x^3 + x^2 - 3x - 3 = 0, what is its iteration function g(x)?
Iteration function
- Algebraically transform to the form x = ...
  - f(x) = x^3 + x^2 - 3x - 3 = 0
  - x = x^3 + x^2 - 2x - 3
  - x = (x^3 + x^2 - 3) / 3
  - ...
- Every rootfinding problem can be transformed into any number of fixed point problems (fortunately or unfortunately?)
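For illustration, a short Python comparison of iteration functions derived from this f. The first two are the rearrangements above; the cube-root form is an extra rearrangement (from x^3 = 3x + 3 - x^2) that I add here, not necessarily one of the slides' numbered choices:

```python
def iterate(g, p0, n):
    """Apply p <- g(p) n times and return the whole trajectory."""
    trajectory = [p0]
    for _ in range(n):
        trajectory.append(g(trajectory[-1]))
    return trajectory

g1 = lambda x: x**3 + x**2 - 2*x - 3        # x = x^3 + x^2 - 2x - 3
g2 = lambda x: (x**3 + x**2 - 3) / 3        # x = (x^3 + x^2 - 3) / 3
g3 = lambda x: (3*x + 3 - x**2) ** (1/3)    # x = (3x + 3 - x^2)^(1/3)

for g in (g1, g2, g3):
    print(iterate(g, 1.5, 5))
# g1 diverges; g2 converges, but to the root -1 outside (1,2);
# g3 converges quickly to sqrt(3) ~ 1.7320508, since |g3'(sqrt(3))| ~ 0.05
```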
In action
Analysis
- #1 converges, but to a fixed point outside the interval (1,2)
- #2 fails to converge, despite attaining values quite close to those of #1
- #3 and #5 converge rapidly
  - #3 adds one correct decimal every iteration
  - #5 doubles the number of correct decimals every iteration
- #4 converges, but very slowly
Convergence
- This analysis suggests an obvious question: when does the iteration converge to the fixed point of g?
- The existence of the fixed point of g is guaranteed by our previous theorem
The parameter k
- demonstrates the importance of the parameter k (the bound |g'(x)| ≤ k < 1 from the convergence theorem)
- when k → 0, convergence is rapid
- when k → 1, convergence is dramatically slow
- k = 1/2 is roughly the same as the bisection method
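A sketch of why k governs the speed, where k is the usual bound |g'(x)| ≤ k < 1 from the fixed point convergence theorem:

```latex
% By the Mean Value Theorem, for some c_n between p_n and p:
%   p_{n+1} - p = g(p_n) - g(p) = g'(c_n)(p_n - p),
% so with |g'(x)| <= k < 1 on the interval,
\[
  |p_{n+1} - p| \le k\,|p_n - p|
  \quad\Longrightarrow\quad
  |p_n - p| \le k^n\,|p_0 - p|.
\]
% With k = 1/2 the error shrinks by half each iteration,
% just as bisection halves its interval each step.
```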
Fixed Point Iteration Schemes: Order of Convergence
- All about the derivatives, g^(k)(p)
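The standard result behind this slogan, assuming g is sufficiently smooth near the fixed point p (a sketch of the usual statement; the course's exact wording may differ):

```latex
% If g(p) = p and g'(p) = g''(p) = ... = g^{(alpha-1)}(p) = 0
% while g^{(alpha)}(p) != 0, Taylor expansion about p gives
\[
  p_{n+1} - p = g(p_n) - g(p)
              = \frac{g^{(\alpha)}(\xi_n)}{\alpha!}\,(p_n - p)^{\alpha},
\]
% so the iteration converges with order alpha and asymptotic
% error constant |g^{(alpha)}(p)| / alpha!.
% In particular, g'(p) != 0 gives linear convergence and
% g'(p) = 0, g''(p) != 0 gives quadratic convergence.
```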
Stopping condition
Two steps
The first step
The second step
2.3 Fixed Point Iteration Schemes
2.4 Newton’s Method
Newton's Method
- Definition: given f differentiable and an initial guess p0, iterate p_{n+1} = p_n - f(p_n) / f'(p_n)
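A minimal Python sketch (the function name and stopping test are my choices; the test function is the deck's earlier example f(x) = x^3 + x^2 - 3x - 3, whose root in (1,2) is sqrt(3)):

```python
def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method: p_{n+1} = p_n - f(p_n) / f'(p_n)."""
    p = p0
    for n in range(max_iter):
        step = f(p) / fprime(p)
        p -= step
        if abs(step) < tol:          # stop when the Newton step is tiny
            return p, n + 1
    raise RuntimeError("Newton's method did not converge")

f = lambda x: x**3 + x**2 - 3*x - 3
fprime = lambda x: 3*x**2 + 2*x - 3
print(newton(f, fprime, 1.5))        # ~1.7320508075688772 in about 5 iterations
```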
In action
In the previous example
- Newton's method used 8 function evaluations
- the bisection method requires 36 evaluations starting from (1,2)
- false position requires 31 evaluations starting from (1,2)
Initial guess
- Are these comparisons fair?
- Example:
  - p0 = 0.48 converges to 0.4510472613 after 5 iterations
  - p0 = 0.4 fails to converge even after 5000 iterations
  - p0 = 0 converges to 697.4995475 after 42 iterations
p0 in Newton's method
- Not guaranteed to converge
  - p0 = 0.4 fails to converge
- May converge to a value very far from p0
  - p0 = 0 converges to 697.4995475
- Heavily dependent on the choice of p0
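The demo function behind the numbers above is not reproduced on this slide. As a self-contained illustration of the same failure mode (my own example, not the course's), Newton's method on f(x) = x^3 - 2x + 2 started at p0 = 0 falls into a 2-cycle and never converges:

```python
# f(0) = 2,  f'(0) = -2  ->  p1 = 0 - 2 / (-2) = 1
# f(1) = 1,  f'(1) =  1  ->  p2 = 1 - 1 / 1   = 0
# so the iterates oscillate 0, 1, 0, 1, ... indefinitely
f = lambda x: x**3 - 2*x + 2
fprime = lambda x: 3*x**2 - 2
p = 0.0
for n in range(6):
    p = p - f(p) / fprime(p)
    print(n + 1, p)                  # prints 1.0, 0.0, 1.0, 0.0, ...
```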
Convergence Analysis for Newton’s Method
The simplest plan of attack is to apply the general fixed point iteration convergence theorem
Analysis strategy
- To do this, it must be shown that there exists an interval I, containing the root p, on which the Newton iteration function g(x) = x - f(x)/f'(x) satisfies the hypotheses of the fixed point convergence theorem
Newton's Method: Guaranteed to Converge?
- Why does Newton's method sometimes fail to converge?
- This theorem guarantees that such a δ exists, but it may be very small
Oh no! After these annoying analyses, Newton's method is still not guaranteed to converge!?
Don't worry
- Actually, there is an intuitive method: combine Newton's method with the bisection method
- Apply Newton's method first; if an approximation falls outside the current interval, apply the bisection method to obtain a better guess
- (Can you write an algorithm for this method? One possible sketch follows.)
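One possible sketch of that hybrid algorithm in Python (an answer attempt; the bracketing update and stopping test are my own choices, not the course's reference solution):

```python
def newton_bisection(f, fprime, a, b, tol=1e-10, max_iter=100):
    """Newton's method safeguarded by a bracketing interval [a, b]."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = (a + b) / 2                      # initial guess: the midpoint
    for _ in range(max_iter):
        fp = fprime(p)
        if fp == 0:
            p_new = (a + b) / 2          # flat tangent: bisect instead
        else:
            p_new = p - f(p) / fp        # try a Newton step first
            if not (a < p_new < b):      # step left the interval:
                p_new = (a + b) / 2      # fall back to bisection
        if f(a) * f(p_new) < 0:          # keep the root bracketed
            b = p_new
        else:
            a = p_new
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("did not converge within max_iter")
```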
Newton's Method
- Convergence analysis: at least quadratic
  - the iteration function is g(x) = x - f(x)/f'(x), so g'(x) = f(x)f''(x) / [f'(x)]^2
  - g'(p) = 0, since f(p) = 0
- Stopping condition
Recall that …
Is Newton’s method always faster?
In action
2.4 Newton’s Method
2.5 Secant Method
Secant method
- Newton's method
  - needs 2 function evaluations per iteration
  - requires the derivative
- The secant method is a variation on either false position or Newton's method
  - only 1 new function evaluation per iteration
  - does not require the derivative
- Let's see the figure first
Secant method
- a variation on either false position or Newton's method
  - 1 new function evaluation per iteration
  - does not require the derivative
  - does not maintain an interval
  - p_{n+1} is calculated from p_n and p_{n-1}:
    p_{n+1} = p_n - f(p_n)(p_n - p_{n-1}) / (f(p_n) - f(p_{n-1}))
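A minimal Python sketch (helper name, tolerance, and starting points are illustrative). Note that f(p_{n-1}) is reused, so each iteration costs only one new function evaluation:

```python
def secant(f, p0, p1, tol=1e-10, max_iter=50):
    """Secant method: Newton's method with f'(p_n) replaced by the
    difference quotient (f(p_n) - f(p_{n-1})) / (p_n - p_{n-1})."""
    f0, f1 = f(p0), f(p1)
    for n in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)    # secant step
        if abs(p2 - p1) < tol:
            return p2, n + 1
        p0, f0 = p1, f1                 # reuse the previous evaluation
        p1, f1 = p2, f(p2)              # one new evaluation per iteration
    raise RuntimeError("secant method did not converge")

f = lambda x: x**3 + x**2 - 3*x - 3
print(secant(f, 1.0, 2.0))              # converges to sqrt(3) ~ 1.7320508
```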
2.5 Secant Method