Chapter 3 Root Finding
3.1 The Bisection Method
Let f be a continuous function on [a, b]. Suppose we know that f(a) f(b) < 0; then, by the Intermediate Value Theorem, there is a root between a and b.
Example 3.1
A formal statement of the bisection method is given in Algorithm 3.1.
Theorem 3.1 (Bisection Convergence and Error)
Each bisection step halves the bracketing interval, so after n steps the interval has length (b - a)/2^n, and its midpoint is within (b - a)/2^(n+1) of a root. The method therefore always converges, but only linearly.
Bisection Method
Advantage:
- A global method: it always converges, no matter how far the starting interval is from the actual root.
Disadvantages:
- It cannot be used to find roots where the function is tangent to the axis and does not pass through it, since there is no sign change (for example, f(x) = x^2 at x = 0).
- It converges slowly compared with other methods.
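As a concrete illustration of Algorithm 3.1, here is a minimal Python sketch of bisection; the function name, tolerance, and test function are illustrative choices, not taken from the text.

    import math

    def bisect(f, a, b, tol=1e-10, max_iter=200):
        # Find a root of f in [a, b]; requires a sign change: f(a) f(b) < 0.
        fa = f(a)
        if fa * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = a + (b - a) / 2              # midpoint, written to avoid overflow
            fc = f(c)
            if fc == 0 or (b - a) / 2 < tol:
                return c
            if fa * fc < 0:                  # sign change in [a, c]: keep left half
                b = c
            else:                            # sign change in [c, b]: keep right half
                a, fa = c, fc
        return a + (b - a) / 2

    # Example: the root of cos(x) - x in [0, 1]
    print(bisect(lambda x: math.cos(x) - x, 0.0, 1.0))   # ~0.7390851332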
3.2 Newton's Method: Derivation and Examples
Newton's method is the classic algorithm for finding roots of functions. There are two good derivations of Newton's method:
- Geometric derivation
- Analytic derivation
Newton's Method: Geometric Derivation
The fundamental idea in Newton's method is to use the tangent line approximation to the function f at the point (x_0, f(x_0)). The point-slope formula for the equation of the straight line gives us
y = f(x_0) + f'(x_0)(x - x_0),
and setting y = 0 gives the next iterate x_1 = x_0 - f(x_0)/f'(x_0). Continue the process with another straight line (the tangent at x_1, then at x_2, and so on) to get the general iteration
x_{n+1} = x_n - f(x_n)/f'(x_n).
Newton's Method: Analytic Derivation
Expand f in a Taylor series about the current iterate x_n:
f(α) = f(x_n) + (α - x_n) f'(x_n) + (1/2)(α - x_n)^2 f''(ξ_n),
for some ξ_n between α and x_n. Setting f(α) = 0 and dropping the second-order term gives 0 ≈ f(x_n) + (α - x_n) f'(x_n); solving for α and calling the result x_{n+1} recovers the same iteration.
Example 3.2
Newton's Method
Advantage:
- Very fast.
Disadvantages:
- Not a global method. For example, for the function of Figure 3.3 (root x = 0.5) and that of Figure 3.4 (root x = 0.05), the initial point must be chosen carefully.
- For a poorly chosen initial point, Newton's method can cycle indefinitely, just hopping back and forth between two values. For example, consider the text's example function with root x = 0.
[Table of iterates: for a poor initial value the predictions are wrong (they have the wrong sign, even though the root is positive); for an initial value very close to the actual root the iteration succeeds.]
3.3 How to Stop Newton's Method
Ideally, we would want to stop when the error |α - x_n| is sufficiently small (p. 12), but α is unknown. In practice we stop when the step |x_{n+1} - x_n| is small, preferably relative to |x_{n+1}|, and we also check that the residual |f(x_n)| is small enough, to make sure we have actually found a root rather than merely stalled.
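Here is a minimal sketch of Newton's method using the stopping tests above: stop when the step is small relative to the iterate, then confirm that f is also small. Names, tolerances, and the test function are illustrative, not from the text.

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        # Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / fprime(x)
            # stop when the step is small relative to the iterate ...
            if abs(x_new - x) <= tol * (abs(x_new) + 1.0):
                # ... and make sure f(x_new) is also small enough
                if abs(f(x_new)) <= tol:
                    return x_new
            x = x_new
        return x

    # Example: root of x**3 - 2x - 5 near x0 = 2
    print(newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))   # ~2.0945514815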
3.4 Application: Division Using Newton's Method
The purpose is to illustrate the use of Newton's method and the analysis of the resulting iteration. To compute 1/b, apply Newton's method to f(x) = 1/x - b, for which f'(x) = -1/x^2; the Newton step simplifies to
x_{n+1} = x_n (2 - b x_n),
which requires no division at all.
Questions:
- When does this iteration converge, and how fast?
- What initial guesses x_0 will work for us?
The way that the computer stores numbers, in normalized floating-point form (see (2.11), p. 53), means it suffices to compute 1/b for b restricted to a fixed interval; a suitable initial value x_0 for that interval is derived on p. 56.
Example 3.3
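The iteration really does implement division using only multiplication and subtraction. A short hedged sketch (the starting value and step count below are illustrative):

    def reciprocal(b, x0, n_steps=6):
        # Approximate 1/b without dividing: Newton for f(x) = 1/x - b.
        x = x0
        for _ in range(n_steps):
            x = x * (2.0 - b * x)      # x_{n+1} = x_n (2 - b x_n)
        return x

    # For b in [1/2, 1], x0 = 1 lies inside the convergence region 0 < x0 < 2/b:
    print(reciprocal(0.75, 1.0))       # ~1.3333333333 = 1/0.75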
3.5 The Newton Error Formula
Expanding f(α) = 0 in a Taylor series about x_n and using the definition of the iteration gives the Newton error formula:
α - x_{n+1} = -(α - x_n)^2 f''(ξ_n) / (2 f'(x_n)), for some ξ_n between α and x_n. (3.12)
Thus the new error is proportional to the square of the old error: near a simple root, Newton's method is quadratically convergent.
Definition 3.1 (Order of Convergence)
A sequence x_n → α converges with order p if
lim_{n→∞} |α - x_{n+1}| / |α - x_n|^p = C
for some constant C that is nonzero and finite. The requirement that C be nonzero and finite actually forces p to be a single unique value.
- Linear convergence: p = 1 (with C < 1)
- Quadratic convergence: p = 2
- Superlinear convergence: p = 1, but with lim_{n→∞} |α - x_{n+1}| / |α - x_n| = 0
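The order p can also be estimated numerically from three consecutive errors, since e_{n+1} ≈ C e_n^p gives p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}). A sketch of this check, using Newton's method on f(x) = x^2 - 2 as an illustrative test case:

    import math

    def estimate_order(errors):
        # Estimate p from consecutive errors e_n = |alpha - x_n|.
        return [math.log(errors[i + 1] / errors[i]) / math.log(errors[i] / errors[i - 1])
                for i in range(1, len(errors) - 1)]

    # Generate Newton iterates for f(x) = x**2 - 2 (alpha = sqrt(2)), x0 = 2:
    alpha, x, errors = math.sqrt(2.0), 2.0, []
    for _ in range(5):
        errors.append(abs(alpha - x))
        x = x - (x * x - 2.0) / (2.0 * x)
    print(estimate_order(errors))      # estimates approach p = 2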
Example 3.6
3.6 Newton's Method: Theory and Convergence
The convergence theorem for Newton's method and its proof are given on pp. 106-108.
3.7 Application: Computation of the Square Root
To compute √b, apply Newton's method to f(x) = x^2 - b, which gives the iteration x_{n+1} = (x_n + b/x_n)/2.
Questions:
- Can we find an initial guess such that Newton's method will always converge for b on this interval?
- How rapidly will it converge?
The Newton error formula (3.12) applied to f(x) = x^2 - b gives
√b - x_{n+1} = -(√b - x_n)^2 / (2 x_n). (3.25)
Dividing through by √b, the relative error r_n = (√b - x_n)/√b satisfies
r_{n+1} = -(√b / (2 x_n)) r_n^2. (3.26)
How do we find the initial value?
- One option: choose the midpoint of the interval. For example, if b ∈ [1/4, 1] (the reduced range), take x_0 = 5/8.
- Better, since b is known: use linear interpolation of √x across the interval. On [1/4, 1], the line through (1/4, 1/2) and (1, 1) gives x_0 = (2b + 1)/3.
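A sketch of the resulting square-root routine. The reduction of b to [1/4, 1] via the floating-point exponent is assumed to have been done already, and the interpolated starting guess x_0 = (2b + 1)/3 is the one derived above.

    def sqrt_newton(b, n_steps=4):
        # Newton for f(x) = x**2 - b: x_{n+1} = (x_n + b/x_n)/2.
        # Assumes b has already been reduced to [1/4, 1].
        x = (2.0 * b + 1.0) / 3.0      # line through (1/4, 1/2) and (1, 1)
        for _ in range(n_steps):
            x = 0.5 * (x + b / x)
        return x

    import math
    print(sqrt_newton(0.5), math.sqrt(0.5))   # both ~0.7071067812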
3.8 The Secant Method: Derivation and Examples
An obvious drawback of Newton's method is that it requires a formula for the derivative of f. One obvious way to deal with this problem is to use an approximation to the derivative in the Newton formula, for example the difference quotient f'(x_n) ≈ (f(x_n + h) - f(x_n))/h. Another method is the secant method, which uses a secant line through the two most recent iterates in place of the tangent line:
x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
The Secant Method
Its advantages over Newton's method:
- It does not require the derivative.
- It can be coded in a way requiring only a single new function evaluation per iteration; Newton's method requires two, one for the function and one for the derivative.
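A sketch of the secant method coded as described, so that each pass of the loop makes exactly one new evaluation of f. The test problem is the same one used earlier, for comparison; names and tolerances are illustrative.

    import math

    def secant(f, x0, x1, tol=1e-12, max_iter=50):
        # Secant method: the tangent slope is replaced by a difference quotient.
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2 - x1) <= tol * (abs(x2) + 1.0):
                return x2
            x0, f0 = x1, f1            # reuse the old value: no re-evaluation
            x1, f1 = x2, f(x2)         # the single new function evaluation
        return x1

    print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))   # ~0.7390851332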
Example 3.7
Error Estimation
The error formula for the secant method is
α - x_{n+1} = -(α - x_n)(α - x_{n-1}) f''(ξ_n) / (2 f'(η_n)),
for points ξ_n, η_n in the interval spanned by the iterates and the root.
The Convergence
The secant method converges superlinearly, with order p = (1 + √5)/2 ≈ 1.618 (the golden ratio). In practice this is almost the same as Newton's method, especially when counting function evaluations rather than iterations.
3.9 Fixed-point Iteration
The goal of this section is to use the added understanding of simple iteration to enhance our understanding of, and ability to solve, root-finding problems. If we set, for example, g(x) = x - f(x), then a root of f is exactly a fixed point of g: f(α) = 0 if and only if g(α) = α.
Fixed-point Iteration
Because g(α) = α, that is, g leaves the point α unchanged, this kind of point is called a fixed point of the function g, and an iteration of the form x_{n+1} = g(x_n) (3.33) is called a fixed-point iteration for g.
[Figure: the fixed point of g, where the graph of y = g(x) crosses the line y = x, coincides with the root of f.]
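A minimal fixed-point iteration sketch. The choice g(x) = cos(x) is illustrative: its fixed point solves cos(x) = x, i.e., it is the root of f(x) = cos(x) - x from the earlier examples.

    import math

    def fixed_point(g, x0, tol=1e-12, max_iter=500):
        # Iterate x_{n+1} = g(x_n) until successive iterates agree to tol.
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) <= tol:
                return x_new
            x = x_new
        return x

    print(fixed_point(math.cos, 1.0))   # ~0.7390851332, the root of cos(x) - x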
Example 3.8
Theorem 3.5
In standard form: if g is continuous on [a, b], maps [a, b] into itself, and satisfies |g'(x)| ≤ γ < 1 on [a, b], then g has a unique fixed point α in [a, b], and the fixed-point iteration converges to α for any x_0 in [a, b], with |α - x_{n+1}| ≤ γ |α - x_n|.
3.10 Special Topics in Root-finding Methods
3.10.1 Extrapolation and Acceleration
The examples in this subsection have some mistakes, so we skip it.
3.10.2 Variants of Newton's Method
Newton's method vs. the chord method, in which the derivative is evaluated once and then frozen:
x_{n+1} = x_n - f(x_n)/f'(x_0).
How much do we lose? The chord method is only linearly convergent, and only locally convergent. It is nonetheless useful in solving nonlinear systems of equations, where each derivative (Jacobian) evaluation is expensive. One interesting variant of the chord method updates the point at which the derivative is evaluated, but not at every iteration; a sketch of this variant follows below.
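A sketch of the chord method with the periodic-refresh variant just described. The refresh period m is an illustrative parameter, not from the text; taking m larger than max_iter gives the pure chord method.

    import math

    def chord(f, fprime, x0, m=5, tol=1e-12, max_iter=200):
        # Chord method: freeze the slope, refreshing it only every m iterations.
        x = x0
        slope = fprime(x)
        for k in range(max_iter):
            if k > 0 and k % m == 0:
                slope = fprime(x)      # occasional derivative update
            x_new = x - f(x) / slope
            if abs(x_new - x) <= tol * (abs(x_new) + 1.0):
                return x_new
            x = x_new
        return x

    print(chord(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0))  # ~0.7390851332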
Example 3.12
Other Approximations to the Derivative
Section 3.8 mentioned a method that uses a finite difference approximation to the derivative in Newton's method. With a fixed difference increment, it achieves only linear convergence (shown on pp. 133-134).
3.10.3 The Secant Method: Theory and Convergence
The proof is given on pp. 136-139. You can study it on your own.
3.10.4 Multiple Roots
So far our study of root-finding methods has assumed that the derivative of the function does not vanish at the root: f'(α) ≠ 0. What happens if the derivative does vanish at the root?
Example 3.13
L’Hopital’s Rule for forms of type 0/0
Another example: f(x) = 1 - x e^(1-x)
The data (Table 3.10a) suggest that both iterations are converging, but neither one is converging as rapidly as we might have expected. Can we explain this? The fact that f'(α) = 0 at the root α = 1 will have an effect on both the Newton and secant methods: the error formulas (3.12) and (3.50) and the limits (3.24) and (3.47) all require that f'(α) ≠ 0. Can we find anything more in the way of an explanation?
Discussion: Newton's Method
Assume f has a double root at α, so that f(α) = f'(α) = 0 but f''(α) ≠ 0. Note that we no longer have f'(α) ≠ 0; therefore (according to Theorem 3.7, p. 124) we no longer have quadratic convergence. In fact, at a double root Newton's method converges only linearly, with error ratio approaching 1/2.
Discussion: Newton's Method
If we change the Newton iteration to
x_{n+1} = x_n - 2 f(x_n)/f'(x_n),
then for a double root we once again have quadratic convergence. More generally, for a root of multiplicity m, use x_{n+1} = x_n - m f(x_n)/f'(x_n). The problem with this technique is that it requires that we know the degree of multiplicity of the root ahead of time.
Discussion: Newton's Method
So an alternative is needed: define u(x) = f(x)/f'(x). Then u has a simple root at α regardless of the multiplicity of the root of f, so we can apply Newton's method to u. The drawback of this method is that applying Newton's method to u requires a formula for the second derivative of f, since u'(x) = 1 - f(x) f''(x)/f'(x)^2.
Discussion: Newton's Method
Applying the Newton iteration to u gives
x_{n+1} = x_n - u(x_n)/u'(x_n), (3.60)
which, written out in terms of f, is
x_{n+1} = x_n - f(x_n) f'(x_n) / (f'(x_n)^2 - f(x_n) f''(x_n)). (3.61)
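A hedged sketch comparing the two fixes on the double root of f(x) = 1 - x e^(1-x) at α = 1; the derivatives f'(x) = (x - 1)e^(1-x) and f''(x) = (2 - x)e^(1-x) are computed by hand, and the step counts are illustrative.

    import math

    f   = lambda x: 1.0 - x * math.exp(1.0 - x)
    df  = lambda x: (x - 1.0) * math.exp(1.0 - x)      # f'(x)
    d2f = lambda x: (2.0 - x) * math.exp(1.0 - x)      # f''(x)

    def newton_u(x, n_steps=4):
        # Newton applied to u(x) = f(x)/f'(x), as in (3.60)-(3.61).
        for _ in range(n_steps):
            fx, dfx = f(x), df(x)
            denom = dfx * dfx - fx * d2f(x)
            if denom == 0.0:
                break                  # iterate is (numerically) at the root
            x = x - fx * dfx / denom
        return x

    def newton_mult(x, n_steps=4, m=2):
        # Multiplicity-corrected Newton: x <- x - m f(x)/f'(x).
        for _ in range(n_steps):
            x = x - m * f(x) / df(x)
        return x

    print(newton_u(0.5))     # -> approximately 1.0, quadratically
    print(newton_mult(0.5))  # -> approximately 1.0, but m must be known in advance

Note that near the multiple root the computed values of f are dominated by rounding noise, so the attainable accuracy is limited; this is exactly the premature-convergence effect discussed below.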
Table 3.10
Discussion
From Table 3.10, we can see that the accuracy is not as good as before. What is going on? Let's look at a graph of the polynomial: Fig. 3.11 shows a plot of 8000 points from this curve on the interval [0.45, 0.55] (root = 0.5). We observe premature convergence. This is not caused by the root-finding method; it is caused by the use of finite-precision arithmetic: near the multiple root the computed function values oscillate in a band of rounding noise, so the iteration cannot distinguish nearby points from the root.
3.10.5 In Search of Fast Global Convergence: Hybrid Algorithm
- Bisection method: slow, but steady and reliable.
- Newton's method and the secant method: fast, but potentially unreliable.
- Brent's algorithm (Algorithm 3.6) incorporates these basic ideas into a single method: it keeps a bracketing interval at all times, like bisection, but takes fast secant or interpolation steps whenever it safely can.
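In practice one rarely hand-codes a hybrid method: SciPy's brentq implements Brent's algorithm. A short usage illustration (the test function is arbitrary, chosen to match the earlier examples):

    import math
    from scipy.optimize import brentq

    # Brent's method needs only a bracketing interval, like bisection,
    # but takes fast interpolation steps when it safely can.
    root = brentq(lambda x: math.cos(x) - x, 0.0, 1.0, xtol=1e-12)
    print(root)    # ~0.7390851332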
Example 3.14
(The worked example traces Algorithm 3.6 step by step: Step 1, Step 2(a), Step 2(b), Step 3(b), Step 3(c).)
Another Example