Chapter 2 Solutions of Equations in One Variable – Newton's Method
2.3 Newton's Method (Newton-Raphson Method)
Idea: Linearize a nonlinear function using Taylor's expansion.
Let p_0 ∈ [a, b] be an approximation to the root p such that f'(p_0) ≠ 0. Consider the first Taylor polynomial of f(x) expanded about p_0:
f(x) = f(p_0) + f'(p_0)(x - p_0) + (f''(ξ(x))/2)(x - p_0)^2,
where ξ(x) lies between p_0 and x. Assume that |p - p_0| is small; then (p - p_0)^2 is much smaller. Setting x = p and using f(p) = 0:
0 ≈ f(p_0) + f'(p_0)(p - p_0),  so  p ≈ p_0 - f(p_0)/f'(p_0).
This yields Newton's iteration
p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1}),  n ≥ 1.
[Figure: the tangent line to y = f(x) at (p_0, f(p_0)) crosses the x-axis at the next approximation.]
1/12
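Not part of the slides, but as a concrete illustration of the iteration above, here is a minimal Python sketch; the function names, tolerance, and iteration cap are illustrative choices, not prescribed by the course:

```python
def newton(f, df, p0, tol=1e-10, max_iter=50):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        dfp = df(p)
        if dfp == 0.0:
            raise ZeroDivisionError("f'(p_n) vanished; Newton step undefined")
        p_new = p - f(p) / dfp           # one Newton step
        if abs(p_new - p) < tol:         # stop when successive iterates agree
            return p_new
        p = p_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: newton(lambda x: x*x - 2, lambda x: 2*x, 1.0) approaches sqrt(2).
```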
Chapter 2 Solutions of Equations in One Variable – Newton's Method
Theorem: Let f ∈ C^2[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f'(p) ≠ 0, then there exists a δ > 0 such that Newton's method generates a sequence {p_n} (n = 1, 2, …) converging to p for any initial approximation p_0 ∈ [p - δ, p + δ].
Proof: Newton's method is just the fixed-point iteration p_n = g(p_{n-1}) for n ≥ 1 with
g(x) = x - f(x)/f'(x).
a. Is g(x) continuous in a neighborhood of p?
f'(p) ≠ 0 and f' is continuous ⇒ f'(x) ≠ 0 in a neighborhood of p, so g is defined and continuous there.
b. Is g'(x) bounded by some 0 < k < 1 in a neighborhood of p?
g'(x) = f(x) f''(x) / [f'(x)]^2  ⇒  g'(p) = 0.
Since f''(x) is continuous, g'(x) is small and continuous in a neighborhood of p.
2/12
Chapter 2 Solutions of Equations in One Variable – Newton's Method
Proof (continued):
c. Does g(x) map [p - δ, p + δ] into itself?
For x in [p - δ, p + δ], |g(x) - p| = |g(x) - g(p)| = |g'(ξ)| |x - p| ≤ k |x - p| < |x - p| ≤ δ, so |g(x) - p| < δ.
Hence, by the Fixed-Point Theorem, the sequence {p_n} converges to p.
Note: The convergence of Newton's method depends on the selection of the initial approximation.
[Figure: examples of initial approximations p_0 for which the Newton iterates fail to approach p.]
3/12
Chapter 2 Solutions of Equations in One Variable – Newton's Method
Secant Method:
What is wrong with Newton's method: it requires f'(x) at each approximation. Frequently, f'(x) is far more difficult, and needs more arithmetic operations, to calculate than f(x).
Idea: replace the tangent line at p_{n-1} by the secant line through (p_{n-2}, f(p_{n-2})) and (p_{n-1}, f(p_{n-1})), i.e. approximate f'(p_{n-1}) by [f(p_{n-1}) - f(p_{n-2})] / (p_{n-1} - p_{n-2}):
p_n = p_{n-1} - f(p_{n-1}) (p_{n-1} - p_{n-2}) / [f(p_{n-1}) - f(p_{n-2})],  n ≥ 2.
[Figure: tangent line at p_0 vs. secant line through p_0 and p_1.]
Note: One has to start with 2 initial approximations. The secant method is slower than Newton's method and still requires a good initial approximation.
HW: p.75 #13 (b)(c), p.76 #15
4/12
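A matching sketch of the secant iteration, assuming two starting values p0 and p1 are supplied (again an illustrative sketch, not the course's reference code):

```python
def secant(f, p0, p1, tol=1e-10, max_iter=50):
    """Secant method: replace f'(p_{n-1}) in Newton's method by the slope
    of the secant through (p_{n-2}, f(p_{n-2})) and (p_{n-1}, f(p_{n-1}))."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("secant slope is zero")
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # secant step
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)
    raise RuntimeError("no convergence within max_iter iterations")
```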
Chapter 2 Solutions of Equations in One Variable – Newton's Method
Lab 02. Root of a Polynomial
Time Limit: 1 second; Points: 3
A polynomial of degree n has the common form
P(x) = a_n x^n + a_{n-1} x^{n-1} + … + a_1 x + a_0,  with a_n ≠ 0.
Your task is to write a program to find a root of a given polynomial in a given interval.
5/12
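The lab statement above does not fix the input format or the required method, so the following is only one possible sketch: Horner's rule for evaluating the polynomial from a coefficient list, combined with bisection on the given interval (which presumes a sign change there); Newton's method with Horner evaluation of both P and P' would be an equally valid choice.

```python
def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0; coeffs = [a_n, ..., a_1, a_0]."""
    value = 0.0
    for c in coeffs:
        value = value * x + c
    return value

def poly_root(coeffs, a, b, tol=1e-9):
    """Bisection on [a, b], assuming the polynomial changes sign on the interval."""
    fa, fb = horner(coeffs, a), horner(coeffs, b)
    if fa * fb > 0:
        raise ValueError("no sign change on [a, b]")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = horner(coeffs, m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)
```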
Chapter 2 Solutions of Equations in One Variable – Error Analysis for Iterative Methods
2.4 Error Analysis for Iterative Methods
Definition: Suppose {p_n} (n = 0, 1, 2, …) is a sequence that converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with
lim_{n→∞} |p_{n+1} - p| / |p_n - p|^α = λ,
then {p_n} converges to p of order α, with asymptotic error constant λ.
(i) If α = 1, the sequence is linearly convergent.
(ii) If α = 2, the sequence is quadratically convergent.
The larger the value of α, the faster the convergence.
Q: What is the order of convergence for an iterative method p_n = g(p_{n-1}) with g'(p) ≠ 0?
A: Linearly convergent, since |p_{n+1} - p| = |g(p_n) - g(p)| = |g'(ξ_n)| |p_n - p| and |g'(ξ_n)| → |g'(p)| ≠ 0.
6/12
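In practice α can be estimated from consecutive errors, since |e_{n+1}| ≈ λ|e_n|^α implies α ≈ log(|e_{n+1}|/|e_n|) / log(|e_n|/|e_{n-1}|). A small Python sketch of that estimate (the helper name and the sample comments are illustrative, not from the slides):

```python
import math

def estimate_order(ps, p):
    """Estimate the order of convergence from a list of iterates ps and the
    known limit p, using alpha ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e = [abs(q - p) for q in ps]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

# For Newton iterates of x^2 - 2 = 0 the estimates approach 2 (quadratic);
# for a fixed-point iteration with g'(p) != 0 they approach 1 (linear).
```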
Chapter 2 Solutions of Equations in One Variable – Error Analysis for Iterative Methods
Q: What is the order of convergence for Newton's method (where g'(p) = 0)?
A: From Taylor's expansion we have
p_{n+1} - p = g(p_n) - g(p) = g'(p)(p_n - p) + (g''(ξ_n)/2)(p_n - p)^2 = (g''(ξ_n)/2)(p_n - p)^2,
so |p_{n+1} - p| / |p_n - p|^2 → |g''(p)|/2 = |f''(p)| / (2 |f'(p)|).
As long as f'(p) ≠ 0, Newton's method is at least quadratically convergent. It is fast near a simple root.
Q: How can we practically determine λ and α?
Theorem: Let p be a fixed point of g(x). If there exists some constant α ≥ 2 such that g ∈ C^α[p - δ, p + δ], g'(p) = … = g^(α-1)(p) = 0, and g^(α)(p) ≠ 0, then the iteration p_n = g(p_{n-1}), n ≥ 1, is of order α.
This is a one-line proof (if we start sufficiently far to the left): expanding g(p_n) about p, all derivative terms up to order α - 1 vanish, so p_{n+1} - p = (g^(α)(ξ_n)/α!)(p_n - p)^α and hence |p_{n+1} - p| / |p_n - p|^α → |g^(α)(p)| / α!.
7/12
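An illustrative numerical check of the quadratic rate (not from the slides): for f(x) = x^2 - 2 with simple root p = √2, the ratio |p_{n+1} - p| / |p_n - p|^2 should settle near |f''(p)| / (2|f'(p)|) = 1/(2√2) ≈ 0.354.

```python
import math

p_true = math.sqrt(2.0)
p = 1.0
for _ in range(5):
    p_next = p - (p * p - 2.0) / (2.0 * p)        # Newton step for x^2 - 2
    e, e_next = abs(p - p_true), abs(p_next - p_true)
    if e > 0.0 and e_next > 0.0:
        print(e_next / e**2)                      # tends to 1/(2*sqrt(2)) ~ 0.3536
    p = p_next
```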
Chapter 2 Solutions of Equations in One Variable – Error Analysis for Iterative Methods
Q: What is the order of convergence for Newton's method if the root is NOT simple?
A: If p is a root of f of multiplicity m > 1, then f(x) = (x - p)^m q(x) with q(p) ≠ 0. Newton's method is still p_n = g(p_{n-1}) for n ≥ 1 with g(x) = x - f(x)/f'(x), but now
g'(p) = 1 - 1/m = (m - 1)/m ≠ 0.
It is convergent, but only linearly, not quadratically.
Q: Is there any way to speed it up?
A: Yes! Equivalently transform the multiple root of f into a simple root of another function, and then apply Newton's method.
8/12
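An illustrative check of the linear rate at a multiple root (not from the slides): for f(x) = (x - 1)^2, a double root with m = 2, each plain Newton step cuts the error exactly in half, matching g'(p) = (m - 1)/m = 1/2.

```python
f  = lambda x: (x - 1.0) ** 2        # double root (m = 2) at p = 1
df = lambda x: 2.0 * (x - 1.0)

p = 2.0
for _ in range(5):
    p_next = p - f(p) / df(p)                    # plain Newton step
    print(abs(p_next - 1.0) / abs(p - 1.0))      # error ratio: (m - 1)/m = 0.5
    p = p_next
```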
Chapter 2 Solutions of Equations in One Variable – Error Analysis for Iterative Methods
Let μ(x) = f(x)/f'(x); then a root of f of multiplicity m is a simple root of μ. Apply Newton's method to μ:
p_n = p_{n-1} - μ(p_{n-1})/μ'(p_{n-1}) = p_{n-1} - f(p_{n-1}) f'(p_{n-1}) / ( [f'(p_{n-1})]^2 - f(p_{n-1}) f''(p_{n-1}) ).
Pro: quadratic convergence is recovered.
Con: requires the additional calculation of f''(x); the denominator consists of the difference of two numbers that are both close to 0.
HW: p.86 #11
9/12
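A sketch of this modified Newton iteration, with the caller supplying f, f', and f''; the function name and stopping rule are illustrative:

```python
def modified_newton(f, df, ddf, p0, tol=1e-10, max_iter=50):
    """Newton's method applied to mu(x) = f(x)/f'(x):
       p_n = p_{n-1} - f*f' / ((f')^2 - f*f''), all evaluated at p_{n-1}."""
    p = p0
    for _ in range(max_iter):
        fp, dfp, ddfp = f(p), df(p), ddf(p)
        denom = dfp * dfp - fp * ddfp    # difference of two small numbers near the root
        if denom == 0.0:
            raise ZeroDivisionError("denominator vanished")
        p_new = p - fp * dfp / denom
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example: modified_newton recovers quadratic convergence for f(x) = (x - 1)**2.
```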
Chapter 2 Solutions of Equations in One Variable – Accelerating Convergence
2.5 Accelerating Convergence
Aitken's Δ^2 Method:
Idea: if {p_n} converges linearly to p, then for large n the error ratios are nearly constant, (p_{n+1} - p)/(p_n - p) ≈ (p_{n+2} - p)/(p_{n+1} - p); solving this relation for p gives an improved approximation built from p_n, p_{n+1}, p_{n+2}.
[Figure: y = g(x) and y = x; the lines t(p_0, p_1) through (p_0, p_1), (p_1, p_2) and t(p_1, p_2) through (p_1, p_2), (p_2, p_3) cross y = x much closer to the fixed point p than the iterates themselves.]
10/12
Chapter 2 Solutions of Equations in One Variable – Accelerating Convergence
Definition: For a given sequence {p_n} (n = 0, 1, 2, …), the forward difference Δp_n is defined by Δp_n = p_{n+1} - p_n for n ≥ 0. Higher powers, Δ^k p_n, are defined recursively by Δ^k p_n = Δ(Δ^{k-1} p_n) for k ≥ 2.
Aitken's Δ^2 Method:
p̂_n = p_n - (Δp_n)^2 / Δ^2 p_n = p_n - (p_{n+1} - p_n)^2 / (p_{n+2} - 2p_{n+1} + p_n),  for n ≥ 0.
Theorem: Suppose that {p_n} is a sequence that converges linearly to the limit p and that for all sufficiently large values of n we have (p_n - p)(p_{n+1} - p) > 0. Then the sequence {p̂_n} converges to p faster than {p_n} in the sense that
lim_{n→∞} (p̂_n - p) / (p_n - p) = 0.
Steffensen's Method: apply Aitken's Δ^2 inside the fixed-point iteration: compute p_1 = g(p_0) and p_2 = g(p_1), form p̂_0 = p_0 - (p_1 - p_0)^2 / (p_2 - 2p_1 + p_0), and restart the iteration from p̂_0. Local quadratic convergence if g'(p) ≠ 1.
11/12
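A small sketch of Aitken's Δ^2 applied to an already computed sequence (a helper for illustration only; Steffensen's method on the next slide builds the acceleration into the iteration itself):

```python
def aitken(ps):
    """Accelerate a sequence: phat_n = p_n - (Dp_n)^2 / D^2 p_n,
    where Dp_n = p_{n+1} - p_n and D^2 p_n = p_{n+2} - 2*p_{n+1} + p_n."""
    out = []
    for n in range(len(ps) - 2):
        d1 = ps[n + 1] - ps[n]
        d2 = ps[n + 2] - 2.0 * ps[n + 1] + ps[n]
        out.append(ps[n] - d1 * d1 / d2)
    return out

# Example: feeding in linearly convergent fixed-point iterates of g(x) = cos(x)
# gives a new sequence that approaches the fixed point noticeably faster.
```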
Chapter 2 Solutions of Equations in One Variable – Accelerating Convergence
Algorithm: Steffensen's Acceleration
Find a solution to x = g(x) given an initial approximation p_0.
Input: initial approximation p_0; tolerance TOL; maximum number of iterations N_max.
Output: approximate solution p or message of failure.
Step 1  Set i = 1;
Step 2  While (i ≤ N_max) do Steps 3-6
Step 3    Set p_1 = g(p_0); p_2 = g(p_1); p = p_0 - (p_1 - p_0)^2 / (p_2 - 2 p_1 + p_0);
Step 4    If |p - p_0| < TOL then Output(p); /* successful */ STOP;
Step 5    Set i = i + 1;
Step 6    Set p_0 = p; /* update p_0 */
Step 7  Output("The method failed after N_max iterations"); /* unsuccessful */ STOP.
12/12
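A direct Python transcription of the algorithm above (a sketch; g, TOL, and N_max are supplied by the caller):

```python
def steffensen(g, p0, tol, n_max):
    """Steffensen's acceleration for x = g(x), following Steps 1-7 above."""
    for _ in range(n_max):                        # Steps 1-2: iteration counter
        p1 = g(p0)                                # Step 3
        p2 = g(p1)
        denom = p2 - 2.0 * p1 + p0
        if denom == 0.0:                          # Delta^2 denominator vanished:
            return p0                             # current iterate is (numerically) a fixed point
        p = p0 - (p1 - p0) ** 2 / denom
        if abs(p - p0) < tol:                     # Step 4: successful
            return p
        p0 = p                                    # Steps 5-6: update p_0
    raise RuntimeError("The method failed after n_max iterations")   # Step 7

# Example (after `import math`): steffensen(math.cos, 0.5, 1e-10, 100)
# approximates the fixed point of cos(x) near 0.739.
```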