Open Methods
Numerical Methods: Direct vs. Iterative
Direct methods solve a numerical problem in a finite sequence of operations and, in the absence of round-off errors, deliver an exact solution; e.g., solving a linear system Ax = b by Gaussian elimination.
Iterative methods solve a numerical problem (e.g., finding the root of a system of equations) by computing successive approximations to the solution from an initial guess.
Stopping criterion: the relative error is smaller than a pre-specified value.
Iterative Methods and Convergence
When to use iterative methods? They are the only alternative for nonlinear systems of equations, and they are often useful even for linear problems involving a large number of variables, where direct methods would be prohibitively expensive or impossible.
A numerical method converges if successive approximations lead to increasingly smaller relative errors; the opposite is divergence.
Iterative Methods for Finding the Roots
Bracketing methods vs. open methods: open methods require only a single starting value, or two starting values that do not necessarily bracket a root. They may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods.
Bracketing vs. Open: Convergence vs. Divergence
(a) A bracketing method starts with an interval; an open method starts with a single initial guess. (b) A diverging open method. (c) A converging open method; note the speed!
Simple Fixed-Point Iteration
Rewrite the equation f(x) = 0 so that x is alone on the left-hand side: x = g(x). Use the iteration xi+1 = g(xi) until convergence is reached; at each step the function g predicts a new value of x. Approximate error: εa = |(xi+1 - xi)/xi+1| × 100%. Graphically, the root is at the intersection of two curves: y1(x) = g(x) and y2(x) = x.
Example: Solve f(x) = e^-x - x
Rewrite as x = g(x): x = e^-x
Start with an initial guess x0 = 0 and continue until some tolerance is reached:

 i      xi       |εa| %    |εt| %    |εt|i / |εt|i-1
 0    0.0000       —       100.000       —
 1    1.0000    100.000     76.322     0.763
 2    0.3679    171.828     35.135     0.460
 3    0.6922     46.854     22.050     0.628
 4    0.5005     38.309     11.755     0.533
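The iterations in the table can be reproduced with a short sketch (Python here rather than the course's MATLAB; the function name and tolerance are illustrative choices):

```python
import math

def fixed_point(g, x0, tol=1e-4, max_it=50):
    # Iterate x_{i+1} = g(x_i); stop when the approximate relative
    # error |(x_{i+1} - x_i)/x_{i+1}| drops below tol.
    x = x0
    history = [x0]
    for _ in range(max_it):
        x_new = g(x)
        history.append(x_new)
        if x_new != 0 and abs((x_new - x) / x_new) < tol:
            return x_new, history
        x = x_new
    return x, history

root, hist = fixed_point(lambda x: math.exp(-x), 0.0)
print(hist[:5])   # 0.0, 1.0, 0.3679..., 0.6922..., 0.5005...
print(root)       # approaches the true root 0.56714...
```

Note how the slow, oscillating approach of the iterates toward 0.56714 matches the error ratios of roughly 0.5 to 0.6 in the last column of the table.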
More on Convergence
The solution is at the intersection of the two curves y1(x) = g(x) and y2(x) = x. Identify the point on y2 corresponding to the initial guess; the next guess corresponds to the value of x at which the two curves intersect. Convergence requires that the derivative of g(x) near the root have a magnitude < 1:
(a) Convergent, 0 ≤ g' < 1
(b) Convergent, -1 < g' ≤ 0
(c) Divergent, g' > 1
(d) Divergent, g' < -1
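The condition |g'| < 1 can be checked numerically for the example g(x) = e^-x (a Python sketch; the finite-difference step h is an arbitrary choice):

```python
import math

def dgdx(g, x, h=1e-6):
    # Central-difference estimate of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

g = lambda x: math.exp(-x)   # from the example x = e^-x
root = 0.56714               # the root, to five decimals
slope = abs(dgdx(g, root))
print(slope)                 # about 0.567, i.e. < 1: convergent
```

Since g'(x) = -e^-x lies in (-1, 0) near the root, this is case (b): convergent, with iterates that spiral in from alternating sides.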
Steps of Fixed-Point Iteration
x = g(x), f(x) = x - g(x) = 0
Step 1: Guess x0 and calculate y0 = g(x0)
Step 2: Let x1 = g(x0)
Step 3: Check whether x1 solves f(x) = 0 within tolerance
Step 4: If not, set x0 = x1 and repeat the iteration
Exercise: Use simple fixed-point iteration to locate the root of the given function, using an initial guess of x0 = 0.5.
Newton-Raphson Method
Assume f(xi+1) = 0 at the root and express the function and its derivative at xi with a first-order Taylor expansion; solving gives xi+1 = xi - f(xi)/f'(xi). Graphically: draw the tangent line to the f(x) curve at the current guess xi, then follow the tangent line to where it crosses the x-axis; that crossing is xi+1.
Newton-Raphson Method: Example
(Figure: false position uses a secant line; Newton's method uses a tangent line. Labels: root x*, xi+1, xi.)
Newton-Raphson Method
Step 1: Start at the point (x1, f(x1))
Step 2: Find the intersection of the tangent to f(x) at this point with the x-axis: x2 = x1 - f(x1)/f'(x1)
Step 3: Check whether f(x2) = 0 or |x2 - x1| < tolerance
Step 4: If yes, the solution is xr = x2; if not, set x1 ← x2 and repeat the iteration
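The four steps can be sketched compactly (Python rather than MATLAB; the tolerance and iteration cap are illustrative choices):

```python
import math

def newton(f, df, x1, tol=1e-8, max_it=50):
    # Step 2: x2 = x1 - f(x1)/f'(x1); steps 3-4: test and repeat.
    for _ in range(max_it):
        step = f(x1) / df(x1)
        x2 = x1 - step
        if abs(x2 - x1) < tol:
            return x2
        x1 = x2
    raise RuntimeError("Newton-Raphson did not converge")

f  = lambda x: math.exp(-x) - x     # same test function as before
df = lambda x: -math.exp(-x) - 1
root = newton(f, df, 0.0)
print(root)   # 0.56714..., after only a few iterations
```

Compare with the fixed-point table above: Newton-Raphson reaches full double precision in a handful of steps where simple iteration needs dozens.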
Newton's Method
An evaluation of the derivative (slope) is required; you may have to do this numerically. It is an open method: convergence depends on the initial guess and is not guaranteed. However, Newton's method can converge very quickly (quadratic convergence).
Bungee Jumper Problem
Use the Newton-Raphson method; we need to evaluate both the function and its derivative. Given cd = 0.25 kg/m, v = 36 m/s, t = 4 s, and g = 9.81 m/s^2, determine the mass of the bungee jumper.
Bungee Jumper Problem
>> y = inline('sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36','m')
y =
     Inline function:
     y(m) = sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36
>> dy = inline('1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2','m')
dy =
     dy(m) = 1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2
>> format short; root = newtraph(y, dy, 140, )
root =
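The same computation can be cross-checked in Python. This sketch replaces the hand-coded derivative dy with a central-difference estimate; the step h, the tolerance, and the helper names are illustrative choices, not from the slide:

```python
import math

g, cd, v, t = 9.81, 0.25, 36.0, 4.0

def f(m):
    # Residual of v(t) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

def newton_numeric(func, x0, tol=1e-8, h=1e-6, max_it=100):
    # Newton-Raphson with the derivative estimated numerically.
    x = x0
    for _ in range(max_it):
        dfdx = (func(x + h) - func(x - h)) / (2 * h)
        step = func(x) / dfdx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

mass = newton_numeric(f, 140.0)
print(mass)   # about 142.74 kg
```

Starting from the same initial guess of 140 kg as the MATLAB call, the iteration settles on a mass of roughly 142.74 kg.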
Multiple Roots
A multiple root (double, triple, etc.) occurs where the function is tangent to the x-axis.
Examples of Multiple Roots
Multiple Roots: Problems
Problems with multiple roots:
The function does not change sign at even-multiplicity roots (m = 2, 4, 6, ...).
f'(x) also goes to zero at the root, so a zero check for f(x) is needed in the program.
Convergence of the Newton-Raphson and secant methods is slower (linear instead of quadratic) at multiple roots.
Modified Newton-Raphson Method
When the multiplicity of the root is known, use the update xi+1 = xi - m·f(xi)/f'(xi) (double root: m = 2; triple root: m = 3). The method is simple but requires knowing the multiplicity m; it maintains quadratic convergence.
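A side-by-side sketch of the plain (m = 1) and modified (m = 2) updates on the polynomial f(x) = (x - 1)^2 (x - 3)^3 used later in this section (Python; the starting guess of 1.3 mirrors the MATLAB demo, the tolerance is an illustrative choice):

```python
def f(x):
    # Expanded form of (x - 1)^2 (x - 3)^3
    return x**5 - 11*x**4 + 46*x**3 - 90*x**2 + 81*x - 27

def df(x):
    return 5*x**4 - 44*x**3 + 138*x**2 - 180*x + 81

def newton_mult(x0, m, tol=1e-5, max_it=100):
    # Modified Newton-Raphson: x_{i+1} = x_i - m*f(x_i)/f'(x_i);
    # m = 1 recovers the original method.
    x = x0
    for i in range(1, max_it + 1):
        step = m * f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_it

r1, n1 = newton_mult(1.3, 1)   # plain Newton: linear convergence here
r2, n2 = newton_mult(1.3, 2)   # m = 2 at the double root: quadratic
print(r1, n1)
print(r2, n2)
```

Both runs find the double root at x = 1, but setting m = 2 needs far fewer iterations than the plain update, illustrating the restored quadratic convergence.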
Multiple Roots with Multiplicity m
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x - 1)^2 (x - 3)^3
(Figure: a double root at x = 1 and a triple root at x = 3.)
Multiplicity m: m = 1 is a single root, m = 2 a double root, m = 3 a triple root.
The modified formula can be used for both single and multiple roots; m = 1 recovers the original Newton's method (m = 1: single root, m = 2: double root, m = 3: triple root, etc.).
Original Newton's method (m = 1) vs. modified Newton's method (m = 2)
» multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 1
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
step x y
» multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 2
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
step x y
Double root: m = 2, f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0
(The per-step x and y columns of both transcripts were lost in extraction.)
Remarks: Newton-Raphson Method
Although Newton-Raphson converges rapidly, it may diverge and fail to find roots:
if an inflection point (f'' = 0) is near the root;
if there is a local minimum or maximum (f' = 0);
if there are multiple roots;
if a zero slope is reached.
It is an open method: convergence is not guaranteed.
Newton-Raphson Method
Examples of poor convergence. Pro: the error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence. Con: some functions show slow or poor convergence.
Secant Method
Use a secant line instead of the tangent line at f(xi).
Secant Method
Formula: xi+1 = xi - f(xi)·(xi-1 - xi) / (f(xi-1) - f(xi))
Similar in form to the false-position method (cf. Eq. 5.7 in the text). It still requires two initial estimates, but it does not bracket the root at all times: there is no sign test.
False-Position and Secant Methods
Secant Method Algorithm (an Open Method)
1. Begin with any two starting values [a, b] = [x0, x1]
2. Calculate x2 using the secant-method formula
3. Replace x0 with x1 and x1 with x2, then repeat from step 2 until convergence is reached
Use the two most recently generated points in subsequent iterations (this is not a bracketing method!)
Exercise: Use the secant method to estimate the root of f(x) = e^-x - x. Start with the estimates x-1 = 0 and x0 = 1.0.
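The exercise can be checked with a short sketch (Python; variable names and the stopping tolerance are illustrative):

```python
import math

def secant(f, x_prev, x_curr, tol=1e-8, max_it=50):
    # x_{i+1} = x_i - f(x_i)*(x_{i-1} - x_i)/(f(x_{i-1}) - f(x_i))
    for _ in range(max_it):
        f_prev, f_curr = f(x_prev), f(x_curr)
        x_next = x_curr - f_curr * (x_prev - x_curr) / (f_prev - f_curr)
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

f = lambda x: math.exp(-x) - x
root = secant(f, 0.0, 1.0)
print(root)   # 0.56714...
```

Note that only the two most recent points are kept; no sign test is performed, which is exactly why the iterates need not bracket the root.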
Secant Method: Advantages and Disadvantages
Advantage: it can converge even faster than false position, and it does not need to bracket the root.
Disadvantage: it is not guaranteed to converge! It may diverge (fail to yield an answer).
Convergence not Guaranteed
(Figure: y = ln x. With no sign check, the iterates may not bracket the root.)
Secant method vs. false-position method
» [x1 f1] = secant('my_func', 0, 1, 1.e-15, 100);
secant method has converged
step x f
» [x2 f2] = false_position('my_func', 0, 1, 1.e-15, 100);
false_position method has converged
step xl xu x f
(The per-step columns of both transcripts were lost in extraction.)
The secant method may converge even faster than false position and does not need to bracket the root. (Figure: convergence of the secant and false-position methods.)
Bisection: 47 iterations; false position: 15 iterations. (Table: iterations required by the bisection, false-position, secant, and Newton's methods for the same convergence criterion; the numeric entries were lost in extraction.)
Modified Secant Method
Use a fractional perturbation instead of two arbitrary values to estimate the derivative:
xi+1 = xi - δ·xi·f(xi) / (f(xi + δ·xi) - f(xi))
where δ is a small perturbation fraction (e.g., δ = 10^-6).
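A sketch of the modified secant update (Python; note that x0 must be nonzero, since the perturbation is δ·xi):

```python
import math

def modified_secant(f, x0, delta=1e-6, tol=1e-8, max_it=50):
    # x_{i+1} = x_i - delta*x_i*f(x_i) / (f(x_i + delta*x_i) - f(x_i))
    x = x0
    for _ in range(max_it):
        fx = f(x)
        step = delta * x * fx / (f(x + delta * x) - fx)
        x -= step
        if abs(step) < tol:
            return x
    return x

f = lambda x: math.exp(-x) - x
root = modified_secant(f, 1.0)
print(root)   # 0.56714...
```

Unlike the ordinary secant method it needs only one starting value, at the cost of one extra function evaluation per iteration to form the finite difference.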
MATLAB Function: fzero
Bracketing methods are reliable but slow; open methods are fast but possibly unreliable. MATLAB's fzero is fast and reliable. It finds a real root of an equation (it is not suitable for a double root!):
fzero(function, x0)
fzero(function, [x0 x1])
fzero is unable to find the double root of f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0:
>> root = fzero('multi_func', -10)
root =
>> root = fzero('multi_func', 1000)
>> root = fzero('multi_func', [ ])
>> root = fzero('multi_func', [-2 2])
??? Error using ==> fzero
The function values at the interval endpoints must differ in sign.

function f = multi_func(x)
% Exact solutions: x = 1 (double) and x = 3 (triple)
f = x.^5 - 11*x.^4 + 46*x.^3 - 90*x.^2 + 81*x - 27;
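The sign-test failure is easy to verify: near the double root at x = 1 the function touches the x-axis without crossing it, while at the triple root x = 3 it does cross (a Python check of the same polynomial):

```python
def multi_func(x):
    # f(x) = (x - 1)^2 (x - 3)^3 in expanded form
    return x**5 - 11*x**4 + 46*x**3 - 90*x**2 + 81*x - 27

# Even multiplicity (m = 2) at x = 1: no sign change, so no bracket exists
print(multi_func(0.9), multi_func(1.1))   # both negative

# Odd multiplicity (m = 3) at x = 3: sign change, so a
# bracketing interval like [-2 2] around x = 3 would work
print(multi_func(2.9), multi_func(3.1))   # opposite signs
```

This is why the [-2 2] bracket above triggers the fzero error: both endpoints lie on the same side of zero.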
Roots of Polynomials
The bisection, false-position, Newton-Raphson, and secant methods cannot easily determine all roots of higher-order polynomials. Alternatives:
Muller's method (Chapra and Canale, 2002)
Bairstow's method (Chapra and Canale, 2002)
MATLAB function: roots
Secant and Muller’s Method
Muller's Method
Fit a parabola (quadratic) to the curve through three points (x1, x2, x3) instead of the secant line through two points; this makes it possible to find both real and complex roots (x^2 + rx + s = 0). (Figure: y(x) with the secant line through x1, x2 and Muller's parabola through x1, x2, x3.)
MATLAB Function: roots
Recast the root-evaluation task as an eigenvalue problem (Chapter 20) to find the zeros of an nth-order polynomial:
r = roots(c) - find the roots r of the polynomial with coefficient vector c
c = poly(r) - the inverse function: recover the coefficients from the roots
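roots itself needs the eigenvalue machinery of Chapter 20, but the inverse direction, c = poly(r), is just polynomial multiplication and can be sketched in a few lines of plain Python (the function name is illustrative):

```python
def poly_from_roots(roots):
    # Multiply out (x - r1)(x - r2)...(x - rn) to recover the
    # coefficient vector, highest power first (MATLAB's poly(r)).
    coeffs = [1.0]
    for r in roots:
        new = coeffs + [0.0]              # multiply current poly by x
        for i in range(1, len(new)):
            new[i] -= r * coeffs[i - 1]   # subtract r * (current poly)
        coeffs = new
    return coeffs

# The roots of (x - 1)^2 (x - 3)^3 give back its expanded coefficients:
c = poly_from_roots([1, 1, 3, 3, 3])
print(c)   # [1.0, -11.0, 46.0, -90.0, 81.0, -27.0]
```

Each pass through the loop convolves the current coefficient vector with (x - r), which is exactly how a polynomial is rebuilt from its root list.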
Roots of Polynomial
Consider the 6th-order polynomial with coefficient vector c:
>> r = roots(c)
(The displayed roots, partially lost in extraction, included a complex-conjugate pair together with the real roots 3.0000 and 2.0000.)
>> polyval(c, r), format long g
(Evaluating the polynomial at the computed roots gave residuals on the order of 1e-12 to 1e-14, i.e., essentially zero.)
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x - 1)^2 (x - 3)^3
>> c = [1 -11 46 -90 81 -27]; r = roots(c)
(The displayed roots, partially lost in extraction, appear as complex-valued entries: roots returns the multiple roots with small spurious imaginary parts.)