Open Methods.

1 Open Methods

2 Numerical Methods
Direct methods
Solve a numerical problem by a finite sequence of operations
In the absence of round-off errors they deliver an exact solution; e.g., solving a linear system Ax = b by Gaussian elimination
Iterative methods
Solve a numerical problem (e.g., finding the root of a system of equations) by computing successive approximations to the solution from an initial guess
Stopping criterion: the relative error is smaller than a pre-specified value

3 Numerical Methods
Iterative methods: when to use?
The only alternative for non-linear systems of equations
Often useful even for linear problems involving a large number of variables, where direct methods would be prohibitively expensive or impossible
Convergence of a numerical method
Successive approximations lead to increasingly smaller relative errors
Opposite: divergent

4 Iterative Methods for Finding Roots
Bracketing methods vs. open methods
Open methods require only a single starting value, or two starting values that do not necessarily bracket a root
They may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods

5 Bracketing vs. Open, Convergence vs. Divergence
(a) Bracketing method: start with an interval. Open method: start with a single initial guess
(b) Diverging open method
(c) Converging open method (note the speed!)

6 Simple Fixed-Point Iteration
Rewrite the equation f(x) = 0 so that x is on the left-hand side: x = g(x)
Use the iteration xi+1 = g(xi) until the values converge; each step uses the new function g to predict a new value of x
Approximate error: |εa| = |(xi+1 - xi) / xi+1| x 100%
Graphically, the root is at the intersection of the two curves y1(x) = g(x) and y2(x) = x

7 Example
Solve f(x) = e^(-x) - x. Rewrite as x = g(x), i.e., x = e^(-x)
Start with an initial guess x0 = 0 and continue until a certain tolerance is reached

i    xi       |εa| %     |εt| %    |εt|i / |εt|i-1
0    0.0000
1    1.0000   100.000    76.322    0.763
2    0.3679   171.828    35.135    0.460
3    0.6922   46.854     22.050    0.628
4    0.5005   38.309     11.755    0.533
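The iteration behind the table above can be sketched in a few lines. This is a Python translation (the course code is MATLAB); the function name `fixed_point` and the stopping tolerance are illustrative choices, not part of the original slides.

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Simple fixed-point iteration: xi+1 = g(xi)."""
    x = x0
    for i in range(1, max_iter + 1):
        x_new = g(x)
        # approximate relative error |(xi+1 - xi) / xi+1| * 100%
        ea = abs((x_new - x) / x_new) * 100 if x_new != 0 else float("inf")
        x = x_new
        if ea < tol:
            break
    return x, i

# g(x) = e^-x, so the fixed point solves f(x) = e^-x - x = 0
root, iters = fixed_point(lambda x: math.exp(-x), 0.0)
print(root)  # ≈ 0.567143
```

The iteration converges here because |g'(x)| = e^(-x) < 1 near the root, matching the convergence condition on the next slides.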

8 More on Convergence
The solution is at the intersection of the two curves. Identify the point on y2 corresponding to the initial guess; the next guess corresponds to the value of x where y1(x) = y2(x)
Convergence requires that the derivative of g(x) near the root have magnitude < 1:
(a) Convergent, 0 ≤ g' < 1
(b) Convergent, -1 < g' ≤ 0
(c) Divergent, g' > 1
(d) Divergent, g' < -1

9 Steps of Fixed-Point Iteration
x = g(x), f(x) = x - g(x) = 0
Step 1: Guess x0 and calculate y0 = g(x0)
Step 2: Let x1 = g(x0)
Step 3: Examine whether x1 is the solution of f(x) = 0
Step 4: If not, set x0 = x1 and repeat the iteration

10 Exercise Use simple fixed-point iteration to locate the root of
Use an initial guess of x0 = 0.5

11 Newton-Raphson Method
At the root, f(xi+1) = 0. Expressing the values of the function and its derivative at xi gives the iteration xi+1 = xi - f(xi) / f'(xi)
Graphically: draw the tangent line to the f(x) curve at some guess x, then follow the tangent line to where it crosses the x-axis

12 Newton-Raphson Method: Example
(Figure: false position uses a secant line, Newton's method uses a tangent line; labels show the root x* and the iterates xi, xi+1)

13 Newton-Raphson Method
Step 1: Start at the point (x1, f(x1))
Step 2: Find the intersection of the tangent of f(x) at this point with the x-axis: x2 = x1 - f(x1)/f'(x1)
Step 3: Examine whether f(x2) = 0 or |x2 - x1| < tolerance
Step 4: If yes, the solution is xr = x2; if not, set x1 ← x2 and repeat the iteration
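The steps above can be sketched as follows, again in Python rather than the course's MATLAB; the function name and tolerances are illustrative. The test function f(x) = e^(-x) - x is the one used in the fixed-point example earlier.

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: follow the tangent line at xi to the x-axis."""
    x = x0
    for _ in range(max_iter):
        slope = df(x)
        if slope == 0:
            raise ZeroDivisionError("zero slope: Newton step undefined")
        x_new = x - f(x) / slope       # xi+1 = xi - f(xi)/f'(xi)
        if abs(x_new - x) < tol:       # |x2 - x1| < tolerance
            return x_new
        x = x_new
    return x

# f(x) = e^-x - x, f'(x) = -e^-x - 1
nr_root = newton_raphson(lambda x: math.exp(-x) - x,
                         lambda x: -math.exp(-x) - 1, 0.0)
print(nr_root)  # ≈ 0.567143
```

Note that the derivative must be supplied explicitly here; the modified secant method later in the slides avoids this by estimating the slope numerically.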

14 Newton's Method
Note: evaluation of the derivative (slope) is required; you may have to do this numerically
Open method: convergence depends on the initial guess (not guaranteed)
However, Newton's method can converge very quickly (quadratic convergence)

15

16 Bungee Jumper Problem
Use the Newton-Raphson method; we need to evaluate the function and its derivative
Given cd = 0.25 kg/m, v = 36 m/s, t = 4 s, and g = 9.81 m/s^2, determine the mass of the bungee jumper

17 Bungee Jumper Problem
>> y = inline('sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36','m')
y =
     Inline function:
     y(m) = sqrt(9.81*m/0.25)*tanh(sqrt(9.81*0.25/m)*4)-36
>> dy = inline('1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2','m')
dy =
     Inline function:
     dy(m) = 1/2*sqrt(9.81/(m*0.25))*tanh(sqrt(9.81*0.25/m)*4)-9.81/(2*m)*4*sech(sqrt(9.81*0.25/m)*4)^2
>> format short; root = newtraph(y, dy, 140)
root =

18 Multiple Roots
A multiple root (double, triple, etc.) occurs where the function is tangent to the x-axis

19 Examples of Multiple Roots

20 Multiple Roots: Problems
The function does not change sign at even multiple roots (i.e., m = 2, 4, 6, …)
f'(x) goes to zero: the program needs a zero check for f(x)
Slower convergence (linear instead of quadratic) of the Newton-Raphson and secant methods at multiple roots

21 Modified Newton-Raphson Method
When the multiplicity m of the root is known, iterate xi+1 = xi - m * f(xi) / f'(xi)
Double root: m = 2; triple root: m = 3
Simple, but the multiplicity m must be known
Maintains quadratic convergence
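A Python sketch (the slides' own code is MATLAB) comparing the original method (m = 1) with the modified method on the slides' example polynomial, whose double root at x = 1 slows plain Newton to linear convergence. The function name `newton_mult` and the tolerance are illustrative assumptions.

```python
def newton_mult(f, df, x0, m=1, tol=1e-6, max_iter=100):
    """Newton-Raphson with known multiplicity m:
    xi+1 = xi - m*f(xi)/f'(xi).  m = 1 is the original method."""
    x = x0
    for i in range(1, max_iter + 1):
        fx = f(x)
        if fx == 0:                      # landed exactly on the root
            return x, i
        step = m * fx / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_iter

# f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x-1)^2 (x-3)^3
f  = lambda x: x**5 - 11*x**4 + 46*x**3 - 90*x**2 + 81*x - 27
df = lambda x: 5*x**4 - 44*x**3 + 138*x**2 - 180*x + 81
r1, n1 = newton_mult(f, df, 1.3, m=1)   # double root: linear convergence
r2, n2 = newton_mult(f, df, 1.3, m=2)   # m = 2 restores fast convergence
print(n1, n2)
```

With m = 1 each step roughly halves the error near the double root, so many more iterations are needed than with m = 2, mirroring the comparison on slide 24.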

22 Multiple Root with Multiplicity m
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x - 1)^2 (x - 3)^3: a double root at x = 1 and a triple root at x = 3
Multiplicity m
m = 1: single root
m = 2: double root
m = 3: triple root

23 Modified Newton's Method
Can be used for both single and multiple roots (m = 1 recovers the original Newton's method)
m = 1: single root; m = 2: double root; m = 3: triple root; etc.

24 Original Newton's Method (m = 1) vs. Modified Newton's Method (m = 2)
Double root: m = 2, f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0
>> multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 1
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
(iteration table: step, x, y)
>> multiple1('multi_func','multi_dfunc');
enter multiplicity of the root = 2
enter initial guess x1 = 1.3
allowable tolerance tol = 1.e-6
maximum number of iterations max = 100
Newton method has converged
(iteration table: step, x, y)

25 Remarks: Newton-Raphson Method
Although Newton-Raphson converges rapidly, it may diverge and fail to find roots:
if an inflection point (f'' = 0) is near the root
if there is a local minimum or maximum (f' = 0)
if there are multiple roots
if a zero slope is reached
It is an open method: convergence is not guaranteed

26 Newton-Raphson Method
Examples of poor convergence
Pro: the error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence
Con: some functions show slow or poor convergence

27 Secant Method
Use a secant line instead of the tangent line at f(xi)

28 Secant Method
Formula for the secant method: xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))   … (5.7)
Similar to the false-position method (cf. Eq. 5.7 in the text)
Still requires two initial estimates
But it does not bracket the root at all times: there is no sign test

29 False-Position vs. Secant Methods

30 Secant Method: Algorithm
1. Begin with any two endpoints [a, b] = [x0, x1]
2. Calculate x2 using the secant method formula
3. Replace x0 with x1 and x1 with x2, then repeat from (2) until convergence is reached
Use the two most recently generated points in subsequent iterations (it is not a bracketing method!)
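The three steps above can be sketched as follows (a Python translation of the idea; the slides use MATLAB, and the function name `secant_method` is mine). It is applied to f(x) = e^(-x) - x with x0 = 0 and x1 = 1, the same data as the exercise on the next slide.

```python
import math

def secant_method(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: like Newton, but the slope comes from the two
    most recent points -- no derivative and no bracketing required."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol:
            return x2
        # keep only the two most recent points (no sign test)
        x0, x1 = x1, x2
    return x1

sec_root = secant_method(lambda x: math.exp(-x) - x, 0.0, 1.0)
print(sec_root)  # ≈ 0.567143
```

Unlike false position, the update always discards the oldest point, which is exactly why the iterates may stop bracketing the root.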

31 Exercise
Use the secant method to estimate the root of f(x) = e^(-x) - x. Start with the estimates x0 = 0 and x1 = 1.0

32 Pros and Cons of the Secant Method
Advantage
Can converge faster and does not need to bracket the root
Disadvantage
Not guaranteed to converge! May diverge (fail to yield an answer)

33 Convergence Not Guaranteed
Example: y = ln x. With no sign check, the iterates may fail to bracket the root

34 Secant Method vs. False-Position Method
>> [x1 f1] = secant('my_func',0,1,1.e-15,100);
secant method has converged
(iteration table: step, x, f)
>> [x2 f2] = false_position('my_func',0,1,1.e-15,100);
false_position method has converged
(iteration table: step, xl, xu, x, f)

35 Comparison
The secant method may converge faster, and it does not need to bracket the root (figure: secant vs. false position)

36 Iteration Counts
Bisection: 47 iterations; false position: 15 iterations
(Table: iterations required by the bisection, false position, secant, and Newton's methods for each convergence criterion)

37 Modified Secant Method
Use a fractional perturbation instead of two arbitrary values to estimate the derivative:
xi+1 = xi - δ·xi·f(xi) / (f(xi + δ·xi) - f(xi))
δ is a small perturbation fraction (e.g., δ = 10^-6)
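The perturbation formula above can be sketched as follows (Python rather than the course's MATLAB; the function name and default δ are illustrative). Note the initial guess must be nonzero, since the perturbation is a fraction of x itself.

```python
import math

def modified_secant(f, x0, delta=1e-6, tol=1e-9, max_iter=50):
    """Modified secant: estimate the derivative with a fractional
    perturbation delta*x instead of carrying two running points."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + delta * x) - fx   # ≈ delta * x * f'(x)
        if denom == 0:
            break
        x_new = x - delta * x * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

ms_root = modified_secant(lambda x: math.exp(-x) - x, 1.0)
print(ms_root)  # ≈ 0.567143
```

This gives Newton-like behavior with only one starting value and no analytical derivative, at the cost of one extra function evaluation per step.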

38 MATLAB Function: fzero
Bracketing methods: reliable but slow
Open methods: fast but possibly unreliable
MATLAB fzero: fast and reliable; applies both bracketing methods and open methods
Finds a real root of an equation (not suitable for a double root!)
Syntax
fzero(function, x0)
fzero(function, [x0 x1])
[x, fx] = fzero(function, x0)
function: function handle to the function being evaluated
x0: initial guess
[x0 x1]: guesses that bracket a sign change
x: location of the root
fx: function evaluated at that root

39 MATLAB Function: fzero, Example
>> [x, fx] = fzero(@(x) x^10-1, 0.5)
% Use fzero to find a root of f(x) = x^10 - 1 starting with an initial
% guess of x = 0.5

40 fzero is unable to find the double root of
f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = 0
>> root = fzero('multi_func',-10)
root =
>> root = fzero('multi_func',1000)
>> root = fzero('multi_func',[ ])
>> root = fzero('multi_func',[-2 2])
??? Error using ==> fzero
The function values at the interval endpoints must differ in sign.

function f = multi_func(x)
% Exact solutions: x = 1 (double) and x = 3 (triple)
f = x.^5 - 11*x.^4 + 46*x.^3 - 90*x.^2 + 81*x - 27;

41 fzero Options
Options may be passed to fzero as a third input argument: a data structure created by the optimset command
options = optimset('par1', val1, 'par2', val2, …)
parn: name of the parameter to be set; valn: value to set that parameter
Parameters commonly used with fzero:
display: when set to 'iter', displays detailed records of all the iterations
tolx: sets a termination tolerance on x

42 fzero with the 'iter' Display Option
>> options = optimset('display','iter');
>> [x, fx] = fzero(@(x) x^10-1, 0.5, options)
Search for an interval around 0.5 containing a sign change:
(table: Func-count, a, f(a), b, f(b), Procedure: initial interval, then search steps)
Search for a zero in the interval [-0.14, 1.14]:
(table: Func-count, x, f(x), Procedure: initial, then interpolation and bisection steps)
Zero found in the interval [-0.14, 1.14]
x =
     1
fx =

43 Roots of Polynomials
The bisection, false-position, Newton-Raphson, and secant methods cannot easily determine all the roots of higher-order polynomials
Other methods, such as Muller's method or Bairstow's method (Chapra and Canale, 2002), can help
MATLAB function: roots

44 Supplement: Secant and Muller's Methods

45 Supplement: Muller's Method
Fit a parabola (quadratic) to the exact curve through the points x1, x2, x3
Can find both real and complex roots (x^2 + rx + s = 0)
(Figure: y(x) with a secant line and a parabola through x1, x2, x3)

46 MATLAB Function: roots
Recasts the root evaluation task as an eigenvalue problem (see Chapter 20)
Finds the zeros of an nth-order polynomial
r = roots(c): the roots
c = poly(r): the inverse function

47 MATLAB Function: roots
roots: built-in function to determine all the roots of a polynomial, including imaginary and complex ones
Syntax: x = roots(c)
x is a column vector containing the roots; c is a row vector containing the polynomial coefficients
Example: find the roots of f(x) = x^5 - 3.5x^4 + 2.75x^3 + … + 1.25
>> x = roots([ ])
x =
    2.0000
    …i
    …i
    0.5000

48 poly, polyval
poly: determines polynomial coefficients when the roots are given
b = poly([0.5 -1]) finds f(x) where f(x) = 0 for x = 0.5 and x = -1
MATLAB reports b = [1.0000 0.5000 -0.5000], which corresponds to f(x) = x^2 + 0.5x - 0.5
polyval: evaluates a polynomial at one or more points
a = [ ]; % if used as coefficients of a polynomial, this corresponds to f(x) = x^5 - 3.5x^4 + 2.75x^3 + … + 1.25
polyval(a, 1) % calculates f(1)
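The evaluation that polyval performs is just Horner's rule, which can be sketched in Python (the function name `horner_polyval` is mine; the coefficient ordering, highest power first, matches MATLAB's convention):

```python
def horner_polyval(c, x):
    """Evaluate a polynomial at x by Horner's rule.
    c holds the coefficients from the highest power down."""
    result = 0.0
    for coef in c:
        result = result * x + coef
    return result

b = [1.0, 0.5, -0.5]              # f(x) = x^2 + 0.5x - 0.5 = poly([0.5, -1])
print(horner_polyval(b, 0.5))     # 0.0, since x = 0.5 is a root
print(horner_polyval(b, -1.0))    # 0.0, since x = -1 is a root
```

Horner's rule needs only n multiplications and n additions for a degree-n polynomial, and is the standard way to evaluate the coefficient vectors that roots and poly work with.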

49 Roots of a Polynomial: Example
Consider a 6th-order polynomial with coefficient vector c
>> r = roots(c)
r =
    (a complex-conjugate pair)
    3.0000
    2.0000
>> polyval(c, r), format long g
ans =
    (residuals on the order of 1e-12 to 1e-14, essentially zero)

50 f(x) = x^5 - 11x^4 + 46x^3 - 90x^2 + 81x - 27 = (x - 1)^2 (x - 3)^3
>> c = [1 -11 46 -90 81 -27]; r = roots(c)
r =
    (values clustered near the triple root x = 3 and the double root x = 1, some with small imaginary parts)

51 Homework 1a
How should the weight in an exponential average be chosen?
Write a MATLAB program that uses the exponential-average technique to make predictions and determines the most suitable weight p
Study the assigned material to understand how the exponential average works
Download the Excel data file
The program should automatically read the relevant fields from the .xls file and carry out the computation in MATLAB
It will be used to predict the number of examinees for the next year
E-mail the code to the instructor's mailbox
Due 4/14 13:00

