CHAPTER 6 Numerical Solutions of Ordinary Differential Equations
Contents
6.1 Euler Methods and Error Analysis
6.2 Runge-Kutta Methods
6.3 Multistep Methods
6.4 Higher-Order Equations and Systems
6.5 Second-Order Boundary-Value Problems
6.1 Euler Methods and Error Analysis

Introduction
Recall the backbone of Euler's method,

    y_{n+1} = y_n + h f(x_n, y_n).        (1)

Errors in Numerical Methods
One of the most important sources of error in any numerical method is round-off error, which results from the finite precision of computer arithmetic.
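Before turning to the error analysis, note that formula (1) translates directly into code. Here is a minimal sketch in Python, applied to the test problem y' = 2xy, y(1) = 1 that recurs in the examples of this chapter; the step size and step count are the only other inputs.

```python
def euler(f, x0, y0, h, n_steps):
    """Advance y' = f(x, y), y(x0) = y0 with Euler's method, formula (1)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h
    return y

# Test problem used throughout this chapter: y' = 2xy, y(1) = 1
approx = euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5)
print(approx)   # about 2.93, versus the actual value y(1.5) = 3.4904 (see Table 6.6)
```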
Truncation Errors for Euler's Method
Euler's method advances the solution along a straight line at each step, so it gives only a straight-line approximation to the solution. The error introduced at each step in this way is called the local truncation error, or discretization error. To derive a formula for the local truncation error of Euler's method, we use Taylor's formula with remainder,

    y(x) = y(a) + y'(a)(x - a) + y''(a)(x - a)^2/2! + ... + y^{(k)}(a)(x - a)^k/k! + y^{(k+1)}(c)(x - a)^{k+1}/(k + 1)!,

where c is some point between a and x.
Setting k = 1, a = x_n, and x = x_{n+1} = x_n + h, we have

    y(x_{n+1}) = y(x_n) + h y'(x_n) + y''(c) h^2/2!

or

    y(x_{n+1}) = y_n + h f(x_n, y_n) + y''(c) h^2/2!.

Hence the local truncation error in y_{n+1} for Euler's method is

    y''(c) h^2/2!,   where x_n < c < x_{n+1}.

The value of c is usually unknown, so the error cannot be computed exactly, but an upper bound on its absolute value is

    M h^2/2!,   where M = max of |y''(x)| over x_n < x < x_{n+1}.
Note: e(h) is said to be of order h^n, denoted by O(h^n), if there exists a constant C such that |e(h)| ≤ C h^n for h sufficiently small.
Example 1 Find a bound for the local truncation errors for Euler's method applied to y' = 2xy, y(1) = 1.
Solution From the known solution y = e^{x^2 - 1} we have y'' = (2 + 4x^2) e^{x^2 - 1}, so the local truncation error is

    y''(c) h^2/2 = (2 + 4c^2) e^{c^2 - 1} h^2/2,   where x_n < c < x_n + h.

In particular, for h = 0.1 an upper bound on the error of the first step is obtained by replacing c by 1.1:

    (2 + 4(1.1)^2) e^{(1.1)^2 - 1} (0.1)^2/2 = 0.0422.
Example 1 (2) Since five steps of size h = 0.1 reach x = 1.5, a bound valid for the local truncation error of every one of the five steps is obtained by replacing c by 1.5:

    (2 + 4(1.5)^2) e^{(1.5)^2 - 1} (0.1)^2/2 = 0.1920.        (2)
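As a quick numerical check of this bound (a sketch assuming the same test problem and its known solution y = e^{x^2 - 1}), the error of the first Euler step should stay below 0.0422:

```python
import math

h = 0.1
y_exact = lambda x: math.exp(x**2 - 1)        # known solution of y' = 2xy, y(1) = 1

y1 = 1.0 + h * (2 * 1.0 * 1.0)                # one Euler step from (1, 1)
actual_error = abs(y_exact(1.1) - y1)         # about 0.0337

c = 1.1
bound = (2 + 4 * c**2) * math.exp(c**2 - 1) * h**2 / 2   # about 0.0422

print(actual_error, bound, actual_error <= bound)
```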
Improved Euler's Method
The formula

    y_{n+1} = y_n + h [f(x_n, y_n) + f(x_{n+1}, y*_{n+1})]/2,        (3)

where

    y*_{n+1} = y_n + h f(x_n, y_n),        (4)

is commonly known as the improved Euler's method. See Fig 6.1. Formula (4) predicts a value of y(x_{n+1}), and formula (3) then corrects this estimate; in general, the improved Euler's method is an example of a predictor-corrector method.
Fig 6.1
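A sketch of the predictor-corrector pair (3)-(4) in the same style as the Euler sketch above; the printed values can be compared with the last rows of Tables 6.3 and 6.4 below.

```python
def improved_euler(f, x0, y0, h, n_steps):
    """Improved Euler's method: predictor (4) followed by corrector (3)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(x, y)                           # predictor (4)
        y = y + (h / 2) * (f(x, y) + f(x + h, y_pred))     # corrector (3)
        x = x + h
    return y

# Example 2 below: y' = 2xy, y(1) = 1, approximate y(1.5)
print(improved_euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5))    # about 3.4509 (Table 6.3)
print(improved_euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.05, 10))  # about 3.4795 (Table 6.4)
```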
Example 2 Use the improved Euler's method to obtain the approximate value of y(1.5) for the solution of y' = 2xy, y(1) = 1. Compare the results for h = 0.1 and h = 0.05.
Solution With x_0 = 1, y_0 = 1, f(x_n, y_n) = 2 x_n y_n, and h = 0.1, the predictor (4) gives

    y*_1 = y_0 + (0.1)(2 x_0 y_0) = 1.2.

Using (3) with x_1 = 1 + h = 1.1,

    y_1 = y_0 + (0.1/2)[2 x_0 y_0 + 2 x_1 y*_1] = 1 + (0.05)[2(1)(1) + 2(1.1)(1.2)] = 1.232.

The results for h = 0.1 and h = 0.05 are given in Tables 6.3 and 6.4.
Table 6.3 Improved Euler's method with h = 0.1
x_n     y_n       Actual value   Abs. error   % Rel. error
1.00    1.0000    1.0000         0.0000       0.00
1.10    1.2320    1.2337         0.0017       0.14
1.20    1.5479    1.5527         0.0048       0.31
1.30    1.9832    1.9937         0.0106       0.53
1.40    2.5908    2.6117         0.0209       0.80
1.50    3.4509    3.4904         0.0394       1.13
Table 6.4 Improved Euler's method with h = 0.05
x_n     y_n       Actual value   Abs. error   % Rel. error
1.00    1.0000    1.0000         0.0000       0.00
1.05    1.1077    1.1079         0.0002       0.02
1.10    1.2332    1.2337         0.0004       0.04
1.15    1.3798    1.3806         0.0008       0.06
1.20    1.5514    1.5527         0.0013       0.08
1.25    1.7531    1.7551         0.0020       0.11
1.30    1.9909    1.9937         0.0029       0.14
1.35    2.2721    2.2762         0.0041       0.18
1.40    2.6060    2.6117         0.0057       0.22
1.45    3.0038    3.0117         0.0079       0.26
1.50    3.4795    3.4904         0.0108       0.31
Truncation Errors for the Improved Euler's Method
Note that the local truncation error for the improved Euler's method is O(h^3).
6.2 Runge-Kutta Methods

Runge-Kutta Methods
All Runge-Kutta methods are generalizations of the basic Euler formula (1) of Section 6.1 in which the slope function f is replaced by a weighted average of slopes over the interval x_n ≤ x ≤ x_{n+1}:

    y_{n+1} = y_n + h (w_1 k_1 + w_2 k_2 + ... + w_m k_m),        (1)

where the weights w_i, i = 1, 2, ..., m, are constants satisfying w_1 + w_2 + ... + w_m = 1, and each k_i is the function f evaluated at a selected point (x, y) for which x_n ≤ x ≤ x_{n+1}.
The number m is called the order of the method. If we take m = 1, w_1 = 1, and k_1 = f(x_n, y_n), we get Euler's method; in other words, Euler's method is simply the first-order Runge-Kutta method.
A Second-Order Runge-Kutta Method
We try to find values of the constants w_1, w_2, α, and β so that the formula

    y_{n+1} = y_n + h (w_1 k_1 + w_2 k_2),        (2)

where k_1 = f(x_n, y_n) and k_2 = f(x_n + α h, y_n + β h k_1), agrees with a Taylor polynomial of degree 2. These constants must satisfy

    w_1 + w_2 = 1,   w_2 α = 1/2,   w_2 β = 1/2,        (3)

so that

    w_1 = 1 - w_2,   α = 1/(2 w_2),   β = 1/(2 w_2),        (4)

where w_2 ≠ 0.
For example, the choice w_2 = 1/2 yields w_1 = 1/2, α = 1, β = 1, and (2) becomes

    y_{n+1} = y_n + (h/2)(k_1 + k_2),

where k_1 = f(x_n, y_n) and k_2 = f(x_n + h, y_n + h k_1). Since x_n + h = x_{n+1} and y_n + h k_1 = y_n + h f(x_n, y_n), this is identical to the improved Euler's method.
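The whole one-parameter family (2)-(4) fits in a few lines of code. The sketch below takes w_2 as an argument, with w_2 = 1/2 as the default, so the usage line reproduces the improved Euler step computed in Example 2 of Section 6.1.

```python
def rk2_step(f, x, y, h, w2=0.5):
    """One step of the second-order Runge-Kutta family (2), with w1, alpha, beta chosen as in (4)."""
    w1 = 1 - w2
    alpha = beta = 1 / (2 * w2)          # requires w2 != 0
    k1 = f(x, y)
    k2 = f(x + alpha * h, y + beta * h * k1)
    return y + h * (w1 * k1 + w2 * k2)

# w2 = 1/2 reproduces the improved Euler step for y' = 2xy at (1, 1) with h = 0.1
print(rk2_step(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1))   # 1.232
```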
A Fourth-Order Runge-Kutta Method
We try to find parameters so that the formula

    y_{n+1} = y_n + h (w_1 k_1 + w_2 k_2 + w_3 k_3 + w_4 k_4),        (5)

where

    k_1 = f(x_n, y_n),
    k_2 = f(x_n + α_1 h, y_n + β_1 h k_1),
    k_3 = f(x_n + α_2 h, y_n + β_2 h k_1 + β_3 h k_2),
    k_4 = f(x_n + α_3 h, y_n + β_4 h k_1 + β_5 h k_2 + β_6 h k_3),

agrees with a Taylor polynomial of degree 4.
The most commonly used set of values for the parameters yields the following result, the classical RK4 method:

    y_{n+1} = y_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),        (6)

where

    k_1 = f(x_n, y_n),
    k_2 = f(x_n + h/2, y_n + (h/2) k_1),
    k_3 = f(x_n + h/2, y_n + (h/2) k_2),
    k_4 = f(x_n + h, y_n + h k_3).
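Formula (6) as code, a minimal sketch with the same calling convention as the earlier sketches; Example 1 below carries out the n = 0 step of the same computation by hand.

```python
def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method, formula (6)."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x = x + h
    return y

# Example 1 below: y' = 2xy, y(1) = 1, h = 0.1
print(rk4(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5))   # about 3.4902 (Table 6.5)
```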
Example 1 Use the RK4 method with h = 0.1 to obtain an approximation to y(1.5) for the solution of y' = 2xy, y(1) = 1.
Solution We first compute the case n = 0:

    k_1 = f(x_0, y_0) = 2 x_0 y_0 = 2,
    k_2 = f(x_0 + 0.05, y_0 + 0.05 k_1) = 2(1.05)(1.1) = 2.31,
    k_3 = f(x_0 + 0.05, y_0 + 0.05 k_2) = 2(1.05)(1.1155) = 2.34255,
    k_4 = f(x_0 + 0.1, y_0 + 0.1 k_3) = 2(1.1)(1.234255) = 2.715361.
Example 1 (2) Therefore,

    y_1 = y_0 + (0.1/6)(k_1 + 2 k_2 + 2 k_3 + k_4)
        = 1 + (0.1/6)(2 + 2(2.31) + 2(2.34255) + 2.715361) = 1.23367435.

The remaining computations are summarized in Table 6.5.
Table 6.5 RK4 method with h = 0.1
x_n     y_n       Actual value   Abs. error   % Rel. error
1.00    1.0000    1.0000         0.0000       0.00
1.10    1.2337    1.2337         0.0000       0.00
1.20    1.5527    1.5527         0.0000       0.00
1.30    1.9937    1.9937         0.0000       0.00
1.40    2.6116    2.6117         0.0001       0.00
1.50    3.4902    3.4904         0.0001       0.00
Table 6.6 compares the Euler, improved Euler, and RK4 approximations to the solution of y' = 2xy, y(1) = 1 with the actual values.

Table 6.6 Comparison of numerical methods
h = 0.1
x_n     Euler     Improved Euler   RK4       Actual value
1.00    1.0000    1.0000           1.0000    1.0000
1.10    1.2000    1.2320           1.2337    1.2337
1.20    1.4640    1.5479           1.5527    1.5527
1.30    1.8154    1.9832           1.9937    1.9937
1.40    2.2874    2.5908           2.6116    2.6117
1.50    2.9278    3.4509           3.4902    3.4904

h = 0.05
x_n     Euler     Improved Euler   RK4       Actual value
1.00    1.0000    1.0000           1.0000    1.0000
1.05    1.1000    1.1077           1.1079    1.1079
1.10    1.2155    1.2332           1.2337    1.2337
1.15    1.3492    1.3798           1.3806    1.3806
1.20    1.5044    1.5514           1.5527    1.5527
1.25    1.6849    1.7531           1.7551    1.7551
1.30    1.8955    1.9909           1.9937    1.9937
1.35    2.1419    2.2721           2.2762    2.2762
1.40    2.4311    2.6060           2.6117    2.6117
1.45    2.7714    3.0038           3.0117    3.0117
1.50    3.1733    3.4795           3.4903    3.4904
Truncation Errors for the RK4 Method
Since the RK4 formula agrees with a Taylor polynomial of degree 4, its local truncation error is y^{(5)}(c) h^5/5! = O(h^5), and the global truncation error is O(h^4). The derivation of these error estimates, however, is beyond the scope of this text.
Example 2 Find a bound for the local truncation error of the RK4 method applied to y' = 2xy, y(1) = 1.
Solution By computing the fifth derivative of the known solution y(x) = e^{x^2 - 1} we get

    y^{(5)}(c) h^5/5! = (120c + 160c^3 + 32c^5) e^{c^2 - 1} h^5/5!.        (7)

Thus with c = 1.5 and h = 0.1, (7) gives the bound 0.00028 on the local truncation error for each of the five steps. Table 6.7 gives the approximations to the solution of the initial-value problem at x = 1.5 obtained by the RK4 method.
Table 6.7 RK4 approximations to y(1.5)
h       Approximation   Error
0.1     3.49021064      1.32321089 × 10^-4
0.05    3.49033382      9.13776090 × 10^-6
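The entries of Table 6.7 can be reproduced with the rk4 sketch given after formula (6); the exact value e^{1.25} supplies the error. Halving h reduces the error by roughly a factor of 2^4, consistent with a global error of O(h^4).

```python
import math

exact = math.exp(1.5**2 - 1)            # actual value y(1.5) = 3.4903...
f = lambda x, y: 2 * x * y

for h, n in [(0.1, 5), (0.05, 10)]:
    approx = rk4(f, 1.0, 1.0, h, n)         # rk4 as defined in the earlier sketch
    print(h, approx, abs(exact - approx))   # matches the rows of Table 6.7
```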
6.3 Multistep Methods

Adams-Bashforth-Moulton Method
The predictor is the Adams-Bashforth formula

    y*_{n+1} = y_n + (h/24)(55 y'_n - 59 y'_{n-1} + 37 y'_{n-2} - 9 y'_{n-3}),
    y'_n = f(x_n, y_n),   y'_{n-1} = f(x_{n-1}, y_{n-1}),   y'_{n-2} = f(x_{n-2}, y_{n-2}),   y'_{n-3} = f(x_{n-3}, y_{n-3}),        (1)

for n ≥ 3.
The value of y*_{n+1} is then substituted into the Adams-Moulton corrector

    y_{n+1} = y_n + (h/24)(9 y'_{n+1} + 19 y'_n - 5 y'_{n-1} + y'_{n-2}),
    y'_{n+1} = f(x_{n+1}, y*_{n+1}).        (2)
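A sketch of the predictor-corrector pair (1)-(2); as in Example 1 below, the RK4 formula supplies the three starting values y_1, y_2, y_3 that the multistep formulas require.

```python
def abm4(f, x0, y0, h, n_steps):
    """Adams-Bashforth-Moulton method, formulas (1)-(2), started with RK4."""
    xs = [x0 + i * h for i in range(n_steps + 1)]
    ys = [y0]
    for i in range(3):                                   # RK4 starter steps for y1, y2, y3
        x, y = xs[i], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    for n in range(3, n_steps):
        d = [f(xs[j], ys[j]) for j in (n, n - 1, n - 2, n - 3)]
        y_pred = ys[n] + h / 24 * (55 * d[0] - 59 * d[1] + 37 * d[2] - 9 * d[3])   # predictor (1)
        ys.append(ys[n] + h / 24 * (9 * f(xs[n + 1], y_pred)
                                    + 19 * d[0] - 5 * d[1] + d[2]))                # corrector (2)
    return ys

# Example 1 below: y' = x + y - 1, y(0) = 1, h = 0.2; the last entry approximates y(0.8)
print(abm4(lambda x, y: x + y - 1, 0.0, 1.0, 0.2, 4)[-1])   # about 1.4255
```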
Example 1 Use the Adams-Bashforth-Moulton method with h = 0.2 to obtain an approximation to y(0.8) for the solution of y' = x + y - 1, y(0) = 1.
Solution With h = 0.2, y(0.8) will be approximated by y_4. To get started, we use the RK4 method with x_0 = 0, y_0 = 1, and h = 0.2 to obtain

    y_1 = 1.02140000,   y_2 = 1.09181796,   y_3 = 1.22210646.
Example 1 (2) Now with x_0 = 0, x_1 = 0.2, x_2 = 0.4, x_3 = 0.6, and f(x, y) = x + y - 1, we find

    y'_0 = f(x_0, y_0) = 0 + 1 - 1 = 0,
    y'_1 = f(x_1, y_1) = 0.2 + 1.02140000 - 1 = 0.22140000,
    y'_2 = f(x_2, y_2) = 0.4 + 1.09181796 - 1 = 0.49181796,
    y'_3 = f(x_3, y_3) = 0.6 + 1.22210646 - 1 = 0.82210646.

Then the predictor (1) gives

    y*_4 = y_3 + (0.2/24)(55 y'_3 - 59 y'_2 + 37 y'_1 - 9 y'_0) = 1.42535975.
Example 1 (3) To use the corrector (2), we first need

    y'_4 = f(x_4, y*_4) = 0.8 + 1.42535975 - 1 = 1.22535975.

Then (2) gives

    y_4 = y_3 + (0.2/24)(9 y'_4 + 19 y'_3 - 5 y'_2 + y'_1) = 1.42552788,

so y(0.8) ≈ 1.42552788.
Stability of Numerical Methods
We say a numerical method is stable if small changes in the initial condition result in only small changes in the computed solution.
6.4 Higher-Order Equations and Systems

Second-Order IVPs
A second-order initial-value problem

    y'' = f(x, y, y'),   y(x_0) = y_0,   y'(x_0) = u_0,        (1)

can be expressed as an initial-value problem for a system of two first-order equations. Substituting y' = u gives

    y' = u,   u' = f(x, y, u),        (2)

and, since y' = u, the initial conditions become y(x_0) = y_0, u(x_0) = u_0. Applying Euler's method to each equation of (2) gives

    y_{n+1} = y_n + h u_n,
    u_{n+1} = u_n + h f(x_n, y_n, u_n),        (3)
whereas applying the RK4 method to (2) gives

    y_{n+1} = y_n + (h/6)(m_1 + 2 m_2 + 2 m_3 + m_4),
    u_{n+1} = u_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),        (4)

where

    m_1 = u_n,                 k_1 = f(x_n, y_n, u_n),
    m_2 = u_n + (h/2) k_1,     k_2 = f(x_n + h/2, y_n + (h/2) m_1, u_n + (h/2) k_1),
    m_3 = u_n + (h/2) k_2,     k_3 = f(x_n + h/2, y_n + (h/2) m_2, u_n + (h/2) k_2),
    m_4 = u_n + h k_3,         k_4 = f(x_n + h, y_n + h m_3, u_n + h k_3).

In general, a higher-order equation can always be reduced in this way to a system of first-order equations, and each equation of the system is then advanced with the same one-step formulas.
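A sketch of the Euler update (3) for the system form y' = u, u' = f(x, y, u); the usage line applies two steps of it to the initial-value problem of Example 1 below.

```python
def euler_second_order(f, x0, y0, u0, h, n_steps):
    """Euler's method (3) for y'' = f(x, y, y') rewritten as y' = u, u' = f(x, y, u)."""
    x, y, u = x0, y0, u0
    for _ in range(n_steps):
        # both right-hand sides use the values at x_n before either is overwritten
        y, u = y + h * u, u + h * f(x, y, u)
        x = x + h
    return y, u

# Example 1 below: y'' + xy' + y = 0, y(0) = 1, y'(0) = 2, two steps of size h = 0.1
print(euler_second_order(lambda x, y, u: -x * u - y, 0.0, 1.0, 2.0, 0.1, 2))   # (1.39, 1.761)
```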
Example 1 Use Euler's method to obtain the approximate value of y(0.2), where y(x) is the solution of

    y'' + x y' + y = 0,   y(0) = 1,   y'(0) = 2.        (5)

Solution Let y' = u; then (5) becomes the system

    y' = u,   u' = -x u - y,

with y(0) = 1, u(0) = 2. From (3),

    y_{n+1} = y_n + h u_n,   u_{n+1} = u_n + h(-x_n u_n - y_n).
Example 1 (2) Using h = 0.1, y_0 = 1, u_0 = 2, we find

    y_1 = y_0 + (0.1) u_0 = 1 + (0.1)(2) = 1.2,
    u_1 = u_0 + (0.1)(-x_0 u_0 - y_0) = 2 + (0.1)[-(0)(2) - 1] = 1.9,
    y_2 = y_1 + (0.1) u_1 = 1.2 + (0.1)(1.9) = 1.39,
    u_2 = u_1 + (0.1)(-x_1 u_1 - y_1) = 1.9 + (0.1)[-(0.1)(1.9) - 1.2] = 1.761.

In other words, y(0.2) ≈ 1.39 and y'(0.2) ≈ 1.761.
Fig 6.2 compares the results obtained by Euler's method with those obtained by the RK4 method.
Example 2 Write

    x'' - x' + 5x + 2y'' = e^{-t},
    -2x + y'' + 2y = 3t^2

as a system of first-order DEs.
Solution We write the system as

    x'' + 2y'' = e^{-t} + x' - 5x,
    y'' = 3t^2 + 2x - 2y.

After substituting the second equation into the first and simplifying,

    x'' = e^{-t} + x' - 9x + 4y - 6t^2,
    y'' = 3t^2 + 2x - 2y.
Example 2 (2) Let x' = u and y' = v. Then the original system can be written as the first-order system

    x' = u,
    y' = v,
    u' = e^{-t} + u - 9x + 4y - 6t^2,
    v' = 3t^2 + 2x - 2y.
Numerical Solution of a System
The solution of a system of the form

    x' = f(t, x, y),
    y' = g(t, x, y),
    x(t_0) = x_0,   y(t_0) = y_0,

can be approximated by numerical methods.
For example, the RK4 method applied to such a system,

    x_{n+1} = x_n + h (w_1 k_1 + w_2 k_2 + w_3 k_3 + w_4 k_4),
    y_{n+1} = y_n + h (w_1 m_1 + w_2 m_2 + w_3 m_3 + w_4 m_4),        (6)

looks like this:

    x_{n+1} = x_n + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),
    y_{n+1} = y_n + (h/6)(m_1 + 2 m_2 + 2 m_3 + m_4),        (7)
where

    k_1 = f(t_n, x_n, y_n),                                   m_1 = g(t_n, x_n, y_n),
    k_2 = f(t_n + h/2, x_n + (h/2) k_1, y_n + (h/2) m_1),     m_2 = g(t_n + h/2, x_n + (h/2) k_1, y_n + (h/2) m_1),
    k_3 = f(t_n + h/2, x_n + (h/2) k_2, y_n + (h/2) m_2),     m_3 = g(t_n + h/2, x_n + (h/2) k_2, y_n + (h/2) m_2),
    k_4 = f(t_n + h, x_n + h k_3, y_n + h m_3),               m_4 = g(t_n + h, x_n + h k_3, y_n + h m_3).        (8)
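Formulas (7)-(8) as code, a minimal sketch for a system of two equations; the usage line applies it to the initial-value problem of Example 3 below, so the printed pair can be checked against the last row of Table 6.8.

```python
def rk4_system(f, g, t0, x0, y0, h, n_steps):
    """RK4 for the system x' = f(t, x, y), y' = g(t, x, y), formulas (7)-(8)."""
    t, x, y = t0, x0, y0
    for _ in range(n_steps):
        k1, m1 = f(t, x, y),                               g(t, x, y)
        k2, m2 = f(t + h/2, x + h/2 * k1, y + h/2 * m1),   g(t + h/2, x + h/2 * k1, y + h/2 * m1)
        k3, m3 = f(t + h/2, x + h/2 * k2, y + h/2 * m2),   g(t + h/2, x + h/2 * k2, y + h/2 * m2)
        k4, m4 = f(t + h, x + h * k3, y + h * m3),         g(t + h, x + h * k3, y + h * m3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        y = y + h/6 * (m1 + 2*m2 + 2*m3 + m4)
        t = t + h
    return x, y

# Example 3 below: x' = 2x + 4y, y' = -x + 6y, x(0) = -1, y(0) = 6, h = 0.2
print(rk4_system(lambda t, x, y: 2*x + 4*y,
                 lambda t, x, y: -x + 6*y,
                 0.0, -1.0, 6.0, 0.2, 3))   # close to (158.9430, 150.8192), last row of Table 6.8
```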
Example 3 Consider the initial-value problem

    x' = 2x + 4y,   y' = -x + 6y,   x(0) = -1,   y(0) = 6.

Use the RK4 method to approximate x(0.6) and y(0.6) with h = 0.2 and h = 0.1.
Solution With h = 0.2, f(t, x, y) = 2x + 4y, g(t, x, y) = -x + 6y, and the given initial data, (8) gives

    k_1 = f(0, -1, 6) = 2(-1) + 4(6) = 22,
    m_1 = g(0, -1, 6) = -(-1) + 6(6) = 37,
    k_2 = f(0.1, -1 + (0.1)(22), 6 + (0.1)(37)) = 2(1.2) + 4(9.7) = 41.2,
    m_2 = g(0.1, 1.2, 9.7) = -1.2 + 6(9.7) = 57.
Example 3 (2)

    k_3 = f(0.1, -1 + (0.1)(41.2), 6 + (0.1)(57)) = 2(3.12) + 4(11.7) = 53.04,
    m_3 = g(0.1, 3.12, 11.7) = -3.12 + 6(11.7) = 67.08,
    k_4 = f(0.2, -1 + (0.2)(53.04), 6 + (0.2)(67.08)) = 2(9.608) + 4(19.416) = 96.88,
    m_4 = g(0.2, 9.608, 19.416) = -9.608 + 6(19.416) = 106.888.
Example 3 (3) Therefore, from (7) we get

    x_1 = x_0 + (0.2/6)(k_1 + 2 k_2 + 2 k_3 + k_4) = -1 + (0.2/6)(22 + 2(41.2) + 2(53.04) + 96.88) = 9.2453,
    y_1 = y_0 + (0.2/6)(m_1 + 2 m_2 + 2 m_3 + m_4) = 6 + (0.2/6)(37 + 2(57) + 2(67.08) + 106.888) = 19.0683.

The remaining values, together with those obtained using h = 0.1, are shown in Fig 6.3 and in Tables 6.8 and 6.9.
Fig 6.3
Table 6.8 RK4 method with h = 0.2
t_n     x_n        y_n
0.00    -1.0000    6.0000
0.20    9.2453     19.0683
0.40    46.0327    55.1203
0.60    158.9430   150.8192
Table 6.9 RK4 method with h = 0.1
t_n     x_n        y_n
0.00    -1.0000    6.0000
0.10    2.3840     10.8883
0.20    9.3379     19.1332
0.30    22.5541    32.8539
0.40    46.5103    55.4420
0.50    88.5729    93.3006
0.60    160.7563   152.0025
6.5 Second-Order Boundary-Value Problems

Finite Difference Approximations
The Taylor series of y(x) at a point a is

    y(x) = y(a) + y'(a)(x - a) + y''(a)(x - a)^2/2! + y'''(a)(x - a)^3/3! + ... .

If we set h = x - a, then

    y(a + h) = y(a) + y'(a) h + y''(a) h^2/2! + y'''(a) h^3/3! + ... .

Replacing a by x, we rewrite the last expression as

    y(x + h) = y(x) + y'(x) h + y''(x) h^2/2 + y'''(x) h^3/6 + ...        (1)

and, replacing h by -h,

    y(x - h) = y(x) - y'(x) h + y''(x) h^2/2 - y'''(x) h^3/6 + ... .        (2)
If h is small, we can neglect the terms in (1) and (2) involving y'' and higher derivatives; solving each for y'(x) then gives

    y'(x) ≈ [y(x + h) - y(x)]/h        (3)

and

    y'(x) ≈ [y(x) - y(x - h)]/h.        (4)

Subtracting (2) from (1) and ignoring the terms involving h^3 and higher gives

    y'(x) ≈ [y(x + h) - y(x - h)]/(2h),        (5)

while adding (1) and (2) and ignoring the terms involving h^3 and higher gives

    y''(x) ≈ [y(x + h) - 2y(x) + y(x - h)]/h^2.        (6)
The right-hand sides of (3), (4), (5), and (6) are called difference quotients, and the expressions

    y(x + h) - y(x)                 forward difference
    y(x) - y(x - h)                 backward difference
    y(x + h) - y(x - h)             central difference
    y(x + h) - 2y(x) + y(x - h)     central difference

are called finite differences.
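A quick numerical illustration of the difference quotients (3), (5), and (6), using y = sin x (an arbitrary test function chosen here, not taken from the text) so the exact derivatives are known:

```python
import math

y = math.sin          # test function with known derivatives: y' = cos x, y'' = -sin x
x, h = 1.0, 0.01

forward = (y(x + h) - y(x)) / h                       # quotient (3), error O(h)
central = (y(x + h) - y(x - h)) / (2 * h)             # quotient (5), error O(h^2)
second  = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # quotient (6), approximates y''(x)

print(forward - math.cos(x), central - math.cos(x), second + math.sin(x))
```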
Finite Difference Method
Consider the linear second-order boundary-value problem

    y'' + P(x) y' + Q(x) y = f(x),   y(a) = α,   y(b) = β.        (7)

Suppose a = x_0 < x_1 < ... < x_n = b represents a regular partition of the interval [a, b], that is, x_i = a + ih, where i = 0, 1, 2, ..., n and h = (b - a)/n. The points x_1, x_2, ..., x_{n-1} are called interior mesh points of the interval. If we let

    y_i = y(x_i),   P_i = P(x_i),   Q_i = Q(x_i),   f_i = f(x_i),
and if y' and y'' in (7) are replaced by the central difference approximations (5) and (6), then we have

    [y_{i+1} - 2y_i + y_{i-1}]/h^2 + P_i [y_{i+1} - y_{i-1}]/(2h) + Q_i y_i = f_i,

or, after multiplying by h^2 and collecting terms,

    (1 + (h/2) P_i) y_{i+1} + (-2 + h^2 Q_i) y_i + (1 - (h/2) P_i) y_{i-1} = h^2 f_i.        (8)

Equation (8), which holds at each interior mesh point i = 1, 2, ..., n - 1, is known as a finite difference equation.
Example 1 Use (8) with n = 4 to approximate the solution of the BVP

    y'' - 4y = 0,   y(0) = 0,   y(1) = 5.

Solution We have P(x) = 0, Q(x) = -4, f(x) = 0, and h = (1 - 0)/4 = 1/4, so (8) becomes

    y_{i+1} - 2.25 y_i + y_{i-1} = 0.        (9)

The interior points are x_1 = 0 + 1/4, x_2 = 0 + 2/4, x_3 = 0 + 3/4, so (9) gives

    y_2 - 2.25 y_1 + y_0 = 0,
    y_3 - 2.25 y_2 + y_1 = 0,
    y_4 - 2.25 y_3 + y_2 = 0.
Example 1 (2) Together with the boundary values y_0 = 0 and y_4 = 5, this becomes the linear system

    -2.25 y_1 + y_2 = 0,
    y_1 - 2.25 y_2 + y_3 = 0,
    y_2 - 2.25 y_3 = -5.

Solving, we obtain y_1 = 0.7256, y_2 = 1.6327, y_3 = 2.9479.
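In matrix form, the finite difference method amounts to assembling and solving a tridiagonal linear system built from (8). A sketch is shown below (NumPy is assumed for the linear solve); the usage line reproduces Example 1.

```python
import numpy as np

def finite_difference(P, Q, f, a, b, alpha, beta, n):
    """Solve y'' + P(x)y' + Q(x)y = f(x), y(a) = alpha, y(b) = beta via equation (8)."""
    h = (b - a) / n
    x = [a + i * h for i in range(1, n)]                        # interior mesh points
    A = np.zeros((n - 1, n - 1))
    r = np.array([h**2 * f(xi) for xi in x], dtype=float)
    for i, xi in enumerate(x):
        A[i, i] = -2 + h**2 * Q(xi)
        if i > 0:
            A[i, i - 1] = 1 - (h / 2) * P(xi)
        if i < n - 2:
            A[i, i + 1] = 1 + (h / 2) * P(xi)
    r[0] -= (1 - (h / 2) * P(x[0])) * alpha                     # move y_0 = alpha to the right side
    r[-1] -= (1 + (h / 2) * P(x[-1])) * beta                    # move y_n = beta to the right side
    return np.linalg.solve(A, r)                                # interior values y_1, ..., y_{n-1}

# Example 1: y'' - 4y = 0, y(0) = 0, y(1) = 5, n = 4
print(finite_difference(lambda x: 0, lambda x: -4, lambda x: 0, 0.0, 1.0, 0.0, 5.0, 4))
# about [0.7256, 1.6327, 2.9479]
```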
Example 2 Use (8) with n = 10 to approximate the solution of the BVP

    y'' + 3y' + 2y = 4x^2,   y(1) = 1,   y(2) = 6.

Solution We have P(x) = 3, Q(x) = 2, f(x) = 4x^2, and h = (2 - 1)/10 = 0.1, hence (8) becomes

    1.15 y_{i+1} - 1.98 y_i + 0.85 y_{i-1} = 0.04 x_i^2.        (10)

Applying (10) at the interior points x_1 = 1.1, x_2 = 1.2, ..., x_9 = 1.9, with the boundary values y_0 = 1 and y_{10} = 6, gives a system of nine linear equations in the nine unknowns y_1, y_2, ..., y_9.
Example 2 (2) We can then solve this system of nine equations to obtain the approximations y_1, y_2, ..., y_9.
Shooting Method
Another way of approximating a solution of the BVP y'' = f(x, y, y'), y(a) = α, y(b) = β is called the shooting method. The starting point of this method is the replacement of the boundary-value problem by the initial-value problem

    y'' = f(x, y, y'),   y(a) = α,   y'(a) = m_1,        (11)

where m_1 is simply a guess for the unknown initial slope. One solves (11) numerically, compares the computed value at x = b with β, and adjusts the guess until the two agree. The details are left as an exercise; see Problem 14.
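Since the text leaves the details as an exercise, the sketch below shows one common way to carry out the idea: integrate the initial-value problem (11) with RK4 and adjust the guessed slope by secant iteration on the amount by which the computed y(b) misses β. The function and parameter names are illustrative, not taken from the text.

```python
def shoot(f, a, b, alpha, beta, m1, m2, h=0.01, tol=1e-8, max_iter=25):
    """Shooting method for y'' = f(x, y, y'), y(a) = alpha, y(b) = beta."""

    def y_at_b(m):
        # RK4 on the system y' = u, u' = f(x, y, u), starting from y(a) = alpha, y'(a) = m
        n = round((b - a) / h)
        x, y, u = a, alpha, m
        for _ in range(n):
            k1, l1 = u,            f(x, y, u)
            k2, l2 = u + h/2 * l1, f(x + h/2, y + h/2 * k1, u + h/2 * l1)
            k3, l3 = u + h/2 * l2, f(x + h/2, y + h/2 * k2, u + h/2 * l2)
            k4, l4 = u + h * l3,   f(x + h, y + h * k3, u + h * l3)
            y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
            u += h/6 * (l1 + 2*l2 + 2*l3 + l4)
            x += h
        return y

    # secant iteration on the miss distance y(b; m) - beta (assumes the two residuals differ)
    F1, F2 = y_at_b(m1) - beta, y_at_b(m2) - beta
    for _ in range(max_iter):
        m1, m2 = m2, m2 - F2 * (m2 - m1) / (F2 - F1)
        F1, F2 = F2, y_at_b(m2) - beta
        if abs(F2) < tol:
            break
    return m2      # slope whose IVP solution satisfies y(b) = beta (approximately)

# e.g. the BVP of Example 1 in this section, rewritten as y'' = 4y, y(0) = 0, y(1) = 5:
# shoot(lambda x, y, u: 4 * y, 0.0, 1.0, 0.0, 5.0, 0.0, 1.0)
```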