Numerical Solutions of Ordinary Differential Equations

CHAPTER 6 Numerical Solutions of Ordinary Differential Equations

Contents
6.1 Euler Methods and Error Analysis
6.2 Runge-Kutta Methods
6.3 Multistep Methods
6.4 Higher-Order Equations and Systems
6.5 Second-Order Boundary-Value Problems

6.1 Euler Method and Error Analysis
Introduction: Recall the backbone of Euler's method,

yn+1 = yn + h f(xn, yn).    (1)

Errors in Numerical Methods: One of the most important sources of error is round-off error.
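The recursion (1) translates directly into a few lines of code. The following Python sketch is illustrative only (it is not part of the original presentation; the function name euler and the sample problem are chosen for demonstration):

```python
def euler(f, x0, y0, h, n):
    """Euler's method for y' = f(x, y), y(x0) = y0: take n steps of size h."""
    x, y = x0, y0
    values = [(x, y)]
    for _ in range(n):
        y = y + h * f(x, y)      # formula (1): y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h
        values.append((x, y))
    return values

# y' = 2xy, y(1) = 1, five steps of size h = 0.1
print(euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5)[-1])
# -> roughly (1.5, 2.9278); compare the Euler column of Table 6.6
```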

Truncation Errors for Euler's Method
Euler's method gives only a straight-line approximation to the solution over each step. The resulting error is called the local truncation error, or discretization error. To derive a formula for the truncation error of Euler's method, we use Taylor's formula with remainder:

y(x) = y(a) + y'(a)(x − a) + ... + y^(k)(a)(x − a)^k/k! + y^(k+1)(c)(x − a)^(k+1)/(k + 1)!,

where c is some point between a and x.

Setting k = 1, a = xn, x = xn+1 = xn + h, we have

y(xn+1) = y(xn) + h y'(xn) + y''(c) h^2/2!,   or   yn+1 = yn + h f(xn, yn) + y''(c) h^2/2!.

Hence the local truncation error in yn+1 of Euler's method is

y''(c) h^2/2!,   where xn < c < xn+1.

The value of c is usually unknown, but an upper bound on the error is

M h^2/2!,   where M = max |y''(x)| for xn < x < xn+1.

Note: The error e(h) is said to be of order h^n, denoted by O(h^n), if there exists a constant C such that |e(h)| ≤ C h^n for h sufficiently small.

Example 1
Find a bound for the local truncation errors for Euler's method applied to y' = 2xy, y(1) = 1.
Solution
From the solution y = e^(x^2 − 1) we have y'' = (2 + 4x^2) e^(x^2 − 1), so the local truncation error is

(2 + 4c^2) e^(c^2 − 1) h^2/2,   where xn < c < xn+1.

In particular, for h = 0.1 the upper bound obtained by replacing c by 1.1 is (2 + 4(1.1)^2) e^((1.1)^2 − 1) (0.1)^2/2 = 0.0422.

Example 1 (2)
When we take five steps with h = 0.1 we reach x = 1.5, and replacing c by 1.5 gives the bound

(2 + 4(1.5)^2) e^((1.5)^2 − 1) (0.1)^2/2 = 0.1920.    (2)
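These two bounds are easy to verify numerically. The following sketch (not from the original slides; the function name is illustrative) simply evaluates the bound (2 + 4c^2) e^(c^2 − 1) h^2/2 derived above:

```python
import math

def euler_error_bound(c, h):
    """Bound (2 + 4c^2) e^(c^2 - 1) h^2 / 2 on the local truncation error
    of Euler's method applied to y' = 2xy, y(1) = 1."""
    return (2 + 4 * c**2) * math.exp(c**2 - 1) * h**2 / 2

print(euler_error_bound(1.1, 0.1))   # about 0.0422, the one-step bound
print(euler_error_bound(1.5, 0.1))   # about 0.1920, the bound after five steps
```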

Improved Euler's Method
The formula

yn+1 = yn + h [f(xn, yn) + f(xn+1, y*n+1)]/2,    (3)

where

y*n+1 = yn + h f(xn, yn),    (4)

is commonly known as the improved Euler's method. See Fig 6.1. In general, the improved Euler's method is an example of a predictor-corrector method: (4) predicts a value of yn+1 and (3) corrects it.
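As an illustrative sketch (not part of the original slides), the predictor-corrector pair (3)-(4) can be coded as follows; the function name improved_euler is arbitrary:

```python
def improved_euler(f, x0, y0, h, n):
    """Improved Euler (Heun) method for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    values = [(x, y)]
    for _ in range(n):
        y_star = y + h * f(x, y)                        # predictor (4)
        y = y + h * (f(x, y) + f(x + h, y_star)) / 2    # corrector (3)
        x = x + h
        values.append((x, y))
    return values

# Example 2 below: y' = 2xy, y(1) = 1 with h = 0.1
print(improved_euler(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5)[-1])
# -> roughly (1.5, 3.4509), matching Table 6.3
```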

Fig 6.1

Example 2
Use the improved Euler's method to obtain the approximate value of y(1.5) for the solution of y' = 2xy, y(1) = 1. Compare the results for h = 0.1 and h = 0.05.
Solution
With x0 = 1, y0 = 1, f(xn, yn) = 2xnyn and h = 0.1, (4) gives

y1* = y0 + (0.1)(2 x0 y0) = 1.2.

Using (3) with x1 = 1 + h = 1.1,

y1 = y0 + (0.1)[2 x0 y0 + 2 x1 y1*]/2 = 1 + (0.05)[2(1)(1) + 2(1.1)(1.2)] = 1.232.

The results of the remaining calculations are given in Tables 6.3 and 6.4.

Table 6.3  Improved Euler's method with h = 0.1

xn      yn       Actual value   Abs. error   % rel. error
1.00    1.0000   1.0000         0.0000       0.00
1.10    1.2320   1.2337         0.0017       0.14
1.20    1.5479   1.5527         0.0048       0.31
1.30    1.9832   1.9937         0.0106       0.53
1.40    2.5908   2.6117         0.0209       0.80
1.50    3.4509   3.4904         0.0394       1.13

Table 6.4  Improved Euler's method with h = 0.05

xn      yn       Actual value   Abs. error   % rel. error
1.00    1.0000   1.0000         0.0000       0.00
1.05    1.1077   1.1079         0.0002       0.02
1.10    1.2332   1.2337         0.0004       0.04
1.15    1.3798   1.3806         0.0008       0.06
1.20    1.5514   1.5527         0.0013       0.08
1.25    1.7531   1.7551         0.0020       0.11
1.30    1.9909   1.9937         0.0029       0.14
1.35    2.2721   2.2762         0.0041       0.18
1.40    2.6060   2.6117         0.0057       0.22
1.45    3.0038   3.0117         0.0079       0.26
1.50    3.4795   3.4904         0.0108       0.31

Truncation Errors for the Improved Euler's Method
Note that the local truncation error for the improved Euler's method is O(h^3).

6.2 Runge-Kutta Methods
All Runge-Kutta methods are generalizations of the basic Euler formula, in which the slope function f is replaced by a weighted average of slopes over the interval xn ≤ x ≤ xn+1:

yn+1 = yn + h (w1 k1 + w2 k2 + ... + wm km),    (1)

where the weights wi, i = 1, 2, …, m, are constants satisfying w1 + w2 + … + wm = 1, and each ki is the function f evaluated at a selected point (x, y) with xn ≤ x ≤ xn+1.

The number m is called the order of the method. If we take m = 1, w1 = 1 and k1 = f(xn, yn), we get Euler's method; it is simply the first-order Runge-Kutta method.

A Second-Order Runge-Kutta Method
We try to find constants so that the formula

yn+1 = yn + h (w1 k1 + w2 k2),    (2)

where k1 = f(xn, yn), k2 = f(xn + αh, yn + βh k1), agrees with a Taylor polynomial of degree 2. These constants must satisfy

w1 + w2 = 1,  w2 α = 1/2,  w2 β = 1/2,    (3)

and hence

w1 = 1 − w2,  α = 1/(2 w2),  β = 1/(2 w2),    (4)

where w2 ≠ 0.

For example, choosing w2 = 1/2 yields w1 = 1/2, α = 1, β = 1, and (2) becomes

yn+1 = yn + h (k1 + k2)/2,

where k1 = f(xn, yn), k2 = f(xn + h, yn + h k1). Since xn + h = xn+1 and yn + h k1 = yn + h f(xn, yn), this is identical to the improved Euler's method.

A Fourth-Order Runge-Kutta Method
We try to find parameters so that the formula

yn+1 = yn + h (w1 k1 + w2 k2 + w3 k3 + w4 k4),    (5)

where each ki is f evaluated at a suitably chosen point, agrees with a Taylor polynomial of degree 4.

The most commonly used set of values for the parameters yields the following result, known as the classical fourth-order Runge-Kutta (RK4) method:

yn+1 = yn + (h/6)(k1 + 2k2 + 2k3 + k4),    (6)
k1 = f(xn, yn)
k2 = f(xn + h/2, yn + h k1/2)
k3 = f(xn + h/2, yn + h k2/2)
k4 = f(xn + h, yn + h k3).
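A Python sketch of formula (6) (illustrative only; not from the slides):

```python
def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta method for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    values = [(x, y)]
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6     # formula (6)
        x = x + h
        values.append((x, y))
    return values

# Example 1 below: y' = 2xy, y(1) = 1 with h = 0.1
print(rk4(lambda x, y: 2 * x * y, 1.0, 1.0, 0.1, 5)[-1])
# -> roughly (1.5, 3.49021), cf. Tables 6.5 and 6.7
```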

Example 1
Use the RK4 method with h = 0.1 to obtain an approximation to y(1.5) for the solution of y' = 2xy, y(1) = 1.
Solution
We first compute the case n = 0:

k1 = f(1, 1) = 2(1)(1) = 2
k2 = f(1 + 0.05, 1 + 0.05(2)) = f(1.05, 1.1) = 2.31
k3 = f(1 + 0.05, 1 + 0.05(2.31)) = f(1.05, 1.1155) = 2.34255
k4 = f(1 + 0.1, 1 + 0.1(2.34255)) = f(1.1, 1.234255) = 2.715361.

Example 1 (2)
Therefore,

y1 = y0 + (0.1/6)(k1 + 2k2 + 2k3 + k4) = 1 + (0.1/6)(2 + 2(2.31) + 2(2.34255) + 2.715361) = 1.23367435.

The remaining calculations are summarized in Table 6.5.

Table 6.5  RK4 method with h = 0.1

xn      yn       Actual value   Abs. error   % rel. error
1.00    1.0000   1.0000         0.0000       0.00
1.10    1.2337   1.2337         0.0000       0.00
1.20    1.5527   1.5527         0.0000       0.00
1.30    1.9937   1.9937         0.0000       0.00
1.40    2.6116   2.6117         0.0001       0.00
1.50    3.4902   3.4904         0.0001       0.00

Table 6.6 shows some comparisons.

Table 6.6  Comparison of numerical methods for y' = 2xy, y(1) = 1

h = 0.1
xn      Euler    Improved Euler   RK4      Actual value
1.00    1.0000   1.0000           1.0000   1.0000
1.10    1.2000   1.2320           1.2337   1.2337
1.20    1.4640   1.5479           1.5527   1.5527
1.30    1.8154   1.9832           1.9937   1.9937
1.40    2.2874   2.5908           2.6116   2.6117
1.50    2.9278   3.4509           3.4902   3.4904

h = 0.05
xn      Euler    Improved Euler   RK4      Actual value
1.00    1.0000   1.0000           1.0000   1.0000
1.05    1.1000   1.1077           1.1079   1.1079
1.10    1.2155   1.2332           1.2337   1.2337
1.15    1.3492   1.3798           1.3806   1.3806
1.20    1.5044   1.5514           1.5527   1.5527
1.25    1.6849   1.7531           1.7551   1.7551
1.30    1.8955   1.9909           1.9937   1.9937
1.35    2.1419   2.2721           2.2762   2.2762
1.40    2.4311   2.6060           2.6117   2.6117
1.45    2.7714   3.0038           3.0117   3.0117
1.50    3.1733   3.4795           3.4903   3.4904

Truncation Error for the RK4 Method
Since the method agrees with a Taylor polynomial of degree 4, the local truncation error is O(h^5) and the global truncation error is O(h^4). The proof of this, however, is beyond the scope of this text.

Example 2
Find a bound for the local truncation error of the RK4 method applied to y' = 2xy, y(1) = 1.
Solution
By computing the fifth derivative of the known solution y = e^(x^2 − 1) we get

y^(5)(c) h^5/5! = (120c + 160c^3 + 32c^5) e^(c^2 − 1) h^5/120.    (7)

Thus with c = 1.5 and h = 0.1, (7) gives the bound 0.00028. Table 6.7 gives the approximations to the solution of the initial-value problem at x = 1.5 obtained with the RK4 method.

Table 6.7  RK4 approximations to y(1.5)

h       Approximation   Error
0.1     3.49021064      1.32321089 × 10^-4
0.05    3.49033382      9.13776090 × 10^-6

6.3 Multistep Methods
Adams-Bashforth-Moulton Method
The predictor is the Adams-Bashforth formula

y*n+1 = yn + (h/24)(55 y'n − 59 y'n−1 + 37 y'n−2 − 9 y'n−3),    (1)
y'n = f(xn, yn), y'n−1 = f(xn−1, yn−1), y'n−2 = f(xn−2, yn−2), y'n−3 = f(xn−3, yn−3),

for n ≥ 3.

The value of y*n+1 is then substituted into the Adams-Moulton corrector

yn+1 = yn + (h/24)(9 y'n+1 + 19 y'n − 5 y'n−1 + y'n−2),    (2)
y'n+1 = f(xn+1, y*n+1).
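As a sketch (not from the original slides), one way to code the predictor-corrector pair (1)-(2) in Python, assuming the four starting values have already been obtained (for example with RK4), is:

```python
def abm4(f, xs, ys, h, n_steps):
    """Adams-Bashforth-Moulton predictor-corrector.
    xs, ys must already contain the four starting values y0, ..., y3."""
    xs, ys = list(xs), list(ys)
    for _ in range(n_steps):
        fn = [f(x, y) for x, y in zip(xs[-4:], ys[-4:])]   # y'_{n-3}, ..., y'_n
        # Adams-Bashforth predictor (1)
        y_pred = ys[-1] + h / 24 * (55*fn[3] - 59*fn[2] + 37*fn[1] - 9*fn[0])
        x_next = xs[-1] + h
        # Adams-Moulton corrector (2)
        y_next = ys[-1] + h / 24 * (9*f(x_next, y_pred) + 19*fn[3] - 5*fn[2] + fn[1])
        xs.append(x_next)
        ys.append(y_next)
    return xs, ys

# Example 1 below: y' = x + y - 1, y(0) = 1, h = 0.2, RK4 starting values
f = lambda x, y: x + y - 1
xs = [0.0, 0.2, 0.4, 0.6]
ys = [1.0, 1.02140000, 1.09181796, 1.22210646]
print(abm4(f, xs, ys, 0.2, 1)[1][-1])   # about 1.42552788
```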

Example 1
Use the Adams-Bashforth-Moulton method with h = 0.2 to obtain an approximation to y(0.8) for the solution of y' = x + y − 1, y(0) = 1.
Solution
With h = 0.2, y(0.8) will be approximated by y4. To get started, we use the RK4 method with x0 = 0, y0 = 1, h = 0.2 to obtain

y1 = 1.02140000, y2 = 1.09181796, y3 = 1.22210646.

Example 1 (2)
Now with x0 = 0, x1 = 0.2, x2 = 0.4, x3 = 0.6 and f(x, y) = x + y − 1, we find

y'0 = f(x0, y0) = 0 + 1 − 1 = 0
y'1 = f(x1, y1) = 0.2 + 1.02140000 − 1 = 0.22140000
y'2 = f(x2, y2) = 0.4 + 1.09181796 − 1 = 0.49181796
y'3 = f(x3, y3) = 0.6 + 1.22210646 − 1 = 0.82210646.

The predictor (1) then gives

y4* = y3 + (0.2/24)(55 y'3 − 59 y'2 + 37 y'1 − 9 y'0) = 1.42535976.

Example 1 (3)
To use the corrector (2), we first need

y'4 = f(x4, y4*) = 0.8 + 1.42535976 − 1 = 1.22535976.

Then (2) gives

y4 = y3 + (0.2/24)(9 y'4 + 19 y'3 − 5 y'2 + y'1) = 1.42552788,

so y(0.8) ≈ 1.42552788.

Stability of Numerical Methods
We say a numerical method is stable if small changes in the initial condition result in only small changes in the computed solution.

6.4 Higher-Order Equations and Systems
Second-Order IVPs
A second-order initial-value problem

y'' = f(x, y, y'), y(x0) = y0, y'(x0) = u0,    (1)

can be expressed, by setting y' = u, as the system

y' = u
u' = f(x, y, u).    (2)

Since y' = u, the initial conditions become y(x0) = y0, u(x0) = u0. Applying Euler's method to (2) gives

yn+1 = yn + h un
un+1 = un + h f(xn, yn, un),    (3)

whereas the RK4 method applied to (2) gives

yn+1 = yn + (h/6)(m1 + 2m2 + 2m3 + m4)
un+1 = un + (h/6)(k1 + 2k2 + 2k3 + k4),    (4)

where

m1 = un,              k1 = f(xn, yn, un)
m2 = un + (h/2)k1,    k2 = f(xn + h/2, yn + (h/2)m1, un + (h/2)k1)
m3 = un + (h/2)k2,    k3 = f(xn + h/2, yn + (h/2)m2, un + (h/2)k2)
m4 = un + h k3,       k4 = f(xn + h, yn + h m3, un + h k3).

In general, every higher-order differential equation y^(n) = f(x, y, y', ..., y^(n−1)) can be reduced in the same way to a system of n first-order equations.

Example 1
Use Euler's method to obtain the approximate value of y(0.2), where y(x) is the solution of

y'' + x y' + y = 0, y(0) = 1, y'(0) = 2.    (5)

Solution
Letting y' = u, (5) becomes the system

y' = u
u' = −x u − y.

From (3),

yn+1 = yn + h un
un+1 = un + h(−xn un − yn).

Example 1 (2)
Using h = 0.1, y0 = 1, u0 = 2, we find

y1 = y0 + (0.1) u0 = 1 + (0.1)(2) = 1.2
u1 = u0 + (0.1)(−x0 u0 − y0) = 2 + (0.1)[−(0)(2) − 1] = 1.9
y2 = y1 + (0.1) u1 = 1.2 + (0.1)(1.9) = 1.39
u2 = u1 + (0.1)(−x1 u1 − y1) = 1.9 + (0.1)[−(0.1)(1.9) − 1.2] = 1.761.

In other words, y(0.2) ≈ 1.39.
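A small Python sketch of this computation (illustrative only, using the system y' = u, u' = −xu − y set up above; it simply repeats the two Euler steps):

```python
def euler_system(f, x0, y0, u0, h, n):
    """Euler's method (3) for y' = u, u' = f(x, y, u)."""
    x, y, u = x0, y0, u0
    for _ in range(n):
        y, u = y + h * u, u + h * f(x, y, u)   # both updates use the old (x, y, u)
        x = x + h
    return x, y, u

# Example 1: y'' + x y' + y = 0 rewritten as u' = -x u - y, y(0) = 1, y'(0) = 2
f = lambda x, y, u: -x * u - y
print(euler_system(f, 0.0, 1.0, 2.0, 0.1, 2))   # y(0.2) is approximated by about 1.39
```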

Fig 6.2
Fig 6.2 compares the results obtained by Euler's method and by the RK4 method.

Example 2 Write as a system of first-order DEs. Solution We write After simplification

Example 2 (2) Let Then the original system can be

Numerical Solution of a System
The solution of a system of first-order differential equations can be approximated by numerical methods.

For example, the RK4 method applied to the system

x' = f(t, x, y)
y' = g(t, x, y)
x(t0) = x0, y(t0) = y0    (6)

looks like this:

xn+1 = xn + (h/6)(m1 + 2m2 + 2m3 + m4)
yn+1 = yn + (h/6)(k1 + 2k2 + 2k3 + k4),    (7)

where

m1 = f(tn, xn, yn)                              k1 = g(tn, xn, yn)
m2 = f(tn + h/2, xn + (h/2)m1, yn + (h/2)k1)    k2 = g(tn + h/2, xn + (h/2)m1, yn + (h/2)k1)
m3 = f(tn + h/2, xn + (h/2)m2, yn + (h/2)k2)    k3 = g(tn + h/2, xn + (h/2)m2, yn + (h/2)k2)
m4 = f(tn + h, xn + h m3, yn + h k3)            k4 = g(tn + h, xn + h m3, yn + h k3).    (8)
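A Python sketch of (7)-(8) for a two-equation system (illustrative; not part of the original slides):

```python
def rk4_system(f, g, t0, x0, y0, h, n):
    """Classical RK4 for the system x' = f(t, x, y), y' = g(t, x, y)."""
    t, x, y = t0, x0, y0
    history = [(t, x, y)]
    for _ in range(n):
        m1, k1 = f(t, x, y), g(t, x, y)
        m2 = f(t + h/2, x + h*m1/2, y + h*k1/2)
        k2 = g(t + h/2, x + h*m1/2, y + h*k1/2)
        m3 = f(t + h/2, x + h*m2/2, y + h*k2/2)
        k3 = g(t + h/2, x + h*m2/2, y + h*k2/2)
        m4 = f(t + h, x + h*m3, y + h*k3)
        k4 = g(t + h, x + h*m3, y + h*k3)
        x = x + h * (m1 + 2*m2 + 2*m3 + m4) / 6
        y = y + h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t = t + h
        history.append((t, x, y))
    return history

# Example 3 below: x' = 2x + 4y, y' = -x + 6y, x(0) = -1, y(0) = 6, h = 0.2
f = lambda t, x, y: 2*x + 4*y
g = lambda t, x, y: -x + 6*y
print(rk4_system(f, g, 0.0, -1.0, 6.0, 0.2, 3)[-1])
# -> about (0.6, 158.94, 150.82), cf. Table 6.8
```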

Example 3
Consider the initial-value problem

x' = 2x + 4y
y' = −x + 6y
x(0) = −1, y(0) = 6.

Use the RK4 method to approximate x(0.6) and y(0.6) with h = 0.2 and h = 0.1.
Solution
With h = 0.2 and the given data, (8) gives

m1 = f(0, −1, 6) = 2(−1) + 4(6) = 22
k1 = g(0, −1, 6) = −(−1) + 6(6) = 37.

Example 3 (2)

m2 = f(0.1, −1 + (0.1)(22), 6 + (0.1)(37)) = f(0.1, 1.2, 9.7) = 41.2
k2 = g(0.1, 1.2, 9.7) = 57
m3 = f(0.1, −1 + (0.1)(41.2), 6 + (0.1)(57)) = f(0.1, 3.12, 11.7) = 53.04
k3 = g(0.1, 3.12, 11.7) = 67.08
m4 = f(0.2, −1 + (0.2)(53.04), 6 + (0.2)(67.08)) = f(0.2, 9.608, 19.416) = 96.88
k4 = g(0.2, 9.608, 19.416) = 106.888.

Example 3 (3)
Therefore, from (7) we get

x1 = x0 + (0.2/6)(m1 + 2m2 + 2m3 + m4) = −1 + (0.2/6)(22 + 2(41.2) + 2(53.04) + 96.88) = 9.2453
y1 = y0 + (0.2/6)(k1 + 2k2 + 2k3 + k4) = 6 + (0.2/6)(37 + 2(57) + 2(67.08) + 106.888) = 19.0683.

The remaining values are obtained in the same way. See Fig 6.3 and Tables 6.8 and 6.9.

Fig 6.3

Table 6.8  RK4 method with h = 0.2

tn      xn          yn
0.00    -1.0000     6.0000
0.20    9.2453      19.0683
0.40    46.0327     55.1203
0.60    158.9430    150.8192

Table 6.9  RK4 method with h = 0.1

tn      xn          yn
0.00    -1.0000     6.0000
0.10    2.3840      10.8883
0.20    9.3379      19.1332
0.30    22.5541     32.8539
0.40    46.5103     55.4420
0.50    88.5729     93.3006
0.60    160.7563    152.0025

6.5 Second-Order BVPs
Finite Difference Approximation
The Taylor series of y(x) at a point a is

y(x) = y(a) + y'(a)(x − a) + y''(a)(x − a)^2/2! + y'''(a)(x − a)^3/3! + ....

If we set h = x − a, then

y(a + h) = y(a) + y'(a) h + y''(a) h^2/2! + y'''(a) h^3/3! + ....

Rewriting the last expression with x in place of a, and replacing h by h and by −h in turn, gives

y(x + h) = y(x) + y'(x) h + y''(x) h^2/2! + y'''(x) h^3/3! + ...    (1)
y(x − h) = y(x) − y'(x) h + y''(x) h^2/2! − y'''(x) h^3/3! + ....    (2)

If h is small, we can neglect the terms involving h^2 and higher; then (1) and (2) give, respectively,

y'(x) ≈ (1/h)[y(x + h) − y(x)]    (3)
y'(x) ≈ (1/h)[y(x) − y(x − h)].    (4)

Subtracting (1) and (2) also gives

y'(x) ≈ (1/2h)[y(x + h) − y(x − h)].    (5)

On the other hand, if we ignore the terms involving h^3 and higher, then adding (1) and (2) gives

y''(x) ≈ (1/h^2)[y(x + h) − 2y(x) + y(x − h)].    (6)

The right-hand sides of (3), (4), (5), and (6) are called difference quotients, and the expressions

y(x + h) − y(x): forward difference
y(x) − y(x − h): backward difference
y(x + h) − y(x − h): central difference
y(x + h) − 2y(x) + y(x − h): central difference

are called finite differences.
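A quick numerical illustration of these difference quotients (a sketch that is not in the original slides; sin x is used as a convenient test function):

```python
import math

y = math.sin
x, h = 1.0, 0.01
forward  = (y(x + h) - y(x)) / h                    # approximates y'(x)
backward = (y(x) - y(x - h)) / h                    # approximates y'(x)
central  = (y(x + h) - y(x - h)) / (2 * h)          # approximates y'(x), more accurate
second   = (y(x + h) - 2 * y(x) + y(x - h)) / h**2  # approximates y''(x)

print(forward, backward, central, math.cos(x))      # central is closest to cos(1)
print(second, -math.sin(x))                         # compare with the exact y''(1)
```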

Finite Difference Method
Consider the linear second-order BVP

y'' + P(x) y' + Q(x) y = f(x), y(a) = α, y(b) = β.    (7)

Suppose a = x0 < x1 < … < xn−1 < xn = b is a regular partition of the interval [a, b], that is, xi = a + ih, where i = 0, 1, 2, ..., n and h = (b − a)/n. The points x1, x2, ..., xn−1 are called interior mesh points. If we let

yi = y(xi), Pi = P(xi), Qi = Q(xi), fi = f(xi),

and if y' and y'' in (7) are replaced by the central difference approximations (5) and (6), we have

(yi+1 − 2yi + yi−1)/h^2 + Pi (yi+1 − yi−1)/(2h) + Qi yi = fi,

or, after multiplying by h^2 and collecting terms,

(1 + (h/2)Pi) yi+1 + (−2 + h^2 Qi) yi + (1 − (h/2)Pi) yi−1 = h^2 fi.    (8)

This is known as a finite difference equation; it holds at each interior mesh point i = 1, 2, ..., n − 1.

Example 1
Use (8) with n = 4 to approximate the solution of the BVP

y'' − 4y = 0, y(0) = 0, y(1) = 5.

Solution
Here P = 0, Q = −4, f(x) = 0 and h = (1 − 0)/4 = 1/4, so (8) becomes

yi+1 − 2.25 yi + yi−1 = 0.    (9)

The interior points are x1 = 0 + 1/4, x2 = 0 + 2/4, x3 = 0 + 3/4, so for i = 1, 2, 3, (9) gives

y2 − 2.25 y1 + y0 = 0
y3 − 2.25 y2 + y1 = 0
y4 − 2.25 y3 + y2 = 0.

Example 1 (2)
Together with y0 = 0 and y4 = 5, this system becomes

−2.25 y1 + y2 = 0
y1 − 2.25 y2 + y3 = 0
y2 − 2.25 y3 = −5.

Solving, we obtain y1 = 0.7256, y2 = 1.6327, y3 = 2.9479.
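For instance, the 3 × 3 linear system above can be assembled and solved with NumPy (an illustrative sketch, not part of the original slides):

```python
import numpy as np

# Tridiagonal system obtained from (9) with y0 = 0 and y4 = 5
A = np.array([[-2.25,  1.00,  0.00],
              [ 1.00, -2.25,  1.00],
              [ 0.00,  1.00, -2.25]])
b = np.array([0.0, 0.0, -5.0])
print(np.linalg.solve(A, b))   # approximately [0.7256, 1.6327, 2.9479]
```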

Example 2
Use (8) with n = 10 to approximate the solution of the BVP

y'' + 3y' + 2y = 4x^2, y(1) = 1, y(2) = 6.

Solution
Here P = 3, Q = 2, f(x) = 4x^2 and h = (2 − 1)/10 = 0.1, so (8) becomes

1.15 yi+1 − 1.98 yi + 0.85 yi−1 = 0.04 xi^2.    (10)

The interior points are x1 = 1.1, x2 = 1.2, …, x9 = 1.9. With y0 = 1 and y10 = 6, (10) gives a system of nine equations in the nine unknowns y1, y2, ..., y9.

Example 2 (2) Then we can solve the above system of equations to obtain y1, y2, …, y9.
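As a sketch (not from the slides; the variable names are illustrative), the nine-equation system of Example 2 can be assembled from (8) and solved with NumPy:

```python
import numpy as np

# BVP data of Example 2: y'' + 3y' + 2y = 4x^2, y(1) = 1, y(2) = 6, n = 10
P, Q = 3.0, 2.0
f = lambda x: 4 * x**2
a, b, alpha, beta, n = 1.0, 2.0, 1.0, 6.0, 10
h = (b - a) / n
x = a + h * np.arange(1, n)          # interior mesh points x1, ..., x9

# finite difference equation (8):
# (1 + h P/2) y_{i+1} + (-2 + h^2 Q) y_i + (1 - h P/2) y_{i-1} = h^2 f_i
A = np.zeros((n - 1, n - 1))
rhs = h**2 * f(x)
for i in range(n - 1):
    A[i, i] = -2 + h**2 * Q
    if i > 0:
        A[i, i - 1] = 1 - h * P / 2
    if i < n - 2:
        A[i, i + 1] = 1 + h * P / 2
rhs[0] -= (1 - h * P / 2) * alpha    # known boundary value y0 = 1
rhs[-1] -= (1 + h * P / 2) * beta    # known boundary value y10 = 6
print(np.linalg.solve(A, rhs))       # approximations y1, ..., y9
```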

Shooting Method
Another way of approximating a solution of the BVP (7) is called the shooting method. The starting point of this method is to replace the boundary-value problem by the initial-value problem

y'' + P(x) y' + Q(x) y = f(x), y(a) = α, y'(a) = m1,    (11)

where the slope m1 is simply a guess; the guess is then adjusted until the computed solution satisfies y(b) = β. The details are left as an exercise; see Problem 14.
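The text leaves the implementation as an exercise; purely as an illustrative sketch (not the text's solution), the idea can be realized by integrating the IVP with RK4 and adjusting the guessed slope with the secant method until the boundary condition at x = b is met. The sketch below is written for the general form y'' = f(x, y, y'), and the usage line reuses the data of Example 1 above:

```python
def shoot(f, a, b, alpha, beta, m1, m2, h=0.01, tol=1e-8, max_iter=50):
    """Shooting method sketch for y'' = f(x, y, y'), y(a) = alpha, y(b) = beta.
    Integrates the IVP with RK4 and adjusts the initial-slope guess by the
    secant method until y(b) is close to beta."""
    def y_at_b(m):
        # integrate y' = u, u' = f(x, y, u) from a to b with y(a) = alpha, u(a) = m
        n = round((b - a) / h)
        x, y, u = a, alpha, m
        for _ in range(n):
            k1y, k1u = u, f(x, y, u)
            k2y, k2u = u + h*k1u/2, f(x + h/2, y + h*k1y/2, u + h*k1u/2)
            k3y, k3u = u + h*k2u/2, f(x + h/2, y + h*k2y/2, u + h*k2u/2)
            k4y, k4u = u + h*k3u,   f(x + h,   y + h*k3y,   u + h*k3u)
            y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6
            u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6
            x += h
        return y
    g1, g2 = y_at_b(m1) - beta, y_at_b(m2) - beta
    for _ in range(max_iter):
        if abs(g2) < tol:
            break
        m1, m2 = m2, m2 - g2 * (m2 - m1) / (g2 - g1)   # secant update
        g1, g2 = g2, y_at_b(m2) - beta
    return m2

# Example 1 above: y'' - 4y = 0, i.e. f(x, y, u) = 4y, with y(0) = 0, y(1) = 5
print(shoot(lambda x, y, u: 4 * y, 0.0, 1.0, 0.0, 5.0, 1.0, 2.0))
# -> the slope y'(0) for which the solution of the IVP also satisfies y(1) = 5
```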

Thank You !