Computational Methods in Physics PHYS 3437
Dr Rob Thacker Dept of Astronomy & Physics (MM-301C)
Today's Lecture: Numerical solutions of ordinary differential equations
- Euler method
- Runge-Kutta methods
- Adaptive step size selection
You can skip this lecture if you have a significant amount of experience with numerical ODEs.
Simplest approach to ODEs
To begin, consider a general first-order ordinary differential equation
  y'(x) = f(x, y),  y(x0) = y0
Using a Taylor expansion of the solution y(x),
  y(x + h) = y(x) + h y'(x) + O(h²)
which suggests we predict forward function values from a starting point (x0, y0):
  yn+1 = yn + h f(xn, yn)
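As a concrete sketch of the update above, here is a minimal forward Euler solver in Python; the test problem y' = y with y(0) = 1 (exact solution eˣ) is my own illustrative choice, not one from the lecture.

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Integrate y' = f(x, y) forward from (x0, y0) in n_steps steps of size h."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)   # predict forward using the slope at the start of the step
        x = x + h
    return y

# y' = y, y(0) = 1, so y(1) = e; 100 steps of h = 0.01 give roughly 2.7048
approx = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

With this convex solution the Euler estimate always lags the true curve, which foreshadows the error discussion that follows.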
"Crude" (forward) Euler solver graphically
[Figure: the slope at (x0, y0) is extended over a width h to predict y1 on the curve y(x) = F(x).]
Useful notation
- n = step number
- Δx = width of the interval
Numerical errors
Discretization error:
- Error resulting from the computation of quantities that have been calculated using an approximation to the true solution, for example by neglecting higher-order terms
- This error is sometimes called truncation error
- An unavoidable problem in numerical integration work; it would occur even in the presence of infinite precision
Round-off error:
- The result of finite-precision arithmetic
Discretization & round-off errors
Suppose the exact solution at a given value of xn is F(xn). The (accumulated) discretization error is defined as
  En = F(xn) − yn
- Caused by the approximate formula used to estimate yn+1
- Input data at the start of a step do not necessarily correspond to the exact solution
The accumulated round-off error is defined as
  Rn = yn − Yn
where Yn is the value we actually calculate after round-off rather than the true value yn. So the absolute value of the total error is given by the inequality
  |F(xn) − Yn| ≤ |En| + |Rn|
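The accumulated discretization error can be probed numerically (round-off is negligible at these step sizes): for forward Euler the global error should roughly halve when h is halved. The solver and the test problem y' = y are my own illustrative choices.

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Forward Euler integration of y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y, x = y + h * f(x, y), x + h
    return y

f = lambda x, y: y                                      # exact solution is e^x
err_h  = abs(euler(f, 0.0, 1.0, 0.01,  100) - math.e)   # global error at h = 0.01
err_h2 = abs(euler(f, 0.0, 1.0, 0.005, 200) - math.e)   # global error at h = 0.005
ratio = err_h / err_h2   # close to 2 for a globally first-order method
```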
Problems with the forward Euler method
- This method relies upon the derivative at the beginning of each interval to predict forward to the end of the interval
- Any errors in the solution tend to get amplified quickly
- The calculated solution quickly diverges from the true solution as the error grows
We can precisely calculate the error in a single step as follows.
Local discretization error in the Euler method
The easiest way to calculate the local error is to use a Taylor expansion:
  y(xn+1) = y(xn) + h y'(xn) + (h²/2) y''(ξ),  ξ ∈ (xn, xn+1)
The first two terms correspond to Euler's method, so the local discretization error in the approximation at one step is given by
  en = (h²/2) y''(ξ) = O(h²)
Error over a series of intervals
Be VERY careful about the distinction between the local discretization error ei and the global error En accumulated over a series of intervals. In many books/lectures a clear definition of which error is being discussed isn't always given.
Roughly speaking, if the local error is O(hᵖ) then the global error will be O(hᵖ⁻¹). This happens because the number of steps is n = (xn − x0)/h ∝ 1/h, and you sum the local error over n intervals.
Unfortunately, calculating En usually requires fairly complicated derivations that we don't have time to do.
[Figure: a solution traced from (x0, y0) through x1, …, xn, with error accumulating at each step.]
Modified Euler Method ("extrapolate-integrate")
- The Euler method is flawed because we use the derivative at the beginning of the interval to predict to the end
- If we could use the midpoint derivative, that would usually be expected to be a better approximation
- If the slope is changing rapidly across an interval, the midpoint slope usually gives us something closer to the average over the interval (of course it doesn't always have to be better, just on average)
- However, since y'mid = f(xmid, ymid) and we don't know ymid (only y0), we have to estimate it
Modified Euler Method
Let y'(x) = f(x, y) and y(x0) = y0. Let the step size be given by h = x1 − x0, with f0 = f(x0, y0), x1/2 = x0 + h/2, x3/2 = x1 + h/2, etc. One step is then
  y1/2 = y0 + (h/2) f0   (Euler half-step to estimate the midpoint value)
  y1 = y0 + h f(x1/2, y1/2)   (integrate across the full interval with the midpoint slope)
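A sketch of one modified-Euler (midpoint) step following the notation above; the driver loop and the test problem y' = y are my own additions.

```python
import math

def midpoint_step(f, x0, y0, h):
    """One modified Euler step: Euler half-step to the midpoint, then a full step."""
    y_half = y0 + 0.5 * h * f(x0, y0)          # y_1/2 = y0 + (h/2) f0
    return y0 + h * f(x0 + 0.5 * h, y_half)    # y1 = y0 + h f(x_1/2, y_1/2)

# Integrate y' = y from y(0) = 1 to x = 1 with h = 0.1
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = midpoint_step(lambda xv, yv: yv, x, y, h)
    x += h
midpoint_result = y   # about 2.714, much closer to e than plain Euler's 2.594
```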
Modified Euler Method graphically
In this method the slope is estimated at the mid-point and y(x) is integrated forward using that slope.
[Figure: the midpoint slope f1/2 carried across the interval from y0 to y1.]
Can show that the local error in this method is O(h³), giving a global error of O(h²).
Improved Euler Method
Similar in concept to the trapezoid rule in quadrature: by utilizing an average of the start- and end-point derivatives we can better approximate the change in y over the interval,
  y1 = y0 + (h/2)[f(x0, y0) + f(x1, y0 + h f0)]
This is the simplest example of the so-called "Predictor-Corrector" methods (a family that includes the Runge-Kutta methods).
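The predict-then-correct structure can be sketched directly; as before, the test problem y' = y is an illustrative choice of mine.

```python
import math

def improved_euler_step(f, x0, y0, h):
    """One Improved Euler (Heun) step: average the start and predicted-end slopes."""
    k_start = f(x0, y0)                       # slope at the start of the interval
    y_pred  = y0 + h * k_start                # predictor: plain Euler estimate of y1
    k_end   = f(x0 + h, y_pred)               # slope at the predicted end point
    return y0 + 0.5 * h * (k_start + k_end)   # corrector: trapezoid-style average

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                           # y' = y from y(0) = 1 to x = 1
    y = improved_euler_step(lambda xv, yv: yv, x, y, h)
    x += h
heun_result = y
```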
Improved Euler method graphically
[Figure: the start-point and predicted end-point slopes on y(x) between xn and xn+1, and their average.]
Can show that the errors in this method are also O(h³) locally, O(h²) globally.
Higher order scheme preliminaries
As a first motivation, consider repeating the Improved Euler method but with two substeps of size h/2.
[Figure: a single step of size h versus two substeps of size h/2 on y(x) = F(x); the notation y1,h/2 indicates y1 was obtained with two h/2 steps.]
General points about higher order methods
- Higher order accurate schemes will use more substep evaluations
- We need to ensure these are chosen optimally, so as to produce the most accuracy for the least number of function evaluations
- Let's go back to the Improved Euler method on a series of steps and apply a Richardson-extrapolation-style approach
4th order Runge-Kutta
For a single Improved Euler step we can form both y1,h (one step of size h) and y1,h/2 (two steps of size h/2).
- A global evaluation with n steps will produce an error En of order O(h²)
- If we halve the size of h, then the global error produced with the smaller step size must be ¼ that of the previous evaluation
- That must correspond to the error produced by using the formula y1,h/2 over the n steps
Subtraction step – defining a new y1
Since we know the global error term produced by using y1,h/2,
  Err(y1,h/2) = K(h/2)² = Kh²/4   (E)
and the global error term for y1,h,
  Err(y1,h) = Kh²   (F)
subtract (F) from 4×(E) and define a new y1:
  y1 = (4 y1,h/2 − y1,h)/3
This formula will actually be globally 4th order accurate!
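The subtraction step can be checked numerically: combining the one-step and two-half-step Improved Euler estimates as (4·y1,h/2 − y1,h)/3 beats both inputs. The Improved Euler solver and the test problem y' = y are my own sketch.

```python
import math

def improved_euler(f, x0, y0, h, n_steps):
    """Improved Euler (Heun) integration of y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(x, y)
        y = y + 0.5 * h * (f(x, y) + f(x + h, y_pred))
        x += h
    return y

f = lambda x, y: y                                # exact answer at x = 1 is e
y_h  = improved_euler(f, 0.0, 1.0, 0.1,  10)      # n steps of size h
y_h2 = improved_euler(f, 0.0, 1.0, 0.05, 20)      # 2n steps of size h/2
y_rich = (4.0 * y_h2 - y_h) / 3.0                 # subtract (F) from 4x(E)

err_plain = abs(y_h2 - math.e)
err_rich  = abs(y_rich - math.e)                  # far smaller than err_plain
```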
Comments
- There are an infinite number of ways of choosing slopes and substeps
- Mathematically, all these methods are parameterized and limited by using appropriate Taylor expansions and matching coefficients
- See C. W. Gear, "Numerical Initial Value Problems in Ordinary Differential Equations", 1971, pp. 27–36
- However, one must always make an arbitrary choice to derive a specific algorithm
- Many numerical analysts have considered all the possible choices, and 4th order Runge-Kutta is considered a very good method
Classical Runge-Kutta method
  k1 = h f(xn, yn)
  k2 = h f(xn + h/2, yn + k1/2)
  k3 = h f(xn + h/2, yn + k2/2)
  k4 = h f(xn + h, yn + k3)
  yn+1 = yn + (k1 + 2k2 + 2k3 + k4)/6
- Equivalent to the five-term Taylor expansion; interpret the weighted sum over the interior points as an average slope
- Local discretization error is proportional to h⁵, so halving h leads to a 1/32 reduction in the local discretization error
- Very widely used method
- No proof of this here – the algebra is extremely lengthy!
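The classical scheme above translates directly into code; the test problem y' = y is again my own illustrative choice.

```python
import math

def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(x, y)."""
    k1 = h * f(x, y)                          # slope at the start
    k2 = h * f(x + 0.5 * h, y + 0.5 * k1)     # first midpoint slope
    k3 = h * f(x + 0.5 * h, y + 0.5 * k2)     # second midpoint slope
    k4 = h * f(x + h, y + k3)                 # slope at the end
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0   # weighted-average slope

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                           # y' = y from y(0) = 1 to x = 1
    y = rk4_step(lambda xv, yv: yv, x, y, h)
    x += h
rk4_result = y   # agrees with e to a few parts in 10^6 even at h = 0.1
```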
Runge-Kutta graphically
[Figure: the four slope evaluations of classical RK4 across the interval from x0 through x1/2 to x1.]
Comparison of methods
Let's compare solutions of the following system:
[Table: solutions at x = 0.2, 0.4, 0.6, 0.8, 1.0 for Euler (h=0.1), Improved Euler (h=0.1), RK (h=0.2), RK (h=0.1), and the exact solution; the system and most entries were lost in transcription, with surviving values 1.595, 2.5016, and 4.4774.]
Convergence with changes in h
Solutions now taken at x = 1 for h = 0.2, 0.1, 0.05, 0.025, 0.01.
[Table: errors at x = 1 versus h for Euler, Improved Euler, and RK against the exact solution; representative values are a 0.7% error for Euler, a 0.06% error for Improved Euler, and ~10⁻⁷ error for RK.]
RK with 10 steps is better than Improved Euler with 100 steps!
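The punchline can be reproduced on a simple problem; the solvers and the test problem y' = y (evaluated at x = 1) are my own sketch, so the error values differ from the table's.

```python
import math

def heun_step(f, x, y, h):
    """One Improved Euler (Heun) step."""
    y_pred = y + h * f(x, y)
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))

def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = h * f(x, y)
    k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def integrate(step, f, y0, h, n_steps):
    x, y = 0.0, y0
    for _ in range(n_steps):
        y = step(f, x, y, h)
        x += h
    return y

f = lambda x, y: y                                                    # exact value at x = 1 is e
err_heun_100 = abs(integrate(heun_step, f, 1.0, 0.01, 100) - math.e)  # 100 steps
err_rk4_10   = abs(integrate(rk4_step,  f, 1.0, 0.1,  10)  - math.e)  # 10 steps
```

Even with a tenth of the steps, RK4 comes out well ahead here.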
Adapting the step size
What value of h is optimal?
- If we consider functions that are smooth in some regions and rapidly changing in others, then the h suited to one region is not appropriate in another
- h ought to vary from one region to another
- Let's examine an algorithm to vary h
- We'll use the relative error on a given interval as the controlling parameter
Algorithm to adapt h
1) Let ε = fractional error tolerance (say 10⁻⁴)
2) Let D = xmax − xmin be the domain over which y' = f(x, y) is to be solved
3) Initially let h = 0.1 × ε × D
4) Use R-K to find y1 = y(xmin + h) in one step h
5) Next use two R-K steps of size h/2 to find a new ŷ1(xmin + h) (the most accurate estimate yet); we now have two values to which we can apply Richardson extrapolation
Algorithm to adapt h
6) Compute y*1 = y*(xmin + h) from y1 and ŷ1 using Richardson extrapolation. Note that since the error is proportional to h⁴, the two-half-step error is 1/16 of the one-step error, so we must use the 1/16 multiplier: y*1 = (16 ŷ1 − y1)/15
7) Estimate the relative error in this interval (e.g. err = |y*1 − ŷ1| / |y*1|)
8) If err < 0.1ε, h is too small: increase it by a factor of 1.5 for the next step. If err > ε, h is too big: halve h and repeat the iteration. If 0.1ε ≤ err ≤ ε, h is OK.
9) When the y* value is acceptable, increment all x values to do the next zone
10) Stop when x = xmax
Notes
- The Richardson extrapolation ensures that even though we appear to do 3 R-K steps per h zone, we actually get a much faster convergence rate than R-K with a step of h/3
- The adaptive grid is a necessary part of the error estimation process
- Without local adaptation you always have to choose the h suited to the hardest regions in your problem, so you may well be wasting time in regions that are easy to solve
- It doesn't make any sense to use R-K without adaptive stepping!
Summary
- As with numerical integration, low order methods do not exhibit high accuracy
- RK methods provide a good compromise between computational cost and stability
- Adaptive RK methods are extremely powerful, and convergence tests based on Richardson extrapolation ensure optimal use of compute cycles
Next lecture
- Issues with the adaptive step size algorithm
- Second order ODEs
- Nth order ODEs