Computational Methods in Physics PHYS 3437 Dr Rob Thacker Dept of Astronomy & Physics (MM-301C)



Today's Lecture
- Notes on projects & talks
- Issues with adaptive step size selection
- Second order ODEs
- nth order ODEs
- Algorithm outline for handling nth order ODEs
- Advance warning: no class on March 31st

Project time-line
- By now you should have approximately half of the project coding & development done
- There are only 4 weeks until the first talk
- I would advise you to have at least some outline of how your write-up will look already
- Project reports are due on the last day of term

Talk Format
- Allowing for changing over presenters, connecting/disconnecting etc., we can allow up to 60 minutes per lecture period
- 3 people per lecture means 20 minutes per presenter
  - 15 minute presentation
  - 5 minutes for questions
- Remember your presentation will be marked by your peers

Issues with the adaptive step size algorithm
- Step (8): If err < 0.1ε, then h is too small: increase it by 1.5 for the next step
  - If err > ε, then h is too big: halve h and repeat the iteration
  - If 0.1ε ≤ err ≤ ε, then h is OK
- Clearly, if there are problems with convergence this step will keep halving h forever. You need to set a limit here, either by not allowing h to get smaller than some preset value or by counting the number of times you halve h
- Because of rounding error as you increment x, it is very unlikely that you will precisely hit x_max with the final step
  - Therefore you should choose h = min(h, x_max - x), where x is the current position

Issues with the adaptive step size algorithm: more "gotchas"
- Step (7): Estimate error
  - You should add a very small ("tiny") number to the absolute value of the denominator to avoid a divide by zero (when the two estimates are both very close to zero you might, in incredibly rare situations, get the two numbers adding to zero)
- Since you store values of y and x in arrays, it is possible that you may run out of storage (because you need too many points). You should check the number of positions stored against the maximum allowed by your declaration of the x & y arrays. This will avoid the possibility of a segmentation fault.

Second order ODEs
- Consider the general second order ODE y'' = g(x, y, y')   (1)
- We now require that two initial values be provided, namely the value y_0 and the derivative y'(x_0)
  - These are called the Cauchy conditions
- If we let z = y', then z' = y'' and we have the first order system
  y' = z
  z' = g(x, y, z)

Second order ODEs cont
- We have thus turned a second order ODE into a first order ODE for a vector
- We can apply the R-K solver to the system, but there are now two components to integrate
- At each step we must update all of the x, y and z values
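A minimal sketch of this reduction in Python (the test problem y'' = -y, the step size, and the integration interval are illustrative choices, not from the slides):

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical 4th order Runge-Kutta step for the vector ODE y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# y'' = -y with Cauchy conditions y(0) = 1, y'(0) = 0, rewritten as
# the first order system y' = z, z' = -y
def f(x, u):
    y, z = u
    return np.array([z, -y])

x, u, h = 0.0, np.array([1.0, 0.0]), 0.01
while x < 2.0 - 1e-12:          # integrate to x = 2; exact solution is cos(x)
    u = rk4_step(f, x, u, h)
    x += h
```

Note that both components (y and z = y') are advanced together at every step, exactly as the slide describes.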

Diagrammatically
[Figure: two panels over the interval x_0 to x_1, one showing y(x) starting from y_0 with slope y'_0 = z_0, the other showing z(x) starting from z_0 with slope z'_0 = g_0; the step advances to y_1 and z_1. Remember y' = z and z' = g(x, y, y').]

nth order ODEs
- Systems with higher than second order derivatives are actually quite rare in physics
- Nonetheless we can adapt the idea used for 2nd order systems to nth order
- Suppose we have a system specified by y^(n) = g(x, y, y', ..., y^(n-1))
- Such an equation requires n initial values for the derivatives; suppose again that we have the Cauchy conditions y(x_0), y'(x_0), ..., y^(n-1)(x_0)

Definitions for the nth order system
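The equation images from this slide did not survive; the standard definitions, consistent with the array layout on the later slides, are presumably:

```latex
\begin{aligned}
&\text{Let } y_1 = y,\quad y_2 = y',\quad \dots,\quad y_n = y^{(n-1)}.\\
&\text{Then } y^{(n)} = g\!\left(x, y, y', \dots, y^{(n-1)}\right) \text{ becomes}\\
&y_1' = y_2,\quad y_2' = y_3,\quad \dots,\quad y_{n-1}' = y_n,\\
&y_n' = g\!\left(x, y_1, y_2, \dots, y_n\right),
\end{aligned}
```

with the Cauchy conditions supplying y_k(x_0) = y^(k-1)(x_0) for k = 1, ..., n.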

So we again have a system of coupled first order equations that can be solved via R-K.

Algorithm for solving such a system
Useful parameters to set:
- imax = number of points at which y(x) is to be evaluated (1000 is reasonable)
- nmax = highest order of ODE to be solved (say 9)
- errmax = highest tolerated error (say 1.0×10^-9)
Declare x(imax), y(imax,nmax), y0(nmax), yl(nmax), ym(nmax), yr(nmax), ytilde(nmax), ystar(nmax)
Note that the definitions used here are not quite consistent with the variable definitions used in the discussion of the single function case.

User inputs
Need to get from the user (not necessarily at run time):
- The g(x, y, y', ...) function
- The domain of x, i.e. what are xmin and xmax
- The order of the ODE to be solved, stored in the variable nord
- The initial values for y and the derivatives (the Cauchy conditions), stored in y0(nmax)

Correspondence of arrays to variables
The arrays y(imax,nmax) and y0(nmax) correspond to y and its derivatives as follows:
- y(1:imax,1) ≡ y values, with y(1,1) = y_0 = y0(1)
- y(1:imax,2) ≡ y' values, with y(1,2) = y'_0 = y0(2)
- y(1:imax,3) ≡ y'' values, with y(1,3) = y''_0 = y0(3)
- ...
- y(1:imax,nord) ≡ y^(nord-1) values, with y(1,nord) = y^(nord-1)_0 = y0(nord)

Choose initial step size & initialize yl values
- Apply the same criterion as standard Runge-Kutta:
  dx = 0.1 × errmax × (xmax - xmin)
  dxmin = 10^-3 × dx
  - We can use this value to ensure that the adaptive step size is never less than 1/1000th of the initial guess
- x(1) = xl = xmin
  - Set the initial position for the solver
- Initialize yl (the left y-values for the first interval):
  do n=1,nord
    yl(n) = y0(n) = y(1,n)

Start adaptive loop
- Set i=1; this will count the number of x positions evaluated
- dxh = dx/2 (half width of zone)
- xr = xl + dx (right hand boundary)
- xm = xl + dxh (mid point of zone)
- Perform R-K calculations on this zone for all y_n
  - Calculate all y values on the right boundary using a single R-K step (stored in ytilde(nmax))
  - Calculate all y values on the right boundary using two half R-K steps (stored in ystar(nmax)); for example:
    call rk(xl,yl,ytilde,nord,dx)
    call rk(xl,yl,ym,nord,dxh)
    call rk(xm,ym,ystar,nord,dxh)

Now evaluate the Richardson-extrapolation-esque value yr(n) and the error
- err = 0.
- do n=1,nord
    err = max(err, |ystar(n) - ytilde(n)| / (|ystar(n)| + tiny))
    yr(n) = ystar(n) + (ystar(n) - ytilde(n))/15   (the Richardson-style correction for a 4th order scheme)
- The error is now set over all the functions being integrated
- If err < 0.1×errmax, then increase dx: dx = dx×1.5
- If err > errmax, then dx is too big: dx = max(dxh, dxmin) and repeat the evaluation of this zone
- If 0.1×errmax ≤ err ≤ errmax, then dx is OK & we need to store all the results

Update values and prepare for next step
- Increment i: i=i+1 (check that it doesn't exceed imax)
- x(i) = xr
- xl = xr (note: should check that xl hasn't gone past xmax & that the h value is chosen appropriately)
- do n=1,nord
    y(i,n) = yr(n)
    yl(n) = yr(n)
- Then return to the top of the loop and do the next step
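The adaptive loop described over the last few slides can be sketched in Python as follows (a minimal sketch assuming RK4 step doubling and a relative error measure; array storage is replaced by Python lists, and the test problem y' = y is illustrative):

```python
import numpy as np

def rk4_step(f, x, y, h):
    """One classical RK4 step for the vector ODE y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_rk4(f, y0, xmin, xmax, errmax=1e-9, tiny=1e-30):
    """Step-doubling adaptive driver in the spirit of the slides."""
    x, y = xmin, np.asarray(y0, dtype=float)
    xs, ys = [x], [y]
    dx = 0.1 * errmax * (xmax - xmin)     # initial step size guess from the slides
    dxmin = 1e-3 * dx                     # never shrink below this floor
    while x < xmax - 1e-14:
        dx = min(dx, xmax - x)            # don't step past xmax
        ytilde = rk4_step(f, x, y, dx)            # one full step
        ym = rk4_step(f, x, y, dx/2)              # two half steps
        ystar = rk4_step(f, x + dx/2, ym, dx/2)
        err = np.max(np.abs(ystar - ytilde) / (np.abs(ystar) + tiny))
        if err > errmax and dx > dxmin:
            dx = max(dx/2, dxmin)         # reject: halve and redo this zone
            continue
        x += dx
        y = ystar
        xs.append(x)
        ys.append(y)
        if err < 0.1 * errmax:
            dx *= 1.5                     # accept: step was overly cautious
    return xs, ys

xs, ys = adaptive_rk4(lambda x, y: y, [1.0], 0.0, 1.0)   # y' = y, so y(1) = e
```

The `continue` branch implements the "repeat the evaluation of this zone" rule, and the `dxmin` floor implements the safeguard against halving forever.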

Runge-Kutta routine
- Subroutine rk(xl,yl,yr,nord,dx)
  - yl and yr will be arrays of size nord
- Set useful variables:
  dxh = dx/2
  xr = xl + dx
  xm = xl + dxh
- Recall, given y' = f(x,y), (xl,yl), and dx, the classical 4th order step is
  k1 = f(xl, yl)
  k2 = f(xm, yl + dxh×k1)
  k3 = f(xm, yl + dxh×k2)
  k4 = f(xr, yl + dx×k3)
  yr = yl + (dx/6)×(k1 + 2k2 + 2k3 + k4)

Steps in algorithm
- Complications: we now have a vector of functions and a vector of yl values
- call derivs(xl,yl,f0,nord) (sets all the f0 function values)
- do n=1,nord
    y1(n) = yl(n) + dxh×f0(n)
- call derivs(xm,y1,f1,nord) (sets the f1 function values)
- do n=1,nord
    y2(n) = yl(n) + dxh×f1(n)
- call derivs(xm,y2,f2,nord) (sets the f2 function values)
- do n=1,nord
    y3(n) = yl(n) + dx×f2(n)
- call derivs(xr,y3,f3,nord) (sets the f3 function values)
(The intermediate array names y1, y2, y3 and f1, f2, f3 are illustrative; the original slide images are lost.)

Calculate R-K formula & derivs subroutine
- do i=1,nord
    yr(i) = yl(i) + (dx/6)×(f0(i) + 2f1(i) + 2f2(i) + f3(i))
  (f0...f3 are the four derivative evaluations)
- That's the end of the R-K routine
- subroutine derivs(x,y,yp,nord)
  - do n=1,nord-1: yp(n) = y(n+1)
  - Lastly set yp(nord) = g(x,y,y',...,y^(nord-1))
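The derivs routine carries the whole reduction from nth order to a first order system; a sketch in Python (the 3rd order example g is illustrative, not from the slides):

```python
import numpy as np

def derivs(x, y, g):
    """Derivative vector for the reduced first order system:
    yp[k] = y[k+1] (the derivative of each stored component is the next one),
    and the last slot is the user-supplied g(x, y, y', ..., y^(n-1))."""
    yp = np.empty_like(y)
    yp[:-1] = y[1:]
    yp[-1] = g(x, y)
    return yp

# illustrative 3rd order example: y''' = -y'', so g returns -y[2]
yp = derivs(0.0, np.array([1.0, 2.0, 3.0]), lambda x, y: -y[2])
```

Only the last component ever calls the user's g; all the others just shift the stored derivatives down by one slot.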

Summary
- There are a few issues with the adaptive step size algorithm you need to be concerned about (avoiding divide by zero etc.)
- Second order systems can be turned into coupled first order systems
- nth order systems can be turned into n coupled first order systems
- The adaptive R-K algorithm for vectors is in principle similar to that for a single function
  - However, you must loop over the vectors

Implicit Methods: Backward Euler method
NON-EXAMINABLE BUT USEFUL TO KNOW
- Recall that the Forward Euler method uses the derivative at the start of the interval: y_{n+1} = y_n + h f(t_n, y_n)
- Rather than predicting forward, we can predict backward from the end of the interval, using the value of f(t_{n+1}, y_{n+1})
- Rewriting in terms of n+1 and n: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
- Replacing y_{n+1} = r, we need to use a root finding method to solve r - y_n - h f(t_{n+1}, r) = 0
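A sketch of the backward Euler step with a Newton root-find (the numerical-derivative perturbation and iteration count are arbitrary choices, not from the slides; the stiff test problem y' = -15y is the one used later in the lecture):

```python
def backward_euler(f, t0, y0, h, nsteps, newton_iters=20, eps=1e-8):
    """Backward Euler: solve r - y_n - h*f(t_{n+1}, r) = 0 for r = y_{n+1}."""
    t, y = t0, y0
    for _ in range(nsteps):
        t_next = t + h
        r = y                                  # initial guess: previous value
        for _ in range(newton_iters):
            F = r - y - h * f(t_next, r)
            # finite-difference estimate of dF/dr
            dF = 1.0 - h * (f(t_next, r + eps) - f(t_next, r)) / eps
            r -= F / dF                        # Newton update
        t, y = t_next, r
    return t, y

t, y = backward_euler(lambda t, y: -15.0 * y, 0.0, 1.0, 0.125, 8)
```

For this linear problem the root can be found in closed form (y_{n+1} = y_n/(1+15h)), which is what the Newton iteration converges to; for nonlinear f the iterative solve is genuinely needed, which is where the expense discussed on the next slide comes from.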

Notes on implicit methods
- Implicit methods tend to be more stable, and for the same step size more accurate
- The trade-off is the increased expense of using an iterative procedure to find the roots
  - This can become very expensive in coupled systems
- Richardson extrapolation can also be applied to implicit ODE solvers, resulting in higher order schemes with good convergence properties
  - Use exactly the same procedure as outlined in the previous lecture: compare expansions at h and h/2
- Crank-Nicolson is another popular implicit method
  - It relies upon the derivative at both the start and end points of the interval
  - It gives a second order accurate solution

Multistep methods
- Thus far we've considered self-starting methods that use only values from x_n and x_{n+1}
- Alternatively, accuracy can be improved by using a linear combination of additional points
- Utilize y_{n-s+1}, y_{n-s+2}, ..., y_n to construct approximations to derivatives of order up to s, at t_n
- Example for s=2: y_{n+1} = y_n + h[(3/2) f(t_n, y_n) - (1/2) f(t_{n-1}, y_{n-1})]
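A sketch of the s=2 (two-step Adams-Bashforth) formula in Python; here the starter value y1 is taken from the exact solution for simplicity, where in practice it would come from an R-K step as the later slide notes:

```python
import math

def ab2(f, t0, y0, y1, h, nsteps):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + (h/2)*(3 f_n - f_{n-1}).
    The second starting value y1 must be supplied by a starter method."""
    ts, ys = [t0, t0 + h], [y0, y1]
    for n in range(1, nsteps):
        fn = f(ts[-1], ys[-1])
        fnm1 = f(ts[-2], ys[-2])
        ys.append(ys[-1] + h * (3.0 * fn - fnm1) / 2.0)
        ts.append(ts[-1] + h)
    return ts, ys

# y' = y, y(0) = 1, integrated to t = 1 with an exact starter value;
# the global error of this method is O(h^2)
h = 0.01
ts, ys = ab2(lambda t, y: y, 0.0, 1.0, math.exp(h), h, 100)
```

Only one new derivative evaluation is needed per step (f_{n-1} is remembered from the previous step), which is the main attraction of multistep methods over R-K.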

Comparison of single and multistep methods

Second order method
- We can utilize this relationship to describe a multistep second order method
- Generalized to higher orders, these methods are known as Adams-Bashforth (predictor) methods
  - s=1 recovers the Euler method
- Implicit methodologies are possible as well
  - These are the Adams-Moulton (predictor-corrector) methods
- Since these methods rely on multiple steps, the first few values of y must be calculated by another method, e.g. R-K
  - The starting method needs to be as accurate as the multistep method

"Stiff" problems
- Definition of stiffness:
  - "Loosely speaking, the initial value problem is referred to as being stiff if the absolute stability requirement dictates a much smaller time step than is needed to satisfy approximation requirements alone."
  - Formally: an IVP is stiff in some interval [0,b] if the step size needed to maintain stability of the forward Euler method is much smaller than the step size required to represent the solution accurately.
- Stability requirements are then overriding accuracy requirements
- Why does this happen?
  - We are trying to integrate smooth solutions that are surrounded by strongly divergent or oscillatory solutions
  - Small deviations away from the true solution lead to the forward terms being very inaccurate

Example
- Consider: y'(t) = -15 y(t), t ≥ 0, y(0) = 1
- Exact solution: y(t) = e^{-15t}, so y(t) → 0 as t → ∞
- If we examine the forward Euler method, strong oscillatory behaviour forces us to take very small steps even though the function looks quite smooth
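The instability is easy to reproduce; a sketch with step sizes chosen to sit on either side of the stability boundary (the specific h values are illustrative):

```python
def forward_euler(lam, h, nsteps, y0=1.0):
    """Forward Euler on y' = lam*y: each step multiplies y by (1 + h*lam)."""
    y = y0
    for _ in range(nsteps):
        y = y + h * lam * y
    return y

# for y' = -15y the method is stable only when |1 - 15h| <= 1, i.e. h <= 2/15
unstable = forward_euler(-15.0, 0.25, 20)    # |1 - 3.75| = 2.75 > 1: blows up
stable = forward_euler(-15.0, 0.01, 100)     # |1 - 0.15| = 0.85 < 1: decays
```

Even though the true solution is tiny and smooth by t = 0.5, the h = 0.25 run grows without bound, which is exactly the stiffness pathology described above.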

Implicit methods in stiff problems
- Because implicit methods can use longer timesteps, they are strongly favoured in integrations of stiff systems
- Consider a two-stage Adams-Moulton integrator: y_{n+1} = y_n + (h/12)[5 f_{n+1} + 8 f_n - f_{n-1}]
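As a sketch of the implicit advantage, here is the one-step Adams-Moulton method (the trapezoidal rule) rather than the two-stage scheme from the slide, chosen because for the linear test problem its implicit equation solves in closed form:

```python
def trapezoid_linear(lam, h, nsteps, y0=1.0):
    """Trapezoidal rule (one-step Adams-Moulton) applied to y' = lam*y.
    The implicit equation is linear here, so each step has the closed form
    y_{n+1} = y_n * (1 + h*lam/2) / (1 - h*lam/2)."""
    growth = (1.0 + h * lam / 2.0) / (1.0 - h * lam / 2.0)
    y = y0
    for _ in range(nsteps):
        y *= growth
    return y

y_small = trapezoid_linear(-15.0, 0.125, 8)   # the h from the next slide: decays
y_large = trapezoid_linear(-15.0, 1.0, 50)    # even a huge step stays bounded
```

For Re λ < 0 the growth factor always has magnitude below 1 regardless of h (the method is A-stable), so stability never forces the step size down the way it does for forward Euler.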

Adams-Moulton solution for h=0.125 Much better behaviour and convergence

Choosing stiff solvers
- This isn't as easy as you might think
  - The performance of different algorithms can be quite dependent upon the specific problem
- Researchers often write papers comparing the performance of different solvers on a given problem and then advise on which one to use
  - This is a sensible way to do things
- I recommend you do the same if you have a stiff problem
  - Try solvers from library packages like ODEPACK or the Numerical Recipes routines

Stability of the Forward Euler method
- Stability is more important than the truncation error
- Consider y' = λy for some complex λ
  - Provided Re λ < 0 the solution is bounded
- Substitute into the Euler method: y_{n+1} = y_n + Δt λ y_n = (1 + λΔt) y_n
- For y_n to remain bounded we must have |1 + λΔt| ≤ 1
- Thus a poorly chosen Δt that breaks the above inequality will lead to y_n increasing without limit

Behaviour of small errors
- We considered the previous y' = λy equation because it describes the behaviour of small changes
- Suppose we have a solution y_s = y + δ, where δ is a small error
- Substitute into y' = f(t,y) and use a Taylor expansion: y' + δ' = f(t, y + δ) = f(t,y) + δ ∂f/∂y + O(δ²)
- To leading order in δ: δ' = (∂f/∂y) δ, so small errors locally obey the model equation with λ = ∂f/∂y

Next lecture
- Monte Carlo methods