Reaction-Diffusion Systems - Continued Reactive Random Walks.


The grand question: How do you code what we described in the last lecture? Here we focus on how to implement it, without worrying about doing it efficiently.

Consider the following problem (somewhat analogous to the beaker experiment from Chem 101)
 We have a one-dimensional domain of size L that is randomly filled with an equal mass of A and B.
 A and B can only move by diffusion, and they react at some rate k.
 Because of the finite size of our domain we must impose boundary conditions. For the sake of simplicity we assume periodicity (although results are virtually identical for any bounded setup – e.g. no flux).
 Periodicity means that a particle that exits through the right boundary enters through the left and vice versa (i.e. you have a sequence of identical domains next to one another). Note that this also means that particles close to one boundary can interact with particles close to the other (see the distance calculation sketched below).
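For example, the periodic (minimum-image) distance between two positions on a domain of length L can be computed with a small helper like the one below. This is an illustrative sketch; perdist is not a name from the slides.

perdist = @(x1, x2, L) min(abs(x1 - x2), L - abs(x1 - x2));  % shorter of the two ways around the periodic domain
perdist(0.1, 9.9, 10)   % = 0.2: these two particles are close to each other across the boundary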

First we set up the general conditions (MATLAB code below).
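The setup code itself is not reproduced in the transcript, so here is a minimal sketch of what it could look like. Only k, mp, dt, dtmax, epsilon and Nsteps are names that appear in the snippets that follow; everything else (L, N, C0, D, xA, xB, CA, t) and all numerical values are illustrative assumptions.

% --- general conditions (illustrative values) ---
L       = 10;              % domain size
N       = 1000;            % initial number of A particles (same for B)
C0      = 1;               % initial concentration of A and of B
mp      = C0*L/N;          % mass carried by each particle
D       = 1e-2;            % diffusion coefficient
k       = 1;               % reaction rate constant
dt      = 1e-4;            % initial time step
dtmax   = 1e-1;            % maximum allowed time step
epsilon = 1.05;            % growth factor for the time step
Nsteps  = 5000;            % number of time steps
xA = L*rand(N,1);          % A particles placed uniformly at random in [0,L]
xB = L*rand(N,1);          % B particles placed uniformly at random in [0,L]
CA = zeros(Nsteps,1);      % storage for the domain-averaged concentration of A
t  = zeros(Nsteps,1);      % storage for time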

Now we enter a loop that marches over time:

for kk = 1:Nsteps
    dt = min(dt*epsilon, dtmax);   % define timestep, allowing it to increase to a maximum
    Pr = k*mp*dt;                  % probability of reaction

Update particle positions by a Brownian random walk. With this we are done with the motion part of the algorithm.
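A minimal sketch of this step, using the illustrative names from the setup sketch above (xA, xB, D, L):

    xA = xA + sqrt(2*D*dt)*randn(size(xA));   % Brownian jump: mean zero, variance 2*D*dt
    xB = xB + sqrt(2*D*dt)*randn(size(xB));
    xA = mod(xA, L);                          % periodic boundaries: leave right, re-enter left (and vice versa)
    xB = mod(xB, L);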

Now enter the reaction loop
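The details of the reaction step are not reproduced in the transcript. The sketch below assumes the particle-pair reaction scheme of Benson & Meerschaert (2008), cited later in these slides: each A particle is tested against its closest B particle, and the pair reacts with probability Pr times the co-location probability density of two Brownian particles over one time step. Treat the pairing rule and the density v as assumptions, not as the exact criterion from the previous lecture.

    killA = false(size(xA));  killB = false(size(xB));   % flags for particles that react this step
    for i = 1:numel(xA)
        s = abs(xA(i) - xB);                        % separations to all B particles
        s = min(s, L - s);                          % minimum-image (periodic) distance
        s(killB) = Inf;                             % B particles already used this step are unavailable
        [smin, j] = min(s);                         % closest available B particle
        v = exp(-smin.^2/(8*D*dt))/sqrt(8*pi*D*dt); % co-location density of the pair over one time step
        if rand < Pr*v                              % Pr = k*mp*dt from above
            killA(i) = true;                        % mark the A-B pair for removal
            killB(j) = true;
        end
    end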

Update your x field to kill reacted particles and calculate your domain concentration. The end here means the loop jumps back to the start of the next time step, and the process is repeated until the desired number of time steps is completed.
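Continuing the sketch (killA and killB are the reaction flags from the sketch above):

    xA(killA) = [];                  % remove reacted A particles
    xB(killB) = [];                  % remove reacted B particles
    CA(kk) = numel(xA)*mp/L;         % domain-averaged concentration of A
    t(kk)  = t(max(kk-1,1)) + dt;    % running time (t starts at 0, so t(1) = dt)
end                                  % close the time-marching loop started with "for kk = 1:Nsteps"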

Plot desired results
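For example, comparing the simulation to the well-mixed solution from the last class, C(t) = C0/(1 + C0*k*t). This is a sketch; the plotting choices are illustrative.

loglog(t, CA, 'b-', t, C0./(1 + C0*k*t), 'r--')
xlabel('time');  ylabel('average concentration of A')
legend('particle simulation', 'well-mixed solution')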

Statistical method
 Note that this is a stochastic method, which means that each time you run the code you will get slightly different results, reflecting the specific random initial condition and random walk you happen to generate.
 For cases like these it makes sense to think about the ensemble result (i.e. the average over several realizations) to physically understand what is going on.
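One simple way to build the ensemble is to wrap the whole simulation above in a function and average its output over many runs. A sketch, where runSimulation is a hypothetical wrapper (not a function from the slides) that returns the time vector and concentration history of one realization:

Nens  = 10;                              % number of realizations
CAens = zeros(Nsteps, Nens);             % concentration history of each realization
for r = 1:Nens
    [t, CAens(:,r)] = runSimulation();   % fresh random initial condition and random walk each time
end
CAmean = mean(CAens, 2);                 % ensemble-averaged concentration, as plotted on the next slides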

Example 10 realizations

Example 10 realizations - plots

Ensemble average

What???
 Great – we appear to have a numerical method that works well at early times, where the blue and red curves match perfectly, but that diverges at late times in a way that other methods will not.
 First impression – a USELESS tool.
 But let’s look at this more closely.

What’s going on… Let’s take a look at concentrations in 1d. [Figure: concentration profiles of A and B at early, intermediate and late times – Benson & Meerschaert 2008, WRR]

What’s going on… Let’s take a look at concentrations in 1d. [Figure: concentration profiles of A and B at early, intermediate and late times – Benson & Meerschaert 2008, WRR] Isolated islands of A and B form, limiting the reaction by how quickly A and B can diffuse into one another: incomplete mixing.

Is our method actually capturing something real?
 Remember we are interested in calculating \(\overline{C}\), the average concentration in the domain – but what happens when \(\overline{C}\) is not a good measure of the actual concentration field? Let’s go back to our governing equations, \(\partial C_A/\partial t = D\,\partial^2 C_A/\partial x^2 - k\,C_A C_B\) (and likewise for \(C_B\)). Let’s, as before, break the concentrations into means and fluctuations, \(C_A = \overline{C_A} + C_A'\) and \(C_B = \overline{C_B} + C_B'\).

The governing equation for \(\overline{C_A}\) or \(\overline{C_B}\) is now \(\mathrm{d}\overline{C}/\mathrm{d}t = -k\,\overline{C}^{\,2} - k\,\overline{C_A' C_B'}\) (writing \(\overline{C_A}=\overline{C_B}\equiv\overline{C}\), since A and B are present in equal amounts).
 The new term \(\overline{C_A' C_B'}\) is due to the fluctuations; it did not exist when we solved for the concentration in the beaker – i.e. we assumed the new term was small.
 At early times it is small – so the well-mixed solution works great – but at late times it is not, and so it takes over.
 If we assume a structure for \(\overline{C_A' C_B'}\) (called invoking a closure argument; next week you will learn the physical basis for it) then we may be able to solve this equation.
 Let’s assume \(\overline{C_A' C_B'} = -\alpha\, t^{-1/2}\).
 For now you may consider \(\alpha\) a constant, but next week you will learn what this constant is.

Our closed equation is \(\mathrm{d}\overline{C}/\mathrm{d}t = -k\,\overline{C}^{\,2} + k\,\alpha\, t^{-1/2}\).
 This type of equation is called a Riccati equation and you can generally solve it. The solution is an ugly combination of Bessel functions and it is difficult to see anything useful from its general form.
 Don’t worry if this is meaningless to you – it’s just to show that it can be done.
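For the curious, the standard way to handle a Riccati equation like this is to linearize it; a sketch of that route, with \(\alpha\) the closure constant assumed above:

\[
\frac{\mathrm{d}\overline{C}}{\mathrm{d}t} = -k\,\overline{C}^{\,2} + k\,\alpha\,t^{-1/2},
\qquad
\overline{C} = \frac{1}{k}\,\frac{u'(t)}{u(t)}
\quad\Longrightarrow\quad
u'' = k^{2}\alpha\, t^{-1/2}\, u .
\]

The linear equation for \(u\) has solutions of the form \(\sqrt{t}\) times Bessel functions of order \(2/3\) in the variable \(t^{3/4}\), and back-substituting into \(\overline{C}=u'/(k u)\) produces the "ugly combination of Bessel functions" referred to above.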

However…
 If we take the solution from the earlier page and do a small-time expansion, it becomes the well-mixed solution from last class, \(\overline{C} \approx C_0/(1 + C_0 k t)\).
 And at late times it looks like \(\overline{C} \sim \sqrt{\alpha}\, t^{-1/4}\) – a much slower decay than the well-mixed \(t^{-1}\).

This can be understood using what is called a dominant balance argument.
 At early times the perturbation term is negligible, so you recover the solution we derived in the last class – this explains why the models match so well at early times.
 At late times it may seem confusing, but the balance is between the two terms on the RHS, which means \(k\,\overline{C}^{\,2} \approx k\,\alpha\, t^{-1/2}\), or \(\overline{C} \approx \sqrt{\alpha}\, t^{-1/4}\).
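As a quick self-consistency check of the late-time balance: the time derivative that was dropped really is the smaller term,

\[
\frac{\mathrm{d}\overline{C}}{\mathrm{d}t}
\;\sim\; \frac{\mathrm{d}}{\mathrm{d}t}\!\left(\sqrt{\alpha}\,t^{-1/4}\right)
\;\sim\; t^{-5/4}
\;\ll\; k\,\overline{C}^{\,2} \sim k\,\alpha\,t^{-1/2}
\qquad \text{as } t \to \infty ,
\]

so neglecting it in the dominant balance is justified.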