1 Dimension Reduction in “Heat Bath” Models Raz Kupferman The Hebrew University

2 Part I: Convergence Results. Andrew Stuart, John Terry, Paul Tupper, R.K.
Ergodicity results: Paul Tupper’s talk.

3 Set-up: Kac-Zwanzig Models
A (large) mechanical system consisting of a “distinguished” particle interacting through springs with a collection of “heat bath” particles. The heat bath particles have random initial data (Gibbsian distribution). Goal: derive “reduced” dynamics for the distinguished particle.

4 Motivation
Represents a class of problems where dimension reduction is sought. Rigorous analysis. Convenient test problem for recent dimension reduction approaches/techniques:
optimal prediction (Kast)
stochastic modeling
hidden Markov models (Huisinga-Stuart-Schuette)
coarse-grained time stepping (Warren-Stuart, Hald-K)
time-series model identification (Stuart-Wiberg)

5 The governing equations
The Hamiltonian:
(Pn, Qn): coordinates of the distinguished particle
(pj, qj): coordinates of the j-th heat bath particle
mj: mass of the j-th particle
kj: stiffness of the j-th spring
V(Q): external potential
The equations of motion:
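The slide's formulas are not reproduced in this transcript; a standard form of the Kac-Zwanzig Hamiltonian consistent with these definitions (the exact scaling used in the talk may differ) is

H = \frac{P_n^2}{2} + V(Q_n) + \sum_{j=1}^{n} \left[ \frac{p_j^2}{2 m_j} + \frac{k_j}{2} (q_j - Q_n)^2 \right],

with equations of motion

\ddot{Q}_n = -V'(Q_n) + \sum_{j=1}^{n} k_j (q_j - Q_n), \qquad m_j \ddot{q}_j = -k_j (q_j - Q_n), \quad j = 1, \dots, n.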

6 Initial data
Heat bath particles have random initial data drawn from the Gibbs distribution with temperature T. The initial data are independent Gaussians, built from i.i.d. N(0,1) variables.
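A sketch of the corresponding formulas (standard for this model; the normalization used in the talk may differ): sampling the bath from the Gibbs measure proportional to exp(-H/T), with the distinguished particle held fixed, gives

p_j(0) = \sqrt{m_j T}\, \xi_j, \qquad q_j(0) = Q_n(0) + \sqrt{T / k_j}\, \eta_j, \qquad \xi_j, \eta_j \ \text{i.i.d.} \ N(0,1).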

7 Generalized Langevin Equation
Solve the (p,q) equations and substitute back into the (P,Q) equation (Ford-Kac 65, Zwanzig 73) Memory kernel: Random forcing: “Fluctuation-dissipation”:
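The formulas are not reproduced in the transcript; in the standard form of this reduction they read (a sketch, with \omega_j^2 = k_j / m_j):

\ddot{Q}_n = -V'(Q_n) - \int_0^t K_n(t-s)\, \dot{Q}_n(s)\, ds + F_n(t),

K_n(t) = \sum_{j=1}^{n} k_j \cos(\omega_j t), \qquad
F_n(t) = \sum_{j=1}^{n} k_j \left[ (q_j(0) - Q_n(0)) \cos(\omega_j t) + \frac{p_j(0)}{m_j \omega_j} \sin(\omega_j t) \right],

and the fluctuation-dissipation relation is \mathbb{E}[F_n(t) F_n(s)] = T\, K_n(t-s).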

8 Choice of parameters
Heat baths are characterized by broad and dense spectra. Random set of frequencies. Ansatz for spring constants. Assumption: f²(ω) is bounded and decays faster than 1/ω.
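One parametrization consistent with the Monte-Carlo interpretation on the next slide (a sketch; the exact scaling used in the talk may differ): draw \omega_1, \dots, \omega_n i.i.d. from a probability density \rho(\omega) on (0, \infty) and set

k_j = \frac{f^2(\omega_j)}{n\, \rho(\omega_j)}, \qquad m_j = \frac{k_j}{\omega_j^2},

so that K_n(t) = \sum_j k_j \cos(\omega_j t) = \frac{1}{n} \sum_j \frac{f^2(\omega_j)}{\rho(\omega_j)} \cos(\omega_j t) is a Monte-Carlo estimate of \int_0^\infty f^2(\omega) \cos(\omega t)\, d\omega.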

9 Generalized Langevin Eq.
Memory kernel: (Monte-Carlo approximation of the Fourier cosine transform of f²(ω)) Random forcing: (Monte-Carlo approximation of a stochastic integral)

10 Lemma
For almost every choice of frequencies (a.s. with respect to the frequencies): Kn(t) converges pointwise to K(t), the Fourier cosine transform of f²(ω); moreover Kn → K in L²(Ω; L²[0,T]).
Theorem: (a.s.) the sequence of random functions Fn(t) converges weakly in C[0,T] to a stationary Gaussian process F(t) with mean zero and auto-covariance K(t) (Fn ⇒ F).
Proof: CLT + tightness. Can be extended to "long term" behaviour: convergence of empirical finite-dimensional distributions (Paul Tupper's talk).

11 Example
If we choose f²(ω) so that its Fourier cosine transform K(t) is an exponential, then, by the above theorem, Fn(t) converges weakly to the Ornstein-Uhlenbeck (OU) process U(t) defined by an Ito SDE.
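As an illustration (the specific f² below is an assumption chosen to make K exponential, with the temperature normalized to T = 1): taking

f^2(\omega) = \frac{2}{\pi}\, \frac{\alpha}{\alpha^2 + \omega^2} \quad \Longrightarrow \quad K(t) = e^{-\alpha |t|},

and the stationary Gaussian process with this auto-covariance is the OU process

dU = -\alpha\, U\, dt + \sqrt{2 \alpha}\, dW, \qquad U(0) \sim N(0, 1).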

12 Convergence of Qn(t)
Theorem: (a.s. with respect to the frequencies) Qn(t) converges weakly in C2[0,T] to the solution Q(t) of the limiting stochastic integro-differential equation. Proof: the mapping (Kn, Fn) ↦ Qn is continuous.

13 Back to the example
With this choice of f²(ω), Qn(t) converges to the solution of a GLE with exponential memory kernel, which is equivalent to a (memoryless!!) system of SDEs.
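One way to write the equivalent Markovian system (a sketch, with the exponential kernel above and T = 1; the auxiliary variable z collects the memory term and the forcing):

dQ = P\, dt, \qquad dP = \left[ -V'(Q) + z \right] dt, \qquad dz = \left[ -P - \alpha z \right] dt + \sqrt{2 \alpha}\, dW,

which is the Markovian system of 3 SDEs mentioned on slide 15.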

14 Numerical validation
Empirical distribution of Qn(t) for n=5000 and various choices of V(Q), compared with the invariant measure of the limiting SDE. (Figure panels: single well, double well, triple well; extremely long correlation time.)
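A minimal simulation sketch of this kind of comparison (illustrative only: the potential, parameters, and sample counts below are assumptions, not the values used in the talk), integrating the 3-SDE Markovian limit with Euler-Maruyama and comparing the empirical distribution of Q with the Boltzmann density:

import numpy as np

# Euler-Maruyama for the 3-SDE Markovian limit with exponential kernel
# (assumptions: double-well potential, alpha = 1, temperature T = 1,
#  step size and run length chosen only for illustration):
#   dQ = P dt
#   dP = (-V'(Q) + z) dt
#   dz = (-P - alpha*z) dt + sqrt(2*alpha) dW
alpha = 1.0
dt, n_steps = 1e-3, 1_000_000

def dV(Q):
    # V(Q) = (Q**2 - 1)**2 / 4, a double-well external potential (assumption)
    return Q * (Q**2 - 1.0)

rng = np.random.default_rng(0)
Q, P, z = 0.0, 0.0, 0.0
samples = []
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    Q += P * dt
    P += (-dV(Q) + z) * dt
    z += (-P - alpha * z) * dt + np.sqrt(2.0 * alpha) * dW
    if k % 100 == 0:
        samples.append(Q)

# Empirical distribution of Q versus the Boltzmann density ~ exp(-V(Q)/T)
hist, edges = np.histogram(samples, bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
boltz = np.exp(-0.25 * (centers**2 - 1.0) ** 2)
boltz /= boltz.sum() * (centers[1] - centers[0])  # normalize on the grid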

15 “Unresolved” components of the solution are modeled by an auxiliary, memoryless, stochastic variable.
Bottom line: instead of solving a large, stiff system in 2(n+1) variables, solve a Markovian system of 3 SDEs! Similar results can be obtained for nonlinear interactions (Stuart-K ’03).

16 Part II Fractional Diffusion

17 Fractional (or anomalous) diffusion:
Found in a variety of systems and models (e.g., Brownian particles in polymeric fluids, continuous-time random walks). In all known cases, fractional diffusion reflects the divergence of relaxation times: extreme non-Markovian behaviour. Question: can we construct a heat bath model that generates anomalous diffusion?
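For reference (a standard characterization, stated here as an assumption about what the slide's formula showed): anomalous diffusion means that the mean-square displacement grows as a non-integer power of time,

\operatorname{var} Q(t) \sim t^{\nu}, \qquad \nu \neq 1 \quad (\text{sub-diffusion for } 0 < \nu < 1).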

18 Reminder
Parameters, memory kernel Kn(t), and random forcing Fn(t) as defined above. If we now choose f²(ω) differently, we obtain power-law decay of the memory kernel.
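A sketch of a choice that produces this (the precise form used in the talk may differ): if f²(ω) behaves like a power at low frequencies,

f^2(\omega) \sim \omega^{\nu - 1} \ \ (\omega \to 0), \qquad 0 < \nu < 1,

then its Fourier cosine transform decays like a power at long times,

K(t) = \int_0^\infty f^2(\omega) \cos(\omega t)\, d\omega \;\sim\; C_\nu\, t^{-\nu}, \qquad t \to \infty,

with C_\nu a constant.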

19 The limiting GLE
Theorem: (a.s. with respect to the frequencies) Qn(t) converges weakly in C1[0,T] to the solution Q(t) of the limiting stochastic integro-differential equation. F(t) is a Gaussian process with covariance K(t); it is the derivative of fractional Brownian motion (1/f noise), interpreted in the distributional sense.
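In this notation the limiting equation has the same form as in Part I but with the power-law kernel (a sketch, constants omitted):

\ddot{Q} = -V'(Q) - \int_0^t K(t-s)\, \dot{Q}(s)\, ds + F(t), \qquad \mathbb{E}[F(t) F(s)] = K(t-s) \sim |t-s|^{-\nu},

so F is, up to a constant, the distributional derivative of fractional Brownian motion with Hurst exponent H = 1 - \nu/2.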

20 Solving the limiting GLE
For a free particle, V'(Q)=0, and for a particle in a quadratic potential well, V'(Q)=Q, the SIDE can be solved using the Laplace transform. Free particle: Gaussian profile, with variance given by a sub-diffusive (Mittag-Leffler) function of time, var(Q) ~ t^ν with 0 < ν < 1. Quadratic potential: sub-exponential approach to the Boltzmann distribution.
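A sketch of the Laplace-transform computation for the free particle (equilibrium initial data for the velocity assumed, constants suppressed): transforming the GLE for P = \dot{Q} gives the velocity autocorrelation

\hat{C}_{PP}(s) = \frac{T}{s + \hat{K}(s)}, \qquad \hat{K}(s) \sim c\, s^{\nu - 1} \ \ (s \to 0),

and since \widehat{\operatorname{var} Q}(s) = 2 \hat{C}_{PP}(s) / s^2 \sim (2T/c)\, s^{-1-\nu} as s \to 0, inverting gives \operatorname{var} Q(t) \sim \frac{2T}{c\, \Gamma(1+\nu)}\, t^{\nu} at long times.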

21 Numerical results Variance of an ensemble of 3000 systems, V(Q)=0 (compared to exact solution of the GLE)

22 Quadratic well: evolving distribution of 10,000 systems (dashed line: Boltzmann distribution)

23 What about dimensional reduction?
Even a system with power-law memory can be well approximated by a Markovian system with a few (less than 10) auxiliary variables. How? Consider the following Markovian SDE:
u(t): vector of size m
A: m×m constant matrix
C: m×m constant matrix
G: constant m-vector
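A sketch of one common form of such a system (the exact convention for A, C, G in the talk may differ); W is an m-dimensional Brownian motion:

dQ = P\, dt, \qquad dP = \left[ -V'(Q) + G^{\mathsf T} u \right] dt, \qquad du = \left[ A u - G P \right] dt + C\, dW.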

24 Solve for u(t) and substitute into the Q(t) equation: this yields a GLE whose memory kernel and forcing are determined by G, A, C.
Goal: find G, A, C so that fluctuation-dissipation is satisfied and the kernel approximates power-law decay.
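Carrying out the substitution for the sketch above gives a GLE with kernel and forcing

K_m(t) = G^{\mathsf T} e^{A t} G, \qquad F_m(t) = G^{\mathsf T} e^{A t} u(0) + G^{\mathsf T} \int_0^t e^{A(t-s)} C\, dW(s),

and fluctuation-dissipation, \mathbb{E}[F_m(t) F_m(s)] = T K_m(t-s), holds if u(0) \sim N(0, T I) and C C^{\mathsf T} = -T (A + A^{\mathsf T}) (a sketch; the talk's normalization may differ).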

25 It is easier to approximate in Laplace domain:
(The Laplace transform of a power is a power.) The Laplace transform of the Markovian kernel is a rational function of degree (m-1) over m, so this is a Padé approximation of the Laplace transform of the memory kernel (classical methods in linear systems theory). It is even nicer if the kernel has a continued-fraction representation.
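In formulas (same sketch notation as above): the Laplace transform of the Markovian kernel is the rational function

\hat{K}_m(s) = G^{\mathsf T} (s I - A)^{-1} G \qquad \text{(degree } m-1 \text{ over } m\text{)},

while the power-law kernel transforms to a power,

\mathcal{L}\{ t^{-\nu} \}(s) = \Gamma(1 - \nu)\, s^{\nu - 1}, \qquad 0 < \nu < 1,

so choosing G and A amounts to a Padé-type rational approximation of s^{\nu-1}.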

26 Laplace transform of memory kernel (solid line) compared with continued-fraction approximation for 2,4,8,16 modes (dashed lines).

27 Variance of the GLE (solid line) compared with Markovian approximations with 2, 4, 8 modes.
Fractional-diffusion scaling is observed over long times.

28 Comment: Approximation by a Markovian system is not only a computational tool; it is also an analytical approach to studying the statistics of the solution (e.g., calculating stopping times). It is a controlled approximation (unlike the use of a “fractional Fokker-Planck equation”). Bottom line: even a system with long-range memory can be reduced (with high accuracy) to a Markovian system of fewer than 10 variables (it is “intermediate asymptotics”, but that is what we care about in real life).

