Variants of Stochastic Simulation Algorithm
Henry Ato Ogoe, Department of Computer Science, Åbo Akademi University

The Stochastic Framework
- Assume N molecular species {S_1, ..., S_N}
- State vector X(t) = (X_1(t), ..., X_N(t)), where X_i(t) is the number of molecules of species S_i at time t
- M reaction channels R_1, ..., R_M
- Assume the system is well-stirred and in thermal equilibrium

The Stochastic Framework
- The dynamics of reaction channel R_j are characterized by its propensity function a_j and its state-change vector v_j = (v_1j, ..., v_Nj), where v_ij gives the change in the population of S_i induced by one firing of R_j
- a_j(x)dt is the probability that, given X(t) = x, one R_j reaction will occur in the next infinitesimal time interval [t, t + dt)
- X(t) is a jump Markov process
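As a concrete illustration of propensities and state-change vectors (my own toy example, not from the slides), consider a hypothetical system with two channels, R1: S1 -> S2 and R2: S1 + S2 -> S3. The rate constants `c1`, `c2` and all names below are assumptions for the sketch.

```python
c1, c2 = 0.5, 0.002  # hypothetical rate constants

def propensities(x):
    """a_j(x) for each channel, given the state x = (X1, X2, X3)."""
    return [c1 * x[0],          # a_1(x): R1 fires at rate c1 * X1
            c2 * x[0] * x[1]]   # a_2(x): R2 fires at rate c2 * X1 * X2

# v_j = (v_1j, ..., v_Nj): change in each species count when R_j fires.
V = [(-1, +1, 0),    # R1 consumes one S1, produces one S2
     (-1, -1, +1)]   # R2 consumes one S1 and one S2, produces one S3

print(propensities((100, 10, 0)))  # -> [50.0, 2.0]
```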

- The time evolution of the probability of each state is defined by the Chemical Master Equation (CME):

  dP(x,t | x_0,t_0)/dt = Σ_{j=1..M} [ a_j(x - v_j) P(x - v_j, t | x_0, t_0) - a_j(x) P(x, t | x_0, t_0) ]

- where P(x,t | x_0,t_0) is the probability that X(t) = x given X(t_0) = x_0
- The CME is impractical to solve, especially for large systems
- Alternative approaches?
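As a concrete instance (my example, not from the slides): for a single decay channel S -> 0 with propensity a(x) = c x and state change v = -1, the CME reduces to

```latex
\frac{\partial P(x,t)}{\partial t} = c\,(x+1)\,P(x+1,t) \;-\; c\,x\,P(x,t)
```

i.e. probability flows into state x from state x+1 (one decay) and out of x at rate c x.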

Alternative Approaches to the CME
- Exact simulations
- Inexact simulations / approximations

Exact Stochastic Simulation
- Starting from the initial state X(t_0), the SSA simulates the trajectory by repeatedly updating the state after estimating
  1. τ, the time until the next reaction fires, and
  2. μ, the index of the firing reaction
- Both τ and μ can be sampled from the probability density function P(μ,τ) that the next reaction is μ and that it occurs after a waiting time τ
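The loop described above can be sketched as a generic driver (a minimal sketch; the function and parameter names are my own). The `sample_next` argument abstracts over the variants discussed below (FRM, DM, NRM): it takes the current propensities and returns (τ, μ).

```python
def ssa(x0, t0, t_end, propensities, V, sample_next):
    """Simulate one trajectory from state x0 at time t0 up to t_end."""
    t, x = t0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_end:
        a = propensities(x)
        if sum(a) == 0:            # no reaction can fire any more
            break
        tau, mu = sample_next(a)   # step 1-2: when, and which channel
        t += tau
        if t > t_end:
            break
        for i in range(len(x)):    # update state: X <- X + v_mu
            x[i] += V[mu][i]
        trajectory.append((t, tuple(x)))
    return trajectory
```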

Exact Stochastic Simulation
- Let a_0(x) = Σ_{k=1..M} a_k(x)
- It can be shown that P(μ,τ) = a_μ(x) exp(-a_0(x) τ)
- Integrating P(μ,τ) over all τ from 0 to ∞ gives P(μ = j) = a_j / a_0
- Summing P(μ,τ) over all μ gives P(τ) = a_0 exp(-a_0 τ)
- These two distributions lead to Gillespie's SSA and other mathematically equivalent variants with different computational efficiency

First Reaction Method (FRM) - Gillespie, 1977
- Generate a putative time τ_k for each reaction channel R_k according to
  τ_k = (1/a_k) ln(1/r_k), for k = 1, ..., M,
  where r_1, ..., r_M are M statistically independent random samples from U(0,1)
- τ = min{τ_1, ..., τ_M}
- μ = index of min{τ_1, ..., τ_M}
- Update X ← X + v_μ
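One FRM step can be sketched as follows (a sketch under the formulas above; `frm_step` and its variable names are my own). Drawing 1 - r instead of r is equivalent in distribution and avoids ln(0).

```python
import math, random

def frm_step(a):
    """Given propensities a = [a_1, ..., a_M], return (tau, mu)."""
    # tau_k = (1/a_k) ln(1/r_k); channels with a_k = 0 can never fire
    taus = [(-math.log(1.0 - random.random()) / ak) if ak > 0 else math.inf
            for ak in a]
    mu = min(range(len(taus)), key=taus.__getitem__)  # index of smallest tau_k
    return taus[mu], mu
```

Note that each call consumes M uniform random numbers and scans all M putative times, which is exactly the cost criticized on the next slide.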

Flaws?
- Uses M random numbers per time step
- Takes O(M) to update the a_k's
- Takes O(M) to identify the smallest τ_k

Direct Method (DM) - Gillespie, 1977
- Draw two independent samples r_1 and r_2 from U(0,1)
- The next reaction time is τ = (1/a_0) ln(1/r_1)
- The index μ of the firing reaction is the smallest integer j satisfying
  a_1 + a_2 + ... + a_j > r_2 a_0
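One DM step can be sketched like this (my own sketch of the two draws above; names are assumptions). It uses only two random numbers per step, but the linear scan for μ is the O(M) search depth criticized below.

```python
import math, random

def dm_step(a):
    """Given propensities a = [a_1, ..., a_M], return (tau, mu)."""
    a0 = sum(a)
    r1, r2 = random.random(), random.random()
    tau = math.log(1.0 / (1.0 - r1)) / a0   # tau = (1/a_0) ln(1/r_1)
    # mu = smallest j with a_1 + ... + a_j > r_2 * a_0
    target, acc = r2 * a0, 0.0
    for j, aj in enumerate(a):
        acc += aj
        if acc > target:
            return tau, j
    return tau, len(a) - 1  # guard against floating-point round-off
```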

Flaws?
- Unnecessary recalculation of all propensities at every step
- Slow: the search depth (the number of steps taken to identify μ) is ≈ O(M)

Next Reaction Method (NRM) – Gibson & Bruck (2000)