Review
If time is continuous we cannot write down the simultaneous distribution of X(t) for all t. Rather, we pick $n$ and $t_1,\dots,t_n$ and write down probabilities like $P(X(t_1) \le x_1, \dots, X(t_n) \le x_n)$. These are called the finite-dimensional distributions (fdd's) of the process X.

Kolmogorov's consistency theorem
The fdd's must satisfy the following two conditions:
(i) $F_{t_1,\dots,t_n,t_{n+1}}(x_1,\dots,x_n,x_{n+1}) \to F_{t_1,\dots,t_n}(x_1,\dots,x_n)$ as $x_{n+1} \to \infty$;
(ii) $F_{t_{\pi(1)},\dots,t_{\pi(n)}}(x_{\pi(1)},\dots,x_{\pi(n)}) = F_{t_1,\dots,t_n}(x_1,\dots,x_n)$ for any permutation $\pi$ of $\{1,\dots,n\}$.
Then there exists a probability space and a stochastic process with these fdd's.

Markov chains
Consider a stochastic process $(X_n, n \ge 0)$ taking values in a discrete state space S. It is a Markov chain if $P(X_{n+1}=j \mid X_n=i, X_{n-1}=i_{n-1}, \dots, X_0=i_0) = P(X_{n+1}=j \mid X_n=i) = p_{ij}$. The matrix $P = (p_{ij})$ is called the transition matrix. Theorem: P has nonnegative entries and all row sums are one. Such matrices are called stochastic.
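
A short simulation may make the definition concrete. The sketch below uses a made-up two-state transition matrix (the states and probabilities are illustrative, not from the slides) and draws a trajectory by sampling each $X_{n+1}$ from row $X_n$ of P:

```python
import numpy as np

# Hypothetical two-state chain; the transition matrix is illustrative only.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate_chain(P, x0, n_steps, rng=None):
    """Simulate a Markov chain: X_{n+1} is drawn from row X_n of P."""
    rng = np.random.default_rng(rng)
    x = x0
    path = [x0]
    for _ in range(n_steps):
        x = rng.choice(len(P), p=P[x])
        path.append(x)
    return path

print(simulate_chain(P, x0=0, n_steps=20, rng=42))
```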

Chapman-Kolmogorov
The quantities $p_{ij}(n) = P(X_{m+n} = j \mid X_m = i)$ are called n-step transition probabilities. The matrix of them is denoted $P^{(n)}$. Theorem (Chapman-Kolmogorov): $P^{(n+m)} = P^{(n)} P^{(m)}$, so $P^{(n)} = P^n$. Let $\pi_k(n) = P(X_n = k)$ and $\pi(n) = (\pi_k(n))$. Then $\pi(m+n) = \pi(n) P^m$. In particular, $\pi(n) = \pi(0) P^n$.
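
The identity $P^{(n)} = P^n$ can be checked numerically: raising the one-step matrix to the n-th power and propagating an initial distribution should agree. A minimal sketch, reusing the illustrative matrix above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi0 = np.array([1.0, 0.0])  # start in state 0 with probability 1

n = 10
Pn = np.linalg.matrix_power(P, n)   # n-step transition matrix P^(n) = P^n
pi_n = pi0 @ Pn                     # distribution of X_n: pi(n) = pi(0) P^n
print(Pn)
print(pi_n)
```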

A branching process
[Figure: a sample family tree with $Z_0 = 1$, $Z_1 = 2$, $Z_2 = 4$, $Z_3 = 3$, where $Z_n$ is the number of individuals in generation n.]

Properties
Let $u_n = P(Z_n = 0)$ and let $G_X$ be the offspring probability generating function. Then $u_n = G_X(u_{n-1})$, and the extinction probability $\lim_{n\to\infty} u_n$ is the smallest non-negative root of $G_X(s) = s$. It is 1 if $E(X) \le 1$.
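
The recursion $u_n = G_X(u_{n-1})$ suggests computing the extinction probability by fixed-point iteration. A sketch, assuming (purely for illustration) a Poisson($\lambda$) offspring distribution so that $G_X(s) = e^{\lambda(s-1)}$:

```python
import math

def extinction_probability(lam, tol=1e-12, max_iter=10_000):
    """Iterate u_n = G_X(u_{n-1}) starting from u_0 = 0.

    Offspring distribution assumed Poisson(lam), so G_X(s) = exp(lam*(s-1)).
    """
    u = 0.0
    for _ in range(max_iter):
        u_next = math.exp(lam * (u - 1.0))  # G_X(u)
        if abs(u_next - u) < tol:
            break
        u = u_next
    return u_next

print(extinction_probability(0.8))  # E(X) <= 1: extinction certain, output ~1.0
print(extinction_probability(1.5))  # E(X) > 1: smallest root of G_X(s) = s, < 1
```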

Classification of states
A state i for a Markov chain $(X_k)$ is called persistent if $P(X_n = i \text{ for some } n \ge 1 \mid X_0 = i) = 1$, and transient otherwise. Let $f_{ij}(n) = P(X_1 \ne j, \dots, X_{n-1} \ne j, X_n = j \mid X_0 = i)$ and $f_{ij} = \sum_{n=1}^{\infty} f_{ij}(n)$. Then j is persistent iff $f_{jj} = 1$. Let $P_{ij}(s) = \sum_n p_{ij}(n) s^n$ and $F_{ij}(s) = \sum_n f_{ij}(n) s^n$ denote the corresponding generating functions.

Some results
Theorem: (a) $P_{ii}(s) = 1 + F_{ii}(s) P_{ii}(s)$; (b) $P_{ij}(s) = F_{ij}(s) P_{jj}(s)$ for $i \ne j$. Corollary: (a) State j is persistent iff $\sum_n p_{jj}(n) = \infty$, and then $\sum_n p_{ij}(n) = \infty$ for all i with $f_{ij} > 0$. (b) State j is transient iff $\sum_n p_{jj}(n) < \infty$, and then $\sum_n p_{ij}(n) < \infty$ for all i.

Mean recurrence time
Let $T_i = \min\{n > 0 : X_n = i\}$ and $\mu_i = E(T_i \mid X_0 = i)$. For a transient state $\mu_i = \infty$. For a persistent state $\mu_i = \sum_n n f_{ii}(n)$. We call a persistent state positive persistent if $\mu_i < \infty$, and null persistent otherwise. (In other terminology, positive recurrent = non-null persistent.)

Communication
Two states i and j communicate, written $i \to j$, if $p_{ij}(m) > 0$ for some m. They intercommunicate, written $i \leftrightarrow j$, if $i \to j$ and $j \to i$. Theorem: $\leftrightarrow$ is an equivalence relation. Theorem: If $i \leftrightarrow j$ then (a) i is transient iff j is transient; (b) i is persistent iff j is persistent.

Closed and irreducible sets
A set C of states is closed if $p_{ij} = 0$ for all i in C and j not in C. C is irreducible if $i \leftrightarrow j$ for all i, j in C. Theorem: $S = T \cup C_1 \cup C_2 \cup \dots$, where T contains all transient states and the $C_i$ are irreducible disjoint closed sets of persistent states. Note: the $C_i$ are the equivalence classes of $\leftrightarrow$.

Stationary distribution
Theorem: An irreducible chain has a stationary distribution $\pi$ iff the states are positive persistent. Then $\pi$ is unique and given by $\pi_i = 1/\mu_i$. We compute $\pi$ by solving $\pi P = \pi$.
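
Numerically, $\pi P = \pi$ together with $\sum_i \pi_i = 1$ is a linear system. One convenient sketch (again with the illustrative matrix from above) replaces one equation of $(P^T - I)\pi^T = 0$ with the normalization constraint:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = len(P)

# Solve pi (P - I) = 0 subject to sum(pi) = 1: transpose the system and
# replace the last equation with the normalization row of ones.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)       # stationary distribution, here (5/6, 1/6)
print(pi @ P)   # equals pi again, up to rounding
```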

Reversible chains
X is called reversible if its time-reversal Y has the same transition probabilities as X. Reversible processes satisfy $\pi_i p_{ij} = \pi_j p_{ji}$ for all i, j (in continuous time, $\pi_i g_{ij} = \pi_j g_{ji}$): the law of detailed balance.

Law of large numbers
Let X be positive persistent with stationary distribution $\pi$. Then $\frac{1}{N} \sum_{n=1}^{N} f(X_n) \to \sum_i f(i) \pi_i$ a.s. To estimate such a sum (or integral), we can compute averages from a Markov chain with stationary distribution $\pi$.

The Gibbs sampler
Suppose f (= $\pi$) is a density on $S^d$. We generate a Markov chain with stationary distribution f by consecutively drawing from $f(x_i \mid x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_d)$, $i = 1, \dots, d$ (called the full conditionals). The n-th step of the chain is the whole set of d draws from the d different conditional distributions.
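
A minimal sketch of a Gibbs sampler, using a standard bivariate normal target with correlation $\rho$ purely as an illustration (this example is not from the slides); each full conditional of a bivariate normal is itself normal:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_steps, rng=None):
    """Gibbs sampling for a standard bivariate normal with correlation rho.

    Full conditionals: X | Y=y ~ N(rho*y, 1-rho^2), and symmetrically for Y.
    """
    rng = np.random.default_rng(rng)
    x, y = 0.0, 0.0
    samples = np.empty((n_steps, 2))
    sd = np.sqrt(1.0 - rho**2)
    for n in range(n_steps):
        x = rng.normal(rho * y, sd)  # draw from f(x | y)
        y = rng.normal(rho * x, sd)  # draw from f(y | x)
        samples[n] = (x, y)
    return samples

s = gibbs_bivariate_normal(rho=0.8, n_steps=10_000, rng=1)
print(np.corrcoef(s[:, 0], s[:, 1]))  # empirical correlation near 0.8
```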

The Metropolis algorithm
Let Q be a symmetric transition matrix with rows $q_x$. When in state x, the next state is chosen as follows:
1. Draw y from $q_x$.
2. Calculate r = f(y)/f(x).
3. If r ≥ 1, the next value is y.
4. If r < 1, go to y with probability r, stay at x with probability 1-r.
The resulting process is clearly Markov.
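
A sketch of the algorithm for a continuous target, where the symmetric proposal is a Gaussian random walk; the target density (a standard normal, used only up to a constant) is an illustrative choice:

```python
import numpy as np

def metropolis(log_f, x0, n_steps, step_sd=1.0, rng=None):
    """Random-walk Metropolis with a symmetric Gaussian proposal.

    Accept y with probability min(1, f(y)/f(x)); log densities avoid underflow.
    """
    rng = np.random.default_rng(rng)
    x = x0
    samples = np.empty(n_steps)
    for n in range(n_steps):
        y = x + rng.normal(0.0, step_sd)   # draw y from the symmetric q_x
        log_r = log_f(y) - log_f(x)        # log of r = f(y)/f(x)
        if np.log(rng.uniform()) < log_r:  # accept with probability min(1, r)
            x = y
        samples[n] = x
    return samples

log_f = lambda x: -0.5 * x**2              # standard normal, up to a constant
s = metropolis(log_f, x0=0.0, n_steps=20_000, rng=7)
print(s.mean(), s.std())                   # should be near 0 and 1
```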

The Markov property
X(t) is a Markov process if for any n, $P(X(t) = j \mid X(t_0) = i_0, \dots, X(t_n) = i_n) = P(X(t) = j \mid X(t_n) = i_n)$ for all $j, i_0, \dots, i_n$ in S and any $t_0 < t_1 < \dots < t_n < t$. Write $p_{ij}(s,t) = P(X(t) = j \mid X(s) = i)$. The transition probabilities are homogeneous if $p_{ij}(s,t) = p_{ij}(0, t-s)$. We will usually assume this, and write $p_{ij}(t)$.

The general birth process
The probability of a birth in $(t, t + \delta t)$ when n individuals are present is $\lambda_n \delta t + o(\delta t)$. With X(0) = 1, the forward equations give $P_{1n}'(t) = \lambda_{n-1} P_{1,n-1}(t) - \lambda_n P_{1n}(t)$.

A result
Theorem: For any t > 0, $\sum_n P_{1n}(t) = 1$ iff $\sum_n \lambda_n^{-1} = \infty$. A process for which $\sum_n P_{1n}(t) < 1$ is called dishonest: there is positive probability that the process is not in the state space at time t. It has gone off to infinity, or exploded.

Forward equations
By Chapman-Kolmogorov, $p_{ij}(t+h) = \sum_k p_{ik}(t) p_{kj}(h)$, so $p_{ij}'(t) = \sum_k p_{ik}(t) g_{kj}$, or in matrix form $P_t' = P_t G$, where G is the generator with entries $g_{kj} = p_{kj}'(0)$.

Stationary distribution
$\pi$ is a stationary distribution for $P_t$ if $\pi = \pi P_t$ for all t. Theorem: $\pi$ is stationary for $P_t$ iff $\pi G = 0$ (under suitable regularity conditions).
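
As in the discrete case, $\pi G = 0$ with $\sum_i \pi_i = 1$ is a linear system. A sketch with a made-up two-state generator (the rates are illustrative only):

```python
import numpy as np

# Hypothetical generator: leave state 0 at rate 2, leave state 1 at rate 1.
G = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
n = len(G)

A = G.T.copy()
A[-1, :] = 1.0             # replace one equation of pi G = 0 with sum(pi) = 1
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                  # (1/3, 2/3): time split in inverse proportion to rates
```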

Construction
The way continuous-time Markov chains work is:
(1) Draw an initial value $i_0$ from $\pi(0)$.
(2) If $g_{i_0 i_0} \ne 0$, stay in $i_0$ for a random time which is $\mathrm{Exp}(-g_{i_0 i_0})$.
(3) Draw a new state $j \ne i_0$ from the distribution $(p_{i_0 j})$, where $p_{ij} = g_{ij}/(-g_{ii})$.
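
This construction translates directly into a simulation algorithm (the jump-chain, or Gillespie-style, method). A sketch, reusing the illustrative generator above:

```python
import numpy as np

def simulate_ctmc(G, i0, t_max, rng=None):
    """Simulate a CTMC from its generator G by the jump-chain construction."""
    rng = np.random.default_rng(rng)
    t, i = 0.0, i0
    times, states = [0.0], [i0]
    while True:
        rate = -G[i, i]
        if rate == 0.0:                     # absorbing state
            break
        t += rng.exponential(1.0 / rate)    # holding time ~ Exp(-g_ii)
        if t > t_max:
            break
        p = G[i].copy()
        p[i] = 0.0
        i = rng.choice(len(G), p=p / rate)  # jump probabilities g_ij / (-g_ii)
        times.append(t)
        states.append(i)
    return times, states

G = np.array([[-2.0, 2.0], [1.0, -1.0]])
print(simulate_ctmc(G, i0=0, t_max=5.0, rng=3))
```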

Persistence
A chain is irreducible if $p_{ij}(t) > 0$ for some t, for all pairs i, j in S. Fact (Lévy dichotomy): either $p_{ij}(t) > 0$ for all t > 0, or $p_{ij}(t) = 0$ for all t. We call a state i persistent if $P(\text{the set } \{t : X(t) = i\} \text{ is unbounded} \mid X(0) = i) = 1$. Let $Y_n = X(nh)$ be the discrete skeleton of X, and let Q be the transition matrix for Y, so $q_{ij} = p_{ij}(h)$; persistence in continuous time is the same as persistence for the discrete skeleton.

Birth, death, immigration and emigration
Let $g_{n,n+1} = \lambda_n$, $g_{n,n-1} = \mu_n$, and $g_{nn} = -(\lambda_n + \mu_n)$. The $\mu_n$ are called death rates, and the $\lambda_n$ are called birth rates. The process is a birth and death process. If $\lambda_n = n\lambda + \nu$ we have linear birth with immigration. If $\mu_n = (\mu + \delta)n$ we have linear death and emigration.

Poisson process
A birth process with rate $\lambda$ independent of the state. The number of events in (0, t] is independent of the number of events in (t, t+s]: $X_t$ has independent increments. If we delete points of a Poisson process independently with probability 1-p, we get a Poisson process of rate $\lambda p$.
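
The thinning property is easy to check by simulation. A sketch: generate Poisson($\lambda$) arrivals on (0, T] via exponential gaps, keep each point with probability p, and compare the retained rate to $\lambda p$ (the rates here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, T = 5.0, 0.3, 1000.0

# Poisson process on (0, T]: cumulative sums of Exp(lam) inter-arrival times.
gaps = rng.exponential(1.0 / lam, size=int(2 * lam * T))
points = np.cumsum(gaps)
points = points[points <= T]

# Thin: keep each point independently with probability p.
kept = points[rng.uniform(size=len(points)) < p]

print(len(kept) / T)   # empirical rate, close to lam * p = 1.5
```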

General definition
Consider points in some space S, a subset of $R^d$. They constitute a Poisson point pattern if
(i) $N(A) \sim \mathrm{Po}(\mu(A))$;
(ii) N(A) is independent of N(B) for disjoint A and B.
$\mu(\cdot)$ is called the mean measure. If $\mu(A) = \int_A \lambda(s)\,ds$ we call $\lambda(s)$ the intensity function.

A conditional property
Let N be a Poisson counting process with intensity $\lambda(x)$. Suppose A is a set with $\mu(A) < \infty$ and N(A) = n, and let $Q(B) = \mu(B)/\mu(A)$ be a probability distribution on A; it has density $\lambda(x)/\mu(A)$. Then the points in A have the same distribution as n points drawn independently from the distribution Q.

Brownian motion
A Brownian motion process is a stochastic process having:
(1) independent increments;
(2) stationary increments;
(3) continuity: for any $\epsilon > 0$, $P(|X(t+h) - X(t)| \ge \epsilon)/h \to 0$ as $h \downarrow 0$.

Properties of Brownian motion process
(1) $X(t) \sim N(\mu t, \sigma^2 t)$
(2) Continuous paths
(3) Finite squared variation
(4) Paths not of bounded variation
(5) Paths not differentiable
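
A sketch of simulating a Brownian path on a grid, using independent stationary Gaussian increments; the drift $\mu$ and variance $\sigma^2$ per unit time are illustrative parameters. The squared-variation property can be observed directly on the simulated path:

```python
import numpy as np

def brownian_path(mu, sigma, T, n_steps, rng=None):
    """Simulate X(t) on a grid via independent N(mu*dt, sigma^2*dt) increments."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    increments = rng.normal(mu * dt, sigma * np.sqrt(dt), size=n_steps)
    return np.concatenate([[0.0], np.cumsum(increments)])  # X(0) = 0

x = brownian_path(mu=0.0, sigma=1.0, T=1.0, n_steps=1_000_000, rng=0)
print(x[-1])                    # X(1) ~ N(0, 1)
print(np.sum(np.diff(x) ** 2))  # squared variation, close to sigma^2 * T = 1
```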