
1 Markov Chains

2 Summary
Markov chains. Discrete-time Markov chains: homogeneous and non-homogeneous Markov chains; transient and steady-state behavior. Continuous-time Markov chains.

3 Markov Processes Recall the definition of a Markov process:
The future of a process does not depend on its past, only on its present: Pr{X(tk+1) = xk+1 | X(tk) = xk, …, X(t1) = x1} = Pr{X(tk+1) = xk+1 | X(tk) = xk}. Since we are dealing with "chains", X(t) can take discrete values from a finite or a countably infinite set. For a discrete-time Markov chain, the notation is also simplified to Pr{Xk+1 = xk+1 | Xk = xk, …, X0 = x0} = Pr{Xk+1 = xk+1 | Xk = xk}, where Xk is the value of the state at the kth step.

4 Chapman-Kolmogorov Equations
Define the one-step transition probabilities pij(k) = Pr{Xk+1 = j | Xk = i}. Clearly, Σj pij(k) = 1 for all i, k, and all feasible transitions from state i. Define the n-step transition probabilities pij(k, k+n) = Pr{Xk+n = j | Xk = i}. [Figure: paths from state xi at step k through intermediate states x1, …, xR at an intermediate step u to state xj at step k+n.]

5 Chapman-Kolmogorov Equations
Using total probability, pij(k, k+n) = Σr Pr{Xk+n = j | Xu = r, Xk = i} Pr{Xu = r | Xk = i}, for any u with k ≤ u ≤ k+n. Using the memoryless property of Markov chains, Pr{Xk+n = j | Xu = r, Xk = i} = Pr{Xk+n = j | Xu = r} = prj(u, k+n). Therefore, we obtain the Chapman-Kolmogorov equation pij(k, k+n) = Σr pir(k, u) prj(u, k+n).

6 Matrix Form Define the matrix
H(k, k+n) = [pij(k, k+n)]. We can re-write the Chapman-Kolmogorov equation as H(k, k+n) = H(k, u) H(u, k+n), k ≤ u ≤ k+n. Choose u = k+n-1; then H(k, k+n) = H(k, k+n-1) P(k+n-1), the forward Chapman-Kolmogorov equation, where P(k) = [pij(k)] is the one-step transition probability matrix.

7 Matrix Form Choose u = k+1; then H(k, k+n) = P(k) H(k+1, k+n), the backward Chapman-Kolmogorov equation, where P(k) is again the one-step transition probability matrix.

8 Homogeneous Markov Chains
The one-step transition probabilities are independent of time k: P(k) = P, i.e., pij = Pr{Xk+1 = j | Xk = i} for all k, and the n-step transition matrix becomes H(k, k+n) = P^n. Even though the one-step transition probabilities are independent of k, this does not mean that the joint probability of Xk+1 and Xk is also independent of k. Note that Pr{Xk+1 = j, Xk = i} = pij Pr{Xk = i}, which still depends on k through Pr{Xk = i}.
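For a homogeneous chain the n-step transition matrix is just a matrix power, which is easy to check numerically. A minimal sketch with a hypothetical 3-state matrix (rows summing to 1):

```python
import numpy as np

# Hypothetical one-step transition matrix of a homogeneous 3-state chain.
P = np.array([[0.5,   0.5,   0.0],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.30]])

# For a homogeneous chain, H(k, k+n) = P^n for every k.
n = 4
Pn = np.linalg.matrix_power(P, n)
print(Pn)              # (i, j) entry = Pr{X_{k+n} = j | X_k = i}
print(Pn.sum(axis=1))  # sanity check: each row still sums to 1
```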

9 Example Consider a two-processor computer system where time is divided into time slots and that operates as follows. At most one job can arrive during any time slot, and this happens with probability α. Jobs are served by whichever processor is available; if both are available, the job is given to processor 1. If both processors are busy, then the job is lost. When a processor is busy, it completes the job with probability β during any one time slot. If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals). Describe the automaton that models this system. Describe the Markov chain that describes this model.

10 Example: Automaton
Let the number of jobs currently processed by the system be the state; then the state space is X = {0, 1, 2}. Event set: a: job arrival, d: job departure. Feasible event sets: if X = 0, then Γ(X) = {a}; if X = 1, 2, then Γ(X) = {a, d}. [State transition diagram with states 0, 1, 2 and arcs labeled by the feasible events omitted.]

11 Example: Alternative Automaton
Let (X1, X2) indicate whether processor 1 or 2 is busy, Xi ∈ {0, 1}. Event set: a: job arrival, di: job departure from processor i. Feasible event sets: if X = (0,0), then Γ(X) = {a}; if X = (0,1), then Γ(X) = {a, d2}; if X = (1,0), then Γ(X) = {a, d1}; if X = (1,1), then Γ(X) = {a, d1, d2}. [State transition diagram with states 00, 10, 01, 11 and arcs labeled by the feasible events omitted.]

12 Example: Markov Chain
For the state transition diagram of the Markov chain, each transition is simply marked with its transition probability pij. [Diagram: states 0, 1, 2 with arcs labeled p00, p01, p10, p11, p12, p20, p21, p22.]

13 Example: Markov Chain
Suppose that α = 0.5 and β = 0.7. Working out the slot dynamics (departures before arrivals) gives p00 = 1-α, p01 = α, p02 = 0; p10 = β(1-α), p11 = (1-β)(1-α) + αβ, p12 = α(1-β); p20 = β²(1-α), p21 = 2β(1-β)(1-α) + αβ², p22 = (1-β)² + 2αβ(1-β). Numerically,

P = | 0.5    0.5    0    |
    | 0.35   0.5    0.15 |
    | 0.245  0.455  0.3  |
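A sketch that builds this matrix from the slot dynamics; the derivation is my reading of the slide-9 rules (departures before arrivals), so treat the formulas as an assumption to be checked, not the author's own matrix:

```python
import numpy as np

def two_processor_P(alpha, beta):
    """One-step transition matrix for the two-processor example."""
    a, b = alpha, beta
    return np.array([
        # from state 0: a job arrives (w.p. a) or not
        [1 - a, a, 0.0],
        # from state 1: one independent completion (b) and arrival (a)
        [b * (1 - a), (1 - b) * (1 - a) + a * b, a * (1 - b)],
        # from state 2: 0, 1, or 2 independent completions, then arrival
        [b**2 * (1 - a),
         2 * b * (1 - b) * (1 - a) + a * b**2,
         (1 - b)**2 + 2 * a * b * (1 - b)],
    ])

P = two_processor_P(0.5, 0.7)
print(P)              # compare with the matrix above
print(P.sum(axis=1))  # sanity check: rows sum to 1
```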

14 State Holding Times Suppose that at step k the Markov chain has transitioned into state Xk = i. An interesting question is how long it will stay at state i. Let V(i) be the random variable that represents the number of time slots during which Xk = i. We are interested in the quantity Pr{V(i) = n}.

15 State Holding Times Pr{V(i) = n} = pii^(n-1) (1 - pii): the chain stays at i for n-1 further slots and then leaves. This is the geometric distribution with parameter pii. Clearly, V(i) has the memoryless property: Pr{V(i) = n + m | V(i) > m} = Pr{V(i) = n}.

16 State Probabilities An interesting quantity we are usually interested in is the probability of finding the chain at various states, i.e., we define πj(k) = Pr{Xk = j}. For all possible states, we define the vector π(k) = [π0(k), π1(k), …]. Using total probability we can write πj(k+1) = Σi pij(k) πi(k). In vector form, π(k+1) = π(k) P(k), or, for a homogeneous Markov chain, π(k+1) = π(k) P, so that π(k) = π(0) P^k.

17 State Probabilities Example
Suppose that the transition matrix P and the initial distribution π(0) are given. Find π(k) for k = 1, 2, … Transient behavior of the system: MCTransient.m. In general, the transient behavior is obtained by solving the difference equation π(k+1) = π(k) P(k).
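The MATLAB script MCTransient.m is not included in the transcript; below is a minimal Python sketch of the same iteration, with an assumed (hypothetical) transition matrix and initial distribution:

```python
import numpy as np

# Assumed transition matrix and initial distribution (hypothetical values).
P = np.array([[0.5,   0.5,   0.0],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.30]])
pi = np.array([1.0, 0.0, 0.0])  # start in state 0

# Iterate the difference equation pi(k+1) = pi(k) P.
for k in range(1, 21):
    pi = pi @ P
    print(k, pi)  # pi(k) settles toward the steady-state vector
```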

18 Classification of States
Definitions: State j is reachable from state i if the probability to go from i to j in n > 0 steps is greater than zero (state j is reachable from state i if in the state transition diagram there is a path from i to j). A subset S of the state space X is closed if pij = 0 for every i ∈ S and j ∉ S. A state i is said to be absorbing if it is a single-element closed set. A closed set S of states is irreducible if any state j ∈ S is reachable from every state i ∈ S. A Markov chain is said to be irreducible if the state space X is irreducible.

19 Example
[Diagram 1: an irreducible Markov chain on states 0, 1, 2 with transitions p00, p01, p10, p12, p21, p22.] [Diagram 2: a reducible Markov chain on states 0, 1, 2, 3, 4 with transitions p00, p01, p10, p12, p14, p22, p23, p32, p33; state 4 is an absorbing state and {2, 3} form a closed irreducible set.]

20 Transient and Recurrent States
Hitting time: Tij = min{k > 0 : Xk = j given X0 = i}, the first time the chain reaches state j starting from state i. Recurrence time: Tii is the first time that the MC returns to state i. Let ρi be the probability that the state will return back to i given it starts from i. The event that the MC will return to state i given it started from i is equivalent to Tii < ∞; therefore we can write ρi = Pr{Tii < ∞}. A state is recurrent if ρi = 1 and transient if ρi < 1.

21 Theorems If a Markov chain has a finite state space, then at least one of the states is recurrent. If state i is recurrent and state j is reachable from state i, then state j is also recurrent. If S is a finite closed irreducible set of states, then every state in S is recurrent.

22 Positive and Null Recurrent States
Let Mi = E[Tii] be the mean recurrence time of state i. A state is said to be positive recurrent if Mi < ∞. If Mi = ∞ then the state is said to be null recurrent. Theorems: If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent. If S is a closed irreducible set of states, then every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient. If S is a finite closed irreducible set of states, then every state in S is positive recurrent.

23 Example
[Diagram: the reducible chain from slide 19. States 0 and 1 are transient; states 2 and 3 (the closed irreducible set) are positive recurrent; the absorbing state 4 is recurrent.]

24 Periodic and Aperiodic States
Suppose that the structure of the Markov chain is such that state i is visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d. If no such integer exists (i.e., d = 1), then the state is called aperiodic. Example: [Diagram: a chain whose states alternate between two groups, so every state is periodic with period d = 2.] A minimal sketch for computing the period of a state numerically follows.
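The period of state i is the gcd of all n for which (P^n)ii > 0. A brute-force check, assuming a small finite chain and a search bound n_max (both my choices, not from the slides):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, n_max=200):
    """Period of state i: gcd of all n <= n_max with (P^n)_{ii} > 0."""
    return_steps = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            return_steps.append(n)
    return reduce(gcd, return_steps) if return_steps else 0

# A two-state chain that alternates deterministically: every return
# to state 0 takes an even number of steps, so the period is 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))  # -> 2
```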

25 Steady State Analysis Recall that the probability of finding the MC at state j after the kth step is given by πj(k) = Pr{Xk = j}. An interesting question is what happens in the "long run", i.e., πj = lim (k→∞) πj(k). This is referred to as the steady state, equilibrium, or stationary state probability. Questions: Do these limits exist? If they exist, do they converge to a legitimate probability distribution, i.e., Σj πj = 1? How do we evaluate πj for all j?

26 Steady State Analysis Recall the recursive probability
π(k+1) = π(k) P. If a steady state exists, then π(k+1) → π(k), and therefore the steady state probabilities are given by the solution to the equations π = π P and Σj πj = 1. In an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability. Example: periodic.m

27 Steady State Analysis THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that πj > 0 and πj = lim (k→∞) πj(k) = 1/Mj, where Mj is the mean recurrence time of state j. The steady state vector π is determined by solving π = π P and Σj πj = 1. Such a chain is called an ergodic Markov chain.
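A minimal sketch of solving π = πP with Σj πj = 1 numerically: one balance equation is redundant, so replace it with the normalization condition. The 3-state matrix is the hypothetical one used earlier:

```python
import numpy as np

def dtmc_steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1: stack n-1 balance
    equations (P^T - I) with a row of ones for normalization."""
    n = len(P)
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5,   0.5,   0.0],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.30]])
pi = dtmc_steady_state(P)
print(pi, pi @ P)  # pi and pi P agree at steady state
```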

28 Birth-Death Example
[Diagram: a chain on states 0, 1, …, i, …, where from each state i > 0 the chain moves to i+1 with probability 1-p and to i-1 with probability p, and from state 0 it moves to 1 with probability 1-p and stays with probability p.] Thus, to find the steady state vector π we need to solve π = π P and Σj πj = 1.

29 Birth-Death Example In other words, the balance equations read
π0 = p π0 + p π1 and πj = (1-p) πj-1 + p πj+1 for j ≥ 1. Solving these equations we get π1 = ((1-p)/p) π0.
In general, πj = ((1-p)/p)^j π0. Summing all terms we get Σj πj = π0 Σj ((1-p)/p)^j = 1.

30 Birth-Death Example Therefore, for all states j we get
πj = (1 - (1-p)/p) ((1-p)/p)^j, provided the geometric sum converges, i.e., (1-p)/p < 1. If p < 1/2, then the sum diverges, no stationary distribution exists, and all states are transient. If p > 1/2, then the sum converges and all states are positive recurrent.

31 Birth-Death Example If p = 1/2, then the chain returns to every state with probability 1 but the mean recurrence time is infinite: all states are null recurrent.
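As a numerical sanity check of the closed form for p > 1/2, one can run power iteration on a truncated version of the chain; the boundary behavior at 0 and at the truncation point N is my assumption based on the diagram, and N is chosen large enough that truncation barely matters:

```python
import numpy as np

p, N = 0.7, 200
r = (1 - p) / p

# Truncated birth-death chain: right w.p. 1-p, left w.p. p,
# self-loops at the two boundaries.
P = np.zeros((N + 1, N + 1))
P[0, 0], P[0, 1] = p, 1 - p
for i in range(1, N):
    P[i, i - 1], P[i, i + 1] = p, 1 - p
P[N, N - 1], P[N, N] = p, 1 - p

# Power iteration: repeatedly apply pi <- pi P.
pi = np.ones(N + 1) / (N + 1)
for _ in range(5000):
    pi = pi @ P

print(pi[:5])                                # numerical steady state
print([(1 - r) * r**j for j in range(5)])    # closed form (1-r) r^j
```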

32 Reducible Markov Chains
[Diagram: a transient set T feeding two irreducible sets S1 and S2.] In steady state, the Markov chain will eventually enter one of the irreducible sets (or an absorbing state) and remain there, and the previous analysis still holds within that set. The only question that arises, in case there are two or more irreducible sets, is the probability of ending in each set.

33 Reducible Markov Chains
[Diagram: a transient set T containing states i and r, and an irreducible set S = {s1, …, sn}.] Suppose we start from state i ∈ T. Then there are two ways to go to S: in one step, or go to some r ∈ T after k steps and then to S. Define fi(k) = Pr{Xk ∈ S, Xm ∉ S for m < k | X0 = i}, the probability of entering S for the first time at step k.

34 Reducible Markov Chains
First consider the one-step transition: fi(1) = Σ (j ∈ S) pij. Next consider the general case for k = 2, 3, …: the chain must first move to some r ∈ T and then enter S in the remaining k-1 steps, so fi(k) = Σ (r ∈ T) pir fr(k-1). The probability of ever ending in S starting from i is then fi = Σ (k ≥ 1) fi(k).

35 Continuous-Time Markov Chains
In this case, transitions can occur at any time. Recall the Markov (memoryless) property Pr{X(tk+1) = xk+1 | X(tk) = xk, …, X(t1) = x1} = Pr{X(tk+1) = xk+1 | X(tk) = xk}, where t1 < t2 < … < tk. Recall that the Markov property implies that X(tk+1) depends only on X(tk) (state memory); it does not matter how long the process has been in state X(tk) (age memory). The transition probabilities now need to be defined for every pair of time instants, as pij(s, t), i.e., the probability that the MC moves from state i at time s to state j at time t.

36 Transition Function Define the transition function
pij(s, t) = Pr{X(t) = j | X(s) = i}. The continuous-time analogue of the Chapman-Kolmogorov equation, obtained using total probability and the memoryless property, is pij(s, t) = Σr pir(s, u) prj(u, t), for s ≤ u ≤ t. Define H(s, t) = [pij(s, t)], i, j = 1, 2, …; then H(s, t) = H(s, u) H(u, t). Note that H(s, s) = I.

37 Transition Rate Matrix
Consider the Chapman-Kolmogorov equation for s ≤ t ≤ t+Δt: H(s, t+Δt) = H(s, t) H(t, t+Δt). Subtracting H(s, t) from both sides and dividing by Δt, (H(s, t+Δt) - H(s, t))/Δt = H(s, t) (H(t, t+Δt) - I)/Δt. Taking the limit as Δt → 0, ∂H(s, t)/∂t = H(s, t) Q(t), where the transition rate matrix Q(t) is given by Q(t) = lim (Δt→0) (H(t, t+Δt) - I)/Δt.

38 Homogeneous Case In the homogeneous case, the transition functions do not depend on s and t but only on the difference t-s; thus pij(s, t) = pij(t-s) and we write H(t) for H(s, s+t). It follows that dH(t)/dt = H(t) Q, and the transition rate matrix Q is a constant matrix. Thus, with H(0) = I, the solution is H(t) = exp{Qt}.

39 State Holding Time The time the MC will spend at each state i is a random variable with distribution Gi(t) = 1 - exp{-Λ(i)t}, where Λ(i) = -qii = Σ (j≠i) qij. Explain why… (the memoryless property forces the holding time to be exponential, since the exponential is the only memoryless continuous distribution).

40 Transition Rate Matrix Q.
Recall that Q = dH(t)/dt evaluated at t = 0. First consider the qij, i ≠ j; thus the above equation can be written as qij = lim (Δt→0) (pij(Δt) - pij(0))/Δt. Evaluating this at t = 0, we get that pij(0) = 0 for all i ≠ j, so qij = lim (Δt→0) pij(Δt)/Δt. The event that will take the state from i to j has an exponential residual lifetime with rate λij; therefore, given that in the interval (t, t+τ) one event has occurred, the probability that this transition will occur is given by Gij(τ) = 1 - exp{-λij τ}.

41 Transition Rate Matrix Q.
Since Gij(τ) = 1 - exp{-λij τ}, we get qij = lim (Δt→0) Gij(Δt)/Δt = λij. In other words, qij is the rate of the Poisson process that activates the event that makes the transition from i to j. Next, consider the qjj; thus qjj = lim (Δt→0) (pjj(Δt) - pjj(0))/Δt. Evaluating this at t = 0, we get that pjj(0) = 1, so qjj = -lim (Δt→0) (1 - pjj(Δt))/Δt, where 1 - pjj(Δt) is the probability that the chain leaves state j during the interval.

42 Transition Rate Matrix Q.
The event that the MC will transition out of state i has an exponential residual lifetime with rate Λ(i); therefore, the probability that an event will occur in the interval (t, t+τ) is given by Gi(τ) = 1 - exp{-Λ(i)τ}, and hence qii = -Λ(i). Note that for each row i, the sum Σj qij = 0.

43 Transition Probabilities P.
Suppose that state transitions occur at random points in time T1 < T2 < … < Tk < …. Let Xk be the state after the transition at Tk. Define Pij = Pr{Xk+1 = j | Xk = i}, the transition probabilities of the embedded (jump) chain. Recall that in the case of the superposition of two or more Poisson processes, the probability that the next event is from process j is given by λj/Λ. In this case, we have Pij = λij/Λ(i) = qij/Λ(i) for j ≠ i, and Pii = 0.

44 Example Assume a computer system where jobs arrive according to a Poisson process with rate λ. Each job is processed using a First In First Out (FIFO) policy. The processing time of each job is exponential with rate μ. The computer has a buffer to store up to two jobs that wait for processing; jobs that find the buffer full are lost. Draw the state transition diagram. Find the rate transition matrix Q. Find the state transition matrix P.

45 Example
[State transition diagram: states 0, 1, 2, 3 jobs in the system, with arrival transitions at rate λ and departure transitions at rate μ.] The rate transition matrix is given by

Q = | -λ     λ       0       0  |
    |  μ   -(λ+μ)    λ       0  |
    |  0     μ     -(λ+μ)    λ  |
    |  0     0       μ      -μ  |

The state transition matrix of the embedded chain is given by

P = |    0        1        0        0     |
    | μ/(λ+μ)     0     λ/(λ+μ)     0     |
    |    0     μ/(λ+μ)     0     λ/(λ+μ)  |
    |    0        0        1        0     |
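A sketch that constructs Q for hypothetical rates λ = 1, μ = 2 and derives the embedded-chain matrix P from it via Pij = qij/Λ(i); the state labeling 0..3 (jobs in system) is my reading of the diagram:

```python
import numpy as np

lam, mu = 1.0, 2.0  # hypothetical arrival and service rates

# Rate matrix for states {0, 1, 2, 3} jobs in the system
# (one in service, up to two waiting in the buffer).
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

# Embedded (jump-chain) matrix: P_ij = q_ij / Lambda(i), P_ii = 0.
Lam = -np.diag(Q)
P = Q / Lam[:, None]
np.fill_diagonal(P, 0.0)
print(P)              # compare with the matrix above
print(P.sum(axis=1))  # sanity check: rows sum to 1
```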

46 State Probabilities and Transient Analysis
Similar to the discrete-time case, we define πj(t) = Pr{X(t) = j}. In vector form, π(t) = [π0(t), π1(t), …], with initial probabilities π(0). Using our previous notation (for a homogeneous MC), π(t) = π(0) H(t) = π(0) exp{Qt}. Obtaining a general solution is not easy! Differentiating with respect to t gives us more insight: dπ(t)/dt = π(t) Q.
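A sketch of the transient computation using scipy's matrix exponential, reusing the Q of the queueing example above (rates and initial state are hypothetical):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1.0, 2.0
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])  # assume an empty system at t = 0

# pi(t) = pi(0) expm(Q t): evaluate at a few time points.
for t in (0.5, 1.0, 5.0, 50.0):
    print(t, pi0 @ expm(Q * t))  # approaches the stationary vector
```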

47 “Probability Fluid” view
We view πj(t) as the level of a "probability fluid" that is stored at each node j (0 = empty, 1 = full). The change in the probability fluid equals inflow minus outflow: dπj(t)/dt = Σ (i≠j) πi(t) qij - πj(t) Λ(j), where the first term is the inflow into j from other states i (at rates qij) and the second term is the outflow from j toward other states r (at rates qjr).

48 Steady State Analysis Often we are interested in the "long-run" probabilistic behavior of the Markov chain, i.e., πj = lim (t→∞) πj(t). These are referred to as steady state, equilibrium, or stationary state probabilities. As with the discrete-time case, we need to address the following questions: Under what conditions do the limits exist? If they exist, do they form legitimate probabilities? How can we evaluate these limits?

49 Steady State Analysis
Theorem: In an irreducible continuous-time Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists with πj = lim (t→∞) πj(t). These vectors are independent of the initial state probability and can be obtained by solving π Q = 0 and Σj πj = 1. Using the "probability fluid" view, at steady state the change is 0, i.e., inflow equals outflow at every node: Σ (i≠j) πi qij = πj Λ(j).
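A minimal numerical sketch: solve πQ = 0 with Σj πj = 1 by swapping one (redundant) balance equation for the normalization condition, the same trick as in the discrete-time case:

```python
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi Q = 0 together with sum(pi) = 1."""
    n = len(Q)
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

lam, mu = 1.0, 2.0
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])
pi = ctmc_steady_state(Q)
print(pi)  # matches rho**j / sum(rho**k) with rho = lam/mu
```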

50 Example For the previous queueing example, with the above transition rates, what are the steady state probabilities? Solve π Q = 0 and Σj πj = 1.

51 Example The solution is obtained from the balance equations λπ0 = μπ1, λπ1 = μπ2, λπ2 = μπ3, so πj = (λ/μ)^j π0, with π0 = [1 + (λ/μ) + (λ/μ)² + (λ/μ)³]^(-1).

52 Birth-Death Chain
[Diagram: states 0, 1, …, i, …, with birth rates λ0, λ1, …, λi-1, λi and death rates μ1, …, μi, μi+1.] Find the steady state probabilities. Similarly to the previous example, the rate transition matrix has qi,i+1 = λi, qi,i-1 = μi, and qii = -(λi + μi) (with μ0 = 0), and we solve π Q = 0 and Σj πj = 1.

53 Example The solution is obtained from the balance equations: λ0 π0 = μ1 π1, so π1 = (λ0/μ1) π0. In general, πj = π0 (λ0 λ1 … λj-1)/(μ1 μ2 … μj). Making the sum equal to 1 gives π0 = [1 + Σ (j≥1) (λ0 λ1 … λj-1)/(μ1 μ2 … μj)]^(-1).
A solution exists if this sum converges, i.e., Σ (j≥1) (λ0 λ1 … λj-1)/(μ1 μ2 … μj) < ∞.

54 Uniformization of Markov Chains
In general, discrete-time models are easier to work with, and computers (which are needed to solve such models) operate in discrete time. Thus, we need a way to turn continuous-time Markov chains into discrete-time ones. Uniformization procedure: Recall that the total rate out of state i is -qii = Λ(i). Pick a uniform rate γ such that γ ≥ Λ(i) for all states i. The difference γ - Λ(i) implies a "fictitious" event that returns the MC back to state i (a self-loop).

55 Uniformization of Markov Chains
Uniformization procedure: Let PUij be the transition probability from state i to state j for the discrete-time uniformized Markov chain; then PUij = qij/γ for i ≠ j, and PUii = 1 - Λ(i)/γ. [Diagram: the rates qij, qik out of state i become probabilities qij/γ, qik/γ, plus a self-loop at i with probability 1 - Λ(i)/γ.]
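A minimal sketch of the uniformization step: in matrix form, PUij = qij/γ for i ≠ j and PUii = 1 - Λ(i)/γ is simply PU = I + Q/γ, with γ ≥ max over i of Λ(i). The example Q is the hypothetical queueing one from above:

```python
import numpy as np

def uniformize(Q, gamma=None):
    """Uniformized DTMC: P_U = I + Q/gamma, gamma >= max_i Lambda(i).
    The diagonal entries 1 - Lambda(i)/gamma are the fictitious
    self-loops introduced by the procedure."""
    Lam = -np.diag(Q)
    if gamma is None:
        gamma = Lam.max()
    return np.eye(len(Q)) + Q / gamma

lam, mu = 1.0, 2.0
Q = np.array([[-lam,         lam,          0.0,  0.0],
              [  mu, -(lam + mu),          lam,  0.0],
              [ 0.0,          mu,  -(lam + mu),  lam],
              [ 0.0,         0.0,           mu,  -mu]])
PU = uniformize(Q)
print(PU)              # rows sum to 1; diagonal holds the self-loops
print(PU.sum(axis=1))  # sanity check
```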

