
1 Tel Hai Academic College, Department of Computer Science, Prof. Reuven Aviv. Introduction to Markov Chains. Resource: Fayez Gebali, Analysis of Computer and Communication Networks

2 Contents Markov Chains: Idea and Examples Time development of state probabilities and approach to equilibrium

3 MARKOV CHAINS: IDEA AND EXAMPLES

4 Idea of a Markov Chain A series of random variables S(0), S(1), S(2), … S(n), … The values of S(n) are the states of a system at time n. States are integer values 0, 1, 2, … E.g.: S(n) is the number of packets in a buffer of a router at time n. The S(n) are random variables, each with a PMF Pr[S(n) = k]. The Markov property: the value of S(n) depends only on the value of S(n-1), not on the values of S(m) at earlier times m (the memoryless property; defined precisely on the next slide). Example: S(n) is the state of a LAN channel at time n. How many values can S(n) have?

5 Markov Chain The Markov property: Pr[S(n) = i | S(1) = s_1, S(2) = s_2, S(3) = s_3, …, S(n-1) = j] = Pr[S(n) = i | S(n-1) = j]. What is the meaning of this? Define the transition matrix: P_ij(n) = Pr[S(n) = i | S(n-1) = j]. Homogeneity assumption: P_ij(n) is independent of n. What is the meaning of the homogeneity assumption? A set of random variables S = {S(0), S(1), S(2), … S(n), …} that has the Markov property and the homogeneity property is called a homogeneous Markov chain.

6 Example: A Router buffer of size 4 The buffer may hold up to 4 packets. State variable (random variable) S(n): buffer occupancy (number of packets at time n). How many states (possible values of S(n))? Five states?? Graphical description: the state transition diagram.

7 Transition Matrix P_ij is the conditional probability to go from present state j to next state i: P_ij(n) = Pr[S(n) = i | S(n-1) = j]. P_ij leads from column j to row i. 0 ≤ P_ij ≤ 1, and Σ_i P_ij = 1 (the elements in each column sum to 1). Why? P is a column-stochastic matrix.
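The column-stochastic convention can be checked in a few lines of Python (a sketch, using the two-state matrix of Exercise E2 from the later slides; the indexing P[i][j] = Pr[i | j] follows the convention above, with 0-based indices):

```python
# Sketch: the E2 transition matrix, stored column-stochastic with
# P[i][j] = Pr[next state = i | current state = j] (0-based indices).
P = [[0.4, 0.2],
     [0.6, 0.8]]

# From any state j the chain must go somewhere, so each column sums to 1.
col_sums = [sum(P[i][j] for i in range(2)) for j in range(2)]
```

Note that the rows need not sum to 1 in this convention; only the columns must.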

8 Exercise 1: A Router buffer, size ∞ The buffer may hold up to m = ∞ packets. S(n): buffer occupancy. m+1 states: 0, 1, 2, 3, … m (m = ∞ in this exercise). 1st question of interest: what is the transition matrix Pr[S(n) = i | S(n-1) = j] ≡ Pr[i | j] ≡ P_ij?

9 Exercise E1: assumptions The number of packets in the buffer is unchanged, or changes by ±1, in one time step. What does that mean? Define two sets of auxiliary random variables: the numbers of arriving and departing packets in time step j, A_j and D_j. Each is a Bernoulli variable with values 1 or 0. Pr(A_j = 1) = a: probability of a packet arrival (in time step j); b = 1 - a: probability of no arrival (in time step j). Pr(D_j = 1) = c: probability of a packet departure (in time step j); d = 1 - c: probability of no departure (in time step j). Exercise: write the transition matrix Pr[i | j].

10 Exercise E1: a general buffer (cont’d) State transition diagram: states and transition probabilities. Note: these are conditional probabilities. Why is Pr[S(n) = 3 | S(n-1) = 2] ≡ Pr[3 | 2] equal to ad? Why is Pr[1 | 2] equal to bc? When does 0 → 0 occur? When does 2 → 2 occur?

11 Exercise E1: a general buffer (cont’d) The buffer moves from state k to k+1 if a packet arrives (a) AND no packet departs (d): Pr[k+1 | k] = ad. The buffer moves from state k+1 to state k if no packet arrives (b) AND a packet departs (c): Pr[k | k+1] = bc. The buffer stays in state 0 if a packet arrives and a packet departs (ac), OR no packet arrives (b): Pr[0 | 0] = ac + b ≡ f_0. The buffer stays in state k ≠ 0 if a packet arrives AND a packet departs (ac), OR no packet arrives AND no packet departs (bd): Pr[k | k] = ac + bd ≡ f, for k ≠ 0.

12 Exercise E1: a general buffer (cont’d) States are arranged in the order 0, 1, 2,… The Transition Matrix P:
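As a sketch, the matrix P of Exercise E1 can be built in Python for a buffer truncated to N+1 states (the truncation, and the choice that an arrival at a full buffer is dropped, are my assumptions for illustration; the exercise itself takes m = ∞):

```python
# A sketch of the E1 buffer transition matrix, truncated to N+1 states
# so it fits in memory (the truncation at state N is an assumption of
# this sketch, not part of the exercise, which uses m = infinity).
def buffer_matrix(a, c, N):
    b, d = 1 - a, 1 - c
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for k in range(N + 1):
        if k == 0:
            P[0][0] = a * c + b          # f_0: arrival AND departure, or no arrival
        else:
            P[k][k] = a * c + b * d      # f: both events, or neither
        if k + 1 <= N:
            P[k + 1][k] = a * d          # arrival, no departure
            P[k][k + 1] = b * c          # departure, no arrival
    P[N][N] += a * d                     # truncation: arrival at a full buffer is dropped
    return P

P = buffer_matrix(0.3, 0.5, 10)
```

With these formulas every column sums to 1, e.g. column 0 gives (ac + b) + ad = a(c + d) + b = a + b = 1.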

13 Exercise E2 A system is modeled by a Markov chain {X_0, X_1, … X_n, …}. X_k (the system at time k) can be in two states {1, 2}. Transition matrix: P_11 = 0.4, P_12 = 0.2, P_21 = 0.6, P_22 = 0.8. Initial state (at time 0): X_0 = 1. What is the probability that at time 1 the state will be 1 and then at time 2 the state will be 2: Pr[X_2 = 2, X_1 = 1 | X_0 = 1]? This is a path: the chain jumps from initial state 1 to next state 1 to next state 2: 2 ← 1 ← 1.

14 Exercise E2 (cont’d) Lemma: Pr[X_2 = 2, X_1 = 1 | X_0 = 1] = P_21 * P_11. Proof (explain how each step is derived): Pr[X_2 = 2, X_1 = 1 | X_0 = 1] = Pr[X_2 = 2, X_1 = 1, X_0 = 1] / Pr[X_0 = 1] = Pr[X_2 = 2 | X_1 = 1, X_0 = 1] * Pr[X_1 = 1, X_0 = 1] / Pr[X_0 = 1] = Pr[X_2 = 2 | X_1 = 1] * Pr[X_1 = 1 | X_0 = 1] * Pr[X_0 = 1] / Pr[X_0 = 1] = Pr[X_2 = 2 | X_1 = 1] * Pr[X_1 = 1 | X_0 = 1] ≡ P_21 * P_11 = 0.6 * 0.4 = 0.24. In general, the conditional probability of a path is the product of the P_ij along the path.
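The lemma can be checked numerically (a sketch on the E2 chain, with P stored as a dictionary P[(i, j)] = Pr[i | j]):

```python
# Sketch: the conditional probability of a path is the product of the
# one-step transition probabilities P_ij along it (E2 chain).
P = {(1, 1): 0.4, (1, 2): 0.2, (2, 1): 0.6, (2, 2): 0.8}  # P[(i, j)] = Pr[i | j]

def path_probability(path):
    # path = [X0, X1, ..., Xn]; returns Pr[X1, ..., Xn | X0]
    prob = 1.0
    for j, i in zip(path, path[1:]):
        prob *= P[(i, j)]
    return prob

p = path_probability([1, 1, 2])   # X0 = 1 -> X1 = 1 -> X2 = 2
```

Here p works out to P_11 * P_21 = 0.4 * 0.6 = 0.24, as in the lemma.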

15 Exercise E3 Same chain as in E2 (initial state X_0 = 1). What is the probability that the chain will go via the following path: X_1 = 1, X_3 = 2, X_5 = 2? Pr[X_5 = 2, X_3 = 2, X_1 = 1 | X_0 = 1] = Σ_i Σ_j Pr[2 ← j ← 2 ← i ← 1 ← 1], sum over all i, all j, = Σ_i Σ_j P_2j P_j2 P_2i P_i1 P_11 = (Σ_j P_2j P_j2)(Σ_i P_2i P_i1) * P_11 = (P²)_22 * (P²)_21 * P_11. The transition matrix in one step is P; the transition matrix in 2 steps is P². Pr[X_5 = 2, X_3 = 2, X_1 = 1 | X_0 = 1] = (P²)_22 * (P²)_21 * P_11.
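A small sketch confirming the identity: summing over all intermediate states i and j gives the same number as the product (P²)_22 * (P²)_21 * P_11 on the E2 chain:

```python
# Sketch: two-step transitions are entries of P^2 (E2 chain).
P = {(1, 1): 0.4, (1, 2): 0.2, (2, 1): 0.6, (2, 2): 0.8}  # P[(i, j)] = Pr[i | j]
states = (1, 2)

def P2(i, j):
    # two-step transition probability (P^2)_ij = sum_k P_ik * P_kj
    return sum(P[(i, k)] * P[(k, j)] for k in states)

direct = P2(2, 2) * P2(2, 1) * P[(1, 1)]
brute = sum(P[(2, j)] * P[(j, 2)] * P[(2, i)] * P[(i, 1)] * P[(1, 1)]
            for i in states for j in states)
```

`direct` and `brute` agree, which is exactly the factorization performed on the slide.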

16 Exercise E4 Three people A, B, C play a series of table tennis games. In each game two are playing and the third watches. The winner of the nth game plays the next game against the third. The probability that X beats Y is t_X/(t_X + t_Y); X, Y ∈ {A, B, C}. Represent the process as a Markov chain and write P. State: the name of the person who does not play: A, B, or C. State A: B and C play. Possible transitions: to B (if B lost), or to C (if C lost). Probability to stay in state A: P_AA = Pr[A | A] = 0. Transition to state B (C beats B): P_BA = Pr[B | A] = t_C/(t_B + t_C). Transition to state C (B beats C): P_CA = Pr[C | A] = t_B/(t_B + t_C).

17 Exercise E4 (cont’d) State B: A and C play. Probability to stay in state B: P_BB = Pr[B | B] = 0. Transition to state A (C beats A): P_AB = Pr[A | B] = t_C/(t_A + t_C). Transition to state C (A beats C): P_CB = Pr[C | B] = t_A/(t_C + t_A). In general, for a transition from previous state X to next state Y: at state X, Y played against Z, with X ≠ Y ≠ Z ∈ {A, B, C}. In the next state Y, Y doesn’t play, i.e. Y lost and the winner was Z: P_YX = t_Z/(t_Y + t_Z), and P_XX = 0.
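The general rule P_YX = t_Z/(t_Y + t_Z) can be coded directly (a sketch; the numeric strengths t_A, t_B, t_C below are my illustration, not part of the exercise):

```python
# Sketch of the E4 chain: the state is the player who watches.
t = {'A': 1.0, 'B': 2.0, 'C': 3.0}   # illustrative strengths (assumed values)
players = ['A', 'B', 'C']

def P(y, x):
    # Pr[next watcher = y | current watcher = x]: at state x, y plays
    # against z; y watches next iff y lost, i.e. z won.
    if y == x:
        return 0.0
    z = next(p for p in players if p not in (x, y))
    return t[z] / (t[y] + t[z])

col_sums = [sum(P(y, x) for y in players) for x in players]
```

Each column sums to 1 automatically, since t_Z/(t_Y + t_Z) + t_Y/(t_Z + t_Y) = 1.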

18 Exercise E5 A store sells a certain product. At the end of each day the number of items is checked. Policy: if the number is ≤ 0, it is increased to 4 during the night. Customers who ordered but didn’t receive items get them during the next day. State X_n: number of items at the beginning of day n. External variable D_n: number of items ordered during day n. Let Pr[D_n = 0] = 0.2; Pr[D_n = 1] = 0.4; Pr[D_n = 2] = 0.4 (no more than 2 items can be ordered in a day). Develop the transition matrix P_ij = Pr[i | j].

19 Exercise E5 (cont’d) The max value of X_n is 4. X_n can have the values {1, 2, 3, 4}; it can’t be 0. Why? State 1 (one item in the store at the beginning of a day): stay in state 1 if there are no orders, P_11 = Pr[D_n = 0] = 0.2; jump to state 4 if one OR two items were ordered, P_41 = Pr[D_n = 1] + Pr[D_n = 2] = 0.4 + 0.4 = 0.8. State 2 (2 items in the store at the beginning of the day): stay in 2 if nothing was ordered, P_22 = Pr[D_n = 0] = 0.2; jump to 1 if one item was ordered, P_12 = Pr[D_n = 1] = 0.4; jump to 4 if 2 items were ordered, P_42 = Pr[D_n = 2] = 0.4.

20 Exercise E5 (cont’d) State 3: stay in state 3 if nothing was ordered, P_33 = 0.2; jump to state 2 if one item was ordered, P_23 = 0.4; jump to state 1 if two items were ordered, P_13 = 0.4. State 4: stay in state 4 if nothing was ordered, P_44 = 0.2; jump to state 3 if one item was ordered, P_34 = 0.4; jump to state 2 if two items were ordered, P_24 = 0.4.
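The whole E5 matrix can be generated from the restocking policy instead of case by case (a sketch; states 1..4 are mapped to 0-based indices 0..3):

```python
# Sketch: build the E5 transition matrix from the overnight restocking
# policy. P[i][j] = Pr[next state = i+1 | current state = j+1].
demand = {0: 0.2, 1: 0.4, 2: 0.4}    # Pr[D_n = k]
P = [[0.0] * 4 for _ in range(4)]
for j in range(4):                    # j+1 items at the beginning of the day
    items = j + 1
    for k, pk in demand.items():
        left = items - k
        nxt = 4 if left <= 0 else left   # count <= 0 -> restocked to 4 overnight
        P[nxt - 1][j] += pk
```

This reproduces the slide's values, e.g. P_41 = 0.4 + 0.4 = 0.8 (both one and two orders empty a one-item store).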

21 Exercise E6 A company has m = 2 machines, A and B. The probability that a machine becomes faulty during a day is p (independent of the states of the other machines). Faulty machines are sent to a repair service. The probability that a machine in the repair service is repaired and ready for the next day is s. Repaired machines are returned to the company in the evening. Develop a Markov model: write the matrix P. X_n: the number of operational (not faulty) machines at the beginning of day n. X_n is the state at day n; X_n can have the values 0, 1, 2.

22 Exercise E6 (cont’d) State 0 (machines A and B are faulty at the beginning of the day): remain at state 0 if neither machine is repaired, P_00 = (1-s)²; jump to state 1 if machine A is repaired AND B is not, OR B is repaired AND A is not, P_10 = 2s(1-s); jump to state 2 if both A and B are repaired, P_20 = s². State 1 (one machine is OK at the beginning of the day): stay at state 1 if the OK machine stays OK AND the faulty machine is not repaired, OR the OK machine becomes faulty AND the faulty machine is repaired during the day: P_11 = (1-p)(1-s) + ps.

23 Exercise E6 (cont’d) Jump to state 2 if the OK machine does not become faulty AND the faulty machine is repaired: P_21 = s(1-p). Jump to state 0 if the OK machine becomes faulty AND the faulty machine is not repaired during the day: P_01 = p(1-s). State 2 (both machines are OK at the beginning of the day): stay at state 2 if neither machine becomes faulty, P_22 = (1-p)²; jump to state 1 if machine A becomes faulty and machine B stays OK, OR machine B becomes faulty and machine A stays OK during the day, P_12 = 2p(1-p); jump to state 0 if both machines become faulty, P_02 = p².
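The E6 formulas assemble into a 3×3 column-stochastic matrix; a sketch (the numeric values p = 0.1 and s = 0.6 are my example, not from the exercise):

```python
# Sketch: the E6 matrix built from the slide formulas, with columns
# indexed by the current number of working machines 0..2.
def machines_matrix(p, s):
    return [
        [(1 - s) ** 2,    p * (1 - s),                p ** 2         ],  # -> 0 working
        [2 * s * (1 - s), (1 - p) * (1 - s) + p * s,  2 * p * (1 - p)],  # -> 1 working
        [s ** 2,          s * (1 - p),                (1 - p) ** 2   ],  # -> 2 working
    ]

P = machines_matrix(0.1, 0.6)   # p = 0.1, s = 0.6 (assumed example values)
```

Each column sums to 1, e.g. column 1: p(1-s) + (1-p)(1-s) + ps + s(1-p) = (1-s) + s = 1.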

24 Exercise E7 If it rained for the past two days, it will rain tomorrow with probability 0.7. If it rained today but not yesterday, it will rain tomorrow with probability 0.5. If it rained yesterday but not today, it will rain tomorrow with probability 0.4. If it has not rained in the past two days, it will rain tomorrow with probability 0.2. Construct a Markov model and calculate the transition matrix. How many states are there (what are the states)?

25 Exercise E7 (cont’d) State 0: rain after rain. State 1: rain after no rain. State 2: no rain after rain. State 3: no rain after no rain. Transitions from state 0 (it rained today and yesterday): stay in state 0 (a rainy day after two rainy days), P_00 = 0.7; jump to state 1 (rain tomorrow with no rain today), P_10 = 0, impossible, because today it does rain; jump to state 3 (no rain tomorrow after no rain today), P_30 = 0, impossible, because today it does rain; jump to state 2 (no rain tomorrow after two rainy days), P_20 = 1 - 0.7 = 0.3.

26 Exercise E7 (cont’d) From state 1 (rain today but no rain yesterday): jump to state 0 (rain today and tomorrow), the probability that after (no-rain, rain) comes rain, P_01 = 0.5; stay at state 1 (rain tomorrow but not today), P_11 = 0, impossible, because today it does rain; jump to state 3 (no rain tomorrow and today), P_31 = 0, impossible, because today it does rain; jump to state 2 (no rain tomorrow after rain today), the probability that after (no-rain, rain) comes no-rain, P_21 = 1 - 0.5 = 0.5.

27 Exercise E7 (cont’d) From state 2 (no rain today, rained yesterday): jump to state 0 (rain today and tomorrow), probability 0 because it does not rain today, P_02 = 0; jump to state 1 (rain tomorrow, but not today), the probability of rain after (rain, no-rain), P_12 = 0.4; stay in state 2 (no rain tomorrow, with rain today), probability 0 because today there is no rain, P_22 = 0; jump to state 3 (no rain tomorrow and today), the probability of no-rain after (rain, no-rain), P_32 = 0.6. Similarly for the transitions from state 3 (no rain after no rain): P_03 = 0; P_13 = 0.2; P_23 = 0; P_33 = 0.8.
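Collecting the E7 values gives a 4×4 column-stochastic matrix; a quick sketch to verify that each column sums to 1:

```python
# Sketch: the E7 weather matrix, P[i][j] = Pr[next state i | state j].
# States 0..3 = (rain after rain), (rain after no rain),
# (no rain after rain), (no rain after no rain).
P = [
    [0.7, 0.5, 0.0, 0.0],   # -> rain after rain (must be raining today)
    [0.0, 0.0, 0.4, 0.2],   # -> rain after no rain (must be dry today)
    [0.3, 0.5, 0.0, 0.0],   # -> no rain after rain
    [0.0, 0.0, 0.6, 0.8],   # -> no rain after no rain
]
col_sums = [sum(row[j] for row in P) for j in range(4)]
```

The zero pattern reflects the impossibility of "jumping" over today's weather: from a rainy today you can only reach states 0 or 2.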

28 Exercise E8 A pensioner receives a pension of 2 (thousand) dollars at the beginning of each month. He spends k (thousand dollars) during the month, with probability a_k, k = 1..4; he cannot spend more than 4. Assume a_1 = a_2 = a_3 = a_4 = 1/4. If by the end of the month his capital is more than 3, he keeps 3 for himself and gives the extra to his son. Suppose at the beginning of the month he has 3. Calculate the probability that his capital will reach the level of 1 or lower sometime during the following 4 months.

29 Exercise E8 (cont’d) Denote by X_k the capital at the end of month k. Possible states: 1 (capital 1 or lower), 2, 3. Initial state: X_0 = 3. Suppose we know the transition probability matrix P_ij. Suppose the system enters state 1 for the first time at the end of month m (unknown). Denote R = P^m. The probability of a transition from state 3 to state 1 in m steps: R_13 = Σ_{jkl…i} P_1j P_jk P_kl … P_i3 = (P^m)_13, a sum over length-m paths from 3 to 1. m is between 1 and 4, but m is unknown!

30 Exercise E8 (cont’d) We construct a new Markov chain with transition matrix Q equal to P except that Q_31 = Q_21 = 0, Q_11 = 1. When the new chain enters state 1, it remains there with probability 1 (state 1 is absorbing). The new chain will have entered state 1 during n months if and only if it is in that state at month n. We need to calculate the probability of the new chain going from state 3 to state 1 in n steps (in our case n = 4). The probability to go from state 3 to state 1 in n steps in the new chain is a sum over all paths of length n whose first m steps equal the steps in the original Markov chain and whose last (n-m) steps are transitions from 1 to 1.

31 Exercise E8 (cont’d) Denote S = Q^n (in our problem n = 4). S_13 = 1*1*…*1*R_13 (n-m factors of 1). Construction of the transition matrix Q: State 1 (capital 1 or lower at the end of this month): Q_11 = 1, Q_21 = Q_31 = 0 by the definition of Q. State 2 (capital 2 at the end of this month): jump to state 1 if he spends 3 OR 4, P_12 = a_3 + a_4; stay in state 2 if he spends 2, P_22 = a_2; jump to state 3 if he spends 1, P_32 = a_1.

32 Exercise E8 (cont’d) State 3 (capital 3 at the end of this month): jump to state 1 if he spends 4, P_13 = a_4; jump to state 2 if he spends 3, P_23 = a_3; stay in state 3 if he spends 1 OR 2, P_33 = a_1 + a_2. For a_1 = a_2 = a_3 = a_4 = ¼: S_13 = R_13 = (Q⁴)_13 = 201/256.
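The value 201/256 can be verified with exact arithmetic (a sketch using Python's fractions; Q is stored column-stochastic as in the slides, Q[i][j] = Pr[i | j], with states 1..3 at 0-based indices 0..2):

```python
# Sketch: verify (Q^4)_13 = 201/256 for the E8 absorbing chain with
# a_1 = a_2 = a_3 = a_4 = 1/4.
from fractions import Fraction

q = Fraction(1, 4)
Q = [
    [1, 2 * q, q],       # to state 1 (absorbing): a_3 + a_4 from 2, a_4 from 3
    [0, q,     q],       # to state 2: a_2 from 2, a_3 from 3
    [0, q,     2 * q],   # to state 3: a_1 from 2, a_1 + a_2 from 3
]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

S = Q
for _ in range(3):       # S = Q^4
    S = matmul(S, Q)

prob = S[0][2]           # (Q^4)_13 with 0-based indices
```

Because state 1 is absorbing, this entry is exactly the probability of ever reaching capital 1 or lower within 4 months.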

33 TIME DEVELOPMENT OF STATE PROBABILITIES AND APPROACH TO EQUILIBRIUM

34 State Probabilities A system is described by a Markov chain {S(0), S(1), … S(n), …}; S(n), the state at time n, is a random variable. S(n) can take values from a set of size m: i = 1, 2, 3, … m, with state probabilities at time n: Pr[S(n) = i], i = 1, 2, 3, … m. Denote s_i(n) = Pr[S(n) = i]. Example: if at time 0 the system is definitely in state 5, what are the state probabilities of the system at time 0, s_i(0)? s_5(0) = 1; s_j(0) = 0 for all j ≠ 5.

35 Time development of State Probabilities Find the relation between s_i(n) and s_j(n-1). Law of total probability: s_i(n) = Σ_j Pr[S(n) = i | S(n-1) = j] * Pr[S(n-1) = j], so s_i(n) = Σ_j P_ij s_j(n-1), sum over j = 1 to m, for n = 1, 2, … In matrix form: s(n) = P*s(n-1), where s(n) is a column vector, the state probabilities vector at time n, with Σ_i s_i(n) = 1 (sum over i from 1 to m), n = 1, 2, …

36 Calculating the State probabilities s(1) = P*s(0). s(2) = P*s(1) = P²*s(0). s(3) = P³*s(0). What is the meaning of an element of P³? … s(n) = Pⁿ*s(0). What is the meaning of an element of Pⁿ? What is the meaning of s(0)?
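The iteration s(n) = P*s(n-1) can be sketched on the two-state E2 chain, starting from s(0) = [1, 0]^T (state 1 certain):

```python
# Sketch: time development of the state probabilities on the E2 chain.
P = [[0.4, 0.2],
     [0.6, 0.8]]

def step(P, s):
    # one update s(n) = P * s(n-1), with column-stochastic P
    return [sum(P[i][j] * s[j] for j in range(len(s))) for i in range(len(P))]

s = [1.0, 0.0]           # s(0): the chain starts in state 1 with certainty
history = [s]
for _ in range(3):
    s = step(P, s)
    history.append(s)    # history[n] is s(n)
```

After one step s(1) = [0.4, 0.6]^T, the first column of P, and every s(n) still sums to 1.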

37 Exercise E9: Memory cache Memory: a 3-level hierarchy: cache on chip, RAM, disk. The Memory Management Module of the OS accesses blocks of a file from one of these locations. Memory state variable S(n): the location of the block at time step n. States: 1 (in cache), 2 (in RAM), 3 (on disk). Assume the location of the initial block is the cache (state 1). Assume the conditional probabilities of the next block location, conditioned on the current block location, are given. Explain the values in this matrix.

38 Exercise E9: Memory cache (cont’d) Calculate the probability that the location of the fourth block is the cache: s(3) = P³*s(0). What is s(0)? s(3) = P³*s(0) = [0.386, 0.325, 0.289]^T. The probability that the 4th block will be accessed from the cache is 0.386; from RAM it is 0.325.

39 Markov Chain at equilibrium, n → ∞ s(n) = [s_1(n), s_2(n), … s_m(n)]^T is the state probabilities vector, s_i(n) = Pr[S(n) = i]. Assume s(n) → s when n → ∞. Meaning? s_i is a fixed probability of the system being in state i after a very long time of development (hence equilibrium). How to calculate s? 1: s(n) = P*s(n-1); what do we get when n → ∞? s = P*s, so s is an eigenstate of P with eigenvalue λ = 1. 2: s(n) = Pⁿ*s(0); what do we get when n → ∞? s is the limit of Pⁿ*s(0) when n → ∞. Does s depend on s(0)?

40 A motivating example As n → ∞, all columns of P^∞ become the same. Denote by p a column of P^∞. P^∞ * s(0) = p (prove it), so s = p is the steady-state vector, independent of s(0). All columns of P^∞ are equal to the eigenstate of P with eigenvalue 1. Each column of P^∞ is the steady-state probabilities vector.
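The convergence of Pⁿ can be seen numerically (a sketch on the two-state E2 chain, whose steady state works out to [0.25, 0.75]^T by solving s = P*s with s_1 + s_2 = 1):

```python
# Sketch: for large n, every column of P^n approaches the steady-state
# vector (E2 chain; second eigenvalue 0.2, so convergence is fast).
P = [[0.4, 0.2],
     [0.6, 0.8]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(49):      # Pn = P^50
    Pn = matmul(Pn, P)
```

Both columns of Pn are (numerically) equal to [0.25, 0.75]^T, regardless of the starting column, which is exactly the independence from s(0) claimed on the slide.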

41 Exercise E10: Steady State probabilities of a Router buffer Probability of arrival of a packet during a time step: a. Probability of no arrival: b = 1-a. Probability of departure of a packet during a time step: c. Probability of no departure: d = 1-c. Probability that the buffer stays empty: f_0 = 1 - ad. Probability that the buffer stays in the same non-empty state: f = ac + bd.

42 Exercise E10: Steady State probabilities of a Router buffer (cont’d) s = (s_0, s_1, s_2, … s_i, …)^T, with P*s = s. Simple recurrence equations: ad·s_0 - bc·s_1 = 0; ad·s_0 - g·s_1 + bc·s_2 = 0; ad·s_{i-1} - g·s_i + bc·s_{i+1} = 0, where g = 1 - f. Solution: s_i = (ad/bc)^i s_0 for i ≥ 0, i.e. s_i = ρ^i s_0 where ρ = ad/bc < 1. How to calculate s_0? From the normalization Σ_i s_i = 1: s_0 = 1 - ρ.
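A sketch checking that s_i = (1-ρ)ρ^i indeed satisfies the recurrence equations and the normalization (the values a = 0.3, c = 0.5 are my example; any a, c with ρ < 1 would do):

```python
# Sketch: verify the geometric steady-state solution of E10 numerically.
a, c = 0.3, 0.5                  # assumed example values with rho < 1
b, d = 1 - a, 1 - c
g = 1 - (a * c + b * d)          # g = 1 - f; note g = ad + bc
rho = (a * d) / (b * c)          # here rho = 3/7

s = [(1 - rho) * rho ** i for i in range(50)]   # s_i = (1 - rho) * rho^i

# Residuals of the balance equations: all should vanish.
res0 = a * d * s[0] - b * c * s[1]
res = [a * d * s[i - 1] - g * s[i] + b * c * s[i + 1] for i in range(1, 49)]
```

The identity g = ad + bc (since ac + ad + bc + bd = 1) is what makes the geometric ansatz cancel term by term.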

