11. Markov Chains (MCs) 2
Courtesy of J. Bard, L. Page, and J. Heyl.

n-step transition probabilities (review)

Transition prob. matrix. The n-step transition probability from state i to state j is $p_{ij}(n) = P[X_{m+n} = j \mid X_m = i]$, and the n-step transition matrix (for all states) is then $P(n) = P^n$. For instance, the two-step transition matrix is $P(2) = P \cdot P = P^2$.
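
A minimal sketch of this computation in Python (assuming NumPy); the matrix P is the one-step transition matrix of the two-state silence/speech chain from Examples 11.10 and 11.11 below:

```python
import numpy as np

# One-step transition matrix of the two-state silence/speech chain
# (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two-step transition matrix P(2) = P^2.
P2 = np.linalg.matrix_power(P, 2)
print(P2)   # [[0.83 0.17]
            #  [0.34 0.66]]
```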

Chapman-Kolmogorov equations. The probability of going from state i at t = 0, passing through state k at t = m, and ending at state j at t = m + n gives $p_{ij}(m+n) = \sum_k p_{ik}(m)\, p_{kj}(n)$. In matrix notation, $P(m+n) = P(m)P(n)$.
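
A quick numeric check of the Chapman-Kolmogorov identity $P(m+n) = P(m)P(n)$, again using the assumed two-state matrix above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
m, n = 3, 5

lhs = np.linalg.matrix_power(P, m + n)                             # P(m+n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)  # P(m)P(n)
print(np.allclose(lhs, rhs))   # True
```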

State probabilities

State probability (pmf of an RV!). Let $p(n) = \{p_j(n)\}$, for all $j \in E$, be the row vector of state probabilities at time n (i.e., the state probability vector). Thus, $p_j(n)$ is given by $p_j(n) = \sum_i p_i(n-1)\, p_{ij}$. From the initial state, $p_j(n) = \sum_i p_i(0)\, p_{ij}(n)$. In matrix notation, $p(n) = p(n-1)P = p(0)P^n$.
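
A sketch of the state-vector recursion for the same assumed two-state chain; note that p(n) is a row vector, so it multiplies P from the left:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.0, 1.0])                 # start in state 1 (speech)

p1 = p0 @ P                               # p(1) = p(0) P   -> [0.2 0.8]
p4 = p0 @ np.linalg.matrix_power(P, 4)    # p(4) = p(0) P^4 -> [0.5066 0.4934]
print(p1, p4)
```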

How an MC changes (Ex 11.10, 11.11). A two-state system: Silence (state 0) and Speech (state 1), with transition matrix

$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$

Suppose p(0) = (0, 1). Then
p(1) = p(0)P = (0,1)P = (0.2, 0.8)
p(2) = (0.2, 0.8)P = (0,1)P^2 = (0.34, 0.66)
p(4) = (0,1)P^4 = (0.507, 0.493)
p(8) = (0,1)P^8 = (0.629, 0.371)
p(16) = (0,1)P^16 = (0.665, 0.335)
p(32) = (0,1)P^32 = (0.667, 0.333)
p(64) = (0,1)P^64 = (0.667, 0.333)

Suppose p(0) = (1, 0). Then
p(1) = p(0)P = (0.9, 0.1)
p(2) = (1,0)P^2 = (0.83, 0.17)
p(4) = (1,0)P^4 = (0.747, 0.253)
p(8) = (1,0)P^8 = (0.686, 0.314)
p(16) = (1,0)P^16 = (0.668, 0.332)
p(32) = (1,0)P^32 = (0.667, 0.333)
p(64) = (1,0)P^64 = (0.667, 0.333)
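
The table above can be reproduced by iterating $p(n) = p(n-1)P$ from both initial conditions; both runs converge to (2/3, 1/3):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

for p in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
    for _ in range(64):
        p = p @ P                 # one step: p(n) = p(n-1) P
    print(np.round(p, 3))         # both print [0.667 0.333]
```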

Independence of initial condition

The lesson to take away: no matter what assumptions you make about the initial probability distribution, after a large number of steps the state probability distribution is approximately (2/3, 1/3). See pp. 666-667.

Steady state probabilities

State probabilities (pmf) converge. As $n \to \infty$, the transition probability matrix $P^n$ approaches a matrix whose rows are all equal to the same pmf. In matrix notation, $P^n \to \mathbf{1}\pi$, where $\mathbf{1}$ is a column vector of all 1's and $\pi = (\pi_0, \pi_1, \ldots)$. The convergence of $P^n$ implies the convergence of the state pmf's, since $p(n) = p(0)P^n \to p(0)\mathbf{1}\pi = \pi$.
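
This convergence of $P^n$ to the rank-one matrix $\mathbf{1}\pi$ is easy to see numerically; for the assumed two-state chain, both rows of $P^{64}$ agree to three decimals:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

P64 = np.linalg.matrix_power(P, 64)
print(np.round(P64, 3))   # [[0.667 0.333]
                          #  [0.667 0.333]] -- every row equals pi
```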

Steady state probability. The system reaches "equilibrium" or "steady state": as $n \to \infty$, $p_j(n) \to \pi_j$ and $p_i(n-1) \to \pi_i$. In matrix notation, $\pi = \pi P$, where $\pi$ is the stationary state pmf of the Markov chain. To solve this, combine $\pi = \pi P$ with the normalization condition $\sum_j \pi_j = 1$.

Speech activity system. From the steady-state equations $\pi = \pi P$:

$(\pi_0, \pi_1) = (\pi_0, \pi_1) \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}$

so $\pi_0 = 0.9\pi_0 + 0.2\pi_1$ and $\pi_1 = 0.1\pi_0 + 0.8\pi_1$, together with $\pi_0 + \pi_1 = 1$. Solving gives $\pi_0 = 2/3 \approx 0.667$ and $\pi_1 = 1/3 \approx 0.333$.
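
One common way to solve $\pi = \pi P$ with $\sum_j \pi_j = 1$ numerically (a sketch, not the slides' hand calculation) is to transpose to $(P^T - I)\pi^T = 0$, drop one redundant balance equation, and append the normalization row:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# (P^T - I) pi^T = 0 is rank-deficient by 1, so replace the last
# balance equation with the normalization constraint sum(pi) = 1.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # [0.66666667 0.33333333]
```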

Question 11-1: Alice, Bob, and Carol are playing Frisbee. Alice always throws to Carol. Bob always throws to Alice. Carol throws to Bob 2/3 of the time and to Alice 1/3 of the time. In the long run, what percentage of the time does each player have the Frisbee?
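
A sketch of one way to answer Question 11-1: model the players as states of a Markov chain (row i gives where player i throws next) and solve for the stationary distribution as above:

```python
import numpy as np

# States 0, 1, 2 = Alice, Bob, Carol.
P = np.array([[0.0, 0.0, 1.0],    # Alice always throws to Carol
              [1.0, 0.0, 0.0],    # Bob always throws to Alice
              [1/3, 2/3, 0.0]])   # Carol: Alice 1/3, Bob 2/3

# Solve pi = pi P with sum(pi) = 1, as in the speech example.
A = np.vstack([(P.T - np.eye(3))[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)   # [0.375 0.25 0.375] -> Alice 37.5%, Bob 25%, Carol 37.5%
```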