Markov Chain Part 2. Multimedia Systems Research Group. Advisor: Dr. 林朝興. Student: 鄭義繼

Outline
- Review
- Classification of States of a Markov Chain
- First Passage Times
- Absorbing States

Review Stochastic process: A stochastic process is an indexed collection of random variables {X_t} = {X_0, X_1, X_2, ...} describing the behavior of a system operating over some period of time. Markov chain: A stochastic process having the Markovian property: P{X_{t+1} = j | X_0 = k_0, X_1 = k_1, ..., X_{t-1} = k_{t-1}, X_t = i} = P{X_{t+1} = j | X_t = i}. One-step transition probability: p_ij = P{X_{t+1} = j | X_t = i}.
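The one-step transition probabilities above can be sketched in code. This is a minimal illustration with a hypothetical two-state chain (the matrix values are invented for the example, not taken from the slides):

```python
import random

# Hypothetical two-state chain (0 = sunny, 1 = rainy); the numbers
# are illustrative only, not from the slides.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, P, rng=random):
    """Sample the next state. By the Markovian property, the distribution
    of X_{t+1} depends only on the current state, i.e. on row P[state]."""
    u = rng.random()
    cum = 0.0
    for j, p_ij in enumerate(P[state]):
        cum += p_ij
        if u < cum:
            return j
    return len(P[state]) - 1  # guard against floating-point round-off

# Each row of a one-step transition matrix is a probability distribution.
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12

# Simulate a short path starting from state 0.
random.seed(0)
path = [0]
for _ in range(10):
    path.append(step(path[-1], P))
```

The `step` helper and the weather interpretation are assumptions made for the sketch; any stochastic matrix can be dropped in for `P`.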

Review (cont.) n-step transition probability: p_ij^(n) = P{X_{t+n} = j | X_t = i}. Chapman-Kolmogorov equations: p_ij^(n) = Σ_{k=0}^{M} p_ik^(m) p_kj^(n-m), for all i = 0, 1, ..., M, j = 0, 1, ..., M, and any m = 1, 2, ..., n-1, n = m+1, m+2, ....
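The Chapman-Kolmogorov equations say the n-step matrix is the n-th matrix power of P, so they can be checked numerically. A sketch with an illustrative 3-state matrix (the values are invented for the example):

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """The n-step transition matrix P^(n) is the n-th matrix power of P."""
    R = P
    for _ in range(n - 1):
        R = matmul(R, P)
    return R

# Illustrative 3-state transition matrix (not from the slides).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

# Chapman-Kolmogorov with n = 5, m = 2:
# p_ij^(5) = sum over k of p_ik^(2) * p_kj^(3).
P5, P2, P3 = n_step(P, 5), n_step(P, 2), n_step(P, 3)
for i in range(3):
    for j in range(3):
        assert abs(P5[i][j] - sum(P2[i][k] * P3[k][j] for k in range(3))) < 1e-12
```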

Review (cont.) One-step transition matrix (rows and columns indexed by states 0, 1, ..., M):

          | p_00  p_01  ...  p_0M |
    P  =  | p_10  p_11  ...  p_1M |
          |  ...   ...  ...   ... |
          | p_M0  p_M1  ...  p_MM |

Review (cont.) n-step transition matrix:

              | p_00^(n)  p_01^(n)  ...  p_0M^(n) |
    P^(n)  =  | p_10^(n)  p_11^(n)  ...  p_1M^(n) |
              |   ...       ...     ...    ...    |
              | p_M0^(n)  p_M1^(n)  ...  p_MM^(n) |

Review (cont.) Steady-state probabilities: there is a limiting probability π_j that the system will be in state j after a large number of transitions, and this probability is independent of the initial state: π_j = lim_{n→∞} p_ij^(n).
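A quick numerical check of this limiting behavior: raising a transition matrix to a high power, every row converges to the same distribution, which is why the limit does not depend on the starting state. The two-state matrix here is a hypothetical example, not one from the slides:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative two-state chain (values invented for the example).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Compute P^100: the rows should be (nearly) identical.
Pn = P
for _ in range(99):
    Pn = matmul(Pn, P)

row0, row1 = Pn
assert all(abs(a - b) < 1e-9 for a, b in zip(row0, row1))
# For this particular matrix the limit works out to (5/6, 1/6).
assert abs(row0[0] - 5/6) < 1e-9
```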

Classification of States of a Markov Chain Accessible: State j is accessible from state i if p_ij^(n) > 0 for some n ≥ 0. Communicate: If state j is accessible from state i and state i is accessible from state j, then states i and j are said to communicate. If state i communicates with state j and state j communicates with state k, then state i communicates with state k. Class: The states may be partitioned into one or more separate classes such that those states that communicate with each other are in the same class.

Classification of States of a Markov Chain (cont.) Irreducible : A Markov chain is said to be irreducible if there is only one class, i.e., all the states communicate.

Classification of States of a Markov Chain (cont.) A gambling example: Suppose that a player has $1 and, with each play of the game, wins $1 with probability p > 0 or loses $1 with probability 1-p. The game ends when the player either accumulates $3 or goes broke. With states 0, 1, 2, 3 denoting the player's capital:

          | 1    0    0    0 |
    P  =  | 1-p  0    p    0 |
          | 0    1-p  0    p |
          | 0    0    0    1 |

Classification of States of a Markov Chain (cont.) Transient state: A state is said to be transient if, upon entering this state, the process may never return to it. Therefore, state i is transient if and only if there exists a state j (j ≠ i) that is accessible from state i but not vice versa. Recurrent state: A state is said to be recurrent if, upon entering this state, the process definitely will return to it again. Therefore, a state is recurrent if and only if it is not transient.
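For a finite chain, the transient/recurrent test above reduces to graph reachability on the edges with p_ij > 0. A sketch using the gambling example, with the illustrative choice p = 0.5 (the slides leave p generic):

```python
def accessible(P, i):
    """States j with p_ij^(n) > 0 for some n >= 0 (graph reachability)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j, p in enumerate(P[s]):
            if p > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def is_transient(P, i):
    """In a finite chain, i is transient iff some j is accessible from i
    while i is not accessible back from j."""
    return any(i not in accessible(P, j) for j in accessible(P, i))

# Gambling example: start with $1, win or lose $1 per play, stop at $0 or $3.
p = 0.5  # illustrative value; the slides only require p > 0
P = [[1,     0, 0, 0],
     [1 - p, 0, p, 0],
     [0, 1 - p, 0, p],
     [0,     0, 0, 1]]

# States 1 and 2 are transient (the game eventually ends);
# states 0 and 3 are recurrent (in fact absorbing).
assert is_transient(P, 1) and is_transient(P, 2)
assert not is_transient(P, 0) and not is_transient(P, 3)
```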

Classification of States of a Markov Chain (cont.) Absorbing state: A state is said to be an absorbing state if, upon entering this state, the process never will leave it again. Therefore, state i is an absorbing state if and only if p_ii = 1. In the gambling example, states 0 and 3 are absorbing:

          | 1    0    0    0 |
    P  =  | 1-p  0    p    0 |
          | 0    1-p  0    p |
          | 0    0    0    1 |

Classification of States of a Markov Chain (cont.) Period: The period of state i is the largest integer t such that p_ii^(n) = 0 for all values of n other than t, 2t, 3t, .... In the gambling example, p_11^(2k+1) = 0 for k = 0, 1, 2, ..., so state 1 has period 2. Aperiodic: If there are two consecutive numbers s and s+1 such that the process can be in state i at times s and s+1, the state is said to have period 1 and is called aperiodic. Ergodic: Recurrent states that are aperiodic are called ergodic states. A Markov chain is said to be ergodic if all of its states are ergodic. For any irreducible ergodic Markov chain, the steady-state probabilities π_j exist.
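The period can be computed as the greatest common divisor of the return times n with p_ii^(n) > 0. A sketch, truncated at a finite horizon (an approximation: max_n must be large enough for the chain at hand), again using the gambling example with the illustrative choice p = 0.5:

```python
from math import gcd

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_n=20):
    """gcd of all n <= max_n with p_ii^(n) > 0 -- a finite truncation of
    the definition; returns 0 if no return occurs within max_n steps."""
    g, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            g = gcd(g, n)
        Pn = matmul(Pn, P)
    return g

# Gambling example with p = 0.5 (illustrative value).
P = [[1,   0,   0,   0],
     [0.5, 0,   0.5, 0],
     [0,   0.5, 0,   0.5],
     [0,   0,   0,   1]]

assert period(P, 1) == 2   # returns to $1 happen only at even times
assert period(P, 0) == 1   # absorbing, hence aperiodic
```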

Classification of States of a Markov Chain (cont.) An inventory example: The process is irreducible and ergodic and therefore has steady-state probabilities. The transition matrix (values as in the Hillier & Lieberman inventory example, states 0, 1, 2, 3):

          | 0.080  0.184  0.368  0.368 |
    P  =  | 0.632  0.368  0      0     |
          | 0.264  0.368  0.368  0     |
          | 0.080  0.184  0.368  0.368 |
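Since this chain is irreducible and ergodic, its steady-state distribution can be approximated by raising P to a high power, as earlier in the review. A sketch (the matrix entries follow the Hillier & Lieberman inventory example and should be treated as an assumption if your edition differs):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Inventory-example transition matrix (Hillier & Lieberman values;
# an assumption here, cross-check against the text).
P = [[0.080, 0.184, 0.368, 0.368],
     [0.632, 0.368, 0.0,   0.0  ],
     [0.264, 0.368, 0.368, 0.0  ],
     [0.080, 0.184, 0.368, 0.368]]

# Power iteration: any row of P^n approximates the steady-state vector.
Pn = P
for _ in range(499):
    Pn = matmul(Pn, P)
pi = Pn[0]

# pi is a distribution and is (numerically) invariant: pi = pi * P.
assert abs(sum(pi) - 1.0) < 1e-6
for j in range(4):
    assert abs(pi[j] - sum(pi[k] * P[k][j] for k in range(4))) < 1e-6
```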

First Passage Times First passage time: The first passage time from state i to state j is the number of transitions made by the process in going from state i to state j for the first time. Recurrence time: When j = i, the first passage time is just the number of transitions until the process returns to the initial state i, and is called the recurrence time for state i. Example: X_0 = 3, X_1 = 2, X_2 = 1, X_3 = 0, X_4 = 3, X_5 = 1. The first passage time from state 3 to state 1 is 2 weeks. The recurrence time for state 3 is 4 weeks.

First Passage Times (cont.) f_ij^(n) denotes the probability that the first passage time from state i to state j is n. Recursive relationship: f_ij^(1) = p_ij, and for n > 1, f_ij^(n) = Σ_{k≠j} p_ik f_kj^(n-1).

First Passage Times (cont.) The inventory example: f_30^(1) = p_30 = 0.080; f_30^(2) = p_31 f_10^(1) + p_32 f_20^(1) + p_33 f_30^(1) = 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243. Sum: Σ_{n=1}^{∞} f_ij^(n) ≤ 1.
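The recursion translates directly into code. A sketch that reproduces the f_30^(1) and f_30^(2) values above (the matrix entries again follow the Hillier & Lieberman inventory example, an assumption if your edition differs):

```python
def first_passage_prob(P, i, j, n):
    """f_ij^(n) via the recursion: f_ij^(1) = p_ij, and
    f_ij^(n) = sum over k != j of p_ik * f_kj^(n-1)."""
    if n == 1:
        return P[i][j]
    return sum(P[i][k] * first_passage_prob(P, k, j, n - 1)
               for k in range(len(P)) if k != j)

# Inventory-example transition matrix (Hillier & Lieberman values).
P = [[0.080, 0.184, 0.368, 0.368],
     [0.632, 0.368, 0.0,   0.0  ],
     [0.264, 0.368, 0.368, 0.0  ],
     [0.080, 0.184, 0.368, 0.368]]

assert abs(first_passage_prob(P, 3, 0, 1) - 0.080) < 1e-12
assert abs(first_passage_prob(P, 3, 0, 2) - 0.243) < 1e-3
```

The naive recursion recomputes subproblems, which is fine for small n; memoization would make it linear in n.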

First Passage Times (cont.) Expected first passage time: μ_ij = ∞ if Σ_{n=1}^{∞} f_ij^(n) < 1, and μ_ij = Σ_{n=1}^{∞} n f_ij^(n) if Σ_{n=1}^{∞} f_ij^(n) = 1. In the latter case, μ_ij uniquely satisfies μ_ij = 1 + Σ_{k≠j} p_ik μ_kj.

First Passage Times (cont.) The inventory example:
μ_30 = 1 + p_31 μ_10 + p_32 μ_20 + p_33 μ_30
μ_20 = 1 + p_21 μ_10 + p_22 μ_20 + p_23 μ_30
μ_10 = 1 + p_11 μ_10 + p_12 μ_20 + p_13 μ_30
Solving: μ_10 = 1.58 weeks, μ_20 = 2.51 weeks, μ_30 = 3.50 weeks.
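These three equations form a linear system (I - Q)μ = 1, where Q is P restricted to states 1..3, so they can be solved mechanically. A sketch with a small hand-rolled Gauss-Jordan solver (inventory matrix entries as in Hillier & Lieberman, an assumption if your edition differs):

```python
def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting: solves Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Inventory-example transition matrix (Hillier & Lieberman values).
P = [[0.080, 0.184, 0.368, 0.368],
     [0.632, 0.368, 0.0,   0.0  ],
     [0.264, 0.368, 0.368, 0.0  ],
     [0.080, 0.184, 0.368, 0.368]]

# mu_i0 = 1 + sum over k != 0 of p_ik * mu_k0 for i = 1, 2, 3,
# rearranged as (I - Q) mu = 1 with Q = P restricted to states 1..3.
states = [1, 2, 3]
A = [[(1.0 if i == j else 0.0) - P[si][sj]
      for j, sj in enumerate(states)]
     for i, si in enumerate(states)]
mu10, mu20, mu30 = solve(A, [1.0, 1.0, 1.0])

assert abs(mu10 - 1.58) < 0.01
assert abs(mu20 - 2.51) < 0.02
assert abs(mu30 - 3.50) < 0.01
```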

Absorbing States Absorbing state: A state k is called an absorbing state if p_kk = 1, so that once the chain visits k it remains there forever. A gambling example: Suppose that two players (A and B), each having $2, agree to keep playing the game and betting $1 at a time until one player is broke. The probability of A winning a single bet is 1/3.

Absorbing States (cont.) The transition matrix from A's point of view (states 0-4 are A's capital):

          | 1    0    0    0    0   |
          | 2/3  0    1/3  0    0   |
    P  =  | 0    2/3  0    1/3  0   |
          | 0    0    2/3  0    1/3 |
          | 0    0    0    0    1   |

Absorbing States (cont.) Probability of absorption: If k is an absorbing state and the process starts in state i, the probability of ever going to state k is called the probability of absorption into state k, given that the system started in state i. These probabilities satisfy f_ik = Σ_{j=0}^{M} p_ij f_jk for i = 0, 1, ..., M, subject to the conditions f_kk = 1 and f_ik = 0 if state i is recurrent and i ≠ k. The gambling example: f_20 = 4/5, f_24 = 1/5.
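The absorption equations can be solved by fixed-point iteration: after t rounds, the iterate equals the probability of being absorbed into k within t steps, which converges to f_ik. A sketch for the two-player gambling example:

```python
# Gambling example: states are A's capital 0..4; A wins each $1 bet
# with probability 1/3; states 0 and 4 are absorbing.
P = [[1.0, 0,   0,   0,   0  ],
     [2/3, 0,   1/3, 0,   0  ],
     [0,   2/3, 0,   1/3, 0  ],
     [0,   0,   2/3, 0,   1/3],
     [0,   0,   0,   0,   1.0]]

def absorption_prob(P, k, iters=500):
    """f_ik for all i: fixed-point iteration of f_ik = sum_j p_ij f_jk,
    holding f at 1 on state k and at 0 on every other absorbing state."""
    n = len(P)
    f = [1.0 if i == k else 0.0 for i in range(n)]
    for _ in range(iters):
        f = [f[i] if P[i][i] == 1.0 else
             sum(P[i][j] * f[j] for j in range(n))
             for i in range(n)]
    return f

f_to_0 = absorption_prob(P, 0)   # probability A goes broke
f_to_4 = absorption_prob(P, 4)   # probability A wins everything

assert abs(f_to_0[2] - 4/5) < 1e-9   # f_20 = 4/5, as on the slide
assert abs(f_to_4[2] - 1/5) < 1e-9   # f_24 = 1/5
assert abs(f_to_0[2] + f_to_4[2] - 1.0) < 1e-9
```

Solving the linear system directly (as with the expected first passage times) works too; iteration is used here only to keep the sketch short.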

Reference Hillier and Lieberman, Introduction to Operations Research, 7th edition, McGraw-Hill.

Q & A