Markov Processes: What is a Markov Process?

Markov Processes What is a Markov Process? A stochastic process that possesses the Markovian property. A process has the Markovian property if

    P{Xt+1 = j | X0 = k0, X1 = k1, …, Xt-1 = kt-1, Xt = i} = P{Xt+1 = j | Xt = i}

for t = 0, 1, … and every sequence i, j, k0, k1, …, kt-1. In other words, any future state depends only on the present state, not on the path by which the process arrived there.

Markov Processes cont. This conditional probability, P{Xt+1 = j | Xt = i}, is called the one-step transition probability. If

    P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i}  for all t = 1, 2, …

then the one-step transition probabilities are said to be stationary and are therefore referred to as stationary transition probabilities.

Markov Processes cont. Let pij = P{Xt+1 = j | Xt = i}. For a four-state process, collect these probabilities into a matrix:

        state    0    1    2    3
          0     p00  p01  p02  p03
    P =   1     p10  p11  p12  p13
          2     p20  p21  p22  p23
          3     p30  p31  p32  p33

P is referred to as the probability transition matrix; each row sums to 1.

Markov Processes cont. Suppose the probability you win depends on whether you won the last time you played some game. Say, if you won last time, there is a 70% chance of winning the next time; if you lost last time, there is a 60% chance you lose the next time. Can the process of winning and losing be modeled as a Markov process? Yes: the next outcome depends only on the current one. Let state 0 be a win and state 1 be a loss. Then:

        state    0     1
    P =   0     .70   .30
          1     .40   .60
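To make the example concrete, here is a minimal Python sketch (not from the original slides; the function name and seed are illustrative) that simulates this win/lose chain by drawing each next state from the row of P for the current state:

```python
import numpy as np

# One-step transition matrix from the slide:
# state 0 = win, state 1 = lose.
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

rng = np.random.default_rng(seed=42)  # seed chosen arbitrarily

def simulate(P, start, steps):
    """Simulate the chain: each step draws the next state from
    the row of P corresponding to the current state."""
    state = start
    path = [state]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))  # e.g. [0, 0, 1, 0, ...]
```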

Markov Processes cont. See handout on n-step transition matrix.
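Since the handout itself is not reproduced in this transcript, a minimal sketch of the idea it covers, assuming the win/lose matrix above: by the Chapman-Kolmogorov equations, the n-step transition matrix is the nth matrix power of P.

```python
import numpy as np

P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

# n-step transition matrix: P(n) = P^n.
for n in (1, 2, 4, 8, 16):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn[0])  # win/lose probabilities n plays after a win

# The rows of P^n converge to a common vector: the steady-state
# probabilities discussed on the next slide.
```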

Markov Processes cont. As n grows, every row of P^n approaches the same vector. Let

               state   0    1    2   ...   N
                 0    p0   p1   p2  ...   pN
    P^n →        1    p0   p1   p2  ...   pN
                 2    p0   p1   p2  ...   pN
                 3    p0   p1   p2  ...   pN

Then π = [p0, p1, p2, p3, …, pN] is the vector of steady-state probabilities.

Markov Processes cont. Observing that P(n) = P(n-1)P, as n → ∞ this becomes π = πP:

                                                  p00  p01  p02  ...  p0N
                                                  p10  p11  p12  ...  p1N
    [p0, p1, p2, …, pN] = [p0, p1, p2, …, pN] ×   p20  p21  p22  ...  p2N
                                                   :    :    :         :
                                                  pN0  pN1  pN2  ...  pNN

Expanding this product gives N+1 equations in the N+1 unknowns p0, p1, …, pN, but the rank of the system is only N, so one equation is redundant. Replacing any one of them with the normalization p0 + p1 + p2 + … + pN = 1 restores N+1 independent equations in N+1 unknowns.

Markov Processes cont. Show example of obtaining π = πP from the transition matrix:

        state    0     1
    P =   0     .70   .30
          1     .40   .60
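Since the example was worked live, a sketch of the computation under the matrix above: π = πP gives p0 = .70 p0 + .40 p1 and p1 = .30 p0 + .60 p1; these two equations are redundant, so dropping one and adding p0 + p1 = 1 yields p0 = 4/7 ≈ .571 and p1 = 3/7 ≈ .429. A short Python check (illustrative, not from the slides):

```python
import numpy as np

P = np.array([[0.70, 0.30],
              [0.40, 0.60]])
n = len(P)

# Solve pi P = pi together with sum(pi) = 1: transpose the
# system (P^T - I) pi = 0 and replace its last (redundant)
# equation with the normalization row of ones.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # [0.5714... 0.4285...]  i.e. [4/7, 3/7]
```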

Markov Processes cont. Break for Exercise

Markov Processes cont. State diagrams:

        state    0     1
    P =   0     .70   .30
          1     .40   .60

[State diagram: two nodes, 0 and 1, with self-loops labeled .70 and .60, an arc 0 → 1 labeled .30, and an arc 1 → 0 labeled .40.]

Markov Processes cont. State diagrams:

        state    0     1     2     3
          0     .5    .5     0     0
    P =   1     .5    .5     0     0
          2     .25   .25   .25   .25
          3      0     0     0     1

[State diagram: four nodes, 0 through 3; states 0 and 1 feed only each other, state 2 can move to any state with probability .25, and state 3 loops to itself with probability 1.]

Markov Processes cont. Classification of States:
1. A state j is accessible from state i (i → j) if it is possible to transition from i to j in a finite number of steps.
2. States i and j communicate (i ↔ j) if j is accessible from i and i is accessible from j.
3. The communicating class of state i is the set C(i) = {j : i ↔ j}.
4. If the communicating class C(i) = ∅, then i is a non-return state.
5. A process is said to be irreducible if all states within the process communicate.

Markov Processes cont.
6. A closed communicating class is one from which there is no escape. Note: an ergodic process can have at most one closed communicating class.
7. If i is the only member of C(i) and no state j is accessible from i, then i is an absorbing (or capturing) state: pii = 1.
8. A return state may be revisited infinitely often (recurrent) or only finitely often (non-recurrent, or transient) in the long run.
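To illustrate these definitions (not part of the original slides), a minimal Python sketch that finds the communicating classes of the four-state chain above by computing mutual accessibility; for that chain it reports {0, 1} and {3} as closed classes and {2} as a class that can be escaped:

```python
import numpy as np

# Transition matrix of the four-state example above.
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.50, 0.00, 0.00],
              [0.25, 0.25, 0.25, 0.25],
              [0.00, 0.00, 0.00, 1.00]])
n = len(P)

# reach[i][j] = 1 if state j is accessible from state i.
adj = (P > 0).astype(int)
reach = np.eye(n, dtype=int)
for _ in range(n):                         # n steps suffice
    reach = ((reach + reach @ adj) > 0).astype(int)

comm = (reach * reach.T) > 0               # i <-> j: mutual accessibility

classes = {frozenset(np.flatnonzero(comm[i])) for i in range(n)}
for c in sorted(classes, key=min):
    members = sorted(c)
    outside = [j for j in range(n) if j not in c]
    # A class is closed if no positive transition leaves it.
    closed = not adj[np.ix_(members, outside)].any()
    print(members, "closed" if closed else "not closed")
```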

Markov Processes cont. First Passage Times:
1. The first passage time from state i to state j, Tij, is the number of transitions required to enter state j for the first time, given that we start in state i. The recurrence time for state i, Tii, is the number of transitions required to return to state i.
2. The first passage probability fij(n) is the probability that the first passage time from i to j equals n:
   fij(n) = P[Tij = n]
   fij(1) = pij
   fij(2) = P[X2 = j, X1 ≠ j | X0 = i] = Σ(k≠j) pik pkj
   and, in general, fij(n) = Σ(k≠j) pik fkj(n-1).

Markov Processes cont. First Passage Times:
3. The mean first passage time: mij = E[Tij] = Σ(n≥1) n fij(n); mii is the mean recurrence time of state i.
   If mii = ∞ then i is null recurrent; if mii < ∞ then i is positive recurrent.
4. Probability of absorption: the probability of ever going from state i to state k. Let fii = Σ(n≥1) fii(n) for i = 0, 1, 2, …, M.
   If fii = 1, then i is recurrent; if fii < 1, then i is transient.
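A minimal computational sketch (not from the slides) for the win/lose example: the mean first passage times satisfy mij = 1 + Σ(k≠j) pik mkj, a linear system that can be solved for each target state j. This assumes every state is positive recurrent, so the system is nonsingular:

```python
import numpy as np

# Win/lose chain from the earlier example.
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])
n = len(P)

# For a fixed target j, the mean first passage times satisfy
#   m[i][j] = 1 + sum over k != j of P[i][k] * m[k][j],
# i.e. (I - Q) m = 1, where Q is P with column j zeroed out.
M = np.zeros((n, n))
for j in range(n):
    Q = P.copy()
    Q[:, j] = 0.0                 # forbid passing through j
    M[:, j] = np.linalg.solve(np.eye(n) - Q, np.ones(n))

print(M)
# [[1.75   3.333...]
#  [2.5    2.333...]]
# Note m[i][i] = 1 / pi[i]: the mean recurrence time is the
# reciprocal of the steady-state probability (7/4 and 7/3 here).
```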

Markov Processes cont. Expected Average Value (Cost) per Unit Time: How does one find the long-run average reward (cost) of a Markov process? Let V(Xt) be a function that represents the reward for being in state Xt. Then the long-run expected average reward per unit time is

    lim (n→∞) E[ (1/n) Σ(t=1..n) V(Xt) ] = Σ(j) pj V(j),

a weighted average of the state rewards, weighted by the steady-state probabilities.
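A minimal numeric sketch (the reward values are invented for illustration): for the win/lose chain, suppose a win pays 5 and a loss costs 2; the long-run average reward per play is then the dot product of the steady-state vector with the reward vector:

```python
import numpy as np

P = np.array([[0.70, 0.30],
              [0.40, 0.60]])
V = np.array([5.0, -2.0])   # hypothetical rewards: +5 win, -2 loss
n = len(P)

# Steady-state probabilities, solved as on the earlier slide.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Long-run average reward per play: sum of pi[j] * V(j).
print(pi @ V)   # (4/7)*5 + (3/7)*(-2) = 2.0
```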