
Stochastic Process 1
An indexed collection of random variables {X_t}, t ∈ T: for each t ∈ T, X_t is a random variable.
T = index set; state space = range (possible values) of all X_t.
Stationary process: the joint distribution of the X's depends only on their relative positions (it is not affected by a time shift):
(X_{t1}, ..., X_{tn}) has the same distribution as (X_{t1+h}, X_{t2+h}, ..., X_{tn+h}).
E.g., (X_8, X_11) has the same distribution as (X_20, X_23).

Stochastic Process 2: Stochastic Process (cont.)
Markov process: the probability of any future event, given the present, does not depend on the past. For t_0 < t_1 < ... < t_{n-1} < t_n < t:
P(a ≤ X_t ≤ b | X_{t_n} = x_{t_n}, ..., X_{t_0} = x_{t_0})   [future | present | past]
= P(a ≤ X_t ≤ b | X_{t_n} = x_{t_n}).
Another way of writing this:
P{X_{t+1} = j | X_0 = k_0, X_1 = k_1, ..., X_t = i} = P{X_{t+1} = j | X_t = i}
for t = 0, 1, ... and every sequence i, j, k_0, k_1, ..., k_{t-1}.

Stochastic Process 3: Stochastic Process (cont.)
Markov chains: state space {0, 1, ...}.
Discrete time: T = {0, 1, 2, ...}
Continuous time: T = [0, ∞)
A (discrete-time) Markov chain is characterized by:
– a finite number of states,
– the Markovian property,
– stationary transition probabilities,
– a set of initial probabilities P{X_0 = i} for all i.

Stochastic Process 4: Stochastic Process (cont.)
Note: P_ij = P(X_{t+1} = j | X_t = i) = P(X_1 = j | X_0 = i); it depends only on going ONE step.

Stochastic Process 5: Stochastic Process (cont.)
At stage t the chain is in state i; at stage t + 1 it is in state j with probability P_ij. These are conditional probabilities! Note that, given X_t = i, the chain must enter some state at stage t + 1:
state 0 with probability P_i0, state 1 with probability P_i1, state 2 with probability P_i2, ..., state j with probability P_ij, ..., state m with probability P_im.

Stochastic Process 6: Stochastic Process (cont.)
It is convenient to give the transition probabilities in matrix form. P = [P_ij] is an (m+1) × (m+1) matrix; rows correspond to the state in the current stage:

        to state:   0     1    ...   j    ...   m
  from 0:        [ P_00  P_01  ...  P_0j  ...  P_0m ]
  from 1:        [ P_10  P_11  ...  P_1j  ...  P_1m ]
  ...
  from i:        [ P_i0  P_i1  ...  P_ij  ...  P_im ]
  ...
  from m:        [ P_m0  P_m1  ...  P_mj  ...  P_mm ]

Rows sum to 1.

Stochastic Process 7: Stochastic Process (cont.)
Example: t = day index 0, 1, 2, ...
X_t = 0: high defective rate on the t-th day
X_t = 1: low defective rate on the t-th day
Two states ==> m = 1, states (0, 1).
P_00 = P(X_{t+1} = 0 | X_t = 0) = 1/4
P_01 = P(X_{t+1} = 1 | X_t = 0) = 3/4
P_10 = P(X_{t+1} = 0 | X_t = 1) = 1/2
P_11 = P(X_{t+1} = 1 | X_t = 1) = 1/2

P = [ 1/4  3/4 ]
    [ 1/2  1/2 ]
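As a quick sketch of the matrix above (plain Python with exact fractions; the variable names are ours, not from the slides), we can write P down and check that each row is a valid conditional distribution:

```python
from fractions import Fraction as F

# P[i][j] = P(X_{t+1} = j | X_t = i) for the defect-rate example
P = [[F(1, 4), F(3, 4)],   # from state 0 (high defective rate)
     [F(1, 2), F(1, 2)]]   # from state 1 (low defective rate)

# Each row is a conditional distribution, so it must sum to 1.
for row in P:
    assert sum(row) == 1
```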

Stochastic Process 8: Stochastic Process (cont.)
Note: rows sum to 1.
P_00 = P(X_1 = 0 | X_0 = 0) = 1/4 = P(X_36 = 0 | X_35 = 0)   (stationary transition probabilities)
Also P(X_2 = 0 | X_1 = 0, X_0 = 1) = P(X_2 = 0 | X_1 = 0) = P_00   (Markov property)
What is P(X_2 = 0 | X_0 = 0)? This is a two-step transition: from stage 0 to stage 2 (or from t to t + 2).

Stochastic Process 9: Stochastic Process (cont.)
Condition on the state at the intermediate stage t + 1:
P(X_2 = 0, X_1 = 0 | X_0 = 0) = P_00 P_00
P(X_2 = 0, X_1 = 1 | X_0 = 0) = P_01 P_10
so
P(X_2 = 0 | X_0 = 0) = P_00 P_00 + P_01 P_10 = 1/4 · 1/4 + 3/4 · 1/2 = 7/16.
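The same two-step probability falls out of squaring the matrix; a minimal sketch, where `matmul` is our own helper implementing the Chapman-Kolmogorov sum:

```python
from fractions import Fraction as F

P = [[F(1, 4), F(3, 4)],
     [F(1, 2), F(1, 2)]]

def matmul(A, B):
    # (A @ B)[i][j] = sum_k A[i][k] * B[k][j]  (Chapman-Kolmogorov)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = matmul(P, P)
# P(X_2 = 0 | X_0 = 0) = 1/4 * 1/4 + 3/4 * 1/2 = 7/16
assert P2[0][0] == F(7, 16)
```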

Stochastic Process 10: Stochastic Process (cont.)
Performance questions to be answered:
– How often is a certain state visited?
– How much time will the system spend in a state?
– What is the average length of the intervals between visits?

Stochastic Process 11: Stochastic Process (cont.)
Other properties:
– Irreducible
– Recurrent
– Mean recurrence time
– Aperiodic
– Homogeneous

Stochastic Process 12: Stochastic Process (cont.)
If the chain is homogeneous, irreducible, and aperiodic, the limiting state probabilities
P_j = lim_{n→∞} P_j(n)   (j = 0, 1, 2, ...)
exist and are independent of the initial probabilities P_j(0).

Stochastic Process 13: Stochastic Process (cont.)
If all states of the chain are recurrent and their mean recurrence time is finite, the P_j's are a stationary probability distribution and can be determined by solving the equations
P_j = Σ_i P_i P_ij   (j = 0, 1, 2, ...)   and   Σ_i P_i = 1.
Solution ==> equilibrium state probabilities.
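For the two-state defect-rate chain from the earlier example these equations have a closed form: π_0 = P_10 / (P_01 + P_10). A sketch (the names `pi0`, `pi1` are ours):

```python
from fractions import Fraction as F

P = [[F(1, 4), F(3, 4)],
     [F(1, 2), F(1, 2)]]

# Closed form for a two-state chain:
# pi_0 = P10 / (P01 + P10), pi_1 = P01 / (P01 + P10)
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi1 = 1 - pi0

# Check stationarity (pi = pi P) and normalization.
assert pi0 == pi0 * P[0][0] + pi1 * P[1][0]
assert pi1 == pi0 * P[0][1] + pi1 * P[1][1]
assert pi0 + pi1 == 1
```

Here pi0 comes out to 2/5 and pi1 to 3/5: low-defective days are visited more often in the long run because state 0 is left with high probability (3/4).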

Stochastic Process 14: Stochastic Process (cont.)
Mean recurrence time of S_j: t_rj = 1 / P_j.
Independence allows us to calculate the time intervals spent in S_j: state durations are geometrically distributed with mean 1 / (1 - P_jj).
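The geometric-duration claim can be sanity-checked by simulation; a minimal sketch with an arbitrary self-loop probability of our own choosing (p_jj = 0.5, not from the slides):

```python
import random

random.seed(0)
p_jj = 0.5  # assumed self-loop probability, for illustration only

def sojourn():
    # Each step the chain stays in the state with probability p_jj,
    # so the number of steps spent there is geometric.
    n = 1
    while random.random() < p_jj:
        n += 1
    return n

trials = 100_000
mean = sum(sojourn() for _ in range(trials)) / trials
# Theoretical mean duration is 1 / (1 - p_jj) = 2 steps.
assert abs(mean - 1 / (1 - p_jj)) < 0.05
```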

Stochastic Process 15: Stochastic Process (cont.)
Example: Consider a communication system which transmits the digits 0 and 1 through several stages. At each stage the probability that the same digit will be received by the next stage, as transmitted, is 0.75. What is the probability that a 0 entered at the first stage is received as a 0 at the 5th stage?

Stochastic Process 16: Stochastic Process (cont.)
Solution: We want to find P_00^(4). The state transition matrix P is given by

P = [ 0.75  0.25 ]
    [ 0.25  0.75 ]

Hence

P^2 = [ 0.625  0.375 ]
      [ 0.375  0.625 ]

and

P^4 = P^2 P^2 = [ 0.53125  0.46875 ]
                [ 0.46875  0.53125 ]

Therefore the probability that a zero will be transmitted through four stages as a zero is P_00^(4) = 0.53125. It is clear that this Markov chain is irreducible and aperiodic.
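The powers can be checked exactly; a sketch using the per-stage probability 0.75 that the stationarity equations on the next slide rely on (0.625 = 5/8 and 0.53125 = 17/32):

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4)],   # per-stage digit-preservation prob = 0.75
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = matmul(P, P)
P4 = matmul(P2, P2)
assert P2[0][0] == F(5, 8)    # 0.625
assert P4[0][0] == F(17, 32)  # 0.53125
```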

Stochastic Process 17: Stochastic Process (cont.)
We have the equations
π_0 + π_1 = 1,
π_0 = 0.75 π_0 + 0.25 π_1,
π_1 = 0.25 π_0 + 0.75 π_1.
The unique solution of these equations is π_0 = 0.5, π_1 = 0.5. This means that if data are passed through a large number of stages, the output is independent of the original input and each digit received is equally likely to be a 0 or a 1. This also means that P_ij(n) → 0.5 for all i, j as n → ∞.

Stochastic Process 18: Stochastic Process (cont.)
Note that

P^(n) → [ 0.5  0.5 ]   as n → ∞,
        [ 0.5  0.5 ]

and the convergence is rapid. Note also that π P = (0.5, 0.5) = π, so π is a stationary distribution.

Stochastic Process 19: Example I
Problem: The CPU of a multiprogramming system is at any time executing instructions from:
– a user program ==> problem state (S3);
– an OS routine explicitly called by a user program (S2), or an OS routine performing a system-wide control task (S1) ==> supervisor state;
– a wait loop ==> idle state (S0).

Stochastic Process 20: Example I (cont.)
Assume the time spent in each state is ≈ 50 μs.
Note: S1 should be split into three states (S3, S1), (S2, S1), (S0, S1) so that a distinction can be made regarding entering S0.

Stochastic Process 21: Example I (cont.)
State transition diagram of the discrete-time Markov chain of the CPU (figure).

Stochastic Process 22: Example I (cont.)
Transition probability matrix (rows: from state; columns: to state; off-diagonal entries reconstructed from the balance equations and row sums):

          S0    S1    S2    S3
  S0    0.99  0.01  0.00  0.00
  S1    0.02  0.92  0.02  0.04
  S2    0.00  0.01  0.90  0.09
  S3    0.00  0.01  0.01  0.98

Stochastic Process 23: Example I (cont.)
P_0 = 0.99 P_0 + 0.02 P_1
P_1 = 0.01 P_0 + 0.92 P_1 + 0.01 P_2 + 0.01 P_3
P_2 = 0.02 P_1 + 0.90 P_2 + 0.01 P_3
P_3 = 0.04 P_1 + 0.09 P_2 + 0.98 P_3
1 = P_0 + P_1 + P_2 + P_3
The equilibrium state probabilities can be computed by solving this system of equations. So we have:
P_0 = 2/9, P_1 = 1/9, P_2 = 8/99, P_3 = 58/99.
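The stated solution can be verified by substituting it back into the balance equations. Coefficients not fully legible on the slide are reconstructed from the row sums and the stated solution, so treat them as an assumption:

```python
from fractions import Fraction as F

# Equilibrium probabilities for the four CPU states (from the slide).
P0, P1, P2, P3 = F(2, 9), F(1, 9), F(8, 99), F(58, 99)
assert P0 + P1 + P2 + P3 == 1

# Balance equations P_j = sum_i P_i * P_ij (coefficients are the
# columns of the transition matrix; partly reconstructed).
assert P0 == F('0.99') * P0 + F('0.02') * P1
assert P1 == F('0.01') * P0 + F('0.92') * P1 + F('0.01') * P2 + F('0.01') * P3
assert P2 == F('0.02') * P1 + F('0.90') * P2 + F('0.01') * P3
assert P3 == F('0.04') * P1 + F('0.09') * P2 + F('0.98') * P3
```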

Stochastic Process 24: Example I (cont.)
Utilization of the CPU: 1 - P_0 = 7/9 ≈ 77.8%.
58.6% of the total time (P_3 = 58/99) is spent processing user programs.
19.2% of the time (11.1% + 8.1%) is spent in the supervisor state: 11.1% in S1 and 8.1% in S2.

Stochastic Process 25: Example I (cont.)
Mean duration of state S_j (j = 0, 1, 2, 3): t_j = 50 / (1 - P_jj)
t_0 = 50 / 0.01 = 5000 μs = 5 ms
t_1 = 50 / 0.08 = 625 μs
t_2 = 50 / 0.10 = 500 μs
t_3 = 50 / 0.02 = 2500 μs = 2.5 ms

Stochastic Process 26: Example I (cont.)
Mean recurrence time: t_rj = 50 / P_j
t_r0 = 50 / (2/9) = 225 μs
t_r1 = 50 / (1/9) = 450 μs
t_r2 = 50 / (8/99) = 618.75 μs
t_r3 = 50 / (58/99) ≈ 85.3 μs
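Both sets of numbers follow directly from the 50 μs slot length, the diagonal entries P_jj, and the equilibrium probabilities; a sketch (variable names are ours):

```python
from fractions import Fraction as F

slot = 50  # microseconds per transition slot (from the slides)
P_diag = [F('0.99'), F('0.92'), F('0.90'), F('0.98')]  # P_jj per state
pi = [F(2, 9), F(1, 9), F(8, 99), F(58, 99)]           # equilibrium probs

durations = [slot / (1 - p) for p in P_diag]  # mean time per visit
recurrence = [slot / p for p in pi]           # mean time between visits

assert durations[0] == 5000          # 5 ms in the idle state S0
assert recurrence[0] == 225          # S0 revisited every 225 us on average
assert recurrence[2] == F('618.75')  # S2
```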

Stochastic Process 27: Stochastic Process (cont.)
Other Markov chain properties for classifying states:
– Communicating classes: states i and j communicate if each is accessible from the other.
– Transient state: once the process is in state i, there is a positive probability that it will never return to state i.
– Absorbing state: a state i is said to be absorbing if the (one-step) transition probability P_ii = 1.

Stochastic Process 28: Stochastic Process (cont.)
Note — state classification:
STATES
– Recurrent
  – Periodic
  – Aperiodic
  – Absorbing
– Transient

Stochastic Process 29: Example II
Example II (see diagram):
– Communicating class: {0, 1}
– Aperiodic chain
– Irreducible
– Positive recurrent

Stochastic Process 30: Example III
Example III (see diagram):
– Absorbing state: {0}
– Transient state: {1}
– Aperiodic chain
– Communicating classes: {0}, {1}

Stochastic Process 31: Exercise
Exercise: classify the states.

Stochastic Process 32: Major Results
Result I: if j is transient, then P(X_n = j | X_0 = i) → 0 as n → ∞.
Result II: if the chain is irreducible, then the long-run fraction of steps spent in state j, (1/n) Σ_{k=1}^{n} P(X_k = j | X_0 = i), converges as n → ∞ to the reciprocal of the mean recurrence time of j.

Stochastic Process 33: Major Results (cont.)
Result III: if the chain is irreducible and aperiodic, then P_ij(n) → π_j as n → ∞, and

P^(n) → [ π_0  π_1  ...  π_j  ... ]
        [ π_0  π_1  ...  π_j  ... ]
        [  .    .         .       ]
        [ π_0  π_1  ...  π_j  ... ]

i.e., every row of P^(n) converges to the same limiting distribution.
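Result III can be illustrated with the communication-channel chain from the earlier example, whose stationary distribution is (0.5, 0.5); a sketch that raises P to a high power by repeated squaring and checks that both rows approach π:

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4)],
     [F(1, 4), F(3, 4)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Pn = P
for _ in range(5):        # five squarings: Pn = P^32
    Pn = matmul(Pn, Pn)

# Every row of P^n approaches the stationary distribution (0.5, 0.5).
pi = [F(1, 2), F(1, 2)]
for row in Pn:
    for p, target in zip(row, pi):
        assert abs(p - target) < F(1, 10**9)
```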