Discrete-time Markov chain (DTMC) State space distribution


Discrete-time Markov chain (DTMC): state space distribution

p(t) = [p0(t), p1(t), p2(t), ...] is the state occupancy vector at time t.
pi(t) = P{Xt = i} is the probability that the Markov process is in state i at time-step t.
Initial state space distribution: p(0) = (p1(0), ..., pn(0)).
A single step forward: p(1) = p(0) P.
Transient solution: the state occupancy vector at time t, in terms of the transition probability matrix, is p(t) = p(0) P^t.
The evolution of the system over a finite number of steps can thus be computed from the initial state distribution and the transition probability matrix alone.
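A minimal numerical sketch of this computation (the two-state transition matrix P below is illustrative, not taken from the slides):

```python
import numpy as np

# Hypothetical two-state DTMC: transition probability matrix P.
# Rows sum to 1; P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def transient(p0, P, t):
    """State occupancy vector p(t) = p(0) P^t, by repeated single steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(t):
        p = p @ P          # one step forward: p(k+1) = p(k) P
    return p

p0 = np.array([1.0, 0.0])   # initial state space distribution
print(transient(p0, P, 1))  # one step: p(1) = p(0) P
print(transient(p0, P, 50)) # approaches the limiting distribution
```

For large t the vector stops changing, which previews the limiting behaviour discussed next.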

Limiting behaviour

A Markov process is specified by the state occupancy probability vector p and the transition probability matrix P: p(t) = p(0) P^t.
The limiting (steady-state) behaviour of a DTMC is the limit of p(t) as t -> infinity, when it exists.
This limiting behaviour depends on the characteristics of the chain's states. Sometimes the solution is simple.

Irreducible DTMC

A state j is said to be accessible from state i if there exists t > 0 such that Pij(t) > 0; we write i -> j.
A DTMC is irreducible if each state is accessible from every other state in a finite number of steps: for each i, j: i -> j.
A subset S' of S is closed if there exists no transition from S' to S - S'.
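Irreducibility can be checked mechanically as reachability in the transition graph; a sketch (the matrices below are illustrative examples, not from the slides):

```python
import numpy as np

def is_irreducible(P):
    """A DTMC is irreducible iff every state is accessible from every other.
    With A the adjacency matrix (P > 0), every state reaches every other in
    at most n-1 steps iff (I + A)^(n-1) has no zero entries."""
    A = (np.asarray(P) > 0).astype(int)
    n = A.shape[0]
    R = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)
    return bool((R > 0).all())

P_irred = np.array([[0.0, 1.0],
                    [0.5, 0.5]])
P_red = np.array([[1.0, 0.0],   # state 0 is absorbing: no path 0 -> 1
                  [0.5, 0.5]])
print(is_irreducible(P_irred))  # True
print(is_irreducible(P_red))    # False
```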

Classification of states

A state i is recurrent if, whenever i -> j, also j -> i: the process returns to state i with probability 1.
  recurrent non-null (positive recurrent): the mean recurrence time is finite
  recurrent null: the mean recurrence time is infinite
A state i is transient if there exists j != i such that i -> j but not j -> i.
A state i is absorbing if pii = 1 (an absorbing state is recurrent).

Classification of states (periodicity)

Given a recurrent state i, let d be the greatest common divisor of all the integers m such that Pii(m) > 0.
A recurrent state i is periodic if d > 1, and aperiodic if d = 1 (return times are not constrained to multiples of some d > 1).
Example: a two-state chain that alternates deterministically between states 1 and 2 (p12 = p21 = 1). Both state 1 and state 2 are periodic with period d = 2.
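The period d can be computed straight from the definition, as the gcd of the return times with positive probability; a sketch for the alternating two-state chain above (the truncation at max_steps is a practical approximation):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_steps=50):
    """Period of state i: gcd of all m <= max_steps with P_ii(m) > 0."""
    P = np.asarray(P, dtype=float)
    Pm = np.eye(P.shape[0])
    returns = []
    for m in range(1, max_steps + 1):
        Pm = Pm @ P                 # Pm = P^m
        if Pm[i, i] > 1e-12:
            returns.append(m)
    return reduce(gcd, returns) if returns else 0

P = np.array([[0.0, 1.0],   # the alternating chain: p12 = p21 = 1
              [1.0, 0.0]])
print(period(P, 0))  # 2: the chain can return to a state only in an even
print(period(P, 1))  # 2: number of steps
```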

Steady-state behaviour

THEOREM: for an aperiodic irreducible Markov chain, the limit of pj(t) as t -> infinity exists for each j and is independent of p(0).
Moreover, if all states are recurrent non-null, the steady-state behaviour of the Markov chain is given by the fixpoint of the equation p(t) = p(t-1) P, i.e. p = p P, with Σj pj = 1.
pj is inversely proportional to the mean recurrence time of state j.
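The fixpoint p = p P with Σj pj = 1 is a linear system: rewrite it as (P^T − I) p^T = 0 and replace one equation with the normalization constraint. A minimal sketch for an illustrative two-state chain:

```python
import numpy as np

def steady_state(P):
    """Solve p = p P, sum(p) = 1 for an irreducible aperiodic DTMC."""
    n = P.shape[0]
    A = P.T - np.eye(n)   # (P^T - I) p^T = 0
    A[-1, :] = 1.0        # replace last equation with sum(p) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p = steady_state(P)
print(p)                      # steady-state distribution
print(np.allclose(p @ P, p))  # fixpoint check: True
```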

Time-average state space distribution

For periodic Markov chains the limit of p(t) does not exist (the probability of a periodic state keeps oscillating). We compute instead the time-average state space distribution, called p*.
Example: the two-state chain with transition matrix
  P = [[0, 1],
       [1, 0]]
and p(0) = (1, 0); each state is periodic with period d = 2.
p(1) = p(0) P = (0, 1), p(2) = p(1) P = (1, 0), ...
The time average of this alternating sequence is p* = (1/2, 1/2).
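The time average for this periodic example can be computed by accumulating the occupancy vectors over a long run; a sketch:

```python
import numpy as np

P = np.array([[0.0, 1.0],    # periodic chain: the process alternates
              [1.0, 0.0]])   # deterministically between the two states
p = np.array([1.0, 0.0])     # p(0) = (1, 0)

total = np.zeros(2)
steps = 1000
for _ in range(steps):
    total += p
    p = p @ P                # p(t) oscillates between (1,0) and (0,1)

p_star = total / steps       # time-average state space distribution
print(p_star)                # -> [0.5 0.5]
```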

Continuous-time Markov models

In continuous-time models, state transitions occur at random intervals, and a transition rate is assigned to each transition.
Markov property assumption: the length of time already spent in a state influences neither the probability distribution of the next state nor the probability distribution of the remaining time in the same state before the next transition.
These very strong assumptions imply that the waiting time spent in any one state is exponentially distributed. Thus the Markov model naturally fits the standard assumption of constant failure rates, i.e. exponentially distributed times between failures.

Continuous-time Markov models (single system with repair)

The continuous-time model is derived from the discrete-time model by taking the limit as the time-step interval approaches zero.
Single system with repair: λ = failure rate, μ = repair rate.
  state 0: working; state 1: failed
  p0(t): probability of being in the operational state
  p1(t): probability of being in the failed state
The slide shows the graph model and the transition matrix P.

Continuous-time Markov models

Probability of being in state 0 or 1 at time t + Δt:
  p0(t + Δt) = (1 − λΔt) p0(t) + μΔt p1(t)
  p1(t + Δt) = λΔt p0(t) + (1 − μΔt) p1(t)
Performing the multiplication, rearranging, dividing by Δt, and taking the limit as Δt approaches 0 gives the continuous-time Chapman-Kolmogorov equations:
  dp0(t)/dt = −λ p0(t) + μ p1(t)
  dp1(t)/dt = λ p0(t) − μ p1(t)

Continuous-time Markov models

Matrix form: dp(t)/dt = p(t) T, where T is the transition-rate matrix
  T = [ −λ   λ ]
      [  μ  −μ ]
The set of equations can be written by inspection of a transition diagram without self-loops and without the Δt's (the continuous-time Markov model graph).
The change in state 0 is minus the flow out of state 0 times the probability of being in state 0 at time t, plus the flow into state 0 from state 1 times the probability of being in state 1.
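The matrix form dp/dt = p(t) T can be integrated numerically; a sketch using a simple Euler scheme (the values of λ and μ are illustrative, and a production solver would use a proper ODE routine or the matrix exponential):

```python
import numpy as np

lam, mu = 0.001, 0.1          # illustrative failure and repair rates
# Rate matrix T: rows sum to 0; off-diagonal entries are transition rates.
T = np.array([[-lam,  lam],
              [  mu,  -mu]])

def integrate(p0, T, t_end, dt=0.01):
    """Euler integration of dp/dt = p T from p(0) = p0."""
    p = np.asarray(p0, dtype=float)
    for _ in range(int(t_end / dt)):
        p = p + dt * (p @ T)
    return p

p = integrate([1.0, 0.0], T, t_end=200.0)
print(p)   # close to the steady state [mu, lam] / (lam + mu)
```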

Continuous-time Markov models

The Chapman-Kolmogorov equations are solved by means of the Laplace transform of the time-domain functions, where p(0) gives the probability of being in each state at time t = 0. In the transform domain the equations become linear:
  P(s) [sI − T] = p(0), with A = [sI − T]
where I is the identity matrix. Solving this linear system with standard techniques gives each component of P(s) as a ratio of two polynomials in s; applying the inverse transform then yields the time-domain solutions.

Continuous-time Markov models

For our example, assume the system starts in the operational state: p(0) = [1, 0]. Applying the inverse transforms gives
  p0(t) = μ/(λ + μ) + λ/(λ + μ) e^(−(λ+μ)t)
  p1(t) = λ/(λ + μ) − λ/(λ + μ) e^(−(λ+μ)t)
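This closed form (steady-state term plus decaying transient) can be checked numerically; the values of λ and μ below are illustrative:

```python
import math

lam, mu = 0.001, 0.1   # illustrative failure and repair rates

def p0(t):
    """p0(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t),
    for a single repairable system starting operational: p(0) = [1, 0]."""
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

print(p0(0.0))   # 1.0: the system starts operational
print(p0(1e6))   # steady-state availability mu / (lam + mu)
```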

Continuous-time Markov models

A(t) = p0(t): the probability that the system is in the operational state at time t, i.e. the availability at time t.
The availability consists of a steady-state term and an exponentially decaying transient term.
If only the steady-state solution is needed, the Chapman-Kolmogorov equations are solved with each derivative replaced by 0 and p0(t), p1(t) replaced by the constants p0(∞), p1(∞), together with p0(∞) + p1(∞) = 1.

Availability as a function of time

The steady-state value is reached in a very short time.

Continuous-time Markov models: reliability

For reliability, the Markov model makes the system-failed state a trapping state.
Single system without repair: λ = failure rate; λΔt = state-transition probability.
Differential equations:
  dp0(t)/dt = −λ p0(t)
  dp1(t)/dt = λ p0(t)
The T matrix can be built by inspection of the continuous-time Markov model graph.
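Solving dp0/dt = −λ p0 with p0(0) = 1 gives the reliability R(t) = p0(t) = e^(−λt); a quick numerical check of this solution against the differential equation (λ is illustrative):

```python
import math

lam = 0.001              # illustrative failure rate

def reliability(t):
    """R(t) = p0(t) = exp(-lam * t): no repair, failed state is trapping."""
    return math.exp(-lam * t)

# Cross-check against Euler integration of dp0/dt = -lam * p0.
p0, dt = 1.0, 0.01
for _ in range(int(1000 / dt)):
    p0 += dt * (-lam * p0)

print(reliability(1000.0))   # exact: e^(-1)
print(p0)                    # numerical: close to the same value
```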

Continuous-time Markov models: reliability

A matrix: A = [sI − T]. Taking the inverse transform gives the reliability
  R(t) = p0(t) = e^(−λt)
Models of this kind are continuous-time homogeneous Markov chains (CTMC).

An example of modeling (CTMC)

Multiprocessor system with 2 processors and 3 shared memories. The system is operational if at least one processor and one memory are operational.
  λm: failure rate of a memory
  λp: failure rate of a processor
X is the random process that represents the number of operational memories and processors at time t. In a state (i, j), i is the number of operational memories and j the number of operational processors.
S = {(3,2), (3,1), (3,0), (2,2), (2,1), (2,0), (1,2), (1,1), (1,0), (0,2), (0,1)}

Reliability modeling

λm: failure rate of a memory; λp: failure rate of a processor.
Example transition: (3,2) -> (2,2), failure of one memory.
(3,0), (2,0), (1,0), (0,2), (0,1) are absorbing states.
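A sketch of generating this reliability model's transition structure in code (the rates λm, λp are illustrative; with i identical memories alive, the aggregate memory-failure rate is i·λm, and similarly for processors):

```python
# States (i, j): i operational memories (0..3), j operational processors (0..2).
# The system is operational iff i >= 1 and j >= 1.
lam_m, lam_p = 0.002, 0.001   # illustrative failure rates

def operational(state):
    i, j = state
    return i >= 1 and j >= 1

def transitions(state):
    """Outgoing (next_state, rate) pairs for the reliability model.
    Failed states are absorbing: they have no outgoing transitions."""
    i, j = state
    if not operational(state):
        return []              # trapping state: the system has failed
    out = []
    if i > 0:
        out.append(((i - 1, j), i * lam_m))  # failure of one of i memories
    if j > 0:
        out.append(((i, j - 1), j * lam_p))  # failure of one of j processors
    return out

print(transitions((3, 2)))  # transitions from the fully operational state
print(transitions((1, 0)))  # []: absorbing
```

From these transitions the rate matrix T can be assembled row by row, with each diagonal entry set to minus the sum of the outgoing rates.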

Availability modeling

Assume that faulty components are replaced, and evaluate the probability that the system is operational at time t.
Constant repair rate μ (the expected number of repairs per unit of time).
Repair strategy: only one processor or one memory can be replaced at a time.
The behaviour of the components (with respect to being operational or failed) is not independent: it depends on whether or not other components are in a failure state.

Repair strategy: only one component can be replaced at a time.
  λm: failure rate of a memory; λp: failure rate of a processor
  μm: repair rate of a memory; μp: repair rate of a processor

An alternative repair strategy: only one component can be replaced at a time, and processors have higher priority. In this model, the transitions with rate μm (memory repair) are excluded whenever a processor has failed.