Discrete-Time Markov Chains
© Tallal Elshabrawy

Slide 2: Introduction

Markov modeling is an extremely important tool in the field of modeling and analysis of telecommunication networks. For example, Markov models are applicable to the following networking problems:
- Connection Admission Control (CAC)
- Bandwidth Allocation
- Congestion Control
- Routing
- Queuing and Scheduling

Slide 3: Markov Property

Let X(t) denote a stochastic process. The Markov property: conditional on X(t) = x, the future $\{X(u)\}_{u>t}$ and the past $\{X(u)\}_{u<t}$ are statistically independent. [Figure: a timeline marking the past, the present value X(t) = x at time t, and the future.]

Slide 4: Markov Process = Random State Process

A Markov process is characterized by a state variable. The state is itself a random process, with the condition that knowledge of the state's value at any given time t removes the need for any past knowledge of the system. Example: the state could be the number of packets inside a queue with exponential inter-arrival and service times.
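
As an illustration of the state idea, here is a minimal sketch of a discrete-time (slotted) queue, a simpler stand-in for the exponential queue named above; the arrival and departure probabilities p and q are assumed for the example, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Slotted queue: in each slot a packet arrives with probability p and,
# if the queue is non-empty, one departs with probability q. The queue
# length is the Markov state: its next value depends only on its
# current value, never on the earlier history.
p, q = 0.3, 0.5  # assumed illustrative parameters
length, history = 0, [0]
for _ in range(20):
    if rng.random() < p:                 # possible arrival
        length += 1
    if length > 0 and rng.random() < q:  # possible departure
        length -= 1
    history.append(length)
print(history)
```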

Slide 5: Types of Markov Process

Markov processes with discrete amplitude are termed Markov chains. The amplitude in a Markov chain is discrete in the sense that its values are drawn from a state space $S = \{s_0, s_1, s_2, \dots\}$. Classifying by time and amplitude, each of which may be discrete or continuous, gives four types of Markov process.

Slide 6: Discrete-Time Markov Chains

Consider a Markov chain {X(t)} in discrete time t = 0, 1, 2, .... [Figure: a timeline showing X(0), X(1), X(2), X(3), X(4), ..., X(n), with t = 1 taken as the present.] Relative to the present t = 1, the Markov property makes the past value X(0) irrelevant in determining the future:

$$\Pr[X(n),\dots,X(2) \mid X(1), X(0)] = \Pr[X(n),\dots,X(2) \mid X(1)].$$

Slide 7: Discrete-Time Markov Chains (continued)

Continuing in the same way, the joint distribution of the chain factorizes into an initial condition and transition probabilities:

$$\Pr[X(0)=x_0,\dots,X(n)=x_n] = \underbrace{\Pr[X(0)=x_0]}_{\text{initial condition}}\;\prod_{k=1}^{n}\underbrace{\Pr[X(k)=x_k \mid X(k-1)=x_{k-1}]}_{\text{transition probabilities}}.$$
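
To make the factorization concrete, the sketch below computes the probability of one sample path from an assumed initial distribution Q0 and one-step matrix P (illustrative numbers, not from the slides).

```python
import numpy as np

# Assumed (illustrative) initial distribution and one-step matrix.
Q0 = np.array([1.0, 0.0, 0.0])        # Pr[X(0) = S_i]
P = np.array([[0.5, 0.3, 0.2],        # P[i, j] = Pr[X(k)=S_j | X(k-1)=S_i]
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

path = [0, 1, 1, 2]                   # the sample path x0, x1, x2, x3
prob = Q0[path[0]]                    # initial condition
for prev, cur in zip(path, path[1:]):
    prob *= P[prev, cur]              # one transition probability per step
print(prob)                           # 1.0 * 0.3 * 0.6 * 0.3 = 0.054
```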

Slide 8: Theorem

The statistics of a Markov chain X(·) are completely specified by:
- the initial distribution (the distribution of X(0));
- the transition probabilities (the distribution of X(k) given X(k-1)) for all k.

Goal: to reduce the complexity of describing random processes.

Slide 9: Further Reduction: Time Invariance

A Markov chain X(·) is said to be time-homogeneous if and only if the distribution of X(t) given X(s), for s < t, depends only on the difference t − s, i.e.,

$$\Pr[X(t)=S_j \mid X(s)=S_i] = \Pr[X(t-s)=S_j \mid X(0)=S_i] \quad \text{for all } s < t.$$

Time homogeneity means a single family of transition probabilities, indexed only by the elapsed time, describes the chain at every point in time.

Slide 10: Modeling & Notations

Assume the Markov chain may assume one of M states, i.e., $S = \{S_0, S_1, \dots, S_{M-1}\}$. The chain can be fully characterized by:
- an M×1 vector $Q(0) = \{\Pr[X(0)=S_i]\}$, 0 ≤ i ≤ M−1;
- an M×M matrix $P(t) = \{P_{ij}(t)\}$, 0 ≤ i, j ≤ M−1, where

$$P_{ij}(t) = \Pr[X(s+t)=S_j \mid X(s)=S_i], \qquad P_{ji}(t) = \Pr[X(s+t)=S_i \mid X(s)=S_j].$$

[Figure: a state-transition diagram over $S_0, S_1, S_2, \dots, S_{M-1}$, with arrows labeled $P_{ij}(t)$ and $P_{ji}(t)$ between states $S_i$ and $S_j$.]
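
A brief sketch of this notation for M = 3 with assumed numbers; the asserts check exactly the defining properties of a probability vector and a transition matrix.

```python
import numpy as np

# Assumed (illustrative) chain with M = 3 states S0, S1, S2.
Q0 = np.array([0.7, 0.2, 0.1])            # Q(0): Pr[X(0) = S_i]
P1 = np.array([[0.5, 0.3, 0.2],           # one-step P: entry (i, j) is
               [0.1, 0.6, 0.3],           # Pr[X(s+1) = S_j | X(s) = S_i]
               [0.4, 0.4, 0.2]])

assert np.isclose(Q0.sum(), 1.0)          # Q(0) is a probability vector
assert np.allclose(P1.sum(axis=1), 1.0)   # each row of P sums to 1
```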

Slide 11: Example

[Figure: a worked example on a three-state chain $S_0, S_1, S_2$; the slide's equations, which did not survive the transcript, carry out the matrix multiplication.]
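
Since the slide's numbers were lost, here is a hypothetical stand-in carrying out the matrix multiplication the slide illustrates, producing two-step transition probabilities.

```python
import numpy as np

# Hypothetical three-state matrix standing in for the slide's example.
P1 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

P2 = P1 @ P1     # two-step probabilities via matrix multiplication
print(P2[0, 1])  # Pr[X(t+2) = S1 | X(t) = S0]
```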

Slide 12: Stationary Markov Chains

If a chain is stationary, $Q(t) = \{\Pr[X(t)=S_i]\}$, 0 ≤ i ≤ M−1, does not depend on t. Stationary means: the distribution of the chain's position is insensitive to time.
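
The slide does not spell out how to find such a time-insensitive distribution; one standard approach, shown below as a sketch with an assumed matrix, solves $\pi = \pi P$ together with $\sum_i \pi_i = 1$.

```python
import numpy as np

# Assumed (illustrative) one-step transition matrix.
P1 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

M = P1.shape[0]
# Solve pi (P - I) = 0 together with the normalization sum(pi) = 1
# as one least-squares system.
A = np.vstack([P1.T - np.eye(M), np.ones(M)])
b = np.concatenate([np.zeros(M), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)        # a distribution with pi = pi P ...
print(pi @ P1)   # ... so it reproduces itself after one step
```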

Slide 13: Chapman-Kolmogorov

[Figure: a timeline with time points s < t < u; the chain is in $S_i$ at time s (the past with respect to t), some $S_k$ at time t (the present), and $S_j$ at time u (the future).] Conditioning on the state occupied at the intermediate time t gives the Chapman-Kolmogorov equations:

$$\Pr[X(u)=S_j \mid X(s)=S_i] = \sum_k \Pr[X(t)=S_k \mid X(s)=S_i]\,\Pr[X(u)=S_j \mid X(t)=S_k],$$

true for all $S_i$, $S_j$ and for all time points s < t < u.

Slide 14: Example

For a three-state chain $S_0, S_1, S_2$, to move from state $S_0$ to $S_1$ within time 2t we have the following three possibilities:
1. a transition from $S_0$ to $S_0$ during (0, t) followed by a transition from $S_0$ to $S_1$ during (t, 2t);
2. a transition from $S_0$ to $S_1$ during (0, t) followed by a transition from $S_1$ to $S_1$ during (t, 2t);
3. a transition from $S_0$ to $S_2$ during (0, t) followed by a transition from $S_2$ to $S_1$ during (t, 2t).

Hence $P_{01}(2t) = P_{00}(t)P_{01}(t) + P_{01}(t)P_{11}(t) + P_{02}(t)P_{21}(t)$.
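
A quick numerical check of this three-path decomposition, with an assumed matrix standing in for $P(t)$:

```python
import numpy as np

# Assumed matrix standing in for P(t) over one interval of length t.
Pt = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

three_paths = (Pt[0, 0] * Pt[0, 1]    # S0 -> S0 -> S1
               + Pt[0, 1] * Pt[1, 1]  # S0 -> S1 -> S1
               + Pt[0, 2] * Pt[2, 1]) # S0 -> S2 -> S1
print(three_paths)      # 0.5*0.3 + 0.3*0.6 + 0.2*0.4 = 0.41
print((Pt @ Pt)[0, 1])  # same entry of P(2t) = P(t) P(t)
```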

Slide 15: Chapman-Kolmogorov: Matrix Notation

In matrix form, the Chapman-Kolmogorov equations for time points s < t < u read

$$P(u-s) = P(t-s)\,P(u-t),$$

where $P(\cdot)$ is the transition-probability matrix defined above.

Slide 16: One Step Transition Probabilities

[Figure: a timeline over t, t+1, t+2.] The one-step transition probability is $P_{ij}(1) = \Pr[X(t+1)=S_j \mid X(t)=S_i]$. Applying Chapman-Kolmogorov with unit steps gives $P(2) = P(1)P(1)$ and, in general, $P(n) = P(1)^n$.
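
A minimal sketch of this relation with assumed numbers:

```python
import numpy as np

# Assumed (illustrative) one-step matrix P(1).
P1 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

P5 = np.linalg.matrix_power(P1, 5)  # P(5) = P(1)^5
print(P5[0, 1])                     # Pr[X(t+5) = S1 | X(t) = S0]
```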

Slide 17: Summary

For a discrete-time homogeneous Markov chain, a complete statistical characterization consists of:
- Q(0), the initial probability distribution;
- P(1), the one-step transition probability matrix.

The distribution evolves as $Q^{T}(t) = Q^{T}(0)\,P(t)$, i.e., $\Pr[X(t)=S_j] = \sum_i \Pr[X(0)=S_i]\,\Pr[X(t)=S_j \mid X(0)=S_i]$. Q(t) is a probability vector, i.e., $\sum_i \Pr[X(t)=S_i] = 1$, and the matrix P(t) is stochastic: all elements are nonnegative and each row sums to 1 ($\sum_j P_{ij}(t) = 1$).
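
Tying the summary together, a short sketch (assumed Q(0) and P(1)) that evolves the distribution and checks both properties:

```python
import numpy as np

# Assumed (illustrative) initial distribution and one-step matrix.
Q0 = np.array([0.7, 0.2, 0.1])
P1 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

assert np.all(P1 >= 0) and np.allclose(P1.sum(axis=1), 1.0)  # stochastic

Qt = Q0.copy()
for t in range(1, 6):
    Qt = Qt @ P1                       # Q^T(t) = Q^T(t-1) P(1)
    assert np.isclose(Qt.sum(), 1.0)   # Q(t) stays a probability vector
    print(t, Qt)
```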