TCOM 501: Networking Theory & Fundamentals
Lecture 3 – January 29, 2003
Prof. Yannis A. Korilis

Topics
- Markov chains
- Discrete-time Markov chains
- Calculating the stationary distribution
- Global balance equations
- Detailed balance equations
- Birth-death process
- Generalized Markov chains
- Continuous-time Markov chains

Markov Chain
- Stochastic process that takes values in a countable set, e.g., {0,1,2,…,m} or {0,1,2,…}
- Elements represent possible "states"; the chain "jumps" from state to state
- Memoryless (Markov) property: given the present state, future jumps of the chain are independent of the past history
- Markov chains: discrete- or continuous-time

Discrete-Time Markov Chain
- Discrete-time stochastic process {X_n : n = 0,1,2,…} taking values in {0,1,2,…}
- Memoryless property:
  P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i)
- Transition probabilities: P_ij = P(X_{n+1} = j | X_n = i), with P_ij ≥ 0 and Σ_j P_ij = 1
- Transition probability matrix P = [P_ij]

Chapman-Kolmogorov Equations
- n-step transition probabilities: P^n_ij = P(X_{n+m} = j | X_m = i), for n, m ≥ 0 and i, j ≥ 0
- Chapman-Kolmogorov equations: P^{n+m}_ij = Σ_k P^n_ik P^m_kj
- P^n_ij is element (i, j) in the matrix P^n
- Recursive computation of state probabilities
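
As a quick illustration, the n-step probabilities can be computed by raising P to a power; a minimal Python/NumPy sketch (the 3-state matrix is an arbitrary example, not from the lecture):

import numpy as np

# Arbitrary 3-state transition matrix (rows sum to 1) -- illustrative only
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

n, m = 4, 3
Pn  = np.linalg.matrix_power(P, n)      # n-step transition probabilities
Pm  = np.linalg.matrix_power(P, m)      # m-step transition probabilities
Pnm = np.linalg.matrix_power(P, n + m)

# Chapman-Kolmogorov: P^(n+m) = P^n P^m; element (i, j) is P^(n+m)_ij
assert np.allclose(Pnm, Pn @ Pm)
print(Pnm)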

State Probabilities – Stationary Distribution
- State probabilities (time-dependent): π^n_j = P(X_n = j), with π^n_j = Σ_i π^{n-1}_i P_ij
- In matrix form: π^n = π^{n-1} P = π^0 P^n
- If the time-dependent distribution converges to a limit, π = lim_{n→∞} π^n, then π = πP, and π is called the stationary distribution
- Existence depends on the structure of the Markov chain

Classification of Markov Chains
- States i and j communicate if P^n_ij > 0 and P^m_ji > 0 for some n, m ≥ 0
- Irreducible Markov chain: all states communicate
- State i is periodic if there exists d ≥ 2 such that P^n_ii = 0 whenever n is not a multiple of d
- Aperiodic Markov chain: none of the states is periodic
[Figure: two 4-state transition diagrams illustrating these definitions]

Limit Theorems
Theorem 1: For an irreducible aperiodic Markov chain, for every state j the limit
π_j = lim_{n→∞} P(X_n = j | X_0 = i)
exists and is independent of the initial state i. Moreover, with N_j(k) the number of visits to state j up to time k,
π_j = lim_{k→∞} N_j(k)/k (with probability 1),
i.e., π_j is the frequency with which the process visits state j.

Existence of Stationary Distribution
Theorem 2: For an irreducible aperiodic Markov chain there are two possibilities for the scalars π_j = lim_{n→∞} P(X_n = j | X_0 = i):
1. π_j = 0 for all states j: no stationary distribution exists
2. π_j > 0 for all states j: {π_j} is the unique stationary distribution
Remark: If the number of states is finite, case 2 is the only possibility.

Ergodic Markov Chains
- Markov chain with a stationary distribution: π_j > 0 for all j
- States are positive recurrent: the process returns to state j "infinitely often", and the expected return time is finite
- A positive recurrent and aperiodic Markov chain is called ergodic
- Ergodic chains have a unique stationary distribution π_j = lim_{n→∞} P(X_n = j)
- Ergodicity ⇒ time averages = stochastic averages

Calculation of Stationary Distribution
A. Finite number of states
- Solve explicitly the system of equations π_j = Σ_i π_i P_ij, Σ_j π_j = 1 (suitable for a small number of states)
- Or compute numerically from P^n, which converges to a matrix with all rows equal to π
B. Infinite number of states
- Cannot apply the previous methods to a problem of infinite dimension
- Guess a solution to the recurrence π_j = Σ_i π_i P_ij and verify it
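
For case A, the system π = πP, Σ_j π_j = 1 can be solved directly as a linear system. A minimal sketch of such a solver (the helper name stationary_distribution is our own, not from the lecture):

import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P with sum(pi) = 1 for a finite irreducible chain."""
    n = P.shape[0]
    # (P^T - I) pi = 0, plus one extra row enforcing the normalization
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Example: an arbitrary two-state chain
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(stationary_distribution(P))   # ~ [0.8333, 0.1667]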

Example: Finite Markov Chain
An absent-minded professor uses two umbrellas when commuting between home and office. If it rains and an umbrella is available at her location, she takes it. If it does not rain, she always forgets to take an umbrella. Let p be the probability of rain each time she commutes. What is the probability that she gets wet on any given day?
Markov chain formulation: the state i is the number of umbrellas available at her current location. Transition matrix (rows and columns indexed by states 0, 1, 2):
      [  0     0     1  ]
P  =  [  0    1-p    p  ]
      [ 1-p    p     0  ]

Example: Finite Markov Chain
Solving π = πP together with π_0 + π_1 + π_2 = 1:
π_0 = (1-p) π_2,  π_1 = (1-p) π_1 + p π_2,  π_2 = π_0 + p π_1
⇒ π_1 = π_2 = 1/(3-p),  π_0 = (1-p)/(3-p)
She gets wet when it rains and there is no umbrella at her location:
P(gets wet) = p π_0 = p(1-p)/(3-p)

Example: Finite Markov Chain
Taking p = 0.1: P(gets wet) = 0.1·0.9/2.9 ≈ 0.031. Alternatively, numerically determine the limit of P^n, whose rows converge to π ≈ (0.310, 0.345, 0.345). The effectiveness of this method depends on the structure of P.
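
A numerical check of the umbrella example along these lines; both approaches should agree with the closed form derived above (the 100-step power and the printouts are illustrative choices):

import numpy as np

p = 0.1
P = np.array([[0.0,   0.0,   1.0],
              [0.0,   1 - p, p  ],
              [1 - p, p,     0.0]])

# Method 1: P^n converges to a matrix with all rows equal to pi
Pn = np.linalg.matrix_power(P, 100)
print(Pn[0])                        # ~ [0.3103, 0.3448, 0.3448]

# Method 2: closed form from the balance equations
pi = np.array([(1 - p) / (3 - p), 1 / (3 - p), 1 / (3 - p)])
print(pi, "P(wet) =", p * pi[0])    # P(wet) = p(1-p)/(3-p) ~ 0.031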

Global Balance Equations
- Markov chain with an infinite number of states
- Global Balance Equations (GBE):
  π_j Σ_{i≠j} P_ji = Σ_{i≠j} π_i P_ij
- π_j P_ji is the frequency of transitions from j to i
- Intuition: j is visited infinitely often; for each transition out of j there must be a subsequent transition into j with probability 1

Global Balance Equations
- Alternative form of the GBE: for any set of states S,
  Σ_{j∈S} Σ_{i∉S} π_j P_ji = Σ_{j∈S} Σ_{i∉S} π_i P_ij
- If a probability distribution satisfies the GBE, then it is the unique stationary distribution of the Markov chain
- Finding the stationary distribution:
  - Guess the distribution from the properties of the system
  - Verify that it satisfies the GBE
- Special structure of the Markov chain simplifies the task

Global Balance Equations – Proof
Using Σ_i P_ji = 1 and the stationarity condition π_j = Σ_i π_i P_ij:
π_j Σ_{i≠j} P_ji = π_j (1 - P_jj) = Σ_i π_i P_ij - π_j P_jj = Σ_{i≠j} π_i P_ij
Conversely, a distribution satisfying the GBE satisfies π_j = Σ_i π_i P_ij, so it is the stationary distribution. Summing the GBE over j ∈ S and cancelling the transitions between states within S yields the alternative form.

Birth-Death Process
[Figure: transition diagram over states 0, 1, 2, …, n, n+1, … with a cut between S = {0,1,…,n} and S^c]
- One-dimensional Markov chain with transitions only between neighboring states: P_ij = 0 if |i-j| > 1
- Detailed Balance Equations (DBE): π_n P_{n,n+1} = π_{n+1} P_{n+1,n}, n = 0,1,…
- Proof: the GBE with S = {0,1,…,n} give
  Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_j P_ji = Σ_{j=0}^{n} Σ_{i=n+1}^{∞} π_i P_ij ⇒ π_n P_{n,n+1} = π_{n+1} P_{n+1,n}
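
Iterating the DBE gives the product form used in the examples that follow; written out (a standard consequence of the DBE, not shown on the slide):

\[
\pi_{n+1} = \frac{P_{n,n+1}}{P_{n+1,n}}\,\pi_n
\quad\Longrightarrow\quad
\pi_n = \pi_0 \prod_{k=0}^{n-1} \frac{P_{k,k+1}}{P_{k+1,k}},
\qquad
\pi_0 = \Biggl(\sum_{n=0}^{\infty}\,\prod_{k=0}^{n-1} \frac{P_{k,k+1}}{P_{k+1,k}}\Biggr)^{-1}
\]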

Example: Discrete-Time Queue
- In a time-slot, one arrival occurs with probability p, or zero arrivals with probability 1-p
- In a time-slot, the customer in service departs with probability q, or stays with probability 1-q
- Arrivals and service times are independent
- State: number of customers in the system
[Figure: birth-death transition diagram over states 0, 1, 2, …, n, n+1, …]

Example: Discrete-Time Queue
Transition probabilities: P_00 = 1-p, P_01 = p; for n ≥ 1:
P_{n,n+1} = p(1-q), P_{n,n-1} = q(1-p), P_{n,n} = 1 - p(1-q) - q(1-p)
DBE: π_0 p = π_1 q(1-p), and π_n p(1-q) = π_{n+1} q(1-p) for n ≥ 1
Defining ρ = p(1-q) / (q(1-p)):
π_1 = π_0 ρ/(1-q), and π_{n+1} = ρ π_n ⇒ π_n = π_0 ρ^n/(1-q), n ≥ 1

Example: Discrete-Time Queue
- Have determined the distribution as a function of π_0; how do we calculate the normalization constant π_0?
- Probability conservation law: Σ_{n=0}^{∞} π_n = 1 ⇒ π_0 [1 + Σ_{n≥1} ρ^n/(1-q)] = 1
- Noting that Σ_{n≥1} ρ^n = ρ/(1-ρ) for ρ < 1, this gives π_0 = (1-p)(1-ρ)
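
A numeric sanity check of this closed form on a truncated version of the chain; the parameter values and the truncation level K are arbitrary illustrations:

import numpy as np

p, q, K = 0.2, 0.5, 200            # arrival prob, departure prob, truncation level
rho = p * (1 - q) / (q * (1 - p))

# Truncated transition matrix (truncation introduces negligible error for rho < 1)
P = np.zeros((K + 1, K + 1))
P[0, 0], P[0, 1] = 1 - p, p
for n in range(1, K + 1):
    P[n, n - 1] = q * (1 - p)
    if n < K:
        P[n, n + 1] = p * (1 - q)
    P[n, n] = 1 - P[n].sum()       # remaining probability mass stays in place

pi = np.linalg.matrix_power(P, 20000)[0]   # rows of P^n converge to pi
pi0 = (1 - p) * (1 - rho)
print(pi[0], pi0)                          # should agree closely (~0.6)
print(pi[3], pi0 * rho**3 / (1 - q))       # check pi_n = pi_0 rho^n / (1-q)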

Detailed Balance Equations
- General case: π_j P_ji = π_i P_ij, for all i, j
- Imply the GBE; need not hold for a given Markov chain
- Greatly simplify the calculation of the stationary distribution
- Methodology:
  - Assume the DBE hold – have to guess their form
  - Solve the system defined by the DBE together with Σ_i π_i = 1
  - If the system is inconsistent, the DBE do not hold
  - If the system has a solution {π_i : i = 0,1,…}, then this is the unique stationary distribution

Generalized Markov Chains
- Markov chain on a set of states {0,1,…}; whenever it enters state i:
  - The next state that will be entered is j with probability P_ij
  - Given that the next state entered will be j, the time the chain spends at state i until the transition occurs is a random variable with distribution F_ij
- {Z(t): t ≥ 0}, describing the state of the chain at time t: generalized Markov chain, or semi-Markov process
- It does not have the Markov property: the future depends on the present state and on the length of time the process has spent in this state

Generalized Markov Chains
- T_i: time the process spends at state i before making a transition – the holding time
- Probability distribution of T_i: H_i(t) = Σ_j P_ij F_ij(t)
- T_ii: time between successive transitions to i
- X_n: the nth state visited; {X_n : n = 0,1,…} is a Markov chain with transition probabilities P_ij – the embedded Markov chain
- A semi-Markov process is irreducible if its embedded Markov chain is irreducible

Limit Theorems
Theorem 3: For an irreducible semi-Markov process with E[T_ii] < ∞, for any state j the limit
p_j = lim_{t→∞} P(Z(t) = j | Z(0) = i)
exists and is independent of the initial state. With T_j(t) the time spent at state j up to time t,
p_j = lim_{t→∞} T_j(t)/t (with probability 1),
i.e., p_j is equal to the proportion of time spent at state j.

Occupancy Distribution
Theorem 4: Irreducible semi-Markov process with E[T_ii] < ∞; embedded Markov chain ergodic with stationary distribution π. The occupancy distribution of the semi-Markov process is
p_j = π_j E[T_j] / Σ_i π_i E[T_i]
- π_j: proportion of transitions into state j
- E[T_j]: mean time spent at j
- The probability of being at j is proportional to π_j E[T_j]
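
Theorem 4 translates into a one-line computation: weight the embedded-chain distribution by the mean holding times and renormalize. A sketch with made-up numbers:

import numpy as np

# Stationary distribution of the embedded Markov chain (illustrative values)
pi = np.array([0.2, 0.5, 0.3])
# Mean holding times E[T_j] at each state (illustrative values)
ET = np.array([1.0, 0.4, 2.5])

# Occupancy distribution: p_j proportional to pi_j * E[T_j]
p = pi * ET / np.sum(pi * ET)
print(p)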

Continuous-Time Markov Chains
- Continuous-time process {X(t): t ≥ 0} taking values in {0,1,2,…}; whenever it enters state i:
  - The time it spends at state i is exponentially distributed with parameter ν_i
  - When it leaves state i, it enters state j with probability P_ij, where Σ_{j≠i} P_ij = 1
- A continuous-time Markov chain is a semi-Markov process with F_ij(t) = 1 - e^{-ν_i t}
- Exponential holding times: a continuous-time Markov chain has the Markov property

Continuous-Time Markov Chains
- When at state i, the process makes transitions to state j ≠ i with rate q_ij = ν_i P_ij
- Total rate of transitions out of state i: Σ_{j≠i} q_ij = ν_i
- Average time spent at state i before making a transition: E[T_i] = 1/ν_i

Occupancy Probability
- Irreducible and regular continuous-time Markov chain:
  - The embedded Markov chain is irreducible
  - The number of transitions in a finite time interval is finite with probability 1
- From Theorem 3: for any state j, the limit p_j = lim_{t→∞} P(X(t) = j | X(0) = i) exists and is independent of the initial state
- p_j is the steady-state occupancy probability of state j
- p_j is equal to the proportion of time spent at state j [Why?]

Global Balance Equations
- Two possibilities for the occupancy probabilities:
  1. p_j = 0 for all j
  2. p_j > 0 for all j, and Σ_j p_j = 1
- Global Balance Equations: p_j Σ_{i≠j} q_ji = Σ_{i≠j} p_i q_ij
- Rate of transitions out of j = rate of transitions into j
- If a distribution {p_j : j = 0,1,…} satisfies the GBE, then it is the unique occupancy distribution of the Markov chain
- Alternative form of the GBE: for any set of states S,
  Σ_{j∈S} Σ_{i∉S} p_j q_ji = Σ_{j∈S} Σ_{i∉S} p_i q_ij
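
In matrix terms, the GBE for a finite chain read pQ = 0 with Σ_j p_j = 1, where Q is the transition-rate matrix with entries q_ij off the diagonal and -ν_i on it. A minimal sketch (the 3-state rate matrix is an arbitrary illustration):

import numpy as np

# Illustrative 3-state generator: off-diagonal rates q_ij, rows sum to 0
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

n = Q.shape[0]
# Solve p Q = 0 together with the normalization sum(p) = 1
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)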

Detailed Balance Equations
- DBE: p_j q_ji = p_i q_ij, for all i, j
- Simplify the calculation of the stationary distribution
- Need not hold for a given Markov chain
- Examples where they do hold: birth-death processes and reversible Markov chains

Birth-Death Process
[Figure: transition-rate diagram over states 0, 1, 2, …, n, n+1, … with a cut between S = {0,1,…,n} and S^c]
- Transitions only between neighboring states: q_ij = 0 if |i-j| > 1
- Detailed Balance Equations: p_n q_{n,n+1} = p_{n+1} q_{n+1,n}, n = 0,1,…
- Proof: the GBE with S = {0,1,…,n} give p_n q_{n,n+1} = p_{n+1} q_{n+1,n}

Birth-Death Process
- Use the DBE to determine the state probabilities as a function of p_0:
  p_n = p_0 Π_{k=0}^{n-1} (q_{k,k+1} / q_{k+1,k})
- Use the probability conservation law Σ_n p_n = 1 to find p_0
- Using the DBE in problems:
  - Prove that the DBE hold, or
  - Justify their validity (e.g., a reversible process), or
  - Assume they hold – have to guess their form – and solve the system
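
The resulting product form is easy to compute for a truncated chain; a small helper (the function name and the M/M/1-style rates are illustrative assumptions):

import numpy as np

def birth_death_stationary(birth, death):
    """p_n = p_0 * prod_{k<n} birth[k]/death[k], normalized; truncated chain."""
    ratios = np.concatenate(([1.0],
                             np.cumprod(np.asarray(birth) / np.asarray(death))))
    return ratios / ratios.sum()

# Example: M/M/1-like rates truncated at 50 states, lam=1, mu=2
lam, mu, K = 1.0, 2.0, 50
p = birth_death_stationary([lam] * K, [mu] * K)
print(p[:4])          # ~ [0.5, 0.25, 0.125, 0.0625], i.e., (1-rho) rho^n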

M/M/1 Queue
- Arrival process: Poisson with rate λ
- Service times: i.i.d., exponential with parameter μ
- Service times and interarrival times: independent
- Single server, infinite waiting room
- N(t): number of customers in the system at time t (the state)
[Figure: birth-death transition-rate diagram with arrival rate λ and service rate μ between neighboring states]

M/M/1 Queue
- Birth-death process → DBE: λ p_n = μ p_{n+1} ⇒ p_{n+1} = ρ p_n, where ρ = λ/μ < 1
- Hence p_n = ρ^n p_0
- Normalization constant: Σ_n p_n = 1 ⇒ p_0 = 1 - ρ
- Stationary distribution: p_n = (1-ρ) ρ^n, n = 0,1,…

The M/M/1 Queue
Average number of customers in the system:
N = Σ_n n p_n = Σ_n n (1-ρ) ρ^n = ρ/(1-ρ) = λ/(μ-λ)
Applying Little's Theorem, the average time in the system is
T = N/λ = 1/(μ-λ)
Similarly, the average waiting time and the average number of customers in the queue are
W = T - 1/μ = ρ/(μ-λ) and N_Q = λW = ρ²/(1-ρ)
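
The closed-form performance measures as a quick computation (the λ and μ values are arbitrary; stability requires ρ = λ/μ < 1):

lam, mu = 1.0, 2.0          # arrival and service rates (illustrative)
rho = lam / mu              # utilization, must be < 1 for stability

N  = rho / (1 - rho)        # average number in system
T  = N / lam                # average time in system (Little) = 1/(mu - lam)
W  = T - 1 / mu             # average waiting time in queue
NQ = lam * W                # average number waiting (Little again)
print(N, T, W, NQ)          # 1.0, 1.0, 0.5, 0.5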