Discrete-time Markov chain (continuation)

CHAPMAN-KOLMOGOROV EQUATIONS IN MATRIX FORM

Starting from the one-step transition matrix $\mathbf{P}$, we have

$$\mathbf{P}^{(2)} = \mathbf{P}\mathbf{P} = \mathbf{P}^{2}, \qquad
\mathbf{P}^{(3)} = \mathbf{P}\mathbf{P}^{(2)} = \mathbf{P}^{3}, \qquad
\mathbf{P}^{(4)} = \mathbf{P}\mathbf{P}^{(3)} = \mathbf{P}^{4}.$$

In general, $\mathbf{P}^{(n)} = \mathbf{P}\mathbf{P}^{(n-1)} = \mathbf{P}^{n}$.

CHAPMAN-KOLMOGOROV EQUATIONS IN MATRIX FORM

Recall our example with

$$\mathbf{P} = \begin{pmatrix}
0 & 0.7 & 0 & 0.3 \\
0.2 & 0 & 0.8 & 0 \\
0.9 & 0 & 0.1 & 0 \\
0 & 0.05 & 0 & 0.95
\end{pmatrix}$$

Refer to the MS Excel file.
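The same computation can be reproduced outside the spreadsheet. Below is a minimal NumPy sketch (the variable names are ours) that builds the example matrix and applies the Chapman-Kolmogorov equations in matrix form:

```python
import numpy as np

# One-step transition matrix P from the example
P = np.array([
    [0.0, 0.70, 0.0, 0.30],
    [0.2, 0.00, 0.8, 0.00],
    [0.9, 0.00, 0.1, 0.00],
    [0.0, 0.05, 0.0, 0.95],
])

# Chapman-Kolmogorov in matrix form: P^(n) = P P^(n-1) = P^n
P2 = P @ P                           # two-step transition probabilities
P10 = np.linalg.matrix_power(P, 10)  # ten-step transition probabilities
print(np.round(P10, 2))
```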

Unconditional state probabilities

If we start with $X_0 = 2$, what are the probabilities $P\{X_{10}=1\}$, $P\{X_{10}=2\}$, $P\{X_{10}=3\}$, and $P\{X_{10}=4\}$ after 10 time periods?

Unconditional state probabilities

After 10 matrix multiplications:

$$\mathbf{P}^{(10)} \approx \begin{pmatrix}
0.12 & 0.17 & 0.12 & 0.60 \\
0.16 & 0.12 & 0.17 & 0.55 \\
0.19 & 0.13 & 0.11 & 0.56 \\
0.09 & 0.10 & 0.08 & 0.73
\end{pmatrix}$$

Unconditional state probabilities

To answer the question, multiply the initial distribution for $X_0 = 2$ by $\mathbf{P}^{(10)}$:

$$\begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}
\begin{pmatrix}
0.12 & 0.17 & 0.12 & 0.60 \\
0.16 & 0.12 & 0.17 & 0.55 \\
0.19 & 0.13 & 0.11 & 0.56 \\
0.09 & 0.10 & 0.08 & 0.73
\end{pmatrix}$$

Unconditional state probabilities

If we start with $X_0 = 2$, the probabilities are $P\{X_{10}=1\} = 0.16$, $P\{X_{10}=2\} = 0.12$, $P\{X_{10}=3\} = 0.17$, and $P\{X_{10}=4\} = 0.55$.
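The same answer in code (a sketch reusing `P` from above; the row vector places all of the initial probability mass on state 2):

```python
# Initial distribution for X0 = 2 (states are ordered 1, 2, 3, 4)
x0 = np.array([0.0, 1.0, 0.0, 0.0])

# Unconditional distribution after 10 periods: x0 P^(10)
x10 = x0 @ np.linalg.matrix_power(P, 10)
print(np.round(x10, 2))  # approx. [0.16, 0.12, 0.17, 0.55]
```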

STEADY-STATE PROBABILITIES

After 50 matrix multiplications:

$$\mathbf{P}^{(50)} \approx \begin{pmatrix}
0.11 & 0.11 & 0.10 & 0.67 \\
0.11 & 0.11 & 0.10 & 0.67 \\
0.11 & 0.11 & 0.10 & 0.67 \\
0.11 & 0.11 & 0.10 & 0.67
\end{pmatrix}$$

What is the meaning of this?
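Continuing the sketch above, raising `P` to the 50th power shows every row converging to the same vector:

```python
P50 = np.linalg.matrix_power(P, 50)
print(np.round(P50, 2))  # each row is approx. [0.11, 0.11, 0.10, 0.67]
```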

STEADY-STATE PROBABILITIES

For every starting state $i$,
$$p_{i1}^{(50)} \approx 0.11, \quad p_{i2}^{(50)} \approx 0.11, \quad p_{i3}^{(50)} \approx 0.10, \quad p_{i4}^{(50)} \approx 0.67.$$
These are called the steady-state probabilities of the Markov chain.

STEADY-STATE PROBABILITIES

$$p_{i1}^{(50)} \approx 0.11, \quad p_{i2}^{(50)} \approx 0.11, \quad p_{i3}^{(50)} \approx 0.10, \quad p_{i4}^{(50)} \approx 0.67.$$

Note: do not confuse steady-state probabilities with stationary transition probabilities.

STEADY-STATE PROBABILITIES

$$p_{i1}^{(50)} \approx 0.11, \quad p_{i2}^{(50)} \approx 0.11, \quad p_{i3}^{(50)} \approx 0.10, \quad p_{i4}^{(50)} \approx 0.67.$$

Can we derive them directly, without performing so many matrix multiplications?

STEADY-STATE PROBABILITIES

In some cases, we can use the concept of fixed-point iteration:
$$\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P},$$
where the vector of steady-state probabilities is $\boldsymbol{\pi} = \begin{pmatrix} \pi_0 & \pi_1 & \pi_2 & \cdots & \pi_M \end{pmatrix}$.
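Read literally as a fixed-point iteration, this means multiplying a distribution by $\mathbf{P}$ repeatedly until it stops changing. A sketch (again reusing `P`; the starting distribution and tolerance are arbitrary choices):

```python
# Fixed-point iteration: pi <- pi P until convergence
pi = np.full(4, 0.25)  # any starting distribution works for this chain
for _ in range(1000):
    nxt = pi @ P
    if np.allclose(nxt, pi, atol=1e-12):
        break
    pi = nxt
print(np.round(pi, 4))  # approx. [0.1125, 0.1125, 0.1, 0.675]
```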

STEADY-STATE PROBABILITIES

Recall again our example:

$$\begin{pmatrix} \pi_1 & \pi_2 & \pi_3 & \pi_4 \end{pmatrix} =
\begin{pmatrix} \pi_1 & \pi_2 & \pi_3 & \pi_4 \end{pmatrix}
\begin{pmatrix}
0 & 0.7 & 0 & 0.3 \\
0.2 & 0 & 0.8 & 0 \\
0.9 & 0 & 0.1 & 0 \\
0 & 0.05 & 0 & 0.95
\end{pmatrix}$$

We also include the normalization equation $\pi_1 + \pi_2 + \pi_3 + \pi_4 = 1$.
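Alternatively, the system can be solved directly: rewrite $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$ as $(\mathbf{I} - \mathbf{P}^{\mathsf{T}})\boldsymbol{\pi}^{\mathsf{T}} = \mathbf{0}$, replace one of the redundant balance equations with the normalization condition, and solve the resulting square system. A sketch of this standard approach (not necessarily how the accompanying Excel file does it):

```python
# Balance equations (I - P^T) pi = 0, with the last equation
# replaced by the normalization condition sum(pi) = 1.
A = np.eye(4) - P.T
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)
print(pi)  # [0.1125, 0.1125, 0.1, 0.675]
```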

STEADY-STATE PROBABILITIES

Solving this system gives
$$\pi_1 \approx 0.1125, \quad \pi_2 \approx 0.1125, \quad \pi_3 \approx 0.1, \quad \pi_4 \approx 0.675.$$