Al-Imam Mohammad Ibn Saud University CS433 Modeling and Simulation Lecture 06 – Part 01 Discrete Markov Chains Dr. Anis Koubâa 12 Apr 2009

Goals for Today
- Understand what a stochastic process is
- Understand the Markov property
- Learn how to use Markov chains to model stochastic processes

The Overall Picture
- Markov processes
- Discrete-time Markov chains
- Homogeneous and non-homogeneous Markov chains
- Transient and steady-state Markov chains
- Continuous-time Markov chains

Markov Processes: Stochastic Processes and the Markov Property

What is “Discrete Time”? Events occur at specific points in time (slots 1, 2, 3, 4, …).

What is a “Stochastic Process”? State space = {SUNNY, RAINY}. X(day i): the status of the weather observed each day (e.g., Day 1 = THU, Day 2 = FRI, …, Day 7 = WED).

Markov Processes
A stochastic process X(t) is a random variable that varies with time; a state of the process is a possible value of X(t).
A Markov process is a stochastic (random) process in which the future does not depend on the past, only on the present: the probability distribution of the next value is conditionally independent of the sequence of past values, a characteristic called the Markov property.
Markov property: the conditional probability distribution of future states of the process, given the present state and all past states, depends only upon the present state and not on any past states.
Markov chain: a discrete-time stochastic process with the Markov property.
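The Markov property is easy to see in a simulation: the next state is sampled using only the current state, never the history. A minimal sketch, using a hypothetical two-state weather chain with illustrative transition probabilities:

```python
import random

# Illustrative sketch: simulate a two-state Markov chain. Note that
# next_state() looks only at the current state -- the Markov property --
# and never at the history list.

SUNNY, RAINY = "S", "R"
P = {SUNNY: {SUNNY: 0.7, RAINY: 0.3},
     RAINY: {SUNNY: 0.6, RAINY: 0.4}}

def next_state(current, rng):
    """Sample the next state from the row of P for the current state."""
    return SUNNY if rng.random() < P[current][SUNNY] else RAINY

rng = random.Random(0)       # seeded for reproducibility
history = [SUNNY]
for _ in range(10):
    history.append(next_state(history[-1], rng))
print("".join(history))      # a random S/R trajectory of length 11
```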

What is the “Markov Property”? Markov property: the probability that it will be SUNNY on Day 6 (FUTURE), given that it is RAINY on Day 5 (NOW), is independent of the PAST events (Days 1–4).

Notation
- Discrete time: tk or simply k
- X(tk) or Xk: the stochastic process at time tk (or step k)
- X(tk) = xk or Xk = xk: the value of the stochastic process at instant tk (or step k)

Markov Chains: Discrete-Time Markov Chains (DTMC)

Markov Processes
The future of a Markov process does not depend on its past, only on its present. Since we are dealing with “chains”, X(ti) = Xi can take discrete values from a finite or a countably infinite set; the possible values of Xi form a countable set S called the state space of the chain.
For a Discrete-Time Markov Chain (DTMC), the Markov property simplifies to
Pr{Xk+1 = j | Xk = i, Xk-1 = ik-1, …, X0 = i0} = Pr{Xk+1 = j | Xk = i},
where Xk is the value of the state at the k-th step.

General Model of a Markov Chain
(State transition diagram: states S0, S1, S2 connected by arcs labeled with the transition probabilities p00, p01, …, p22.)
- Discrete time (slotted time)
- Si: state i of the state space
- pij: transition probability from state Si to state Sj

Example of a Markov Process: a very simple weather model.
State space = {SUNNY, RAINY}, with transition probabilities pSS = 0.7, pSR = 0.3, pRS = 0.6, pRR = 0.4.
- If today is sunny, what is the probability of SUNNY weather after 1 week?
- If today is rainy, what is the probability that it stays rainy for 3 days?
Problem: determine the transition probabilities from one state to another after n steps.
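Both questions can be answered numerically: n-step transition probabilities come from raising the one-step transition matrix to the n-th power. A minimal sketch with the slide's weather chain (one reading of "stay rainy for 3 days" is three consecutive self-loops in RAINY):

```python
# Weather chain: rows/columns ordered SUNNY, RAINY.
P = [[0.7, 0.3],
     [0.6, 0.4]]

# Compute P**7 by repeated 2x2 multiplication (7 daily steps = 1 week).
Pn = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
for _ in range(7):
    Pn = [[sum(Pn[i][r] * P[r][j] for r in range(2)) for j in range(2)]
          for i in range(2)]

# Probability of SUNNY after one week, starting sunny; already very
# close to the stationary value 2/3.
print(Pn[0][0])

# Probability of staying rainy for 3 consecutive days, given rainy today.
print(P[1][1] ** 3)  # 0.4**3 = 0.064
```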

Five Minutes Break You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.

Chapman-Kolmogorov Equation Determine the transition probabilities from one state to another after n steps.

Chapman-Kolmogorov Equations
We define the one-step transition probabilities at the instant k as
pij(k) = Pr{Xk+1 = j | Xk = i}.
Necessary condition: for all states i, all instants k, summing over all feasible transitions out of state i,
Σj pij(k) = 1.
We define the n-step transition probabilities from instant k to k+n as
pij(k, k+n) = Pr{Xk+n = j | Xk = i}.
(Diagram: paths from state xi at time k to state xj at time k+n, passing through an intermediate state x1, …, xR at an instant u, k ≤ u ≤ k+n.)

Chapman-Kolmogorov Equations
Using the law of total probability, conditioning on the state at an intermediate instant u (k ≤ u ≤ k+n):
pij(k, k+n) = Σr Pr{Xk+n = j | Xu = r, Xk = i} · Pr{Xu = r | Xk = i}.

Chapman-Kolmogorov Equations
Using the memoryless (Markov) property of Markov chains,
Pr{Xk+n = j | Xu = r, Xk = i} = Pr{Xk+n = j | Xu = r} = prj(u, k+n).
Therefore, we obtain the Chapman-Kolmogorov equation
pij(k, k+n) = Σr pir(k, u) · prj(u, k+n).
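For a homogeneous chain the equation reduces to the matrix identity P^(m+n) = P^(m) · P^(n). A quick numerical check of this on a small matrix with illustrative values:

```python
# Numerical check of Chapman-Kolmogorov for a homogeneous chain:
# the (m+n)-step transition matrix equals the product of the
# m-step and n-step matrices.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """Compute P**n by repeated multiplication, starting from identity."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mul(result, P)
    return result

P = [[0.7, 0.3],
     [0.6, 0.4]]
m, n = 3, 4

lhs = mat_pow(P, m + n)                      # p_ij(m+n)
rhs = mat_mul(mat_pow(P, m), mat_pow(P, n))  # sum_r p_ir(m) * p_rj(n)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))
```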

Chapman-Kolmogorov Equations
Example on the simple weather model (pSS = 0.7, pSR = 0.3, pRS = 0.6, pRR = 0.4). What is the probability that the weather is rainy on day 3, given that it is sunny on day 1?
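Going from day 1 to day 3 is a two-step transition, so summing over the day-2 weather gives:

```latex
p^{(2)}_{SR} = p_{SS}\,p_{SR} + p_{SR}\,p_{RR}
             = 0.7 \times 0.3 + 0.3 \times 0.4
             = 0.21 + 0.12 = 0.33
```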

Transition Matrix: Generalization of the Chapman-Kolmogorov Equations

Transition Matrix
To simplify the representation of the transition probabilities, define the n-step transition matrix as
P(k, k+n) = [pij(k, k+n)].
We can then rewrite the Chapman-Kolmogorov equation as
P(k, k+n) = P(k, u) P(u, k+n), k ≤ u ≤ k+n.
Choosing u = k+n-1 gives the forward Chapman-Kolmogorov equation
P(k, k+n) = P(k, k+n-1) P(k+n-1, k+n),
where P(k+n-1, k+n) is the one-step transition matrix at instant k+n-1.

Transition Matrix
Choosing u = k+1 gives the backward Chapman-Kolmogorov equation
P(k, k+n) = P(k, k+1) P(k+1, k+n),
where P(k, k+1) is the one-step transition matrix at instant k.

Transition Matrix
Example on the simple weather model (pSS = 0.7, pSR = 0.3, pRS = 0.6, pRR = 0.4). What is the probability that the weather is rainy on day 3, given that it is sunny on day 1? With the one-step matrix P = [[0.7, 0.3], [0.6, 0.4]] (rows and columns ordered SUNNY, RAINY), the answer is the (SUNNY, RAINY) entry of P², i.e. 0.7·0.3 + 0.3·0.4 = 0.33.

Homogeneous Markov Chains
Time-homogeneous Markov chains (or, Markov chains with time-homogeneous transition probabilities) are processes where the one-step transition probabilities are independent of the instant k:
pij(k) = Pr{Xk+1 = j | Xk = i} = pij for all k;
pij is then said to be a stationary transition probability. Even though the one-step transition probability is independent of k, this does not mean that the joint probability of Xk+1 and Xk is also independent of k. Observe that
Pr{Xk+1 = j, Xk = i} = Pr{Xk+1 = j | Xk = i} · Pr{Xk = i} = pij · Pr{Xk = i},
which still depends on k through Pr{Xk = i}.

Two Minutes Break You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.

Example: Two-Processor System
Consider a two-processor computer system where time is divided into time slots and which operates as follows:
- At most one job can arrive during any time slot; this happens with probability α.
- Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
- If both processors are busy, the job is lost.
- When a processor is busy, it completes the job with probability β during any one time slot.
- If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).
Q1. Describe the automaton that models this system (not included).
Q2. Describe the Markov chain that models this system.

Example: Automaton (not included)
Let the state be the number of jobs currently processed by the system; the state space is X = {0, 1, 2}.
Event set: a: job arrival, d: job departure.
Feasible event sets: if X = 0, then Γ(X) = {a}; if X = 1 or X = 2, then Γ(X) = {a, d}.
(State transition diagram: states 0, 1, 2, with arcs labeled by the feasible events, e.g. a from 0 to 1, d from 1 to 0, and combinations such as a,d leaving the state unchanged.)

Example: Alternative Automaton (not included)
Let (X1, X2) indicate whether processor 1 and processor 2 are busy, Xi ∈ {0, 1}.
Event set: a: job arrival, di: job departure from processor i.
Feasible event sets:
- If X = (0,0), then Γ(X) = {a}
- If X = (0,1), then Γ(X) = {a, d2}
- If X = (1,0), then Γ(X) = {a, d1}
- If X = (1,1), then Γ(X) = {a, d1, d2}
(State transition diagram: states 00, 10, 01, 11, with arcs labeled by the corresponding events.)

Example: Markov Chain
For the state transition diagram of the Markov chain, each transition is simply marked with its transition probability (states 0, 1, 2 with arcs pij, i, j ∈ {0, 1, 2}).

Example: Markov Chain
Suppose that α = 0.5 and β = 0.7; the transition probabilities pij then follow from the slot dynamics (departures occur before arrivals).
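The slide's numeric matrix was an image; the following sketch reconstructs it from the stated dynamics, under the assumed reading that each of the i busy processors completes independently with probability β, departures happen first, then at most one arrival (probability α), and an arrival that still finds both processors busy is lost:

```python
from math import comb

def transition_matrix(alpha, beta):
    """One-step transition matrix for the two-processor chain (sketch)."""
    P = [[0.0] * 3 for _ in range(3)]
    for i in range(3):                  # current number of jobs in service
        for d in range(i + 1):          # number of departures this slot
            # Probability of exactly d completions among i busy processors
            p_dep = comb(i, d) * beta**d * (1 - beta)**(i - d)
            after = i - d               # jobs left after departures
            P[i][after] += p_dep * (1 - alpha)        # no arrival
            P[i][min(after + 1, 2)] += p_dep * alpha  # arrival (lost if full)
    return P

P = transition_matrix(0.5, 0.7)
for row in P:
    print([round(p, 3) for p in row])
# Rows: [0.5, 0.5, 0.0], [0.35, 0.5, 0.15], [0.245, 0.455, 0.3]
```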

State Holding Time How long does the chain stay in one state before moving to another?

State Holding Times Suppose that at step k the Markov chain has transitioned into state Xk = i. An interesting question is how long it will stay in state i. Let V(i) be the random variable representing the number of consecutive time slots during which the chain remains in state i. We are interested in the quantity Pr{V(i) = n}.

State Holding Times
Since in each slot the chain leaves state i with probability 1 - pii, independently of the past,
Pr{V(i) = n} = pii^(n-1) (1 - pii), n = 1, 2, …
This is the geometric distribution with parameter 1 - pii. Clearly, V(i) has the memoryless property:
Pr{V(i) > n + m | V(i) > n} = Pr{V(i) > m}.
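Both claims can be checked numerically. A minimal sketch, with an arbitrarily chosen self-loop probability p_ii = 0.7:

```python
# Sanity-check the state-holding-time distribution for a self-loop
# probability p_ii: Pr{V = n} = p_ii**(n-1) * (1 - p_ii) is geometric,
# and the tail Pr{V > n} = p_ii**n exhibits the memoryless property.

p_ii = 0.7  # example self-loop probability (arbitrary choice)

def pmf(n):
    """Pr{V = n}: stay n-1 slots, then leave."""
    return p_ii ** (n - 1) * (1 - p_ii)

def tail(n):
    """Pr{V > n}: stay at least n more slots."""
    return p_ii ** n

# The probabilities sum (up to a negligible truncated tail) to 1:
total = sum(pmf(n) for n in range(1, 200))
print(total)

# Memoryless property: Pr{V > n+m | V > n} == Pr{V > m}
n, m = 4, 6
print(tail(n + m) / tail(n), tail(m))
```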

State Probabilities
An interesting quantity we are usually interested in is the probability of finding the chain in the various states. We define
πj(k) = Pr{Xk = j},
and, for all possible states, the row vector π(k) = [π0(k), π1(k), …].
Using total probability we can write
πj(k) = Σi Pr{Xk = j | Xk-1 = i} · Pr{Xk-1 = i} = Σi pij(k-1) πi(k-1).
In vector form, one can write
π(k) = π(k-1) P(k-1),
or, for a homogeneous Markov chain,
π(k) = π(k-1) P = π(0) P^k.
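The recursion π(k) = π(k-1) P can be sketched on the two-state weather chain from the earlier slides (used here as a stand-in example); the iterates converge quickly to the stationary distribution [2/3, 1/3]:

```python
# Iterate pi(k) = pi(k-1) P on the weather chain, starting from a
# deterministic initial state (sunny with probability 1).

P = [[0.7, 0.3],
     [0.6, 0.4]]

pi = [1.0, 0.0]          # pi(0): start sunny
for k in range(1, 21):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    if k <= 3:
        print(k, [round(x, 4) for x in pi])  # transient behavior

print([round(x, 6) for x in pi])  # close to the stationary [2/3, 1/3]
```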

State Probabilities: Example
Suppose that π(0) and the transition matrix P are given. Find π(k) for k = 1, 2, … (the transient behavior of the system). In general, the transient behavior is obtained by solving the difference equation π(k) = π(k-1) P.