Operations Research: Applications and Algorithms
Chapter 17 Markov Chains
To accompany Operations Research: Applications and Algorithms, 4th edition, by Wayne L. Winston. Copyright (c) 2004 Brooks/Cole, a division of Thomson Learning, Inc.

Description Sometimes we are interested in how a random variable changes over time. The study of how a random variable evolves over time is the study of stochastic processes. This chapter explains stochastic processes, in particular a type of stochastic process known as a Markov chain. We begin by defining the concept of a stochastic process.

17.1 What is a Stochastic Process? Suppose we observe some characteristic of a system at discrete points in time. Let X_t be the value of the system characteristic at time t. In most situations, X_t is not known with certainty before time t and may be viewed as a random variable. A discrete-time stochastic process is simply a description of the relation between the random variables X_0, X_1, X_2, ….
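As a small illustration, the sketch below (a hypothetical example, not from the text) simulates one realization of a discrete-time stochastic process: the running count of heads in repeated coin flips, observed at times t = 0, 1, 2, ….

```python
# A minimal sketch (hypothetical example): observe a system characteristic
# X_t at discrete points in time. Here X_t is the running count of heads
# after t coin flips; before time t, X_t is a random variable.
import random

random.seed(1)

X = [0]  # X_0 = 0: no flips observed yet
for t in range(1, 11):
    X.append(X[-1] + (1 if random.random() < 0.5 else 0))

print(X)  # one realization of the process X_0, X_1, ..., X_10
```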

A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time. For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.

17.2 What is a Markov Chain? One special type of discrete-time stochastic process is called a Markov chain. Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,

P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t)

Essentially, this says that the probability distribution of the state at time t+1 depends only on the state at time t (i_t) and does not depend on the states the chain passed through on the way to i_t at time t.
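The sketch below (a hypothetical two-state chain; the matrix P and the helper next_state are illustrative assumptions, not from the text) shows the Markov property in code: the next state is sampled using only the current state, never the earlier history.

```python
# A minimal sketch of the Markov property: the next state is drawn using
# only the current state; the path taken to reach it is irrelevant.
import random

# transition probabilities: P[i][j] = P(X_{t+1} = j | X_t = i)
P = [[0.9, 0.1],
     [0.4, 0.6]]

def next_state(i):
    """Sample X_{t+1} given X_t = i; the history before time t is unused."""
    return 0 if random.random() < P[i][0] else 1

random.seed(0)
state, path = 0, [0]
for t in range(10):
    state = next_state(state)   # depends only on the current state
    path.append(state)
print(path)                     # one realization X_0, X_1, ..., X_10
```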

In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(X_{t+1} = j | X_t = i) is independent of t. This assumption allows us to write P(X_{t+1} = j | X_t = i) = p_ij, where p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t+1. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.

The p_ij's are often referred to as the transition probabilities for the Markov chain. This equation implies that the probability law relating the next period's state to the current state does not change over time. It is often called the Stationary Assumption, and any Markov chain that satisfies it is called a stationary Markov chain. We also must define q_i to be the probability that the chain is in state i at time 0; in other words, P(X_0 = i) = q_i.
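Under the stationary assumption, transition frequencies observed early and late in a long simulated path should agree. The sketch below (hypothetical numbers; the helper est_p01 is an illustrative assumption, not from the text) estimates p_01 from the two halves of a simulated path; both estimates land near the true value 0.1.

```python
# A minimal sketch of the stationary assumption: since
# P(X_{t+1} = j | X_t = i) = p_ij does not depend on t, transition
# frequencies from the first and second halves of a long path agree.
import random

P = [[0.9, 0.1],
     [0.4, 0.6]]
q = [0.5, 0.5]                  # initial distribution: P(X_0 = i) = q_i

random.seed(0)
state = 0 if random.random() < q[0] else 1   # draw X_0 from q
path = [state]
for t in range(100_000):
    state = 0 if random.random() < P[state][0] else 1
    path.append(state)

def est_p01(segment):
    """Estimate p_01 = P(X_{t+1} = 1 | X_t = 0) from a path segment."""
    from_zero = [(a, b) for a, b in zip(segment, segment[1:]) if a == 0]
    return sum(b for _, b in from_zero) / len(from_zero)

half = len(path) // 2
print(est_p01(path[:half]), est_p01(path[half:]))  # both near 0.1
```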

We call the vector q = [q_1, q_2, …, q_s] the initial probability distribution for the Markov chain. In most applications, the transition probabilities are displayed as an s x s transition probability matrix P. The transition probability matrix P may be written as

P = | p_11  p_12  …  p_1s |
    | p_21  p_22  …  p_2s |
    |  …     …         …  |
    | p_s1  p_s2  …  p_ss |
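As a sketch of how q and P work together (hypothetical numbers, not from the text): by the law of total probability, P(X_1 = j) = q_1 p_1j + … + q_s p_sj, so the row vector qP gives the distribution of the state at time 1.

```python
# A minimal sketch (hypothetical numbers): storing q and P as arrays.
# By the law of total probability, P(X_1 = j) = sum_i q_i * p_ij,
# i.e. the row vector q P is the distribution of X_1.
import numpy as np

P = np.array([[0.9, 0.1],      # p_11, p_12
              [0.4, 0.6]])     # p_21, p_22
q = np.array([0.5, 0.5])       # q_1, q_2

print(q @ P)                   # P(X_1 = j) for each state j
```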

For each i, the chain must move to some state at time t+1, so for each i,

p_i1 + p_i2 + … + p_is = 1

We also know that each entry in the P matrix must be nonnegative. Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
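These two defining properties are easy to check in code. The sketch below (hypothetical numbers, not from the text) validates a candidate transition matrix.

```python
# A minimal sketch: verify the two defining properties of a transition
# probability matrix P (all entries nonnegative, each row sums to 1).
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

assert (P >= 0).all(), "entries must be nonnegative"
assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1"
print("P is a valid transition probability matrix")
```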