Operations Research: Applications and Algorithms
Chapter 17: Markov Chains
to accompany Operations Research: Applications and Algorithms, 4th edition, by Wayne L. Winston
Copyright (c) 2004 Brooks/Cole, a division of Thomson Learning, Inc.

Description Sometimes we are interested in how a random variable changes over time. For example, we may want to know how the price of a share of stock or a firm's market share evolves over time. The study of how a random variable evolves over time is known as a stochastic process. In this chapter we explain stochastic processes and, in particular, a type of stochastic process known as a Markov chain. We begin by defining the concept of a stochastic process.

17.1 What is a Stochastic Process? Suppose we observe some characteristic of a system at discrete points in time. Let Xt be the value of the system characteristic at time t. In most situations, Xt is not known with certainty before time t and may be viewed as a random variable. A discrete-time stochastic process is simply a description of the relation between the random variables X0, X1, X2, ….

Example: Let X0 be the price of a share of CSL Computer stock at the beginning of the current trading day. Also, let Xt be the price of a share of CSL stock at the beginning of the tth trading day in the future. Clearly, knowing the values of X0, X1, X2, …, Xt tells us something about the probability distribution of Xt+1; the question is, what does the past (stock prices up to time t) tell us about Xt+1? The answer to this question is of critical importance in finance.

A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time. For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.

17.2 What is a Markov Chain? One special type of discrete-time stochastic process is called a Markov chain. Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states i0, i1, …, it+1,

P(Xt+1 = it+1 | Xt = it, Xt-1 = it-1, …, X1 = i1, X0 = i0) = P(Xt+1 = it+1 | Xt = it)

Essentially, this says that the probability distribution of the state at time t+1 depends only on the state at time t (namely, it) and does not depend on the states the chain passed through on the way to it at time t.
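To make the Markov property concrete, here is a minimal Python sketch (an illustration, not part of the textbook) that simulates such a chain. The two-state transition matrix is made up for the example; what matters is that sampling the next state uses only the current state, never the earlier history.

```python
import random

# Hypothetical two-state chain (states 0 and 1); P[i][j] = P(X_{t+1} = j | X_t = i).
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(current_state):
    """Sample X_{t+1} given X_t. The Markov property is visible in the
    signature: the next state depends only on the current state."""
    return random.choices([0, 1], weights=P[current_state])[0]

def simulate(x0, n_steps):
    """Generate X_0, X_1, ..., X_n as a list."""
    path = [x0]
    for _ in range(n_steps):
        path.append(step(path[-1]))
    return path

print(simulate(x0=0, n_steps=10))  # e.g. [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1]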

In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(Xt+1 = j | Xt = i) is independent of t. This assumption allows us to write P(Xt+1 = j | Xt = i) = pij, where pij is the probability that, given the system is in state i at time t, it will be in state j at time t+1. If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.

The pij's are often referred to as the transition probabilities for the Markov chain. The equation P(Xt+1 = j | Xt = i) = pij implies that the probability law relating the next period's state to the current state does not change over time. This is often called the stationarity assumption, and any Markov chain that satisfies it is called a stationary Markov chain. We also must define qi to be the probability that the chain is in state i at time 0; in other words, P(X0 = i) = qi.

We call the vector q = [q1, q2, …, qs] the initial probability distribution for the Markov chain. In most applications, the transition probabilities are displayed as an s x s transition probability matrix P. The transition probability matrix P may be written as

        | p11  p12  …  p1s |
    P = | p21  p22  …  p2s |
        |  ⋮    ⋮         ⋮ |
        | ps1  ps2  …  pss |
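Since q and P together determine the distribution of the chain at later times, a short sketch helps fix the representation. By the law of total probability, P(X1 = j) = Σi qi pij, which is just the vector-matrix product qP. The sketch below (with a made-up two-state chain) shows this computation directly:

```python
# Hypothetical 2-state chain: initial distribution q and transition matrix P.
q = [0.5, 0.5]                 # q_i = P(X_0 = i)
P = [[0.9, 0.1],
     [0.4, 0.6]]

# P(X_1 = j) = sum over i of q_i * p_ij, i.e., the row vector q times the matrix P.
dist_at_1 = [sum(q[i] * P[i][j] for i in range(len(q))) for j in range(len(P[0]))]
print(dist_at_1)  # [0.65, 0.35]
```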

Given that the state at time t is i, the process must be in some state at time t+1. This means that for each i,

    pi1 + pi2 + … + pis = 1

We also know that each entry in the P matrix must be nonnegative. Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
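As a quick sanity check, a few lines of Python (an illustrative sketch, not textbook material) can verify that a candidate matrix satisfies both conditions:

```python
def is_valid_transition_matrix(P, tol=1e-9):
    """Check that P is square, entries are nonnegative,
    and each row sums to 1 (within floating-point tolerance)."""
    s = len(P)
    for row in P:
        if len(row) != s:                    # must be s x s
            return False
        if any(p < 0 for p in row):          # entries are probabilities
            return False
        if abs(sum(row) - 1.0) > tol:        # each row must sum to 1
            return False
    return True

print(is_valid_transition_matrix([[0.9, 0.1], [0.4, 0.6]]))  # True
print(is_valid_transition_matrix([[0.9, 0.2], [0.4, 0.6]]))  # False (row sums to 1.1)
```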

Example: Choosing Balls from an Urn An urn contains two unpainted balls at present. We choose a ball at random and flip a coin. If the chosen ball is unpainted and the coin comes up heads, we paint the chosen unpainted ball red; if the chosen ball is unpainted and the coin comes up tails, we paint the chosen unpainted ball black. If the ball has already been painted, then (whether heads or tails has been tossed) we change the color of the ball (from red to black or from black to red). To model this situation as a stochastic process, we define time t to be the time after the coin has been flipped for the tth time and the chosen ball has been painted.

Solution (see page 926 of the textbook): The state at any time may be described by the vector [u r b], where u is the number of unpainted balls in the urn, r is the number of red balls in the urn, and b is the number of black balls in the urn. We are given that X0 = [2 0 0]. After the first coin toss, one ball will have been painted either red or black, and the state will be either [1 1 0] or [1 0 1]. Hence, we can be sure that X1 = [1 1 0] or X1 = [1 0 1]. Clearly, there must be some sort of relation between the Xt's. For example, if Xt = [0 2 0], we can be sure that Xt+1 will be [0 1 1]. The six possible states of the chain are [2 0 0], [1 1 0], [1 0 1], [0 2 0], [0 1 1], and [0 0 2].
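The transitions here are simple enough to enumerate by hand, but a short Python sketch (an illustration, not part of the textbook solution; the helper names next_state and transition_row are my own) can build the full 6 x 6 transition matrix directly from the rules: each of the two balls is equally likely to be drawn, and the coin is fair.

```python
from fractions import Fraction

# States are (u, r, b): unpainted, red, black balls. There are always 2 balls.
states = [(2, 0, 0), (1, 1, 0), (1, 0, 1), (0, 2, 0), (0, 1, 1), (0, 0, 2)]

def next_state(state, ball, coin):
    """Resulting state when we draw a ball of the given color
    ('u', 'r', or 'b') and the coin shows 'H' or 'T'."""
    u, r, b = state
    if ball == "u":                      # unpainted: paint red on heads, black on tails
        return (u - 1, r + 1, b) if coin == "H" else (u - 1, r, b + 1)
    if ball == "r":                      # painted red: switch to black (coin irrelevant)
        return (u, r - 1, b + 1)
    return (u, r + 1, b - 1)             # painted black: switch to red

def transition_row(state):
    """Row of the transition matrix for one state: a ball of a given color is
    drawn with probability count/2, and each coin face has probability 1/2."""
    u, r, b = state
    row = {s: Fraction(0) for s in states}
    for color, count in (("u", u), ("r", r), ("b", b)):
        for coin in ("H", "T"):
            if count:
                row[next_state(state, color, coin)] += Fraction(count, 2) * Fraction(1, 2)
    return [row[s] for s in states]

P = [transition_row(s) for s in states]
for s, row in zip(states, P):
    print(s, [str(p) for p in row])
```

Running this reproduces the facts noted above: from [2 0 0] the chain moves to [1 1 0] or [1 0 1] with probability 1/2 each, and from [0 2 0] it moves to [0 1 1] with probability 1.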
