INDR 343 Problem Session 2 16.10.2014 http://home.ku.edu.tr/~indr343/


16.4-3 Consider the Markov chain that has the following (one step) transition matrix. Determine the classes of this Markov chain and, for each class, determine whether it is recurrent or transient.
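The matrix itself is not reproduced in this transcript, but the classification can be checked mechanically. Below is a minimal Python sketch using a hypothetical 3-state matrix as a stand-in: it finds the communicating classes via mutual reachability and marks a class recurrent exactly when it is closed (no positive-probability arc leaves it), which characterizes recurrence for finite chains.

```python
import numpy as np

# Hypothetical stand-in matrix (the exercise's matrix is not shown above);
# replace P with the matrix from the problem statement.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.50, 0.50, 0.00],
    [0.25, 0.25, 0.50],
])
n = len(P)

# reach[i, j] = True iff state j is reachable from state i in >= 0 steps
reach = (P > 0) | np.eye(n, dtype=bool)
for k in range(n):  # Warshall's transitive closure
    reach |= reach[:, [k]] & reach[[k], :]

# i and j are in the same class iff each is reachable from the other
communicate = reach & reach.T
classes, seen = [], set()
for i in range(n):
    if i not in seen:
        cls = {j for j in range(n) if communicate[i, j]}
        seen |= cls
        classes.append(cls)

# For a finite chain, a class is recurrent iff it is closed.
for cls in classes:
    closed = all(P[i, j] == 0 for i in cls for j in range(n) if j not in cls)
    print(sorted(cls), "recurrent" if closed else "transient")
```

On the stand-in matrix this prints {0, 1} recurrent and {2} transient, since state 2 can reach the pair but cannot be reached back.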

Inventory Example

16.5-4 The leading brewery on the West Coast (labeled A) has hired an OR analyst to analyze its market position. It is particularly concerned about its major competitor (labeled B). The analyst believes that brand switching can be modeled as a Markov chain using three states, with states A and B representing customers drinking beer produced by the aforementioned breweries and state C representing all other brands. Data are taken monthly, and the analyst has constructed the following (one-step) transition matrix from past data.

16.5-4 What are the steady-state market shares for the two breweries?
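The brand-switching matrix did not survive the transcript, so the sketch below uses a hypothetical monthly matrix purely to show the computation: the steady-state shares solve πP = π together with Σπ = 1.

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with sum(pi) = 1 (finite, irreducible chain)."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)  # consistent overdetermined system
    return pi

# Hypothetical monthly brand-switching matrix over states (A, B, C);
# substitute the matrix given in the exercise.
P = np.array([
    [0.70, 0.20, 0.10],
    [0.20, 0.75, 0.05],
    [0.10, 0.10, 0.80],
])
print(steady_state(P))  # long-run market shares of A, B, C
```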

16.5-5 Consider the following blood inventory problem facing a hospital. There is a need for a rare blood type, namely, type AB, Rh(-). The demand D (in pints) over any 3-day period is given by P{D = 0} = 0.4, P{D = 1} = 0.3, P{D = 2} = 0.2, and P{D = 3} = 0.1. Note that the expected demand is 1 pint, since E(D) = 0.3(1) + 0.2(2) + 0.1(3) = 1. Suppose that there are 3 days between deliveries. The hospital proposes a policy of receiving 1 pint at each delivery and using the oldest blood first. If more blood is required than is on hand, an expensive emergency delivery is made. Blood is discarded if it is still on the shelf after 21 days. Denote the state of the system as the number of pints on hand just after a delivery. Thus, because of the discarding policy, the largest possible state is 7.

16.5-5 (a) Construct the (one-step) transition matrix for this Markov chain. (b) Find the steady-state probabilities of the state of the Markov chain. (c) Use the results from part (b) to find the steady-state probability that a pint of blood will need to be discarded during a 3-day period. (Hint: Because the oldest blood is used first, a pint reaches 21 days only if the state was 7 and then D = 0.) (d) Use the results from part (b) to find the steady-state probability that an emergency delivery will be needed during the 3-day period between regular deliveries.
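Since the demand distribution and the dynamics (fixed 1-pint delivery, discard only from state 7 with zero demand, emergency delivery on any shortfall) are fully specified, parts (a) through (d) can be sketched end to end in Python; the state after a demand of D pints is min(max(s - D, 0) + 1, 7).

```python
import numpy as np

demand = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}  # P{D = d} over a 3-day period

# (a) one-step transition matrix over states 1..7 (pints just after delivery)
P = np.zeros((7, 7))
for s in range(1, 8):
    for d, p in demand.items():
        # leftover stock (an emergency delivery covers any shortfall),
        # plus the regular 1-pint delivery, capped at 7 by the discard rule
        nxt = min(max(s - d, 0) + 1, 7)
        P[s - 1, nxt - 1] += p

# (b) steady-state probabilities: pi P = pi, sum(pi) = 1
A = np.vstack([P.T - np.eye(7), np.ones(7)])
b = np.append(np.zeros(7), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# (c) discard: only possible from state 7 with D = 0 (see the hint)
discard = pi[6] * demand[0]

# (d) emergency delivery: demand exceeds the pints on hand
emergency = sum(pi[s - 1] * sum(p for d, p in demand.items() if d > s)
                for s in range(1, 8))
print(pi, discard, emergency)
```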

16.6-1 A computer is inspected at the end of every hour. It is found to be either working (up) or failed (down). If the computer is found to be up, the probability of its remaining up for the next hour is 0.95. If it is down, the computer is repaired, which may require more than 1 hour. Whenever the computer is down (regardless of how long it has been down), the probability of its still being down 1 hour later is 0.5. (a) Construct the (one-step) transition matrix for this Markov chain. (b) Use the approach described in Sec. 16.6 to find the μij (the expected first passage time from state i to state j) for all i and j.
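For part (b), the first passage times satisfy μij = 1 + Σ_{k≠j} pik μkj. For each target state j this is a small linear system, solved below for the 2-state up/down chain; the case i = j gives the expected recurrence time.

```python
import numpy as np

P = np.array([[0.95, 0.05],   # state 0 = up
              [0.50, 0.50]])  # state 1 = down
n = len(P)

# For target j, zero out column j of P and solve (I - P_j) mu_.j = 1,
# which encodes mu_ij = 1 + sum_{k != j} p_ik * mu_kj for every i.
mu = np.zeros((n, n))
for j in range(n):
    Pj = P.copy()
    Pj[:, j] = 0.0
    mu[:, j] = np.linalg.solve(np.eye(n) - Pj, np.ones(n))
print(mu)  # mu_01 = 20, mu_10 = 2, mu_00 = 1.1, mu_11 = 11
```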

16.6-5 A production process contains a machine that deteriorates rapidly in both quality and output under heavy usage, so that it is inspected at the end of each day. Immediately after inspection, the condition of the machine is noted and classified into one of four possible states: 0 (good as new), 1 (operable, minor deterioration), 2 (operable, major deterioration), and 3 (inoperable, output of unacceptable quality).

16.6-5 The process can be modeled as a Markov chain with its (one-step) transition matrix P given by (a) Find the steady-state probabilities. (b) If the costs of being in states 0, 1, 2, and 3 are 0, $1,000, $3,000, and $6,000, respectively, what is the long-run expected average cost per day? (c) Find the expected recurrence time for state 0 (i.e., the expected length of time a machine can be used before it must be replaced).
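The transition matrix did not survive the transcript. The sketch below plugs in the deterioration matrix of the classic machine-maintenance example from the textbook, which is an assumption here, and then answers (a) through (c): the long-run average cost is Σ πi ci, and the expected recurrence time of state 0 is 1/π0.

```python
import numpy as np

# ASSUMED matrix (textbook machine-maintenance example); verify against
# the matrix printed in the exercise before relying on the numbers.
P = np.array([
    [0, 7/8, 1/16, 1/16],
    [0, 3/4, 1/8,  1/8 ],
    [0, 0,   1/2,  1/2 ],
    [1, 0,   0,    0   ],  # an inoperable machine is replaced
])
costs = np.array([0, 1000, 3000, 6000])  # daily cost in each state

A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.append(np.zeros(4), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady state:", pi)                 # (a)
print("avg cost/day:", pi @ costs)         # (b)
print("recurrence time of 0:", 1 / pi[0])  # (c) mu_00 = 1 / pi_0
```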

16.7-1 Consider the following gambler’s ruin problem. A gambler bets $1 on each play of a game. Each time, he has a probability p of winning and probability q = 1-p of losing the dollar bet. He will continue to play until he goes broke or nets a fortune of T dollars. Let Xn denote the number of dollars possessed by the gambler after the nth play of the game. Then

16.7-1 {Xn} is a Markov chain. The gambler starts with X0 dollars, where X0 is a positive integer less than T. (a) Construct the (one-step) transition matrix of the Markov chain. (b) Find the classes of the Markov chain. (c) Let T = 3 and p = 0.3. Using the notation of Sec. 16.7, find f10, f1T, f20, and f2T. (d) Let T = 3 and p = 0.7. Find f10, f1T, f20, and f2T.
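Parts (c) and (d) ask for absorption probabilities. One standard way to compute them is through the fundamental matrix N = (I - Q)^(-1), where Q holds the transitions among the transient states 1, ..., T-1; the sketch below does this for both parameter settings.

```python
import numpy as np

def ruin_probs(T, p):
    """Absorption probabilities f_i0, f_iT for i = 1, ..., T-1."""
    q, n = 1 - p, T - 1
    Q = np.zeros((n, n))   # transitions among transient states
    r0 = np.zeros(n)       # one-step probability of absorbing at 0
    rT = np.zeros(n)       # one-step probability of absorbing at T
    for i in range(1, T):
        if i > 1:     Q[i - 1, i - 2] = q
        else:         r0[i - 1] = q
        if i < T - 1: Q[i - 1, i] = p
        else:         rT[i - 1] = p
    N = np.linalg.inv(np.eye(n) - Q)  # fundamental matrix
    return N @ r0, N @ rT

for p in (0.3, 0.7):
    f0, fT = ruin_probs(3, p)
    print(f"p = {p}: f10 = {f0[0]:.4f}, f1T = {fT[0]:.4f}, "
          f"f20 = {f0[1]:.4f}, f2T = {fT[1]:.4f}")
```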

16.7-2 A video cassette recorder manufacturer is so certain of its quality control that it is offering a complete replacement warranty if a recorder fails within 2 years. Based upon compiled data, the company has noted that only 1 percent of its recorders fail during the first year, whereas 5 percent of the recorders that survive the first year will fail during the second year. The warranty does not cover replacement recorders.

16.7-2 (a) Formulate the evolution of the status of a recorder as a Markov chain whose states include two absorbing states: needing to honor the warranty, or having the recorder survive the warranty period. Then construct the (one-step) transition matrix. (b) Use the approach described in Sec. 16.7 to find the probability that the manufacturer will have to honor the warranty.
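Here the data fully determine the chain: from "year 1" a recorder fails with probability 0.01 and otherwise enters "year 2", where it fails with probability 0.05 and otherwise outlives the warranty. A short sketch using the absorption-probability formula B = (I - Q)^(-1) R:

```python
import numpy as np

# Transient states: 0 = in year 1, 1 = in year 2.
# Absorbing states: column 0 = warranty claim, column 1 = survives warranty.
Q = np.array([[0.0, 0.99],
              [0.0, 0.0 ]])
R = np.array([[0.01, 0.0 ],
              [0.05, 0.95]])

B = np.linalg.inv(np.eye(2) - Q) @ R   # absorption probabilities
print("P(honor warranty) =", B[0, 0])  # 0.01 + 0.99 * 0.05 = 0.0595
```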