© 2000 South-Western College Publishing/ITP. Slides prepared by John Loucks.

Chapter 17: Markov Processes
- Transition Probabilities
- Steady-State Probabilities
- Absorbing States
- Transition Matrix with Submatrices
- Fundamental Matrix

Markov Processes
- Markov process models are useful in studying the evolution of systems over repeated trials, sequential time periods, or stages.

Transition Probabilities
- Transition probabilities govern the manner in which the state of the system changes from one stage to the next. They are often collected in a transition matrix.
- A system is a finite Markov chain with stationary transition probabilities if:
  - there are a finite number of states,
  - the transition probabilities remain constant from stage to stage, and
  - the probability of the process being in a particular state at stage n+1 is completely determined by the state of the process at stage n (and not the state at stage n-1). This is referred to as the memory-less property.
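The three conditions above can be illustrated with a short simulation sketch. The two-state transition matrix `P` below is hypothetical; each row gives the next-state distribution for the corresponding current state, it never changes from stage to stage (stationarity), and each step looks only at the current state (the memory-less property):

```python
import random

# Hypothetical 2-state transition matrix: P[i][j] = P(next = j | current = i).
# Rows sum to 1, and P is the same at every stage (stationary probabilities).
P = [[0.35, 0.65],
     [0.20, 0.80]]

def step(state, rng):
    # Memory-less: the next state depends only on the current state.
    return 0 if rng.random() < P[state][0] else 1

def simulate(start, n_stages, seed=42):
    """Run the chain for n_stages transitions and return the visited path."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n_stages):
        path.append(step(path[-1], rng))
    return path
```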

Steady-State Probabilities
- The state probabilities at any stage of the process can be calculated recursively by repeatedly multiplying the initial state probabilities by the transition matrix.
- The probability of the system being in a particular state after a large number of stages is called a steady-state probability.
- Steady-state probabilities can be found by solving the system of equations  P =  together with the condition for probabilities that  i = 1. Here the matrix P is the transition probability matrix and the vector  is the vector of steady-state probabilities.
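As a numerical sketch of the recursive calculation, repeatedly multiplying a starting distribution by P (power iteration) converges, for a regular chain, to the same steady-state vector that solves the equations above. The matrix `example_P` is made up for illustration:

```python
def steady_state(P, iters=500):
    """Approximate the steady-state vector by iterating pi <- pi * P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical regular 2-state chain; its steady state is (0.75, 0.25).
example_P = [[0.9, 0.1],
             [0.3, 0.7]]
pi = steady_state(example_P)
```

The fixed point satisfies pi P = pi and sums to 1, which is exactly the system of equations on the slide.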

Absorbing States
- An absorbing state is one in which the probability that the process remains in that state once it enters the state is 1.
- If there is more than one absorbing state, then a steady-state condition independent of initial state conditions does not exist.

Transition Matrix with Submatrices
- If a Markov chain has both absorbing and nonabsorbing states, the states may be rearranged so that the transition matrix can be written as the following composition of four submatrices: I, 0, R, and Q:

      | I  0 |
      | R  Q |

Transition Matrix with Submatrices
- I = an identity matrix, indicating one always remains in an absorbing state once it is reached
- 0 = a zero matrix, representing 0 probability of transitioning from the absorbing states to the nonabsorbing states
- R = the transition probabilities from the nonabsorbing states to the absorbing states
- Q = the transition probabilities between the nonabsorbing states
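A small helper can extract the R and Q blocks once the absorbing states are identified; the 3-state matrix used to exercise it below is hypothetical:

```python
def canonical_blocks(P, absorbing):
    """Return (R, Q): transitions nonabsorbing -> absorbing and
    nonabsorbing -> nonabsorbing, per the [I 0; R Q] arrangement."""
    nonabsorbing = [i for i in range(len(P)) if i not in absorbing]
    R = [[P[i][j] for j in absorbing] for i in nonabsorbing]
    Q = [[P[i][j] for j in nonabsorbing] for i in nonabsorbing]
    return R, Q

# Hypothetical chain: state 0 is absorbing (stays put with probability 1).
P3 = [[1.0, 0.0, 0.0],
      [0.3, 0.5, 0.2],
      [0.1, 0.2, 0.7]]
R, Q = canonical_blocks(P3, absorbing=[0])
```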

Fundamental and NR Matrices
- The fundamental matrix, N, is the inverse of the difference between the identity matrix and the Q matrix: N = (I - Q)^-1
- The NR matrix, the product of the fundamental matrix and the R matrix, gives the probabilities of eventually moving from each nonabsorbing state to each absorbing state. Multiplying any vector of initial nonabsorbing state probabilities by NR gives the vector of probabilities for the process eventually reaching each of the absorbing states. Such computations enable economic analyses of systems and policies.
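For a 2x2 Q, N = (I - Q)^-1 can be computed with the adjugate-over-determinant formula, and NR then follows by matrix multiplication. The Q and R values below are illustrative only, not taken from any example in this deck:

```python
def inverse_2x2(M):
    """Invert a 2x2 matrix via the adjugate / determinant formula."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Q = [[0.5, 0.2],   # illustrative nonabsorbing-to-nonabsorbing block
     [0.2, 0.7]]
R = [[0.3],        # illustrative nonabsorbing-to-absorbing block
     [0.1]]
I_minus_Q = [[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]]
N = inverse_2x2(I_minus_Q)
NR = matmul(N, R)
```

With a single absorbing state, every entry of NR must be 1, since absorption is eventually certain from any nonabsorbing state; that makes a handy sanity check on the arithmetic.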

Example: North's Hardware
Henry, a persistent salesman, calls North's Hardware Store once a week hoping to speak with the store's buying agent, Shirley. If Shirley does not accept Henry's call this week, the probability she will do the same next week is .35. On the other hand, if she accepts Henry's call this week, the probability she will not do so next week is .20.

Example: North's Hardware
Transition Matrix

                         Next Week's Call
                         Refuses   Accepts
  This      Refuses        .35       .65
  Week's
  Call      Accepts        .20       .80

Example: North's Hardware
Steady-State Probabilities

Question: How many times per year can Henry expect to talk to Shirley?

Answer: To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks. ... continued

Example: North's Hardware
Steady-State Probabilities

Answer (continued):
Let  1 = long-run proportion of refused calls
     2 = long-run proportion of accepted calls

Then,
              | .35  .65 |
  [ 1   2 ]   | .20  .80 |  =  [ 1   2 ]

Example: North's Hardware
Steady-State Probabilities

Answer (continued):
Thus,  .35 1 + .20 2 =  1    (1)
       .65 1 + .80 2 =  2    (2)
and     1 +  2 = 1           (3)

Solving using equations (2) and (3) (equation (1) is redundant), substitute  1 = 1 -  2 into (2) to give:
  .65(1 -  2) + .80 2 =  2
This gives  2 = .76471. Substituting back into (3) gives  1 = .23529.
Thus the expected number of accepted calls per year is (.76471)(52) = 39.76, or about 40.
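The substitution above is easy to verify in a few lines (the variable names are mine):

```python
# Slide's substitution: put p1 = 1 - p2 into .65*p1 + .80*p2 = p2,
# which simplifies to .65 = .85*p2.
p2 = 0.65 / 0.85               # long-run proportion of accepted calls
p1 = 1 - p2                    # long-run proportion of refused calls
expected_accepted_per_year = 52 * p2
```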

Example: North's Hardware
State Probability

Question: What is the probability Shirley will accept Henry's next two calls if she does not accept his call this week?

Answer: Tracing the tree of two transitions from the Refuses state:
  Refuses, Refuses, Refuses:  P = .35(.35) = .1225
  Refuses, Refuses, Accepts:  P = .35(.65) = .2275
  Refuses, Accepts, Refuses:  P = .65(.20) = .1300
  Refuses, Accepts, Accepts:  P = .65(.80) = .5200
So the probability she accepts both of the next two calls is .5200.

Example: North's Hardware
State Probability

Question: What is the probability of Shirley accepting exactly one of Henry's next two calls if she accepts his call this week?

Answer: The probability of exactly one of the next two calls being accepted, given this week's call is accepted, is the sum of the probabilities of (accept next week and refuse the following week) and (refuse next week and accept the following week):
  .80(.20) + .20(.65) = .16 + .13 = .29
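Both two-step answers fall directly out of the transition probabilities; in the sketch below, state 0 = refuses and state 1 = accepts (my labeling):

```python
P = [[0.35, 0.65],   # from "refuses"
     [0.20, 0.80]]   # from "accepts"

# Starting from "accepts": exactly one of the next two calls accepted
# = (accept then refuse) + (refuse then accept).
exactly_one = P[1][1] * P[1][0] + P[1][0] * P[0][1]

# Starting from "refuses": both of the next two calls accepted.
both_accepted = P[0][1] * P[1][1]
```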

Example: Jetair Aerospace
The vice president of personnel at Jetair Aerospace has noticed that yearly shifts in personnel can be modeled by a Markov process. The transition matrix is:

                                 Next Year
                     Same
                   Position  Promotion  Retire  Quit  Fired
  Current Year
  Same Position       .55       .10       .05    .20    .10
  Promotion           .70       .20       .00    .10    .00
  Retire              .00       .00      1.00    .00    .00
  Quit                .00       .00       .00   1.00    .00
  Fired               .00       .00       .00    .00   1.00

Example: Jetair Aerospace
Transition Matrix (states rearranged so the absorbing states come first)

                           Next Year
                  Retire  Quit  Fired   Same  Promotion
  Current Year
  Retire           1.00    .00    .00    .00     .00
  Quit              .00   1.00    .00    .00     .00
  Fired             .00    .00   1.00    .00     .00
  Same              .05    .20    .10    .55     .10
  Promotion         .00    .10    .00    .70     .20

Example: Jetair Aerospace
Fundamental Matrix

  N = (I - Q)^-1  =  ( | 1  0 |  -  | .55  .10 | )^-1  =  |  .45  -.10 |^-1
                     ( | 0  1 |     | .70  .20 | )        | -.70   .80 |

Example: Jetair Aerospace
Fundamental Matrix

The determinant, d = a11 a22 - a12 a21
                   = (.45)(.80) - (-.10)(-.70) = .29

Thus,
       | .80/.29  .10/.29 |     | 2.76   .34 |
  N =  | .70/.29  .45/.29 |  =  | 2.41  1.55 |
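The determinant and the entries of N can be checked numerically from the I - Q entries used in the determinant computation on this slide:

```python
I_minus_Q = [[0.45, -0.10],
             [-0.70, 0.80]]
a, b = I_minus_Q[0]
c, d_ = I_minus_Q[1]
det = a * d_ - b * c                 # .45*.80 - (-.10)*(-.70) = .29
N = [[d_ / det, -b / det],           # adjugate over determinant
     [-c / det, a / det]]
```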

Example: Jetair Aerospace
NR Matrix

The probabilities of eventually moving to the absorbing states from the nonabsorbing states are given by:

         | 2.76   .34 |     | .05  .20  .10 |
  NR  =  | 2.41  1.55 |  x  | .00  .10  .00 |

                        Retire  Quit  Fired
             Same         .14    .59    .28
          =  Promotion    .12    .64    .24

Example: Jetair Aerospace
Absorbing States

Question: What is the probability of someone who was just promoted eventually retiring? ... quitting? ... being fired?

Answer: The answers are given by the bottom (Promotion) row of the NR matrix:
  Eventually retiring     = .12
  Eventually quitting     = .64
  Eventually being fired  = .24

The End of Chapter 17