
MARKOV CHAIN EXAMPLE Personnel Modeling

DYNAMICS Grades N1..N4. Personnel exhibit one of the following behaviors:
–get promoted
–quit, causing a vacancy that is filled during the next promotion period
–remain in grade
–get demoted

STATE SPACE S = {N1, N2, N3, N4, V}, where V stands for Vacancy. Every time period, the employee moves between states according to a transition probability.

MODELED AS A MARKOV CHAIN Discrete time periods. Stationarity:
–transition probabilities stay constant over time
–transitions do not depend on time in grade

TRANSITION DIAGRAM (figure: state-transition diagram over N1..N4 and V)

PROBABILITY TRANSITION MATRIX (figure: the 5x5 matrix P, rows and columns indexed by N1..N4 and V; only row N1 = [0.1, 0.6, 0, 0, 0.3] is recoverable, from the calculation two slides below)
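The matrix entries did not survive transcription, so here is a minimal sketch of P in Python/NumPy. Only row N1 is recoverable from the a1 calculation on the next slide; every other row is an illustrative assumption, not the original slide's values.

    import numpy as np

    STATES = ["N1", "N2", "N3", "N4", "V"]

    # Row N1 = [0.1, 0.6, 0, 0, 0.3] is recoverable from the slides;
    # all other rows are assumed for illustration only.
    P = np.array([
        [0.1, 0.6, 0.0, 0.0, 0.3],  # N1: remain, promote, quit
        [0.1, 0.5, 0.2, 0.0, 0.2],  # N2: demote, remain, promote, quit (assumed)
        [0.0, 0.0, 0.6, 0.3, 0.1],  # N3: remain, promote, quit (assumed)
        [0.0, 0.0, 0.0, 0.9, 0.1],  # N4: remain, quit (assumed)
        [1.0, 0.0, 0.0, 0.0, 0.0],  # V: vacancy filled by a new N1 hire (assumed)
    ])

    assert np.allclose(P.sum(axis=1), 1.0)  # stochastic: every row sums to 1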

MEASURES OF INTEREST Proportion of the workforce at each level Expected labor costs per year Expected annual cost of Entry-level training PDF of passage from N1 to N4

TRANSITION PROBABILITY CALCULATION Start with an employee in N1:
a0 = [1, 0, 0, 0, 0]
a1 = a0 * P = [0.1, 0.6, 0, 0, 0.3]
a2 = a1 * P
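A short sketch of that propagation using the illustrative P above; each period the distribution vector is multiplied by P on the right.

    a = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # a0: employee starts in N1
    for n in range(1, 4):
        a = a @ P                             # a_n = a_{n-1} P
        print(f"a{n} =", np.round(a, 3))
    # a1 comes out as [0.1, 0.6, 0, 0, 0.3], matching the slide,
    # because row N1 of P is taken from the slides.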

STEADY STATE PROBABILITIES a0 * P * P * P * P * ...
–P − I is singular (rank 4), so πP = π alone does not determine π
–P is stochastic: rows sum to 1
–π is the stationary probability distribution
–π_N1 is the proportion of the time spent in state N1

COMPUTATION STRATEGY
πP = π
π_N1 + π_N2 + π_N3 + π_N4 + π_V = 1
Substitute the stochastic (normalization) equation for the first component of πP = π, then solve the linear system via Gaussian elimination, as sketched below.
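A minimal sketch of that strategy with NumPy, reusing the illustrative P defined earlier: transpose πP = π into (P^T − I)π = 0, overwrite the first equation with the normalization constraint, and solve.

    A = P.T - np.eye(5)         # encodes pi @ P = pi, transposed
    A[0, :] = 1.0               # replace the first equation with sum(pi) = 1
    b = np.zeros(5)
    b[0] = 1.0
    pi = np.linalg.solve(A, b)  # LU factorization, i.e. Gaussian elimination
    print(dict(zip(STATES, np.round(pi, 3))))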

...more COMPUTATION STRATEGY Start with an arbitrary a0 and calculate a1, a2, a3, ...; the sequence will converge to π (a sketch follows).
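The same iteration in NumPy, again reusing the illustrative P; this is the power method, stopped once successive distributions agree to tolerance.

    a = np.full(5, 0.2)                   # arbitrary a0: uniform over the 5 states
    for n in range(1, 1000):
        a_next = a @ P
        if np.allclose(a_next, a, atol=1e-12):
            break                         # converged to pi
        a = a_next
    print(f"converged in {n} steps:", np.round(a, 3))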

CONVERGENCE TO π (figure: successive distributions a_n approaching π)

CONVERGENCE IS QUICK

FOR GRINS Changed P_N4,V to 0.0. π = [0.09, 0.16, 0.23, 0.46, 0.06]

ENTRY-LEVEL TRAINING 12% of the time we are in state V. Cost of ELT (worked below) =
–12%
–times the workforce size
–times the cost of training
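The per-trainee training cost is not given on the slides, so the figure below is a hypothetical placeholder; only the 12% vacancy rate and the 180,000 workforce come from the presentation.

    pi_V = 0.12            # long-run proportion of slots vacant (from the slides)
    workforce = 180_000    # total workforce (from the slides)
    training_cost = 8_000  # hypothetical cost per entry-level trainee

    elt_cost = pi_V * workforce * training_cost
    print(f"annual ELT cost = ${elt_cost:,.0f}")  # $172,800,000 with these numbers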

LABOR COSTS Salaries:
–C_N1 = $12,000
–C_N2 = $21,000
–C_N3 = $25,000
–C_N4 = $31,000
Total workforce = 180,000
Cost = 180K * (C · π) = $3.7B
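The same calculation in NumPy, reusing the pi solved for above; vacancies are assumed to carry no salary. With the illustrative P the result lands near, but not exactly at, the $3.7B the slides report for the original matrix.

    C = np.array([12_000, 21_000, 25_000, 31_000, 0])  # salary per state; V pays nothing
    workforce = 180_000

    cost = workforce * (C @ pi)   # expected annual labor cost
    print(f"labor cost = ${cost / 1e9:.2f}B")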

EXCURSION Promotion probabilities unchanged. Allow attrition to reduce the workforce:
–P_V,N1 = 0.6 results in a workforce of 108,000
How much $ saved? How fast does it happen?

LABOR COSTS (figure: labor costs under the excursion)

CONVERGENCE TO 75% WORKFORCE (135K)

CONVERGENCE TO 60% WORKFORCE (108K)

BUILDING AN N4 FROM AN N1: CUMULATIVE (figure: CDF of the first-passage time from N1 to N4)

BUILDING AN N4 FROM AN N1: MARGINAL (figure: PDF of the first-passage time from N1 to N4)
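These last two charts can be reproduced by making N4 absorbing, so that the probability mass sitting in N4 after n steps equals P(first passage from N1 to N4 ≤ n); a sketch using the illustrative P:

    # Make N4 absorbing: once promoted to N4, stay there forever.
    P_abs = P.copy()
    P_abs[3, :] = 0.0
    P_abs[3, 3] = 1.0

    a = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # start in N1
    cdf, pmf, prev = [], [], 0.0
    for n in range(1, 41):
        a = a @ P_abs
        cdf.append(a[3])           # cumulative: P(reach N4 by period n)
        pmf.append(a[3] - prev)    # marginal: P(first reach N4 at period n)
        prev = a[3]
    print("first 10 marginal probabilities:", np.round(pmf[:10], 4))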