Markov Chains.

Presentation transcript:

Markov Chains

Markov Chains - General Description

We want to describe the behavior of a system as it moves (makes transitions) probabilistically from "state" to "state". States may be qualitative or quantitative.

Basic Assumption: the future depends only on the present (current state) and not on the past. That is, the future depends on the state we are in, not on how we arrived at this state.
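A minimal simulation sketch (Python) of what "moving probabilistically from state to state" means; the two-state matrix here is made up for illustration, not taken from the slides. Each draw depends only on the current state, which is exactly the basic assumption.

```python
import random

# Hypothetical two-state transition matrix (each row sums to 1).
P = {"A": {"A": 0.7, "B": 0.3},
     "B": {"A": 0.4, "B": 0.6}}

state = "A"
path = [state]
for _ in range(10):
    # The distribution of the next state depends only on `state`,
    # never on the earlier history stored in `path`.
    state = random.choices(list(P[state]), weights=list(P[state].values()))[0]
    path.append(state)
print(" -> ".join(path))
```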

Example 1 - Brand Loyalty or Market Share

For ease, assume that all cola buyers purchase either Coke or Pepsi in any given week; that is, there is a duopoly. Assume that if a customer purchases Coke in one week, there is a 90% chance that the customer will purchase Coke the next week (and a 10% chance that the customer will purchase Pepsi). Similarly, 80% of Pepsi drinkers will repeat their purchase from week to week.

Example 1 - Developing the Markov Matrix

States:
State 1 - Coke was purchased
State 2 - Pepsi was purchased
(Note: the states are qualitative.)

Markov (transition or probability) matrix:

From \ To   Coke   Pepsi
Coke        0.9    0.1
Pepsi       0.2    0.8

Example 1 – Understanding Movement

From \ To   Coke   Pepsi
Coke        0.9    0.1
Pepsi       0.2    0.8

Quiz: if we start with 100 Coke purchasers and 100 Pepsi purchasers, how many Coke purchasers will there be after 1 week?
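A sketch of the quiz computation, assuming NumPy: the row vector of current counts times the transition matrix gives the expected counts one week later.

```python
import numpy as np

# One-step transition matrix: rows are "from", columns are "to".
P = np.array([[0.9, 0.1],    # Coke  -> Coke, Pepsi
              [0.2, 0.8]])   # Pepsi -> Coke, Pepsi

x0 = np.array([100, 100])    # current counts: [Coke, Pepsi]
x1 = x0 @ P                  # expected counts after one week
print(x1)                    # [110.  90.] -> 110 Coke purchasers
```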

Graphical Description – 1: the two states, Coke and Pepsi, drawn as nodes, with the transition matrix alongside.

Graphical Description – 2: the transitions out of Coke (0.9 back to Coke, 0.1 to Pepsi).

Graphical Description – 3: all transitions (Coke: 0.9 stay, 0.1 to Pepsi; Pepsi: 0.8 stay, 0.2 to Coke).

Example 1 - Starting Conditions

Percentages: identify the probability (percentage of shoppers) of starting in each state. (We will assume a 50/50 starting market share in the example that follows.) Alternatively, assume we start in one specific state, by setting that state's probability to 1 and the remaining probabilities to 0.

Counts (numbers): identify the number of shoppers starting in each state.

Example 1 - Setup and Questions

From \ To   Coke   Pepsi
Coke        0.9    0.1
Pepsi       0.2    0.8

Starting probabilities: 50% (or 50 people) in each state.

Questions:
What will happen in the short run (the next 3 periods)?
What will happen in the long run?
Do the starting probabilities influence the long run?

Graphical Solution - After 1 Transition

Coke:  0.9(50) + 0.2(50) = 45 + 10 = 55
Pepsi: 0.1(50) + 0.8(50) = 5 + 40 = 45

The 50 Coke / 50 Pepsi purchasers become 55 Coke / 45 Pepsi purchasers.

Graphical Solution - After 2 Transitions

Coke:  0.9(55) + 0.2(45) = 49.5 + 9 = 58.5
Pepsi: 0.1(55) + 0.8(45) = 5.5 + 36 = 41.5

Graphical Solution - After 3 Transitions

Coke:  0.9(58.5) + 0.2(41.5) = 52.65 + 8.3 = 60.95
Pepsi: 0.1(58.5) + 0.8(41.5) = 5.85 + 33.2 = 39.05
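The same updates can be run in a short loop; a sketch assuming NumPy, reproducing the three slides above.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

x = np.array([50.0, 50.0])                  # start: 50 Coke, 50 Pepsi
for week in range(1, 4):
    x = x @ P                               # one transition
    print(f"week {week}: Coke = {x[0]:.2f}, Pepsi = {x[1]:.2f}")
# week 1: Coke = 55.00, Pepsi = 45.00
# week 2: Coke = 58.50, Pepsi = 41.50
# week 3: Coke = 60.95, Pepsi = 39.05
```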

Analyzing Markov Chains in QM for Windows

Open QM for Windows.
Module: Markov Chains
Number of states: 2
Number of transitions: 3

Example 1 – After 3 Transitions (n-step transition probabilities)

End of Period 1 (the 1-step transition matrix):
         Coke     Pepsi
Coke     0.8999   0.1000
Pepsi    0.2000   0.8000
End prob (given initial): 0.5500   0.4500

End of Period 2 (the 2-step transition matrix):
         Coke     Pepsi
Coke     0.8299   0.1700
Pepsi    0.3400   0.6600
End prob (given initial): 0.5849   0.4150

End of Period 3 (the 3-step transition matrix):
         Coke     Pepsi
Coke     0.7809   0.2190
Pepsi    0.4380   0.5620
End prob (given initial): 0.6094   0.3905
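These tables are just powers of the one-step matrix; a sketch with NumPy (entries such as 0.8999 in the QM output are display rounding of 0.9):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
start = np.array([0.5, 0.5])                 # initial probabilities

for n in (1, 2, 3):
    Pn = np.linalg.matrix_power(P, n)        # n-step transition matrix
    print(f"P^{n}:\n{Pn}")
    print("end prob:", start @ Pn)           # n=3 -> [0.6095 0.3905]
```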

Example 1 - Results (3 transitions, start = 0.5, 0.5)

3-step transition matrix:
         Coke      Pepsi
Coke     0.78100   0.21900
Pepsi    0.43800   0.56200

Ending probability:       0.6095   0.3905   (depends on the initial conditions)
Steady-state probability: 0.6666   0.3333   (independent of the initial conditions)

Note: individual customers keep alternating between Coke and Pepsi; it is the aggregate market shares that settle at the steady state.
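A sketch of the steady-state computation, assuming NumPy: solve pi P = pi together with the probabilities summing to 1 (here by stacking the normalization equation onto (P^T - I) pi = 0 and solving by least squares).

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = P.shape[0]

A = np.vstack([P.T - np.eye(n), np.ones(n)])   # pi P = pi  and  sum(pi) = 1
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                      # [0.6667 0.3333] -> 2/3 Coke, 1/3 Pepsi
```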

Example 2 - Student Progression Through a University

States:
Freshman
Sophomore
Junior
Senior
Dropout
Graduate
(Note: again, the states are qualitative.)

Example 2 - Student Progression Through a University - State Diagram
[Diagram of the states Freshman, Sophomore, Junior, Senior, Drop out, and Graduate with the transitions between them.]
Note that eventually you must end up in Graduate or Drop out.

Example 2 – Results (data from the Lazarus paper)

Long-run transition probabilities:
             First yr  Soph    Junior  Senior  Grad    Drop out
First year   0.0000    0.0000  0.0000  0.0000  0.8565  0.1434
Sophomore    0.0000    0.0000  0.0000  0.0000  0.8860  0.1139
Junior       0.0000    0.0000  0.0000  0.0000  0.9273  0.0726
Senior       0.0000    0.0000  0.0000  0.0000  0.9690  0.0310
Graduate     0.0000    0.0000  0.0000  0.0000  1.0000  0.0000
Drop out     0.0000    0.0000  0.0000  0.0000  0.0000  1.0000

End prob     0         0       0       0       0.8565  0.1434
Steady State 0         0       0       0       1       1

From the paper: if there are equal numbers of freshmen, sophomores, juniors, and seniors at the beginning of an academic year, then the percentage of this mixed group of students who will eventually graduate is (.857 + .886 + .927 + .969)/4 = 91%.
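Eventual graduation probabilities like these come from standard absorbing-chain algebra: order the states so the one-step matrix has transient-to-transient block Q and transient-to-absorbing block R; then the absorption probabilities are B = (I - Q)^{-1} R. A sketch with purely hypothetical one-step promotion/repeat/dropout rates (the slides do not show the Lazarus paper's actual one-step matrix):

```python
import numpy as np

# HYPOTHETICAL one-step rates, for illustration only.
# Transient states: Fr, So, Jr, Sr.  Absorbing states: Grad, Drop out.
Q = np.array([[0.10, 0.80, 0.00, 0.00],   # Fr: repeat 0.10, promote 0.80
              [0.00, 0.10, 0.82, 0.00],
              [0.00, 0.00, 0.10, 0.85],
              [0.00, 0.00, 0.00, 0.10]])
R = np.array([[0.00, 0.10],               # Fr: Grad 0.00, Drop out 0.10
              [0.00, 0.08],
              [0.00, 0.05],
              [0.87, 0.03]])

N = np.linalg.inv(np.eye(4) - Q)          # fundamental matrix
B = N @ R                                 # B[i] = [P(eventually Grad), P(eventually Drop out)]
print(B.round(4))
```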

Classification of States

Absorbing: states that, once entered, are never left. (Graduate, Drop out)
Recurrent: states that, whenever the chain leaves them, it is certain to return to at some later time. (Coke, Pepsi)
Transient: states to which the chain will eventually never return. (Freshman, Sophomore, Junior, Senior)

A sketch of classifying states automatically from the transition matrix follows below.
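A minimal classification sketch in Python, assuming NumPy: compute which states can reach which, group states into communicating classes, and call a class recurrent exactly when it is closed (no one-step probability leaks out of it); a one-element closed class is an absorbing state. This is a plain reachability approach, not the algorithm from the article cited below.

```python
import numpy as np

def classify_states(P):
    """Label each state 'absorbing', 'recurrent', or 'transient'."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)
    for k in range(n):                    # Warshall's transitive closure
        for i in range(n):
            if reach[i, k]:
                reach[i] |= reach[k]
    labels = []
    for i in range(n):
        # Communicating class of i: states j with i -> j and j -> i.
        cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
        # Closed class: no one-step transition leaves the class.
        closed = all(P[j, k] == 0 for j in cls for k in range(n) if k not in cls)
        if closed and cls == {i}:
            labels.append("absorbing")
        elif closed:
            labels.append("recurrent")
        else:
            labels.append("transient")
    return labels

print(classify_states([[0.9, 0.1], [0.2, 0.8]]))   # ['recurrent', 'recurrent']
```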

State Classification Quiz

State Classification Article
"A non-recursive algorithm for classifying the states of a finite Markov chain", European Journal of Operational Research, Vol. 28, 1987.

Example 3 - Diseases

States:
no disease
pre-clinical (no symptoms)
clinical
death
(Note: again, the states are qualitative.)

Purpose: the transition probabilities can be different for different testing or treatment protocols.

Example 4 - Customer Bill Paying

States:
State 0: bill is paid in full
State i: bill is in arrears for i months, i = 1, 2, …, 11
State 12: deadbeat

Example 5 - The Oil Market

States:
State 0 - oil market is normal
State 1 - oil market is mildly disrupted
State 2 - oil market is severely disrupted
State 3 - oil production is essentially shut down
(Note: the states are qualitative.)

Source: Philadelphia Inquirer, 3/24/04, "Strategic oil reserve fill-up will continue".

Example 6 – HIV Infections

Based on "Can Difficult-to-Reuse Syringes Reduce the Spread of HIV among Injection Drug Users?", Caulkins et al., Interfaces, Vol. 28, No. 3, May-June 1998, pp. 23-33.

States:
State 0 – syringe is uninfected
State 1 – syringe is infected

Notes:
P(0,1) = 0.14 (14% of drug users are infected with HIV)
P(1,0) = 0.33 + 0.05 = 0.38 (5% of the time the virus dies on its own; 33% of the time it is killed by bleaching)
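For a two-state chain the steady state has a closed form, pi_1 = P(0,1) / (P(0,1) + P(1,0)). A quick check of the long-run fraction of infected syringes implied by these rates (the "fraction of syringes" interpretation is mine, not spelled out on the slide):

```python
p01, p10 = 0.14, 0.38           # per-use infection and clearance probabilities
pi_infected = p01 / (p01 + p10)
print(round(pi_infected, 3))    # 0.269 -> about 27% of syringes infected in the long run
```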

Example 7 – Mental Health (Lazarus)

States:
depressed
manic
euthymic/remitted
mortality

Example 8 - Baseball States (Moneyball by Michael Lewis, p. 134)

State 0 - no outs, bases empty
State 1 - no outs, runner on first
State 2 - no outs, runner on second
State 3 - no outs, runner on third
State 4 - no outs, runners on first and second
State 5 - no outs, runners on first and third
State 6 - no outs, runners on second and third
State 7 - no outs, runners on first, second, and third
Repeat for 1 out and 2 outs, for a total of 24 states.

Example 9 – Football Overtime (playoffs: no time limit)

States:
Team A has the ball
Team B has the ball
Team A scores (absorbing)
Team B scores (absorbing)

Source: "Win, Lose, or Draw: A Markov Chain Analysis of Overtime in the National Football League", Michael A. Jones, The College Mathematics Journal, Vol. 35, No. 5, November 2004, pp. 330-336.

Additional References from Interfaces

Trench, Margaret S.; Pederson, Shane P.; Lau, Edward T.; Ma, Lizhi; Wang, Hui; Nair, Suresh K. "Managing Credit Lines and Prices for Bank One Credit Cards." Interfaces, Sep/Oct 2003, Vol. 33, Issue 5, p. 4 (18 pp.).
White, D. J. "Real Applications of Markov Decision Processes." Interfaces, Nov/Dec 1985, Vol. 15, Issue 6, p. 73 (11 pp.).
White, D. J. "Further Real Applications of Markov Decision Processes." Interfaces, Sep/Oct 1988, Vol. 18, Issue 5, p. 55 (7 pp.).
Flamholtz, Eric G.; Geis, George T.; Perle, Richard J. "A Markovian Model for the Valuation of Human Assets Acquired by an Organizational Purchase." Interfaces, Nov/Dec 1984, Vol. 14, Issue 6, p. 11 (5 pp.).
Bessent, E. Wailand; Bessent, Authella M. "Student Flow in a University Department: Results of a Markov Analysis." Interfaces, 1980, Vol. 10, Issue 2, p. 52 (8 pp.).

Markov Chains – The End