Markov Processes Markov Chains 1/16/2019 rd.



Lagrange Multipliers
You need 100 units made at two plants, with total cost C = 0.1x² + 7x + 15y + 1000. How should you divide the units between plants x and y to minimize the cost?
L(x, y, λ) = 0.1x² + 7x + 15y + 1000 + λ(x + y − 100)
Lx = 0.2x + 7 + λ = 0
Ly = 15 + λ = 0  ⇒  λ = −15, so 0.2x + 7 = 15, giving x = 40 and y = 60.
(solve '((.2 0 1 -7)(0 0 1 -15)(1 1 0 100)))  (40 60 -15)
C(40, 60) = $2,340 minimum. The objective is convex in x (Cxx = 0.2 > 0) and linear in y, so this critical point is the constrained minimum.
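The three first-order conditions above form a linear system; a minimal numpy sketch (the variable ordering x, y, λ is my own choice, not from the slide):

```python
import numpy as np

# First-order conditions of the Lagrangian as a linear system in (x, y, lam):
#   0.2x       + lam = -7     (L_x = 0)
#                lam = -15    (L_y = 0)
#   x   +  y         = 100    (constraint)
A = np.array([[0.2, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([-7.0, -15.0, 100.0])
x, y, lam = np.linalg.solve(A, b)

cost = 0.1 * x**2 + 7 * x + 15 * y + 1000
print(x, y, lam, cost)  # x = 40, y = 60, lam = -15, cost = 2340 (up to float error)
```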

Examples
System              | States                          | Transition p's
Market share        | Brand A vs. Brand B             | switching chances
TV market           | proportion watching each channel| switching chances
Rental returns      | various return locations        | probabilities of each return
Machine breakdowns  | running vs. not running         | probability of running

Markov Process
Describes a dynamic situation: it predicts the movement of a system among different states as time passes.
        From
To      bus   car
bus     0.2   0.4
car     0.8   0.6
Columns are the current state and sum to 1. [¼ ¾] written vertically is a column vector; (¼ ¾) is a row vector. The initial state vector is a0 = (0.3, 0.7) for (bus, car).
Tij = P(next state is i | current state is j)
Tbus,car = P(riding bus next | currently riding car) = 0.4

Markov Assumptions
- Finite number of states (none absorbing)
- State in any period depends only on the previous state
- Transition probabilities are constant over time
- Changes occur only once per period
- Transition periods occur with regularity
An initial state vector, e.g. a0 = [0.275 0.375 0.350], has components summing to 1. The powers T, T², T³, T⁴, … approach the fixed point.
State Transition Matrix
        From state
To       1     2     3
1       0.2   0.3   0.4
2       0.4   0.5   0.2
3       0.4   0.2   0.4
Column vectors are probability vectors (each column sums to 1).
Tij = P(next state is i | current state is j); T21 = 0.4

Markov States
Transient: a state that, once left, the process may never return to (return probability < 1).
Recurrent: a state the process will definitely return to (return probability = 1).
Absorbing: a state that, once entered, is never left.
Every state is either recurrent or transient; an absorbing state is recurrent.

Fixed Point
        A     B        If Tx = x, then x is a fixed point,
A      0.2   0.3       and any scalar multiple tx is also a fixed point.
B      0.8   0.7
0.2a + 0.3b = a and 0.8a + 0.7b = b both reduce to 0.8a − 0.3b = 0, with a + b = 1:
(solve '((4/5 -3/10 0)(1 1 1)))  (3/11, 8/11)
This steady-state solution is the limit of the matrix powers as n → ∞:
(setf matT #2A((0.2 0.3)(0.8 0.7)))
(setf fm (expt-matrix matT 10))  #2A((0.2727 0.2727)(0.7273 0.7273))
(M* fm #2A((1)(0)))  #2A((0.272727)(0.727273))
Multiplying the limit matrix by any probability vector yields the fixed vector.
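The convergence of the powers can be checked in any matrix library; a Python/numpy sketch equivalent to the expt-matrix calls above:

```python
import numpy as np

# Column-stochastic matrix from the slide; columns sum to 1.
T = np.array([[0.2, 0.3],
              [0.8, 0.7]])

# T^n approaches a matrix whose columns are all the fixed point (3/11, 8/11).
P = np.linalg.matrix_power(T, 10)
fixed = P @ np.array([1.0, 0.0])   # multiply by any probability vector
print(fixed)                       # ~ [0.2727 0.7273]
```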

Eigenvalues
Given matrix T = ¼ ½
                 ¾ ½
TX = tX ⇒ (tI − T)X = 0, giving the characteristic polynomial t² − (3/4)t − 1/4 = 0: (quadratic 1 -3/4 -1/4)  (1, −1/4) are the eigenvalues.
(eigenvalues #2A((1/4 1/2)(3/4 1/2)))  (-0.25 1)
The eigenvectors are (2, 3) for t = 1 and (1, −1) for t = −1/4:
(M* #2a((1/4 1/2)(3/4 1/2)) #2A((2)(3)))  #2A((2)(3))
(M* #2a((1/4 1/2)(3/4 1/2)) #2A((1)(-1)))  #2A((-1/4)(1/4))
(expt-matrix #2A((1/4 1/2)(3/4 1/2)) 30)  #2A((0.4 0.4)(0.6 0.6))
Diagonalizing with the eigenvector matrix P = #2A((2 1)(3 -1)):
(M* (inverse #2A((2 1)(3 -1))) #2A((1/4 1/2)(3/4 1/2)) #2A((2 1)(3 -1)))  #2A((1 0)(0 -1/4))

Eigenvalues
A = 1 4     Ax = tx ⇒ det(tI − A) = | t−1   −4 |
    2 3                             | −2   t−3 | = t² − 4t − 5 = 0; eigenvalues (5, −1)
Eigenvectors: for t = 5, 4x − 4y = 0, so x = y: [1 1]
              for t = −1, 2x + 4y = 0, so x = −2y: [2 −1]
(M* #2a((1 4)(2 3)) #2a((1)(1)))  #2A((5)(5)) = 5·(1, 1)
(M* #2a((1 4)(2 3)) #2a((2)(-1)))  #2A((-2)(1)) = −1·(2, −1)
Diagonalize:
(M* (inverse #2a((1 2)(1 -1))) #2a((1 4)(2 3)) #2a((1 2)(1 -1)))  #2A((5 0)(0 -1))
Exercise: use another scalar multiple in lieu of [2 −1].
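The same eigenvalue computation can be reproduced with numpy (note numpy scales eigenvectors to unit length, so its columns are multiples of [1 1] and [2 −1]):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])

# Eigenvalues and eigenvector matrix P (columns are eigenvectors).
vals, P = np.linalg.eig(A)
print(sorted(vals))              # ~ [-1.0, 5.0]

# Diagonalization: P^{-1} A P is diagonal with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))
```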

Toothpaste G vs. C
G owns 60% of the market (a0 = [0.6 0.4]) and
T = 0.88 0.15
    0.12 0.85
i.e., 88% of G's customers remain loyal and 15% of the competitor's customers switch to G.
(eigenvalues #2A((0.88 0.15)(0.12 0.85)))  (0.73 1)
(expt-matrix #2A((0.88 0.15)(0.12 0.85)) 50)  #2A((0.555556 0.555556)(0.444445 0.444445)); each column is (5/9, 4/9), and 15/27 = 0.55555.
Diagonalizing with the eigenvector matrix #2A((0.12 -0.15)(-0.12 0.12)):
(M* (inverse #2A((0.12 -0.15)(-0.12 0.12))) #2A((0.88 0.15)(0.12 0.85)) #2A((0.12 -0.15)(-0.12 0.12)))  #2A((0.73 0)(0 1.0))
a0 = (0.6 0.4); a1 = Ta0; an = Tⁿa0
The dominant eigenvalue of a transition matrix is always 1, and its eigenvector determines the steady state: solve 0.12a − 0.15b = 0 with a + b = 1 to get (5/9, 4/9).
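The steady state can also be read directly off the eigenvector for eigenvalue 1; a sketch assuming numpy:

```python
import numpy as np

T = np.array([[0.88, 0.15],
              [0.12, 0.85]])

vals, vecs = np.linalg.eig(T)
i = np.argmin(np.abs(vals - 1.0))        # eigenvalue 1 always exists for a stochastic matrix
steady = vecs[:, i] / vecs[:, i].sum()   # rescale the eigenvector to a probability vector
print(steady)                            # ~ [0.5556 0.4444] = (5/9, 4/9)
```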

Degenerate Discrete-Parameter Stochastic Process
T = 0 1
    1 0
Solving gives the fixed point a = b = ½, but Tⁿ never converges: the powers alternate between T and I, so the process oscillates. It is strictly stationary and time-independent only in the fixed vector.
Consider T = ½ ½     a + b = 2a and a + b = 1 ⇒ (a, b) = (½, ½)
             ½ ½
Consider T = 1/3 2/3     a + 2b = 3a ⇒ a = b, with a + b = 1, so (½, ½), and Tⁿ → ½ ½ in each column.
             2/3 1/3

Stochastic Matrix
A matrix is stochastic if its columns (rows) are probability vectors; it is regular if some power of the matrix has all positive entries.
½ ¼  is regular
½ ¾
A regular matrix has a unique fixed probability vector with positive components, and its powers approach that fixed point. If A and B are stochastic, then the product AB is also stochastic.

Stochastic and Regular Matrices: Fixed Points of Square Matrices
Let A = |2 2| and u = [2, −1].
        |1 3|
Then |2 2|| 2| = | 2|
     |1 3||−1|   |−1|
Also A(tu) = tAu = tu, so scalar multiples of a fixed point are fixed points.
A matrix is ergodic if the limit of its powers exists; it is regular if some power has all positive entries.
(expt-matrix #2A((2 2)(1 3)) 20) diverges: NOT ergodic.
|λ−2  −2|
|−1  λ−3| yields λ² − 5λ + 4 = 0, with eigenvalues 1 and 4; the eigenvalue 4 > 1 drives the divergence.
(expt-matrix #2A((.25 .75)(.75 .25)) 20)  #2A((0.5 0.5)(0.5 0.5)): ergodic.
|λ−¼  −¾|
|−¾  λ−¼| yields λ² − ½λ − ½ = 0; λ = 1, −½, and the fixed point is (½, ½).
A different fixed-point computation, solving a + b = 1 with a − 0.4b = 0:
(solve '((1 1 1)(1 -0.4 0)))  (0.285714 0.714286) = (2/7, 5/7)

Stochastic and Ergodic
Consider A = ½ 2/3  and Ax = x where x = (a, b):
             ½ 1/3
a/2 + 2b/3 = a  ⇒  3a − 4b = 0, with a + b = 1:
(solve '((3 -4 0)(1 1 1)))  (4/7 3/7)
(expt-matrix #2A((0.5 0.67)(0.5 0.33)) 20)  limit columns [4/7 3/7]
(M* #2A((1/2 2/3)(0.5 1/3)) #2A((4/7)(3/7)))  (4/7 3/7)
(M* (expt-matrix #2A((1/2 2/3)(0.5 1/3)) 20) #2A((1/9)(8/9)))  #2A((0.571429)(0.428572))
Every probability vector maps to the fixed vector (4/7, 3/7), the eigenvector of eigenvalue 1. (eigenvalues are 1 and −1/6)

Ergodic
Is the matrix 0 0.4 ergodic? Does it have a limit? Is it regular,
              1 0.6 i.e., does some power contain only positive values?
(expt-matrix #2A((0 0.4)(1 0.6)) 2)  0.4 0.24
                                     0.6 0.76   Yes.
Find the limit by writing 0.4b = a with a + b = 1:
(solve '((-1 2/5 0)(1 1 1)))  (2/7 5/7)

Fixed Points and Regular Stochastic Matrices
If matrix T is regular stochastic, then:
- T has a unique fixed probability vector with positive components.
- The sequence T, T², T³, … approaches a matrix whose columns are the fixed point.
- If p is any probability vector, the sequence Tp, T²p, T³p, … approaches the fixed point.
Example: a matrix whose fixed-point equations give b/2 = a, i.e.
2a − 1b = 0
1a + 1b = 1
has fixed point (1/3, 2/3).

33. Markov Processes, Example 1 (Page 448)
20% of the residents in region 1 move to region 2, and 10% move to region 3. Of the residents in region 2, 10% move to region 1 and 10% to region 3. In region 3, 20% move to region 1 and 10% to region 2.
a) Find the probability that a resident of region 1 this year is a resident of region 1 next year (T11); in 2 years (T²).
b) Given state vector a0 = [0.4 0.3 0.3], find the population distribution 3 years from now: T³a0.
c) Find the steady-state vector: #2A((0.3125)(0.4375)(0.25))
a1 = Ta0, … , an = Tⁿa0
        From region
To       1     2     3
1       0.7   0.1   0.2
2       0.2   0.8   0.1
3       0.1   0.1   0.7
(setf demo #2A((0.7 0.1 0.2)(0.2 0.8 0.1)(0.1 0.1 0.7)))
(M* demo demo) 
0.53 0.17 0.29
0.31 0.67 0.19
0.16 0.16 0.52
(M* (expt-matrix demo 3) #2A((0.4)(0.3)(0.3)))  #2A((0.3368)(0.4024)(0.2608))
Steady-state probabilities: 0.3125 0.4375 0.25
Distribution after 2 transitions (T²a0): 0.35 0.382 0.268
Transition matrix after 3 transitions (T³):
0.434 0.218 0.326
0.37  0.586 0.262
0.196 0.196 0.412
Market shares after 3 years: ((0.3368) (0.4024) (0.2608))
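The three-year distribution and the steady state can be checked numerically; a numpy sketch of the Tⁿa0 computation:

```python
import numpy as np

# Column j holds the move-to probabilities out of region j.
T = np.array([[0.7, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.1, 0.7]])
a0 = np.array([0.4, 0.3, 0.3])

a3 = np.linalg.matrix_power(T, 3) @ a0      # distribution in 3 years
print(np.round(a3, 4))                      # ~ [0.3368 0.4024 0.2608]

steady = np.linalg.matrix_power(T, 50) @ a0  # high power ~ steady state
print(np.round(steady, 4))                   # ~ [0.3125 0.4375 0.25]
```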

Continued
Given T = ½ ¼ , the fixed point satisfies a/2 + b/4 = a, i.e. 2a + b = 4a, so 2a − b = 0.
          ½ ¾
Find the probability of going from state 2 to state 1 after 2 steps, (T²)12, and the steady-state probabilities.
(print-matrix (expt-matrix #2A((1/2 1/4)(1/2 3/4)) 2)) 
3/8  5/16
5/8  11/16      so (T²)12 = 5/16
(expt-matrix #2A((1/2 1/4)(1/2 3/4)) 30) 
1/3 1/3
2/3 2/3      steady state (1/3, 2/3)

453-31 Flu
Transition matrix (per 4-day flu cycle) for 200 dorm students:
         flu  no flu
flu      0.1   0.2
no flu   0.9   0.8
120 students have the flu now. How many will have it 8 days from now (T²a0)? 12 days from now (T³a0)?
Given 120 have the flu: a0 = (120/200, 80/200) = (3/5, 2/5).
(prn-mat (M* (expt-matrix #2a((1/10 2/10)(9/10 8/10)) 2) #2A((3/5)(2/5))))  93/500 407/500
(* 200 93/500)  37.2 will have the flu in 8 days
(prn-mat (M* (expt-matrix #2a((1/10 2/10)(9/10 8/10)) 3) #2A((3/5)(2/5))))  #2A((907/5000)(4093/5000))
(* 200 907/5000)  36.28 will have the flu in 12 days

Steady-State Vector
35. Find the unique probability vector of the regular stochastic matrix
0 0 ½
1 0 ½
0 1 0
Start with x = [a, b, 1 − a − b]:
½(1 − a − b) = a       ⇒ 3a + b = 1
a + ½(1 − a − b) = b   ⇒ a − 3b = −1
b = 1 − a − b          ⇒ a + 2b = 1
(solve '((3 1 1)(1 -3 -1)(1 2 1)))  (1/5, 2/5, 2/5)
(expt-matrix #2A((0 0 1/2)(1 0 1/2)(0 1 0)) 30)  columns (0.2 0.4 0.4)
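The same fixed point can be found by solving (T − I)x = 0 together with Σx = 1; a numpy sketch using least squares (the stacked system is consistent, so the solution is exact):

```python
import numpy as np

T = np.array([[0.0, 0.0, 0.5],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

# Stack (T - I)x = 0 with the normalization sum(x) = 1.
n = T.shape[0]
A = np.vstack([T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 6))    # ~ [0.2 0.4 0.4]
```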

Flu Steady-State Vector
Find the steady-state flu vector given transition matrix
         flu  no flu
flu      0.1   0.2      0.1a + 0.2b = a
no flu   0.9   0.8      0.9a + 0.8b = b,  with a + b = 1
0.9a = 0.2b ⇒ 0.9a = 0.2(1 − a) ⇒ a = 0.2/1.1 = 0.1818, b = 0.8182
(expt-matrix #2a((0.1 0.2)(0.9 0.8)) 30)  (0.181818 0.818181)

Urn Model
Urn A holds 2 white marbles and urn B holds 3 red marbles. At each step a marble is randomly selected from each urn and the two are switched. State ai is the number of red marbles in urn A, so "0 red in urn A" means initial vector p0 = [1 0 0].
        From
To       a     b     c
a        0    1/6    0
b        1    1/2   2/3
c        0    1/3   1/3
#2A((0 1/6 0)(1 1/2 2/3)(0 1/3 1/3))
Ex: b → b: ½·⅓ + ½·⅔ = ½;  b → c: ½·⅔ = ⅓
Steady state: (0.1, 0.6, 0.3)
#2A((0.1 0.1 0.1)(0.6 0.6 0.6)(0.3 0.3 0.3))
Compute the probability of 2 red marbles in urn A after 3 steps, and after reaching steady state (0.3):
(M* (expt-matrix amat 3) p0)  #2A((1/12)(23/36)(5/18))
a3 = T³p0 = (1/12, 23/36, 5/18), so P(2 red) = 5/18 after 3 steps, 0.3 in steady state.

Other Notations
Given a0 = (2/3, 0, 1/3) and T = #2A((0 1/2 0)(1/2 1/2 1)(1/2 0 0)), find T²a0.
(prn-mat (expt-matrix #2A((0 1/2 0)(1/2 1/2 1)(1/2 0 0)) 2)) 
1/4  1/4  1/2
3/4  1/2  1/2
0    1/4  0
(mmult #2A((1/4 1/4 1/2)(3/4 1/2 1/2)(0 1/4 0)) #2A((2/3)(0)(1/3)))  (1/3, 2/3, 0)

Absorbing Barriers
A player has $2. He bets $1 at a time and wins each bet with probability ½. He stops playing if he loses the $2 or wins $4 (i.e., reaches $6). Compute the probability that he loses his money within at most 5 plays, and the probability that the game lasts more than 7 plays.
State ai means the player has i dollars; start p0 = (0, 0, 1, 0, 0, 0, 0), i.e. $2. This is a random walk with absorbing barriers at $0 (lose $2) and $6 (win $4).
     a0  a1  a2  a3  a4  a5  a6
a0    1   ½   0   0   0   0   0
a1    0   0   ½   0   0   0   0
a2    0   ½   0   ½   0   0   0
a3    0   0   ½   0   ½   0   0
a4    0   0   0   ½   0   ½   0
a5    0   0   0   0   ½   0   0
a6    0   0   0   0   0   ½   1
(setf player #2a((1 1/2 0 0 0 0 0)(0 0 1/2 0 0 0 0)(0 1/2 0 1/2 0 0 0)(0 0 1/2 0 1/2 0 0)(0 0 0 1/2 0 1/2 0)(0 0 0 0 1/2 0 0)(0 0 0 0 0 1/2 1)))
(M* (expt-matrix player 5) #2A((0)(0)(1)(0)(0)(0)(0)))  #2A((3/8)(5/32)(0)(9/32)(0)(1/8)(1/16))
P(broke within 5 plays) = 3/8
(M* (expt-matrix player 7) #2A((0)(0)(1)(0)(0)(0)(0)))  #2A((29/64)(7/64)(0)(27/128)(0)(13/128)(1/8))
P(game lasts more than 7 plays) = sum of the transient components:
(sum '(7/64 0 27/128 0 13/128))  27/64
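A numpy sketch of the same gambler's-ruin computation, building the column-stochastic matrix programmatically:

```python
import numpy as np

# States $0..$6; $0 and $6 are absorbing, interior states move +-$1 with prob 1/2.
T = np.zeros((7, 7))
T[0, 0] = T[6, 6] = 1.0
for j in range(1, 6):          # column j = current bankroll $j
    T[j - 1, j] = 0.5
    T[j + 1, j] = 0.5

p0 = np.zeros(7)
p0[2] = 1.0                    # start with $2

p5 = np.linalg.matrix_power(T, 5) @ p0
print(p5[0])                   # P(broke within 5 plays) = 3/8

p7 = np.linalg.matrix_power(T, 7) @ p0
print(1 - p7[0] - p7[6])       # P(game lasts more than 7 plays) = 27/64
```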

Exercise 1
Given P = 0.2 0.6 and the system in state 2, i.e. a0 = [0 1]:
          0.8 0.4
a) Find the state vector X2, the distribution after 2 steps.
(setf Y #2A((0.2 0.6)(0.8 0.4)) IV #2A((0)(1)))
(M* Y IV)  #2A((0.6)(0.4))
(M* Y Y IV)  #2A((0.36)(0.64))
(M* Y #2A((0.6)(0.4)))  #2A((0.36)(0.64))
b) Find (X12)³, the probability of going from state 2 to state 1 in 3 steps.
(expt-matrix #2A((0.2 0.6)(0.8 0.4)) 3)  #2A((0.392 0.456)(0.608 0.544)), so (X12)³ = 0.456
c) Find the equilibrium (steady-state) vector (limit L):
#2A((0.428572 0.428572)(0.571429 0.571429)), i.e. (3/7, 4/7)
d) Confirm using Px = x, solving with x = (a, b).

Exercise 2
Cars are classified as Good, Fair, or Poor, with transition matrix
       G    F    P
G     0.6  0.2  0.1
F     0.3  0.6  0.4
P     0.1  0.2  0.5
Assume 100 cars are Good, 60 Fair, and 20 Poor, updated weekly, so a0 = [5/9 3/9 1/9].
a) How many cars will be in each condition next week?
(M* #2A((0.6 0.2 0.1)(0.3 0.6 0.4)(0.1 0.2 0.5)) #2A((5/9)(3/9)(1/9)))  #2A((0.411111)(0.411111)(0.177778)), times 180 for the weekly counts.
b) Repeat for steady-state conditions.
#2A((0.293 0.293 0.293)(0.463 0.463 0.463)(0.244 0.244 0.244))
(mapcar #'* '(0.293 0.463 0.244) '(180 180 180))  (52.74 83.34 43.92)

Which to Lease?
The operating (O) and not-operating (NO) transition matrices of two computers are
        O     NO             O     NO
O      0.95  0.90     O     0.98  0.85
NO     0.05  0.10     NO    0.02  0.15
Which one should be leased? Compare the steady-state operating fractions:
(solve '((0.05 -0.90 0)(1 1 1)))  (0.947368 0.052632)
(solve '((0.02 -0.85 0)(1 1 1)))  (0.977012 0.022989)
The second machine is operating 97.7% of the time versus 94.7%, so lease the second.

State Transition Matrix
Compute the transition matrix from the diagram below.
[state diagram: states 1-4 with arcs labeled ½ and 1; the figure did not survive the transcript]

Absorbing States
Once entered, an absorbing state is never left: bankruptcy, a home destroyed by fire, a lake irreversibly polluted, a goal reached, game over, death.

Bookstores
At a university, 60% of all first-semester students buy their books at the Campus Bookstore, and 40% buy their books at the Off-Campus Bookstore. If these trends continue, what percent of the market will each bookstore have at the beginning of the second semester? The third? The tenth?
Recall an = Tⁿa0, where
T = 0.9 0.2  and a0 = [0.6 0.4]
    0.1 0.8
(setf a0 #2a((0.6)(0.4)) Tmat #2A((0.9 0.2)(0.1 0.8)))
We find a1 = (0.62, 0.38), a2 = (0.634, 0.366), and a9 = (0.6640, 0.3360).
What is the distribution in later semesters? At some point, will the distribution stay the same from one semester to the next? Yes, because T is regular:
(solve '((-0.1 0.2 0)(1 1 1)))  (2/3 1/3)

Students at a University
States: Dropout (D), Graduated (G), Freshman (F), Sophomore (S). D and G are absorbing; F and S are non-absorbing. In canonical form the matrix has blocks [I A; O N], with I the identity matrix, O the null matrix, A the absorbing block, and N the non-absorbing block.
       D    G    F     S
D      1    0   0.1   0.05
G      0    1    0    0.4
F      0    0   0.6    0
S      0    0   0.3   0.55
Limit of the powers (absorption probabilities):
D     1.0  0.0  0.333333  0.111111
G     0.0  1.0  0.666667  0.888889
F     0.0  0.0  ~0        ~0
S     0.0  0.0  ~0        ~0
A freshman eventually graduates with probability 2/3; a sophomore with probability 8/9.
Fundamental matrix (I − N)⁻¹ = #2A((2.5 0)(1.6667 2.2222)); its column sums give the expected semesters to absorption: 4.1667 starting as a freshman, 2.2222 as a sophomore.

Students at a University (continued)
Canonical block form [I A; O N]:
        Abs       Nabs
Abs    1  0     0.6  0.2
       0  1     0    0.1
Nabs   0  0     0.1  0.2
       0  0     0.3  0.5
The fundamental matrix is F = (I − N)⁻¹, whose entries are expected times in each transient state; the conditional absorption probabilities are AF.
(inverse #2a((9/10 -2/10)(-3/10 1/2)))  #2A((50/39 20/39)(10/13 30/13))

Accounts Receivable Analysis
             Paid  Bad Debt  0-30 days  31-90 days
Paid           1      0        0.4        0.4
Bad Debt       0      1        0          0.2
0-30 days      0      0        0.3        0.3
31-90 days     0      0        0.3        0.1
Absorption probabilities RF:
#2A((0.888889 0.740741)(0.111111 0.259259))
The probability of a 0-30 day account eventually being paid is 89% (11% bad debt); for a 31-90 day account, 74%.
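The 89%/74% figures come from the fundamental matrix; a numpy sketch with Q (transient block) and R (absorbing block) read off the matrix above:

```python
import numpy as np

# Transient block Q (0-30 and 31-90 day accounts) and absorbing block R
# (paid, bad debt), column-stochastic convention.
Q = np.array([[0.3, 0.3],
              [0.3, 0.1]])
R = np.array([[0.4, 0.4],
              [0.0, 0.2]])

F = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
absorb = R @ F                     # absorption probabilities
print(np.round(absorb, 4))
# row 0 = P(paid), row 1 = P(bad debt); one column per transient state
```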

Application: 3 Products
Current transition matrix for brands A, B, C:
0.2 0.1 0.5
0.2 0.5 0.2
0.6 0.4 0.3
(expt-matrix #2A((.2 .1 .5)(.2 .5 .2)(.6 .4 .3)) 10)  #2A((0.296703 …)(0.285714 …)(0.417583 …)), columns ~ (0.297 0.286 0.418)
Market share of A + B = 29.7 + 28.6 = 58.3%.
It costs $150K to promote A only and $280K to promote B only; each percentage point of share is worth $10K.
Promoting A:       Promoting B:
0.6 0.4 0.6        0.1 0.2 0.3
0.2 0.4 0.1        0.5 0.8 0.5
0.2 0.2 0.3        0.4 0.0 0.2
(expt-matrix #2A((.6 .4 .6)(.2 .4 .1)(.2 .2 .3)) 50)  columns (0.556 0.222 0.222)
A + B = 77.8%, a gain of 19.5 points: 19.5 × $10K = $195K − $150K ⇒ $45K profit.
(expt-matrix #2A((.1 .2 .3)(.5 .8 .5)(.4 0 .2)) 50)  columns (0.190 0.715 0.095)
A + B = 90.5%, a gain of 32.2 points: 32.2 × $10K = $322K − $280K ⇒ $42K profit.
Promote A.
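A sketch comparing the two promotions end to end, assuming numpy; the helper steady() is my own name, not from the slide:

```python
import numpy as np

def steady(T, n=200):
    # For a regular column-stochastic matrix, every column of T^n
    # approaches the steady-state vector.
    return np.linalg.matrix_power(T, n)[:, 0]

base      = np.array([[0.2, 0.1, 0.5], [0.2, 0.5, 0.2], [0.6, 0.4, 0.3]])
promote_a = np.array([[0.6, 0.4, 0.6], [0.2, 0.4, 0.1], [0.2, 0.2, 0.3]])
promote_b = np.array([[0.1, 0.2, 0.3], [0.5, 0.8, 0.5], [0.4, 0.0, 0.2]])

ab_now = steady(base)[:2].sum()     # current A + B share, ~0.583
# Each share point is worth $10K, so a unit of share is worth $1000K.
profit_a = (steady(promote_a)[:2].sum() - ab_now) * 1000 - 150
profit_b = (steady(promote_b)[:2].sum() - ab_now) * 1000 - 280
print(profit_a, profit_b)           # promoting A yields the larger profit (~$45K vs ~$42K)
```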

Markov
       From
To      A     B     C
A      0.85  0.20  0.15
B      0.10  0.75  0.10
C      0.05  0.05  0.75
a) Compute the steady-state probabilities.
(expt-matrix #2A((0.85 0.20 0.15)(0.10 0.75 0.10)(0.05 0.05 0.75)) 20)
#2A((0.547804 0.547623 0.547006)(0.285663 0.285844 0.285663)(0.166534 0.166534 0.167332))
Steady state ~ (0.548, 0.286, 0.167).
b) Find the market share for C: ~ 0.167

             From
To           Graduate  Dropout  Freshman  Sophomore  Junior  Senior
Graduate       1.0       0        0         0          0      0.9
Dropout        0         1       0.2       0.15       0.1    0.05
Freshman       0         0       0.15       0          0      0
Sophomore      0         0       0.65      0.1         0      0
Junior         0         0        0        0.75       0.05    0
Senior         0         0        0         0         0.85   0.05
a) Identify the absorbing states.
b) Interpret the transition probabilities for a sophomore.
c) Compute the probabilities that a sophomore will graduate; will drop out.
d) Compute the steady-state probabilities.
e) Currently there are 600 freshmen, 520 sophomores, 460 juniors, and 420 seniors. What percentage of the 2000 students will eventually graduate?
(M* (expt-matrix #2A((1 0 0 0 0 0.9)(0 1 0.2 0.15 0.1 0.05)(0 0 0.15 0 0 0)(0 0 0.65 0.1 0 0)(0 0 0 0.75 0.05 0)(0 0 0 0 0.85 0.05)) 20) #2a((0)(0)(600)(520)(460)(420)))  #2A((1479.22)(520.78)(~0)(~0)(~0)(~0))
About 1479 of the 2000 students, i.e. ~74%, eventually graduate.

Random Walk with Absorbing Barriers
Pat walks along four blocks of an avenue. At corner 1, 2, or 3, Pat walks left or right with probability ½ each. At corner 0 or corner 4, Pat stays.
       From
To      0    1    2    3    4
0       1    ½    0    0    0
1       0    0    ½    0    0
2       0    ½    0    ½    0
3       0    0    ½    0    0
4       0    0    0    ½    1
a) Find the probabilities of ending in each absorbing state.
b) On average, how long until the process is absorbed?
c) On average, how many times will the process be in each transient state?
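Parts (a)-(c) follow from the fundamental matrix of this walk; a numpy sketch:

```python
import numpy as np

# Transient part Q of the walk (corners 1, 2, 3) and absorbing part R
# (corners 0 and 4), read off the slide's matrix.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5]])

F = np.linalg.inv(np.eye(3) - Q)  # (c) expected visits to each transient corner
print(F.sum(axis=0))              # (b) expected steps to absorption: ~ [3, 4, 3]
print(R @ F)                      # (a) absorption probabilities into corners 0 and 4
```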

States of an Ecosystem
Strontium-90 moves among four compartments: grasses, soil, dead organic matter, and streams (via growth, run-off, and leaching; the compartment diagram did not survive the transcript). Find the states for the year given a0 = [20 60 15 20] and transition matrix
0.85  0.01  0    0
0.05  0.98  0.2  0
0.1   0     0.8  0
0     0.01  0    1

Accounts Receivable
States: p = paid, b = bad debt, 1 = 1-30 days overdue, 2 = 31-60 days overdue.
      p   b    1    2
p     1   0   0.5  0.3       [I R] absorbing
b     0   1   0    0.3       [O Q] non-absorbing
1     0   0   0.3  0
2     0   0   0.2  0.4
I − Q =  0.7  0      (I − Q)⁻¹ = F = #2A((1.429 0.0)(0.476 1.667))
        −0.2  0.6
RF = (M* #2A((.5 .3)(0 .3)) #2A((1.429 0.0)(0.476 1.667)))  #2A((0.8573 0.5001)(0.1428 0.5001))
If $10K is in state 1 and $6K in state 2:
(M* #2A((0.8573 0.5001)(0.1428 0.5001)) #2a((10e3)(6e3)))  #2A((11573.60)(4428.60))
(expected paid, expected bad debt), totaling $16K.