Chapter 3

A discrete-time Markov chain consists of random variables $X_n$ for $n = 0, 1, 2, 3, \ldots$, where the possible values for each $X_n$ are the integers $0, 1, 2, \ldots, m$; each of these possible values represents what is called a state. Typically, 0 represents the initial state, so that $X_0 = 0$.

The conditional probability $\Pr[X_{n+1} = j \mid X_n = i]$ is called a transition probability. When this conditional probability does not depend on $n$, the stochastic process defined by the Markov chain is called homogeneous; the probability is denoted by $p_{ij}$, and the transition probability matrix is then defined to be

$$P = \begin{pmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{pmatrix}$$

Observe that each row of the matrix must sum to 1; that is, for each $i$,

$$\sum_{j=0}^{m} p_{ij} = 1.$$

If $p_{ii} = 1$, then state $i$ is called an absorbing state, and it is not possible to move from this state. When the conditional probability depends on $n$, the stochastic process defined by the Markov chain is called non-homogeneous.
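The row-sum property and the absorbing-state condition are easy to test in code. The sketch below is a minimal check in plain Python with exact fractions; the matrix itself is a made-up example, not one from the exercises that follow.

```python
from fractions import Fraction

def validate_transition_matrix(P):
    """Check that every row of a transition probability matrix sums to 1,
    and return the list of absorbing states (those with p_ii == 1)."""
    m = len(P)
    for i, row in enumerate(P):
        assert len(row) == m, f"row {i} has the wrong length"
        assert all(0 <= p <= 1 for p in row), f"row {i} has an entry outside [0, 1]"
        assert sum(row) == 1, f"row {i} sums to {sum(row)}, not 1"
    return [i for i in range(m) if P[i][i] == 1]

# A hypothetical 3-state chain in which state 2 is absorbing.
P = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
     [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
     [Fraction(0),    Fraction(0),    Fraction(1)]]
print(validate_transition_matrix(P))   # [2]
```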

Chapter 3 Exercises

1. Let States 0, 1, and 2 be defined respectively by three rooms, labeled 0, 1, and 2, and let $X_n$ be the room which a particular person occupies at time $n = 0, 1, 2, 3, \ldots$. When the person is in a room at time $n$, one of the paths leading from that room to another room, or possibly back to the same room, is selected and taken at random; this determines the room the person will be in at time $n + 1$.

(a) Decide whether the stochastic process defined by the Markov chain is homogeneous or non-homogeneous, and say why.

The stochastic process is homogeneous, since the paths from room to room are always the same, implying that the transition probability of moving from one room to another does not depend on $n$.

(b) Suppose the paths between rooms are as in the figure below.

[Figure: Rooms 0, 1, and 2 with path probabilities 4/9, 2/9, 3/9 out of room 0; 2/9, 6/9, 1/9 out of room 1; 3/6, 1/6, 2/6 out of room 2.]

(i) Find the transition probability matrix.

$$P = \begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix}$$

(c) Suppose the paths between rooms are as in the figure below.

[Figure: Rooms 0, 1, and 2 with path probabilities including 3/5, 2/5, 1/3, and 1/4.]

(i) Find the transition probability matrix.

(d) Suppose the paths between rooms are as in the figure below, where each arrow represents a locked door, and when a locked door is chosen the person does not take any path.

[Figure: Rooms 0, 1, and 2 with locked doors; the legible path probabilities include 2/5, 1/5, 2/5 and 1/4, 2/4, 1/4.]

(i) Find the transition probability matrix.

(e) Suppose the paths between rooms are as in the figure below, where each arrow represents a locked door, and when a locked door is chosen the person must choose and take a different path. Find each item listed following the figure.

[Figure: Rooms 0, 1, and 2 with locked doors and the associated path probabilities.]

(i) Find the transition probability matrix.

We let  in denote the probability of being in State i at time n, and we denote the row vector of such probabilities for every state as  n = which is called the state vector at time n. (  0n,  1n, …,  mn ), It must of course be true that m  i = 0  in = 1.

Returning to Exercise 1(b), with the transition probability matrix found in (i):

(ii) Write a formula for each of the following: the probability of being in room 0 at time $n = 1$, the probability of being in room 1 at time $n = 1$, the probability of being in room 2 at time $n = 1$.

$\pi_{0,1} = \Pr[X_1 = 0]$
$= \Pr[X_0 = 0 \cap X_1 = 0] + \Pr[X_0 = 1 \cap X_1 = 0] + \Pr[X_0 = 2 \cap X_1 = 0]$
$= \Pr[X_0 = 0]\Pr[X_1 = 0 \mid X_0 = 0] + \Pr[X_0 = 1]\Pr[X_1 = 0 \mid X_0 = 1] + \Pr[X_0 = 2]\Pr[X_1 = 0 \mid X_0 = 2]$
$= \pi_{0,0}\,p_{00} + \pi_{1,0}\,p_{10} + \pi_{2,0}\,p_{20}$
$= (4/9)\pi_{0,0} + (2/9)\pi_{1,0} + (3/6)\pi_{2,0}$

Similarly,

$\pi_{1,1} = \Pr[X_1 = 1] = \pi_{0,0}\,p_{01} + \pi_{1,0}\,p_{11} + \pi_{2,0}\,p_{21} = (2/9)\pi_{0,0} + (6/9)\pi_{1,0} + (1/6)\pi_{2,0}$

$\pi_{2,1} = \Pr[X_1 = 2] = \pi_{0,0}\,p_{02} + \pi_{1,0}\,p_{12} + \pi_{2,0}\,p_{22} = (3/9)\pi_{0,0} + (1/9)\pi_{1,0} + (2/6)\pi_{2,0}$

We may now write $\pi_1 = (\pi_{0,1}, \pi_{1,1}, \pi_{2,1}) = \pi_0 P$.

(iii) Write a formula for each of the following: the probability of being in room 0 at time $n = 2$, the probability of being in room 1 at time $n = 2$, the probability of being in room 2 at time $n = 2$.

$\pi_{0,2} = \Pr[X_2 = 0] = \pi_{0,1}\,p_{00} + \pi_{1,1}\,p_{10} + \pi_{2,1}\,p_{20} = (4/9)\pi_{0,1} + (2/9)\pi_{1,1} + (3/6)\pi_{2,1}$

$\pi_{1,2} = \Pr[X_2 = 1] = \pi_{0,1}\,p_{01} + \pi_{1,1}\,p_{11} + \pi_{2,1}\,p_{21} = (2/9)\pi_{0,1} + (6/9)\pi_{1,1} + (1/6)\pi_{2,1}$

$\pi_{2,2} = \Pr[X_2 = 2] = \pi_{0,1}\,p_{02} + \pi_{1,1}\,p_{12} + \pi_{2,1}\,p_{22} = (3/9)\pi_{0,1} + (1/9)\pi_{1,1} + (2/6)\pi_{2,1}$

We may now write $\pi_2 = (\pi_{0,2}, \pi_{1,2}, \pi_{2,2}) = \pi_1 P = \pi_0 P^2$.

(iv) Write a formula for each of the following: the probability of being in room 0 at time $n$, the probability of being in room 1 at time $n$, the probability of being in room 2 at time $n$.

$\pi_{0,n} = \Pr[X_n = 0] = \pi_{0,n-1}\,p_{00} + \pi_{1,n-1}\,p_{10} + \pi_{2,n-1}\,p_{20} = (4/9)\pi_{0,n-1} + (2/9)\pi_{1,n-1} + (3/6)\pi_{2,n-1}$

$\pi_{1,n} = \Pr[X_n = 1] = \pi_{0,n-1}\,p_{01} + \pi_{1,n-1}\,p_{11} + \pi_{2,n-1}\,p_{21} = (2/9)\pi_{0,n-1} + (6/9)\pi_{1,n-1} + (1/6)\pi_{2,n-1}$

$\pi_{2,n} = \Pr[X_n = 2] = \pi_{0,n-1}\,p_{02} + \pi_{1,n-1}\,p_{12} + \pi_{2,n-1}\,p_{22} = (3/9)\pi_{0,n-1} + (1/9)\pi_{1,n-1} + (2/6)\pi_{2,n-1}$

We may now write $\pi_n = (\pi_{0,n}, \pi_{1,n}, \pi_{2,n}) = \pi_{n-1} P = \pi_0 P^n$.
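The recursion $\pi_n = \pi_{n-1} P$ is straightforward to carry out numerically. Here is a minimal sketch using NumPy (the library choice and function name are mine; any matrix arithmetic works) that propagates the state vector for the room example of part (b):

```python
import numpy as np

# Transition matrix from Exercise 1(b); rows are indexed by the current room.
P = np.array([[4/9, 2/9, 3/9],
              [2/9, 6/9, 1/9],
              [3/6, 1/6, 2/6]])

def state_vector(pi0, P, n):
    """Return pi_n = pi_0 P^n by repeated right-multiplication."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(n):
        pi = pi @ P
    return pi

pi0 = np.array([1.0, 0.0, 0.0])   # start in room 0
print(state_vector(pi0, P, 2))    # [0.41358... 0.30246... 0.28395...]
print(67/162, 49/162, 23/81)      # matches the exact answer in part (v) below
```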

We let  in denote the probability of being in State i at time n, and we denote the row vector of such probabilities for every state as  n = which is called the state vector at time n. (  0n,  1n, …,  mn ), It must of course be true that m  i = 0  in = 1. From Exercise 1(b), we can see that  n+r = nn P r.

(v) Suppose a person is in room 0 at time 0. Find the probability of being in room 0 at time $n = 2$, the probability of being in room 1 at time $n = 2$, and the probability of being in room 2 at time $n = 2$.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 0 at time 0, we must have $\pi_0 = (1, 0, 0)$.

$$\pi_2 = \pi_0 P^2 = (1, 0, 0)\,P^2 = (4/9,\ 2/9,\ 3/9)\,P = (67/162,\ 49/162,\ 23/81)$$

(vi) Suppose a person is in room 1 at time 0. Find the probability of being in room 0 at time $n = 2$, the probability of being in room 1 at time $n = 2$, and the probability of being in room 2 at time $n = 2$.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 1 at time 0, we must have $\pi_0 = (0, 1, 0)$.

$$\pi_2 = \pi_0 P^2 = (0, 1, 0)\,P^2 = (2/9,\ 6/9,\ 1/9)\,P = (49/162,\ 83/162,\ 5/27)$$

(vii) Suppose a person is in room 2 at time 0. Find the probability of being in room 0 at time $n = 2$, the probability of being in room 1 at time $n = 2$, and the probability of being in room 2 at time $n = 2$.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 2 at time 0, we must have $\pi_0 = (0, 0, 1)$.

$$\pi_2 = \pi_0 P^2 = (0, 0, 1)\,P^2 = (3/6,\ 1/6,\ 2/6)\,P = (23/54,\ 5/18,\ 8/27)$$

Completing parts (c), (d), and (e) is for homework.
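For an exact check of parts (v)–(vii), the rows of $P^2$ can be computed with rational arithmetic; row $i$ of $P^2$ is exactly $\pi_2$ when the person starts in room $i$. A short sketch using Python's fractions module (the helper name is mine):

```python
from fractions import Fraction as F

P = [[F(4, 9), F(2, 9), F(3, 9)],
     [F(2, 9), F(6, 9), F(1, 9)],
     [F(3, 6), F(1, 6), F(2, 6)]]

def mat_mul(A, B):
    """Multiply two square matrices of Fractions."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)
for row in P2:
    print(row)
# [67/162, 49/162, 23/81], [49/162, 83/162, 5/27], [23/54, 5/18, 8/27]
```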

When the transition probability $\Pr[X_{n+1} = j \mid X_n = i]$ depends on $n$, the stochastic process defined by the Markov chain is called non-homogeneous. We can define

$${}_{n}p_{ij} = \Pr[X_{n+1} = j \mid X_n = i],$$

which is the probability of moving from state $i$ to state $j$ in discrete time interval #$(n+1)$ of the process, and we can denote the matrix whose entries are these probabilities by $P(n)$.

2. Let States 0 and 1 be defined respectively by a lighter not igniting or igniting, and let $X_n$ be 0 or 1 depending respectively on whether the lighter does not ignite or does ignite on try number $n = 0, 1, 2, 3, \ldots$, with $X_0 = 1$. Whenever the lighter is triggered at time $n$, if $X_{n-1} = 0$ then the probabilities the lighter will not or will ignite are respectively $1 - 0.8^n$ and $0.8^n$, but if $X_{n-1} = 1$ then the probabilities the lighter will not or will ignite are respectively $1 - 0.9^n$ and $0.9^n$.

(a) Decide whether the stochastic process defined by the Markov chain is homogeneous or non-homogeneous, and say why.

The stochastic process is non-homogeneous, since the probability of the lighter igniting or not igniting depends on $n$.

(b) Find a formula for each matrix $P(n)$ for $n = 0, 1, 2, 3, \ldots$.

$$P(n) = \begin{pmatrix} 1 - 0.8^{\,n+1} & 0.8^{\,n+1} \\ 1 - 0.9^{\,n+1} & 0.9^{\,n+1} \end{pmatrix}$$

(c) Find each of the following:

$\pi_0 = (0, 1)$

$\pi_1 = \pi_0 P(0) = (0, 1)\begin{pmatrix} 0.2 & 0.8 \\ 0.1 & 0.9 \end{pmatrix} = (0.1,\ 0.9)$

$\pi_2 = \pi_1 P(1) = (0.1,\ 0.9)\begin{pmatrix} 0.36 & 0.64 \\ 0.19 & 0.81 \end{pmatrix} = (0.207,\ 0.793)$
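Since each step uses a different matrix, the state vector is propagated by multiplying the matrices in order: $\pi_n = \pi_0 P(0)P(1)\cdots P(n-1)$. A small sketch in plain Python (function names are mine) reproduces the numbers above:

```python
def P(n):
    """One-step transition matrix for the lighter in discrete interval #(n+1).
    State 0 = does not ignite, state 1 = ignites."""
    return [[1 - 0.8**(n + 1), 0.8**(n + 1)],
            [1 - 0.9**(n + 1), 0.9**(n + 1)]]

def step(pi, M):
    """Right-multiply a row vector by a matrix."""
    return [sum(pi[i] * M[i][j] for i in range(len(pi)))
            for j in range(len(M[0]))]

pi = [0.0, 1.0]          # X_0 = 1: the lighter ignites at time 0
for n in range(2):
    pi = step(pi, P(n))
    print(n + 1, pi)     # 1: [0.1, 0.9]   2: approximately [0.207, 0.793]
```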

3. Let States 0 and 1 be defined respectively by a battery providing power or not providing power, and let $X_n$ be 0 or 1 depending respectively on whether the battery does or does not provide power on day number $n = 0, 1, 2, 3, \ldots$, with $X_0 = 0$. Whenever the battery is used at time $n$, if $X_{n-1} = 1$ then the probabilities the battery will or will not provide power are respectively 0 and 1, but if $X_{n-1} = 0$ then the probabilities the battery will or will not provide power are respectively $0.99^n$ and $1 - 0.99^n$.

(a) Decide whether the stochastic process defined by the Markov chain is homogeneous or non-homogeneous, and say why.

The stochastic process is non-homogeneous, since the probability of the battery providing or not providing power depends on $n$.

(b) Find a formula for each matrix $P(n)$ for $n = 0, 1, 2, 3, \ldots$.

$$P(n) = \begin{pmatrix} 0.99^{\,n+1} & 1 - 0.99^{\,n+1} \\ 0 & 1 \end{pmatrix}$$

(c) Find each of the following:

$\pi_0 = (1, 0)$

$\pi_1 = \pi_0 P(0) = (1, 0)\begin{pmatrix} 0.99 & 0.01 \\ 0 & 1 \end{pmatrix} = (0.99,\ 0.01)$

$\pi_2 = \pi_1 P(1) = (0.99,\ 0.01)\begin{pmatrix} 0.9801 & 0.0199 \\ 0 & 1 \end{pmatrix} = (0.970299,\ 0.029701)$

(d) Suppose that the battery costs $50, and the manufacturer of the battery pays a refund of $1/(n+1)$ of the cost if the battery does not provide power on day number $n = 1, 2, 3$. Find the expected refund.

The expected refund is $(50/2)\pi_{1,1} + (50/3)\pi_{1,2} + (50/4)\pi_{1,3}$. We have $\pi_{1,1}$ and $\pi_{1,2}$ from part (c), but we need to calculate $\pi_{1,3}$ as follows:

$$\pi_3 = \pi_2 P(2) = (0.970299,\ 0.029701)\begin{pmatrix} 0.970299 & 0.029701 \\ 0 & 1 \end{pmatrix} = (\text{don't care},\ 0.058520)$$

The expected refund is $(50/2)(0.01) + (50/3)(0.029701) + (50/4)(0.058520) \approx \$1.48$.
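The refund arithmetic can be double-checked with a short script. This is a sketch under the $P(n)$ derived in part (b); state 1 is absorbing, so $\pi_{1,n}$ is the probability the battery has failed by day $n$:

```python
def P_row0(n):
    """Probability the battery keeps working through day interval #(n+1)."""
    return 0.99 ** (n + 1)

pi = [1.0, 0.0]                       # battery works on day 0
failed = []                           # pi_{1,n} for n = 1, 2, 3
for n in range(3):
    p = P_row0(n)
    pi = [pi[0] * p,                  # still working
          pi[0] * (1 - p) + pi[1]]    # fails now, or had already failed
    failed.append(pi[1])

refund = sum(50 / (n + 2) * failed[n] for n in range(3))
print(failed)    # [0.01, 0.029701, 0.0585196...]
print(refund)    # about 1.48
```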

(e) Suppose that the battery costs $50, and the manufacturer of the battery pays a refund of $1/(n+1)$ of the cost if the battery does not provide power for the first time on day number $n = 1, 2, 3$. Find the expected refund.

The expected refund is

$$(50/2)\,{}_{0}p_{01} + (50/3)\,{}_{0}p_{00}\,{}_{1}p_{01} + (50/4)\,{}_{0}p_{00}\,{}_{1}p_{00}\,{}_{2}p_{01}.$$

Completing part (e) is for homework.

4. Read Text Exercise 3-4.

(a) Find the probability that the process is in State 0 or State 1 at each of times $t = 0, 1, 2, 3, \ldots$.

Since the process begins in State 0 at time 0, the probability of being in State 0 or State 1 at time 0 is 1.

The probability that the process is in State 0 or State 1 at time 1 is $p_{00} + p_{01} = 0.6 + 0.3 = 0.9$.

The probability that the process is in State 0 or State 1 at time 2 is

$${}_{0}p_{00}\,{}_{1}p_{00} + {}_{0}p_{00}\,{}_{1}p_{01} + {}_{0}p_{01}\,{}_{1}p_{11} = (0.6)(0.6) + (0.6)(0.3) + (0.3)(0) = 0.54.$$

The probability that the process is in State 0 or State 1 at time 3 is

$${}_{0}p_{00}\,{}_{1}p_{00}\,{}_{2}p_{01} = (0.6)(0.6)(0.3) = 0.108.$$

The probability that the process is in State 0 or State 1 at any time greater than 3 is 0.

(b) Do part (a) of Text Exercise 3-4.

The expected payment is $(1)(1) + (1)(0.9) + (1)(0.54) + (1)(0.108) = 2.548$.

Completing parts (c), (d), and (e) is for homework.

When the transition probability $\Pr[X_{n+1} = j \mid X_n = i]$ depends on $n$, the stochastic process defined by the Markov chain is called non-homogeneous. We can define

$${}_{n}p_{ij} = \Pr[X_{n+1} = j \mid X_n = i],$$

which is the probability of moving from state $i$ to state $j$ in discrete time interval #$(n+1)$ of the process, and we can denote the matrix whose entries are these probabilities by $P(n)$. More generally, we can define

$${}_{n}^{\;r}p_{ij} = \Pr[X_{n+r} = j \mid X_n = i],$$

which is the probability of moving from state $i$ to state $j$ from time $n$ to time $n + r$; these probabilities will be the entries in the matrix product

$$P(n)\,P(n+1)\,P(n+2)\cdots P(n+r-1).$$

In actuarial mathematics applications, time $n = 0$ typically corresponds to age $x$ for an entity (person). The previous notation is adapted by writing

$${}_{r}p^{ij}_{x} = \Pr[X_r = j \mid X_0 = i].$$
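The $r$-step matrix is just an ordered product of the one-step matrices. A sketch in plain Python (reusing the lighter's $P(n)$ from Exercise 2 purely as an illustration; helper names are mine):

```python
def P(n):
    """One-step matrix for interval #(n+1): the lighter chain from Exercise 2."""
    return [[1 - 0.8**(n + 1), 0.8**(n + 1)],
            [1 - 0.9**(n + 1), 0.9**(n + 1)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def r_step(n, r):
    """Matrix of probabilities Pr[X_{n+r} = j | X_n = i] = P(n) ... P(n+r-1)."""
    M = [[1.0, 0.0], [0.0, 1.0]]      # 2x2 identity
    for k in range(n, n + r):
        M = mat_mul(M, P(k))
    return M

print(r_step(0, 2))   # row 1 is about [0.207, 0.793], matching pi_2 in Exercise 2(c)
```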

When it is possible to move from every state to every other state, we say that all states communicate with each other; in the actuarial models to be considered here, this will never be the case. If there is a state for which the probability of leaving is zero, that state is called an absorbing state.

With the discrete-time process, observations $X_n$ are made only for the discrete time intervals from 0 to 1, 1 to 2, etc. However, with a continuous-time process, observations $X_t = X(t)$ can be made at any time $t \geq 0$, and the probabilities ${}_{r}p^{ij}_{x}$ are determined by force of transition functions. For an entity with age $x$ at time 0, we define the following force of transition function at time $s$ (when the entity is age $x + s$):

$$\mu^{ij}_{x+s} = \text{force of transition from state } i \text{ to state } j = \lim_{h \to 0} \frac{{}_{h}p^{ij}_{x+s}}{h}.$$

Note that if $\mu^{ij}_{x+s}$ is constant for all values of $s$, then it can be shown that ${}_{t}p^{ij}_{x+s}$ is constant for all values of $s$, implying that the process is homogeneous; otherwise, the process is non-homogeneous.

It must of course be true that

$${}_{t+s}p^{ij}_{x} = \sum_{k=0}^{m} {}_{t}p^{ik}_{x}\; {}_{s}p^{kj}_{x+t}.$$

We may now write

$${}_{t+h}p^{ij}_{x} - {}_{t}p^{ij}_{x} = \sum_{k=0}^{m} {}_{t}p^{ik}_{x}\; {}_{h}p^{kj}_{x+t} - {}_{t}p^{ij}_{x} = \sum_{k \neq j} {}_{t}p^{ik}_{x}\; {}_{h}p^{kj}_{x+t} + {}_{t}p^{ij}_{x}\; {}_{h}p^{jj}_{x+t} - {}_{t}p^{ij}_{x} = \sum_{k \neq j} {}_{t}p^{ik}_{x}\; {}_{h}p^{kj}_{x+t} - \left(1 - {}_{h}p^{jj}_{x+t}\right) {}_{t}p^{ij}_{x}.$$

It will be convenient to define the total force of transition out of state $i$,

$$\mu^{i\bullet}_{x+s} = \sum_{j=0}^{i-1} \mu^{ij}_{x+s} + \sum_{j=i+1}^{m} \mu^{ij}_{x+s} = \sum_{j \neq i} \mu^{ij}_{x+s}.$$

m  k  j t p ik  x h p kj x + t   t p ij Dividing both sides by h, we may write t + h p ij  t p ij = h h h Taking the limit of both sides as h  0, we have Kolmogorov’s Forward Equation: m  k  j t p ik  x h p kj x + t  t p ij x + t m  k  j x + t  h p jk m  k  j x + t h p jk xx t p ij = x d — dt m  k  j t p ik  x  kj  x + t m  k  j  jk x + t  t p ij jj x + t

5. Suppose $X(t)$ represents a continuous-time Markov chain where from state $i$ it is only possible to move to state $j$, which is an absorbing state. Let the random variable $T$ be the time it takes to transition from state $i$ to state $j$.

(a) Write a formula for the cumulative distribution function for $T$ and the probability density function for $T$, both in terms of ${}_{t}p^{ij}_{x}$.

$$\Pr[T \leq t] = \Pr[X(t) = j \mid X(0) = i] = {}_{t}p^{ij}_{x}$$

Consequently, the probability density function for the random variable $T$ must be

$$\frac{d}{dt}\, {}_{t}p^{ij}_{x}.$$

(b) Use part (a) to write a formula for ${}_{t}p^{ij}_{x}$ in terms of an integral.

$${}_{t}p^{ij}_{x} = \Pr[X(t) = j \mid X(0) = i] = \int_0^t \frac{d}{dr}\, {}_{r}p^{ij}_{x}\; dr$$

(c) Use part (a) to write a formula for ${}_{t}p^{ii}_{x}$ in terms of an integral.

$${}_{t}p^{ii}_{x} = \Pr[X(t) = i \mid X(0) = i] = \Pr[T > t] = \int_t^{\infty} \frac{d}{dr}\, {}_{r}p^{ij}_{x}\; dr$$

6. Let States 0 and 1 be defined respectively by a battery providing power or not providing power, and let $X(t)$ be 0 or 1 depending respectively on whether the battery does or does not provide power at time $t$. For any time when the battery does not provide power, it must be true that the battery will not provide power at any future time; that is, if $X(t_0) = 1$, then $X(t) = 1$ for all $t > t_0$.

(a) Is either of the two states an absorbing state? Why or why not?

State 1 is an absorbing state, since once the battery does not provide power, the battery will not provide power at any future time, implying that the probability of leaving this state is zero.

(b) Use Kolmogorov's Forward Equation to write a system of equations involving the various probability functions, their derivatives, and the various force of transition functions.

First, we realize that we must have ${}_{t}p^{11}_{x} = 1$ and ${}_{t}p^{10}_{x} = 0$. Next, we realize that since ${}_{t}p^{10}_{x} = 0$, we must have $\mu^{10}_{x+t} = 0$. Kolmogorov's Forward Equation then gives

$$\frac{d}{dt}\, {}_{t}p^{01}_{x} = {}_{t}p^{00}_{x}\, \mu^{01}_{x+t} - \mu^{10}_{x+t}\, {}_{t}p^{01}_{x} = {}_{t}p^{00}_{x}\, \mu^{01}_{x+t}$$

$$\frac{d}{dt}\, {}_{t}p^{00}_{x} = {}_{t}p^{01}_{x}\, \mu^{10}_{x+t} - \mu^{01}_{x+t}\, {}_{t}p^{00}_{x} = -\,{}_{t}p^{00}_{x}\, \mu^{01}_{x+t}$$

(c) Suppose $\mu^{01}_{x+t}$ is equal to a constant $\mu$ (i.e., the process is homogeneous). Find functions ${}_{t}p^{00}_{x}$ and ${}_{t}p^{01}_{x}$ which satisfy the differential equations in part (b).

By realizing that the derivative of ${}_{t}p^{00}_{x} = e^{-\mu t}$ will be $-\mu e^{-\mu t}$, we find that this function satisfies the second equation in (b). The first equation in (b) can then be satisfied by letting ${}_{t}p^{01}_{x} = 1 - e^{-\mu t}$.

(d) Suppose the constant in part (c) is equal to 8. Use results from Exercise 5 to find each of the following:

$$\Pr[X(10) = 1 \mid X(0) = 0] = \int_0^{10} \frac{d}{dr}\, {}_{r}p^{01}_{x}\; dr = \int_0^{10} 8e^{-8r}\, dr = 1 - e^{-80}$$

$$\Pr[X(10) = 0 \mid X(0) = 0] = \int_{10}^{\infty} 8e^{-8r}\, dr = e^{-80}$$
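As a quick numerical check of (d) (the closed forms make the answer nearly degenerate at $\mu = 8$, $t = 10$):

```python
import math

mu, t = 8.0, 10.0
p01 = 1 - math.exp(-mu * t)   # Pr[X(10) = 1 | X(0) = 0]
p00 = math.exp(-mu * t)       # Pr[X(10) = 0 | X(0) = 0]
print(p01, p00)               # p00 = e^-80 is about 1.8e-35, so p01 is essentially 1
```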

7. Read Text Exercise 3-1.

(a) Do part (a) of Text Exercise 3-1.

$$p^{01}_{x+1} = 1 - p^{00}_{x+1}$$

(b) Do part (b) of Text Exercise 3-1.

$$p^{10}_{x+2} = 1 - p^{11}_{x+2}$$

(c) Do part (c) of Text Exercise 3-1, realizing that

$$\Pr[X_3 = 0 \mid X_1 = 0] = \Pr[X_2 = 0 \cap X_3 = 0 \mid X_1 = 0] + \Pr[X_2 = 1 \cap X_3 = 0 \mid X_1 = 0].$$

$${}_{2}p^{00}_{x+1} = p^{00}_{x+1}\, p^{00}_{x+2} + p^{01}_{x+1}\, p^{10}_{x+2}$$

Completing this exercise is for homework.

(d) Do part (d) of Text Exercise 3-1.

The stochastic process is non-homogeneous, since each probability depends on age.

(e) Find $\Pr[X_3 = 1 \mid X_1 = 1]$.

$${}_{2}p^{11}_{x+1} = p^{10}_{x+1}\, p^{01}_{x+2} + p^{11}_{x+1}\, p^{11}_{x+2} = [1 - 0.6 - 0.2/(1+1)][1 - 0.7 - 0.1/(2+1)] + [0.6 + 0.2/(1+1)][0.6 + 0.2/(2+1)] \approx 0.5467$$
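Under the one-step probabilities implied by the bracketed values above (my reading of Text Exercise 3-1, and an assumption: $p^{00}_{x+n} = 0.7 + 0.1/(n+1)$ and $p^{11}_{x+n} = 0.6 + 0.2/(n+1)$), the two-step probability can be checked directly:

```python
def p00(n):
    # Assumed one-step probability of staying in state 0 at age x + n.
    return 0.7 + 0.1 / (n + 1)

def p11(n):
    # Assumed one-step probability of staying in state 1 at age x + n.
    return 0.6 + 0.2 / (n + 1)

# Pr[X_3 = 1 | X_1 = 1]: either leave state 1 and return, or stay put twice.
two_step_11 = (1 - p11(1)) * (1 - p00(2)) + p11(1) * p11(2)
print(two_step_11)   # about 0.5467
```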

8. Read Text Exercise 3-2.

(a) Do part (a) of Text Exercise 3-2.

(b) Do part (b) of Text Exercise 3-2 by examining the probability of moving from endangered to thriving at time $t$ as $t$ increases.

(c) Do part (c) of Text Exercise 3-2. First, consider the possible paths from State 1 to State 2:

$$1 \to 2, \qquad 1 \to 1 \to 2, \qquad 1 \to 1 \to 1 \to 2.$$

The probability is then the sum of the probabilities of these three paths. Completing this exercise is for homework.

9. Read Text Exercise 3-3.

(a) Do part (a) of Text Exercise 3-3.

The stochastic process is homogeneous, since each force of transition is constant.

(b) Do part (b) of Text Exercise 3-3 by noticing that, after simplifying the Kolmogorov equation, you are looking for a function with a first derivative equal to a multiple of the original function.

First, we realize that $\mu^{10} = \mu^{20} = \mu^{30} = 0$. Consequently, we have

$$\frac{d}{dt}\, {}_{t}p^{00}_{x} = -\,{}_{t}p^{00}_{x} \left( \mu^{01}_{x+t} + \mu^{02}_{x+t} + \mu^{03}_{x+t} \right).$$

Completing this exercise is for homework.