


1 Stochastic Models, Lecture 1, Part II: Markov Chains. Nan Chen, MSc Program in Financial Engineering, The Chinese University of Hong Kong (Shenzhen). Sept. 16, 2015

2 Outline: 1. Long-Term Behavior of Markov Chains; 2. Mean Time Spent in Transient States; 3. Branching Processes

3 1.5 Long-Run Proportions and Limiting Probabilities

4 Long-Run Proportions of MC. In this section, we will study the long-term behavior of Markov chains. Consider a Markov chain $\{X_n, n \ge 0\}$ with transition probabilities $P_{ij}$. Let $\pi_j$ denote the long-run proportion of time that the Markov chain is in state $j$, i.e.,
$$\pi_j = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} \mathbf{1}\{X_k = j\}.$$

5 Long-Run Proportions of MC (Continued). A simple fact is that, if a state $j$ is transient, the corresponding $\pi_j = 0$. For recurrent states, we have the following result. Let $T_j = \min\{n \ge 1 : X_n = j\}$, the number of transitions until the Markov chain makes a transition into state $j$, and denote by $m_j$ its expectation, i.e., $m_j = E[T_j \mid X_0 = j]$.

6 Long-Run Proportions of MC (Continued). Theorem: If the Markov chain is irreducible and recurrent, then for any initial state,
$$\pi_j = \frac{1}{m_j}.$$

7 Positive Recurrent and Null Recurrent. Definition: we say a recurrent state $j$ is positive recurrent if $m_j < \infty$, and null recurrent if $m_j = \infty$. It is obvious from the previous theorem that, if state $j$ is positive recurrent, we have $\pi_j = 1/m_j > 0$.

8 How to Determine $\pi_j$? Theorem: Consider an irreducible Markov chain. If the chain is also positive recurrent, then the long-run proportions $\{\pi_j\}$ are the unique solution of the equations
$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \sum_j \pi_j = 1.$$
If there is no solution of the preceding linear equations, then the chain is either transient or null recurrent and all $\pi_j = 0$.

9 Example I: Rainy Days in Shenzhen Assume that in Shenzhen, if it rains today, then it will rain tomorrow with prob. 60%; and if it does not rain today, then it will rain tomorrow with prob. 40%. What is the average proportion of rainy days in Shenzhen?

10 Example I: Rainy Days in Shenzhen (Solution). Model the weather as a two-state Markov chain with state 0 = rain and state 1 = no rain, so that
$$P = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix}.$$
Let $\pi_0$ and $\pi_1$ be the long-run proportions of rainy and no-rain days. We have
$$\pi_0 = 0.6\pi_0 + 0.4\pi_1, \qquad \pi_1 = 0.4\pi_0 + 0.6\pi_1, \qquad \pi_0 + \pi_1 = 1,$$
which gives $\pi_0 = \pi_1 = 0.5$: in the long run, half of the days are rainy.
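As a sanity check, the stationary equations can also be solved numerically. A minimal sketch in Python/NumPy (the slides themselves use MATLAB; the 0 = rain, 1 = no rain labeling is the convention above):

```python
import numpy as np

# Transition matrix of the rain chain (state 0 = rain, state 1 = no rain).
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# Solve pi = pi * P together with sum(pi) = 1 as one overdetermined
# linear system: stack (P^T - I) on top of a row of ones.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [0.5 0.5]
```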

11 Example II: A Model of Class Mobility. A problem of interest to sociologists is to determine the proportions of a society that have upper-, middle-, and lower-class occupations. Let us consider the transitions between social classes of the successive generations in a family. Assume that the occupation of a child depends only on his or her parent's occupation.

12 Example II: A Model of Class Mobility (Continued). The transition matrix of this social mobility (state 0 = upper, 1 = middle, 2 = lower) is given by
$$P = \begin{pmatrix} 0.45 & 0.48 & 0.07 \\ 0.05 & 0.70 & 0.25 \\ 0.01 & 0.50 & 0.49 \end{pmatrix}.$$
That is, for instance, the child of a middle-class worker will attain an upper-class occupation with prob. 5% and will move down to a lower-class occupation with prob. 25%.

13 Example II: A Model of Class Mobility (Continued). The long-run proportions thus satisfy
$$\pi_0 = 0.45\pi_0 + 0.05\pi_1 + 0.01\pi_2,$$
$$\pi_1 = 0.48\pi_0 + 0.70\pi_1 + 0.50\pi_2,$$
$$\pi_2 = 0.07\pi_0 + 0.25\pi_1 + 0.49\pi_2,$$
$$\pi_0 + \pi_1 + \pi_2 = 1.$$
Hence, $\pi_0 \approx 0.06$, $\pi_1 \approx 0.62$, $\pi_2 \approx 0.31$.
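The same numerical approach handles the three-class chain; this sketch assumes the transition matrix reconstructed above (the one from Ross's class-mobility example, which these slides appear to follow):

```python
import numpy as np

# Class-mobility transition matrix (state 0 = upper, 1 = middle, 2 = lower),
# assumed to match Ross's class-mobility example.
P = np.array([[0.45, 0.48, 0.07],
              [0.05, 0.70, 0.25],
              [0.01, 0.50, 0.49]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])  # pi = pi P plus normalization
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.round(2))  # -> [0.06 0.62 0.31]
```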

14 Stationary Distribution of MC. For a Markov chain, any set $\{\pi_j\}$ satisfying
$$\pi_j = \sum_i \pi_i P_{ij} \quad\text{and}\quad \sum_j \pi_j = 1$$
is called a stationary probability distribution of the Markov chain. Theorem: If the Markov chain starts with the initial distribution $\{\pi_j\}$, i.e., $P(X_0 = j) = \pi_j$, then $P(X_n = j) = \pi_j$ for all states $j$ and all $n \ge 1$.

15 Limiting Probabilities. Reconsider Example I (rainy days in Shenzhen): calculate the probabilities that it will rain in 10 days, in 100 days, and in 1,000 days, given that it does not rain today.

16 Limiting Probabilities (Continued). That translates into the following problem: given
$$P = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix},$$
what are $P^{(10)}_{10}$, $P^{(100)}_{10}$, and $P^{(1000)}_{10}$, the $n$-step transition probabilities from state 1 (no rain) to state 0 (rain)?

17 Limiting Probabilities (Continued). With the help of MATLAB, we have
$$P^{10} \approx P^{100} \approx P^{1000} \approx \begin{pmatrix} 0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix},$$
so the desired probability is approximately 0.5 in all three cases (already at $n = 10$ the entries agree with 0.5 to seven decimal places).
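A quick reproduction of these powers, sketched in Python/NumPy rather than the MATLAB mentioned on the slide:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

for n in (10, 100, 1000):
    Pn = np.linalg.matrix_power(P, n)
    # Entry [1, 0]: probability of rain in n days given no rain today.
    print(n, Pn[1, 0])  # 0.4999999488..., then 0.5, 0.5
```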

18 Limiting Probabilities (Continued). Observations: – As $n \to \infty$, $P^{(n)}_{ij}$ converges; – The limit of $P^{(n)}_{ij}$ does not depend on the initial state $i$; – The limits coincide with the stationary distribution of the Markov chain.

19 Limiting Probabilities and Stationary Distribution. Theorem: In a positive recurrent (aperiodic) Markov chain, we have
$$\lim_{n \to \infty} P^{(n)}_{ij} = \pi_j \quad\text{for all } i,$$
where $\{\pi_j\}$ is the stationary distribution of the chain. In words, a positive recurrent aperiodic Markov chain reaches an equilibrium, or steady state, after long-term transition.

20 Periodic vs. Aperiodic. The requirement of aperiodicity in the last theorem turns out to be essential. We say that a Markov chain is periodic with period $d > 1$ if, starting from any state, it can only return to that state in a multiple of $d$ steps. Otherwise, we say that it is aperiodic. Example: a Markov chain that cycles deterministically through three states has period 3.

21 Summary In a positive recurrent aperiodic Markov chain, the following three concepts are equivalent: – Long-run proportion; – Stationary probability distribution; – Long-term limits of transition probabilities.

22 The Ergodic Theorem. The following result is quite useful. Theorem: Let $\{X_n\}$ be an irreducible Markov chain with stationary distribution $\{\pi_j\}$, and let $f$ be a bounded function on the state space. Then, with probability 1,
$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} f(X_n) = \sum_j f(j)\,\pi_j.$$
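A minimal simulation sketch of the ergodic theorem on the rain chain from Example I, taking $f(j) = \mathbf{1}\{j = \text{rain}\}$ so that the time average should approach $\pi_0 = 0.5$:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])
f = np.array([1.0, 0.0])  # f(j) = 1 if state j is "rain", else 0

# Simulate one long path and form the time average of f(X_n).
N, x, total = 100_000, 0, 0.0
for _ in range(N):
    x = rng.choice(2, p=P[x])
    total += f[x]
print(total / N)  # close to f(0)*pi_0 + f(1)*pi_1 = 0.5
```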

23 Example III: Bonus-Malus Automobile Insurance System. In most countries of Europe and Asia, the automobile insurance premium is determined by use of a Bonus-Malus (Latin for good-bad) system. Each policyholder is assigned a positive-integer-valued state, and the annual premium is a function of this state. Lower-numbered states correspond to fewer claims in the past and result in lower annual premiums.

24 Example III (Continued) A policyholder’s state changes from year to year in response to the number of claims that he/she made in the past year. Consider a hypothetical Bonus-Malus system having 4 states.

25 Example III (Continued). Suppose that historical data give the transition probabilities between the four states, together with the annual premium charged in each state. Find the average annual premium paid by a typical policyholder.

26 Example III (Solution). By the ergodic theorem, we need to compute the corresponding stationary distribution first: $(\pi_1, \pi_2, \pi_3, \pi_4)$ solves $\pi_j = \sum_i \pi_i P_{ij}$ with $\sum_j \pi_j = 1$. Therefore, with $c(i)$ denoting the annual premium charged in state $i$, the average annual premium paid is $\sum_{i=1}^{4} c(i)\,\pi_i$.
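The slide's transition matrix and premium schedule did not survive the transcript, so as an illustration here is the computation with data in the style of the Bonus-Malus example in Ross: annual premiums 200, 250, 400, 600 for states 1-4, and claim counts that are Poisson with mean 1/2. Treat all of these figures as assumptions, not the slide's own numbers.

```python
import numpy as np
from math import exp, factorial

# Assumed data (Ross-style Bonus-Malus example; not from the slide itself).
premium = np.array([200.0, 250.0, 400.0, 600.0])
a = [exp(-0.5) * 0.5**k / factorial(k) for k in range(3)]  # P(k claims), k = 0, 1, 2
P = np.array([
    [a[0], a[1], a[2], 1 - a[0] - a[1] - a[2]],  # from state 1
    [a[0], 0.0,  a[1], 1 - a[0] - a[1]],         # from state 2
    [0.0,  a[0], 0.0,  1 - a[0]],                # from state 3
    [0.0,  0.0,  a[0], 1 - a[0]],                # from state 4
])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.round(4), premium @ pi)  # average premium ~= 326.4 under these assumptions
```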

27 1.6 Mean Time Spent in Transient States

28 Mean Time Spent in Transient States. Now let us turn to transient states. We know that the chain will leave transient states eventually, so it is an interesting problem to study how long a Markov chain spends in such states. Below we use an example to illustrate the idea; please refer to Ross Ch. 4.6 (p. 231-234) for more theoretical discussion.

29 Example IV: Gambler's Ruin Problem. Consider the gambler's ruin problem. Suppose that the gambler starts with 3 dollars and decides to exit the game when he loses all his money or reaches $7. Assume that his winning probability in each round is 40%. What is the expected amount of time the gambler has 2 dollars?

30 Example IV: Gambler's Ruin Problem (Continued). This is a Markov chain on the states $\{0, 1, \ldots, 7\}$; states $1, \ldots, 6$ are transient. Let $s_{ij}$ denote the expected number of time periods the gambler has $j$ dollars, given that he starts with $i$ dollars. Then, conditioning on the initial transition, we obtain a system of linear equations for the $s_{ij}$.

31 Example IV: Gambler's Ruin Problem (Continued). In general, for transient states $i, j \in \{1, \ldots, 6\}$,
$$s_{ij} = \delta_{ij} + 0.4\, s_{i+1,j} + 0.6\, s_{i-1,j},$$
where $\delta_{ij} = 1$ if $i = j$ and 0 otherwise, with the convention $s_{0j} = s_{7j} = 0$ (once the game ends, no further time is spent in any transient state).

32 Example IV: Gambler's Ruin Problem (Continued). Putting the above equations in matrix form,
$$S = I + QS, \quad\text{i.e.,}\quad S = (I - Q)^{-1},$$
where $S = (s_{ij})_{i,j=1}^{6}$, $I$ is the $6 \times 6$ identity matrix, and $Q$, the transition matrix restricted to the transient states $1, \ldots, 6$, is given by
$$Q = \begin{pmatrix} 0 & 0.4 & 0 & 0 & 0 & 0 \\ 0.6 & 0 & 0.4 & 0 & 0 & 0 \\ 0 & 0.6 & 0 & 0.4 & 0 & 0 \\ 0 & 0 & 0.6 & 0 & 0.4 & 0 \\ 0 & 0 & 0 & 0.6 & 0 & 0.4 \\ 0 & 0 & 0 & 0 & 0.6 & 0 \end{pmatrix}.$$

33 Example IV: Gambler's Ruin Problem (Solution). We can invert the matrix to solve for $S = (I - Q)^{-1}$. In particular,
$$s_{32} \approx 2.37,$$
so starting with 3 dollars, the gambler spends on average about 2.37 time periods with exactly 2 dollars.
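A short NumPy check of this inversion:

```python
import numpy as np

# Q: one-step transition probabilities among the transient states 1..6
# (win $1 w.p. 0.4, lose $1 w.p. 0.6; states 0 and 7 are absorbing).
p = 0.4
Q = np.zeros((6, 6))
for i in range(6):
    if i + 1 < 6:
        Q[i, i + 1] = p        # move up one dollar
    if i - 1 >= 0:
        Q[i, i - 1] = 1 - p    # move down one dollar

S = np.linalg.inv(np.eye(6) - Q)  # S[i, j] corresponds to s_{i+1, j+1}
print(S[2, 1])                    # s_{3,2} ~= 2.3677
```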

34 1.7 Branching Processes

35 Branching Processes. In this section, we consider a special example of Markov chains, known as branching processes. They have a wide variety of applications in the biological, sociological, and engineering sciences.

36 Origin of the Problem. In the 19th century, two British scientists, Francis Galton and Henry Watson, found that many aristocratic surnames in Britain were becoming extinct. They attributed this phenomenon to the patrilineal rule of inheritance (surnames pass from father to son).

37 Origin of the Problem. (Figure: an extinct family tree, showing generations 0, 1, and 2 of a family in which the surname-bearing male line dies out.)

38 Mathematical Formulation. Suppose that each individual will, by the end of his/her lifetime, have produced $j$ new offspring with prob. $p_j$, $j \ge 0$, independently of the numbers produced by other individuals. Suppose that $p_j < 1$ for all $j \ge 0$. Let $X_n$ be the size of the $n$th generation. It is easy to verify that $\{X_n, n \ge 0\}$ is a Markov chain having as its state space the set of nonnegative integers. (Exercise: please write down the transition probabilities.)

39 Some Simple Facts of Branching Processes. 0 is a recurrent state, since $P_{00} = 1$. If $p_0 > 0$, all the other states are transient. This leads to an important conclusion: if $p_0 > 0$, then the population will either go extinct or explode up to infinity.

40 Population Expectation. Assume that $X_0 = 1$, that is, initially there is a single individual present. We calculate $E[X_n]$. We may write
$$X_n = \sum_{i=1}^{X_{n-1}} Z_i,$$
where $Z_i$ represents the number of offspring of the $i$th individual of the $(n-1)$st generation.

41 Population Expectation (Continued). Let $\mu = \sum_{j=0}^{\infty} j\,p_j$ be the average offspring number of each individual. We have
$$E[X_n] = \mu\,E[X_{n-1}] = \mu^n.$$
Therefore: – When $\mu < 1$, the expected number of individuals will converge to 0 as $n \to \infty$.
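A Monte Carlo sketch of the identity $E[X_n] = \mu^n$, under an assumed offspring distribution (the distribution is ours, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed offspring distribution: P(0) = 0.25, P(1) = 0.25, P(2) = 0.5.
p = np.array([0.25, 0.25, 0.5])
mu = np.arange(3) @ p  # mean offspring number, 1.25 here

def nth_generation_size(n_gens):
    """Simulate one branching-process path from X_0 = 1; return X_n."""
    x = 1
    for _ in range(n_gens):
        x = rng.choice(3, size=x, p=p).sum() if x > 0 else 0
    return x

# Average X_5 over many paths and compare with mu**5.
samples = [nth_generation_size(5) for _ in range(20_000)]
print(np.mean(samples), mu**5)  # both close to 1.25**5 ~= 3.05
```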

42 Extinction Probability. Denote by $\pi_0$ the probability that a family ultimately dies out (under the assumption $X_0 = 1$). An equation determining $\pi_0$ may be derived by conditioning on the number of offspring of the initial individual:
$$\pi_0 = \sum_{j=0}^{\infty} \pi_0^{\,j}\, p_j.$$

43 Extinction Probability (Continued). Define a function
$$\phi(s) = \sum_{j=0}^{\infty} p_j s^j, \qquad 0 \le s \le 1.$$
It satisfies: – $\phi(0) = p_0$ and $\phi(1) = 1$; – $\phi$ is increasing and convex on $[0, 1]$; – $\phi'(1) = \mu$.

44 Extinction Probability (Continued). The problem is transformed into finding a solution of the equation
$$\phi(s) = s, \qquad 0 \le s \le 1.$$
Two scenarios: – When $\mu \le 1$, the above equation has only one solution, $s = 1$. That is, the family will die out with probability 1.

45 Extinction Probability (Continued). – When $\mu > 1$, the above equation has two solutions: one is still 1 and the other is strictly less than 1. We can show that the extinction probability $\pi_0$ is equal to the smaller solution.

46 Example V: Branching Processes. Suppose that each individual in a family produces offspring according to a given distribution $\{p_j\}$. What is the extinction probability of this family?
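The slide's offspring distribution did not survive the transcript. As an illustration, assume $p_0 = 1/4$, $p_1 = 1/4$, $p_2 = 1/2$, so $\mu = 1.25 > 1$; solving $\phi(s) = s$, i.e., $\tfrac{1}{4} + \tfrac{1}{4}s + \tfrac{1}{2}s^2 = s$, gives roots $s = 1/2$ and $s = 1$, so the extinction probability would be $\pi_0 = 1/2$ under this assumption. The fixed-point iteration below recovers the smaller root numerically:

```python
import numpy as np

# Assumed offspring distribution (not the slide's own numbers).
p = np.array([0.25, 0.25, 0.5])

def phi(s):
    """Generating function phi(s) = sum_j p_j * s**j."""
    return sum(pj * s**j for j, pj in enumerate(p))

# Iterating s <- phi(s) from s = 0 increases monotonically to the smallest
# fixed point of phi, which is the extinction probability pi_0.
s = 0.0
for _ in range(200):
    s = phi(s)
print(s)  # ~0.5, matching the smaller root of 2s^2 - 3s + 1 = 0
```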

47 Homework Assignments. Read Ross Chapters 4.4, 4.6, and 4.7. Supplementary reading: Ross Chapters 4.8-4.11. Answer questions: – Exercises 18, 20 (page 263, Ross); – Exercises 23, 24 (page 264, Ross); – Exercise 37 (page 266, Ross); – (Optional, extra bonus) Exercise 70 (page 272, Ross).

