STOCHASTIC MODELS, LECTURE 1 PART II: MARKOV CHAINS
Nan Chen
MSc Program in Financial Engineering, The Chinese University of Hong Kong (Shenzhen)
Sept. 16, 2015

Outline
1. Long-Term Behavior of Markov Chains
2. Mean Time Spent in Transient States
3. Branching Processes

1.5 LONG-RUN PROPORTIONS AND LIMITING PROBABILITIES

Long-Run Proportions of MC
In this section, we study the long-term behavior of Markov chains. Consider a Markov chain $\{X_n, n \ge 0\}$ with transition probabilities $P_{ij}$. Let $\pi_j$ denote the long-run proportion of time that the Markov chain is in state $j$, i.e.,
$$\pi_j = \lim_{n \to \infty} \frac{1}{n} \sum_{m=1}^{n} \mathbf{1}\{X_m = j\}.$$

Long-Run Proportions of MC (Continued)
A simple fact is that, if a state $j$ is transient, the corresponding $\pi_j = 0$. For recurrent states, we have the following result. Let $N_j = \min\{n \ge 1 : X_n = j\}$ be the number of transitions until the Markov chain makes a transition into state $j$, and denote by $m_j$ its expectation, i.e., $m_j = E[N_j \mid X_0 = j]$.

Long-Run Proportions of MC (Continued)
Theorem: If the Markov chain is irreducible and recurrent, then for any initial state,
$$\pi_j = \frac{1}{m_j}.$$

Positive Recurrent and Null Recurrent
Definition: We say a state $j$ is positive recurrent if $m_j < \infty$, and null recurrent if $m_j = \infty$. It is obvious from the previous theorem that, if state $j$ is positive recurrent, we have $\pi_j = 1/m_j > 0$, while if it is null recurrent, $\pi_j = 0$.

How to Determine $\pi_j$?
Theorem: Consider an irreducible Markov chain. If the chain is also positive recurrent, then the long-run proportions $\{\pi_j\}$ are the unique solution of the equations
$$\pi_j = \sum_i \pi_i P_{ij} \ \text{ for all } j, \qquad \sum_j \pi_j = 1.$$
If there is no solution of the preceding linear equations, then the chain is either transient or null recurrent and all $\pi_j = 0$.

Example I: Rainy Days in Shenzhen Assume that in Shenzhen, if it rains today, then it will rain tomorrow with prob. 60%; and if it does not rain today, then it will rain tomorrow with prob. 40%. What is the average proportion of rainy days in Shenzhen?

Example I: Rainy Days in Shenzhen (Solution)
Model the problem as a two-state Markov chain with states R (rain) and N (no rain) and transition matrix
$$P = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix}.$$
Let $\pi_R$ and $\pi_N$ be the long-run proportions of rainy and no-rain days. We have
$$\pi_R = 0.6\,\pi_R + 0.4\,\pi_N, \qquad \pi_R + \pi_N = 1,$$
which gives $\pi_R = \pi_N = 0.5$.
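As a numerical check, the stationary equations can also be solved with a few lines of NumPy; the following is a minimal sketch, assuming only the 60%/40% transition probabilities stated above.

```python
import numpy as np

# Transition matrix of the rain/no-rain chain (rows sum to 1); state 0 = rain, 1 = no rain.
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

# Solve pi = pi P together with sum(pi) = 1, written as the overdetermined
# linear system [P^T - I; 1 1] pi = [0, 0, 1].
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # approximately [0.5, 0.5]
```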

Example II: A Model of Class Mobility
A problem of interest to sociologists is to determine the proportions of a society that hold upper-, middle-, and lower-class occupations. Let us consider the transitions between social classes of successive generations in a family. Assume that the occupation of a child depends only on his or her parent's occupation.

Example II: A Model of Class Mobility (Continued)
The transition matrix of this social mobility model specifies, for each parental class, the probabilities of the child's class. For instance, the child of a middle-class worker will attain an upper-class occupation with prob. 5%, will remain middle class with prob. 70%, and will move down to a lower-class occupation with prob. 25%.

Example II: A Model of Class Mobility (Continued)
The long-run proportions $\pi_u, \pi_m, \pi_l$ of the three classes thus satisfy
$$\pi_j = \sum_i \pi_i P_{ij}, \qquad \pi_u + \pi_m + \pi_l = 1.$$
Hence, solving this linear system yields the long-run proportions of upper-, middle-, and lower-class occupations in the society.

Stationary Distribution of MC
For a Markov chain, any set $\{\pi_j\}$ satisfying $\pi_j \ge 0$, $\sum_j \pi_j = 1$, and $\pi_j = \sum_i \pi_i P_{ij}$ is called a stationary probability distribution of the Markov chain.
Theorem: If the Markov chain starts with a stationary initial distribution, i.e., $P(X_0 = j) = \pi_j$ for all $j$, then $P(X_n = j) = \pi_j$ for all states $j$ and all $n$.

Limiting Probabilities
Reconsider Example I (rainy days in Shenzhen): calculate the probabilities that it will rain in 10 days, in 100 days, and in 1,000 days, given that it does not rain today.

Limiting Probabilities (Continued)
This transforms into the following problem: given the transition matrix $P$ of Example I, what are the $n$-step transition probabilities $(P^{10})_{NR}$, $(P^{100})_{NR}$, and $(P^{1000})_{NR}$ from state N (no rain) to state R (rain)?

Limiting Probabilities (Continued)
With the help of MATLAB (raising $P$ to the 10th, 100th, and 1,000th powers), we find that all three probabilities are essentially equal to 0.5: the rows of $P^n$ converge very quickly to $(0.5, 0.5)$.
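The same matrix powers can be reproduced in a few lines; here is a minimal NumPy sketch, assuming the 0.6/0.4 chain of Example I.

```python
import numpy as np

# State 0 = rain, state 1 = no rain.
P = np.array([[0.6, 0.4],
              [0.4, 0.6]])

for n in (10, 100, 1000):
    Pn = np.linalg.matrix_power(P, n)
    # Probability it rains n days from now, given no rain today.
    print(n, Pn[1, 0])
```

For each $n$ the printed value is 0.5 up to numerical precision, illustrating the convergence.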

Limiting Probabilities (Continued)
Observations:
– As $n \to \infty$, $(P^n)_{ij}$ converges;
– The limit of $(P^n)_{ij}$ does not depend on the initial state $i$;
– The limits coincide with the stationary distribution of the Markov chain.

Limiting Probabilities and Stationary Distribution
Theorem: In a positive recurrent aperiodic Markov chain, we have
$$\lim_{n \to \infty} (P^n)_{ij} = \pi_j \quad \text{for all } i, j,$$
where $\{\pi_j\}$ is the stationary distribution of the chain. In words, a positive recurrent aperiodic Markov chain reaches an equilibrium, or steady state, after long-term transition.

Periodic vs. Aperiodic
The requirement of aperiodicity in the last theorem turns out to be essential. We say that a Markov chain is periodic with period $d > 1$ if, starting from any state, it can only return to that state in a multiple of $d$ steps. Otherwise, we say that it is aperiodic. Example: a Markov chain with period 3, such as one that cycles deterministically through three states.

Summary
In a positive recurrent aperiodic Markov chain, the following three concepts are equivalent:
– Long-run proportions;
– Stationary probability distribution;
– Long-term limits of transition probabilities.

The Ergodic Theorem
The following result is quite useful.
Theorem: Let $\{X_n, n \ge 0\}$ be an irreducible Markov chain with stationary distribution $\{\pi_j\}$, and let $f$ be a bounded function on the state space. Then, with probability 1,
$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} f(X_n) = \sum_j \pi_j f(j).$$
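To see the theorem in action, one can simulate a long path of a chain and compare the time average of $f$ with the stationary average. The sketch below uses the rain/no-rain chain of Example I and a hypothetical reward function (100 on a rainy day, 0 otherwise); both the function and the path length are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.6, 0.4],
              [0.4, 0.6]])     # Example I chain: state 0 = rain, 1 = no rain
pi = np.array([0.5, 0.5])      # its stationary distribution
f = np.array([100.0, 0.0])     # hypothetical reward: 100 on a rainy day, 0 otherwise

# Simulate one long path and accumulate the reward.
N = 200_000
x, total = 0, 0.0
for _ in range(N):
    x = rng.choice(2, p=P[x])
    total += f[x]

print(total / N)   # time average of f along the path
print(pi @ f)      # stationary average sum_j pi_j f(j) = 50.0
```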

Example III: Bonus-Malus Automobile Insurance System
In most countries of Europe and Asia, automobile insurance premiums are determined by a Bonus-Malus (Latin for good-bad) system. Each policyholder is assigned a positive-integer-valued state, and the annual premium is a function of this state. Lower-numbered states correspond to fewer claims in the past and result in lower annual premiums.

Example III (Continued) A policyholder’s state changes from year to year in response to the number of claims that he/she made in the past year. Consider a hypothetical Bonus-Malus system having 4 states.

Example III (Continued)
According to historical data, suppose that the transition probabilities between the four states are given. Find the average annual premium paid by a typical policyholder.

Example III (Solution)
By the ergodic theorem, we first need to compute the corresponding stationary distribution $\{\pi_j\}$ of the four-state chain by solving $\pi_j = \sum_i \pi_i P_{ij}$ together with $\sum_j \pi_j = 1$. The average annual premium paid is then $\sum_j \pi_j \cdot (\text{premium in state } j)$.

1.6 MEAN TIME SPENT IN TRANSIENT STATES

Mean Time Spent in Transient States
Now let us turn to transient states. We know that the chain will leave transient states eventually. Thus, it is an interesting problem to study how long a Markov chain spends in such states. Below we use an example to illustrate the idea. Please refer to Ross Ch. 4.6 for a more theoretical discussion.

Example IV: Gambler's Ruin Problem
Consider the gambler's ruin problem. Suppose that the gambler starts with 3 dollars in his pocket and exits the game when he either loses all his money or reaches $7. Assume that the gambler wins each bet with probability 40%. What is the expected amount of time during which the gambler has exactly 2 dollars?

Example IV: Gambler's Ruin Problem (Continued)
This is a Markov chain on the states $\{0, 1, \ldots, 7\}$; states $1, \ldots, 6$ are transient. Let $s_{i,j}$ denote the expected number of time periods the gambler has $j$ dollars, given that he starts with $i$ dollars. Then, conditioning on the initial transition, we obtain a system of linear equations for the $s_{i,j}$.

Example IV: Gambler's Ruin Problem (Continued)
In general, for transient states $1 \le i, j \le 6$,
$$s_{i,j} = \delta_{i,j} + 0.4\, s_{i+1,j} + 0.6\, s_{i-1,j},$$
where $\delta_{i,j} = 1$ if $i = j$ and 0 otherwise, and $s_{0,j} = s_{7,j} = 0$ since the chain never returns to the transient states from the absorbing states 0 and 7.

Example IV: Gambler's Ruin Problem (Continued)
Putting the above equations in matrix form,
$$S = I + P_T S, \qquad \text{so that} \qquad S = (I - P_T)^{-1},$$
where $S = (s_{i,j})_{1 \le i,j \le 6}$, $I$ is the $6 \times 6$ identity matrix, and $P_T$ is the transition matrix restricted to the transient states: $(P_T)_{i,i+1} = 0.4$, $(P_T)_{i,i-1} = 0.6$, and all other entries are 0.

Example IV: Gambler's Ruin Problem (Solution)
We can invert the matrix $I - P_T$ to solve for $S$. In particular, the entry $s_{3,2}$ gives the expected number of time periods during which the gambler has 2 dollars, given that he starts with 3.
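A minimal NumPy sketch of this inversion, assuming the $p = 0.4$ gambler's ruin chain above with transient states 1 through 6:

```python
import numpy as np

p, N = 0.4, 7

# Transition matrix restricted to the transient states 1, ..., 6.
PT = np.zeros((N - 1, N - 1))
for i in range(1, N):
    if i + 1 < N:
        PT[i - 1, i] = p          # win a dollar: i -> i + 1
    if i - 1 > 0:
        PT[i - 1, i - 2] = 1 - p  # lose a dollar: i -> i - 1

S = np.linalg.inv(np.eye(N - 1) - PT)
print(S[3 - 1, 2 - 1])   # s_{3,2}: expected periods with 2 dollars, starting from 3
```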

1.7 BRANCHING PROCESSES

Branching Processes
In this section, we consider a special class of Markov chains known as branching processes. They have a wide variety of applications in the biological, sociological, and engineering sciences.

Origin of the Problem
In the 19th century, the British scientists Francis Galton and Henry William Watson observed that many aristocratic surnames in Britain were becoming extinct. They attributed this phenomenon to the patrilineal rule of inheritance (surnames pass from father to son).

Origin of the Problem
[Figure: an extinct family tree over generations 0, 1, and 2, in which the surname is carried only by male descendants ("Chen XX (Male)" vs. "Chen XX (Female)").]

Mathematical Formulation
Suppose that each individual will, by the end of his/her lifetime, have produced $j$ new offspring with probability $P_j$, $j \ge 0$, independently of the numbers produced by other individuals. Suppose that $P_j < 1$ for all $j$. Let $X_n$ be the size of the $n$th generation. It is easy to verify that $\{X_n, n \ge 0\}$ is a Markov chain whose state space is the set of nonnegative integers. (Exercise: write down the transition probabilities.)

Some Simple Facts about Branching Processes
0 is a recurrent state, since $P_{00} = 1$: once the population dies out, it stays extinct. If $P_0 > 0$, all the other states are transient. This leads to an important conclusion: if $P_0 > 0$, then the population will either become extinct or explode to infinity.

Population Expectation
Assume that $X_0 = 1$, that is, initially there is a single individual present. We calculate $E[X_n]$. We may write
$$X_n = \sum_{i=1}^{X_{n-1}} Z_i,$$
where $Z_i$ represents the number of offspring of the $i$th individual of the $(n-1)$st generation.

Population Expectation (Continued)
Let $\mu = \sum_{j=0}^{\infty} j P_j$ be the average number of offspring of each individual. We have
$$E[X_n] = E\big[E[X_n \mid X_{n-1}]\big] = \mu\, E[X_{n-1}],$$
and therefore $E[X_n] = \mu^n$.
– When $\mu < 1$, the expected number of individuals will converge to 0 as $n \to \infty$.
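The relation $E[X_n] = \mu^n$ is easy to check by simulation; the sketch below uses a hypothetical offspring distribution ($P_0 = 0.5$, $P_1 = 0.25$, $P_2 = 0.25$, so $\mu = 0.75$) chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical offspring distribution: P0 = 0.5, P1 = 0.25, P2 = 0.25, so mu = 0.75.
probs = np.array([0.5, 0.25, 0.25])
mu = float(np.arange(3) @ probs)

def generation_size(n):
    """Size of the nth generation of one simulated family tree, starting from X0 = 1."""
    size = 1
    for _ in range(n):
        # Each current individual independently produces 0, 1, or 2 offspring.
        size = int(rng.choice(3, size=size, p=probs).sum()) if size > 0 else 0
    return size

n, trials = 3, 20_000
avg = np.mean([generation_size(n) for _ in range(trials)])
print(avg, mu ** n)   # simulated E[X_3] vs. mu^3 = 0.421875
```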

Extinction Probability
Denote by $\pi_0$ the probability that the family ultimately dies out (under the assumption $X_0 = 1$). An equation determining $\pi_0$ may be derived by conditioning on the number of offspring of the initial individual:
$$\pi_0 = \sum_{j=0}^{\infty} \pi_0^{\,j}\, P_j.$$

Extinction Probability (Continued)
Define the function $\phi(s) = \sum_{j=0}^{\infty} s^j P_j$ for $s \in [0, 1]$, the probability generating function of the offspring distribution. It satisfies:
– $\phi(1) = 1$ and $\phi(0) = P_0$;
– $\phi$ is increasing and convex on $[0, 1]$, with $\phi'(1) = \mu$.

Extinction Probability (Continued)
The problem is thus transformed into finding a solution of the equation $\phi(s) = s$ on $[0, 1]$. Two scenarios:
– When $\mu \le 1$, the above equation has only one solution in $[0, 1]$, namely $s = 1$. That is, the family will die out with probability 1.

Extinction Probability (Continued)
– When $\mu > 1$, the above equation has two solutions in $[0, 1]$: one is still 1 and the other is strictly less than 1. We can show that the extinction probability $\pi_0$ equals the smaller solution.
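Numerically, the smaller root can be found by iterating $s \leftarrow \phi(s)$ starting from $s = 0$ (the iterates are the probabilities of extinction by generation $n$, which increase to $\pi_0$). The sketch below uses a hypothetical supercritical offspring distribution $P_0 = 0.25$, $P_1 = 0.25$, $P_2 = 0.5$, for which $\mu = 1.25$ and the extinction probability is $0.5$.

```python
import numpy as np

# Hypothetical offspring distribution: P0 = 0.25, P1 = 0.25, P2 = 0.5 (mu = 1.25 > 1).
probs = np.array([0.25, 0.25, 0.5])

def phi(s):
    """Probability generating function of the offspring distribution."""
    return sum(p * s**j for j, p in enumerate(probs))

# Fixed-point iteration s <- phi(s) from s = 0 converges to the smallest
# nonnegative root of phi(s) = s, i.e., the extinction probability.
s = 0.0
for _ in range(200):
    s = phi(s)
print(s)   # approximately 0.5 for this distribution
```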

Example V: Branching Processes
Suppose that each individual in a family produces offspring according to a given distribution $\{P_j\}$. What is the extinction probability of this family?

Homework Assignments
Read Ross Chapters 4.4, 4.6, and 4.7. Supplementary reading: Ross Chapter
Answer questions:
– Exercises 18, 20 (page 263, Ross)
– Exercises 23, 24 (page 264, Ross)
– Exercise 37 (page 266, Ross)
– (Optional, extra bonus) Exercise 70 (page 272, Ross)