Stochastic Models
Lecture 1: Markov Chains
Nan Chen
MSc Program in Financial Engineering
The Chinese University of Hong Kong (Shenzhen)
Sept. 9, 2015

Outline
1. Introduction
2. Chapman-Kolmogorov Equations
3. Classification of States
4. The Gambler's Ruin Problem

1.1 Introduction

What is a Stochastic Process?
A stochastic process is a collection of random variables indexed by time. Usually, we denote it by
– $\{X(t), t \in T\}$ (continuous time), or
– $\{X_n, n = 0, 1, 2, \dots\}$ (discrete time).
Examples:
– Daily average temperature on the CUHK-SZ campus
– Real-time stock price of Google

Motivation for Markov Chains
Stochastic processes are widely used to characterize the temporal relationship between random variables. The simplest model would be one in which $X_0, X_1, X_2, \dots$ are independent of each other. But such a model may not provide a reasonable approximation to financial markets.

What is a Markov Chain?
Let $\{X_n, n = 0, 1, 2, \dots\}$ be a stochastic process that takes on a finite or countable number of possible states. We call it a Markov chain if the conditional distribution of $X_{n+1}$ depends on the past observations $X_0, X_1, \dots, X_n$ only through $X_n$. Namely, for all states $i_0, \dots, i_{n-1}, i, j$ and all $n \ge 0$,
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i).$$

The Markovian Property
It can be shown that the definition of Markov chains is equivalent to the following statement: for any event $A$ determined by the future $\{X_{n+1}, X_{n+2}, \dots\}$ and any event $B$ determined by the history $\{X_0, \dots, X_{n-1}\}$,
$$P(A \cap B \mid X_n = i) = P(A \mid X_n = i)\,P(B \mid X_n = i).$$
In words, given the current state of the process, its future and historical movements are independent.

Financial Relevance: Efficient Market Hypothesis
The Markovian property turns out to be highly relevant to financial modeling in light of one of the most profound theories in the history of modern finance: the efficient market hypothesis (EMH). It states:
– Market information, such as the information reflected in past price records or published in the financial press, must be absorbed and reflected quickly in the stock price.

More about EMH: a Thought Experiment
Let us start with the following thought experiment: assume that Prof. Chen had invented a formula which we could use to predict the movements of Google's stock price very accurately. What would happen if this formula were unveiled to the public?

More about EMH: a Thought Experiment
Suppose that it predicted that Google's stock price would rise dramatically in three days to US$700 from US$650.
– The prediction would induce a great wave of immediate buy orders.
– Huge demand for Google's stock would push its price to $700 immediately.
– The formula fails!
A true story: Edward Thorp and the Black-Scholes formula.

Implication of the Efficient Market Hypothesis
One implication of the EMH is that, given the current stock price, knowing its history helps very little in predicting its future. Therefore, we should use Markov processes to model the dynamics of financial variables.

Transition Matrix
In this lecture, we only consider time-homogeneous Markov chains; that is, the transition probabilities are independent of time $n$. Denote
$$P_{ij} = P(X_{n+1} = j \mid X_n = i).$$
We can then use the following matrix to characterize the process:
$$P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Transition Matrix
The transition matrix of a Markov chain must be a stochastic matrix:
– $P_{ij} \ge 0$ for all $i, j$;
– $\sum_{j} P_{ij} = 1$ for all $i$.

Example I: Forecasting the Weather
Suppose that the chance of rain tomorrow in Shenzhen depends on previous weather conditions only through whether or not it is raining today. Assume that if it rains today, then it will rain tomorrow with probability 70%; and if it does not rain today, then it will rain tomorrow with probability 50%. How can we use a Markov chain to model this?
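To make the model concrete, here is a minimal sketch in Python (my own illustration, not from the slides), encoding the two states as 0 = rain and 1 = no rain:

```python
import numpy as np

# States: 0 = rain, 1 = no rain.
# Row i is the conditional distribution of tomorrow's weather
# given that today's weather is state i.
P = np.array([
    [0.7, 0.3],   # raining today: rain tomorrow w.p. 0.7
    [0.5, 0.5],   # dry today:     rain tomorrow w.p. 0.5
])

# Sanity check: a stochastic matrix has nonnegative entries
# and each row sums to one.
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)
```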

Example II: One-Dimensional Random Walk
A Markov chain whose state space is given by the integers $i = 0, \pm 1, \pm 2, \dots$ is said to be a random walk if, for some number $0 < p < 1$,
$$P_{i,i+1} = p = 1 - P_{i,i-1}, \quad i = 0, \pm 1, \pm 2, \dots$$
We say the random walk is symmetric if $p = 1/2$, and asymmetric if $p \ne 1/2$.

1.2 Chapman-Kolmogorov Equations

The Chapman-Kolmogorov Equations
The CK equations provide a method for computing the $n$-step transition probabilities of a Markov chain:
$$P^{(n+m)}_{ij} = \sum_{k} P^{(n)}_{ik} P^{(m)}_{kj} \quad \text{for all } n, m \ge 0,$$
or, in matrix form, $P^{(n+m)} = P^{(n)} P^{(m)}$. In particular, the $n$-step transition matrix is the $n$th matrix power: $P^{(n)} = P^n$.

Example III: Rain Probability Reconsider the situation in Example I. Given that it is raining today, what is the probability that it will rain four days from today?
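A sketch of the computation (assuming the weather matrix defined in the Example I sketch above; the Chapman-Kolmogorov equations reduce the question to reading off one entry of $P^4$):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.5, 0.5]])

# By the Chapman-Kolmogorov equations, the 4-step transition
# matrix is the 4th matrix power of P.
P4 = np.linalg.matrix_power(P, 4)

# Entry (0, 0): P(rain in four days | rain today).
print(P4[0, 0])   # approx. 0.6256
```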

Example IV: Urn and Balls
An urn always contains 2 balls, each of which is red or blue. At each stage a ball is randomly chosen and then replaced by a new ball, which with probability 80% is the same color and with probability 20% is the opposite color. If initially both balls are red, find the probability that the fifth ball selected is red.
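A possible sketch (my own reading of the example: the state is the number of red balls in the urn, and the fifth selection happens after four replacement steps):

```python
import numpy as np

# State = number of red balls currently in the urn (0, 1, or 2).
P = np.array([
    [0.8, 0.2, 0.0],   # 0 red: replacement flips a blue to red w.p. 0.2
    [0.1, 0.8, 0.1],   # 1 red: lose/gain a red, each w.p. 0.5 * 0.2
    [0.0, 0.2, 0.8],   # 2 red: replacement flips a red to blue w.p. 0.2
])

# Initially both balls are red (state 2); propagate 4 steps.
dist = np.array([0.0, 0.0, 1.0]) @ np.linalg.matrix_power(P, 4)

# P(5th ball selected is red) = E[(#red)/2] at the 5th draw.
print(dist @ np.array([0.0, 0.5, 1.0]))   # approx. 0.7048
```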

1.3 State Classification

Asymptotic Behavior of Markov Chains
It is frequently of interest to find the asymptotic behavior of $P^{(n)}_{ij}$ as $n \to \infty$. One may expect that the influence of the initial state recedes in time and that, consequently, as $n \to \infty$, $P^{(n)}_{ij}$ approaches a limit which is independent of $i$. In order to analyze this issue precisely, we need to introduce some principles for classifying the states of a Markov chain.

Example V
Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having transition probability matrix
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Intuitively, which is the most improbable state after 1,000 steps?
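The intuition can be checked numerically (a sketch of my own, using the matrix as reconstructed above):

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0,  0.0],
              [0.5,  0.5,  0.0,  0.0],
              [0.25, 0.25, 0.25, 0.25],
              [0.0,  0.0,  0.0,  1.0]])

# Row i of P^1000 is the distribution after 1,000 steps starting from i.
print(np.linalg.matrix_power(P, 1000).round(4))
# Column 2 is (numerically) zero: state 2 is the most improbable,
# since once the chain leaves it, it can never come back.
```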

Accessibility and Communication
State $j$ is said to be accessible from state $i$ if $P^{(n)}_{ij} > 0$ for some $n \ge 0$.
– In the previous slide, state 3 is accessible from state 2.
– But state 2 is not accessible from state 3.
Two states $i$ and $j$ are said to communicate if they are accessible from each other. We write $i \leftrightarrow j$.
– States 0 and 1 communicate in the previous example.

Simple Properties of Communication
The relation of communication satisfies the following three properties:
– State $i$ communicates with itself;
– If state $i$ communicates with state $j$, then state $j$ communicates with state $i$;
– If state $i$ communicates with state $j$, and state $j$ communicates with state $k$, then state $i$ communicates with state $k$.

State Classes
Two states that communicate are said to be in the same class. It is an easy consequence of the three properties on the last slide that any two classes are either identical or disjoint. In other words, communication divides the state space into a number of separate classes. In the previous example, we have three classes: $\{0, 1\}$, $\{2\}$, and $\{3\}$.

Example VI: Irreducible Markov Chain
Consider the Markov chain consisting of the three states 0, 1, 2 and having transition probability matrix
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/4 & 1/4 \\ 0 & 1/3 & 2/3 \end{pmatrix}.$$
How many classes does it contain? A Markov chain is said to be irreducible if there is only one class.
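One way to answer such questions mechanically is to compute the communicating classes from reachability; the sketch below (my own, using the matrix reconstructed above) groups states $i$ and $j$ together exactly when each is accessible from the other:

```python
import numpy as np

def communicating_classes(P):
    """Group the states of a finite chain into communicating classes."""
    n = len(P)
    # reach[i, j] = True iff j is accessible from i in zero or more steps.
    reach = (np.asarray(P) > 0) | np.eye(n, dtype=bool)
    for k in range(n):                  # Warshall-style transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes = []
    for i in range(n):
        cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
        if cls not in classes:
            classes.append(cls)
    return classes

P = [[1/2, 1/2, 0],
     [1/2, 1/4, 1/4],
     [0, 1/3, 2/3]]
print(communicating_classes(P))   # a single class -> the chain is irreducible
```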

Recurrence and Transience
Consider an arbitrary state $i$ of a generic Markov chain. Define
$$f^{(n)}_i = P(X_n = i, \; X_k \ne i \text{ for } k = 1, \dots, n-1 \mid X_0 = i).$$
In other words, $f^{(n)}_i$ represents the probability that, starting from state $i$, the first return to state $i$ occurs at the $n$th step. Let
$$f_i = \sum_{n=1}^{\infty} f^{(n)}_i.$$

Recurrence and Transience (Continued)
We say a state $i$ is recurrent if $f_i = 1$. That is to say, a state is recurrent if and only if, starting from this state, the probability of returning to it after some finite length of time is 100%. It is easy to argue that if a state is recurrent, then, starting from this state, the Markov chain will return to it again, and again, and again --- in fact, infinitely often.

Recurrence and Transience (Continued)
A non-recurrent state is said to be transient; i.e., a transient state satisfies $f_i < 1$. Starting from a transient state $i$,
– the process will never again revisit the state with probability $1 - f_i$;
– the process will revisit the state just once with probability $f_i (1 - f_i)$;
– the process will revisit the state just twice with probability $f_i^2 (1 - f_i)$;
– ……

Recurrence and Transience (Continued)
From the above two definitions, we can easily draw the following two conclusions:
– A transient state will only be visited a finite number of times.
– In a finite-state Markov chain, not all states can be transient.
In Example V, states 0, 1, and 3 are recurrent, and state 2 is transient.

One Commonly Used Criterion for Recurrence
Theorem: A state $i$ is recurrent if and only if
$$\sum_{n=1}^{\infty} P^{(n)}_{ii} = \infty.$$
You may refer to Example 4.18 in Ross for an application of this criterion: a proof that the one-dimensional symmetric random walk is recurrent.
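To see the criterion at work numerically, note that a one-dimensional random walk can return to its starting point only in an even number of steps, with $P^{(2n)}_{00} = \binom{2n}{n} p^n (1-p)^n$. The sketch below (my own illustration) shows the partial sums diverging for $p = 1/2$ and converging for $p = 0.6$:

```python
def partial_sums(p, checkpoints):
    """Partial sums of the return probabilities P_00^(2n) = C(2n, n) (pq)^n."""
    q, term, total, out = 1.0 - p, 1.0, 0.0, []
    for n in range(1, max(checkpoints) + 1):
        # Update C(2n, n) (pq)^n via the ratio C(2n, n)/C(2n-2, n-1) = 2(2n-1)/n;
        # this avoids computing huge binomial coefficients directly.
        term *= 2.0 * (2 * n - 1) / n * p * q
        total += term
        if n in checkpoints:
            out.append(round(total, 3))
    return out

for p in (0.5, 0.6):
    print(p, partial_sums(p, {10, 100, 10000}))
# p = 0.5: the partial sums grow without bound -> recurrent.
# p = 0.6: they approach 1/sqrt(1 - 4pq) - 1 = 4 -> transient.
```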

Recurrence as a Class Property
Theorem: If state $i$ is recurrent and state $j$ communicates with state $i$, then state $j$ is recurrent.
Two conclusions can be drawn from the theorem:
– Transience is also a class property.
– All states of a finite irreducible Markov chain are recurrent.

Example VII
Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix
$$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.$$
Determine which states are transient and which are recurrent.

Example VIII
Discuss the recurrence properties of a one-dimensional random walk.
Conclusion:
– The symmetric random walk is recurrent;
– The asymmetric random walk is not (it is transient).

1.4 The Gambler's Ruin Problem

The Gambler's Ruin Problem
Consider a gambler who at each play of the game has probability $p$ of winning one dollar and probability $q = 1 - p$ of losing one dollar. Assuming that successive plays of the game are independent, what is the probability that, starting with $i$ dollars, the gambler's fortune will reach $N$ dollars before he is ruined (i.e., before his fortune reaches 0)?

Markov Description of the Model
If we let $X_n$ denote the player's fortune at time $n$, then the process $\{X_n, n = 0, 1, 2, \dots\}$ is a Markov chain with transition probabilities
– $P_{00} = P_{NN} = 1$;
– $P_{i,i+1} = p = 1 - P_{i,i-1}$ for $i = 1, 2, \dots, N-1$.
The Markov chain has three classes: $\{0\}$, $\{1, 2, \dots, N-1\}$, and $\{N\}$.

Solution
Let $P_i$ be the probability that, starting with $i$ dollars, the gambler's fortune will eventually reach $N$. By conditioning on the outcome of the initial play,
$$P_i = p P_{i+1} + q P_{i-1}, \quad i = 1, 2, \dots, N-1,$$
and $P_0 = 0$, $P_N = 1$.
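Before deriving the closed form, note that this boundary-value recursion can also be solved numerically as a linear system; a sketch (my own, with $p = 0.6$ and $N = 10$ as illustrative values):

```python
import numpy as np

# Solve P_i = p*P_{i+1} + q*P_{i-1} for i = 1..N-1,
# with P_0 = 0 and P_N = 1, as a linear system A x = b.
p, N = 0.6, 10
q = 1 - p
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0                 # boundary condition: P_0 = 0
A[N, N] = 1.0; b[N] = 1.0     # boundary condition: P_N = 1
for i in range(1, N):
    A[i, i] = 1.0
    A[i, i + 1] = -p          # move the p*P_{i+1} term to the left side
    A[i, i - 1] = -q          # move the q*P_{i-1} term to the left side
P_win = np.linalg.solve(A, b)
print(P_win[5])               # approx. 0.8836, matching the closed form below
```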

Solution (Continued)
Since $p + q = 1$, the recursion can be rewritten as $P_{i+1} - P_i = (q/p)(P_i - P_{i-1})$. Hence, we obtain from the preceding slide that
– $P_2 - P_1 = (q/p)(P_1 - P_0) = (q/p) P_1$;
– $P_3 - P_2 = (q/p)(P_2 - P_1) = (q/p)^2 P_1$;
– ……
– $P_N - P_{N-1} = (q/p)(P_{N-1} - P_{N-2}) = (q/p)^{N-1} P_1$.

Solution (Continued)
Adding up the first $i - 1$ of these equalities, and then eliminating $P_1$ via the boundary condition $P_N = 1$, we obtain
$$P_i = \begin{cases} \dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & \text{if } p \ne 1/2, \\[2mm] \dfrac{i}{N}, & \text{if } p = 1/2. \end{cases}$$

Solution (Continued)
Note that, as $N \to \infty$,
$$P_i \to \begin{cases} 1 - (q/p)^i, & \text{if } p > 1/2, \\ 0, & \text{if } p \le 1/2. \end{cases}$$
Thus, if $p > 1/2$, there is a positive probability that the gambler's fortune will increase indefinitely; while if $p \le 1/2$, the gambler will, with probability 1, go broke against an infinitely rich adversary (say, a casino).
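As a quick numerical check (a sketch of my own, not from the slides), the closed-form solution can be compared with a Monte Carlo simulation of the game:

```python
import random

def win_prob(i, N, p):
    """P(fortune reaches N before 0 | start with i dollars), closed form."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                  # r = q/p
    return (1 - r**i) / (1 - r**N)

def simulate(i, N, p, trials=100_000):
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:             # play until ruin or reaching the target
            x += 1 if random.random() < p else -1
        wins += (x == N)
    return wins / trials

print(win_prob(5, 10, 0.6))          # approx. 0.8836
print(simulate(5, 10, 0.6))          # Monte Carlo estimate, close to the above
```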

Homework Assignments
Read Ross, Sections 4.1, 4.2, 4.3, and 4.5 (you may ignore 4.5.3).
Answer the following questions:
– Exercises 2, 3, 5, 6 (Page 261, Ross)
– Exercises 13, 14 (Page 262, Ross)
– Exercises 56, 57, 58 (Page 270, Ross)
– (Optional, extra bonus) Exercise 59 (Page 270, Ross)