Meaning of Markov Chain

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.

Every state of a Markov chain is one of two types: transient or recurrent. A state is recurrent if, starting from that state, the chain returns to it with probability 1. A state is transient if, starting from that state, the chain returns to it with probability strictly less than 1; in other words, with positive probability the chain never returns to that state. (The original slide illustrated this with a diagram contrasting a recurrent state and a transient state.)
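To make the distinction concrete, here is a minimal simulation sketch in Python/NumPy (the two-state toy chain and the helper name estimate_return_prob are illustrative assumptions, not from the slides): it estimates the probability of ever returning to the starting state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain: from state 0 the walk returns immediately (prob 0.5) or
# falls into the absorbing state 1 (prob 0.5) and never comes back,
# so state 0 is transient; state 1 is recurrent (absorbing).
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])

def estimate_return_prob(P, state, n_runs=10_000, max_steps=100):
    """Monte Carlo estimate of the probability that the chain,
    started in `state`, ever returns to `state`."""
    returns = 0
    for _ in range(n_runs):
        s = state
        for _ in range(max_steps):
            s = rng.choice(len(P), p=P[s])
            if s == state:
                returns += 1
                break
    return returns / n_runs

print(estimate_return_prob(P, 0))  # roughly 0.5 (< 1): transient
print(estimate_return_prob(P, 1))  # 1.0: recurrent
```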

Consider now a finite-state Markov chain and suppose that the states are numbered so that T = {1, 2, ..., t} denotes the set of transient states. Let

$$P_T = \begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1t} \\ \vdots & \vdots & & \vdots \\ P_{t1} & P_{t2} & \cdots & P_{tt} \end{pmatrix}$$

and note that since $P_T$ specifies only the transition probabilities from transient states into transient states, some of its row sums are less than 1 (otherwise, T would be a closed class of states).

For transient states i and j, let $s_{ij}$ denote the expected number of time periods that the Markov chain is in state j, given that it starts in state i. Let $\delta_{i,j} = 1$ when i = j and let it be 0 otherwise. Condition on the initial transition to obtain

$$s_{ij} = \delta_{i,j} + \sum_{k} P_{ik}\, s_{kj} = \delta_{i,j} + \sum_{k=1}^{t} P_{ik}\, s_{kj}$$

where the final equality follows since it is impossible to go from a recurrent state to a transient state, implying that $s_{kj} = 0$ when k is a recurrent state. Let S denote the matrix of values $s_{ij}$, i, j = 1, ..., t. That is,

$$S = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1t} \\ \vdots & \vdots & & \vdots \\ s_{t1} & s_{t2} & \cdots & s_{tt} \end{pmatrix}$$

Since $s_{kj} = 0$ for recurrent k, the equation above can be written in matrix form as

$$S = I + P_T S$$

where I is the identity matrix of size t, $P_T$ is the matrix of transition probabilities among the transient states, and S is the matrix of expected numbers of periods in each transient state. Rearranging,

$$S - P_T S = I \;\Longrightarrow\; (I - P_T)\, S = I \;\Longrightarrow\; S = (I - P_T)^{-1}$$
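In matrix-library terms the whole computation is a single inversion. A minimal Python/NumPy sketch (the function name expected_visits is an assumption of this writeup, not from the slides):

```python
import numpy as np

def expected_visits(PT):
    """Given the transient-to-transient transition matrix P_T,
    return S = (I - P_T)^{-1}; entry (i, j) is the expected number
    of periods spent in transient state j when the chain starts in
    transient state i (0-based indexing)."""
    PT = np.asarray(PT, dtype=float)
    return np.linalg.inv(np.eye(len(PT)) - PT)
```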

Example: Consider the gambler’s ruin problem with p = 0.4 and N = 7. Starting with 3 units, determine (a) the expected amount of time the gambler has 5 units, and (b) the expected amount of time the gambler has 2 units.

Solution: The transient states are {1, 2, ..., 6}; the gambler wins each bet with probability 0.4 and loses with probability 0.6, so $P_{i,i+1} = 0.4$ and $P_{i,i-1} = 0.6$. The matrix $P_T$, which specifies $P_{ij}$ for i, j ∈ {1, 2, 3, 4, 5, 6}, is therefore

$$P_T = \begin{pmatrix} 0 & 0.4 & 0 & 0 & 0 & 0 \\ 0.6 & 0 & 0.4 & 0 & 0 & 0 \\ 0 & 0.6 & 0 & 0.4 & 0 & 0 \\ 0 & 0 & 0.6 & 0 & 0.4 & 0 \\ 0 & 0 & 0 & 0.6 & 0 & 0.4 \\ 0 & 0 & 0 & 0 & 0.6 & 0 \end{pmatrix}$$

Applying the above equation, we form $I - P_T$, where I is the 6 × 6 identity matrix.

Inverting $I - P_T$ (this computation was done using MATLAB) gives

$$S = (I - P_T)^{-1} = \begin{pmatrix} 1.6149 & 1.0248 & 0.6314 & 0.3691 & 0.1943 & 0.0777 \\ 1.5372 & 2.5619 & 1.5784 & 0.9228 & 0.4857 & 0.1943 \\ 1.4206 & 2.3677 & 2.9990 & 1.7533 & 0.9228 & 0.3691 \\ 1.2458 & 2.0763 & 2.6299 & 2.9990 & 1.5784 & 0.6314 \\ 0.9835 & 1.6391 & 2.0763 & 2.3677 & 2.5619 & 1.0248 \\ 0.5901 & 0.9835 & 1.2458 & 1.4206 & 1.5372 & 1.6149 \end{pmatrix}$$

Hence $s_{3,5} = 0.9228$ and $s_{3,2} = 2.3677$: starting with 3 units, the gambler expects to hold 5 units for about 0.92 periods and 2 units for about 2.37 periods.
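These numbers can be reproduced in a few lines; the original computation used MATLAB, and the Python/NumPy sketch below is an illustrative equivalent (variable names are mine):

```python
import numpy as np

# Gambler's ruin with p = 0.4, N = 7: transient states are fortunes 1..6.
p, q, t = 0.4, 0.6, 6

# P_T: win one unit with prob p (superdiagonal), lose one with prob q.
PT = np.diag([p] * (t - 1), k=1) + np.diag([q] * (t - 1), k=-1)

S = np.linalg.inv(np.eye(t) - PT)

# Fortune k corresponds to 0-based index k - 1.
print(round(S[2, 4], 4))  # s_{3,5} = 0.9228
print(round(S[2, 1], 4))  # s_{3,2} = 2.3677
```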

For i ∈ T, j ∈ T, the quantity $f_{ij}$, equal to the probability that the Markov chain ever makes a transition into state j given that it starts in state i, is easily determined from $P_T$. To determine the relationship, let us start by deriving an expression for $s_{ij}$ by conditioning on whether state j is ever entered. This yields

$$s_{ij} = E[\text{time in } j \mid \text{start in } i,\ \text{ever transit to } j]\, f_{ij} + E[\text{time in } j \mid \text{start in } i,\ \text{never transit to } j]\, (1 - f_{ij})$$
$$= (\delta_{i,j} + s_{jj})\, f_{ij} + \delta_{i,j}\, (1 - f_{ij}) = \delta_{i,j} + f_{ij}\, s_{jj}$$

since $s_{jj}$ is the expected number of additional time periods spent in state j given that it is eventually entered from state i. Solving the preceding equation yields

$$f_{ij} = \frac{s_{ij} - \delta_{i,j}}{s_{jj}}$$

Example: In the gambler’s ruin problem above, what is the probability that the gambler ever has a fortune of 1? Solution: Here $\delta_{3,1} = 0$, and from the matrix S above, $s_{3,1} = 1.4206$ and $s_{1,1} = 1.6149$, so

$$f_{3,1} = \frac{s_{3,1}}{s_{1,1}} = \frac{1.4206}{1.6149} = 0.8797$$
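Continuing the same sketch (again Python/NumPy standing in for the original MATLAB computation), the hitting probability follows directly from S:

```python
import numpy as np

# Rebuild S for the gambler's ruin example (p = 0.4, N = 7).
p, q, t = 0.4, 0.6, 6
PT = np.diag([p] * (t - 1), k=1) + np.diag([q] * (t - 1), k=-1)
S = np.linalg.inv(np.eye(t) - PT)

# f_{ij} = (s_{ij} - delta_{ij}) / s_{jj}; for i = 3, j = 1, delta is 0.
f_31 = (S[2, 0] - 0.0) / S[0, 0]
print(round(f_31, 4))  # 0.8797
```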