Random Processes / Markov Processes


Random Processes / Markov Processes Pool example: The home I moved into came with an above-ground pool that was green. I spent big $s and got the pool clear again. Then the pump started leaking; I turned off the pump and eventually the pool turned green again. After fixing the pump, I finally got the pool to turn blue again. I have made the following observations: If I observe the pool each morning, it is basically in one of three states: blue, blue/green, and green. If the pool is blue, the probability of it staying blue is about 80%; otherwise it turns blue/green. If the pool is blue/green, there is equal probability of remaining blue/green, turning blue, or turning green. If the pool is green, there is a 60% probability of remaining green; otherwise the pool turns blue/green.

Random Processes / Markov Processes If the pool is blue, the probability of it staying blue is about 80%; otherwise it turns blue/green. If the pool is blue/green, there is equal probability of remaining blue/green, turning blue, or turning green. If the pool is green, there is a 60% probability of remaining green; otherwise the pool turns blue/green.

[State diagram: states G, B/G, and B, with arcs labeled by the transition probabilities above]

Random Processes / Markov Processes Probability Transition Matrix (P) – the probability of transitioning from some current state to some next state in one step.

             G    B/G     B
       G   .60    .40   0.0
P =  B/G   .33    .33   .33   (each entry is 1/3)
       B   0.0    .20   .80

P is referred to as the probability transition matrix.
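A minimal sketch of this matrix in code (assuming NumPy is available; the state order and names are taken from the slide):

import numpy as np

# States in order: G (green), B/G (blue/green), B (blue)
states = ["G", "B/G", "B"]

# One-step transition matrix from the slide; 1/3 is used instead of
# the rounded .33 so that each row sums to exactly 1.
P = np.array([
    [0.60, 0.40, 0.00],   # from G
    [1/3,  1/3,  1/3 ],   # from B/G
    [0.00, 0.20, 0.80],   # from B
])

# Sanity check: every row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)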

Random Processes / Markov Processes What is a Markov Process? A stochastic (probabilistic) process that has the Markovian property. A process has the Markovian property if

P{Xt+1 = j | X0 = k0, X1 = k1, …, Xt-1 = kt-1, Xt = i} = P{Xt+1 = j | Xt = i}

for t = 0, 1, … and every sequence i, j, k0, k1, …, kt-1. In other words, any future state depends only on the current state, not on the path taken to reach it.

Markov Processes cont. This conditional probability, P{Xt+1 = j | Xt = i}, is called the one-step transition probability. And if

P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i}  for all t = 1, 2, …

then the one-step transition probability is said to be stationary and is therefore referred to as the stationary transition probability.

Markov Processes cont. Let pij = P{Xt+1 = j | Xt = i}. Then, for a chain with states 0, 1, 2, 3:

           0     1     2     3
      0   p00   p01   p02   p03
P =   1   p10   p11   p12   p13
      2   p20   p21   p22   p23
      3   p30   p31   p32   p33

P is referred to as the probability transition matrix.

Markov Processes cont. Suppose the probability you win a game depends on whether you won the last time you played. Say, if you won last time, there is a 70% chance of winning the next time; if you lost last time, there is a 60% chance you lose the next time. Can the process of winning and losing be modeled as a Markov process? Yes. Let state 0 be "you win" and state 1 be "you lose"; then (see the sketch below):

          0     1
P =   0  .70   .30
      1  .40   .60
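As an aside (not from the slides), this chain is small enough to simulate directly; a minimal sketch assuming NumPy, with the function name simulate being my own:

import numpy as np

# State 0 = win, state 1 = lose (as defined in the slide).
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

def simulate(P, start, steps, rng=np.random.default_rng(0)):
    """Simulate a Markov chain: at each step, draw the next state
    from the row of P corresponding to the current state."""
    state = start
    path = [state]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))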

Markov Processes cont. See handout on the n-step transition matrix. (The key fact: the matrix of n-step transition probabilities satisfies P(n) = P(n-1)P, so P(n) = P^n, the nth matrix power of P.)
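Since the handout is not reproduced here, a minimal sketch of the computation (assuming NumPy; the win/lose matrix from the previous slide is reused):

import numpy as np

P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

# n-step transition probabilities: P(n) = P^n.
for n in (1, 2, 4, 8, 16):
    Pn = np.linalg.matrix_power(P, n)
    print(n, Pn)
# As n grows, every row converges to the same vector --
# the steady-state probabilities discussed on the next slide.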

Markov Processes cont. As n grows large, every row of Pn approaches the same vector:

            0    1    2   …    N
       0   p0   p1   p2   …   pN
Pn =   1   p0   p1   p2   …   pN
       …
       N   p0   p1   p2   …   pN

Then π = [p0, p1, p2, …, pN] are the steady-state probabilities. (π is used for this vector to avoid confusion with the transition matrix P.)

Markov Processes cont. Observing that P(n) = P(n-1)P, as n → ∞ this becomes π = πP:

[p0, p1, p2, …, pN] = [p0, p1, p2, …, pN]   p00  p01  p02  …  p0N
                                            p10  p11  p12  …  p1N
                                            p20  p21  p22  …  p2N
                                            …
                                            pN0  pN1  pN2  …  pNN

This matrix equation yields N+1 equations in the N+1 unknowns p0, …, pN, but only N of them are independent (the rank of the system is N, since each row of P sums to 1). However, note that p0 + p1 + p2 + … + pN = 1. Adding this normalization equation restores N+1 independent equations in N+1 unknowns.
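A minimal sketch of solving these equations numerically (assuming NumPy; replacing one redundant row of the singular system with the normalization equation is a standard trick, not something shown on the slides):

import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1.
    pi P = pi is equivalent to (P^T - I) pi^T = 0; we replace the
    last (redundant) equation with the normalization sum(pi) = 1."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0            # normalization row
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.70, 0.30],
              [0.40, 0.60]])
print(steady_state(P))   # -> [0.5714..., 0.4285...] = [4/7, 3/7]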

Markov Processes cont. Show example of obtaining π = πP from the transition matrix:

          0     1
P =   0  .70   .30
      1  .40   .60
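Filling in the worked example the slide calls for (my own arithmetic): writing π = [π0, π1], the equation π = πP gives

π0 = .70 π0 + .40 π1
π1 = .30 π0 + .60 π1

The two equations are equivalent (both reduce to .30 π0 = .40 π1), so combining one of them with the normalization π0 + π1 = 1 gives π0 = 4/7 ≈ .571 and π1 = 3/7 ≈ .429: in the long run, you win about 57% of the games.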

Markov Processes cont. Break for Exercise

Markov Processes cont. State diagrams:

          0     1
P =   0  .70   .30
      1  .40   .60

[State diagram: nodes 0 and 1, with arcs labeled by the transition probabilities above]

Markov Processes cont. State diagrams:

           0     1     2     3
      0   .5    .5    0     0
P =   1   .5    .5    0     0
      2   .25   .25   .25   .25
      3   0     0     0     1

[State diagram: nodes 0, 1, 2, and 3, with arcs labeled by the transition probabilities above]
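One observation about this last chain (my own note, not on the slide): state 3 is absorbing, since p33 = 1. Raising P to a large power shows the long-run behavior from each starting state:

import numpy as np

P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.50, 0.00, 0.00],
              [0.25, 0.25, 0.25, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

print(np.linalg.matrix_power(P, 100).round(3))
# Rows 0 and 1 stay in {0, 1} forever; a chain started in state 2
# eventually either enters {0, 1} or is absorbed in state 3.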