ST3236: Stochastic Process Tutorial 3
TA: Mar Choong Hock
Exercises: 4

Question 1
A Markov chain X0, X1, … on states 0, 1, 2 has the transition probability matrix P and initial distribution p0 = P(X0 = 0) = 0.3, p1 = P(X0 = 1) = 0.4 and p2 = P(X0 = 2) = 0.3. Determine P(X0 = 0, X1 = 1, X2 = 2) and draw the state diagram with the transition probabilities.

Question 1
P(X0 = 0, X1 = 1, X2 = 2)
= P(X0 = 0) P(X1 = 1 | X0 = 0) P(X2 = 2 | X0 = 0, X1 = 1)
= P(X0 = 0) P(X1 = 1 | X0 = 0) P(X2 = 2 | X1 = 1)   (Markov property)
= p0 × p01 × p12 = 0.3 × 0.2 × 0 = 0
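This computation is easy to check numerically. The transition matrix itself did not survive in this transcript, so the sketch below uses a hypothetical matrix in which only the two entries the solution actually uses, p01 = 0.2 and p12 = 0, are taken from the slides; every other entry is an illustrative filler.

```python
import numpy as np

# Hypothetical matrix: only P[0, 1] = 0.2 and P[1, 2] = 0.0 are taken
# from the solution above; the remaining entries are fillers chosen so
# that each row sums to one.
P = np.array([[0.1, 0.2, 0.7],
              [0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1]])
p_init = np.array([0.3, 0.4, 0.3])   # (p0, p1, p2) from the question

def path_prob(p_init, P, path):
    """P(X0 = path[0], ..., Xn = path[n]) for a Markov chain."""
    prob = p_init[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

print(path_prob(p_init, P, [0, 1, 2]))   # 0.3 * 0.2 * 0.0 = 0.0
```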

Question 2
A Markov chain X0, X1, … on states 0, 1, 2 has the transition probability matrix P. Determine the conditional probabilities P(X2 = 1, X3 = 1 | X1 = 0) and P(X1 = 1, X2 = 1 | X0 = 0).

Question 2
P(X2 = 1, X3 = 1 | X1 = 0)
= P(X2 = 1 | X1 = 0) P(X3 = 1 | X1 = 0, X2 = 1)
= P(X2 = 1 | X1 = 0) P(X3 = 1 | X2 = 1)   (Markov property)
= p01 × p11 = 0.2 × 0.6 = 0.12
Similarly (or by time-homogeneity of the transition probabilities), P(X1 = 1, X2 = 1 | X0 = 0) = 0.12. In general, P(Xn+1 = 1, Xn+2 = 1 | Xn = 0) = 0.12 for any n; it does not matter when you start.
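The same product structure works for probabilities conditioned on a starting state. A minimal sketch, again with a hypothetical matrix in which only p01 = 0.2 and p11 = 0.6 come from the solution:

```python
import numpy as np

# Hypothetical matrix: only P[0, 1] = 0.2 and P[1, 1] = 0.6 are taken
# from the solution; the other entries are illustrative fillers.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.6, 0.1],
              [0.2, 0.3, 0.5]])

def cond_path_prob(P, start, path):
    """P(X1 = path[0], ..., Xk = path[-1] | X0 = start)."""
    prob, state = 1.0, start
    for nxt in path:
        prob *= P[state, nxt]
        state = nxt
    return prob

print(cond_path_prob(P, 0, [1, 1]))   # 0.2 * 0.6 = 0.12
```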

Question 3
A Markov chain X0, X1, … on states 0, 1, 2 has the transition probability matrix P. If we know that the process starts in state X0 = 1, determine the probability P(X0 = 1, X1 = 0, X2 = 2).

Question 3
P(X0 = 1, X1 = 0, X2 = 2)
= P(X0 = 1) P(X1 = 0 | X0 = 1) P(X2 = 2 | X0 = 1, X1 = 0)
= P(X0 = 1) P(X1 = 0 | X0 = 1) P(X2 = 2 | X1 = 0)
= p1 × p10 × p02 = 1 × 0.3 × 0.1 = 0.03

Question 4
A Markov chain X0, X1, … on states 0, 1, 2 has the transition probability matrix P.

Question 4a
Compute the two-step transition matrix P(2) = P². Note: observe that the rows must always sum to one for all transition matrices; this is a useful check on the computation.
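The slide's matrices are not reproduced in this transcript, but the two-step computation itself is a single matrix product. The sketch below assumes a matrix that is consistent with the answers quoted in parts (b) and (c) (it gives P²[0, 1] = 0.13 and P³[0, 1] = 0.16); treat it as a plausible reconstruction, not the confirmed original.

```python
import numpy as np

# Assumed one-step matrix; it reproduces the quoted answers 0.13 and
# 0.16 below, but it is a reconstruction, not confirmed by the slides.
P = np.array([[0.1, 0.2, 0.7],
              [0.2, 0.2, 0.6],
              [0.6, 0.1, 0.3]])

P2 = P @ P                  # two-step transition matrix P(2)
print(P2)
print(P2.sum(axis=1))       # each row sums to one, as it must
```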

Question 4b
What is P(X3 = 1 | X1 = 0)? This is the (0, 1) entry of the two-step matrix: P(X3 = 1 | X1 = 0) = 0.13. In general, P(Xn+2 = 1 | Xn = 0) = 0.13 for any n.

Question 4c
What is P(X3 = 1 | X0 = 0)? Note that P(3) = P × P(2). Thus, reading the (0, 1) entry of the three-step matrix, P(X3 = 1 | X0 = 0) = 0.16. In general, P(Xn+3 = 1 | Xn = 0) = 0.16 for any n.
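Under the same assumed matrix as in part (a), the three-step probability can be read off a matrix power:

```python
import numpy as np

# Same assumed (reconstructed) matrix as in part (a).
P = np.array([[0.1, 0.2, 0.7],
              [0.2, 0.2, 0.6],
              [0.6, 0.1, 0.3]])

P3 = np.linalg.matrix_power(P, 3)   # three-step transition matrix P(3)
print(round(P3[0, 1], 2))           # 0.16, matching the quoted answer
```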

Question 5
A Markov chain X0, X1, … on states 0, 1, 2 has the transition probability matrix P. It is known that the process starts in state X0 = 1; determine the probability P(X2 = 2).

Question 5
By the law of total probability,
P(X2 = 2) = P(X0 = 0) P(X2 = 2 | X0 = 0) + P(X0 = 1) P(X2 = 2 | X0 = 1) + P(X0 = 2) P(X2 = 2 | X0 = 2)
= p0 p02(2) + p1 p12(2) + p2 p22(2)
= 1 × p12(2) = 0.35,
where pij(2) denotes a two-step transition probability and the initial distribution puts all its mass on state 1.
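In matrix form, the law of total probability says that the distribution of X2 is the row vector p(0) P², where p(0) is the initial distribution. The original matrix is missing from this transcript, so the sketch below uses a made-up matrix chosen only so that the quoted answer 0.35 comes out; the technique, not the numbers, is the point.

```python
import numpy as np

# Made-up matrix chosen so that the quoted answer 0.35 is reproduced;
# the actual matrix from the slides is not in this transcript.
P = np.array([[0.3, 0.3, 0.4],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

p0 = np.array([0.0, 1.0, 0.0])   # all initial mass on state 1
p2 = p0 @ P @ P                  # distribution of X2
print(round(p2[2], 2))           # P(X2 = 2) = 0.35
```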

Question 6
Consider a sequence of items from a production process, with each item being graded as good or defective. Suppose that a good item is followed by another good item with probability α and by a defective item with probability 1 − α. Similarly, a defective item is followed by another defective item with probability β and by a good item with probability 1 − β. Specify the transition probability matrix. If the first item is good, what is the probability that the first defective item to appear is the fifth item?

Question 6
Let Xn be the grade of the nth item. Then
P(Xn+1 = g | Xn = g) = α,  P(Xn+1 = d | Xn = g) = 1 − α
P(Xn+1 = d | Xn = d) = β,  P(Xn+1 = g | Xn = d) = 1 − β
Thus, the transition probability matrix (rows and columns ordered g, d) is
P = [ α      1 − α ]
    [ 1 − β  β     ]

Question 6
The required probability is
P(X5 = d, X4 = g, X3 = g, X2 = g | X1 = g)
= P(X2 = g | X1 = g) × P(X3 = g | X2 = g) × P(X4 = g | X3 = g) × P(X5 = d | X4 = g)   (why? the Markov property)
= pgg × pgg × pgg × pgd = α³(1 − α)
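A quick numerical sanity check, with illustrative values for α and β (the problem keeps them symbolic):

```python
import numpy as np

alpha, beta = 0.9, 0.7               # illustrative values only

# Transition matrix with states ordered (g, d) = (0, 1), as specified
# on the previous slide.
P = np.array([[alpha, 1 - alpha],
              [1 - beta, beta]])

g, d = 0, 1
prob, state = 1.0, g                 # the first item is good
for nxt in [g, g, g, d]:             # items 2-5: good, good, good, defective
    prob *= P[state, nxt]
    state = nxt

print(np.isclose(prob, alpha**3 * (1 - alpha)))   # True
```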

Question 7
The random variables ξ1, ξ2, … are independent, with common probability mass function
P(ξ = 0) = 0.1, P(ξ = 1) = 0.3, P(ξ = 2) = 0.2, P(ξ = 3) = 0.4.
Set X0 = 0 and let Xn = max{ξ1, …, ξn}. Determine the transition probability matrix for the Markov chain {Xn}. Draw the state diagram associated with the transition probabilities.

Question 7
Observe: X0 = 0, X1 = max{X0, ξ1}, X2 = max{X1, ξ2}, …, Xn = max{Xn−1, ξn}.
Hence Xn is computed recursively by comparing the previous maximum with the current input to obtain the new maximum.

Question 7
The state space is S = {0, 1, 2, 3}.
From state 0:
P(Xn+1 = 0 | Xn = 0) = P(ξn+1 = 0) = 0.1
P(Xn+1 = 1 | Xn = 0) = P(ξn+1 = 1) = 0.3
P(Xn+1 = 2 | Xn = 0) = P(ξn+1 = 2) = 0.2
P(Xn+1 = 3 | Xn = 0) = P(ξn+1 = 3) = 0.4
From state 1:
P(Xn+1 = 1 | Xn = 1) = P(ξn+1 = 0) + P(ξn+1 = 1) = 0.1 + 0.3 = 0.4
P(Xn+1 = 2 | Xn = 1) = 0.2, P(Xn+1 = 3 | Xn = 1) = 0.4
From state 2:
P(Xn+1 = 2 | Xn = 2) = 0.1 + 0.3 + 0.2 = 0.6, P(Xn+1 = 3 | Xn = 2) = 0.4
From state 3:
P(Xn+1 = 3 | Xn = 3) = 0.1 + 0.3 + 0.2 + 0.4 = 1
In general, P(Xn+1 = j | Xn = i) = 0 if j < i (the maximum cannot decrease), P(ξ ≤ i) if j = i, and P(ξ = j) if j > i.
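Since the rule is the same for every row (a triangular "running maximum" pattern), the whole matrix can be generated from the pmf. This sketch builds it exactly as derived above:

```python
import numpy as np

# pmf of each xi, as recovered from the slide: P(xi = k), k = 0..3.
pmf = np.array([0.1, 0.3, 0.2, 0.4])
cdf = np.cumsum(pmf)
m = len(pmf)

# X_{n+1} = max(X_n, xi_{n+1}):
#   j < i : probability 0 (the running maximum never decreases)
#   j = i : xi_{n+1} <= i, probability cdf[i]
#   j > i : xi_{n+1} = j, probability pmf[j]
P = np.zeros((m, m))
for i in range(m):
    P[i, i] = cdf[i]
    P[i, i + 1:] = pmf[i + 1:]

print(P)
print(P.sum(axis=1))   # every row sums to one
```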

Question 7
The transition probability matrix (states ordered 0, 1, 2, 3) is
P = [ 0.1  0.3  0.2  0.4 ]
    [ 0    0.4  0.2  0.4 ]
    [ 0    0    0.6  0.4 ]
    [ 0    0    0    1   ]