INDR 343 Problem Session 1 09.10.2014 http://home.ku.edu.tr/~indr343/

16.2-3 Consider the second version of the stock market model presented as an example. Whether the stock goes up tomorrow depends on whether it increased today and yesterday. If the stock increased today and yesterday, it will increase tomorrow with probability α1. If the stock increased today and decreased yesterday, it will increase tomorrow with probability α2. If the stock decreased today and increased yesterday, it will increase tomorrow with probability α3. If the stock decreased today and yesterday, it will increase tomorrow with probability α4.

16.2-3 (a) Construct the (one-step) transition matrix of the Markov chain. (b) Explain why the states used for this Markov chain cause the mathematical definition of the Markovian property to hold even though what happens in the future (tomorrow) depends upon what happened in the past (yesterday) as well as the present (today).
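
For part (a), the matrix can be written down and sanity-checked numerically. The sketch below is illustrative rather than the textbook's solution layout: the state ordering and the numeric values substituted for α1, ..., α4 are assumptions made only so the code runs.

```python
import numpy as np

# Illustrative values for a1..a4 (the problem leaves them symbolic).
a1, a2, a3, a4 = 0.9, 0.6, 0.5, 0.3

# Assumed state encoding (today's move, yesterday's move):
#   0 = (increased, increased)   1 = (increased, decreased)
#   2 = (decreased, increased)   3 = (decreased, decreased)
P = np.array([
    [a1, 0.0, 1 - a1, 0.0],   # from (I, I): up -> (I, I), down -> (D, I)
    [a2, 0.0, 1 - a2, 0.0],   # from (I, D): up -> (I, I), down -> (D, I)
    [0.0, a3, 0.0, 1 - a3],   # from (D, I): up -> (I, D), down -> (D, D)
    [0.0, a4, 0.0, 1 - a4],   # from (D, D): up -> (I, D), down -> (D, D)
])

assert np.allclose(P.sum(axis=1), 1.0)  # each row must be a probability distribution
```

Note how the encoding answers part (b): tomorrow's state (tomorrow's move, today's move) is determined by today's state and the one-step transition alone, so the Markovian property holds even though the state itself carries yesterday's information.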

16.3-2 Suppose that a communications network transmits binary digits, 0 or 1, where each digit is transmitted 10 times in succession. During each transmission, the probability is 0.99 that the digit entered will be transmitted accurately. In other words, the probability is 0.01 that the digit being transmitted will be recorded with the opposite value at the end of the transmission. For each transmission after the first one, the digit entered for transmission is the one that was recorded at the end of the preceding transmission. If X0 denotes the binary digit entering the system, X1 the binary digit recorded after the first transmission, X2 the binary digit recorded after the second transmission, . . . , then {Xn} is a Markov chain.

16.3-2 (a) Construct the (one-step) transition matrix. (b) Use your OR Courseware to find the 10-step transition matrix P(10). Use this result to identify the probability that a digit entering the network will be recorded accurately after the last transmission.
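
A quick way to reproduce part (b) without the OR Courseware is to raise the 2×2 one-step matrix to the 10th power with NumPy. A minimal sketch; the printed value is what this computation yields, not a quoted textbook answer.

```python
import numpy as np

# States 0 and 1 are the digit currently recorded; each transmission
# preserves the digit with probability 0.99 and flips it with 0.01.
P = np.array([[0.99, 0.01],
              [0.01, 0.99]])

# Ten-step transition matrix P(10) via matrix power.
P10 = np.linalg.matrix_power(P, 10)

# Probability that the digit entering the network is recorded accurately
# after the 10th transmission: P(10)[0, 0] (= P(10)[1, 1] by symmetry).
print(P10[0, 0])  # approximately 0.9085
```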

Ex: Unconditional Probabilities An urn always contains 2 balls. Ball colors are red and blue. At each stage a ball is randomly chosen and then replaced by a new ball, which is the same color as the ball it replaces with probability 0.8 and the opposite color with probability 0.2. If initially both balls are red, find the probability that the fifth ball selected is red.
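
One way to organize the computation, assuming the state is taken to be the number of red balls currently in the urn: propagate the state distribution four steps, then condition on the state just before the fifth draw. A minimal numerical sketch:

```python
import numpy as np

# State = number of red balls in the urn (0, 1, or 2).
# From state i, a red ball is drawn w.p. i/2; the replacement keeps the
# drawn ball's color w.p. 0.8 and flips it w.p. 0.2.
P = np.array([
    [0.8, 0.2, 0.0],   # state 0: a blue ball is drawn; new ball is red w.p. 0.2
    [0.1, 0.8, 0.1],   # state 1: the red count moves up or down w.p. 0.1 each
    [0.0, 0.2, 0.8],   # state 2: a red ball is drawn; new ball is blue w.p. 0.2
])

v0 = np.array([0.0, 0.0, 1.0])          # initially both balls are red (state 2)
v4 = v0 @ np.linalg.matrix_power(P, 4)  # state distribution before the 5th draw

# Given the state, the 5th ball selected is red w.p. (number of red balls)/2.
p_red = v4 @ np.array([0.0, 0.5, 1.0])
print(p_red)  # approximately 0.7048
```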

Classification of States Accessible: state j is accessible from state i if it is possible to go from i to j (a path exists from i to j in the transition diagram). Two states communicate if each is accessible from the other. A chain is irreducible if all of its states communicate. State i is recurrent if, upon leaving i, the process is certain to return to i at some time in the future. A state that is not recurrent is transient.
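
For a small finite chain these definitions can be checked mechanically. The sketch below is illustrative; `communicating_classes` and `is_recurrent` are hypothetical helper names, and the recurrence test relies on the fact that in a finite chain a class is recurrent exactly when it is closed (no transition leaves it).

```python
import numpy as np

def communicating_classes(P, tol=1e-12):
    """Partition the states of a finite chain into communicating classes."""
    n = len(P)
    # Accessibility: j reachable from i via positive-probability transitions.
    reach = (np.asarray(P) > tol) | np.eye(n, dtype=bool)
    for k in range(n):  # Warshall's algorithm: transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = [j for j in range(n) if reach[i, j] and reach[j, i]]
            classes.append(cls)
            seen.update(cls)
    return classes

def is_recurrent(P, cls, tol=1e-12):
    """Finite-chain test: a class is recurrent iff it is closed."""
    P = np.asarray(P)
    inside = set(cls)
    return all(P[i, j] <= tol
               for i in cls for j in range(len(P)) if j not in inside)
```

Applied to the transition matrices of Problems 16.4-2 and 16.4-5 below, this should reproduce the class structure found by hand.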

Classification of States (continued) A state is periodic with period d > 1 if it can return to itself only in a number of transitions that is a multiple of d. A state that is not periodic (d = 1) is aperiodic. a. Each state is visited every 3 iterations b. Each state is visited in multiples of 3 iterations
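
The period of state i is the greatest common divisor of all n for which an n-step return i → i is possible. A small heuristic sketch (`period` is a hypothetical helper name; scanning n up to a fixed bound is adequate for textbook-sized chains but is not an exact general-purpose algorithm):

```python
import math
import numpy as np

def period(P, i, max_steps=50):
    """Period of state i: gcd of all n >= 1 with a possible n-step return to i."""
    A = (np.asarray(P) > 1e-12).astype(int)  # 1 where a one-step move is possible
    An = np.eye(len(A), dtype=int)
    g = 0
    for n in range(1, max_steps + 1):
        An = np.minimum(An @ A, 1)  # n-step reachability as 0/1, avoids overflow
        if An[i, i]:
            g = math.gcd(g, n)      # math.gcd(0, n) == n, so the first hit sets g
            if g == 1:
                break               # state i is aperiodic
    return g
```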

Classification of States (continued) An absorbing state is one that, once entered, is never left. This diagram might represent the wealth of a gambler who begins with $2 and makes a series of $1 wagers. Let ai be the event of winning in state i and di the event of losing in state i. There are two absorbing states: 0 and 4.
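
For absorbing chains like this gambler's-ruin diagram, the probabilities of ending in each absorbing state can be computed with the standard fundamental-matrix construction B = (I − Q)⁻¹ R. The win probability p = 0.4 below is an assumed value for illustration; the slide does not fix it.

```python
import numpy as np

p = 0.4          # assumed probability of winning each $1 wager
q = 1 - p

# States: wealth $0..$4; $0 and $4 are absorbing. Transient states: 1, 2, 3.
Q = np.array([[0, p, 0],
              [q, 0, p],
              [0, q, 0]], dtype=float)   # transient -> transient
R = np.array([[q, 0],
              [0, 0],
              [0, p]], dtype=float)      # transient -> absorbing (ruin $0, goal $4)

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix: expected visit counts
B = N @ R                         # absorption probabilities

# Row for the gambler starting with $2: [P(ruin), P(reach $4)].
print(B[1])  # approximately [0.6923, 0.3077] for p = 0.4
```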

Classification of States (continued) Class: a set of states that communicate with each other. The states in a class are either all recurrent or all transient. State i is ergodic if it is recurrent and aperiodic. A Markov chain is ergodic if all of its states are ergodic.
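
Combining the sketches above gives a mechanical ergodicity check for a finite chain. This reuses the hypothetical helpers `communicating_classes`, `is_recurrent`, and `period` defined earlier; since period and recurrence are class properties, one representative per class suffices.

```python
def is_ergodic(P):
    """Every state recurrent and aperiodic (the definition used on this slide)."""
    return all(
        is_recurrent(P, cls) and period(P, cls[0]) == 1
        for cls in communicating_classes(P)
    )
```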

Illustration of Concepts Example 1 Every pair of states communicates, forming a single recurrent class; moreover, the states are not periodic. Thus the stochastic process is aperiodic and irreducible.

Illustration of Concepts Example 2 States 0 and 1 communicate and form a recurrent class. States 3 and 4 form separate transient classes. State 2 is an absorbing state and forms a recurrent class.

Illustration of Concepts Example 3 Every state communicates with every other state, so we have an irreducible stochastic process. Is the chain periodic? Yes, so this Markov chain is irreducible and periodic.

Classification of States Example [Figure: transition diagram for a five-state chain; the arc probabilities (0.1 through 0.8) shown on the original slide are not recoverable in order from this transcript.]

16.4-2 Given each of the following (one-step) transition matrices of a Markov chain, determine the classes of the Markov chain and whether they are recurrent.

16.4-3

16.4-5 Consider the Markov chain that has the following (one-step) transition matrix. (a) Determine the classes of this Markov chain and, for each class, determine whether it is recurrent or transient. (b) Determine the period of each state.