A First Course in Stochastic Processes Chapter Two: Markov Chains.

A sample realization of the chain: X1 = 1, X2 = 2, X3 = 1, X4 = 3.

The process is a sequence of random variables X1, X2, X3, X4, X5, ...

P = (the transition probability matrix; its entries appeared as an image on the original slide)

Example Two: Nucleotide evolution. State space: {A, G, C, T}.

Types of point mutation: A-G and T-C exchanges (purine to purine, pyrimidine to pyrimidine) are transitions, occurring at rate α; purine-to-pyrimidine exchanges are transversions, occurring at rate β.

Kimura's 2-parameter model (K2P) on states {A, G, C, T}: transitions occur with probability α and each transversion with probability β (the full matrix P appeared as an image on the slide).
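As a minimal sketch of the K2P structure, the one-step matrix can be built as below; the values of α and β here are illustrative, not those from the slide.

```python
import numpy as np

# Kimura 2-parameter (K2P) one-step transition matrix over states A, G, C, T.
# alpha = probability of a transition (A<->G or C<->T) per step,
# beta  = probability of each transversion; both values are illustrative.
alpha, beta = 0.04, 0.01
states = ["A", "G", "C", "T"]

P = np.array([
    # to:        A                   G             C             T
    [1 - alpha - 2 * beta, alpha,             beta,         beta],  # from A
    [alpha,             1 - alpha - 2 * beta, beta,         beta],  # from G
    [beta,              beta,  1 - alpha - 2 * beta,        alpha], # from C
    [beta,              beta,  alpha,  1 - alpha - 2 * beta],       # from T
])

# Every row of a stochastic matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
print(P)
```

Note how the matrix treats the two purines and the two pyrimidines symmetrically: only two free parameters are needed.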

Slide illustration: example nucleotide sequences (GGTCAC, CTATGA, AGTTCGC) evolving over the state space {A, G, C, T}.

The Markov Property: the distribution of the next state depends only on the current state, not on how the chain arrived there. (The slide illustrates this with the sequences GGTCAC and CTATGA evolving site by site.)
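The Markov property can be made concrete with a short simulation: each step draws the next state from a distribution determined by the current state alone. The 4x4 matrix here is an illustrative stochastic matrix, not the one from the slide.

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["A", "G", "C", "T"]

# Illustrative stochastic matrix: 0.7 on the diagonal, 0.1 elsewhere.
P = np.full((4, 4), 0.1) + np.eye(4) * 0.6

def step(i):
    """Draw the next state given only the current state i (the Markov property)."""
    return rng.choice(4, p=P[i])

# Simulate: X_{n+1} depends only on X_n, never on the earlier path.
x = 0  # start in state A
path = [x]
for _ in range(10):
    x = step(x)
    path.append(x)
print([states[i] for i in path])
```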

Markov Chain properties: accessible, communicate, irreducible, aperiodic, recurrent, transient.

Accessible: state j is accessible from state i if p_ij(n) > 0 for some n ≥ 0. (The slide shows P with certain entries set to 0 to illustrate.)

Accessible: A (and G) are no longer accessible from C (or T).

Accessible: but C (and T) are still accessible from A (or G).

Communicate: states i and j communicate when each is accessible from the other (reciprocal accessibility).

Irreducible: all states communicate.

Non-irreducible: the matrix decomposes into block form, P = [[P1, 0], [0, P2]], where P1 is the transition matrix on {A, G} and P2 the transition matrix on {C, T}.
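Accessibility in a reducible chain can be checked mechanically: j is accessible from i exactly when some power of P has a positive (i, j) entry. The block-diagonal matrix below is a hypothetical example mirroring the slide's {A, G} / {C, T} split.

```python
import numpy as np

# Block-diagonal matrix: {A, G} and {C, T} no longer communicate,
# so the chain is not irreducible. Entry values are illustrative.
P = np.array([
    [0.9, 0.1, 0.0, 0.0],  # A
    [0.1, 0.9, 0.0, 0.0],  # G
    [0.0, 0.0, 0.9, 0.1],  # C
    [0.0, 0.0, 0.1, 0.9],  # T
])

def accessible(P, i, j):
    """j is accessible from i iff some power of P has a positive (i, j) entry.
    (I + P)^(n-1) sums walks of every length up to n-1, which suffices for n states."""
    n = len(P)
    reach = np.linalg.matrix_power(np.eye(n) + P, n - 1)
    return reach[i, j] > 0

A, G, C, T = 0, 1, 2, 3
print(accessible(P, A, G))  # A and G still communicate
print(accessible(P, A, C))  # C is not accessible from A
```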

Repercussions of communication: reflexivity, symmetry, and transitivity, so communication is an equivalence relation that partitions the state space into classes.

Periodicity. (The illustrative matrix P appeared as an image on the slide.)

Periodicity. The period d(i) of a state i is defined as the greatest common divisor of the numbers of steps n at which a return to i is possible, i.e. of the n with p_ii(n) > 0. A Markov chain is aperiodic if d(i) = 1 for all i. Most Markov chains that we deal with do not exhibit periodicity.
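The gcd definition can be computed directly from powers of P. A minimal sketch, using a two-state chain that alternates deterministically (period 2); the matrices are illustrative.

```python
from math import gcd
import numpy as np

# A two-state chain that alternates deterministically has period 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, i, n_max=20):
    """gcd of all step counts n <= n_max with P^n[i, i] > 0 (the period of state i)."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = gcd(d, n)
    return d

print(period(P, 0))  # returns to state 0 are possible only at even times
```

With any positive diagonal entry the chain is aperiodic, since p_ii(1) > 0 forces d(i) = 1.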

Recurrence: a state is recurrent if the chain is certain to return to it, and transient otherwise.

More on recurrence: if i and j communicate and i is recurrent, then j is recurrent. In a one-dimensional symmetric random walk the origin is recurrent; in a two-dimensional symmetric random walk the origin is recurrent; in a three-dimensional symmetric random walk the origin is transient (Pólya's theorem).
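The dimension dependence can be seen empirically: a Monte Carlo sketch (illustrative step and trial counts, not from the slides) estimates how often a symmetric walk revisits the origin within a fixed horizon. The 3-dimensional estimate comes out well below the 1-dimensional one.

```python
import numpy as np

rng = np.random.default_rng(1)

def returned(dim, n_steps=500):
    """One walk: does a symmetric random walk in `dim` dimensions
    revisit the origin within n_steps steps?"""
    pos = np.zeros(dim, dtype=int)
    for _ in range(n_steps):
        axis = rng.integers(dim)          # pick a coordinate uniformly
        pos[axis] += 2 * rng.integers(2) - 1  # move +1 or -1 along it
        if not pos.any():
            return True
    return False

# Monte Carlo estimate of the return probability within the horizon.
results = {}
for dim in (1, 2, 3):
    results[dim] = np.mean([returned(dim) for _ in range(200)])
    print(dim, results[dim])
```

These are finite-horizon estimates only; recurrence is a statement about return with probability 1 over an infinite horizon.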

Markov Chain properties (recap): accessible, communicate, irreducible, aperiodic, recurrent, transient.

Markov Chains: Examples

A realization starting at X1 = 1, continuing as X1, X2, X3, X4, X5, ...

P = (the transition matrix for this example; its entries appeared as an image on the original slide)

Diffusion across a permeable membrane (1D random walk)

Brownian motion (2D random walk)

Wright-Fisher allele frequency model, starting from X1 = 1.
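In the Wright-Fisher model with 2N gene copies, a state with i copies of an allele transitions to j copies with binomial probability C(2N, j) (i/2N)^j (1 - i/2N)^(2N-j). A minimal sketch building this transition matrix, with an illustrative small N:

```python
from math import comb

# Wright-Fisher model: 2N gene copies; if i copies of the allele are present,
# the next generation draws 2N copies binomially with success probability i / 2N.
N = 5          # illustrative (small) population size
M = 2 * N      # number of gene copies

def wf_row(i):
    """Row i of the transition matrix: Binomial(M, i/M) probabilities."""
    p = i / M
    return [comb(M, j) * p**j * (1 - p) ** (M - j) for j in range(M + 1)]

P = [wf_row(i) for i in range(M + 1)]

# States 0 and 2N are absorbing: the allele is lost or fixed.
assert P[0][0] == 1.0 and P[M][M] == 1.0
```

The absorbing boundary states are why, unlike the irreducible chains above, allele frequency eventually hits 0 or 2N and stays there.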

Haldane (1927) branching process model of fixation probability

P_ij = coefficient of s^j in the offspring probability generating function raised to the i-th power (the generating function itself appeared on the slide).

Haldane (1927) branching process model of fixation probability: for a new mutant with small selective advantage s, the probability of fixation is approximately 2s.
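The 2s approximation can be checked numerically. A sketch assuming, as in the standard treatment of Haldane's argument, Poisson(1 + s) offspring numbers: the extinction probability q* is the smallest fixed point of the pgf f(q) = exp((1 + s)(q - 1)), and the fixation probability is 1 - q*.

```python
from math import exp

def fixation_prob(s, iters=10_000):
    """Fixation probability of a mutant with Poisson(1 + s) offspring:
    iterate q <- f(q) from q = 0, which converges up to the smallest
    fixed point q* of the pgf; fixation probability is 1 - q*."""
    q = 0.0
    for _ in range(iters):
        q = exp((1 + s) * (q - 1))
    return 1 - q

s = 0.01
print(fixation_prob(s), 2 * s)  # close to Haldane's approximation 2s for small s
```

The agreement degrades as s grows, since 2s is only the leading-order term for small selective advantage.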

Markov Chain properties (recap): accessible, communicate, irreducible, aperiodic, recurrent, transient.