Time to Equilibrium for Finite State Markov Chains. Yuan-Chung Sheu (Department of Applied Mathematics, National Chiao Tung University)

Let $S$ be a finite set (the state space) and let $(X_n)_{n \ge 0}$ be a sequence of $S$-valued random variables (a random process, or stochastic process). Its law is determined by the finite-dimensional distributions $P(X_0 = x_0, X_1 = x_1, \dots, X_n = x_n)$. (Here $x_0, \dots, x_n \in S$.)

What is $P(X_0 = x_0, \dots, X_n = x_n)$? Among all possibilities, the following two are the simplest: (i.i.d.) $P(X_0 = x_0, \dots, X_n = x_n) = \mu(x_0)\mu(x_1)\cdots\mu(x_n)$, where $\mu$ is a probability measure on $S$. Example (Black-Scholes-Merton model): the price of some asset at time $t$, driven by i.i.d. increments.

(Markov) $P(X_0 = x_0, \dots, X_n = x_n) = \lambda(x_0) K(x_0,x_1) \cdots K(x_{n-1},x_n)$. Here $K$ is a stochastic matrix (i.e. $K(x,y) \ge 0$ and $\sum_y K(x,y) = 1$ for every $x$). In this case, $K(x,y) = P(X_{n+1} = y \mid X_n = x)$ is the transition probability for the Markov chain.

Example (Riffle Shuffles): the Gilbert-Shannon-Reeds model (Gilbert, Shannon '55; Reeds '81), a Markov chain on the orderings of a deck of cards.
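
In the GSR model, the deck is cut at a Binomial(52, 1/2) position and the two packets are riffled together, each card dropping from a packet with probability proportional to the packet's current size. A minimal pure-Python simulation of one shuffle (the function name and use of the `random` module are my own choices):

```python
import random

def gsr_riffle(deck, rng=random):
    """One Gilbert-Shannon-Reeds riffle shuffle."""
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))   # Binomial(n, 1/2) cut
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        a, b = len(left) - i, len(right) - j
        # Drop from the left packet with probability a / (a + b).
        if rng.random() < a / (a + b):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

deck = list(range(52))
shuffled = gsr_riffle(deck)
```

Iterating `gsr_riffle` runs the chain whose mixing behavior the later slides quantify.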

A Markov chain with transition kernel $K$ and initial distribution $\lambda$ satisfies $P(X_0 = x_0, \dots, X_n = x_n) = \lambda(x_0) K(x_0,x_1)\cdots K(x_{n-1},x_n)$. This implies $P(X_n = y \mid X_0 = x) = K^n(x,y)$. In particular, we observe that the distribution of $X_n$ is $\lambda K^n$. Here $K^n$ is the $n$-th matrix power of $K$ and $(\lambda K^n)(y) = \sum_x \lambda(x) K^n(x,y)$.
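
The identity "distribution of $X_n$ equals $\lambda K^n$" can be checked numerically. A pure-Python sketch with a made-up two-state kernel (the numbers are arbitrary):

```python
def mat_mul(A, B):
    # Product of two square matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def vec_mat(v, A):
    # Row vector times matrix: one step of the distribution recursion.
    return [sum(v[i] * A[i][j] for i in range(len(v))) for j in range(len(A[0]))]

K = [[0.7, 0.3],
     [0.4, 0.6]]
lam = [1.0, 0.0]            # start deterministically in state 0

# Distribution of X_5 two ways: iterating lam -> lam K, and forming K^5.
dist = lam
Kn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(5):
    dist = vec_mat(dist, K)
    Kn = mat_mul(Kn, K)
direct = vec_mat(lam, Kn)   # lam * K^5; agrees with dist
```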

What is the limiting distribution of $X_n$ given $X_0 = x$? (i.e. What is the limiting behavior of $K^n(x,\cdot)$ as $n \to \infty$?) Example (Two-State Chain): $S = \{0,1\}$, $K = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$.

If $p = q = 1$, the chain alternates deterministically between the two states, so $\lim_{n \to \infty} K^n(x,\cdot)$ does not exist. If $p, q > 0$ and $p + q < 2$, then $K^n(x,\cdot) \to \left(\frac{q}{p+q}, \frac{p}{p+q}\right)$ for both starting states $x$.
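
Both behaviors of the two-state chain can be observed numerically. A pure-Python sketch (the values of $p$ and $q$ in the aperiodic case are arbitrary):

```python
def step(dist, K):
    # One step of the distribution recursion dist -> dist K.
    return [sum(dist[i] * K[i][j] for i in range(2)) for j in range(2)]

# Periodic case p = q = 1: the distribution flips every step, so no limit.
K_per = [[0.0, 1.0], [1.0, 0.0]]
d = [1.0, 0.0]
seen = []
for _ in range(4):
    d = step(d, K_per)
    seen.append(tuple(d))      # alternates (0,1), (1,0), (0,1), ...

# Aperiodic case: converges to pi = (q/(p+q), p/(p+q)).
p, q = 0.3, 0.5
K_ap = [[1 - p, p], [q, 1 - q]]
d = [1.0, 0.0]
for _ in range(200):
    d = step(d, K_ap)
pi = [q / (p + q), p / (p + q)]
```

In the aperiodic case the error shrinks geometrically at rate $|1-p-q|$ per step.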

A probability distribution $\pi$ is an invariant/equilibrium/stationary distribution if $\pi K = \pi$, i.e. $\sum_x \pi(x) K(x,y) = \pi(y)$ for all $y$. Suppose that for some probability distribution $\pi$, $K^n(x,y) \to \pi(y)$ for all $x, y$. Then $\pi$ is invariant.

Ergodic Markov Chain. Assume $K$ is aperiodic and irreducible. Then there exists a unique invariant distribution $\pi$, and $K^n(x,y) \to \pi(y)$ for all $x, y$. How fast does the distribution of $X_n$ converge to its limiting distribution?
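
For a concrete irreducible, aperiodic chain, iterating from two different starting states shows both rows of $K^n$ collapsing onto the same invariant $\pi$. A pure-Python sketch with a made-up 3-state kernel:

```python
def vec_mat(v, K):
    return [sum(v[i] * K[i][j] for i in range(len(v))) for j in range(len(K[0]))]

# A made-up irreducible, aperiodic 3-state kernel (all entries positive).
K = [[0.5, 0.25, 0.25],
     [0.2, 0.5,  0.3 ],
     [0.3, 0.3,  0.4 ]]

# Run the chain from two different deterministic starts.
a = [1.0, 0.0, 0.0]
b = [0.0, 0.0, 1.0]
for _ in range(500):
    a = vec_mat(a, K)
    b = vec_mat(b, K)

pi = a                                   # both limits agree with pi
residual = max(abs(vec_mat(pi, K)[j] - pi[j]) for j in range(3))  # pi K = pi
```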

Distance between two probability measures $\nu$ and $\mu$ on $S$: the total variation distance $\|\nu - \mu\|_{TV} = \max_{A \subseteq S} |\nu(A) - \mu(A)|$, and the $\ell^p(\pi)$ distance $\|\nu - \mu\|_{\ell^p(\pi)} = \left(\sum_x \left|\frac{\nu(x)-\mu(x)}{\pi(x)}\right|^p \pi(x)\right)^{1/p}$. (Note that $\|\nu - \mu\|_{TV} = \frac{1}{2}\sum_x |\nu(x) - \mu(x)|$.)
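
The equivalence of the max-over-sets definition and the half-$\ell^1$ formula for total variation is easy to verify by brute force on a small state space:

```python
from itertools import combinations

def tv_distance(mu, nu):
    """Total variation as (1/2) * l1-distance."""
    return 0.5 * sum(abs(m - n) for m, n in zip(mu, nu))

def tv_max_form(mu, nu):
    """Total variation as max over subsets A of |mu(A) - nu(A)|."""
    n, best = len(mu), 0.0
    for r in range(n + 1):
        for A in combinations(range(n), r):
            best = max(best, abs(sum(mu[i] - nu[i] for i in A)))
    return best

mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.3, 0.5]
d = tv_distance(mu, nu)      # 0.5 * (0.3 + 0.0 + 0.3) = 0.3
```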

For an ergodic chain, $d(n) = \max_x \|K^n(x,\cdot) - \pi\|_{TV}$ is a non-increasing function of $n$, and the related quantity $\bar d(n) = \max_{x,y} \|K^n(x,\cdot) - K^n(y,\cdot)\|_{TV}$ is submultiplicative, $\bar d(m+n) \le \bar d(m)\,\bar d(n)$, with $d(n) \le \bar d(n) \le 2 d(n)$. This implies that if $d(n_0) \le \varepsilon < \frac{1}{2}$ for some $n_0$, then $d(k n_0) \le (2\varepsilon)^k$, so the distance to equilibrium decays geometrically.

We say $(K,\pi)$ is reversible if it satisfies the detailed balance condition $\pi(x) K(x,y) = \pi(y) K(y,x)$ for all $x, y$. Assume $K$ is reversible, irreducible and aperiodic. Then $K$ has real eigenvalues $1 = \beta_0 > \beta_1 \ge \cdots \ge \beta_{|S|-1} > -1$, and for any corresponding orthonormal basis of eigenvectors $\varphi_0, \dots, \varphi_{|S|-1}$ in $\ell^2(\pi)$, with $\varphi_0 \equiv 1$, we have $\frac{K^n(x,y)}{\pi(y)} = \sum_i \beta_i^n \varphi_i(x)\varphi_i(y)$ and $\|K^n(x,\cdot) - \pi\|_{\ell^2(\pi)}^2 = \sum_{i \ge 1} \beta_i^{2n} \varphi_i(x)^2$.
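
For the two-state chain the spectral data are available in closed form, which makes the representation easy to verify. A pure-Python sketch (the closed-form eigenfunctions below are specific to the two-state chain, not the general construction; $p$, $q$ are arbitrary):

```python
import math

p, q = 0.3, 0.5
K = [[1 - p, p], [q, 1 - q]]
pi = [q / (p + q), p / (p + q)]

# Detailed balance: pi(x) K(x,y) = pi(y) K(y,x).
db_gap = abs(pi[0] * K[0][1] - pi[1] * K[1][0])

# Eigenvalues and orthonormal (in l2(pi)) eigenfunctions, closed form.
beta = [1.0, 1 - p - q]
phi = [[1.0, 1.0],                          # phi_0 = constant 1
       [math.sqrt(p / q), -math.sqrt(q / p)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Check K^n(x,y) = pi(y) * sum_i beta_i^n phi_i(x) phi_i(y) for n = 3.
n, Kn = 3, K
for _ in range(n - 1):
    Kn = mat_mul(Kn, K)
spec = [[pi[y] * sum(beta[i]**n * phi[i][x] * phi[i][y] for i in range(2))
         for y in range(2)] for x in range(2)]
```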

Let $\lambda$ denote the smallest non-zero eigenvalue of $I - K$; this equals the spectral gap $1 - \beta_1$ of $K$. Then $1/\lambda$ is the smallest constant $A$ satisfying the Poincaré inequality $\mathrm{Var}_\pi(f) \le A\,\mathcal{E}(f,f)$, holding for all functions $f$ on $S$ (with $\mathcal{E}$ the Dirichlet form below).

Setting $H_t = e^{-t(I-K)}$, the Dirichlet form associated with the semigroup $(H_t)_{t \ge 0}$ is $\mathcal{E}(f,f) = \langle (I-K)f, f\rangle_{\ell^2(\pi)} = \frac{1}{2}\sum_{x,y} (f(x)-f(y))^2\,\pi(x)K(x,y)$, and $\frac{d}{dt}\mathrm{Var}_\pi(H_t f) = -2\,\mathcal{E}(H_t f, H_t f)$. Note that $\mathcal{E}(g,g) \ge \lambda\,\mathrm{Var}_\pi(g)$. Hence $\mathrm{Var}_\pi(H_t f) \le e^{-2\lambda t}\,\mathrm{Var}_\pi(f)$.
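
As a sanity check on the Poincaré inequality: for the two-state chain the gap is $\lambda = p + q$, and since there is only one non-constant direction, $\lambda\,\mathrm{Var}_\pi(f)$ equals $\mathcal{E}(f,f)$ exactly for every $f$. A pure-Python sketch with made-up $p$, $q$:

```python
import random

p, q = 0.3, 0.5
K = [[1 - p, p], [q, 1 - q]]
pi = [q / (p + q), p / (p + q)]
gap = p + q        # smallest non-zero eigenvalue of I - K for this chain

def variance(f):
    mean = sum(pi[x] * f[x] for x in range(2))
    return sum(pi[x] * (f[x] - mean)**2 for x in range(2))

def dirichlet(f):
    # E(f,f) = (1/2) sum_{x,y} (f(x)-f(y))^2 pi(x) K(x,y)
    return 0.5 * sum((f[x] - f[y])**2 * pi[x] * K[x][y]
                     for x in range(2) for y in range(2))

rng = random.Random(0)
ok = all(gap * variance(f) <= dirichlet(f) + 1e-12
         for f in ([rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(100)))
```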

Theorem: The mixing time $T = \min\{n : d(n) \le 1/4\}$ satisfies $T \le \frac{1}{1-\beta_*}\log\frac{4}{\pi_*}$. Theorem: $T \ge \left(\frac{1}{1-\beta_*} - 1\right)\log 2$, where $\beta_* = \max_{i \ge 1}|\beta_i|$ and $\pi_* = \min_x \pi(x)$.
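
The mixing time $\min\{n : d(n) \le 1/4\}$ can be computed directly for a small chain. A pure-Python sketch reusing a made-up 3-state kernel (here $\pi$ is approximated by running the chain):

```python
def vec_mat(v, K):
    return [sum(v[i] * K[i][j] for i in range(len(v))) for j in range(len(K[0]))]

def tv(mu, nu):
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

K = [[0.5, 0.25, 0.25],
     [0.2, 0.5,  0.3 ],
     [0.3, 0.3,  0.4 ]]

# Approximate pi by iterating long enough.
pi = [1/3, 1/3, 1/3]
for _ in range(2000):
    pi = vec_mat(pi, K)

# d(n) = max_x ||K^n(x,.) - pi||_TV; T = first n with d(n) <= 1/4.
rows = [[1.0 if j == i else 0.0 for j in range(3)] for i in range(3)]
t_mix, d_vals = None, []
for n_ in range(1, 50):
    rows = [vec_mat(r, K) for r in rows]
    d = max(tv(r, pi) for r in rows)
    d_vals.append(d)
    if t_mix is None and d <= 0.25:
        t_mix = n_
```

This toy chain mixes in a single step because every row of $K$ is already close to $\pi$; slowly mixing chains give much larger values.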

Consider the entropy-like quantity $\mathcal{L}(f) = \sum_x f(x)^2 \log\left(\frac{f(x)^2}{\|f\|_{\ell^2(\pi)}^2}\right)\pi(x)$. The log-Sobolev constant is given by the formula $\alpha = \inf\left\{\frac{\mathcal{E}(f,f)}{\mathcal{L}(f)} : \mathcal{L}(f) \ne 0\right\}$. Hence $1/\alpha$ is the smallest constant $A$ satisfying the log-Sobolev inequality $\mathcal{L}(f) \le A\,\mathcal{E}(f,f)$, holding for all functions $f$ on $S$.
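
The ratio $\mathcal{E}(f,f)/\mathcal{L}(f)$ can be explored numerically on a tiny example. For the symmetric two-point chain (uniform $\pi$, both rows of $K$ equal to $\pi$) the exact value $\alpha = 1/2$ is known from Diaconis-Saloff-Coste (1996); a pure-Python scan over a one-parameter family of test functions (my own choice of family) gets close to it from above:

```python
import math

pi = [0.5, 0.5]
K = [[0.5, 0.5], [0.5, 0.5]]   # symmetric two-point chain

def dirichlet(f):
    # E(f,f) = (1/2) sum_{x,y} (f(x)-f(y))^2 pi(x) K(x,y)
    return 0.5 * sum((f[x] - f[y])**2 * pi[x] * K[x][y]
                     for x in range(2) for y in range(2))

def entropy_like(f):
    # L(f) = sum_x f(x)^2 log(f(x)^2 / ||f||_2^2) pi(x)
    norm2 = sum(f[x]**2 * pi[x] for x in range(2))
    return sum(f[x]**2 * math.log(f[x]**2 / norm2) * pi[x] for x in range(2))

# Scan f = (t, 1) for t in (1, 5]; the infimum 1/2 is approached as t -> 1.
ratios = []
for k in range(1, 400):
    t = 1 + k / 100.0
    f = [t, 1.0]
    ratios.append(dirichlet(f) / entropy_like(f))
alpha_est = min(ratios)
```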

Theorem: $2\alpha \le \lambda$. Moreover, for a reversible chain in continuous time, $\left\|\frac{H_t(x,\cdot)}{\pi} - 1\right\|_{\ell^2(\pi)} \le e^{1-c}$ whenever $t \ge \frac{1}{4\alpha}\log\log\frac{1}{\pi(x)} + \frac{c}{\lambda}$; a good bound on $\alpha$ thus improves the $\log\frac{1}{\pi_*}$ factor in the mixing-time estimate to $\log\log\frac{1}{\pi_*}$.

Can one compute or estimate the constant $\alpha$? The present answer is that, in general, it seems to be a very difficult problem. Lee-Yau (1998), Annals of Probability: symmetric simple exclusion / random transposition. Diaconis-Saloff-Coste (1996), Annals of Applied Probability: estimates for simple random walk on the cycle, and the exact value of $\alpha$ for kernels $K$ with all rows equal to $\pi$. Chen-Sheu (2003), Journal of Functional Analysis: the exact value of $\alpha$ for simple random walk on the $n$-cycle when $n$ is even.

Who Cares?

Let $X$ be a set and $G$ a group. An action of the group $G$ on the set $X$ is a map $G \times X \to X$, $(g,x) \mapsto gx$, compatible with the group operation. The orbit of $x \in X$ is $O_x = \{y \in X : y = gx \text{ for some } g \in G\}$. What is the number of orbits (or patterns)?
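
When $X$ and $G$ are small, the orbit count can be found by direct enumeration and checked against Burnside's lemma, $\#\text{orbits} = \frac{1}{|G|}\sum_{g \in G} |\{x : gx = x\}|$. A pure-Python sketch for a toy example of my own choosing, 2-colorings of 4 beads under cyclic rotation:

```python
from itertools import product

n, c = 4, 2                                     # 4 beads, 2 colors
X = list(product(range(c), repeat=n))           # all colorings
rotations = [lambda x, k=k: x[k:] + x[:k] for k in range(n)]  # cyclic group

# Orbits by direct enumeration.
seen, orbits = set(), 0
for x in X:
    if x not in seen:
        orbits += 1
        for g in rotations:
            seen.add(g(x))

# Burnside: average number of fixed points over the group.
burnside = sum(sum(1 for x in X if g(x) == x) for g in rotations) // n
```

Here both counts equal $(2^4 + 2 + 2^2 + 2)/4 = 6$ necklace patterns.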

Example ($n$ balls, $m$ boxes, Bose-Einstein distribution). Pólya's theory of counting (see Enumerative Combinatorics, Vol. II, by R. Stanley, Sec. 7.24). The Burnside process (Jerrum and Goldberg): a Markov chain for sampling orbits uniformly.

Diaconis (‘03) ( balls, boxes) for all

Cut-off phenomenon. Bayer and Diaconis ('92): the total variation distance for riffle shuffles of 52 cards stays near 1 for the first few shuffles and then drops abruptly, with the cut-off at about $\frac{3}{2}\log_2 52 \approx 7$ shuffles. What about "neat" riffle shuffles?
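
The cut-off curve can be computed exactly from the Bayer-Diaconis formula: after $m$ GSR shuffles, a permutation with $r$ rising sequences has probability $\binom{2^m + 52 - r}{52}/2^{52m}$, and the number of permutations with $r$ rising sequences is an Eulerian number. A pure-Python sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

n = 52

# Eulerian numbers: E[k] = # permutations of n cards with k descents,
# i.e. k+1 rising sequences.  A(m,k) = (k+1)A(m-1,k) + (m-k)A(m-1,k-1).
E = [1]
for m in range(2, n + 1):
    E = [(k + 1) * (E[k] if k < len(E) else 0)
         + (m - k) * (E[k - 1] if k >= 1 else 0)
         for k in range(m)]

def tv_after_shuffles(m):
    """Exact TV distance to uniform after m GSR riffle shuffles."""
    uniform = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):
        p = Fraction(comb(2**m + n - r, n), 2**(m * n))
        total += E[r - 1] * abs(p - uniform)
    return float(total / 2)
```

Evaluating `tv_after_shuffles` for $m = 1, \dots, 10$ reproduces the famous table: the distance is essentially 1 through five shuffles and collapses around seven.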