6.896: Probability and Computation Spring 2011 Constantinos (Costis) Daskalakis lecture 3

recap

Markov Chains Def: A Markov chain on Ω is a stochastic process (X_0, X_1, …, X_t, …) such that
a. Pr[X_{t+1} = y | X_0, X_1, …, X_t] = Pr[X_{t+1} = y | X_t] (the Markov property);
b. Pr[X_{t+1} = y | X_t = x] = P(x, y), for a transition matrix P that does not depend on t (time-homogeneity).
Write p_x^t =: distribution of X_t conditioning on X_0 = x; then p_x^{t+1} = p_x^t P.
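As a concrete illustration (not from the slides; the 3-state matrix and the function name simulate_chain are our own), a minimal sketch of simulating a trajectory (X_0, …, X_T) from a transition matrix P:

import numpy as np

def simulate_chain(P, x0, T, rng=None):
    # Sample a trajectory (X_0, ..., X_T) of the chain with transition matrix P
    rng = rng or np.random.default_rng()
    n = P.shape[0]
    xs = [x0]
    for _ in range(T):
        # X_{t+1} is drawn from the row P[X_t, :], independently of the earlier past
        xs.append(rng.choice(n, p=P[xs[-1]]))
    return xs

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
print(simulate_chain(P, x0=0, T=10))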

Graphical Representation Represent the Markov chain by a graph G(P):
- nodes are identified with the elements of the state space Ω;
- there is a directed edge from state x to state y if P(x, y) > 0, with edge-weight P(x, y) (no edge if P(x, y) = 0);
- self-loops are allowed (when P(x, x) > 0).
Much of the theory of Markov chains depends only on the topology of G(P), rather than on its edge-weights. In particular, irreducibility (G(P) is strongly connected) and aperiodicity (the gcd of the lengths of the directed cycles through any state is 1) are properties of the topology alone, and both can be checked mechanically, as in the sketch below.
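A minimal sketch (our own example; the path graph and the choice of networkx are not from the slides) checking both properties of G(P):

import networkx as nx
import numpy as np

# Random walk on the path 0 - 1 - 2: connected but bipartite, hence periodic
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

# Build G(P): a directed edge x -> y whenever P(x, y) > 0
G = nx.DiGraph([(x, y) for x in range(3) for y in range(3) if P[x, y] > 0])

print(nx.is_strongly_connected(G))  # irreducibility: True
print(nx.is_aperiodic(G))           # aperiodicity: False (every cycle has even length)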

Mixing of Reversible Markov Chains Theorem (Fundamental Theorem of Markov Chains): If a Markov chain P is irreducible and aperiodic, then:
a. it has a unique stationary distribution π = πP;
b. p_x^t → π as t → ∞, for all x ∈ Ω.

examples

Example 1 Random Walk on an undirected Graph G = (V, E): P(x, y) = 1/deg(x) if (x, y) ∈ E, and P(x, y) = 0 otherwise. P is irreducible iff G is connected; P is aperiodic iff G is non-bipartite; P is reversible with respect to π(x) = deg(x)/(2|E|). A numerical check of the last claim appears below.
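A quick check on a small graph of our own choosing (the 4-vertex graph below is illustrative, not from the lecture):

import numpy as np

# Undirected graph on 4 vertices, given by its (symmetric) adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

deg = A.sum(axis=1)            # vertex degrees
P = A / deg[:, None]           # P(x, y) = 1/deg(x) if (x, y) in E
pi = deg / deg.sum()           # pi(x) = deg(x) / (2|E|)

assert np.allclose(pi @ P, pi)                            # stationarity: pi P = pi
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # detailed balance: pi(x)P(x,y) = pi(y)P(y,x)
print(pi)                      # [0.2, 0.3, 0.3, 0.2]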

Example 2 Card Shufflings. We showed convergence to the uniform distribution using the concept of doubly stochastic matrices: if every row and every column of P sums to 1, then the uniform distribution u is stationary, since (uP)(y) = Σ_x u(x) P(x, y) = (1/|Ω|) Σ_x P(x, y) = 1/|Ω|.
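A small sanity check of this on a 3-card deck, using random transpositions as the shuffle (our choice of shuffle for illustration; the lecture's shuffle may differ):

import itertools
import numpy as np

# State space: the 6 orderings of a 3-card deck
perms = list(itertools.permutations(range(3)))
index = {p: i for i, p in enumerate(perms)}
n = len(perms)

# Random transpositions: pick positions i, j uniformly (i = j allowed) and swap;
# each of the 9 (i, j) pairs has probability 1/9.
P = np.zeros((n, n))
for p in perms:
    for i in range(3):
        for j in range(3):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            P[index[p], index[tuple(q)]] += 1 / 9

# Doubly stochastic: rows and columns both sum to 1, so uniform is stationary
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)
assert np.allclose(np.full(n, 1 / n) @ P, np.full(n, 1 / n))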

Example 3 Designing Markov Chains Input: A positive weight function w : Ω → R+. Goal: Define a MC (X_t)_t such that p_x^t → π, where π(x) = w(x)/Z and Z = Σ_x w(x). The Metropolis Approach
- define a graph Γ(Ω, E) on Ω;
- define a proposal distribution κ(x, ·) s.t. κ(x, y) = κ(y, x) > 0 for all (x, y) ∈ E, x ≠ y, and κ(x, y) = 0 if (x, y) ∉ E.

Example 3 The Metropolis Approach
- define a graph Γ(Ω, E) on Ω;
- define a proposal distribution κ(x, ·) s.t. κ(x, y) = κ(y, x) > 0 for all (x, y) ∈ E, x ≠ y, and κ(x, y) = 0 if (x, y) ∉ E.
The Metropolis chain: when the chain is at state x,
- pick a neighbor y with probability κ(x, y);
- accept the move to y with probability min{1, w(y)/w(x)} (otherwise stay at x).

Example 3 The Metropolis chain: when the process is at state x,
- pick a neighbor y with probability κ(x, y);
- accept the move to y with probability min{1, w(y)/w(x)} (otherwise stay at x).
Claim: The Metropolis Markov chain is reversible w.r.t. π(x) = w(x)/Z.
Transition Matrix: P(x, y) = κ(x, y) min{1, w(y)/w(x)} for y ≠ x, and P(x, x) = 1 − Σ_{y≠x} P(x, y).
Proof: 1-point exercise.
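A minimal simulation sketch of the Metropolis chain, taking Γ to be the cycle on Ω = {0, 1, 2, 3} and κ the uniform distribution on the two neighbors (both choices, and all names below, are ours for illustration):

import numpy as np

def metropolis_step(x, w, n, rng):
    # One step of the Metropolis chain on the n-cycle with weights w
    y = (x + rng.choice([-1, 1])) % n          # propose a neighbor: kappa(x, y) = 1/2
    if rng.random() < min(1.0, w[y] / w[x]):   # accept with prob min{1, w(y)/w(x)}
        return y
    return x                                   # reject: stay at x

rng = np.random.default_rng(0)
w = np.array([1.0, 2.0, 3.0, 4.0])             # positive weights on Omega = {0, 1, 2, 3}
x, counts = 0, np.zeros(4)
for _ in range(200_000):
    x = metropolis_step(x, w, 4, rng)
    counts[x] += 1

print(counts / counts.sum())   # approaches w / w.sum() = [0.1, 0.2, 0.3, 0.4]

Note that rejections put mass on the self-loop P(x, x), which is what makes this chain aperiodic even though the 4-cycle itself is bipartite.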

back to the fundamental theorem

Mixing of Reversible Markov Chains Theorem (Fundamental Theorem of Markov Chains): If a Markov chain P is irreducible and aperiodic, then:
a. it has a unique stationary distribution π = πP;
b. p_x^t → π as t → ∞, for all x ∈ Ω.
Proof: see notes.
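To see the theorem in action numerically, a short sketch (the 3-state matrix is our own) that iterates p_x^{t+1} = p_x^t P and tracks the total variation distance to π:

import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])   # irreducible, and aperiodic thanks to the self-loops

# Stationary distribution: the left eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

p = np.array([1.0, 0.0, 0.0])    # start deterministically at state 0
for t in range(1, 31):
    p = p @ P                    # p_x^{t+1} = p_x^t P
    if t % 10 == 0:
        print(t, 0.5 * np.abs(p - pi).sum())   # ||p_x^t - pi||_TV -> 0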