Markov Chain Monte Carlo


MCMC with Gibbs Sampling
- Fix the values of the observed variables.
- Set the values of all non-observed variables randomly.
- Perform a random walk through the space of complete variable assignments. On each move:
  - Pick a variable X.
  - Calculate Pr(X=true | all other variables).
  - Set X to true with that probability.
- Repeat many times. The frequency with which any variable X is true is its posterior probability.
- The walk converges to the true posterior when the frequencies stop changing significantly; the time to converge is the "mixing time".
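The loop above can be sketched in Python. This is a generic skeleton, not code from the slides; the network-specific conditional Pr(X=true | all other variables) is passed in as a hypothetical callback `conditional_prob` (the next slide shows how to compute it from the Markov blanket):

```python
import random

def gibbs_sample(variables, evidence, conditional_prob, n_steps=10000):
    """Random walk over complete variable assignments (illustrative sketch).

    variables: names of the non-observed (boolean) variables
    evidence: dict mapping each observed variable to its fixed value
    conditional_prob: callback returning Pr(X=true | all other variables)
    """
    # Fix the observed variables; set the rest randomly
    state = dict(evidence)
    for v in variables:
        state[v] = random.random() < 0.5

    true_counts = {v: 0 for v in variables}
    for _ in range(n_steps):
        x = random.choice(variables)        # pick a variable X
        p = conditional_prob(x, state)      # Pr(X=true | all other variables)
        state[x] = random.random() < p      # set X to true with that probability
        for v in variables:                 # tally how often each variable is true
            true_counts[v] += state[v]

    # Frequency with which each variable is true ~ its posterior probability
    return {v: true_counts[v] / n_steps for v in variables}
```

In practice one also discards an initial "burn-in" prefix of the walk, since early states still reflect the random initialization rather than the posterior.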

Markov Blanket Sampling
- How do we calculate Pr(X=true | all other variables)?
- Recall: a variable is independent of all others given its Markov blanket: its parents, its children, and the other parents of its children.
- So the problem becomes calculating Pr(X=true | MB(X)).
- We solve this sub-problem exactly; fortunately, it is easy to solve.
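Written out, the exact solution to this sub-problem is the standard Markov-blanket identity (implied but not spelled out on the slide):

```latex
P(x \mid MB(X)) \;\propto\; P(x \mid parents(X)) \prod_{Y \in children(X)} P(y \mid parents(Y))
```

Each factor is a single conditional-probability-table lookup, and normalizing over the two values of X gives the required Pr(X=true | MB(X)), which is why the sub-problem is easy.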

[Slide diagrams: a Bayes net with nodes Smoking, Lung disease, Heart disease, and Breathing difficulties, stepped through over several animation frames. Let's Simulate!]
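The "Let's Simulate!" step can be sketched concretely for this network. The structure assumed below (Smoking is a parent of Lung disease and Heart disease, which are both parents of Breathing difficulties) matches the node names on the slides, but the CPT numbers are made-up placeholders, not values from the slides:

```python
import random

# Hypothetical CPTs (illustrative numbers only, not from the slides):
P_S = 0.3                                      # P(Smoking = true)
P_L = {True: 0.4, False: 0.05}                 # P(Lung disease = true | Smoking)
P_H = {True: 0.3, False: 0.1}                  # P(Heart disease = true | Smoking)
P_B = {(True, True): 0.9, (True, False): 0.7,  # P(Breathing difficulties = true
       (False, True): 0.6, (False, False): 0.05}  # | Lung disease, Heart disease)

def bern(p):
    return random.random() < p

random.seed(0)

# Observe Breathing difficulties = true; Gibbs-sample S, L, H.
S, L, H = bern(0.5), bern(0.5), bern(0.5)      # random initial assignment
counts = {"S": 0, "L": 0, "H": 0}
n = 20000
for _ in range(n):
    # Pr(S | L, H) is proportional to P(S) P(L|S) P(H|S): S's Markov blanket
    a = P_S * (P_L[True] if L else 1 - P_L[True]) * (P_H[True] if H else 1 - P_H[True])
    b = (1 - P_S) * (P_L[False] if L else 1 - P_L[False]) * (P_H[False] if H else 1 - P_H[False])
    S = bern(a / (a + b))
    # Pr(L | S, H, B=true) is proportional to P(L|S) P(B=true | L, H)
    a = P_L[S] * P_B[(True, H)]
    b = (1 - P_L[S]) * P_B[(False, H)]
    L = bern(a / (a + b))
    # Pr(H | S, L, B=true) is proportional to P(H|S) P(B=true | L, H)
    a = P_H[S] * P_B[(L, True)]
    b = (1 - P_H[S]) * P_B[(L, False)]
    H = bern(a / (a + b))
    counts["S"] += S
    counts["L"] += L
    counts["H"] += H

post = {k: v / n for k, v in counts.items()}   # post["S"] approximates P(S | B=true)
```

Because Breathing difficulties is observed true, the sampled frequency of Smoking ends up above its 0.3 prior, which is exactly the posterior shift the simulation is meant to illustrate.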

Expressive, Scalable and Tractable Techniques for Modeling Activities of Daily Living
Don Patterson, Dieter Fox, Henry Kautz, Matthai Philipose

Our Model of Activities
- Linear temporal ordering of sub-activities.
- Unordered sequence of object touches.
- Each object is required with a probability, P(o) (not shown).
- Optional Gaussian timing constraint.

Expressive, Scalable, Tractable
- Expressive: a general and intuitive way to specify activities.
- Scalable: we mine these models from the web.
- Tractable: we convert the models to Dynamic Bayesian Networks and reason in real time using stochastic Monte Carlo techniques (particle filters).
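A minimal bootstrap particle filter, the kind of stochastic Monte Carlo technique the slide refers to, can be sketched on a toy 1-D tracking problem. The transition and observation models below are illustrative stand-ins; the slides' DBN models of activities are much richer:

```python
import math
import random

def particle_filter(observations, n_particles=1000):
    """Bootstrap particle filter for a toy 1-D random-walk state.

    Toy model (assumed, not from the slides):
      transition:  x' = x + Normal(0, 1)
      observation: y  = x + Normal(0, 1)
    Returns the posterior-mean state estimate after each observation.
    """
    particles = [random.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the transition model
        particles = [x + random.gauss(0.0, 1.0) for x in particles]
        # 2. Weight each particle by the observation likelihood N(y; x, 1)
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3. Resample particles in proportion to their weights
        particles = random.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

The propagate/weight/resample loop runs in time linear in the number of particles per observation, which is what makes this kind of inference feasible in real time.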

Short Demo Long Demo