Tea-Time-Talks: Every Friday, 3:30 pm, ICS 432

We need speakers (you)! Please volunteer. Philosophy: a TTT (tea-time-talk) should take approximately 15 minutes (extract the essence only).

Embedded HMMs. Radford Neal, Matt Beal, Sam Roweis (University of Toronto).

Question: Can we efficiently sample state sequences in non-linear state space models with hidden variables (e.g., a non-linear Kalman filter)?

Graphical Model (figure): a Markov chain of continuous-domain hidden states, each with an observed variable attached.
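To make the setting concrete, here is a minimal sketch of a non-linear Gaussian state space model of this kind. The specific transition and observation functions, the noise levels, and the names simulate, log_trans and log_emit are illustrative assumptions, not taken from the slides or the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, R = 0.5, 1.0  # transition and observation noise std. dev. (assumed values)

def simulate(T):
    """Toy non-linear state space model:
       x_t = sin(x_{t-1}) + N(0, Q^2)   (hidden, continuous)
       y_t = x_t^2 / 2    + N(0, R^2)   (observed)"""
    xs, ys = np.zeros(T), np.zeros(T)
    x = rng.normal()
    for t in range(T):
        x = np.sin(x) + Q * rng.normal()
        xs[t] = x
        ys[t] = 0.5 * x**2 + R * rng.normal()
    return xs, ys

def log_trans(x_prev, x):
    """log p(x_t | x_{t-1}) for the toy model."""
    return -0.5 * ((x - np.sin(x_prev)) / Q) ** 2 - np.log(Q * np.sqrt(2 * np.pi))

def log_emit(x, y):
    """log p(y_t | x_t) for the toy model."""
    return -0.5 * ((y - 0.5 * x**2) / R) ** 2 - np.log(R * np.sqrt(2 * np.pi))
```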

Inference: One option is Gibbs sampling. However, if the random variables are tightly coupled, the Markov chain mixes very slowly, because a large number of variables would have to change simultaneously for the sampler to move far.
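A hedged sketch of such a single-site sampler for the toy model above (Metropolis-within-Gibbs updates standing in for exact Gibbs conditionals; log_trans, log_emit and rng come from the earlier sketch). Each x_t is updated while its neighbours stay fixed, which is exactly the move that becomes ineffective when the chain is tightly coupled:

```python
def single_site_sweep(x, y, step=0.3):
    """One Metropolis-within-Gibbs sweep over the state sequence.
    Each x_t is updated conditional on x_{t-1}, x_{t+1} and y_t
    (initial-state prior omitted for brevity)."""
    T = len(x)
    for t in range(T):
        def local_logp(v):
            lp = log_emit(v, y[t])
            if t > 0:
                lp += log_trans(x[t - 1], v)
            if t + 1 < T:
                lp += log_trans(v, x[t + 1])
            return lp
        prop = x[t] + step * rng.normal()
        if np.log(rng.uniform()) < local_logp(prop) - local_logp(x[t]):
            x[t] = prop
    return x
```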

Idea: Embed an HMM!
1. Choose a pool distribution ρ_t at every time slice t.
   i) Define a forward kernel R_t and a backward kernel R̃_t such that ρ_t(x) R_t(x → x') = ρ_t(x') R̃_t(x' → x), i.e. the backward kernel is the reversal of the forward kernel with respect to ρ_t (note: neither kernel needs to satisfy detailed balance on its own). The kernels will be used to sample K states embedded in the continuous domain of x_t.
2. Sample K states at each time slice t as follows:
   i) Start from the current state x_t of the current state sequence.
   ii) Pick a number J uniformly at random between 0 and K-1.
   iii) Apply the forward kernel J times, starting at x_t, and apply the backward kernel K-1-J times, also starting at x_t; together with x_t itself this gives a pool of K states.
3. Sample a new state sequence from the `embedded HMM' over these pools using "forward-backward".
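A hedged sketch of the pool construction at a single time slice. For simplicity a single, time-independent pool distribution rho is assumed, and the forward and backward kernels are taken to be the same random-walk Metropolis kernel that leaves rho invariant; a reversible kernel is its own reversal, so this is the simplest valid special case, whereas the slides allow general non-reversible kernel pairs. All other names are carried over from the earlier sketches:

```python
def rw_metropolis_kernel(x, log_rho, step=1.0):
    """Random-walk Metropolis step leaving rho invariant; being
    reversible, it can serve as both the forward and backward kernel."""
    prop = x + step * rng.normal()
    if np.log(rng.uniform()) < log_rho(prop) - log_rho(x):
        return prop
    return x

def make_pool(x_cur, K, log_rho):
    """Pool of K candidate states at one time slice: keep the current
    state, fill the slots after it with J forward-kernel applications
    and the slots before it with K-1-J backward-kernel applications,
    where J is uniform on {0, ..., K-1}."""
    J = rng.integers(K)              # number of forward applications
    pos = K - 1 - J                  # slot occupied by the current state
    pool = [0.0] * K
    pool[pos] = x_cur
    x = x_cur
    for k in range(pos + 1, K):      # forward sweep (J states)
        x = rw_metropolis_kernel(x, log_rho)
        pool[k] = x
    x = x_cur
    for k in range(pos - 1, -1, -1): # backward sweep (K-1-J states)
        x = rw_metropolis_kernel(x, log_rho)
        pool[k] = x
    return np.array(pool)
```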

Sampling from the eHMM. REPEAT:
1. Starting at the current state sequence, build a pool of K states at each time slice: keep the current state and generate K-1 more by applying the forward kernel J times (J chosen uniformly at random) and the backward kernel K-J-1 times. This defines the embedded (discrete) state space.
2. Sample a new state sequence using the forward-backward algorithm from the resulting distribution over pool states; the hidden states now take only K discrete values at each time slice, so exact HMM inference applies. Proof of detailed balance: see the paper.
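A hedged sketch of step 2: forward filtering followed by backward sampling over the pools. The node weight p(y_t | x) / rho(x), which divides out the pool density, follows my reading of the paper's embedded-HMM construction; a single time-independent rho and a flat initial-state prior are assumed for brevity, logsumexp comes from scipy, and log_trans, log_emit, rng are carried over from the earlier sketches:

```python
from scipy.special import logsumexp

def sample_index(logw):
    """Draw an index proportionally to exp(logw)."""
    p = np.exp(logw - logsumexp(logw))
    return rng.choice(len(p), p=p / p.sum())

def ffbs_over_pools(pools, y, log_rho):
    """Forward filtering, backward sampling on the embedded HMM.
    Node potential: p(y_t | x) / rho(x); edge potential: p(x_t | x_{t-1}).
    The potentials are unnormalized, which is fine because every
    filtering step is renormalized (the 'undirected graphical model'
    view mentioned on the next slide)."""
    T, K = pools.shape
    log_node = np.array([[log_emit(pools[t, k], y[t]) - log_rho(pools[t, k])
                          for k in range(K)] for t in range(T)])
    alpha = np.zeros((T, K))
    alpha[0] = log_node[0]                     # flat prior on x_1 assumed
    for t in range(1, T):
        edge = np.array([[log_trans(pools[t - 1], pools[t, j])
                          for j in range(K)] for i in range(K)])
        alpha[t] = log_node[t] + logsumexp(alpha[t - 1][:, None] + edge, axis=0)
    idx = np.zeros(T, dtype=int)
    idx[-1] = sample_index(alpha[-1])
    for t in range(T - 2, -1, -1):
        back = np.array([log_trans(pools[t, i], pools[t + 1, idx[t + 1]])
                         for i in range(K)])
        idx[t] = sample_index(alpha[t] + back)
    return pools[np.arange(T), idx]
```

One outer iteration of the sampler would rebuild the pools from the current sequence with make_pool at every time slice and then draw a replacement sequence with ffbs_over_pools; repeating this gives the Markov chain whose detailed balance the paper proves.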

Note: The probabilities of the embedded HMM are not normalized, so it should be treated as an undirected graphical model; the forward-backward recursions work with the unnormalized potentials directly.