Bayesian Belief Propagation

Today

Next week

Marginalization

Suppose you have some joint probability P(x, y) involving observations y and hidden states x. Suppose you're at x1 and you want to find the marginal probability there, given the observations. Normally, you would have to compute

P(x1) = sum_{x2} sum_{x3} P(x1, x2, x3, y1, y2, y3)

For N other hidden nodes, each with M states, that will take M^N additions.
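To make that cost concrete, here is a minimal brute-force sketch in NumPy; the joint table, the sizes M and N, and the variable names are made up for illustration, not taken from the slides.

```python
import itertools
import numpy as np

M, N = 4, 6                      # M states per hidden node, N other hidden nodes
rng = np.random.default_rng(0)

# Joint over the hidden states with the observations y already fixed,
# stored as an (N+1)-dimensional table (unnormalized, made up).
joint = rng.random((M,) * (N + 1))

# Marginal at x1 by summing over every configuration of the other N nodes:
# M**N terms for each value of x1.
p_x1 = np.zeros(M)
for x1 in range(M):
    for rest in itertools.product(range(M), repeat=N):   # M**N configurations
        p_x1[x1] += joint[(x1,) + rest]

p_x1 /= p_x1.sum()
print(p_x1)
```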

Special case: Markov network

But suppose the joint probability has a special structure, shown by this Markov network:

[figure: chain x1 - x2 - x3, with an observation yi attached to each hidden node xi]

Then this sum,

P(x1) = sum_{x2} sum_{x3} P(x1, x2, x3, y1, y2, y3)

can be computed with N M^2 additions, as follows...

Derivation of belief propagation

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

P(x1) = sum_{x2} sum_{x3} P(x1, x2, x3, y1, y2, y3)

The posterior factorizes

P(x1) = sum_{x2} sum_{x3} P(x1, x2, x3, y1, y2, y3)
      = sum_{x2} sum_{x3} Φ(x1, y1) Φ(x2, y2) Ψ(x1, x2) Φ(x3, y3) Ψ(x2, x3)

x1_MMSE = mean_{x1} Φ(x1, y1) sum_{x2} Φ(x2, y2) Ψ(x1, x2) sum_{x3} Φ(x3, y3) Ψ(x2, x3)

Here Φ(xi, yi) is the local evidence at node i and Ψ(xi, xj) is the compatibility between neighboring hidden nodes.

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

Propagation rules

P(x1) = sum_{x2} sum_{x3} P(x1, x2, x3, y1, y2, y3)
      = sum_{x2} sum_{x3} Φ(x1, y1) Φ(x2, y2) Ψ(x1, x2) Φ(x3, y3) Ψ(x2, x3)

Pushing the sums inside the products:

P(x1) = Φ(x1, y1) sum_{x2} Φ(x2, y2) Ψ(x1, x2) sum_{x3} Φ(x3, y3) Ψ(x2, x3)

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

Propagation rules

P(x1) = Φ(x1, y1) sum_{x2} Φ(x2, y2) Ψ(x1, x2) sum_{x3} Φ(x3, y3) Ψ(x2, x3)

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]
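A small numerical sketch of this rearrangement on the 3-node chain, with random Φ and Ψ tables (all names and values illustrative): pushing the sums inside gives the same marginal as brute-force summation.

```python
import numpy as np

M = 3                                   # states per hidden node
rng = np.random.default_rng(1)

# Local evidence Phi(x_i, y_i) for fixed observations, and pairwise
# compatibilities Psi(x_i, x_j), as nonnegative tables (made up).
phi1, phi2, phi3 = rng.random(M), rng.random(M), rng.random(M)
psi12, psi23 = rng.random((M, M)), rng.random((M, M))

# Brute force: sum over x2, x3 of Phi1 Phi2 Psi12 Phi3 Psi23.
brute = np.zeros(M)
for x1 in range(M):
    for x2 in range(M):
        for x3 in range(M):
            brute[x1] += (phi1[x1] * phi2[x2] * psi12[x1, x2]
                          * phi3[x3] * psi23[x2, x3])

# Sums pushed inside: messages passed along the chain toward x1.
m3_to_2 = psi23 @ phi3                  # sum_x3 Psi(x2,x3) Phi(x3,y3)
m2_to_1 = psi12 @ (phi2 * m3_to_2)      # sum_x2 Psi(x1,x2) Phi(x2,y2) m3->2(x2)
bp = phi1 * m2_to_1

print(np.allclose(brute, bp))           # True: same marginal, far fewer additions
```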

Belief and message update rules

The belief at node j is the product of its local evidence and the messages arriving from its neighbors:

b_j(x_j) ∝ Φ(x_j, y_j) prod_{k in N(j)} m_{k -> j}(x_j)

The message from node j to node i sums over x_j the compatibility with x_i, the local evidence at j, and the messages into j from its other neighbors:

m_{j -> i}(x_i) = sum_{x_j} Ψ(x_i, x_j) Φ(x_j, y_j) prod_{k in N(j), k != i} m_{k -> j}(x_j)

Belief propagation updates

In vector form for discrete states, the message update is a matrix multiplication of the compatibility matrix Ψ with the elementwise (.*) product of the local evidence vector and the other incoming messages:

m_{j -> i} = Ψ_{ij} * ( Φ_j .* m_{k -> j} .* m_{l -> j} )
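A minimal NumPy sketch of that vectorized update, assuming discrete states; the function name, argument names, and the normalization at the end are illustrative choices, not part of the slides.

```python
import numpy as np

def send_message(psi_ij, phi_j, incoming):
    """Message from node j to node i.

    psi_ij   : (n_i, n_j) compatibility matrix Psi(x_i, x_j)
    phi_j    : (n_j,) local evidence Phi(x_j, y_j)
    incoming : list of (n_j,) messages into j from its other neighbors
    """
    prod = phi_j.copy()
    for m in incoming:              # elementwise ('.*') product
        prod = prod * m
    msg = psi_ij @ prod             # matrix multiply sums over x_j
    return msg / msg.sum()          # normalize to avoid under/overflow
```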

Simple example For the 3-node example, worked out in detail, see Sections 2.0, 2.1 of:

Optimal solution in a chain or tree: Belief Propagation

The "do the right thing" Bayesian algorithm. For Gaussian random variables over time, it reduces to the Kalman filter; for hidden Markov models, to the forward/backward algorithm (whose MAP variant is the Viterbi algorithm).

Other loss functions The above rules let you compute the marginal probability at a node. From that, you can compute the mean estimate. But you can also use a related algorithm to compute the MAP estimate for x1.

MAP estimate for a chain or a tree

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

The posterior factorizes

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

Propagation rules

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]
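A hedged sketch of the max-product (MAP) variant these slides describe, on the same 3-node chain: each sum is replaced by a max, and argmax pointers are kept so the MAP states can be read back. The random tables are illustrative.

```python
import numpy as np

M = 3
rng = np.random.default_rng(1)
phi1, phi2, phi3 = rng.random(M), rng.random(M), rng.random(M)
psi12, psi23 = rng.random((M, M)), rng.random((M, M))

# Max-product messages toward x1 (sums replaced by maxes).
m3 = (psi23 * phi3).max(axis=1)                 # max_x3 Psi(x2,x3) Phi(x3)
arg3 = (psi23 * phi3).argmax(axis=1)            # best x3 for each x2
m2 = (psi12 * (phi2 * m3)).max(axis=1)          # max_x2 Psi(x1,x2) Phi(x2) m3(x2)
arg2 = (psi12 * (phi2 * m3)).argmax(axis=1)     # best x2 for each x1

x1_map = int(np.argmax(phi1 * m2))              # MAP state at x1
x2_map = int(arg2[x1_map])                      # backtrack the argmax pointers
x3_map = int(arg3[x2_map])
print(x1_map, x2_map, x3_map)
```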

Using conditional probabilities instead of compatibility functions

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

By Bayes' rule.

Writing it as a factorization

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

By the fact that conditioning on x1 makes y1 independent of x2, x3, y2, and y3.

Writing it as a factorization

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

Now use Bayes' rule (with x2) for the rightmost term.

Writing it as a factorization

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

From the Markov structure, conditioning on x1 and x2 is the same as conditioning on x2.

Writing it as a factorization

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

Conditioning on x2 makes y2 independent of x3 and y3.

Writing it as a factorization

[figure: chain x1 - x2 - x3, with observations y1, y2, y3]

The same operations, once more, with the far-right term.
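Assembling the steps above for the 3-node chain, the factorization being built is (a sketch of the derivation the slides describe):

```latex
\begin{align*}
P(x_1, x_2, x_3, y_1, y_2, y_3)
  &= P(y_1, x_2, y_2, x_3, y_3 \mid x_1)\, P(x_1)
     && \text{Bayes' rule} \\
  &= P(y_1 \mid x_1)\, P(x_2, y_2, x_3, y_3 \mid x_1)\, P(x_1)
     && \text{$y_1$ indep.\ of the rest, given $x_1$} \\
  &= P(y_1 \mid x_1)\, P(y_2, x_3, y_3 \mid x_1, x_2)\, P(x_2 \mid x_1)\, P(x_1)
     && \text{Bayes' rule with $x_2$} \\
  &= P(y_1 \mid x_1)\, P(y_2, x_3, y_3 \mid x_2)\, P(x_2 \mid x_1)\, P(x_1)
     && \text{Markov structure} \\
  &= P(y_1 \mid x_1)\, P(y_2 \mid x_2)\, P(x_3, y_3 \mid x_2)\, P(x_2 \mid x_1)\, P(x_1)
     && \text{$y_2$ indep.\ of $x_3, y_3$, given $x_2$} \\
  &= P(y_1 \mid x_1)\, P(y_2 \mid x_2)\, P(y_3 \mid x_3)\, P(x_3 \mid x_2)\, P(x_2 \mid x_1)\, P(x_1)
     && \text{same steps on the last term}
\end{align*}
```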

A toy problem 10 nodes. 2 states for each node. Local evidence as shown below.
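A minimal sum-product sketch for a 10-node, 2-state chain like this one; the local evidence and the smoothness-style compatibility matrix below are made up, since the slide's values came from a figure.

```python
import numpy as np

N, M = 10, 2                                    # 10 nodes, 2 states each
rng = np.random.default_rng(2)
phi = rng.random((N, M))                        # made-up local evidence Phi(x_i, y_i)
psi = np.array([[0.9, 0.1],                     # assumed smoothness-style Psi(x_i, x_j)
                [0.1, 0.9]])

# Forward and backward messages along the chain.
fwd = np.ones((N, M))                           # fwd[i]: message into node i from node i-1
bwd = np.ones((N, M))                           # bwd[i]: message into node i from node i+1
for i in range(1, N):
    m = psi.T @ (phi[i - 1] * fwd[i - 1])
    fwd[i] = m / m.sum()
for i in range(N - 2, -1, -1):
    m = psi @ (phi[i + 1] * bwd[i + 1])
    bwd[i] = m / m.sum()

# Belief (marginal, after normalization) at each node.
belief = phi * fwd * bwd
belief /= belief.sum(axis=1, keepdims=True)
print(belief.round(3))
```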

Classic 1976 paper

Relaxation labelling

[figure: two panels labeled "Belief propagation" and "Relaxation labelling"]

Yair’s motion example

Yair’s figure/ground example