

Dynamic Probabilistic Relational Models
Paper by: Sumit Sanghai, Pedro Domingos, Daniel Weld
Presented by: Anna Yershova
Presentation slides are adapted from slides by Lise Getoor, Eyal Amir, and Pedro Domingos

The problem: how to represent and model uncertain sequential phenomena?

Limitations of DBNs
How to represent:
– Classes of objects and multiple instances of a class
– Multiple kinds of relations
– Relations evolving over time
Example: early fault detection in manufacturing, with complex and diverse relations evolving over the manufacturing process.

Fault detection in manufacturing
[Slide figure: a manufacturing state with several PLATE objects (weight, shape, color), a BRACKET (weight, shape, color), and a BOLT (weight, size, type), connected by welded-to and bolted-to relations, plus an ACTION node with an action attribute.]

DPRM with AU Semantics
[Slide figure: classes Strain, Patient, and Contact with their attributes; objects s1, s2 (strains), p1–p3 (patients), c1–c3 (contacts). A two-time-slice PRM, together with the relational skeletons of consecutive time slices, defines a probability distribution over completions I.]

Particle Filtering

The Objective of PF
The objective of the particle filter is to compute the conditional (filtering) distribution of the current state given the observations so far. Computing this analytically is expensive; the particle filter gives us an approximate computational technique.

Particle Filter Algorithm
Create particles as samples from the initial state distribution p(A1, B1, C1). For i going from 1 to N:
– Update each particle using the state update equation.
– Compute weights for each particle using the observation value.
– (Optionally) resample particles.
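The loop above can be sketched as a minimal bootstrap particle filter. `init_sample`, `transition`, and `likelihood` are placeholder names standing in for the initial distribution p(A1, B1, C1), the state update equation, and the observation model; they are illustrative, not code from the paper.

```python
import random

def particle_filter(init_sample, transition, likelihood, observations,
                    n=1000, rng=random):
    """Minimal bootstrap particle filter sketch (names illustrative).

    init_sample() draws from the initial state distribution;
    transition(x) samples the next state from the state update equation;
    likelihood(x, obs) scores a state against an observation.
    """
    particles = [init_sample() for _ in range(n)]
    for obs in observations:
        # 1. Prediction: propagate each particle through the state update.
        particles = [transition(x) for x in particles]
        # 2. Weighting: score each particle against the observation.
        weights = [likelihood(x, obs) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. Resampling: redraw particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n)
    return particles
```

For example, with a static scalar state and repeated observations of 2.0, the particle cloud concentrates near 2.0 after a few steps.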

Initial State Distribution p(A1, B1, C1)

Prediction
The state at time t is obtained from the state at time t-1 via the state update equation: (At, Bt, Ct) = f(At-1, Bt-1, Ct-1)

Compute Weights
[Slide figure: the particle set over (At, Bt, Ct) before and after weighting by the observation likelihood.]

Resample
[Slide figure: particles (At, Bt, Ct) redrawn in proportion to their weights.]
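The resampling step is commonly implemented with systematic resampling, which uses a single uniform draw and evenly spaced pointers through the cumulative weights; a sketch (a standard technique, not code from the paper):

```python
import random

def systematic_resample(particles, weights, rng=random):
    """Systematic resampling: one uniform offset, then evenly spaced
    pointers through the cumulative weights.  Lower variance than
    drawing each index independently."""
    n = len(particles)
    step = sum(weights) / n
    u = rng.random() * step            # single random draw for all n picks
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:                 # advance to the particle whose
            i += 1                     # cumulative weight covers u
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out
```

With uniform weights every particle survives exactly once; with all mass on one particle, only that particle is replicated.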

Rao-Blackwellised PF

Another Issue
Rao-Blackwellising the relational attributes can vastly reduce the size of the state space. However, if the relational skeleton contains a large number of objects and relations, storing and updating all the requisite probabilities can still become quite expensive. The solution is to exploit particular knowledge of the domain.

Abstraction trees
Replace the vector of probabilities with a tree structure:
– leaves represent probabilities for entire sets of objects
– internal nodes represent combinations of the propositional attributes
[Slide figure: abstraction tree for P(Part1.mate | Bolt(Part1, Part2)), placing probability 1 - pf on the true mate and a uniform distribution over the rest of the objects.]
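A minimal sketch of the abstraction-tree idea for the distribution in the figure, using hypothetical names: instead of storing one probability per candidate mate, store the mass 1 - pf for the intended mate plus a single "uniform over the rest" leaf that covers all other objects with one number.

```python
def mate_distribution(intended, candidates, p_f):
    """Sketch of P(Part1.mate | Bolt(Part1, Part2)) as an abstraction
    tree with two leaves: the intended mate, and one shared leaf for
    all remaining candidates.  Names are illustrative."""
    rest = [c for c in candidates if c != intended]
    def prob(obj):
        if obj == intended:
            return 1.0 - p_f          # the intended mate keeps most mass
        # One shared leaf: uniform over all other objects, stored as a
        # single number instead of len(rest) separate entries.
        return p_f / len(rest) if obj in rest else 0.0
    return prob
```

The per-object vector of probabilities collapses to two stored values, which is what makes storing and updating these distributions cheap.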

Experiments
[Slide figure: the manufacturing domain from before, with PLATE, BRACKET, and BOLT objects, their attributes (weight, shape, color, size, type), welded-to and bolted-to relations, and an ACTION node.]
Dom(ACTION.action) = {paint, drill, polish, change propositional attribute, change relational attribute}

Fault Model Used
With probability 1 - pf an action produces the intended effect; with probability pf one of several faults occurs:
– Painting not being completed
– Wrong color used
– Bolting the wrong object
– Welding the wrong object
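As an illustration, sampling the fault model for a paint action might look like this; the value of p_f, the color domain, and the even split between the two paint faults are assumptions for the sketch, not values from the paper.

```python
import random

def paint(part, color, p_f=0.1, rng=random):
    """Hypothetical sketch of the fault model for the paint action:
    with probability 1 - p_f the part gets the intended color; with
    probability p_f a fault occurs (painting not completed, or a
    wrong color applied).  Values are illustrative."""
    part = dict(part)                      # do not mutate the input
    if rng.random() >= p_f:
        part["color"] = color              # intended effect
    elif rng.random() < 0.5:
        pass                               # fault: painting not completed
    else:
        wrong = [c for c in ("red", "green", "blue") if c != color]
        part["color"] = rng.choice(wrong)  # fault: wrong color used
    return part
```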

Observation Model Used
With probability 1 - po the true value of the attribute is observed; with probability po an incorrect value is observed.
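A one-function sketch of this observation model (the function name and the choice of a uniform distribution over wrong values are illustrative assumptions):

```python
import random

def observe(true_value, domain, p_o=0.1, rng=random):
    """Sketch of the observation model: with probability 1 - p_o the
    true attribute value is reported; with probability p_o a wrong
    value, drawn uniformly from the rest of the domain, is reported."""
    if rng.random() >= p_o:
        return true_value
    return rng.choice([v for v in domain if v != true_value])
```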

Measure of Accuracy
K-L divergence between the true and the approximated distributions: D(P||Q) = Σx P(x) log(P(x)/Q(x)). Computing it exactly is infeasible, so an approximation is needed.

Approximation of the K-L Divergence
Expanding D(P||Q) = EP[log P] - EP[log Q], the first term does not depend on the approximating distribution Q. Since we are interested only in measuring the differences in performance between approximation methods, the first term is eliminated. The remaining term is estimated with S samples from the true distribution (S = 10,000 in the experiments).
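The sampling approximation can be sketched as follows; `sample_from_p` and `log_q` are placeholder names for the true distribution and the log-probability of the approximation under comparison:

```python
def kl_q_term(sample_from_p, log_q, s=10_000):
    """Monte Carlo estimate of the method-dependent part of the K-L
    divergence.  D(P||Q) = E_P[log P] - E_P[log Q]; the first term is
    identical for every approximation method being compared, so only
    -E_P[log Q] is estimated, averaging log Q over S samples drawn
    from the true distribution P."""
    return -sum(log_q(sample_from_p()) for _ in range(s)) / s
```

For instance, if Q assigns probability 1/2 to every sample, the estimate is exactly log 2, regardless of the samples drawn.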

Experimental Results
Abstraction trees reduced RBPF's time and memory by a factor of 30 to 70. Per particle, RBPF took on average six times longer and 11 times the memory of PF. However, note that we ran PF with 40 times more particles than RBPF. Thus, RBPF uses less time and memory than PF overall, while performing far better in accuracy.

Conclusions and future work
– Relaxing the assumptions made
– Further scaling up inference
– Studying the properties of the abstraction trees
– Handling continuous variables
– Learning DPRMs
– Applying them to real-world problems