Complexity Analysis of Variable Elimination (Probabilistic Graphical Models: Inference)

Presentation transcript:

Probabilistic Graphical Models, Inference: Variable Elimination, Complexity Analysis

Eliminating Z

To eliminate a variable Z, multiply together all factors whose scope contains Z, then sum Z out of the resulting product:

ψ_k = Π_{φ : Z ∈ Scope[φ]} φ        τ_k = Σ_Z ψ_k

The new factor τ_k replaces the factors that were multiplied in. The next two slides recall the cost of these two operations.

Reminder: Factor Product

φ1(A,B):              φ2(B,C):
      b1    b2              c1    c2
a1   0.5   0.8        b1   0.5   0.7
a2   0.1   0          b2   0.1   0.2
a3   0.3   0.9

ψ(A,B,C) = φ1(A,B) · φ2(B,C):

         c1                 c2
a1 b1    0.5·0.5 = 0.25     0.5·0.7 = 0.35
a1 b2    0.8·0.1 = 0.08     0.8·0.2 = 0.16
a2 b1    0.1·0.5 = 0.05     0.1·0.7 = 0.07
a2 b2    0·0.1   = 0        0·0.2   = 0
a3 b1    0.3·0.5 = 0.15     0.3·0.7 = 0.21
a3 b2    0.9·0.1 = 0.09     0.9·0.2 = 0.18

Cost: producing a product factor ψ_k with N_k = |Val(X_k)| entries from m_k factors takes (m_k - 1)·N_k multiplications, since each entry is a product of m_k numbers.
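As a quick sketch (not from the original slides), the product above can be computed with NumPy broadcasting; the names phi1, phi2, and psi are hypothetical:

import numpy as np

phi1 = np.array([[0.5, 0.8],
                 [0.1, 0.0],
                 [0.3, 0.9]])   # phi1(A,B): rows a1..a3, cols b1..b2
phi2 = np.array([[0.5, 0.7],
                 [0.1, 0.2]])   # phi2(B,C): rows b1..b2, cols c1..c2

# Factor product: align the two scopes by inserting broadcast axes,
# then multiply elementwise, one multiplication per entry of psi.
psi = phi1[:, :, None] * phi2[None, :, :]   # psi(A,B,C), shape (3, 2, 2)

print(psi[0, 0, 0])   # 0.25 (= 0.5 * 0.5)
print(psi[2, 1, 1])   # 0.18 (= 0.9 * 0.2, up to float rounding)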

Reminder: Factor Marginalization

ψ(A,B,C):
         c1      c2
a1 b1   0.25    0.35
a1 b2   0.08    0.16
a2 b1   0.05    0.07
a2 b2   0       0
a3 b1   0.15    0.21
a3 b2   0.09    0.18

τ(A,C) = Σ_B ψ(A,B,C):
      c1                   c2
a1   0.25 + 0.08 = 0.33    0.35 + 0.16 = 0.51
a2   0.05 + 0    = 0.05    0.07 + 0    = 0.07
a3   0.15 + 0.09 = 0.24    0.21 + 0.18 = 0.39

Cost: ~N_k additions, since each entry of ψ_k is accumulated into exactly one entry of τ_k.
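Continuing the sketch, marginalization is a reduction over the B axis; the array psi below simply restates the table above:

import numpy as np

# psi(A,B,C) from the factor-product reminder, shape (A=3, B=2, C=2)
psi = np.array([[[0.25, 0.35], [0.08, 0.16]],
                [[0.05, 0.07], [0.00, 0.00]],
                [[0.15, 0.21], [0.09, 0.18]]])

# Marginalize B out: tau(A,C) = sum_B psi(A,B,C). Each of the 12 entries
# of psi is accumulated into one of the 6 entries of tau (~N_k additions).
tau = psi.sum(axis=1)

print(tau)   # [[0.33 0.51] [0.05 0.07] [0.24 0.39]]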

Complexity of Variable Elimination

Start with m factors:
- for a Bayesian network, m ≤ n (one CPD per variable);
- for a Markov network, m can be larger than n.
Each elimination step generates at most one new factor, and there are at most n elimination steps, so the total number of factors ever created is m* ≤ m + n.

Complexity of Variable Elimination

Let N = max_k N_k be the size of the largest factor generated.
- Product operations: Σ_k (m_k - 1)·N_k ≤ N·Σ_k (m_k - 1) ≤ N·m*, since each of the m* factors is multiplied into a product at most once.
- Sum operations: Σ_k N_k ≤ N·n, since at most n variables are summed out.
Total work is therefore linear in N and m*.
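As a worked instance on the A, B, C example above (an illustration, not from the slides): eliminating B multiplies m_k = 2 factors into ψ(A,B,C), which has N_k = 3·2·2 = 12 entries, costing (2 - 1)·12 = 12 multiplications; summing B out then costs about 12 additions, since each of the 12 entries of ψ is accumulated into one of the 6 entries of τ(A,C).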

Complexity of Variable Elimination

Total work is linear in N and m*, but how large can N be?
N_k = |Val(X_k)| = O(d^{r_k}), where
- d = max_i |Val(X_i)| is the largest variable cardinality, and
- r_k = |X_k| is the cardinality of the scope of the k-th factor.
So the size of a factor is exponential in the number of variables in its scope.
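A minimal sketch of this blow-up, assuming all variables are binary (d = 2); factor_size and the variable names are illustrative, not from the course:

from math import prod

def factor_size(scope, card):
    """N_k = |Val(X_k)|: a factor's table has one entry per joint
    assignment, i.e. the product of the cardinalities in its scope."""
    return prod(card[v] for v in scope)

card = {v: 2 for v in "ABCDEFGHIJ"}     # ten binary variables, d = 2
print(factor_size("AB", card))          # 4    = 2^2
print(factor_size("ABCDE", card))       # 32   = 2^5
print(factor_size("ABCDEFGHIJ", card))  # 1024 = 2^10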

Complexity Example

[Figure: the "student" Bayesian network over C (Coherence), D (Difficulty), I (Intelligence), S (SAT), G (Grade), L (Letter), J (Job), and H (Happy); the factor sizes produced by variable elimination on this network depend on the order in which the variables are eliminated.]

Complexity and Elimination Order

Eliminate: G
[Figure: the same student network, highlighting the elimination of G; multiplying the factors that mention G produces an intermediate factor whose scope contains G and all of its neighbors, here D, I, L, H, and J.]

Complexity and Elimination Order

[Figure: a star network, with A connected to B1, B2, B3, …, Bk by pairwise factors φ(A, Bi).]

Eliminate A first: the product of all k factors has scope {A, B1, …, Bk}, so the intermediate factor has O(d^(k+1)) entries, exponential in k.

Eliminate the Bi's first: each step multiplies only the factors over {A, Bi}, so no intermediate factor ever has more than two variables in its scope, and the total work is linear in k.
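The effect of the ordering can be checked by simulating elimination over factor scopes alone, ignoring the numeric entries. This is an illustrative sketch; max_factor_size and the star construction are assumptions of this example, not code from the lecture:

from math import prod

def max_factor_size(scopes, order, card):
    """Run variable elimination over scopes only and report the
    size of the largest intermediate factor it creates."""
    scopes = [set(s) for s in scopes]
    largest = 0
    for z in order:
        involved = [s for s in scopes if z in s]      # factors mentioning z
        scopes = [s for s in scopes if z not in s]    # remove them
        psi = set().union(*involved)                  # scope of their product
        largest = max(largest, prod(card[v] for v in psi))
        scopes.append(psi - {z})                      # tau: z summed out
    return largest

k = 10
bs = [f"B{i}" for i in range(1, k + 1)]
card = {"A": 2, **{b: 2 for b in bs}}                 # all binary
star = [{"A", b} for b in bs]                         # pairwise factors phi(A, Bi)

print(max_factor_size(star, ["A"] + bs, card))        # 2048 = 2^(k+1): A first couples all Bi
print(max_factor_size(star, bs + ["A"], card))        # 4: Bi's first keep scopes at {A, Bi}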

Summary

- The complexity of variable elimination is linear in the size of the model (the number of factors and variables) and in the size of the largest factor generated.
- The size of a factor is exponential in the size of its scope.
- The complexity of the algorithm therefore depends heavily on the elimination ordering, which determines the scopes of the intermediate factors.
