CSE 473: Artificial Intelligence Spring 2012 Bayesian Networks Dan Weld Many slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer.



Outline  Probabilistic models (and inference)  Bayesian Networks (BNs)  Independence in BNs  Efficient Inference in BNs  Learning

Bayes’ Nets: Big Picture  Two problems with using full joint distribution tables as our probabilistic models:  Unless there are only a few variables, the joint is WAY too big to represent explicitly  Hard to learn (estimate) anything empirically about more than a few variables at a time  Bayes’ nets: a technique for describing complex joint distributions (models) using simple, local distributions (conditional probabilities)  More properly called graphical models  We describe how variables locally interact  Local interactions chain together to give global, indirect interactions

Bayes’ Net Semantics  Formally:  A set of nodes, one per variable X  A directed, acyclic graph  A CPT for each node  CPT = “Conditional Probability Table”  Collection of distributions over X, one for each combination of the parents’ values A_1, …, A_n  A Bayes net = Topology (graph) + Local Conditional Probabilities
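To make “topology + local CPTs” concrete, here is a minimal sketch (not from the slides; the two-variable net, value names, and dictionary layout are illustrative assumptions) of a Bayes net as a plain Python data structure: each node stores its parent list and a CPT keyed by its parents’ values.

```python
# Minimal Bayes-net representation: topology (parents) + local CPTs.
# Assumed convention: cpt[node][parent_values][value] = P(node = value | parents),
# where parent_values is a tuple ordered like parents[node].

parents = {
    "Rain": [],              # root node: no parents
    "Traffic": ["Rain"],     # Traffic depends on Rain
}

cpt = {
    "Rain":    {(): {True: 0.1, False: 0.9}},
    "Traffic": {(True,):  {True: 0.8, False: 0.2},
                (False,): {True: 0.3, False: 0.7}},
}

def local_prob(node, value, assignment):
    """P(node = value | parents' values read from a full assignment dict)."""
    key = tuple(assignment[p] for p in parents[node])
    return cpt[node][key][value]

print(local_prob("Traffic", True, {"Rain": True}))  # 0.8
```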

Hidden Markov Models  An HMM is defined by:  Initial distribution: P(X_1)  Transitions: P(X_t | X_{t-1})  Emissions: P(E_t | X_t)  (Graph: a chain of hidden states X_1 → X_2 → … → X_N, each with an observed emission E_t)
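As a sketch of how an HMM is just a Bayes net shaped like a chain (the toy states, observations, and numbers below are assumptions, not from the slides), the joint probability of a state sequence and its emissions is the initial distribution times the transition and emission CPTs along the chain.

```python
# Toy HMM (assumed numbers): hidden states {"rain", "sun"}, emissions {"umbrella", "none"}.
initial = {"rain": 0.5, "sun": 0.5}                         # P(X_1)
transition = {"rain": {"rain": 0.7, "sun": 0.3},            # P(X_t | X_{t-1})
              "sun":  {"rain": 0.3, "sun": 0.7}}
emission = {"rain": {"umbrella": 0.9, "none": 0.1},         # P(E_t | X_t)
            "sun":  {"umbrella": 0.2, "none": 0.8}}

def sequence_prob(states, observations):
    """P(X_1..X_N, E_1..E_N) = P(X_1) P(E_1|X_1) * prod_t P(X_t|X_{t-1}) P(E_t|X_t)."""
    p = initial[states[0]] * emission[states[0]][observations[0]]
    for t in range(1, len(states)):
        p *= transition[states[t - 1]][states[t]] * emission[states[t]][observations[t]]
    return p

print(sequence_prob(["rain", "rain"], ["umbrella", "umbrella"]))  # 0.5 * 0.9 * 0.7 * 0.9
```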

Example Bayes’ Net: Car

Probabilities in BNs  Bayes’ nets implicitly encode joint distributions  As a product of local conditional distributions  To see what probability a BN gives to a full assignment, multiply all the relevant conditionals together: P(x_1, …, x_n) = ∏_i P(x_i | parents(X_i))  This lets us reconstruct any entry of the full joint  Not every BN can represent every joint distribution  The topology enforces certain independence assumptions  Compare to the exact decomposition according to the chain rule!
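A minimal sketch of that product (the three-node net and its numbers are assumed for illustration): walk over every node, look up its conditional given its parents’ values in the assignment, and multiply.

```python
# Tiny example net (assumed numbers): Rain -> Traffic, Rain -> Umbrella.
parents = {"Rain": [], "Traffic": ["Rain"], "Umbrella": ["Rain"]}
cpt = {
    "Rain":     {(): {True: 0.1, False: 0.9}},
    "Traffic":  {(True,): {True: 0.8, False: 0.2}, (False,): {True: 0.3, False: 0.7}},
    "Umbrella": {(True,): {True: 0.9, False: 0.1}, (False,): {True: 0.2, False: 0.8}},
}

def joint_prob(assignment):
    """P(full assignment) = product over nodes of P(node | its parents)."""
    total = 1.0
    for node, value in assignment.items():
        key = tuple(assignment[p] for p in parents[node])
        total *= cpt[node][key][value]
    return total

# P(+rain, +traffic, +umbrella) = 0.1 * 0.8 * 0.9 = 0.072
print(joint_prob({"Rain": True, "Traffic": True, "Umbrella": True}))
```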

Example: Independence  N fair, independent coin flips, each with the same distribution: P(h) = 0.5, P(t) = 0.5

Example: Coin Flips  X_1, X_2, …, X_n: N independent coin flips  No interactions between variables: absolute independence

Independence  Two variables A and B are independent if: for all a, b: P(a, b) = P(a) P(b)  This says that their joint distribution factors into a product of two simpler distributions  Another form: for all a, b: P(a | b) = P(a)  We write: A ⊥ B  Independence is a simplifying modeling assumption  Empirical joint distributions: at best “close” to independent  What could we assume for {Weather, Traffic, Cavity, Toothache}?

Example: Independence?

  Table 1 — P(T, W):           Table 2 — P(T, W):
    warm  sun   0.4              warm  sun   0.15
    warm  rain  0.1              warm  rain  0.10
    cold  sun   0.2              cold  sun   0.45
    cold  rain  0.3              cold  rain  0.30

  P(T):  warm 0.25, cold 0.75
  P(W):  sun  0.6,  rain 0.4
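A small sketch of checking the definition numerically against these two tables (the helper function is new; the joint values are the slide’s): compute both marginals from a joint table and compare P(t, w) with P(t) P(w) for every entry.

```python
from collections import defaultdict

def is_independent(joint, tol=1e-9):
    """joint maps (t, w) -> P(t, w); True iff P(t, w) = P(t) P(w) for every entry."""
    p_t, p_w = defaultdict(float), defaultdict(float)
    for (t, w), p in joint.items():
        p_t[t] += p
        p_w[w] += p
    return all(abs(p - p_t[t] * p_w[w]) <= tol for (t, w), p in joint.items())

joint_1 = {("warm", "sun"): 0.4,  ("warm", "rain"): 0.1,
           ("cold", "sun"): 0.2,  ("cold", "rain"): 0.3}
joint_2 = {("warm", "sun"): 0.15, ("warm", "rain"): 0.10,
           ("cold", "sun"): 0.45, ("cold", "rain"): 0.30}

print(is_independent(joint_1))  # False: P(warm, sun) = 0.4 but P(warm) P(sun) = 0.5 * 0.6 = 0.3
print(is_independent(joint_2))  # True: it factors with P(warm) = 0.25, P(sun) = 0.6
```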

Conditional Independence  Unconditional (absolute) independence very rare (why?)  Conditional independence is our most basic and robust form of knowledge about uncertain environments:  What about fire, smoke, alarm?

Conditional Independence  Are A & B independent?  Compare P(A|B) with P(A).  From the figure: P(A) = (.25 + .5)/2 = .375, P(B) = .75, P(A|B) = .3333  So P(A|B) ≠ P(A): A and B are not independent.

A, B Conditionally Independent Given C  P(A|B,C) = P(A|C)  Example: C = spots  From the figure: P(A|C) = .25

A, B Conditionally Independent Given C  P(A|B,C) = P(A|C)  C = spots  P(A|C) = .25 and P(A|B,C) = .25, so the equality holds given C.

A, B Conditionally Independent Given ¬C  P(A|B,C) = P(A|C): P(A|C) = .25, P(A|B,C) = .25  Likewise given ¬C: P(A|¬C) = .5, P(A|B,¬C) = .5  So A and B are conditionally independent given C.

Conditional Independence  P(Toothache, Cavity, Catch)  If I have a cavity, the probability that the probe catches in it doesn’t depend on whether I have a toothache:  P(+catch | +toothache, +cavity) = P(+catch | +cavity)  The same independence holds if I don’t have a cavity:  P(+catch | +toothache, ¬cavity) = P(+catch | ¬cavity)  Catch is conditionally independent of Toothache given Cavity:  P(Catch | Toothache, Cavity) = P(Catch | Cavity)  Equivalent statements:  P(Toothache | Catch, Cavity) = P(Toothache | Cavity)  P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)  One can be derived from the other easily
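A sketch of testing such a statement numerically (the helper and the toy numbers below are assumptions; the joint is constructed so the conditional independence holds by design): condition on each value of Cavity and check whether Toothache and Catch factor.

```python
from itertools import product

def cond_independent(joint, tol=1e-9):
    """joint maps (toothache, catch, cavity) -> probability.
    True iff P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)."""
    for cav in {k[2] for k in joint}:
        p_cav = sum(p for k, p in joint.items() if k[2] == cav)
        for t, c in product([True, False], repeat=2):
            p_tc = joint.get((t, c, cav), 0.0) / p_cav
            p_t = sum(p for k, p in joint.items() if k[0] == t and k[2] == cav) / p_cav
            p_c = sum(p for k, p in joint.items() if k[1] == c and k[2] == cav) / p_cav
            if abs(p_tc - p_t * p_c) > tol:
                return False
    return True

# Assumed toy CPTs, combined with the product rule so the independence holds.
p_cavity = {True: 0.2, False: 0.8}
p_tooth_given = {True: 0.6, False: 0.1}   # P(+toothache | Cavity)
p_catch_given = {True: 0.9, False: 0.2}   # P(+catch | Cavity)

joint = {}
for t, c, cav in product([True, False], repeat=3):
    pt = p_tooth_given[cav] if t else 1 - p_tooth_given[cav]
    pc = p_catch_given[cav] if c else 1 - p_catch_given[cav]
    joint[(t, c, cav)] = p_cavity[cav] * pt * pc

print(cond_independent(joint))  # True
```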

Ghostbusters Chain Rule  2-position maze, each sensor indicates ghost location  T: Top square is red  B: Bottom square is red  G: Ghost is in the top  P(T,B,G) = P(G) P(T|G) P(B|G)  Can assume: P(+g) = 0.5, P(+t | +g) = 0.8, P(+t | ¬g) = 0.4, P(+b | +g) = 0.4, P(+b | ¬g) = 0.8  That means the two sensors are conditionally independent, given the ghost position

  T    B    G    P(T,B,G)
  +t   +b   +g   0.16
  +t   +b   ¬g   0.16
  +t   ¬b   +g   0.24
  +t   ¬b   ¬g   0.04
  ¬t   +b   +g   0.04
  ¬t   +b   ¬g   0.24
  ¬t   ¬b   +g   0.06
  ¬t   ¬b   ¬g   0.06
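A short sketch that rebuilds this table directly from the factorization P(T,B,G) = P(G) P(T|G) P(B|G); the numbers are the slide’s, only the loop is new.

```python
from itertools import product

p_g = {True: 0.5, False: 0.5}           # P(G)
p_t_plus = {True: 0.8, False: 0.4}      # P(+t | G)
p_b_plus = {True: 0.4, False: 0.8}      # P(+b | G)

def given(value, plus_prob):
    """P(var = value | g) from the probability of the '+' outcome."""
    return plus_prob if value else 1 - plus_prob

for t, b, g in product([True, False], repeat=3):
    p = p_g[g] * given(t, p_t_plus[g]) * given(b, p_b_plus[g])
    print(f"T={t!s:5} B={b!s:5} G={g!s:5}  P(T,B,G)={p:.2f}")
```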

Example: Traffic  Variables:  R: It rains  T: There is traffic  Model 1: independence  Model 2: rain is conditioned on traffic  Why is an agent using model 2 better?  Model 3: traffic is conditioned on rain  Is this better than model 2?

Example: Alarm Network  Variables  B: Burglary  A: Alarm goes off  M: Mary calls  J: John calls  E: Earthquake!  How big is the full joint distribution? 2^n − 1 = 2^5 − 1 = 31 parameters
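A quick sketch of the parameter counting for Boolean variables (the graph structure is the alarm network from the next slide; the counting rule of 2^(number of parents) rows per node is standard): a full joint over n Boolean variables needs 2^n − 1 numbers, while the Bayes net only needs one number per CPT row.

```python
# Alarm-network structure: B and E are roots, A has parents B and E, J and M have parent A.
parents = {"B": [], "E": [], "A": ["B", "E"], "J": ["A"], "M": ["A"]}

full_joint_params = 2 ** len(parents) - 1                 # 2^5 - 1 = 31
bn_params = sum(2 ** len(ps) for ps in parents.values())  # 1 + 1 + 4 + 2 + 2 = 10

print(full_joint_params, bn_params)  # 31 10
```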

Example: Alarm Network  Burglary → Alarm ← Earthquake; Alarm → John calls; Alarm → Mary calls

  P(B):  +b 0.001   ¬b 0.999
  P(E):  +e 0.002   ¬e 0.998

  P(A | B, E):
    +b +e:  +a 0.95    ¬a 0.05
    +b ¬e:  +a 0.94    ¬a 0.06
    ¬b +e:  +a 0.29    ¬a 0.71
    ¬b ¬e:  +a 0.001   ¬a 0.999

  P(J | A):  +a: +j 0.9, ¬j 0.1     ¬a: +j 0.05, ¬j 0.95
  P(M | A):  +a: +m 0.7, ¬m 0.3     ¬a: +m 0.01, ¬m 0.99

  Only 10 params
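A small sketch computing one full-joint entry from these CPTs with the product rule from earlier; the CPT numbers are the slide’s, and the particular query is an illustrative choice.

```python
# CPTs from the slide (True = "+", False = negation).
p_b = {True: 0.001, False: 0.999}
p_e = {True: 0.002, False: 0.998}
p_a_plus = {(True, True): 0.95, (True, False): 0.94,
            (False, True): 0.29, (False, False): 0.001}   # P(+a | B, E)
p_j_plus = {True: 0.9, False: 0.05}                        # P(+j | A)
p_m_plus = {True: 0.7, False: 0.01}                        # P(+m | A)

def joint(b, e, a, j, m):
    pa = p_a_plus[(b, e)] if a else 1 - p_a_plus[(b, e)]
    pj = p_j_plus[a] if j else 1 - p_j_plus[a]
    pm = p_m_plus[a] if m else 1 - p_m_plus[a]
    return p_b[b] * p_e[e] * pa * pj * pm

# P(+b, ¬e, +a, +j, +m) = 0.001 * 0.998 * 0.94 * 0.9 * 0.7 ≈ 5.9e-4
print(joint(True, False, True, True, True))
```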

Example: Traffic II  Let’s build a graphical model  Variables  T: Traffic  R: It rains  L: Low pressure  D: Roof drips  B: Ballgame  C: Cavity

Changing Bayes’ Net Structure  The same joint distribution can be encoded in many different Bayes’ nets  Analysis question: given some edges, what other edges do you need to add?  One answer: fully connect the graph  Better answer: don’t make any false conditional independence assumptions

Example: Independence  For this graph (X_1 and X_2 with no edge between them), you can fiddle with θ (the CPTs) all you want, but you won’t be able to represent any distribution in which the flips are dependent!  CPTs shown: P(h) = 0.5, P(t) = 0.5 for each flip  (Figure: the distributions this graph can encode are a subset of all distributions)

Example: Coins  Extra arcs don’t prevent representing independence, they just allow non-independence  Without the arc: X_1 and X_2 each have P(h) = 0.5, P(t) = 0.5  With an arc X_1 → X_2: P(X_1): h 0.5, t 0.5; P(X_2 | X_1): h|h 0.5, t|h 0.5, h|t 0.5, t|t 0.5  Adding unneeded arcs isn’t wrong, it’s just inefficient

Topology Limits Distributions  Given some graph topology G, only certain joint distributions can be encoded  The graph structure guarantees certain (conditional) independences  (There might be more independence)  Adding arcs increases the set of distributions, but has several costs  Full conditioning can encode any distribution  (Figure: example three-node graphs over X, Y, Z)

Independence in a BN  Important question about a BN:  Are two nodes independent given certain evidence?  If yes, can prove using algebra (tedious in general)  If no, can prove with a counterexample  Example: X → Y → Z  Question: are X and Z independent?  Answer: no.  Example: low pressure causes rain, which causes traffic.  Knowledge about X may change belief in Z,  Knowledge about Z may change belief in X (via Y)  Addendum: they could be independent: how?

Causal Chains  This configuration is a “causal chain”: X → Y → Z  X: Low pressure  Y: Rain  Z: Traffic  Is X independent of Z given Y? Yes!  Evidence along the chain “blocks” the influence

Common Parent  Another basic configuration: two effects of the same parent: X ← Y → Z  Y: Project due  X: Forum busy  Z: Lab full  Are X and Z independent?

Common Parent  Another basic configuration: two effects of the same parent: X ← Y → Z  Y: Project due  X: Forum busy  Z: Lab full  Are X and Z independent?  Are X and Z independent given Y? Yes!  Observing the cause blocks influence between effects.

Common Effect  Last configuration: two causes of one effect (v-structures): X → Y ← Z  X: Raining  Z: Ballgame  Y: Traffic  Are X and Z independent?  Yes: the ballgame and the rain cause traffic, but they are not correlated  Still need to prove they must be (try it!)

Common Effect  Last configuration: two causes of one effect (v-structures): X → Y ← Z  X: Raining  Z: Ballgame  Y: Traffic  Are X and Z independent?  Yes: the ballgame and the rain cause traffic, but they are not correlated  Still need to prove they must be (try it!)  Are X and Z independent given Y?  No: seeing traffic puts the rain and the ballgame in competition as explanation!  This is backwards from the other cases  Observing an effect activates influence between possible causes.
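A small sketch (assumed toy numbers) showing the v-structure numerically: Rain and Ballgame are independent by construction, but conditioning on Traffic couples them, and then also observing the ballgame “explains away” part of the evidence for rain.

```python
from itertools import product

# Assumed toy CPTs: Rain and Ballgame are independent causes of Traffic.
p_r = {True: 0.3, False: 0.7}
p_b = {True: 0.2, False: 0.8}
p_t_plus = {(True, True): 0.95, (True, False): 0.8,
            (False, True): 0.6, (False, False): 0.1}     # P(+t | R, B)

def posterior_rain(evidence):
    """P(+r | evidence) by enumerating the tiny joint; evidence maps var -> value."""
    num = den = 0.0
    for r, b, t in product([True, False], repeat=3):
        assignment = {"R": r, "B": b, "T": t}
        if any(assignment[v] != val for v, val in evidence.items()):
            continue
        pt = p_t_plus[(r, b)] if t else 1 - p_t_plus[(r, b)]
        p = p_r[r] * p_b[b] * pt
        den += p
        if r:
            num += p
    return num / den

print(posterior_rain({}))                      # 0.30: the prior
print(posterior_rain({"B": True}))             # 0.30: ballgame alone says nothing about rain
print(posterior_rain({"T": True}))             # ~0.64: traffic makes rain more likely
print(posterior_rain({"T": True, "B": True}))  # ~0.40: the ballgame explains some traffic away
```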

The General Case  Any complex example can be analyzed using these three canonical cases  General question: in a given BN, are two variables independent (given evidence)?  Solution: analyze the graph

Reachability  Recipe: shade evidence nodes  Attempt 1: if two nodes are connected by an undirected path not blocked by a shaded node, they are conditionally independent  (Example graph over R, T, B, D, L)  Almost works, but not quite  Where does it break?  Answer: the v-structure at T doesn’t count as a link in a path unless “active”

Reachability (D-Separation)  Question: Are X and Y conditionally independent given evidence vars {Z}?  Yes, if X and Y are “separated” by Z  Look for active paths from X to Y  No active paths = independence!  A path is active if each triple is active:  Causal chain A → B → C where B is unobserved (either direction)  Common cause A ← B → C where B is unobserved  Common effect (aka v-structure) A → B ← C where B or one of its descendants is observed  All it takes to block a path is a single inactive segment
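A sketch of this test as code (the graph representation, a dict mapping each node to its parent list, is an assumption; the logic is the triple rule from this slide): enumerate undirected paths between the two query nodes and check every consecutive triple.

```python
def children_map(parents):
    ch = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            ch[p].append(v)
    return ch

def descendants(node, ch):
    seen, stack = set(), [node]
    while stack:
        for c in ch[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def d_separated(parents, x, y, evidence):
    """True iff every undirected path from x to y contains an inactive triple."""
    ch = children_map(parents)
    neighbors = {v: set(parents[v]) | set(ch[v]) for v in parents}

    def triple_active(a, b, c):
        if a in parents[b] and c in parents[b]:            # common effect a -> b <- c
            return b in evidence or bool(descendants(b, ch) & evidence)
        return b not in evidence                           # causal chain or common cause

    def active_path_to_y(path):
        last = path[-1]
        if last == y:
            return all(triple_active(path[i], path[i + 1], path[i + 2])
                       for i in range(len(path) - 2))
        return any(active_path_to_y(path + [n])
                   for n in neighbors[last] if n not in path)

    return not active_path_to_y([x])

# Example structure (assumed, echoing earlier slides): Rain and Ballgame cause Traffic,
# and Traffic causes being Late.
parents = {"Rain": [], "Ballgame": [], "Traffic": ["Rain", "Ballgame"], "Late": ["Traffic"]}
print(d_separated(parents, "Rain", "Ballgame", set()))         # True: v-structure inactive
print(d_separated(parents, "Rain", "Ballgame", {"Traffic"}))   # False: effect observed
print(d_separated(parents, "Rain", "Ballgame", {"Late"}))      # False: a descendant observed
print(d_separated(parents, "Rain", "Late", {"Traffic"}))       # True: the chain is blocked
```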

Example: Independent?  (Figure: graph over R, T, B, T′, with the active segments highlighted)  Yes

Example: Independent?  (Figure: graph over R, T, B, D, L, T′, with the active segments highlighted)  Yes

Example  Variables:  R: Raining  T: Traffic  D: Roof drips  S: I’m sad  Questions: TSDR Yes Active Segments

Summary  Bayes nets compactly encode joint distributions  Guaranteed independencies of distributions can be deduced from BN graph structure  D-separation gives precise conditional independence guarantees from graph alone  A Bayes’ net’s joint distribution may have further (conditional) independence that is not detectable until you inspect its specific distribution

Bayes’ Nets  So far: how a Bayes’ net encodes a joint distribution  Next: how to answer queries about that distribution  Key idea: conditional independence  Last class: assembled BNs using an intuitive notion of conditional independence as causality  Today: formalize these ideas  Main goal: answer queries about conditional independence and influence  After that: how to answer numerical queries (inference)