CS 188: Artificial Intelligence Spring 2007 Lecture 13: Graphical models II 2/27/2007 Srini Narayanan – ICSI and UC Berkeley

Example Bayes’ Net

Bayes’ Net Semantics A Bayes’ net consists of: a set of nodes, one per variable X; a directed, acyclic graph; and a conditional distribution for each variable given its parents (the parameters). Semantics: a BN defines a joint probability distribution over its variables: $P(x_1, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid \mathrm{parents}(X_i))$

Building the (Entire) Joint We can take a Bayes’ net and build any entry of the full joint distribution it encodes. Typically, there’s no reason to build ALL of it; we build what we need on the fly. To emphasize: every BN over a domain implicitly represents some joint distribution over that domain, but it is specified by local conditional probabilities.

Example: Alarm Network One joint entry is the product of the relevant CPT entries: 0.001 × 0.002 × 0.05 × 0.05 × 0.01 = $5 \times 10^{-11}$
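A minimal sketch of this computation: the code below multiplies one CPT entry per variable to reproduce the slide's product. The variable names and which CPT rows the five factors correspond to are assumptions (the actual network is in the figure); only the five numbers and their product come from the slide.

```python
# Hypothetical CPT entries for one full assignment (b, e, a, j, m);
# values chosen to match the slide's arithmetic, not a real alarm net.
p_b = 0.001           # P(B = b)
p_e = 0.002           # P(E = e)
p_a_given_be = 0.05   # P(A = a | B = b, E = e)   (assumed row)
p_j_given_a = 0.05    # P(J = j | A = a)          (assumed row)
p_m_given_a = 0.01    # P(M = m | A = a)          (assumed row)

# One entry of the joint = product of one entry from each local CPT.
joint_entry = p_b * p_e * p_a_given_be * p_j_given_a * p_m_given_a
print(joint_entry)  # 5e-11, matching the slide
```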

Bayes’ Nets So far: how a Bayes’ net encodes a joint distribution Next: how to answer qualitative queries about that distribution Key idea: conditional independence Last class: assembled BNs using an intuitive notion of conditional independence as causality Today: formalize these ideas Main goal: answer queries about conditional independence and influence After that: how to answer numerical queries (inference)

Review: Useful Rules Conditional probability (definition): $P(x \mid y) = \frac{P(x, y)}{P(y)}$. Chain rule: $P(x_1, \ldots, x_n) = \prod_i P(x_i \mid x_1, \ldots, x_{i-1})$. Bayes’ rule: $P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)}$.
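As a quick numeric sanity check of the definition and Bayes' rule, here is a sketch over a made-up joint table of two binary variables (all numbers are illustrative, not from the slides):

```python
# Illustrative joint table over binary X and Y; probabilities sum to 1.
joint = {
    ('+x', '+y'): 0.2, ('+x', '-y'): 0.3,
    ('-x', '+y'): 0.4, ('-x', '-y'): 0.1,
}

p_y = sum(p for (x, y), p in joint.items() if y == '+y')   # P(+y) = 0.6
p_x = sum(p for (x, y), p in joint.items() if x == '+x')   # P(+x) = 0.5
p_x_given_y = joint[('+x', '+y')] / p_y                    # definition
p_y_given_x = joint[('+x', '+y')] / p_x                    # definition

# Bayes' rule recovers P(+x | +y) from P(+y | +x), P(+x), P(+y):
assert abs(p_x_given_y - p_y_given_x * p_x / p_y) < 1e-12
print(p_x_given_y)  # 0.2 / 0.6 = 1/3
```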

Conditional Independence Reminder: independence. X and Y are independent ($X \perp Y$) iff $\forall x, y:\; P(x, y) = P(x)\,P(y)$, or equivalently $\forall x, y:\; P(x \mid y) = P(x)$. X and Y are conditionally independent given Z ($X \perp Y \mid Z$) iff $\forall x, y, z:\; P(x, y \mid z) = P(x \mid z)\,P(y \mid z)$, or equivalently $\forall x, y, z:\; P(x \mid y, z) = P(x \mid z)$. (Conditional) independence is a property of a distribution.
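Since conditional independence is a property of the distribution, it can be checked directly against a joint table. A brute-force sketch for tiny discrete distributions (the function name and table encoding are my own; this is not how BN independence queries are answered in practice, which is the d-separation machinery below):

```python
from itertools import product

def cond_independent(joint, xs, ys, zs, tol=1e-12):
    """Check X ⊥ Y | Z by brute force.

    joint: dict mapping (x, y, z) -> probability.
    xs, ys, zs: lists of possible values, e.g. [0, 1] for binary.
    """
    for z in zs:
        p_z = sum(joint[(x, y, z)] for x in xs for y in ys)
        if p_z == 0:
            continue  # conditioning event has probability zero
        for x, y in product(xs, ys):
            p_xy_z = joint[(x, y, z)] / p_z
            p_x_z = sum(joint[(x, yy, z)] for yy in ys) / p_z
            p_y_z = sum(joint[(xx, y, z)] for xx in xs) / p_z
            if abs(p_xy_z - p_x_z * p_y_z) > tol:
                return False
    return True
```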

Example: Independence For this graph (X1 and X2 with no arc between them), you can fiddle with θ (the CPTs) all you want, but you won’t be able to represent any distribution in which the flips are dependent! P(X1) = (h: 0.5, t: 0.5), P(X2) = (h: 0.5, t: 0.5). (Figure: the set of distributions this graph can represent, inside the set of all distributions.)

Topology Limits Distributions Given some graph topology G, only certain joint distributions can be encoded. The graph structure guarantees certain (conditional) independences; the particular distribution might have more independence than the graph guarantees. Adding arcs increases the set of distributions a graph can represent, but has several costs. (Figure: three graphs over X, Y, Z with progressively more arcs.)

Example: Coins Extra arcs don’t prevent representing independence, they just allow non-independence. Without an arc: P(X1) = (h: 0.5, t: 0.5) and P(X2) = (h: 0.5, t: 0.5). With an arc X1 → X2: P(X1) = (h: 0.5, t: 0.5) and P(X2 | X1) = (h|h: 0.5, t|h: 0.5, h|t: 0.5, t|t: 0.5), which encodes the same independent distribution.

Independence in a BN Important question about a BN: are two nodes independent given certain evidence? If yes, this can be proved with algebra (really tedious); if no, it can be proved with a counterexample. Example: the chain X → Y → Z. Question: are X and Z independent? Answer: not necessarily; we’ve seen examples otherwise: low pressure causes rain, which causes traffic. X can influence Z, and Z can influence X (via Y). Addendum: they could still be independent for particular CPTs: how?

Analyzing Independence An arc between two nodes means possible dependence. What if there is no direct arc? To answer this question in general, we only need to understand 3-node graphs with two arcs. Cast of characters: the “causal chain” X → Y → Z, the “common cause” X ← Y → Z, and the “common effect” X → Y ← Z.

Causal Chains This configuration is a “causal chain”: X → Y → Z (e.g. X: low pressure, Y: rain, Z: traffic). Is X independent of Z given Y? Yes! Evidence along the chain “blocks” the influence.
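Why observing Y blocks the chain, in one line from the chain's factorization $P(x, y, z) = P(x)\,P(y \mid x)\,P(z \mid y)$:

$$P(z \mid x, y) = \frac{P(x, y, z)}{P(x, y)} = \frac{P(x)\,P(y \mid x)\,P(z \mid y)}{P(x)\,P(y \mid x)} = P(z \mid y)$$

so once Y is observed, X carries no further information about Z.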

Common Cause Another basic configuration: two effects of the same cause, X ← Y → Z (e.g. Y: project due, X: newsgroup busy, Z: lab full). Are X and Z independent? Not necessarily: consider the project-due example. Are X and Z independent given Y? Yes! Observing the cause blocks influence between the effects.
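The same one-line argument works here, using the common-cause factorization $P(x, y, z) = P(y)\,P(x \mid y)\,P(z \mid y)$:

$$P(x, z \mid y) = \frac{P(x, y, z)}{P(y)} = P(x \mid y)\,P(z \mid y)$$

which is exactly the definition of X ⊥ Z | Y.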

Common Effect Last configuration: two causes of one effect, X → Y ← Z (a “v-structure”; e.g. X: raining, Z: ballgame, Y: traffic). Are X and Z independent? Yes: remember the ballgame and the rain causing traffic, with no correlation? (We still need to prove they must be: homework.) Are X and Z independent given Y? No: remember that seeing traffic put the rain and the ballgame in competition? This is backwards from the other cases: observing the effect enables influence between the causes.
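A concrete numeric counterexample (my own, not from the slides) makes both answers vivid: let X and Z be independent fair coin flips and Y = X XOR Z.

```python
from itertools import product

# V-structure X -> Y <- Z with Y = X XOR Z (deterministic effect).
# Marginally X and Z are independent; given Y they are perfectly
# coupled: knowing Y and X pins down Z exactly ("explaining away").
joint = {}
for x, z in product([0, 1], repeat=2):
    y = x ^ z
    joint[(x, y, z)] = 0.25  # P(x) * P(z); Y is deterministic

# Marginal check: P(x, z) = 0.25 = P(x) P(z) for all values.
for x, z in product([0, 1], repeat=2):
    p_xz = sum(p for (xx, yy, zz), p in joint.items()
               if xx == x and zz == z)
    assert abs(p_xz - 0.25) < 1e-12

# Conditioned on Y = 0, only (x=0, z=0) and (x=1, z=1) are possible,
# so P(z=0 | y=0, x=0) = 1 while P(z=0 | y=0) = 0.5: dependent.
```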

The General Case Any complex example can be analyzed using these three canonical cases General question: in a given BN, are two (sets of) variables independent given some evidence? Solution: graph search!

Reachability Recipe: shade the evidence nodes. Attempt 1: if two nodes are connected by an undirected path not blocked by a shaded node, treat them as (possibly) dependent; otherwise, conditionally independent. Almost works, but not quite. Where does it break? Answer: a v-structure (like the one at T) doesn’t count as an active link in a path unless it is shaded. (Figure: example graph with nodes L, R, B, D, T, T′.)

Reachability (the Bayes’ Ball) Correct algorithm: shade in the evidence, start at the source node, and try to reach the target by search. States: pairs of (node X, previous state S). Successor function: if X is unobserved, the ball moves to any child, and to any parent if it is coming from a child; if X is observed, the ball moves from parent to parent only. If you can’t reach a node, it’s conditionally independent of the start node given the evidence. (Figure: the four pass/bounce cases.)
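A sketch of this search in Python. The graph encoding (a dict mapping each node to its list of children) and the function name are my own; the successor rules are exactly the ones above.

```python
from collections import deque

def reachable(children, source, evidence):
    """Return the non-evidence nodes reachable from `source`.

    children: dict mapping every node to a list of its children.
    evidence: set of observed nodes (should not contain `source`).
    Any node NOT returned is conditionally independent of `source`
    given `evidence` (it is d-separated from it).
    """
    parents = {n: [] for n in children}
    for n, kids in children.items():
        for k in kids:
            parents[k].append(n)

    # State = (node, arrival direction): 'up' = arrived from a child,
    # 'down' = arrived from a parent. Start as if from a child so the
    # ball may leave the source in either direction.
    frontier = deque([(source, 'up')])
    visited, found = set(), set()
    while frontier:
        node, direction = frontier.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node not in evidence:
            found.add(node)
            # Unobserved: pass to any child; pass to any parent
            # only if the ball came from a child.
            for c in children[node]:
                frontier.append((c, 'down'))
            if direction == 'up':
                for p in parents[node]:
                    frontier.append((p, 'up'))
        elif direction == 'down':
            # Observed, arriving from a parent: bounce back up to the
            # parents (this is what activates a v-structure).
            for p in parents[node]:
                frontier.append((p, 'up'))
    return found - {source}

# Example: chain R -> T -> S. With T observed, S is unreachable from R.
g = {'R': ['T'], 'T': ['S'], 'S': []}
print(reachable(g, 'R', evidence={'T'}))  # set(): R ⊥ S | T
print(reachable(g, 'R', evidence=set()))  # {'T', 'S'}: possibly dependent
```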

Example (Figure: a network over nodes L, R, B, D, T, T′ with several independence queries; each shown answer is “Yes.”)

Example Variables: R: raining, T: traffic, D: roof drips, S: I’m sad. Questions: (Figure: a network over R, T, D, S with independence queries; the shown answer is “Yes.”)

Deriving Useful Properties Which nodes are (unconditionally) independent of S? Nodes without common ancestors with S. Which nodes are conditionally independent of S given S’s parents? All of S’s nondescendants. And given S’s Markov blanket (its parents, its children, and its children’s parents)? All other nodes!
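A small helper for the last property (a sketch, using the same assumed node-to-children graph encoding as the reachability code above):

```python
def markov_blanket(children, node):
    """Parents, children, and children's other parents of `node`.

    children: dict mapping every node to a list of its children
    (every node must appear as a key).
    """
    parents = {n: set() for n in children}
    for n, kids in children.items():
        for k in kids:
            parents[k].add(n)
    blanket = set(parents[node]) | set(children[node])
    for child in children[node]:
        blanket |= parents[child]  # co-parents of each child
    blanket.discard(node)
    return blanket
```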

Causality? When Bayes nets reflect the true causal patterns: Often simpler (nodes have fewer parents) Often easier to think about Often easier to elicit from experts BNs need not actually be causal Sometimes no causal net exists over the domain E.g. consider the variables Traffic and Drips End up with arrows that reflect correlation, not causation What do the arrows really mean? Topology may happen to encode causal structure Topology only guaranteed to encode conditional independencies

Example: Traffic Basic traffic net R → T; let’s multiply out the joint. P(R): P(r) = 1/4, P(¬r) = 3/4. P(T | R): P(t | r) = 3/4, P(¬t | r) = 1/4; P(t | ¬r) = 1/2, P(¬t | ¬r) = 1/2. Joint P(R, T): P(r, t) = 3/16, P(r, ¬t) = 1/16, P(¬r, t) = 6/16, P(¬r, ¬t) = 6/16.

Example: Reverse Traffic Reverse causality? The net T → R encodes the same joint. P(T): P(t) = 9/16, P(¬t) = 7/16. P(R | T): P(r | t) = 1/3, P(¬r | t) = 2/3; P(r | ¬t) = 1/7, P(¬r | ¬t) = 6/7. Joint P(R, T): P(r, t) = 3/16, P(r, ¬t) = 1/16, P(¬r, t) = 6/16, P(¬r, ¬t) = 6/16.
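The reversed CPTs can be checked mechanically: multiply the forward net out into the joint, then renormalize. A sketch using exact fractions (the dict encoding is my own; the numbers are the slides'):

```python
from fractions import Fraction as F

# Forward net R -> T from the previous slide.
p_r = {'+r': F(1, 4), '-r': F(3, 4)}
p_t_given_r = {('+t', '+r'): F(3, 4), ('-t', '+r'): F(1, 4),
               ('+t', '-r'): F(1, 2), ('-t', '-r'): F(1, 2)}

# Multiply out the joint, then read off P(T) and P(R | T).
joint = {(r, t): p_r[r] * p_t_given_r[(t, r)]
         for r in p_r for t in ('+t', '-t')}
p_t = {t: sum(joint[(r, t)] for r in p_r) for t in ('+t', '-t')}
p_r_given_t = {(r, t): joint[(r, t)] / p_t[t]
               for r in p_r for t in ('+t', '-t')}

print(p_t)          # {'+t': Fraction(9, 16), '-t': Fraction(7, 16)}
print(p_r_given_t)  # P(+r | +t) = 1/3, P(+r | -t) = 1/7, ...
```

Both nets encode exactly the same joint distribution; only the direction of the arrows (and hence the CPT parameterization) differs.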

Alternate BNs

Summary Bayes nets compactly encode joint distributions Guaranteed independencies of distributions can be deduced from BN graph structure The Bayes’ ball algorithm (aka d-separation) A Bayes net may have other independencies that are not detectable until you inspect its specific distribution