Bayesian Networks Chapter 14 Section 1, 2, 4


Bayesian Networks Chapter 14 Section 1, 2, 4

Bayesian networks A simple, graphical notation for conditional independence assertions and hence for compact specification of full joint distributions.
Syntax:
– a set of nodes, one per variable
– a directed, acyclic graph (link ≈ "directly influences")
– if there is a link from X to Y, X is said to be a parent of Y
– a conditional distribution for each node given its parents: P(X_i | Parents(X_i))
In the simplest case, the conditional distribution is represented as a conditional probability table (CPT) giving the distribution over X_i for each combination of parent values.
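To make the CPT idea concrete, here is a minimal sketch (the slides contain no code, so Python, the class layout, and the names are illustrative assumptions; the Alarm numbers are the standard textbook values used later in these slides):

```python
# Minimal sketch of a Bayesian-network node with a CPT.
# The CPT maps an assignment of the parents (a tuple of booleans) to
# P(X = true | parents); P(X = false | parents) is one minus that number.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Node:
    name: str
    parents: List[str]
    cpt: Dict[Tuple[bool, ...], float]   # parent values -> P(X = true | parents)

    def p(self, value: bool, parent_values: Tuple[bool, ...]) -> float:
        """Return P(X = value | parent_values)."""
        p_true = self.cpt[parent_values]
        return p_true if value else 1.0 - p_true

# Example: the Alarm node with parents Burglary and Earthquake.
alarm = Node("Alarm", ["Burglary", "Earthquake"],
             {(True, True): 0.95, (True, False): 0.94,
              (False, True): 0.29, (False, False): 0.001})
print(alarm.p(True, (False, False)))   # P(a | ¬b, ¬e) = 0.001
```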

Example Topology of network encodes conditional independence assertions:
– Weather is independent of the other variables
– Toothache and Catch are conditionally independent given Cavity

Example I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by minor earthquakes. Is there a burglar?
Variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
Network topology reflects "causal" knowledge:
– A burglar can set the alarm off
– An earthquake can set the alarm off
– The alarm can cause Mary to call
– The alarm can cause John to call

Example contd.

Compactness
A CPT for Boolean X_i with k Boolean parents has 2^k rows for the combinations of parent values
Each row requires one number p for X_i = true (the number for X_i = false is just 1 − p)
If each variable has no more than k parents, the complete network requires O(n · 2^k) numbers
I.e., grows linearly with n, vs. O(2^n) for the full joint distribution
For the burglary net, 1 + 1 + 4 + 2 + 2 = 10 numbers (vs. 2^5 − 1 = 31)

Semantics
The full joint distribution is defined as the product of the local conditional distributions:
P(X_1, …, X_n) = Π_{i=1}^{n} P(X_i | Parents(X_i))
Thus each entry in the joint distribution is represented by the product of the appropriate elements of the conditional probability tables in the Bayesian network.
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j | a) P(m | a) P(a | ¬b, ¬e) P(¬b) P(¬e) = 0.90 × 0.70 × 0.001 × 0.999 × 0.998 ≈ 0.00063

Back to the dentist example...
We now represent the world of the dentist D using three propositions – Cavity, Toothache, and PCatch
D's belief state consists of 2^3 = 8 states, each with some probability: {cavity ∧ toothache ∧ pcatch, ¬cavity ∧ toothache ∧ pcatch, cavity ∧ ¬toothache ∧ pcatch, ...}

The belief state is defined by the full joint probability of the propositions:

                     toothache            ¬toothache
                  pcatch   ¬pcatch      pcatch   ¬pcatch
    cavity        0.108    0.012        0.072    0.008
    ¬cavity       0.016    0.064        0.144    0.576
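For later reference, the same joint table can be written as a small dictionary (a sketch; the Python representation and names are my own, the probabilities are the ones in the table above):

```python
# Full joint distribution over (Cavity, Toothache, PCatch).
# Keys are (cavity, toothache, pcatch) truth values; values are probabilities.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9   # a proper distribution sums to 1
```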

Probabilistic Inference (reading off the joint table above)
P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28

Probabilistic Inference
P(cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2

Probabilistic Inference
Marginalization: P(c) = Σ_t Σ_pc P(c ∧ t ∧ pc), using the conventions that c = cavity or ¬cavity and that Σ_t is the sum over t ∈ {toothache, ¬toothache}
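A quick sketch of the same marginalization in code (reusing the joint dictionary from the earlier sketch; names and layout are illustrative):

```python
# Marginalization: P(cavity) = sum over t and pc of P(cavity ∧ t ∧ pc).
from itertools import product

joint = {  # (cavity, toothache, pcatch) -> probability, as in the table above
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

p_cavity = sum(joint[(True, t, pc)] for t, pc in product([True, False], repeat=2))
print(p_cavity)   # 0.2
```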

Conditional Probability
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
P(A|B) is the posterior probability of A given B

P(cavity|toothache) = P(cavity ∧ toothache)/P(toothache) = (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.12/0.2 = 0.6
Interpretation: After observing Toothache, the patient is no longer an "average" one, and the prior probability of Cavity is no longer valid
P(cavity|toothache) is calculated by keeping the ratios of the probabilities of the 4 cases unchanged, and normalizing their sum to 1

P(cavity|toothache) = P(cavity ∧ toothache)/P(toothache) = (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.6
P(¬cavity|toothache) = P(¬cavity ∧ toothache)/P(toothache) = (0.016 + 0.064)/(0.108 + 0.012 + 0.016 + 0.064) = 0.4
P(C|toothache) = α P(C ∧ toothache) = α Σ_pc P(C ∧ toothache ∧ pc) = α [(0.108, 0.016) + (0.012, 0.064)] = α (0.12, 0.08) = (0.6, 0.4), where α is the normalization constant
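The normalization trick is easy to mirror in code (again a sketch reusing the joint dictionary assumed earlier):

```python
# P(Cavity | toothache): sum out PCatch, then normalize so the two entries sum to 1.
joint = {  # (cavity, toothache, pcatch) -> probability
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

unnormalized = [sum(joint[(c, True, pc)] for pc in (True, False)) for c in (True, False)]
alpha = 1.0 / sum(unnormalized)
print([alpha * p for p in unnormalized])   # [0.6, 0.4] = P(cavity|toothache), P(¬cavity|toothache)
```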

Conditional Probability
P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)
P(A ∧ B ∧ C) = P(A|B,C) P(B ∧ C) = P(A|B,C) P(B|C) P(C)
P(Cavity) = Σ_t Σ_pc P(Cavity ∧ t ∧ pc) = Σ_t Σ_pc P(Cavity|t,pc) P(t ∧ pc)
P(c) = Σ_t Σ_pc P(c ∧ t ∧ pc) = Σ_t Σ_pc P(c|t,pc) P(t ∧ pc)

Independence
Two random variables A and B are independent if P(A ∧ B) = P(A) P(B), hence if P(A|B) = P(A)
Two random variables A and B are independent given C if P(A ∧ B|C) = P(A|C) P(B|C), hence if P(A|B,C) = P(A|C)

Issues
If a state is described by n propositions, then a belief state contains 2^n states (possibly, some have probability 0)
Modeling difficulty: many numbers must be entered in the first place
Computational issue: memory size and time

toothache and pcatch are independent given cavity (or ¬cavity), but this relation is hidden in the numbers! [Verify this]
Bayesian networks explicitly represent independence among propositions to reduce the number of probabilities defining a belief state
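A small sketch of the suggested verification, checking P(t ∧ pc | c) = P(t|c) · P(pc|c) for every assignment (the code layout is mine; the numbers are the joint table above):

```python
# Verify Toothache ⊥ PCatch | Cavity numerically from the joint table.
from itertools import product

joint = {  # (cavity, toothache, pcatch) -> probability
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

for c in (True, False):
    p_c = sum(joint[(c, t, pc)] for t, pc in product([True, False], repeat=2))
    for t, pc in product([True, False], repeat=2):
        lhs = joint[(c, t, pc)] / p_c                                # P(t ∧ pc | c)
        p_t  = sum(joint[(c, t, x)] for x in (True, False)) / p_c    # P(t | c)
        p_pc = sum(joint[(c, x, pc)] for x in (True, False)) / p_c   # P(pc | c)
        assert abs(lhs - p_t * p_pc) < 1e-9
print("Toothache and PCatch are conditionally independent given Cavity")
```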

Bayesian Network
Notice that Cavity is the "cause" of both Toothache and PCatch, and represent the causality links explicitly
Give the prior probability distribution of Cavity
Give the conditional probability tables of Toothache and PCatch
Network: Cavity → Toothache, Cavity → PCatch
P(cavity) = 0.2
P(toothache|cavity) = 0.6, P(toothache|¬cavity) = 0.1
P(pcatch|cavity) = 0.9, P(pcatch|¬cavity) = 0.2
5 probabilities, instead of 7

A More Complex BN
Nodes: Burglary, Earthquake (causes); Alarm; JohnCalls, MaryCalls (effects)
Directed acyclic graph
Intuitive meaning of arc from x to y: "x has direct influence on y"

A More Complex BN
P(B) = 0.001, P(E) = 0.002
P(A|B,E): B=T,E=T: 0.95; B=T,E=F: 0.94; B=F,E=T: 0.29; B=F,E=F: 0.001
P(J|A): A=T: 0.90; A=F: 0.05
P(M|A): A=T: 0.70; A=F: 0.01
Size of the CPT for a node with k parents: 2^k
10 probabilities, instead of 31
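The whole network fits in a handful of dictionaries; the following sketch (my own layout, with the textbook numbers above) also counts the independent parameters:

```python
# The burglary network as CPT dictionaries: each entry maps an assignment of the
# node's parents to P(node = true | parents); parentless nodes use the empty tuple.
bn = {
    "Burglary":   ([], {(): 0.001}),
    "Earthquake": ([], {(): 0.002}),
    "Alarm":      (["Burglary", "Earthquake"],
                   {(True, True): 0.95, (True, False): 0.94,
                    (False, True): 0.29, (False, False): 0.001}),
    "JohnCalls":  (["Alarm"], {(True,): 0.90, (False,): 0.05}),
    "MaryCalls":  (["Alarm"], {(True,): 0.70, (False,): 0.01}),
}

# 1 + 1 + 4 + 2 + 2 = 10 numbers, versus 2**5 - 1 = 31 for the full joint distribution.
print(sum(len(cpt) for _, cpt in bn.values()))   # 10
```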

What does the BN encode?
Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm
For example, John does not observe any burglaries directly

What does the BN encode?
The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm
For instance, the reasons why John and Mary may not call if there is an alarm are unrelated
A node is independent of its non-descendants given its parents

Conditional Independence of Non-descendants
A node X is conditionally independent of its non-descendants (e.g., the Z_ij's) given its parents (the U_i's shown in the gray area).

Markov Blanket
A node X is conditionally independent of all other nodes in the network, given its parents, children, and children's parents.

Locally Structured World
A world is locally structured (or sparse) if each of its components interacts directly with relatively few other components
In a sparse world, the CPTs are small and the BN contains many fewer probabilities than the full joint distribution
If the # of entries in each CPT is bounded, i.e., O(1), then the # of probabilities in a BN is linear in n – the # of propositions – instead of 2^n for the joint distribution

But does a BN represent a belief state? In other words, can we compute the full joint distribution of the propositions from it?

Calculation of Joint Probability (network and CPTs as above)
P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = ??

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J ∧ M | A, ¬B, ¬E) · P(A ∧ ¬B ∧ ¬E)
= P(J | A, ¬B, ¬E) · P(M | A, ¬B, ¬E) · P(A ∧ ¬B ∧ ¬E)   (J and M are independent given A)
P(J | A, ¬B, ¬E) = P(J | A)   (J and ¬B ∧ ¬E are independent given A)
P(M | A, ¬B, ¬E) = P(M | A)
P(A ∧ ¬B ∧ ¬E) = P(A | ¬B, ¬E) · P(¬B | ¬E) · P(¬E) = P(A | ¬B, ¬E) · P(¬B) · P(¬E)   (¬B and ¬E are independent)
Hence P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)

Calculation of Joint Probability (network and CPTs as above)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00063

Calculation of Joint Probability (network and CPTs as above)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00063
P(x_1 ∧ x_2 ∧ … ∧ x_n) = Π_{i=1,…,n} P(x_i | parents(X_i)) → full joint distribution table

Calculation of Joint Probability (network and CPTs as above)
P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E) = 0.9 × 0.7 × 0.001 × 0.999 × 0.998 ≈ 0.00063
P(x_1 ∧ x_2 ∧ … ∧ x_n) = Π_{i=1,…,n} P(x_i | parents(X_i)) → full joint distribution table
Since a BN defines the full joint distribution of a set of propositions, it represents a belief state
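The same chain-rule product is easy to compute mechanically from the CPT dictionaries sketched earlier (an illustrative sketch, not code from the slides):

```python
# P(x_1, ..., x_n) = Π_i P(x_i | parents(X_i)), evaluated for
# the event (JohnCalls, MaryCalls, Alarm, ¬Burglary, ¬Earthquake).
bn = {  # name -> (parent names, CPT mapping parent values to P(node = true | parents))
    "Burglary":   ([], {(): 0.001}),
    "Earthquake": ([], {(): 0.002}),
    "Alarm":      (["Burglary", "Earthquake"],
                   {(True, True): 0.95, (True, False): 0.94,
                    (False, True): 0.29, (False, False): 0.001}),
    "JohnCalls":  (["Alarm"], {(True,): 0.90, (False,): 0.05}),
    "MaryCalls":  (["Alarm"], {(True,): 0.70, (False,): 0.01}),
}

def joint_probability(event):
    """Product of P(x_i | parents(X_i)) over a full assignment `event`."""
    prob = 1.0
    for name, (parents, cpt) in bn.items():
        p_true = cpt[tuple(event[p] for p in parents)]
        prob *= p_true if event[name] else 1.0 - p_true
    return prob

event = {"Burglary": False, "Earthquake": False, "Alarm": True,
         "JohnCalls": True, "MaryCalls": True}
print(joint_probability(event))   # ≈ 0.00063
```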

Querying the BN
The BN gives P(t|c)
What about P(c|t)?
P(cavity|t) = P(cavity ∧ t)/P(t) = P(t|cavity) P(cavity) / P(t)   [Bayes' rule]
P(c|t) = α P(t|c) P(c)
Querying a BN is just applying the trivial Bayes' rule on a larger scale
(Figure: two-node network Cavity → Toothache, with prior P(C) = 0.1 and CPT P(T|C))

Exact Inference in Bayesian Networks
Let's generalize that last example a little: suppose we are given that JohnCalls and MaryCalls are both true; what is the probability distribution for Burglary?
P(Burglary | JohnCalls = true, MaryCalls = true)
Look back at using the full joint distribution for this purpose – summing over hidden variables.

Inference by enumeration (example in the textbook) – Figure 14.8
P(X | e) = α P(X, e) = α Σ_y P(X, e, y)
P(B | j,m) = α P(B,j,m) = α Σ_e Σ_a P(B,e,a,j,m)
P(b | j,m) = α Σ_e Σ_a P(b) P(e) P(a|b,e) P(j|a) P(m|a)
P(b | j,m) = α P(b) Σ_e P(e) Σ_a P(a|b,e) P(j|a) P(m|a)
P(B | j,m) = α ⟨0.00059224, 0.0014919⟩ ≈ ⟨0.284, 0.716⟩
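A compact sketch of this enumeration procedure in code (my own rendering under the CPT layout assumed in the earlier sketches, not the textbook's pseudocode verbatim):

```python
# Enumeration: P(X | e) = α Σ_y P(X, e, y), summing the chain-rule product
# over all assignments to the hidden variables.
bn_order = ["Burglary", "Earthquake", "Alarm", "JohnCalls", "MaryCalls"]  # parents first
bn = {
    "Burglary":   ([], {(): 0.001}),
    "Earthquake": ([], {(): 0.002}),
    "Alarm":      (["Burglary", "Earthquake"],
                   {(True, True): 0.95, (True, False): 0.94,
                    (False, True): 0.29, (False, False): 0.001}),
    "JohnCalls":  (["Alarm"], {(True,): 0.90, (False,): 0.05}),
    "MaryCalls":  (["Alarm"], {(True,): 0.70, (False,): 0.01}),
}

def prob(name, value, assignment):
    """P(name = value | parents), reading the parents' values from `assignment`."""
    parents, cpt = bn[name]
    p_true = cpt[tuple(assignment[p] for p in parents)]
    return p_true if value else 1.0 - p_true

def enumerate_all(variables, assignment):
    if not variables:
        return 1.0
    first, rest = variables[0], variables[1:]
    if first in assignment:
        return prob(first, assignment[first], assignment) * enumerate_all(rest, assignment)
    return sum(prob(first, v, assignment) * enumerate_all(rest, {**assignment, first: v})
               for v in (True, False))

def enumeration_ask(query, evidence):
    unnormalized = [enumerate_all(bn_order, {**evidence, query: v}) for v in (True, False)]
    alpha = 1.0 / sum(unnormalized)
    return [alpha * p for p in unnormalized]

print(enumeration_ask("Burglary", {"JohnCalls": True, "MaryCalls": True}))  # ≈ [0.284, 0.716]
```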

Enumeration-Tree Calculation

Inference by enumeration (another way of looking at it) – Figure 14.8
P(X | e) = α P(X, e) = α Σ_y P(X, e, y)
P(B | j,m) = α P(B,j,m) = α Σ_e Σ_a P(B,e,a,j,m)
P(B | j,m) = α [P(B,e,a,j,m) + P(B,e,¬a,j,m) + P(B,¬e,a,j,m) + P(B,¬e,¬a,j,m)]
P(B | j,m) = α ⟨0.00059224, 0.0014919⟩ ≈ ⟨0.284, 0.716⟩

Constructing Bayesian networks
1. Choose an ordering of variables X_1, …, X_n such that root causes are first in the order, then the variables that they influence, and so forth.
2. For i = 1 to n:
– add X_i to the network
– select parents from X_1, …, X_{i-1} such that P(X_i | Parents(X_i)) = P(X_i | X_1, ..., X_{i-1})
– Note: the parents of a node are all of the nodes that influence it. In this way, each node is conditionally independent of its predecessors in the order, given its parents.
This choice of parents guarantees:
P(X_1, …, X_n) = Π_{i=1}^{n} P(X_i | X_1, …, X_{i-1})   (chain rule)
            = Π_{i=1}^{n} P(X_i | Parents(X_i))   (by construction)

Example – How important is the ordering?
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)?

Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)?

Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? P(B | A, J, M) = P(B)?

Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? P(E | B, A, J, M) = P(E | A, B)?

Example
Suppose we choose the ordering M, J, A, B, E
P(J | M) = P(J)? No
P(A | J, M) = P(A | J)? P(A | J, M) = P(A)? No
P(B | A, J, M) = P(B | A)? Yes
P(B | A, J, M) = P(B)? No
P(E | B, A, J, M) = P(E | A)? No
P(E | B, A, J, M) = P(E | A, B)? Yes

Example contd. Deciding conditional independence is hard in noncausal directions (Causal models and conditional independence seem hardwired for humans!) Network is less compact: 1 + 2 + 4 + 2 + 4 = 13 numbers needed

Summary Bayesian networks provide a natural representation for (causally induced) conditional independence Topology + CPTs = compact representation of joint distribution Generally easy for domain experts to construct