Bayesian Networks: A Tutorial
Weng-Keen Wong
School of Electrical Engineering and Computer Science
Oregon State University
©2005 Weng-Keen Wong, Oregon State University

Outline
1. Introduction
2. Probability Primer
3. Bayesian networks
4. Bayesian networks in syndromic surveillance

Probability Primer: Random Variables

A random variable is the basic element of probability. It refers to an event, and there is some degree of uncertainty as to the outcome of that event. For example, the random variable A could be the event of getting heads on a coin flip.

Boolean Random Variables

We will start with the simplest type of random variable: Boolean ones, which take the values true or false. Think of the event as occurring or not occurring.

Examples (let A be a Boolean random variable):
A = Getting heads on a coin flip
A = It will rain today
A = The Cubs win the World Series in 2007

Probabilities

[Figure: a red region and a blue region representing P(A = true) and P(A = false); the sum of the two areas is 1.]

We will write P(A = true) to mean the probability that A = true. What is probability? It is the relative frequency with which an outcome would be obtained if the process were repeated a large number of times under similar conditions.*

* Ahem… there's also the Bayesian definition, which says probability is your degree of belief in an outcome.

Conditional Probability

P(A = true | B = true) = out of all the outcomes in which B is true, the fraction that also have A equal to true. Read this as "probability of A conditioned on B" or "probability of A given B".

[Figure: overlapping regions representing P(F = true) and P(H = true).]

H = "Have a headache"
F = "Coming down with flu"

P(H = true) = 1/10
P(F = true) = 1/40
P(H = true | F = true) = 1/2

"Headaches are rare and flu is rarer, but if you're coming down with flu there's a 50-50 chance you'll have a headache."
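The slides stop at these numbers, but they combine directly via the product rule, P(H, F) = P(H | F) P(F). A minimal Python sketch (the variable names are ours, not from the slides):

# Minimal sketch combining the slide's numbers with the product rule.
p_h = 1 / 10          # P(H = true): headache
p_f = 1 / 40          # P(F = true): flu
p_h_given_f = 1 / 2   # P(H = true | F = true)

# Product rule: probability of headache AND flu
p_h_and_f = p_h_given_f * p_f    # = 1/80

# Bayes' rule recovers the reverse conditional, P(F = true | H = true)
p_f_given_h = p_h_and_f / p_h    # = (1/80) / (1/10) = 1/8

print(p_h_and_f, p_f_given_h)    # 0.0125 0.125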

The Joint Probability Distribution

We will write P(A = true, B = true) to mean "the probability of A = true and B = true".

Notice that:
P(H = true | F = true) = P(H = true, F = true) / P(F = true)

In general, P(X | Y) = P(X, Y) / P(Y).

The Joint Probability Distribution

Joint probabilities can be between any number of variables, e.g. P(A = true, B = true, C = true). For each combination of values of the variables, we need to say how probable that combination is, and the probabilities of these combinations need to sum to 1.

A      B      C      P(A,B,C)
false  false  false  0.10
false  false  true   0.20
false  true   false  0.05
false  true   true   0.05
true   false  false  0.30
true   false  true   0.10
true   true   false  0.05
true   true   true   0.15

(The eight entries sum to 1.)

The Joint Probability Distribution

Once you have the joint probability distribution, you can calculate any probability involving A, B, and C. (Note: this may require marginalization and Bayes' rule, neither of which is discussed in these slides.)

Examples of things you can compute from the table above:
P(A = true) = sum of the P(A,B,C) entries in the rows with A = true
P(A = true, B = true | C = true) = P(A = true, B = true, C = true) / P(C = true)
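A rough sketch of these computations in Python (our own helper, not from the slides): store the table as a dictionary, then both kinds of query reduce to sums over matching rows.

# Rough sketch (our own helper): the joint table as a dict;
# marginalization and conditioning are sums over matching rows.

joint_abc = {  # (A, B, C) -> P(A, B, C), from the table above
    (False, False, False): 0.10, (False, False, True): 0.20,
    (False, True,  False): 0.05, (False, True,  True): 0.05,
    (True,  False, False): 0.30, (True,  False, True): 0.10,
    (True,  True,  False): 0.05, (True,  True,  True): 0.15,
}

def prob(a=None, b=None, c=None):
    """Sum entries consistent with the given (possibly partial) assignment."""
    return sum(p for (av, bv, cv), p in joint_abc.items()
               if (a is None or av == a)
               and (b is None or bv == b)
               and (c is None or cv == c))

print(prob(a=True))                                 # P(A=true) = 0.6
print(prob(a=True, b=True, c=True) / prob(c=True))  # 0.15 / 0.5 = 0.3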

The Problem with the Joint Distribution

Lots of entries in the table to fill up! For k Boolean random variables, you need a table of size 2^k. How do we use fewer numbers? We need the concept of independence.

Independence

Variables A and B are independent if any of the following hold:
P(A, B) = P(A) P(B)
P(A | B) = P(A)
P(B | A) = P(B)

This says that knowing the outcome of A does not tell me anything new about the outcome of B.

Independence

How is independence useful? Suppose you have n coin flips and you want to calculate the joint distribution P(C_1, …, C_n).

If the coin flips are not independent, you need 2^n values in the table.

If the coin flips are independent, then

P(C_1, …, C_n) = P(C_1) P(C_2) … P(C_n)

Each P(C_i) table has 2 entries, and there are n of them, for a total of 2n values.
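A small sketch of what this buys you (the example numbers are ours): with independence, n per-flip probabilities are enough to reconstruct any of the 2^n joint entries by multiplication.

import math

# Small sketch (our own example numbers): with independent flips,
# n numbers reconstruct any entry of the 2^n-entry joint table.
p_heads = [0.5, 0.7, 0.2]   # P(C_i = true) for n = 3 flips

def flips_joint(outcome):
    """P(C_1 = outcome[0], ..., C_n = outcome[n-1]) by the product rule."""
    return math.prod(p if o else 1 - p
                     for o, p in zip(outcome, p_heads))

print(flips_joint((True, False, True)))  # 0.5 * 0.3 * 0.2 = 0.03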

Conditional Independence

Variables A and B are conditionally independent given C if any of the following hold:
P(A, B | C) = P(A | C) P(B | C)
P(A | B, C) = P(A | C)
P(B | A, C) = P(B | C)

Knowing C tells me everything about B; I don't gain anything by knowing A (either because A doesn't influence B, or because knowing C provides all the information knowing A would give).

Outline
1. Introduction
2. Probability Primer
3. Bayesian networks
4. Bayesian networks in syndromic surveillance

A Bayesian Network

A Bayesian network is made up of:
1. A directed acyclic graph
2. A set of tables for each node in the graph

[Figure: a DAG with nodes A, B, C, D; arrows A → B, B → C, and B → D, with a conditional probability table attached to each node.]

A      P(A)
false  0.6
true   0.4

A      B      P(B|A)
false  false  0.01
false  true   0.99
true   false  0.7
true   true   0.3

B      C      P(C|B)
false  false  0.4
false  true   0.6
true   false  0.9
true   true   0.1

B      D      P(D|B)
false  false  0.02
false  true   0.98
true   false  0.05
true   true   0.95

A Directed Acyclic Graph

[Figure: the same DAG, with arrows A → B, B → C, and B → D.]

Each node in the graph is a random variable. A node X is a parent of another node Y if there is an arrow from node X to node Y; e.g., A is a parent of B. Informally, an arrow from node X to node Y means X has a direct influence on Y.

A Set of Tables for Each Node

Each node X_i has a conditional probability distribution P(X_i | Parents(X_i)) that quantifies the effect of the parents on the node. The parameters are the probabilities in these conditional probability tables (CPTs); see the tables for A, B, C, and D above, and the sketch below.
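As a rough sketch (our own representation; the slides prescribe no code), the example network's structure and CPTs fit in two dictionaries. Later sketches below reuse these names.

# Rough sketch (our own representation): the example network's structure
# and CPTs as dictionaries. Later sketches below reuse these names.

parents = {"A": [], "B": ["A"], "C": ["B"], "D": ["B"]}

# cpt[node][parent_values] = P(node = true | parents = parent_values)
cpt = {
    "A": {(): 0.4},
    "B": {(False,): 0.99, (True,): 0.3},
    "C": {(False,): 0.6,  (True,): 0.1},
    "D": {(False,): 0.98, (True,): 0.95},
}

def node_prob(node, value, assignment):
    """P(node = value | its parents' values in the full assignment)."""
    p_true = cpt[node][tuple(assignment[p] for p in parents[node])]
    return p_true if value else 1.0 - p_true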

A Set of Tables for Each Node

Consider the conditional probability distribution for C given B (the P(C|B) table above). If you have a Boolean variable with k Boolean parents, this table has 2^(k+1) probabilities, but only 2^k need to be stored. For a given combination of values of the parents (B in this example), the entries for P(C = true | B) and P(C = false | B) must add up to 1, e.g. P(C = true | B = false) + P(C = false | B = false) = 1.

Bayesian Networks

Two important properties:
1. Encodes the conditional independence relationships between the variables in the graph structure
2. Is a compact representation of the joint probability distribution over the variables

Conditional Independence

The Markov condition: given its parents (P_1, P_2), a node (X) is conditionally independent of its non-descendants (ND_1, ND_2).

[Figure: node X with parents P_1 and P_2, children C_1 and C_2, and non-descendants ND_1 and ND_2.]

The Joint Probability Distribution

Due to the Markov condition, we can compute the joint probability distribution over all the variables X_1, …, X_n in the Bayesian net using the formula:

P(X_1 = x_1, …, X_n = x_n) = ∏_i P(X_i = x_i | Parents(X_i))

where Parents(X_i) means the values of the parents of node X_i with respect to the graph.

Using a Bayesian Network Example

Using the network in the example, suppose you want to calculate:

P(A = true, B = true, C = true, D = true)
= P(A = true) * P(B = true | A = true) * P(C = true | B = true) * P(D = true | B = true)
= (0.4) * (0.3) * (0.1) * (0.95)

The form of the factorization comes from the graph structure; the numbers come from the conditional probability tables.
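Continuing the dictionary sketch from above (our own code, not the slides'), the same product is one CPT lookup per node:

import math

# Continuing the sketch above (reuses parents, cpt, node_prob): the joint
# probability of a full assignment is one CPT lookup per node, multiplied.
def joint(assignment):
    return math.prod(node_prob(n, assignment[n], assignment)
                     for n in parents)

print(joint({"A": True, "B": True, "C": True, "D": True}))
# 0.4 * 0.3 * 0.1 * 0.95 = 0.0114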

Inference

Using a Bayesian network to compute probabilities is called inference. In general, inference involves queries of the form:

P(X | E)

where X is the query variable(s) and E is the evidence variable(s).

Inference

An example of a query would be:

P(HasAnthrax = true | HasFever = true, HasCough = true)

Note: even though HasDifficultyBreathing and HasWideMediastinum are in the Bayesian network, they are not given values in the query (i.e., they appear neither as query variables nor as evidence variables). They are treated as unobserved variables.

[Figure: a network over HasAnthrax and the symptom nodes HasCough, HasFever, HasDifficultyBreathing, and HasWideMediastinum.]
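The slides give no CPTs for this network, but on the small A, B, C, D network above such a query can be answered by brute-force enumeration: sum the joint over the unobserved variables, then normalize. A minimal sketch reusing the earlier code:

from itertools import product

# Minimal enumeration sketch (our own; reuses parents and joint from above):
# P(var = true | evidence), summing the joint over unobserved variables.
def query(var, evidence):
    hidden = [n for n in parents if n != var and n not in evidence]
    totals = {True: 0.0, False: 0.0}
    for val in (True, False):
        for combo in product([True, False], repeat=len(hidden)):
            full = dict(evidence, **dict(zip(hidden, combo)), **{var: val})
            totals[val] += joint(full)
    return totals[True] / (totals[True] + totals[False])

print(query("B", {"D": True}))  # P(B = true | D = true)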

The Bad News

Exact inference is feasible in small to medium-sized networks, but in large networks it takes a very long time. We resort to approximate inference techniques, which are much faster and give pretty good results.
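The slides don't name a particular technique, but as an illustrative sketch, rejection sampling is one of the simplest approximate methods: draw complete assignments from the network, keep those that match the evidence, and average.

import random

# Illustrative sketch only (the slides name no specific algorithm):
# rejection sampling on the A,B,C,D network (reuses parents and cpt).
def sample_network():
    """Draw one full assignment, sampling each node given its parents."""
    a = {}
    for n in ["A", "B", "C", "D"]:  # topological order
        a[n] = random.random() < cpt[n][tuple(a[p] for p in parents[n])]
    return a

def rejection_query(var, evidence, n_samples=100_000):
    kept = [s for s in (sample_network() for _ in range(n_samples))
            if all(s[k] == v for k, v in evidence.items())]
    return sum(s[var] for s in kept) / len(kept)

print(rejection_query("B", {"D": True}))  # ≈ the exact answer above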

One last unresolved issue…

We still haven't said where we get the Bayesian network from. There are two options:
1. Get an expert to design it
2. Learn it from data

Acknowledgements

Andrew Moore (for letting me copy material from his slides)

Greg Cooper, John Levander, John Dowling, Denver Dash, Bill Hogan, Mike Wagner, and the rest of the RODS lab

References

Bayesian networks:
"Bayesian Networks Without Tears" by Eugene Charniak
"Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig

Other references:
My webpage
PANDA webpage
RODS webpage