
1 Bayes Networks (CSE 415, (c) S. Tanimoto, 2007)
Outline:
Why Bayes Nets?
Review of Bayes' Rule
Combining independent items of evidence
General combination of evidence
Benefits of Bayes nets for expert systems

2 Why Bayes Networks?
Reasoning about events involving many parts or contingencies generally requires that a joint probability distribution be known. Such a distribution might require thousands of parameters, and modeling at that level of detail is typically not practical. Bayes nets require making assumptions about the relevance of some conditions to others. Once those conditional independence assumptions are made, the joint distribution can be "factored" so that many fewer separate parameters must be specified.

3 Review of Bayes' Rule
E: some evidence exists, i.e., a particular condition is true.
H: some hypothesis is true.
P(E|H) = probability of E given H.
P(E|~H) = probability of E given not H.
P(H) = prior probability of H, before E is taken into account.

P(H|E) = P(E|H) P(H) / P(E)

where P(E) = P(E|H) P(H) + P(E|~H) (1 - P(H)).
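A minimal sketch of this computation in Python (the particular numbers below are hypothetical, for illustration only):

def posterior(p_e_given_h, p_e_given_not_h, p_h):
    # Bayes' rule: P(H|E) = P(E|H) P(H) / P(E), with
    # P(E) = P(E|H) P(H) + P(E|~H) (1 - P(H)).
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Hypothetical numbers, not from the slides:
print(posterior(p_e_given_h=0.8, p_e_given_not_h=0.1, p_h=0.05))  # ~0.296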

4 Combining Independent Items of Evidence
E1: The patient's white blood cell count exceeds 110% of average.
E2: The patient's body temperature is above 101 °F.
H: The patient is infected with tetanus.
O(H) = 0.01/0.99 (prior odds)
O(H|E1) = λ1 O(H), where λ1 is the sufficiency factor for a high white cell count.
O(H|E2) = λ2 O(H), where λ2 is the sufficiency factor for high body temperature.
Assuming E1 and E2 are conditionally independent given H (and given ~H):
O(H|E1 ∧ E2) = λ1 λ2 O(H)
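Odds-based updating in Python looks like the following sketch; the sufficiency factors λ1 and λ2 are made-up values here, since the slide leaves them symbolic:

def odds(p):
    # Convert a probability to odds: O = p / (1 - p).
    return p / (1 - p)

def prob(o):
    # Convert odds back to a probability: p = O / (1 + O).
    return o / (1 + o)

prior_odds = odds(0.01)               # O(H) = 0.01/0.99, as on the slide
lam1, lam2 = 20.0, 5.0                # hypothetical sufficiency factors
post_odds = lam1 * lam2 * prior_odds  # O(H|E1 ∧ E2) = λ1 λ2 O(H)
print(prob(post_odds))                # ≈ 0.502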

5 Bayes Net Example
A: Accident (an accident blocked traffic on the highway).
B: Barb Late (Barbara is late for work).
C: Chris Late (Christopher is late for work).
[Network diagram: A → B, A → C]
P(A) = 0.2
P(B|A) = 0.5   P(B|~A) = 0.15
P(C|A) = 0.3   P(C|~A) = 0.1

6 Forward Propagation (from causes to effects)
[Same network: A → B, A → C, with the probabilities above]
Suppose A (there is an accident). Then:
P(B|A) = 0.5   P(C|A) = 0.3
Suppose ~A (no accident). Then:
P(B|~A) = 0.15   P(C|~A) = 0.1
(These come directly from the given information.)
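One way (not from the slides) to encode this little network in Python is with plain dicts; forward propagation is then just a table lookup once A is observed:

P_A = 0.2
P_B_given = {True: 0.5, False: 0.15}   # P(B | A), P(B | ~A)
P_C_given = {True: 0.3, False: 0.1}    # P(C | A), P(C | ~A)

print(P_B_given[True], P_C_given[True])    # accident:    0.5  0.3
print(P_B_given[False], P_C_given[False])  # no accident: 0.15 0.1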

7 Marginal Probabilities (using forward propagation)
[Same network and probabilities]
P(B) = probability Barb is late in any situation
     = P(B|A) P(A) + P(B|~A) P(~A) = (0.5)(0.2) + (0.15)(0.8) = 0.22
Similarly,
P(C) = probability Chris is late in any situation
     = P(C|A) P(A) + P(C|~A) P(~A) = (0.3)(0.2) + (0.1)(0.8) = 0.14
Marginalizing means eliminating a contingency by summing the probabilities for its different cases (here A and ~A).
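The same marginalization, continuing the dict encoding sketched above:

P_A = 0.2
P_B_given = {True: 0.5, False: 0.15}
P_C_given = {True: 0.3, False: 0.1}

# Sum out A: P(X) = P(X|A) P(A) + P(X|~A) P(~A)
P_B = P_B_given[True] * P_A + P_B_given[False] * (1 - P_A)
P_C = P_C_given[True] * P_A + P_C_given[False] * (1 - P_A)
print(P_B, P_C)   # ≈ 0.22  ≈ 0.14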

8 Backward Propagation: "Diagnosis" (from effects to causes)
[Same network and probabilities]
Suppose B (Barb is late). What's the probability of an accident on the highway? Use Bayes' rule:
P(A|B) = P(B|A) P(A) / P(B)
       = (0.5)(0.2) / ((0.5)(0.2) + (0.15)(0.8))
       = 0.1 / 0.22 = 0.4545
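The diagnostic step as a sketch, reusing the marginal P(B) computed above:

P_A = 0.2
P_B_given = {True: 0.5, False: 0.15}

P_B = P_B_given[True] * P_A + P_B_given[False] * (1 - P_A)   # 0.22
# Bayes' rule: P(A|B) = P(B|A) P(A) / P(B)
P_A_given_B = P_B_given[True] * P_A / P_B
print(P_A_given_B)   # ≈ 0.4545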

9 Revising Probabilities of Consequences
[Same network; after observing B, P(A|B) = 0.4545 replaces the prior P(A) = 0.2]
Suppose B (Barb is late). What's the probability that Chris is also late, given this information?
We already found that P(A|B) = 0.4545. Then
P(C|B) = P(C|A) P(A|B) + P(C|~A) P(~A|B)
       = (0.3)(0.4545) + (0.1)(0.5455) = 0.191,
somewhat higher than P(C) = 0.14.
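This works because B and C are conditionally independent given A, so observing B only changes the weight given to A. A sketch:

P_C_given = {True: 0.3, False: 0.1}
P_A_given_B = 0.1 / 0.22   # from the previous slide

# Replace the prior P(A) with the updated P(A|B):
P_C_given_B = (P_C_given[True] * P_A_given_B
               + P_C_given[False] * (1 - P_A_given_B))
print(P_C_given_B)   # ≈ 0.191, up from the marginal P(C) = 0.14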

10 Handling Multiple Causes
D: Disease (Barb has the flu), with P(D) = 0.111.
[Extended network: A → B, D → B, A → C]
P(B|A∧D) = 0.9
P(B|A∧~D) = 0.45
P(B|~A∧D) = 0.75
P(B|~A∧~D) = 0.1
(These values are consistent with P(B|A) = 0.5.)
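A quick consistency check: marginalizing D out of the new table recovers the earlier P(B|A).

P_D = 0.111
P_B_given = {  # P(B=true | A, D), indexed by (a, d)
    (True, True): 0.9, (True, False): 0.45,
    (False, True): 0.75, (False, False): 0.1,
}

# P(B|A) = P(B|A∧D) P(D) + P(B|A∧~D) P(~D)
P_B_given_A = (P_B_given[(True, True)] * P_D
               + P_B_given[(True, False)] * (1 - P_D))
print(P_B_given_A)   # ≈ 0.49995, matching P(B|A) = 0.5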

11 Explaining Away
[Same extended network: A → B, D → B, A → C]
Suppose B (Barb is late). This raises the probability of each cause:
P(A|B) = 0.4545
P(D|B) = P(B|D) P(D) / P(B) = 0.3935,
where P(B|D) = P(B|A∧D) P(A) + P(B|~A∧D) P(~A) = (0.9)(0.2) + (0.75)(0.8) = 0.78.
Now suppose that, in addition, C (Chris is late). C makes it more likely that A is true, and A by itself explains B, so D is now less probable.
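A sketch that checks the explaining-away effect by brute-force enumeration of the full joint P(A, D, B, C). (Under the full four-way table for B, the marginal P(B) works out to about 0.238 rather than the 0.22 of the two-cause model, so the exact posteriors differ slightly from the slide's figures; the qualitative effect is the same.)

from itertools import product

P_A, P_D = 0.2, 0.111
P_B = {(True, True): 0.9, (True, False): 0.45,
       (False, True): 0.75, (False, False): 0.1}   # P(B=true | A, D)
P_C = {True: 0.3, False: 0.1}                      # P(C=true | A)

def joint(a, d, b, c):
    # P(A=a, D=d, B=b, C=c) under the factored model.
    p = (P_A if a else 1 - P_A) * (P_D if d else 1 - P_D)
    p *= P_B[(a, d)] if b else 1 - P_B[(a, d)]
    p *= P_C[a] if c else 1 - P_C[a]
    return p

def p_d_given(b_obs, c_obs=None):
    # P(D=true | B=b_obs [, C=c_obs]), by summing the joint.
    num = den = 0.0
    for a, d, c in product([True, False], repeat=3):
        if c_obs is not None and c != c_obs:
            continue
        p = joint(a, d, b_obs, c)
        den += p
        if d:
            num += p
    return num / den

print(p_d_given(True))         # P(D|B)     ≈ 0.364
print(p_d_given(True, True))   # P(D|B ∧ C) ≈ 0.289: D is explained away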

12 Benefits of Bayes Nets
The full joint probability distribution over n Boolean variables normally requires 2^n - 1 independent parameters. With Bayes nets we specify only:
1. "Root" node probabilities, e.g., P(A=true) = 0.2; P(A=false) = 0.8.
2. For each non-root node, a table of 2^k values, where k is the number of parents of that node. Typically k < 5.
Propagating probabilities happens along the paths in the net. With a full joint probability distribution, many more computations may be needed.
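Counting parameters for the four-node example above (A and D are roots; B has two parents, C has one):

def joint_params(n):
    # Independent parameters in a full joint over n Boolean variables.
    return 2 ** n - 1

def bayes_net_params(parent_counts):
    # One table of 2**k values per node, where k = number of parents.
    return sum(2 ** k for k in parent_counts)

print(joint_params(4))                   # 15
print(bayes_net_params([0, 0, 2, 1]))    # 1 + 1 + 4 + 2 = 8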

