
1 Bayesian networks practice

2 Semantics
e.g., P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = P(j|a) P(m|a) P(a|¬b, ¬e) P(¬b) P(¬e) = …
Suppose we have the variables X1, …, Xn. The probability for them to have the values x1, …, xn respectively is P(xn, …, x1).
P(xn, …, x1) is short for P(Xn = xn, …, X1 = x1).
We order the variables according to the topological order of the given Bayesian network.
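Completing the "…" requires the network's CPTs. With the textbook alarm-network numbers (an assumption here; these values come from the standard Russell & Norvig example, not from these slides): P(j|a) = 0.9, P(m|a) = 0.7, P(a|¬b,¬e) = 0.001, P(¬b) = 0.999, P(¬e) = 0.998, so
P(j ∧ m ∧ a ∧ ¬b ∧ ¬e) = 0.9 * 0.7 * 0.001 * 0.999 * 0.998 ≈ 0.000628.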

3 Inference in Bayesian Networks
The basic task is to compute the posterior probability of a query variable, given some observed event –that is, some assignment of values to a set of evidence variables.
Notation:
–X denotes the query variable.
–E denotes the set of evidence variables E1, …, Em, and e is a particular event, i.e. an assignment to the variables in E.
–Y denotes the set of the remaining variables (hidden variables).
A typical query asks for the posterior probability P(X | e1, …, em). E.g., we could ask: What's the probability of a burglary if both Mary and John call, P(burglary | johncalls, marycalls)?
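Written out with the α normalization notation used on the later slides, the query is answered by summing the joint distribution over the hidden variables Y and normalizing:
P(X | e) = α P(X, e) = α Σ_y P(X, e, y)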

4 Classification
We compute and compare the posterior probability of each class value given the evidence. But how do we compute these posteriors? And what about the hidden variables Y1, …, Yk?
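A standard way to write the comparison (a reconstruction, since the slide's own formulas are embedded as images): for each class value c, compute
P(c | e1, …, em) = α Σ_y1 … Σ_yk P(c, e1, …, em, y1, …, yk),
summing the joint probability over all values of the hidden variables, and predict the class value with the largest posterior.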

5 Inference by enumeration
Example: P(burglary | johncalls, marycalls)? (Abbreviated P(b|j,m).)
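As a sketch of enumeration in code: the CPT numbers below are the textbook alarm-network values (an assumption, since the slides do not list them), under which the standard answer is P(b|j,m) ≈ 0.284.

from itertools import product

# CPTs of the burglary/alarm network (textbook values, assumed).
P_B = {True: 0.001, False: 0.999}          # P(Burglary)
P_E = {True: 0.002, False: 0.998}          # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm=true | B, E)
P_J = {True: 0.90, False: 0.05}            # P(JohnCalls=true | A)
P_M = {True: 0.70, False: 0.01}            # P(MaryCalls=true | A)

def joint(b, e, a, j, m):
    # Product of CPT entries along the topological order.
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

# Enumerate: sum out the hidden variables E and A, then normalize.
unnorm = {b: sum(joint(b, e, a, True, True)
                 for e, a in product([True, False], repeat=2))
          for b in (True, False)}
alpha = 1 / sum(unnorm.values())
print(alpha * unnorm[True])   # -> about 0.284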

6 Another example
Once the right topology has been found, the probability table associated with each node is determined. Estimating such probabilities is fairly straightforward and is similar to the approach used by naïve Bayes classifiers.

7 High Blood Pressure
Suppose we learn that the new patient has high blood pressure. What is the probability that he has heart disease given this condition?

8 High Blood Pressure (Cont’d)

9 High Blood Pressure (Cont'd)

10 High Blood Pressure, Healthy Diet, and Regular Exercise

11 High Blood Pressure, Healthy Diet, and Regular Exercise (Cont’d)

12 The model therefore suggests that eating healthily and exercising regularly may reduce a person's risk of getting heart disease.

13 Weather data What is the Bayesian Network corresponding to Naïve Bayes?

14 “Effects” and “Causes” vs. “Evidence” and “Class”
Why does Naïve Bayes have this graph? Because when we compute, in Naïve Bayes:
P(play=yes | E) = P(Outlook=Sunny | play=yes) * P(Temp=Cool | play=yes) * P(Humidity=High | play=yes) * P(Windy=True | play=yes) * P(play=yes) / P(E)
we are interested in computing P(… | play=yes), i.e. the probabilities of our evidence “observations” given the class.
Of course, “play” isn’t a cause of “outlook”, “temperature”, “humidity”, and “windy”. However, “play” is the class, and knowing that it has a certain value will influence the observational evidence probability values. For example, if play=yes and we know that the playing happens indoors, then it is more probable (than without this class information) that the outlook will be observed to be “rainy.”

15 Right or Wrong Topology?
In general, there is no right or wrong graph topology.
–Of course, the probabilities calculated from the data will be different for different graphs.
–Some graphs will induce better classifiers than others.
–If you reverse the arrows in the previous figure, then you get a purely causal graph, whose induced classifier might have an estimated error (through cross-validation) better or worse than the Naïve Bayes one (depending on the data).
If the topology is constructed manually, we (humans) tend to prefer the causal direction.
–In domains such as medicine, the graphs are usually less complex in the causal direction.

16 Weka suggestion
How does Weka find the shape of the graph? It fixes an order of attributes (variables) and then adds and removes arcs until it gets the smallest estimated error (through cross-validation). By default it starts with a Naïve Bayes network. It also maintains a score of graph complexity, trying to keep the complexity low.

17

18 You can change it to 2, for example. If you do, then the maximum number of parents for a node will be 2. Weka will start with a Naïve Bayes graph and then try to add/remove arcs.
Laplace correction: better change it to 1, to be compatible with the counter initialization in Naïve Bayes.
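For reference, these appear to correspond to options of Weka's BayesNet classifier: the K2 search algorithm's maximum-number-of-parents option (set it to 2 to allow two parents per node) and the SimpleEstimator's alpha parameter (set it to 1.0 for the Laplace correction above). Option names here are from memory of the standard Weka distribution, so verify them against your version's configuration dialog.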

19 Play probability table
Based on the data:
P(play=yes) = 9/14
P(play=no) = 5/14
Let's correct with Laplace:
P(play=yes) = (9+1)/(14+2) = .625
P(play=no) = (5+1)/(14+2) = .375

20 Outlook probability table
Based on the data:
P(outlook=sunny|play=yes) = (2+1)/(9+3) = .25
P(outlook=overcast|play=yes) = (4+1)/(9+3) = .417
P(outlook=rainy|play=yes) = (3+1)/(9+3) = .333
P(outlook=sunny|play=no) = (3+1)/(5+3) = .5
P(outlook=overcast|play=no) = (0+1)/(5+3) = .125
P(outlook=rainy|play=no) = (2+1)/(5+3) = .375
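A minimal sketch of the Laplace-corrected estimate used above (the helper name is ours, for illustration):

def laplace(count, total, num_values):
    # Add 1 to each value's count; the denominator grows by the
    # number of possible values of the attribute.
    return (count + 1) / (total + num_values)

print(laplace(9, 14, 2))   # P(play=yes)                 -> 0.625
print(laplace(2, 9, 3))    # P(outlook=sunny | play=yes) -> 0.25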

21 Windy probability table
Based on the data, let's find the conditional probabilities for “windy”:
P(windy=true|play=yes,outlook=sunny) = (1+1)/(2+2) = .5

22 Windy probability table
Based on the data:
P(windy=true|play=yes,outlook=sunny) = (1+1)/(2+2) = .5
P(windy=true|play=yes,outlook=overcast) = 0.5
P(windy=true|play=yes,outlook=rainy) = 0.2
P(windy=true|play=no,outlook=sunny) = 0.4
P(windy=true|play=no,outlook=overcast) = 0.5
P(windy=true|play=no,outlook=rainy) = 0.75

23 Final figure. Classify it.

24 Classification I
Classify it:
P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true)
= α * P(play=yes)
* P(outlook=sunny|play=yes)
* P(temp=cool|play=yes, outlook=sunny)
* P(humidity=high|play=yes, temp=cool)
* P(windy=true|play=yes, outlook=sunny)
= α * 0.625 * 0.25 * 0.4 * 0.2 * 0.5 = α * 0.00625

25 Classification II
Classify it:
P(play=no|outlook=sunny, temp=cool, humidity=high, windy=true)
= α * P(play=no)
* P(outlook=sunny|play=no)
* P(temp=cool|play=no, outlook=sunny)
* P(humidity=high|play=no, temp=cool)
* P(windy=true|play=no, outlook=sunny)
= α * 0.375 * 0.5 * 0.167 * 0.333 * 0.4 = α * 0.00417

26 Classification III
Classify it:
P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true) = α * 0.00625
P(play=no|outlook=sunny, temp=cool, humidity=high, windy=true) = α * 0.00417
α = 1/(0.00625 + 0.00417) = 95.969
P(play=yes|outlook=sunny, temp=cool, humidity=high, windy=true) = 95.969 * 0.00625 = 0.60
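The same computation as a short script (the CPT values are read off the slides above):

# Numerators of the two posteriors, as on slides 24-25.
p_yes = 0.625 * 0.25 * 0.4 * 0.2 * 0.5      # play=yes branch -> 0.00625
p_no  = 0.375 * 0.5 * 0.167 * 0.333 * 0.4   # play=no branch  -> 0.00417

alpha = 1 / (p_yes + p_no)                  # normalization constant
print(round(alpha * p_yes, 2))              # -> 0.6
print(round(alpha * p_no, 2))               # -> 0.4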

27 Classification IV (missing values or hidden variables)
P(play=yes|temp=cool, humidity=high, windy=true)
= α * Σ_outlook P(play=yes)
* P(outlook|play=yes)
* P(temp=cool|play=yes, outlook)
* P(humidity=high|play=yes, temp=cool)
* P(windy=true|play=yes, outlook)
= … (next slide)

28 Classification V (missing values or hidden variables)
P(play=yes|temp=cool, humidity=high, windy=true)
= α * Σ_outlook P(play=yes) * P(outlook|play=yes) * P(temp=cool|play=yes,outlook) * P(humidity=high|play=yes,temp=cool) * P(windy=true|play=yes,outlook)
= α * [ P(play=yes) * P(outlook=sunny|play=yes) * P(temp=cool|play=yes,outlook=sunny) * P(humidity=high|play=yes,temp=cool) * P(windy=true|play=yes,outlook=sunny)
+ P(play=yes) * P(outlook=overcast|play=yes) * P(temp=cool|play=yes,outlook=overcast) * P(humidity=high|play=yes,temp=cool) * P(windy=true|play=yes,outlook=overcast)
+ P(play=yes) * P(outlook=rainy|play=yes) * P(temp=cool|play=yes,outlook=rainy) * P(humidity=high|play=yes,temp=cool) * P(windy=true|play=yes,outlook=rainy) ]
= α * [ 0.625*0.25*0.4*0.2*0.5 + 0.625*0.417*0.286*0.2*0.5 + 0.625*0.33*0.333*0.2*0.2 ]
= α * 0.01645

29 Classification VI (missing values or hidden variables)
P(play=no|temp=cool, humidity=high, windy=true)
= α * Σ_outlook P(play=no) * P(outlook|play=no) * P(temp=cool|play=no,outlook) * P(humidity=high|play=no,temp=cool) * P(windy=true|play=no,outlook)
= α * [ P(play=no) * P(outlook=sunny|play=no) * P(temp=cool|play=no,outlook=sunny) * P(humidity=high|play=no,temp=cool) * P(windy=true|play=no,outlook=sunny)
+ P(play=no) * P(outlook=overcast|play=no) * P(temp=cool|play=no,outlook=overcast) * P(humidity=high|play=no,temp=cool) * P(windy=true|play=no,outlook=overcast)
+ P(play=no) * P(outlook=rainy|play=no) * P(temp=cool|play=no,outlook=rainy) * P(humidity=high|play=no,temp=cool) * P(windy=true|play=no,outlook=rainy) ]
= α * [ 0.375*0.5*0.167*0.333*0.4 + 0.375*0.125*0.333*0.333*0.5 + 0.375*0.375*0.4*0.333*0.75 ]
= α * 0.0208

30 Classification VII (missing values or hidden variables)
P(play=yes|temp=cool, humidity=high, windy=true) = α * 0.01645
P(play=no|temp=cool, humidity=high, windy=true) = α * 0.0208
α = 1/(0.01645 + 0.0208) = 26.846
P(play=yes|temp=cool, humidity=high, windy=true) = 26.846 * 0.01645 = 0.44
P(play=no|temp=cool, humidity=high, windy=true) = 26.846 * 0.0208 = 0.56
I.e., P(play=yes|temp=cool, humidity=high, windy=true) is 44% and P(play=no|temp=cool, humidity=high, windy=true) is 56%. So, we predict ‘play=no.’
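And the hidden-variable case as a script, summing out "outlook" (the per-outlook factors are read off slides 20-22; order: sunny, overcast, rainy):

# Each tuple: (P(outlook|play), P(temp=cool|play,outlook),
#              P(windy=true|play,outlook))
yes_terms = [(0.25, 0.4, 0.5), (0.417, 0.286, 0.5), (0.33, 0.333, 0.2)]
no_terms  = [(0.5, 0.167, 0.4), (0.125, 0.333, 0.5), (0.375, 0.4, 0.75)]

p_yes = sum(0.625 * o * t * 0.2 * w for o, t, w in yes_terms)    # -> 0.01645
p_no  = sum(0.375 * o * t * 0.333 * w for o, t, w in no_terms)   # -> 0.0208

alpha = 1 / (p_yes + p_no)
print(round(alpha * p_yes, 2), round(alpha * p_no, 2))  # -> 0.44 0.56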

