1
Propagation Algorithm in Bayesian Networks
Saturday, August 20, 2016. Farrokh Alemi, PhD. This lecture shows how to predict an event given a Bayesian network and its specifications. The lecture describes propagation in Bayes belief networks.
2
Bayesian Network
A Bayesian network is a directed acyclic graph. Each node in the graph is related to its parents and children by arrows. Here we show a simple directed graph with three types of nodes. For the node X, all P nodes are parents and all C nodes are children. There is always a directed arc from a parent node to X and from X to its children. This is called a directed graph because any two connected nodes are not merely associated with each other; there is always a direction of influence. The parent node P influences node X, and node X influences the child node C. All the relationships in the graph start at one node and end at another node.
3
Bayesian Network
No two connected nodes can lack a direction of influence. In a directed graph we cannot leave the link between nodes X and C1 without a direction. Likewise, if nodes P2 and X are associated but have no direction of influence, that too is not allowed.
4
Bayesian Network
In an acyclic graph, it is not possible to start at one node, follow the directions of the arcs, and end up at the same node. The blue arrow between C1 and P1 creates a cycle in this graph, and this is not allowed in acyclic graphs.
5
Bayesian Network
The fact that we work with directed acyclic graphs limits the applications of our tools but simplifies the mathematics of what we need to take into account.
6
Fidelity
In a Bayesian network, the joint probability of all events can be read directly from the structure of the graph. For each node, one identifies its parents from the graph, and the joint probability of the nodes follows from the product of each node's probability given its parents. We can move from a graph to independence assumptions and then to an equation without loss of information.
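Written as an equation (a standard result for Bayesian networks, stated here for reference), the joint probability of all n nodes factors into the product of each node's probability given its parents:

```latex
P(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Parents}(X_i)\bigr)
```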
7
Formula & Graph Correspondence
S: Severity of Illness, R: Do Not Resuscitate, M: Provider's Decision, T: Treatment, O: Outcome
For example, here we have a possible network that relates a patient's severity of illness to the clinician's choice of treatment and the outcomes. In this graph, severity is a parent of the patient's preference on resuscitation, the treatment choice, and the outcome. The parents of the treatment node are severity, resuscitation, and the provider's decision. The parents of the outcome node are the treatment received and the severity of illness. Each of these parent-child relationships indicates a dependency in the data. Perhaps more important, the absence of any link indicates independence.
8
Formula & Graph Correspondence
For example, the link between the provider's decision and health outcomes is not present. The model assumes that the provider influences health outcomes primarily through the choice of treatment. Once the treatment is known, the provider's choice is irrelevant. Both the lines drawn in the Bayes net and the lines not drawn inform us of the interdependencies among the events. Furthermore, these interdependencies allow us to estimate the joint probability distribution.
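In symbols, using the abbreviations from the slide, the missing link asserts that once treatment T and severity S are known, the outcome O is independent of the provider's decision M:

```latex
P(O \mid T, S, M) = P(O \mid T, S)
```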
9
Formula?
Let us see if we can write the equation for estimating the joint probability of the events in the Bayes net. In general, we know that only the parents of a node matter. We have abbreviated outcome by O, severity by S, treatment by T, the physician's decision by M, and the patient's preference for no resuscitation by R. For simplicity we assume these are all binary events; in other words, there are only two options: either to give treatment or not, there are only two outcomes: the patient lives longer or the patient dies, and so on. Now we can see what the equivalent statement about this graph is in terms of equations.
10
Probability of an Event
Let us start at the end node, which is the patient outcome. It is called an end node because it has no children. The probability of this node is conditioned on its parents; non-parents are not relevant.
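In symbols, the term contributed by the outcome node depends only on its parents, severity and treatment:

```latex
P(O \mid S, R, M, T) = P(O \mid S, T)
```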
11
Probability of an Event
Now we can also calculate the probability of treatment, which is conditioned on its parents. Note that the rest of the graph is irrelevant to this calculation.
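In symbols, the term contributed by the treatment node is its probability given its three parents:

```latex
P(T \mid S, R, M)
```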
12
Probability of an Event
Finally, we need the probabilities of severity, do not resuscitate, and the physician's choice of treatment. These events have no parents, so their probabilities are simply the marginal probabilities.
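In symbols, the remaining terms are the marginal probabilities of the parentless nodes:

```latex
P(S), \qquad P(R), \qquad P(M)
```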
13
Probability of an Event
Now we can put all five calculated probabilities together to estimate the joint probability of all events in the graph. The availability of the graph structure and its embedded assumptions of independence has radically simplified what data we need and how we can calculate the joint distribution of the events.
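Combining the five terms from the preceding slides gives the joint probability. (If the arc from severity to the resuscitation preference described on slide 7 is retained, the P(R) term would instead read P(R | S).)

```latex
P(O, T, S, R, M) = P(O \mid S, T)\; P(T \mid S, R, M)\; P(S)\; P(R)\; P(M)
```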
14
What If
Once we have the joint distribution of the events, we can use it to calculate conditional probabilities: the probability of some events given that other events have already occurred. This is a relatively simple operation. The conditional probability is the joint probability of the events divided by the probability of the conditioning events. So for the probability of a particular outcome for a patient who is severely ill, we divide the joint probability of that outcome and severe illness by the probability of observing severe patients. One way to think of conditional probability is that we have selected all patients who are severely ill and, within these patients, we are looking at the frequency of the various outcomes.
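For the example on the slide, the probability of a particular outcome given severe illness is the joint probability of that outcome and severe illness divided by the probability of severe illness:

```latex
P(O \mid S) = \frac{P(O, S)}{P(S)}
```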
15
Sum Out Missing Variables
Note that in this calculation the joint distribution of two variables, outcome and severity, is needed, but earlier we calculated the joint distribution of all five variables. To move from the joint distribution of all five variables to fewer variables, we have to sum out the missing variables and calculate marginal tables. In this case, treatment, the physician's decision, and resuscitation preferences are missing from the joint distribution.
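In symbols, the needed two-variable table is obtained by summing the full joint distribution over the missing variables:

```latex
P(O, S) = \sum_{T} \sum_{R} \sum_{M} P(O, T, S, R, M)
```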
16
Calculate from Marginal Tables after Removing Missing Variables
Conditional questions are asked often, and Bayes nets provide a useful way to answer them. For example, information on the comparative value of treatments may be requested. Here we are asking what the likely outcomes are for severe patients who received a particular treatment. If the joint distribution is known, then conditional probabilities can be easily calculated. As before, note that this calculation needs the joint distribution of three variables: outcome, severity, and treatment. The Bayes calculations provide the joint distribution of all five variables. To move from the joint distribution of all five variables to fewer variables, we have to sum out the missing variables and calculate marginal tables.
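The following is a minimal sketch of these calculations in Python, following the factorization on the preceding slides (with the resuscitation preference treated as a root node). All conditional probability tables are hypothetical numbers invented purely to illustrate the mechanics; a real application would estimate them from data. The sketch builds the joint distribution of the five binary events, then answers a conditional query by summing out the unqueried variables.

```python
from itertools import product

# Hypothetical conditional probability tables for the five binary events (1 = yes).
# All numbers below are invented for illustration only.
p_S = {1: 0.3, 0: 0.7}          # P(Severity of illness)
p_R = {1: 0.2, 0: 0.8}          # P(Do not resuscitate), treated as a root node here
p_M = {1: 0.5, 0: 0.5}          # P(Provider's decision)
p_T = {                         # P(Treatment | Severity, DNR, Provider's decision)
    (s, r, m): {1: t1, 0: 1 - t1}
    for (s, r, m), t1 in {
        (1, 1, 1): 0.4, (1, 1, 0): 0.2, (1, 0, 1): 0.9, (1, 0, 0): 0.5,
        (0, 1, 1): 0.3, (0, 1, 0): 0.1, (0, 0, 1): 0.6, (0, 0, 0): 0.2,
    }.items()
}
p_O = {                         # P(Outcome | Severity, Treatment)
    (s, t): {1: o1, 0: 1 - o1}
    for (s, t), o1 in {(1, 1): 0.6, (1, 0): 0.3, (0, 1): 0.9, (0, 0): 0.7}.items()
}

# Joint distribution read off the graph: P(S) P(R) P(M) P(T|S,R,M) P(O|S,T).
names = ("S", "R", "M", "T", "O")
joint = {}
for s, r, m, t, o in product((0, 1), repeat=5):
    joint[(s, r, m, t, o)] = (
        p_S[s] * p_R[r] * p_M[m] * p_T[(s, r, m)][t] * p_O[(s, t)][o]
    )

def conditional(query, evidence):
    """P(query | evidence), summing out every variable not fixed by query or evidence."""
    def matches(key, assignment):
        return all(key[names.index(v)] == value for v, value in assignment.items())
    fixed = {**query, **evidence}
    numerator = sum(p for key, p in joint.items() if matches(key, fixed))
    denominator = sum(p for key, p in joint.items() if matches(key, evidence))
    return numerator / denominator

# Likely outcome for severe patients who received treatment: P(O = 1 | S = 1, T = 1).
print(conditional({"O": 1}, {"S": 1, "T": 1}))   # 0.6 with the tables above
```

With these made-up numbers the answer simply equals the outcome table entry, because the outcome depends only on severity and treatment; when evidence is placed on other variables, the summing out does the real work.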
17
Independence & Graph Structure
The following shows a number of graph structures and their corresponding independence structure
18
Independence & Graph Structure
The following shows a number of graph structures and their corresponding independence structure
19
Independence & Graph Structure
The following shows a number of graph structures and their corresponding independence structure