1 Chap. 4: Decision Graphs
Bayesian Networks and Decision Graphs, Finn V. Jensen
Presented by Ken Chen, Genome Sequencing Center
Statistical Genetics Forum
2 Using the probabilities provided by the network to support decision making
(diagram: network with Flu and its symptoms Fever and Sleepy, a test node T, and an action node A)
Test decisions: look for more evidence
Action decisions
3 One action
Example: a poker game with a single decision D (call / fold) and a utility node U
(diagram: chance nodes OH0, OH1, OH2, FC, SC, BH, MH feeding the decision D and the utility U)
4 One action in general
Goal: find the decision D = d that maximizes the expected utility EU(d | e) = Σ_X U(X, d) P(X | d, e)
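A minimal sketch of this rule in Python (the flu scenario, action names, and all numbers below are invented for illustration):

```python
def best_action(actions, posterior, utility):
    """Pick the option d maximizing EU(d | e) = sum_x U(x, d) * P(x | e)."""
    def eu(d):
        return sum(p * utility[(x, d)] for x, p in posterior.items())
    return max(actions, key=eu)

# Hypothetical numbers: P(flu | e) = 0.3, with utilities for treating vs. waiting.
posterior = {"flu": 0.3, "well": 0.7}
utility = {("flu", "treat"): 5, ("well", "treat"): -1,
           ("flu", "wait"): -10, ("well", "wait"): 0}
print(best_action(["treat", "wait"], posterior, utility))  # -> treat
```

Here EU(treat) = 0.3·5 + 0.7·(-1) = 0.8 beats EU(wait) = -3, so treating is chosen.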
5 4.2 Utilities
Example: management of effort across two courses, GA and DSS, each with its own utility node (U_GA, U_DSS).
Decision options:
- Gd: keep pace in GA, follow DSS superficially
- SB: slow down in both courses
- Dg: keep pace in DSS, follow GA superficially
Goal 1: maximize the sum of the expected marks
In general: maximize the sum of the expected utilities
6 4.3 Value of information
(diagram: hypothesis H, test T, action A, utility U)
7 Nonutility value functions
When there is no proper model of actions and utilities, the reason for testing is to decrease the uncertainty of the hypothesis.
8 Nonmyopic data request
Example: testing milk for infection. The decision tree chooses whether to run Test 1 (T1: yes/no) and then possibly Test 2 (T2), each with outcomes pos/neg over the states infected/clean, leading to the actions pour or discard.
(value residue from the tree: 99.94, -0.06, 97.74, 99.74, -0.26)
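The value of a test can be sketched as the gain in the best achievable expected utility; the perfect test and all numbers below are invented, loosely echoing the milk example:

```python
def expected_utility(dist, utility, d):
    return sum(p * utility[(h, d)] for h, p in dist.items())

def value_of_test(prior, likelihood, utility, actions, outcomes):
    """EVI = sum_t P(t) * max_d EU(d | t)  -  max_d EU(d): how much the
    best achievable expected utility rises if we can observe t first."""
    baseline = max(expected_utility(prior, utility, d) for d in actions)
    informed = 0.0
    for t in outcomes:
        p_t = sum(likelihood[h][t] * prior[h] for h in prior)
        posterior = {h: likelihood[h][t] * prior[h] / p_t for h in prior}
        informed += p_t * max(expected_utility(posterior, utility, d)
                              for d in actions)
    return informed - baseline

# Invented numbers: 10% prior infection risk, a perfect test, pouring
# clean milk is worth 100, pouring infected milk costs 1000.
prior = {"infected": 0.1, "clean": 0.9}
likelihood = {"infected": {"pos": 1.0, "neg": 0.0},
              "clean": {"pos": 0.0, "neg": 1.0}}
utility = {("infected", "pour"): -1000, ("clean", "pour"): 100,
           ("infected", "discard"): 0, ("clean", "discard"): 0}
print(value_of_test(prior, likelihood, utility,
                    ["pour", "discard"], ["pos", "neg"]))  # -> 90.0
```

Without the test the best choice is to discard (EU = 0); with a perfect test we pour only on a negative result, so observing it is worth 90.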
9 Decision Tree
- Nonleaf nodes are decision nodes or chance nodes; the leaves are utility nodes.
- Complete: from a chance node there must be a link for each possible state, and from a decision node there must be a link for each possible decision option.
(diagram: decision node D with options action_1, ..., action_n; chance node X with branches P(X = x_1 | o), ..., P(X = x_n | o); leaf utility U)
10 A car start problem
Possible faults:
- Spark plug (SP), prob = 0.3
- Ignition system (IS), prob = 0.2
- Other, prob = 0.5
Actions:
- SP: fixes the spark plug, 4 min
- IS: fixes the ignition system with prob = 0.5, 2 min
- T: a test that is OK iff the ignition system is OK, 0.5 min
- RS: fixes everything, 15 min
Goal: have the car fixed as soon as possible (minimize expected repair time)
11 (decision tree for the car start problem: chance branches OK/!OK with probabilities 0.38/0.62, 0.8/0.2, 0.7/0.3, 0.1/0.9, and expected repair times such as 10.5, 12.5, 14.5, 15, 25.5, 26, 27.5, 28 at the nodes)
P(SP fix | T = OK) = P(SP | T = OK) = P(SP | !IS) = P(SP) / (P(SP) + P(others)) = 0.3 / 0.8 = 0.38
P(IS fix) = P(IS) P(fix | IS) = 0.2 × 0.5 = 0.1
12 Solving Decision Trees
Roll the tree back from the leaves: average out at chance nodes and take the best option at decision nodes (here: minimize expected repair time).
(tree residue: the root options evaluate to expected times 16.96, 16.27, and 15.43)
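The roll-back ("average-out-and-fold-back") pass can be sketched generically; the toy tree below is invented, and for the car problem one would store negative repair times so that maximizing minimizes expected time:

```python
from dataclasses import dataclass

@dataclass
class Utility:      # leaf node
    value: float

@dataclass
class Chance:       # branches: list of (probability, subtree) pairs
    branches: list

@dataclass
class Decision:     # options: label -> subtree
    options: dict

def rollback(node):
    """Optimal expected utility of a (sub)tree: average out at chance
    nodes, take the best option at decision nodes."""
    if isinstance(node, Utility):
        return node.value
    if isinstance(node, Chance):
        return sum(p * rollback(sub) for p, sub in node.branches)
    return max(rollback(sub) for sub in node.options.values())

# Invented toy tree: a safe option worth 10 vs. a gamble worth 0.5*4 + 0.5*20 = 12.
tree = Decision({"safe": Utility(10),
                 "gamble": Chance([(0.5, Utility(4)), (0.5, Utility(20))])})
print(rollback(tree))  # -> 12.0
```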
13 Coalesced decision trees
Decision trees grow exponentially with the number of decisions and chance variables. When a decision tree contains identical subtrees, they can be collapsed (coalesced).
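Coalescing can be mimicked by memoizing the roll-back over a hashable tree encoding, so each distinct subtree is evaluated only once (the tuple encoding and all numbers are invented for illustration):

```python
from functools import lru_cache

# Encode the tree with nested tuples so identical subtrees compare equal:
# ("U", value) | ("C", ((p, sub), ...)) | ("D", ((label, sub), ...))

@lru_cache(maxsize=None)
def rollback(node):
    kind, payload = node
    if kind == "U":
        return payload                                         # utility leaf
    if kind == "C":
        return sum(p * rollback(sub) for p, sub in payload)    # chance node
    return max(rollback(sub) for _, sub in payload)            # decision node

# The shared 'gamble' subtree is evaluated once, however often it appears.
gamble = ("C", ((0.5, ("U", 4.0)), (0.5, ("U", 20.0))))
tree = ("D", (("a", gamble),
              ("b", ("C", ((0.9, gamble), (0.1, ("U", 0.0)))))))
print(rollback(tree))  # -> 12.0
```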
14 4.5 Decision-Theoretic Troubleshooting
A fault causing a device to malfunction is identified and eliminated through a sequence of troubleshooting steps.
A troubleshooting problem can be represented and solved through a decision tree (actions and questions).
As decision trees risk becoming intractably large, we look for ways of pruning them.
15 Action sequences
- An action A_i has two outcomes: A_i = yes (the problem is fixed) and A_i = no.
- Cost of action A_i: C_i(e), which may depend on the evidence e collected so far.
- Action sequence s = <A_1, ..., A_n>: repeatedly perform the next action until the problem is fixed or the last action has been performed.
- Expected cost of repair: ECR(s) = Σ_i C_i(e_{i-1}) P(e_{i-1}), where e_{i-1} is the evidence that the first i-1 actions have failed.
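Under the single-fault assumptions stated later in Proposition 4.2 (constant costs, action A_i repairs only fault F_i), the ECR sum reduces to a running "probability still broken"; a sketch with invented numbers echoing the car example:

```python
def ecr(sequence):
    """Expected cost of repair of an action sequence.
    Each element is (cost, P(F_i), P(A_i = yes | F_i)); in the single-fault
    model P(still broken) drops by P(F_i) * p_i after action A_i."""
    still_broken = 1.0
    total = 0.0
    for cost, p_fault, p_repair in sequence:
        total += cost * still_broken   # we pay C_i only if nothing fixed it yet
        still_broken -= p_fault * p_repair
    return total

# SP (4 min, certain fix of a 0.3 fault), then IS (2 min, fixes a 0.2
# fault with probability 0.5): ECR = 4*1.0 + 2*0.7 = 5.4 minutes.
print(ecr([(4, 0.3, 1.0), (2, 0.2, 0.5)]))
```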
16 Local optimality of the optimal sequence (dynamic programming)
Consider two neighboring actions A_i and A_{i+1}: in an optimal sequence, swapping them cannot decrease the expected cost, which yields a local ordering condition on adjacent actions.
The pruned tree has eight non-RS links, compared to 32 in a coalesced decision tree for the same problem.
17 The greedy approach
Always choose the action with the highest efficiency ef(A_i) = P(A_i = yes) / C_i. Not necessarily optimal!
Proposition 4.2: conditions under which the greedy approach is optimal:
- n faults F_1, ..., F_n and n actions A_1, ..., A_n
- Exactly one of the faults is present
- Each action has a specific probability of repair: p_i = P(A_i = yes | F_i), and P(A_i = yes | F_j) = 0 if i ≠ j
- The cost C_i of an action does not depend on the performance of previous actions
Theorem 4.2: Let s be an action sequence fulfilling the conditions in Proposition 4.2, ordered according to decreasing initial efficiencies. Then s is an optimal action sequence.
(diagram: faults F1, F2, F3, F4 and actions A1, A2, A3)
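The greedy rule sorts actions by efficiency ef(A_i) = p_i / C_i, with p_i the marginal repair probability; the numbers below are invented, loosely echoing the car example:

```python
def greedy_order(actions):
    """Order actions by decreasing efficiency ef(A_i) = p_i / C_i."""
    return sorted(actions, key=lambda a: a["p"] / a["cost"], reverse=True)

# Invented marginal repair probabilities and costs (minutes):
actions = [{"name": "SP", "p": 0.30, "cost": 4.0},
           {"name": "IS", "p": 0.10, "cost": 2.0},
           {"name": "RS", "p": 1.00, "cost": 15.0}]
print([a["name"] for a in greedy_order(actions)])  # -> ['SP', 'RS', 'IS']
```

Efficiencies are 0.075, 0.05, and roughly 0.067, so RS jumps ahead of IS; under Proposition 4.2's conditions this order minimizes ECR, but in general it need not.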
18 Influence Diagram
A compact representation of a decision tree; now seen more as a decision tool extending Bayesian networks.
Syntax:
- There is a directed path comprising all decision nodes
- The utility nodes have no children
- The decision nodes and the chance nodes have a finite set of mutually exclusive states
- The utility nodes have no states
- To each chance node A is attached a conditional probability table P(A | pa(A))
- To each utility node U is attached a real-valued function over pa(U)
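The structural rules can be checked mechanically; a sketch (node names and the toy diagram are invented), using the fact that in a DAG a directed path through all decision nodes exists iff every pair of decisions is ordered by reachability:

```python
def valid_influence_diagram(parents, kinds):
    """Check two of the syntax rules: utility nodes have no children, and
    there is a directed path comprising all decision nodes.
    parents: node -> tuple of parents; kinds: node -> 'chance'|'decision'|'utility'."""
    children = {n: [] for n in parents}
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)

    def reaches(src, dst):          # DFS along directed links
        stack, seen = [src], set()
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(children[n])
        return False

    if any(children[n] for n in parents if kinds[n] == "utility"):
        return False
    decisions = [n for n in parents if kinds[n] == "decision"]
    return all(reaches(a, b) or reaches(b, a)
               for i, a in enumerate(decisions) for b in decisions[i + 1:])

# Invented toy diagram with the chain A -> D1 -> B -> D2 -> U:
parents = {"A": (), "D1": ("A",), "B": ("D1",), "D2": ("B",), "U": ("D2",)}
kinds = {"A": "chance", "D1": "decision", "B": "chance",
         "D2": "decision", "U": "utility"}
print(valid_influence_diagram(parents, kinds))  # -> True
```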
19 Influence diagram for the poker example
(diagram: the Bayesian network extended to an influence diagram, with chance nodes OH0, OH1, OH, OFC, OSC, MH0, MH1, MH, MFC, MSC, BH, decision D, and utility U)
No-forgetting: the decision maker remembers the past observations and decisions.
20 Solution to influence diagrams
Similar to solving a decision tree, but it can be done more efficiently by exploiting the structure of the influence diagram (Chapter 7).
21 Information blocking
(diagram: five seasons of a fishing-volume model, with volumes V_1 ... V_5, tests T_1 ... T_5, decisions FV_1 ... FV_5, and utilities U_1 ... U_5)
The domain of the policy for FV_5 has 10^9 elements. Remedy: introduce variables/links which, when observed, d-separate most of the past from the present decision.