1 Pre and Post Preferences over Abductive Models
Luís Moniz Pereira 1, Gonçalo Lopes 1, Pierangelo Dell'Acqua 2
1 CENTRIA – Universidade Nova de Lisboa, Portugal
2 ITN – Department of Science and Technology, Linköping University, Sweden

2 Background ● Work on preferences in Logic Programming has mostly focused on: – preferences among the rules of a theory – preferences over theory literals

3 Background ● Abduction is a powerful mechanism to account for defeasible reasoning and incomplete knowledge. ● Abductive Logic Programs allow for incompletely defined literals. Each abductive literal (abducible) can be assumed either true or false in a 2-valued semantics. Abductive Logic Programs typically have several models that are derived from the abduced literals.

4 Enabling Pre and Post Preferences ● Our approach is based on handling preference relations between abducibles. ● Preferences over abducibles can be enacted either a priori or a posteriori w.r.t. model generation: – a priori: preferences enacted during the computation of the models of a theory. – a posteriori: preferences enacted on the already computed models of a theory.

5 Language ● A literal is either a domain atom A or its default negation not A. ● A rule takes the form: A ← L1, ..., Lt (t ≥ 0) ● An integrity constraint has the form: ⊥ ← L1, ..., Lt (t ≥ 0) ● Each program is associated with a set of abducibles: literals which do not occur in any rule head.
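To make the notation concrete, here is a minimal example program (ours, not from the slides), with one rule, one integrity constraint, and rain as an abducible:

  wet ← rain                  % rule: wet holds if rain is assumed
  ⊥ ← wet, clothes_out        % integrity constraint: both may never hold together
  % rain occurs in no rule head, so it belongs to the set of abducibles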

6 Hypotheses Generation ● Abducibles extend a theory and can be used to provide alternative explanations for a given query. ● Generating all alternative explanations for a query is a central problem in abduction, because their number grows combinatorially. ● Our aim is to generate only those explanations which are preferred and relevant for the query.

7 Enabling the Assumption of Abducibles ● An abducible is said to be considered if there is an expectation for it, and there is no expectation to the contrary:
  consider(A) ← expect(A), not expect_not(A)
expect and expect_not are domain-specific literals which encode the preconditions for assuming the abducible A.

8 A Priori Preferences ● To express a priori preferences between abducibles we employ preference rules embedded in the theory. ● A preference rule takes the form: a < b ← L1, ..., Lt meaning that abducible a is preferred to abducible b whenever the body of the rule holds.

9 Example: Claire's Drink ● Claire drinks either tea or coffee (but not both). Suppose Claire prefers coffee over tea when sleepy. ● This situation can be represented as:
  expect(tea)
  expect(coffee)
  expect_not(coffee) ← blood_pressure_high
  drink ← tea
  drink ← coffee
  coffee < tea ← sleepy

10 Abducible Sets ● It is often desirable to express a priori preferences over sets of abducibles. ● The assumption of abducibles in such sets is highly context-dependent and should be encoded by rules in the theory. ● The problem is analogous to issues addressed by cardinality and weight constraint rules for the Stable Model semantics, and we can exploit those results.

11 Abducible Sets ● A cardinality constraint takes the form: L { a1, ..., an, not b1, ..., not bm } U (n, m ≥ 0) where L and U are the lower and upper bounds on the number of literals from the set that may hold. ● Cardinality constraints can also occur in the heads of rules, meaning that they are enforced if the body of the rule holds.

12 Example: Claire's Meal ● Claire is deciding what to have for a meal from a limited buffet. The menu has appetizers, three main dishes (max 2 per person) and drinks. ● This situation can be represented as:
  0 { bread, salad, cheese } 3 ← appetizers
  1 { fish, meat, veggie } 2 ← main_dishes
  1 { wine, juice, water } 1 ← drinks

13 Example: Claire's Meal ● Claire may skip the appetizers unless she's very hungry.
  2 { appetizers, main_dishes, drinks } 3
  main_dishes < appetizers
  drinks < appetizers
  appetizers ← very_hungry
● Appetizers is the least preferred set, which effectively constrains the abduction of the abducibles it contains.

14 Semantics ● The declarative semantics is given in terms of Preferred Relevant Abductive Partial Stable Models, a Stable Models-based semantics. ● The procedural semantics is based on a syntactic transformation that translates a program with preferences into a normal logic program with cardinality constraints.
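As an illustration only (our hedged sketch, not the authors' actual translation), the preference rule coffee < tea ← sleepy from Claire's Drink could be compiled into a constraint that blocks the less preferred abducible while the preferred one is available:

  % hedged sketch of one clause such a transformation might emit;
  % the actual translation in the paper may differ
  ⊥ ← tea, consider(coffee), sleepy, not coffee
  % read: tea cannot be abduced alone while coffee is considered
  % and Claire is sleepy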

15 A Posteriori Preferences ● A priori preferences are often insufficient to express final choices. ● Sometimes we are able to enact certain choices only after looking at their consequences. ● These consequences are only available after model generation. ● Only after the relevant models are computed can we reason about which consequences, or other features of a model, are determinant for the final choice, i.e. the quality of the model.

16 Additional Consequences of Abductive Hypotheses ● Consider the simple abductive logic program:
  c ← a
  1 { a, b } 1
  expect(a)
  expect(b)
● Two abductive stable models can be derived:
  M1 = { expect(a), expect(b), a, c }
  M2 = { expect(a), expect(b), b }
● Suppose that c is an “unwanted” literal. Hence, we would like to prefer models without c.

17 Additional Consequences … ● For this particular program we could simply state: b < a ← c ● However, we should add a similar rule for every possible combination of abducibles which implies c. ● To express this meta-preference it is convenient to simply look at the computed models, and prefer a posteriori the ones where c is not present.
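A posteriori, such a meta-preference can be stated as a simple test over whole computed models. A minimal Prolog-style sketch (our illustration; prefer_post is a hypothetical predicate, with models represented as lists of literals):

  % prefer model M1 over model M2 iff the unwanted literal c
  % holds in M2 but not in M1
  prefer_post(M1, M2) :-
      \+ member(c, M1),
      member(c, M2).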

18 Utility Theory ● Quantitative decision making mechanisms can be employed a posteriori for enacting the final choice. ● Simple methodology: – Associate every model with an expected utility value, computed from the hypotheses assumed in the model. – Prefer a posteriori by choosing the models with the greatest expected utility.

19 Example: Claire Goes on Holiday! ● Claire is spending a day on the beach and must decide what means of transportation to adopt: – The car is faster, but there may be a traffic jam – The train takes a lot of time, but it is more environmentally friendly – The bus... it is really a pain ● There are several utility factors for Claire in this situation: getting to the beach fast, the level of comfort, being more environmentally friendly, etc.

20 Example: Claire's Holiday, never by bus ● Consider the abductive logic program:
  go_to(beach) ← car
  go_to(beach) ← train
  go_to(beach) ← bus
  1 { car, train, bus } 1
  expect(car)
  expect(train)
  expect(bus)
  hot
  prob(traffic_jam, 0.7) ← hot
  prob(¬traffic_jam, 0.3) ← hot
  car < bus
  train < bus
  utility(comfort, 10)
  utility(stuck_in_traffic, -8)
  utility(wasting_time, -4)
  utility(environment_friend, 3)

21 Example: Claire's Holiday ● For each scenario compute the expected utility:
  Car scenario:
  Exp. Utility = Ut(comfort) * Prob(¬traffic_jam) + Ut(stuck_in_traffic) * Prob(traffic_jam)
               = 10 * 0.3 + (-8) * 0.7 = -2.6
  Train scenario:
  Exp. Utility = Ut(wasting_time) + Ut(environment_friend)
               = -4 + 3 = -1
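The same computation can be written generically. Below is a minimal Prolog sketch (our illustration; expected_utility, factor and sum_all are hypothetical names, not taken from the actual implementation):

  % utility values, as on slide 20
  utility(comfort, 10).
  utility(stuck_in_traffic, -8).
  utility(wasting_time, -4).
  utility(environment_friend, 3).

  % expected_utility(+Factors, -EU): Factors is a list of factor(F, P)
  % pairs, where factor F holds in the scenario with probability P
  expected_utility(Factors, EU) :-
      findall(W,
              ( member(factor(F, P), Factors),
                utility(F, U),
                W is U * P ),
              Ws),
      sum_all(Ws, EU).

  sum_all([], 0).
  sum_all([X|Xs], S) :- sum_all(Xs, S0), S is S0 + X.

  % ?- expected_utility([factor(comfort, 0.3),
  %                      factor(stuck_in_traffic, 0.7)], EU).
  % EU = -2.6 (up to float rounding), matching the car scenario above.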

22 Oracles ● When faced with several hypotheses among which we cannot choose (neither a priori nor a posteriori), additional information can be acquired. ● This can be done: - by performing an experiment, or - by asking an external system, an oracle ● Information provided by the oracle has to be incorporated into the abductive logic program, and a new reasoning cycle must be performed.

23 Oracles ● This process can be iterated as many times as necessary to yield a final choice. ● A possible termination condition is reaching a fixed point: - there are no further experiments to perform and - the additional information doesn't change the scenaria obtained in the previous step.
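One way to picture the cycle is the following hedged Prolog sketch (all predicate names here are hypothetical placeholders, not the system's actual API):

  % iterate: compute models, consult an oracle while experiments
  % remain, incorporate the answer, repeat until a fixed point
  reason_cycle(KB, FinalModels) :-
      compute_preferred_models(KB, Models),
      (   choose_experiment(Models, Exp)      % an informative experiment exists
      ->  consult_oracle(Exp, Outcome),
          incorporate(Outcome, KB, KB1),      % e.g. assert a new constraint
          reason_cycle(KB1, FinalModels)
      ;   FinalModels = Models                % fixed point: no experiment left
      ).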

24 Example: Medical Diagnosis ● A patient shows up at the dentist with signs of pain upon percussion of the teeth. The expected causes for the observed signs are: – Periapical lesion – Horizontal fracture of the root and/or crown – Vertical fracture of the root and/or crown ● Producing a diagnosis amounts to deriving abductive hypotheses that explain the query percussion_pain.

25 Example: Medical Diagnosis ● A medical knowledge base can point to these diagnoses, and also to additional consequences expected when assuming each of the scenaria:
  percussion_pain ← periapical_lesion
  percussion_pain ← fracture
  fracture ← horizontal_fracture
  fracture ← vertical_fracture
  radiolucency ← periapical_lesion
  tooth_mobility ← horizontal_fracture
  elliptic_fracture_trace ← horizontal_fracture
  decompression_pain ← vertical_fracture

26 Example: Medical Diagnosis ● The literals periapical_lesion, horizontal_fracture and vertical_fracture are abducibles. ● There are three possible explanatory scenaria for percussion_pain. ● We can attempt to disprove/confirm some of the hypotheses by performing experiments on the expected consequences of each scenario.
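Spelling these out from the rules on slide 25, each scenario predicts distinct observable consequences:

  { periapical_lesion }    explains percussion_pain, predicts radiolucency
  { horizontal_fracture }  explains percussion_pain, predicts tooth_mobility, elliptic_fracture_trace
  { vertical_fracture }    explains percussion_pain, predicts decompression_pain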

27 Example: Medical Diagnosis ● The experiments and their results are derived from a secondary reasoning process over a knowledge base of experiments, for example:
  expect(xray) ← possible(radiolucency)
  expect(xray) ← possible(elliptic_fracture_trace)
  expect_not(xray) ← radiotherapy_patient
● Expected consequences of each abductive scenario are asserted as possibilities in this knowledge base.

28 Example: Medical Diagnosis ● The choice of what experiment to perform can be realized by an abductive process, in which the experiments are now the abducibles. ● In this way it is possible to declaratively specify preferences between possible experiments (e.g. attending to the patient's comfort, economic possibilities, etc.) ● After choosing which oracle to consult, the experiment is performed.
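For instance, in the slides' own notation one could write (our illustrative rule; exploratory_access and patient_comfort_first are hypothetical names, not from the slides):

  xray < exploratory_access ← patient_comfort_first
  % prefer the non-invasive x-ray over invasive access
  % whenever the patient's comfort takes priority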

29 Example: Medical Diagnosis ● Confirming or disconfirming an expected consequence can introduce new constraints in the knowledge base. For instance, the constraint:
  ⊥ ← tooth_mobility
can be derived after confirmation that the tooth has no mobility. Since tooth_mobility ← horizontal_fracture, this constraint rules out the horizontal fracture scenario. ● These new constraints will possibly change the scenaria which were derived previously.

30 Implementation ● The proposed framework has been implemented in a combination of XSB Prolog with Smodels. ● The top-down, query-driven evaluation of Prolog complements the bottom-up model computation of the Stable Models semantics. ● We can compute preferred, relevant partial stable models, which in practice means we drastically prune the combinatorial explosion of abducibles.

31 Conclusions ● A priori and a posteriori preference handling are complementary choice mechanisms that can be easily combined. ● A posteriori preferences enable more flexible meta-reasoning and the combination with other approaches for decision making (e.g. Utility Theory).

32 Conclusions ● Hybrid implementations combining top-down and bottom-up reasoning mechanisms can yield interesting additional properties (notably regarding efficiency). ● Other complementary work: – Belief revision over preference rules (how to deal with contradictory preferences) – Prospective logic programming

