1 Introduction to formal models of argumentation
Henry Prakken Dundee (Scotland) September 4th, 2014

2 What is argumentation? Giving reasons to support claims that are open to doubt Defending these claims against attack NB: Inference + dialogue 2

3 Why study argumentation?
In linguistics: Argumentation is a form of language use In Artificial Intelligence: Our applications have humans in the loop We want to model rational reasoning but with standards of rationality that are attainable by humans Argumentation is natural for humans Trade-off between rationality and naturalness In Multi-Agent Systems: Argumentation is a form of communication Why are we into argumentation and not into Bayesian statistics? Because our intended applications have humans in the loop. We are in this business to promote rationality, so we do not want to naively incorporate everything humans do in our models. But since our intended applications have humans in the loop (unlike e.g. learning applications of Bayes' nets in Google cars), our models of rationality must be explainable to humans and realistically attainable for humans. So naturalness is still a concern (though not an overriding one; trade-offs are necessary).

4 Today: formal models of argumentation
Abstract argumentation Argumentation as inference Frameworks for structured argumentation Deductive vs. defeasible inferences Argument schemes Argumentation as dialogue

5 We should lower taxes Lower taxes increase productivity Increased productivity is good

6 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Attack on conclusion

7 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Lower taxes do not increase productivity Attack on premise … USA lowered taxes but productivity decreased

8 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Lower taxes do not increase productivity … often becomes attack on intermediate conclusion USA lowered taxes but productivity decreased

9 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity Attack on inference People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased

10 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased

11 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity Increased inequality is good People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased Increased inequality stimulates competition Competition is good Indirect defence

12 We should not lower taxes
We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity Increased inequality is good So far we have illustrated: Arguments have structure: premises, a conclusion, and an inference (I will not tell you yet what inference rules licence the inference) So arguments can be attacked in three ways. We can resolve attacks on the basis of our preference, to obtain defeat relations Arguments can be complex (so premise attack often is attack on subconclusions) But what we have not yet seen is how inferences can be drawn from all this. Now Dung’s beautiful insight was that inference can be studied by fully abstracting from the structure of arguments and the nature of defeat. People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased Increased inequality stimulates competition Competition is good

13 A B C D E Letters are NOT propositions! Arrows are defeat relations. A typical case of mutual defeat is equally strong arguments with contradictory conclusions. Two typical cases of strict defeat: - arguments with contradictory conclusions, one of which is stronger. An ‘undercutter’ NB: We could leave B and E both white! Imagine a judge faced with a body of arguments. P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games. Artificial Intelligence, 77:321–357, 1995. 13

14 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Grounded semantics minimises In labelling Preferred semantics maximises In labelling Stable semantics labels all nodes A B C D E Letters are NOT propositions! Arrows are defeat relations. A typical case of mutual defeat is equally strong arguments with contradictory conclusions. Two typical cases of strict defeat: - arguments with contradictory conclusions, one of which is stronger. An ‘undercutter’ NB: We could leave B and E both white! Imagine a judge faced with a body of arguments.
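The two labelling conditions above can be checked mechanically. Below is a minimal Python sketch (my own illustration, not part of the original slides) that enumerates all labellings satisfying conditions 1 and 2, treating unlabelled arguments as Undecided, and then reads off the grounded labelling (smallest In set), the preferred labellings (maximal In sets) and the stable labellings (no Undecided). The small example graph at the end is mine.

```python
from itertools import product

def legal_labellings(attackers):
    """attackers: dict mapping every argument to the set of its attackers.
    Yields each labelling (dict arg -> 'In'/'Out'/'Undec') satisfying:
    In iff all attackers are Out; Out iff some attacker is In."""
    args = list(attackers)
    for labels in product(("In", "Out", "Undec"), repeat=len(args)):
        lab = dict(zip(args, labels))
        if all((lab[a] == "In") == all(lab[b] == "Out" for b in attackers[a]) and
               (lab[a] == "Out") == any(lab[b] == "In" for b in attackers[a])
               for a in args):
            yield lab

def classify(attackers):
    ins, stable = set(), set()
    for lab in legal_labellings(attackers):
        in_set = frozenset(a for a, l in lab.items() if l == "In")
        ins.add(in_set)
        if "Undec" not in lab.values():
            stable.add(in_set)
    grounded = min(ins, key=len)                              # unique minimal In set
    preferred = {s for s in ins if not any(s < t for t in ins)}
    return grounded, preferred, stable

# Small example (mine): A and B attack each other, B attacks C.
attackers = {"A": {"B"}, "B": {"A"}, "C": {"B"}}
print(classify(attackers))
```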

15 Properties There always exists exactly one grounded labelling
There exists at least one preferred labelling Every stable labelling is preferred (but not v.v.) The grounded labelling is a subset of all preferred and stable labellings Every finite Dung graph without attack cycles has a unique labelling (which is the same in all semantics) ... First bullet: because in grounded semantics you don’t choose between alternatives. 15

16 A B C 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C 16

17 A B C 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C 17

18 A B C 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C 18

19 A B C 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C 19

20 A B C D 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C D Why bad: add a fourth argument! 20

21 A B C D 1. An argument is In iff all arguments that attack it are Out.
2. An argument is Out iff some argument that attacks it is In. Stable semantics labels all nodes Grounded semantics minimises In labelling Preferred semantics maximises In labelling A B C D 21

22 Difference between grounded and preferred labellings
1. An argument is In iff all arguments that attack it are Out. 2. An argument is Out iff some argument that attacks it is In. A B C A = Merkel is German since she has a German name B = Merkel is Belgian since she is often seen in Brussels C = Merkel is a fan of Oranje since she wears an orange shirt (unless she is German or Belgian) D = Merkel is not a fan of Oranje since she looks like someone who does not like football D (Generalisations are left implicit) 22

23 The grounded labelling
1. An argument is In iff all arguments that attack it are Out. 2. An argument is Out iff some argument that attacks it is In. A B C D 23

24 The preferred labellings
1. An argument is In iff all arguments that attack it are Out. 2. An argument is Out iff some argument that attacks it is In. A B A B C C D D 24

25 Justification status of arguments
A is justified if A is In in all labellings A is overruled if A is Out in all labellings A is defensible otherwise 25

26 Argument status in grounded and preferred semantics
Grounded semantics: all arguments defensible Preferred semantics: A and B defensible C overruled D justified A B C D A B C D A B C D 26

27 Labellings and extensions
Given an argumentation framework AF = (Args, attack): S ⊆ Args is a stable/preferred/grounded argument extension iff S = In for some stable/preferred/grounded labelling 27

28 Grounded extension A is acceptable wrt S (or S defends A) if all attackers of A are attacked by S S attacks A if an argument in S attacks A Let AF be an abstract argumentation framework F0AF = ∅ Fi+1AF = {A ∈ Args | A is acceptable wrt FiAF} F∞AF = ∪i≥0 FiAF If no argument has an infinite number of attackers, then F∞AF is the grounded extension of AF (otherwise it is included) 28
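The F-iteration above can be transcribed directly into code. A small Python sketch, assuming a finite framework (function and variable names are mine):

```python
def grounded_extension(attackers):
    """attackers: dict arg -> set of its attackers. Iterate F0 = {} and
    F(i+1) = arguments acceptable wrt Fi until a fixed point is reached."""
    args = set(attackers)
    F = set()                                        # F0 = empty set
    while True:
        attacked_by_F = {b for a in F for b in args if a in attackers[b]}
        nxt = {a for a in args if attackers[a] <= attacked_by_F}
        if nxt == F:
            return F                                 # least fixed point
        F = nxt

# Small example (mine): C attacks B, B attacks A.
attackers = {"A": {"B"}, "B": {"C"}, "C": set()}
print(grounded_extension(attackers))                 # {'A', 'C'}
```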

29 S defends A if all attackers of A are attacked by a member of S

30 S defends A if all attackers of A are attacked by a member of S
F 2 = {A,D}

31 S defends A if all attackers of A are attacked by a member of S
F 2 = {A,D} F 3 = F 2

32 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E Grounded

33 Stable extensions Dung (1995): Recall:
S is conflict-free if no member of S attacks a member of S S is a stable extension if it is conflict-free and attacks all arguments outside it Recall: S is a stable argument extension if S = In for some stable labelling Proposition: S is a stable argument extension iff S is a stable extension Suppose (In,Out) is an ssa (stable status assignment): To be proven: (1) In is conflict-free. Suppose A,B ∈ In and B defeats A. Then A ∈ Out: contradiction. So there are no such A and B. (2) All A ∈ Out are defeated by In. This follows from condition (1) of ssa. Suppose S is a stable extension. To be proven: (S, Args/S) is an ssa. It is immediate that S ∩ Args/S = ∅. Next it is to be proven that: If A ∈ S, then all defeaters of A are in Args/S. This follows from conflict-freeness of S. If all defeaters of A are in Args/S then A ∈ S. If all defeaters of A are in Args/S then A cannot be in Args/S, since no defeater of A is in S. If A ∈ Args/S, then A is defeated by an argument in S. This follows from the definitions of Args/S and stable extensions. If A is defeated by an argument in S, then A ∈ Args/S by conflict-freeness of S. Picture: oval with In and Out, and under that S and Args/S. 33

34 Preferred extensions Dung (1995): Recall:
S is conflict-free if no member of S attacks a member of S S is admissible if it is conflict-free and all its members are acceptable wrt S S is a preferred extension if it is ⊆-maximally admissible Recall: S is a preferred argument extension if S = In for some preferred labelling Proposition: S is a preferred argument extension iff S is a preferred extension A <- B <- C: Is ∅ admissible, and {A}? And ...? Same with A <-> B. 34
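A brute-force Python sketch of these extension-based definitions (helper names are mine; it enumerates all subsets, so it is only meant for small examples):

```python
from itertools import combinations

def attacks(attackers, S, a):
    """True iff some member of S attacks argument a."""
    return bool(S & attackers[a])

def conflict_free(attackers, S):
    return not any(attacks(attackers, S, a) for a in S)

def admissible(attackers, S):
    # conflict-free and every attacker of a member is attacked by S
    return conflict_free(attackers, S) and all(
        all(attacks(attackers, S, b) for b in attackers[a]) for a in S)

def extensions(attackers):
    args = set(attackers)
    subsets = [frozenset(c) for n in range(len(args) + 1)
               for c in combinations(args, n)]
    adm = [S for S in subsets if admissible(attackers, S)]
    preferred = [S for S in adm if not any(S < T for T in adm)]
    stable = [S for S in subsets if conflict_free(attackers, S)
              and all(attacks(attackers, S, a) for a in args - S)]
    return preferred, stable

# Nixon-style mutual attack: A and B attack each other.
print(extensions({"A": {"B"}, "B": {"A"}}))
```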

35 S defends A if all attackers of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E Admissible?

36 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E Admissible?

37 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E Admissible?

38 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E Admissible?

39 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E S is preferred if it is maximally admissible Preferred?

40 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E S is preferred if it is maximally admissible Preferred?

41 S defends A if all defeaters of A are attacked by a member of S
S is admissible if it is conflict-free and defends all its members A B C D E S is preferred if it is maximally admissible Preferred?

42 Proof theory for abstract argumentation
Argument games between proponent P and opponent O: Proponent starts with an argument Then each party replies with a suitable attacker A winning criterion E.g. the other player cannot move Acceptability status corresponds to existence of a winning strategy. 42

43 Strategies A strategy for player p is a partial game tree:
Every branch is a game (sequence of allowable moves) The tree only branches after moves by p The children of p’s moves are all the legal moves by the other player P: A O: B O: C We can capture this idea with the notion of a strategy. ….. So this could be a strategy for P, assuming that these branches do indeed respect the rules that tell you what moves are allowed and assuming that we have represented all of O’s legal moves at each choice point. Can everyone see why this is a strategy? So it tells P what to do at every possible point as it considers all of O’s possible responses. P: D P: E O: F O: G P: H 43

44 Strategies A strategy for player p is winning iff p wins all games in the strategy Let S be an argument game: A is S-provable iff P has a winning strategy in an S-game that begins with A Winning – so there’s a way P can play s.t. it doesn’t matter which way O plays. 44

45 The G-game for grounded semantics:
A sound and complete game: Each move must reply to the previous move Proponent cannot repeat his moves Proponent moves strict attackers, opponent moves attackers A player wins iff the other player cannot move Proposition: A is in the grounded extension iff A is G-provable 45
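A Python sketch of checking G-provability by search. It is a simplification of the game just described (my own code): it plays on a plain attack graph and ignores the strict/non-strict distinction, so "strict attacker" is read simply as "attacker", and the no-repetition rule for the proponent is enforced per line of the game.

```python
def proponent_wins(attackers, arg, p_used=frozenset()):
    """Proponent has just moved `arg`. He wins iff, for every attacker the
    opponent can move, he has a reply (an attacker of that move he has not
    used yet in this line) from which he again wins."""
    p_used = p_used | {arg}
    return all(
        any(proponent_wins(attackers, c, p_used)
            for c in attackers[b] if c not in p_used)
        for b in attackers[arg])          # every possible opponent reply b

def g_provable(attackers, arg):
    return proponent_wins(attackers, arg)

# Example (mine): C attacks B, B attacks A. A is in the grounded extension,
# and indeed P wins the game for A (O: B is answered by P: C).
attackers = {"A": {"B"}, "B": {"C"}, "C": set()}
print(g_provable(attackers, "A"), g_provable(attackers, "B"))   # True False
```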

46 An attack graph A F B C E D 46 46

47 A game tree move A F P: A B C E D 47 47

48 A game tree move A F P: A O: F B C E D 48 48

49 A game tree A F P: A O: F B P: E C move E D 49 49

50 A game tree A F P: A move O: F O: B B P: E C E D 50 50

51 A game tree A F P: A O: F O: B B P: E P: C C E move D 51 51

52 A game tree A F P: A O: F O: B B P: E P: C C E O: D move D 52 52

53 A game tree A F P: A O: F O: B B P: E P: C P: E C move E O: D D 53 53

54 Proponent’s winning strategy
F P: A O: F O: B B P: E P: E C move E D 54 54

55 Exercise F A B E C D P: D O: C O: B P: A P: E P: A O: F
Exercise: A defeats B and C, both B and C defeat D, E defeats C, E and F defeat each other. Draw the game tree for A. How many strategies does it contain for P? And how many for A? And who has a winning strategy? O: F Slide made by Liz Black 55

56 Research on abstract argumentation
New semantics Algorithms Finding labellings (extensions) Games Complexity Dynamics (adding or deleting arguments or attacks) Addition of new elements to AFs: abstract support relations preferences Reasons to be sceptical: S. Modgil & H. Prakken, Resolutions in structured Argumentation. In Proceedings of COMMA 2012. H. Prakken, Some reflections on two current trends in formal argumentation. In Festschrift for Marek Sergot, Springer 2012. H. Prakken, On support relations in abstract argumentation as abstractions of inferential relations. In Proceedings ECAI 2014 I am sceptical about much of this research. AA is indispensable as a component in models of argumentation, but real applications require an account of the structure of arguments and the nature of attack. Now: Research on algorithms can exploit the structure of arguments, and this may reduce complexity. Dynamic accounts of abstract argumentation make a lot of implicit assumptions that are often violated, such as that all arguments are independent of each other and/or that all attacks are independent of each other.

57 Arguing about attack relations
Standards for determining defeat relations are often: Domain-specific Defeasible and conflicting So determining these standards is argumentation! Recently Modgil (AIJ 2009) has extended Dung’s abstract approach Arguments can also attack attack relations Modgil’s EAFs are one of the few properly motivated extensions of Dung AFs.

58 Will it rain in Calcutta?
Modgil 2009 BBC says rain CNN says sun B C

59 Will it rain in Calcutta?
Modgil 2009 Trust BBC more than CNN T BBC says rain CNN says sun B C

60 Will it rain in Calcutta?
Modgil 2009 Trust BBC more than CNN T BBC says rain CNN says sun B C S Stats say CNN better than BBC

61 Will it rain in Calcutta?
Modgil 2009 Trust BBC more than CNN T BBC says rain CNN says sun B C R Stats more rational than trust S Stats say CNN better than BBC

62 A B E C D 62

63 We should not lower taxes
B We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity Increased inequality is good E We should never forget that Dung is an abstraction, and we should always ask ourselves what it is that we are abstracting from. Much current work on AA does not do this. As a consequence, it often makes implicit assumptions that are not satisfied by many models of structured argumentation (e.g. that all attack relations are independent, or that all arguments are attackable). Thus these models are detached from the reality of argumentation. Dung did not make this mistake: … My advice: either stick to Dung (or Modgil’s EAFs, which in my opinion are the only well-defined extension of DAFs), or immediately investigate how your extensions of it can be instantiated (as done by Modgil). People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased Increased inequality stimulates competition Competition is good C D 63

64 The ultimate status of conclusions of arguments
A is justified if A is In in all labellings A is overruled if A is Out in all labellings A is defensible otherwise Conclusions: φ is justified if φ is the conclusion of some justified argument φ is defensible if φ is not justified and φ is the conclusion of some defensible argument φ is overruled if φ is not justified or defensible and there exists an overruled argument for φ Justification is nonmonotonic! Cn over L is monotonic iff for all p ∈ L, S, S' ⊆ L: if p ∈ Cn(S) and S ⊆ S' then p ∈ Cn(S') Some alternatives and refinements are possible.
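A small Python sketch of reading these statuses off a set of labellings (representing a labelling as an (In, Out) pair of sets and the function names are my own choices):

```python
def argument_status(arg, labellings):
    """labellings: list of (In, Out) pairs of sets of arguments."""
    if all(arg in In for In, Out in labellings):
        return "justified"
    if all(arg in Out for In, Out in labellings):
        return "overruled"
    return "defensible"

def conclusion_status(phi, arguments, conclusion, labellings):
    """arguments: iterable of argument ids; conclusion: dict arg -> formula."""
    statuses = {argument_status(a, labellings)
                for a in arguments if conclusion[a] == phi}
    if "justified" in statuses:
        return "justified"
    if "defensible" in statuses:
        return "defensible"
    if "overruled" in statuses:
        return "overruled"
    return "no argument for this conclusion"
```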

65 Two accounts of the fallibility of arguments
Plausible Reasoning: all fallibility located in the premises Assumption-based argumentation (Kowalski, Dung, Toni, …) Classical argumentation (Cayrol, Besnard, Hunter, …) Defeasible reasoning: all fallibility located in the inferences Pollock, Loui, Vreeswijk, Prakken & Sartor, DeLP, … ASPIC+ combines these accounts Intuitively, if a rule is strict, then if you accept its premises, you must accept its conclusion no matter what. If a rule is defeasible, and you accept its premises, then you must accept its conclusion if you cannot find good reasons not to accept it.

66 “Nonmonotonic” v. “Defeasible”
Nonmonotonicity is a property of consequence notions Defeasibility is a property of inference rules An inference rule is defeasible if there are situations in which its conclusion does not have to be accepted even though all its premises must be accepted. One can have nonmonotonicity without defeasibility (e.g. in classical or assumption-based argumentation, see further on).

67 Rationality postulates for structured argumentation
Extensions should be closed under subarguments Their conclusion sets should be: Consistent Closed under deductive inference Once you give structure to arguments, then you can consider additional rationality constraints on sets of arguments. M. Caminada & L. Amgoud, On the evaluation of argumentation formalisms. Artificial Intelligence 171 (2007): 67

68 The ‘base logic’ approach (Hunter, COMMA 2010)
Adopt a single base logic Define arguments as consequence in the adopted base logic Then the structure of arguments is given by the base logic

69 Classical argumentation (Besnard, Hunter, …)
Assume a possibly inconsistent KB in the language of classical logic Arguments are classical proofs from consistent (and subset-minimal) subsets of the KB Various notions of attack Possibly add preferences to determine which attacks result in defeat E.g. Modgil & Prakken, AIJ-2013. Approach recently abstracted to Tarskian abstract logics Amgoud & Besnard ( ) Modgil & Prakken show how to add preferences in a way that preserves the results on the rationality postulates (earlier, Amgoud & Vesic suggested that this would not be possible).

70 Classical argumentation formalised
Given L a propositional logical language and |- standard-logical consequence over L: An argument is a pair (S,p) such that S ⊆ L and p ∈ L, S |- p, S is consistent, and no S’ ⊂ S is such that S’ |- p Various notions of attack, e.g.: “Direct defeat”: argument (S,p) attacks argument (S’,p’) iff p |- ¬q for some q ∈ S’ “Direct undercut”: argument (S,p) attacks argument (S’,p’) iff p = ¬q for some q ∈ S’ Only these two attacks satisfy consistency, so classical argumentation is only optimal for plausible reasoning The last bullet is my philosophical interpretation of their formal result. If you want to allow plausible reasoning, then all you need is premise attack, so if you also allow conclusion attack, you want to model defeasible reasoning: you leave open that the premises of an inference can be accepted even though its conclusion has to be rejected. Actually, forms of rebuttal partially satisfy consistency, namely, for grounded (and ideal) semantics. But the general picture is that for classical argumentation forms of premise attack are better behaved than forms of conclusion attack.
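A brute-force Python sketch of this definition of classical arguments, using sympy for the entailment and consistency checks (the function names and the Nixon-style knowledge base are my own illustration; subset enumeration only scales to toy examples):

```python
from itertools import combinations
from sympy import symbols, Not, And, Implies
from sympy.logic.inference import satisfiable

def entails(S, p):
    """Classical consequence: S |- p iff S together with not-p is unsatisfiable."""
    return not satisfiable(And(*S, Not(p)))

def is_argument(S, p):
    """S consistent, S |- p, and no proper subset of S entails p."""
    return (satisfiable(And(*S)) and entails(S, p) and
            not any(entails(T, p)
                    for m in range(len(S)) for T in combinations(S, m)))

def classical_arguments(kb, claims):
    return [(S, p) for p in claims
            for n in range(1, len(kb) + 1)
            for S in combinations(kb, n) if is_argument(S, p)]

Q, R, P = symbols("Quaker Republican Pacifist")
kb = (Q, R, Implies(Q, P), Implies(R, Not(P)))
for S, p in classical_arguments(kb, [P, Not(P)]):
    print(S, "|-", p)
```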

71 Modelling default reasoning in classical argumentation
Quakers are usually pacifist Republicans are usually not pacifist Nixon was a quaker and a republican 71

72 A modelling in classical logic
Quaker ⊃ Pacifist Republican ⊃ ¬Pacifist Facts: Quaker, Republican This is not an optimal modelling, since in most of Dung’s semantics conclusion attack leads to violation of the consistency postulate. Pacifist ¬Pacifist Quaker Quaker ⊃ Pacifist Republican Republican ⊃ ¬Pacifist 72

73 A modelling in classical logic
Quaker ⊃ Pacifist Republican ⊃ ¬Pacifist Facts: Quaker, Republican ¬(Quaker ⊃ Pacifist) This shows how classical logic with premise attack can model the example. A problem with this: we really do not want to attack that Nixon is a Quaker or that Quakers are usually pacifist. We only want to argue that Nixon is not a normal Quaker as regards pacifism. When saying "quakers are usually pacifists, republicans are usually not pacifists, Nixon was a quaker and a republican", we have not said anything inconsistent. The conflict originates not from an inconsistent knowledge base but from the risky (defeasible) inferences we are making from these statements. One way to model these risky inferences is to say that they in fact make an implicit assumption that Nixon was a normal quaker, respectively, that Nixon was a normal republican. In a system with only premise attack these must be made explicit as additional premises. Pacifist ¬Pacifist Quaker Quaker ⊃ Pacifist Republican Republican ⊃ ¬Pacifist 73

74 A modelling in classical logic
Quaker & ¬Ab1 ⊃ Pacifist Republican & ¬Ab2 ⊃ ¬Pacifist Facts: Quaker, Republican Assumptions: ¬Ab1, ¬Ab2 (attackable) Ab1 Assume that arguments can only be attacked on assumptions. Pacifist ¬Pacifist Quaker & ¬Ab1 ⊃ Pacifist Republican & ¬Ab2 ⊃ ¬Pacifist Quaker ¬Ab1 ¬Ab2 Republican 74

75 A modelling in classical logic
Quaker & ¬Ab1 ⊃ Pacifist Republican & ¬Ab2 ⊃ ¬Pacifist Facts: Quaker, Republican Assumptions: ¬Ab1, ¬Ab2 (attackable) Ab2 Ab1 If all premises are attackable, then this has six mcs so six naïve / stable / preferred extensions. And the grounded extension is empty. Pacifist ¬Pacifist Quaker & ¬Ab1 ⊃ Pacifist Republican & ¬Ab2 ⊃ ¬Pacifist Quaker ¬Ab1 ¬Ab2 Republican 75

76 Extensions v. maximal consistent subsets
With classical (and Tarskian) argumentation preferred and stable extensions and maximal conflict-free sets coincide with maximal consistent subsets of the knowledge base Cayrol (1995) Amgoud & Besnard (2013) If ‘real’ argumentation is more than identifying mcs, then deductive argumentation when combined with Dung misses something. Modgil (& Prakken) 2013: with preferences they coincide with Brewka’s preferred subtheories But is real argumentation identifying preferred subtheories? ‘deductive’ = classical or Tarskian. Distinguishing facts (not attackable) from assumptions (attackable) is a special case of preferred subtheories.

77 The ASPIC+ framework Arguments: Trees where
Nodes are statements in some logical language L Links are applications of inference rules Strict rules (→) Defeasible rules (⇒) Constructed from a knowledge base K ⊆ L Axiom (necessary) premises + ordinary (contingent) premises Attack: On ordinary premises On defeasible inferences (undercutting) On conclusions of defeasible inferences (rebutting) Defeat: attack + argument ordering Argument evaluation with Dung (1995) 77

78 Rs: r,s → t Rd: p,q ⇒ r Kn = {p,q} Kp = {s} (Diagram: argument tree with nodes t, r, s, p, q; the step to t uses r,s → t ∈ Rs, the step to r uses p,q ⇒ r ∈ Rd)

79 Undermining: on ordinary premises Undercutting: on defeasible inferences Rebutting: on conclusions of defeasible inferences
Rs: r,s → t Rd: p,q ⇒ r Kn = {p,q} Kp = {s} Attack: Undermining: on ordinary premises Undercutting: on defeasible inferences Rebutting: on conclusions of defeasible inferences n(φ1, ..., φn ⇒ φ) ∈ L (Diagram: the same argument tree t, r, s, p, q with the three attack points marked) Attack + preferences = defeat
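A simplified Python sketch of ASPIC+-style argument construction and the three attack forms for the toy theory on this slide (the encoding and names are mine; it keeps one argument per conclusion and checks rebuttal only against an argument's own top rule, whereas full ASPIC+ rebuttal may target any defeasible subconclusion; `negates` and `undercuts` are assumed to be supplied by the user):

```python
from collections import namedtuple

Rule = namedtuple("Rule", "antecedents conclusion strict name")
Arg = namedtuple("Arg", "conclusion ordinary_premises defeasible_rules top_defeasible")

def build_arguments(Kn, Kp, rules):
    """Forward-chain until no new conclusions appear; record, per argument,
    its ordinary premises and the defeasible rules it uses."""
    args = {c: Arg(c, frozenset({c} & Kp), frozenset(), False) for c in Kn | Kp}
    changed = True
    while changed:
        changed = False
        for r in rules:
            if all(a in args for a in r.antecedents) and r.conclusion not in args:
                subs = [args[a] for a in r.antecedents]
                drules = frozenset().union(*[s.defeasible_rules for s in subs])
                if not r.strict:
                    drules = drules | {r.name}
                prems = frozenset().union(*[s.ordinary_premises for s in subs])
                args[r.conclusion] = Arg(r.conclusion, prems, drules, not r.strict)
                changed = True
    return args

def attacks(A, B, negates, undercuts):
    """A undermines B (negates an ordinary premise), rebuts B (negates the
    conclusion of B's defeasible top rule), or undercuts B (concludes the
    undercutter of a defeasible rule used in B)."""
    return (any(negates(A.conclusion, p) for p in B.ordinary_premises) or
            (B.top_defeasible and negates(A.conclusion, B.conclusion)) or
            A.conclusion in {undercuts.get(d) for d in B.defeasible_rules})

rules = [Rule(("r", "s"), "t", True, "s1"), Rule(("p", "q"), "r", False, "d1")]
args = build_arguments({"p", "q"}, {"s"}, rules)
print(args["t"])   # argument for t: ordinary premise s, defeasible rule d1
```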

80 Consistency in ASPIC+ (with symmetric negation)
For any S ⊆ L: S is (directly) consistent iff S does not contain two formulas φ and –φ. 80

81 Rationality postulates for ASPIC+
Subargument closure always satisfied Consistency and strict closure: without preferences satisfied if Rs closed under transposition or closed under contraposition; and Kn is indirectly consistent with preferences satisfied if in addition the argument ordering is ‘reasonable’ Versions of the weakest- and last link ordering are reasonable So ASPIC+ is good for both plausible and defeasible reasoning 81

82 Two uses of defeasible rules
For domain-specific information Defeasible generalisations, norms, … For general patterns of presumptive reasoning Pollock’s defeasible reasons: perception, memory, induction, statistical syllogism, temporal persistence Argument schemes So frameworks with defeasible rules like ASPIC+ are not less but more expressive than ‘logic-based’ argumentation.

83 Domain-specific vs. general inference rules
Domain-specific rules: d1: Bird ⇒ Flies s1: Penguin → Bird Penguin ∈ K. General rules: Rd = {φ, φ ⊃ ψ ⇒ ψ} Rs = all valid inference rules of prop. logic Bird ⊃ Flies ∈ K Penguin ⊃ Bird ∈ K (Diagram: two argument trees for Flies, one built with d1 and s1 from Penguin, one built with defeasible modus ponens from Penguin, Penguin ⊃ Bird and Bird ⊃ Flies) 83

84 Preferred extensions do not always coincide with mcs
r1: Quaker ⇒ Pacifist r2: Republican ⇒ ¬Pacifist S → p ∈ Rs iff S |- p in Prop. L and S is finite K: Quaker, Republican Pacifist ¬Pacifist r1 r2 Quaker Republican 84

85 Preferred/stable extensions do not always coincide with mcs
Pacifist Pacifist A1 B1 Quaker Republican E1 = {A1,A2,B1,…} E2 = {A1,B1,B2,…} 85

86 Preferred/stable extensions do not always coincide with mcs
Pacifist Pacifist A1 B1 Quaker Republican As I said above, the conflict is not caused by inconsistency of the KB but by risky inferences from the KB. Conc(E1) = Th({Quaker, Republican, Pacifist}) Conc(E2) = Th({Quaker, Republican, ¬Pacifist}) mcs(K) = {{K}} = {{Quaker, Republican}} 86

87 Preferred extensions do not always coincide with mcs
Rd = {φ, φ ⊃ ψ ⇒ ψ} S → p ∈ Rs iff S |- p in Prop. L and S is finite K: Quaker, Republican, Quaker ⊃ Pacifist, Republican ⊃ ¬Pacifist Pacifist ¬Pacifist The KB is still consistent. Quaker Quaker ⊃ Pacifist Republican Republican ⊃ ¬Pacifist 87

88 Can defeasible reasoning be reduced to plausible reasoning?
To classical argumentation? Problems with contrapositive inferences To assumption-based argumentation? Problems with preferences In both cases: less complex metatheory but more complex representations

89 Default contraposition in classical argumentation
Men are usually not rapists . John is a rapist Assume when possible that things are normal What can we conclude about John’s sex?

90 Default contraposition in classical argumentation
Men are usually not rapists M & ¬Ab ⊃ ¬R John is a rapist (R) Assume when possible that things are normal ¬Ab

91 Default contraposition in classical argumentation
Men are usually not rapists M & ¬Ab ⊃ ¬R John is a rapist (R) Assume when possible that things are normal ¬Ab The first default implies that rapists are usually not men R & ¬Ab ⊃ ¬M So John is not a man

92 Default contraposition in classical argumentation
Heterosexual adults are usually not married => Non-married adults are usually not heterosexual This type of sensor usually does not give false alarms => False alarms are usually not given by this type of sensor The question is whether defeasible contraposition and modus tollens are based on assumptions that people tend to make in commonsense reasoning. I don’t think so: base rates can vary wildly. Statisticians call these inferences “base rate fallacies”

93 Assumption-based argumentation (Dung, Mancarella & Toni 2007)
A deductive system is a pair (L, R) where L is a logical language R is a set of rules (φ1, ..., φn → φ) over L An assumption-based argumentation framework is a tuple (L, R, A, ~) where (L, R) is a deductive system A ⊆ L, A ≠ ∅, a set of assumptions No rule has an assumption as conclusion ~ is a total mapping from A into L. ~a is the contrary of a. An argument S |- p is a deduction of p from a set S ⊆ A. Argument S |- p attacks argument S’ |- p’ iff p = ~q for some q ∈ S’ This is the DMT 2007 version. Many of the previously mentioned results on consistency and mcs do not apply, since ABA makes hardly any assumptions about the deductive system (except for special cases of flatness). This is a strength (they can model many approaches) but also a weakness (in general few properties can be proven). Moreover, they have no consistency and minimality conditions.

94 Reduction of ASPIC+ defeasible rules to ABA rules (Dung & Thang, JAIR 2014)
1-1 correspondence between complete extensions of ASPIC+ and ABA Assumptions: L consists of literals No preferences No rebuttals of undercutters p1, …, pn ⇒ q becomes di, p1, …, pn, not¬q → q where: di = n(p1, …, pn ⇒ q) di, not¬q are assumptions φ = ~notφ, φ = ~¬φ, ¬φ = ~φ “not” is negation as failure So undercutting attack is simulated by attack on the first premise and rebutting attack is simulated by attack on the last premise.
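A small Python sketch of this translation of a defeasible rule into an ABA rule (my own encoding; "not" and "¬" are simply string prefixes, and the applicability assumption is written appl(...) as on slide 96):

```python
def aba_rule(antecedents, consequent, rule_id):
    """Translate an ASPIC+ defeasible rule  p1,...,pn => q  into the ABA rule
    di, p1, ..., pn, not-¬q -> q  of the slide, where di names the rule and
    di and not-¬q are assumptions."""
    di = f"appl({rule_id})"                 # assumption: the rule applies
    guard = "not ¬" + consequent            # assumption: the contrary of q is not derivable
    body = [di] + list(antecedents) + [guard]
    assumptions = {di, guard}
    return body, consequent, assumptions

print(aba_rule(["Quaker"], "Pacifist", "r1"))
# -> (['appl(r1)', 'Quaker', 'not ¬Pacifist'], 'Pacifist', {...})
```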

95 From defeasible to strict rules: example
r1: Quaker ⇒ Pacifist r2: Republican ⇒ ¬Pacifist Pacifist ¬Pacifist r1 r2 Quaker Republican 95

96 From defeasible to strict rules: example
s1: Appl(s1), Quaker, not¬Pacifist → Pacifist s2: Appl(s2), Republican, notPacifist → ¬Pacifist Pacifist ¬Pacifist Appl(s1) Quaker not¬Pacifist notPacifist Republican Appl(s2) 96

97 Can ASPIC+ preferences be reduced to ABA assumptions?
d1: Bird ⇒ Flies d2: Penguin ⇒ ¬Flies d1 < d2 Becomes d1: Bird, notPenguin ⇒ Flies Only works in special cases, e.g. not with weakest-link ordering

98 We should not lower taxes
B We should lower taxes We should not lower taxes Lower taxes increase productivity Increased productivity is good Lower taxes increase inequality Increased inequality is bad Prof. P says that … Prof. P is not objective Lower taxes do not increase productivity Increased inequality is good E I lied when I said that this graph induces the following Dung graph … People with political ambitions are not objective Prof. P has political ambitions USA lowered taxes but productivity decreased Increased inequality stimulates competition Competition is good C D 98

99 A B E C D

100 A B A’ E C D

101 P1 P2 P3 P4 A B A’ E C D P5 P6 P7 P8 P9

102 Preferences in abstract argumentation
PAFs: extend (args, attack) to (args, attack, ≤a) where ≤a is an ordering on args A defeats B iff A attacks B and not A <a B Apply Dung’s theory to (args, defeat) Implicitly assumes that: All attacks are preference-dependent All attacks are independent from each other Assumptions not satisfied in general => Properties not inherited by all instantiations possibly violation of rationality postulates An example of what can go wrong if you extend Dung AFs without asking yourself what are sensible instantiations of your abstract framework. 102
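A Python sketch of the PAF recipe (function names mine): drop every attack whose source is strictly less preferred than its target, then evaluate the remaining defeat graph with any Dung semantics, for instance the grounded_extension sketch given earlier.

```python
def paf_defeats(attacks, strictly_less):
    """attacks: set of (attacker, target) pairs; strictly_less(a, b) is True
    iff a < b in the argument ordering. A defeats B iff A attacks B and not A < B."""
    return {(a, b) for (a, b) in attacks if not strictly_less(a, b)}

def as_attackers(args, defeats):
    """Convert a set of defeat pairs into the attackers-dict used earlier."""
    return {x: {a for (a, b) in defeats if b == x} for x in args}

# Example (mine): A and B attack each other, B is strictly preferred to A,
# so only B's attack survives as a defeat.
args = {"A", "B"}
attacks = {("A", "B"), ("B", "A")}
defeats = paf_defeats(attacks, lambda x, y: (x, y) == ("A", "B"))
print(as_attackers(args, defeats))   # {'A': {'B'}, 'B': set()}
```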

103 John snores in the library
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 John may be removed R3 John misbehaves in the library John does not misbehave in the library R1 R2 John snores in the library John snores when nobody else is in the library 103

104 John snores in the library
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 John may be removed R3 John misbehaves in the library John does not misbehave in the library R1 < R2 R1 R2 John snores in the library John snores when nobody else is in the library 104

105 John snores in the library
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 A3 John may be removed R3 A2 John misbehaves in the library B2 John does not misbehave in the library R1 R2 A1 John snores in the library B1 John snores when nobody else is in the library 105

106 A3 The defeat graph in ASPIC+ A2 B2 A1 B1
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 so A2 < B2 < A3 (with last link) A3 The defeat graph in ASPIC+ A2 B2 A1 B1 106

107 A3 The attack graph in PAFs A2 B2 A1 B1
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 so A2 < B2 < A3 (with last link) A3 The attack graph in PAFs A2 B2 A1 B1 107

108 The defeat graph in PAFs
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 so A2 < B2 < A3 (with last link) A3 The defeat graph in PAFs A2 B2 A1 B1 108

109 John Snores in the library
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 so A2 < B2 < A3 (with last link) John may be removed R3 John misbehaves in the library John does not misbehave in the library R1 R2 John Snores in the library John snores when nobody else is in the library 109

110 PAFs don’t recognize that B2’s attacks on
R1: If you snore, you misbehave R2: If you snore when nobody else is around, you don’t misbehave R3: If you misbehave in the library, the librarian may remove you R1 < R2 < R3 so A2 < B2 < A3 (with last link) A3 PAFs don’t recognize that B2’s attacks on A2 and A3 are the same A2 B2 A1 B1 110

111 Work outside the Dung paradigm
Defeasible Logic Programming (Simari et al.) Arguments roughly as in ASPIC+ but no Dung semantics Carneades (Gordon et al.) Arguments pro and con a claim Abstract Dialectical Frameworks (Brewka & Woltran)

112 Argument(ation) schemes: general form
But also critical questions Premise 1, … , Premise n Therefore (presumably), conclusion 112

113 Argument schemes in ASPIC
Argument schemes are defeasible inference rules Critical questions are pointers to counterarguments Some point to undermining attacks Some point to rebutting attacks Some point to undercutting attacks

114 Reasoning with default generalisations
But defaults can have exceptions And there can be conflicting defaults P If P then normally/usually/typically Q So (presumably), Q - What experts say is usually true - People with political ambitions are usually not objective about security - People with names typical from country C usually have nationality C - People who flee from a crime scene when the police arrive are normally involved in the crime - Chinese people usually don’t like coffee 114

115 Perception Critical questions: P is observed Therefore (presumably), P
Are the observer’s senses OK? Are the circumstances such that reliable observation of P is impossible? P is observed Therefore (presumably), P

116 Inducing generalisations
Almost all observed P’s were Q’s Therefore (presumably), If P then usually Q Critical questions: Is the size of the sample large enough? Was the sample selection biased? A ballpoint shot with this type of bow will usually cause this type of eye injury In 16 of 17 tests the ballpoint shot with this bow caused this type of eye injury 116

117 Expert testimony (Walton 1996)
E is expert on D E says that P P is within D Therefore (presumably), P is the case Critical questions: Is E biased? Is P consistent with what other experts say? Is P consistent with known evidence?

118 Supporting and using generalisations
Defeasible modus ponens V’s injury was caused by a fall This type of eye injury is usually caused by a fall V has this type of injury E says that this type of injury is usually caused by a fall E is an expert on this type of injury Expert testimony scheme Suppose there is no expert evidence: then the generalisation is unsupported. An analysis like this can expose this. This example shows why generalisations must be premises. Are argument schemes like the expert testimony scheme typical implicit premises? They can be regarded as such, but contrary to ‘domain-specific’ implicit premises (e.g. “people who flee from a crime scene are involved in the crime” or “police radars of type X are usually a reliable source of information”), argument schemes are never challenged as such. They are generally accepted patterns of presumptive reasoning, which can only be challenged in exceptional circumstances. Therefore I regard them as schemes for arguments. A presumptive argument is “good” if it instantiates an argument scheme (but it may still be defeated by counterarguments).

119 Arguments from consequences
Critical questions: Does A also have bad (good) consequences? Are there other ways to bring about G? ... Action A causes G, G is good (bad) Therefore (presumably), A should (not) be done 119

120 Combining multiple good/bad consequences
Action A results in C1 … Action A results in Cn C1 is good … Cn is good Therefore, Action A is good Action A results in C1 … Action A results in Cn C1 is bad … Cn is bad Therefore, Action A is bad

121 This is a visualisation of Nico Kwakman’s legal opinion on the legislative proposal to have mandatory minimum sentences for recidivism in case of serious crimes. This figure does not explicitly show multiple-consequences versions of the schemes from good and bad consequences. Instead, it only shows three applications of the single-good-consequence version of the scheme and two applications of the single-bad-consequence version of the scheme. The next slide, which contains the Dung graph of this example (again with various subarguments and all premise arguments left out), displays all combinations. H. Prakken, Formalising a legal opinion on a legislative proposal in the ASPIC+ framework. Proc. JURIX 2012

122 GC3 GC 123 BC 12 GC2 GC 23 GC1 GC 13 C1 P1 GC 12 DMP P2

123 GC3 GC2 GC1 C1 P1 P2 Preferred labelling 1
1. An argument is In iff all arguments that defeat it are Out. 2. An argument is Out iff some argument that defeats it is In. GC 123 BC 12 GC2 GC 23 GC1 GC 13 C1 P1 GC 12 DMP P2

124 GC3 GC2 GC1 C1 P1 P2 Preferred labelling 2
1. An argument is In iff all arguments that defeat it are Out. 2. An argument is Out iff some argument that defeats it is In. GC 123 BC 12 GC2 GC 23 GC1 GC 13 C1 P1 GC 12 DMP P2

125 GC3 GC2 GC1 C1 P1 P2 Grounded labelling
1. An argument is In iff all arguments that defeat it are Out. 2. An argument is Out iff some argument that defeats it is In. GC 123 BC 12 GC2 GC 23 GC1 GC 13 C1 P1 GC 12 DMP P2

126 But remember that the Dung graph abstracts from all this.

127 Summary A formal metatheory of structured argumentation is emerging
Better understanding needed of philosophical underpinnings and practical applicability Not all argumentation can be naturally reduced to plausible reasoning The ‘one base logic’ approach is only suitable for plausible reasoning Important research issues: Aggregation of arguments Relation with probability theory

128 Interaction Argument games verify status of argument (or statement) given a single theory (knowledge base) But real argumentation dialogues have Distributed information Dynamics Real players! Richer communication languages Now the dynamics. So far I have only spoken about argumentation as a kind of logic, but …

129 Example P: Tell me all you know about recent trading in explosive materials (request) P: why don’t you want to tell me? P: why aren’t you allowed to tell me? P: You may be right in general (concede) but in this case there is an exception since this is a matter of national importance P: since we have heard about a possible terrorist attack P: OK, I agree (offer accepted). O: No I won’t (reject) O: since I am not allowed to tell you O: since sharing such information could endanger an investigation O: Why is this a matter of national importance? O: I concede that there is an exception, so I retract that I am not allowed to tell you. I will tell you on the condition that you don’t exchange the information with other police officers (offer)

132 Types of dialogues (Walton & Krabbe)
Dialogue Type | Dialogue Goal | Initial situation
Persuasion | resolution of conflict | conflict of opinion
Negotiation | making a deal | conflict of interest
Deliberation | reaching a decision | need for action
Information seeking | exchange of information | personal ignorance
Inquiry | growth of knowledge | general ignorance

133 Dialogue systems (according to Carlson 1983)
Dialogue systems define the conditions under which an utterance is appropriate An utterance is appropriate if it promotes the goal of the dialogue in which it is made Appropriateness defined not at speech act level but at dialogue level Dialogue game approach Protocol should promote the goal of the dialogue

134 Dialogue game systems A communication language
Well-formed utterances Protocol Rules for when an utterance is allowed Effect rules Turntaking rules Termination + outcome rules Agent design: strategies for selecting from the allowed utterances

135 Effect rules Specify commitments Commitments used for:
Effect rules Specify commitments “Claim p” and “Concede p” commits to p “p since Q” commits to p and Q “Retract p” ends commitment to p ... Commitments used for: Determining outcome Enforcing ‘dialogical consistency’
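A minimal Python sketch of these effect rules as updates to a commitment store (the speech-act names follow the slide; the dict-based store is my own representation):

```python
def update_commitments(store, speaker, act, content, grounds=None):
    """store: dict mapping each participant to a set of committed propositions."""
    cs = store.setdefault(speaker, set())
    if act in ("claim", "concede"):
        cs.add(content)                        # "Claim p" / "Concede p" commit to p
    elif act == "since":
        cs.add(content)                        # "p since Q" commits to p ...
        cs.update(grounds or ())               # ... and to every member of Q
    elif act == "retract":
        cs.discard(content)                    # "Retract p" ends the commitment to p
    return store

store = {}
update_commitments(store, "O", "claim", "Not allowed to tell")
update_commitments(store, "O", "retract", "Not allowed to tell")
print(store)   # {'O': set()}
```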

136 Public semantics for dialogue protocols
Public semantics for dialogue protocols Public semantics: can protocol compliance be externally observed? Commitments are a participant’s publicly declared standpoints, so not the same as beliefs! Only commitments and dialogical behaviour should count for move legality: “Claim p is allowed only if you believe p” vs. “Claim p is allowed only if you are not committed to p and have not challenged p”

137 More and less strict protocols
Single-multi move: one or more moves per turn allowed Single-multi-reply: one or more replies to the same move allowed Deterministic: no choice from legal moves Deterministic in communication language: no choice from speech act types Only reply to moves from previous turn?

138 Some properties that can be studied
Correspondence with players’ beliefs If union of beliefs implies p, can/will agreement on p result? If players agree on p, does union of beliefs imply p? Disregarding vs. assuming player strategies

139 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p Olga is credulous: she concedes everything for which she cannot construct a (defensible or justified) counterargument Paul ∪ Olga does not justify q but they could agree on q

140 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p O1: concede p, q Paul ∪ Olga does not justify q but they could agree on q

141 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p Paul ∪ Olga does not justify q but they could agree on q Olga is sceptical: she challenges everything for which she cannot construct a (defensible or justified) argument

142 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p O1: why p? Paul ∪ Olga does not justify q but they could agree on q

143 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p O1: why p? P2: p since r Paul ∪ Olga does not justify q but they could agree on q

144 Paul ∪ Olga does not justify q but they could agree on q
Example 1 Knowledge bases: Paul: r; Olga: s Inference rules: p ⇒ q, r ⇒ p, s ⇒ ¬r P1: q since p O1: why p? P2: p since r Paul ∪ Olga does not justify q but they could agree on q O2: ¬r since s

145 Example 2 Knowledge bases: Paul: p, q; Olga: ¬p, q ⊃ ¬p Inference rules: Modus ponens P1: claim p Paul ∪ Olga does not justify p but they will agree on p if players are conservative, that is, if they stick to their beliefs if possible 145

146 Example 2 P1: claim p O1: concede p
Knowledge bases: Paul: p, q; Olga: ¬p, q ⊃ ¬p Inference rules: Modus ponens P1: claim p O1: concede p Paul ∪ Olga does not justify p but they will agree on p if players are conservative, that is, if they stick to their beliefs if possible 146

147 Example 2 P1: claim p O1: what about q?
Knowledge bases: Paul: p, q; Olga: ¬p, q ⊃ ¬p Inference rules: Modus ponens P1: claim p O1: what about q? Possible solution (for open-minded agents, who are prepared to critically test their beliefs): 147

148 Example 2 P1: claim p O1: what about q? P2: claim q
Knowledge bases: Paul: p, q; Olga: ¬p, q ⊃ ¬p Inference rules: Modus ponens P1: claim p O1: what about q? P2: claim q Possible solution (for open-minded agents, who are prepared to critically test their beliefs): 148

149 Example 2 P1: claim p O1: what about q? Problem: how to
Knowledge bases: Paul: p, q; Olga: ¬p, q ⊃ ¬p Inference rules: Modus ponens P1: claim p O1: what about q? P2: claim q O2: ¬p since q, q ⊃ ¬p Problem: how to ensure relevance? Possible solution (for open-minded agents, who are prepared to critically test their beliefs): 149

150 Automated Support of Regulated Data Exchange. A Multi-Agent Systems Approach PhD Thesis Pieter Dijkstra (2012) Faculty of Law University of Groningen

151 The communication language
Speech act | Attack | Surrender
request(φ) | offer(φ'), reject(φ) | -
offer(φ) | offer(φ') (φ ≠ φ'), reject(φ) | accept(φ)
reject(φ) | offer(φ') (φ ≠ φ'), why-reject(φ) | -
why-reject(φ) | claim(φ') | -
claim(φ) | why(φ) | concede(φ)
why(φ) | φ since S (an argument) | retract(φ)
φ since S | why(φ) (φ ∈ S), φ' since S' (a defeater) | concede φ' (φ' ∈ S)
deny(φ)
This is Jelle van Veenen

152 The protocol Start with a request
The protocol Start with a request Reply to an earlier move of the other agent Pick your replies from the table Finish persuasion before resuming negotiation Turntaking: In nego: after each move In pers: various rules possible Termination: In nego: if offer is accepted or someone withdraws In pers: if main claim is retracted or conceded

153 Example dialogue formalised
Example dialogue formalised P: Request to tell O: Reject to tell P: Why reject to tell? Embedded persuasion ... O: Offer to tell if no further exchange P: Accept after tell no further exchange

154 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell

155 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell?

156 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed

157 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed P: Concede What endangers an investigation is not allowed

158 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed P: Concede What endangers an investigation is not allowed P: Exception to R1 since National importance & National importance ⇒ Exception to R1

159 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed P: Concede What endangers an investigation is not allowed P: Exception to R1 since National importance & National importance ⇒ Exception to R1 O: Why National importance?

160 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed P: Concede What endangers an investigation is not allowed P: Exception to R1 since National importance & National importance ⇒ Exception to R1 O: Why National importance? P: National importance since Terrorist threat & Terrorist threat ⇒ National importance

161 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed P: Concede What endangers an investigation is not allowed P: Exception to R1 since National importance & National importance ⇒ Exception to R1 O: Concede Exception to R1 O: Why National importance? P: National importance since Terrorist threat & Terrorist threat ⇒ National importance

162 Persuasion part formalised
Persuasion part formalised O: Claim Not allowed to tell P: Why not allowed to tell? O: Not allowed to tell since telling endangers investigation & What endangers an investigation is not allowed O: Retract Not allowed to tell P: Concede What endangers an investigation is not allowed P: Exception to R1 since National importance & National importance ⇒ Exception to R1 O: Concede Exception to R1 O: Why National importance? P: National importance since Terrorist threat & Terrorist threat ⇒ National importance

163 Conclusion Argumentation has two sides:
Inference semantics strict vs defeasible inferences preferences Dialogue language + protocol agent design Both sides can be formally and computationally modelled But not in the same way Metatheory of inference much more advanced than of dialogue

164 Reading (1) Collections Abstract argumentation
T.J.M. Bench-Capon & P.E. Dunne (eds.), Artificial Intelligence 171 (2007), Special issue on Argumentation in Artificial Intelligence I. Rahwan & G.R. Simari (eds.), Argumentation in Artificial Intelligence. Berlin: Springer 2009. A. Hunter (ed.), Argument and Computation 5 (2014), special issue on Tutorials on Structured Argumentation Abstract argumentation P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77 (1995): P. Baroni, M.W.A. Caminada & M. Giacomin. An introduction to argumentation semantics. The Knowledge Engineering Review 26: (2011)

165 Reading (2) Classical and Tarskian argumentation ASPIC+
Ph. Besnard & A. Hunter, Elements of Argumentation. Cambridge, MA: MIT Press, 2008. N Gorogiannis & A Hunter (2011) Instantiating abstract argumentation with classical logic arguments: postulates and properties, Artificial Intelligence 175: L. Amgoud & Ph. Besnard, Logical limits of abstract argumentation frameworks. Journal of Applied Non-Classical Logics 23(2013): ASPIC+ H. Prakken, An abstract framework for argumentation with structured arguments. Argument and Computation 1 (2010): S. Modgil & H. Prakken, A general account of argumentation with preferences. Artificial Intelligence 195 (2013):

166 Reading (3) Assumption-based argumentation Dialogue
A. Bondarenko, P.M. Dung, R.A. Kowalski & F. Toni, An abstract, argumentation-theoretic approach to default reasoning, Artificial Intelligence 93 (1997): P.M. Dung, P. Mancarella & F. Toni, Computing ideal sceptical argumentation, Artificial Intelligence 171 (2007): Dialogue S. Parsons, M. Wooldridge & L. Amgoud, Properties and complexity of some formal inter-agent dialogues. Journal of Logic and Computation 13 (2003): H. Prakken, Coherence and flexibility in dialogue games for argumentation. Journal of Logic and Computation 15 (2005): H. Prakken, Formal systems for persuasion dialogue. The Knowledge Engineering Review 21 (2006):

