
1 Spring School on Argumentation in AI & Law
Day 1 – lecture 2: Structured argumentation
Henry Prakken, Guangzhou (China), 10 April 2018

2 Overview
Structured argumentation: arguments, attack, defeat

3 From abstract to structured argumentation
Never forget that this is an abstraction of more concrete argumentation structures. P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games. Artificial Intelligence, 77:321–357, 1995.

4 From abstract to structured argumentation
Dung defines the status of arguments, given a set of arguments and a defeat relation. But we must also have a theory of: How can we construct arguments? How can we attack (and defeat) arguments?

5 Attack on conclusion
The next slides introduce structured argumentation.
[Diagram: an argument for "Employer is liable" (via Rule 1, from "Employer breached duty of care" and "Employee had work-related injury") is attacked on its conclusion by an argument for "Employer is not liable" (via Rule 2, from "Employee was careless", itself supported by "Employee did not secure stairs" and "No safety instructions").]

6 Attack on premise
[Diagram: the same two arguments, plus a new argument "Employee had no work-related injury" (from "Injury caused by poor physical condition") attacking the premise "Employee had work-related injury".]

7 Attack on inference
[Diagram: the premise "Employee did not secure stairs" is now the conclusion of a testimonial inference from "Colleague says so"; a counterargument "Colleague is not credible" (from "C is friend of claimant") attacks that inference.]

8 Indirect defence
[Diagram: a new argument "Employee secured the stairs" (from "Camera evidence") attacks "Employee did not secure stairs", thereby indirectly defending the argument that the employer is liable.]

9 [Diagram: the complete example, with all arguments and attacks from the previous slides.]

10 From abstract to structured argumentation
Never forget that this is an abstraction of more concrete argumentation structures. P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games. Artificial Intelligence, 77:321–357, 1995.

11 [Diagram: the complete example again, with the arguments now labelled B, C, D and E, abstracting from their internal structure and linking the structured picture back to Dung's abstract frameworks.]

12 Two accounts of the fallibility of arguments
Plausible reasoning: all fallibility located in the premises (assumption-based argumentation: Kowalski, Dung, Toni, ...; classical argumentation: Cayrol, Besnard, Hunter, ...)
Defeasible reasoning: all fallibility located in the defeasible inferences (Pollock, Loui, Vreeswijk, Prakken & Sartor, ...)
ASPIC+ combines these accounts
[Photos: Nicholas Rescher, Tony Hunter, Robert Kowalski, John Pollock]

13 ASPIC+ framework: overview
Argument structure: inference graphs where
nodes are wffs of a logical language L
links are applications of inference rules: Rs = strict rules (φ1, ..., φn → φ); or Rd = defeasible rules (φ1, ..., φn ⇒ φ)
Reasoning starts from a knowledge base K ⊆ L
Defeat: attack on conclusion, premise or inference, plus preferences
Argument acceptability based on Dung (1995)
If no formula is used more than once, then an argument is a tree. Intuitively, if a rule is strict, then if you accept its premises, you must accept its conclusion no matter what. If a rule is defeasible and you accept its premises, then you must accept its conclusion if you cannot find good reasons not to accept it.

14 Argumentation systems (with symmetric negation)
An argumentation system is a triple AS = (L, R, n) where:
L is a logical language with negation (¬)
R = Rs ∪ Rd is a set of strict (φ1, ..., φn → φ) and defeasible (φ1, ..., φn ⇒ φ) inference rules
n: Rd → L is a naming convention for defeasible rules
Notation: -φ = ¬φ if φ does not start with a negation; -φ = ψ if φ is of the form ¬ψ
If φ starts with a negation, then -φ strips off the negation.
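As a running illustration of these definitions, here is a minimal sketch in Python, assuming formulas are plain strings with a leading "~" encoding negation; the names Rule and neg are illustrative, not part of ASPIC+.

```python
# Minimal sketch of an argumentation system AS = (L, R, n), assuming
# formulas are plain strings and "~" encodes the negation symbol.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedents: tuple   # (phi_1, ..., phi_n)
    consequent: str      # phi
    strict: bool         # True for a strict rule ->, False for defeasible =>
    name: str = ""       # n(r): the name of a defeasible rule, a formula of L

def neg(phi: str) -> str:
    """-phi: strip a leading negation if present, otherwise prepend one."""
    return phi[1:] if phi.startswith("~") else "~" + phi

print(neg("p"), neg("~p"))   # ~p p
```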

15 Knowledge bases
A knowledge base in AS = (L, R, n) is a set K ⊆ L, partitioned into K = Kn ∪ Kp with:
Kn = necessary premises
Kp = ordinary premises

16 Argumentation theories
An argumentation theory is a pair AT = (AS, K) where AS is an argumentation system and K a knowledge base in AS.

17 Structure of arguments
An argument A on the basis of an argumentation theory is:
(1) φ if φ ∈ K, with Prem(A) = {φ}, Conc(A) = φ, Sub(A) = {φ}, DefRules(A) = ∅
(2) A1, ..., An → ψ if A1, ..., An are arguments such that there is a strict inference rule Conc(A1), ..., Conc(An) → ψ, with
Prem(A) = Prem(A1) ∪ ... ∪ Prem(An)
Conc(A) = ψ
Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}
DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An)
(3) A1, ..., An ⇒ ψ if A1, ..., An are arguments such that there is a defeasible inference rule Conc(A1), ..., Conc(An) ⇒ ψ, with Prem, Conc and Sub as in (2) and
DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An) ∪ {Conc(A1), ..., Conc(An) ⇒ ψ}
TopRule(A) is the rule applied in the final step (undefined for premise arguments).
Clause (1) declares all premises to be the base cases of an argument. Clauses (2) and (3) say that if you have a set of arguments and an inference rule with their conclusions as antecedents, you can combine them into a new argument (see the code sketch below).
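This recursive definition translates directly into code. A sketch continuing the previous one, reusing Rule and neg; the class and method names are again illustrative.

```python
# Sketch of the argument definition above. A premise argument has no top
# rule; a compound argument applies its top rule to the conclusions of its
# direct subarguments A1, ..., An.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    conc: str                 # Conc(A)
    subs: tuple = ()          # direct subarguments A1, ..., An
    top_rule: Rule = None     # TopRule(A); None for premise arguments

    def prem(self):           # Prem(A)
        if self.top_rule is None:
            return {self.conc}
        return set().union(*(a.prem() for a in self.subs))

    def sub(self):            # Sub(A), including A itself
        return {self}.union(*(a.sub() for a in self.subs))

    def def_rules(self):      # DefRules(A)
        rules = set().union(*(a.def_rules() for a in self.subs))
        if self.top_rule is not None and not self.top_rule.strict:
            rules.add(self.top_rule)
        return rules
```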

18 Rs: p, q → s; u, v → w. Rd: p ⇒ t; s, r, t ⇒ v. Kn = {q}; Kp = {p, r, u}
A1 = p   A5 = A1 ⇒ t
A2 = q   A6 = A1, A2 → s
A3 = r   A7 = A5, A3, A6 ⇒ v
A4 = u   A8 = A7, A4 → w
[Diagram: the inference tree of A8, with conclusion w at the root.]
What are the types of the arguments?
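Before answering, here is the example built with the sketch classes from slides 14 and 17 (an illustration, not part of the formal machinery):

```python
# The example above, in the sketch's representation.
Kn, Kp = {"q"}, {"p", "r", "u"}
r1 = Rule(("p", "q"), "s", strict=True)
r2 = Rule(("u", "v"), "w", strict=True)
d1 = Rule(("p",), "t", strict=False, name="d1")
d2 = Rule(("s", "r", "t"), "v", strict=False, name="d2")

A1, A2, A3, A4 = Argument("p"), Argument("q"), Argument("r"), Argument("u")
A5 = Argument("t", (A1,), d1)            # A5 = A1 => t
A6 = Argument("s", (A1, A2), r1)         # A6 = A1, A2 -> s
A7 = Argument("v", (A5, A3, A6), d2)     # A7 = A5, A3, A6 => v
A8 = Argument("w", (A7, A4), r2)         # A8 = A7, A4 -> w

print(A8.prem())                          # {'p', 'q', 'r', 'u'}
print(len(A8.def_rules()))                # 2 (d1 and d2)
```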

19 Types of arguments
An argument A is:
Strict if DefRules(A) = ∅; Defeasible if not strict
Firm if Prem(A) ⊆ Kn; Plausible if not firm
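Over the sketch's Argument class these four properties are one-liners; applied to the example, A6 turns out strict but plausible:

```python
# Argument types over the sketch classes; Kn is the set of necessary premises.
def is_strict(A):   return not A.def_rules()   # no defeasible rules used
def is_firm(A, Kn): return A.prem() <= Kn      # only necessary premises used
# defeasible = not strict; plausible = not firm

print(is_strict(A6), is_firm(A6, Kn))   # True False: strict but plausible
print(is_strict(A8), is_firm(A2, Kn))   # False True
```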

20 The running example again, with the type definitions alongside. An argument A is:
strict if DefRules(A) = ∅; defeasible if not strict; firm if Prem(A) ⊆ Kn; plausible if not firm
[Diagram: the tree of A8 from slide 18. For instance, A6 is strict but plausible, A2 is firm, and A5, A7 and A8 are defeasible.]

21 Attack
A undermines B (on φ) if Conc(A) = -φ for some φ ∈ Prem(B) \ Kn
A rebuts B (on B′) if Conc(A) = -Conc(B′) for some B′ ∈ Sub(B) with a defeasible top rule
A undercuts B (on B′) if Conc(A) = -n(r) for some B′ ∈ Sub(B) with defeasible top rule r
A attacks B iff A undermines or rebuts or undercuts B.
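A sketch of the three attack relations over the earlier classes; the undercut test assumes, as above, that rule names are formulas of L:

```python
# The three forms of attack from the definition above.
def undermines(A, B, Kn):
    return any(A.conc == neg(phi) for phi in B.prem() - Kn)

def rebuts(A, B):
    return any(A.conc == neg(Bp.conc) for Bp in B.sub()
               if Bp.top_rule is not None and not Bp.top_rule.strict)

def undercuts(A, B):
    return any(A.conc == neg(Bp.top_rule.name) for Bp in B.sub()
               if Bp.top_rule is not None and not Bp.top_rule.strict)

def attacks(A, B, Kn):
    return undermines(A, B, Kn) or rebuts(A, B) or undercuts(A, B)

# E.g. an argument concluding ~t rebuts A8 (on its subargument A5 = A1 => t):
print(rebuts(Argument("~t"), A8))   # True
```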

22 The running example again: Rs: p, q → s; u, v → w. Rd: p ⇒ t; s, r, t ⇒ v. Kn = {q}; Kp = {p, r, u}
A1 = p, A2 = q, A3 = r, A4 = u, A5 = A1 ⇒ t, A6 = A1, A2 → s, A7 = A5, A3, A6 ⇒ v, A8 = A7, A4 → w
[Diagram: the tree of A8, now as a basis for identifying attacks.]

23 Structured argumentation frameworks
Let AT = (AS, K) be an argumentation theory. A structured argumentation framework (SAF) defined by AT is a triple (Args, C, ⪯a) where:
Args = {A | A is an argument on the basis of AT}
C is the attack relation on Args
⪯a is an ordering on Args (A ≺a B iff A ⪯a B and not B ⪯a A)
A c-SAF is a SAF in which all arguments have consistent premises.
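The strict version of the ordering is derived from the preorder exactly as in the parenthesis above; a two-line sketch, where le is an assumed test for A ⪯a B:

```python
# A <_a B iff A <=_a B and not B <=_a A; `le` is an assumed preorder test.
def strictly_preferred(le, A, B):
    return le(A, B) and not le(B, A)
```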

24 Defeat
A undermines B (on φ) if Conc(A) = -φ for some φ ∈ Prem(B) \ Kn
A rebuts B (on B′) if Conc(A) = -Conc(B′) for some B′ ∈ Sub(B) with a defeasible top rule
A undercuts B (on B′) if Conc(A) = -n(r) for some B′ ∈ Sub(B) with defeasible top rule r
A defeats B iff for some B′:
A undermines or rebuts B on B′ and not A ≺a B′; or
A undercuts B on B′
(For undermining, B′ is the premise φ itself, so the condition reads: not A ≺a φ.)
Note the distinctions: direct vs. subargument attack/defeat, and preference-dependent vs. preference-independent attacks. A sketch of this definition in code follows.
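Putting attack and preference together; prec(A, B) stands for the strict ordering A ≺a B and is assumed to be supplied by the user:

```python
# Defeat: undercutting succeeds unconditionally; rebutting and undermining
# succeed only if the attacker is not strictly less preferred than the
# attacked subargument B'. prec(A, B) encodes A <_a B (assumed given).
def defeats(A, B, Kn, prec):
    for Bp in B.sub():
        if Bp.top_rule is not None and not Bp.top_rule.strict:
            if A.conc == neg(Bp.top_rule.name):              # undercut on B'
                return True
            if A.conc == neg(Bp.conc) and not prec(A, Bp):   # rebut on B'
                return True
        elif Bp.top_rule is None and Bp.conc not in Kn:      # ordinary premise
            if A.conc == neg(Bp.conc) and not prec(A, Bp):   # undermine on B'
                return True
    return False

# With an empty preference relation (nothing strictly preferred):
print(defeats(Argument("~t"), A8, {"q"}, lambda A, B: False))   # True
```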

25 Abstract argumentation frameworks corresponding to SAFs
An abstract argumentation framework corresponding to a (c-)SAF = (Args, C, ⪯a) is a pair (Args, D), where D is the defeat relation on Args defined by C and ⪯a.

26 Argument preference
In general the origin of the argument ordering is left undefined
General constraint: A ≺a B if B is strict-and-firm and A is defeasible or plausible
Sometimes defined in terms of partial preorders ≤ (on Rd) and ≤′ (on Kp)
The origins of ≤ and ≤′ are domain-specific! Some possible criteria: probabilistic strength, legal priority rules, importance of legal or moral values
(A partial preorder is transitive and reflexive.)

27 Which inference rules should we choose?
A tradition in AI: the inference rules encode domain-specific knowledge
A philosophically better-founded approach: the inference rules express general patterns of reasoning
Strict rules (+ axioms): a sound and complete proof theory of a 'standard' logic for L
Defeasible rules: argument schemes.

28 Domain-specific vs. general inference rules
Domain-specific: d1: Bird ⇒ Flies; s1: Penguin → Bird; Penguin ∈ K
General: Rd = {φ, φ ⇝ ψ ⇒ ψ} (defeasible modus ponens, with ⇝ an object-language defeasible conditional); Rs includes {S → φ | S ⊢PL φ and S is finite}; Bird ⇝ Flies ∈ K; Penguin ⊃ Bird ∈ K; Penguin ∈ K
[Diagrams: the argument for Flies built in each of the two styles.]
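The domain-specific reading drops straight into the sketch classes from the earlier slides (an illustrative encoding):

```python
# Domain-specific style: the generalisations are inference rules, not premises.
d1 = Rule(("Bird",), "Flies", strict=False, name="d1")
s1 = Rule(("Penguin",), "Bird", strict=True)

penguin = Argument("Penguin")               # Penguin in K
bird    = Argument("Bird", (penguin,), s1)  # Penguin -> Bird
flies   = Argument("Flies", (bird,), d1)    # Bird => Flies
print(flies.def_rules())                    # {d1}: the argument is defeasible
```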

29 Deriving the strict rules from a monotonic logic
For any logic L with a (monotonic) consequence notion ⊢L that is compact and satisfies Cut, define: S → p ∈ Rs iff S is finite and S ⊢L p
This works well only for logics that are compact (whatever is implied by an infinite set is already implied by at least one of its finite subsets) and that satisfy Cut (if S ⊢ φ and S ∪ {φ} ⊢ ψ, then S ⊢ ψ). For logics that do not satisfy Cut, chaining of strict rules should be forbidden.
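For classical propositional logic the membership test S → p ∈ Rs reduces to an entailment check. A naive truth-table sketch, assuming formulas are written as Python boolean expressions over named atoms (a purely illustrative encoding, not ASPIC+'s official language):

```python
from itertools import product

def entails_pl(premises, conclusion, atoms):
    """S |-PL p: every valuation satisfying all of S satisfies p."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(eval(f, {}, env) for f in premises) and not eval(conclusion, {}, env):
            return False
    return True

# S -> p is in Rs iff S is finite and S |-PL p, e.g.:
print(entails_pl(["p or q", "not p"], "q", ["p", "q"]))   # True
print(entails_pl(["p or q"], "p", ["p", "q"]))            # False
```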

30 Argument(ation) schemes: general form
Premise 1, ..., Premise n
Therefore (presumably), conclusion
But also: critical questions (Douglas Walton)

31 Logical vs. dialogical aspects of argument schemes
Some critical questions ask "why this premise?"; other critical questions ask "is there no exception?"
But the burden of proof is on the respondent to show that there are exceptions! One cannot ask such questions; one can only state counterarguments.

32 Argument schemes in ASPIC+
Argument schemes are defeasible inference rules
Critical questions are pointers to counterarguments: some point to undermining attacks, some to rebutting attacks, some to undercutting attacks

33 Reasoning with defeasible generalisations
P; if P then normally/usually/typically Q; so (presumably), Q
But defaults can have exceptions, and there can be conflicting defaults.
Examples: what experts say is usually true; people with political ambitions are usually not objective about security; people with names typical of country C usually have nationality C; people who flee from a crime scene when the police arrive are normally involved in the crime; Chinese tea is usually very good.
Now that generalisations are in the object language, we can reason about them: their properties and their provenance.

34 Legal rule application
IF conditions THEN legal consequence
conditions
So, legal consequence

35 Legal rule application: critical questions
Is the rule valid? Is the rule applicable to this case? Must the rule be applied?
Is there a statutory exception? Does applying the rule violate its purpose? Does applying the rule have bad consequences? Is there a principle that overrides the rule?
Example of a statutory exception: the Dutch Civil Code states that a rule on creditor-debtor relations shall not apply if its application would be manifestly unreasonable.
Rule: vehicles are not allowed in the park.
Example of violating the purpose: disallowing a war-memorial jeep in the park violates the purpose of promoting peace and tranquility in the park.
Example of bad consequences: not allowing an ambulance in the park.
Example of a violated principle: no person shall profit from his own wrongdoing.

36 Analogy
Relevantly similar cases should be decided in the same way; this case is relevantly similar to that precedent; therefore (presumably), this case should have the same outcome as the precedent.
Critical questions: Are there also relevant differences between the cases? Are there conflicting precedents?
(Example: stealing electricity vs. copying software.)

37 Arguments from consequences
Action A causes G; G is good (bad); therefore (presumably), A should (not) be done.
Critical questions: Does A also have bad (good) consequences? Are there other ways to bring about G? ...

38 Example (arguments pro and con an action)
Pro: making spam a criminal offence reduces spam; reduction of spam is good; so we should make spam a criminal offence.
Con: making spam a criminal offence increases the workload of the police and the judiciary; an increased workload of the police and the judiciary is bad; so we should not make spam a criminal offence.

39 Example (arguments pro alternative actions)
Making spam a criminal offence reduces spam; reduction of spam is good; so we should make spam a criminal offence.
Making spam civilly unlawful reduces spam; reduction of spam is good; so we should make spam civilly unlawful.
Don't only compare performing the action with not performing it; also compare it with alternative ways of realising the good consequences.

40 Refinement: promoting or demoting legal/societal values
Action A causes G; G promotes (demotes) legal/societal value V; therefore (presumably), A should (not) be done.
Critical questions: Are there other ways to cause G? Does A also cause something else that promotes or demotes other values? ...

41 Example (arguments pro and con an action)
Pro: saving the DNA of all citizens leads to solving more crimes; solving more crimes promotes security; so we should save the DNA of all citizens.
Con: saving the DNA of all citizens makes more private data publicly accessible; making more private data publicly accessible demotes privacy; so we should not save the DNA of all citizens.

42 Example (arguments pro alternative actions)
Saving the DNA of all citizens leads to solving more crimes; solving more crimes promotes security; so we should save the DNA of all citizens.
Having more police leads to solving more crimes; solving more crimes promotes security; so we should have more police.

