Henry Prakken Guangzhou (China) 10 April 2018


Spring School on Argumentation in AI & Law, Day 1, Lecture 2: Structured Argumentation. Henry Prakken, Guangzhou (China), 10 April 2018

Overview. Structured argumentation: arguments, attack, defeat.

From abstract to structured argumentation. Never forget that Dung's framework is an abstraction of more concrete argumentation structures. P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n-person games. Artificial Intelligence 77:321-357, 1995.

From abstract to structured argumentation. Dung defines the status of arguments given a set of arguments and a defeat relation. But we must also have a theory of: how can we construct arguments? How can we attack (and defeat) arguments?

Attack on conclusion. [Diagram: an argument for 'Employer is liable' via Rule 1, from 'Employer breached duty of care' (backed by 'No safety instructions') and 'Employee had work-related injury'; a counterargument for 'Employer is not liable' via Rule 2, from 'Employee was careless' (backed by 'Employee did not secure stairs'). The counterargument attacks the first argument on its conclusion.] The next slides introduce structured argumentation.

Attack on premise. [Diagram: as before, plus a new argument 'Employee had no work-related injury' (from 'Injury caused by poor physical condition') that attacks the premise 'Employee had work-related injury' of the liability argument.]

Attack on inference. [Diagram: as before; the claim 'Injury caused by poor physical condition' is now backed by testimony ('Colleague says so'), and a further argument 'Colleague is not credible' (from 'C is friend of claimant') attacks the testimony inference itself, rather than a premise or a conclusion.]

Indirect defence. [Diagram: a further argument 'Employee secured the stairs' (from 'Camera evidence') attacks 'Employee did not secure stairs', thereby indirectly defending 'Employer is liable'.]


[Diagram: the full example again, with whole arguments now abstracted into nodes B, C, D and E of a Dung-style framework.]

Two accounts of the fallibility of arguments. Plausible reasoning: all fallibility is located in the premises; e.g. assumption-based argumentation (Kowalski, Dung, Toni, ...) and classical argumentation (Cayrol, Besnard, Hunter, ...). Defeasible reasoning: all fallibility is located in the defeasible inferences (Pollock, Loui, Vreeswijk, Prakken & Sartor, ...). ASPIC+ combines these accounts. [Photos: Nicholas Rescher, Tony Hunter, Robert Kowalski, John Pollock.]

ASPIC+ framework: overview. Argument structure: inference graphs where nodes are wffs of a logical language L and links are applications of inference rules: Rs = strict rules (φ1, ..., φn → φ) or Rd = defeasible rules (φ1, ..., φn ⇒ φ). Reasoning starts from a knowledge base K ⊆ L. Defeat: attack on conclusion, premise or inference, plus preferences. Argument acceptability is based on Dung (1995). If no formula is used more than once, then an argument is a tree. Intuitively, if a rule is strict and you accept its premises, you must accept its conclusion no matter what; if a rule is defeasible and you accept its premises, you must accept its conclusion unless you can find good reasons not to.

Argumentation systems (with symmetric negation). An argumentation system is a triple AS = (L, R, n) where: L is a logical language with negation (¬); R = Rs ∪ Rd is a set of strict (φ1, ..., φn → φ) and defeasible (φ1, ..., φn ⇒ φ) inference rules; n: Rd → L is a naming function for defeasible rules. Notation: -φ = ¬φ if φ does not start with a negation; -φ = ψ if φ is of the form ¬ψ. So if φ starts with a negation, -φ strips off the negation.
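The symmetric-negation notation above is easy to mis-read in plain text, so here is a minimal executable sketch, assuming (purely for illustration) that formulas are strings and a leading "-" marks negation:

```python
# Minimal sketch of the -phi notation (assumption: formulas are strings,
# a leading "-" marks negation).
def neg(phi: str) -> str:
    """-phi: add a negation, unless phi already starts with one (then strip it)."""
    return phi[1:] if phi.startswith("-") else "-" + phi

print(neg("p"), neg("-p"))
```

With this convention, φ and -φ are always each other's contradictories, which the attack definitions later rely on.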

Knowledge bases. A knowledge base in AS = (L, R, n) is a set K ⊆ L partitioned into K = Kn ∪ Kp, with: Kn = necessary premises; Kp = ordinary premises.

Argumentation theories. An argumentation theory is a pair AT = (AS, K) where AS is an argumentation system and K a knowledge base in AS.

Structure of arguments. An argument A on the basis of an argumentation theory is:
1. φ if φ ∈ K, with Prem(A) = {φ}, Conc(A) = φ, Sub(A) = {A}, DefRules(A) = ∅, TopRule(A) undefined.
2. A1, ..., An → ψ if A1, ..., An are arguments such that there is a strict inference rule Conc(A1), ..., Conc(An) → ψ, with Prem(A) = Prem(A1) ∪ ... ∪ Prem(An), Conc(A) = ψ, Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}, DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An), TopRule(A) = Conc(A1), ..., Conc(An) → ψ.
3. A1, ..., An ⇒ ψ if A1, ..., An are arguments such that there is a defeasible inference rule Conc(A1), ..., Conc(An) ⇒ ψ, with Prem, Conc and Sub as in clause 2, DefRules(A) = DefRules(A1) ∪ ... ∪ DefRules(An) ∪ {Conc(A1), ..., Conc(An) ⇒ ψ}, TopRule(A) = Conc(A1), ..., Conc(An) ⇒ ψ.
Clause 1 declares all premises to be the base cases of an argument; clauses 2 and 3 say that if you have arguments whose conclusions match the antecedents of an inference rule, you can combine them into a new argument. [Photo: Gerard Vreeswijk.]
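The inductive definition can be rendered as a short program. This is a sketch under assumed encodings: the `Rule` and `Arg` classes below are illustrative conveniences, not official ASPIC+ data structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedents: tuple   # conclusions required of the subarguments
    consequent: str
    defeasible: bool

@dataclass(frozen=True)
class Arg:
    subs: tuple          # direct subarguments; empty for a premise argument
    conc: str
    top: Rule = None     # TopRule(A); None (undefined) for a premise

def prem(a):
    """Prem(A): the premises used in A."""
    return {a.conc} if not a.subs else set().union(*(prem(s) for s in a.subs))

def sub(a):
    """Sub(A): all subarguments of A, including A itself."""
    out = {a}
    for s in a.subs:
        out |= sub(s)
    return out

def defrules(a):
    """DefRules(A): all defeasible rules used in A."""
    rs = set()
    for s in a.subs:
        rs |= defrules(s)
    if a.top is not None and a.top.defeasible:
        rs.add(a.top)
    return rs

# Tiny instance: premise p, then p => t by a defeasible rule d1.
d1 = Rule(("p",), "t", True)
A1 = Arg((), "p")
A5 = Arg((A1,), "t", d1)
```

Freezing the dataclasses makes arguments hashable, so Sub and DefRules can be ordinary Python sets.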

Example.
Rs: p, q → s; u, v → w.
Rd: p ⇒ t; s, r, t ⇒ v.
Kn = {q}; Kp = {p, r, u}.
Arguments: A1 = p, A2 = q, A3 = r, A4 = u, A5 = A1 ⇒ t, A6 = A1, A2 → s, A7 = A5, A3, A6 ⇒ v, A8 = A7, A4 → w.
What are the types of the arguments?

Types of arguments. An argument A is: strict if DefRules(A) = ∅; defeasible if not strict; firm if Prem(A) ⊆ Kn; plausible if not firm.

Types in the example. An argument A is: strict if DefRules(A) = ∅; defeasible if not strict; firm if Prem(A) ⊆ Kn; plausible if not firm. In the example, A2 is strict and firm; A1, A3, A4 and A6 are strict but plausible; A5, A7 and A8 are defeasible and plausible.
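These type definitions can be checked mechanically against the running example; the `(conclusion, kind, subarguments)` tuple encoding below is a hypothetical convenience:

```python
# Running example with Kn = {"q"}, Kp = {"p", "r", "u"};
# an argument is (conclusion, kind, subarguments), where kind is
# "premise", "strict" or "defeasible" (the type of its top rule).
Kn = {"q"}

A1 = ("p", "premise", ()); A2 = ("q", "premise", ())
A3 = ("r", "premise", ()); A4 = ("u", "premise", ())
A5 = ("t", "defeasible", (A1,))           # p => t
A6 = ("s", "strict", (A1, A2))            # p, q -> s
A7 = ("v", "defeasible", (A5, A3, A6))    # s, r, t => v
A8 = ("w", "strict", (A7, A4))            # u, v -> w

def uses_defeasible(arg):
    _, kind, subs = arg
    return kind == "defeasible" or any(uses_defeasible(s) for s in subs)

def premises(arg):
    conc, kind, subs = arg
    return {conc} if kind == "premise" else set().union(*(premises(s) for s in subs))

def classify(arg):
    return ("defeasible" if uses_defeasible(arg) else "strict",
            "firm" if premises(arg) <= Kn else "plausible")
```

Note that A8 is defeasible even though its top rule is strict, because a defeasible rule occurs further down in the argument.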

Attack. A undermines B (on φ) if Conc(A) = -φ for some φ ∈ Prem(B) \ Kn. A rebuts B (on B′) if Conc(A) = -Conc(B′) for some B′ ∈ Sub(B) with a defeasible top rule. A undercuts B (on B′) if Conc(A) = -n(r) for some B′ ∈ Sub(B) with defeasible top rule r. A attacks B iff A undermines, rebuts or undercuts B.
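A sketch of the three attack forms, assuming each argument is summarised in a dict by its conclusion, its premises, the conclusions of its subarguments with a defeasible top rule, and the names n(r) of its defeasible rules; the encoding is illustrative only:

```python
def neg(phi):
    # symmetric negation: "-p" contradicts "p" and vice versa
    return phi[1:] if phi.startswith("-") else "-" + phi

def undermines(a, b, Kn):
    # attack on an ordinary (non-necessary) premise
    return any(a["conc"] == neg(p) for p in b["prems"] - Kn)

def rebuts(a, b):
    # attack on the conclusion of a subargument with a defeasible top rule
    return any(a["conc"] == neg(c) for c in b["def_concs"])

def undercuts(a, b):
    # attack on a defeasible inference itself, via the rule's name n(r)
    return any(a["conc"] == neg(name) for name in b["def_rule_names"])

def attacks(a, b, Kn):
    return undermines(a, b, Kn) or rebuts(a, b) or undercuts(a, b)

# Toy instance: B concludes t from premise p by a defeasible rule named "d1".
B = {"conc": "t", "prems": {"p"}, "def_concs": {"t"}, "def_rule_names": {"d1"}}
A = {"conc": "-p", "prems": {"-p"}, "def_concs": set(), "def_rule_names": set()}
C = {"conc": "-d1", "prems": {"-d1"}, "def_concs": set(), "def_rule_names": set()}
```

Moving p into Kn makes the undermining attack impossible, exactly as the Prem(B) \ Kn clause requires.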


Structured argumentation frameworks. Let AT = (AS, K) be an argumentation theory. A structured argumentation framework (SAF) defined by AT is a triple (Args, C, ≤a) where: Args = {A | A is an argument on the basis of AT}; C is the attack relation on Args; ≤a is an ordering on Args (A <a B iff A ≤a B and not B ≤a A). A c-SAF is a SAF in which all arguments have consistent premises.
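The strict ordering <a is derived from the preorder ≤a exactly as stated; a two-line sketch, with placeholder argument names:

```python
# A <a B iff A <=a B and not B <=a A (the strict part of the preorder)
def strictly_less(leq, a, b):
    return leq(a, b) and not leq(b, a)

# Toy preorder on two hypothetical arguments: "A" is at most as strong as "B".
leq = lambda x, y: (x, y) in {("A", "A"), ("B", "B"), ("A", "B")}
```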

Defeat. A defeats B iff for some B′: A undermines or rebuts B on B′ and not A <a B′; or A undercuts B on B′. (For undermining, where B′ is a premise φ, the condition reads: and not A <a φ.) Note the distinctions between direct and subargument attack/defeat, and between preference-dependent (rebutting, undermining) and preference-independent (undercutting) attacks.
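The preference-dependent versus preference-independent split can be sketched as follows; the attack tuples and the toy ordering are illustrative:

```python
def defeats(attack_list, strictly_less):
    """attack_list: (attacker, attacked_subargument, kind) triples.
    Undercuts succeed regardless of preferences; rebuts and undermines
    succeed only if the attacker is not strictly weaker than its target."""
    return {(a, b) for a, b, kind in attack_list
            if kind == "undercut" or not strictly_less(a, b)}

# Toy instance: A rebuts B' but is strictly weaker; C undercuts D'.
weaker = lambda x, y: (x, y) == ("A", "B'")
result = defeats([("A", "B'", "rebut"), ("C", "D'", "undercut")], weaker)
```

Only the undercut survives the preference check, so `result` contains the single pair (C, D').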

Abstract argumentation frameworks corresponding to SAFs. An abstract argumentation framework corresponding to a (c-)SAF (Args, C, ≤a) is a pair (Args, D) where D is the defeat relation on Args defined by C and ≤a.
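Once a SAF is flattened into (Args, D), any Dung semantics applies; for instance the grounded extension can be computed as a least fixed point. This is a generic sketch, not specific to ASPIC+:

```python
def grounded(args, defeat):
    """Grounded extension: least fixed point of the characteristic function.
    An argument is in iff each of its defeaters is defeated by the current set."""
    current = set()
    while True:
        nxt = {a for a in args
               if all(any((c, b) in defeat for c in current)
                      for b in args if (b, a) in defeat)}
        if nxt == current:
            return current
        current = nxt

# Chain A defeats B defeats C: A is undefeated and reinstates C.
ext = grounded({"A", "B", "C"}, {("A", "B"), ("B", "C")})
```

On a mutual-defeat cycle the grounded extension is empty, reflecting its sceptical character.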

Argument preference. In general the origin of ≤a is left undefined. General constraint: A <a B if B is strict-and-firm and A is defeasible or plausible. ≤a is sometimes defined in terms of partial preorders ≤ (on Rd) and ≤′ (on Kp). The origins of ≤ and ≤′ are domain-specific; some possible criteria: probabilistic strength, legal priority rules, importance of legal or moral values, ... (A partial preorder is transitive and reflexive.)

Which inference rules should we choose? A tradition in AI: the inference rules encode domain-specific knowledge. A philosophically better-founded approach: the inference rules express general patterns of reasoning. Strict rules (+ axioms): a sound and complete proof theory of a 'standard' logic for L. Defeasible rules: argument schemes.

Domain-specific vs. general inference rules. Domain-specific: d1: Bird ⇒ Flies ∈ Rd, s1: Penguin → Bird ∈ Rs, with Penguin ∈ K. General: Rd contains only defeasible modus ponens (from φ and φ ⇒ ψ, defeasibly infer ψ, with ⇒ now a connective of L), Rs includes {S → φ | S ⊢PL φ and S is finite}, and the generalisations move into the knowledge base: Bird ⇒ Flies ∈ K, Penguin → Bird ∈ K. [Diagram: the two resulting argument trees for Flies from Penguin.]

Deriving the strict rules from a monotonic logic. For any logic L with a (monotonic) consequence notion ⊢L that is compact and satisfies Cut, define S → p ∈ Rs iff S is finite and S ⊢L p. This works well only for logics that are compact (whatever is implied by an infinite set is implied by at least one of its finite subsets) and satisfy Cut (roughly: if P implies Q, and P together with Q implies R, then P implies R). For logics that do not satisfy Cut, chaining of strict rules should be forbidden.
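For classical propositional logic, the recipe "S → p ∈ Rs iff S ⊢PL p with S finite" can be checked brute-force with truth tables; the tuple-based formula encoding below is a hypothetical convenience:

```python
from itertools import product

# Formulas: an atom is a string; compound formulas are ("not", f),
# ("and", f, g) or ("or", f, g).
def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not ev(f[1], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)

def entails(S, p):
    """S |-PL p, checked over all valuations of the atoms involved."""
    vocab = sorted(atoms(p).union(*(atoms(s) for s in S)))
    for vals in product([False, True], repeat=len(vocab)):
        v = dict(zip(vocab, vals))
        if all(ev(s, v) for s in S) and not ev(p, v):
            return False   # counter-valuation: S does not entail p
    return True

# The strict rule {p, not p or q} -> q would thus be in Rs:
mp = entails(["p", ("or", ("not", "p"), "q")], "q")
```

Classical propositional logic is compact and satisfies Cut, so this instance of the recipe is safe.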

Argument(ation) schemes: general form. Premise 1, ..., Premise n. Therefore (presumably), conclusion. But also: critical questions. [Photo: Douglas Walton.]

Logical vs. dialogical aspects of argument schemes. Some critical questions ask "why this premise?"; other critical questions ask "is there no exception?". But the burden of proof is on the respondent to show that there are exceptions: one cannot merely ask such questions; one can only state counterarguments.

Argument schemes in ASPIC. Argument schemes are defeasible inference rules; critical questions are pointers to counterarguments: some point to undermining attacks, some to rebutting attacks, some to undercutting attacks.

Reasoning with defeasible generalisations. Scheme: P; if P then normally/usually/typically Q; so (presumably), Q. But defaults can have exceptions, and there can be conflicting defaults. Examples: what experts say is usually true; people with political ambitions are usually not objective about security; people with names typical of country C usually have nationality C; people who flee from a crime scene when the police arrive are normally involved in the crime; Chinese tea is usually very good. Now that generalisations are in the object language, we can reason about them: their properties and their provenance.

Legal rule application. Scheme: IF conditions THEN legal consequence; conditions; so, legal consequence.

Legal rule application: critical questions. Is the rule valid? Is the rule applicable to this case? Must the rule be applied? Is there a statutory exception? Does applying the rule violate its purpose? Does applying the rule have bad consequences? Is there a principle that overrides the rule? Example of a statutory exception: the Dutch Civil Code states that a rule on creditor-debtor relations shall not apply if its application would be manifestly unreasonable. Rule: vehicles are not allowed in the park. Example of violating the purpose: disallowing a war-memorial jeep in the park violates its purpose of promoting peace and tranquility. Example of bad consequences: not allowing an ambulance in the park. Example of an overriding principle: no person shall profit from his own wrongdoing.

Analogy. Scheme: relevantly similar cases should be decided in the same way; this case is relevantly similar to that precedent; therefore (presumably), this case should have the same outcome as the precedent. Critical questions: are there also relevant differences between the cases? Are there conflicting precedents? (Example: stealing electricity vs. copying software.)

Arguments from consequences. Scheme: action A causes G; G is good (bad); therefore (presumably), A should (not) be done. Critical questions: does A also have bad (good) consequences? Are there other ways to bring about G? ...

Example (arguments pro and con an action). Pro: we should make spam a criminal offence, because making spam a criminal offence reduces spam, and reduction of spam is good. Con: we should not make spam a criminal offence, because making spam a criminal offence increases the workload of police and judiciary, and an increased workload of police and judiciary is bad.

Example (arguments pro alternative actions). Option 1: we should make spam a criminal offence (making spam a criminal offence reduces spam; reduction of spam is good). Option 2: we should make spam civilly unlawful (making spam civilly unlawful also reduces spam). Don't only compare performing the action with not performing it, but also with alternative ways of realising the good consequences.

Refinement: promoting or demoting legal/societal values. Scheme: action A causes G; G promotes (demotes) legal/societal value V; therefore (presumably), A should (not) be done. Critical questions: are there other ways to cause G? Does A also cause something else that promotes or demotes other values? ...

Example (arguments pro and con an action). Pro: we should save the DNA of all citizens, because saving the DNA of all citizens leads to solving more crimes, and solving more crimes promotes security. Con: we should not save the DNA of all citizens, because it makes more private data publicly accessible, and making more private data publicly accessible demotes privacy.

Example (arguments pro alternative actions). Option 1: we should save the DNA of all citizens (it leads to solving more crimes; solving more crimes promotes security). Option 2: we should have more police (having more police also leads to solving more crimes, promoting security).