A Mathematical Comment on the Fundamental Difference Between Scientific Theory Formation and Legal Theory Formation
Ronald P. Loui, St. Louis, USA
Why? Who?
Philosophers of science (students of generalized inductive reasoning) should find the legal theory formation problem (generalized moral reasoning) interesting now that there are new tools:
– Defeasible conditionals
– A record of arguments
– Models of procedures
– Diachronic models: confirmational conditionalization, belief revision, derogation, contraction, choice
– Models of legislative compromise, linguistic interpretation, and determination
Why? Who?
What are the similarities and dissimilarities?
– Obviously: attitude toward error
– What else?
– What formal ramifications?
Could the LTF problem be expressed as simply as the STF problem?
Further Motivation
Is Machine Learning too quick to simplify the problem?
Can the important nuances of LTF and STF be written in a mathematically brief way?
Legal Theory Formation: LTF
Case 1:
– Facts: a b c d e
– Decision: h
Case 2:
– Facts: a b c d e f
– Decision: !h
Induced rule(s):
– Defeasibly, a b c d e ⇒ h
– Defeasibly, a b c d e f ⇒ !h
Why not:
– a ⇒ h
– a f ⇒ !h
Scientific Theory Formation: STF
Case 1:
– Facts: a b c d e
– Decision: h
Case 2:
– Facts: a b c d e f
– Decision: !h
Induced rule(s):
– Deductively, a b c d e !f → h
– Deductively, a b c d e f → !h
Why not:
– !f → h
– f → !h
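The contrast between the two induction styles can be sketched in code. This is my own illustration, not the slides' formal apparatus: the LTF side keeps each case's full fact set as a defeasible antecedent, while the STF side (under an assumed closed-world reading) must strengthen the first rule with !f to stay exceptionless.

```python
# The same two cases, absorbed by each style of theory formation.
cases = [
    (frozenset("abcde"), "h"),    # Case 1: facts a..e, decision h
    (frozenset("abcdef"), "!h"),  # Case 2: facts a..f, decision !h
]

# LTF: each case yields a defeasible rule; the apparent conflict is
# tolerated because the more specific rule (mentioning f) wins by default.
defeasible_rules = [(facts, outcome) for facts, outcome in cases]

# STF: a deductive rule must be exceptionless, so each antecedent is
# completed with the negations of the facts absent from the case.
def deductive_rules(cases):
    rules = []
    all_facts = set().union(*(f for f, _ in cases))
    for facts, outcome in cases:
        absent = all_facts - facts          # closed world: absent facts are false
        antecedent = facts | {"!" + a for a in absent}
        rules.append((frozenset(antecedent), outcome))
    return rules

for ant, out in deductive_rules(cases):
    print(sorted(ant), "->", out)
```

Note how the deductive version of Case 1's rule now mentions !f explicitly, whereas the defeasible version never has to.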
STF vs. LTF
Conditionals:
– Deductive vs.
– Defeasible
Bias:
– What is simpler? vs.
– What is motivated by argument?
Input:
– State (complete, closed-world) description vs.
– Partial (incomplete) description
STF, LTF vs. belief revision (AGM):
– too much (= epistemic state + constraints on change) vs.
– too little (= not enough guidance among choices)
Curve-Fitting: assign error as required
Spline-Fitting: complexify as required
2-DNF Fitting
Data:
– Case 1: a b c d
– Case 2: !a b c !d
– Case 3: a !b !c d
Formula:
– (a v b) ^ (c v d)
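A quick check, my own snippet, that the induced formula fits all three cases. (Expanding (a v b) ^ (c v d) into disjunctive normal form gives four terms of two literals each, which is where the 2-DNF reading comes from.)

```python
# The three cases from the slide, as truth assignments.
cases = [
    {"a": True,  "b": True,  "c": True,  "d": True},   # Case 1: a b c d
    {"a": False, "b": True,  "c": True,  "d": False},  # Case 2: !a b c !d
    {"a": True,  "b": False, "c": False, "d": True},   # Case 3: a !b !c d
]

def two_dnf(v):
    # (a v b) ^ (c v d)
    return (v["a"] or v["b"]) and (v["c"] or v["d"])

print(all(two_dnf(v) for v in cases))  # True: the formula fits every case
```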
Transitive Fitting
Reports of indifference and preference:
A ~ B
B > C
A ~ C
C ~ D
A ~ D
Error: remove B > C; actually B ~ C (1 report of 5)
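One way to detect the offending report mechanically. The formalization is mine, not the slides': treat "~" as an equivalence relation (via union-find) and flag any ">" report that relates two items the "~" reports force into the same indifference class.

```python
reports = [("A", "~", "B"), ("B", ">", "C"), ("A", "~", "C"),
           ("C", "~", "D"), ("A", "~", "D")]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Merge indifference classes.
for x, rel, y in reports:
    if rel == "~":
        union(x, y)

# A strict preference inside one indifference class is an error.
errors = [(x, rel, y) for x, rel, y in reports
          if rel == ">" and find(x) == find(y)]
print(errors)  # the single offending report, B > C (1 of 5)
```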
STF vs. LTF
Fit:
– Quantify error (like overturning precedent in LTF) vs.
– Distinguish as needed (like auxiliary hypotheses in STF)
SO FAR, ALL THIS IS OBVIOUS
More Nuanced Model of STF
Kyburg:
– Corpus of accepted beliefs K
– Probability of s given K: P_K(s)
– s is acceptable? P_K(s) > 1 − e
– Theory is U: K_U = D-Thm(K_0 ∪ U)
– STF: choose U* to "fit" K_0
Best fit of U* gives largest PI-Thm(K)
PI-Thm(K) = K ∪ { s | P_K(s) > 1 − e }
– Trades power (simplicity) and error (fit)
If U is too simple, it doesn't fit, hence all P_K small
If U is too complicated, D-Thm(K_0 ∪ U) is small
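A toy rendering of the Kyburgian trade-off. The probability model and the two candidate theories here are invented for illustration; the point is only the shape of the score: a theory is judged by how many sentences it renders acceptable at level 1 − e.

```python
EPS = 0.1

# Hypothetical sentences with their probability given K_0 plus each
# candidate theory: a simple one that fits poorly, a complex one that fits.
candidates = {
    "U_simple":  {"s1": 0.95, "s2": 0.60, "s3": 0.55},  # poor fit: low P_K
    "U_complex": {"s1": 0.95, "s2": 0.93, "s3": 0.91},
}

def pi_thm(probs, eps=EPS):
    # PI-Thm(K) = K u { s | P_K(s) > 1 - eps }; K itself is omitted here.
    return {s for s, p in probs.items() if p > 1 - eps}

best = max(candidates, key=lambda u: len(pi_thm(candidates[u])))
print(best, sorted(pi_thm(candidates[best])))
```

In a fuller sketch the complexity penalty would also appear (too complicated a U shrinks D-Thm(K_0 ∪ U)); here only the fit side is shown.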
More Nuanced Model of LTF
Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth):
– A case has arguments A_1, …, A_k, B_1, …, B_{k−1}
– Arguments have structure:
Trees, labeled with propositions
Argument for h: h is the root
Leaves are uncontested "facts"
Internal nodes are "intermediary conclusions"
Defeasible rules: Children(p) ⇒ p
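A minimal rendering of that structure (the tuple encoding is my own choice): an argument tree where every internal node p yields the defeasible rule Children(p) ⇒ p, using the tree from the slides that follow.

```python
# Root h has intermediary conclusions p and q;
# p rests on leaf fact a, q rests on leaf facts b, c, d.
tree = ("h", [("p", [("a", [])]),
              ("q", [("b", []), ("c", []), ("d", [])])])

def induced_rules(node):
    """Each internal node p contributes the rule Children(p) => p."""
    label, children = node
    rules = []
    if children:  # leaves are uncontested facts and contribute no rule
        rules.append((tuple(c[0] for c in children), label))
        for c in children:
            rules.extend(induced_rules(c))
    return rules

for body, head in induced_rules(tree):
    print(" ".join(body), "=>", head)
```

This prints the three rules the deck extracts from the figure: p q ⇒ h, a ⇒ p, and b c d ⇒ q.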
Argument for h
[Figure: argument tree with root h, intermediary conclusions p and q, and leaf facts a, b, c, d]
Defeasibly:
– a ⇒ p
– b c d ⇒ q
– p q ⇒ h
Dialectical Tree
[Figure: dialectical tree. Petitioner arguments: A_1 (for h), A_2 (for !q), A_3 (for !q); respondent arguments: B_1 (for !p), B_2 (for !r). Edges are labeled "interferes" and "defeats".]
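The standard way to evaluate such a tree can be sketched as follows. The marking rule is the usual one for dialectical trees (a node stands iff every one of its defeaters is itself defeated); the particular tree shape below is my guess at the figure, so treat the specific edges as hypothetical.

```python
# Defeat relation as an adjacency map: node -> its defeaters.
tree = {
    "A1": ["B1", "B2"],  # respondent moves against the root argument
    "B1": ["A2"],        # petitioner reply
    "B2": ["A3"],        # petitioner reply
    "A2": [], "A3": [],
}

def undefeated(node):
    # A leaf argument stands (all() over an empty list is True);
    # an inner node stands iff every defeater is itself defeated.
    return all(not undefeated(d) for d in tree[node])

print(undefeated("A1"))  # True: both defeaters of A1 are answered
```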
More Nuanced Model of LTF
Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth):
– A case has arguments A_1, …, A_k, B_1, …, B_{k−1}
– Arguments have structure
– Induced rules must be grounded in cases Δ (e.g., c_1 = ({a,b,c,d,e}, h, {(h, {(p,{a}), (q,{b,c,d})}), …}) or background sources Ω (e.g., p q ⇒ h, r_17 = ({p,q}, h))
STF vs. LTF
Invention:
– Out of (mathematical) thin air vs.
– Possible interpretations of cases
Purpose:
– To discover (nomological) rules from (the accident of) cases vs.
– To summarize (the wisdom of) cases as (linguistic) rules
What is grounded?
Case: a b c d e, decided for h
φ = {a, b, c, d, e}
Any C ⊆ φ as lhs of a rule for h?
What if d was used only to argue against h?
– d ⇒ h? Really? (Even Ashley disallows this.)
What if e was used only to rebut the d-based argument?
– a b c e ⇒ h? Really? e isn't relevant except to undercut d.
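One way to render the grounding constraint mechanically. The role labels and the filtering rule here are my own formalization of the slide's hypotheticals: a fact may appear in the lhs of an induced rule for h only if it actually supported h, not if it merely attacked h or rebutted an attack.

```python
phi = {"a", "b", "c", "d", "e"}

# Roles assigned per the slide's hypotheticals (labels are mine).
role = {
    "a": "supports_h", "b": "supports_h", "c": "supports_h",
    "d": "attacks_h",       # d argued only against h
    "e": "rebuts_attack",   # e only undercut the d-based argument
}

# Only facts that supported h may ground the lhs of a rule for h.
lhs_candidates = {f for f in phi if role[f] == "supports_h"}
print(sorted(lhs_candidates))  # ['a', 'b', 'c']
```

So d ⇒ h and a b c e ⇒ h are both ruled out, while subsets of {a, b, c} remain candidate antecedents.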
Proper Elisions I: Argument Trees
[Figure: argument tree with root h, intermediaries p and q, leaves a, b, c, d; the subtree under p is elided]
p b c d ⇒ h
Proper Elisions I: Argument Trees
[Figure: the argument tree for h as before, now with a counterargument for !q built from a, b, f]
p b c d ⇒ h
p b c d f ⇒ h ?
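The elision step itself can be sketched as a tree cut. The helper below is my own illustration: it collects the body of the induced rule for the root, stopping at any intermediate conclusion chosen to stand in for its subtree.

```python
# The argument tree from the earlier slides.
tree = ("h", [("p", [("a", [])]),
              ("q", [("b", []), ("c", []), ("d", [])])])

def elide(node, keep):
    """Collect the rule body below `node`, cutting at any label in `keep`."""
    label, children = node
    if label in keep or not children:
        return [label]
    body = []
    for c in children:
        body.extend(elide(c, keep))
    return body

root, children = tree
body = []
for c in children:
    body.extend(elide(c, keep={"p"}))   # elide the a-subtree to p
print(body, "=>", root)                 # ['p', 'b', 'c', 'd'] => h
```

Cutting at p and leaving q expanded is exactly what produces the slide's rule p b c d ⇒ h.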
Proper Elisions II: Dialectical Trees
[Figure: the dialectical tree from before: A_1 (for h), A_2 (for !q), A_3 (for !q), B_1 (for !p), B_2 (for !r); edges labeled "interferes" and "defeats"]
STF vs. LTF
LTF:
1. Defeasible
2. Differences distinguished
3. Cases summarized/organized
4. Argument is crucial
5. Justification-obsessed
6. Loui: Arguments, Grounding, Proper Elision Principles
STF:
1. Deductive
2. Error quantified
3. Rules discovered
4. Probability is key
5. Simplicity-biased
6. Kyburg: Acceptance, Error, Inference, Coherence
More Nuanced Model of STF
Kyburg:
– Corpus of accepted beliefs K
– Probability of s given K: P_K(s)
– s is acceptable? P_K(s) > 1 − e
– Theory is U: K_U = D-Thm(K_0 ∪ U)
– STF: choose U* to "fit" K_0
Best fit of U* gives largest PI-Thm(K)
PI-Thm(K) = K ∪ { s | P_K(s) > 1 − e }
– Trades power (simplicity) and error (fit)
If U is too simple, it doesn't fit, hence all P_K small
If U is too complicated, D-Thm(K_0 ∪ U) is small
More Nuanced Model of LTF
Loui-Norman (Prakken-Sartor-Hage-Verheij-Lodder-Roth):
– A case has arguments A_1, …, A_k, B_1, …, B_{k−1}
– Arguments have structure
– Induced rules must be grounded in cases Δ (e.g., c_1 = ({a,b,c,d,e}, h, {(h, {(p,{a}), (q,{b,c,d})}), …}) or background sources Ω (e.g., p q ⇒ h, r_17 = ({p,q}, h))
– And proper elisions
Machine Learning?
– Models are too simple
– The problem is in the modeling, not the algorithm
– SVM is especially insulting
Acknowledgements
Henry Kyburg; Ernest Nagel, Morris Cohen; Jeff Norman; Guillermo Simari, Ana Maguitman, Carlos Chesñevar, Alejandro Garcia; John Pollock, Thorne McCarty, Henry Prakken