
1 Adapting de Finetti's Proper Scoring Rules for Measuring Bayesian Subjective Probabilities when Those Probabilities Are not Bayesian. Peter P. Wakker (& Gijs van de Kuilen, Theo Offerman, Joep Sonnemans). April 6, 2006, Rotterdam. Topic: how to measure your subjective probability p of an event E ("rain tomorrow"), in a way that is also acceptable to frequentists?

2 Method 1. Ask directly (introspection). Problem: no clear empirical meaning; no incentive compatibility. Economists don't like such things.

3 Method 2. Reveal (binary) preferences. €0.30 ≻ (€1 under E) ≻ €0.20 ⟹ 0.30 > p > 0.20 (if EV = expected value). Problem: much work, and we get only inequalities. (And: EV may be violated.) Method 3. Reveal indifferences. €0.25 ~ (€1 under E) ⟹ p = 0.25 (if EV ...). Problem: indifferences are hard to observe (BDM ...). (And: EV may be violated.)

4 Method 4. de Finetti's bookmaking. Not explained here. Problem: not implementable in practice. (And: EV may be violated.)

5 Method 5. Proper scoring rules. Choose 0 ≤ r ≤ 1 as you like (r is called your reported probability). Next, you receive 1 − (1−r)² if E obtains and 1 − r² if E^c (not E) obtains. EV: take the probability p of E (subjective if need be) and maximize EV. Optimization over r:

6 EV = p(1 − (1−r)²) + (1 − p)(1 − r²). First-order condition: 2p(1−r) − (1−p)2r = 0, so 2p(1−r) = (1−p)2r, hence r = p! Wow! This avoids all problems mentioned above, except one. Problem: EV may be violated ... Proper scoring rules are tractable & widely used: see Hanson (Nature, 2002) and Prelec (Science, 2005); also in accounting (Wright 1988), Bayesian statistics (Savage 1971), business (Staël von Holstein 1972), education (Echternacht 1972), medicine (Spiegelhalter 1986), and psychology (Liberman & Tversky 1993; McClelland & Bolger 1994).
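A minimal numerical check of this properness result (a Python sketch, not part of the original slides): for any probability p, expected value is maximized by reporting r = p.

```python
import numpy as np

def payoff(r, event_occurs):
    """Quadratic scoring rule from slide 5: 1 - (1-r)^2 if E obtains, 1 - r^2 otherwise."""
    return 1 - (1 - r) ** 2 if event_occurs else 1 - r ** 2

def expected_value(r, p):
    return p * payoff(r, True) + (1 - p) * payoff(r, False)

# Grid search over reports r: the maximizer coincides with the true probability p.
grid = np.linspace(0.0, 1.0, 10001)
for p in (0.2, 0.5, 0.75):
    r_best = grid[np.argmax(expected_value(grid, p))]
    print(p, round(float(r_best), 3))   # prints p twice: the rule is proper under EV
```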

7 Before analyzing non-EV descriptively, a theoretical example [EV]. Urn K ("known") with 100 balls: 25 G(reen), 25 R(ed), 25 S(ilver), 25 Y(ellow). One ball is drawn randomly. E: the ball is not red; E = {G,S,Y}; p = 0.75. Under expected value, the optimal r_E is 0.75. r_G + r_S + r_Y = 0.25 + 0.25 + 0.25 = 0.75 = r_E: r satisfies additivity, as probabilities should! (Example reanalyzed later.)

8 Figure: reported probability R(p) = r_E as a function of the true probability p, under (a) expected value (EV), (b) expected utility with U(x) = x^0.5 (EU), and (c) nonexpected utility for known probabilities with U(x) = x^0.5 and w(p) as common (nonEU); r_nonEUA refers to nonexpected utility for unknown probabilities ("Ambiguity"). Rewards for E = {G,S,Y}, p = 0.75:
reported r          if E true   if not E true   EV
r_EV     = 0.75       0.94         0.44         0.8125
r_EU     = 0.69       0.91         0.52         0.8094
r_nonEU  = 0.61       0.85         0.63         0.7920
r_nonEUA = 0.52       0.77         0.73         0.7596
(Go to p. 11 for the EU example, p. 15 for the nonEU example, p. 20 for the nonEUA example.)

9 Deviation 1 [utility curvature]. Bernoulli (1738): risk aversion! ⟹ U concave (if EU ...). Now optimize p·U(1 − (1−r)²) + (1 − p)·U(1 − r²).

10 Explicit expression: r = p / (p + (1−p) · U´(1−r²) / U´(1−(1−r)²)), or, solved for p: p = r / (r + (1−r) · U´(1−(1−r)²) / U´(1−r²)). EV: r is additive. EU: r is nonadditive (unless U is symmetric about 0.5). This gives a critical test of EV versus EU.
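To illustrate the explicit expression, a small sketch (assuming U(x) = √x, as on the later slides; not part of the original deck) that maps an observed report r back to the probability p for which that report would be optimal under EU:

```python
import math

def u_prime(x):
    """Marginal utility for U(x) = sqrt(x)."""
    return 0.5 / math.sqrt(x)

def p_from_r(r):
    """Slide 10's explicit expression: the p for which reporting r is EU-optimal."""
    return r / (r + (1 - r) * u_prime(1 - (1 - r) ** 2) / u_prime(1 - r ** 2))

print(round(p_from_r(0.69), 2))   # ≈ 0.75: a report of 0.69 is optimal for p = 0.75
print(round(p_from_r(0.75), 2))   # ≈ 0.81: a report of 0.75 corresponds to p ≈ 0.81
```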

11 Theoretical example continued [expected utility]. (Urn K: 25 G, 25 R, 25 S, 25 Y.) E = {G,S,Y}; p = 0.75. EV: r_EV = 0.75. Expected utility with U(x) = √x: r_EU = 0.69. r_G + r_S + r_Y = 0.31 + 0.31 + 0.31 = 0.93 > 0.69 = r_E: additivity violated! Such data prove that expected value cannot hold. (Example reanalyzed later; see the figure of R(p) on p. 8.)
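A sketch reproducing these numbers (Python, not from the slides), simply maximizing expected utility over a grid of reports with U(x) = √x:

```python
import numpy as np

def expected_utility(r, p):
    """Expected utility of reporting r when the event has probability p, U(x) = sqrt(x)."""
    return p * np.sqrt(1 - (1 - r) ** 2) + (1 - p) * np.sqrt(1 - r ** 2)

grid = np.linspace(0.0, 1.0, 100001)
r_E = grid[np.argmax(expected_utility(grid, 0.75))]   # E = {G,S,Y}, p = 0.75
r_G = grid[np.argmax(expected_utility(grid, 0.25))]   # a single color, p = 0.25
print(round(float(r_E), 2), round(float(r_G), 2))     # ≈ 0.69 and ≈ 0.31
# r_G + r_S + r_Y = 3 * r_G exceeds r_E, so the reports violate additivity.
```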

12 Deviation 2 from EV: nonexpected utility (Allais 1953, Machina 1982, Kahneman & Tversky 1979, Quiggin 1982, Schmeidler 1989, Gilboa 1987, Gilboa & Schmeidler 1989, Gul 1991, Tversky & Kahneman 1992, etc.). For two-gain prospects, all these theories take the form: for r ≥ 0.5, nonEU(r) = w(p)·U(1 − (1−r)²) + (1 − w(p))·U(1 − r²); for r < 0.5, the symmetric counterpart. Different treatment of the highest and lowest outcome: "rank dependence."

13 Figure: the common weighting function w, w(p) = exp(−(−ln p)^α) for α = 0.65; w(1/3) ≈ 1/3, w(2/3) ≈ 0.51.
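A small sketch of this weighting function (Python; assuming the one-parameter form w(p) = exp(−(−ln p)^α) with α = 0.65 exactly as written on the slide):

```python
import math

def w(p, alpha=0.65):
    """Probability weighting function from slide 13: w(p) = exp(-(-ln p)^alpha)."""
    return math.exp(-((-math.log(p)) ** alpha))

for p in (0.05, 1/3, 0.50, 0.75, 0.95):
    print(f"w({p:.2f}) = {w(p):.2f}")
# Small probabilities are overweighted and large ones underweighted (the inverse-S shape);
# w(0.75) ≈ 0.64 is the weight that feeds into the r_nonEU ≈ 0.61 computation on slide 15.
```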

14 Now the explicit expression becomes: r = w(p) / (w(p) + (1 − w(p)) · U´(1−r²) / U´(1−(1−r)²)), and, inverting, p = w⁻¹(r / (r + (1−r) · U´(1−(1−r)²) / U´(1−r²))).

15 Example continued [nonEU]. (Urn K: 25 G, 25 R, 25 S, 25 Y.) E = {G,S,Y}; p = 0.75. EV: r_EV = 0.75. EU: r_EU = 0.69. Nonexpected utility with U(x) = √x and w(p) = exp(−(−ln p)^0.65): r_nonEU = 0.61. r_G + r_S + r_Y = 0.39 + 0.39 + 0.39 = 1.17 > 0.61 = r_E: additivity is strongly violated! (See the figure of R(p) on p. 8.) The deviations from EV and Bayesianism so far were at the level of behavior, not of beliefs. Now for something different, and more fundamental.
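The same grid-search sketch as before, now with rank-dependent weighting (Python, not from the slides); for the single colors (p = 0.25) the optimum lies below 0.5, where slide 12's symmetry gives a report of 1 minus the optimum for p = 0.75:

```python
import numpy as np

def w(p, alpha=0.65):
    return np.exp(-((-np.log(p)) ** alpha))

def rank_dependent_value(r, p):
    """Two-gain rank-dependent value of a report r >= 0.5 (slide 12), with U(x) = sqrt(x)."""
    return w(p) * np.sqrt(1 - (1 - r) ** 2) + (1 - w(p)) * np.sqrt(1 - r ** 2)

grid = np.linspace(0.5, 1.0, 50001)                      # formula valid for r >= 0.5
r_E = grid[np.argmax(rank_dependent_value(grid, 0.75))]  # E = {G,S,Y}
r_G = 1 - r_E      # symmetry: optimal report for p = 0.25 mirrors that for p = 0.75
print(round(float(r_E), 2), round(float(r_G), 2))        # ≈ 0.61 and ≈ 0.39
# 3 * r_G far exceeds r_E: additivity is violated much more strongly than under EU.
```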

16 3rd violation of EV: ambiguity (unknown probabilities). Preparation for the theoretical example: a random draw from an additional urn A with 100 balls, ? G_a, ? R_a, ? S_a, ? Y_a, of unknown composition ("ambiguous"). E_a = {G_a, S_a, Y_a}; p = ?. How to deal with unknown probabilities? We have to give up Bayesian beliefs.

17 1st proposal (trying to maintain Bayesian beliefs). Assign "subjective" probabilities to events, then behave as if the probabilities were known (possibly nonEU): "probabilistic sophistication" (Machina & Schmeidler 1992; they proposed it normatively, not, as we do now, descriptively). By symmetry, P(G) = P(R) = P(S) = P(Y) = ¼, and urn A is treated the same as the known urn K. Empirically violated (Ellsberg)! Probabilistic sophistication: Bayesian beliefs!

18 Instead of additive beliefs p = P(E), we use nonadditive beliefs B(E), with B(E) ≠ B(G) + B(S) + B(Y) in general (Dempster & Shafer, Tversky & Koehler, etc.). All currently existing decision models: for r ≥ 0.5, nonEU(r) = w(B(E))·U(1 − (1−r)²) + (1 − w(B(E)))·U(1 − r²). (Commonly written with W(E) for w(B(E)), so B(E) = w⁻¹(W(E)); a matter of notation.)

19 r_E = w(B(E)) / (w(B(E)) + (1 − w(B(E))) · U´(1−r²) / U´(1−(1−r)²)). Explicit expression: B(E) = w⁻¹(r / (r + (1−r) · U´(1−(1−r)²) / U´(1−r²))).

20 Example continued [ambiguity, nonEUA]. (Urn A: 100 balls, ? G, ? R, ? S, ? Y.) E = {G,S,Y}; p = ?. r_EV = 0.75. r_EU = 0.69. r_nonEU = 0.61 (under plausible assumptions). Typically, w(B(E)) for this ambiguous E is much smaller than w(B(E)) for the corresponding E from the known urn; r_nonEUA is, say, 0.52. r_G + r_S + r_Y = 0.48 + 0.48 + 0.48 = 1.44 > 0.52 = r_E: additivity is violated very strongly! The r's are close to always saying fifty-fifty. The belief component is B(E) = 0.62. (See the figure of R(p) on p. 8.)

21 B(E): ambiguity attitude or beliefs?? Before entering that debate, first: how to measure B(E)? Our contribution: through proper scoring rules with a "risk correction." Earlier proposals for measuring B:

22 Proposal 1 (common in decision theory). Measure U, W, and w from behavior, and derive B(E) = w⁻¹(W(E)) from them. Problem: much and difficult work!!!

23 Proposal 2 (common in the decision analysis of the 1960s, and in modern experimental economics). Measure "canonical probabilities": for an ambiguous event E_a, find the objective probability p such that (€100 if E_a) ~ (€100 with probability p). Then (algebra ...) B(E) = p. Problem: measuring indifferences is difficult.

24 Proposal 3 (common in the proper-scoring-rule literature): calibration ... Problem: needs many repeated observations.

25 Our proposal: take the best of all worlds! Get B(E) = w⁻¹(W(E)) without measuring U, W, and w from decision making. Get canonical probabilities without measuring indifferences or using BDM. Calibrate without needing long series of repeated observations. Do all of this with nothing more than simple proper-scoring-rule questions.

26 We reconsider the explicit expressions: p = w⁻¹(r / (r + (1−r) · U´(1−(1−r)²) / U´(1−r²))) for a known probability, and B(E) = w⁻¹(r / (r + (1−r) · U´(1−(1−r)²) / U´(1−r²))) for an ambiguous event. Corollary: p = B(E) if they are related to the same r!!

27 Because p = B(E) when related to the same r, we simply measure the R(p) curves and use their inverses: B(E) = R⁻¹(r_E). Applying R⁻¹ is called risk correction. It is directly implementable empirically. We did so in an experiment, and found plausible results.
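A minimal sketch of the risk correction itself (Python; the numbers are illustrative, not the paper's data): measure reports r_j for a set of known probabilities p_j to trace R(p), then invert it by interpolation to correct the report for an ambiguous event.

```python
import numpy as np

# Illustrative measurements of the R(p) curve from known-probability questions.
p_known = np.array([0.05, 0.25, 0.50, 0.75, 0.95])   # true (known) probabilities asked
r_known = np.array([0.18, 0.39, 0.50, 0.61, 0.82])   # reported probabilities observed

def risk_corrected_belief(r_event):
    """B(E) = R^{-1}(r_E), with R inverted by linear interpolation (R assumed increasing)."""
    return float(np.interp(r_event, r_known, p_known))

print(risk_corrected_belief(0.52))   # corrected belief for an ambiguous-event report of 0.52
```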

28 Experimental Test of Our Correction Method

29 Method. Participants: N = 93 students. Procedure: computerized, in the lab, in groups of 15 or 16; 4 practice questions.

30 Stimuli 1. First we ran the proper scoring rule for unknown probabilities, 72 in total: for each stock, two small intervals and, third, their union. Thus we test for additivity.

31 Stimuli 2. Known probabilities: two 10-sided dice are thrown, yielding a random number between 01 and 100. Event E: number ≤ 75 (etc.). Done for all probabilities j/20. Motivating subjects: real incentives, in two treatments. 1. All-pay: points are paid for all questions; 6 points = €1; average earnings €15.05. 2. One-pay (random-lottery system): one question, randomly selected afterwards, is played for real; 1 point = €20; average earnings €15.30.
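A sketch of how the known probabilities could be generated (an assumption about the exact dice-reading convention, which the slide does not spell out): read the two 10-sided dice as tens and units, with a double zero counted as 100.

```python
import random

def roll_01_to_100():
    """Two 10-sided dice: one gives the tens digit, one the units; 00 is read as 100."""
    n = 10 * random.randint(0, 9) + random.randint(0, 9)
    return 100 if n == 0 else n

# An event with known probability j/20, e.g. j = 15: "the number is at most 75".
rolls = [roll_01_to_100() for _ in range(100_000)]
print(sum(n <= 75 for n in rolls) / len(rolls))   # ≈ 0.75
```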

32 Results

33 Average correction curves.

34 Individual corrections. (Figure: F(ρ) plotted against ρ, shown separately for treatment one and treatment all.)


36 Summary and Conclusion. Modern decision theories show that proper scoring rules are heavily biased. We present ways to correct for these biases (obtaining the right beliefs even if EV is violated). The experiment shows that the correction improves quality and reduces deviations from ("rational"?) Bayesian beliefs. The correction does not remove all deviations from Bayesian beliefs: beliefs seem to be genuinely nonadditive / nonBayesian / sensitive to ambiguity.


Download ppt "Adapting de Finetti's Proper Scoring Rules for Measuring Bayesian Subjective Probabilities when Those Probabilities Are not Bayesian Peter P. Wakker (&"

Similar presentations


Ads by Google