Using Descriptive Decision Theories such as Prospect Theory to Improve Prescriptive Decision Theories such as Expected Utility; the Dilemma of Omission versus Paternalism


1 Using Descriptive Decision Theories such as Prospect Theory to Improve Prescriptive Decision Theories such as Expected Utility; the Dilemma of Omission versus Paternalism. Peter P. Wakker (with Bleichrodt, Pinto & Abdellaoui); May 4, 2004.
1. A typical example of an application of decision theory in the health domain today, based on expected utility.
2. Inconsistencies. Correct for them at all? Ethical complications; paternalism ... "Our model will deliberately deviate from observations" ... a mortal sin in experimental work such as psychology ... ???
3. Corrections for violations of expected utility, based on prospect theory.
Don't forget to make this invisible. Explain a lot in words about medical decision making and EU. Say that at each step I lose more of the audience. Stage 1: those working normatively who do not consider EU to be normative. Stage 2: those who do not want to be paternalistic; here I lose all psychologists. Stage 3: those who prefer other non-EU theories as normative. The point is, there is no easy way to do applied work; you always get dirty hands. It is easy to criticize everything stated here, but not easy to give alternatives.

2 Patient with larynx cancer (stage T3). Radiotherapy or surgery?
Radiotherapy: probability 0.6 of cure (normal voice); probability 0.4 of recurrence, after which surgery gives artificial speech with probability 0.4 and death with probability 0.6.
Surgery: probability 0.7 of cure (artificial speech); probability 0.3 of recurrence, after which artificial speech with probability 0.3 and death with probability 0.7.
Hypothetical standard gamble question: for which p is artificial speech for sure equivalent to the gamble (p: normal voice; 1 − p: death)? The patient answers: p = 0.9.
Expected utility: U(death) = 0; U(normal voice) = 1; U(artificial speech) = 0.9 × 1 + 0.1 × 0 = 0.9.
EU(radiotherapy) = 0.60 × 1 + 0.16 × 0.9 + 0.24 × 0 = 0.744.
EU(surgery) = 0.70 × 0.9 + 0.09 × 0.9 + 0.21 × 0 = 0.711.
Answer: radiotherapy!
Before going to the hypothetical question, so just after the square appeared around the decision tree, talk some about the tree, pros and cons, the essentialness of asking for the subjective input of the patient (a piano player doesn't mind losing his voice but a teacher does), etc. Also tell already here that the analysis is going to be based on expected utility. Possibly discuss already here that much can be criticized, such as EU etc., but that this is a machinery that at least works, and that brought many "political" steps forward in the health domain, such as the consideration of quality of life (instead of the five-year survival rate), and the very fact that patients and their subjective situation can be involved. For this technique there are computer programs available to implement it, and C/E analyses can be performed with it. 99% of applications in the field go like this. I in fact bother more about problems in the model than most applied people. Most applied people say: Peter, just don't bother. You will all be criticizing me for not bothering enough.
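The expected-utility comparison on this slide can be sketched in a few lines of code (a sketch only; the branch probabilities follow the decision tree as reconstructed above, and the outcome names are labels chosen here for illustration):

```python
# Expected utility of the two treatment options on this slide.
# Utilities from the standard gamble: U(death) = 0,
# U(normal voice) = 1, U(artificial speech) = 0.9.
U = {"death": 0.0, "normal voice": 1.0, "artificial speech": 0.9}

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * U[outcome] for p, outcome in lottery)

# Radiotherapy: 0.6 cure (normal voice); after recurrence (0.4),
# surgery yields artificial speech (0.4) or death (0.6).
radiotherapy = [(0.6, "normal voice"),
                (0.4 * 0.4, "artificial speech"),
                (0.4 * 0.6, "death")]

# Surgery: 0.7 cure (artificial speech); after recurrence (0.3),
# artificial speech (0.3) or death (0.7).
surgery = [(0.7, "artificial speech"),
           (0.3 * 0.3, "artificial speech"),
           (0.3 * 0.7, "death")]

print(round(expected_utility(radiotherapy), 3))  # 0.744
print(round(expected_utility(surgery), 3))       # 0.711
```

Since 0.744 > 0.711, the EU analysis recommends radiotherapy, as the slide concludes.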

3 Analysis is based on EU!?!? "Classical Elicitation Assumption": I agree that EU is normative. But ...
Standard gamble question to measure utility: for which p is artificial speech equivalent to the gamble (p: perfect health; 1 − p: death)?
EU: U(artificial speech) = p × 1 + (1 − p) × 0 = p?

4 Now comes a hypothetical example to illustrate inconsistencies, and the difficulty of making decisions. It is not important in the example whether or not you consider expected utility to be normative. All critical aspects concern more basic points.

5 Treatment decision to be taken for your patient. Not treat: the (impaired) health state for sure. Treat: full health with treatment probability 0.90, death with probability 0.10. Your patient is now unconscious. You must decide: treat or not treat. This depends on the goodness of the health state relative to the treatment probability.
Make yellow comments invisible (ALT-View-O). The current version of this ethical example will be kept separately, and not in this file.

6 Give Handout 1 (given at the end of this file). Background info on similar patients: before, 10,000 similar cases were observed. For them, the "quality-of-life" probability (qol-probability) was elicited, as follows.

7 Elicitation of qol-probability. The following were presented to each patient: a rich set of health states, containing the above one; and a rich set of probabilities (all multiples of 0.01). For each health state, each patient was asked: for which probability p is the health state equivalent to the gamble (p: perfect health; 1 − p: death)? The answer is the qol-probability. It is an index of the quality of the health state: high p => high quality.
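The elicitation just described can be sketched as a scan over the probability grid. The respondent model `prefers_gamble` is hypothetical, standing in for the patient's answers; here it is simulated by an expected-utility maximizer whose utility for the health state is 0.91:

```python
# Sketch of a qol-probability elicitation over a grid of probabilities
# (multiples of 0.01). `prefers_gamble(p)` is a hypothetical stand-in
# for the patient's answer to "(p: perfect health; 1-p: death) vs.
# the sure health state".

def elicit_qol_probability(prefers_gamble, step=0.01):
    """Return the smallest grid probability at which the patient
    (weakly) prefers the gamble to the sure health state."""
    p = 0.0
    while p < 1.0 and not prefers_gamble(round(p, 2)):
        p += step
    return round(p, 2)

# Simulated patient: an EU maximizer with U(health state) = 0.91,
# so the gamble is (weakly) preferred exactly when p >= 0.91.
patient = lambda p: p >= 0.91

print(elicit_qol_probability(patient))  # 0.91
```

For such an idealized EU respondent the elicited qol-probability equals the utility; the later slides concern what happens when real respondents deviate from EU.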

8 Average, median, and mode of the qol-probability: 0.91. That is, health state ~ (0.91: perfect health; 0.09: death).
Question 1 to the audience: would you now treat or not treat your patient? (Hint: compare treatment probability = 0.90 to qol-probability = 0.91.)
Do or do not show the hint immediately, depending on the audience.

9 Now suppose something more. Give Handout 2. There is also a new elicitation of qol-probability. The following were also presented to each patient: a rich set of health states, containing the above one; and a rich set of probabilities (all multiples of 0.01). For each probability p, each patient was asked: for which health state are the health state and the gamble (p: perfect health; 1 − p: death) equivalent? Such measurements were done for all p. In each case, p is called the new qol-probability of the corresponding health state.

10 For the health state of your patient, you would expect a new qol-probability of 0.91 on average. However, the data reveal great inconsistencies: p = 0.85 results as the new qol-probability (average, median, and mode). That is, health state ~ (0.85: perfect health; 0.15: death).
Question 2 to the audience: what would you do, treat or not treat, for the one patient now considered?
Repeat that the matching was done here for the health state, i.e., for p = 0.85 given, the matching health state was the one now relevant.

11 Now suppose something more. Give Handout 3. For your one patient, you also observed the (old) qol-probability ("for which probability ... equivalent?"). It was 0.91, as for most others: health state ~ (0.91: perfect health; 0.09: death). Unfortunately, the patient became unconscious; there is no more time for a new qol-probability measurement.
Question 3 to the audience: what would you do now, treat or not treat, for the one patient now considered?

12 My opinion: treat the patient. This goes against the elicited opinion. However, the elicitation is biased (see the 10,000 prior cases). Moral of the story: we have to accept the possibility of systematic biases in preference measurement, and should try to deal with them as well as possible.

13 This completes the hypothetical example about how to treat your patient, started at slide 5. We return to the discussion of the classical elicitation assumption. As we saw before, the common justification of the classical elicitation assumption is: EU is normative (von Neumann-Morgenstern). I agree that EU is normative, but not that this would justify the SG (= standard gamble = "qol-probability measurement") analysis. SG measurement (as commonly done) is descriptive; EU is not descriptive. There are inconsistencies, so violations. They require correction (? Paternalism!?).

14 Replies to the normative/descriptive discrepancies in the literature:
(1) Consumer sovereignty ("Humean view of preference"): never deviate from people's preferences. So, no EU analysis here! However, Raiffa (1961), in reply to violations of EU: "We do not have to teach people what comes naturally." We will, therefore, try more.
(2) Interact with the client (constructive view of preference). If possible, this is best. Usually not feasible (budget, time, capable interviewers ...).
(3) Measure only riskless utility. However, we want to measure risk attitude!
(4) We accept biases and try to make the best of it.

15 That corrections are desirable has been said many times before. Tversky & Koehler (1994, Psych. Rev.): "The question of how to improve their quality through the design of effective elicitation methods and corrective procedures poses a major challenge to theorists and practitioners alike." E. Weber (1994, Psych. Bull.): "..., and finally help to provide more accurate and consistent estimates of subjective probabilities and utilities in situations where all parties agree on the appropriateness of the expected-utility framework as the normative model of choice." Debiasing (Arkes 1991, Psych. Bull., etc.).

16 16 Schkade (Leeds, SPUDM ’97), on constructive interpretation of preference: “Do more with fewer subjects.” Viscusi (1995, Geneva Insurance): “These results suggest that examination of theoretical characteristics of biases in decisions resulting from irrational choices of various kinds should not be restricted to the theoretical explorations alone. We need to obtain a better sense of the magnitudes of the biases that result from flaws in decision making and to identify which biases appear to have the greatest effect in distorting individual decisions. Assessing the incidence of the market failures resulting from irrational choices under uncertainty will also identify the locus of the market failure and assist in targeting government interventions intended to alleviate these inadequacies.”

17 Million-$ question: correct how? Which parts of behavior are taken as "bias," to be corrected for, and which not? Which theory describes risky choices better? The current state of the art according to me: prospect theory = rank- and sign-dependent utility (Luce & Fishburn 1991; Tversky & Kahneman 1992).
Depending on whether the audience is tired of general discussions or not, state the following point: several authors have suggested such a role for prospect theory, but always in the context of reconciling inconsistencies. We go one step further: if your data are too poor to elicit the inconsistencies even if present, then nevertheless correct for the inconsistencies that you know from other observations, such as those collected in prospect theory. As in the ethical example.

18 First deviation from expected utility: probability transformation.
Figure: the common weighting function w+ (Luce 2000), plotting w+(p) against p on [0, 1]; w− is similar.
Second deviation from expected utility: loss aversion/sign dependence. People consider outcomes as gains and losses with respect to their status quo. They then overweight losses by a factor of 2.25.
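The inverse-S shape of the common weighting function can be sketched with a standard one-parameter form (the Tversky & Kahneman 1992 parameterization with gamma = 0.61 is an assumption here; the figure's exact curve may differ):

```python
# Sketch of an inverse-S probability weighting function, using the
# Tversky & Kahneman (1992) parametric form with gamma = 0.61
# (parameter value assumed for illustration).
def w(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Characteristic inverse-S pattern:
print(w(0.01) > 0.01)  # small probabilities are overweighted: True
print(w(0.50) < 0.50)  # moderate/large probabilities underweighted: True
```

The same functional form, with a slightly different exponent, is commonly used for w− on the loss side.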

19 EU: U(x) = p.
PT: U(x) = w+(p) / (w+(p) + λ × w−(1 − p)).
We: U(x) = p is wrong!! We have to correct for the above "mistakes." Quantitative corrections proposed by Han Bleichrodt, José Luis Pinto, & Peter P. Wakker (2001), "Making Descriptive Use of Prospect Theory to Improve the Prescriptive Use of Expected Utility," Management Science 47, 1498–1514.
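A sketch of this correction, assuming the Tversky–Kahneman (1992) weighting-function parameters (γ+ = 0.61, γ− = 0.69) and loss aversion λ = 2.25; these parameter values are assumptions here, but they reproduce the corrected utilities in the table on the next slide:

```python
# Prospect-theory correction of standard-gamble utilities, in the
# style of Bleichrodt, Pinto & Wakker (2001). Parameter values are
# assumed from Tversky & Kahneman (1992).

def w(p, gamma):
    """Probability weighting function (TK92 parametric form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def corrected_sg_utility(p, gamma_gain=0.61, gamma_loss=0.69, lam=2.25):
    """Corrected utility for a standard-gamble answer p:
    U = w+(p) / (w+(p) + lam * w-(1-p))."""
    wp = w(p, gamma_gain)        # decision weight of the gain branch
    wq = w(1 - p, gamma_loss)    # decision weight of the loss branch
    return wp / (wp + lam * wq)

print(round(corrected_sg_utility(0.15), 3))  # 0.123
print(round(corrected_sg_utility(0.91), 3))  # 0.669
```

With these values, an elicited qol-probability of 0.91, as in the earlier treatment example, corresponds to a corrected utility of about 0.67, well below the naive EU value of 0.91.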

20 Standard Gamble Utilities, Corrected through Prospect Theory, for p = .00, ..., .99.

      .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
.0   .000  .025  .038  .048  .057  .064  .072  .078  .085  .091
.1   .097  .102  .108  .113  .118  .123  .128  .133  .138  .143
.2   .148  .152  .157  .162  .166  .171  .176  .180  .185  .189
.3   .194  .199  .203  .208  .213  .217  .222  .227  .231  .236
.4   .241  .246  .251  .256  .261  .266  .271  .276  .281  .286
.5   .292  .297  .303  .308  .314  .320  .325  .331  .337  .343
.6   .350  .356  .363  .369  .376  .383  .390  .397  .405  .412
.7   .420  .428  .436  .445  .454  .463  .472  .481  .491  .502
.8   .512  .523  .535  .547  .560  .573  .587  .601  .617  .633
.9   .650  .669  .689  .710  .734  .760  .789  .822  .861  .911

E.g., if p = .15 then U = 0.123. Skip this table.

21 0 0.2 0.4 0.6 0.8 1 0 0.20.40.60.81 U p Corrected Standard Gamble Utility Curve 21

22 Figure: differences between utility measurements under the classical (EU) and the corrected (prospect theory) analysis: U_SG − U_CE (at the 1st = CE(.10), ..., 5th = CE(.90)), U_SG − U_TO (at the 1st = x1, ..., 5th = x5), and U_CE − U_TO (at the 1st = x1, ..., 5th = x5); vertical axis from −0.10 to 0.25; asterisks indicate significance levels.

23 Figure: utility functions (for mean values) obtained from the SG(EU), SG(PT), CE, SP, and TO measurements; U (from 0 to 7/6) plotted against money amounts from t0 = FF5,000 to t6 = FF26,068. Abdellaoui, Barrios, & Wakker (2004).

24 This completes the lecture. Hereafter follow the handouts, printed and given to the audience at slides 6, 9, and 11.

25 Treatment decision for your patient (Handout 1). Not treat: the (impaired) health state for sure; or treat: treatment probability .90 of full health, .10 of death.
Mean etc. from 10,000 similar patients: health state ~ (0.91: perfect health; 0.09: death), i.e., qol-probability 0.91.
Question 1 to the audience: would you treat or not treat your patient?

26 Treatment decision for your patient (Handout 2). Not treat: the (impaired) health state for sure; or treat: treatment probability .90 of full health, .10 of death.
Mean etc. from 10,000 similar patients: health state ~ (0.91: perfect health; 0.09: death), i.e., qol-probability 0.91; and health state ~ (0.85: perfect health; 0.15: death), i.e., new qol-probability 0.85.
Question 2 to the audience: would you treat or not treat your patient?

27 Treatment decision for your patient (Handout 3). Not treat: the (impaired) health state for sure; or treat: treatment probability .90 of full health, .10 of death.
Mean etc. from 10,000 similar patients: health state ~ (0.91: perfect health; 0.09: death), i.e., qol-probability 0.91; and health state ~ (0.85: perfect health; 0.15: death), i.e., new qol-probability 0.85.
Your own patient: health state ~ (0.91: perfect health; 0.09: death), i.e., qol-probability 0.91. No new-qol measurement could be done with your patient.
Question 3 to the audience: would you treat or not treat your patient?

