Overconfidence in judgment: Why experience might not be a good teacher Tom Stewart September 24, 2007.


1 Overconfidence in judgment: Why experience might not be a good teacher Tom Stewart September 24, 2007

2 Einhorn, H. J., & Hogarth, R. M. (1978). Confidence in judgment: Persistence of the illusion of validity. Psychological Review, 85(5), 395-416. “How can the contradiction between the considerable evidence on the fallibility of human judgment be reconciled with the seemingly unshakable confidence people exhibit in their judgmental ability? In other words, why does the illusion of validity persist?” (p. 396)

3 Experience, performance, confidence [Diagram: Experience, Performance, and Confidence connected by arrows marked with question marks]

4 Experience, performance, confidence [Diagram: the same three boxes, with Uncertainty and Feedback added as influences on the links]

5 [Scatterplot: r = .50]

6 [Figure]

7 [Scatterplot: r = .80]

8 [Scatterplot: r = .95; see speaker note]

9 Judgments are continuous; decisions are discrete.

10 Threshold model [Diagram: a judgment continuum from low to high, with Threshold 1 and Threshold 2 dividing it into Decision A, Decision B, and Decision C]
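
The following is a minimal sketch of the threshold model on slide 10, mapping a continuous judgment onto discrete decisions. The 0-1 judgment scale and the threshold values are illustrative assumptions, not values from the presentation.

```python
# Minimal sketch of the threshold model: a continuous judgment is compared
# against two cutoffs to yield one of three discrete decisions.
# The threshold values and the 0-1 judgment scale are illustrative only.

def decide(judgment, threshold_1=0.4, threshold_2=0.7):
    """Map a continuous judgment in [0, 1] to a discrete decision."""
    if judgment < threshold_1:
        return "Decision A"
    elif judgment < threshold_2:
        return "Decision B"
    return "Decision C"

for j in (0.2, 0.5, 0.9):
    print(f"judgment {j:.1f} -> {decide(j)}")
```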

11 [Scatterplot, r = .50: a decision threshold on the judgment axis separates Act from Don’t Act]

12 [Scatterplot, r = .50: a criterion threshold on the outcome axis separates cases where action is appropriate from cases where it is inappropriate]

13-15 [Scatterplot, r = .50: the decision threshold and criterion threshold together divide cases into hits, misses, false alarms, and correct rejections]

16 [Scatterplot, r = .95: the same decision and criterion thresholds divide cases into hits, misses, false alarms, and correct rejections]
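
As an illustration of slides 11-16, the sketch below simulates judgment and outcome as bivariate normal variables with the stated correlations and counts the four outcome types. The threshold locations (both at zero) and the bivariate normal model are my assumptions, not the presentation's.

```python
# Simulate correlated judgment and outcome, apply a decision threshold and a
# criterion threshold, and count hits, misses, false alarms, and correct
# rejections. Thresholds at 0 and the bivariate normal model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def quadrant_counts(r, n=100_000, decision_cut=0.0, criterion_cut=0.0):
    cov = [[1.0, r], [r, 1.0]]
    judgment, outcome = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    act = judgment > decision_cut            # decision: act vs. don't act
    appropriate = outcome > criterion_cut    # criterion: was action appropriate?
    return {
        "hits": int(np.sum(act & appropriate)),
        "false alarms": int(np.sum(act & ~appropriate)),
        "misses": int(np.sum(~act & appropriate)),
        "correct rejections": int(np.sum(~act & ~appropriate)),
    }

for r in (0.50, 0.95):
    print(r, quadrant_counts(r))  # higher r -> fewer misses and false alarms
```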

17 Research on judging contingencies between x and y based on information in 2x2 tables suggests that people focus on the frequency of hits. This may be due to the difficulty people have in using disconfirming information. [2x2 table: hits, misses, false alarms, correct rejections]
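
To make the point concrete, here is a small worked example with made-up cell counts: a large hits cell can coexist with a weak contingency once all four cells are used. Delta-P is used here as one standard contingency measure; it is not necessarily the measure used in the research the slide refers to.

```python
# Hypothetical 2x2 table: judging the x-y contingency from the hits cell
# alone looks impressive, but a measure that uses all four cells (delta-P)
# shows the contingency is weak.
hits, false_alarms, misses, correct_rejections = 80, 70, 20, 30

p_good_given_act = hits / (hits + false_alarms)               # P(good outcome | acted)
p_good_given_no_act = misses / (misses + correct_rejections)  # P(good outcome | did not act)
delta_p = p_good_given_act - p_good_given_no_act

print("hits:", hits)                  # the cell people focus on
print("delta-P:", round(delta_p, 2))  # about 0.13: a weak contingency
```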

18 How do people learn to make decisions if feedback (knowledge of results) is incomplete? Selective feedback example – selection task: if an employer chooses not to hire an applicant, she will not learn how that applicant would have performed. Selective feedback example – detection task: if a customs officer chooses not to conduct a search of an airline passenger entering the country, he will not learn whether the passenger is smuggling goods into the country.
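
A small sketch of what selective feedback looks like in such a selection task: outcomes exist for every case, but the decision maker only observes them for the cases she acts on. The data and cutoff are purely illustrative.

```python
# Selective feedback: outcomes are generated for all cases, but feedback
# (knowledge of results) is only available for the cases that were selected.
import numpy as np

rng = np.random.default_rng(1)
n = 8
judgment = rng.normal(size=n)
good_outcome = rng.random(n) < 0.5   # true outcomes, unknown in advance
selected = judgment > 0.0            # e.g. hire / search only above a cutoff

for i in range(n):
    feedback = bool(good_outcome[i]) if selected[i] else None  # None = never learned
    print(f"case {i}: selected={bool(selected[i])}, feedback={feedback}")
```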

19 Knowledge of results: full feedback [2x2 table: outcomes are observed in all four cells – hits, misses, false alarms, correct rejections]

20 Knowledge of results: selective feedback [2x2 table: outcomes are observed only for the cases acted on – hits and false alarms; misses and correct rejections go unobserved]

21 Typical results [Chart comparing performance under full feedback and selective feedback]

22 Possible explanation: encoding of cases when no feedback is available. Two possibilities (not exhaustive): Positivist – people assume that when feedback is missing, accuracy is the same as when feedback is present. Constructivist (optimistic) – people assume perfect accuracy when feedback is missing. Elwin, E., Juslin, P., Olsson, H., & Enkvist, T. (2007). Constructivist coding: Learning from selective feedback. Psychological Science, 18(2), 105-110.

23 Selective feedback – possible types of encoding [2x2 tables comparing objective results with subjective results under constructivist (optimistic) encoding and under positivist encoding]
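
A sketch of how these encodings could produce different subjective accuracies, assuming bivariate normal judgment and outcome with r = .5 and arbitrary cutoffs; the parameter values are mine, not those of Elwin et al. (2007).

```python
# Compare objective accuracy with the subjective accuracy implied by
# positivist and constructivist encoding of the no-feedback (rejected) cases.
# Correlation, cutoffs, and the normal model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, r = 50_000, 0.5
judgment, outcome = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n).T
act = judgment > 0.5
appropriate = outcome > 0.0
correct = act == appropriate          # hits and correct rejections

objective = correct.mean()

# Positivist: rejected (no-feedback) cases assumed to be judged with the same
# accuracy as the acted-on cases.
positivist = correct[act].mean()

# Constructivist (optimistic): every rejected case is encoded as correct.
constructivist = act.mean() * correct[act].mean() + (1 - act.mean()) * 1.0

print(f"objective {objective:.2f}, positivist {positivist:.2f}, "
      f"constructivist {constructivist:.2f}")
```

With these (arbitrary) settings the constructivist subjective accuracy comes out well above the objective accuracy, which is the pattern slide 27 uses to explain overconfidence.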

24 Encoding and values affect the cutoff. Subjective encoding: if people assume they are correct when they don’t get feedback, the cutoff will move up (fewer cases selected). Values of the four outcomes: there is evidence that people value hits more than other outcomes. This could result in selecting more cases; however, if people pay attention to the positive hit rate, they might select fewer cases. (Einhorn and Hogarth, 1978)

25 Hits as a function of selection rate. (Hit rate is the number of hits divided by the number of positive decisions.) [Chart with curves for hit rate, proportion correct, and the proportion of all decisions that are hits.] Note that the hit rate can be high even when accuracy is not.
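
The sketch below illustrates the note on slide 25: as the cutoff rises (selection rate falls), the positive hit rate climbs while the overall proportion correct stays modest. The correlation and cutoff values are illustrative assumptions.

```python
# Hit rate (hits / positive decisions) vs. overall proportion correct as the
# decision cutoff is raised. r = .5 and the cutoffs are arbitrary choices.
import numpy as np

rng = np.random.default_rng(3)
n, r = 200_000, 0.5
judgment, outcome = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n).T
appropriate = outcome > 0.0

for cut in (0.0, 0.5, 1.0, 1.5):
    act = judgment > cut
    hit_rate = appropriate[act].mean()      # hits / positive decisions
    accuracy = (act == appropriate).mean()  # proportion correct, all cases
    print(f"cutoff {cut:.1f}: selection rate {act.mean():.2f}, "
          f"hit rate {hit_rate:.2f}, proportion correct {accuracy:.2f}")
```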

26 Plot of expected value vs. decision cutoff [Curves: full feedback, objective expected value; selective feedback, constructivist encoding, subjective expected value; selective feedback, positivist encoding, subjective expected value]. The payoff matrix assumes greater value for hits.
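
As an illustration of the kind of curves on slide 26, the sketch below computes expected value per case as a function of the decision cutoff under a payoff matrix that values hits more than the other outcomes, comparing the objective expected value with the subjective value implied by constructivist encoding. The payoff numbers, r, and cutoffs are hypothetical.

```python
# Expected value per case vs. decision cutoff: objective vs. the subjective
# value implied by constructivist encoding (all rejected cases scored as
# correct rejections). Payoffs, r, and cutoffs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, r = 200_000, 0.5
judgment, outcome = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n).T
appropriate = outcome > 0.0
payoff = {"hit": 3.0, "false_alarm": -1.0, "miss": -1.0, "correct_rejection": 1.0}

for cut in (-1.0, 0.0, 1.0):
    act = judgment > cut
    objective = (payoff["hit"] * (act & appropriate).mean()
                 + payoff["false_alarm"] * (act & ~appropriate).mean()
                 + payoff["miss"] * (~act & appropriate).mean()
                 + payoff["correct_rejection"] * (~act & ~appropriate).mean())
    constructivist = (payoff["hit"] * (act & appropriate).mean()
                      + payoff["false_alarm"] * (act & ~appropriate).mean()
                      + payoff["correct_rejection"] * (~act).mean())
    print(f"cutoff {cut:+.1f}: objective EV {objective:+.2f}, "
          f"constructivist subjective EV {constructivist:+.2f}")
```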

27 Summary: Selective feedback increases confidence while reducing performance. Research suggests that, with limited feedback, people will learn to select fewer cases. This results in a decision bias that increases the error rate. Other research suggests that people pay more attention to hits than to other outcomes. This could result in either more cases being selected in order to increase the number of hits, or fewer cases to increase the hit rate. The constructivist encoding hypothesis can account for the experimental results.* Furthermore, with constructivist encoding, subjective performance will be better than objective performance, accounting for overconfidence. It appears that while selective feedback results in more decision errors, it may not affect the accuracy of judgment. *Of course, this does not prove that people are actually doing constructivist encoding, and there are certainly individual differences.

28 End

29 Confidence. People pay attention to the positive hit rate – because of an inability to use disconfirming information and because feedback is limited when action is not taken. The positive hit rate is often high, even when accuracy is not: it can always be increased by reducing the selection rate (raising the threshold), and treatment effects increase the positive hit rate, with a greater increase for high selection rates.

30 If people judge their skill by the true positive rate, what affects that rate? Base rate, correlation, selection rate, and treatment effects. Illustrate with spreadsheet: C:/Documents and Settings/Tom/My Documents/aaDocuments/AAPRJCTS/2005/NSF-TR-SDT-Feedback/6-Talks/T-R-634Assignment-Einhorn-Hogarth-treatment.xls
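
In place of the spreadsheet referenced above (a local file not included here), the sketch below shows how the positive hit rate responds to base rate, correlation, selection rate, and a treatment effect. The simulation model and all parameter values are illustrative assumptions.

```python
# Positive hit rate as a function of base rate, correlation, selection rate,
# and a treatment effect (acting itself improves the outcome). Illustrative only.
import numpy as np

rng = np.random.default_rng(5)

def hit_rate(r, base_rate, selection_rate, treatment_effect=0.0, n=200_000):
    judgment, outcome = rng.multivariate_normal(
        [0, 0], [[1, r], [r, 1]], size=n).T
    criterion_cut = np.quantile(outcome, 1 - base_rate)        # fixes the base rate
    decision_cut = np.quantile(judgment, 1 - selection_rate)   # fixes the selection rate
    act = judgment > decision_cut
    outcome = outcome + treatment_effect * act                 # treatment effect
    appropriate = outcome > criterion_cut
    return appropriate[act].mean()                             # hits / positive decisions

print(hit_rate(r=0.5, base_rate=0.3, selection_rate=0.3))                        # baseline
print(hit_rate(r=0.5, base_rate=0.3, selection_rate=0.1))                        # stricter cutoff
print(hit_rate(r=0.5, base_rate=0.3, selection_rate=0.3, treatment_effect=0.5))  # with treatment
print(hit_rate(r=0.8, base_rate=0.3, selection_rate=0.3))                        # higher correlation
```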

31-32 Treatment effect [Scatterplots, r = .50, illustrating a treatment effect]

