1
When “Less Is More” – A Critique of the Heuristics & Biases Approach to Judgment and Decision Making
Psychology 466: Judgment & Decision Making Instructor: John Miyamoto 10/26/2017: Lecture 05-2 Note: This Powerpoint presentation may contain macros that I wrote to help me create the slides. The macros aren’t needed to view the slides. You can disable or delete the macros without any change to the presentation.
2
Discussion: What do you think are the main weaknesses or limitations of the heuristics & biases program? Heuristics & Biases Program Discover heuristic reasoning strategies in reasoning about uncertainty and value (preference). Discover experimental findings that demonstrate that people use a specific heuristic, usually by showing that the use of the heuristic causes a specific pattern of errors. Show that a heuristic causes a predictable pattern of behavior in real decision making, e.g., stock investment decisions. Psych 466, Miyamoto, Aut '17 Lecture Outline
3
Outline Misconceptions promoted by the heuristics and biases movement.
Accuracy/Effort Tradeoff – Is there always an accuracy/effort tradeoff? Less-is-more: Bias/Variance Tradeoff – why less-is-more Opposing Views of JDM: Heuristics & Biases Research Ecological Rationality Naturalistic Decision Making Psych 466, Miyamoto, Aut '17 Intro to Gigerenzer & Brighton (2009) Paper
4
Gigerenzer & Brighton (2009)
Gerd Gigerenzer Adaptive Behavior & Cognition (ABC Group) Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1. GB: Abbreviation for Gigerenzer & Brighton (2009). HB: Abbreviation for the heuristics & biases movement. Psych 466, Miyamoto, Aut '17 Meaning of "Heuristic"
5
Meaning of "Heuristic" "The term heuristic is of Greek origin, meaning 'serving to find out or discover.'" (Gigerenzer & Brighton, p. 108) "A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals." (Wikipedia) Three Misconceptions that were Promoted by the Heuristics and Biases Movement Psych 466, Miyamoto, Aut '17
6
Three Misconceptions Promoted by HB Movement
GB (p. 109): The HB movement gave rise to three misconceptions about human reasoning: Heuristics are always second-best. We use heuristics only because of our cognitive limitations. More information, more computation, and more time would always be better. Misconception 3 implies that there is an Accuracy/Effort Tradeoff: "If you invest less effort, the cost is lower accuracy" (GB, p. 109). "Less Is More" Effect: In some situations, less information (less effort) leads to greater accuracy. (This contradicts the assumption that there is always an accuracy/effort tradeoff.) Psych 466, Miyamoto, Aut '17 Diagram Depicting the Accuracy/Effort Tradeoff
7
Accuracy-Effort Tradeoff
Accuracy High Effort High Method 1: Multiple regression applied to existing data. Method 2: Multiple regression applied to a judge’s predictions Method 3: Importance weighting method Method 4: Unit weighting model Method 0: Intuitive judgment (holistic judgment). Accuracy Low Effort Low Same Slide with Note re Less-Is-More Effect Psych 466, Miyamoto, Aut '17
8
Accuracy-Effort Tradeoff
Accuracy High Effort High Method 1: Multiple regression applied to existing data. Method 2: Multiple regression applied to a judge’s predictions Method 3: Importance weighting method Method 4: Unit weighting model Method 0: Intuitive judgment (holistic judgment). Less-Is-More Effect contradicts the claim that there is an accuracy-effort tradeoff. Sometimes less effort produces greater accuracy (Gigerenzer & ABC Group) Current issues: Are heuristics always bad (maladaptive)? When are they good and when are they bad? What are the psychological mechanisms on which heuristics are based? Accuracy Low Effort Low Reminder re Importance of Linear Judgment Models for Study of JDM Psych 466, Miyamoto, Aut '17
9
Importance of Linear Judgment Models in JDM
1960 – 1980: Provided strong evidence that human judgment had difficulty processing complex information. Gave momentum to the heuristics & biases movement. 1990 – present: Continued study of linear judgment models provides evidence for the value of heuristic reasoning. Example: The less-is-more effect. Gigerenzer & Brighton (2009): When sample sizes are small, the unit weighting (tallying) model outperforms multiple regression. Why? Because regression weights have high variance when sample sizes are small. Define Less-Is-More Effects Psych 466, Miyamoto, Aut '17
10
Less-Is-More Effects Decision behavior in a given environment (context) exhibits a Less-Is-More (L-I-M) effect if a simpler, lower-information computation can outperform a more complex and theoretically better computation in that environment. Psych 466, Miyamoto, Aut '17 How to Demonstrate that Less-Is-More Effects Occur in Decision Making
11
How to Demonstrate that Less-Is-More Effects Occur in Decision Making?
Compare a simple heuristic model versus a complex "rational" model. The method of cross-validation can be used to determine whether the simple heuristic model or the complex "rational" model has greater predictive accuracy in a given situation (environment). (The method of cross-validation is explained in the next slides.) Psych 466, Miyamoto, Aut '17 Diagram Explaining the Method of Cross-Validation
12
Cross-Validation Studies For Evaluating the Predictive Accuracy of a Model
Sample of Data random split 50% of data in estimation sample: Estimate parameters of the model 50% of data in test sample: Evaluate goodness of fit of predictions on the test sample Psych 466, Miyamoto, Aut '17 Same Slide Without Blocking Rectangles
13
Cross-Validation Studies For Evaluating the Predictive Accuracy of a Model
Sample of Data random split 50% of data in estimation sample: Estimate parameters of the model 50% of data in test sample: Evaluate goodness of fit of predictions on the test sample Predict Use a computer to repeat the procedure over many random splits. Predictive accuracy is the accuracy of prediction over many random splits. Psych 466, Miyamoto, Aut '17 Same Slide Without Blocking Rectangles
14
Cross-Validation Studies For Evaluating the Predictive Accuracy of a Model
Sample of Data random split 50% of data in estimation sample: Estimate parameters of the model 50% of data in test sample: Evaluate goodness of fit of predictions on the test sample Predict Cross-validation is used to prevent research errors that result from over-fitting and differences in model complexity. Purpose of Cross Validation Psych 466, Miyamoto, Aut '17 Transition to a Discussion of the Tallying Model
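The split-and-predict procedure in the three slides above can be sketched in Python. This is a minimal illustration, not material from the lecture: the toy data, the stand-in mean-predictor model, and the squared-error loss are my own assumptions.

```python
import random

def cross_validate(data, fit, loss, n_splits=1000, seed=0):
    """Repeated 50/50 cross-validation: estimate the model on one half,
    evaluate its predictions on the other half, and average over splits."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        estimation, test = shuffled[:half], shuffled[half:]
        model = fit(estimation)           # estimate parameters on 50% of data
        scores.append(loss(model, test))  # goodness of fit on the test sample
    return sum(scores) / len(scores)      # predictive accuracy over many splits

# Stand-in model: predict every outcome by the estimation-sample mean.
fit_mean = lambda sample: sum(y for _, y in sample) / len(sample)
mse = lambda model, test: sum((y - model) ** 2 for _, y in test) / len(test)

data = [(x, 2.0 * x + 1.0) for x in range(20)]
avg_error = cross_validate(data, fit_mean, mse)
```

Any model with estimable parameters (tallying, multiple regression, Take-the-Best) can be slotted in for `fit` and scored the same way, which is what makes the comparison on the later slides fair.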
15
Next: Tallying versus Multiple Regression
Explain Tallying Explain the methodology of comparing tallying to multiple regression Psych 466, Miyamoto, Aut '17 Describe the Tallying Model
16
Tallying (a.k.a. Unit Weighting or Equal Weighting)
When all of the cues are dichotomous, i.e., present or absent, tallying is the same as counting the number of positive cues and subtracting the number of negative cues. When at least some of the cues are continuous, like height or income, the cues should be converted to z-scores before computing the predicted z-scores. After computing the predicted z-scores, it is necessary to convert them back to the original scale. Tallying is a simple heuristic model. Compare it to multiple regression, a complex "rational" model. Psych 466, Miyamoto, Aut '17 Tallying versus Multiple Regression
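The tallying rule described above is simple enough to state in a few lines of Python. This is a sketch under my own naming conventions (`tally`, `zscores` are not from the lecture), covering both the dichotomous and the continuous-cue cases:

```python
def tally(cues, directions):
    """Tallying (unit weighting) with dichotomous cues: count the positive
    cues that are present and subtract the negative cues that are present.
    cues: 1 if the cue is present, 0 if absent.
    directions: +1 if the cue predicts more of the criterion, -1 if less."""
    return sum(d for c, d in zip(cues, directions) if c == 1)

def zscores(values):
    """Convert continuous cue values to z-scores, as the slide prescribes
    before unit-weighting continuous cues."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]
```

A predicted z-score under tallying is then just the (unit-weighted) sum of the z-scored cues; converting it back to the original scale means multiplying by the criterion's standard deviation and adding its mean.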
17
Tallying (Unit Weighting) versus Multiple Regression (MR)
Compute cross-validation analyses for Tallying. Fit the Tallying Model to 50% of the data; predict results for the remaining 50%. Evaluate the accuracy of prediction. Repeat the analysis for many random 50/50 splits of the data. Compute cross-validation analyses for Multiple Regression. Fit the MR Model to 50% of the data; predict results for the remaining 50%. Evaluate the accuracy of prediction. Which model, Tallying or Multiple Regression, produces higher accuracy in prediction? Psych 466, Miyamoto, Aut '17 Results of Comparison of Tallying Versus Multiple Regression
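The head-to-head comparison above can be simulated directly. The sketch below assumes NumPy and an invented environment (two intercorrelated cues, modest predictability, a small sample); none of the numbers come from the lecture or from Czerlinski et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented small-sample environment: two correlated cues, noisy criterion.
n = 20
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.5 * rng.normal(size=n)      # cues correlated with each other
y = x1 + x2 + 1.5 * rng.normal(size=n)        # modest linear predictability
X = np.column_stack([np.ones(n), x1, x2])

def compare(n_splits=200):
    """Repeated 50/50 cross-validation of multiple regression vs. tallying."""
    reg_err, tally_err = [], []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        est, test = idx[: n // 2], idx[n // 2:]
        # Multiple regression: estimate weights on the estimation half.
        beta, *_ = np.linalg.lstsq(X[est], y[est], rcond=None)
        reg_err.append(np.mean((X[test] @ beta - y[test]) ** 2))
        # Tallying: unit weights on z-scored cues, rescaled to y's units.
        z = lambda v: (v - v[est].mean()) / v[est].std()
        t = z(x1) + z(x2)
        pred = y[est].mean() + y[est].std() * (t - t[est].mean()) / t[est].std()
        tally_err.append(np.mean((pred[test] - y[test]) ** 2))
    return float(np.mean(reg_err)), float(np.mean(tally_err))

reg_mse, tally_mse = compare()
```

In small-sample, correlated-cue environments like this one, tallying's cross-validated error tends to be comparable to or lower than the regression's; with ample data the ordering reverses, because the regression weights stabilize.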
18
GB, Figure 1 (Take-the-Best has been omitted)
Tallying versus Multiple Regression (MR): Which Method has Higher Predictive Accuracy? GB, Figure 1 (Take-the-Best has been omitted) Czerlinski, Gigerenzer & Goldstein (1999): Averaged over 20 studies, Tallying has (slightly) higher predictive accuracy than MR. Why does this happen? MR overfits the model when the sample size is small. Tallying is robust (its estimates are stable across random samples). MR is less robust (its estimates are unstable when sample size is small). * See <\P466\IMAGES\gig.bri.figures.r.docm> for the R code used to create this figure. Psych 466, Miyamoto, Aut '17
19
When does Tallying Outperform Multiple Regression (MR)?
Tallying outperforms MR when the following criteria are met: Degree of linear predictability is small (R² < 0.50, i.e., |R| < 0.71). Sample size is less than 10 × the number of cues. Cues are correlated with each other. Psych 466, Miyamoto, Aut '17 Continue This Slide With Additional Comments
20
When does Tallying Outperform Multiple Regression (MR)?
Tallying outperforms MR when the following criteria are met: Degree of linear predictability is small (R² < 0.50, i.e., |R| < 0.71). Sample size is less than 10 × the number of cues. Cues are correlated with each other. Conditions 1, 2 and 3 are all associated with increased variability of the regression coefficients. Main points: Everyday experience has many correlated cues but not a lot of data. Tallying should outperform MR in everyday experience. Less is more! Psych 466, Miyamoto, Aut '17 Transition to Take-the-Best Strategy
21
Next: "Take the Best" versus Multiple Regression
Explain cue validity (key idea in the Take-the-Best strategy) Explain the Take-the-Best strategy Compare Take-the-Best to multiple regression Psych 466, Miyamoto, Aut '17 Explain Cue Validity
22
Definition of Cue Validity
Prediction task: Predicting which of two options has a higher value on a criterion C. Cue validity of feature F = Probability that Option #1 is better than Option #2, given that Option #1 has feature F and Option #2 does not. Cue validity of F = P( C(Opt #1) > C(Opt #2) | Opt #1 has F and Opt #2 does not ) Example: Predicting which candidate for mayor will get more votes based on (a) endorsement by the Seattle Times; (b) endorsement by the Stranger; (c) candidate's support for Black Lives Matter; (d) candidate's support for a $15/hour minimum wage; etc. Psych 466, Miyamoto, Aut '17 Explain: F1 has greater cue validity than F2
23
F1 has greater cue validity than F2: What Does This Mean?
Suppose that Option #1 and Option #2 are two options (objects, actions); F1 and F2 are two features of these options; C(Opt #1) and C(Opt #2) are the consequences of choosing Option #1 or Option #2. Then F1 has greater cue validity than F2 if P( C(Opt #1) > C(Opt #2) | Opt #1 has F1 and Opt #2 does not) > P( C(Opt #1) > C(Opt #2) | Opt #1 has F2 and Opt #2 does not) Intuitively, F1 has greater cue validity than F2 if knowing that the options differ on F1 gives you a better chance of guessing which option has more of the criterion C than knowing that the options differ on F2. Psych 466, Miyamoto, Aut '17 Describe the Take-the-Best Strategy
24
"Take the Best" Strategy The Take-the-Best choice strategy has three steps: Examine the cues in decreasing order of their cue validities. Stop as soon as a cue is found on which the options have differing values for the cue (This is called the "first discriminating cue") Take the object that has the higher value on the first discriminating cue. Psych 466, Miyamoto, Aut '17 Example of Take-the-Best Strategy
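The three steps above translate almost line-for-line into code. A minimal sketch (function name and cue values are mine; the F1–F4 cue labels anticipate the Seahawks/Rams example on the next slides, whose actual values the lecture does not specify):

```python
def take_the_best(option1, option2, cues_in_validity_order):
    """Take-the-Best: examine cues in decreasing order of cue validity and
    decide on the first cue whose values differ between the options."""
    for cue in cues_in_validity_order:
        if option1[cue] != option2[cue]:
            # First discriminating cue: take the option with the higher value.
            return "option1" if option1[cue] > option2[cue] else "option2"
    return None  # no cue discriminates: guess (e.g., flip a coin)

# Hypothetical cue profiles (1 = has the feature, 0 = does not).
seahawks = {"F1": 1, "F2": 0, "F3": 1, "F4": 0}
rams     = {"F1": 1, "F2": 1, "F3": 0, "F4": 1}
pick = take_the_best(seahawks, rams, ["F1", "F2", "F3", "F4"])
```

Note that the search stops at F2, the first discriminating cue; F3 and F4 are never consulted, which is exactly what makes the strategy "fast and frugal."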
25
Example of "Take the Best" Strategy
Task: Decide whether the Seahawks or the Rams are more likely to win the Western Division competition. Suppose the cues are: F1: Has an excellent defense against the pass; F2: Has an excellent defense against the run; F3: Has an excellent passing offense; F4: Has an excellent running offense. Suppose that cue validity of F1 > cue validity of F2, cue validity of F2 > cue validity of F3, and cue validity of F3 > cue validity of F4. Psych 466, Miyamoto, Aut '17 Continue this Example: Describe the Steps in the Decision
26
Example of "Take the Best" Strategy (cont.)
Take the Best Decision Rule: Choose Team X if Team X has F1 and Team Y does not. If both teams have F1 or if neither team has F1, then proceed to Step 2. Choose Team X if Team X has F2 and Team Y does not. If both teams have F2 or if neither team has F2, then proceed to Step 3. Choose Team X if Team X has F3 and Team Y does not. If both teams have F3 or if neither team has F3, then proceed to Step 4. Choose Team X if Team X has F4 and Team Y does not. If both teams have F4 or if neither team has F4, then guess (make the decision based on a coin flip). Main Feature of "Take the Best": You examine the features in order from highest to lowest cue validity. You stop working on the decision as soon as you find a feature that discriminates between the choices. Psych 466, Miyamoto, Aut '17 Results: Take-the-Best versus Tallying versus Multiple Regression
27
Take the Best versus Tallying versus Multiple Regression (MR)
GB, Figure 1 In many cases, Take the Best has greater predictive accuracy than Tallying (Unit Weighting) and MR. Less is more! Take the Best outperforms more complex decision procedures when sample size is small and there are correlations (dependencies) among the cues. Psych 466, Miyamoto, Aut '17 Why Do Take-the-Best and Tallying Outperform MR in Prediction
28
Explaining "Less-Is-More" - Why Is This True (Sometimes)?
Decision strategies exhibit a bias/variance dilemma. (a.k.a. bias/variance tradeoff). Psych 466, Miyamoto, Aut '17 Graphical Example to Explain the Bias/Variance Dilemma
29
Example: Bias/Variance Dilemma In Temperature Prediction
Larger bias with low variance Better Than Smaller bias with high variance Psych 466, Miyamoto, Aut '17 Results for Weather Prediction (GB Figure 3)
30
GB Figure 3, p. 118 Fits are based on sub-samples of size 30. Bias/variance tradeoff for entire year of London temperatures. Psych 466, Miyamoto, Aut '17 Claim: Heuristics are Adaptively Superior to More Complex Models
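The bias/variance decomposition behind GB Figure 3 can be reproduced in spirit with simulated data. The sketch below is my own stand-in for the London temperatures: a stylized seasonal curve plus noise, fit by a low-degree and a higher-degree polynomial on many subsamples of size 30 (matching the slide's subsample size; degrees, noise level, and amplitude are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(365) / 365.0                       # day of year, rescaled to [0, 1)
truth = 10 + 8 * np.sin(2 * np.pi * (x - 0.25))  # stylized seasonal temperature

def bias_and_variance(degree, n_fits=200, sample_size=30, noise=3.0):
    """Fit polynomials of the given degree to many noisy subsamples and
    decompose the prediction error into (squared bias, variance)."""
    preds = np.empty((n_fits, x.size))
    for i in range(n_fits):
        idx = rng.choice(x.size, size=sample_size, replace=False)
        y = truth[idx] + noise * rng.normal(size=sample_size)
        coefs = np.polyfit(x[idx], y, degree)
        preds[i] = np.polyval(coefs, x)
    mean_pred = preds.mean(axis=0)
    bias_sq = float(np.mean((mean_pred - truth) ** 2))   # systematic error
    variance = float(np.mean(preds.var(axis=0)))          # sample-to-sample error
    return bias_sq, variance

bias_lo, var_lo = bias_and_variance(degree=3)  # simpler model: more bias
bias_hi, var_hi = bias_and_variance(degree=6)  # flexible model: more variance
```

The dilemma is visible in the returned pair: the flexible polynomial has smaller bias but larger variance, and since prediction error is (roughly) squared bias plus variance plus noise, the biased-but-stable model can win overall with only 30 observations per fit.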
31
Claim: Heuristics Are Adaptively Superior to Complex Models
Humans are better off with biased heuristics that are robust (lower variance) in small samples. Precise normative models are less accurate than heuristic models in small samples. Less-is-more. Reminder: All of the above is only true when the sample size is small relative to the number of predictor variables. Minimum required sample = 10 x Number of predictor variables (Very approximate formula; it has flaws.) Psych 466, Miyamoto, Aut '17 Why Does Heuristics and Biases Research Focus on Decision Errors?
32
Why Focus on Decision Errors?
Amabile (1981), "Brilliant but Cruel." Make people look stupid. Make ourselves feel smart. Error patterns are clues to cognitive representations and processes. Sometimes errors are important in real life. Psych 466, Miyamoto, Aut '17 How Should We Test Adaptive Value of Heuristics in the Real World?
33
How Should We Test Adaptive Value of Heuristics in the Real World?
Strategy 1: Randomly sample decisions. Conceptually and practically, this is hard to do. Strategy 2: Focus on how people make a specific decision. E.g., Decisions in a hospital emergency room. E.g., Specific investment decisions. Strategy 3: Study specific decisions in artificial lab settings where variables are easier to control. Psych 466, Miyamoto, Aut '17 Final Look at Opposing Camps in JDM Research - END
34
Opposing Views in JDM Research
Heuristics & Biases Research Ecological Rationality Naturalistic Decision Making Ecological Rationality: Focus on decision environments. Naturalistic Decision Making: Interesting insights, but can they exclude plausible alternative explanations? Heuristics & Biases: When do heuristics influence important everyday decisions? What cognitive processes are involved in heuristic reasoning? Memory processes? Attention? Psych 466, Miyamoto, Aut '17 END