Judgment and Decisions


Outline
Heuristics
- Representativeness
- Availability
- Anchoring
Errors & biases
- Base rate neglect
- Gambler's fallacy
- Conjunction fallacy
- Illusory correlations
- Confirmation bias

Heuristic:
- a 'rule of thumb' for judgment and decision-making
- takes into account only a portion of the available evidence
- allows for fast and efficient decision-making, but
- is vulnerable to error
Algorithm:
- guarantees the correct answer
- inefficient (computationally expensive)
Judgment: "How likely is it that …?"
Decision-Making (Choice): "Should you take a coupon for $200 or $100 in cash, given that …?"

William has been randomly selected for an interview. From the interview, the following personal information was revealed: William is a short, shy man. He has a passion for poetry and loves strolling through art museums. As a child, he was often bullied by his classmates. Is William more likely to be a farmer or a Classics scholar?

Why? Similarity: he sounds like a Classics scholar. There is an issue of stereotype here, but that aside, should we use 'similarity' as a cue to judge? YES: similarity between the individual and the category is a good indication of category membership (remember the lecture on concepts & categories?). This is particularly true of natural categories (e.g., if it looks like a duck, walks like a duck, and quacks like a duck, then the likelihood is that it is a duck). BUT: similarity is less diagnostic of social categories; if you are white, wealthy, and tough on crime, you may be a Republican (but there are plenty of Democrats with those characteristics too). AND: you shouldn't forget about probabilities.

Michael has been randomly selected for an interview. Do you suppose that Michael is: employed or unemployed? Why?

The Representativeness Heuristic
The tendency to judge an event as likely if it "represents" the typical features of its category (the individual is similar to the prototype).
Why is it useful?
- Typical features often are the most frequent ones.
Why is it sometimes misleading? It fails to account for:
- prior odds (Base Rate Neglect, Conjunction Fallacy)
- random processes (Gambler's Fallacy)
- stereotypes, which are sometimes incorrect
The Representativeness Heuristic is used when estimating the probability that object A belongs to class B, that event A originates from class B, or that process B will generate object A, relying on the similarity or correspondence of the mental models of A and B while ignoring other relevant information (see the Bayes' rule reminder below).
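As a point of comparison (not on the original slide), the normative standard for such judgments is Bayes' rule, which weights the diagnostic evidence by exactly the prior (base-rate) odds that representativeness ignores:

\[
P(B \mid A) \;=\; \frac{P(A \mid B)\,P(B)}{P(A \mid B)\,P(B) + P(A \mid \neg B)\,P(\neg B)}
\]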

Base Rate: some things are very frequent (flu), others are quite infrequent (mad cow disease). Base Rate Neglect: the tendency to neglect the overall frequency of an event when predicting its likelihood. If subjects were given only the base rate information, they were good at taking it into account ("70% chance that he is a lawyer"). If subjects were given only the diagnostic information, they were able to tell that some descriptions favored engineers while others favored lawyers. But if they were given both types of information, they completely ignored the base rate information. Even if the base rates were completely reversed (e.g., 70 engineers and 30 lawyers), subjects gave the same answers, that is, answers determined by the diagnostic information. (If given a completely neutral description, they estimated a 50/50 probability, again ignoring the base rates.) So in the example above, subjects heavily favored the "engineer" answer even when the base rates pointed in the opposite direction. Which heuristic was at work here? Representativeness.

Base Rate Neglect: Example
• A single witness is found for a hit-and-run accident involving a taxi cab.
• There are 2 cab companies in this town:
• a huge blue cab company (with 1,000 cars active at a time), and
• a small green cab company (with 50 cars active at a time).
• The witness believes the cab was green.
• Subsequent experiments show that this person is 90% accurate in determining the color of cabs.
Is it more likely that the cab was blue or green?
Base Rate Neglect: people's tendency to neglect the overall frequency of an event when predicting its likelihood. The somewhat surprising answer here is BLUE! Even though the witness is 90% accurate, the overwhelming number of blue cabs relative to green cabs makes it more likely that, when the witness claims to have seen a green car, he is misidentifying one of the blue cars. Let's go through this logic carefully...

"It is more likely to be a green car." Do you agree? Yes / No

Suppose the witness were to identify all the cabs in the city. What the witness would report:
1,000 blue cabs -> 900 reported "blue", 100 reported "green"
50 green cabs -> 5 reported "blue", 45 reported "green"
"Green" answers are more often wrong than right (100 of the 145 "green" reports are wrong)! In this case, the base rate information overwhelms the diagnostic information. How do you think a jury would respond? First consider the OJ verdict, and then tell me what a jury would say...
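A minimal sketch (mine, not from the slides) of the same counting argument in Python; the counts are the ones given above, and the witness's 90% accuracy is assumed to apply to both colors:

    # Cab problem: expected witness reports if every cab were inspected
    blue_cabs, green_cabs = 1000, 50
    accuracy = 0.90  # witness identifies the color correctly 90% of the time

    true_green_reports = green_cabs * accuracy          # 45 correct "green" reports
    false_green_reports = blue_cabs * (1 - accuracy)    # 100 blue cabs misreported as "green"

    p_green_given_report = true_green_reports / (true_green_reports + false_green_reports)
    print(round(p_green_given_report, 2))  # ~0.31, so "blue" is the more likely color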

Base rate neglect has real-world consequences. Suppose mammograms are 85% likely to detect breast cancer if it is really there (the hit rate), and 90% likely to return a negative result if there is no breast cancer (the correct rejection rate). As you can see, the test is pretty reliable. Suppose we are testing a patient population with an overall likelihood of cancer of 1%. If the mammogram detects cancer, what are the odds that the patient really has cancer?

What's really there     Mammogram indicates cancer     Mammogram indicates no cancer     Total
cancer present          850                            150                               1,000
cancer absent           9,900                          89,100                            99,000

First notice that the overall incidence is 1,000 out of 100,000; then notice the 85% accuracy of positive results (850 of 1,000) and the 90% accuracy of negative results (89,100 of 99,000). Now we can simply compare the number of times that positive results will be correct with the number of times they will be incorrect. When the mammogram indicates the presence of cancer, there is an 850/10,750 likelihood that the patient actually has cancer (only about an 8% chance). While positive results on a mammogram surely indicate that more tests would be wise, they should be viewed in the context of the overall probability of the disease being tested for. Of course, for something as serious as breast cancer, a positive result merits further testing just in case. Studies have shown that doctors have the same base rate neglect tendencies as the rest of the population. Suppose a doctor got a positive result on a highly diagnostic test for a dangerous but rare disease: should they immediately administer a risky treatment (one that might have serious side effects), or should they test the patient again?
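The same arithmetic written as a small, reusable sketch (the function name and structure are mine, not the lecture's); it also reproduces the answer to the Harvard Medical School question that appears a couple of slides below:

    def posterior_given_positive(prevalence, hit_rate, false_positive_rate):
        """P(disease | positive test) by Bayes' rule, via expected counts."""
        true_positives = prevalence * hit_rate
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # Mammogram example: 1% prevalence, 85% hit rate, 10% false positive rate
    print(round(posterior_given_positive(0.01, 0.85, 0.10), 3))   # ~0.079, about 8%

    # Harvard question: prevalence 1/1000, 5% false positives, perfect sensitivity assumed
    print(round(posterior_given_positive(0.001, 1.00, 0.05), 3))  # ~0.02, about 2%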

Base Rate Neglect: Another Example
From a sample of 30 engineers and 70 lawyers, you randomly draw Jack... (Base Rate Information)
Jack is 45 years old... He shows no interest in political or social issues and spends most of his free time on his many hobbies, which include... mathematical puzzles. (Diagnostic Information)
How likely is it that Jack is an engineer?
- Both diagnostic and base rate information are important.
- However, when both are provided, subjects ignore the base rate information and make their judgment based exclusively on the diagnostic information.

What can help improve the quality of these kinds of decisions? Overt cues (e.g., "70% are lawyers") increase the likelihood that people will use probability information. Gigerenzer and colleagues modified the lawyers-and-engineers problem by letting subjects actually draw the descriptions out of an urn; by emphasizing the role of chance, they were able to increase subjects' use of base rate information. However, we've already seen that simple awareness of base rate information doesn't make this an easy task. The Agnoli and Krantz study was basically exploring another way to make information about category inclusion very salient.

Participants: students at Harvard Medical School.
Question: If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 percent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?
- Out of 1,000 people tested, one has the disease (1/1000). This should lead to:
- 50 false positives (5%) and 1 hit (assuming perfect sensitivity).
- The chance of having the disease if the result comes back positive is therefore 1/51 (about 1.96%).
- This is due to the very low base rate (1/1000).
- Almost half of the participants responded 95%; the average answer was 56%.

The Gambler's Fallacy: Example
Which sequence of coin tosses is more likely?
1. H T T H H H T
2. H H H H H H H
The same reasoning is at work when people believe that a batter who has struck out 12 times in a row is "due", that is, more likely to get a hit on the very next at-bat. We attribute to individuals (in this case, a sequence of 7 tosses) the properties of the category (a long sequence of tosses). Because in a long sequence you will find variety, you come to expect variety in a short sequence as well, although the likelihood of variety is much higher in a long sequence than in a short one. Similarly, in the 2000 election, pundits said that "Americans want moderation, a person who governs from the center" because the vote was 50-50. Obviously this is nonsense: just because the country (i.e., the class of voters) is divided down the middle and a collective decision is hard to reach, it does not follow that each individual cannot make up his or her mind.

The Gambler's Fallacy: the misconception that prior outcomes can influence the outcome of an independent probabilistic event. But why? Because in the long run heads and tails alternate, so a short run in which heads and tails alternate seems a more typical (similar) member of the category. We wrongly conclude that if someone got:
- 10 heads in a row, she is cheating
- 4 baskets in a row, the player has "hot hands"
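A one-line check (my addition, not from the slides): for a fair coin every specific sequence of 7 tosses is equally likely, so

\[
P(\mathrm{HTTHHHT}) \;=\; P(\mathrm{HHHHHHH}) \;=\; \left(\tfrac{1}{2}\right)^{7} \;=\; \tfrac{1}{128}.
\]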

Streak Shooting
Hot hand: basketball players get "hot" (endorsed by 91% of 76ers fans). An analysis of 48 76ers home games during the 1980-81 season revealed no basis in fact: the probability of making a shot after making 1, 2, or 3 shots was compared with the probability of making a shot after missing 1, 2, or 3 shots, and no difference was found. How might the representativeness heuristic explain belief in streak shooting?
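A minimal simulation sketch (mine; it is not the original 76ers analysis) showing that for a shooter whose makes are independent with a fixed probability, the hit rate after a made shot is about the same as after a missed shot; the 50% shooting percentage is an arbitrary assumption:

    import random

    random.seed(0)
    p_make = 0.5  # assumed shooting percentage
    shots = [random.random() < p_make for _ in range(100_000)]

    # Outcomes conditioned on whether the previous shot was made or missed
    after_make = [curr for prev, curr in zip(shots, shots[1:]) if prev]
    after_miss = [curr for prev, curr in zip(shots, shots[1:]) if not prev]

    print(round(sum(after_make) / len(after_make), 3))  # ~0.50
    print(round(sum(after_miss) / len(after_miss), 3))  # ~0.50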

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear rallies. Which alternative is more probable? Linda is:
- a bank teller
- a bank teller and active in the feminist movement

Conjunction fallacy
[Diagram: overlapping circles for "bank teller" and "feminist"; "feminist bank teller" is their intersection]
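The probability rule behind the fallacy (stated here for completeness; it is implicit in the diagram): a conjunction can never be more probable than either of its conjuncts,

\[
P(\text{bank teller} \wedge \text{feminist}) \;\le\; P(\text{bank teller}).
\]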

What can help improve the quality of these kinds of decisions? Having subjects use Venn diagrams to represent the categories significantly reduced cases of the conjunction fallacy in this group. The Agnoli and Krantz study was basically exploring another way to make information about category inclusion very salient, much as Gigerenzer and colleagues increased the use of base rate information by emphasizing the role of chance (letting subjects draw descriptions out of an urn).

Hooray for psychology!!! College Helps...

Failure to understand regression to the mean: the Israeli flight instructors example (trainees who were praised after an unusually good flight tended to do worse on the next one, and trainees who were criticized after an unusually bad flight tended to do better, not because of the feedback but because extreme performances regress toward the mean).

The Availability Heuristic: Examples
Which household chores do you do more frequently than your partner (e.g., washing dishes, taking out the trash, etc.)?
- Wives report doing 16 of 20 chores more often.
- Husbands report doing 16 of 20 chores more often.
Ross and Sicoly (1979)
Why? Availability! I remember lots of instances of taking out the trash and washing the dishes, but I do not remember many instances of my wife doing it. Clearly, it's easier to remember the last time you had to scrub the mildew off the bathtub than the last time your roommate did it.

The Availability Heuristic: Examples
Which is more frequent: words that begin with "R", or words with "R" as their third letter?
Why? Availability! I can come up with many examples of "R _ _ _", but few of "_ _ R _". People choose "begins with R" even though R appears as the third letter more frequently (the same is true of K, L, N, and V). Why? Because it's easier to generate examples of words using the first letter; these examples are simply more "available". But on the other hand, who cares about this useless fact? Availability also affects judgments about meaningful events. Most people rate motor vehicle accidents as a more likely cause of death than heart disease. In fact, motor vehicle accidents are responsible for fewer than 100,000 deaths per year in the US, while heart disease is responsible for around 1,000,000. Auto accidents are more sensational and are reported in the news far more often, and they are therefore more available in memory than cases of death from heart disease. Is this heuristic always bad? No: availability is often a GOOD indicator of how often something occurs, because it is usually correlated with frequency. But other times, availability is determined by how often the media chooses to report an event, by extensive advertising, or by people's inherent reluctance to take out the garbage...
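A quick way to check the letter-position claim for yourself; this sketch assumes a plain-text word list such as the Unix /usr/share/dict/words file (the path is an assumption), and note that the original claim concerns frequency in running text, so a raw dictionary count is only a rough proxy:

    # Count dictionary words with "r" as the first vs. the third letter
    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

    r_first = sum(1 for w in words if w[0] == "r")
    r_third = sum(1 for w in words if w[2] == "r")
    print(f"'r' first: {r_first}, 'r' third: {r_third}")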

The Availability Heuristic
The tendency to form a judgment on the basis of information that is readily brought to mind.
Why is it useful?
- Frequent events are easily brought to mind (e.g., words that start with a given letter).
Why is it sometimes misleading?
- Factors other than frequency can affect ease of remembering:
-- ease of retrieval (the "R" example)
-- recency of the example (advertisements, news)
-- familiarity ("what % of people go to college?")
Famous names example: subjects read a list of names and were later asked to judge how frequently they saw male vs. female names. Groups that saw more famous men in the list estimated a higher frequency of male names, while groups that saw more famous women in the list estimated a higher frequency of female names.

Testing the Availability Heuristic
- Keep frequency invariant.
- Experimentally manipulate availability.
- Measure estimated frequency (the dependent variable).
Subjects read a list of names:
- 50% of the names are male names, the rest are female.
- Group A: some of the male names are famous (Bill Clinton).
- Group B: some of the female names are famous.
Test: Were there more men or women in the list?

Availability heuristic: one last example
Write either:
- 2 things that are bad about Diego's class, or
- 15 things that are bad about Diego's class.
Then evaluate Diego as an instructor.

Anchoring
The tendency to reach an estimate by beginning with an initial guess and adjusting it in light of new information. In general, people rely too heavily on the anchor (the initial value), and adjustments are too small even when the anchor (reference point) is known to be uninformative. This kind of effect is clearly relevant in sales situations; for instance, when bargaining for an item, the first number mentioned sets the anchor, and this is bound to have implications for the eventual price. Reisberg mentions the tactic used by charity organizations: "Would you like to donate $100, $50, $30, $10?", in that order rather than the reverse. However, in these situations it is easy to argue that the initial values could be informative; that is, even a perfectly rational decision-maker might take them into account. So is this a bias or just good decision-making? Several experiments suggest that it is not an entirely rational tendency. Multiplication example: estimate 1x2x3x4x5x6x7x8 (median answer: 512) vs. 8x7x6x5x4x3x2x1 (median answer: 2,250); the true answer is 40,320.
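For reference, the exact product in the multiplication example is

\[
8! \;=\; 1 \times 2 \times 3 \times 4 \times 5 \times 6 \times 7 \times 8 \;=\; 40{,}320,
\]

so both groups' median estimates fall far short, and the group anchored on the small leading numbers falls shortest.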

Anchoring: Example
Anchor "10": "What is the proportion of African nations in the UN?" Answer: 25%.
Anchor "65": "What is the proportion of African nations in the UN?" Answer: 45%.
The anchors were arbitrary numbers shown to subjects before they answered, yet they pulled the estimates toward themselves.

Illusory Correlations
- Does a college education lead to a higher-paying job?
- Are flaws in the personal arena (sexual escapades, DUIs) correlated with flaws in governing the country?
- Do small dogs bite more often than big dogs?
The perceived correlation between two variables is influenced both by the data we observe and by our personal theories --> illusory correlations.

When subjects observed data without preconceptions...
From the Reisberg text: Jennings, Amabile, and Ross asked subjects to observe a set of data and later to indicate how correlated various variables were (e.g., the height of a man and the length of his walking stick, shown in various pictures). In cases where subjects had no preconceived ideas about what relationships they would see, their estimates were related in an orderly way to the data that was observed (higher estimated correlations were associated with stronger actual relationships between the variables). If anything, their estimates of correlation were conservative, that is, lower than the actual objective correlations.

When subjects had theories about what they would see...
However, when subjects had preconceived notions about the relationships in the data they were observing, their estimates were heavily influenced by their theories. For instance, subjects estimated a strong relationship between a child's being dishonest while solving a puzzle and being dishonest in other activities: their estimates were around .60, even though the actual correlation was only about .2. The estimates did not show as orderly a relationship with the data, and the correlation values were over-estimated! What implications might this have for a doctor who has a pet theory about the effectiveness of a certain drug treatment? Scientists are similarly affected by their theoretical biases. What causes this illusion? Availability: when we encounter cases that confirm our theories, we remember them better, so later, when we estimate these correlations, the examples "available" in memory are the ones that confirm our beliefs. Example: do black guys escape punishment for crimes? (Black person: criminal / not criminal; verdict: innocent / guilty.) You remember the time the "black guy" (O.J. Simpson) got away with murder, but you forget all the other (far more frequent) times when that was not the case.
Jennings, Amabile, & Ross, 1982

Illusory Correlation: Possible Mechanisms
Confirmation bias: the tendency to notice and remember evidence that confirms our preconceptions. Data consistent with one's theories are more easily retrieved, and this increased availability biases our judgment. The Reisberg text pointed out that even experienced therapists fall prey to illusory correlations (in the evaluation of Rorschach ink blot interpretations). You dream she will call, and then she calls: bingo! But you forget all the instances when you dreamt she would call but she didn't, and the cases when you did not dream but she called anyhow (see the sketch below).
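A minimal sketch of why attending only to the "dreamt it, and she called" cell misleads; the counts below are made up for illustration, and the point is that judging the association requires all four cells of the 2x2 table (the phi coefficient here is exactly zero even though the confirming cell is memorable):

    import math

    # Hypothetical counts over 100 nights (illustrative, not data)
    a = 5    # dreamt she'd call, and she called (the memorable, confirming cell)
    b = 15   # dreamt she'd call, but she didn't
    c = 20   # no dream, but she called anyway
    d = 60   # no dream, no call

    # Phi coefficient for a 2x2 table; 0 means no association at all
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    print(round(phi, 2))  # 0.0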

Outline
Heuristics
- Representativeness
- Availability
- Anchoring
Errors & biases
- Base rate neglect
- Gambler's fallacy
- Conjunction fallacy
- Illusory correlations
- Confirmation bias