Cognitive Biases 4.

Conjunction Fallacy Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable? 1. Linda is a bank teller. 2. Linda is a bank teller and is active in the feminist movement.

Conjunction Fallacy The correct answer is (1): Linda is a bank teller. Suppose that 20,000 people in the world fit the description, and that 10,000 of those are bank tellers. And suppose that among those bank tellers only 1 is not active in the feminist movement. Then 9,999 bank tellers who fit the description are active in the feminist movement.

Conjunction Fallacy Then the probability that Linda is a bank teller is 10,000 out of 20,000 or 50%. And the probability that she is a bank teller AND active in the feminist movement is 9,999 out of 20,000, or slightly less than 50%.

This holds in general. If there are N people who fit the description, and M of them are bank tellers, then M ≤ N, and M/N is the proportion of people fitting the description who are bank tellers. If X of the M bank tellers who fit the description are not feminists, then (M – X) is the number of feminist bank tellers who fit the description, and (M – X)/N is the proportion of feminist bank tellers among those who fit the description.

As a matter of mathematics, (M – X)/N ≤ M/N. There are always at least as many (and perhaps more) bank tellers who fit the description as feminist bank tellers who fit the description.
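The arithmetic can be checked with a short sketch, using the illustrative numbers from the Linda example above (20,000 people fit the description, 10,000 of them are bank tellers, and only 1 of those is not a feminist):

```python
# Illustrative frequencies from the Linda example.
n_fit_description = 20_000   # N: people who fit the description
m_bank_tellers = 10_000      # M: of those, bank tellers
x_non_feminists = 1          # X: bank tellers (fitting the description) who are not feminists

feminist_tellers = m_bank_tellers - x_non_feminists  # M - X = 9,999

p_teller = m_bank_tellers / n_fit_description             # M/N = 0.5
p_feminist_teller = feminist_tellers / n_fit_description  # (M - X)/N = 0.49995

# The conjunction can never be more probable than the single event.
assert p_feminist_teller <= p_teller
print(p_teller, p_feminist_teller)  # 0.5 0.49995
```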

The probability of two events both happening (Linda being a bank teller AND Linda being a feminist) can never be greater than the probability of just one of those events happening (for example, Linda being a bank teller). The illusion that the opposite is true is especially common in cases where one event explains the other.

For example, suppose I tell you that there is a man named “George.” George turned water into wine, healed the sick, brought a dead person back to life, and came back to life himself after he died. What is the probability that George did all these things? How likely is it?

You can say whatever you like: 1%, 10%, 99%. But suppose I add to the story. I say “George was the son of God. That’s why he had all these powers.”

Many people will say that it’s more likely that George was the son of God AND did all these things than it is that he did all these things. But that can’t be true. “A & B” is never more probable than A alone, or than B alone. For A & B to happen, A has to happen and also B has to happen.

Debiasing We can avoid this bias if we ask the question differently: There are 100 persons who fit the description above (that is, Linda’s). How many of them are: Bank tellers? ____ of 100 Bank tellers and active in the feminist movement? ____ of 100

This shows that it’s good to translate percentages and probabilities into frequencies (number of X out of number of Y). We are less susceptible to representativeness bias when things are phrased in this way.

Representativeness Our (false) judgment that Linda is more likely to be a feminist bank teller than to just be a bank teller is an example of how we judge the truth of claims based on how “representative” they are.

Consider again our case of coin flips that seem non-random due to clustering. Since coins land 50% heads and 50% tails, “XO” and “OX” are representative of this even split, whereas “XX” and “OO” don’t represent it. So sequences with clustering seem non-random, even when they are random.
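A quick simulation (my own sketch, not from the slides) shows that genuinely random flips produce plenty of “XX” and “OO” clusters: adjacent pairs repeat about as often as they alternate.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = "".join(random.choice("XO") for _ in range(100))

# Compare adjacent flips: "clusters" repeat (XX/OO), "alternations" switch (XO/OX).
clusters = sum(1 for a, b in zip(flips, flips[1:]) if a == b)
alternations = 99 - clusters

print(clusters, alternations)  # roughly half and half: clustering IS what randomness looks like
```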

Representativeness influences our other judgments as well. It’s hard to accept that two very tall parents tend, on average, to have less tall children (as regression to the mean requires). Children who are as tall as their parents are more representative of their parents’ heights.

Representativeness is often a good heuristic. A heuristic is a strategy that is easy to use in problem solving but doesn’t always work when applied. There is often no good reason to distinguish between heuristics and biases.

Representativeness is a good heuristic (sometimes) because (sometimes) things are representative.

Sometimes small effects have small causes. Burnt toast can be caused by leaving bread in the toaster for too long. Sometimes complex effects have complex causes. World War I (a complex effect) was caused by a very complex set of factors, only one of which was the assassination of Archduke Franz Ferdinand.

However: Sometimes large effects have small causes. An outbreak of a disease may be caused by a tiny virus or bacterium. Sometimes complex effects have simple causes. For instance, introducing a foreign species into a new land may cause radical changes in the ecosystem.

Base Rate Fallacy Suppose for a moment that ½ million people in Russia are affected by HIV/ AIDS, and that there are 150 million people in Russia. So the rate of HIV/ AIDS cases is 1 in 300. The government decides this is bad and that they should test everyone for HIV/ AIDS.

They develop a test with the following features: If someone has HIV/ AIDS, then 95% of the time the test will be positive (correct), and only 5% of the time will it be negative (incorrect). If someone does not have HIV/ AIDS, then 95% of the time the test will be negative (correct), and only 5% of the time will it be positive (incorrect).

Suppose you are a Russian who gets tested for HIV/ AIDS under the government program. The test comes out positive. How likely are you to have HIV/ AIDS? Most people will say something like 95%. After all, the tests are 95% correct, right?

This is not true. Remember that there are 150 million people in Russia, and they’re all getting tested. Only ½ million of them have HIV/ AIDS. So 149.5 million people do not have HIV/ AIDS. If you give the test to someone without HIV/ AIDS, it gives the correct result (negative) 95% of the time.

149.5 million people × 95% true negative rate = 142.025 million people correctly diagnosed as not having HIV/AIDS. Thus there are 149.5m – 142.025m = 7.475 million people incorrectly diagnosed as having HIV/AIDS who do not have it.

Furthermore, ½ million people actually do have HIV/AIDS. If you give the test to someone who has HIV/AIDS, it returns positive 95% of the time, and negative 5% of the time. So if all ½ million people are tested, 475,000 will be correctly diagnosed as positive, while 25,000 will be incorrectly diagnosed as negative.

                  Test = Yes    Test = No
HIV/AIDS = Yes       475,000       25,000
HIV/AIDS = No      7,475,000  142,025,000

So if you test positive for HIV/AIDS, your chance of having HIV/AIDS = the number of people who have HIV/AIDS and test positive ÷ the number of people who test positive = true positive ÷ (true positive + false positive) = 475,000 ÷ (475,000 + 7,475,000) ≈ 6%
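The whole calculation can be reproduced in a few lines; this sketch uses the 1-in-300 base rate and the 95%/95% test described above:

```python
population = 150_000_000
infected = 500_000
healthy = population - infected        # 149,500,000

sensitivity = 0.95   # P(positive | infected)
specificity = 0.95   # P(negative | healthy)

true_positives = infected * sensitivity         # 475,000
false_positives = healthy * (1 - specificity)   # ~7,475,000

# Probability of actually having HIV/AIDS, given a positive test:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # 0.06 — about 6%, nowhere near 95%
```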

Whether a test is good or worth doing depends not only on how accurate it is (95% true positive, 95% true negative), but also on how prevalent the condition being tested for is. Very rare conditions require tests with extremely low false positive rates, whereas very prevalent conditions can be usefully screened with only moderately accurate tests.

Consider an even rarer condition: being a terrorist on an airplane. About 50 million passengers per year fly through Hong Kong International Airport. There are about 10 airplane hijackings per year worldwide, and those are spread out over the roughly 40,000 airports in the world.

Let’s assume all 10 terrorists in the entire world fly through HKIA. Now suppose the government introduces a “99% accurate” terrorist detection test:

If someone is a terrorist, 100% of the time the test is positive = terrorist. The other 0% of the time (never) the test is negative = not a terrorist. If someone is not a terrorist, 99% of the time the test is negative = not a terrorist. The other 1% of the time the test is positive = terrorist.

Now imagine that the government applies the test to everyone who flies through HKIA for an entire year. There are 50 million minus 10 = 49,999,990 non-terrorists who fly every year. So the test will correctly conclude that 49,999,990 × 99% ≈ 49,499,990 ∼ 49.5 million of them are not terrorists.

But it will incorrectly conclude that of those 49,999,990 people who are not terrorists, 49,999,990 × 1% ≈ 499,999 ∼ ½ million of them are terrorists! Almost half a million innocent people classed as terrorists by the “99% accurate” detection device!

And what about the terrorists? The test will identify them correctly 100% of the time (10 × 1.00 = 10 correct identifications) and identify them incorrectly 0% of the time (10 × 0 = 0 incorrect identifications). Doesn’t that mean it’s a good test? Won’t we always catch the terrorist?

No! Suppose someone, Mr. X, tests positive. How likely is it that Mr. X is a terrorist? There are 499,999 “false positives” (non-terrorists that the test says are terrorists) and 10 “true positives”. So if you test positive, your chance of being correctly identified as a terrorist is:

So if you test positive, your chance of being correctly identified as a terrorist is: true positive ÷ (true positive + false positive) = 10 ÷ (10 + 499,999) = 10 ÷ 500,009 ≈ 0.00002 (about 2 in 100,000)
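The same arithmetic for the terrorist detector, as a sketch:

```python
passengers = 50_000_000
terrorists = 10
innocents = passengers - terrorists     # 49,999,990

true_positives = terrorists * 1.00      # the test never misses a real terrorist
false_positives = innocents * 0.01      # 1% of innocents flagged: ~499,999

p_terrorist_given_positive = true_positives / (true_positives + false_positives)
print(p_terrorist_given_positive)  # ≈ 0.00002, i.e. about 2 in 100,000
```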

Even with a test that is 100% accurate at catching terrorists and 99% accurate at letting non-terrorists go free, we are still left with more than half a million “positives” (= people who have tested positive for terrorism), only 10 of whom are actually terrorists! We need a new test to sort these people!

Base Rates The “base rate” is the percentage of people in the population who have a certain property. The base rate of terrorists is the percentage of terrorists in the population, the base rate of HIV/AIDS cases is the percentage of people who have HIV/AIDS in the population, etc.

Base Rates As we have seen, base rates matter. If the base rate of a condition is very low (small percentage of terrorists), then even very accurate tests (100% true positive, 99% true negative) can be useless. In our example only 2 in 100,000 people who tested “positive” for terrorism were terrorists.
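To isolate the effect of the base rate, here is a sketch that holds a test’s accuracy fixed at 95%/95% and varies only how common the condition is:

```python
def ppv(base_rate, sensitivity=0.95, specificity=0.95):
    """Probability of having the condition, given a positive test."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for rate in (0.5, 0.1, 1 / 300, 1 / 100_000):
    print(f"base rate {rate:.5f}: P(condition | positive) = {ppv(rate):.4f}")
# The same test goes from 95% reliable to almost useless as the condition gets rarer.
```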

Base Rate Neglect The “base rate neglect fallacy” is the fallacy of ignoring the base rate when making a judgment. For example, if I assumed you were a terrorist, because you tested positive, I would be committing the base rate neglect fallacy. I should assume you’re still probably not a terrorist.

Base Rate Neglect Humans have a tendency to ignore base rates. For example, Kahneman and Tversky (1973) conducted a study in which participants were supposed to estimate the GPAs of certain (fictional) students.

Kahneman & Tversky 1973 Some of the participants were given good evidence that students had high (or low) GPAs. In particular, they were given the students’ percentiles (95th percentile, for example). Other participants were given only very weak evidence: the scores that the students got on a test of humor.

All the participants were given the base rate of students with various GPAs. For example, 20% A, 40% B, 30% C, 10% D. But all of the participants ignored the base rate.

A good test for a prevalent condition (like having an A average, as opposed to being a terrorist) gives you lots of information. If someone is in the 99th percentile, for instance, you can be sure that they got an A. If they’re in the bottom quartile, you know that they did not get an A.

But scoring high on a test of humor is not a good indicator of your GPA. Maybe people with a good sense of humor are a little bit more likely to get better grades, but not much more likely. Given only such information, your guess should be very close to the base rate (for example, it’s 40% likely the student has a B GPA).
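A sketch of why weak evidence should leave your guess near the base rate. The likelihoods below are hypothetical (a high humor score is assumed only slightly more common among A students than D students); the base rates are the ones from the example (20% A, 40% B, 30% C, 10% D):

```python
base_rates = {"A": 0.20, "B": 0.40, "C": 0.30, "D": 0.10}

# Hypothetical likelihoods P(high humor score | grade) -- weak evidence,
# since the score barely discriminates between grades.
likelihood = {"A": 0.55, "B": 0.52, "C": 0.50, "D": 0.48}

# Bayes' rule: posterior is proportional to prior x likelihood.
unnormalized = {g: base_rates[g] * likelihood[g] for g in base_rates}
total = sum(unnormalized.values())
posterior = {g: round(p / total, 3) for g, p in unnormalized.items()}

print(posterior)  # barely different from the base rates themselves
```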

But, as I said, participants ignored the base rate. They guessed that people who did very well on the humor test had high GPAs, and people who did poorly on the test had low GPAs.

[Base rate neglect is connected with representativeness: a student who aces a humor test seems representative of a high-GPA student, so we judge by resemblance and ignore how common high GPAs actually are in the population.]