Introduction to Mechanized Labor Marketplaces: Mechanical Turk
Uichin Lee, KAIST KSE

Mechanical Turk
Begin with a project
– Define the goals and key components of your project. For example, your goal might be to clean your business-listing database so that you have accurate information for consumers.
Break it into tasks and design your HITs
– Break the project into individual tasks; e.g., if you have 1,000 listings to verify, each listing would be an individual task.
– Next, design your Human Intelligence Tasks (HITs) by writing crisp and clear instructions, identifying the specific inputs/outputs desired and how much you will pay to have the work completed.
Publish HITs to the marketplace (a minimal sketch follows this slide)
– You can load millions of HITs into the marketplace. Each HIT can have multiple assignments, so that different Workers can provide answers to the same set of questions and you can compare the results to form an agreed-upon answer.
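Below is a minimal sketch of the "publish HITs" step using the boto3 MTurk client. The choice of SDK, the sandbox endpoint, the external task URL, and the reward and assignment counts are illustrative assumptions rather than anything prescribed in this presentation.

```python
# Minimal sketch: publishing a HIT with boto3 (illustrative values only).
import boto3

# The sandbox endpoint lets a requester test HITs without paying real Workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# One task per business listing; the task form itself is hosted on an external page.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/verify-listing?listing_id=42</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Verify a business listing",
    Description="Check that the address and phone number of one listing are correct.",
    Keywords="verification, data cleaning, easy",
    Reward="0.05",                    # USD per assignment
    MaxAssignments=3,                 # several Workers answer the same question
    AssignmentDurationInSeconds=600,  # time a Worker has after accepting
    LifetimeInSeconds=86400,          # how long the HIT stays in the marketplace
    Question=external_question,
)
print("Published HIT:", response["HIT"]["HITId"])
```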

Mechanical Turk
Workers accept assignments
– If Workers need special skills to complete your tasks, you can require that they pass a Qualification test before they are allowed to work on your HITs.
– You can also require other Qualifications, such as the location of a Worker or a minimum number of completed HITs.
Workers submit assignments for review
– When a Worker completes your HIT, he or she submits an assignment for you to review.
Approve or reject assignments (sketched below)
– When your work items have been completed, you can review the results and approve or reject them. You pay only for approved work.
Complete your project
– Congratulations! Your project has been completed and your Workers have been paid.
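A companion sketch for the review-and-approve step, again assuming boto3; looks_valid() stands in for whatever check a requester would actually apply, and the HIT ID is a placeholder for the one returned by the previous sketch.

```python
# Minimal sketch: reviewing submitted assignments and paying only for approved work.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

def looks_valid(answer_xml: str) -> bool:
    """Hypothetical requester-side check of a Worker's submitted answer."""
    return "<FreeText>" in answer_xml

hit_id = "EXAMPLE_HIT_ID"  # as returned by create_hit in the previous sketch
submitted = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"]
)

for assignment in submitted["Assignments"]:
    if looks_valid(assignment["Answer"]):
        mturk.approve_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Thanks, looks good.",
        )
    else:
        mturk.reject_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="The submitted answer was empty or malformed.",
        )
```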

Screenshot

AMT Questions
– Who are the workers that complete these tasks?
– What type of tasks can be completed in the marketplace?
– How much does it cost?
– How fast can I get results back?
– How big is the AMT marketplace?
Sources: "Analyzing the Amazon Mechanical Turk Marketplace," P. G. Ipeirotis, XRDS: Crossroads, 2010; "Demographics of Mechanical Turk," P. G. Ipeirotis, NYU Technical Report, 2010.

Gender and Age
– Countries: US 46.8%, India 34.0%, other 19.2% (workers from 66 different countries)

Educational Level
– Many workers are younger than the general population, which translates into higher reported educational levels.

Income Level
– Indian Turkers report relatively low income levels.

Marital Status and Household Size
– Many workers are single.
– Indian workers tend to belong to larger households.

Level of Engagement on M-Turk
– Most workers spend less than a day per week completing HITs and earn less than $20 per week.

Motivation
"Why do you complete tasks in Mechanical Turk? Please check any of the following that apply (multiple answers possible):"
– [1] Fruitful way to spend free time and get some cash (e.g., instead of watching TV)
– [2] I find the tasks to be fun
– [3] To kill time
– [4] For "primary" income purposes (e.g., gas, bills, groceries, credit cards)
– [5] For "secondary" income purposes, pocket change (for hobbies, gadgets, going out)
– [6] I am currently unemployed, or have only a part-time job
(Chart: distribution of responses for items [1]–[3])

Motivation (continued)
(Chart: distribution of responses for items [4]–[6]; same question and response options as the previous slide)

Summary
– A significant fraction of Turkers are from the US and India (together > 80%; roughly 20% from other countries)
– Turkers are younger than the general population (more than 50% are aged 21–35)
– More female workers in the US, more male workers in India
– Turkers have relatively lower incomes
– Turkers have relatively smaller families
– The geographic distribution of Turkers is similar to that of Internet users

Type of Tasks in M-Turk

Requester vs. Total Rewards
– Long-tail nature of participation: a small number of requesters account for most of the total rewards posted.

Keywords vs. Ranks

Price Distribution
– HIT groups with a large number of HITs tend to have a low price per HIT (e.g., $0.10)
– 90% of HITs pay less than $0.10

Posting and Serving Process
Cumulative distribution plots:
– For each day, the value of the tasks posted by AMT requesters and the value of the tasks completed that day (axis: value of posted/completed HITs in USD)
– Median posting/completion rate: $1,040 vs. $1,155 per day
– Under an M/M/1 queueing assumption, a task worth $1 has an average completion time of about 12.5 minutes (checked below)
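The 12.5-minute figure can be reproduced from the M/M/1 mean-time-in-system formula W = 1/(μ − λ), treating the daily posting and completion values as arrival and service rates in dollars per day. This is a back-of-the-envelope reconstruction of the slide's assumption, not the original analysis.

```python
# Quick check of the M/M/1 figure quoted above (rates assumed to be in $ per day).
arrival_rate = 1040.0   # median value of HITs posted per day ($/day)
service_rate = 1155.0   # median value of HITs completed per day ($/day)

# Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
wait_days_per_dollar = 1.0 / (service_rate - arrival_rate)
wait_minutes = wait_days_per_dollar * 24 * 60

print(f"Expected completion time of a $1 task: {wait_minutes:.1f} minutes")
# -> roughly 12.5 minutes, matching the slide.
```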

Posting and Serving Process
– Requesters post less over the weekends
– Less work is completed on Mondays, a knock-on effect of the lower weekend posting
(Panels: Posting, Completion)

Completion Time Distribution
– Completion times follow a power-law distribution with exponent alpha = -1.48
(Plot annotation: 10 days)
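For illustration only, here is one way such an exponent can be estimated: fit a straight line to the log-log histogram of completion times. The data below are synthetic, and the slide does not say which fitting method (density fit, CCDF fit, or maximum likelihood) was actually used.

```python
# Illustrative sketch: estimating a power-law exponent from completion times by
# fitting a line to the log-log density histogram (synthetic data, not AMT data).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic completion times (hours) from a Pareto distribution with tail ~ x^-1.5.
times = 1.0 + rng.pareto(a=0.5, size=50_000)

density, edges = np.histogram(times, bins=np.logspace(0, 3, 40), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = density > 0

slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)
print(f"Estimated exponent: {slope:.2f}")   # close to -1.5 for this sample
```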

Running Experiments on Amazon Mechanical Turk
Gabriele Paolacci, Jesse Chandler, and Panagiotis G. Ipeirotis
Judgment and Decision Making, Vol. 5, No. 5, August 2010
KSE 801: Human Computation and Crowdsourcing

Practical Advantages of M-Turk
Supportive infrastructure:
– Fast recruiting
– Convenient to run experiments
– An external site can be used (e.g., returning a validation code)
Subject identifiability and prescreening:
– M-Turk workers can be required to earn "Qualifications" (or answer prescreening questions) before completing a HIT
Subject identifiability and longitudinal studies:
– Worker IDs can be used to explicitly re-contact former subjects, or code can restrict the availability of a HIT to a predetermined list of workers (see the sketch below)
Cultural diversity:
– Cross-cultural comparisons are feasible (e.g., country, language, currency)
Subject anonymity (not easy, though):
– Workers' anonymity must be ensured (especially if an external site is used)
– M-Turk studies can be exempted from review by IRBs (Institutional Review Boards) if anonymity is guaranteed
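A hedged sketch of the longitudinal-study point: grant a custom Qualification to previously seen Workers and require it on a follow-up HIT so that only they can discover and accept it. The use of boto3 is an assumption, and all Worker IDs and names are placeholders.

```python
# Minimal sketch: restricting a follow-up HIT to previously seen Workers by
# granting them a custom Qualification (boto3 assumed; IDs are placeholders).
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Worker IDs collected during an earlier study (hypothetical values).
previous_workers = ["A1EXAMPLEWORKER", "A2EXAMPLEWORKER"]

qual = mturk.create_qualification_type(
    Name="Completed wave 1 of our study",
    Description="Granted to Workers who took part in the first wave.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

for worker_id in previous_workers:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# Only Workers holding this Qualification can even see the follow-up HIT;
# pass [requirement] as QualificationRequirements to create_hit().
requirement = {
    "QualificationTypeId": qual_id,
    "Comparator": "EqualTo",
    "IntegerValues": [1],
    "ActionsGuarded": "DiscoverPreviewAndAccept",
}
```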

Tradeoffs of Different Recruiting Methods

A Comparative Study
Tested various Judgment and Decision Making (JDM) findings
– Subject pools: M-Turk, a traditional subject pool at a large Midwestern US university, and visitors of online discussion boards
– Conducted during April–May 2010
Survey:
– Asian disease problem
– Linda problem
– Physician problem

Survey (Asian Disease Problem)
The Asian disease problem demonstrates framing (Tversky and Kahneman, 1981). Subjects read one of two hypothetical scenarios:
– "Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:"
– Problem 1 (gain frame): If Program A is adopted, 200 people will be saved. If Program B is adopted, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no people will be saved. Which of the two programs would you favor?
– Problem 2 (loss frame): If Program A is adopted, 400 people will die. If Program B is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 people will die.
The two scenarios are numerically identical, but subjects respond very differently: in the scenario framed in terms of gains, subjects were risk-averse (72% chose Program A), whereas in the scenario framed in terms of losses, 78% of subjects preferred Program B (Tversky and Kahneman, 1981). A test of this difference is sketched below.
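For illustration, one way to test the gain/loss difference is a chi-square test on the 2x2 table of frame by chosen program. The cell counts below are hypothetical, derived from the reported 72%/78% proportions under an assumed 100 subjects per frame; they are not the original data.

```python
# Illustrative sketch: testing whether program choice depends on the frame
# (hypothetical counts, assuming 100 subjects per frame).
from scipy.stats import chi2_contingency

# Rows: gain frame, loss frame. Columns: chose Program A, chose Program B.
table = [
    [72, 28],   # gains: 72% risk-averse choices of Program A
    [22, 78],   # losses: 78% risk-seeking choices of Program B
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")
```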

Survey (Linda Problem)
Example: "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." Which is more probable?
– Linda is a bank teller
– Linda is a bank teller and is active in the feminist movement
The Linda problem (Tversky & Kahneman, 1983) demonstrates the conjunction fallacy: people often fail to regard a combination of events as less probable than a single event in the combination, even though the probability of two events occurring together (in "conjunction") is always less than or equal to the probability of either one occurring alone.

Survey (Physician Problem)
The physician problem demonstrates the outcome bias: a surgeon decides whether or not to perform a risky surgery on a patient.
– The surgery had a known probability of success (e.g., 92%).
– Subjects were presented with either a good or a bad outcome (here, the patient living or dying) and asked to rate the quality of the surgeon's pre-operation decision.
Judgments of the quality of a decision often depend on the valence of the outcome (Baron and Hershey, 1988).
Subjects rated the quality of the physician's decision to perform the operation on a 7-point scale
– 1: incorrect and inexcusable; 7: clearly correct, and the opposite decision would be inexcusable
– Those presented with bad outcomes rated the decision worse than those presented with good outcomes.

After the Survey
After the survey, subjects completed the Subjective Numeracy Scale (SNS, 2007), yielding an SNS score
– An eight-item self-report measure of perceived ability to perform various mathematical tasks and of preference for numerical vs. prose information
– Used as a parsimonious measure of an individual's quantitative ability
An additional "catch trial" question tested whether subjects were attending to the questions (it requires a precise and obvious answer; exclusion sketched below)
– E.g., "While watching the television, have you ever had a fatal heart attack?" (six-point scale anchored at "Never" and "Often")
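A small sketch of the exclusion step the catch trial implies: drop any subject whose answer is anything other than "Never". The column names and the numeric coding of the scale are assumptions made for illustration.

```python
# Minimal sketch: excluding subjects who fail the catch trial
# (hypothetical column names; assumes the scale is coded 1 = "Never" ... 6 = "Often").
import pandas as pd

responses = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "catch_fatal_heart_attack": [1, 1, 3, 1],
    "sns_score": [4.1, 3.2, 5.0, 2.8],
})

attentive = responses[responses["catch_fatal_heart_attack"] == 1]
print(f"Kept {len(attentive)} of {len(responses)} subjects")
```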

Configuration
M-Turk:
– Pay: $0.10 (N = 318 participated)
– Title: "Answer a short decision survey"
– Description: "Make some choices and judgments in this 5-minute survey"
– The estimated completion time is included to give workers a rough sense of the reward/effort ratio (e.g., $1.71/hour; see the check below)
Lab subject pool:
– N = 141 students from an introductory subject pool at a large university
Internet discussion boards:
– A link to the survey was posted to several online discussion boards that host online experiments in psychology
– Online for 2 weeks; N = 137 visitors took part in the survey
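The quoted reward/effort ratio is just the reward divided by the time a subject actually spends. The ~3.5-minute completion time below is an assumption chosen for illustration because it makes a $0.10 reward come out to roughly $1.71/hour; the slide itself only states the advertised 5-minute length.

```python
# Back-of-the-envelope check of the quoted reward/effort ratio.
reward_usd = 0.10
completion_minutes = 3.5   # assumed median completion time (illustrative)

hourly_wage = reward_usd / completion_minutes * 60
print(f"Effective hourly wage: ${hourly_wage:.2f}/hour")   # ~ $1.71/hour
```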

Subject Pools: Characteristics
– Subjects recruited from the online discussion forums were significantly less likely to complete the survey than subjects on M-Turk (69.3% vs. 91.6%, χ² = 20.915, p < .001)
– The number of respondents who failed the catch trial was low and did not differ significantly across subject pools (χ²(2, 301) = 0.187, p = 0.91)
– Subjects in the three subject pools did not differ significantly in SNS score (F(2, 299) = 1.193, p = 0.30)

Results on Experimental Tasks
– M-Turk is a reliable source of experimental data in JDM research

Labor Supply
– Economic theory predicts that increasing the price paid for labor will, in most cases, increase the supply of labor.
– M-Turk experiment: after completing the demographic survey and a first transcription task, subjects were randomly assigned to one of four treatment groups and offered the chance to perform another transcription for p cents, where p was 1, 5, 15, or 25 (see the sketch below).
– Workers receiving higher offers were more likely to accept.
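A toy sketch of this design: randomly assign subjects to the four offer levels and compare acceptance rates by offer. The data and the acceptance model are synthetic; only the direction of the effect (higher offers, higher acceptance) comes from the slide.

```python
# Illustrative sketch of the labor-supply design: random assignment to one of
# four offer levels, then acceptance rates by offer (synthetic data only).
import random

random.seed(0)
offers_cents = [1, 5, 15, 25]
subjects = range(200)

# Random assignment of each subject to a treatment group.
assignment = {s: random.choice(offers_cents) for s in subjects}

def accepts(offer_cents: int) -> bool:
    """Hypothetical acceptance model: higher offers are accepted more often."""
    return random.random() < 0.3 + 0.02 * offer_cents

for offer in offers_cents:
    group = [s for s in subjects if assignment[s] == offer]
    rate = sum(accepts(offer) for _ in group) / len(group)
    print(f"{offer:>2}-cent offer: acceptance rate {rate:.2f}")
```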