1 Political Knowledge across Platforms: Comparing CCES, Google, MTurk, and ANES
Adam R. Brown and Jeremy C. Pope
I will report on a randomized experiment that Jeremy and I conducted. We've seen the published research defending the use of the CCES and MTurk, and we believe it enough that we have both used these platforms regularly in our own research. Still, we got to wondering whether there are differences in attention or knowledge across these platforms that have implications for how researchers write their questions. In particular, we wondered whether CCES and MTurk respondents might be hyperattentive compared with ordinary people. That's a concern you often hear about MTurk: workers participate in so much paid social science and market research that they try to be "good subjects." We worried the same could be true of the CCES, since its participants are part of a recurring panel. We don't have that concern with samples like the ANES, and especially not with Google Surveys, the very short surveys you may have been confronted with while browsing the internet; most people who take them just fly through to be done.

So we designed an experiment and fielded it last November on three platforms:
- CCES
- MTurk
- Google Surveys

We mention the ANES in the subtitle, but we didn't run the experiment there. Similar experiments have been run on the ANES in the past, however, and some of the questions from our battery were on the ANES this year. Mostly I'll talk about the CCES, Google, and MTurk.

2 KNOWLEDGE BATTERY ITEMS:
GROUP 1 (encourage guessing): "If you aren't sure of the answer to any of these questions, we'd be grateful if you could just give your best guess."

GROUP 2 (encourage "don't know"): "Many people have trouble answering questions like these. So if you can't think of the answer, don't worry about it. Just mark that you don't know and move on to the next one."

KNOWLEDGE BATTERY ITEMS:
1. To the best of your knowledge, does your state have its own constitution?
2. Is the U.S. federal budget deficit – the amount by which the government's spending exceeds the amount of money it collects – now bigger, about the same, or smaller than it was during most of the 1990s?
3. For how many years is a United States Senator elected – that is, how many years are there in one full term of office for a U.S. Senator?
4. On which of the following does the U.S. federal government currently spend the least?
5. Who nominates judges to the Supreme Court?

In our experiment, we manipulated whether respondents were encouraged to guess or to mark "don't know," then gave them a five-item battery of political knowledge. (Read the treatments.) We are replicating an experiment originally run by Luskin and Bullock (2011, JOP), with similar treatments but a different battery. Their purpose was to test whether "don't know" really means "don't know": that is, whether you can get respondents to reveal hidden or unconfident knowledge by discouraging "don't know." (They ran essentially the same experiment on TESS with a nationally representative sample, and a modified version on the ANES in 2000.) (Emphasize that this was a subtle treatment: a brief prompt, then all items from the battery on the same page.)
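For concreteness, here is a minimal sketch, in Python/pandas, of how the two dependent variables used on the following slides (number correct and number of "don't know" responses, each out of 5) could be computed from raw responses. Everything here is a placeholder: the column names, the "dk" code, the toy rows, and the answer key are illustrative assumptions, not the authors' actual data or coding.

```python
import pandas as pd

# Hypothetical coding: one row per respondent, one column per battery item.
# "dk" marks a "don't know" response; other cells hold the chosen option code.
df = pd.DataFrame({
    "platform":  ["CCES", "CCES", "MTurk", "MTurk", "Google", "Google"],
    "treatment": ["guess", "dk_ok", "guess", "dk_ok", "guess", "dk_ok"],
    "item_1": ["a", "a", "a", "dk", "b", "a"],
    "item_2": ["c", "dk", "b", "dk", "b", "dk"],
    "item_3": ["b", "dk", "b", "b", "a", "dk"],
    "item_4": ["d", "dk", "d", "dk", "d", "d"],
    "item_5": ["a", "a", "a", "dk", "dk", "a"],
})

# Placeholder answer key for the five items (not the real answers).
key = {"item_1": "a", "item_2": "b", "item_3": "b",
       "item_4": "d", "item_5": "a"}
items = list(key)

# Number of correct responses out of 5 (the DV on slides 3-4).
df["n_correct"] = sum((df[c] == key[c]).astype(int) for c in items)
# Number of "don't know" responses out of 5 (the DV on slide 5).
df["n_dk"] = (df[items] == "dk").sum(axis=1)
```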

3 Mean Correct Responses, by Platform
Before I get into the treatment's effects, here are the aggregate results. If you're like us, this chart will surprise you. We expected MTurk and CCES respondents to score highest: since they take lots of surveys, we expected these to be high-knowledge groups. You can see that's not what we found. Political knowledge was significantly lower on the CCES. But to understand this figure, you really need to look at what happened with our treatments.
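Continuing the hypothetical DataFrame from slide 2's sketch, the aggregate quantity plotted on this slide is just a grouped mean:

```python
# Mean number correct (out of 5) by platform, as in this slide's chart.
print(df.groupby("platform")["n_correct"].mean().round(2))
```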

4 Mean Correct Responses, by Treatment and Platform
This chart depicts the same dependent variable, but separated by treatment.

MTurk: The treatment had a substantively small effect. When you encourage guessing, people reveal a bit more (hidden) knowledge than when you encourage marking "don't know." But the effect is small, which is consistent with what Luskin and Bullock found.

Google: No effect at all. Folks gave the same effort either way.

CCES: We're stumped. When you encourage guessing, people get fewer items correct rather than more. To be clear, they don't get a smaller percentage of the items they attempt correct; that wouldn't surprise us. No, they get a smaller number of items correct out of five, which runs totally against our expectations, and also against what Luskin and Bullock found.

Emphasize: There are no discernible differences across these platforms when we encourage people to mark "don't know." But when we encourage guessing, CCES respondents just give up, as if they've been given permission not to try anymore. We're stumped about why CCES respondents differ so much from the other platforms under this condition, and we would love to hear your thoughts.

We've checked, and there are no significant demographic interactions; the treatment groups appear to be balanced. And adding the survey weights (for CCES and Google) doesn't really change our conclusion. If anything, adding the weights makes CCES respondents even more distinctive. I have that figure at the end of the presentation if folks want it, and a sketch of the weighted calculation follows below.
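Here is a sketch of the two-way breakdown behind this chart, again using the hypothetical DataFrame from slide 2, plus the weighted robustness check mentioned above. The "weight" column is an assumed stand-in for the survey weights that CCES and Google supply, not the actual weights.

```python
import numpy as np

# Mean number correct by treatment within each platform (the quantity plotted here).
print(df.groupby(["platform", "treatment"])["n_correct"].mean().unstack())

# Weighted version of the same table. Real weights would come from the
# platform (CCES and Google ship with weights); these are placeholder values.
df["weight"] = 1.0
weighted = df.groupby(["platform", "treatment"]).apply(
    lambda g: np.average(g["n_correct"], weights=g["weight"])
)
print(weighted.unstack())
```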

5 Mean “Don’t Know,” by Treatment and Platform
Here's a slightly different dependent variable that produces slightly different insights. Instead of the number correct, this is the number of "don't know" responses (out of 5). This time, all three platforms show a treatment effect in the same direction: when you encourage marking "don't know," people do indeed mark "don't know" more often.

Google: A very weak (and statistically insignificant) effect.
MTurk and CCES: Meaningful effects, strongest on the CCES.

So this slide isn't particularly challenging; it's the previous one that's tough. Unlike respondents on other platforms, CCES respondents actually get more items wrong when we give them permission to just guess.

Now, admittedly, this is a small experiment. It doesn't shake our overall confidence in any of the platforms used; we would still feel comfortable using any of them for most applications. But these results do suggest a potentially important consideration for those who write survey questions: it might matter a lot whether you give respondents a "don't know" option after all, and how it matters may vary by platform. This is a question survey researchers have to deal with all the time; among our departmental colleagues, we've had the occasional sharp disagreement over whether to offer a "don't know" option. What we've got here is evidence that the implications may vary across platforms.
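Finally, a sketch of the per-platform treatment effect on "don't know" counts, estimated as a simple difference in means with a Welch t-test. This is an assumed analysis strategy for illustration, not necessarily the authors' estimator, and the toy rows above are far too few for a meaningful test.

```python
from scipy import stats

# Treatment effect on the number of "don't know" responses, platform by platform.
for platform, g in df.groupby("platform"):
    dk = g.loc[g["treatment"] == "dk_ok", "n_dk"]     # "don't know" encouraged
    guess = g.loc[g["treatment"] == "guess", "n_dk"]  # guessing encouraged
    effect = dk.mean() - guess.mean()
    t, p = stats.ttest_ind(dk, guess, equal_var=False)  # Welch's t-test
    print(f"{platform}: effect = {effect:+.2f} items, t = {t:.2f}, p = {p:.3f}")
```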

6 Extra slides go below this one

7 Mean Correct Responses, by Treatment and Platform

8 Mean “Don’t Know,” by Treatment and Platform

9 Treatment Effect on “Don’t Know,” by Platform

