
1 Shifting incentives from getting it published to getting it right
Brian Nosek University of Virginia -- Center for Open Science

2–6 Norms vs. Counternorms (table built up across slides 2–6)
Communality (open sharing with colleagues) vs. Secrecy (closed)
Universalism (research evaluated only on its merit) vs. Particularism (research evaluated by reputation and past productivity)
Disinterestedness (motivated by knowledge and discovery, not personal gain) vs. Self-interestedness (science treated as a competition with other scientists)
Organized skepticism (consider all new evidence, theory, and data, even when it contradicts one's prior work or point of view) vs. Organized dogmatism (invest one's career in promoting one's own most important findings, theories, and innovations)
Quality (seek quality contributions) vs. Quantity (seek high volume)

7 Anderson, Martinson, & DeVries, 2007

8 Problems
Low power; flexibility in analysis; selective reporting; ignoring null results; lack of replication.
Examples from: Button et al. (neuroscience); Ioannidis, why most published findings are false (medicine); GWAS (biology).
Speaker notes: Two possibilities are that the percentage of positive results is inflated because negative results are much less likely to be published, and that we are pursuing our analytic freedoms to produce positive results that are not really there. Both would inflate the rate of false-positive results in the published literature. Some evidence from biomedical research suggests that this is occurring: two industrial laboratories attempted to replicate 40 to 50 basic-science studies that reported positive evidence for markers for new cancer treatments or other medical questions. They did not select at random; they picked studies considered landmark findings. The replication success rates were about 25% in one effort and about 10% in the other. Further, some of the findings they could not replicate had spurred literatures of hundreds of follow-up articles on the finding and its implications, without anyone testing whether the evidence for the original finding was solid. This is a massive waste of resources. Across the sciences, evidence like this has spurred discussion and proposed actions to improve research efficiency, to avoid the waste linked to erroneous results getting into and staying in the literature, and to examine a culture of scientific practice that rewards publishing, perhaps at the expense of knowledge building. There have been a variety of suggestions for what to do; the Nature article shown, for example, proposes raising publishing standards for basic-science research.
[Draft notes: It is not in my interest, or others', to replicate in order to evaluate validity and improve the precision of effect estimates. Replication is worth next to zero (see Makel's data on published replications); authors are motivated not to call a study a replication; novelty is supreme, with zero "error checking". It is not in my interest to check my work, and not in your interest to check my work; we each do our own thing and get rewarded for that. Irreproducible results will get into and stay in the literature (the biomedical examples above; the Prinz and Begley articles). The solution offered in the Nature article from biomedicine, raising publishing standards, is a popular one among commentators from the other sciences.]
Sterling, 1959; Cohen, 1962; Lykken, 1968; Tukey, 1969; Greenwald, 1975; Meehl, 1978; Rosenthal, 1979
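To make the "flexibility in analysis" and "selective reporting" problems concrete, here is a minimal simulation sketch (an editorial illustration, not from the talk; the sample size and number of outcomes are assumptions) showing how testing several outcomes and publishing whichever one reaches significance inflates the false-positive rate well beyond the nominal 5%:

```python
# Minimal simulation: analytic flexibility plus selective reporting
# inflates false positives even when every true effect is zero.
# (Illustrative parameters; not from the talk.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 5_000
n_per_group = 20   # small samples, echoing the "low power" problem
n_outcomes = 4     # analytic flexibility: four outcome measures per study

false_positives = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, (n_outcomes, n_per_group))
    treatment = rng.normal(0.0, 1.0, (n_outcomes, n_per_group))  # no true effect
    pvals = [stats.ttest_ind(t, c)[1] for t, c in zip(treatment, control)]
    if min(pvals) < 0.05:  # selective reporting: publish if ANY outcome "works"
        false_positives += 1

print(f"observed false-positive rate: {false_positives / n_studies:.2f}")
# Expected near 1 - 0.95**4 ≈ 0.19, nearly four times the nominal 5%.
```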

9 Figure credit: fivethirtyeight.com
Silberzahn et al., 2015

10

11 Reported vs. Unreported Tests (Franco, Malhotra, & Simonovits, 2015, SPPS; N = 32 studies in psychology)
Reported tests (N = 122): median p-value = .02; median effect size (d) = .29; 63% significant at p < .05
Unreported tests (N = 147): median p-value = .35; median effect size (d) = .13; 23% significant at p < .05
Speaker notes: We find that about 40% of studies fail to fully report all experimental conditions and about 70% of studies do not report all outcome variables included in the questionnaire. Reported effect sizes are about twice as large as unreported effect sizes and are about three times more likely to be statistically significant.
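As a rough illustration of why the two distributions can differ this much, the following sketch (mine, not Franco et al.'s analysis; the true effect size and sample size are assumptions) simulates one small true effect and "reports" only the significant tests:

```python
# Sketch: selecting which tests to report based on p < .05 makes the
# reported effects look much larger than the unreported ones.
# (True effect size and sample size are assumptions, not Franco et al.'s data.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n = 2_000, 50
true_d = 0.15  # one small true effect underlying every test

reported, unreported = [], []
for _ in range(n_tests):
    a = rng.normal(true_d, 1.0, n)  # treatment group
    b = rng.normal(0.0, 1.0, n)     # control group
    p = stats.ttest_ind(a, b)[1]
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    (reported if p < 0.05 else unreported).append(d)

print(f"median reported d:   {np.median(reported):.2f}")
print(f"median unreported d: {np.median(unreported):.2f}")
# Selection on significance alone reproduces the direction of the gap.
```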

12 Positive Result Rate dropped from 57% to 8% after preregistration required.

13 Incentives for individual success are focused on getting it published, not getting it right
Nosek, Spies, & Motyl, 2012

14 Barriers
Perceived norms (Anderson, Martinson, & DeVries, 2007)
Motivated reasoning (Kunda, 1990)
Minimal accountability (Lerner & Tetlock, 1999)
I am busy (Me & You, 2016)
Speaker notes: We can understand the nature of the challenge with existing psychological theory. For example: (1) The goals and rewards of publishing are immediate and concrete; the rewards of getting it right are distal and abstract (Trope & Liberman). (2) I have beliefs, ideologies, and achievement motivations that influence how I interpret and report my research (motivated reasoning; Kunda, 1990), and even if I try to resist motivated reasoning, I may simply be unable to detect it in myself, even when I can see those biases in others. (3) Which biases might influence me? Pick your favorite; mine in this context is the hindsight bias. (4) What's more, we face these potential biases in a context of minimal accountability: what you know of my laboratory's work is only what you get in the published report. (5) Finally, even if I accept that I have these biases and am motivated to address them so that I can get it right, I am busy, and so are you. If I introduce a whole set of new things I must now do to check and correct for my biases, I will kill my productivity and that of my collaborators. So the incentives lead me to think my best course of action is just to do the best I can and hope that I'm doing it okay.

15 [Diagram: means vs. rewards. Rewards currently attach to publication outcomes (novel, positive, clean results); the proposal is to attach them to the research process and content, including data and materials (transparency, reproducibility).]

16 Signals: Making Behaviors Visible Promotes Adoption
Badges: Open Data, Open Materials, Preregistration
Psychological Science (Jan 2014)
Kidwell et al., 2016

17 [Chart: percentage of articles reporting data available in a repository; y-axis 0% to 40%]

18

19 Data sharing: three levels
Level 1: Article states whether data are available and, if so, where to access them.
Level 2: Data must be posted to a trusted repository. Exceptions must be identified at article submission.
Level 3: Data must be posted to a trusted repository, and reported analyses will be reproduced independently prior to publication.

20 763 Journals, 65 Organizations
AAAS/Science
American Academy of Neurology
American Geophysical Union
American Heart Association
American Meteorological Society
American Society for Cell Biology
Association for Psychological Science
Association for Research in Personality
Association of Research Libraries
Behavioral Science and Policy Association
BioMed Central
Committee on Publication Ethics
Electrochemical Society
Frontiers
MDPI
Nature Publishing Group
PeerJ
Pensoft Publishers
Public Library of Science
The Royal Society
Society for Personality and Social Psychology
Society for a Science of Clinical Psychology
Ubiquity Press
Wiley

21 Preregistration
Context of justification: confirmation; data-independent; hypothesis testing; p-values interpretable.
Context of discovery: exploration; data-contingent; hypothesis generating; p-values NOT interpretable.
Preregistration separates the two. Presenting exploratory results as confirmatory increases the publishability of results at the cost of the credibility of results.
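A small sketch of why data-contingent p-values are not interpretable (an editorial illustration, not from the slide; the number of candidate predictors and sample size are assumptions): explore several predictors of a pure-noise outcome, then report the best one as though it had been a planned confirmatory test.

```python
# Sketch: explore k candidate predictors of a pure-noise outcome, then
# report the best one as if it were a planned confirmatory test.
# (k, n, and the number of simulations are illustrative assumptions.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k, n_sims = 100, 10, 2_000

hits = 0
for _ in range(n_sims):
    y = rng.normal(size=n)        # outcome, unrelated to every predictor
    X = rng.normal(size=(k, n))   # k candidate predictors, all null
    best_p = min(stats.pearsonr(x, y)[1] for x in X)
    if best_p < 0.05:             # the "discovery" presented as confirmatory
        hits += 1

print(f"rate of 'significant' best predictors: {hits / n_sims:.2f}")
# Near 1 - 0.95**10 ≈ 0.40, far above the nominal 5%. A preregistered,
# data-independent test keeps the p-value at its nominal meaning.
```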

22 Positive Result Rate dropped from 57% to 8% after preregistration required.

23 Are you okay with receiving treatment based on clinical trials that were not preregistered?
"Oh, but clinical trials are important." Why would I spend my time on research that isn't important enough to be worth doing as well as I can?

24 Preregistration Challenge http://cos.io/prereg

25 Registered Reports
Workflow: Design → Peer Review → Collect & Analyze → Report → Publish
The introduction and methods are reviewed prior to data collection, and the report is published regardless of outcome.
Benefits: peer review focuses on the quality of the methods; rewards accuracy of reporting over beauty; makes negative results publishable; encourages conducting replications.
Committee Chair: Chris Chambers

26 http://www.erpc2016.com/
American Journal of Political Science
American Political Science Review
American Politics Research
Political Analysis
Political Behavior
Political Science Quarterly
Public Opinion Quarterly
State Politics and Policy Quarterly

27 Mundane and Big Challenges for Reproducibility
Forgetting
Losing materials and data

28

29 [Diagram: the research ecosystem of universities, publishing, funders, and societies]

30 What can you do?
Individuals: try OSF (http://osf.io/); enter the Prereg Challenge; share a preprint; become a COS Ambassador.
Editors: Badges, Registered Reports, TOP.
Departments: OSF/reproducibility workshops; hiring and promotion criteria.

31

32 What can you do?
Try OSF (http://osf.io/); enter the Prereg Challenge; share a preprint.
These slides are shared at: [take a picture]

