Challenges for Journals: Encouraging Sound Science

Presentation transcript:

Challenges for Journals: Encouraging Sound Science
Barbara A. Spellman, University of Virginia School of Law
"There is a war between the ones who say there is a war and the ones who say there isn't…"

2009: Would you like to be Editor of Perspectives on Psychological Science? Sure! Jan 2010 – Letter from Incoming Editor: 1. Thanks to previous editor... 2. New types of articles...

2011 (Jan OL) – Bem, "Feeling the Future". 2011 (Oct Interim Rpt.) – Stapel Fraud. 2011 (Oct OL) – Simmons et al., "False-Positive Psychology" ("QRPs").

[Slide: a collage of recurring problems: RETRACTION, Fraud, Underpowered Studies, File Drawer Problem, and, over and over, Failure to Replicate]

Take your shot...

November 2012 – Hal Pashler & EJ Wagenmakers & Me. Special Section on Replicability in Psychological Science; Special Section on Research Practices. 170 pages, 120 authors. Invited, collected, un-rejected, requested reply, bid for (at auction).

November 2012 – Hal Pashler & EJ Wagenmakers & Me.
Problems: Is there a replication crisis? (Non-)value of conceptual replications. Undead theories. Aesthetic standards. Too many successful "predictions". It is not self-correcting.
Solutions: Teaching replications. Rewarding replication. Restructuring incentives. Open to change / wary of rules. Outline of the RP:P. What journals can do.

How...
...can journals help document the problems? Raise awareness of the problems?
...did journals contribute to the problems?
...can journals help to reduce the problems in the future?
...can journals help fix the problems they created in the past (i.e., fix the record)?

Why are there failures to replicate?
- Fault of Original Researcher: Fraud; QRPs ("Questionable Research Practices").
- Fault of Neither, One, or Both: Chance; Bad Copy (methods / data / analysis description); Things have changed.
- Fault of Replicator: QRPs; Bad intentions; Incompetence.

The "Ideal": Hypothetico-Deductive Model. Generate and specify hypotheses → Design study → Collect data → Analyse data & test hypotheses → Interpret data → Publish or conduct next experiment.

External Influences Provide Bad Incentives: around the whole research cycle, the pressure is to GET FUNDED, GET PUBLISHED, and thereby GET a job, tenure, promotion, fame, students…

What do Journals Want? To make money? Get people to buy / read / cite the journal: important papers, novel papers, clear papers, papers that confirm some new hypothesis. Don't spend too much: keep articles short; don't spend lots on reviewing, fact checking, etc.

TO GET PUBLISHED, scientists must give journals what they want: important, novel, clear, short papers that confirm some new hypothesis.

The Norm: Research Incentives and the Garden of Forking Paths. Around the same cycle (generate hypotheses → design study → collect data → analyse & test → interpret → publish), every stage offers forks:
- Lack of pre-registration; changing the hypotheses ("HARKing") to fit the data or analyses.
- Low statistical power: a poor chance to detect effects even if they exist (see the sketch below).
- "Cherry picking" data, measures, or constructs that support hypotheses.
- Mining data for statistically significant associations.
- Experimenter influences on data coding and analysis.
- Data, materials, and methods not shared (<30% of data shared).
- Peer review is secret.
- Publication bias: 92% positive results.
- Novelty bias: replication is not the norm, so only about 1 in 1000 papers is replicated.
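
To see how badly small samples serve even a real effect, here is a quick simulation (my illustration, not from the talk): with a true effect of Cohen's d = 0.4 and 20 participants per group, a standard two-sample t-test detects the effect less than a quarter of the time.

```python
# Illustration (not from the talk): how low power plays out in practice.
# A true but modest effect (Cohen's d = 0.4) with n = 20 per group is
# missed by most studies that look for it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d, n_sims = 20, 0.4, 10_000

hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)   # true effect of d standard deviations
    if stats.ttest_ind(treatment, control).pvalue < 0.05:
        hits += 1

print(f"Power at n={n}, d={d}: {hits / n_sims:.2f}")   # roughly 0.23
```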

Requiring "Novelty" distorts the literature

How can I get beautiful results to support my novel hypothesis? Improve your own luck! Take risks with small samples. Try a lot of things; pick the good cherries (simulated below).

Change your hypothesis to fit your data! HARKing: Hypothesising After the Results are Known. Coined by Norbert Kerr (1998) to describe the practice of "presenting post hoc hypotheses in a research report as if they were, in fact, a priori hypotheses".
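
A small simulation makes the cherry-picking point concrete (my illustration, not the speaker's): even when no effect exists on any measure, testing ten measures and reporting only the best one produces a "significant" finding about 40% of the time.

```python
# Illustration (mine): "try a lot of things; pick the good cherries."
# Null world: treatment and control differ on no measure at all, yet
# reporting only the best of ten measures yields p < .05 ~40% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_measures, n_sims = 20, 10, 5_000

false_positives = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, (n, n_measures))
    treatment = rng.normal(0.0, 1.0, (n, n_measures))
    pvals = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
             for j in range(n_measures)]
    false_positives += min(pvals) < 0.05   # report only the best cherry

print(f"Chance of a publishable 'effect': {false_positives / n_sims:.2f}")
# ~0.40 with 10 independent measures, versus the nominal 0.05
```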

HARKing and aesthetic standards: the cycle runs out of order; the hypotheses are changed ("HARKed") after analysis to fit the data or analyses, then presented as if they had come first.

Keeping Methods Short: Hurts Ability to Evaluate and Replicate

Not Requiring Data Sharing or Material/Method Sharing or Code Sharing: Hurts Ability to Evaluate (+ can’t reproduce)

Change / Align Incentives: what is good for an individual scientist's career should be what is good for science itself.

How...
...did journals contribute to the problems?
...can journals help document the problems? Raise awareness of the problems?
...can journals help to reduce the problems in the future?
...can journals help fix the problems they created in the past (i.e., fix the record)?
Use: Incentives, Technology & Determination.

I'd like to personally invite you to an important workshop titled "Increasing Scientific Transparency and Reproducibility in the Social and Behavioral Sciences," being held on November 3-4, 2014 at the Center for Open Science (COS) in Charlottesville, VA. The workshop is funded by the Laura and John Arnold Foundation and co-hosted by the journal Science, the Berkeley Initiative for Transparency in the Social Sciences (BITSS), and COS.

Transparency and Openness Promotion (TOP) Guidelines: eight policy standards for increasing the transparency and reproducibility of published research. Agnostic to discipline. Stepped levels (Nothing, Disclose, Require, Verify) to lower barriers. Modular. See cos.io/top for more detailed language.

Eight Standards, each adoptable at Level 1, 2, or 3:
- Data citation
- Materials transparency
- Data transparency
- Code transparency
- Design transparency
- Study preregistration
- Analysis preregistration
- Replication
See cos.io/top for more detailed language.

For each standard: What does it mean? What does it solve? How do we get authors to do it?

Ask authors who submit to answer two questions: (1) Are the data / code / materials available in a public repository? Yes/No. (2) If yes, where? URL: ________. Make the answers available in article metadata, or simply in footnotes.

Journal Carrots: Badges for Open Science. Provide more "room" for complete methods, materials, and data presentations (new technology!). Award badges: Open Data, Materials, Analyses.

Data, Analytic Methods (Code), and Research Materials Transparency Level 1: Authors must disclose action

Data, Analytic Methods (Code), and Research Materials Transparency Level 2: Authors must share (exceptions permitted)

Data, Analytic Methods (Code), and Research Materials Transparency Level 3: Journal or third party will verify that the data can be used to reproduce the findings presented in a paper.
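
What might that Level 3 verification look like in practice? A minimal sketch, assuming a hypothetical deposited file, column names, and reported statistic; TOP itself does not prescribe any particular tooling:

```python
# Sketch (mine, not TOP's) of a reproducibility check a journal or third
# party could run: re-run the archived analysis on the deposited data and
# compare the key statistic to the value reported in the paper.
# "shared_data.csv", the columns, and REPORTED_T are all hypothetical.
import pandas as pd
from scipy import stats

REPORTED_T, TOLERANCE = 2.31, 0.01   # value claimed in the (hypothetical) paper

df = pd.read_csv("shared_data.csv")  # the authors' deposited dataset
t, p = stats.ttest_ind(df.loc[df.group == "treatment", "score"],
                       df.loc[df.group == "control", "score"])

assert abs(t - REPORTED_T) < TOLERANCE, (
    f"Reproduced t = {t:.2f} does not match reported t = {REPORTED_T}")
print(f"Verified: t = {t:.2f}, p = {p:.3f} matches the paper.")
```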

Design and Analysis Transparency: the society or journal defines the relevant reporting standards appropriate for its discipline. Examples:
http://www.cell.com/star-methods
http://resource-cms.springer.com/springer-cms/rest/v1/content/7117202/data/v2/Minimum+standards+of+reporting+checklist
http://www.equator-network.org/
https://www.nature.com/authors/policies/ReportingSummary.pdf
http://www.apa.org/pubs/journals/releases/amp-amp0000191.pdf

Preregistration: a time-stamped, read-only version of your research plan, created before the study. It increases credibility by specifying in advance how the data will be analyzed, and it makes the distinction between confirmatory and exploratory work clearer.

Alison Ledgerwood

Confirmatory versus exploratory analysis “In statistics, hypotheses suggested by a given dataset, when tested with the same dataset that suggested them, are likely to be accepted even when they are not true. This is because circular reasoning (double dipping) would be involved: something seems true in the limited data set, therefore we hypothesize that it is true in general, therefore we (wrongly) test it on the same limited data set, which seems to confirm that it is true. Generating hypotheses based on data already observed, in the absence of testing them on new data, is referred to as post hoc theorizing (from Latin post hoc, "after this"). The correct procedure is to test any hypothesis on a data set that was not used to generate the hypothesis.”

Confirmatory versus exploratory analysis:
Context of confirmation (preregistration): traditional hypothesis testing; results held to the highest standards of rigor; goal is to minimize false positives; p-values interpretable.
Context of discovery: pushes knowledge into new areas / data-led discovery; finds unexpected relationships; goal is to minimize false negatives; p-values meaningless.
Presenting exploratory results as confirmatory increases publishability at the expense of credibility.
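
A simulation of the double dip (my illustration): search pure-noise data for the strongest of 15 candidate correlations, then test that "hypothesis" on the same data versus on a fresh sample. Only the fresh-data test keeps the false-positive rate near the nominal 5%.

```python
# Illustration (mine) of double dipping: a hypothesis suggested by the data,
# tested on the data that suggested it, "confirms" itself far too often.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_vars, n_sims = 50, 15, 2_000

same, fresh = 0, 0
for _ in range(n_sims):
    y = rng.normal(size=n)
    X = rng.normal(size=(n, n_vars))              # pure noise predictors
    rs = [abs(stats.pearsonr(X[:, j], y)[0]) for j in range(n_vars)]
    best = int(np.argmax(rs))                     # hypothesis suggested by the data

    same += stats.pearsonr(X[:, best], y)[1] < 0.05    # test on the same data
    y2, x2 = rng.normal(size=n), rng.normal(size=n)    # fresh replication sample
    fresh += stats.pearsonr(x2, y2)[1] < 0.05          # test on new data

print(f"'Significant' on the same data: {same / n_sims:.2f}")   # ~0.54
print(f"'Significant' on new data:      {fresh / n_sims:.2f}")  # ~0.05
```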

Preregistration or Replication Level 1: Disclose preregistration, encourage replication http://www.psychonomic.org/?page=journals

How to stop HARKing? Pre-registration / Registered Reports. "Because the study is accepted in advance, the incentives for authors change from producing the most beautiful story to producing the most accurate one." Chris Chambers, Cardiff University (section editor at Cortex and Royal Society Open Science). "Registered Reports eliminates the bias against negative results in publishing because the results are not known at the time of review." Daniel Simons, University of Illinois (co-Editor of Registered Replication Reports at Perspectives on Psychological Science).

Preregistration or Replication Level 3: Registered Reports. Before the study is run, reviewers ask: Are the hypotheses well founded and worth addressing? Are the methods and proposed analyses able to address the hypotheses? Have the authors included sufficient positive controls to confirm that the study will provide a fair test? If YES, the study is granted "in-principle acceptance" (IPA), a promise to publish regardless of outcome.

Preregistration or Replication Level 3: Registered Reports. After the study: Did the authors follow the approved protocol? Did the positive controls succeed? Are the conclusions justified by the data? 88 journals use Registered Reports; see more at cos.io/rr.

How can you make researchers do better? Journals. How can you make JOURNALS do better? 1) Make it easier for them to do it. COS.

2) Peer Pressure....

How can you make researchers do better? Journals can help. How can you make JOURNALS do better? 1) Make it easier for them to do it (COS). 2) Peer pressure: organizational action; individual editor action; pressure from reviewers; pressure from researchers (readers, authors).

Or – Start a New Journal: Advances in Methods and Practices in Psychological Science.

How...
...can journals help document the problems? Raise awareness of the problems?
...did journals contribute to the problems?
...can journals help to reduce the problems in the future?
...can journals help fix the problems they created in the past (i.e., fix the record)?

What Is NOT Happening? Building Better Bricks but not Better Buildings.
1. Stop losing important information.
2. Get better at compiling results (meta-analysis).
3. Get better at connecting findings: keywords (ugh); reasons for citations (like law?).
4. More theory / review journals?
5. More open places for (moderated?) commentary / discussion post-publication.

Why are there failures to replicate?
- Fault of Original Researcher: Fraud; QRPs ("Questionable Research Practices").
- Fault of Neither, One, or Both: Chance; Bad Copy (methods / data / analysis description).
- Fault of Replicator: QRPs; Bad intentions; Incompetence.

“A success of using technology and changing incentives to implement the timeless values of science.”

END. Thanks to... Brian Nosek, Simine Vazire, Alison Ledgerwood, David Mellor, Alan Kraut, the Center for Open Science, and Advances in Methods and Practices in Psychological Science.

QRPs: Questionable Research Practices

1. Include/exclude participants to achieve p < .05.
2. Collect and analyze multiple conditions; drop those that do not show p < .05.
3. Stop collecting data once p < .05 is reached (or keep collecting more data until p < .05).
4. Include many measures, but report only those with p < .05.
5. Include covariates in statistical analysis to get p < .05.
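
QRP 3, optional stopping, is easy to simulate (my illustration, in the spirit of Simmons et al.): test after every 10 participants per group and stop at the first p < .05. With no true effect, the false-positive rate roughly triples.

```python
# Illustration (mine) of QRP #3, optional stopping: run a t-test after every
# 10 participants per group and stop as soon as p < .05. Under the null, the
# nominal 5% false-positive rate is badly inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, start, step, n_max = 5_000, 10, 10, 100

false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n_max)   # two groups, no true difference
    b = rng.normal(size=n_max)
    for n in range(start, n_max + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_positives += 1   # "significant": stop and write it up
            break

print(f"False-positive rate with peeking: {false_positives / n_sims:.2f}")
# roughly 0.15-0.20, versus the nominal 0.05 with a fixed sample size
```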