Evaluating the effectiveness of open science practices

Presentation transcript:

Evaluating the effectiveness of open science practices
Dr. George Banks
Department of Management, Belk College of Business
Organizational Science Program, University of North Carolina at Charlotte

Evaluating the effectiveness of open science practices

Numerous articles lament the use of questionable research practices (QRPs) and encourage open science practices (Banks et al., 2016; John et al., 2012; O'Boyle et al., in press; Simmons et al., 2011). These articles subsequently make recommendations for improving our scientific practices, and many of those recommendations focus on journal review and publishing practices.

How can we evaluate the effectiveness of these recommendations? What sorts of evidence would help to create constructive change? And because it is unlikely that all recommendations work consistently well across contexts, how do we account for that variation?

Example recommendation: TOP guidelines (538 journal signatories)

Preregistration of studies (Nosek et al., 2015) is specified in four tiers (encoded in the sketch after this list):
- Level 0: The journal says nothing.
- Level 1: The journal encourages preregistration of studies and provides a link in the article to the preregistration if it exists.
- Level 2: The journal encourages preregistration of studies, provides a link in the article, and certifies that the preregistration badge requirements are met.
- Level 3: The journal requires preregistration of studies and provides a link and badge in the article certifying that the requirements are met.
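To make the tiered structure concrete, here is a minimal sketch (hypothetical names; not an official TOP artifact) that encodes the four preregistration levels as an ordered Python enum, so that journal policies can be compared against a target level:

```python
from enum import IntEnum

class PreregLevel(IntEnum):
    """TOP preregistration standard, Levels 0-3 (Nosek et al., 2015)."""
    NOT_IMPLEMENTED = 0  # journal says nothing
    ENCOURAGED = 1       # encourages; links to preregistration if it exists
    BADGED = 2           # encourages; links and certifies badge requirements
    REQUIRED = 3         # requires preregistration, link, and badge

def meets_minimum(journal_level: PreregLevel, required: PreregLevel) -> bool:
    """True if a journal's policy is at least as strict as a target level."""
    return journal_level >= required

if __name__ == "__main__":
    # Hypothetical check: a Level 2 journal satisfies a Level 1 requirement.
    print(meets_minimum(PreregLevel.BADGED, PreregLevel.ENCOURAGED))  # True
    print(meets_minimum(PreregLevel.ENCOURAGED, PreregLevel.REQUIRED))  # False
```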

Example recommendation: Editor Ethics 2.0 (https://editorethics.uncc.edu)

232 affirming associate and senior editors.

Article II. Promotion of ethical research practices, including:
- the reporting (and publishing) of theoretically/methodologically relevant null results;
- refraining from opportunistic post-hoc hypothesizing under the guise of deductive research.

The peer-review system

"If the first story does not appeal, you rewrite it to appease the editors/reviewers. This is a key to getting one's work accepted. Anyone who pretends otherwise is a fool and probably unpublished. Sorry to burst your bubble. This is not science, it is the art of persuasion."

"Change must occur top-down. Get journals to publish articles containing both significant and insignificant findings, get tenure to be less about '5 articles in A journals,' and the appropriate behaviors will follow."

Respondent comments reported in Banks et al. (2016)

Evaluation example: Results-blind review (RBR)

Past research suggests that reviewers place considerable weight on results without discounting for methodological flaws (Emerson et al., 2010). Such bias is not possible within an RBR process: reviewers focus directly on the theoretical and/or conceptual contribution and the methodological rigor of the research.

If evidence can be shown that an RBR approach (a) improves the accuracy of editorial decisions and (b) reduces engagement in QRPs (e.g., presenting post hoc hypotheses as a priori; selectively including results with a preference for those that are statistically significant), then authors and editors alike may be motivated to move all of their work through such a process.

Evaluation example: Results-blind review (RBR)

Evaluation procedure:
- For papers in the RBR condition, the results, the discussion, and the parts of the abstract that report results would be removed.
- Two reviewers will be randomly assigned to the traditional condition.
- Two reviewers will be randomly assigned to the RBR condition (assignment sketched in the code below).
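As a minimal sketch of this assignment step (hypothetical reviewer IDs; not the authors' actual protocol), the following randomly splits a manuscript's four-reviewer pool evenly between the two conditions:

```python
import random

def assign_reviewers(reviewers, seed=None):
    """Randomly split a pool of four reviewers into two conditions:
    two to traditional review, two to results-blind review (RBR)."""
    if len(reviewers) != 4:
        raise ValueError("Design calls for exactly four reviewers per manuscript.")
    rng = random.Random(seed)  # seeded for a reproducible illustration
    shuffled = reviewers[:]
    rng.shuffle(shuffled)
    return {"traditional": shuffled[:2], "rbr": shuffled[2:]}

# Hypothetical usage for one submitted manuscript:
print(assign_reviewers(["R1", "R2", "R3", "R4"], seed=42))
```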

Evaluation example: Results-blind review (RBR)

Reviewers' decision recommendations, ratings of theoretical and practical implications, and ratings of methods will be compared. We will also consider the depth and nature of the comments raised under the two approaches (e.g., do reviewers provide different types of comments when they are focused on just the introduction and methods?). For instance, we would compute objective measures such as word count, number of recommendations, and a ratio of positive to negative comments established through sentiment text-mining analysis (see the sketch below).

Are there other immediate outcomes that should be measured?
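A minimal sketch of how such objective measures might be computed from raw review text. The tiny sentiment lexicons here are illustrative only; an actual analysis would use a validated sentiment text-mining tool:

```python
import re

# Illustrative-only lexicons; a real analysis would use a validated tool.
POSITIVE = {"clear", "strong", "rigorous", "novel", "interesting", "sound"}
NEGATIVE = {"unclear", "weak", "flawed", "confusing", "incomplete", "unsound"}

def review_metrics(text: str) -> dict:
    """Compute simple objective measures for one review: word count,
    number of explicit recommendations, and positive/negative ratio."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    # Count sentences phrased as recommendations ("should", "recommend", "suggest").
    sentences = re.split(r"[.!?]+", text)
    recs = sum(bool(re.search(r"\b(should|recommend|suggest)\b", s, re.I))
               for s in sentences)
    return {
        "word_count": len(words),
        "n_recommendations": recs,
        "pos_neg_ratio": pos / neg if neg else float("inf"),
    }

print(review_metrics("The design is rigorous, but the framing is unclear. "
                     "The authors should report the sampling procedure."))
```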

Evaluation example: Results-blind review (RBR)

We will also query action editors following the study for insights gained as they considered RBR and traditional reviews side by side in making their editorial decisions. In addition, we will examine moderating variables, such as (1) whether or not hypotheses were confirmed; (2) the level of analysis of the study reported in the submission; (3) archival data versus primary study data; and so on. The aim of testing these factors is to identify potential contingencies in the effect of submission type on reviewer ratings; one such interaction test is sketched below.
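A minimal sketch of the kind of moderation test this implies, assuming a long-format dataset of reviewer ratings (the column names and toy values are hypothetical) and an ordinary least squares model with a condition-by-moderator interaction:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per reviewer rating.
df = pd.DataFrame({
    "rating":    [4, 5, 3, 2, 5, 4, 2, 3],           # reviewer rating (1-5)
    "condition": ["rbr", "rbr", "trad", "trad"] * 2,  # review condition
    "confirmed": [1, 1, 1, 1, 0, 0, 0, 0],            # hypotheses confirmed?
})

# The condition x moderator interaction tests whether the effect of
# submission type on ratings depends on hypothesis confirmation.
model = smf.ols("rating ~ C(condition) * confirmed", data=df).fit()
print(model.summary())
```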

Stimulating further discussion...

What issues might make journal editors hesitant to implement validated review interventions? How might we overcome these issues?

Downstream effects. Would the implementation of an RBR process affect:
- self-reported rates of engagement in QRPs?
- the prevalence of null results in the scientific literature?
- the replication rate of studies?
- the rate of submission to journals?
- journal impact factors?

Are there other downstream effects we should consider?

References

Banks, G. C., O'Boyle, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., et al. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42, 5-20.

Emerson, G. B., Warme, W. J., Wolf, F. M., Heckman, J. D., Brand, R. A., & Leopold, S. S. (2010). Testing for the presence of positive-outcome bias in peer review: A randomized controlled trial. Archives of Internal Medicine, 170, 1934-1939. doi:10.1001/archinternmed.2010.406

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524-532. doi:10.1177/0956797611430953

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., et al. (2015). Promoting an open research culture. Science, 348, 1422-1425.

O'Boyle, E. H., Jr., Banks, G. C., & Gonzalez-Mule, E. (in press). The chrysalis effect: How ugly data metamorphosize into beautiful articles. Journal of Management.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. doi:10.1177/0956797611417632