
Evaluating the effectiveness of open science practices


1 Evaluating the effectiveness of open science practices
Dr. George Banks, Department of Management, Belk College of Business; Organizational Science Program, University of North Carolina at Charlotte

2 Evaluating the effectiveness of open science practices
There have been countless articles that lament the use of questionable research practices (QRPs) and encourage open science practices (Banks et al., 2016; John et al., 2012; O’Boyle et al., in press; Simmons et al., 2011).
These articles subsequently make recommendations for improving our scientific practices.
Many recommendations focus on journal review and publishing practices.
How can we evaluate the effectiveness of these recommendations? What sorts of evidence would help to create constructive change?
It is unlikely that all recommendations work consistently well across contexts. How do we account for this?

3 Example recommendation: TOP guidelines (538 journal signatories)
Preregistration of studies (Nosek et al., 2015)
Level 0: Journal says nothing
Level 1: Journal encourages preregistration of studies and provides a link in the article to the preregistration if it exists
Level 2: Journal encourages preregistration of studies, provides a link in the article, and certifies that preregistration badge requirements were met
Level 3: Journal requires preregistration of studies and provides a link and badge in the article certifying that the requirements were met

4 Example recommendation: Editor Ethics 2.0 (https://editorethics.uncc
232 affirming associate and senior editors
Article II: Promotion of ethical research practices, including:
the reporting (and publishing) of theoretically/methodologically relevant null results
refraining from opportunistic post hoc hypothesizing under the guise of deductive research

5 The peer-review system
“If the first story does not appeal, you rewrite it to appease the editors/reviewers. This is a key to getting one’s work accepted. Anyone who pretends otherwise is a fool and probably unpublished. Sorry to burst your bubble. This is not science, it is the art of persuasion.”
“Change must occur top-down. Get journals to publish articles containing both significant and insignificant findings, get tenure to be less about ‘5 articles in A journals,’ and the appropriate behaviors will follow.”
Banks et al. (2016)

6 Evaluation example: Results-blind review (RBR)
Past research suggests that reviewers place considerable weight on results without discounting for methodological flaws (Emerson et al., 2010). Such bias is not possible within an RBR process: reviewers focus directly on the theoretical and/or conceptual contribution and the methodological rigor of the research.
If evidence can be shown that an RBR approach…
improves the accuracy of editorial decisions, and
reduces engagement in QRPs (e.g., presenting post hoc hypotheses as a priori; selective inclusion of results with a preference for those that are statistically significant),
…then authors and editors alike may be motivated to move all of their work through such a process.

7 Evaluation example: Results-blind review (RBR)
Evaluation procedure
For papers in the RBR condition, the results, the discussion, and the parts of the abstract reporting results will be removed.
Two reviewers will be randomly assigned to the traditional condition, and two reviewers will be randomly assigned to the RBR condition (see the sketch below).
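A minimal sketch of this assignment step, assuming four reviewers per submission, two of whom are randomly placed in each condition; the function name and reviewer labels are illustrative rather than part of the proposed procedure.

```python
import random

def assign_reviewers(reviewers):
    """Randomly split a submission's four reviewers into the two review conditions."""
    shuffled = random.sample(reviewers, k=len(reviewers))  # random order, no repeats
    return {"traditional": shuffled[:2], "rbr": shuffled[2:]}

# Example: one submission with four hypothetical reviewers
print(assign_reviewers(["Reviewer A", "Reviewer B", "Reviewer C", "Reviewer D"]))
```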

8 Evaluation example: Results-blind review (RBR)
Reviewers’ decision recommendations, ratings on theoretical and practical implications, and ratings on methods will be compared. We will also consider the depth and nature of the comments raised under the two approaches (e.g., do reviewers provide different types of comments when they are focused on just the introduction and methods?). For instance, we would compute objective measures such as review length in words, number of recommendations, and the ratio of positive to negative comments established through sentiment text-mining analysis (a sketch follows). Are there other immediate outcomes that should be measured?
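A rough sketch of how such objective measures might be computed, assuming reviewer comments are available as plain-text strings. NLTK's VADER analyzer is used here only as one readily available sentiment lexicon, and the keyword heuristic for counting recommendations is purely illustrative.

```python
import re
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER, fetched once
sia = SentimentIntensityAnalyzer()

def review_metrics(comment_text):
    """Compute simple objective metrics for one reviewer comment."""
    # naive sentence split on end punctuation; a proper tokenizer could be substituted
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", comment_text) if s]
    scores = [sia.polarity_scores(s)["compound"] for s in sentences]
    positive = sum(score > 0.05 for score in scores)
    negative = sum(score < -0.05 for score in scores)
    return {
        "word_count": len(comment_text.split()),
        # crude proxy for "number of recommendations": suggestion-like sentences
        "n_recommendations": sum("should" in s.lower() or "recommend" in s.lower()
                                 for s in sentences),
        # ratio of positively to negatively toned sentences
        "pos_neg_ratio": positive / negative if negative else float("inf"),
    }

print(review_metrics("The framing is clear and compelling. However, the design "
                     "cannot rule out confounds. The authors should report power."))
```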

9 Evaluation example: Results-blind review (RBR)
We will also query action editors following the study for insights gained as they considered RBR and traditional reviews side by side in making their editorial decisions. In addition, we will examine moderating variables, such as (1) whether or not hypotheses were confirmed; (2) the level of analysis of the study reported in the submission; (3) archival data versus primary study data; and so on. The aim of testing these factors is to identify potential contingencies on the effect of submission type on reviewer ratings (see the sketch below).
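One way such a contingency could be tested is with an interaction term in a regression model. The sketch below assumes a long-format dataset with one row per review and illustrative column and file names (rating, condition, hyp_confirmed, reviewer_ratings.csv); it is a possible analysis, not the registered plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per review, with the reviewer's methods rating,
# the review condition (RBR vs. traditional), and whether hypotheses were confirmed.
reviews = pd.read_csv("reviewer_ratings.csv")

# The interaction term tests whether the RBR effect on ratings depends on the moderator.
model = smf.ols("rating ~ C(condition) * C(hyp_confirmed)", data=reviews).fit()
print(model.summary())
```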

10 Stimulating further discussion…
What issues might make journal editors hesitant to implement validated review interventions? How might we overcome these issues?
Downstream effects: would the implementation of an RBR process affect:
Self-reported rates of engagement in QRPs?
The prevalence of null results in the scientific literature?
The replication rate of studies?
The rate of submission to journals?
Journal impact factors?
Are there other downstream effects we should consider?

11 References
Banks, G. C., O’Boyle, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., et al. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42, 5-20.
Emerson, G. B., Warme, W. J., Wolf, F. M., Heckman, J. D., Brand, R. A., & Leopold, S. S. (2010). Testing for the presence of positive-outcome bias in peer review: A randomized controlled trial. Archives of Internal Medicine, 170.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23.
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., et al. (2015). Promoting an open research culture. Science, 348.
O’Boyle Jr., E. H., Banks, G. C., & Gonzalez-Mule, E. (in press). The chrysalis effect: How ugly data metamorphosize into beautiful articles. Journal of Management.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22.

