Negative Results and Publication Bias in Single-Case Research: Applications of the WWC Standards in Literature Reviews



- The Legacy of Negative Results and its Relationship to Publication Bias
- The Importance of Negative Results in Developing Evidence-Based Practices (Kratochwill, Stoiber, & Gutkin, 2000)
- Negative Results in Single-Case Intervention Research
- Examples Using the WWC Standards

The term negative results traditionally has meant either: (a) no statistically significant differences between groups receiving different intervention conditions in randomized controlled trials; or (b) no documented differences (visual and/or statistical) between baseline and intervention conditions in experimental single-case designs.

In the domain of SCD research, negative results reflect findings of:
(a) no difference between baseline (A) and intervention (B) phases (As = Bs);
(b) a difference between baseline and intervention phases, but in the direction opposite to what was predicted (As > Bs, where B was predicted to be superior to A);
(c) no difference between two alternative interventions, B and C (Bs = Cs); or
(d) a difference between two alternative interventions, but in the direction opposite to what was predicted (Bs > Cs, where C was predicted to be superior to B).
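
The four outcome patterns above can be sketched as a simple classification of two phases of data. This is only an illustrative sketch (the function name is hypothetical): it compares phase means, whereas real SCD analysis relies on visual analysis of level, trend, variability, and overlap.

```python
from statistics import mean

def classify_outcome(phase_x, phase_y, predict_y_superior=True):
    """Classify a two-phase comparison (baseline A vs. intervention B,
    or two interventions B vs. C) as positive or negative.
    A hypothetical mean-difference sketch, not a formal SCD analysis."""
    diff = mean(phase_y) - mean(phase_x)
    if abs(diff) < 1e-9:
        return "negative: no difference"        # patterns (a) and (c)
    if predict_y_superior and diff < 0:
        return "negative: opposite direction"   # patterns (b) and (d)
    return "positive: predicted direction"

# Pattern (a): As = Bs
print(classify_outcome([3, 3, 3], [3, 3, 3]))   # negative: no difference
# Pattern (b): As > Bs, where B was predicted to be superior to A
print(classify_outcome([5, 6, 5], [2, 3, 2]))   # negative: opposite direction
```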

Negative results/findings in SCD intervention research should be distinguished from negative effects in intervention research (i.e., iatrogenic effects). Some interventions may actually produce negative effects on participants (i.e., participants get worse or show negative side effects from an intervention); see, for example, Barlow (2010).

Selective results refer to the withholding of some findings from a single study or from a replication series (i.e., a series of single-case studies in which the treatment is replicated several times in independent experiments; see also our discussion below of selective-results issues in replication series). Selective results can be considered part of the domain of negative results.

Erroneous results have been considered in traditional “group” research in situations where statistical tests are incorrectly conducted or interpreted, yielding findings that are reported as statistically significant but turn out not to be when the correct test or interpretation is applied (e.g., Levin, 1985). Also included in the erroneous results category are “spurious” findings produced in various research contexts.
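
One way spurious findings arise is through repeated testing when the null hypothesis is actually true. A minimal simulation, assuming a Welch-style t statistic and a rough two-tailed cutoff of |t| > 2.02 (≈ .05 with df ≈ 38), shows that roughly 5% of purely null comparisons still look “significant”:

```python
import random
from statistics import mean, stdev

def t_stat(a, b):
    """Welch-style t statistic for two independent samples."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

random.seed(1)
n_tests, n = 1000, 20
false_positives = 0
for _ in range(n_tests):
    # Both groups come from the SAME distribution: any "effect" is spurious.
    g1 = [random.gauss(0, 1) for _ in range(n)]
    g2 = [random.gauss(0, 1) for _ in range(n)]
    if abs(t_stat(g1, g2)) > 2.02:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons look 'significant'")
```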

Publication bias results when studies with positive or more favorable outcomes are more likely to be published than studies with null or negative findings. Publication bias may also occur in single-case design research, although less is known about it in this methodology. If literature summaries such as meta-analyses fail to include negative results, they may overestimate the size of an effect.
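
The overestimation mechanism can be shown with a toy calculation. The effect sizes below are hypothetical, chosen only to illustrate how a published-only average exceeds the average over all conducted studies:

```python
from statistics import mean

# Hypothetical effect sizes: published studies skew positive, while the
# "file drawer" of unpublished studies holds null and negative results.
published   = [0.80, 0.65, 0.90, 0.70, 0.55]
unpublished = [0.10, 0.00, -0.05, 0.20]

biased_mean   = mean(published)                 # what a review would see
unbiased_mean = mean(published + unpublished)   # what was actually found

print(f"Published-only mean effect: {biased_mean:.2f}")   # 0.72
print(f"All-studies mean effect:    {unbiased_mean:.2f}")  # 0.43
```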

Sham and Smith (2014) examined publication bias by comparing effect sizes in single-case research between published studies (n = 21) and unpublished dissertation studies (n = 10) in the area of pivotal response treatment (PRT). Effect sizes were assessed with the percentage of non-overlapping data (PND). They found that the mean PND for published studies was 22% higher than for unpublished studies. Nevertheless, PRT was found to be effective overall in both published and unpublished studies.
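
PND, the effect size used in that comparison, is the percentage of intervention-phase data points that do not overlap with the most extreme baseline point. A minimal sketch, assuming an increase in the target behavior is predicted unless stated otherwise:

```python
def pnd(baseline, intervention, increase_expected=True):
    """Percentage of non-overlapping data (PND): the share of intervention
    points exceeding the highest baseline point (or falling below the
    lowest baseline point, when a decrease is predicted)."""
    if increase_expected:
        threshold = max(baseline)
        non_overlap = [y for y in intervention if y > threshold]
    else:
        threshold = min(baseline)
        non_overlap = [y for y in intervention if y < threshold]
    return 100.0 * len(non_overlap) / len(intervention)

# Baseline max is 4, so 3 of 5 intervention points (6, 7, 5) do not overlap.
print(pnd([2, 3, 4, 3], [6, 7, 4, 5, 3]))  # 60.0
```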

Shadish, Zelinsky, Vevea, and Kratochwill (2015) surveyed SCD researchers about their publication practices to answer the following questions: (a) Does a bias exist against publishing results from SCD research that shows low effect sizes? (b) Does this bias change with characteristics of the data, such as data overlap and total data variability? (c) Does this bias vary across participant demographics? and (d) Is this bias present in any of three forms: not submitting research to a journal, dropping cases before submission, and not recommending publication? Results suggest that SCD researchers do give preference to large effects in publication decisions.
