Toward a Professional Consensus on Using Single-Case Research to Identify Evidence-Based Practices: Some Initial Options from the Standards

DAY 4 Applications of the WWC Standards in Literature Reviews Tom Kratochwill

Toward a Professional Consensus on Using Single-Case Research to Identify Evidence-Based Practices: Some Initial Options from the Standards
- Five studies documenting experimental control (i.e., Meets Design Standards or Meets Design Standards With Reservations)
- Conducted by at least three research teams, with no overlapping authorship, at three different institutions
- The combined number of cases totals at least 20
- Each study demonstrates an effect size of ___
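As a rough illustration, the screening criteria above can be expressed as a short check. This is a minimal sketch, not an official WWC tool: the Study record and its field names are invented for illustration, and the open effect-size criterion is omitted.

```python
# Minimal sketch of the "5 studies / 3 teams / 20 cases" screening rule above.
# The Study record and field names are hypothetical; the effect-size
# criterion is left open, as on the slide.
from dataclasses import dataclass

@dataclass
class Study:
    team: str                      # research team (non-overlapping authorship)
    institution: str
    n_cases: int                   # number of cases in the study
    meets_design_standards: bool   # Meets Standards or Meets With Reservations

def meets_initial_criteria(studies: list[Study]) -> bool:
    """True if the reviewed studies satisfy the initial consensus options."""
    ok = [s for s in studies if s.meets_design_standards]
    return (
        len(ok) >= 5
        and len({s.team for s in ok}) >= 3
        and len({s.institution for s in ok}) >= 3
        and sum(s.n_cases for s in ok) >= 20
    )
```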

Literature Reviews
- Kiuhara et al. (in press) found over 70 review articles.
- The number of research reviews that adopt the WWC Pilot Standards has increased in recent years.
- Options are now available to combine the effect sizes from single-case studies and group designs (see http://ies.ed.gov/ncser/pubs/2015002/; authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom).

Examples of Using Single-Case Research to Document Evidence-Based Practice
A systematic evaluation of token economies as a classroom management tool for students with challenging behavior (Maggin, Chafouleas, Goddard, & Johnson, 2011)
- Studies documenting experimental control: n = 7/3 (MDS, student/classroom); 4/0 (MDSWR, student/classroom)
- At least three settings/scholars: yes
- At least 20 participants: no
EVIDENCE CRITERIA:
- Strong evidence: n = 1 at the student level; n = 3 at the classroom level
- Moderate evidence: n = 8 at the student level; n = 0 at the classroom level
- No evidence: n = 2 at the student level; n = 0 at the classroom level

Examples of Using Single-Case Research to Document Evidence-Based Practice
An application of the What Works Clearinghouse Standards for evaluating single-subject research: Synthesis of the self-management literature base (Maggin, Briesch, & Chafouleas, 2013)
- Studies documenting experimental control: n = 37 (MDS); n = 31 (MDSWR)
- At least three settings/scholars: yes
- At least 20 participants: yes
EVIDENCE CRITERIA:
- Strong evidence: n = 25
- Moderate evidence: n = 30
- No evidence: n = 13

Further Examples of Using Single-Case Research to Document Evidence-Based Practice: Topical Areas
- Repeated Reading (WWC)
- Peer Management Interventions
- Writing Interventions for High School Students with Disabilities
- Sensory-Based Treatments for Children with Disabilities
- The Good Behavior Game
- Evidence-Based Practices in Education and Treatment of Learners with Autism Spectrum Disorders
- Academic Interventions for Incarcerated Adolescents

Need for Replication Studies in Single-Case Intervention Research
- Can establish the reliability of findings
- Can establish the generalizability of findings
- Can help determine what works and what does not work in treatment
- Ultimately can determine the credibility of "one-time" findings

Three Types of Replication in Single-Case Design (Barlow, Nock, & Hersen, 2009)
- Direct Replication: Replication of the experiment by the same researcher (sometimes called interparticipant replication).
- Systematic Replication: An attempt by the researcher to replicate findings from a direct replication series while varying settings, therapists, behavior problems/disorders, or any combination of these (can involve intraexperiment and/or interexperiment replication).
- Clinical Replication: Administration by the same investigator of a treatment package containing two or more distinct treatment procedures.
Reference: Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Boston, MA: Allyn & Bacon.

Disseminating Evidence-Based Practices
Evidence-based treatment is not sufficient:
- In addition to the features of the practice, define what outcomes are expected, when and where it is used, by whom, with what target populations, and at what fidelity.
- The innovative practice needs to be not only evidence-based, but dramatically easier and better than what is already being used.
- The practice should be defined conceptually as well as procedurally, to allow guidance for adaptation.

Evidence May Not Guide Practice
- Good evidence may not be available to guide practice.
- Evidence to guide practice may not have been disseminated.
- Negative scientific evidence may be ignored.

An Example of the Persistence of Fad Interventions
Lilienfeld, Marshall, Todd, and Shane (2015) provide an example of "fad interventions" that persist even though negative scientific evidence is available. The example comes from the use of facilitated communication for autism... and, more recently, from sensory integration treatments.
Reference: Lilienfeld, S. O., Marshall, J., Todd, J. T., & Shane, H. C. (2015). The persistence of fad interventions in the face of negative scientific evidence: Facilitated communication for autism as a case example. Evidence-Based Communication Assessment and Intervention. doi:10.1080/17489539.2014.976332

Questions and Discussion

Day 4: Special Topics in Single-Case Intervention Research (Tom Kratochwill)
- Measurement issues in single-case design
- Treatment integrity
- Treatment intensity
- Social validity in treatment research

Measurement Issues in Single-Case Design
- Methods of Assessment
- Quality of Assessment

Methods of Assessment: Choice of the Dependent Variable
- Assessment of overt behavior (sometimes called direct assessment; e.g., frequency, duration, latency, psychophysiological measures)
- Alternative assessment (sometimes called indirect assessment; e.g., self-report, checklists, and rating scales)
- Specification of the conditions of assessment (e.g., natural versus analogue settings, human observers versus automated recording)

Quality of Assessment
- Accuracy of Assessment
- Observer Agreement (required in the WWC Pilot Standards)

Remember Standard 2: Inter-Assessor Agreement
- Each outcome variable for each case must be measured systematically by more than one assessor.
- The researcher needs to collect inter-assessor agreement:
  - in each phase, and
  - on at least 20% of the data points in each condition (i.e., baseline, intervention).
- The rate of agreement must meet minimum thresholds (e.g., 80% agreement or Cohen's kappa of 0.60).
- If no outcomes meet these criteria, the study does not meet Design Standards.
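For reference, the two thresholds named above can be computed as follows. This is a minimal sketch, assuming interval-by-interval data coded 0/1 by two observers; the record format and example values are illustrative, not prescribed by the Standards.

```python
# Minimal sketch of two common inter-assessor agreement indices for
# interval-by-interval occurrence data coded 0/1 by two observers.

def percent_agreement(obs_a, obs_b):
    """Interval-by-interval percent agreement between two observers."""
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Cohen's kappa: agreement corrected for chance (binary codes)."""
    n = len(obs_a)
    p_observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    p_a1 = sum(obs_a) / n          # proportion of intervals observer A scored 1
    p_b1 = sum(obs_b) / n          # proportion of intervals observer B scored 1
    p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: check the thresholds above (80% agreement, kappa >= .60)
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(percent_agreement(a, b))   # 90.0
print(cohens_kappa(a, b))        # 0.8
```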

Observer Agreement in the WWC Pilot Standards
- The Panel did not specify the method of observer agreement (e.g., using only measures that control for chance agreement).
- Hartmann, Barrios, and Wood (2004) reported over 20 measures for assessing interobserver agreement.
- It is probably good practice to report measures that control for chance agreement and to report agreement separately for occurrences and nonoccurrences of the dependent variable (see the sketch below).
- It is also good to assess accuracy: the degree to which observer data reflect actual performance or some true measure of behavior (e.g., a video record of the behavior).
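The occurrence/nonoccurrence practice suggested above can be sketched like this, again assuming 0/1 interval codes; the functions are illustrative and not part of the Standards.

```python
# Minimal sketch of occurrence and nonoccurrence agreement for 0/1 interval data.

def occurrence_agreement(obs_a, obs_b):
    """Agreement restricted to intervals in which either observer scored an occurrence."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 1 or b == 1]
    if not scored:
        return None
    return 100.0 * sum(a == b for a, b in scored) / len(scored)

def nonoccurrence_agreement(obs_a, obs_b):
    """Agreement restricted to intervals in which either observer scored a nonoccurrence."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 0 or b == 0]
    if not scored:
        return None
    return 100.0 * sum(a == b for a, b in scored) / len(scored)
```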

Artifact, Bias, and Complexity of Assessment
The researcher needs to consider factors that can obscure observer agreement:
- Direct assessment by observers can be reactive.
- Observers can drift in their assessment of the dependent variable.
- Observers can hold expectancies for improvement that are conveyed by the researcher.
- Coding systems can be too complex and lead to error.

Questions and Discussion

Treatment Integrity
- Assessing and promoting treatment integrity is a relatively recent standard in intervention research.
- See Sanetti and Kratochwill (2014) for comprehensive coverage of the topic.
Reference: Sanetti, L. M., & Kratochwill, T. R. (Eds.) (2014). Treatment integrity: A foundation for evidence-based practice in applied psychology. Washington, DC: American Psychological Association.

Treatment Integrity: Definition and Dimensions
"Treatment integrity is the extent to which required intervention components are delivered as designed in a competent manner while proscribed procedures are avoided by an interventionist trained to deliver the intervention in a particular setting to a particular population" (Perepletchikova, 2014, p. 138).

Representative Features of Treatment Integrity Data
- Across treatment phases
- Across therapists/intervention agents
- Across situations
- Across days/sessions
- Across cases

Questions and Discussion

Treatment Intensity: Definition and Dimensions
Treatment intensity typically refers to treatment strength; traditionally, the most commonly recognized dimension is the dose of the intervention (Codding & Lane, 2014).
Reference: Codding, R. S., & Lane, K. L. (2014). A spotlight on treatment intensity: An important and often overlooked component of intervention inquiry. Journal of Behavioral Education. doi:10.1007/s10864-014-9210-z

Dimensions of Treatment Intensity (from Codding & Lane, 2014)
Dose
- Learning trials/session
- Session length
- Session frequency
- Length of treatment
- Cumulative intensity (see the sketch after the continued dimensions below)
- Group size

Dimensions of Treatment Intensity (continued)
Treatment Design Form
- Positive corrective feedback
- Pace of instruction
- Opportunities to practice or respond
- Transitions between subjects or classes
- Goal specificity
- Treatment complexity

Dimensions of Treatment Intensity (continued)
Interventionist Characteristics
- Experience, knowledge, skills
Implementation Characteristics
- Intervention management and planning
- Materials and tangible resources required
- Deviation from classroom routines
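Pulling the dose-related dimensions together, cumulative intensity is often described as the product of dose per session, session frequency, and total treatment duration. The sketch below assumes that formulation and invented units; it is an illustration, not the specific definition used by Codding and Lane (2014).

```python
# Minimal sketch of one common formulation of cumulative intervention
# intensity: dose per session x session frequency x total treatment duration.
# The formulation, units, and numbers are assumptions for illustration.

def cumulative_intensity(dose_per_session, sessions_per_week, weeks):
    """Total teaching episodes delivered across the intervention period."""
    return dose_per_session * sessions_per_week * weeks

# Example: 20 learning trials per session, 3 sessions per week, 10 weeks
print(cumulative_intensity(20, 3, 10))   # 600 trials
```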

Questions and Discussion

Social Validity in Single-Case Design Research

[Flowchart: WWC review process for a single-case study]
1. Evaluate the design: Meets Design Standards, Meets Design Standards With Reservations, or Does Not Meet Design Standards. Does the design allow documentation of experimental control? If not, stop.
2. Evaluate the evidence: Strong Evidence, Moderate Evidence, or No Evidence. Does the presentation allow interpretation? If not, stop.
3. Effect-size estimation and social validity assessment: What is the effect, and is it socially important?

Social Validity in Treatment Research
Social validity involves three questions about an intervention (Kazdin, 2011):
- Are the goals of the intervention relevant to the person's life?
- Is the intervention acceptable to "consumers" and others involved in the procedures?
- Are the outcomes of the intervention important (i.e., do the changes make a difference in the lives of the persons involved)?

Social Validity in Treatment Research
Three social validation methods can be used in intervention research:
- Social comparison
- Subjective evaluation
- Sustainability of treatment

Social Validity in Treatment Research: Social Comparison
- Normative data can be used when such data provide a good benchmark for positive functioning.
- Typically involves identification of a "peer group" similar to the client; the peer group consists of persons who are functioning in an adequate or positive manner.
- Sometimes standardized assessment instruments can be used for social comparison purposes.
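A social-comparison check might look like the sketch below. It assumes the peer group's mean plus or minus one standard deviation is treated as the normative band; the band, the measure, and the scores are hypothetical illustrations, not a prescribed procedure.

```python
# Minimal sketch of a social-comparison check against a peer-group band
# (mean +/- 1 SD). The criterion and the data are hypothetical.
from statistics import mean, stdev

def within_normative_range(client_score, peer_scores, sd_band=1.0):
    """True if the client's score falls within the peer group's mean +/- sd_band SDs."""
    m, s = mean(peer_scores), stdev(peer_scores)
    return (m - sd_band * s) <= client_score <= (m + sd_band * s)

# Example: post-treatment disruptions per class period vs. a peer sample
peers = [2, 3, 1, 4, 2, 3, 2]
print(within_normative_range(3, peers))   # True: within the peer band
print(within_normative_range(9, peers))   # False: still outside the band
```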

Social Validity in Treatment Research: Subjective Evaluation
- Involves assessment by significant others who have knowledge of the client and can judge the need for intervention and the outcomes of intervention.
- Individuals rendering a "subjective" judgment may be parents, teachers, or professionals who have expert status in an area.
- Specific goals may be established and serve as the basis for intervention and as a benchmark for outcome evaluation.

Social Validity in Treatment Research: Sustainability
- The degree to which the effects of a treatment are sustained over time (Kennedy, 2005).
- The assessment is a measure of how long the program stays in place or is adopted.

Social Validity in Treatment Research: Some Challenges with Social Comparison
- It is not always easy to establish normative comparisons.
- Normative comparisons may be unrealistic.
- A normative range of functioning may be an impossible goal given the level of impairment.
- Normative goals may not reflect overall quality of life.

Social Validity in Treatment Research: Some Challenges with Subjective Evaluation
- Global ratings may be biased.
- Persons completing the rating(s) may perceive a small change as major when, in fact, not much of significance has occurred.
- Subjective ratings may not correspond to other outcome data (e.g., direct observation).

Social Validity in Treatment Research: Some Challenges with Sustainability
- Requires a long time to conduct the analysis.
- Sustainability may depend on factors other than acceptance to consumers.
- Sustainability is an indirect index of social validity.

Questions and Discussion