Toward a Professional Consensus on Using Single-Case Research to Identify Evidence-Based Practices: Some Initial Options from the Standards

Presentation transcript:

1 DAY 4 Applications of the WWC Standards in Literature Reviews Tom Kratochwill

2 Toward a Professional Consensus on Using Single-Case Research to Identify Evidence-Based Practices: Some Initial Options from the Standards
Five studies documenting experimental control (i.e., Meets Design Standards or Meets Design Standards With Reservations)
Conducted by at least three research teams with no overlapping authorship at three different institutions
The combined number of cases totals at least 20
Each study demonstrates an effect size of ___
(A simple sketch of these screening thresholds follows below.)
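To make the screening thresholds concrete, here is a minimal sketch in Python, assuming hypothetical study records; the dataclass fields and the function name are illustrative, not part of the Standards, and the actual review process is a structured expert review, not code.

```python
# Hypothetical sketch of the 5-studies / 3-teams / 20-cases screening
# rule described above. Field and function names are illustrative.
from dataclasses import dataclass

@dataclass
class Study:
    meets_design_standards: bool   # with or without reservations
    research_team: str             # lead team / institution label
    n_cases: int                   # number of cases (participants)

def meets_5_3_20_rule(studies: list[Study]) -> bool:
    """Apply the screening thresholds from the slide above."""
    qualifying = [s for s in studies if s.meets_design_standards]
    teams = {s.research_team for s in qualifying}
    total_cases = sum(s.n_cases for s in qualifying)
    return len(qualifying) >= 5 and len(teams) >= 3 and total_cases >= 20
```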

3 Literature Reviews
Kiuhara et al. (in press) found over 70 review articles.
The number of research reviews that adopt the WWC Pilot Standards has increased in recent years.
Options are now available to combine the effect sizes from single-case studies and group designs (see work by William Shadish, Larry Hedges, Robert Horner, and Samuel Odom).

4 Examples of using Single-Case Research to Document Evidence-Based Practice
A systematic evaluation of token economies as a classroom management tool for students with challenging behavior (Maggin, Chafouleas, Goddard, & Johnson, 2011)
Studies documenting experimental control: n = 7 student / 3 classroom (Meets Design Standards); n = 4 student / 0 classroom (Meets Design Standards With Reservations)
At least three settings/scholars: yes
At least 20 participants: no
EVIDENCE CRITERIA:
Strong evidence: n = 1 at the student level, n = 3 at the classroom level
Moderate evidence: n = 8 at the student level, n = 0 at the classroom level
No evidence: n = 2 at the student level, n = 0 at the classroom level

5 Examples of using Single-Case Research to Document Evidence-Based Practice
An application of the What Works Clearinghouse Standards for evaluating single-subject research: Synthesis of the self-management literature base (Maggin, Briesch, & Chafouleas, 2013)
Studies documenting experimental control: n = 37 (Meets Design Standards); n = 31 (Meets Design Standards With Reservations)
At least three settings/scholars: yes
At least 20 participants: yes
EVIDENCE CRITERIA:
Strong evidence: n = 25
Moderate evidence: n = 30
No evidence: n = 13

6 Further Examples of using Single-Case Research to Document Evidence-Based Practice: Topical Areas
Repeated Reading (WWC)
Peer Management Interventions
Writing Interventions for High School Students with Disabilities
Sensory-Based Treatments for Children with Disabilities
The Good Behavior Game
Evidence-Based Practices in Education and Treatment of Learners with Autism Spectrum Disorders
Academic Interventions for Incarcerated Adolescents

7 Need for Replication Studies in Single-Case Intervention Research
Can establish the reliability of findings
Can establish the generalizability of findings
Can help determine what works and what does not work in treatment
Ultimately can determine the credibility of “one-time” findings

8 Three types of Replication in Single-Case Design (Barlow, Nock, & Hersen, 2009):
Direct Replication: Replication of the experiment by the same researcher (sometimes called interparticipant replication).
Systematic Replication: An attempt by the researcher to replicate findings from a direct replication series while varying settings, therapists, behavior problems/disorders, or any combination of these (can involve intraexperiment and/or interexperiment replication).
Clinical Replication: Administration by the same investigator of a treatment package containing two or more distinct treatment procedures.
Barlow, D. H., Nock, M. K., & Hersen, M. (2009). Single-case experimental designs: Strategies for studying behavior change (3rd ed.). Boston, MA: Allyn & Bacon.

9 Disseminating Evidence-Based Practices
Evidence-based treatment is not sufficient. In addition to the features of the practice, define: what outcomes, when and where it is used, by whom, with what target populations, and at what fidelity.
The innovative practice needs to be not only evidence-based, but also dramatically easier and better than what is already being used.
The practice should be defined conceptually as well as procedurally, to allow guidance for adaptation.

10 Evidence May Not Guide Practice
Good evidence may not be available to guide practice.
Evidence to guide practice may not have been disseminated.
Negative scientific evidence may be ignored.

11 An Example of the Persistence of Fad Interventions
Lilienfeld, Marshall, Todd, and Shane (2015) provide an example of “fad interventions” that persist even though negative scientific evidence is available. The example comes from the use of facilitated communication for autism and, more recently, from sensory integration treatments.
Lilienfeld, S. O., Marshall, J., Todd, J. T., & Shane, H. C. (2015). The persistence of fad interventions in the face of negative scientific evidence: Facilitated communication for autism as a case example. Evidence-Based Communication Assessment and Intervention.

12 Questions and Discussion

13 Day 4 Special Topics in Single-Case Intervention Research Tom Kratochwill
Measurement issues in single-case design
Treatment integrity
Treatment intensity
Social validity in treatment research

14 Measurement Issues in Single-Case Design
Methods of Assessment
Quality of Assessment

15 Methods of Assessment: Choice of the Dependent Variable
Assessment of overt behavior (sometimes called direct assessment; e.g., frequency, duration, latency, psychophysiological measures)
Alternative assessment (sometimes called indirect assessment; e.g., self-report, checklists, and rating scales)
Specification of the conditions of assessment (e.g., natural versus analogue settings, human observers versus automated recording)

16 Quality of Assessment
Accuracy of Assessment
Observer Agreement (required in the WWC Pilot Standards)

17 Remember Standard 2: Inter-Assessor Agreement
Each outcome variable for each case must be measured systematically by more than one assessor.
The researcher needs to collect inter-assessor agreement:
in each phase
on at least 20% of the data points in each condition (i.e., baseline, intervention)
The rate of agreement must meet minimum thresholds (e.g., 80% agreement or Cohen’s kappa of 0.60).
If no outcomes meet these criteria, the study does not meet Design Standards.
(A worked sketch of the two agreement statistics follows below.)
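To make the two thresholds concrete, here is a minimal sketch assuming two observers who code the same intervals as occurrence (1) or nonoccurrence (0); the data are invented for illustration, and the thresholds are the ones named on the slide above.

```python
# Minimal sketch: percent agreement and Cohen's kappa for two observers
# coding the same intervals as occurrence (1) / nonoccurrence (0).

def percent_agreement(obs_a, obs_b):
    """Proportion of intervals on which the two observers agree."""
    return sum(a == b for a, b in zip(obs_a, obs_b)) / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(obs_a)
    p_o = percent_agreement(obs_a, obs_b)
    # Expected chance agreement from each observer's marginal rates
    p_a1 = sum(obs_a) / n          # observer A's rate of scoring 1
    p_b1 = sum(obs_b) / n          # observer B's rate of scoring 1
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

obs_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # invented interval records
obs_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]
po, k = percent_agreement(obs_a, obs_b), cohens_kappa(obs_a, obs_b)
print(f"agreement = {po:.0%}, kappa = {k:.2f}")  # 90%, kappa = 0.80
assert po >= 0.80 and k >= 0.60   # thresholds from the slide above
```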

18 Observer Agreement in the WWC Pilot Standards
The Panel did not specify the method of observer agreement (e.g., using only measures that control for chance agreement).
Hartmann, Barrios, and Wood (2004) reported over 20 measures for assessing interobserver agreement.
It is probably good practice to report measures that control for chance agreement and to report agreement separately for occurrences and nonoccurrences of the dependent variable.
It is also good to assess accuracy: the degree to which observer data reflect actual performance or some true measure of behavior (e.g., a video record of the behavior).
(A sketch of occurrence and nonoccurrence agreement follows below.)
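Using the same invented interval data as the earlier sketch, here is a hedged illustration of occurrence and nonoccurrence agreement, the occurrence-specific indices recommended above.

```python
# Occurrence agreement: agreement restricted to intervals that either
# observer scored as an occurrence (1); nonoccurrence agreement is the
# mirror image. Data are invented for illustration.

def occurrence_agreement(obs_a, obs_b):
    """Agreement over intervals where either observer scored 1."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 1 or b == 1]
    return sum(a == b for a, b in scored) / len(scored)

def nonoccurrence_agreement(obs_a, obs_b):
    """Agreement over intervals where either observer scored 0."""
    scored = [(a, b) for a, b in zip(obs_a, obs_b) if a == 0 or b == 0]
    return sum(a == b for a, b in scored) / len(scored)

obs_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
obs_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]
print(occurrence_agreement(obs_a, obs_b))     # 5/6 ~ 0.83
print(nonoccurrence_agreement(obs_a, obs_b))  # 4/5 = 0.80
```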

19 Artifact, Bias, and Complexity of Assessment
The researcher needs to consider factors that can obscure observer agreement:
Direct assessment by observers can be reactive.
Observers can drift in their assessment of the dependent variable.
Observers can have expectancies for improvement that are conveyed by the researcher.
Coding systems can be too complex and lead to error.

20 Questions and Discussion

21 Treatment Integrity
Assessing and promoting treatment integrity is a more recent standard in intervention research. See Sanetti and Kratochwill (2014) for comprehensive coverage of the topic.
Reference: Sanetti, L. M., & Kratochwill, T. R. (Eds.). (2014). Treatment integrity: A foundation for evidence-based practice in applied psychology. Washington, DC: American Psychological Association.

22 Treatment Integrity: Definition and Dimensions
“Treatment integrity is the extent to which required intervention components are delivered as designed in a competent manner while proscribed procedures are avoided by an interventionist trained to deliver the intervention in a particular setting to a particular population” (Perepletchikova, 2014, p. 138).

23 Representative Features of Treatment Integrity Data
Across Treatment Phases
Across Therapists/Intervention Agents
Across Situations
Across Days/Sessions
Across Cases
(A sketch of per-session integrity data follows below.)
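As a hedged illustration of integrity data collected across days/sessions, the sketch below computes one common operationalization, the percentage of planned intervention steps implemented per session; the checklist data and this particular metric are assumptions for illustration, not a prescription from Sanetti and Kratochwill (2014).

```python
# Invented implementation checklists: 1 = step delivered as designed,
# 0 = step omitted. Integrity here is percent of planned steps delivered.
sessions = {
    "session_1": [1, 1, 1, 0, 1],
    "session_2": [1, 1, 1, 1, 1],
    "session_3": [1, 0, 1, 0, 1],
}

for name, steps in sessions.items():
    integrity = 100 * sum(steps) / len(steps)
    print(f"{name}: {integrity:.0f}% of steps implemented")
```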

24 Questions and Discussion

25 Treatment Intensity: Definition and Dimensions
Treatment intensity typically refers to treatment strength; traditionally, the most commonly recognized dimension is the dose of the intervention (Codding & Lane, 2014).
Reference: Codding, R. S., & Lane, K. L. (2014). A spotlight on treatment intensity: An important and often overlooked component of intervention inquiry. Journal of Behavioral Education.

26 Dimensions of Treatment Intensity*
Dose
Learning trials per session
Session length
Session frequency
Length of treatment
Cumulative intensity (a worked example follows below)
Group size
(From Codding & Lane, 2014)
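As a worked example of how these dose dimensions combine, the sketch below multiplies dose, session frequency, and length of treatment into a cumulative-intensity figure; this multiplicative definition is a common convention in the treatment-intensity literature and is an assumption here, not a formula quoted from Codding and Lane (2014).

```python
# Illustrative arithmetic only; the multiplicative definition of
# cumulative intensity is an assumption, and the numbers are invented.
dose = 10                 # learning trials per session
session_frequency = 3     # sessions per week
length_of_treatment = 8   # weeks

cumulative_intensity = dose * session_frequency * length_of_treatment
print(cumulative_intensity)  # 240 learning trials over the intervention
```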

27 Dimensions of Treatment Intensity (Continued)
Treatment Design
Form
Positive/corrective feedback
Pace of instruction
Opportunities to practice or respond
Transitions between subjects or classes
Goal specificity
Treatment complexity

28 Dimensions of Treatment Intensity (Continued)
Interventionist Characteristics
Experience, knowledge, skills
Implementation Characteristics
Intervention management and planning
Materials and tangible resources required
Deviation from classroom routines

29 Questions and Discussion

30 Social Validity in Single-Case Design Research

31 Evaluating Designs, Evidence, Effect Sizes, and Social Validity
Step 1. Evaluate the design (Meets Design Standards, Meets With Reservations, or Does Not Meet Design Standards). Does the design allow documentation of experimental control? If not, stop.
Step 2. Evaluate the evidence (Strong Evidence, Moderate Evidence, or No Evidence). Does the presentation allow interpretation? If not, stop.
Step 3. Effect-size estimation and social validity assessment: what is the effect, and is it socially important?
(A minimal sketch of this flow follows below.)
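Here is a minimal sketch of the decision flow above, assuming string labels that mirror the slide; the function is illustrative and not an official WWC procedure.

```python
# Hedged sketch of the three-step review flow; labels mirror the slide.
def evaluate_single_case_study(design_rating: str, evidence_rating: str) -> str:
    # Step 1: does the design allow documentation of experimental control?
    if design_rating == "Does Not Meet Design Standards":
        return "Stop"
    # Step 2: does the presentation allow interpretation of the evidence?
    if evidence_rating == "No Evidence":
        return "Stop"
    # Step 3: estimate the effect and ask whether it is socially important
    return "Proceed to effect-size estimation and social validity assessment"

print(evaluate_single_case_study("Meets Design Standards", "Strong Evidence"))
```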

32 Social Validity in Treatment Research
Social validity involves three questions about an intervention (Kazdin, 2011):
Are the goals of the intervention relevant to the person’s life?
Is the intervention acceptable to “consumers” and others involved in the procedures?
Are the outcomes of the intervention important (i.e., do the changes make a difference in the lives of the persons involved)?

33 Social Validity in Treatment Research
Three social validation methods can be used in intervention research:
Social Comparison
Subjective Evaluation
Sustainability of Treatment

34 Social Validity in Treatment Research
Social Comparison
Normative data can be used when such data provide a good benchmark for positive functioning.
This typically involves identification of a “peer group” similar to the client; the peer group consists of persons who are functioning in an adequate or positive manner.
Sometimes standardized assessment instruments can be used for social comparison purposes.
(A minimal sketch of this comparison follows below.)
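As a minimal sketch of the social-comparison logic, the code below checks whether a client's post-treatment score falls within a normative band derived from a peer group; the data are invented, and the mean ± 1 SD band is one assumed convention for a normative range, not a requirement of the method.

```python
# Invented peer-group data; the mean ± 1 SD band is an assumed convention.
from statistics import mean, stdev

peer_scores = [12, 15, 14, 13, 16, 15, 14, 13]   # peer-group benchmark
client_post = 14                                  # client after treatment

m, s = mean(peer_scores), stdev(peer_scores)
in_normative_range = (m - s) <= client_post <= (m + s)
print(f"peer band: {m - s:.1f} to {m + s:.1f}; in range: {in_normative_range}")
```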

35 Social Validity in Treatment Research
Subjective Evaluation
This involves assessment by significant others who have knowledge of the client and can judge both the need for intervention and its outcomes.
Individuals rendering a “subjective” judgment may be parents, teachers, or professionals who have expert status in an area.
Specific goals may be established to serve both as the basis for intervention and as a benchmark for outcome evaluation.

36 Social Validity in Treatment Research
Sustainability
The degree to which the effects of a treatment are sustained over time (Kennedy, 2005).
The assessment is a measure of how long the program stays in place or is adopted.

37 Social Validity in Treatment Research
Some challenges with social comparison:
It is not easy to establish normative comparisons.
Normative comparisons may be unrealistic.
A normative range of functioning may be an impossible goal given the level of impairment.
Normative goals may not reflect overall quality of life.

38 Social Validity in Treatment Research
Some challenges with subjective evaluation:
Global ratings may be biased.
Persons completing the rating(s) may perceive a small change as major when, in fact, little of significance has occurred.
Subjective ratings may not correspond to other outcome data (e.g., direct observation).

39 Social Validity in Treatment Research
Some challenges with sustainability:
It requires a long time to conduct the analysis.
Sustainability may depend on factors other than acceptance to consumers.
Sustainability is an indirect index of social validity.

40 Questions and Discussion

