Project VIABLE: Overview of Directions Related to Training to Enhance Adequacy of Data Obtained through Direct Behavior Rating (DBR) Sandra M. Chafouleas.

Presentation transcript:

Project VIABLE: Overview of Directions Related to Training to Enhance Adequacy of Data Obtained through Direct Behavior Rating (DBR)
Sandra M. Chafouleas 1, T. Chris Riley-Tillman 2, Theodore J. Christ 3, & George Sugai 1
University of Connecticut 1, East Carolina University 2, University of Minnesota 3

For additional information, please visit the project website; correspondence regarding the project should be directed to Sandra Chafouleas.

PROJECT GOALS

Phase I: Foundations of Measurement (Instrumentation and Procedures)
- How should the DBR scale be constructed (e.g., Likert-type versus continuous)?
- How should assessment items be worded, and how many items should be included for each target behavior?
- How many observations and/or what duration should be used for each session?
- How long should the DBR observation rating period be in order to obtain a reliable estimate of the behavior?

Phase II: Decision Making and Validity
- In response to an intervention, do DBR data correspond to data obtained via systematic direct observation, office discipline referrals (ODRs), and traditional rating scales?
- Can DBR data be used to screen a classroom and reliably identify students at risk (engagement, disruption)?
- Can DBR data be used in tri-annual assessments to assess the level and rate of student behavior?

Phase III: Feasibility (Training and Use, Perceived Usability)
- What does DBR training require?*
- How intrusive are the procedures?
- How well do procedures and instrumentation generalize to teacher assessments within classroom settings?
- How do consumers of DBR perceive its usability? How acceptable is DBR?
- How useful are DBR outcomes for school-based decisions?
* Content for this poster.

FINDINGS TO DATE
- Rater training may be needed to enhance outcomes, particularly for certain individuals.
- Training raters to accurately rate mid-scale levels of behavior is the most challenging, yet most important, target.
- Training need not be high intensity (lengthy).
- Components likely to be beneficial include direct training with an overview and clear review of definitions, modeling of the rating process, practice with immediate feedback, and behavior examples that utilize the full scale range (low, medium, high).
- The impact of incorporating "frame of reference" and "error training" is still to be determined.

NEXT STEPS
- Would our current findings related to instrumentation and procedures improve if these training components were included? All future work will include training beyond familiarization.
- What else might be done to enhance (a) outcomes for variable individuals and (b) mid-scale rating accuracy?
- Following training, what happens to rating accuracy over time?
- How do we ensure training access for all? Develop an on-line dynamic training module.

BACKGROUND

What is the influence of the rater on data obtained from DBR? In the "absence of training," work to date under Phase I has suggested the following:
- Generally, profiles of aggregated DBR data are consistent across raters (when averaged within a student across occasions, or across students across occasions).
- Reliable estimates of level can be established with relatively few observations (5-10 for low-stakes decisions, more for high-stakes decisions) completed by the same rater, BUT some individual raters do not fall within these guidelines (a simple aggregation sketch follows this list).
- Individual raters seem to anchor ratings within a range of gradients, and subsequent ratings are then made relative to that range.
- When looking across raters, preliminary evidence supports systematic bias in rating.
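As a rough illustration of the aggregation idea in the list above, the sketch below averages repeated single-rater DBR scores for one student and reports a simple standard error of that mean as a crude index of how stable the estimated level is. This is only a hedged example: the 0-10 scale, the sample scores, and the function name are hypothetical, and the standard error shown here is merely a stand-in for the project's own reliability analyses.

```python
from math import sqrt
from statistics import mean, stdev

def summarize_dbr(scores):
    """Summarize repeated DBR ratings (hypothetical 0-10 scale) for one
    student, completed by the same rater across several occasions."""
    n = len(scores)
    level = mean(scores)  # estimated level of the behavior
    sem = stdev(scores) / sqrt(n) if n > 1 else float("nan")  # stability of that estimate
    return {"occasions": n, "estimated_level": level, "std_error_of_mean": sem}

# Hypothetical ratings of academic engagement across 8 observation occasions
engagement = [7, 8, 6, 7, 9, 8, 7, 8]
print(summarize_dbr(engagement))
```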
What components of training might be included to enhance outcomes?
- Direct training – active modeling and rehearsal, as opposed to didactic or written instruction
- Opportunities for practice and feedback – direct practice with immediate corrective feedback
- Intensity of training – how much training is sufficient?
- Explanation using a "frame of reference" – provide a clear rationale for rating, including qualitative descriptions
- Rater error training – definitions and examples of the types of error to avoid when rating, such as leniency/severity and halo effects

Defining Direct Behavior Rating
SUMMARY POINTS:
- Direct – Rating occurs in close proximity to the time and place of the observation. Thus, the rater must observe the target for a "sufficient" portion of the observation period.
- Behavior – The target of rating must be well defined and accessible for observation.
- Rating – The rating component quantifies rater perception of the target behavior.

STUDY 1
The purpose of the study was to examine whether providing users of DBR with a training session utilizing practice and performance feedback would increase rating accuracy.
Method: 59 undergraduate students watched video clips of student behavior and then rated target behaviors using DBR in one of two conditions:
- No training: 23-minute overview of DBR and behavior assessment
- Training: 23-minute instruction on DBR use, with practice and feedback
In summary, results were consistent with initial hypotheses in that ratings conducted by trained participants were more accurate than those conducted by untrained participants. Specifically, using standard difference scores or Cronbach's differential accuracy, raters in the training condition were significantly more accurate than those in the brief familiarization condition for ratings of academic engagement and disruption. As such, training resulted in higher levels of absolute or rank-order accuracy. There was also far less variability among the ratings completed by the trained group than among those completed by the brief familiarization (no training) group.
Schlientz, M.D., Riley-Tillman, T.C., Briesch, A.M., Walcott, C.M., & Chafouleas, S.M. (in press). The impact of training on the accuracy of Direct Behavior Ratings (DBRs). School Psychology Quarterly.

STUDY 2
The purpose of the study was to examine whether DBR rating accuracy is significantly impacted by the type of training package. Research questions include: (a) Does the addition of frame-of-reference training (FOR) to standard training (ST) improve rater accuracy over and above ST alone? (b) Does the addition of rater error training (RET) to FOR and ST improve rater accuracy over and above ST alone and ST+FOR? (c) Does increased exposure, defined as the opportunity for practice and feedback (twice as much: 6 opportunities rather than the standard 3), improve rater accuracy?
Method: undergraduates watched and rated video clips of student behavior in one of the 6 possible conditions (defined by crossing the factors above). Clips were purposefully chosen to reflect a range of ratings on the DBR scale, and a composite accuracy score for each behavior will be used in analysis. A two-way ANOVA (2 levels of exposure x 3 types of training) will be conducted to test for differences among groups with regard to differential accuracy (the outcome); the general shape of this analysis is sketched below. Data analyses are in process.
Chafouleas, S.M., Riley-Tillman, T.C., Kilgus, S.P., Amon, J., Jaffery, G., & Brooks, S. (in preparation). Critical components of DBR training to enhance rater accuracy: An investigation of training content and exposure.
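To make the planned analysis concrete, the sketch below runs a 2 (exposure) x 3 (training type) between-subjects ANOVA on a differential-accuracy outcome using Python's statsmodels. The data, condition labels, and effect sizes are simulated placeholders, not Project VIABLE's actual data or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated data: one row per rater, with assigned condition and a
# composite differential accuracy score (placeholder values only).
rng = np.random.default_rng(0)
rows = [
    {"exposure": exposure, "training": training, "accuracy": rng.normal(0.7, 0.1)}
    for exposure in ("3x", "6x")
    for training in ("ST", "ST+FOR", "ST+FOR+RET")
    for _ in range(10)  # 10 hypothetical raters per cell
]
df = pd.DataFrame(rows)

# 2 (exposure) x 3 (training type) between-subjects ANOVA with interaction
model = ols("accuracy ~ C(exposure) * C(training)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```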
STUDY 3
The purpose of this study was to examine whether direct training procedures resulted in greater DBR accuracy than either indirect training or no training. In addition, teacher acceptability of DBR in behavior assessment was assessed.
Method: middle and high school teachers from a private school watched video clips of student behavior and then rated academic engagement and disruption using DBR. Teachers were assigned to one of 3 conditions:
- No training: brief overview of study procedures
- Indirect training: 20-minute instruction on DBR use, with example ratings
- Direct training: 20-minute instruction on DBR use, with practice and feedback
In summary, chi-square analyses suggested that direct training did not improve rating accuracy. Moderate acceptability of DBR was found. Questions were raised about the selection and ordering of video samples during training, as initial exposure to mid-scale behavior ratings may create rater frustration.
LeBel, T.J., Kilgus, S.P., Briesch, A.M., & Chafouleas, S.M. (in press). The impact of training on the accuracy of teacher-completed Direct Behavior Ratings (DBRs). Journal of Positive Behavior Interventions.

STUDY 4
This study examined the extent to which DBR training with different levels of practice and performance feedback impacts rating accuracy.
Method: undergraduates watched and rated video clips of student behavior in one of 3 conditions:
- No training: overview of DBR and behavior assessment
- Training: overview of DBR and instruction on DBR use, with practice and feedback across 3 opportunities
- Extended training: same as Training, but with 6 opportunities for practice and feedback
In summary, initial analysis indicated no significant difference in accuracy between the training conditions, either immediately or at a one-week re-test. Further analyses compared training versus control (no training) across three base-rate levels (low, medium, and high) for each behavior. Training did not substantially improve overall ability to rate academic engagement, but ratings were more accurate when academic engagement was high, as demonstrated by lower systematic accuracy difference scores (a sketch of this kind of difference-score summary appears below). Training significantly improved accuracy when rating all levels of disruptive behavior, and across conditions accuracy was highest when disruptive behavior was low or high. Participants rated compliance with significantly less accuracy when targets displayed medium rates of compliance.
Riley-Tillman, T.C., Harrison, S., Amon, J., & Brooks, S. (in preparation). An investigation of rating accuracy on DBR following training involving practice and feedback.

[Figure: Example single-item DBR scale]
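Several of the studies above quantify accuracy as the discrepancy between a participant's DBR rating and a criterion value for the same video clip. The sketch below illustrates one plausible way to compute such difference scores and summarize them by base-rate level; the 0-10 single-item scale, the clip values, and the function name are hypothetical and are not drawn from the studies' actual materials or analyses.

```python
from collections import defaultdict
from statistics import mean

def summarize_accuracy(clips):
    """For each base-rate level (low / medium / high), report the mean signed
    difference (direction of rater bias) and the mean absolute difference
    (overall accuracy) between DBR ratings and criterion values."""
    grouped = defaultdict(list)
    for clip in clips:
        grouped[clip["level"]].append(clip["rating"] - clip["criterion"])
    return {
        level: {"mean_signed_diff": mean(diffs),
                "mean_abs_diff": mean(abs(d) for d in diffs)}
        for level, diffs in grouped.items()
    }

# Hypothetical DBR ratings of disruptive behavior for six video clips, with
# criterion values assumed to come from systematic direct observation.
clips = [
    {"level": "low", "rating": 1, "criterion": 1},
    {"level": "low", "rating": 2, "criterion": 1},
    {"level": "medium", "rating": 3, "criterion": 5},
    {"level": "medium", "rating": 7, "criterion": 5},
    {"level": "high", "rating": 9, "criterion": 9},
    {"level": "high", "rating": 8, "criterion": 9},
]
print(summarize_accuracy(clips))  # in this toy data, mid-scale clips show the largest error
```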