The Impact of Training on the Accuracy of Teacher-Completed Direct Behavior Ratings (DBRs)
Teresa J. LeBel, Stephen P. Kilgus, Amy M. Briesch, & Sandra M. Chafouleas
University of Connecticut


Introduction

Research has shown a pressing need for proactive efforts to address challenging behavior in order to facilitate both academic and social behavioral success (e.g., Walker, Ramsey, & Gresham, 2004). However, when making decisions about intervention selection and implementation, data are needed to understand the effects of those intervention attempts. It is therefore vital that data collection procedures yield reliable and accurate data, are easy to use, require minimal training, are time efficient, and are acceptable to the user (i.e., the teacher).

The Direct Behavior Rating (DBR) is a brief measure used to rate behavior over a specified period of time and under specific and similar conditions (Chafouleas, Christ, Riley-Tillman, Briesch, & Chanese, 2007). DBR-type tools have typically been investigated as, and shown to be, an acceptable and efficient method of intervention (e.g., Chafouleas, Riley-Tillman, & McDougal, 2002; Crone, Horner, & Hawken, 2002; McCain & Kelley, 1993). However, interest in using the DBR for assessment has recently grown, given the potential efficiency of data collection. Despite this interest in DBR use in the assessment of social behavior, few studies to date have investigated the psychometric properties of DBRs across assessment purposes. A growing body of evidence (e.g., Chafouleas, McDougal, Riley-Tillman, Panahon, & Hilt, 2005; Chafouleas et al., 2007; Riley-Tillman, Chafouleas, Sassu, Chanese, & Glazer, in press; Steege, Davin, & Hathaway, 2001) nonetheless supports the use of DBRs in behavioral assessment.

Given this promise, it is important to investigate the degree of training necessary for intended users (i.e., teachers) to reliably and accurately rate behavior using a DBR. Because previous studies in related areas (e.g., school-based consultation) have found direct training that includes modeling and feedback to be an effective method for enhancing performance, it is important to investigate whether this type of training is equally effective for DBRs. This study provided a preliminary investigation of the effect of type of training on the accuracy of teacher DBR use, as well as teacher acceptability of the assessment measure.

Method

Participants included 40 general education teachers employed in a private high school in the Northeast.

Video footage of a third-grade classroom setting, as well as simulated classroom behavior footage, was collected and edited into 2-minute clips. Clips were selected based on the behaviors exhibited and the visibility of the target children.

The DBR consisted of a 100 mm continuous line divided into 10 equal gradients with three anchors (0%, 50%, 100%). Participants were asked to rate the percentage of time the target student exhibited disruptive behavior or academic engagement. Definitions of these behaviors and brief instructions regarding DBR procedures were included on the rating form.

The Assessment Rating Profile-Revised (ARP-R; Eckert, Hintze, & Shapiro, 1997) was administered to participants at study completion to assess teacher acceptability of the DBR as a tool for documenting student behavior.
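To make the scoring arithmetic of such a scale concrete, here is a minimal sketch, assuming a simple linear mapping from mark position to rating; the helper name is hypothetical and does not reproduce the study's actual scoring protocol:

def score_dbr_mark(mark_mm: float, line_length_mm: float = 100.0) -> float:
    """Convert the position of a rater's mark on the DBR line to a
    percentage-of-time rating (0-100%). Hypothetical helper, not the
    study's scoring materials."""
    if not 0.0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must fall on the line")
    return 100.0 * mark_mm / line_length_mm

# A mark at 63 mm on the 100 mm line corresponds to a 63% rating.
assert score_dbr_mark(63.0) == 63.0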
Participants were randomly assigned to one of three groups (Direct Training, Indirect Training, No Training), which differed in the level of instruction and modeling provided and in the opportunity to practice using the DBR. The No Training (NT) group received only general instructions regarding viewing the video, the behavior definitions, and completing the DBR. The Indirect Training (IT) group was given an instructional session on DBRs by a proctor, including reasons for use, how to complete the rating, and examples of rating specific behaviors. The Direct Training (DT) group received the same procedures as the IT group but, in addition, had the opportunity to practice rating the specified behaviors. Following the training conditions, each group was asked to watch the same two-minute video clip of typical classroom instruction and rate the target student on the proportion of time the student exhibited disruptive behavior and academic engagement. The DBR data were then compared to expert ratings compiled from direct observational data.

Results

Accuracy. Chi-square goodness-of-fit analyses were conducted to compare the proportion of individuals within each training group who rated accurately (see Table 3). For this method of analysis, data from both the true score (i.e., expert rating) and DBR ratings made on a continuous scale (i.e., 0-100%) were converted to a categorical scale (i.e., 0-10). For example, a continuous measurement of 0% corresponded to a categorical score of '0', 5% to '1', 49% to '5', and 72% to '8'. The conversion was made in order to create a categorical range of accepted accuracy: any DBR rating that fell within ten percentage points of the true score was deemed accurate for the purposes of this study. A summary of the means and standard deviations of group ratings is provided in Table 2.

For chi-square analyses of rating accuracy, comparisons were made between the proportions of "passes" and "fails" within each training group. For ratings of disruptive behavior, no significant differences in accuracy were found between groups. For academic engagement, however, differences in accuracy were found between the DT and NT groups (χ² = 4.64, p = .03) and between the IT and DT groups (χ² = 7.54, p = .02). No significant difference was found between the IT and NT groups (Yates χ² = .10, p = .75).

Acceptability. An examination of ARP-R results indicates that the IT group found the DBR to be the most acceptable (M = 3.96). Ratings across all groups, however, fell within the mid-range of acceptability (DT group M = 3.35, SD = .80; IT group M = 3.96, SD = 1.03; NT group M = 2.84, SD = 1.08).
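For concreteness, the continuous-to-categorical conversion and the pass/fail comparison described above can be sketched as follows. This is a minimal illustration, not the study's analysis code: the pass/fail counts are hypothetical, the accuracy rule is stated in categorical terms on the assumption that one category equals ten percentage points, and scipy is assumed to be available.

import math

from scipy.stats import chi2_contingency  # assumes scipy is installed

def to_category(pct: float) -> int:
    """Map a continuous 0-100% rating onto the 0-10 categorical scale.
    A ceiling reproduces the paper's examples: 0% -> 0, 5% -> 1,
    49% -> 5, 72% -> 8."""
    return math.ceil(pct / 10)

def is_accurate(dbr_pct: float, true_pct: float) -> bool:
    """'Pass' if the DBR rating falls within one category (i.e., ten
    percentage points) of the expert-derived true score."""
    return abs(to_category(dbr_pct) - to_category(true_pct)) <= 1

# Hypothetical pass/fail counts for two groups -- NOT the study's data.
#            pass  fail
table = [[12, 1],    # e.g., a "No Training" group
         [6, 7]]     # e.g., a "Direct Training" group
chi2, p, _, _ = chi2_contingency(table, correction=False)
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
print(f"Yates-corrected chi-square = {chi2_yates:.2f}, p = {p_yates:.3f}")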
Summary and Conclusions

Results from the current study indicated no difference in rating accuracy between the NT and IT groups for either target behavior (i.e., disruptive behavior, academic engagement). This lack of discrepancy is interesting, in that it may suggest that a minimal level of rater training is sufficient to produce accurate ratings. Yet although no difference in accuracy was found between the NT and IT groups, each was significantly more accurate when rating academic engagement than the DT group. No such differences were found among the three groups for ratings of disruptive behavior. Overall, results of this preliminary investigation of DBR training suggest that (a) there exists an optimal degree of training, and (b) training beyond this point may have a deleterious effect on rating accuracy. In addition, results suggest that teachers find the DBR to be a moderately acceptable tool for assessing student behavior.

Given the preliminary nature of this study, additional research is needed before firm conclusions about DBR training requirements can be drawn. It will be important to determine which components of training (e.g., rater feedback) are critical to producing accurate ratings. It may also prove useful to further investigate the sources of rater error associated with DBR use (e.g., rater bias) so that training can target the minimization of those sources of error. Lastly, although the use of video footage allowed for a preliminary exploration of the effects of DBR training on rater accuracy, the investigation should be extended into actual classrooms to gain a more complete understanding of the training needed to accurately rate student behavior using a DBR.

In sum, despite certain limitations, results of the current study hold promise for the future of the DBR as a tool for the assessment of social behavior. Specifically, the results concur with previous work indicating that only a moderate degree of training may be sufficient to prepare teachers to make accurate ratings of student behavior (e.g., Angkaw et al., 2006; Chafouleas et al., 2005). This highlights the feasibility of the DBR, as only a small amount of time may be required to prepare teachers to use the tool accurately. This, in addition to the ease with which the DBR can be used in a classroom (an estimated seconds per student; Chafouleas, Riley-Tillman, & McDougal, 2002), is promising, as other methods of social behavior assessment (e.g., systematic direct observation, rating scales) require extensive time and training to employ reliably (Pelham, Fabiano, & Massetti, 2005).

For additional information, please direct all correspondence to Teresa LeBel.

LeBel, T. J., Kilgus, S. P., Briesch, A. M., & Chafouleas, S. M. (2008, February). The influence of training on teacher-completed direct behavior ratings. Poster presentation at the National Association of School Psychologists Annual Convention, New Orleans, LA.