1 Developing an evaluation of professional development
Webinar #3: Going Deeper into Identifying & Measuring Target Outcomes

2 Information and materials mentioned or shown during this webinar are provided as resources and examples for the viewer's convenience. Their inclusion is not intended as an endorsement by the Regional Educational Laboratory Southeast or its funding source, the Institute of Education Sciences (Contract ED-IES-12-C-0011). In addition, the instructional practices and assessments discussed or shown in these presentations are not intended to mandate, direct, or control a State’s, local educational agency’s, or school’s specific instructional content, academic achievement system and assessments, curriculum, or program of instruction. State and local programs may use any instructional content, achievement system and assessments, curriculum, or program of instruction they wish.

3 Overview of Today’s Webinar
– Reliable outcomes
– State of Mississippi Literacy Initiative example
  – Observation tools
  – Developing new measures
– School-wide support for implementation
– Question & answer session

4 RELIABLE OUTCOMES
Dr. Barbara Foorman

5 Sessions on why/how-to, modeling, coaching, and feedback target outcomes at four levels:
Educator knowledge
– Pre/post quiz
– Published attitude scale
– Mock demonstration
Educator implementation
– Observation by a coach or peer
– Published observation (NOT formal evaluation)
– Curriculum script w/ checklist
– Behavior: tickets, SET
Student achievement in targeted skill
– Unit test
– Vocab words
– CBM if target is fluency
– Behavioral: direct behavior ratings, PBIS ODR
Generalized student achievement
– High-stakes achievement
– Norm-referenced
– Attendance
– Drop-out
– Graduation
– NOT grades

6 Reliability
– Test with multiple-choice items: calculate Cronbach's alpha, > .70
– Open-response items or observation measure: inter-rater reliability > .70
– If using different forms at pre-test & post-test: parallel-form reliability > .70
(Figure: target diagrams contrasting not reliable vs. reliable and valid vs. not valid)
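As a rough illustration (not from the webinar), inter-rater reliability for an observation measure could be checked with Cohen's kappa, one common choice of agreement statistic. The sketch below is plain Python, and the two rating arrays are hypothetical.

```python
# Minimal sketch: inter-rater reliability via Cohen's kappa.
# The two rating arrays are hypothetical, not webinar data.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items both raters scored the same.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical checklist scores: 1 = practice observed, 0 = not observed.
coach = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
peer  = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(f"kappa = {cohens_kappa(coach, peer):.2f}")
```

Simple percent agreement (`p_obs` above) is easier to explain, but kappa corrects for the agreement two raters would reach by chance; whichever statistic is chosen should be pre-specified and held to the > .70 bar.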

7 MISSISSIPPI LITERACY INITIATIVE EXAMPLE
Dr. Jessica Sidler Folsom

8 (Diagram: MS Dept. of Education K-3 Literacy Initiative, linking training, coaches, teachers, knowledge, observations, and students)

9 Mississippi Literacy Initiative
– Comprehensive state-level support
– Experts (coaches) embedded in target schools
– Research-based early literacy PD disseminated
– Intensive training schedule that balances the needs of the schools
– Sustainability plan

10 Observation tool
– System support (superintendents & principals)
– Collaborative, not evaluative, observations
– Components of the observation tool:
  – Explicit instruction in each of the critical parts of instruction identified in the training
  – Quantifies student engagement
  – Indicator of the quality of instruction
– Specific plan for reliability & replication

11 MATK Development Process
– Identify items
– Administer items
– Check test/items
Binks-Cantrell et al., 2012; Bos et al., 2001; Carlisle et al., 2009; Carlisle et al., 2011; Cunningham et al., 2004; Mather et al., 2001; Moats & Foorman, 2003; Reutzel et al., 2011; Salinger et al., 2010; Spear-Swerling & Cheesman, 2012

12 Identify pool of items
– Identify existing research or literature with surveys/tests of related content
– Select and categorize the items that best match:
  – Content (e.g., comprehension, phonological awareness)
  – Type (e.g., knowledge, teaching)
  – Matching LETRS module
– Randomly assign items to one of three test banks: pre-test, post-test, and "linking" (see the sketch after this slide)
  – Linking items will go on both the pre- and post-test
Example comprehension – knowledge question: Why is metacognition important in reading comprehension?
a) It helps students monitor their own comprehension
b) It makes the teacher aware of when students are experiencing difficulty during reading
c) It prompts students to create mental images
d) It causes automatic processing of the text so that students can make meaning of the text
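A minimal sketch (not the webinar's actual procedure) of the random assignment step in Python. The item IDs and the even three-way split are assumptions for illustration; the slide specifies only the three banks and that linking items appear on both forms.

```python
import random

# Minimal sketch: randomly assign a pool of items to three test banks.
# Item IDs and the even three-way split are hypothetical assumptions.
items = [f"item_{i:02d}" for i in range(1, 31)]  # hypothetical 30-item pool
random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(items)

third = len(items) // 3
banks = {
    "pre_only":  items[:third],
    "post_only": items[third:2 * third],
    "linking":   items[2 * third:],   # administered at both time points
}

pre_test  = banks["pre_only"]  + banks["linking"]
post_test = banks["post_only"] + banks["linking"]
print(len(pre_test), len(post_test))  # 20 items on each form
```

In practice the shuffle would be run within each content/type category from the step above, so that each bank covers the same mix of content.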

13 Check test & item statistics The goal in designing a reliable instrument is for scores on similar items to be related (internally consistent), but for each to contribute some unique information as well. Cronbach’s Alpha – Most common measure of test reliability based on item variances – Values higher than.7 are acceptable, though.8 indicates good reliability Point-biserial correlation (or item-to-total correlation) – Correlation between right/wrong scores of an item and the total score across all items – High point-biserial correlation indicates that student who got the item correct also scored high on the test. – Items with low point-biserial correlations should be checked – Values higher than.15 are acceptable, though.25 or higher is better P-value – Proportion of individuals that get the item correct – The higher the p-value the easier the item – For a more reliable test, p-values should be spread out All can be computed in Excel http://languagetesting.info/ statistics/excel.html http://languagetesting.info/ statistics/excel.html https://eddata.com/wp- content/uploads/2015/11/E DS_Point_Biserial.pdf https://eddata.com/wp- content/uploads/2015/11/E DS_Point_Biserial.pdf 13

14 SCHOOL-WIDE SUPPORT FOR IMPLEMENTATION
Mr. Kevin Smith

15 Formalizing Planning
– Who is responsible? Is this a priority goal for that team?
– Establishing agreed-upon, clear goals and timelines

16 Collaborating
– Securing district and school leadership support (example)
  – Why this? Why now? This information is needed to market the initiative.
– Determining and sharing user-friendly tie-ins to current practice/curricula
– Identifying the coalition of the willing (faculty vote & explicit commitment to attending sessions, observing & being observed, taking quizzes)

17 Coaching/Peer Coaching
– Determining available coaching resources: Who? When? How often?
– Establishing coaching expectations and protocols
  – Developing relationships, modeling, supporting, troubleshooting, reviewing data, making decisions (level of Tier 1 follow-up)
– Observations once/month with checklist feedback

18 Low implementation
Pre-determine criteria for low implementation & implement Tier 2 PD practices (a minimal decision-rule sketch follows below):
– < 80% on the knowledge test: schedule additional opportunities for practice
– < 80% on the implementation observation: schedule opportunities for modeling, co-teaching, more frequent feedback & goal-setting
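A minimal sketch of applying that pre-determined decision rule; the teacher records and field names are hypothetical.

```python
# Minimal sketch of the slide's Tier 2 decision rule; records are hypothetical.
CRITERION = 0.80

teachers = [
    {"name": "Teacher A", "knowledge": 0.92, "implementation": 0.75},
    {"name": "Teacher B", "knowledge": 0.70, "implementation": 0.85},
    {"name": "Teacher C", "knowledge": 0.88, "implementation": 0.90},
]

for t in teachers:
    if t["knowledge"] < CRITERION:
        print(f"{t['name']}: schedule additional opportunities for practice")
    if t["implementation"] < CRITERION:
        print(f"{t['name']}: schedule modeling, co-teaching, and more "
              f"frequent feedback & goal-setting")
```

Fixing the criterion before scores come in, as the slide recommends, keeps the Tier 2 assignment from being adjusted after the fact.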

19 Developing an evaluation of professional development
– Webinar 4: Going Deeper into Analyzing Results (1/19/2016, 2:00pm)
– Webinar 5: Going Deeper into Interpreting Results & Presenting Findings (1/21/2016, 2:00pm)

20 Questions & Answers Please type any questions in the chat box. 20

