1
What We Know About Effective Professional Development: Implications for State MSPs, Part 2
Iris R. Weiss
June 11, 2008
2
Comments on selected indicators A.4. Are goals measurable? There are important goals where we lack instruments, e.g., PCK (pedagogical content knowledge), but in my view we should continue to focus on them, doing the best we can to monitor teacher progress.
3
B.5. Are the proposed activities innovative? Innovation is overrated, in my view, whether we are talking about PD or classroom instruction. Competent implementation of existing, well-designed approaches is a better bet.
4
B.9. Have teachers been integrally involved in the development of the PD plan? Teachers won’t necessarily know what they don’t know, nor how to design activities that focus appropriately on adult-level content. PD activities have to be both helpful and perceived as helpful, so having a range of teachers review the PD plan is useful.
5
C.1. PD Provider Knowledge and Skills STEM faculty bring in-depth content knowledge, but will likely need orientation to the world of K-12 instruction. Teacher leaders bring in-depth knowledge of K-12 teaching, but will likely need encouragement/assistance in focusing on content.
6
State MSP RFPs The scoring rubric sends strong signals about what the state is seeking. Avoid giving mixed messages; the scoring rubric needs to be consistent with the RFP text.
7
Take a few minutes to consider whether/how you would revise the rubric used in your state.
8
Evaluating the Quality and Impact of State MSPs
9
Project Evaluations Each project has a “theory of action” for how the planned activities will lead to the desired outcomes.
10
Basic Logic Model for PD: Professional Development → Teacher Knowledge → Classroom Practice → Improved Student Achievement
11
Deciding on Mid-Course Corrections Both project management and evaluators have responsibility for monitoring project implementation. Having PD providers observe classrooms is particularly powerful.
12
Project Evaluations State MSPs may want to measure quality and impact in any of a number of places in the logic model:
– Quality of PD, and consistency among PD providers
– Teacher knowledge
– Classroom practice
– Student achievement
13
Mechanisms for Mid-Course Corrections Professional development projects should include mechanisms for assessing effectiveness and making mid-course corrections.
14
Thorny Issues Abound The cost of classroom observations, scoring open-ended assessments, etc., and the scarcity of people trained to do this work. Lack of valid, reliable measures that are feasible for large-scale administration.
15
Thorny Issues Abound Development of new measures requires considerable resources and expertise. Principals, teachers, and parents are reluctant to take time away from instruction to administer tests beyond those already used.
16
Thorny Issues Abound Strong research designs are needed in order to make the case that any measured gains are attributable to the treatment.
17
Thorny Issues Abound The fact that students of participating teachers scored higher in the spring than in the fall isn’t convincing; you expect students to learn mathematics/science each year.
18
Thorny Issues Abound Showing that students of participating teachers scored higher than similar students of non-participating teachers isn’t necessarily convincing either; maybe the teachers who chose to participate were better teachers to begin with.
19
Thorny Issues Abound Strong research designs that can rule out “rival hypotheses” such as these, including random assignment studies and careful quasi-experiments, are not trivial to design and implement, especially when they require sophisticated multi-level analyses.
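As a rough illustration (not part of the original slides) of why multi-level analyses come into play: students are nested within teachers, and the PD treatment is a teacher-level variable, so its effect has to be judged against teacher-to-teacher variation rather than student-to-student variation. A minimal two-level sketch, with variable names chosen only for illustration, might look like:

\begin{aligned}
\text{Level 1 (students): } & y_{ij} = \beta_{0j} + \beta_{1}\,\text{pretest}_{ij} + e_{ij} \\
\text{Level 2 (teachers): } & \beta_{0j} = \gamma_{00} + \gamma_{01}\,\text{treatment}_{j} + u_{0j}
\end{aligned}

where \(y_{ij}\) is the spring score of student \(i\) in teacher \(j\)'s class, \(\text{treatment}_{j}\) indicates whether teacher \(j\) participated in the PD, and \(e_{ij}\) and \(u_{0j}\) are the student- and teacher-level error terms. Ignoring the teacher level (e.g., treating students as independent) typically overstates the precision of the estimated treatment effect.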
20
Thorny Issues Abound Even if individual projects have strong evaluations, it is difficult to aggregate results across projects that focus on different parts of the logic model, and/or use different measures.
21
Program evaluations can solve some of these problems There are two different approaches to program evaluations. In the first, the state hires a group to design/implement a statewide program evaluation.
22
Advantages The state can select a group with the necessary expertise to design and implement the evaluation statewide. Program evaluation allows for aggregation of results across projects, as well as the potential for learning about what works, for whom, and under what conditions.
23
Project-Based Evaluation With Common Components A second approach that allows aggregation of results is to have project-based evaluations with some or all data collection common across projects.
24
Project-Based Evaluation With Common Components In this approach, the state hires a group to design the evaluation, select/develop instruments, train project evaluators, and analyze the results.
25
Project-Based Evaluation With Common Components Project-based evaluations with common components are more feasible when the projects are similar in both goals and activities.
26
Project-Based Evaluation With Common Components One advantage to externally-coordinated project-based evaluations (compared to program evaluations where the external group collects the data) is that they help develop the capacity of project evaluators.
27
Project-Based Evaluation With Common Components Disadvantages include the difficulty of maintaining quality control in data collection, and the fact that collecting common data competes for resources that might otherwise be used to evaluate the quality of project-specific activities.
28
Regardless of evaluation approach Project/program evaluators need instruments appropriate to the goals of the MSPs. Given the emphasis on deepening teacher content knowledge in state MSPs, measures of teacher content knowledge that can be used on a large scale are particularly important.
29
MSP KMD resources for designing and evaluating PD
– Instrument database
– Knowledge reviews (see excerpt)