Waiting Room
Today's webinar will begin shortly.
REMINDERS:
Dial 800-503-2899 and enter the passcode 6496612# to hear the audio portion of the presentation.
Download today's materials from the sign-in page: Webinar Series Part 6 PowerPoint slides and the Correlation Example Excel file.
Webinar Series Part 6
Determining How to Integrate Assessments into Educator Evaluation: Developing Business Rules and Engaging Staff
Webinar Series
# | Title | Date | Length | Time
1 | Introduction: District-Determined Measures and Assessment Literacy | 3/14 | 60 minutes | 4-5pm
2 | Basics of Assessment | 4/4 | 90 minutes | 4-5:30pm
3 | Assessment Options | 4/25 | 60 minutes | 4-5pm
- | TA and Networking Session I | 7/11 | 3 hours | 9am-12pm
4 | Determining the Best Approach to District-Determined Measures | 7/18 | 60 minutes | 4-5pm
5 | Measuring Student Growth and Piloting District-Determined Measures | 8/15 | 60 minutes | 4-5pm
- | TA and Networking Session II | 9/19 | 3 hours | 2:30pm-5:30pm
6 | Integrating Assessments into Educator Evaluation: Developing Business Rules and Engaging Staff | 10/24 | 60 minutes | 4-5pm
7 | Communicating Results | 12/5 | 60 minutes | 4-5pm
- | TA and Networking Session III | 12/12 | 3 hours | 2:30pm-5:30pm
8 | Sustainability | 1/23 | 60 minutes | 4-5pm
Audience & Purpose
Target audience: District teams that will be engaged in the work of identifying, selecting, and piloting District-Determined Measures.
After today, participants will understand:
Examples of practical solutions to issues of fairness in using District-Determined Measures (DDMs).
Practical examples of engaging educators in the process of implementing DDMs.
Agenda
Student Impact Rating Rollout Reminder
DDM Comparability
Identifying Bias
Standardizing DDMs
Ensuring Sufficient Variability
Q&A and Next Steps
Student Impact Rating Rollout
Date | Action
Sept. 2013 | Decide which DDMs to pilot and submit the list to ESE.
Sept. 2013 - June 2014 | Pilot DDMs in at least the five required areas and research DDMs in additional areas.
June 2014 | Submit final plans, including any extension requests, for implementing DDMs during the 2014-15 school year.*
SY 2014-2015 | Implement DDMs and collect Year 1 Student Impact Rating data for all educators (with the exception of educators who teach the particular grades/subjects or courses for which an extension has been granted).
SY 2015-2016 | Implement DDMs, collect Year 2 Student Impact Rating data, and determine and report Student Impact Ratings for all educators (with the exception of educators who teach the particular grades/subjects or courses for which a district has received an extension).
*ESE will release the June 2014 submission template and DDM implementation extension request form in December 2013.
DDM Key Questions
Is the measure aligned to content? Does it assess what the educators intend to teach and what's most important for students to learn?
Is the measure informative? Do the results tell educators whether students are making the desired progress, falling short, or excelling? Do the results provide valuable information to schools and districts about their educators?
Refining your Pilot DDMs
Districts will employ a variety of approaches to identify pilot DDMs (e.g., build, borrow, buy). Key considerations:
1. How well does the assessment measure growth?
2. Is there a common administration protocol?
3. Is there a common scoring process?
4. How do results correspond to low, moderate, or high growth?
5. Is the assessment comparable to other DDMs?
Use the DDM Key Questions and these considerations to strengthen your assessments during the pilot year.
DDM Comparability: Two Types
DDMs must be comparable across schools, grades, and subject matter district-wide (per 603 CMR 35.09(2)a).
Comparability comes in two types:
(Type 1) Comparable across schools
(Type 2) Comparable across grades and subject matter
Learn more in Technical Guide B, page 9 and Appendix G.
Comparability (Type 1)
Comparable across schools. Example: teachers with the same job (e.g., all 5th grade teachers).
Where possible, measures are identical; identical measures are easier to compare. But do identical measures provide meaningful information about all students?
When might they not be identical?
Different content (different sections of Algebra I)
Differences in untested skills (reading and writing on a math test for ELL students)
Other accommodations (fewer questions for students who need more time)
Error and Bias
Error is the difference between a student's true ability and the student's score.
Random error: the student sleeps poorly, makes a lucky guess, etc.
Systematic error (bias): error that occurs for one type or group of students, e.g., an ELL student misreads a set of questions.
Why this matters:
Error (OK) decreases with longer or additional measures.
Bias (BAD) does not decrease with longer or additional measures.
Even with identical DDMs, bias threatens comparability.
When does bias occur?
Situation: Students who score high on the pre-test have less opportunity to grow because they cannot exceed the top score (ceiling effect).
Situation: Special education students gain fewer points from pre-test to post-test, and as a result are less likely to be labeled as having high growth.
Checking for Bias
Do all students have an equal chance to grow? Is there a relationship between the initial score and the gain score? We can check this in Excel using correlation.
We have: pre-test score, post-test score, gain score.
Type =CORREL( to start the formula, highlight the pre-test scores, type a comma, highlight the gain (difference) scores, close the parentheses, and press Enter.
Correlation formula in Excel: =CORREL(PRE-TEST SCORES, GAIN SCORES)
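For teams working outside Excel, the same check can be scripted. The following is a minimal Python sketch, not part of the webinar materials: the score lists are hypothetical, and the correl function simply reproduces what Excel's =CORREL computes.

```python
# Minimal sketch: correlation between pre-test scores and gain scores.
# The data below are hypothetical; replace them with your own columns.
from statistics import mean, stdev

pre  = [12, 15, 18, 20, 22, 25, 28, 30]          # pre-test scores
post = [18, 22, 23, 26, 27, 28, 30, 31]          # post-test scores
gain = [b - a for a, b in zip(pre, post)]        # gain (difference) scores

def correl(x, y):
    """Pearson correlation, equivalent to Excel's =CORREL(x, y)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"Correlation between pre-test and gain: {correl(pre, gain):.2f}")
```

A value near zero suggests all students had a similar chance to grow; a value above .3 or below -.3 is worth a closer look.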
Interpreting Correlation
Correlation is the degree to which two sets of numbers are related. A correlation is a number between -1 and 1: a zero correlation means the numbers are unrelated, and values closer to 1 or -1 mean a stronger correlation.
DDMs should provide all students an opportunity to demonstrate growth, so we want to see little to no correlation between pre-test scores and gain scores. A correlation above .3 or below -.3 suggests that there are systematic differences in gain for low- and high-ability students.
Correlation Example
Demonstration of computing the correlation between pre-test and gain scores:
Very low correlation: students of all ability levels were equally likely to demonstrate growth.
Negative correlation: students of high ability systematically demonstrated less growth (due to a ceiling effect).
Positive correlation: students with lower scores generally grew less (bias).
Interpreting Correlation
A strong correlation is an indication of a problem, but a low correlation is not a guarantee of no bias. There may be a strong effect in a small sub-population, or counteracting effects at both the low and high end. Use common sense, and always look at a graph: create a scatter plot and look for patterns.
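Acting on the "always look at a graph" advice is easy to script as well. This is a minimal sketch assuming matplotlib is available; the scores are placeholders chosen to show a ceiling-effect pattern.

```python
# Minimal sketch: scatter plot of pre-test score vs. gain score.
# The data are placeholders; a downward trend like this one suggests a ceiling effect.
import matplotlib.pyplot as plt

pre  = [12, 15, 18, 20, 22, 25, 28, 30]
gain = [6, 7, 5, 6, 5, 3, 2, 1]

plt.scatter(pre, gain)
plt.xlabel("Pre-test score")
plt.ylabel("Gain score")
plt.title("Pre-test vs. gain scores")
plt.show()
```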
Example of Bias at the Teacher Level
Teacher A (Pre | Post | Gain):
3 | 4 | 1
3 | 4 | 1
3 | 4 | 1
3 | 4 | 1
8 | 14 | 6
Teacher B (Pre | Post | Gain):
3 | 4 | 1
8 | 14 | 6
8 | 14 | 6
8 | 14 | 6
8 | 14 | 6
Even though similar students gained the same amount, Teacher A's average gain is 2 and Teacher B's average gain is 5.
Solution: Grouping
Grouping allows teachers to be compared based on similar students, even when the number of those students is different.
Average growth by group:
Low students: Teacher A = 1, Teacher B = 1
High students: Teacher A = 6, Teacher B = 6
Addressing Bias: Grouping
How many groups? What bias are you addressing? Are there enough students in each group?
Using groups:
Weighted average (see the sketch below)
Rule based (all groups must be above a cut-off)
Professional judgment
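The weighted-average option can be sketched in a few lines. This is an illustration only: the cut score separating "low" from "high" students, the equal weighting of groups, and the data (taken from the Teacher A/Teacher B example above) are all assumptions.

```python
# Minimal sketch: compare teachers on similar students by averaging growth
# within pre-test groups, then giving each group equal weight.
# The cut score (pre-test >= 8 counts as "high") and the data are illustrative.
from statistics import mean

teacher_a = [(3, 1), (3, 1), (3, 1), (3, 1), (8, 6)]   # (pre, gain) pairs
teacher_b = [(3, 1), (8, 6), (8, 6), (8, 6), (8, 6)]

def grouped_average(students, cut=8):
    low  = [gain for pre, gain in students if pre < cut]
    high = [gain for pre, gain in students if pre >= cut]
    groups = [mean(g) for g in (low, high) if g]    # skip empty groups
    return mean(groups)                             # equal weight per group

print("Teacher A:", grouped_average(teacher_a))     # 3.5
print("Teacher B:", grouped_average(teacher_b))     # 3.5
```

With grouping, the two teachers from the earlier example receive the same average growth, even though they taught different mixes of students.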
Comparability (Type 2)
Comparability across different DDMs, across different grades and subject matter: are different DDMs held to the same standard of rigor?
This does not require an identical number of students in each of the three groups of low, moderate, and high growth. Apply a common-sense judgment of fairness.
One option: Standardization
Standardization is a process of putting different measures on the same scale. For example: most cars cost $25,000, give or take $5,000; most apples cost $1.50, give or take $0.50. Getting a $5,000 discount on a car is about equal to what discount on an apple?
In technical terms, "most cost about" is the mean and "give or take" is the standard deviation.
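To make the analogy concrete, the discount question can be answered by converting each discount into standard-deviation units (a z-score). This short sketch uses the figures from the slide; the arithmetic, not the code, is the point.

```python
# Minimal sketch: express each discount in standard-deviation units.
car_sd, apple_sd = 5000, 0.50      # "give or take" amounts from the slide

car_discount = 5000
z = car_discount / car_sd          # 1.0 standard deviation off the usual price
equivalent_apple_discount = z * apple_sd
print(equivalent_apple_discount)   # 0.50 -> roughly a 50-cent discount on an apple
```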
Guest Speaker
Jamie LaBillois, Executive Director of Instruction, Norwell Public Schools
Developing Local Norms
Student A's raw scores:
English: 15/20
Math: 22/25
Art: 116/150
Social Studies: 6/10
Science: 70/150
Music: 35/35
We learned early on that we needed a process that would create one universal measurement unit to discuss student progress.
Transform the Data…
How?
Step One: Calculate the difference between post-test and pre-test scores (or use any approach from Technical Guide B).
Step Two: Find the mean (average) of the difference scores.
Step Three: Find the standard deviation of the difference scores.
How?
Now we're ready to transform the difference scores into a universal measurement system.
Step Four: Calculate the z-score of each individual difference score:
z = (observation - mean) / standard deviation
Step Five: Calculate the percentile rank for each z-score.
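As a minimal sketch of Steps One through Five, assuming each assessment's difference scores are already in a list: the data are illustrative, and reading the percentile off the standard normal curve is one common choice; a district could instead use empirical ranks.

```python
# Minimal sketch: difference scores -> z-scores -> percentile ranks.
from statistics import mean, stdev, NormalDist

diffs = [2, 5, 3, 7, 4, 6, 3, 5, 4, 6]       # hypothetical difference scores

m, sd = mean(diffs), stdev(diffs)            # Steps Two and Three
z_scores = [(d - m) / sd for d in diffs]     # Step Four

# Step Five: percentile rank for each z-score, via the standard normal CDF.
percentiles = [round(NormalDist().cdf(z) * 100) for z in z_scores]
print(list(zip(diffs, percentiles)))
```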
Developing Local Norms
Student A, raw scores and percentile ranks:
English: 15/20 | 62 %ile
Math: 22/25 | 72 %ile
Art: 116/150 | 59 %ile
Social Studies: 6/10 | 71 %ile
Science: 70/150 | 70 %ile
Music: 35/35 | 61 %ile
Examining an Educator's Impact
Grade 4 DIBELS Oral Reading Fluency, median %ile per class:
Teacher 1: 65 %ile
Teacher 2: 71 %ile
Teacher 3: 59 %ile
Teacher 4: 59 %ile
Teacher 5: 62 %ile
Teacher 6: 57 %ile
Teacher 7: 29 %ile
Teacher 8: 50 %ile
Evaluator's Focus
Lessons Learned
Growth vs. Achievement
Robust Tool
Timely Analysis
Re-Assessment of Instruction
Re-Assessment of Ability vs. Disability
Development of Building-Based Evaluators
Educator Engagement is Essential
Ensuring Sufficient Variability
Technical Guide B's two key questions: Is the DDM aligned to content? Does the DDM provide information to educators and evaluators?
A lack of variability reduces information.
Looking for Variability
The second graph is problematic: it doesn't give us information about the difference between average and high growth, because so many students fall into the high-growth category.
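One quick variability check is to tally how many students land in each growth category. This sketch is illustrative only: the cut points between low, moderate, and high growth and the data are assumptions, not ESE guidance.

```python
# Minimal sketch: count students per growth category; a heavy pile-up in one
# category signals that the DDM provides little information about differences.
from collections import Counter

percentile_growth = [12, 35, 48, 55, 60, 62, 70, 71, 74, 88, 90, 95]

def category(p, low_cut=35, high_cut=65):
    if p < low_cut:
        return "low"
    return "high" if p > high_cut else "moderate"

print(Counter(category(p) for p in percentile_growth))
```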
Guest Speaker
Experience with constructing measures with greater variability
Wrap-Up
Today, we discussed three strategies for evaluating the fairness of your DDMs:
1. Check for bias by computing the correlation between pre-test scores and gain scores. Remember: a zero correlation indicates that all students have an equal chance to demonstrate growth.
2. Standardization can help you compare DDMs in different content areas.
3. Look for variability in student growth. A lack of variability reduces the amount of information available to educators about their students.
Resources
Available now at http://www.doe.mass.edu/edeval/ddm/:
Technical Guide B
DDMs and Assessment Literacy Webinar Series
Technical Assistance and Networking Sessions
Core Course Objectives and Example DDMs
Coming soon:
Using Current Assessments Guidance (Curriculum Summit)
Model Contract Language
DDM Pilot Plan Cohorts
Register for Webinar Series Part 7
Part 7: Communicating Results
Date: December 5th, 2013
Time: 4-5pm EST (60 minutes)
Register: https://air-event500.webex.com/air-event500/onstage/g.php?d=597905353&t=a
Questions
Contact:
Craig Waterman at cwaterman@doe.mass.edu
Ron Noble at rnoble@doe.mass.edu
Feedback
Tell us how we did: http://www.surveygizmo.com/s3/1421848/District-Determined-Measures-amp-Assessment-Literacy-Webinar-6-Feedback