
School Improvement, Short Cycle Assessments and Educator Evaluation
Orlando, June 22, 2011
Allan Odden and Anthony Milanowski
Strategic Management of Human Capital (SMHC), University of Wisconsin-Madison

Overview of Presentation
1. The prime challenge is to improve student performance
2. Key strategy to attain that goal (the focus today): talent and human capital management
3. Supporting tactic for talent management: multiple measures of effectiveness used in new teacher evaluation systems

Improving Student Performance
– CPRE research and Lawrence O. Picus and Associates research from school finance adequacy studies: Odden (2009) and Odden & Archibald (2009)
– Research by others: Ed Trust, Karen Chenoweth (2007), Supovitz, etc.
– Schools from urban, suburban, and rural communities
– Many schools and districts with high concentrations of children from low-income and minority backgrounds
– Finalists for the Broad Prize in Urban Education
= Ten Strategies for Improving Performance

Ten Strategies to Improve Performance
1. Initial data analysis, largely analyzing state accountability tests
2. Set high, ambitious goals: double student performance, 90% to advanced standards
3. Adopt new curriculum materials and, over time, a systemic view of effective instructional practices that all teachers are expected to implement
4. Implement data-based decision making with benchmark and short-cycle assessments (e.g., Renaissance Learning STAR Enterprise)

Ten Strategies to Improve Performance
5. Invest in comprehensive, ongoing professional development, including instructional coaches in all schools
6. Use school time more effectively: protected core subject times for reading and math, and collaborative time for teacher teams working in Professional Learning Communities
7. Multiple extra-help strategies for struggling students: tutoring, extended day, summer
8. Widespread distributed instructional leadership

Ten Strategies to Improve Performance
9. Reflect "best practices" and incorporate research knowledge rather than doing your own thing
10. Be serious about talent: finding it, developing it, distinguishing effective from ineffective teachers and principals; promoting, highly compensating, and retaining teachers and principals only on the basis of measures of effectiveness

Human Capital Management
– The Obama/Duncan administration has made improving teacher and principal talent, and their effectiveness, central to education reform
– Goal: put an effective teacher into every classroom and an effective principal into every school
– To implement these practices and manage teachers (and principals) around them, develop multiple measures of teacher effectiveness (long-hand for new teacher evaluation systems)
– Scores of states and districts are working on this issue
– These issues are also central to ESEA reauthorization
– The question is not whether teacher evaluation will change but how it will change

Core Elements of the Strategy: Multiple Measures of Teaching Effectiveness
1. Measures of instructional practice (several systems)
2. Measures of pedagogical content knowledge
3. Student perceptions of the academic environment
4. Indicators of impact on student learning
All this is now mandated by Illinois law.
Use of those measures:
a) In new evaluation systems, for teachers and principals
b) For tenure
c) For distributing and placing effective teachers
d) For dismissing ineffective teachers
e) For compensating teachers

Teacher Evaluation
Two major pieces of the evaluation:
1. Measure of instructional practice: Danielson Framework, INTASC, Connecticut BEST system, CLASS, PACT, National Board, the new North Carolina system. See Milanowski, Heneman & Kimball, Review of Teaching Performance Assessments for Use in Human Capital Management (2009), available under Resources at www.smhc-cpre.org
2. Measure of impact on student learning:
   a. The only model at the present time is value-added using end-of-year state summative tests
   b. One new proposal is to use interim/short-cycle (every 4-6 weeks) assessment data, aligned to state content standards, that show student/classroom growth relative to a normed (national or state?) growth trajectory

Other National Efforts
Measuring Effective Teaching project of the Gates Foundation:
– Multiple value-added measures
– Several teacher rubrics, with a video tool to replace direct observations
– Student survey (Ron Ferguson)

Measuring Educator Performance

Specifically, focus on short-cycle assessments

Measuring Educator Performance  The indicators of impact on student learning, must devolve from tests that: 1.Are valid and Reliable 2.Are instructionally sensitive and instructionally useful (linked to state content standards and provide data to teachers about how to improve instructional practice) 3.Provide stable results, which mean they should be given multiple times a year (every 4-6 weeks) Many state accountability tests fall short of these psychometric standards 14

Measuring Educator Performance  Be very helpful if the data system can be used: By teachers to guide their instructional practice To roll up the individual data to the classroom to indicate teacher impact on student learning gains To roll up the individual data to the grade level and/or the school level to indicate impact of school and school leadership on student learning gains 15

Final Contextual Comment
– All these systems must be embedded within a framework of ongoing educator development
– AND during these tight fiscal times, funds for professional development should NOT be cut


Multiple Measures of Teaching Performance for Accountability & Development
Standard prescription: instructional practice measure (e.g., teacher evaluation ratings) + gain, growth, or value-added based on state standards-based assessments
But:
– Practice ratings and assessment gain, growth, or value-added don't measure the same thing; their measurement error sources are different and don't cancel
– Gain, growth, or value-added on state assessments is of limited use for teacher development

Advantages of Adding Short-Cycle Assessments to the Mix
1. For teacher development: because such assessments are frequent, teachers get feedback that they can use to adjust instruction before the state test
   – Teachers can see whether student achievement is improving and, if the assessments are linked to state proficiency levels, whether students are on track to proficiency
2. For teacher accountability:
   – More data points allow estimation of a growth curve
   – The growth curve represents learning within a single school year; there is no summer to confuse attribution
   – The slope of the average growth curve, or the average difference between predicted end points, provides another indicator of teaching effectiveness (a minimal computational sketch follows this list)
   – Combining with growth, gain, or value-added based on state assessments provides multiple measures of productivity
   – If linked to state assessments, results can predict proficiency growth over the school year
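
As a rough illustration of the growth-curve idea above, the following minimal Python sketch fits a straight line to monthly classroom-mean scores and compares the class slope to a reference-norm slope. All data, values, and function names here are hypothetical and are not part of the original presentation.

```python
# Minimal sketch (assumption, not the slides' method): estimating a classroom's
# within-year growth slope from monthly short-cycle assessment scores.
import numpy as np

def growth_slope(months, mean_scores):
    """Fit a straight line to monthly mean scale scores and return its slope
    (scale-score points gained per month)."""
    slope, _intercept = np.polyfit(np.asarray(months, dtype=float),
                                   np.asarray(mean_scores, dtype=float), deg=1)
    return slope

# Hypothetical example: September-May classroom means on a vertical scale.
months      = [1, 2, 3, 4, 5, 6, 7, 8, 9]
class_means = [602, 608, 611, 619, 624, 631, 636, 640, 647]
norm_means  = [600, 605, 610, 615, 620, 625, 630, 635, 640]  # reference-norm trajectory

print(f"class slope: {growth_slope(months, class_means):.1f} pts/month "
      f"vs. norm: {growth_slope(months, norm_means):.1f} pts/month")
```

The difference between the two slopes is one simple way to summarize whether a class is growing faster or slower than similar classes.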

Short Cycle Assessment Growth Curve

Issues in Combining Practice & Student Achievement Measures
Models: Report Card, Compensatory, Conjoint
When combining, need to address:
– Different distributions, scales, and reference points
– Weighting in compensatory models:
   – Equal
   – Policy
   – Proportional to reliability

Report Card Model
Performance domain: Instructional Practice
   – Performance dimensions: Planning & Assessment; Classroom Climate; Instruction
   – Score levels: 1-4
   – Requirement for being considered effective: rating of 3 or higher on all dimensions
Performance domain: Professionalism
   – Performance dimensions: Cooperation; Attendance; Development
   – Score levels: 1-4
   – Requirement for being considered effective: rating of 3 or higher on all dimensions
Performance domain: Student Growth, Gain, or Value-Added on State Assessments
   – Performance dimensions: Math; Reading/ELA; Other Tested Subjects
   – Score levels: percentiles in the state/district distribution for each subject
   – Requirement for being considered effective: 3rd quintile or higher for all tested subjects
Performance domain: Student Growth on Short-Cycle Assessments
   – Performance dimensions: Math; Reading
   – Score levels: average growth curve translated into predicted state-test scale-score change
   – Requirement for being considered effective: predicted gain over the year sufficient to bring a student from the middle of the "Basic" range to "Proficient"
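
A hedged sketch of how the Report Card Model's per-domain "effective" requirements might be checked programmatically, using the thresholds from the table above. The data structures, function name, and example values are illustrative assumptions, not part of the original system.

```python
# Hedged sketch (our illustration, not the slides' system): checking the
# Report Card Model's per-domain requirements for being considered effective.
def report_card_effective(practice, professionalism, state_quintiles, short_cycle_on_track):
    """Return (overall_effective, per-domain checks) for one teacher."""
    checks = {
        # Instructional practice: rating of 3+ on every dimension (1-4 scale)
        "instructional_practice": all(r >= 3 for r in practice.values()),
        # Professionalism: rating of 3+ on every dimension (1-4 scale)
        "professionalism": all(r >= 3 for r in professionalism.values()),
        # State-test growth: 3rd quintile or higher in every tested subject
        "state_assessment_growth": all(q >= 3 for q in state_quintiles.values()),
        # Short-cycle growth: predicted gain enough to move mid-"Basic" to "Proficient"
        "short_cycle_growth": bool(short_cycle_on_track),
    }
    return all(checks.values()), checks

effective, detail = report_card_effective(
    practice={"planning_assessment": 3, "classroom_climate": 4, "instruction": 3},
    professionalism={"cooperation": 4, "attendance": 3, "development": 3},
    state_quintiles={"math": 3, "reading_ela": 4},
    short_cycle_on_track=True,
)
print(effective, detail)
```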

Scales, Distributions, & Reference Points for Value-Added vs. Practice

Putting Practice Ratings and Student Achievement on the Same Scale
Emerging practice: rescale the growth, gain, or value-added measure to match the practice rating scale
– Standardize and set cut-off points in units of standard error, standard deviation, or percentiles
– Distinguished (4): more than 1.5 S.E. above the mean (70th percentile and above)
– Proficient (3): within +/- 1.5 S.E. of the mean (30th to 69th percentile)
– Basic (2): 1.5 to 2 S.E. below the mean (15th to 29th percentile)
– Unsatisfactory (1): more than 2 S.E. below the mean (below the 15th percentile)
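
To make the rescaling concrete, here is a minimal sketch, under our own assumptions rather than an official rule, that maps a value-added estimate expressed in standard-error units from the mean onto the 1-4 rating categories using the cut-offs listed above; the handling of exact boundary values is a guess.

```python
# Minimal sketch (assumption): convert a standardized value-added estimate
# (in standard-error units from the mean) to the 4-point rating scale.
def va_to_rating(va_in_se_units):
    if va_in_se_units > 1.5:
        return 4   # Distinguished: more than 1.5 S.E. above the mean
    if va_in_se_units >= -1.5:
        return 3   # Proficient: within +/- 1.5 S.E. of the mean
    if va_in_se_units >= -2.0:
        return 2   # Basic: 1.5 to 2 S.E. below the mean
    return 1       # Unsatisfactory: more than 2 S.E. below the mean

print(va_to_rating(1.8), va_to_rating(0.0), va_to_rating(-1.7), va_to_rating(-2.3))
# -> 4 3 2 1
```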

Compensatory (Weighted Average) Model for Combining Performance Measures
– Growth, Gain, or Value-Added on State Test: rating 2, weight 25%, product 0.50
– Growth as Measured by Short-Cycle Assessments: rating 3, weight 25%, product 0.75
– Practice Evaluation: rating 4, weight 50%, product 2.00
– Weighted average (summary score): 3.25
Rating scale: 1 = Unsatisfactory, 2 = Basic, 3 = Proficient, 4 = Distinguished
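
The weighted-average arithmetic in this model is simple; the short sketch below reproduces the example's summary score of 3.25 (the helper name and structure are ours, not the system's).

```python
# Sketch of the compensatory (weighted average) model from the example above.
def weighted_rating(components):
    """components: list of (rating on the 1-4 scale, weight) pairs; weights sum to 1."""
    return sum(rating * weight for rating, weight in components)

summary = weighted_rating([
    (2, 0.25),  # growth/gain/value-added on the state test
    (3, 0.25),  # growth on short-cycle assessments
    (4, 0.50),  # instructional practice evaluation
])
print(summary)  # 3.25 on the 1-4 scale
```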

Conjoint Model for Combining 2 Measures
[Matrix: the Teaching Practice rating and the Student Outcome rating jointly determine a summary rating of Unsatisfactory, Basic, Proficient, or Advanced]

Conjoint Model for Combining 3 Measures
To get a summary rating of 4, need scores of at least: 4 on two measures and 3 on the other
To get a summary rating of 3, need scores of at least: 2 on the practice measure and 4 on both student achievement measures, or 3 on the practice measure and 3 on at least one of the student achievement measures
To get a summary rating of 2, need scores of at least: 2 on the practice measure and 2 on either of the student achievement measures
To get a summary rating of 1, need scores of at least: 1 on the practice measure and 1 on either student achievement measure
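
One way to read the 3-measure conjoint rules above is as a lookup function; the sketch below is our paraphrase of those rules in code, not an official implementation, and the treatment of score combinations not explicitly named is an assumption.

```python
# Hedged sketch of the 3-measure conjoint rules (our paraphrase of the table).
def conjoint_summary(practice, state_growth, short_cycle_growth):
    """All three inputs are ratings on the 1-4 scale; returns the summary rating."""
    scores = [practice, state_growth, short_cycle_growth]
    achievement = [state_growth, short_cycle_growth]
    # 4: at least 4 on two measures and at least 3 on the other
    if sorted(scores, reverse=True)[:2] == [4, 4] and min(scores) >= 3:
        return 4
    # 3: practice >= 2 with 4 on both achievement measures, or practice >= 3
    #    with 3 on at least one achievement measure
    if (practice >= 2 and all(a >= 4 for a in achievement)) or \
       (practice >= 3 and any(a >= 3 for a in achievement)):
        return 3
    # 2: practice >= 2 and at least 2 on either achievement measure
    if practice >= 2 and any(a >= 2 for a in achievement):
        return 2
    # 1: everything else on the 1-4 scale
    return 1

print(conjoint_summary(4, 4, 3))  # -> 4
print(conjoint_summary(3, 2, 3))  # -> 3
print(conjoint_summary(2, 2, 1))  # -> 2
```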

Other Ideas About Using Short-Cycle Assessments in Educator Evaluation

Impact on Student Learning
– The conversation is about "multiple indicators" for this category
– BUT few if any places actually have viable multiple indicators
– The prime, and in most cases only, indicator here is a value-added measure derived from state summative accountability tests

Impact on Student Learning
– Most teachers do not like value-added measures based on end-of-year state summative tests; they don't understand them, and they don't like the state tests
– So what could be practical additional indicators: indicators that could augment the value-added statistics derived from state summative tests (which, whatever our viewpoint, will probably not go away)?

Impact on Student Learning
– Interim, short-cycle assessments are given multiple times during the year
– Interim short-cycle assessments (STAR is one example) are used to help teachers improve instruction and can also be used to show student and classroom growth
– This is the only new, viable, specific idea in this area now on the table, and it gives comparable evidence across teachers

Several Additional Indicators
Background points:
– STAR Reading and Math cover grades K-12, so they cover classrooms beyond the standard tested grades (3-8 and 11)
– Administered in a computer-based format, so they provide immediate feedback to teachers for use in instructional improvement and change
– Vertically aligned scales, so scores can be compared across months and years
– The following charts derive from individual student data

First Set of Ideas: Chart 1

Chart 1
– Interim assessments given monthly
– Student data aggregated to the classroom
– Red squares are the progress line for similar classes of students in a state (or nation)
– Green triangles are actual class progress
– Yellow star is the state proficiency level
– Shows growth during just the months of the academic year

Chart 1
– Modest student learning when the class had a substitute teacher
– Growth happened when the regular teacher returned
– Actual class growth (green triangles) was much greater than the reference norm (red squares)
– In value-added terms, the class would have a high value-added: performance growth was above the average (the red-square trend line) for this typical classroom

Chart 1
How to use these data: compare growth of this class to other classes:
a. In the same school
b. In the same district
c. With similar demographics
d. In the same state
e. Across the nation
f. To classes in schools with similar demographics
There are many different ways to use the data in such a chart.

Chart 2

Chart 2
– Interim assessments given monthly
– Student data aggregated to the school level
– Red squares are the progress line for schools with similar students in a state (or nation)
– Green triangles are actual school progress
– Yellow star is the state proficiency level
– Shows growth during just the months of the academic year

Chart 2
– Rolls the classroom data up to the school level
– Could also roll up student data across classes for each grade
– Multiple ways to create an indicator

Chart 2
How to use these data:
1. Shows the school performed above the state proficiency level
2. Shows the school performed above similar schools
3. Compare the end-of-year score on the interim assessments with the end-of-year score on the state summative test: do both show that proficiency was exceeded?
4. Compute the "standard deviation" of change from fall to spring and COMPARE it to the "standard deviation" of change on the state summative test (one reading of this computation is sketched after this list)
5. Compare the end-of-year interim assessment score to the end-of-year state proficiency score, in terms of standard deviations above the proficiency level, or the standard deviation of growth over the year
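
As referenced in point 4, one plausible reading of the standardized-change comparison is an effect-size-style metric: the mean fall-to-spring gain divided by the spread of fall scores. This is our interpretation and illustration only; the specific metric the slide intends is not stated, and all data below are hypothetical.

```python
# Hedged sketch (our assumed metric): express fall-to-spring growth on the
# interim assessment in standard-deviation units so it can be compared with
# standardized growth on the state summative test.
import statistics

def standardized_change(fall_scores, spring_scores):
    """Mean spring-minus-fall gain divided by the SD of fall scores."""
    gains = [s - f for f, s in zip(fall_scores, spring_scores)]
    return statistics.mean(gains) / statistics.stdev(fall_scores)

# Hypothetical student scale scores for one school on the interim assessment.
fall = [590, 602, 575, 610, 598, 585]
spring = [640, 655, 620, 660, 645, 630]
print(f"standardized growth: {standardized_change(fall, spring):.2f}")
# Compute the same quantity from state summative test scores and compare.
```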

Chart 2
– Compare to other schools in the district
– Compare to other schools in the state
– Compare to other schools in the nation
– Compare to schools with similar demographics
– Compare value-added or growth scores on the interim assessments to those on the state summative assessments

Chart 2
When data are rolled up to the school level, they provide additional indicators for:
– Education systems, like Hillsborough (FL), that are using schoolwide gains for teachers of non-tested subjects

Chart 3

Chart 3
– Indicates whether a classroom (or school) is at a low performance level with high or low growth, OR at a high performance level with high or low growth
– Could be used simply to give points for high growth, or negative points if both the performance level and growth are low (which indicates a real performance issue)

Final Comments
– Interim short-cycle assessments can be used to provide additional indicators of teacher (or school) impact on learning growth
– These data supplement what is shown by value-added based on state summative tests
– Such data thus reduce the weight given to the state-test indicators
– And these data derive from a system designed to help teachers get better at teaching

Allan Odden
University of Wisconsin-Madison