
The Power of Reliable Rubrics: Promoting Equitable and Effective Assessment Practices through Collaboratively Designed Rubrics
Whitney Bortz, Leslie Daniel, & Jennifer McDonel (CEHD/STEL)
February 2016

Objectives of the Session
– Experience how rubrics can improve the assessment process
– Encounter a model for collaboratively writing assessment tools, introducing tools to faculty, and testing reliability using a data management system
– Consider how rubric assessment can be improved in our own institutions

Rate the painting below on a scale of 1 (bad) to 4 (great).

Reflection
– Did the rubric help you confidently rate the painting? Explain.
– How might the rubric have been helpful to the student artist?

Radford University Context
– Teacher education includes programs in several colleges
– Formerly accredited by the National Council for Accreditation of Teacher Education (NCATE)
– Now working toward accreditation from the Council for Accreditation of Educator Preparation (CAEP)
– CAEP requires common assessments across all initial licensure programs that demonstrate candidate performance on the Interstate Teacher Assessment and Support Consortium (InTASC) standards

Accreditation Challenges
Teacher preparation as a unit:
– Art Ed
– Dance Ed
– Early Childhood Ed
– Elementary Ed
– Middle Ed
– Music Ed
– Physical Ed
– Secondary Ed (English, Math, Science, and Social Studies)
– Special Ed (5 different licensure areas)
Common EPP unit-wide assessments:
– Intern evaluations
– Lesson plans
– Observations
– Impact on student learning assessment
– Professional characteristics and dispositions

Questions
– How many of you work directly with teacher preparation programs at your institution?
– Play a key role in preparing for CAEP or SPA (Specialized Professional Association) accreditation?
– Regularly use detailed rubrics to validate common assessments?
– If not from teacher preparation: what similarities do you have in accreditation/assessment requirements?
– Do any of you come from departments that utilize common assessments? If so, do these utilize detailed rubrics?

Benefits of Rubrics
– Influence the learning process positively (Jonsson & Svingby, 2007)
– Play a key role in formative assessment (Sadler, 1989)
– Assist specifically in the feedback process (Hattie & Timperley, 2007; Shute, 2008)
– Increase the quality of student performance (Petkov & Petkova, 2006)
– Help identify programmatic areas for improvement (Song, 2006; Andrade, 2005; Powell, 2001)
– Others?

Rubrics should:
– have enough indicators to encompass all aspects of the traits/skills in question
– have 3–5 rating levels for each indicator
– have clear, distinguishable descriptions of performance at each rating level
– be valid, reliable, and fair (Andrade, 2005) (also required by CAEP)
However, creating such rubrics requires much time and effort (Reddy & Andrade, 2010). Why? Judgments are inherently subjective (Turbow, 2015).
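The indicator-and-levels structure described above can be sketched as a small data model. This is an illustrative sketch of my own, not material from the presentation; the indicator names and level descriptors are invented:

```python
# Hypothetical analytic rubric: each indicator maps ordered rating levels
# (1 = lowest) to a performance descriptor. Names and wording are invented
# for illustration only.
RUBRIC = {
    "Alignment to standards": {
        1: "Objectives do not reference any standard.",
        2: "Objectives reference a standard but do not address it.",
        3: "Objectives partially address the cited standard.",
        4: "Objectives fully address the cited standard.",
    },
    "Clarity of objectives": {
        1: "Objectives are missing or unmeasurable.",
        2: "Objectives are stated but vague.",
        3: "Objectives are stated and mostly measurable.",
        4: "Objectives are specific, measurable, and student-centered.",
    },
}

def check_rubric(rubric, min_levels=3, max_levels=5):
    """Return indicators whose number of rating levels falls outside
    the 3-5 range recommended above."""
    return [name for name, levels in rubric.items()
            if not min_levels <= len(levels) <= max_levels]

print(check_rubric(RUBRIC))  # [] -> every indicator has 3-5 levels
```

A structure like this also makes it easy to render the rubric as a table or to verify that every rating level has a descriptor.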

What Makes a Good Rubric? According to CAEP (Chepko, 2014):
– Appropriate: aligned to standards
– Definable: clear, agreed-upon meaning
– Observable: quality of performance can be perceived
– Distinct from one another: each level defines a distinct level of candidate performance
– Complete: all criteria together describe the whole of the learning outcome

Validity (Allen & Knight, 2009)
Focus:
1. Ensure the concepts are learning skills students need (employer data and professional standards)
2. Ensure rubrics are professionally anchored (standards and experts)

Rubric Writing Tips
– Start from the middle and work out (Chepko, 2014; Tomei, 2014)
– Changes in levels:
– Additive: each level adds more advanced behaviors
– Qualitative: the quality of the behavior changes at each level
– The lowest level should contain a description of what the rater could expect to see rather than simply the "absence" of something.
– Align the rubric and the assignment guide, if applicable.

Common Issues
– Double- or multi-barreled criteria or descriptors in one row
– Use of subjective language
– Performance descriptors that do not encompass all possible scenarios
– Overlapping performance descriptors
– Use of technical language that may be interpreted differently across students and/or raters
– Others?

Formation of the Rubric Writing Team
– Director of STEL
– Director of Field Experience
– Director of Assessment
– Six faculty members: hand-selected, interdisciplinary, disciplinary experts
– Early childhood special education
– Elementary education
– Math education
– Middle education
– Music education
– Special education

Professional Learning: Wenger's Community of Practice (Wenger, 1998, p. 5)
Individual development as the negotiation of meanings through practice
Negotiation of:
– Expectation
– Meaning
– Priority
– Performance
Components of learning (Wenger, 1998):
– Community: learning as belonging
– Identity: learning as becoming
– Meaning: learning as experience
– Practice: learning as doing

Planning and Preparation
– Internal grant funding
– Established goals and tasks
– Collaboratively wrote grant proposal
– Established roles and expectations
– Training and consulting

Creating New Tools
– Collaborative writing
– Aligned to existing tools (e.g., lesson plans)
– Aligned to standards (InTASC)

The Writing Process
1. Whole-group collaborative writing
2. Split into three groups of 2–3 persons
3. Drafted the three assessment tools
4. Sent to whole group for feedback
5. Met with a consultant
6. Groups switched assessments for further editing
7. Whole group reviewed, edited, and commented
8. Final edits

Compare to final product (handout)

Launching the Rubrics
1. Roll out to faculty
– Accompanied by a guidance document
– Presented as a "suite" of assessments
2. Request for feedback
3. Inter-rater reliability exercise

Inter-rater Reliability Process (Turbow, 2015)
Select student work → Score independently → Share ratings → Discuss and approach consensus → Change assessment tool, as needed

Inter-rater Reliability Exercises
– Rated two sample lesson plans independently
– Shared ratings (see handout)
– Utilized the Delphi technique until consensus was reached (Hsu & Sandford, 2007)
– Consensus often required changes to the rubric and/or to the assignment guide
– Repeated with two new sample lesson plans using the modified rubric
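Once two raters have scored the same sample lesson plans, their agreement can be quantified before the consensus discussion. Below is a minimal sketch of my own (not from the presentation) computing percent agreement and Cohen's kappa from scratch; the sample scores are invented:

```python
# Illustrative inter-rater agreement for two raters' rubric scores.
# Cohen's kappa corrects raw agreement for agreement expected by chance.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters scored independently at their
    # observed marginal rates.
    p_e = sum(count_a[k] * count_b[k]
              for k in set(rater_a) | set(rater_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two raters score six hypothetical lesson plans on a 1-4 rubric scale.
a = [3, 4, 2, 3, 3, 1]
b = [3, 4, 2, 2, 3, 1]
print(round(percent_agreement(a, b), 2))  # 0.83
print(round(cohens_kappa(a, b), 2))       # 0.77
```

In practice a library such as scikit-learn offers an equivalent `cohen_kappa_score`; a kappa well below the raw agreement can signal that raters are clustering on only one or two rating levels.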

Activity: Evaluate part of a lesson plan using the Lesson Plan Rubric

Discussion and Sharing
– Describe rubric use at your institution.
– How are rubrics created?
– How are they used?
– What benefits have you seen?
– What are some areas of challenge or areas for growth?


References
Allen, S., & Knight, J. (2009). A method for collaboratively developing and validating a rubric. International Journal for the Scholarship of Teaching and Learning, 3(2), 1–17.
Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–31.
Chepko, S. (2014, September). Quality assessment workshop. Workshop presented at the annual meeting of the Council for Accreditation of Educator Preparation, Washington, DC.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hsu, C., & Sandford, B. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144.
Petkov, D., & Petkova, O. (2006). Development of scoring rubrics for IS projects as an assessment tool. Issues in Informing Science and Information Technology, 3, 499–510.

References (continued)
Powell, T. A. (2001). Improving assessment and evaluation methods in film and television production courses (Unpublished doctoral dissertation). Capella University, Minneapolis.
Song, K. H. (2006). A conceptual model of assessing teaching performance and intellectual development of teacher candidates: A pilot study in the US. Teaching in Higher Education, 11(2), 175–190.
Tomei, L. (2014, October). Rubric design webinar, part II. Presented online on behalf of LiveText.
Turbow, D. (2015, February). Introduction to rubric norming. Presented online on behalf of LiveText.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge University Press.