1
The Power of Reliable Rubrics: Promoting Equitable and Effective Assessment Practices through Collaboratively Designed Rubrics
Whitney Bortz, Leslie Daniel, & Jennifer McDonel (CEHD/STEL)
February 2016
2
Objectives of the Session
– Experience how rubrics can improve the assessment process
– Encounter a model for collaboratively writing assessment tools, introducing tools to faculty, and testing reliability using a data management system
– Consider how rubric assessment can be improved in our own institutions
3
Rate the painting below on a scale of 1 (bad) to 4 (great).
4
Reflection
– Did the rubric help you confidently rate the painting? Explain.
– How might the rubric have been helpful to the student artist?
5
Radford University Context
– Teacher education includes programs in several colleges
– Formerly accredited by the National Council for Accreditation of Teacher Education (NCATE)
– Now working toward accreditation from the Council for the Accreditation of Educator Preparation (CAEP)
– CAEP requires common assessments across all initial licensure programs that demonstrate candidate performance on the Interstate Teacher Assessment and Support Consortium (InTASC) standards
6
Accreditation Challenges
Teacher preparation as a unit:
– Art Ed
– Dance Ed
– Early Childhood Ed
– Elementary Ed
– Middle Ed
– Music Ed
– Physical Ed
– Secondary Ed (English, Math, Science, and Social Studies)
– Special Ed (5 different licensure areas)
Common EPP (Educator Preparation Provider) unit-wide assessments:
– Intern Evaluations
– Lesson Plans
– Observations
– Impact on student learning assessment
– Professional Characteristics and Dispositions
7
Questions
– How many of you work directly with teacher preparation programs at your institution?
  – Play a key role in preparing for CAEP or SPA (Specialized Professional Association) accreditation?
  – Regularly use detailed rubrics to validate common assessments?
– If you are not from teacher preparation, what similarities do you have in accreditation/assessment requirements?
– Do any of you come from departments that use common assessments? If so, do these use detailed rubrics?
8
Benefits of Rubrics
– Influence the learning process positively (Jonsson & Svingby, 2007)
– Play a key role in formative assessment (Sadler, 1989)
– Assist in the feedback process, specifically (Hattie & Timperley, 2007; Shute, 2008)
– Increase the quality of student performance (Petkov & Petkova, 2006)
– Help identify programmatic areas for improvement (Song, 2006; Andrade, 2005; Powell, 2001)
– Others?
9
Rubrics should:
– have enough indicators to encompass all aspects of the traits/skills in question
– have 3–5 rating levels for each indicator
– have clear, distinguishable descriptions of performance at each rating level
– be valid, reliable, and fair (Andrade, 2005) (also required by CAEP)
However, creating such rubrics requires much time and effort (Reddy & Andrade, 2010). Why? Judgments are inherently subjective (Turbow, 2015).
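As a minimal illustration (not from the presentation), the structural requirements above can be encoded and sanity-checked in code. The Python sketch below models a rubric as indicators with one descriptor per rating level and flags indicators outside the recommended 3–5 levels; all names and descriptors are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One row of a rubric: a trait plus one descriptor per rating level."""
    name: str
    level_descriptions: list  # ordered lowest to highest performance

@dataclass
class Rubric:
    outcome: str
    indicators: list

def flag_level_counts(rubric, min_levels=3, max_levels=5):
    """Return the names of indicators whose rating-level count falls outside 3-5."""
    return [ind.name for ind in rubric.indicators
            if not (min_levels <= len(ind.level_descriptions) <= max_levels)]

# Hypothetical fragment of a lesson plan rubric
lesson_plan_rubric = Rubric(
    outcome="Plans instruction aligned to standards",
    indicators=[
        Indicator("Learning objectives",
                  ["Objectives absent or vague",
                   "Objectives stated but not measurable",
                   "Objectives measurable",
                   "Objectives measurable and standards-aligned"]),
        Indicator("Assessment alignment", ["Weak", "Strong"]),  # only 2 levels: flagged
    ],
)
print(flag_level_counts(lesson_plan_rubric))  # ['Assessment alignment']
```

A check like this only tests structure; the harder requirements (clear, distinguishable, complete descriptors) still need human review of the kind described in the following slides.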
10
What makes a good rubric? According to CAEP (Chepko, 2014):
– Appropriate: aligned to standards
– Definable: clear, agreed-upon meaning
– Observable: the quality of performance can be perceived
– Distinct from one another: each level defines a distinct level of candidate performance
– Complete: all criteria together describe the whole of the learning outcome
11
Validity (Allen & Knight, 2009)
Focus:
1. Ensure the concepts assessed are the learning skills students need (employer data and professional standards)
2. Ensure rubrics are professionally anchored (standards and experts)
12
Rubric Writing Tips
– Start from the middle and work out (Chepko, 2014; Tomei, 2014)
– Changes in levels:
  – Additive: each level adds more advanced behaviors
  – Qualitative: the quality of the behavior changes at each level
– The lowest level should contain a description of what the rater could expect to see rather than simply the "absence" of something.
– Align the rubric and the assignment guide, if applicable.
13
Common Issues
– Double- or multi-barreled criteria or descriptors in one row
– Use of subjective language
– Performance descriptors that do not encompass all possible scenarios
– Overlapping performance descriptors
– Use of technical language that may be interpreted differently across students and/or raters
– Others?
14
Formation of the Rubric Writing Team
– Director of STEL
– Director of Field Experience
– Director of Assessment
– Six faculty members: hand-selected, interdisciplinary, disciplinary experts in:
  – Early childhood special education
  – Elementary education
  – Math education
  – Middle education
  – Music education
  – Special education
15
Professional Learning: Wenger's Community of Practice (Wenger, 1998, p. 5)
Individual development as the negotiation of meaning through practice
Negotiation of:
– Expectation
– Meaning
– Priority
– Performance
Components of learning (Wenger, 1998):
– Community: learning as belonging
– Identity: learning as becoming
– Meaning: learning as experience
– Practice: learning as doing
16
Planning and Preparation
– Internal grant funding
– Established goals and tasks
– Collaboratively wrote grant proposal
– Established roles and expectations
– Training and consulting
17
Creating New Tools
– Collaborative writing
– Aligned to existing tools (e.g., lesson plans)
– Aligned to standards (InTASC)
18
The Writing Process
1. Whole-group collaborative writing
2. Split into three groups of 2–3 persons
3. Drafted the three assessment tools
4. Sent drafts to the whole group for feedback
5. Met with a consultant
6. Groups switched assessments for further editing
7. Whole group reviewed, edited, and commented
8. Final edits
19
Compare to final product (handout)
20
Launching the Rubrics
1. Roll out to faculty
   – Accompanied by a guidance document
   – Presented as a "suite" of assessments
2. Request for feedback
3. Inter-rater reliability exercise
21
Inter-rater Reliability Process (Turbow, 2015)
1. Select student work
2. Score independently
3. Share ratings
4. Discuss and approach consensus
5. Change the assessment tool, as needed
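The slides do not prescribe a statistic for the share-ratings step, but one common, hedged choice is to report percent agreement alongside Cohen's kappa, which corrects raw agreement for chance. The Python sketch below uses hypothetical scores from two raters on ten rubric indicators (1–4 scale):

```python
from collections import Counter

def percent_agreement(scores_a, scores_b):
    """Fraction of items on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

def cohens_kappa(scores_a, scores_b):
    """Agreement corrected for chance: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(scores_a)
    p_o = percent_agreement(scores_a, scores_b)
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    # Chance agreement expected from each rater's marginal score frequencies
    p_e = sum(freq_a[s] * freq_b[s] for s in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores from two raters on ten rubric indicators (1-4 scale)
rater_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_2 = [3, 3, 4, 3, 1, 2, 2, 4, 2, 3]
print(f"percent agreement: {percent_agreement(rater_1, rater_2):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(rater_1, rater_2):.2f}")       # 0.71
```

Kappa is lower than raw agreement whenever some agreement would be expected by chance; rising kappa across the consensus rounds described on the next slide is one sign the norming is working.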
22
Inter-rater Reliability Exercises
– Rated two sample lesson plans independently
– Shared ratings (see handout)
– Utilized the Delphi technique until consensus was reached (Hsu & Sandford, 2007)
  – Consensus often required changes to the rubric and/or to the assignment guide
– Repeated with two new sample lesson plans using the modified rubric
23
Activity: Evaluate part of a lesson plan using the Lesson Plan Rubric
24
Discussion and Sharing
Describe rubric use at your institution:
– How are rubrics created?
– How are they used?
– What benefits have you seen?
What are some areas of challenge or areas for growth?
25
Resources
http://www.introductiontorubrics.com/samples.html
https://www.aacu.org/value-rubrics
http://manoa.hawaii.edu/assessment/howto/rubrics.htm
26
References
Allen, S., & Knight, J. (2009). A method for collaboratively developing and validating a rubric. International Journal for the Scholarship of Teaching and Learning, 3(2), 1–17. Retrieved from http://www.georgiasouthern.edu/ijsotl
Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–31.
Chepko, S. (2014, September). Quality assessment workshop. Workshop presented at the annual meeting of the Council for the Accreditation of Educator Preparation, Washington, DC.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Hsu, C., & Sandford, B. (2007). The Delphi technique: Making sense of consensus. Practical Assessment, Research & Evaluation, 12(10), 1–8.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144.
Petkov, D., & Petkova, O. (2006). Development of scoring rubrics for IS projects as an assessment tool. Issues in Informing Science and Information Technology, 3, 499–510.
27
References
Powell, T. A. (2001). Improving assessment and evaluation methods in film and television production courses (Unpublished doctoral dissertation). Capella University, Minneapolis, MN.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119–144.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.
Song, K. H. (2006). A conceptual model of assessing teaching performance and intellectual development of teacher candidates: A pilot study in the US. Teaching in Higher Education, 11(2), 175–190.
Tomei, L. (2014, October). Rubric design webinar, Part II. Presented online on behalf of LiveText.
Turbow, D. (2015, February). Introduction to rubric norming. Presented online on behalf of LiveText.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.