Rubrics, and Validity, and Reliability: Oh My! Pre-Conference Session, The Committee on Preparation and Professional Accountability, AACTE Annual Meeting.

Presentation transcript:

Rubrics, and Validity, and Reliability: Oh My! Pre-Conference Session, The Committee on Preparation and Professional Accountability, AACTE Annual Meeting 2016

CPPA Members
George Drake, Dean, College of Education and Human Services, Millersville University
Mark Meyers, Educational Administration Program Director, Xavier University
Trish Parrish (Committee chair), Associate Vice President, Academic Affairs, Saint Leo University
Debbie Rickey, Associate Dean, College of Education, Grand Canyon University
Carol Ryan, Associate Dean, College of Education and Human Services, Northern Kentucky University
Jill Shedd, Assistant Dean for Teacher Education, School of Education, Indiana University
Carol Vukelich (Board liaison), Interim Dean, College of Education and Human Development, University of Delaware

Agenda
Welcome and Introductions
Rubric Design
BREAK
Table Work: Improving Rubrics
Application: Rubrics for Field Experiences
Debrief and Wrap-Up

Table Discussion
What type of assessment system does your EPP use?
Do you use locally developed instruments as part of your key assessments of teacher candidates?
Have you followed a formal process to establish the reliability and validity of these instruments?

Rubric Design
Role of rubrics in assessment:
Formative and Summative Feedback
Transparency of Expectations
Illustrative Value

Rubric Design
Criteria for Sound Rubric Development:
Appropriate
Definable
Observable
Diagnostic
Complete
Retrievable

Rubric Design
Steps in Writing Rubrics:
Select Criteria
Set the Scale
Label the Ratings
Identify Basic Meaning
Describe Performance

Resources for Rubric Design
National Postsecondary Education Cooperative
Rubric Bank at the University of Hawaii
University of Minnesota
Penn State Rubric Basics
AAC&U VALUE Project (VALUE Rubrics)
iRubric

Introduction to Validity
Construct Validity: How well a rubric measures what it claims to measure
Content Validity: An estimate of how well the rubric aligns with all elements of a construct
Criterion Validity: Correlation with standards
Face Validity: A measure of how representative a rubric is "at face value"

Table Discussion
Which of the types of validity would be most helpful for locally developed rubrics?
Construct
Content
Criterion
Face

Approaches to Establishing Validity
Locally established methodology, such as tagging, developed by the EPP, with the rationale provided by the EPP
Research-based methodology, such as the Lawshe method, which removes the need for the EPP to develop its own rationale

Tagging Example
CAEP 1.2: Providers ensure that candidates use research and evidence to develop an understanding of the teaching profession and use both to measure their P-12 students’ progress and their own professional practice.

Component descriptor: Developing objectives
Ineffective: Lists learning objectives that do not reflect key concepts of the discipline.
Emerging: Lists learning objectives that reflect key concepts of the discipline but are not aligned with relevant state or national standards.
Target: Lists learning objectives that reflect key concepts of the discipline and are aligned with state and national standards.

Component descriptor: Uses pre-assessments
Ineffective: Fails to use pre-assessment data when planning instruction.
Emerging: Considers baseline data from pre-assessments; however, pre-assessments do not align with stated learning targets/objectives.
Target: Uses student baseline data from pre-assessments that are aligned with stated learning targets/objectives when planning instruction.

Component descriptor: Planning assessments
Ineffective: Plans methods of assessment that do not measure student performance on the stated objectives.
Emerging: Plans methods of assessment that measure student performance on some of the stated objectives.
Target: Plans methods of assessment that measure student performance on each objective.
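To make the tagging approach concrete, here is a minimal sketch of how an EPP might record the tags that align rubric components to standards and check that every component is tagged. The component names and standard codes are illustrative placeholders, not the session's actual alignment.

```python
# Minimal sketch of a "tagging" record: each rubric component is mapped to the
# standards reviewers believe it measures. Names and codes are illustrative only.
rubric_tags = {
    "Developing objectives": ["CAEP 1.2", "InTASC 7"],
    "Uses pre-assessments":  ["CAEP 1.2", "InTASC 6"],
    "Planning assessments":  ["CAEP 1.2", "InTASC 6"],
}

def untagged_components(tags):
    """Return components that reviewers have not aligned to any standard."""
    return [component for component, standards in tags.items() if not standards]

def components_per_standard(tags):
    """Count how many rubric components claim alignment to each standard."""
    counts = {}
    for standards in tags.values():
        for std in standards:
            counts[std] = counts.get(std, 0) + 1
    return counts

if __name__ == "__main__":
    print("Untagged components:", untagged_components(rubric_tags))
    print("Coverage by standard:", components_per_standard(rubric_tags))
```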

Table Discussion
Why is reliability important in locally developed rubrics?
Which types of reliability are most important for locally developed rubrics?

Introduction to Reliability
Inter-Rater: Extent to which different assessors have consistent results
Test-Retest: Consistency of a measure from one time to another
Parallel Forms: Consistency of results of two rubrics designed to measure the same content
Internal Consistency: Consistency of results across items in a rubric
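As a rough illustration of how two of these estimates translate into numbers, the sketch below uses hypothetical rubric scores (and NumPy, which the session does not prescribe) to compute a test-retest correlation and Cronbach's alpha as an internal-consistency estimate.

```python
import numpy as np

# Hypothetical data: 6 candidates scored twice on the same rubric (test-retest),
# and the same 6 candidates' scores on 4 rubric items (internal consistency).
time1 = np.array([3, 2, 4, 3, 1, 4])
time2 = np.array([3, 2, 3, 3, 2, 4])

# Test-retest reliability: Pearson correlation between the two administrations.
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Internal consistency (Cronbach's alpha): rows = candidates, columns = items.
item_scores = np.array([
    [3, 3, 2, 3],
    [2, 2, 2, 1],
    [4, 3, 4, 4],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
])
k = item_scores.shape[1]
item_variances = item_scores.var(axis=0, ddof=1)
total_variance = item_scores.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Test-retest r: {test_retest_r:.2f}")
print(f"Cronbach's alpha: {cronbach_alpha:.2f}")
```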

Approaches to Establishing Reliability
This can be done via a research-based approach or through a locally based approach.
More on this in the second half of our presentation!

Time to Practice
Opportunities to practice two methods of establishing validity:
Criterion (correlation with standards)
Content (estimate of how the rubric aligns with all elements of a construct)
An opportunity to practice inter-rater reliability (extent to which different assessors have consistent results)

Criterion Validity: Correlation with Standards
As an individual:
Review the Learning Environment section on the blue document.
"Tag" each of the elements in that section to the appropriate InTASC and CAEP standards.
As a table:
Come to a consensus on the most appropriate "tags" for each element of the Learning Environment section.

Content Validity: Using the Lawshe Method
As an individual, review and rate each element of the Designing and Planning Instruction section on the yellow document.
Choose a table facilitator.
Tally the individual ratings.
Calculate the CVR value of each element:
CVR = (n_e - N/2) / (N/2), where n_e = the number of experts who rated the element "essential" and N = the total number of experts.
The closer the CVR is to +1.0, the more essential the element.
What CVR value does your group suggest as the minimum score for keeping an element?
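The CVR formula above takes only a few lines of code to apply. The sketch below assumes hypothetical panel tallies and element names (not the session's yellow-sheet data), and the 0.62 cutoff is just an example; Lawshe's published critical values depend on the size of the panel.

```python
# Lawshe content validity ratio: CVR = (n_e - N/2) / (N/2),
# where n_e = number of experts rating the element "essential" and N = total experts.
def content_validity_ratio(n_essential, n_experts):
    half = n_experts / 2
    return (n_essential - half) / half

# Hypothetical tallies from a table of 10 reviewers; element names are placeholders.
ratings = {
    "States measurable objectives": 9,
    "Aligns activities to objectives": 8,
    "Plans for differentiation": 6,
    "Lists required materials": 3,
}
N = 10
minimum_cvr = 0.62  # example cutoff chosen by the group; critical values vary with N

for element, n_essential in ratings.items():
    cvr = content_validity_ratio(n_essential, N)
    decision = "keep" if cvr >= minimum_cvr else "discuss or cut"
    print(f"{element}: CVR = {cvr:+.2f} ({decision})")
```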

Content Validity
Discuss the elements that would be cut:
Why was the element rated as less than essential?
Can or should it be reworded to have a higher CVR value?
If so, how would you reword it?

Inter-Rater Reliability (extent to which different assessors have consistent results)
Two raters observe the same lesson, either in person or via a recorded lesson.
The raters can be two university clinical educators, two P-12 clinical educators, or one university and one P-12 clinical educator.
Each rater independently completes the observation form.
Statistics are then run (in SPSS or Excel) to determine the amount of agreement between the two raters.
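The slide leaves the agreement statistics to SPSS or Excel; as an alternative illustration, here is a minimal Python sketch with made-up ratings that computes simple percent agreement and Cohen's kappa for two raters scoring the same set of criteria.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of criteria on which the two raters gave the same score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    n = len(rater_a)
    p_observed = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical ratings (1 = Ineffective, 2 = Emerging, 3 = Target) on 8 criteria.
university_rater = [3, 2, 3, 1, 2, 3, 2, 2]
p12_rater        = [3, 2, 2, 1, 2, 3, 3, 2]

print(f"Percent agreement: {percent_agreement(university_rater, p12_rater):.0%}")
print(f"Cohen's kappa: {cohens_kappa(university_rater, p12_rater):.2f}")
```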

Inter-Rater Reliability
Watch the Elementary Math Teaching video.
Rate the teacher using the Learning Environment criteria on the blue sheet.
Identify a partner at the table and compare your ratings.
On which criteria were your ratings the same? Different?

Inter-Rater Reliability
Watch the High School Science Teaching video.
Rate the teacher using the Instruction criteria on the pink sheet.
Identify a partner at the table and compare your ratings.
On which criteria were your ratings the same? Different?

Table Discussion What would you do to increase inter-rater reliability?

Summary
Validity and reliability are directed at two basic questions about assessments:
Is the assessment useful? Does it provide constructive feedback to both candidates and faculty?
Is the assessment fair? Does it provide feedback consistently and as intended?

What will you attempt on your own campus? What additional information do you need? What questions do you still have?