LiveText™ Visitor Pass

Presentation transcript:

LiveText™ Visitor Pass
Go to www.livetext.com
Click on "Use Visitor Pass"
Enter "9409ACEF" in the Pass Code field and click on Visitor Pass Entry

Rubric Design & CAEP Implications Dr. Lance Tomei Educational consultant specializing in assessment and accreditation Retired Director for Assessment, Accreditation, and Data Management: University of Central Florida (UCF), College of Education and Human Performance Former UCF NCATE Coordinator Experienced NCATE BOE Team Chair/Designated CAEP Site Team Leader Experienced Florida State Site Visit Team Chair Former member, FL DOE Student Growth Implementation Committee (SGIC) Former member, FL DOE Teacher and Leader Preparation Implementation committee (TLPIC) Former chair, FL DOE TLPIC Site Visit Protocol Subcommittee

Overview
CAEP Standards/Components addressing assessment of candidate learning & other CAEP requirements
Implications for your assessment system and key assessments
Discussion on rubric design
Rubric workshop
Rubric templates and design strategies
Summary/reflection

For CAEP Standard 1, Component 1.1:
Evaluate initial candidates' progressive acquisition and mastery of knowledge and skills in the following four categories of InTASC standards:
The learner and learning
Content knowledge
Instructional practice
Professional responsibility
Evaluate advanced candidates' progressive acquisition and mastery of knowledge and skills specific to their discipline.

For CAEP Standard 1, Components 1.2 – 1.5:
Summative assessments should ensure that candidates nearing program completion:
Apply research and evidence in their practice
Apply content and pedagogical knowledge in a manner consistent with professional standards
Demonstrate skills and commitment that afford all P-12 students access to rigorous college- and career-ready standards
Model and apply technology standards to engage students and enhance learning

For CAEP Standard 2, Component 2.3:
. . . Clinical experiences, including technology-enhanced learning opportunities, are structured to have multiple performance-based assessments at key points within the program to demonstrate candidates' development of the knowledge, skills, and professional dispositions, as delineated in Standard 1, that are associated with . . .
. . . a positive impact on the learning and development of all P-12 students. (Initial)
. . . creating a supportive school environment that results in a positive impact on the learning and development of all P-12 students. (Advanced)

For CAEP Standard 3, Components 3.2 – 3.6 (similar to NCATE transition point requirements):
3.2 Program admission
3.3 Professional dispositions/non-academic attributes
3.4 The provider creates criteria for program progression and monitors candidates' advancement from admissions through completion . . . Providers present multiple forms of evidence to indicate candidates' developing content knowledge, pedagogical content knowledge, pedagogical skills, and the integration of technology in all of these domains.
3.5 & 3.6 Program exit

For CAEP Standard 5, Component 5.2: The provider’s quality assurance system relies on relevant, verifiable, representative, cumulative and actionable measures, and produces empirical evidence that interpretations of data are valid and consistent [emphasis added].

New CAEP Requirement Announced at the Fall 2014 CAEP Conference
At its fall 2014 conference, CAEP announced that its accreditation process will require the early submission of all key assessment instruments (rubrics, surveys, etc.) used by an Educator Preparation Provider (EPP) to generate data provided as evidence in support of CAEP accreditation. Once CAEP accreditation timelines are fully implemented, this will occur three years prior to the on-site visit.

Principles for Measures Used in the CAEP Accreditation Process (Peter Ewell, May 29, 2013)
Validity and Reliability
Relevance
Verifiability
Representativeness
Cumulativeness
Fairness
Stakeholder Interest
Benchmarks
Vulnerability to Manipulation
Actionability

Implications
Your overall assessment system needs to ensure that you can demonstrate the validity and reliability of your key assessment data, as well as your analysis, interpretation, and application of those data to evaluate program impact and support continuous quality improvement.
The quality of your key assessment instruments will be a critical factor in meeting many components of the new CAEP standards.
Build a solid arch!

Designing an Assessment System is Like Building an Arch
[Image from http://www.bing.com/image]

The Continuous Quality Improvement Cycle
Plan
Measure
Analyze
Evaluate & Integrate Change

Key/Signature Assessments: Some Important Considerations
Who should participate and who should take the lead?
Self-selected or directed artifacts (major implications for rubric design)?
Do formative assessments collectively address all applicable competencies?
Do summative assessments collectively address all applicable competencies?
Are formative assessments and summative assessments well-articulated?
Are key assignments fully aligned with key assessments?
Can you demonstrate the validity and reliability of your current supporting evidence?

Why Discuss Rubric Design?
Designing high-quality rubrics is difficult and time-consuming, but . . .
Well-designed rubrics enhance teaching and learning, and . . .
. . . improve validity and inter- and intra-rater reliability in assessing student learning
Bottom line: good rubrics = good data!

Well-designed Rubrics: Enhance Student Learning Outcomes
Serve as a learning scaffold by clarifying formative and summative learning objectives
For each target learning outcome, establish critical indicators aligned to applicable standards/competencies (= construct & content validity)
Facilitate self- and peer-evaluations
Provide actionable assessment data for individual students

Well-designed Rubrics: Provide a Consistent and Effective Framework for Key Assessments
Establish clear/concrete performance descriptors for each assessed criterion at each performance level
Help ensure articulation of formative and summative assessments
Improve validity and reliability of assessment data
Produce actionable program-level data (see the sketch after this slide)
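Where rubric scores land in a spreadsheet or LMS export, even a few lines of scripting can turn candidate-level results into criterion-level summaries. A minimal Python sketch; the criterion names, score records, and four-level scale below are hypothetical placeholders, not part of the presenter's materials:

```python
# Minimal sketch: turning candidate-level rubric scores into program-level,
# criterion-level data. Names and sample scores are hypothetical.
from collections import defaultdict
from statistics import mean

# Each record: (candidate_id, criterion, performance level awarded)
scores = [
    ("c001", "Alignment to P-12 Standards", "Proficient"),
    ("c001", "Instructional Materials", "Developing"),
    ("c002", "Alignment to P-12 Standards", "Distinguished"),
    ("c002", "Instructional Materials", "Proficient"),
]

# Map ordinal labels to numbers so results can be summarized
LEVELS = {"Unsatisfactory": 1, "Developing": 2, "Proficient": 3, "Distinguished": 4}

by_criterion = defaultdict(list)
for _candidate, criterion, level in scores:
    by_criterion[criterion].append(LEVELS[level])

for criterion, values in by_criterion.items():
    below_mastery = sum(1 for v in values if v < LEVELS["Proficient"])
    print(f"{criterion}: mean={mean(values):.2f}, "
          f"{below_mastery}/{len(values)} candidates below Proficient")
```

Summarizing by criterion rather than by total score is what makes the data actionable: it points to the specific indicator where candidates fall short.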

AAC&U VALUE Rubric – Information Literacy: An Example of a Well-designed Rubric
Available online at http://www.aacu.org/value/rubrics/index_p.cfm
I'm sure many of you are familiar with the AAC&U VALUE Rubrics. Those rubrics include two formative levels (Milestones 1 and 2) and one level of mastery (Capstone). Here you see one of the 16 AAC&U VALUE Rubrics. Note that the performance levels run in the opposite direction from the other examples we have looked at today. It really doesn't matter which direction your rubrics flow, so long as you are consistent across all rubrics.

Attributes of an Effective Rubric
Rubric and the assessed activity or artifact are well-articulated.

Attributes of an Effective Rubric (Continued)
Rubric has construct validity (e.g., standards-aligned) and content validity (rubric criteria represent all critical indicators for the competency to be assessed).

Attributes of an Effective Rubric (Continued)
Each criterion assesses an individual construct:
No overly broad criteria
No double- or multiple-barreled criteria

Overly Broad Criterion
Criterion: Assessment
Unsatisfactory: No evidence of review of assessment data. Inadequate modification of instruction. Instruction does not provide evidence of assessment strategies.
Developing: Instruction provides evidence of alternative assessment strategies. Some instructional goals are assessed. Some evidence of review of assessment data.
Proficient: Alternative assessment strategies are indicated (in plans). Lessons provide evidence of instructional modification based on learners' needs. Candidate reviews assessment data to inform instruction.
Distinguished: Candidate selects and uses assessment data from a variety of sources. Consistently uses alternative and traditional assessment strategies. Candidate communicates with learners about their progress.

Double-barreled Criterion & Double-barreled Descriptor
Criterion: Alignment to Applicable State P-12 Standards and Identification of Appropriate Instructional Materials
Unsatisfactory: Lesson plan does not reference P-12 standards or instructional materials.
Developing: Lesson plan references applicable P-12 standards OR appropriate instructional materials, but not both.
Proficient: Lesson plan references applicable P-12 standards AND identifies appropriate instructional materials.

Attributes of an Effective Rubric (Continued)
To enhance reliability, performance descriptors should:
Provide concrete/objective distinctions between performance levels (there is no overlap between performance levels)
Collectively address all possible performance levels (there is no gap between performance levels)
Eliminate or minimize double/multiple-barrel narratives (exception: progressive addition of barrels)

Overlapping Performance Levels
Criterion: Communicating Learning Activity Instructions to Students
Unsatisfactory: Makes two or more errors when describing learning activity instructions to students
Developing: Makes no more than two errors when describing learning activity instructions to students
Proficient: Makes no more than one error when describing learning activity instructions to students
Distinguished: Provides complete, accurate learning activity instructions to students

Possible Gap in Performance Levels
Criterion: Instructional Materials
Unsatisfactory: Lesson plan does not reference any instructional materials
Developing: Instructional materials are missing for one or two parts of the lesson
Proficient: Instructional materials for all parts of the lesson are listed and directly relate to the learning objectives.
Distinguished: Instructional materials for all parts of the lesson are listed, directly relate to the learning objectives, and are developmentally appropriate.
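Because the flawed "Communicating Learning Activity Instructions" example expresses performance levels as error counts, the overlap and gap problems can be checked mechanically. A minimal Python sketch, assuming each level is encoded as an inclusive range of error counts (the ranges below transcribe that overlapping example; the encoding itself is an illustration, not part of the presenter's materials):

```python
# Minimal sketch: check that performance levels expressed as error-count
# ranges neither overlap nor leave gaps. Ranges encode the flawed
# "Communicating Learning Activity Instructions" example above.
import math

# level name -> (min_errors, max_errors), inclusive; math.inf = unbounded
levels = {
    "Distinguished": (0, 0),          # "complete, accurate" = zero errors
    "Proficient": (0, 1),             # "no more than one error"
    "Developing": (0, 2),             # "no more than two errors"
    "Unsatisfactory": (2, math.inf),  # "two or more errors"
}

bands = sorted(levels.items(), key=lambda kv: kv[1])
problems = []
for (name_a, (lo_a, hi_a)), (name_b, (lo_b, hi_b)) in zip(bands, bands[1:]):
    if lo_b <= hi_a:
        problems.append(f"overlap: {name_a} and {name_b} both cover "
                        f"{lo_b}-{min(hi_a, hi_b)} errors")
    elif lo_b > hi_a + 1:
        problems.append(f"gap: no level covers {hi_a + 1}-{lo_b - 1} errors")

print("\n".join(problems) or "levels are mutually exclusive and exhaustive")
```

Run as-is, it reports that Distinguished, Proficient, and Developing all admit zero errors and that Developing and Unsatisfactory both admit exactly two errors, i.e., the levels are not mutually exclusive.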

Attributes of an Effective Rubric (Continued)
Rubric contains no unnecessary performance levels (e.g., multiple levels of mastery)
Common problems:
Use of subjective terms to differentiate performance levels
Use of performance level labels or surrogates
Use of inconsequential terms to differentiate performance levels
Worst case scenario: failure to maintain the integrity of target learning outcomes
Resulting data are actionable

Use of Subjective Terms
Criterion: Knowledge of Laboratory Safety Policies
Unsatisfactory: Candidate shows a weak degree of understanding of laboratory safety policies
Developing: Candidate shows a relatively weak degree of understanding of laboratory safety policies
Proficient: Candidate shows a moderate degree of understanding of laboratory safety policies
Distinguished: Candidate shows a high degree of understanding of laboratory safety policies

Use of Performance Level Labels
Criterion: Analyze Assessment Data
Unacceptable: Fails to analyze and apply data from multiple assessments and measures to diagnose students' learning needs, inform instruction based on those needs, and drive the learning process in a manner that documents acceptable performance.
Acceptable: Analyzes and applies data from multiple assessments and measures to diagnose students' learning needs, informs instruction based on those needs, and drives the learning process in a manner that documents acceptable performance.
Target: Analyzes and applies data from multiple assessments and measures to diagnose students' learning needs, informs instruction based on those needs, and drives the learning process in a manner that documents targeted performance.

Use of Surrogates (& Use of Subjective Terms)
Criterion: Quality of Writing
Unsatisfactory: Poorly written
Developing: Satisfactorily written
Proficient: Well written
Distinguished: Very well written

Use of Inconsequential Terms
Criterion: Alignment of Assessment to Learning Outcome(s)
Unacceptable: The content of the test is not appropriate for this learning activity and is not described in an accurate manner.
Acceptable: The content of the test is appropriate for this learning activity and is described in an accurate manner.
Target: The content of the test is appropriate for this learning activity and is clearly described in an accurate manner.

Failure to Maintain Integrity of Target Learning Outcomes
Criterion: Alignment to Applicable State P-12 Standards
Unsatisfactory: No reference to applicable state P-12 standards
Developing: Referenced state P-12 standards are not aligned with the lesson objectives and are not age-appropriate
Proficient: Referenced state P-12 standards are age-appropriate but are not aligned to the learning objectives.
Distinguished: Referenced state P-12 standards are age-appropriate and are aligned to the learning objectives.

Attributes of an Effective Rubric (Continued)
Resulting data are actionable:
To remediate individual candidates
To help identify opportunities for program quality improvement
Based on the first four attributes, the following meta-rubric has been developed for use in evaluating the efficacy of other rubrics…

"Meta-rubric" to Evaluate Rubric Quality
Criterion: Rubric alignment to assignment
Unsatisfactory: The rubric includes multiple criteria that are not explicitly or implicitly reflected in the assignment directions for the learning activity to be assessed.
Developing: The rubric includes one criterion that is not explicitly or implicitly reflected in the assignment directions for the learning activity to be assessed.
Mastery: The rubric criteria accurately match the performance criteria reflected in the assignment directions for the learning activity to be assessed.
Criterion: Comprehensiveness of Criteria
Unsatisfactory: More than one critical indicator for the competency or standard being assessed is not reflected in the rubric.
Developing: One critical indicator for the competency or standard being assessed is not reflected in the rubric.
Mastery: All critical indicators for the competency or standard being assessed are reflected in the rubric.
Criterion: Integrity of Criteria
Unsatisfactory: More than one criterion contains multiple, independent constructs (similar to a "double-barreled" survey question).
Developing: One criterion contains multiple, independent constructs; all other criteria each consist of a single construct.
Mastery: Each criterion consists of a single construct.
Criterion: Quality of Performance Descriptors
Unsatisfactory: Performance descriptors are not distinct (i.e., mutually exclusive) AND collectively do not include all possible learning outcomes.
Developing: Performance descriptors are not distinct (i.e., mutually exclusive) OR collectively do not include all possible learning outcomes.
Mastery: Performance descriptors are distinct (mutually exclusive) AND collectively include all possible learning outcomes.
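For reviewers who want the meta-rubric at hand electronically during the workshop, it can be encoded as a small lookup structure. A minimal Python sketch; the structure and the record_review helper are illustrative additions, and the descriptors are abbreviated from the slide above:

```python
# Minimal sketch: the meta-rubric encoded as data so a reviewer can pull up
# descriptors while scoring a rubric. Descriptors are abbreviated.
META_RUBRIC = {
    "Rubric alignment to assignment": {
        "Unsatisfactory": "Multiple criteria are not reflected in the assignment directions.",
        "Developing": "One criterion is not reflected in the assignment directions.",
        "Mastery": "Criteria accurately match the performance criteria in the assignment directions.",
    },
    "Comprehensiveness of criteria": {
        "Unsatisfactory": "More than one critical indicator is missing from the rubric.",
        "Developing": "One critical indicator is missing from the rubric.",
        "Mastery": "All critical indicators are reflected in the rubric.",
    },
    "Integrity of criteria": {
        "Unsatisfactory": "More than one criterion contains multiple, independent constructs.",
        "Developing": "One criterion contains multiple, independent constructs.",
        "Mastery": "Each criterion consists of a single construct.",
    },
    "Quality of performance descriptors": {
        "Unsatisfactory": "Descriptors are not mutually exclusive AND do not cover all possible outcomes.",
        "Developing": "Descriptors are not mutually exclusive OR do not cover all possible outcomes.",
        "Mastery": "Descriptors are mutually exclusive AND cover all possible outcomes.",
    },
}

def record_review(ratings):
    """Return the descriptor text for each rated meta-rubric criterion."""
    return [f"{criterion}: {level} - {META_RUBRIC[criterion][level]}"
            for criterion, level in ratings.items()]

# Hypothetical ratings from one reviewer for one program rubric
print("\n".join(record_review({
    "Integrity of criteria": "Developing",
    "Quality of performance descriptors": "Mastery",
})))
```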

Workshop Instructions
Each workshop participant was asked to bring a current rubric in use in the program along with the assignment instructions for the artifact or activity to be assessed using that rubric. During the workshop, each participant should:
Evaluate your rubric using the meta-rubric as a guide.
Identify any perceived opportunities to improve the quality of your rubric and/or assignment instructions.
Determine what actions you would take to improve the quality of your rubric, if any.
At the conclusion of individual work, report out to the group at least one finding regarding your rubric along with your thoughts about how you might respond to that finding.

Summary/Reflection: Common Rubric Problems
Including more performance levels than are needed to accomplish the desired assessment task
Using double- or multiple-barreled criteria or performance descriptors
Failing to include all possible performance outcomes
Using overlapping performance descriptors
Failing to include performance descriptors, or including descriptors that are simply surrogates for performance level labels

Summary/Reflection: Helpful Hints
In designing rubrics for key formative and summative assessments, think about both effectiveness and efficiency
Identify critical indicators for target learning outcomes and incorporate those into your rubric
Limit the number of performance levels to the minimum number needed to meet your assessment requirements
Populate the target learning outcome column first (Proficient, Mastery, etc.)
Make clear (objective/concrete) distinctions between performance levels; avoid the use of subjective terms in performance descriptors
Be sure to include all possible outcomes
Don't leave validity and reliability to chance:
The most knowledgeable faculty should lead program-level assessment work; engage stakeholders; align key assessments to applicable standards/competencies; focus on critical indicators
Train faculty on the use of rubrics
Conduct and document inter-rater reliability and fairness studies (see the sketch below)
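For the last hint, inter-rater agreement on a shared set of scored artifacts is straightforward to compute. A minimal Python sketch of percent agreement and Cohen's kappa for two raters; the ratings shown are hypothetical, and a real study would use more artifacts, more raters, and a dedicated statistics package:

```python
# Minimal sketch of an inter-rater reliability check: percent agreement and
# Cohen's kappa for two raters scoring the same artifacts. Ratings are
# hypothetical placeholders.
from collections import Counter

rater_a = ["Proficient", "Developing", "Proficient", "Distinguished", "Proficient"]
rater_b = ["Proficient", "Proficient", "Proficient", "Distinguished", "Developing"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement expected from each rater's marginal distribution
count_a, count_b = Counter(rater_a), Counter(rater_b)
labels = set(rater_a) | set(rater_b)
expected = sum((count_a[label] / n) * (count_b[label] / n) for label in labels)

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```

Kappa discounts the agreement expected by chance from each rater's marginal distribution, so it is a more conservative check than raw percent agreement.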