1
Transitioning from NCATE to CAEP
November 10, 2014
Dr. Lance Tomei
Retired (2013) Director for Assessment, Accreditation, and Data Management, University of Central Florida, College of Education and Human Performance
Former NCATE Coordinator, University of Central Florida
Experienced NCATE BOE Team Chair; Trained CAEP Site Visit Team Leader
2
Overview
– Acknowledgement & Disclaimer
– Context for the CAEP Standards
– Similarities between what has been required for NCATE and what is required for CAEP
– Differences: What are the main "new" or materially different requirements for CAEP?
– Some general observations about CAEP standards and their components
– Strategies and resources to help you transition from NCATE to CAEP
3
CAEP: The Big Picture
CAEP's Hallmarks:
– Continuous Improvement
– Transformation
– Evidence and Inquiry
CAEP's Interim (overarching) Standards:
– Candidates demonstrate knowledge, skills, and professional dispositions for effective work in schools
– Data drive decisions about candidates and programs
– Resources and practices support candidate learning
4
Standards Commission: 4 Critical Points of Leverage
1. Build partnerships and strong clinical experiences
2. Raise and assure candidate quality
3. Include all providers
4. Insist that preparation be judged by outcomes and impact on P-12 student learning and development. Results matter; effort is not enough.
5
The Standards
NCATE Standards:
1: Candidate Knowledge, Skills, and Professional Dispositions
2: Assessment System and Unit Evaluation
3: Field Experiences and Clinical Practice
4: Diversity
5: Faculty Qualifications, Performance, and Development
6: Unit Governance and Resources
CAEP Standards:
1: Content and Pedagogical Knowledge
2: Clinical Partnerships and Practice
3: Candidate Quality, Recruitment, and Selectivity
4: Program Impact
5: Provider Quality Assurance and Continuous Improvement
6
CAEP/NCATE Crosswalk
CAEP Standard 1: Content and Pedagogical Knowledge. The provider ensures that candidates develop a deep understanding of the critical concepts and principles of their discipline and, by completion, are able to use discipline-specific practices flexibly to advance the learning of all students toward attainment of college- and career-readiness standards.
– Corresponding NCATE standard: Standard 1: Candidate Knowledge, Skills, and Professional Dispositions
CAEP component 1.1 (Candidate Knowledge, Skills, and Professional Dispositions): Candidates demonstrate an understanding of the 10 InTASC standards at the appropriate progression level(s) in the following categories: the learner and learning; content; instructional practice; and professional responsibility.
– Related NCATE elements: 1a. Content Knowledge for Teacher Candidates; 1b. Pedagogical Content Knowledge and Skills for Teacher Candidates; 1c. Professional and Pedagogical Knowledge and Skills for Teacher Candidates; 1d. Student Learning for Teacher Candidates; 1g. Professional Dispositions for All Candidates
– Examples of relevant NCATE recommended exhibits: 1.3.c Key assessments and scoring guides used for assessing candidate learning against professional and state standards...; 1.3.d Aggregate data on key assessments...; 1.3.g Examples of candidates' assessment and analysis of P-12 student learning; 1.3.h Examples of candidates' work... from programs across the unit
– Comments: InTASC Standards: 1-Learner Development, 2-Learning Differences, 3-Learning Environments, 4-Content Knowledge, 5-Application of Content, 6-Assessment, 7-Planning for Instruction, 8-Instructional Strategies, 9-Professional Learning and Ethical Practice, 10-Leadership and Collaboration
CAEP component 1.2 (Provider Responsibilities): Providers ensure that completers use research and evidence to develop an understanding of the teaching profession and use both to measure their P-12 students' progress and their own professional practice.
– Related NCATE element: 1d. Student Learning for Teacher Candidates
– Comments: Need to establish criteria to assess this since the target group for which data are needed is program completers, not candidates.
7
Standard 1 vs. Standard 4
Standard 1 Components 1.2 through 1.5 all begin with "Providers ensure that completers..." BUT, from the CAEP Evidence Guide: "NOTE: In Standard 1, the subjects of components are 'candidates.' The specific knowledge and skills described will develop over the course of the preparation program and may be assessed at any point, some near admission, others at key transitions such as entry to clinical experience and still others near candidate exit as preparation is completed."
– Standard 1: Candidate-focused
– Standard 4: Completer-focused
8
Similarities (cont.)
The good news:
– Some of the recommended evidence for NCATE standards will also support CAEP standards.*
– CAEP acknowledges that "providers begin in different places... [but] must be on a certain path to reach... more rigorous standards and evidence expectations."
Emphasis remains on candidates' performance and continuous quality improvement, with heightened emphasis on program impact (most importantly, impact on P-12 student learning).
*But you may need to improve the quality of your assessment instruments to ensure that you have valid, reliable data.
9
Major Differences
– Focus of new CAEP standards is on output variables; most capacity metrics are reported outside the standards framework
– Conceptual framework is never explicitly referenced, but "EPP's Shared Values and Beliefs for Educator Preparation" are reported via the Self Study
– Diversity and Technology are "cross-cutting themes" (integrated)
– Emphasis on partnerships in clinical practice and a need to more proactively engage partners and stakeholders
– Emphasis on impact of program completers (Standard 4)
– Expectation of benchmarking (5.4)
– Professional dispositions are re-envisioned
10
Research-based Professional Dispositions: “Educator preparation providers establish and monitor attributes and dispositions beyond academic ability that candidates must demonstrate at admissions and during the program. The provider selects criteria, describes the measures used and evidence of the reliability and validity of those measures, and reports data that show how the academic and non-academic factors predict candidate performance in the program and effective teaching.” (3.3)
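Component 3.3's call for data showing how academic and non-academic admission factors predict later candidate performance suggests a straightforward predictive-validity analysis. The sketch below is one illustrative approach, not a CAEP-prescribed method; the file name, column names, and the choice of ordinary least squares are all assumptions.

```python
# Hypothetical predictive-validity sketch for component 3.3: do admission-time
# academic and non-academic factors predict candidate performance in the
# program? The CSV file and column names are placeholders, not CAEP artifacts.
import pandas as pd
import statsmodels.formula.api as smf

candidates = pd.read_csv("admissions_and_outcomes.csv")
# Assumed columns: gpa_at_admission, test_percentile, disposition_score
# (admission factors) and clinical_eval_score (a later performance measure).

model = smf.ols(
    "clinical_eval_score ~ gpa_at_admission + test_percentile + disposition_score",
    data=candidates,
).fit()
print(model.summary())  # coefficients indicate which factors predict performance
```

A provider would repeat this kind of analysis once completer-level effectiveness data (e.g., employer or observation ratings) become available, since 3.3 also references effective teaching.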
11
Major Differences (cont.)
– Specific incorporation of InTASC Model Core Teaching Standards ("Content and pedagogical knowledge expected of candidates is articulated through the InTASC standards.")
– Reference to "rigorous college- and career-ready [P-12] standards"
– 1.2 Providers ensure that completers use research and evidence to develop an understanding of the teaching profession and use both to measure their P-12 students' progress and their own professional practice.
– Progressive, phased increase in admission requirements (see the sketch after this list):
  – Cohort average is in the top 50 percent of "nationally normed ability/achievement assessments such as ACT, SAT, or GRE" from 2016-2017
  – Top 40 percent from 2018-2019
  – Top 33 percent by 2020
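For the phased admission criterion above, a provider might track whether each entering cohort's average performance on a nationally normed assessment meets the threshold in effect for its entry year. The helper below is a minimal sketch; it assumes the common reading that "top 50 percent" means a cohort mean at or above the 50th percentile (and 60th/67th for the later phases), which should be confirmed against current CAEP guidance.

```python
# Hypothetical helper for the phased admission criterion. It assumes "top 50
# percent" means the cohort's mean percentile on a nationally normed assessment
# (ACT/SAT/GRE) is at or above the 50th percentile, with 60th and 67th for the
# later phases; verify this reading against current CAEP guidance.
PHASED_MINIMUM_PERCENTILE = {
    2016: 50.0,  # cohort average in the top 50 percent (2016-2017)
    2018: 60.0,  # top 40 percent (2018-2019)
    2020: 67.0,  # top 33 percent (by 2020)
}

def cohort_meets_criterion(mean_percentile: float, entry_year: int) -> bool:
    """Return True if the cohort average meets the phase in effect for entry_year."""
    applicable = [p for year, p in PHASED_MINIMUM_PERCENTILE.items() if year <= entry_year]
    if not applicable:
        return True  # phase-in had not yet started for this entry year
    return mean_percentile >= max(applicable)

print(cohort_meets_criterion(mean_percentile=62.0, entry_year=2018))  # True
print(cohort_meets_criterion(mean_percentile=62.0, entry_year=2020))  # False (needs 67th)
```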
12
Major Differences (cont.)
– Changes in annual reporting requirements and actionability of annual data ("trigger points")
– Probable expansion of critical accreditation indicators:
  – Current example: 80 percent (or state requirement if higher) pass rate on certification examination within two administrations (illustrated in the sketch after this list)
  – All components of Standard 4!
  – Components 5.3 and 5.4 of Standard 5
  – Program outcome and consumer information:
    – Completer or graduation rates
    – Ability of completers to meet licensing (certification) and any additional state accreditation requirements
    – Ability of completers to be hired in education positions for which they are prepared
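As a small illustration of the certification-exam indicator above (an 80 percent pass rate, or the state requirement if higher, within two administrations), the sketch below computes that rate from hypothetical per-completer attempt histories; the data structure is illustrative, not a CAEP reporting format.

```python
# Illustrative only: compute the share of completers who pass the certification
# exam within their first two administrations, then compare against the CAEP
# floor of 80% (or a higher state requirement). All data here are hypothetical.
def pass_rate_within_two_attempts(attempts_by_completer):
    """attempts_by_completer: one list of ordered attempt outcomes per completer,
    e.g. [False, True] means failed the first administration, passed the second."""
    passed = sum(1 for attempts in attempts_by_completer if any(attempts[:2]))
    return passed / len(attempts_by_completer)

completers = [[True], [False, True], [False, False, True], [True], [False, True]]
rate = pass_rate_within_two_attempts(completers)
state_requirement = 0.80                    # hypothetical; use the state's figure if higher
required = max(0.80, state_requirement)
print(f"Pass rate within two administrations: {rate:.0%}")  # 80% for this sample
print("Meets indicator:", rate >= required)                 # True
```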
13
Acknowledgement that new and improved accountability metrics are needed: "… many measures of both academic and non-academic factors associated with high-quality teaching and learning need to be studied for reliability, validity, and fairness. CAEP should encourage development and research related to these measures. It would be shortsighted to specify particular metrics narrowly because of the now fast-evolving interest in, insistence on, and development of new and much stronger preparation assessments, observational measures, student surveys, and descriptive metrics."
Heightened expectations for the quality of evidence (one reliability check is sketched below):
– Stakeholder involvement (contribution to validity)
– Development of key activities and associated assessment instruments
– Provide "empirical evidence that interpretations of data are valid and consistent (5.2)"
– Implications for design of key activities and associated assessment instruments
– Attention to "Principles for Measures Used in the CAEP Accreditation Process" (Peter Ewell, May 29, 2013)
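One concrete form of the "valid and consistent" evidence referenced in component 5.2 is an inter-rater reliability estimate for rubric-scored key assessments. The sketch below computes a weighted Cohen's kappa for two raters who double-scored the same artifacts; the scores are invented, and scikit-learn is just one convenient tool for the calculation.

```python
# Illustrative reliability evidence: inter-rater agreement for a rubric-scored
# key assessment, computed as weighted Cohen's kappa. Scores are invented;
# scikit-learn's cohen_kappa_score is one convenient implementation.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 3, 1, 2, 3, 2, 1, 3, 2]  # performance levels from rater A
rater_b = [3, 2, 2, 1, 2, 3, 2, 2, 3, 2]  # same artifacts scored by rater B

kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement
```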
14
Principles for Measures Used in the CAEP Accreditation Process (Peter Ewell, May 29, 2013)
1. Validity and Reliability
2. Relevance
3. Verifiability
4. Representativeness
5. Cumulativeness
6. Fairness
7. Stakeholder Interest
8. Benchmarks
9. Vulnerability to Manipulation
10. Actionability
15
Implications for Key Assessments
The quality of key assessment instruments will be extremely important:
– Assessment System
– Supporting Technology
– Key Assessment Instruments
– …building an arch!
Key considerations:
– Who should participate, and who should take the lead?
– Comprehensiveness and articulation of key formative and key summative assessments
– Self-selected or directed artifacts?
– Assignment-assessment alignment
– Can you demonstrate the validity and reliability of your current supporting evidence?
16
New CAEP Requirement Announced at the Fall 2014 CAEP Conference
At its fall 2014 conference, CAEP announced that its accreditation process will require the submission of all key assessment instruments (rubrics, surveys, etc.) used by an Educator Preparation Provider (EPP) to generate data provided as evidence in support of CAEP accreditation. Once CAEP accreditation timelines are fully implemented, this will occur three years prior to the on-site visit. Submissions will be evaluated for quality by a panel of assessment experts, and feedback will be provided to the EPP well in advance of the reaccreditation review.
17
Attributes of an Effective Rubric
– The rubric and the assessed activity or artifact are well-articulated
– The rubric has construct validity (e.g., standards-aligned) and content validity (rubric criteria represent all critical indicators for the competency to be assessed)
– Each criterion assesses an individual construct (there are no double- or multiple-barreled criteria)
– To enhance reliability, performance descriptors should (a simple overlap/gap check is sketched after this list):
  – Provide concrete/objective distinctions between performance levels (there is no overlap between performance levels)
  – Collectively address all possible performance levels (there is no gap between performance levels)
  – Eliminate or minimize double/multiple-barreled narratives (exception: progression of additional components at higher performance levels)
– The rubric contains no unnecessary performance levels (e.g., multiple levels of mastery)
– Resulting data are actionable
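The "no overlap, no gap" expectations for performance descriptors can be checked mechanically when levels are defined by numeric bands (for example, counts of indicators met). The sketch below is a hypothetical helper for that narrow case; descriptors written as qualitative narratives still require human review.

```python
# Hypothetical check of the "no overlap between performance levels" and
# "no gap between performance levels" attributes above, for criteria whose
# levels are defined by inclusive integer bands (e.g., number of indicators met).
def check_bands(bands):
    """bands: list of (low, high) inclusive ranges, ordered from the lowest to
    the highest performance level. Returns a list of detected problems."""
    problems = []
    for (lo1, hi1), (lo2, hi2) in zip(bands, bands[1:]):
        if lo2 <= hi1:
            problems.append(f"overlap between {lo1}-{hi1} and {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            problems.append(f"gap between {lo1}-{hi1} and {lo2}-{hi2}")
    return problems or ["performance levels are distinct and exhaustive"]

# Unsatisfactory / Developing / Mastery defined by number of indicators demonstrated
print(check_bands([(0, 1), (2, 3), (4, 5)]))  # clean: distinct and exhaustive
print(check_bands([(0, 2), (2, 3), (5, 6)]))  # flags one overlap and one gap
```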
18
"Meta-rubric" to Evaluate Rubric Quality
Criterion: Rubric Alignment to Assignment
– Unsatisfactory: The rubric includes multiple criteria that are not explicitly or implicitly reflected in the assignment directions for the learning activity to be assessed.
– Developing: The rubric includes one criterion that is not explicitly or implicitly reflected in the assignment directions for the learning activity to be assessed.
– Mastery: The rubric criteria accurately match the performance criteria reflected in the assignment directions for the learning activity to be assessed.
Criterion: Comprehensiveness of Criteria
– Unsatisfactory: More than one critical indicator for the competency or standard being assessed is not reflected in the rubric.
– Developing: One critical indicator for the competency or standard being assessed is not reflected in the rubric.
– Mastery: All critical indicators for the competency or standard being assessed are reflected in the rubric.
Criterion: Integrity of Criteria
– Unsatisfactory: More than one criterion contains multiple, independent constructs (similar to a "double-barreled" survey question).
– Developing: One criterion contains multiple, independent constructs; all other criteria each consist of a single construct.
– Mastery: Each criterion consists of a single construct.
Criterion: Quality of Performance Descriptors
– Unsatisfactory: Performance descriptors are not distinct (i.e., mutually exclusive) AND collectively do not include all possible learning outcomes.
– Developing: Performance descriptors are not distinct (i.e., mutually exclusive) OR collectively do not include all possible learning outcomes.
– Mastery: Performance descriptors are distinct (mutually exclusive) AND collectively include all possible learning outcomes.
19
"To Do" List
– Conduct a thorough review of current key assessments (formative and summative) and map them against CAEP Standards/Components (a minimal mapping sketch follows this list)
– If your focus is on state or national professional standards, ensure that InTASC standards are correlated as well
– Evaluate key assessments for validity, reliability, and fairness; some assessment instruments may need to be revised
– Identify evidence requirements (associated with CAEP Standards Components) that current assessments/data don't yet support and initiate plans to fill the gaps (see correlation matrix)
– Strengthen P-12 and stakeholder partnerships
– Establish a comprehensive data analysis plan
– Review your continuous quality improvement policies and practices
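For the first item on the list, the mapping can start as a simple assessment-to-component matrix that also exposes gaps. The sketch below uses hypothetical assessment names and an abbreviated component list; it is a starting point for the correlation matrix mentioned above, not a replica of it.

```python
# Minimal sketch of the mapping exercise: record which key assessments supply
# evidence for which CAEP components, then list components with no supporting
# assessment. Assessment names and the mapping are hypothetical; the component
# list is abbreviated for illustration.
ASSESSMENT_MAP = {
    "Clinical observation rubric": ["1.1", "2.3"],
    "Impact-on-learning work sample": ["1.2", "3.4"],
    "Disposition survey": ["3.3"],
    "Completer exit survey": ["4.4"],
}
CAEP_COMPONENTS = ["1.1", "1.2", "1.3", "2.3", "3.3", "3.4", "4.1", "4.2", "4.4", "5.2"]

covered = {c for components in ASSESSMENT_MAP.values() for c in components}
gaps = [c for c in CAEP_COMPONENTS if c not in covered]
print("Components lacking evidence:", gaps)  # e.g., ['1.3', '4.1', '4.2', '5.2']
```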
20
The Continuous Quality Improvement Cycle
21
LiveText™ Visitor Pass
Go to www.livetext.com
Click on "Use Visitor Pass"
Enter "9409ACEF" in the Pass Code field
Click on Visitor Pass Entry
You will have access to:
– This PowerPoint presentation
– CAEP Accreditation Standards 130829
– CAEP Standards for Advanced Programs 140605
– CAEP Evidence Guide 130611
– Principles for Measures Used in the CAEP Accreditation Process (Peter Ewell, May 29, 2013)
– InTASC Model Core Teaching Standards: A Resource for State Dialogue (April 2011)
– My updated, unofficial NCATE/CAEP standards correlation matrix
– My "meta-rubric" for evaluating rubric quality