Standard 9 - Assessment of Candidate Competence


Standard 9 - Assessment of Candidate Competence Candidates preparing to serve as professional school personnel know and demonstrate the professional knowledge and skills necessary to educate and support effectively all students in meeting the state-adopted academic standards. Assessments indicate that candidates meet the Commission-adopted competency requirements, as specified in the program standards.

Types of Data Used in Biennial Reports

Background Staff reviewed 155 separate program reports representing more than 35 institutions. The biennial reports submitted in the fall represented every kind of credential program approved by the CTC. The most frequently reported credential programs were the single subject and multiple subject programs. This report describes the types of data reported for MS/SS programs and for education admin programs.

Reporting the Data Programs have the option of deciding how to organize, report, and analyze data. Some programs reported MS data separately from SS data, whereas others reported data from the two programs together. Similarly, some programs reported preliminary ed admin program data separately, whereas others reported the preliminary and professional ed admin data together.

Understanding the Count Staff made notes in the feedback form for every program in every institution that submitted reports. Some of those notes were pretty cryptic. The tables represent our best attempts at categorizing data in meaningful ways. Data were organized to identify when, in the program, the assessment was performed (e.g., pre-student teaching, at the end of student teaching). Every type of data from every report was counted as a single example.

Understanding the Count Multiple sets of one type of data were counted as one example (e.g., grades from four courses in the same program). Data from one course, pre-student teaching observations, and student-teaching observations from the same program were counted as three examples of data. Grades from one course reported in one program at an institution and grades from one course reported in a second program at that institution were counted as two examples.
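To make the counting rule concrete, here is a minimal Python sketch; the program names and data-type labels are invented for illustration and do not come from the actual reports.

```python
# Hypothetical illustration of the counting rule described above:
# each distinct (program, data type) pair counts as one example,
# no matter how many instances of that type the program reported.
reported_data = [
    ("Program A", "course grades"),                     # grades from several courses...
    ("Program A", "course grades"),                     # ...still count as one example
    ("Program A", "pre-student-teaching observation"),
    ("Program A", "student-teaching observation"),
    ("Program B", "course grades"),                     # same type in a second program
]

examples = set(reported_data)   # deduplicate repeated types within a program
print(len(examples))            # 4 examples across the two programs
```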

How to Use This Information As we discuss the tables: Remember that types of data transcend program type. A type of data used in an MS program (e.g., a pre-student-teaching observation) can model a type of data appropriate for a school nursing credential program. Identify whether you currently use instruments similar to those on the tables. Notice who performs the assessment: a faculty member, the candidate, a clinical supervisor, etc.

How to Use This Information If one of the instruments suggests something you might want to do, make a note of it to share with your colleagues. Similarly, if we identify problems with a particular type of data that is similar to something you plan to report, please say something about that during the webcast. Be assured that you won't be the only person with that kind of question.

MS/SS Data There were eight major categories of data. RICA and Candidate Knowledge represent tests or assignments designed to measure candidates' content and pedagogical knowledge. Grades were used as an indicator of candidate quality. In virtually every example of these types of data, faculty performed the assessments of candidate competence. What problems do you see with using grades or candidates' GPA to measure candidate competence or program quality? What problem could occur if an institution used only assessments from faculty?

MS/SS Data Candidate dispositions were a type of data reported by only six programs. What does this type of data tell you about a program's effectiveness? Pre-student-teaching observations: these data were distinct from the other pre-student-teaching data because they used a standards-based rubric and were completed by faculty or a district supervisor. What makes these data more useful than the previous types of data?

MS/SS Data The second most frequent type of data was student-teaching evaluations. The majority of these evaluations were standards-based (TPE, CSTP, institution-developed). The evaluations were completed by faculty or a district supervisor. In some cases, these assessments were used to provide both formative and summative feedback to the candidate. What qualities of these data make them particularly informative? How can they be used?

MS/SS Data Programs were required to report TPA data, and nearly all did so. Some programs reported the results of multiple test-taking attempts, which could be analyzed to demonstrate remediation efforts. Some programs utilized the TPA standards (the TPEs) for assessing coursework and student teaching. What did this enable them to do? They were able to monitor candidate progress throughout the entire educator preparation program. They were able to measure program impact on candidate development.

MS/SS Data Program evaluation surveys were the most common type of instrument used. Of these, the most common were the CSU exit survey and one-year-out survey. Some non-CSU institutions have adopted them or something similar. The majority of individuals who provided these data were candidates (course evaluations) or program completers. District-employed supervisors and employers provided some of the information.

MS/SS Data Summary Overall, candidates and program completers (former candidates) provided the majority of the information. Faculty provided the next greatest amount of information by evaluating coursework and completing half of the student-teaching evaluations. District-employed supervisors had two means of providing feedback: student-teaching evaluations and program evaluations.

MS/SS Data Summary The most frequently used measures for MS/SS programs were evaluation surveys. The second most frequently used were student teaching evaluations. Assessments of candidate knowledge were the third most frequently used sources of data. Questions or comments?

Education Admin Data The most common source of data was coursework, and the most frequent rater was faculty. Fieldwork was also a source of data but, unlike for teacher prep programs, there was little uniformity regarding the standards used. The CPSELs were used, as was MINDSCAPES.

Education Admin Data The second most frequent type of data was evaluation surveys. The surveys provided feedback on courses, programs, and practicum/fieldwork experiences. Unlike for teacher preparation programs, there are no institution-wide completer or employer survey efforts.

Education Admin Data Summary Ed admin programs have less institutional guidance for assessing candidate competencies or program effectiveness. Types of data are more likely to reflect program-specific emphases and the needs of district partners or of individual candidates. In addition, the data are muddled because some programs integrated preliminary and professional ed admin program data. Questions or comments?

Using Individual-Level Data for Program Evaluation Every type of data in the tables is individual-level data. Even evaluation survey data reflects the perspective of one individual. We have discussed two main types of data: candidate competence data and evaluation data. Evaluation data, generally collected through surveys, is intended to be reported in the aggregate. Also, evaluations generally look back at something in the past: an experience, or the competencies of an individual trained by the program. How might a program use candidate competency data for program evaluation?

Using Individual-Level Data for Program Evaluation If candidate assessment data measures competencies described in standards and can be summarized (quantitatively or qualitatively), it can be aggregated to the program level. However, how do you know whether the competency level of the candidates is due to the program? What if the candidates come into the program with those skills? What if your admission requirements screen for those skills? One way to significantly reduce uncertainty about what the results of candidate assessments mean is to measure candidate competencies multiple times.
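As a rough illustration of what aggregating candidate-level assessment data to the program level could look like, here is a minimal Python sketch; the candidate labels, standards, and rubric scores are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical candidate-level scores on a standards-based rubric (1-4 scale).
candidate_scores = [
    {"candidate": "C1", "standard": "TPE 1", "score": 3},
    {"candidate": "C2", "standard": "TPE 1", "score": 4},
    {"candidate": "C1", "standard": "TPE 2", "score": 2},
    {"candidate": "C2", "standard": "TPE 2", "score": 3},
]

# Aggregate individual-level data to the program level: mean score per standard.
by_standard = defaultdict(list)
for row in candidate_scores:
    by_standard[row["standard"]].append(row["score"])

program_summary = {std: mean(scores) for std, scores in by_standard.items()}
print(program_summary)  # e.g., {'TPE 1': 3.5, 'TPE 2': 2.5}
```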

Using Individual-Level Data for Program Evaluation How can you do that? For example, assess candidates with a standards-based measure early in the program to provide a baseline. Prior to student teaching, measure those attributes again, using the same instrument or another instrument aligned to the same standards. At the end of student teaching, assess the candidates again. The change between the scores gives an indication of program impact. This may not be true for any individual candidate, but across a group of candidates it can be indicative of program quality. In a minute, Cheryl will provide examples.
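A minimal Python sketch of this repeated-measures idea, assuming each candidate is scored on the same standards-based instrument at three points in the program (all names and numbers below are invented):

```python
from statistics import mean

# Hypothetical scores for each candidate at three points in the program,
# all on the same standards-based instrument (1-4 scale).
candidates = {
    "C1": {"baseline": 1.8, "pre_student_teaching": 2.6, "end": 3.4},
    "C2": {"baseline": 2.2, "pre_student_teaching": 2.9, "end": 3.6},
    "C3": {"baseline": 2.0, "pre_student_teaching": 2.4, "end": 3.1},
}

# The change between administrations, averaged across the group, gives an
# indication of program impact (it may not hold for any single candidate).
gains = [c["end"] - c["baseline"] for c in candidates.values()]
print(round(mean(gains), 2))  # average growth from baseline to end, e.g. 1.37
```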

Using Individual-Level Data for Program Evaluation Gathering data from multiple stakeholders also helps ensure that your data realistically reflect your program. As you plan your “system,” build in opportunities for multiple, informed stakeholders to provide feedback on candidate competencies and on program quality (evaluation). If all of the data point in the same direction, you can make program modifications with confidence. If the data point in different directions, you may need to reassess your instruments, or wait another year before modifying your assessment and evaluation system. Comments or questions?