
1 NAGC Reviewer Refresher
Reviewing Teacher Preparation Programs in Gifted and Talented Education Using the NAGC-CEC Teacher Preparation Standards

2 Reviewer Responsibilities
NAGC is grateful to reviewers for volunteering to:
Commit personal time to review electronically one program report per cycle.
Write individual report findings and post them in CAEP's AIMS system.
Communicate and cooperate with teammates to schedule time to discuss findings and write a final team Recognition Report.
Submit quality reports and meet CAEP-established report deadlines.
Maintain confidentiality of the review and of any comments about the program made in discussions or in the final report.

3 Steps for Completing a Program Review
Open the Program Report and read Sections I–V.
Analyze the quality of the report by examining the alignment of the report and its attachments to the standards. THIS IS THE HEART OF YOUR WORK, but note that it can be slow going to review each assessment, scoring guide, and data chart.
Each reviewer on a team writes a Recognition Report in AIMS prior to team discussion.
The Lead Reviewer will contact the team to establish a timeline for completing the review and date(s) for discussion of findings. Teams use the CAEP teleconference system.
The Lead Reviewer submits the final Recognition Report on behalf of the team.

4 The Basics to Keep in Mind
DATA RULE
The minimum requirement is data from two applications of the assessments (per program location).
For revised or response-to-conditions reports, data from one application of the assessments are required.
RESPONSE TO CONDITIONS REPORTS
Reviews may address only the conditions itemized in Part G of the prior recognition report.
USE OF GRADES
SPAs must accept grades as an assessment of candidate knowledge. See the reporting rules on grades.

5 More Basics…
WRITING CONDITIONS
Condition statements should be specific so that the program has enough information to make changes.
Use phrases from the reviewer report writing document when possible, for consistency.
AVOID providing suggestions for changes; instead, write descriptively about what is missing, restate what is required, etc.
OVERALL APPROACH
The review process is not punitive for programs, but rather collaborative and collegial, using a holistic approach. However, avoid "pity scores." Going easy on programs in the first review makes subsequent review(s) extremely challenging.

6 Program Report – Section I
Section I provides the background info needed to make sense of the report and to complete portions of the Recognition Report. Two narrative responses plus:
Candidates and Completers chart
Faculty chart
Program of Study
Read carefully for the following:
Are there sufficient full-time faculty for the program? Do the faculty have credentials in gifted education?
Do the numbers of completers provided here approximate the numbers in the assessment data charts? (If not, you should include a comment that asks for an explanation.)
What info can you use to describe program strengths?

7 Program Report - Sections II & III
The Section II chart tells you what assessments the program has submitted and where the assessments take place in the program. The Section III chart indicates which assessments provide evidence that candidates have mastered the knowledge and skills for each standard.
As you read these sections:
Note the variations in the types of assessments (formative or summative).
Note the "n" for the data tables; the number taking the assessments and the number of completers reported should very closely match.

8 Program Report – Section IV
Section IV – the Heart of the Program Report
For each of the assessments submitted, the program must provide:
A narrative including a description of the assessment, the alignment of the assessment to standards, analysis of the data from the assessment, and an explanation of how the assessment provides evidence of meeting standards; and
Documentation of the assessment (attachments to the report):
the 2-page narrative
the assessment itself (or directions for completing the assessment)
the scoring guide, which is organized by standard/element
the data chart that shows one or two applications of the data, presented in a way that reflects the scoring guide and is disaggregated by performance level

9 Program Report – Section V
The program describes how it has used the data from the submitted assessments as a way to evaluate and make changes to its program. For example, the subscores on a state test may show that candidates score lower in a particular domain or skill. This should have prompted the program to consider whether it needs to increase attention to that area within the curriculum – or just monitor that area carefully in future data reports.

10 Program Report – Section VI
Section VI: Required only in Revised or Response to Conditions reports. This section should tell the reviewer what the program has done to address the concerns or conditions to recognition specified in the previous report, as well as provide a summary of what has been submitted in the current report. Reviewers have access to the previous program report and to the previous recognition report.

11 It's All About the Assessments… Which Should:
Align to standard/elements (course assignment to scoring guide to data chart to standard).
Assess meaningful knowledge and skills (key aspects of the standard/element are addressed).
Include a scoring guide (rubric) that shows distinct performance levels (observable and measurable, content-specific behaviors that show qualitative differences from one level to the next) and flows from a stem descriptor that includes the elements of the standards.
Include data that show that candidates meet the standard:
The data chart aligns to each standard/element described in the rubric/scoring guide.
The number and percentage of candidates performing at each level for each standard/element is reported (see the sketch below).
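The last point is easiest to see concretely. Below is a minimal illustrative sketch (in Python, not part of the NAGC materials; the standard elements, performance levels, and candidate ratings are invented) of a data chart that reports the number and percentage of candidates at each performance level for each standard/element.

```python
# Hypothetical sketch of a disaggregated data chart: counts and percentages of
# candidates at each performance level, reported per standard/element.
# Element labels, level names, and ratings below are invented for illustration.
from collections import Counter

ratings_by_element = {
    "Standard 3, Element 3.1": ["Target", "Acceptable", "Acceptable", "Unacceptable", "Target"],
    "Standard 3, Element 3.2": ["Acceptable", "Acceptable", "Target", "Acceptable", "Acceptable"],
}

LEVELS = ["Unacceptable", "Acceptable", "Target"]

for element, ratings in ratings_by_element.items():
    counts = Counter(ratings)
    n = len(ratings)
    cells = ", ".join(
        f"{level}: {counts.get(level, 0)} ({counts.get(level, 0) / n:.0%})"
        for level in LEVELS
    )
    print(f"{element} (n={n}): {cells}")
```

A chart organized this way lets the reviewer (and the program) see at a glance where candidates are strongest and weakest on each element, which is the point of disaggregating by performance level.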

12 Assessment Alignment to the NAGC-CEC Standards: What to Look For
Determine if the key elements contained within each NAGC-CEC Standard are evaluated within the assessments, as claimed in the program report.
Some assessments will address multiple standards.
Some assessments will claim to assess multiple standards but do not. (No penalty for that, as long as each standard is covered by at least one dimension within the rubric or checklist.) Make a note in the comment field of the recognition report of the standard(s) not addressed by an assessment.
Some standards are covered by assessments not claimed in Section III of the report. Give credit and make a note in the comment field for that standard.

13 Most Common Issues Found By Reviewers - Assessments
Assessments and scoring guides are difficult to interpret because of missing elements or data. Reviewers should unravel as much as possible and describe the situation as specifically as possible in the comment sections so that the program can make improvements for the revised submission.
Assessments align to standards as a whole but are too general to evaluate candidate mastery of specific concepts contained in the standards.
Candidate self-assessment, by itself, is not acceptable as a key assessment. Self-assessments whose results are reviewed and discussed with faculty or a supervising teacher can be acceptable assessments.
Assessments offer assignment choices that focus on different knowledge and skills, which may result in gaps in standards mastery.

14 Common Issues - Rubrics
Rubric descriptors mirror the language of the standards rather than describe candidate expectations in operational terms.
Rubrics/scoring guides are too general and do not describe the expectations for specific concepts from the standards.
Rubrics are not aligned to the activities (or components) described in the assessments.

15 Rubrics, cont'd
Rubrics do not isolate individual standards or elements. It is possible to group a few closely related elements together in one rubric category. Note: we would expect to see fewer instances of this with the 2013 standards, which have 28 elements.
Rubrics do not describe meaningful differences between levels of mastery on the assessment.
Quantitative rather than qualitative descriptors: requiring 4 references for something, compared to 2, is not a meaningful difference between meeting/not meeting expectations.
Subjective qualifiers ("most," "somewhat," "exceptional") are not meaningful ways to describe differences between levels of performance.

16 Problem Rubric Descriptors
Mimics standard language:
Candidate provides evidence of differentiated instructional plans that
Align with state and national academic standards
Are created with the individual student in mind
Use a variety of curriculum resources, strategies, and product options for differences among individuals with gifts and talents
Address academic and career guidance experiences into the learning plan
Reliance on subjective descriptors:
The candidate provided sufficient evidence, or description, of the following: (list of elements within a standard follows)
OR
The candidate demonstrated competency beyond standard expectation levels in understanding instructional planning for the gifted specific to the targeted criteria.

17 Problem Rubric Descriptors
Expectations are based on concepts in the standard elements but are not described:
Competent demonstration of understanding, with satisfactory descriptions and defense of answers.
Evaluation based on qualities not related to the NAGC-CEC standards:
Provides significant information about the purpose, organization, and rationale for the selection of documents (for portfolio).

18 Common Issues – Data
Data charts are not aligned to the scoring guides.
Data are presented as a total score, rather than by each standards-based concept presented in the rubric/scoring guide. The reason to disaggregate in this way is so the program can spot areas of strength and weakness and make adjustments for program improvement.
Insufficient data are provided.
Data are not disaggregated by the number of candidates scoring at each performance level on each item in the scoring guide.
Data are inconsistent with the numbers of program completers.
Administration date is not identified.
Data charts do not report separately where the program is offered at multiple sites.
Where grades are submitted, the program must also submit the program/institution's minimum grade point/grade requirements.

19 Evaluating Individual Standards as You Review Each Assessment
Criterion: Assessments assess meaningful, content-specific knowledge and skills for the standard.
Standard/Element is Not Met: Assessments fail to measure key components of the standard, OR the assessments consist simply of a checklist of items to be included in the assessment and do not address the quality of candidate performance.
Standard/Element is Met with Conditions: Numerous items are presented on the assessments; however, the items are limited in scope or only partially provide evidence of meeting the elements of the standard. The assessments fail to define candidate behavior at each level in operational terms; for example, levels of candidate proficiency are differentiated in terms of quantity of the same behavior, such as familiarity with 4 key references (vs. 2, vs. 1). One item is purported to align with multiple standards and not one individual standard; while an assessment may provide evidence for multiple standards, individual items on the assessment usually cannot provide adequate evidence for multiple standards.
Standard/Element is Met: Assessments identify key components of required content-specific knowledge and skills and provide evidence of candidate knowledge and/or attainment of the standard.

20
Criterion: Scoring guides (rubrics) assess distinct levels of candidate proficiency.
Standard/Element is Not Met: The scoring guide (rubric) consists of a checklist of behaviors that can be answered yes or no; behaviors are not defined, nor are expectations identified. Performance levels are unclear and/or subjective, potentially allowing for biased results. OR Scoring guides (rubrics) are inconsistent or incomplete, and distinctions between performance levels are not clear.
Standard/Element is Met with Conditions: The assessments fail to define candidate behavior at each level in operational terms. For example: throughout the scoring guide (rubric), descriptions of candidate proficiency are not objective, using such terms as "consistently," "occasionally," "somewhat," "never," or "exceptional"; or scoring guides (rubrics) rely on quantity rather than quality.
Standard/Element is Met: Scoring guides (rubrics) identify distinct levels of candidate proficiency in terms of criteria that are content-specific, observable, and measurable behaviors, allowing for fair and unbiased results. Moreover, they use a scale with descriptors of each item to be rated. Quality and quantity indicators are employed as appropriate.
Criterion: Preponderance of evidence.
Standard/Element is Not Met: While some evidence is provided, it is insufficient for reviewers to determine that the standard is met. Assessments fail to assess the key concepts within the standard, or fail to assess the majority of key components of the standards. Data presented as evidence are comingled, making it difficult for the reviewer to determine if the standard is met.
Standard/Element is Met with Conditions: Multiple assessments are provided for meeting standards, but provide only partial or marginal evidence. The assessments submitted show only partial alignment with the standard.
Standard/Element is Met: Sufficient evidence is presented in the required format for reviewers to determine that the key concepts within the standard have been assessed.

21
Criterion: Data demonstrate that candidates have met the standard/element.
Standard/Element is Not Met: Data charts do not align with the assessment. Data charts fail to identify the percentage of candidates at the acceptable level and simply report a mean score. OR A generic scoring guide is used that simply assigns a value across all items; data are missing from the chart; data are reported by individual student and not aggregated; or insufficient data are provided, so the reviewer cannot determine if the standards are met.
Standard/Element is Met with Conditions: Insufficient data are presented, data are not disaggregated to the program level, or the program fails to report the overall number of candidates. Data charts do not align directly with the scoring guide: the data charts report mean scores for categories while the scoring guide is organized by item, or the percentage of candidates achieving a specific level is reported for the category and not the individual item. Data are aligned to multiple standards and an aggregate score is reported for an overall category. Data must be reported at the same level as they were collected in the assessment: if data are collected on individual items, they must be reported by individual item and not as an average or overall score for the assessment or for a category.
Standard/Element is Met: Data charts are aligned with the assessment; the percentage and/or mean and range of candidates achieving the acceptable level is reported; charts are correctly labeled; and all required data are reported.

22 The Recognition Report
Part A: Recognition Decision
Complete this section last. National Recognition under the 2013 standards is more stringent than under the 2006 standards. See separate slides.
Part B: Status of Meeting Each NAGC-CEC Standard
See separate slide.
Part C: Evaluation of Program Report Evidence
This section is completed using evidence from the assessments provided, informed by the reviewer's expertise.

23 Recognition Report, cont’d
Part D: Evaluation of the Use of Assessment Results
Comments in this section derive primarily from information provided in Section I of the program report (context).
Part E: Areas for Consideration
Not required to complete. Comments should be over-arching, such as increasing the number of faculty who have expertise in gifted education, or ensuring candidates are working with multiple gifted K-12 learners.
Part F: Additional Comments
Not required to complete. This space is provided to draw items to the attention of accreditation examiners, for example, calling attention to the lack of faculty expertise in gifted education.

24 Recognition Report, Part B
Part B: Status of Meeting Each NAGC-CEC Standard
This is the heart of the report writing. Designate each standard as met, not met, or met with conditions after reviewing the assessments holistically ("preponderance of the evidence") to determine the degree to which candidate mastery of the breadth/depth of the concepts in each standard is assessed and measured.
Comments are helpful; comments are required when the decision is "met with conditions" or "not met." It is not appropriate to include comments directing how the program could/should change an assessment; the focus is on explaining what needs improvement.
See the next slide for the differences between decisions for programs using the 2006 standards vs. those using the 2013 standards.

25 Individual Standard Decisions 2006 vs. 2013 Standards
FOR THE 2006 Standards
For a standard to be determined "met," more than 50% of the elements in that standard (100% is not required) must be measured and assessed within the assessments' scoring rubrics. (See the NAGC Guidance for Program Reviewers for the 2006 standards for more details.)
FOR THE 2013 Standards
For a standard to receive a "met" determination, all the elements of the standard must have been addressed, and a preponderance of the evidence presented must show that candidates are meeting the expectations of the standard. (See the NAGC Guidance for Program Reviewers for the 2013 Standards for more details.)

26 Report Writing: Directions for Writing Comments
In Part B, please compose each comment in the following way:
"The program indicates that assessments (#s) address standard (#)."
Next, address each assessment in sequence; begin each paragraph with "Assessment (#)" (include the label for the assessment the first time it is referenced) and continue with brief comments about what concepts/elements of the standard are addressed by the assessment. If the assessment does not address the standard as claimed, make a brief note of it.
If there are issues with an assessment, scoring guide/rubric, or data, indicate the problem, being as specific as possible. These comments will become part of the conditions statements listed in Part G of the report. They can be as simple as "Assessment 7: The rubric does not include criteria for this standard," or "Assessment 4, which is a self-report by the candidates, is not acceptable as a key assessment."

27 Writing Comments, cont’d
The final paragraph of comments for each standard should conclude with a statement such as, "The assessments address all of the elements of standard (#) and the data support candidate mastery; therefore, the standard is determined to be met."
As you move through the standards, there is no need to repeat comments about the same issue/problem with an assessment; simply refer back to the first standard where the problem was described: "Assessment 5: see comments under standard 4."
Use the Reviewer Report Writing Document for assistance in choosing phrases to describe issues with assessments, scoring guides (rubrics), and data. Using these phrases ensures consistency across SPAs, which is helpful to universities when they receive multiple program reports.

28 Importance of Clear Conditions
Conditions describe the concerns about the assessments that show candidate mastery of the NAGC-CEC standards. Conditions guide program improvement and are the focus of a revised or response-to-conditions report. Thus, it is essential that the condition statements are as clear as possible. Keep in mind that subsequent reviews focus only on the condition statements developed by the previous reviewers.
Please be as thorough as possible in describing the concern, taking care not to write directive ("the program should…") language.
Using the same descriptors for the same or similar issues with assessments helps reduce follow-up questions and confusion from programs. Again, use the Reviewer Report Writing Document for assistance in choosing phrases where possible.

29 Reviewing Revised or Response to Conditions Reports
If possible, the report will be assigned to at least one reviewer from the original review.
If Revised, reviewers will evaluate only the standards that were previously not met or met with conditions.
If Response to Conditions, reviewers address only the issues listed in Part G.
Reviewers may not reverse previous decisions on "met" standards or add new concerns or conditions unrelated to the concerns/standards addressed in the second report.

30 Program Recognition Decisions
After completing Part B of the Recognition Report, any condition statements should be included in Part G. Lastly, return to Part A to make a recommendation on the program's recognition status, using the following guidelines, which differ for programs being reviewed under the 2006 vs. the 2013 NAGC-CEC standards. See the next slides.

31 Recognition – 2006 Standards
For Programs Using the 2006 NAGC-CEC Standards
National Recognition – the program meets CAEP's 80% rule on the state licensure exam (where applicable), 8 of the 10 individual NAGC-CEC standards are determined to be "met," and the program meets the data requirements.
National Recognition with Conditions – the program received "met" or "met with conditions" on 5 or more of the individual standards.
Not Recognized – the program received "met" or "met with conditions" on only 1-4 of the individual standards and needs major improvements in most or all areas noted above.

32 Recognition – 2013 Standards
For Programs Using the 2013 NAGC-CEC Standards
National Recognition – the program meets CAEP's 80% rule on the state licensure exam (where applicable), and all 7 of the individual standards are determined to be "met." **
Not Recognized – the program received "not met" on 1 or more individual standards.
National Recognition with Conditions – applies to all other cases. **
** Where a program has received "met" on 6 standards and "met with conditions" on 1 standard, reviewers should use their professional judgment in considering the conditions for the 7th standard to determine whether national recognition status is appropriate.
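As a rough illustration only (a hypothetical sketch, not an official NAGC or CAEP tool), the 2013-standards decision rules above can be read as the following logic; the standard labels and statuses in the example are invented.

```python
# Hypothetical sketch of the 2013-standards recognition decision described above.
# `decisions` maps each of the 7 NAGC-CEC standards to "met", "met with conditions",
# or "not met"; `meets_80_percent_rule` reflects the state licensure exam requirement
# (where applicable).
def recognition_2013(decisions: dict, meets_80_percent_rule: bool = True) -> str:
    statuses = list(decisions.values())
    if any(s == "not met" for s in statuses):
        return "Not Recognized"
    if meets_80_percent_rule and all(s == "met" for s in statuses):
        return "National Recognition"
    # All other cases, including 6 "met" plus 1 "met with conditions", where the
    # slide notes that reviewers weigh the conditions using professional judgment.
    return "National Recognition with Conditions"

# Invented example: six standards met, one met with conditions.
example = {f"Standard {i}": "met" for i in range(1, 7)}
example["Standard 7"] = "met with conditions"
print(recognition_2013(example))  # -> National Recognition with Conditions
```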

33 Final Steps for Completing Review
The Lead Reviewer's task is to summarize the team discussion(s) and, using the individual reports, write up the final Recognition Report and post it to the CAEP AIMS system.
Reviewers should check the final report for fairness and bias, and ensure the report is clear and contains sufficient explanation, using the recommended phrasing where possible.
Note: An NAGC program auditor will review the team report, make any edits for consistency, and submit a final report to CAEP.

34 Resources for Reviewers
CAEP will be moving materials from the old NCATE website to its new site. In the meantime, it is still possible to access archived webinars and other resources. Also, CAEP staff (Elizabeth Vilky), NAGC's SPA Coordinator (Jane Clarenbach), and your review partners are all resources to help you get comfortable and to answer questions.

35 Contact Information
Contact CAEP (Elizabeth Vilky) if you have problems with technology, accessing the report, or contacting a team member, if an emergency requires you to withdraw from your assignment, or for general questions.
Contact NAGC (Jane Clarenbach) if you have questions about the NAGC-CEC standards, alignment of standards in assessments, writing a quality report, etc.

