
1 Office of Education Improvement and Innovation
Instructional Learning Cycle: Pre-Planning Phase: Part 2

2 Outline
In Part 1 of this presentation you learned about the first two collaborative tasks of the Pre-Planning Phase of the Instructional Learning Cycle and about the ILC Process Document.

3 Outline: Assessments
In Part 2 we're going to learn about the third collaborative task—assessments—and the individual tasks that are part of the Pre-Planning Phase of the Instructional Learning Cycle.

4 Plan Assessment
Diagram: State Standards, Content, Instructional Strategies
The last task in the Pre-Planning Phase is to plan the assessment. Notice that you are planning the assessment BEFORE you select the content and strategies. Doing this type of backward planning allows you to make sure that your assessment, learning targets and standards are aligned and that there is consistency among all parts of the instruction. Selecting content and instructional strategies that are aligned with the learning targets and assessment is part of the Instructional Planning Phase, which will be covered in the next presentation.

5 Formative vs. Summative Assessment
Before we go into detail about planning the assessment, let's look at the different types of assessment. Assessments are usually categorized as formative or summative. A formative assessment is an assessment FOR learning; the results are used by the teacher to inFORm instruction. A summative assessment is an assessment OF learning and provides a snapshot of what a student knows at a particular point in time. Summative assessments are usually graded.

6 The Assessment Continuum – Most Formative
Although there are only the two categories, assessments do not always fall cleanly into one or the other—it really is a continuum, as shown on the screen. Let's look at the continuum starting from the left. An example of the most formative assessment is the daily classroom assessment, such as an "exit slip" on which students rate their current understanding of a topic covered or provide feedback on the instructional strategy, materials or teaching. This type of assessment is not used for grading purposes.

7 The Assessment Continuum – More Formative
As you progress to the right on the continuum into the more formative assessments, this category refers to common formative assessments such as pre-assessments to establish a baseline or post-assessments such as a weekly quiz or unit test. Other examples of assessments that fall somewhere between more formative and more summative include Aimsweb, DIBELS and DRA assessments. These examples are not to be viewed as recommendations and are only mentioned as examples of more formative and more summative assessments. All of these examples may or may not be graded or used to calculate a grade, and they can be used both formatively and summatively. A pre-assessment used to establish a baseline to inform your planning should not be graded. A post-assessment, however, may be used summatively and graded, but it may also be used to inform instruction and to determine what will be taught next. For example, are the students ready to move on, or do certain learning targets need to be retaught? Some of the assessments, e.g., DIBELS, may be used formatively early in the year, but the final one of the year may be used summatively to determine whether a student has met the grade-level target.

8 The Assessment Continuum – More Summative
As you continue moving right on the continuum, the assessments become more summative. Examples of more summative assessments include district-wide benchmark assessments that may be administered two to four times per year. These include collaboratively developed assessments, such as quarterly common writing assessments, or assessments that are included as part of a purchased program, such as NWEA's Measures of Academic Progress. Other examples of more summative assessments, moving toward most summative, are collaboratively developed mid-term and final exams.

9 The Assessment Continuum – Most Summative
Finally, on the right-most end of the continuum are the annual state-mandated summative assessments, e.g., the M-STEP or MME. These assessments are developed outside of the district, and although they do not affect a student's grade in a class, they do establish the student's level of proficiency according to criteria set by the State.

10 How do we assess the identified learning targets?
Pre-assessments establish a baseline to inform instructional planning.
Post-assessments may be the same as, or similar to, the pre-assessment.
Now that you've seen the different variations of formative and summative assessments, let's look more closely at planning the assessment. In the ILC, when we refer to an assessment, we are referring to a common formative assessment. Why formative? Research consistently shows that regular, high-quality formative assessments increase student achievement (Black & Wiliam, 1998). The key is to use the results to plan or adjust instruction as necessary. Why common? When possible, teams consisting of teachers of the same content area and grade level or course develop the assessments together. At this point in the ILC process, the common formative assessment is used as a pre-assessment to establish a baseline to inform your instructional planning. This pre-assessment should not be used for grading purposes. Review the graphic to see where the common formative assessment falls on the assessment continuum. There are different philosophies about using the exact same assessment as both the pre-assessment and the post-assessment, so this is a decision to be made within your building, grade level or teacher team. If you do not use the exact same assessment, you will need to create a similar common formative assessment to be used as the post-assessment later in the ILC.
Source: Black & Wiliam, Assessment and Classroom Learning, Assessment in Education: Principles, Policy, and Practice (1998), 5(1), 7–73.

11 Why Common Formative Assessments?
Share results. Learn from each other. Help create the assessment.
The use of a common assessment enables teachers to share the results of their instruction with colleagues, which enriches the conversations around the results. This ability to share results provides an opportunity for teachers to learn from each other, which is one of the more powerful forms of professional development. For an even more powerful professional development experience, not only should everyone agree to use a common assessment, but they should all help create it!

12 Why Common Formative Assessments?
Keep in mind that if your district or building configuration is such that the teachers do not have common content areas or grade levels for collaboration, the common formative assessment items may be modified for each grade level or subject area as appropriate. For example, a common formative assessment on reading comprehension in science should include text and questions related to what is being studied in science, while in social studies the selected text and questions should be related to what is being studied in social studies. In some schools, there is only one teacher responsible for all of the ELA, Math, Science or Social Studies instruction in a grade or span of grades, and it is sometimes difficult to find a true collaborative partner to share in the creation of the assessment. In these cases, we encourage principals or coaches to support the teacher's thinking as they develop the assessment.

13 Why Collaboratively Developed Common Formative Assessments?
The simple act of collaboratively developing a common assessment requires teachers to reflect upon and examine their practice, and it has several benefits, including the following:
It builds on the teachers' understanding of what is being taught as they co-develop how it will be assessed.
It encourages the collaborators to think deeply about and come to agreement on how to assess a certain learning target.
It helps to ensure that the assessment is aligned with what is being taught by all of the team members when common grade-level content is being taught.

14 Creating Pre-Assessments
Standard: Solve real world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem.
Learning Target: Solve real world problems involving multiplication of fractions by using visual fraction models.
Assessment: Solve <insert real world problem involving multiplication of fractions here> by using visual fraction models.
Earlier in this presentation, you learned that one of the key ways to identify learning targets that correctly reflect the standard is to use the same language used in the standards, particularly the verbs, in the learning targets. To ensure alignment between the learning targets and an assessment, you should also use the same language. By looking carefully at the language used in the standards, you can determine the cognitive level demanded by the standard. Your learning targets should reflect this same level of thinking, and assessments should follow suit. For example, the verb "solve" requires a different level of thinking than "identify," and a student should be assessed differently on "solving" than on "identifying." Once you've correctly identified the learning targets, your assessment items are already partially written for you. For example, for the sample standard that we unwrapped, each assessment item should start with "Solve" and end with either "by using visual fraction models" or "by using equations." The only part of the assessment item that remains to be created is the "real world problem involving the multiplication of fractions."

15 Creating Pre-Assessments
2-3 standards, 4-5 items per standard. See Hess' Cognitive Rigor Matrix (linked on the slide).
There is no set length for a common formative assessment, but according to Thomas Many, PhD, an assessment covering two or three standards using four or five items per standard provides a sufficient amount of data about the learner. Regardless of the number of standards being focused on, each assessment item should be clearly aligned with one of the identified learning targets, and all learning targets for a standard should be assessed. If there are several learning targets for a standard and there are at least two items per learning target, you will most likely have more than five items for that standard. Remember the reason for the pre-assessment: to obtain accurate data on which to base your instructional planning in the next phase of the ILC. Keep this in mind while creating the pre-assessment! Take a few minutes to review Hess' Cognitive Rigor Matrix at the link on the screen to see how the verbs relate to the different cognitive levels and which verbs are used at the different cognitive levels.
Source: Uncommonly Good Common Assessment Practices, presentation by Thomas Many, PhD, at Macomb ISD, January 10, 2014.
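To make the alignment check above concrete, here is a minimal Python sketch; the learning targets, item names, and counts are hypothetical placeholders, not part of the ILC materials. It simply verifies that every item is written to one of the identified learning targets and that every learning target is covered by at least two items.

```python
# Minimal sketch: check that a pre-assessment blueprint is aligned.
# Learning targets and the item-to-target mapping below are hypothetical.
from collections import Counter

learning_targets = {
    "LT1: solve fraction multiplication problems using visual models",
    "LT2: solve fraction multiplication problems using equations",
}

# Which learning target each assessment item is written to assess.
items = {
    "item 1": "LT1: solve fraction multiplication problems using visual models",
    "item 2": "LT1: solve fraction multiplication problems using visual models",
    "item 3": "LT2: solve fraction multiplication problems using equations",
    "item 4": "LT2: solve fraction multiplication problems using equations",
    "item 5": "LT1: solve fraction multiplication problems using visual models",
}

counts = Counter(items.values())

# Every item should align to one of the identified learning targets.
unaligned = [item for item, target in items.items() if target not in learning_targets]
# Every learning target should be assessed by at least two items.
thin_coverage = [t for t in learning_targets if counts[t] < 2]

print("Unaligned items:", unaligned or "none")
print("Targets with fewer than 2 items:", thin_coverage or "none")
```

A quick check like this mirrors the manual review a team would do when confirming that each item traces back to an identified learning target.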

16 Selected Response Items
Now let's look at some general tips for writing assessment items. These tips apply to both pre- and post-assessments. Selected response assessment items, also referred to as objective assessment items, include multiple choice, matching, and true/false questions. Note that the term "objective" refers to the fact that each question has a right and a wrong answer and that the questions can be impartially scored. These question types can be very effective and efficient methods for measuring students' knowledge and reasoning. Sometimes objective tests are criticized because the questions emphasize only lower-level thinking skills, but it is possible to address higher-level thinking skills by including items that focus on "how" and "why." Multiple choice items that involve scenarios, case studies, and analogies can be effective ways to assess students in the application, analysis, synthesis and evaluation of information.

17 Guidelines for Writing Multiple Choice Questions
Multiple choice questions consist of a stem, which may be a question or a statement, and several answer choices. There should be one correct answer and several incorrect answers, known as distractors. Here are some guidelines for writing multiple choice questions:
Most of the information for the item should be in the stem.
The information should be relevant and realistic. Use new examples and cases, if needed.
Questions should be written in a positive format.
There should be only one correct answer.
The distractors should be plausible.
Distractors should be mutually exclusive.
The length of all answer options should be similar.
Avoid "All of the above," "None of the above," "A and C," etc.
All answer options should be similar in complexity.
If applicable, list answer choices in alphabetical or numerical order.

18 Guidelines for Writing Matching Questions
Matching items consist of two lists of words, phrases, or images (often referred to as stems and responses). Students review the list of stems and match each with a word, phrase, or image from the list of responses. Here are some guidelines for writing matching items:
Instructions should clarify the basis for matching, e.g., match each of the photos of a bird on the left with its correct scientific name on the right.
Instructions should indicate whether responses can be used only once, if they can be used more than once, and if there are some that may not be used at all.
Questions and responses in one item should all be in the same category.
Each matching item should have four to eight stems and responses.
Stems and responses should be grammatically consistent.
All of the stems and responses for one item should be on the same page.
Generally, the stems should be longer and the responses should be shorter.
All of the responses should be plausible for each stem.
Responses should be listed in some systematic order.
Responses are generally listed to the right of the stems.

19 Guidelines for Writing True/False Questions
True/false questions may seem to be easy to write, but it can be difficult to write effective true/false questions. Also, the reliability of true/false questions is not generally very high because of the high probability of guessing correctly. In most cases, true/false questions are not recommended, but if you do use them, here are some guidelines:
Statements should be clearly true or false.
Each statement should contain only one idea.
There should be an equal number of true and false correct responses.
True/false items should be of equal length.
False items should be plausible.
Avoid broad statements and those that contain words such as "always," "never," "some," or "not."
Avoid trivia statements.

20 Selected Response Items
In selected response assessment items, such as multiple choice, matching or true/false, the answer is visible and the student only needs to recognize it. Although selected response items can address the higher levels of Bloom's taxonomy, many of them demand only lower levels of cognition.

21 Constructed Response Items
Another category of assessment items is constructed response. With constructed response assessments, the answer is not visible; the student must recall or construct it. Although often more time-consuming to grade, constructed response items are a more effective way to assess higher-level thinking skills than selected response items. Examples of constructed response items that are often used in pre-assessments include short answer questions, essays, and fill-in-the-blank items.

22 Fill-in-the-Blank & Short Answer
The most basic constructed response items are fill-in-the-blank or short answer questions. Here are examples of two common ways that these types of questions are presented:
Which state was the 26th state to be admitted to the union?
The 26th state to be admitted to the union was ____________.
Fill-in-the-blank and short answer assessment items are relatively easy to create compared to creating an effective multiple choice question, and they require a student to recall rather than only recognize the correct answer.

23 Guidelines for Writing Short Answer Questions
When creating short answer items, follow these guidelines:
Ask an unambiguous, complete question.
Call for a specific response, e.g., a number, word, symbol, or phrase.
Have one correct response.

24 Guidelines for Writing Fill-in-the-Blank Questions
When creating fill-in-the-blank items, follow these guidelines:
Place the blank or blanks at the end of the sentence.
Blanks should replace keywords.
Blanks within one item should be of equal length.
All cues should lead to keywords.
Use 1-3 blanks per item.
Assign each blank a value of one point.

25 Guidelines for Writing Essay Questions
Essay questions are often used to assess higher-level thinking skills such as analysis, synthesis, and evaluation. Verbs such as explain, compose, propose, defend, develop, and evaluate are just a few examples of verbs that are better tested with an essay question. Some general guidelines for well-written essay questions include:
Note the standard or objective being assessed.
Provide the point value of the question or parts of the question and any time limits.
Clearly define the task using clear action verbs that are aligned with the learning objective.
If lengthy, subdivide into multiple sections.
Provide a problem situation when appropriate.
State the criteria for grading.
Avoid letting students select which essay item to respond to.

26 Define Proficiency
What does proficient student work look like for the identified learning targets? The final part of planning the pre-assessment is defining what "proficient" work looks like for the identified learning targets. In other words, what does a student have to demonstrate to you to prove that they are proficient? How will you know where to start in your instructional planning? It is important that the definition of "proficient" be determined collaboratively so that, if your team consists of teachers from the same grade and content area, you can compare results across all students who were assessed. The output of this task is an answer key for objective questions and a rubric and sample answers for each constructed response item. Guidelines for creating a rubric are on the next slide. After creating an answer key and any necessary rubrics, the team also needs to define proficiency for the entire assessment. How many items does a student need to answer correctly to be considered proficient? 80%? 100%? Once the team agrees on that, they can define "far from proficient" and "close to proficient" for the assessment. Remember that although you are reviewing the pre-assessments and determining each student's current proficiency level, this assessment does not count for a grade!
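As a concrete illustration of agreeing on a cut score, here is a minimal Python sketch, assuming a hypothetical 15-item assessment and illustrative cut points (80% for proficient, 60% for close to proficient); your team's agreed-upon thresholds, labels, and item count would replace these.

```python
# Minimal sketch: classify students by proficiency on a pre-assessment.
# The item count, cut points, and labels below are illustrative only;
# use whatever your team agreed upon when defining proficiency.

TOTAL_ITEMS = 15           # e.g., 3 standards x 5 items
CUT_POINTS = [             # (minimum percent correct, label)
    (80, "proficient"),
    (60, "close to proficient"),
    (0,  "far from proficient"),
]

def proficiency_level(items_correct: int) -> str:
    """Return the proficiency label for a raw score."""
    percent = 100 * items_correct / TOTAL_ITEMS
    for minimum, label in CUT_POINTS:
        if percent >= minimum:
            return label
    return CUT_POINTS[-1][1]

if __name__ == "__main__":
    # Hypothetical raw scores for a class of five students.
    scores = {"Student A": 13, "Student B": 9, "Student C": 5,
              "Student D": 12, "Student E": 7}
    for name, correct in scores.items():
        print(f"{name}: {correct}/{TOTAL_ITEMS} -> {proficiency_level(correct)}")
```

Running the sketch classifies, for example, a score of 13 out of 15 as proficient and a score of 7 out of 15 as far from proficient under these illustrative cut points.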

27 Rubrics
For some constructed response items, such as an essay, the team will need to develop a rubric for grading. A rubric contains a list of objectives, characteristics, or criteria, and it allows you to indicate the degree or level to which each objective, characteristic, or criterion was met. Often a rubric includes ratings for multiple dimensions, and the rating scale for each dimension may be a numerical point value or may be categories such as basic, proficient, or advanced. Once you have determined the performance target based on the objective or standard, follow these steps to create a rubric with a rating scale:
Define the expected performance.
Determine the dimensions to assess. For example, some categories for a writing rubric might be ideas, content, word choice, and conventions.
For each of the dimensions, define at least three different levels of performance. The more detail for each level, the better.
Assign a point value, e.g., 1, 2, and 3, and/or words, e.g., basic, proficient, advanced, to each level.
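As a concrete illustration of the rubric structure just described, here is a minimal Python sketch; the dimensions, level descriptors, and point values are hypothetical examples patterned on the writing-rubric categories mentioned above, not an official ILC rubric.

```python
# Minimal sketch of a rubric as a data structure: each dimension has
# three performance levels, each with a point value and a short descriptor.
# Dimensions and descriptors here are illustrative placeholders.

rubric = {
    "ideas": {
        1: "basic - ideas are unclear or underdeveloped",
        2: "proficient - ideas are clear and adequately developed",
        3: "advanced - ideas are clear, focused, and fully developed",
    },
    "word choice": {
        1: "basic - vague or repetitive wording",
        2: "proficient - accurate, appropriate wording",
        3: "advanced - precise, vivid wording",
    },
    "conventions": {
        1: "basic - frequent errors interfere with meaning",
        2: "proficient - minor errors do not interfere with meaning",
        3: "advanced - nearly error-free",
    },
}

def score_response(ratings: dict) -> int:
    """Total the points a scorer assigned for each rubric dimension."""
    return sum(ratings[dimension] for dimension in rubric)

# Example: one student's essay rated 2 on ideas, 3 on word choice, 2 on conventions.
print(score_response({"ideas": 2, "word choice": 3, "conventions": 2}))  # 7 of 9 points
```

The nested mapping mirrors the dimensions-by-levels layout of a printed rubric, so adding a dimension or a fourth performance level only means adding entries.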

28 Action Items
Once these collaborative Pre-Planning tasks are completed and the Pre-Planning questions answered, there are several action items to complete. At the end of the last meeting of the Pre-Planning Phase, the team should identify a date to administer the pre-assessment and set the date of the next meeting for the group to begin the Instructional Planning Phase. Then, the teachers individually administer the pre-assessment in their classrooms on the identified date and analyze the results for their classroom prior to the next meeting date. Note that the date of the next meeting should be after the pre-assessment date and allow the teachers a day or two to analyze the results. The next meeting, where you combine and analyze the pre-assessment results, is the first meeting of the Instructional Planning Phase and will be covered in the next presentation. It is in that meeting that any potential revisions to the rubric should be discussed and agreed upon.

29 Pre-Assessment Results
Within the ILC Process Document there is a page that contains a chart for you to record your individual classroom results. Level of proficiency should be determined by using the rubric and sample answers that the team agreed upon earlier in this phase. As you use the rubric to evaluate the assessments, you may find that you want to fine-tune the criteria. Quite often, defining proficiency levels is an iterative process, but for the time being use the originally agreed-upon rubric to analyze your individual class results. Any suggested changes should be discussed with the group in the next meeting, after you have combined and compared proficiency levels across all classrooms.

30 Putting Faces on the Data
Example chart: student names (Donovan, Jane, Alan, Misty, Akesha, Mike) recorded against learning targets such as real world applications involving unit rates of areas, multiplying/dividing fractions, and writing unit rates to compare two different ratios.
When recording the pre-assessment results for your class, it is important to take the time to enter at least the first names of the students who fall within each proficiency level. While you are noting each student's proficiency level, in particular for the students who are not yet proficient, consider some challenges they might face in learning the content and think about what these students might need during instruction to move them to proficiency. Naming each student is a way to personalize the data and is important as you move forward with instructional planning.

31 Putting Faces on the Data
In their book "Putting Faces on the Data," Lyn Sharratt and Michael Fullan describe the many benefits of personalizing data, two of which are increased student engagement and a positive impact on school culture. The classroom results template from the Instructional Learning Cycle encourages you to put faces on the data and to determine the best instructional plan to meet the needs of all students.

32 Review
You must answer each question in the review before you can continue to the last slide of the presentation. On the next slide you will be presented with a series of questions to review the content from this presentation. Once you begin the questions, you must answer them all before you move on. After completing the review, you will continue to the final slide of this presentation.

33 Question 1: Which of the following is an example of a "more formative" assessment based on the assessment continuum covered in this presentation?
A. The M-STEP
B. Collaboratively developed pre-assessment
C. District-wide benchmark assessments
D. The SAT
Answer: B

34 Question 2: Approximately how many items should be included for each standard being assessed on a common formative assessment in order to provide a sufficient amount of data about the learner?
A. 1-2 items
B. 2-3 items
C. 3-4 items
D. 4-5 items
Answer: D

35 Question 3: Which of the following is an example of a "more summative" assessment based on the assessment continuum covered in this presentation?
A. M-STEP
B. Exit slip
C. Collaboratively developed pre-assessment
D. NWEA Measures of Academic Progress
Answer: D

36 Conclusion
Part 1 of this presentation covered the tasks and action items related to the Pre-Planning Phase of the Instructional Learning Cycle. We also introduced the ILC Process Document to help guide you through the process. In Part 2 you learned about the third collaborative task—assessments—and the individual tasks that are part of the Pre-Planning Phase of the Instructional Learning Cycle. In the next presentation, we'll cover the Instructional Planning Phase of the ILC.

