1
GEA Survey – GSD Assessment Protocol
Submitted by: Susen Zobel, GEA President; Starleen Orullian, GEA Executive Director; and Cindy Formeller, Associate Director
2
GEA Assessment Protocol Survey Results
Overview of assignment by GSD Board of Directors
Focus Groups
Survey Results

GEA was given the charge, in the September 15, 2015 study session of the GSD School Board, to survey teachers regarding the Benchmark Assessment Protocols that are new to teachers this year. The protocols were developed to reduce the number of times the Benchmark Assessments are administered. The question the Board asked was, "How do teachers like the new protocols, and do they add value to their teaching and student development?" GEA agreed to initiate the survey and began by holding focus groups to narrow down the pros and cons of the benchmark assessment protocol.
3
Focus Groups
Elementary Teachers (K-3)
Elementary Teachers (4-6)
Secondary Teachers (SAGE Tested Subjects)
Secondary Teachers (Non-SAGE Tested Subjects)

Beginning first with the focus groups: knowing that each school level would have different needs and concerns, we divided the focus groups into four teaching levels. Each group had approximately 10 or more teachers attend. The questions were the same for each group of teachers:
1. What are the positives of the new assessment protocol?
2. What difficulties do you have with the assessment protocol?
3. Problem-solving: how could those difficulties be addressed?
4. Brainstorm - other thoughts.
4
Commonalities for All Four Groups
After reviewing the notes taken from the four groups, we found five consistent messages:
An "I Don't Know" button is essential for the pre-assessment.
The pre-assessment needs to be ready no later than the first day of school.
The Math curriculum aligns easily with the Benchmark Assessment; Language Arts does not.
Teachers rely on Common Formative Assessments (CFAs) to inform instruction.
More time is needed to assess, review, and reflect on testing data.
5
Elementary Grades Top Three Concerns
Lower Grades (K-3):
Test questions are too lengthy for this group.
This age group has trouble with keyboarding.
Vocabulary and reading passages are too long and above grade level for young students.

Upper Grades (4-6):
Teachers need more access to test questions, training, and simplified results.
It takes more than 2 class periods to get through the test.
Accommodations are necessary for special-needs students and students not familiar with computerized tests.

Elementary and secondary needs and concerns were very different. We found we also needed to divide elementary into two parts: lower and upper grades.
6
Secondary Top Three Concerns
SAGE Tested Subjects:
Benchmarks don't prepare students for or align with SAGE.
Questions are poorly worded.
ELL students need accommodations.

Non-SAGE Tested Subjects:
Lack of computer lab time.
Countless hours spent writing and validating test questions.
PLCs and review are not as strong for non-core subjects.

SAGE Tested: Benchmark questions don't match the depth of knowledge of SAGE. Questions are confusing, allowing for multiple correct or partially correct answers; science illustrations are vague and indiscernible; Math offers word problems only, with no calculations; 7th grade vocabulary is beyond grade level. Although ELL students understand Math calculations, they find it difficult to read the word problems, and the same is true for ELA - they need accommodations.

Non-SAGE Tested Subjects: Availability of computer lab time is a problem for elective courses. Each subject teacher writes his or her own questions and back-up validation. For Anatomy/Physiology, Dance, Music, and CTE, PLCs don't function as well as they should for electives; some are co-opted into network PLCs and not allowed brainstorming time at their own school.
7
Granite District Benchmark Assessments Survey
Open for 10 days – Received 710 Responses

Up front, let us say that along with the survey findings, we also learned a lot about the "art of questioning." The following are the results of the survey.
8
Q1. How would you rate the value of the Granite District Benchmark Assessments?
From the 708 respondents, you will see that half find the benchmark assessments "somewhat valuable." There was no option for comments on this slide, as we hoped to gain a quick peek at how many teachers found value in the benchmarks. More teachers found some or great value, and 12% view the benchmarks as not valuable. An assumption we can make is that the teachers who found no value may be teaching an elective or CTE assignment.
9
Q2: Are your students able to complete the Benchmark assessments in one class period?
A common concern in the focus groups was the extended time it was taking to get through the benchmark testing; the reasons varied depending on grade or subject. There were 289 comments shared with this question. We categorized the comments as follows:
46% - said it took 2 or more class sessions to complete the test.
5% - reported that they did not administer the test.
17% - taught K-3 and reported varied reasons why it took more than one class period: some reported weeks, and some said the tests were too difficult.
30% - reported that lower-skill-level students (SPED or K-1) needed time accommodations. The language portion of the test was the most difficult and time consuming overall.
8% - reported that one class period was doable.
7% - reported that students finish in one class period, but mostly because of random clicking due to the inability to read fast enough to finish, or attention deficits.
1% - reported that the testing takes too much time from teaching.
5% - reported that they taught electives and, therefore, their assessments were very different.
1% - reported that they wished they gained more data from the testing.
10
Q3: Are the Benchmark questions testing the key concepts of the Core?
As you will see from this graph, a strong majority agree that the benchmark assessments are testing key concepts of the Core. The respondents who wrote a comment were categorized as follows:
6% - reported that they do not administer benchmarks.
5% - reported that they teach SPED students who do not take the benchmarks, relying on other assessment strategies instead.
6% - reported that the Math benchmarks align more closely with the Core than Language Arts.
30% - reported that the benchmark assessments "mostly" followed the Core.
33% - reported that the Benchmark questions need to be more in line with the Core.
2% - reported that they did not administer testing (counselors).
5% - believed the questions were adequate.
28% - reported that the Benchmark questions were worded poorly, i.e., frustrating to students, not compatible with student learning, or simply written nonsensically.
11
Q4: How functional is Granite technology in assisting Benchmark assessments?
As you know, we had a slow start to the pre-test this year, and many teachers complained about the lack of technology support. Some of this was because of the SAGE platform managed by the state. We had 132 comments to this question, categorized as follows:
12% - shared concerns that students' ability to navigate technology hurt their progress: whether they were SPED students, younger students, or ELL, the lack of keyboarding and computer skills made the testing difficult.
9% - reported that they needed more training on navigating the system and gathering data for review.
16% - reported experiencing either system or equipment failure when attempting to administer assessments.
8% - reported varied technological issues.
3% - reported that they received great support from STS or other technology experts and that the data reports were extremely helpful.
2% - reported that the technology works "fine."
23% - uncategorized: the 30 remaining responses were individual and varied, overall expressing some dissatisfaction.
12
Q5: Do you review data in your PLCs?
We left this question vague on purpose. From the focus groups, we discovered that teachers were relying on CFAs or other tests to gauge student growth and direct instruction. By wording the question in this manner, we hoped to get a feel for what data teachers are using overall. The respondents who left a comment were categorized as follows:
26% - reported using Benchmark data in PLCs.
11% - reported using CFAs specifically in PLCs.
26% - reported using a combination of Benchmark data, DIBELS, Go Math, CFAs, etc.
18% - reported using data in general, non-specific.
9% - reported not reviewing in PLCs for a myriad of reasons: no time, no structure, not applicable.
9% - reported using data other than Benchmarks.
2% - reported using SAGE specifically.
13
Q6: Do Benchmark assessments prepare your students for the summative SAGE test?
The bar chart shows that more teachers selected "no," and the reasons are varied, as the following 320 categorized responses show:
19% - said the question was not applicable or that they taught early childhood.
7% - gave clear "no" responses.
12% - indicated that they did not teach a SAGE-tested subject.
16% - reported that the assessments somewhat prepare students; many said it was due to the practice with computerized testing.
2% - strongly shared that THEY prepare students, not the tests.
17% - reported that the tests don't align, which frustrates students and teachers.
4% - stated that they were unsure.
2% - reported "yes" clearly.
21% - are as yet uncategorized because their responses were individual and did not fit into a clear category.
The question does show that teachers draw little comparison between the two tests, and that the time to dive deep into the data is non-existent.
14
Q7: Are the Benchmark reports easy to access and interpret?
As you can see, responses to this question were almost 50/50. There were 213 comments, as follows:
52% - wrote that the reports were difficult to access and read.
21% - reported that they liked the data and that it was easier to access this year.
7% - reported "yes," due to support from a building STS or other technology support.
21% - had varied responses that do not fall into a specific category.
15
Q8-9: Demographics
ELA – 18.77%
Math – 14.83%
Science – 6.62%
Mix of Above – 5.28%
Non-SAGE Tested Subjects – %
Special Education – 7.65%
K-3 – 32.13%
4-6 – 25.24%
7-8 – 14.26%
9-12 – 19.12%
Elementary General – 59.31%
Other – 9.25%