1
ASSESSMENT AND FEEDBACK: PRINCIPLES & PRACTICE
2
Assessment and Feedback: Principles and Practice Chris Rust Oxford Centre for Staff and Learning Development Oxford Brookes University, UK
4
Cartoon by Bob Pomfret, copyright Oxford Brookes University
Student learning and assessment
“Assessment is at the heart of the student experience” (Brown, S. & Knight, P., 1994)
“From our students’ point of view, assessment always defines the actual curriculum” (Ramsden, P., 1992)
“Assessment defines what students regard as important, how they spend their time and how they come to see themselves as students and then as graduates.... If you want to change student learning then change the methods of assessment” (Brown, G. et al, 1997)
5
Influence on student learning
Assessment influences both:
- Cognitive aspects - what and how
- Operant aspects - when and how much
(Cohen-Schotanus, 1999)
6
Issues in assessment
7
Reliability
Laming (1990): very poor agreement between blind double marking of exam scripts at ‘a certain university’. The only encouraging thing that can be said about these correlations is that they are all positive.
Newstead & Dennis (1994): huge differences between both internal and external examiners’ marks, with externals more random than internals:

                        Lowest  Highest
Student E   Externals       52       80
            Internals       47       68
Student F   Externals       50       85
            Internals       55       75
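The agreement between two markers, as studied by Laming, can be quantified with a Pearson correlation between the two sets of marks for the same scripts. A minimal sketch; the marks in the example are invented for illustration, not data from any of the studies cited:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two markers' scores
    for the same set of scripts (1.0 = perfect agreement in ranking,
    0.0 = no linear relationship)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical marks awarded by two independent markers to six scripts
marker_a = [52, 47, 50, 55, 68, 80]
marker_b = [60, 45, 58, 50, 70, 75]
print(round(pearson(marker_a, marker_b), 2))
```

A "merely positive" correlation, as in Laming's finding, means the markers broadly agree on rank order while still disagreeing substantially on individual scripts.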
8
Activity Marking exercise Take 5 minutes to individually mark the two sample papers. We will ‘moderate’ our marks following this
9
Reliability contd
Hartog & Rhodes (1935): among experienced examiners, 45% marked scripts differently from the original marker, and when the scripts were remarked, 43% gave a different mark.
Hanlon et al (2004): careful and reasonable markers given the same guidance and the same script could produce differing grades, and the same examiners gave different marks after a gap of time.
10
Reliability contd QAA (2007) “Focusing on the fairness of present degree classification arrangements and the extent to which they enable students' performance to be classified consistently within institutions and from institution to institution, this paper [Quality Matters: The classification of degree awards] suggests that the findings of the audit reports leave no room for complacency”
11
Calibration, by subject communities “to establish a simple mechanism to bring together examiners from within a subject community (however best described) to compare their students’ work and to judge student achievement against the standards set in order to improve comparability and consistency” HEFCE QAR (2015, p27). “..to provide public assurance that the academic standards achieved by students are broadly comparable across the UK, more work is needed to ensure consistency in calibration. We envisage this would be done through external examiners working within their wider subject communities to assure better comparability of standards.” QAA Response to QAR (2015 p7)
12
Task In 3s: Consider the three examples of assessment grids. a) What do you think of them in principle? Could you make use of the idea? b) What do you think of these specific examples? How would you need to adapt them? c) Do you use anything like this already? If so, what is it like and how well does it work?
13
Issues of validity and authenticity “We continue to assess student learning - and to graduate and certify students much as we did in 1986, 1966, or 1946, without meaningful reference to what students should demonstrably know and be able to do” (Angelo, 1996) “Assessment systems dominate what students are oriented towards in their learning. Even when lecturers say that they want students to be creative and thoughtful, students often recognise that what is really necessary, or at least what is sufficient, is to memorise.” (Gibbs, 1992)
14
Constructive alignment - what is it? “The fundamental principle of constructive alignment is that a good teaching system aligns teaching method and assessment to the learning activities stated in the objectives so that all aspects of this system are in accord in supporting appropriate student learning” (Biggs, 1999)
15
Constructive alignment: 3-stage course design
What are the “desired” outcomes?
What teaching methods require students to behave in ways that are likely to achieve those outcomes?
What assessment tasks will tell us if the actual outcomes match those that are intended or desired?
This is the essence of ‘constructive alignment’ (Biggs, 1999)
[Diagram: the learner at the centre of a cycle linking learning outcomes, learning activity and assessment]
16
Task
Individually:
1. Consider one of your courses and its learning outcomes.
2. Then consider whether the current teaching and assessment methods are consistent with these outcomes. In particular, ask yourself: is the achievement of each learning outcome really being assessed?
In pairs: Take it in turns to tell your partner your conclusions. Where discrepancies have been found, the listener's job is to help their partner find ways to change the assessment methods to make them more valid and better aligned with the learning outcomes.
Refer to handout
17
Purposes of assessment
18
Task: purposes of assessment
Individually: Write down as many purposes of assessment as you can.
In 3s: Compare and compile your lists into one you agree on. Any patterns?
19
Purposes of assessment (adapted from Brown, G. et al, 1997)
- motivate students
- diagnose a student's strengths and weaknesses
- help students judge their own abilities
- provide a profile of what each student has learnt
- provide a profile of what the whole class has learnt
- grade or rank a student
- permit a student to proceed
- select for future courses
- license for practice
- select, or predict success, in future employment
- provide feedback on the effectiveness of the teaching
- evaluate the strengths and weaknesses of the course
- achieve/guarantee respectability and gain credit with other institutions and employers
20
Purposes of assessment 2
1. Motivating students
2. Creating learning activities
3. Providing feedback
4. Judging performance (to produce marks, grades, degree classifications; to differentiate; gatekeeping; qualification)
5. Quality assurance
1, 2 & 3 concern learning and perform a largely formative function; they should be fulfilled frequently.
4 & 5 are largely summative functions; they need to be fulfilled infrequently but well.
21
Summative and formative assessment
22
Formative vs summative assessment
Formative: the focus is to help the student learn.
Summative: the focus is to measure how much has been learnt.
The two are not necessarily mutually exclusive, but summative assessment tends to:
- come at the end of a period or unit of learning
- focus on judging performance, grading, differentiating between students, gatekeeping
- be of limited or even no use for feedback
23
Problems of summative assessment
Summative assessment can:
- encourage surface/strategic approaches
- fail to value, build on or make use of prior learning, experience and student ability
- encourage playing safe and avoiding risk-taking
- provide no feedback (e.g. exams)
- damage self-efficacy
- be time consuming for staff, reducing the overall amount of assessment
24
Potential of formative assessment
Feedback is the most powerful single influence that makes a difference to student achievement - Hattie (1987), in a comprehensive review of 87 meta-analyses of studies.
Feedback has extraordinarily large and consistently positive effects on learning compared with other aspects of teaching or other interventions designed to improve learning - Black and Wiliam (1998), in a comprehensive review of formative assessment.
25
11 conditions under which assessment supports learning 1 (Gibbs and Simpson, 2002)
1. Sufficient assessed tasks are provided for students to capture sufficient study time (motivation)
2. These tasks are engaged with by students, orienting them to allocate appropriate amounts of time and effort to the most important aspects of the course (motivation)
3. Tackling the assessed task engages students in productive learning activity of an appropriate kind (learning activity)
4. Assessment communicates clear and high expectations (motivation)
26
11 conditions 2
5. Sufficient feedback is provided, both often enough and in enough detail
6. The feedback focuses on students’ performance, on their learning and on actions under the students’ control, rather than on the students themselves and on their characteristics
7. The feedback is timely, in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance
8. Feedback is appropriate to the purpose of the assignment and to its criteria for success
9. Feedback is appropriate in relation to students’ understanding of what they are supposed to be doing
10. Feedback is received and attended to
11. Feedback is acted upon by the student
27
Task Think about a single course that you teach. How are you doing on the 11 conditions?
28
Formative assessment – where & when? (Chickering and Gamson, 1987) Knowing what you know and don’t know focuses learning. Students need appropriate feedback on performance to benefit from courses. In getting started, students need help in assessing existing knowledge and competence. In classes, students need frequent opportunities to perform and receive suggestions for improvement. At various points during college, and at the end, students need chances to reflect on what they have learnt, what they still have to learn, and how to assess themselves.
29
If more formative assessment - how?
- Self-assessment
- Peer-marking (model answers)
- Peer-feedback (on problems/reports/essays/presentations)
- Two-stage assignments or tests, using self/peer assessment
- Mechanised assessment (including CAA)
- Sampling for feedback
- Generic feedback
NB Student effort/time (identify learning hours)
30
Good feedback practice (Nicol and Macfarlane-Dick, 2006):
- helps clarify what good performance is (goals, criteria, standards)
- facilitates the development of self-assessment and reflection in learning
- delivers high quality information to students about their learning
- encourages teacher and peer dialogue around learning
- encourages positive motivational beliefs and self-esteem
- provides opportunities to close the gap between current and desired performance
- provides information to teachers that can be used to help shape teaching
31
Activity
In small groups (3 or 4): Identify one or two examples from your own experience of assessment tasks that satisfy at least four of the seven principles of good feedback practice.
32
Self and Peer Assessment
33
Involve the students – 1: Self-assessment
Simple:
- Strengths of this piece of work
- Weaknesses in this piece of work
- How this work could be improved
- The grade it deserves is…
- What I would like your comments on
More complex: see handout
it is the interaction between both believing in self-responsibility and using assessment formatively that leads to greater educational achievements (Brown & Hirschfeld, 2008)
34
Peer marking – using model answers (Forbes & Spence, 1991)
Scenario: Engineering students had weekly maths problem sheets marked, plus problem classes. Increased student numbers meant marking became impossible and problem classes were big enough to hide in; students stopped doing the problems and exam marks declined (average fell from 55% to 45%).
Solution: A course requirement to complete 50 problem sheets, peer assessed at six lecture sessions, though the marks do not count; exams and teaching unchanged.
Outcome: Exam marks increased (average rose from 45% to 80%).
35
Peer feedback - Geography (Rust, 2001)
Scenario: Geography students wrote two essays but showed no apparent improvement from one to the other, despite lots of tutor time spent writing feedback; increased student numbers made the tutor workload impossible.
Solution: Only one essay, but a first draft is required part way through the course. Students read and give each other feedback on their draft essays, then rewrite the essay in the light of the feedback. In addition to the final draft, students also submit a summary of how the 2nd draft has been altered from the 1st in the light of the feedback.
Outcome: Much better essays.
36
Peer feedback - Computing (Zeller, 2000*)
The Praktomat system allows students to read, review, and assess each other’s programs in order to improve quality and style. After a successful submission, the student can retrieve and review a program of some fellow student selected by Praktomat. After the review is complete, the student may obtain reviews and re-submit improved versions of his program. The reviewing process is independent of grading; the risk of plagiarism is narrowed by personalized assignments and automatic testing of submitted programs.
[*Available at: http://www.infosun.fim.unipassau.de/st/papers/iticse2000/iticse2000.pdf]
37
Peer feedback – Computing cont’d (Zeller, 2000*) In a survey, more than two thirds of the students affirmed that reading each other’s programs improved their program quality; this is also confirmed by statistical data. An evaluation shows that program readability improved significantly for students that had written or received reviews.
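The core of the peer-review cycle Zeller describes is assigning each submitter someone else's program to review. The slides do not say how Praktomat actually selects reviewers, so the rotation scheme below is purely an illustrative assumption that guarantees nobody reviews their own work:

```python
import random

def assign_reviews(students):
    """Assign each student one peer's submission to review.

    Shuffling the list and then rotating it by one position means
    every student reviews exactly one peer and is reviewed exactly
    once, with no self-reviews (an illustrative scheme, not
    Praktomat's documented selection logic).
    """
    order = students[:]
    random.shuffle(order)
    # each student reviews the next student's submission in the cycle
    return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

pairs = assign_reviews(["ana", "ben", "carol", "dev"])
# every student appears once as a reviewer and once as an author
```

Keeping this assignment independent of grading, as the slide notes, is what lets the review stay formative.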
38
Assessing groups
39
Benefits of cooperative learning
Cooperation, compared with competitive and individualistic efforts, tends to result in:
- higher achievement
- greater long-term retention
- more frequent use of higher-level reasoning
- more accurate and creative problem-solving
- more willingness to take on and persist with difficult tasks
- more intrinsic motivation
- transfer of learning from one situation to another
- greater time on task
(Johnson, Johnson and Smith, 2007, p 19)
40
Problems with cooperative learning
- many students don’t like it
- students may find group work assessment unfair
- social loafing
- free riding
- lack of teamwork skills
- group think, or avoiding conflict
- lack of time to gel into an effective group
- inappropriate group size and/or lack of sufficient heterogeneity in the group
(Johnson & Johnson, 1999)
41
Some possible approaches to group assessment*
- Simplest path (everyone gets the same mark)
- ‘Divide and concur’ (assess individual components separately)
- Add differentials (the whole group shares the total group mark according to their contributions)
- Add contribution marks (as well as the group mark, a smaller additional, individual, component)
- Add further tasks (an additional individual assessment task, e.g. a reflective commentary)
- Test orally (to test individual participation)
- Test in writing (e.g. in the exam)
* Race 2001
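The 'add differentials' option amounts to simple arithmetic: the pool of marks earned by the group as a whole is shared out in proportion to rated contributions. A minimal sketch with invented names and ratings; Race's briefing does not prescribe this specific formula:

```python
def differential_marks(group_mark, contributions):
    """Share the group's total marks in proportion to contribution ratings.

    The pool is the group mark multiplied by the number of members, so
    the *average* individual mark always equals the group mark.
    (Illustrative weighting scheme, not Race's own formula.)
    """
    total_rating = sum(contributions.values())
    pool = group_mark * len(contributions)  # total marks to distribute
    return {name: round(pool * rating / total_rating, 1)
            for name, rating in contributions.items()}

# a group mark of 60 shared among three members rated 5, 4 and 3
marks = differential_marks(60, {"A": 5, "B": 4, "C": 3})
```

With ratings 5, 4 and 3 the pool of 180 marks splits 75 / 60 / 45, so strong contributors gain exactly what weak contributors give up.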
42
Mechanise assessment
1. Statement banks
2. Assignment attachment sheets
3. Optical mark readers
4. Computer-aided assessment
43
Mechanise assessment - 1: Statement banks
Write out frequently used feedback comments, for example:
1. I like this sentence/section because it is clear and concise
2. I found this paragraph/section/essay well organised and easy to follow
3. I am afraid I am lost. This paragraph/section is unclear and leaves me confused as to what you mean
4. I would understand and be more convinced if you gave an example/quote/statistic to support this
5. It would really help if you presented this data in a table
6. This is an important point and you make it well
etc.
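A statement bank is straightforward to mechanise: store the numbered comments once, let the marker annotate a script with the code numbers, then expand the codes into full written feedback. A minimal sketch using the comments from this slide (the function and variable names are my own, not from any particular tool):

```python
# Frequently used feedback comments keyed by number (from the slide above).
STATEMENT_BANK = {
    1: "I like this sentence/section because it is clear and concise.",
    2: "I found this paragraph/section/essay well organised and easy to follow.",
    3: ("I am afraid I am lost. This paragraph/section is unclear "
        "and leaves me confused as to what you mean."),
    4: ("I would understand and be more convinced if you gave an "
        "example/quote/statistic to support this."),
    5: "It would really help if you presented this data in a table.",
    6: "This is an important point and you make it well.",
}

def expand_feedback(codes):
    """Turn a marker's shorthand codes into full written feedback."""
    return "\n".join(STATEMENT_BANK[c] for c in codes)

print(expand_feedback([2, 4]))
```

The marker still writes individual comments where needed; the bank only removes the repetition of writing the same routine comments many times over.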
44
Weekly CAA testing – case study data

Student  Wk 1  Wk 2  Wk 3  Wk 4  Wk 5  Wk 6  Wk 7
A          57    63    21    35    40    27    20
B          68    71    45    79    83    80    77
C          23    21    11     -     -     -     -
D          45    51    45    79    83    80    77
E           -     -     -     -     -     -     -
F          63     -    51     -    47     -    35
G          54    58    35    50    58    60    62

(Brown, Rust & Gibbs, 1994)
45
CAA quizzes (Catley, 2004)
Scenario: A first-term, first-year compulsory law module; a new subject for most (75%) of the students; a high failure rate (25%) and poor general results (28% 3rd class, 7% 1st).
Solution: Weekly optional VLE quizzes (50% take-up).
Outcome:
Quiz takers: 4% fail, 14% 3rd class, 24% 1st
Non-quiz takers: same pattern as before
Overall: 14% fail (approx. half the previous figure), 21% 3rd class, 14% 1st (double the previous figure)
46
Assessing a selection (Rust, 2001)
Scenario: Weekly lab reports were submitted for marking, but increased student numbers meant a heavy staff workload and an increasingly lengthy gap before reports were returned, so the feedback was of limited or no use.
Solution: Weekly lab reports are still submitted; a sample is looked at, and generic feedback is e-mailed to all students within 48 hours. At the end of the semester, only three weeks’ lab reports are selected for summative marking.
Outcome: Better lab reports and significantly less marking.
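The scheme above combines two sampling decisions: a small weekly sample of reports read to produce generic feedback for everyone, and a few whole weeks picked at the end for summative marking. A minimal sketch; the sample sizes and names are illustrative assumptions, not figures from Rust's case study (other than the three summative weeks):

```python
import random

def sampling_plan(students, weeks, feedback_sample=5, summative_weeks=3):
    """Pick reports for generic feedback and weeks for summative marking.

    Each week a random handful of reports is read and generic feedback
    sent to the whole class; at the end of semester a few whole weeks
    are selected for summative marking. Because students cannot know in
    advance which weeks will count, they must keep submitting throughout.
    """
    weekly_feedback = {w: random.sample(students, feedback_sample)
                       for w in weeks}
    marked_weeks = sorted(random.sample(weeks, summative_weeks))
    return weekly_feedback, marked_weeks

students = [f"student{i}" for i in range(40)]
feedback, marked = sampling_plan(students, weeks=list(range(1, 11)))
```

The workload saving comes from reading five reports a week instead of forty, while the unannounced summative selection preserves the incentive to do every report well.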
47
References
Angelo, T. (1996). Transforming assessment: high standards for higher learning. AAHE Bulletin, April, 3–4.
Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Biggs, J. (1999). Teaching for quality learning at university. Buckingham: SRHE & Open University Press.
Brown, S., Rust, C. and Gibbs, G. (1994). Involving students in the assessment process, in Strategies for Diversifying Assessments in Higher Education. Oxford: Oxford Centre for Staff Development; also at DeLiberations, http://www.lgu.ac.uk/deliberations/ocsd-pubs/div-ass5.html
Brown, G., Bull, J. & Pendlebury, M. (1997). Assessing student learning in higher education. London: Routledge.
Brown, S. and Knight, P. T. (1994). Assessing Learners in Higher Education. London: Kogan Page.
Catley, P. (2004). "One lecturer's experience of blending e-learning with traditional teaching or how to improve retention and progression by engaging students." Brookes eJournal of Learning and Teaching 1(2).
Chickering, A. W. and Gamson, Z. F. (1987). "Seven Principles for Good Practice in Undergraduate Education." AAHE Bulletin, March 1987: 3-7.
Forbes, D. A. & Spence, J. (1991). An experiment in assessment for a large class, in: R. Smith (Ed.) Innovations in engineering education. London: Ellis Horwood.
Gibbs, G. & Simpson, C. (2002). Does your assessment support your students’ learning? Available online at: www.brookes.ac.uk/services/ocsd/1_ocsld/lunchtime_gibbs.html (accessed 30 November 2002).
48
References cont’d
Gibbs, G. (1992). Improving the quality of student learning. Bristol: TE
Hattie, J. A. (1987). Identifying the salient facets of a model of student learning: a synthesis of meta-analyses. International Journal of Educational Research, 11, 187–212.
Johnson, D., Johnson, R. and Smith, K. (2007). "The State of Cooperative Learning in Postsecondary and Professional Settings." Educational Psychology Review 19(1): 15-29.
Johnson, D. W. and Johnson, R. T. (1999). Learning Together and Alone: Cooperative, Competitive and Individualistic Learning (Fifth Edition). Needham Heights, MA: Allyn and Bacon.
Laming, D. (1990). The reliability of a certain university examination compared with the precision of absolute judgements. Quarterly Journal of Experimental Psychology Section A—Human Experimental Psychology, 42(2), 239–254.
Nicol, D. J. and Macfarlane-Dick, D. (2006). "Formative assessment and self-regulated learning: a model and seven principles of good feedback practice." Studies in Higher Education 31(2): 199-218.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Race, P. (2001). "Assessment Series No. 9: A Briefing on Self, Peer and Group Assessment." Retrieved 20 April 2012, from http://www.heacademy.ac.uk/resources/detail/resource_database/SNAS/A_Briefing_on_Self_Peer_and_Group_Assessment
Rust, C. (2001). A briefing on assessment of large groups. LTSN Generic Centre Assessment Series, 12. York: LTSN.
Zeller, A. (2000). Making Students Read and Review Code. [Online] Retrieved 19 April 2011 from http://portal.acm.org/ft_gateway.cfm?id=343090&type=pdf