Student assessment
AH Mehrparvar, MD, Occupational Medicine Department, Yazd University of Medical Sciences
“Students read, not to learn but to pass the examination. They pass the examination but they do not learn.” (Huxley)
What is an assessment? Any systematic method of obtaining evidence (from tests, examinations, questionnaires, surveys, and collateral sources) to draw inferences about the competency of students for a specific purpose.
Evaluation
Evaluation is a judgment regarding the quality or worth of the assessment results. This judgment is based upon multiple sources of assessment information.
Qualitative and quantitative measurement of students' behaviour + value judgment = evaluation
Critical questions in assessment
1. WHY are we doing the assessment?
2. WHAT are we assessing?
3. HOW are we assessing it?
4. HOW WELL is the assessment working?
Purpose of assessment
- To determine whether learning objectives are met
- To support student learning
- Certification and judgment of competency
- Development and assessment of the teaching program
- Understanding the learning process
- Predicting future performance
Purpose of assessment: DEFINE THE MINIMUM ACCEPTED LEVEL OF COMPETENCE
- Prove: that he/she is a competent doctor
- Improve: provide feedback regarding shortcomings
Purpose of assessment
- Selection of a few students from a large number of students
- Pre-assessment of a learner's needs
- Continued monitoring of learning activities to give feedback
- Determining competence to complete a course
WHAT are we testing? Elements of competence
- Knowledge: factual; applied (clinical reasoning)
- Skills: communication; clinical
- Attitudes: professional behaviour
How are we doing the assessment?
- Essays
- Short answer questions
- Simulated patient management problems
- Practical
- Clinical
- OSPE
- OSCE
- Rating scales
- Checklists
- Questionnaires
- Diary and logbook
What to assess?
Domain: Cognitive (knowledge). Method: written tests; oral. Instruments: open-ended or essay questions; structured essay or MEQ; short answer questions; objective MCQs; simulated patient management problems; assignment questions.
Domain: Psychomotor (skills). Method: observation. Instruments: practical (actual and model); clinical cases; objective structured clinical/practical examination.
What to assess?
Domain: Affective (attitude). Method: observation. Instruments: rating scales; checklists; questionnaires; logbook; daily evaluation sheets.
What can we assess? Knows, knows how, shows how, does
- Knows: factual tests (SBAs, essay, SAQ)
- Knows how (clinical): context-based tests (SBAs, EMQs, SAQ)
- Shows how: performance assessment in vitro (OSCE, OSPE)
- Does: performance assessment in vivo (mini-CEX, DOPS)
HOW WELL is the assessment working? Evaluation of assessment systems
- Is it valid?
- Is it reliable?
- Is it doing what it is supposed to be doing?
Characteristics of Assessment
- Relevance: Is it appropriate to the needs of the society or system?
- Validity: Does the assessment tool really test what it intends to test?
- Reliability: Accuracy and consistency
- Objectivity: Will the scores obtained by a candidate be the same if evaluated by two or more independent experts?
- Feasibility: Can the process be implemented in practice?
RELEVANCE
Relevance refers to the appropriateness of the evaluation process with reference to the jobs the student will perform after qualification; the process should therefore reflect the health needs of the society. The relevance of the process should be obvious to both teachers and students.
VALIDITY
Validity refers to the degree to which a test measures what it intends to measure. In choosing an instrument, the first question the teacher should consider is the learning outcome to be measured. Validity refers both to the results of the test and to the instrument itself.
Factors Influencing Validity: test factors
- Unclear directions
- Difficult and ambiguous wording of questions
- Poorly constructed items
- Inappropriate level of difficulty
- Inappropriate questions for the outcome being measured
- Inappropriate arrangement of items
- Identifiable pattern of answers and clues
- Too short or too long a test
- Errors in scoring
- Adverse classroom and environmental factors
RELIABILITY
Reliability is the consistency with which an instrument measures a variable; it is a measure of the reproducibility of the test. Reliability is a mathematical concept, expressed as the correlation between two sets of scores.
To obtain two sets of scores, one of three alternatives is available:
a. Test-retest: the same test is administered to the same students on two occasions.
b. Equivalent tests: two tests of equivalent form are administered to the students to obtain two sets of scores.
c. Split-half method: a single test is split into two halves (for example, odd- and even-numbered MCQs) and the two sets of scores for each student are compared, as sketched below.
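As an illustration of the split-half approach, the following minimal sketch correlates each student's scores on the odd- and even-numbered items and then applies the Spearman-Brown correction to estimate full-test reliability. The scores and the helper function are hypothetical illustrations, not taken from the original presentation.

```python
def pearson(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores of six students on the odd- and even-numbered MCQs
odd_half = [8, 12, 15, 9, 14, 11]
even_half = [7, 13, 14, 10, 15, 10]

r_half = pearson(odd_half, even_half)   # split-half correlation
r_full = 2 * r_half / (1 + r_half)      # Spearman-Brown estimate for the whole test
print(f"split-half r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")
```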
OBJECTIVITY
Objectivity is the degree of agreement between the judgments of independent and competent examiners, and it should be maintained throughout the evaluation process. Steps to increase the objectivity of scoring in conventional examinations (a small agreement example follows below):
- Structuring of questions
- Preparation of model answers
- Agreeing on the marking scheme
- Having papers independently evaluated by two or more examiners
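To make the idea of examiner agreement concrete, here is a minimal sketch that computes Cohen's kappa, one common chance-corrected index of agreement, for two examiners' pass/fail decisions. The examiner labels and decisions are hypothetical; the presentation itself does not prescribe any particular statistic.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two lists of categorical judgments."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail judgments by two independent examiners on six papers
examiner_1 = ["pass", "fail", "pass", "pass", "fail", "pass"]
examiner_2 = ["pass", "fail", "pass", "fail", "fail", "pass"]
print(f"kappa = {cohens_kappa(examiner_1, examiner_2):.2f}")  # ~0.67 for this data
```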
FEASIBILITY
Considering the ground realities, an evaluation process should be feasible. Factors to consider in deciding feasibility:
- Time and resources required
- Availability of an equivalent form of the test for measuring reliability
- Ease of administration, scoring, and interpretation
A systematically designed assessment should:
- Be relevant to the curriculum
- Focus on important skills
- Promote learning skills
- Discriminate between good and poor students (a worked example follows below)
- Provide feedback
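One common way to check whether a test item actually discriminates between good and poor students is the classical discrimination index: the difference between the proportion of high scorers and low scorers who answered the item correctly. The function and data below are hypothetical illustrations, not part of the original presentation.

```python
def discrimination_index(item_correct, total_scores, group_fraction=0.27):
    """Upper-group minus lower-group proportion correct for one test item."""
    # Pair each student's correctness on the item (1/0) with their total test score
    paired = sorted(zip(total_scores, item_correct), reverse=True)
    k = max(1, int(len(paired) * group_fraction))
    upper = [correct for _, correct in paired[:k]]
    lower = [correct for _, correct in paired[-k:]]
    return sum(upper) / k - sum(lower) / k

# Hypothetical data: whether each of ten students answered the item correctly, and their totals
item_correct = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
total_scores = [92, 88, 75, 70, 85, 55, 60, 80, 48, 52]
print(f"discrimination index = {discrimination_index(item_correct, total_scores):.2f}")
```

Values near 1 suggest the item separates strong from weak students well; values near 0 (or negative) suggest it does not.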