Oral Health Training & Calibration Programme
Epidemiology - Calibration
WHO Collaborating Centre for Oral Health Services Research
Oral Health Clinical Survey
Oral Health Clinical Examination Tool:
- Dentate Status
- Prosthetic Status and Prosthetic Treatment Needs
- Mucosal Status
- Occlusal Status
- Orthodontic Treatment Status
- Fluorosis (Dean's Index)
- Gingival Index
- Debris and Calculus Indices
- Attachment Loss and Probing Score
- Tooth Status Chart
- Count of Tooth Surfaces with Amalgam
- Trauma Index
- Treatment and Urgent Needs
Training and Calibration
Training for:
- Dentate Status
- Prosthetic Status
- Mucosal Status
- Fluorosis
- Occlusal Status
- Orthodontic Treatment Status
- Periodontal Assessments
- Tooth Status
- Amalgam Count
- Traumatic Injury
- Treatment Needs
Calibration for:
- Fluorosis
- Occlusal Status
- Periodontal Assessments
- Tooth Status
- Amalgam Count
Magnification is not allowed for examinations.
Calibration Objectives
- Define Epidemiology and Index
- Discuss Validity and Reliability
- Examiner Comparability Statistics
- Calibration: Inter- and Intra-Examiner
Suggested 4 Day Calibration Training

Day 1
9:00-12:00   Classroom session: presentations / fluorosis training
12:00-1:00   Lunch
1:00-2:00    Patients 1, 2, 3 (Chairs 1, 2, 3)
2:00-3:00    Discussion / questions
3:00-3:15    Break
3:15-5:00    Discussion: fluorosis

Day 2
9:00-10:15   Statistics / fluorosis
10:15-10:30  Break
10:30-11:45  Patients 4, 5, 6 (Chairs 1, 2, 3)
11:45-12:00  Discussion / questions
12:00-1:00   Lunch
1:00-2:00    Patients 7, 8, 9
2:00-3:00    Discussion / questions
3:00-3:15    Break
3:15-4:15    Patients 10, 11, 12
4:15-5:00    Discussion / fluorosis training

This design is based on using the entire Oral Health Module and all of its indices.
Suggested 4 Day Calibration Training cont.

Day 3
9:00-10:15   Statistics review
10:15-10:30  Break
10:30-11:45  Patients 13, 14, 15 (Chairs 1, 2, 3)
11:45-12:00  Discussion / questions
12:00-1:00   Lunch
1:00-2:00    Patients 16, 17, 18
2:00-3:00    Patients 19, 20, 21
3:00-3:15    Break
3:15-5:00    Discussion; fluorosis as necessary

Day 4
10:30-11:45  Repeat examinations 1, 2, 3 (Chairs 1, 2, 3)
11:45-12:00  Discussion / questions
12:00-1:00   Lunch
1:00-2:00    Repeat examinations 4, 5, 6
2:00-2:30    Final fluorosis testing
2:30-3:15    Statistical review
3:15-5:00    Discussion / questions; finish
Epidemiology
The study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the control of health problems.
From the Greek 'epi demos logos': 'science upon the people'.
Measurement of Oral Disease
We use indices:
- as a numerical expression to give a group's relative position on a graded scale with defined upper and lower limits
- as a standardised method of measurement that allows comparisons to be drawn with others measured with the same index
- to define the stage of disease, not absolute presence or absence
Desirable characteristics of an index
- Valid
- Reliable
- Acceptable
- Easy to use
- Amenable to statistical analysis
Prevalence
- is the number of cases in a defined population at a particular point in time
- describes a group at a certain point in time, like a snapshot
- is expressed as a rate, e.g. x per 1000 population
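A minimal sketch of the rate arithmetic in Python; the case count and population size below are invented for illustration, not survey figures:

    # Prevalence expressed as a rate per 1,000 population.
    # The case count and population size are invented for illustration.
    cases = 150         # people with the condition at the survey date
    population = 3000   # defined population examined
    rate_per_1000 = 1000 * cases / population
    print(f"Prevalence: {rate_per_1000:.0f} per 1,000 population")  # 50 per 1,000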
Descriptive Study
A simple description of the health status of a population or community; no effort is made to link exposures and effects.
For example:
- % with caries
- % with periodontal disease
Uses of a Prevalence Study
- Planning
- Targeting
- Monitoring
- Comparing (international, regional)
Validity and Reliability
(Figure: four target diagrams illustrating the combinations.)
- Valid: Yes; Reliable: Yes
- Valid: No; Reliable: No (unbiased)
- Valid: No; Reliable: Yes
- Valid: No; Reliable: No (biased)
Validity
Success in measuring what you set out to measure.
Being trained by a Gold Standard trainer ensures validity by:
- training on what is proposed to be measured
- confirming that everyone is measuring the same thing: "singing out of the same hymn book"
Reliability
The extent to which the clinical examination yields the same result on repeated inspection.
- Inter-examiner reliability: reproducibility between examiners
- Intra-examiner reliability: reproducibility within examiners
Reliability
Calibration ensures inter- and intra-examiner reliability and allows:
- International comparisons
- Regional comparisons
- Temporal comparisons
Without calibration, are any differences real or due to examiner variability?
Examiner Reliability Statistics
Used when:
- training and calibrating examiners in a new index against a Gold Standard Examiner
- re-calibrating examiners against a Gold Standard Examiner
Examiner Reliability Statistics
Two measures are used:
- Percentage Agreement
- Kappa Statistic
Percentage Agreement
Percentage agreement is one method of measuring examiner reliability: the number of judgements on which the two examiners agree, expressed as a percentage of the total number of judgements made.
Example - Percentage Agreement
Percentage agreement is equal to the sum of the diagonal values divided by the overall total, multiplied by 100.
(Table: 4 x 4 cross-tabulation of Examiner 1's ratings against Examiner 2's over four categories, with row and column totals; the diagonal cells sum to 61 and the overall total is 100.)
Example - Percentage Agreement
Number of agreements = sum of diagonals = 61
Total number of cases = overall total = 100
Percentage agreement = (61 / 100) x 100 = 61%
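The same calculation as a short Python sketch. The 4 x 4 table below is illustrative only: its diagonal cells sum to 61 out of 100 judgements, matching the worked example, but the individual cell counts are assumed rather than taken from the original table.

    import numpy as np

    # Illustrative cross-tabulation: Examiner 1 in rows, Examiner 2 in columns.
    # Only the diagonal sum (61) and the overall total (100) come from the
    # worked example above; the cell counts themselves are assumed.
    table = np.array([
        [18,  1,  5,  0],
        [ 2, 12,  9,  0],
        [ 7,  1, 16, 11],
        [ 1,  2,  0, 15],
    ])

    agreements = np.trace(table)  # sum of the diagonal (agreement) cells
    total = table.sum()           # total number of judgements
    print(f"Percentage agreement: {100 * agreements / total:.0f}%")  # 61%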
Kappa Statistic The Kappa Statistic measures the agreement between the evaluations of two examiners when both are rating the same objects. It describes agreement achieved beyond chance, as a proportion of that agreement which is possible beyond chance.
Kappa Statistic
Interpreting Kappa:
- In practice, the value of the Kappa Statistic ranges from 0 to 1.00, with larger values indicating better reliability; negative values, indicating agreement worse than chance, are possible but rare.
- A value of 1 indicates perfect agreement.
- A value of 0 indicates that agreement is no better than chance.
- Generally, a Kappa > 0.60 is considered satisfactory.
Interpreting Kappa
0.00        Agreement is no better than chance
0.01-0.20   Slight agreement
0.21-0.40   Fair agreement
0.41-0.60   Moderate agreement
0.61-0.80   Substantial agreement
0.81-0.99   Almost perfect agreement
1.00        Perfect agreement
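These bands, commonly attributed to Landis and Koch, translate directly into a small lookup. The helper below is an illustrative sketch, not part of the programme's materials:

    def interpret_kappa(kappa: float) -> str:
        # Map a kappa value to the descriptive bands in the table above.
        if kappa <= 0.00:
            return "Agreement is no better than chance"
        if kappa <= 0.20:
            return "Slight agreement"
        if kappa <= 0.40:
            return "Fair agreement"
        if kappa <= 0.60:
            return "Moderate agreement"
        if kappa <= 0.80:
            return "Substantial agreement"
        if kappa < 1.00:
            return "Almost perfect agreement"
        return "Perfect agreement"

    print(interpret_kappa(0.61))  # Substantial agreement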
Kappa Statistic
The formula for calculating the Kappa Statistic is:

    Kappa = (PO - PE) / (1 - PE)

where PO is the observed proportion of agreement between the examiners and PE is the proportion of agreement expected by chance.
Example - Kappa Statistic
PO is the sum of the diagonal (agreement) cells divided by the overall total, using the same Examiner 1 vs Examiner 2 cross-tabulation as the percentage agreement example.
Example - Kappa Statistic
PE is obtained by multiplying each row total by the corresponding column total, summing these products, and dividing by the square of the overall total.
Example - Kappa Statistic
Number of agreements = sum of diagonals = 61
Total number of cases = overall total = 100
PO = 61 / 100 = 0.61
Example - Kappa Statistic
Kappa = (PO - PE) / (1 - PE) = (0.61 - PE) / (1 - PE), with PE calculated from the row and column totals as described above.
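Putting PO and PE together, a Python sketch of the whole calculation. It reuses the illustrative table from the percentage agreement sketch, so only PO = 0.61 matches the slides; the PE and Kappa printed here follow from the assumed cell counts, not from the original example.

    import numpy as np

    # Same illustrative table as before: Examiner 1 in rows, Examiner 2 in
    # columns. Only the diagonal sum (61) and total (100) come from the slides.
    table = np.array([
        [18,  1,  5,  0],
        [ 2, 12,  9,  0],
        [ 7,  1, 16, 11],
        [ 1,  2,  0, 15],
    ])

    total = table.sum()
    po = np.trace(table) / total                     # observed agreement: 0.61
    row_totals = table.sum(axis=1)                   # 24, 23, 35, 18
    col_totals = table.sum(axis=0)                   # 28, 16, 30, 26
    pe = (row_totals * col_totals).sum() / total**2  # chance agreement: 0.2558

    kappa = (po - pe) / (1 - pe)
    print(f"PO = {po:.2f}, PE = {pe:.4f}, Kappa = {kappa:.2f}")

For this assumed table, Kappa works out to about 0.48, which the interpretation table above would class as moderate agreement.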
References
Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960; 20: 37-46.
Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin 1968; 70: 213-220.