Oral Health Training & Calibration Programme


1 Oral Health Training & Calibration Programme
Epidemiology-Calibration WHO Collaborating Centre for Oral Health Services Research

2 Oral Health Clinical Survey
The Oral Health Clinical Examination Tool covers:
- Dentate Status
- Prosthetic Status and Prosthetic Treatment Needs
- Mucosal Status
- Occlusal Status
- Orthodontic Treatment Status
- Fluorosis (Dean's Index)
- Gingival Index
- Debris and Calculus Indices
- Attachment Loss and Probing Score
- Tooth Status Chart
- Count of Tooth Surfaces with Amalgam
- Trauma Index
- Treatment and Urgent Needs

3 Training and Calibration
Training for:
- Dentate Status
- Prosthetic Status
- Mucosal Status
- Fluorosis
- Occlusal Status
- Orthodontic Treatment Status
- Periodontal Assessments
- Tooth Status
- Amalgam Count
- Traumatic Injury
- Treatment Needs

Calibration for:
- Fluorosis
- Occlusal Status
- Periodontal Assessments
- Tooth Status
- Amalgam Count

Magnification is not allowed during examinations.

4 Calibration Objectives
- Define epidemiology and the concept of an index
- Discuss validity and reliability
- Examiner comparability statistics
- Inter- and intra-examiner calibration

5 Suggested 4 Day Calibration Training
Days 1-2 (three chairs in parallel):

Day 1
9:00-12:00  Classroom session: presentations / fluorosis training
12:00-1:00  Lunch
1:00-2:00   Patients 1, 2, 3 (Chairs 1-3)
2:00-3:00   Discussion / questions
3:00-3:15   Break
3:15-4:15   Patients 10, 11, 12 (Chairs 1-3)
4:15-5:00   Discussion / fluorosis training

Day 2
9:00-10:15  Statistics / fluorosis
10:15-10:30 Break
10:30-11:45 Patients 4, 5, 6 (Chairs 1-3)
11:45-12:00 Discussion / questions
12:00-1:00  Lunch
1:00-2:00   Patients 7, 8, 9 (Chairs 1-3)
2:00-3:00   Discussion / questions
3:15-5:00   Discussion / fluorosis

This design is based on using the entire Oral Health Module and all of its indices.

6 Suggested 4 Day Calibration Training cont.
Day 3
9:00-10:15  Statistics review
10:15-10:30 Break
10:30-11:45 Patients 13, 14, 15 (Chairs 1-3)
11:45-12:00 Discussion / questions
12:00-1:00  Lunch
1:00-2:00   Patients 16, 17, 18 (Chairs 1-3)
2:00-3:00   Patients 19, 20, 21 (Chairs 1-3)
3:00-3:15   Break
3:15-5:00   Discussion / fluorosis as necessary

Day 4
10:30-11:45 Repeat examinations 1, 2, 3 (Chairs 1-3)
11:45-12:00 Discussion / questions
12:00-1:00  Lunch
1:00-2:00   Repeat examinations 4, 5, 6 (Chairs 1-3)
2:30-3:15   Final fluorosis testing / statistical review
3:15-5:00   Discussion, questions, finish

7 Epidemiology
The study of the distribution and determinants of health-related states or events in specified populations, and the application of this study to the control of health problems. From the Greek ‘epi demos logos’: ‘science upon the people’.

8 Measurement of Oral Disease
We use indices:
- as a numerical expression giving a group's relative position on a graded scale with defined upper and lower limits;
- as a standardised method of measurement that allows comparisons to be drawn with others measured with the same index;
- to define the stage of disease, not its absolute presence or absence.

9 Desirable characteristics of an index
Valid Reliable Acceptable Easy to use Amenable to statistical analysis

10 Prevalence
Prevalence:
- is the number of cases in a defined population at a particular point in time
- describes a group at a certain point in time, similar to a snapshot
- is expressed as a rate, e.g. x per 1,000 population
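Expressing prevalence as a rate per 1,000 can be sketched in a couple of lines; all numbers below are hypothetical, chosen only for illustration:

```python
# Hypothetical example: prevalence expressed as a rate per 1,000 population.
cases = 150          # people with the condition at the survey date (assumed)
population = 12000   # size of the defined population (assumed)

prevalence_per_1000 = cases / population * 1000
print(prevalence_per_1000)  # 12.5 cases per 1,000 population
```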

11 Descriptive Study
A simple description of the health status of a population or community, with no effort to link exposures and effects. For example: % with caries, % with periodontal disease.

12 Uses of a Prevalence Study
- Planning
- Targeting
- Monitoring
- Comparing (international and regional)

13 Validity and Reliability
Four combinations are possible:
- valid and reliable
- valid but not reliable (unbiased but inconsistent)
- reliable but not valid (consistent but biased)
- neither valid nor reliable
[Originally illustrated with a diagram; the figure was lost in transcription.]

14 Validity
Success in measuring what you set out to measure. Being trained by a Gold Standard trainer ensures validity by:
- training on what is proposed to be measured
- confirming that everyone is measuring the same thing ("singing out of the same hymn book")

15 Reliability
The extent to which the clinical examination yields the same result on repeated inspection.
- Inter-examiner reliability: reproducibility between examiners
- Intra-examiner reliability: reproducibility within the same examiner

16 Reliability
Calibration ensures inter- and intra-examiner reliability and allows:
- international comparisons
- regional comparisons
- temporal comparisons
Without calibration, are any observed differences real, or due to examiner variability?

17 Examiner Reliability Statistics
Used when:
- training and calibrating examiners in a new index against a Gold Standard Examiner
- re-calibrating examiners against a Gold Standard Examiner

18 Examiner Reliability Statistics
Two measures used: Percentage Agreement Kappa Statistic

19 Percentage Agreement
Percentage agreement is one method of measuring examiner reliability. It is the number of judgements on which the two examiners agreed, expressed as a percentage of the total number of judgements made.

20 Example – Percentage Agreement
Percentage agreement is equal to the sum of the diagonal values divided by the overall total, multiplied by 100.
[Cross-tabulation of Examiner 1 against Examiner 2 ratings over categories A, U, E, M; the diagonal cells sum to 61 and the overall total is 100. The cell layout of the original table was lost in transcription.]

21 Example – Percentage Agreement
Number of agreements = sum of diagonals = 61 Total number of cases = overall total = 100 Percentage agreement = 61%
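The calculation above can be sketched in a few lines of Python. The 4x4 counts below are hypothetical, invented only so that the diagonal sums to 61 out of 100 judgements, matching the worked example:

```python
# Hypothetical cross-tabulation: rows = Examiner 1, columns = Examiner 2,
# over four rating categories. Counts are invented for illustration.
table = [
    [18,  3,  0,  3],
    [ 2, 15,  4,  3],
    [ 2, 12,  9,  0],
    [ 3,  5,  2, 19],
]

total = sum(sum(row) for row in table)                    # 100 judgements
agreements = sum(table[i][i] for i in range(len(table)))  # diagonal = 61
percentage_agreement = agreements / total * 100
print(percentage_agreement)  # 61.0
```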

22 Kappa Statistic
The Kappa Statistic measures the agreement between the evaluations of two examiners when both are rating the same objects. It describes the agreement achieved beyond chance, as a proportion of the agreement that is possible beyond chance.

23 Kappa Statistic
Interpreting Kappa: the value of the Kappa Statistic ranges from 0 to 1, with larger values indicating better reliability. A value of 1 indicates perfect agreement; a value of 0 indicates that agreement is no better than chance. Generally, a Kappa > 0.60 is considered satisfactory.

24 Interpreting Kappa
0.00        Agreement is no better than chance
0.01-0.20   Slight agreement
0.21-0.40   Fair agreement
0.41-0.60   Moderate agreement
0.61-0.80   Substantial agreement
0.81-0.99   Almost perfect agreement
1.00        Perfect agreement
(The numeric cut-points were lost in transcription; those shown follow the widely used Landis and Koch convention.)
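These agreement labels can be encoded as a small lookup. The cut-points assumed below follow the widely used Landis and Koch convention, which appears to be what this slide is based on:

```python
def interpret_kappa(kappa: float) -> str:
    """Map a kappa value to an agreement label (Landis & Koch style bands, assumed)."""
    if kappa <= 0.0:
        return "No better than chance"
    for upper, label in [(0.20, "Slight"), (0.40, "Fair"), (0.60, "Moderate"),
                         (0.80, "Substantial"), (0.99, "Almost perfect")]:
        if kappa <= upper:
            return label + " agreement"
    return "Perfect agreement"

print(interpret_kappa(0.65))  # Substantial agreement
```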

25 Kappa Statistic
The formula for calculating the Kappa Statistic is:

    Kappa = (PO - PE) / (1 - PE)

where PO is the observed proportion of agreement and PE is the proportion of agreement expected by chance.

26 Example – Kappa Statistic
PO is the sum of the diagonal values divided by the overall total.
[Same Examiner 1 vs Examiner 2 cross-tabulation, here with categories labelled A, B, C, D; diagonal sum 61, overall total 100. The cell layout of the original table was lost in transcription.]

27 Example - Kappa Statistic
PE is the sum, over categories, of each row total multiplied by the corresponding column total, divided by the square of the overall total.
[Same cross-tabulation as on the previous slide; cell layout lost in transcription.]

28 Example - Kappa Statistic
Number of agreements = sum of diagonals = 61 Total number of cases = overall total = 100 PO = 0.61
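Putting PO and PE together gives Kappa. The table below is hypothetical (the original slide's cell layout did not survive transcription), but it is constructed so that PO = 0.61 as in the worked example:

```python
# Cohen's kappa from a hypothetical 4x4 examiner cross-tabulation
# (rows = Examiner 1, columns = Examiner 2; counts invented for illustration).
table = [
    [18,  3,  0,  3],
    [ 2, 15,  4,  3],
    [ 2, 12,  9,  0],
    [ 3,  5,  2, 19],
]
k = len(table)
n = sum(sum(row) for row in table)        # overall total = 100
row_totals = [sum(row) for row in table]  # [24, 24, 23, 29]
col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]  # [25, 35, 15, 25]

p_o = sum(table[i][i] for i in range(k)) / n  # observed agreement = 0.61
p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2  # chance agreement = 0.251
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))  # 0.479 -> moderate agreement on the slide 24 scale
```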

29 Example - Kappa Statistic
[The worked calculation of PE and Kappa shown on the original slide was lost in transcription.]

30 References
Cohen J. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 1960; 20: 37-46.
Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin 1968; 70: 213-220.

