
Screening, Sensitivity, Specificity, and ROC curves


1 Screening, Sensitivity, Specificity, and ROC curves
David Henson 2009

2 Screening tests sort out apparently well persons who probably have a disease from those who probably do not

3 World Health Organisation criteria (1)
The condition sought should be an important health problem
There should be an accepted treatment for patients with recognised disease
Facilities for diagnosis and treatment should be available
There should be a recognised latent or early symptomatic stage
WHO 1968

4 WHO criteria (2)
There should be a suitable screening test or examination
The test should be acceptable to the population
The natural history of the condition, including development from latent to declared disease, should be adequately understood
There should be an agreed policy on whom to treat as patients

5 WHO criteria (3)
The cost of case-finding (including diagnosis and treatment of patients diagnosed) should be economically balanced in relation to possible expenditure on medical care as a whole
Case-finding should be a continuing process and not a 'once and for all' project

6 Screening The application of one or more tests, examinations, or other procedures to detect a potential disease as early as possible during an asymptomatic phase, when intervention can significantly modify the natural history

7 Natural history of disease

8 Benefits of screening: Lead time bias
Lead time bias. Screening advances the date of diagnosis, so screen-detected patients carry the diagnosis for longer than those diagnosed when symptoms appear. Their measured survival time (from diagnosis) will be longer, but does this mean that they have benefited from screening?

9 Benefits of screening: Length time bias
Length time bias. Screening preferentially picks up slowly progressing cases, which spend longer in the detectable preclinical phase. Their survival times will be longer. Does this mean that they have benefited?

10 Characteristics of a screening test
Reliability - gives the same result on repeat testing
Validity - gives the correct result
Accuracy - ?
Sensitivity - percentage of cases detected
Specificity - percentage of normals passing
BS Everitt. The Cambridge Dictionary of Statistics in the Medical Sciences, 1995

11 Diagnostic performance
Form a 2x2 contingency table:

                   True diagnosis
Test result     Present    Absent
Positive           a          b
Negative           c          d

Sensitivity = a/(a+c)
Specificity = d/(b+d)
False positive rate = b/(b+d)
False negative rate = c/(a+c)
Positive predictive value = a/(a+b)
Negative predictive value = d/(c+d)
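The six measures above can be sketched as a small helper function (hypothetical, not from any library; the cell names a-d follow the table's layout):

```python
# Hypothetical helper: the six diagnostic measures computed from the
# four cells of a 2x2 contingency table.
def diagnostic_measures(a, b, c, d):
    """a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    return {
        "sensitivity": a / (a + c),
        "specificity": d / (b + d),
        "false_positive_rate": b / (b + d),
        "false_negative_rate": c / (a + c),
        "ppv": a / (a + b),
        "npv": d / (c + d),
    }
```

For example, `diagnostic_measures(10, 120, 10, 860)` reproduces the tonometer figures on the next slide (sensitivity 0.5, PPV 10/130).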

12 Diagnostic performance
Test 1000 people with a tonometer. 20 have glaucoma (2% prevalence). 50% of these have high IOP (10 true positives). Of the 980 without glaucoma, 120 have ocular hypertension (12%) and also test positive.

                   True diagnosis
Test result     Glaucoma    No glaucoma
Positive           10           120
Negative           10           860

Sensitivity is 10/20 (50%)
Specificity is 860/980 (88%)
Positive predictive value is 10/130 (7.7%)
Negative predictive value is 860/870 (98.9%)

In other words, if you get a positive result with your tonometer, there is only about an 8% chance that the patient has glaucoma. For a low-prevalence disease, the important parameter in a screening test is high specificity.

13 Diagnostic performance
The positive predictive value tells us the probability that a patient has the condition given that they have tested positive (failed the test). Unlike sensitivity and specificity, it depends on the disease prevalence.
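A minimal sketch of this dependence, using sensitivity and specificity approximating the tonometer example (0.50 and 0.88); the function name and the prevalence values chosen are illustrative assumptions:

```python
# Sketch: how PPV varies with prevalence for a fixed test.
# Defaults approximate the tonometer example on the previous slide.
def ppv(prevalence, sensitivity=0.50, specificity=0.88):
    true_pos = prevalence * sensitivity               # cell a, per unit population
    false_pos = (1 - prevalence) * (1 - specificity)  # cell b, per unit population
    return true_pos / (true_pos + false_pos)

for p in (0.02, 0.10, 0.50):
    print(f"prevalence {p:.0%}: PPV = {ppv(p):.1%}")
# PPV rises steeply as prevalence increases.
```

The same test that gives a PPV of roughly 8% at 2% prevalence performs far better in a high-prevalence referral population, which is why test evaluations must report the population studied.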

14 Think
You wish to rule out a disease in someone. Do you want
A) a very specific test?
B) a very sensitive test?

15 Diagnostic Performance Studies
Recruit patients with disease and healthy controls
Administer the test to both groups
Interpret the results: sensitivity and specificity

16 How is a positive test result defined?
Decision criteria. Many of our screening tests have a continuous output scale, e.g. a tonometer. At what point on this scale do we say a patient has failed the test? The sensitivity and specificity depend upon this level:
High cut-off = high specificity, low sensitivity
Low cut-off = high sensitivity, low specificity
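The trade-off above can be illustrated with made-up scores (all names and values here are hypothetical, not data from any real instrument):

```python
# Illustrative sketch: sensitivity and specificity at each candidate
# cut-off on a continuous test scale.
def sens_spec_at_cutoff(diseased, healthy, cutoff):
    """Scores at or above the cut-off count as a positive (failed) test."""
    sens = sum(s >= cutoff for s in diseased) / len(diseased)
    spec = sum(s < cutoff for s in healthy) / len(healthy)
    return sens, spec

diseased = [0.6, 0.7, 0.8, 0.9, 0.95]  # hypothetical test scores
healthy = [0.1, 0.2, 0.3, 0.5, 0.7]
for cutoff in (0.3, 0.6, 0.9):
    sens, spec = sens_spec_at_cutoff(diseased, healthy, cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Raising the cut-off trades sensitivity for specificity, exactly as the slide states; sweeping the cut-off across the whole scale traces out the ROC curve discussed below.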

17 Example Problem Diagnostic analyses of the Heidelberg Retina Tomograph
A laser scans the optic nerve head and surrounding area, building up a 3-dimensional “topography” image.

18 Example Problem
[Figure: histograms of the Glaucoma Probability Score (0.2 to 1.0) for 50 glaucoma patients and 50 controls; y-axis: percent]

19 Example Problem
Distributions overlap. No clear choice of cut-off.
[Figure: the same overlapping histograms]

20 Example Problem
[Figure: the same histograms]

21-24 (four slides)
[Figures: the same histograms with a cut-off dividing the score axis into “normal” (below) and “abnormal” (above) regions, placed at different positions across the slides]

25 Think
Which test is better – A or B?
[Figure: two ROC curves, labelled A and B]

26 Think
Which test is better – A or B?
B is more sensitive, but less specific. A is more specific, but less sensitive. Which is “better” depends on the relative costs of false positives and false negatives.
[Figure: the two ROC curves, A and B]

27 Think
Which test is better – A or B?
B is more sensitive and more specific; A is less sensitive and less specific. An ROC curve closer to the top left-hand corner represents the better test.
[Figure: two ROC curves, with B lying closer to the top left corner than A]

28 Area under the Curve (AUC)
An absolute, criterion-free measure of performance
Summarises performance over all possible cut-off values
A general measure of the separation between the two distributions (related to the Mann-Whitney U statistic)
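The Mann-Whitney view of the AUC can be sketched directly (hypothetical scores; the pairwise loop is the rank-statistic construction, with ties counted as one half):

```python
# Sketch: AUC as the probability that a randomly chosen diseased score
# exceeds a randomly chosen healthy score (ties count as 0.5).
def auc(diseased, healthy):
    wins = sum(
        1.0 if d > h else 0.5 if d == h else 0.0
        for d in diseased
        for h in healthy
    )
    return wins / (len(diseased) * len(healthy))

diseased = [0.6, 0.7, 0.8, 0.9, 0.95]  # hypothetical test scores
healthy = [0.1, 0.2, 0.3, 0.5, 0.7]
print(auc(diseased, healthy))  # 0.94
```

This equals U/(n1*n2) from the Mann-Whitney test, and also equals the area under the empirical ROC curve traced out by sweeping the cut-off; an AUC of 0.5 means no separation, 1.0 means perfect separation.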

29 Problems & Pitfalls
Figure 1: ROC curves for tonometry, drawn from the data of Daubs and Crick (open squares), Tielsch et al (solid circles), and Harper and Reeves (open circles). In each case the data points represent the sensitivity and specificity at different levels of IOP, from 10 mm Hg to 28 mm Hg in 2 mm Hg steps.
Harper R, Henson D, Reeves BC. Appraising evaluations of screening/diagnostic tests: the importance of the study populations. Br J Ophthalmol Oct;84(10).

30 Seminar Questions Does open angle glaucoma meet all the WHO criteria for disease screening? Does closed angle glaucoma meet the WHO criteria for disease screening? What do you understand by selection bias and how might this affect the ROC curve?

31 The End Ref: An Introduction to Medical Statistics, Bland M, Oxford University Press 1995

