Screening in Public Health Practice

1 Screening in Public Health Practice

2 Screening
Definition: Presumptive identification of an unrecognized disease or defect by the application of tests, examinations, or other procedures. Classifies asymptomatic people as likely or unlikely to have a disease or defect. Usually not diagnostic.
Purpose: Delay onset of symptomatic or clinical disease. Improve survival.

3 Screening
Seems simple but is complex. There are hidden costs and risks. Screening can create morbidity and anxiety. Must be aware of biases.
For screening to be successful you need:
A suitable disease
A suitable test
A suitable screening program

4 Suitable Disease
Has serious consequences
Is progressive
Disease treatment must be effective at an earlier stage
Prevalence of the detectable pre-clinical phase must be high
Examples of suitable diseases: breast cancer, cervical cancer, hypertension, diabetes?

5 Natural History of Disease
[Timeline figure, ages 20 to 70 years: A = Biological Onset, B = Disease Detectable By Screening, C = Symptoms Develop, D = Death]
When do you have clinical disease? Pre-clinical phase? Detectable pre-clinical phase? Lead time?

6 Natural History of Disease
[Same timeline figure: A = Biological Onset, B = Disease Detectable By Screening, C = Symptoms Develop, D = Death]
When do you have clinical disease? At C.
Pre-clinical phase? A to C.
Detectable pre-clinical phase? B to C, 15 years.
Lead time? Up to 15 years; screening increases the time a person lives with the diagnosis.

7 Natural History of Disease
Total pre-clinical phase = A to C (Age 30 to Age 60) = 30 years
Detectable pre-clinical phase (DPCP) = B to C (Age 45 to Age 60) = 15 years
The DPCP varies with the test, the disease, and the individual.
Lead time: duration of time by which the diagnosis is advanced as a result of screening; at most B to C (Age 45 to Age 60) = 15 years.

8 Suitable Test
Inexpensive, easy to administer, causes minimal discomfort, and has a high level of validity and reliability.
Valid test: does what it's supposed to do, that is, correctly classifies people with pre-clinical disease as positive and people without pre-clinical disease as negative.
Reliable test: gives you the same results on repetition.
Validity is more important than reliability.

9 Suitable Test
                         Disease Status (Truth)
Screening Test Result    Yes       No        Total
  Positive               a         b         a + b
  Negative               c         d         c + d
  Total                  a + c     b + d     a + b + c + d

10 Measures of Test Validity
Sensitivity: enables you to pick up the cases of disease. a / (a + c) = those that test positive / all with disease.
Specificity: enables you to pick out the non-diseased people. d / (b + d) = those that test negative / all without disease.
A valid test has high sensitivity and specificity.
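These two formulas can be checked with a minimal Python sketch (not part of the original slides); the cell names a, b, c, d follow the 2 × 2 table on the previous slide, and the example counts are invented for illustration.

```python
def sensitivity(a, c):
    """a = true positives, c = false negatives; returns a / (a + c)."""
    return a / (a + c)

def specificity(b, d):
    """b = false positives, d = true negatives; returns d / (b + d)."""
    return d / (b + d)

# Hypothetical example: 90 of 100 people with disease test positive,
# and 950 of 1,000 people without disease test negative.
print(f"Sensitivity: {sensitivity(a=90, c=10):.1%}")   # 90.0%
print(f"Specificity: {specificity(b=50, d=950):.1%}")  # 95.0%
```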

11 Breast Cancer Screening Program of the Health Insurance Plan (HIP)
Women were assigned to screening or usual care. Screening consisted of a yearly mammogram and physical exam. Five years of follow-up produced these results:
                         Breast Cancer
Screening Test Result    Yes       No        Total
  Positive               132       983       1,115
  Negative               45        63,650    63,695
  Total                  177       64,633    64,810

12 Suitable Test
Sensitivity = 132/177 = 74.6%
Specificity = 63,650/64,633 = 98.5%
Interpretation: the screening was very good at picking out the women who did not have cancer (see specificity), but it missed about 25% of the women who did have cancer (see sensitivity).
To measure sensitivity and specificity you can wait for disease to develop (as above) or you can measure the results of the screening test against the outcome of another screening or diagnostic test (a gold standard).

13 Suitable Test
Criterion of positivity: the test value at which the screening test outcome is considered positive.
[Figure: scale of test results running from clearly negative, through a grey zone, to clearly positive, with three possible cutoff points A, B, and C]
The criterion of positivity affects sensitivity and specificity. You must trade off between the two.

14 Suitable Test
What are the sensitivity and specificity if A (or B or C) is used as the cutoff for a positive result?
If the criterion is low (Point A), then sensitivity is good but specificity suffers.
If the criterion is high (Point C), then specificity is good but sensitivity suffers.
The decision weighs the cost of false positives against the cost of false negatives, as in the sketch below.
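To make the trade-off concrete, here is a small Python sketch; the test values and the cutoffs labelled A, B, and C are hypothetical numbers chosen only to illustrate how a lower cutoff buys sensitivity at the expense of specificity.

```python
# Hypothetical continuous test values, for illustration only.
diseased     = [6, 7, 8, 9, 10, 11, 12, 13]   # people who truly have the disease
non_diseased = [1, 2, 3, 4, 5, 6, 7, 8]       # people who truly do not

def sens_spec(cutoff):
    """Call the test positive when the value is at or above the cutoff."""
    tp = sum(v >= cutoff for v in diseased)
    tn = sum(v < cutoff for v in non_diseased)
    return tp / len(diseased), tn / len(non_diseased)

for label, cutoff in [("A (low)", 4), ("B (middle)", 7), ("C (high)", 10)]:
    sens, spec = sens_spec(cutoff)
    print(f"Cutoff {label}: sensitivity {sens:.0%}, specificity {spec:.0%}")
# A low cutoff (A) gives 100% sensitivity but poor specificity;
# a high cutoff (C) gives 100% specificity but misses half the cases.
```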

15 Suitable Screening Program
Definition of a screening program: application of a specific test in a specific population for a specific disease.
You want to determine if the screening program is successful. Does it reduce morbidity and mortality?
How to evaluate? Feasibility measures and effectiveness measures.

16 Evaluation of Screening Program: Feasibility Measures
Acceptability, cost, predictive value of a positive test (PV+), predictive value of a negative test (PV-).
                         Disease Status
Screening Test Result    Yes       No        Total
  Positive               a         b         a + b
  Negative               c         d         c + d
  Total                  a + c     b + d     a + b + c + d

17 Evaluation of Screening Program
PV+ = a / (a + b) = true positives / all who test positive
PV- = d / (c + d) = true negatives / all who test negative

18 Breast Cancer Screening Program of HIP
[2 × 2 table of the HIP screening test results, as shown on slide 11]

19 Breast Cancer Screening Program of HIP
PV+ = 132/1,115 = 11.8%
PV- = 63,650/63,695 = 99.9%
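All four measures can be reproduced in a few lines of Python from the counts implied by the HIP figures quoted above (132 true positives, 983 false positives, 45 false negatives, 63,650 true negatives); this is an illustrative check, not part of the original program.

```python
# Counts implied by the HIP figures on these slides.
a, b, c, d = 132, 983, 45, 63_650

sens = a / (a + c)   # 132 / 177
spec = d / (b + d)   # 63,650 / 64,633
ppv  = a / (a + b)   # 132 / 1,115
npv  = d / (c + d)   # 63,650 / 63,695

print(f"Sensitivity {sens:.1%}, Specificity {spec:.1%}")  # 74.6%, 98.5%
print(f"PV+ {ppv:.1%}, PV- {npv:.1%}")                    # 11.8%, 99.9%
```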

20 Evaluation of Screening Program
PV will increase as sensitivity, specificity, and disease prevalence increase. For example, PV+ will increase if you perform breast cancer screening in a higher-risk population (e.g., women with a family history of breast cancer).

21 How prevalence affects PV
Use a screening test with 100% sensitivity and 99.9% specificity in two populations:
Population A: 1,000 people with a low prevalence of disease (1/1,000). Two positive results; one will be a true positive, one will be a test error.
Population B: 1,000 people with a higher prevalence of disease (10/1,000). Eleven positive test results; 10 will be true positives, one will be a test error.

22 Impact of Disease Prevalence – PPV?
Population A (prevalence 1/1,000):
                         Disease Status
Screening Test Result    Yes       No        Total
  Positive               1         1         2
  Negative               0         998       998
  Total                  1         999       1,000

Population B (prevalence 10/1,000):
                         Disease Status
Screening Test Result    Yes       No        Total
  Positive               10        1         11
  Negative               0         989       989
  Total                  10        990       1,000
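A short sketch (illustration only) reproduces the two PPVs, applying the 100% sensitivity and 99.9% specificity from the previous slide to the two prevalences.

```python
def ppv(prevalence, n=1_000, sensitivity=1.0, specificity=0.999):
    """Expected positive predictive value in a population of size n."""
    diseased = prevalence * n
    true_positives = sensitivity * diseased
    false_positives = (1 - specificity) * (n - diseased)
    return true_positives / (true_positives + false_positives)

print(f"Population A (1/1,000):  PPV = {ppv(1 / 1_000):.0%}")   # 50%
print(f"Population B (10/1,000): PPV = {ppv(10 / 1_000):.0%}")  # 91%
```

The test itself is unchanged; only the prevalence differs, yet the chance that a positive result is a true positive nearly doubles.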

23 Evaluation of Screening Program
Efficacy measures of evaluation. Goal: reduce morbidity and mortality.
For chronic diseases, compare:
Severity of disease at diagnosis
Cause-specific mortality rate among people picked up by screening versus people picked up by routine care

24 Biases when evaluating a screening program
There are three possible sources of bias when evaluating a screening program that may result in a false picture of its efficacy:
Volunteer bias
Lead-time bias
Length bias

25 Volunteer Bias
People who choose to participate in the screening program may be healthier or at higher risk of developing the disease than those who don't participate.

26 Lead-Time Bias
Lead time is the amount of time by which the diagnosis was advanced due to screening. Lead-time bias means that survival may erroneously appear to be increased among screen-detected cases simply because the diagnosis was made earlier in the course of the disease.

27 Hypothetical Screened and Symptom-Diagnosed Cases of Breast Cancer
Survival for Woman II = 3 years. Apparent survival for Woman I = 5 years.
Both women died at the same age, but lead-time bias makes it seem as though Woman I had a 2-year longer survival than Woman II.
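A tiny numeric sketch makes the point; the ages are hypothetical but chosen to match the 2-year lead time and the 5-year versus 3-year survival described on the slide.

```python
# Hypothetical ages consistent with the slide's 2-year lead time.
women = {
    "Woman I (screen-detected)":    {"diagnosed": 60, "died": 65},
    "Woman II (symptom-diagnosed)": {"diagnosed": 62, "died": 65},
}

for name, w in women.items():
    survival = w["died"] - w["diagnosed"]
    print(f"{name}: apparent survival after diagnosis = {survival} years")
# Both women die at the same age; the 2-year difference in "survival"
# is entirely lead time, not a real postponement of death.
```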

28 Length Bias
Less aggressive forms of a disease are more likely to be picked up in a screening program because they have a longer detectable pre-clinical phase. Less aggressive forms of disease usually have better survival.

29 Length Bias
Consider the following screening program that was administered to five individuals. ( ) is the detectable pre-clinical phase for a particular person. Which individuals are picked up at the screening?
[Figure: five horizontal timelines, one per individual, each with a parenthesized interval marking that person's DPCP; a caret (^) on the TIME axis marks the single time of the screen]
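A minimal simulation sketch illustrates the idea; the DPCP intervals and the screening time below are invented for illustration, not taken from the slide's figure.

```python
# Each tuple is a hypothetical person's detectable pre-clinical phase:
# (start of DPCP, end of DPCP), in arbitrary time units.
dpcp = {
    "Person 1 (slow, long DPCP)":  (0, 10),
    "Person 2 (slow, long DPCP)":  (3, 12),
    "Person 3 (fast, short DPCP)": (4, 5),
    "Person 4 (fast, short DPCP)": (8, 9),
    "Person 5 (fast, short DPCP)": (1, 2),
}

screen_time = 6  # the single moment when the screen is applied

for person, (start, end) in dpcp.items():
    detected = start <= screen_time <= end
    print(f"{person}: {'detected' if detected else 'missed'} by the screen")
# Only the long-DPCP cases span the screen time, so screening over-samples
# the less aggressive, better-prognosis disease (length bias).
```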

30 Summary of Screening
Screening is the presumptive identification of unrecognized disease by the application of tests, exams, etc.
A suitable disease must be serious, with important consequences, and progressive.
There must be effective treatment available for the disease when it is identified early.
A suitable test must have low cost, be acceptable, and have a high degree of validity.
Validity is measured by sensitivity and specificity.

31 Summary of Screening
Screening programs administer screening tests in particular populations.
Programs are evaluated mainly by examining predictive value and outcome measures such as stage distribution and cause-specific mortality.
Evaluation must consider lead-time bias, length-biased sampling, and volunteer bias.

32 November 21, 2012. Cancer Survivor or Victim of Overdiagnosis? By H. GILBERT WELCH, Hanover, N.H.
FOR decades women have been told that one of the most important things they can do to protect their health is to have regular mammograms. But over the past few years, it’s become increasingly clear that these screenings are not all they’re cracked up to be. The latest piece of evidence appears in a study in Wednesday’s New England Journal of Medicine, conducted by the oncologist Archie Bleyer and me.
The study looks at the big picture, the effect of three decades of mammography screening in the United States. After correcting for underlying trends and the use of hormone replacement therapy, we found that the introduction of screening has been associated with about 1.5 million additional women receiving a diagnosis of early-stage breast cancer.
That would be a good thing if it meant that 1.5 million fewer women had gotten a diagnosis of late-stage breast cancer. Then we could say that screening had advanced the time of diagnosis and provided the opportunity of reduced mortality for 1.5 million women. But instead, we found that there were only around 0.1 million fewer women with a diagnosis of late-stage breast cancer. This discrepancy means there was a lot of overdiagnosis: more than a million women who were told they had early-stage cancer — most of whom underwent surgery, chemotherapy or radiation — for a “cancer” that was never going to make them sick. Although it’s impossible to know which women these are, that’s some pretty serious harm.
But even more damaging is what these data suggest about the benefit of screening. If it does not advance the time of diagnosis of late-stage cancer, it won’t reduce mortality. In fact, we found no change in the number of women with life-threatening metastatic breast cancer.

33 The harm of overdiagnosis shouldn’t come as a surprise. Six years ago, a long-term follow-up of a randomized trial showed that about one-quarter of cancers detected by screening were overdiagnosed. And this study reflected mammograms as used in the 1980s. Newer digital mammograms detect a lot more abnormalities, and the estimates of overdiagnosis have risen commensurately: now somewhere between a third and half of screen-detected cancers.
The news on the benefits of screening isn’t any better. Some of the original trials from back in the ’80s suggested that mammography reduced breast cancer mortality by as much as 25 percent. This figure became the conventional wisdom. In the last two years, however, three investigations in Europe came to a radically different conclusion: mammography has either a limited impact on breast cancer mortality (reducing it by less than 10 percent) or none at all.
Feeling depressed? Don’t be. There’s good news here, too: breast cancer mortality has fallen substantially in the United States and Europe. But it’s not about screening. It’s about treatment. Our therapies for breast cancer are simply better than they were 30 years ago. As treatment improves, the benefit of screening diminishes. Think about it: because we can treat most patients who develop pneumonia, there’s little benefit to trying to detect pneumonia early. That’s why we don’t screen for pneumonia.
So here is what we now know: the mortality benefit of mammography is much smaller, and the harm of overdiagnosis much larger, than has been previously recognized. But to be honest, that general message has been around for more than a decade. Why isn’t it getting more traction?
The reason is that no other medical test has been as aggressively promoted as mammograms — efforts that have gone beyond persuasion to guilt and even coercion (“I can’t be your doctor if you don’t get one”). And proponents have used the most misleading screening statistic there is: survival rates. A recent Komen foundation campaign typifies the approach: “Early detection saves lives. The five-year survival rate for breast cancer when caught early is 98 percent. When it’s not? It decreases to 23 percent.”

34 Survival rates always go up with early diagnosis: people who get a diagnosis earlier in life will live longer with their diagnosis, even if it doesn’t change their time of death by one iota. And diagnosing cancer in people whose “cancer” was never destined to kill them will inflate survival rates — even if the number of deaths stays exactly the same. In short, tell everyone they have cancer, and survival will skyrocket.
Screening proponents have also encouraged the public to believe two things that are patently untrue. First, that every woman who has a cancer diagnosed by mammography has had her life saved (consider those “Mammograms save lives. I’m the proof” T-shirts for breast cancer survivors). The truth is, those survivors are much more likely to have been victims of overdiagnosis. Second, that a woman who died from breast cancer “could have been saved” had her cancer been detected early. The truth is, a few breast cancers are destined to kill no matter what we do.
What motivates proponents to use these tactics? Largely, it’s sincere faith in the virtue of early diagnosis, the belief that screening must be good for women. And 30 years ago, when we started down this road, they may have been right. In light of what we know now, it is wrong to continue down it. Let’s offer the proponents amnesty and move forward.
What should be done? First and foremost, tell the truth: women really do have a choice. While no one can dismiss the possibility that screening may help a tiny number of women, there’s no doubt that it leads many, many more to be treated for breast cancer unnecessarily. Women have to decide for themselves about the benefit and harms.
But health care providers can also do better. They can look less hard for tiny cancers and precancers and put more effort into differentiating between consequential and inconsequential cancers. We must redesign screening protocols to reduce overdiagnosis or stop population-wide screening completely. Screening could be targeted instead to those at the highest risk of dying from breast cancer — women with strong family histories or genetic predispositions to the disease. These are the women who are most likely to benefit and least likely to be overdiagnosed.
One final plea: Can we please stop using screening mammography as a measure of how well our health care system is performing? That’s beginning to look like a cruel joke: cruel because it leads doctors to harass women into compliance; a joke because no one can argue this is either a public health imperative or a valid measure of the quality of care.
Breast cancer is arguably the most important cancer for a nonsmoking woman to care about. Diagnostic mammography — when a woman with a breast lump gets a mammogram to learn if it’s something to worry about — is an important tool. No one argues about this. Pre-emptive mammography screening, on the other hand, is, at best, a very mixed bag — it most likely causes more health problems than it solves.
H. Gilbert Welch is a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice and an author of “Overdiagnosed: Making People Sick in the Pursuit of Health.”

