The days ahead Monday-Wednesday –Training workshop on how to measure the actual reduction in HIV incidence that is caused by implementation of MC programs.


1 The days ahead
Monday-Wednesday
–Training workshop on how to measure the actual reduction in HIV incidence that is caused by implementation of MC programs
Thursday & Friday
–Share experiences about the problems countries are encountering in initial scale-up of MC services – and the solutions they have found
–Clarify what the unanswered questions are about how best to implement MC
Saturday
–Explore how measurement of the impact on incidence could also answer unresolved questions about how best to implement MC (e.g. optimal task sharing…)
Monday-Wednesday next week
–Third meeting on the use of the DMPPT tool to estimate the cost and forecast the possible impact of MC programs, to guide initial program design (and help decision-makers decide whether they want to implement MC)

2 How can we estimate effectiveness?
E.g., what is the effect of male circumcision on HIV incidence?
In other words: “How much does an increase in male circumcision cause HIV incidence to decrease?”

3 How can we estimate effectiveness?
If we implement MC in a community and the incidence of HIV does not change, does that mean that MC is not effective at reducing the incidence of HIV?
If we implement MC and incidence goes up, does that mean that MC increases HIV transmission?
If we implement MC and incidence falls, do we know that the MC caused the fall?

4 How can we estimate effectiveness?
We are not asking whether incidence changes at the same time that MC is implemented – we are asking whether incidence changes because MC is implemented.
Thus, we need to know how much incidence would have changed if MC had not been implemented – and how different that is from how much it changes with MC.

5 How can we estimate effectiveness?
Unfortunately, we can’t rewind the clock and observe the same community over the same period of time with and without MC.
We need an estimate of what would have happened to HIV incidence in the absence of an MC program.
We call this the counterfactual
–i.e. the “factual” is what really happens where MC is implemented and the “counterfactual” is what would have happened if MC had not been implemented
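
A minimal numeric sketch in Python of the distinction between a before-after change and an impact measured against a counterfactual (the numbers below are invented for illustration, not taken from the slides):

    # Invented numbers: HIV incidence per 100 person-years
    baseline = 2.0              # incidence before the MC program starts
    factual = 1.8               # what is actually observed after MC is implemented
    counterfactual = 3.0        # estimate of what incidence would have been without MC

    before_after_change = factual - baseline   # -0.2: "did incidence change?"
    impact = factual - counterfactual          # -1.2: the change caused by the MC program

    print(f"Before-after change:      {before_after_change:+.1f} per 100 person-years")
    print(f"Impact vs counterfactual: {impact:+.1f} per 100 person-years")

The same observed incidence can imply a small, zero, or even unfavorable before-after change while the program still has a large impact, because the counterfactual, not the baseline, is the right point of comparison.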

6 Creating a Good Counterfactual
Sometimes it is very simple:
–If I dye my hair tonight and come in tomorrow with black hair, you will easily believe that the change in hair color was caused by the hair dye
–In other words, if we strongly believe that something is not changing on its own, then we can assume that the counterfactual = baseline
–A good example of that for HIV is treatment

7 Creating a Good Counterfactual With HIV prevention it is never simple

8 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented]

9 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented, IMPACT]

10 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented, IMPACT?; caption: MC doubles HIV incidence?]

11 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented, Real IMPACT; caption: MC halves HIV incidence]

12 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented, IMPACT?; caption: MC reduces HIV incidence by 1/2]

13 Creating a Good Counterfactual
[Chart – labels: Baseline, MC Implemented, Real IMPACT; caption: MC reduces HIV incidence by 1/3]

14 Creating a Good Counterfactual
Where in the World Are We?

15 Creating a Good Counterfactual
Where in the World Are We?
If the purpose of the experiment was to test the effectiveness of the hints, should we have given the hints to the “experts” or to the rest of us?

16 Creating a Good Counterfactual
[Chart – axis: INCIDENCE (Low to High); label: EFFECT]
Ideally you want the communities that receive MC to be indistinguishable from those that don’t

17 Creating a Good Counterfactual
[Chart – axis: INCIDENCE (Low to High); label: NEGATIVE EFFECT?]
If they are different, it can bias the estimation of the effect

18 Creating a Good Counterfactual
[Chart – axis: INCIDENCE (Low to High); label: HUGE EFFECT?]
If they are different, it can bias the estimation of the effect
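
A hedged Python sketch, with made-up numbers, of how this bias arises: if the communities that adopt MC already had lower incidence than those that don’t, a simple treated-versus-untreated comparison mixes the true program effect with that pre-existing difference.

    import random

    random.seed(1)

    # Made-up example: the 10 communities that adopt MC happen to start with lower incidence
    mc_baseline = [random.gauss(2.0, 0.3) for _ in range(10)]   # ~2 per 100 person-years
    non_mc      = [random.gauss(4.0, 0.3) for _ in range(10)]   # ~4 per 100 person-years

    true_effect = -1.0                                 # assume MC really lowers incidence by 1
    mc_after = [x + true_effect for x in mc_baseline]  # incidence in MC communities after the program

    mean = lambda xs: sum(xs) / len(xs)
    naive_estimate = mean(mc_after) - mean(non_mc)
    print(f"Naive comparison: {naive_estimate:+.1f}; true effect: {true_effect:+.1f}")
    # The naive estimate (about -3.0) greatly overstates the effect because the
    # two groups of communities were different before the program even started.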

19 Creating a Good Counterfactual
The beneficiaries of the intervention and the counterfactual or control groups:
–have identical characteristics, except for benefiting from the intervention
With a good counterfactual, the only reason for different outcomes between treatments and controls is the intervention (I)


21 Creating a Good Counterfactual for MC
What if we compared the incidence among the men in a community who decide to get circumcised with the incidence among the men who decide not to get circumcised?
What if we compared the incidence in communities that decide to initiate circumcision services in their health center with the incidence in the ones that don’t?
What if we compared the incidence in towns without a clinic with the incidence in towns with a circumcision clinic?

22 Creating a Good Counterfactual for MC
What if we opened up a circumcision clinic in Soweto and another in Khayelitsha and another in KwaMashu – but they could only serve a small number of the men who want to be circumcised? If we give out lottery tickets to decide who can go in what month, could we use the ones who will wait a year as a comparison (counterfactual) group?
What if we focused circumcision services on the 15-24 year olds? Could we use the 25-30 year olds as a comparison group?
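
A small Python sketch of the lottery idea (all names and numbers hypothetical): when the wait-listed men are chosen by lot rather than by self-selection, the early and delayed groups start out essentially identical, so the delayed group is a credible counterfactual.

    import random

    random.seed(0)

    # Hypothetical applicants to the clinics; "risk" stands in for baseline HIV risk
    applicants = [{"id": i, "risk": random.gauss(3.0, 1.0)} for i in range(2000)]

    # Lottery: randomly choose who is circumcised this year; the rest wait a year
    random.shuffle(applicants)
    early_group = applicants[:1000]
    waitlist_group = applicants[1000:]

    mean_risk = lambda group: sum(p["risk"] for p in group) / len(group)
    print(f"Mean baseline risk, early group:    {mean_risk(early_group):.2f}")
    print(f"Mean baseline risk, waitlist group: {mean_risk(waitlist_group):.2f}")
    # Because the lottery is random, the two groups have nearly identical baseline risk,
    # so later differences in incidence can reasonably be attributed to circumcision.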

23

24 Multiple threats to a valid estimate of effectiveness regardless of a robust counterfactual (we have to live with these)
Measurement error/biases
Endogenous change
–Secular changes or drift (long-term trends in community, region, or country)
–Maturational trends (individual change)
–Interfering events
Hawthorne/cohort effects

25 Multiple threats to a valid estimate of effectiveness that can be controlled for with a robust counterfactual (examples on slides to come)
Selection bias
–Participants who voluntarily participate or have the exposure may be different from those without
Confounding factors that obfuscate attribution of the impact to the program; a robust counterfactual can control for such factors, both known and unknown

26 Selection bias from an evaluation of “enrolled versus not enrolled”

27 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
Consider a school-based pregnancy prevention program.
10 schools in the district are asked if they would like to participate.

28 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
5 schools elect to participate in the program (Pregnancy Prevention Program)
5 schools decline participation (No intervention)

29 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
Pregnancy Prevention Program: pregnancy rate = 2 per 100 student-years
No intervention: pregnancy rate = 3 per 100 student-years

30 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
Pregnancy Prevention Program: pregnancy rate = 2 per 100 student-years
No intervention: pregnancy rate = 3 per 100 student-years
Schools in the program had fewer adolescent pregnancies… Can we attribute this difference to the program?

31 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
Pregnancy Prevention Program: pregnancy rate = 2 per 100 student-years
No intervention: pregnancy rate = 3 per 100 student-years
Factor X (e.g., religiosity) may differ between the two groups of schools

32 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
Pregnancy Prevention Program: pregnancy rate = 2 per 100 student-years
No intervention: pregnancy rate = 3 per 100 student-years
Factor X differs between the two groups of schools
The observed effect might be due to differences in “Factor X” which led to differential selection into the program (“selection bias”)

33 Designs Leading to Biased Results: “Enrolled versus Not Enrolled”
This design compares “apples to oranges”
The reason for not enrolling might be correlated with the outcome
–You can statistically “control” for observed factors
–But you cannot control for factors that are “unobserved”
The estimated impact erroneously mixes the effects of different factors
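
A hedged Python simulation of this point (all numbers invented): an unobserved “Factor X” both lowers the pregnancy rate on its own and makes a school more likely to enroll, so enrolled schools look better even when the program has no effect at all.

    import random

    random.seed(42)

    TRUE_PROGRAM_EFFECT = 0.0   # assume the program does nothing

    schools = []
    for _ in range(10):
        factor_x = random.random()            # unobserved trait (the slides' "Factor X")
        enrolls = factor_x > 0.5              # schools high on Factor X tend to enroll
        rate = 3.0 - 2.0 * factor_x           # Factor X alone lowers the pregnancy rate
        rate += TRUE_PROGRAM_EFFECT if enrolls else 0.0
        schools.append({"enrolled": enrolls, "rate": rate})

    mean_rate = lambda rows: sum(r["rate"] for r in rows) / len(rows)
    enrolled = [s for s in schools if s["enrolled"]]
    not_enrolled = [s for s in schools if not s["enrolled"]]
    print(f"Enrolled schools:     {mean_rate(enrolled):.1f} per 100 student-years")
    print(f"Not-enrolled schools: {mean_rate(not_enrolled):.1f} per 100 student-years")
    # The enrolled schools look better even though the true program effect is zero:
    # the gap is driven entirely by the unobserved Factor X (selection bias).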

34 Confounding can result even from a “matched design”

35 “Matched Designs”
Individuals, groups, or communities are matched based on known characteristics to improve comparability:
–Age, race, sex
–Region, poverty

36 “Matched Designs”
From each pair, one receives the intervention
Differences in outcomes are compared within the pair

37 “Matched Designs”
Does this design ensure that the matched pairs are comparable on all factors except the intervention?
No:
–Only observed factors are used for matching
–Unobserved factors may differ
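
A brief Python sketch (invented numbers) of why matching is not enough: pairs are matched on an observed factor (age), but within each pair the member with the more favorable unobserved factor is the one who ends up treated, so the within-pair comparison still shows an “effect” that isn’t there.

    import random

    random.seed(7)

    TRUE_EFFECT = 0.0   # again assume the intervention does nothing

    pair_differences = []
    for _ in range(1000):
        age = random.randint(15, 24)          # observed matching factor, shared within the pair
        hidden_a = random.gauss(0.0, 1.0)     # unobserved factor, member A
        hidden_b = random.gauss(0.0, 1.0)     # unobserved factor, member B
        # Suppose the member with the higher unobserved factor is the one who gets treated
        treated_hidden, control_hidden = max(hidden_a, hidden_b), min(hidden_a, hidden_b)
        treated = 2.0 + 0.05 * (age - 20) - 0.5 * treated_hidden + TRUE_EFFECT
        control = 2.0 + 0.05 * (age - 20) - 0.5 * control_hidden
        pair_differences.append(treated - control)

    avg_diff = sum(pair_differences) / len(pair_differences)
    print(f"Average within-pair difference: {avg_diff:+.2f} (true effect is {TRUE_EFFECT:+.2f})")
    # Matching on age did not remove the bias: the unobserved factor still differs
    # within pairs, so the matched comparison suggests an effect that is not real.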

38 World Bank Challenged: Are Poor Really Helped?
By Celia Dugger, New York Times, July 28, 2004
WASHINGTON - Wealthy nations and international organizations, including the World Bank, spend more than $55 billion annually to better the lot of the world's 2.7 billion poor people. Yet they have scant evidence that the myriad projects they finance have made any real difference, many economists say.

39 Not sure where the previous slide goes. We could end here and then play the game that is illustrative of selection bias, and after that have the overview talk on IE study designs and analytic methods. This could be done as a whole group, as opposed to in country teams.


