
1 Chapter 6 Experimental Studies

2 Chapter 6 Outline
6.1 Introduction
6.2 Historical perspective
6.3 General concepts
6.4 Data analysis

3 Epi Experiments ("Trials")
Trials: from the French trier (to try)
Clinical trial – tests therapeutic interventions applied to individuals (e.g., a chemotherapy trial)
Field trial – tests preventive interventions applied to individuals (e.g., a vaccine trial)
Community trial – tests interventions applied at the aggregate level (e.g., fluoridation of a public water supply)

4 Illustrative Example 6.1: WHI Clinical Trial
40 US clinical centers
Recruitment: 1993-1998
Exposure randomized, double blinded: estrogen + progestin vs. identical-looking placebo
Average follow-up: 5.2 years
Primary outcome = coronary heart disease (CHD)

5 Survival curves, WHI estrogen trial (figure)

6 Illustrative Example 6.2: Vitamin A Community Trial
450 Sumatran villages with high childhood mortality rates
Exposure = vitamin A supplementation program vs. no intervention
Random allocation of the intervention: 229 treatment villages, 221 control villages

7 Historical perspective (read in text)
Biblical reference
Van Helmont's proposal (1662)
James Lind's scurvy experiment (1753)
Modern trials
–Polio trial (1954)
–MRFIT (1982)
–WHI (2002)

8 "Natural Experiments"
Natural conditions that mimic an experiment
Example: French surgeon Paré (1510–1590) ran out of boiling oil to treat wounds → forced to use an innocuous lotion for treatment → noticed vastly improved results
Not a true experiment because the intervention was not allocated by study protocol

9 Selected Concepts in Experimental Design
1. The control group (and the placebo effect)
2. Randomization & comparability
3. Follow-up and outcome ascertainment
4. Intention-to-treat vs. per-protocol analysis

10 The effects of an exposure can only be judged in comparison to what would happen in its absence
Treatment group: exposed to the intervention
Control group: not exposed to the intervention

11 Illustration: MRFIT, the Multiple Risk Factor Intervention Trial (1982)
12,855 high-risk men, 35 to 57 years old
Randomly assigned to a multi-factor intervention ("special intervention") group or a usual-care group
Study endpoints: coronary heart disease (CHD) mortality and overall mortality
Results described here: http://www.ncbi.nlm.nih.gov/pubmed/7050440
No significant difference in endpoint rates; also, lower than expected rates in both groups
Had no control group been used, the intervention might have unjustifiably been declared a success

12 Polio Field Trial (1954)
Polio rates (per 100,000):
Placebo group: 69
Refusers: 46
Vaccinated group: 28
Had refusers been used as the control group, the effects of the intervention would have been underestimated
(Am J Pub Health, 1957, 47: 283-7; Dr. Jonas Salk, 1953)

13 The placebo effect
Improvements attributed to an inert intervention
Despite popular belief, placebos have no real effect
False impressions of placebo effects can be explained by spontaneous improvement, fluctuation of symptoms, regression to the mean, additional treatment, conditional switching of placebo treatment, scaling bias, irrelevant response variables, answers of politeness, experimental subordination, conditioned answers, neurotic or psychotic misjudgment, psychosomatic phenomena, misquotation, etc. (Kienle & Kiene, 1997)

14 The Hawthorne Effect
Improvements in behavior because subjects know they are being observed → effects unrelated to the intervention
Initially observed in industrial psychology experiments in the 1930s
A comparable attention-bias effect is seen in trials

15 Randomization and Comparability
Randomization works by balancing potential confounding factors in the treatment and control groups → "like-to-like" comparisons → differences observed at the completion of the trial are due to the treatment or to "chance"
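As a concrete illustration, here is a minimal Python sketch of simple randomization, assuming a hypothetical list of participant IDs: each subject is assigned to treatment or control purely by chance, so measured and unmeasured factors tend to balance across the groups.

import random

random.seed(42)  # fixed seed only so the example allocation is reproducible

# Hypothetical participant IDs; in a real trial these would come from enrollment.
participants = [f"subject_{i:03d}" for i in range(1, 21)]

# Simple randomization: an independent "coin flip" for each participant.
allocation = {p: random.choice(["treatment", "control"]) for p in participants}

for subject, group in sorted(allocation.items()):
    print(subject, group)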

16 Checking Group Comparability: WHI Trial (table)

17 Follow-up & Outcome Ascertainment
Follow-up → screening for study outcomes and confirming the outcomes as true (adjudication)
Study outcomes based on case definitions (uniform and valid criteria for case ascertainment)
The importance of blinding
–Single blinding
–Double blinding
–Triple blinding

18 Intention-to-treat vs. per-protocol analysis
Intention-to-treat (ITT) = "analyze as randomized" (regardless of compliance)
Per protocol (PP) = analyze only those who completed the protocol
Effectiveness = "real world" effect (including non-compliance)
Efficacy = effect under ideal conditions (e.g., complete compliance)
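A minimal Python sketch of the contrast, using hypothetical trial records (randomized arm, protocol completion, outcome): the intention-to-treat risks keep every subject in the arm they were randomized to, while the per-protocol risks drop non-completers.

# Hypothetical records: (randomized_arm, completed_protocol, had_event)
records = [
    ("treatment", True,  False),
    ("treatment", False, True),   # non-complier, still counted under ITT
    ("treatment", True,  False),
    ("control",   True,  True),
    ("control",   True,  False),
    ("control",   False, True),   # dropped from the PP analysis
]

def risk(rows):
    """Incidence proportion = events / subjects."""
    return sum(r[2] for r in rows) / len(rows)

# ITT: analyze everyone as randomized, regardless of compliance.
itt_treat = risk([r for r in records if r[0] == "treatment"])
itt_ctrl  = risk([r for r in records if r[0] == "control"])

# PP: restrict to subjects who completed the protocol.
pp_treat = risk([r for r in records if r[0] == "treatment" and r[1]])
pp_ctrl  = risk([r for r in records if r[0] == "control" and r[1]])

print(f"ITT risks: treatment={itt_treat:.2f}, control={itt_ctrl:.2f}")
print(f"PP  risks: treatment={pp_treat:.2f}, control={pp_ctrl:.2f}")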

19 Human Subjects (ethics now covered in Ch. 5)
The Belmont Report
–Respect for individuals
–Beneficence
–Justice
IRB oversight
Data Safety Monitoring Board (DSMB)
Informed consent
Equipoise

20 Equipoise
Equipoise ≡ balanced doubt
Cannot knowingly expose a participant to harm
Cannot withhold a known benefit from study subjects
What's left? (Answer: equipoise)
Is equipoise the overriding principle of trial ethics?

21 Advocacy vs. Scientific Ethics
Advocacy, partisan, corporate, advertising, and political ethics: "Plan with the end result in mind."
Scientific ethics: bending over backwards to prove oneself wrong.
"I cannot give any scientist of any age any better advice than this: the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not." (Sir Peter Medawar)

22 Simple Analysis: Relative Effect
Data = WHI trial
E = HRT vs. placebo
D = CHD (yes or no)
Average follow-up: 5.2 years
How to say it: HRT increased the risk of CHD by 28% in relative terms.

23 Simple Analysis: Absolute Effect
Data = WHI trial
E = HRT vs. placebo
D = CHD (yes or no)
Average follow-up: 5.2 years
How to say it: In absolute terms, there were an additional 4.22 CHD cases for every thousand women using HRT over 5.2 years.
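The relative effect (slide 22) is the risk ratio, and the absolute effect above is the risk difference. A minimal Python sketch of both calculations follows; the counts are hypothetical placeholders, not the actual WHI data.

# Hypothetical 2-by-2 trial counts (not the WHI data).
cases_treated, n_treated = 30, 2000   # exposed (e.g., HRT) group
cases_control, n_control = 24, 2000   # placebo group

risk_treated = cases_treated / n_treated
risk_control = cases_control / n_control

# Relative effect: risk ratio, often reported as a percent increase.
rr = risk_treated / risk_control
print(f"Risk ratio = {rr:.2f} -> {100 * (rr - 1):.0f}% higher risk in relative terms")

# Absolute effect: risk difference, often reported per 1000 participants.
rd = risk_treated - risk_control
print(f"Risk difference = {1000 * rd:.1f} extra cases per 1000 over the follow-up period")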

24 Simple Analysis: Efficacy (same as the RRD, but without the minus sign)
This provides a suitable taking-off point for discussing Rothman, K. J., Adami, H. O., & Trichopoulos, D. (1998). Should the mission of epidemiology include the eradication of poverty? Lancet, 352(9130), 810-813.
450 Sumatran villages randomly assigned to either vitamin A supplementation or no intervention
How to say it: Vitamin A supplementation was 34% effective in preventing childhood mortality.

25 Simple Analysis: Absolute Effect
450 Sumatran villages randomly assigned to either vitamin A supplementation or control
How to say it: The effect was to reduce mortality by 2.47 deaths per 1000 children over the period of observation.
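Efficacy is one minus the risk ratio (the relative risk difference with the sign dropped), and the absolute effect is the corresponding risk difference. A minimal Python sketch, again with hypothetical counts rather than the actual trial data:

# Hypothetical counts (not the Sumatran vitamin A trial data).
deaths_treated, n_treated = 50, 10000   # children in vitamin A villages
deaths_control, n_control = 75, 10000   # children in control villages

risk_treated = deaths_treated / n_treated
risk_control = deaths_control / n_control

# Efficacy: the proportional reduction in risk, 1 - RR.
efficacy = 1 - risk_treated / risk_control
print(f"Efficacy = {100 * efficacy:.0f}% reduction in childhood mortality")

# Absolute effect: deaths prevented per 1000 children.
rd = risk_control - risk_treated
print(f"Absolute effect = {1000 * rd:.2f} fewer deaths per 1000 children")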

26 OpenEpi.com for data analysis
"Counts" menu for incidence proportions, prevalences, and case-control data
"Person Time" menu for rate data
Descriptive and inferential (confidence intervals and P-values) statistics
Can be used as a learning tool

27 6.1 Bicycle helmet campaign
You want to test whether a public awareness campaign about bicycle safety at elementary schools will increase bicycle helmet use among school-aged children. To test this intervention, you identify 12 elementary schools, half of which will be randomly assigned to participate in a school-wide bicycle helmet awareness program. The other 6 schools will serve as controls and will receive no special intervention. Research assistants will determine the percentage of bicyclists wearing helmets at standard locations in the neighborhoods of each of the schools before and after the intervention.
(A) What is the unit of intervention in this study? (The "unit of intervention" refers to the level at which the intervention is randomized. This may differ from the "unit of observation," which is the unit upon which the outcome is measured.)
(B) What is the unit of observation in this study?
(C) Even though the intervention was randomized in this study, there were only 6 treatment schools and 6 control schools. Therefore, there is a good chance that treatment and control schools will differ with respect to important characteristics such as socioeconomic status. Can you think of a way to control for socioeconomic status through a randomization or study design approach? (One possible approach is sketched below.)
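One possible design-based answer to part (C) is pair-matched (blocked) randomization: rank the schools by socioeconomic status, pair adjacent schools, and randomize within each pair so every pair contributes one intervention school and one control school. A minimal Python sketch, using hypothetical school names and socioeconomic scores:

import random

random.seed(7)  # fixed seed only so the example assignment is reproducible

schools = {  # hypothetical socioeconomic index for each school
    "School A": 32, "School B": 55, "School C": 47, "School D": 61,
    "School E": 28, "School F": 73, "School G": 40, "School H": 66,
    "School I": 51, "School J": 35, "School K": 58, "School L": 44,
}

ranked = sorted(schools, key=schools.get)            # order schools by SES
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

assignment = {}
for pair in pairs:
    random.shuffle(pair)                             # randomize within each SES-matched pair
    assignment[pair[0]] = "intervention"
    assignment[pair[1]] = "control"

for school in sorted(assignment):
    print(school, assignment[school])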
