How to design and interpret controlled clinical trials: “the dark side of the moon”. “How to” session, ESH, June 2005. Andreas Pittaras, MD

E. Freis: the father of the first multicenter, double-blind, randomized trial of cardiovascular drugs, the VA Cooperative Study on Antihypertensive Agents

10,000 new randomized trials are published every year. General internists would need to read 20 articles a day, all year round, to maintain present knowledge. Systematic reviews and guidelines reduce this problem.

“The aim of science (…the clinical trial) is not to open a door to endless wisdom, but to set a limit to endless error.” - Bertolt Brecht

Do we wear the same eyeglasses? Clinical Trials

Clinical Studies: Essential Questions Was the study original? Whom is the study about? Was the design of the study sensible? Was systematic bias avoided or minimized? Was the study large enough, and continued for long enough, to make the results credible?

Clinical Studies: Essential Questions Was the study original?

Is there any similar study? Is this study bigger, continued for longer, or otherwise more substantial than previous one(s)? Is the methodology of this study any more rigorous (in particular, does it address any specific methodological criticisms of previous studies)?

Will the numerical results of this study add significantly to a meta-analysis of previous studies? Is the population that was studied different in any way (has the study looked at different ages, sex, or ethnic groups than previous studies)? Is the clinical issue addressed of sufficient importance, and is there sufficient doubt in the minds of the public or key decision makers, to make new evidence “politically” desirable even when it is not strictly scientifically necessary?

Clinical Studies: Essential Questions Whom is the study about?

How were the subjects recruited? Advertisement in a local newspaper, primary care, veterans, homeless people, etc.
Who was included in the study? Coexisting illness, local language, other medication, illiterate people, etc. (the results of studies of new drugs in 23-year-old healthy male volunteers will not be applicable to the average elderly woman)
Who was excluded from the study? A study may be restricted to patients with moderate or severe CHF, which could lead to false conclusions about mild CHF. Hospital outpatient studies have a different disease spectrum from primary care.
Were the subjects studied in real-life circumstances? If not, doubt the applicability of the findings to your own practice.

Clinical Studies: Essential Questions Was the design of the study sensible?

What specific intervention or other maneuver was being considered, and what was it being compared with? It is tempting to take published statements at face value, but authors frequently misrepresent (usually subconsciously rather than deliberately) what they actually did, and they overestimate its originality and potential importance. Below are examples of problematic descriptions in the methods section of a clinical trial.

What the authors said: "We measured how often GPs ask patients whether they smoke."
What they should have said (or should have done): "We looked in patients' medical records and counted how many had had their smoking status recorded."
An example of: assumption that medical records are 100% accurate.

What the authors said: "We measured how doctors treat low back pain."
What they should have said: "We measured what doctors say they do when faced with a patient with low back pain."
An example of: assumption that what doctors say they do reflects what they actually do.

What the authors said: "We compared a nicotine-replacement patch with placebo."
What they should have said: "Subjects in the intervention group were asked to apply a patch containing 15 mg nicotine twice daily; those in the control group received identical-looking patches."
An example of: failure to state the dose of the drug or the nature of the placebo.

What the authors said: "We asked 100 teenagers to participate in our survey of sexual attitudes."
What they should have said: "We approached 147 white American teenagers aged (85 males) at a summer camp; 100 of them (31 males) agreed to participate."
An example of: failure to give sufficient information about subjects. (Note that in this example the figures indicate a recruitment bias towards females.)

What the authors said: "We randomized patients to either 'individual care plan' or 'usual care'."
What they should have said: "The intervention group were offered an individual care plan consisting of...; control patients were offered...."
An example of: failure to give sufficient information about the intervention. (Enough information should be given to allow the study to be repeated by other workers.)

What the authors said: "To assess the value of an educational leaflet, we gave the intervention group a leaflet and a telephone helpline number. Controls received neither."
What they should have done: if the study is purely to assess the value of the leaflet, both groups should have been given the helpline number.
An example of: failure to treat groups equally apart from the specific intervention.

What the authors said: "We measured the use of vitamin C in the prevention of the common cold."
What they should have done: a systematic literature search would have found numerous previous studies on this subject [14].
An example of: an unoriginal study.


What outcome was measured, and how? If you had an incurable disease and were testing a new drug, you would measure the efficacy of the drug in terms of whether it made you live longer (and perhaps whether life was worth living, given your condition and any side effects of the medication). The measurement of symptomatic effects (pain), functional effects (mobility), psychological effects (anxiety), or social effects (inconvenience) of an intervention is even more problematic. What is important in the eyes of the doctor may not be valued so highly by the patient, and vice versa.

Clinical Studies: Essential Questions Was systematic bias avoided or minimized?

The aim: groups as similar as possible except for the particular difference being examined. Both groups should receive the same explanations, have the same contacts with health professionals, and be assessed the same number of times, using the same outcome measures.

Different study designs to reduce systematic bias Randomized controlled trials Non-randomized controlled clinical trials Cohort studies Case-control studies

Randomized double-blind controlled trials: the “gold standard”. The two treatments are investigated concurrently. Allocation of treatments to patients is by a random process. Neither the patient nor the clinician knows which treatment was received. “Single blind”: only the patient is unaware.

Sources of bias to check for in a randomised controlled trial (figure © 1997 BMJ Publishing Group Ltd.)

Random allocation: every patient has the same chance of receiving either treatment, and allocation is thus unbiased by definition. Minimization: each patient automatically receives the treatment that leads to less imbalance (an alternative for small trials). Systematic allocation: pseudo-random (e.g., even vs odd days); open to abuse. Non-random concurrent controls: active treatment vs a control group of ineligible patients and refusers; volunteer bias. Historical controls: a single group given the new treatment vs a group previously treated with an alternative treatment.
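
A minimal sketch of the first option, simple random allocation, in Python (illustrative only; the function name and seed are my own, not from the deck):

import random

random.seed(2005)  # fixed seed so the allocation below is reproducible

def allocate(n_patients):
    """Assign each patient to treatment 'A' or 'B' by an independent coin flip,
    giving every patient the same chance of receiving either treatment."""
    return [random.choice("AB") for _ in range(n_patients)]

arms = allocate(10)
print(arms)
print(arms.count("A"), "vs", arms.count("B"))  # small trials may end up unbalanced

Note that pure random allocation can leave a small trial unbalanced, which is exactly the problem minimization is designed to address.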

Alternative designs Parallel group design (two different groups are studied concurrently) Crossover design Within group (paired) comparisons Sequential designs Factorial designs Adaptive designs Zelen’s design

Sequential design: parallel groups are studied, but the trial continues until there is a clear benefit of one treatment, or until it is unlikely that any difference will emerge. Such trials will be shorter than fixed-length trials. The data are analyzed after each patient's results become available. Drawbacks: blinding problems, ethical difficulties. Group sequential trial: a useful variation in which the data are analyzed after each block of patients becomes available (allows early termination).

Factorial designs: two treatments, A & B, are simultaneously compared with each other and with a control. Patients are divided into four groups, which receive the control treatment, A only, B only, or both A & B. This allows investigation of the “synergy” between A & B.
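
The four cells of such a 2×2 factorial design can be listed programmatically; a minimal sketch (labels are my own, purely illustrative):

from itertools import product

# Cross the two treatment factors: each patient falls into exactly one
# of the four groups (control, A only, B only, both A and B).
factor_a = ["no A", "A"]
factor_b = ["no B", "B"]
for a, b in product(factor_a, factor_b):
    print(a, "+", b)
# no A + no B   <- control group
# no A + B      <- B only
# A + no B      <- A only
# A + B         <- both, allowing the A-B interaction ("synergy") to be estimated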

Sources of bias to check for in a randomised controlled trial (figure © 1997 BMJ Publishing Group Ltd.)

Clinical Studies: Essential Questions Was assessment “blind”?

“Blind” assessment? If the people who assess outcomes know the patient’s group, bias can creep in when they: judge whether someone is still clinically in heart failure; say whether an x-ray is “improved” from last time; recheck a high BP measurement in the active group. Treatment effects can also unblind assessors: BB vs ACEi, ARBs, or diuretics (lower HR, lower K+); CCB vs others (pedal edema).

Sources of bias to check for in a randomised controlled trial (figure © 1997 BMJ Publishing Group Ltd.)

Clinical Studies: Essential Questions Was the study large enough, and continued for long enough, to make the results credible?

Sample Size: big enough to have a high chance of detecting a worthwhile effect if it exists, and big enough to be reasonably sure that no benefit exists if none is found in the trial.

Errors defined. Type I error (α): the probability of detecting a statistically significant difference when the treatments are in reality equally effective (the chance of a false-positive result). Type II error (β): the probability of not detecting a statistically significant difference when a difference of a given magnitude in reality exists (the chance of a false-negative result). Power (1−β): the probability of detecting a statistically significant difference when a difference of a given magnitude really exists.

The simplest approximate sample size formula for binary outcomes, assuming α=0.05 (two-sided), power=0.90, and equal sample sizes in the two groups:

n = 10.51 × [(R+1) − p₂(R²+1)] / [p₂(1−R)²]

n: the sample size in each of the groups
p₁: event rate in the treatment group
p₂: event rate in the control group
R: risk ratio (p₁/p₂)

The same formula in a worked example, again assuming α=0.05, power=0.90, and equal sample sizes in the two groups:

n = 10.51 × [(0.60+1) − 0.10(0.60²+1)] / [0.10(1−0.60)²] = 962

n: the sample size in each of the groups
p₁: 0.06 (6% event rate in the treatment group)
p₂: 0.10 (estimated 10% event rate in the control group)
R: 0.60 = 6%/10% (i.e., to detect a 40% relative reduction)
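
The same arithmetic as a short Python sketch (the function name is mine; the constant 10.51 is the multiplier implied by α=0.05 two-sided and power 0.90):

def n_per_group(p1, p2):
    """Approximate sample size per group for a binary outcome,
    assuming alpha=0.05 (two-sided) and power=0.90."""
    R = p1 / p2  # risk ratio
    return 10.51 * ((R + 1) - p2 * (R**2 + 1)) / (p2 * (1 - R)**2)

# 6% event rate on treatment vs 10% on control (a 40% relative reduction):
print(round(n_per_group(0.06, 0.10)))  # -> 962 patients in each group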

Approximate relative trial sizes for different levels of α and power (table: power (1−β) by α (type I error))

Duration of follow-up: the study must continue long enough for the effect of the intervention to be reflected in the outcomes. A study of a new painkiller for postoperative pain may need a follow-up period of only 48 h. The effect of nutritional supplements in the preschool years on final height needs decades. Events in newly diagnosed DM need >10 years.

Completeness of follow-up (“drop-outs”): not receiving the tablets; missing interim checkups; loss of patient motivation; side effects; clinical reasons (concurrent illness, pregnancy); incorrect entry; death.

Sources of bias to check for in a randomised controlled trial (figure © 1997 BMJ Publishing Group Ltd.)

Different results in trials: high-dose oxygen therapy in neonates; antiarrhythmic drugs after MI; fluoride treatment for osteoporosis; bed rest in twin pregnancy; HRT in vascular prevention; high-dose aspirin for carotid endarterectomy; β-blockers in heart failure; digoxin after MI.

Interpretation “tips” for results: p<0.05 means the result would arise by chance less than 1 time in 20 (“significant”); p<0.01, less than 1 in 100 (“highly significant”). A CI (“confidence interval”) around a result indicates the limits within which the “real” difference is likely to lie. Every r value should be accompanied by a p value or a CI.
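
As an illustration of how a CI puts limits around a difference, a minimal sketch of a 95% CI for the difference between two event rates (normal approximation; the numbers reuse the earlier 6% vs 10% example and are purely illustrative):

import math

def ci_risk_difference(e1, n1, e2, n2, z=1.96):
    """Confidence interval for p1 - p2 (normal approximation; z=1.96 gives 95%)."""
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# 58/962 events on treatment vs 96/962 on control:
lo, hi = ci_risk_difference(58, 962, 96, 962)
print(f"({lo:.3f}, {hi:.3f})")  # about (-0.064, -0.015): excludes 0, so p < 0.05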

Interpretation “tips” for results: Relative Risk of death, Relative Risk Reduction, Absolute Risk Reduction, Number Needed to Treat, Odds Ratio.
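
A minimal sketch computing each of these measures from two event rates, reusing the 6% vs 10% figures from the sample-size example (function and variable names are mine):

def effect_measures(p_treat, p_ctrl):
    """Different ways of expressing the same treatment effect."""
    rr = p_treat / p_ctrl                # relative risk
    rrr = 1 - rr                         # relative risk reduction
    arr = p_ctrl - p_treat               # absolute risk reduction
    nnt = 1 / arr                        # number needed to treat
    odds_ratio = (p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl))
    return rr, rrr, arr, nnt, odds_ratio

rr, rrr, arr, nnt, oratio = effect_measures(0.06, 0.10)
print(f"RR={rr:.2f} RRR={rrr:.0%} ARR={arr:.0%} NNT={nnt:.0f} OR={oratio:.2f}")
# RR=0.60 RRR=40% ARR=4% NNT=25 OR=0.57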

10,000 new randomized trials are published every year. General internists would need to read 20 articles a day, all year round, to maintain present knowledge. Systematic reviews and guidelines reduce this problem. 30-40% of patients do not receive care according to present scientific evidence. 20-25% of the care provided is not needed or is potentially harmful.

Common clinician concerns about trials, subgroups, meta-analyses, and risk: “Could my patient have been randomized in this trial? If so, the results are applicable; if not, they may not be.” “Is my patient so different from those in the trial that its results cannot help me make my treatment decision?”

Nested triangles: different populations with a common condition. Source population → eligible population → participants; exposure or intervention vs comparison or control; outcomes (+/−).