
(Brief) Recommendations for Designing Your Randomized Controlled Trial (RCT)
Dr. Matthew Keough, August 8th, 2018, Summer School

What is an RCT?
A randomized controlled trial (RCT) is an experiment in which investigators randomly assign eligible human research participants or other units of study (e.g., classrooms, clinics, playgrounds) into groups to receive or not receive one or more interventions that are being compared. The results are analyzed by comparing outcomes across the groups. (CIHR, 2018)
A scientific experiment that tests a new treatment while simultaneously trying to reduce bias (Kraemer & Wilson, 2002).
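
To make the random-assignment idea concrete, here is a minimal Python sketch of simple 1:1 randomization. The participant IDs and seed are hypothetical; real trials typically use blocked or stratified randomization managed by a central service so allocation stays concealed from recruiters.

```python
# Minimal sketch of simple 1:1 randomization (hypothetical participant IDs).
import numpy as np

rng = np.random.default_rng(seed=2018)  # fixed seed so the allocation list is reproducible

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical participants
shuffled = rng.permutation(participants)

# First half of the shuffled list -> treatment, second half -> control (exact 1:1)
allocation = {pid: ("treatment" if i < len(shuffled) // 2 else "control")
              for i, pid in enumerate(shuffled)}

for pid in participants[:5]:
    print(pid, "->", allocation[pid])
```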

Psychological Perspective

Hierarchy of evidence, strongest to weakest: meta-analysis → RCTs → cohort studies → case-control studies → case series/case reports → editorials/expert opinion.

RECOMMENDATION #1: MAKE SURE THAT YOUR TREATMENT IS EMPIRICALLY INFORMED
Are you the first one to tackle this problem? What has been done already? Is there a “best practice” treatment that you can improve on?

RECOMMENDATION #2: BE TRANSPARENT
Pre-register your trial protocol. Do what you say you’ll do. Do not deviate from your original plans.

Question: How do null results relate to registration? These too have the potential to inform clinical decisions, and they can identify flaws in the research design. Week 2 readings: a core problem in psychology is that studies that don’t find significant results often aren’t published!

RECOMMENDATION #3: DESIGN WELL AND MEASURE
Who is in your target population? Who will you allow in? What groups will you have? What symptoms/variables will you measure, and how often?

Who is in your target population, and who will you let in? Traditional method: recruit the “purest” groups possible (i.e., maximize internal validity). Easier said than done, especially in psychology. Recent method: allow for more “complex” groups but measure and model the heterogeneity statistically (i.e., maximize external validity).

Question: Why do you think having a heterogeneous sample is beneficial? Answer: It improves generalizability, so results can have more impact, and it eases methodological issues with recruitment.

SIDENOTE: efficacy vs. effectiveness
Efficacy: testing whether your treatment works “in the lab,” under ideal, controlled conditions. Specific interventions. Internal validity. Strict methodology.
Speaker note: The next major category of decision points is the actual purpose of the study. As mentioned, RCTs typically compare one thing to another, but this really boils down to two types of RCTs: efficacy and effectiveness. Efficacy is what most people think of when they hear RCT; it looks at the effects of specific interventions. (Hunsley, Elliot, & Therrien, 2013; Nathan, Stuart, & Dolan, 2000)

SIDENOTE: efficacy vs. effectiveness (continued)
Effectiveness: testing whether your treatment works “in the real world,” under less controlled conditions. Feasibility, real-world situations. External validity. More open methodology. (Hunsley, Elliot, & Therrien, 2013; Nathan, Stuart, & Dolan, 2000)

What groups will you have?
Clinical trials examine how something performs compared to something else (Nathan, Stuart, & Dolan, 2000; Parloff, 1986). Waitlist control = passive control. “Standard practice” control = receiving the commonly accepted treatment (i.e., treatment as usual, TAU). Placebo control group.
Speaker note: It is not surprising that methodology is probably one of the most time-consuming parts of research, but it is also the most important. It essentially determines whether you are able to find what you are looking for (e.g., if you don’t run a power analysis and recruit the right number of people, you can’t expect to detect an effect even if it is there; a quick sample-size sketch follows below).
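
Since the speaker note raises power analysis, here is a hedged sketch of an a priori sample-size calculation using statsmodels. The effect size (Cohen's d = 0.5), alpha, power, and 20% attrition figure are illustrative assumptions, not values from the talk.

```python
# A priori power analysis for a two-arm trial (illustrative numbers only).
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed medium effect (Cohen's d)
                                   alpha=0.05,        # two-sided significance level
                                   power=0.80,        # desired power
                                   ratio=1.0,
                                   alternative='two-sided')
print(f"Required n per group: {n_per_group:.1f}")     # about 64 per group

# Pad recruitment for an assumed 20% drop-out rate
print("Recruit per group:", math.ceil(n_per_group / (1 - 0.20)))
```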

Question: How does the type of control group impact interpretation? Another question: What control group do you think is most relevant to psychotherapy? Answer: With a passive control, you are comparing an active intervention against a group that is getting nothing; depending on your research question, this may not address what you were hoping to test. The choice impacts interpretation because it limits what you can conclude: either that your new intervention is better than getting nothing (which, in reality, may not be the best or most accurate comparison) or that your intervention differs little from current standard practice. It also depends on your research question; make sure your method lets you measure what you actually want to measure. Some argue it is sufficient to compare against individuals who are not receiving any intervention (passive), while others argue it is more ethical for the control group to receive “standard practice.”

Methodology: blinding
Blinding is when the patient, the researcher, or both are unaware of the condition they are in. Psychotherapy research most often has single-blinding of patients. Pro: reduces bias. Con: double-blinding is very difficult to achieve. (Nathan, Stuart, & Dolan, 2000)

QUESTION: WHY DO YOU THINK IT IS DIFFICULT TO HAVE COMPLETE BLINDING? Or, phrased differently: What poses a challenge for blinding in psychotherapy research? Answer: It is very challenging for the therapist not to know the intent of the treatment. So what do you think is best practice?

What symptoms/variables will you measure and how often? You need to specify a primary outcome. You may also specify secondary outcomes (and you should!). Make sure all measures are reliable (and ideally have been validated in your target population; a quick reliability check is sketched below). Measure moderators and mediators as well!
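
As a hedged illustration of checking reliability, the snippet below computes Cronbach's alpha from scratch with numpy; the 5-item response matrix is fabricated purely for demonstration.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

items = np.array([  # rows = participants, columns = questionnaire items (fabricated)
    [3, 4, 3, 4, 3],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [4, 4, 4, 3, 4],
    [1, 2, 1, 2, 2],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # sample variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")     # >= .70 is a common rule of thumb
```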

Evaluating outcomes: moderation and mediation
Moderators specify for whom and/or under what conditions a treatment works (Kraemer & Wilson, 2002); they identify subpopulations with possibly different causal mechanisms and who may benefit most. Mediators identify possible mechanisms through which a treatment might achieve its effects: the so-called “why.” (A moderator-test sketch follows below.)
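
One common way to test a moderator is to fit a regression with a treatment × moderator interaction term. The sketch below simulates hypothetical data (all variable names and effect sizes are invented) and fits the model with statsmodels.

```python
# Moderator test sketch: a significant treatment:severity interaction would
# suggest baseline severity moderates the treatment effect. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # 0 = control, 1 = treatment
    "severity": rng.normal(0, 1, n),      # baseline severity (already centered)
})
# Simulate an outcome where the treatment helps more at higher severity
df["outcome"] = (0.4 * df["treatment"] + 0.2 * df["severity"]
                 + 0.5 * df["treatment"] * df["severity"]
                 + rng.normal(0, 1, n))

# 'treatment * severity' expands to both main effects plus the interaction
model = smf.ols("outcome ~ treatment * severity", data=df).fit()
print(model.summary().tables[1])  # look at the treatment:severity row
```

Mediation is usually tested separately (e.g., with product-of-coefficients or bootstrap approaches), since it asks about mechanism rather than subgroup differences.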

Question: What is the relevance of moderators and mediators for evaluating outcomes? As alluded to above, these methods can further identify who benefits from the intervention and why, and they offer a way to compensate for a long list of inclusion/exclusion criteria.

RECOMMENDATION #4: DECIDE HOW YOU WILL HANDLE MISSING DATA
How will you handle drop-outs? Should you include them or exclude them?

Question: Why do drop-outs pose a problem for an RCT?

DECIDE HOW YOU WILL HANDLE MISSING DATA
Intention to treat: include all randomized participants; impute data or use full-information methods. The right approach depends on the “pattern of missingness” (or lack thereof).
Complete case analysis: not the best method, but it can be unbiased if missingness is completely at random.
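
To contrast the two options, here is a hedged sketch on fabricated data. Mean imputation is used only for brevity; in practice, multiple imputation or full-information maximum likelihood is preferable because single imputation understates uncertainty.

```python
# Complete-case analysis vs. a simple intention-to-treat style analysis.
# All data are fabricated; 'outcome' is set to NaN for simulated drop-outs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 100
df = pd.DataFrame({"group": rng.integers(0, 2, n),      # randomized arm
                   "outcome": rng.normal(10, 2, n)})
df.loc[rng.random(n) < 0.20, "outcome"] = np.nan        # ~20% drop-out, at random

# Complete-case analysis: drop everyone with missing data
cc = df.dropna()
print("Complete-case n per group:", cc.groupby("group")["outcome"].count().tolist())

# ITT-style analysis: keep all randomized participants, impute within group.
# Within-group mean imputation preserves group means but shrinks variance,
# which is one reason multiple imputation is preferred in real analyses.
df["imputed"] = df.groupby("group")["outcome"].transform(lambda s: s.fillna(s.mean()))
print("ITT n per group:", df.groupby("group")["imputed"].count().tolist())
print(df.groupby("group")["imputed"].mean())
```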