Evidence Based Practice
RCS 6740 7/26/04
Evidence Based Practice: Definitions
“Evidence based practice is the integration of the best available external clinical evidence from systematic research with individual clinical experiences” (Singh & Oswald, 2004). It is the conscientious, explicit, and judicious use of current evidence in making decisions about client treatment and care.
Benefits of Evidence Based Practice
Enhances the effectiveness and the efficiency of diagnosis and treatment
Provides clinicians with a specific methodology for searching the research literature
Helps clinicians critically evaluate published and unpublished research
Facilitates patient-centered outcomes
Steps of Evidence Based Practice
1) Define the problem
2) Search the treatment literature for evidence about the problem and solutions to the problem
3) Critically evaluate the literature (evidence)
4) Choose and initiate treatment
5) Monitor patient outcomes
Potential Questions when Evaluating Randomized Controlled Trials
1. Were detailed descriptions provided of the study population?
2. Did the study include a large enough patient sample to achieve sufficient statistical power? (A brief sample-size sketch follows this list.)
3. Were the patients (subjects) randomly allocated to treatment and control groups?
4. Was the randomization successful, as shown by comparability of sociodemographic and other variables of the patients in each group?
5. If the two groups were not successfully randomized, were appropriate statistical tests (e.g., logistic regression analysis) used to adjust for confounding variables that may have affected treatment outcome(s)?
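As a rough illustration of question 2 (not part of the original slides), the sketch below estimates the per-group sample size needed to detect a medium-sized treatment effect in a two-arm trial. It assumes the Python statsmodels package is available, and the effect size, alpha, and power values are conventional illustrative choices rather than figures from the slides.

```python
# Rough sample-size estimate for a two-arm trial (see question 2 above).
# Assumes the statsmodels package; all numeric values are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # medium standardized difference (Cohen's d), assumed
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired probability of detecting the effect
)
print(f"Patients needed per group: {n_per_group:.0f}")  # roughly 64 per arm
```

A trial much smaller than an estimate of this kind is unlikely to have had sufficient statistical power, which is what question 2 asks the reader to check.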
Potential Questions when Evaluating Randomized Controlled Trials (Cont.)
6. Did the study use a single- or double-blind methodology?
7. Did the study use placebo controls?
8. Were the two treatment conditions stated in a manner that (a) clearly identified differences between the two, (b) would permit independent replication, and (c) would allow judgment of their generalizability to everyday practice?
Potential Questions when Evaluating Randomized Controlled Trials (Cont.)
9. Were the two arms of the treatment protocol (treatment vs. control) managed in exactly the same manner except for the interventions?
10. Were the outcomes measured broadly with appropriate, reliable, and valid instruments?
11. Were the outcomes clearly stated?
12. Were the primary and secondary end-points of the study clearly defined?
13. Were the side effects and potential harms of the treatment and control conditions appropriately measured and documented?
Potential Questions when Evaluating Randomized Controlled Trials (Cont.)
14. In addition to disorder- or condition-specific outcomes, were patient-specific outcomes, functional status, and quality of life issues measured?
15. Were the measurements free from bias and error?
16. Were the patients followed up after the termination of the study and, if so, for how long?
17. Was there documentation of the proportion of patients who dropped out of the study?
18. Was there documentation of the proportion of patients who were lost to follow-up?
Potential Questions when Evaluating Randomized Controlled Trials (Cont.)
19. Was there documentation of when and why the attrition occurred?
20. If the attrition was substantial, was there documentation of a comparison of baseline characteristics and risk factors of treatment non-responders or withdrawals with those who were treatment responders or completed the study?
21. Was there reporting of the results in terms of number-needed-to-treat (i.e., the reciprocal of the absolute risk reduction)? (A worked example follows this list.)
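The number-needed-to-treat in question 21 is defined on the slide as the reciprocal of the absolute risk reduction. The short worked example below (not from the original slides) uses hypothetical event rates to show the arithmetic.

```python
# Worked example: number needed to treat (NNT) as the reciprocal of the
# absolute risk reduction (ARR). The event rates below are hypothetical.
control_event_rate = 0.40    # e.g., 40% of control patients experience the event
treatment_event_rate = 0.25  # e.g., 25% of treated patients experience the event

arr = control_event_rate - treatment_event_rate  # ARR = 0.15
nnt = 1 / arr                                    # NNT = 1 / 0.15 ≈ 6.7

print(f"ARR = {arr:.2f}, NNT = {nnt:.1f}")
```

In practice the NNT is usually rounded up, so roughly 7 patients would need to be treated to prevent one additional adverse event under these assumed rates.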
Potential Questions when Evaluating Randomized Controlled Trials (Cont.)
22. Were multiple comparisons used, increasing the likelihood of a chance finding of a nominally statistically significant difference? (An illustration follows this list.)
23. Were correct statistical tests used to analyze the data?
24. Were the results interpreted appropriately?
25. Did the interpretation of the results go beyond the actual data?
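Question 22 turns on how quickly uncorrected multiple comparisons inflate the chance of a spurious finding. The illustration below (not part of the original slides) uses an assumed per-test alpha of 0.05 and 20 independent comparisons, and shows a simple Bonferroni adjustment.

```python
# Why uncorrected multiple comparisons inflate false-positive risk (question 22).
# The per-test alpha and number of comparisons are assumed, illustrative values.
alpha = 0.05    # per-comparison significance level
num_tests = 20  # number of independent outcome comparisons

# Probability of at least one false positive across all tests (family-wise error rate)
fwer = 1 - (1 - alpha) ** num_tests
print(f"Family-wise error rate with {num_tests} uncorrected tests: {fwer:.2f}")  # ~0.64

# A simple Bonferroni correction keeps the family-wise error rate near alpha
bonferroni_alpha = alpha / num_tests
print(f"Bonferroni-adjusted per-test threshold: {bonferroni_alpha:.4f}")  # 0.0025
```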
Potential Questions when Evaluating Other Types of Research
1. Is the treatment or technique available/affordable?
2. How large is the likely effect?
3. How uncertain are the study results?
4. What are the likely adverse effects of treatment? Are they reversible?
5. Are the patients included in the studies similar to the patient(s) I am dealing with? If not, are the differences great enough to render the evidence useless?
Potential Questions when Evaluating Other Types of Research Cont.
6. Was the study setting similar to my own setting?
7. Will my patient receive the same co-interventions that were used in the study? If not, will it matter?
8. How good was adherence (compliance) in the study? Is adherence likely to be similar in my own practice?
9. Are the outcomes examined in the studies important to me/my patients?
Potential Questions when Evaluating Other Types of Research Cont.
10. What are my patients’ preferences regarding the treatment? What are the likely harms and the likely benefits?
11. If I apply the evidence inappropriately to my patient, how harmful is it likely to be? Will it be too late to change my mind if I have inappropriately applied the evidence?
Questions and Comments?