
EBCP: Random vs Systematic Error




1 EBCP (Evidence-Based Clinical Practice)

2 Random vs Systematic error
Random error: errors in measurement that lead to measured values being inconsistent when repeated measures are taken, i.e. low precision.
Systematic errors: predictable errors that occur consistently, e.g. forgetting to zero a scale, i.e. low accuracy.

3 Bias: systematic error due to flawed methodology
Study process | Source of bias | Solution
Allocation of subjects to intervention and control groups | Selection bias: systematic differences in the comparison groups | Randomise
Implementation of study interventions | Performance bias: systematic differences in care provided apart from the intervention being studied, or differences in the placebo effect | Blind subjects
Follow-up of participants | Attrition bias: systematic differences in withdrawals from the trial | Intention-to-treat analysis
Evaluation of outcomes | Detection bias: systematic differences in outcome assessment | Double blind (blind outcome assessors)

4 Type 1 vs Type 2 error
Type 1 error: false positive, generally due to bias.
Type 2 error: false negative, due to insufficient statistical power (i.e. the confidence interval is too wide because the sample size is too small) or bias.

5 Confidence intervals
Clinical significance
Statistical significance

6 Causation
1. Exposure must precede outcome
2. Dose-dependent gradient
3. Dechallenge-rechallenge: take away the exposure and the outcome decreases/disappears, then reappears when the exposure is reintroduced
Also: is the association consistent with other studies, and does it make biological sense?

7 Measuring Outcomes
Relative risk (RR): the probability of an event in the active treatment group (Y) divided by the probability of an event in the control group (X): RR = Y/X. A relative risk of 1 is the null value (no difference).
Absolute risk reduction (ARR): the risk in the control group minus that in the intervention group: ARR = X - Y.
Relative risk reduction (RRR): RRR = 1 - RR.
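The slide's three formulas can be checked with a short calculation. This is an illustrative sketch: the function name and the example counts are mine, not from the slide.

```python
def risk_measures(events_control, n_control, events_treatment, n_treatment):
    """Return (RR, ARR, RRR) using the slide's definitions:
    X = risk in the control group, Y = risk in the treatment group,
    RR = Y/X, ARR = X - Y, RRR = 1 - RR."""
    x = events_control / n_control        # X: risk in control group
    y = events_treatment / n_treatment    # Y: risk in treatment group
    rr = y / x
    arr = x - y
    rrr = 1 - rr
    return rr, arr, rrr

# Example: 20/100 events in the control arm, 10/100 in the treatment arm
rr, arr, rrr = risk_measures(20, 100, 10, 100)
print(rr, arr, rrr)  # RR = 0.5, ARR = 0.1, RRR = 0.5
```

Note that RRR = 1 - RR and ARR = X - Y describe the same trial on relative and absolute scales; the absolute scale is what feeds the NNT later.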

8 Measuring Outcomes
Odds ratio (OR): used for case-control trials, where the risk of developing the disease has no meaning since subjects already have it or don't.
OR = odds of exposure in the cases / odds of exposure in the controls = (a/c) / (b/d), where a = exposed cases, c = unexposed cases, b = exposed controls, d = unexposed controls.
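The slide's OR = (a/c) / (b/d) reduces to the familiar cross-product ratio. A minimal sketch, assuming the standard 2x2 cell layout (a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls); the counts below are made up for illustration.

```python
def odds_ratio(a, b, c, d):
    """OR = (a/c) / (b/d) = (a*d) / (b*c), the cross-product ratio.
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls (assumed layout)."""
    return (a * d) / (b * c)

# Example: 40 exposed cases, 20 exposed controls,
#          60 unexposed cases, 80 unexposed controls
print(odds_ratio(40, 20, 60, 80))  # (40*80)/(20*60) = 2.66...
```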

9 Measuring Outcomes
Number needed to treat (NNT): the number of patients you need to treat to prevent one additional bad outcome; the reciprocal of the absolute risk reduction (NNT = 1/ARR).
Number needed to harm (NNH): the number of people who need to be subjected to the exposure for one additional person to develop a negative outcome (NNH = 1/ARR in a study measuring harm).
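The reciprocal relationship is one line of arithmetic. Sketch below; rounding up to a whole patient is a common convention I've added, not something stated on the slide.

```python
import math

def nnt(arr):
    """Number needed to treat: reciprocal of the absolute risk reduction,
    rounded up to a whole number of patients (convention, not from the slide)."""
    return math.ceil(1 / arr)

def nnh(arr_harm):
    """Number needed to harm: the same reciprocal applied to a harm outcome."""
    return math.ceil(1 / arr_harm)

print(nnt(0.1))   # ARR of 10 percentage points -> treat 10 patients
print(nnh(0.04))  # 4-point absolute risk increase -> expose 25 people
```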

10 Diagnostic Tests
A SeNsitive test with a Negative result helps rule OUT a diagnosis: SnOUT.
A SPecific test with a Positive result helps rule IN a diagnosis: SpIN.
Sensitivity: the probability of a positive test in those with the disease (true positive rate).
Specificity: the probability of a negative test in those without the disease (true negative rate).
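Both definitions come straight from a 2x2 test-vs-disease table. A hedged sketch; the counts (tp, fp, fn, tn) are illustrative, not from the slide.

```python
def sensitivity(tp, fn):
    """True positive rate: P(test positive | disease present)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: P(test negative | disease absent)."""
    return tn / (tn + fp)

# Example: 90 true positives, 10 false negatives -> sensitivity 0.9 (SnOUT)
print(sensitivity(90, 10))
# Example: 80 true negatives, 20 false positives -> specificity 0.8 (SpIN)
print(specificity(80, 20))
```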

11 Diagnostic Tests
Pre-test probability: the chance your patient has the diagnosis; essentially the prevalence in similar people presenting with the same symptoms.
Likelihood ratios (LR+/LR-): how much a positive or negative result modifies the probability of the disease.
- A ratio of 1 doesn't change the probability
- Ratios greater than 1 increase the probability
- Ratios less than 1 decrease the probability
LR+ = sensitivity / (1 - specificity), i.e. true positive rate / false positive rate
LR- = (1 - sensitivity) / specificity, i.e. false negative rate / true negative rate

12 Nomograms
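The (Fagan) nomogram on this slide is a graphical shortcut for combining a pre-test probability with a likelihood ratio. The arithmetic it replaces is sketched below; this is the standard probability-to-odds calculation, assumed rather than taken from the slide, and the example numbers are mine.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """What the nomogram does graphically:
    pre-test probability -> odds, multiply by LR, odds -> probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * likelihood_ratio         # apply the test result
    return post_odds / (1 + post_odds)              # odds -> probability

# Example: 20% pre-test probability, positive test with LR+ = 8
print(round(post_test_probability(0.20, 8), 2))  # 0.67
```

This also makes the slide-11 rules concrete: an LR of 1 leaves the probability unchanged, LRs above 1 raise it, LRs below 1 lower it.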

13 Prognostic Studies
Usually done via observational studies such as case-control or, more commonly, cohort studies. The cohort should all be at a similar point in the course of the disease. Results can be shown as an "x"-year survival rate or a survival curve.

14 Systematic Reviews
Sometimes the method of selecting articles for the systematic review is biased. If the selection process is unbiased, the funnel plot should look like a symmetrical inverted funnel.

15 Systematic Reviews Forest plots: Combine the results of the studies into one graph.

16 Systematic Reviews
Forest plots:
- Heterogeneity: assess whether any of the studies differ significantly from the others. If heterogeneity is too high, the results of the studies are too different to pool together in a meta-analysis.
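The slide names heterogeneity but no statistic; a common way to quantify it is Cochran's Q with the derived I² percentage (my addition, not from the slide). A sketch using fixed-effect inverse-variance weights, with illustrative effect estimates and variances.

```python
def i_squared(effects, variances):
    """I² = max(0, (Q - df) / Q) * 100, where Q is Cochran's
    heterogeneity statistic and df = number of studies - 1."""
    k = len(effects)
    weights = [1 / v for v in variances]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    if q == 0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q) * 100

# Identical study effects -> no heterogeneity at all
print(i_squared([0.5, 0.5, 0.5], [0.1, 0.2, 0.1]))  # 0.0
# Two precisely estimated but conflicting studies -> very high I²
print(i_squared([0.1, 0.9], [0.01, 0.01]))
```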

