Bias and errors in epidemiologic studies
Manish Chaudhary, BPH (IOM), MPH (BPKIHS)


Concept
Error: a false or mistaken result obtained in a study or experiment. No study can be made completely free of error, and the inferences drawn from it are never perfectly valid. The aim is to maximize fact and minimize error so that the findings represent the population to which they refer. Incorrect inferences can be controlled either in the design and implementation phases or during the analysis.

Types of error
– Random error
– Systematic error

Random error
Random error is chance error that makes observed values differ from the true value. It occurs through sampling variability or random fluctuation of the event of interest: the sample measurement diverges, due to chance alone, from the true population value. Random error makes estimates of association imprecise.

Random error
There are three major sources of random error:
– individual biological variation;
– sampling error;
– measurement error.
Random error can never be completely eliminated, since we can study only a sample of the population. Sampling error arises because a small sample may not be representative of the population on all relevant variables. The best way to reduce sampling error is to increase the size of the study, as the sketch below illustrates.
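A minimal sketch in Python (all values hypothetical) illustrates the sampling-error point: prevalence estimates from small samples scatter widely around the true population value, and the scatter shrinks as the sample size grows.

```python
import random

random.seed(42)

TRUE_PREVALENCE = 0.20        # hypothetical "true" prevalence in the population
POPULATION_SIZE = 100_000

# Hypothetical population: 1 = diseased, 0 = healthy.
population = [1] * int(TRUE_PREVALENCE * POPULATION_SIZE)
population += [0] * (POPULATION_SIZE - len(population))

def sample_prevalence(n):
    """Estimate prevalence from a simple random sample of size n."""
    return sum(random.sample(population, n)) / n

for n in (50, 500, 5000):
    estimates = [sample_prevalence(n) for _ in range(1000)]
    mean = sum(estimates) / len(estimates)
    sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"n={n:5d}  mean estimate={mean:.3f}  spread (SD)={sd:.3f}")

# The mean estimate stays near 0.20 (no systematic error), but the spread
# (random error) falls as the sample size increases.
```

The estimates remain centred on the true value; only their spread changes, which is why a larger sample reduces random error but, as the slides below note, does nothing about systematic error.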

Precision vs. Accuracy
[Figure: panels illustrating the four combinations: good precision with poor accuracy, poor precision with good accuracy, good precision with good accuracy, and poor precision with poor accuracy.]

Systematic error or bias (validity problem)
Systematic error, or bias, is any difference between the true value and the observed value that is due to causes other than random fluctuation and sampling variability. It arises from factors inherent in the study design, data collection, analysis or interpretation that make the results or conclusions depart from the truth. Increasing the sample size has no effect on systematic error. Bias is defined as any systematic error in an epidemiological study that results in an incorrect estimate of the association between exposure and risk of disease.
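To see why a bigger sample does not help, here is a hedged sketch in the same style (Python; the 8 mmHg instrument offset and the blood pressure figures are assumptions for illustration): a systematically miscalibrated instrument stays wrong however many subjects are measured.

```python
import random

random.seed(1)

TRUE_MEAN_SBP = 120.0     # hypothetical true mean systolic blood pressure (mmHg)
CUFF_OFFSET = 8.0         # hypothetical systematic error: the cuff reads 8 mmHg high

def mean_reading(n):
    """Mean of n readings = truth + individual variation + the systematic offset."""
    readings = [random.gauss(TRUE_MEAN_SBP, 10.0) + CUFF_OFFSET for _ in range(n)]
    return sum(readings) / n

for n in (30, 300, 30_000):
    print(f"n={n:6d}  mean reading={mean_reading(n):6.1f} mmHg  (true mean = {TRUE_MEAN_SBP})")

# The random scatter around the biased value shrinks as n grows, but every mean
# stays roughly 8 mmHg above the truth: enlarging the sample does not remove bias.
```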

If the effect is misrepresented, the study is biased; if there is no misrepresentation, the study is valid (unbiased).
Types of bias
1. Selection bias
2. Information bias
3. Confounding

Selection bias
Selection bias arises when the way subjects are selected into the study, or into the groups being compared, distorts the estimate of effect. It concerns the choice of the groups to be compared and the choice of the sampling frame, and it often occurs in case-control and cohort studies.

Types of Selection Bias
Berksonian bias: a spurious association between diseases, or between a characteristic and a disease, can arise because the probability of hospital admission differs for those with the disease, those without the disease, and those with the characteristic of interest (Berkson J. Limitations of the application of fourfold table analysis to hospital data. Biometrics 1946;2:47-53).

Types of Selection Bias (cont.)
Response bias: those who agree to be in a study may be different in some way from those who refuse to participate; volunteers may be different from those who are enlisted.

Types of Information Bias
Interviewer bias: an interviewer's knowledge may influence the structure of the questions and the manner of presentation, which may influence responses.
Recall bias: those with a particular outcome or exposure may remember events more clearly, or amplify their recollections.

Types of Information Bias (cont.)
Observer bias: observers may have preconceived expectations of what they should find in an examination.
Loss to follow-up: those who are lost to follow-up or who withdraw from the study may be different from those who are followed for the entire study.

Information Bias (cont.)
Hawthorne effect: first documented at the Hawthorne manufacturing plant; people act differently if they know they are being watched.
Surveillance bias: the group with the known exposure or outcome may be followed more closely, or for longer, than the comparison group.

Information Bias (cont.)
Misclassification bias: errors are made in classifying either disease or exposure status.

Types of Misclassification Bias
Differential misclassification: errors in measurement go one way only, affecting one group differently from the other.
Example (measurement bias): instrumentation may be inaccurate, such as using only one size of blood pressure cuff to take measurements on both adults and children.

Misclassification Bias (cont.)
True classification: OR = ad/bc = 2.0; RR = [a/(a+b)] / [c/(c+d)] = 1.3.
Differential misclassification, exposure overestimated for 10 cases: OR = 2.8; RR = 1.6 (estimates inflated).

Misclassification Bias (cont.)
True classification: OR = ad/bc = 2.0; RR = [a/(a+b)] / [c/(c+d)] = 1.3.
Differential misclassification, exposure underestimated for 10 cases: OR = 1.5; RR = 1.2 (estimates deflated).

Misclassification Bias (cont.)
True classification: OR = ad/bc = 2.0; RR = [a/(a+b)] / [c/(c+d)] = 1.3.
Differential misclassification, exposure underestimated for 10 controls: OR = 3.0; RR = 1.6 (estimates inflated).

Misclassification Bias (cont.)
True classification: OR = ad/bc = 2.0; RR = [a/(a+b)] / [c/(c+d)] = 1.3.
Differential misclassification, exposure overestimated for 10 controls: OR = 1.3; RR = 1.1 (estimates deflated).
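The arithmetic behind the four scenarios above can be reproduced with a short Python sketch. The cell counts below are hypothetical (chosen so that the "true" OR works out to 2.0); the sketch shows how recording 10 truly nonexposed cases as exposed, a differential error, inflates both OR and RR.

```python
def odds_ratio(a, b, c, d):
    """OR = ad / bc for a 2x2 table (rows Exposed/Nonexposed, columns Cases/Controls)."""
    return (a * d) / (b * c)

def risk_ratio(a, b, c, d):
    """RR = [a / (a + b)] / [c / (c + d)], as written in the slides above."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical cell counts: exposed cases, exposed controls, nonexposed cases, nonexposed controls.
a, b, c, d = 70, 30, 54, 46
print(f"True classification:     OR = {odds_ratio(a, b, c, d):.1f}, RR = {risk_ratio(a, b, c, d):.1f}")

# Differential misclassification: 10 truly nonexposed cases are recorded as exposed.
a_mis, c_mis = a + 10, c - 10
print(f"After misclassification: OR = {odds_ratio(a_mis, b, c_mis, d):.1f}, "
      f"RR = {risk_ratio(a_mis, b, c_mis, d):.1f}")

# Because the error affects cases only, the estimated association is inflated.
```

Shifting 10 controls instead, or shifting counts in the opposite direction, shows the same directions of distortion (inflation or deflation) as listed in the slides.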

Controls for Bias
Be purposeful in the study design to minimize the chance for bias.
– Example: use more than one control group.
Define, a priori, who is a case or what constitutes exposure, so that there is no overlap.
– Define categories within groups clearly (age groups, aggregates of person-years).
Set up strict guidelines for data collection.
– Train observers or interviewers to obtain the data in the same fashion.
– It is preferable to use more than one observer or interviewer, but not so many that they cannot be trained in an identical manner.

Controls for Bias (cont.)
Randomly allocate observers/interviewers to data collection assignments.
Institute a masking process if appropriate:
– Single-masked study: subjects are unaware of whether they are in the experimental or control group.
– Double-masked study: both the subject and the observer are unaware of the subject's group allocation.
– Triple-masked study: the subject, the observer and the data analyst are all unaware of the subject's group allocation.
Build in methods to minimize loss to follow-up.

Confounding and effect modification
Confounding refers to the effect of an extraneous variable that entirely or partially explains the apparent association between the study exposure and the disease. It is a distortion in the estimated measure of effect due to the mixing of the effect of the study factor with the effect of other risk factor(s). If the analysis ignores potential confounding factors, we may draw a misleading conclusion about the association between exposure and disease, as the sketch below shows.
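A numeric sketch of this "mixing of effects" (Python, with hypothetical counts): the crude 2x2 table suggests an exposure-disease association, yet within each stratum of the extraneous variable the odds ratio is 1.0, so the crude association is produced entirely by the confounder.

```python
def odds_ratio(a, b, c, d):
    """OR = ad / bc (rows Exposed/Nonexposed, columns Cases/Controls)."""
    return (a * d) / (b * c)

# Hypothetical counts per stratum: (exposed cases, exposed controls, nonexposed cases, nonexposed controls)
confounder_present = (90, 60, 30, 20)
confounder_absent = (10, 40, 30, 120)

# Crude table: pool the two strata, ignoring the confounder.
crude = tuple(x + y for x, y in zip(confounder_present, confounder_absent))

print(f"Crude OR (confounder ignored): {odds_ratio(*crude):.1f}")               # about 2.3
print(f"OR, confounder present:        {odds_ratio(*confounder_present):.1f}")  # 1.0
print(f"OR, confounder absent:         {odds_ratio(*confounder_absent):.1f}")   # 1.0

# In these hypothetical data the extraneous variable is more common among the
# exposed and is itself a risk factor for the disease, so pooling the strata
# mixes its effect with that of the exposure.
```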

Criteria for confounders
– It is a risk factor for the study disease (but is not a consequence of the disease).
– It is associated with the exposure under study (but is not a consequence of that exposure, i.e. it does not lie on the causal pathway).
– It is not of interest in the current study (i.e. it is an extraneous variable).
– In the absence of the exposure, it is independently able to cause the disease (outcome).

Control of confounding
Confounding can be controlled in the research design or during the data analysis phase. Three methods control confounding during the design phase of the study:
– randomization
– restriction
– matching

Error of measurement
1. Instruments: poor calibration or lack of sensitivity.
2. Observer variation (see the sketch below):
– Intra-observer variation: semi-skilled observers are often inconsistent in diagnosing the same specimen presented to them blindly on different occasions.
– Inter-observer variation: several observers do not always agree on the diagnosis of the same specimen.
3. Observer's lack of skill or experience in using the apparatus or in interpreting the diagnosis.
4. Patient's lack of cooperation.
5. Patients are not measured in the same manner, or under the same conditions or atmosphere.
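As a simple sketch of observer variation (Python; the diagnoses are hypothetical), percent agreement between two observers reading the same specimens can be computed directly; low agreement signals measurement error that has nothing to do with sampling.

```python
# Hypothetical diagnoses of the same 10 specimens by two observers (1 = disease present, 0 = absent).
observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
observer_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

agreements = sum(1 for x, y in zip(observer_a, observer_b) if x == y)
percent_agreement = 100 * agreements / len(observer_a)
print(f"Inter-observer agreement: {percent_agreement:.0f}%")  # 70% for these readings

# The same calculation applied to one observer's repeat readings of the same
# specimens measures intra-observer variation.
```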

Summary