Bias in Research. Prof. Dr. Maha Al-Nuaimi.


Bias in research
In general, errors in research are of two kinds: random errors (chance) and systematic errors (bias).
Consequences: bias can produce distorted results and wrong conclusions, and can lead to unnecessary costs, inappropriate clinical practice and patient harm. Minimizing it is therefore the responsibility of everyone involved in research and scientific publishing.
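
To make the distinction concrete, here is a minimal Python sketch (with invented numbers, not from the slides) showing that a larger sample shrinks random error but leaves a systematic error untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 120.0                      # e.g. true mean systolic BP in mmHg (invented)
systematic_error = 5.0                 # a miscalibrated device adds +5 mmHg to every reading

for n in (10, 100, 10_000):
    sample = rng.normal(true_mean, 15.0, n)     # random (chance) variation only
    biased = sample + systematic_error          # same data plus a systematic error
    print(f"n={n:>6}  unbiased mean={sample.mean():6.1f}  biased mean={biased.mean():6.1f}")

# The unbiased mean converges to 120 as n grows; the biased mean converges to 125.
# A bigger sample reduces random error but never removes bias.
```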

Definition of bias
Bias is any trend or deviation from the truth, at any step of the research process, that can lead to false conclusions. It can arise intentionally or unintentionally, and it is immoral and unethical to conduct deliberately biased research. Every scientist should therefore be aware of all potential sources of bias and take every possible action to reduce or minimize the deviation from the truth. There are many sources of bias.

Selection bias / sampling bias
Selection bias is the systematic sampling error that arises when a sample is selected, especially by a non-probability method, so that it is not representative of the population. Sampling is a crucial step in every study: the sample must represent the population to which the conclusions are to be applied. If it does not, the conclusions will not be generalizable, i.e. the study will lack external validity. A non-representative sample is what we call selection bias.

Selection bias / sampling bias: examples
Interviewers may tend to select the respondents who are most accessible and agreeable. Such samples often consist of friends and associates who only partly share the characteristics of the desired population. Another example is a sample that turns out to be 70% female: it is not representative of the general adult population and will distort any sex-dependent measurement.
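
A minimal sketch of the 70%-female example, assuming invented haemoglobin values that differ by sex, to show that the skewed sample stays off-target no matter how large it is:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_haemoglobin(n_female, n_male):
    """Mean Hb (g/dL) of a sample with the given sex composition (invented values)."""
    females = rng.normal(13.5, 1.0, n_female)
    males   = rng.normal(15.0, 1.0, n_male)
    return np.concatenate([females, males]).mean()

print("representative sample (50% female):", round(mean_haemoglobin(500, 500), 2))
print("skewed sample (70% female):        ", round(mean_haemoglobin(700, 300), 2))
# The skewed sample under-estimates the adult mean; collecting more subjects
# with the same 70/30 split does not fix it.
```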

Hospital-based studies: admission bias
The population studied does not reflect the general population: during patient recruitment, some patients are more or less likely than others to enter the study, so some groups are under-represented and others over-represented. Admission bias is addressed by random sampling, homogeneity of the population being studied, and an adequate sample size.
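
As a hedged illustration of admission bias (sometimes called Berkson's bias), the sketch below uses invented probabilities in which exposure and disease are unrelated in the population, yet appear associated among hospitalized patients because both raise the chance of admission:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
exposed  = rng.random(N) < 0.30
diseased = rng.random(N) < 0.10                       # independent of exposure
p_admit  = 0.05 + 0.20 * exposed + 0.40 * diseased    # both raise admission probability
admitted = rng.random(N) < p_admit

def odds_ratio(e, d):
    a, b = np.sum(e & d), np.sum(e & ~d)
    c, dd = np.sum(~e & d), np.sum(~e & ~d)
    return (a * dd) / (b * c)

print("OR, whole population:      ", round(odds_ratio(exposed, diseased), 2))  # ~1.0, no association
print("OR, admitted patients only:", round(odds_ratio(exposed[admitted], diseased[admitted]), 2))
# The hospital-based OR is far from 1.0: a spurious association created purely by selection.
```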

Volunteer bias
Volunteer bias is a kind of sampling bias. For example, if the aim of a study is to assess the average hsCRP (high-sensitivity C-reactive protein) concentration in the healthy population, the proper approach would be to recruit healthy individuals from the general population during their regular annual health check-up. If the study instead recruits only volunteer blood donors, the sample is biased: blood donors are usually people who feel healthy and are not suffering from any condition or illness. By recruiting only healthy volunteers we may conclude that a disease (or a marker level) is much less frequent than it really is.
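
A minimal sketch of volunteer bias, with an invented 15% true prevalence and invented volunteering rates, showing how recruiting only people who feel healthy understates the prevalence:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
diseased = rng.random(N) < 0.15                   # true prevalence 15% (invented)
# People who feel unwell rarely volunteer as blood donors (invented rates).
p_volunteer = np.where(diseased, 0.02, 0.20)
volunteer = rng.random(N) < p_volunteer

print("true prevalence:            ", round(diseased.mean(), 3))
print("prevalence among volunteers:", round(diseased[volunteer].mean(), 3))
```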

Misclassification bias
Misclassification bias is a kind of sampling bias. It occurs when the disease of interest is poorly defined or not easily detectable, or when there is no gold standard for diagnosis, so that some subjects are falsely classified as cases or controls. Example: early detection of prostate cancer in asymptomatic men. In the absence of a reliable test for early prostate cancer, some early cancers may be misclassified as disease-free, causing under- or over-estimation of the accuracy of a new marker.
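
A rough sketch, with assumed (invented) sensitivity and specificity for the reference classification, of how misclassification distorts the apparent prevalence that any new marker would then be judged against:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 50_000
truly_diseased = rng.random(N) < 0.10              # true prevalence 10% (invented)
sens, spec = 0.70, 0.98                            # imperfect reference classification (invented)
labelled_case = np.where(truly_diseased,
                         rng.random(N) < sens,           # 30% of true cases missed
                         rng.random(N) < (1.0 - spec))   # 2% falsely labelled as cases

print("true prevalence:    ", round(truly_diseased.mean(), 3))
print("apparent prevalence:", round(labelled_case.mean(), 3))
# A new marker evaluated against these labels inherits the misclassification,
# so its measured accuracy is under- or over-estimated.
```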

Subjectivity (measurement) bias: interventional studies
Measurement error is generated by the measurement process itself and represents the difference between the information obtained and the information the researcher wanted. Its sources include the instruments, individual variation, and the observers.
Rx: masking of the drug; blinding (single, double or triple).
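
As a simple illustration of why blinding matters, the sketch below assumes an invented scenario in which the drug has no effect but an unblinded observer scores the treated group a few points more favourably:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
control = rng.normal(50.0, 10.0, n)          # symptom score, control group (invented scale)
treated = rng.normal(50.0, 10.0, n)          # the drug truly has no effect
observer_optimism = 3.0                      # unblinded rater nudges treated scores upward
treated_unblinded = treated + observer_optimism

print("difference with blinded scoring:  ", round(treated.mean() - control.mean(), 2))            # ~0
print("difference with unblinded scoring:", round(treated_unblinded.mean() - control.mean(), 2))  # ~3
```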

Observation (information) bias: usually in cross-sectional studies
Types: recall (memory) bias, interviewer bias.
Rx: use documents or reports, link the exposure to memorable events, and train the interviewers.
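
A minimal sketch of recall bias, with invented recall probabilities: exposure is equally common in the two compared groups, but one group remembers and reports it more completely, creating a spurious association:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5_000
exposed_cases    = rng.random(n) < 0.30       # true exposure, 30% in both groups
exposed_controls = rng.random(n) < 0.30
reported_cases    = exposed_cases    & (rng.random(n) < 0.95)  # cases recall 95% of exposures
reported_controls = exposed_controls & (rng.random(n) < 0.70)  # controls recall only 70%

def odds(p):
    return p / (1 - p)

print("true odds ratio:    ", round(odds(exposed_cases.mean()) / odds(exposed_controls.mean()), 2))   # ~1.0
print("reported odds ratio:", round(odds(reported_cases.mean()) / odds(reported_controls.mean()), 2)) # inflated
```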

Attrition bias: usually in prospective studies
Sources: non-eligibility; non-response (from the beginning, for any reason); non-compliance; later drop-out (death, leaving, loss of interest, ...).
It is addressed by a pilot project and by education and motivation of the participants.
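
A small sketch of attrition bias under an assumed (invented) dropout pattern in which participants with worse outcomes are more likely to drop out, so the completers look healthier than the full cohort:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
outcome = rng.normal(60.0, 15.0, n)              # e.g. a quality-of-life score (invented)
p_dropout = np.where(outcome < 55, 0.6, 0.2)     # sicker participants drop out more often
completer = rng.random(n) > p_dropout

print("mean outcome, full cohort:", round(outcome.mean(), 1))
print("mean outcome, completers: ", round(outcome[completer].mean(), 1))
# Analysing completers only makes the cohort look healthier than it really is.
```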

Non-response bias (a kind of selection bias) arises when the obtained sample differs from the originally selected sample. Survivor bias (in follow-up studies) arises when subjects who died before the study end point are missing from the analysis.

Bias in data analysis
Researchers may start “torturing the data” by dividing it into various subgroups until an association becomes statistically significant. If this sub-classification of the study population was not part of the original research hypothesis, it is data manipulation, and it often yields meaningless conclusions such as: lactate concentration was positively associated with albumin concentration in a subgroup of male patients with a body mass index in the lowest quartile and a total leukocyte count below 4.00 × 10⁹/L. Besides being biased, unethical, invalid and illogical, such conclusions are also useless, since they cannot be generalized to the entire population.
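
A minimal sketch of such data torturing: the two variables below are generated independently, yet testing enough arbitrary post-hoc subgroups often turns up a "significant" p-value purely by chance (with 20 subgroups at alpha = 0.05, the chance of at least one false positive is about 1 - 0.95^20 ≈ 64%). Values and subgroup labels are invented.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n = 400
lactate = rng.normal(1.5, 0.5, n)
albumin = rng.normal(40.0, 5.0, n)               # generated independently of lactate
subgroup = rng.integers(0, 20, n)                # 20 arbitrary post-hoc subgroups

p_values = [pearsonr(lactate[subgroup == g], albumin[subgroup == g])[1] for g in range(20)]
print("smallest subgroup p-value:", round(min(p_values), 4))
print("subgroups with p < 0.05:  ", sum(p < 0.05 for p in p_values))
# There is no real association anywhere; any "significant" subgroup is a false positive.
```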

Bias in data interpretation
When interpreting results we need to make sure that the proper statistical tests were used and that a relationship is interpreted as real only if it is statistically significant.
Avoid discussing observed differences and associations as if they were real when they are not statistically significant.
Avoid over-generalizing the study conclusions to the entire population when the study was confined to a subset of it.
Be aware of type I errors (the expected effect is found significant when in fact there is none) and type II errors (the expected effect is not found significant when it is actually present).
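
These two error types can be made concrete with a small simulation sketch, here using a two-sample t-test with invented effect size, spread and sample size:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(9)

def rejection_rate(true_diff, n=30, trials=2000, alpha=0.05):
    """Fraction of simulated studies in which the null hypothesis is rejected."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)            # group A
        b = rng.normal(true_diff, 1.0, n)      # group B, shifted by true_diff SDs
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

power = rejection_rate(true_diff=0.5)          # a real effect of 0.5 SD exists
print("type I error rate (true_diff = 0):", rejection_rate(true_diff=0.0))  # ~0.05
print("power (true_diff = 0.5, n = 30):  ", power)                          # roughly 0.5
print("type II error rate = 1 - power:   ", round(1 - power, 2))
```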

Publication bias
Unfortunately, scientific journals are much more likely to accept a study reporting positive findings than one reporting negative findings, even though negative results help other scientists avoid wasting time and money re-running the same experiments. Journal editors bear the greatest responsibility for this phenomenon. Ideally, a scientifically well-designed study should be published regardless of the nature of its findings. Several journals devoted to negative results have already been launched, such as the Journal of Pharmaceutical Negative Results, the Journal of Negative Results in Biomedicine and the Journal of Interesting Negative Results.
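
A minimal sketch of publication bias, assuming (with invented numbers) that many small studies of the same modest true effect are run but only the statistically significant ones are published; the published literature then overstates the effect:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(10)
true_effect, n = 0.2, 25                      # a modest real effect, small studies (invented)
all_estimates, published = [], []

for _ in range(500):                          # 500 independent small studies
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    if ttest_ind(control, treated).pvalue < 0.05:   # only "significant" studies get published
        published.append(estimate)

print("true effect:                  ", true_effect)
print("mean estimate, all studies:   ", round(float(np.mean(all_estimates)), 2))
print("mean estimate, published only:", round(float(np.mean(published)), 2),
      f"({len(published)}/500 studies published)")
```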