Research Skills Workshop: Designing a Project

Presentation transcript:

Research Skills Workshop Designing a Project Qualitative: research on unique events; action research. Quantitative: effects; quantitative designs (descriptive, experimental); compliance, bias, placebos; sample size for adequate precision; validity and reliability; females and/or males; individual responses; mechanisms. Will G Hopkins Physiology and Physical Education University of Otago, Dunedin, NZ

General Research is the name for the process of finding out about things, especially the relationships between things. There are two approaches: qualitative and quantitative. Use qualitative research to answer "What's happened here?" Use quantitative research to answer "What's happening generally?"

Qualitative Research In its purest form, qualitative research is a detailed examination of a single instance or case of something, usually through intensive interviewing or analysis of other evidence. The quest for truth is similar to that in a court case. Qualitative paradigms that emphasize subjectivity may be closer to art than science. Use it to investigate unique events and to explore potentially important factors for a quantitative study. The open-ended nature of data gathering allows for more flexibility and serendipity in identifying factors and practical strategies than a more structured quantitative approach. Formal procedures (triangulation, member checking, peer debriefing, auditing…) reduce subjectivity.

Action research (a qualitative intervention) can establish causality more convincingly than a purely descriptive qualitative study. You can use qualitative techniques to study more than one case, but any attempt to generalize from this sample to a population is really quantitative research. Samples are usually too small, because few subjects are available, the data gathering is too time consuming, or researchers don't want to be quantitative. Any study with a small sample can establish only the presence or absence of strong associations in a population. Moderate, weak, or trivial associations remain unclear, because of the uncertainty in the magnitude of these associations. Intensive interviewing is equivalent to assaying many variables, so there is a high risk of finding a spurious association.

Quantitative Research In the simplest quantitative research, the aim is to determine the prevalence, incidence, mean value or other descriptive statistics of something in a population through study of an appropriate sample. More often the aim is to determine an effect: the relationship between the thing of interest (a dependent or outcome variable, such as performance) and other things (predictor variables, such as training, sex, diet). The relationship is expressed as an outcome statistic, such as a difference or change in the mean value, a correlation coefficient, or a relative frequency or risk. You get an estimate of the magnitude of the statistic.
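The three kinds of outcome statistic mentioned above can be sketched with a few lines of Python; all the data below are invented purely for illustration:

```python
import math
import statistics

# Hypothetical data: performance scores for trained vs untrained subjects.
trained = [52.1, 49.8, 53.0, 51.2]
untrained = [47.5, 48.2, 46.9, 47.8]
mean_diff = statistics.mean(trained) - statistics.mean(untrained)

# Pearson correlation between a predictor (training hours) and performance.
def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

hours = [2, 4, 6, 8]
performance = [47, 49, 52, 53]
r = pearson_r(hours, performance)

# Relative risk: proportion with the outcome in each of two groups.
exposed_cases, exposed_n = 30, 100
control_cases, control_n = 10, 100
relative_risk = (exposed_cases / exposed_n) / (control_cases / control_n)
```

Each of `mean_diff`, `r` and `relative_risk` is a point estimate; the precision of the estimate is the subject of the sample-size discussion below.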

Quantitative research designs are either descriptive (subjects usually measured once) or experimental (subjects measured before and after a treatment or intervention). A descriptive study establishes only associations between variables. An experiment establishes causality, or lack of it. Descriptive studies, worst to best:
Case, e.g. a gold medallist.
Case series, e.g. 20 gold medallists.
Cross-sectional (correlational), e.g. a sample of 1000 athletes.
Case-control (retrospective), e.g. 200 Olympians and 800 non-Olympians.
Cohort (prospective or longitudinal), e.g. measure characteristics of 1000 athletes, then determine the incidence of Olympic medals after 10 years.

Experimental studies, worst to best:
No control group (time series), e.g. measure performance before and after a training intervention.
Crossover, e.g. give 5 athletes a drug and another 5 a placebo, measure performance; wait a while to wash out the treatments, then cross over the treatments and measure again.
Controlled trial, e.g. measure performance of 20 athletes before and after a drug and another 20 before and after a placebo (needs 4x as many subjects as a crossover).
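The 4x figure follows from the standard errors of the two designs. A schematic sketch, assuming an arbitrary typical error of 1.0 and equal noise in every measurement:

```python
import math

def crossover_se(typical_error, n):
    # Each subject receives both treatments, so the effect is the mean
    # within-subject difference of two noisy measurements.
    return typical_error * math.sqrt(2) / math.sqrt(n)

def controlled_trial_se(typical_error, n_per_group):
    # The effect is the difference between two independent mean changes,
    # each change itself being the difference of two noisy measurements.
    return 2 * typical_error / math.sqrt(n_per_group)

e = 1.0
se_cross = crossover_se(e, 10)          # 10 subjects in total
se_trial = controlled_trial_se(e, 20)   # 40 subjects in total (20 per group)
# The two standard errors are equal: the trial needs 4x the subjects
# for the same precision.
```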

The estimate of the relationship is less likely to be biased (not the same as in the population) if you have a high participation rate in a sample selected randomly from the population. In experiments, bias is also less likely if: subjects are randomly assigned to treatments; assignment is balanced in respect of any characteristics that might affect the outcome; subjects and researchers are blind to the identity of the active and control (placebo) treatments. Sample size is determined by acceptable precision of the estimate, expressed as the likely limits of the true value (95% confidence limits or interval). To halve the likely range you need 4x as many subjects.
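The 1/√n behaviour behind that rule can be sketched using the normal approximation for the confidence limits of a mean:

```python
import math

def ci_halfwidth(sd, n, z=1.96):
    # Approximate 95% confidence limits for a sample mean: +/- z * SD / sqrt(n)
    return z * sd / math.sqrt(n)

w25 = ci_halfwidth(sd=10.0, n=25)    # halfwidth with 25 subjects
w100 = ci_halfwidth(sd=10.0, n=100)  # 4x as many subjects
# w25 / w100 == 2: quadrupling the sample halves the likely range.
```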

Small sample sizes give acceptable accuracy for strong relationships. Therefore you can study batches of subjects until you get acceptable accuracy: sample size "on the fly". For an accurate estimate of a weak or trivial relationship between variables, a descriptive study usually needs a sample of hundreds or even thousands of subjects; a controlled trial usually needs scores of subjects; a crossover usually needs tens of subjects. In a descriptive study, the less valid the variable, the bigger the sample size. Validity (how well the observed value represents the true value) is often hard to determine; retest reliability (reproducibility of the observed value) sets an upper bound on validity and is worth determining in a pilot study. The lower validity of psychometric variables means ~4x as many subjects for adequate precision.
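One common way to estimate the typical error from a test-retest pilot is the SD of the difference scores divided by √2; a sketch with invented data:

```python
import math
import statistics

def typical_error(trial1, trial2):
    # SD of the test-retest difference scores, divided by sqrt(2),
    # because a difference of two trials contains the noise of both.
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return statistics.stdev(diffs) / math.sqrt(2)

# Hypothetical pilot data: the same 5 subjects measured twice.
trial1 = [50.0, 48.2, 51.5, 47.9, 49.3]
trial2 = [50.6, 47.8, 52.1, 48.5, 49.0]
e = typical_error(trial1, trial2)  # roughly 0.37, in the units of the test
```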

In an experiment, the less noisy (more reliable) the variable, the smaller the sample. Noise is represented by the typical (standard) error of measurement between trials. A crossover with ~8 subjects gives an outcome with a likely range of ± the typical error; a controlled trial needs ~32 subjects for the same precision. If you have the time and resources, measure the typical error of the dependent variable(s) in a pilot study with the same time frame as the main study. Subjects differing in sex, age or other characteristics may differ in the relationship you are investigating, so include these variables as covariates in the analysis (multiple linear regression, analysis of covariance…). In an experiment, subject characteristics or differences in training or other behavior before the treatment may explain individual responses to the treatment.
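A minimal hand-rolled sketch of covariate adjustment in the spirit of analysis of covariance, with invented data (a real analysis would use a statistics package):

```python
import statistics

def slope(x, y):
    # Least-squares slope of y on x.
    mx, my = statistics.mean(x), statistics.mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical data: change in performance, with age as a covariate.
age_t = [20, 25, 30, 35]; change_t = [3.0, 2.5, 2.0, 1.5]   # treatment group
age_c = [20, 25, 30, 35]; change_c = [1.0, 0.5, 0.0, -0.5]  # control group

# Pool the within-group slopes of change on age.
b = (slope(age_t, change_t) + slope(age_c, change_c)) / 2

# Adjust each group's mean change to the grand mean age.
grand_mean_age = statistics.mean(age_t + age_c)
adj_t = statistics.mean(c - b * (a - grand_mean_age)
                        for a, c in zip(age_t, change_t))
adj_c = statistics.mean(c - b * (a - grand_mean_age)
                        for a, c in zip(age_c, change_c))
effect = adj_t - adj_c  # age-adjusted treatment effect
```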

But you need bigger samples. For example, to estimate the difference in the effect between the sexes you will need 4x as many subjects (e.g., 40 females and 40 males instead of 20 males). So, if sample size is a problem, opt for a uniform sample of subjects (e.g., young females). In an experiment, try to measure variables that might explain the mechanism of the treatment. A putative mechanism variable has to at least partly track changes in the dependent variable, but tracking alone isn't sufficient to make it a mechanism variable. Including such variables will increase your chances of publishing in a high-impact journal. In an unblinded experiment, such variables can help eliminate the possibility of a placebo effect. Important for PhD projects.
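The 4x figure for a sex comparison also follows from the standard errors. A schematic sketch, assuming an arbitrary between-trial SD of 1.0:

```python
import math

def se_effect(sd, n):
    # Schematic SE of a mean change score in a group of n subjects.
    return sd * math.sqrt(2) / math.sqrt(n)

sd = 1.0
se_single = se_effect(sd, 20)   # effect estimated in 20 males

se_f = se_effect(sd, 40)        # effect in 40 females
se_m = se_effect(sd, 40)        # effect in 40 males
# SE of the difference between the two sex-specific effects:
se_interaction = math.sqrt(se_f ** 2 + se_m ** 2)
# Same precision as the single effect, but with 80 subjects instead of 20.
```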