Experimental Design: Research vs Experiment

Presentation transcript:

Experimental Design: Research vs Experiment

Research
- A careful search; an effort to obtain new knowledge in order to answer a question or to solve a problem.
- A protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions).
RESEARCH DESIGN: a plan for research.

Research Designs (ordered by increasing constraint level)
- Naturalistic observation
- Case study
- Correlational
- Differential
- Experimental

Research design
Observational: a design in which the levels of all the explanatory variables are determined as part of the observational process.
Experimental:
- A study in which the investigator selects the levels of at least one factor.
- An investigation in which the investigator applies some treatments to experimental units and then observes the effect of the treatments on the experimental units by measuring one or more response variables.
- An inquiry in which an investigator chooses the levels (values) of input or independent variables and observes the values of the output or dependent variable(s).

Strengths of observation
- Can be used to generate hypotheses.
- Can be used to negate a proposition.
- Can be used to identify contingent relationships.

Limitations of observation
- Cannot be used to test hypotheses.
- Poor representativeness.
- Poor replicability.
- Observer bias.

Strengths of experimental studies
- Causation can be determined (if properly designed).
- The researcher has considerable control over the variables of interest.
- Can be designed to evaluate multiple independent variables.

Limitations of experimental studies
- Not ethical in many situations.
- Often more difficult and costly.

Design of Experiments
1. Define the objectives of the experiment and the population of interest.
2. Identify all sources of variation.
3. Choose an experimental design and specify the experimental procedure.

Defining the Objectives
- What questions do you hope to answer as a result of your experiment?
- To what population do these answers apply?

Defining the Objectives

Identifying Sources of Variation (diagram: input variables → output variable)

Choosing an Experimental Design (figure: which experimental design?)

Experimental Design
- A plan and a structure to test hypotheses in which the analyst controls or manipulates one or more variables.
- A protocol for measuring the values of a set of variables.
- It contains independent and dependent variables.

What is a statistical experimental design?
- What is the output variable?
- Which (input) factors should we study?
- What are the levels of these factors?
- What combinations of these levels should be studied?
- How should we assign the studied combinations to experimental units?
Determine the levels of the independent variables (factors) and the number of experimental units at each combination of these levels according to the experimental goal.

The Six Steps of Experimental Design
1. Plan the experiment.
2. Design the experiment.
3. Perform the experiment.
4. Analyze the data from the experiment.
5. Confirm the results of the experiment.
6. Evaluate the conclusions of the experiment.

Plan the Experiment
- Identify the dependent or output variable(s).
- Translate output variables into measurable quantities.
- Determine the factors (input or independent variables) that potentially affect the output variables to be studied.
- Identify potential combined actions (interactions) between factors.

Syllabus (content – week)
- Terminology and basic concepts – week 2
- t-test, ANOVA, and CRD – week 3
- RCBD and Latin square – week 4
- Mean comparison – week 7
- Midterm
- Mean comparison – week 10
- Factorial experiment
- Special topic in factorial experiments – week 13

Grading system
Grade: 0 – 100
- A: > 80
- B – D: 45 – 80 (normal distribution)
- E: < 45
Grade composition: Assignment 30, Mid-term 30, Final Exam 40 (theory components, 67 of the overall grade); Practical Work 33.

Terminology
Variable: a characteristic that varies (e.g., weight, body temperature, bill length, etc.).
Treatment / input / independent variable:
- A condition or set of conditions applied to experimental units in an experiment.
- The variable that the experimenter either controls or modifies.
- What you manipulate.
- May involve a single factor or ≥ 2 factors.

Terminology
Factors: another name for the independent variables of an experimental design.
- An explanatory variable whose effect on the response is a primary objective of the study.
- A variable upon which the experimenter believes that one or more response variables may depend, and which the experimenter can control.
- An explanatory variable that can take any one of two or more values.
The design of the experiment will largely consist of a policy for determining how to set the factors in each experimental trial.

Terminology
Levels or classifications:
- The subcategories of the independent variable used in the experimental design.
- The different values of a factor.
Dependent / response / output variable:
- The response to the different levels of the independent variables.
- A characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of treatments on experimental units.

Terminology
Treatment factor: a factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor.
Treatment design: the collection of treatments used in an experiment.
Full factorial treatment design: a treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors.

Terminology
Experimental unit:
1. The unit of the study material to which a treatment is applied.
2. The smallest unit of the study material sharing a common treatment.
3. The physical entity to which a treatment is randomly assigned and independently applied.
Observational unit (sampling unit):
- The smallest unit of the study material for which responses are measured.
- The unit on which a response variable is measured.
There is often a one-to-one correspondence between experimental units and observational units, but that is not always true.

Populations and Samples
Population: the entire collection of values for the variable being considered.
Sample: a subset of the population. Statistically, it is important for the sample to be a random sample.

Parameters vs. Statistics
Parameter: a measure that characterizes a population.
Statistic: an estimate of a population parameter, based on a sample.
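As a concrete illustration of these two slides (not part of the original deck), here is a minimal Python sketch with an invented population of mouse weights: a simple random sample is drawn, and the sample mean (a statistic) estimates the population mean (a parameter).

```python
import random

# Hypothetical population: body weights (g) of 1,000 mice.
random.seed(1)
population = [random.gauss(25.0, 3.0) for _ in range(1000)]

# A simple random sample of 12 units from that population.
sample = random.sample(population, k=12)

# The sample mean (a statistic) estimates the population mean (a parameter).
sample_mean = sum(sample) / len(sample)
population_mean = sum(population) / len(population)
print(f"sample mean = {sample_mean:.2f}, population mean = {population_mean:.2f}")
```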

Basic principles
1. Formulate question/goal in advance
2. Comparison/control
3. Replication
4. Randomization
5. Stratification (blocking)
6. Factorial experiment

Comparison/control
Good experiments are comparative.
- Compare BP in mice fed salt water to BP in mice fed plain water.
- Compare BP in strain A mice fed salt water to BP in strain B mice fed salt water.
Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).

Replication Applying a treatment independently to two or more experimental units.

Why replicate?
- Reduce the effect of uncontrolled variation (i.e., increase precision).
- Quantify uncertainty.
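To make "quantify uncertainty" concrete, here is a minimal Python sketch with invented blood-pressure replicates: with two or more replicates per treatment we can estimate both the spread among replicates and the standard error of the treatment mean.

```python
import statistics

# Hypothetical replicate BP measurements (mmHg) from mice fed salt water.
replicates = [118.0, 124.5, 121.2, 127.8, 119.6, 123.1]

mean_bp = statistics.mean(replicates)
sd_bp = statistics.stdev(replicates)        # spread among replicates
se_bp = sd_bp / len(replicates) ** 0.5      # uncertainty of the treatment mean

print(f"mean = {mean_bp:.1f}, SD = {sd_bp:.1f}, SE = {se_bp:.1f}")
# With a single mouse per treatment there would be no way to estimate SD or SE.
```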

Randomization
Random assignment of treatments to experimental units.
Experimental subjects ("units") should be assigned to treatment groups at random. "At random" does not mean haphazardly. One needs to explicitly randomize using
- a computer, or
- coins, dice or cards.
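A minimal Python sketch of explicit (computer-based) randomization; the unit labels, treatments, and group sizes are assumptions made for the example.

```python
import random

units = [f"mouse_{i:02d}" for i in range(1, 13)]   # 12 hypothetical experimental units
treatments = ["salt water", "plain water"]

random.seed(20240501)          # record the seed so the allocation is reproducible
random.shuffle(units)

# Equal allocation: first half to one treatment, second half to the other.
half = len(units) // 2
assignment = {unit: treatments[0] for unit in units[:half]}
assignment.update({unit: treatments[1] for unit in units[half:]})

for unit in sorted(assignment):
    print(unit, "->", assignment[unit])
```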

Why randomize?
- Avoid bias. For example: the first six mice you grab may have intrinsically higher BP.
- Control the role of chance. Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.

Stratification (Blocking)
Grouping similar experimental units together and assigning different treatments within such groups of experimental units.
If you anticipate a difference between morning and afternoon measurements:
- Ensure that within each period, there are equal numbers of subjects in each treatment group.
- Take account of the difference between periods in your analysis.
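Continuing the same hypothetical example, a short sketch of blocked randomization: treatments are randomized separately within a morning block and an afternoon block, so each block ends up with equal numbers in each treatment group.

```python
import random

random.seed(7)
blocks = {
    "morning":   [f"mouse_{i:02d}" for i in range(1, 7)],
    "afternoon": [f"mouse_{i:02d}" for i in range(7, 13)],
}
treatments = ["salt water", "plain water"]

assignment = {}
for block, units in blocks.items():
    random.shuffle(units)                      # randomize within each block
    half = len(units) // 2
    for unit in units[:half]:
        assignment[unit] = (block, treatments[0])
    for unit in units[half:]:
        assignment[unit] = (block, treatments[1])

for unit, (block, trt) in sorted(assignment.items()):
    print(f"{unit}: {block:9s} {trt}")
```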

Cage positions: completely randomized design (figure).

Cage positions: randomized block design (figure).

Randomization and stratification
- If you can (and want to), fix a variable: e.g., use only 8-week-old male mice from a single strain.
- If you don't fix a variable, stratify it: e.g., use both 8-week and 12-week-old male mice, and stratify with respect to age.
- If you can neither fix nor stratify a variable, randomize it.

Types of Experimental Designs
Simple designs: vary one factor at a time.
- Not statistically efficient.
- Wrong conclusions if the factors interact.
- Not recommended.

Types of Experimental Designs
Factorial experiments:
1. Full factorial design: all combinations.
- Can find the effect of all factors.
- Too much time and money.
- May try a 2^k design first.

Types of Experimental Designs
2. Fractional factorial designs: save time and expense.
- Less information.
- May not get all interactions.
- Not a problem if interactions are negligible.
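To illustrate what "all combinations" means in a full factorial design, a small Python sketch enumerates a hypothetical 2^3 design (the factor names and levels are invented); a fractional factorial would run only a chosen subset of these rows.

```python
from itertools import product

# Three hypothetical factors, each at two levels (a 2^3 full factorial design).
factors = {
    "water":  ["plain", "salt"],
    "diet":   ["normal", "high-fat"],
    "strain": ["A", "B"],
}

design = list(product(*factors.values()))   # 2 * 2 * 2 = 8 treatment combinations
for run, combo in enumerate(design, start=1):
    print(run, dict(zip(factors.keys(), combo)))
```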

Common Mistakes in Experimentation
1. The variation due to experimental error is ignored.
2. Important parameters are not controlled.
3. Effects of different factors are not isolated.
4. Simple one-factor-at-a-time designs are used.
5. Interactions are ignored.
6. Too many experiments are conducted. Better: two phases.

Interaction
The effect of one factor depends upon the level of the other. (Figure: non-interacting vs. interacting factors.)

Full factorial experiments
Suppose we are interested in the effect of both salt water and a high-fat diet on blood pressure. Ideally: look at all 4 treatments in one experiment: {plain water, salt water} × {normal diet, high-fat diet}.
Why?
- We can learn more.
- More efficient than doing all single-factor experiments.

Interactions
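A tiny numerical sketch of interaction, using invented cell means for the 2 × 2 salt-water/high-fat example: the interaction is the amount by which the salt-water effect changes between diet levels.

```python
# Hypothetical mean blood pressure (mmHg) for a 2 x 2 design.
# Factor 1: water (plain, salt); factor 2: diet (normal, high-fat).
mean_bp = {
    ("plain", "normal"):   110.0,
    ("plain", "high-fat"): 118.0,
    ("salt",  "normal"):   120.0,
    ("salt",  "high-fat"): 140.0,
}

# Effect of salt water at each diet level.
salt_effect_normal  = mean_bp[("salt", "normal")]   - mean_bp[("plain", "normal")]    # 10
salt_effect_highfat = mean_bp[("salt", "high-fat")] - mean_bp[("plain", "high-fat")]  # 22

# If the factors did not interact, these two effects would be (about) equal.
interaction = salt_effect_highfat - salt_effect_normal
print(f"salt effect (normal diet)   = {salt_effect_normal}")
print(f"salt effect (high-fat diet) = {salt_effect_highfat}")
print(f"interaction = {interaction}")
```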

Other points
Blinding:
- Measurements made by people can be influenced by unconscious biases.
- Ideally, dissections and measurements should be made without knowledge of the treatment applied.
Internal controls:
- It can be useful to use the subjects themselves as their own controls (e.g., consider the response after vs. before treatment).
- Why? Increased precision.

Other points
Representativeness:
- Are the subjects/tissues you are studying really representative of the population you want to study?
- Ideally, your study material is a random sample from the population of interest.

Significance test
Compare the BP of 6 mice fed salt water to 6 mice fed plain water.
- Δ = true difference in average BP (the treatment effect).
- H0: Δ = 0 (i.e., no effect).
- Test statistic, D.
- If |D| > C, reject H0.
- C is chosen so that the chance you reject H0, if H0 is true, is 5%.
(Figure: distribution of D when Δ = 0.)
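A minimal Python sketch of such a comparison, with invented BP values and a Welch two-sample t-test from SciPy standing in for the generic test statistic D.

```python
from scipy import stats

# Hypothetical BP measurements (mmHg), 6 mice per group.
salt_water  = [126.1, 131.4, 128.0, 135.2, 129.7, 133.0]
plain_water = [121.3, 124.8, 119.5, 126.0, 122.7, 125.1]

# Welch two-sample t-test of H0: no difference in mean BP.
t_stat, p_value = stats.ttest_ind(salt_water, plain_water, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:        # significance level alpha = 0.05
    print("Reject H0: evidence of a treatment effect.")
else:
    print("Do not reject H0.")
```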

Statistical power
Power: the chance that you reject H0 when H0 is false (i.e., you [correctly] conclude that there is a treatment effect when there really is a treatment effect).

Power depends on…
- The structure of the experiment
- The method for analyzing the data
- The size of the true underlying effect
- The variability in the measurements
- The chosen significance level (α)
- The sample size
Note: We usually try to determine the sample size to give a particular power (often 80%).
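One way to see these dependencies is to estimate power by simulation. The sketch below uses assumed values for the effect size, SD, sample size, and α, and reports the fraction of simulated experiments in which H0 is rejected; doubling the sample size visibly increases the estimated power.

```python
import numpy as np
from scipy import stats

def simulated_power(effect=8.0, sd=5.0, n_per_group=6, alpha=0.05, n_sim=10_000, seed=1):
    """Estimate the power of a two-sample t-test by simulation."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        control   = rng.normal(0.0,    sd, size=n_per_group)
        treatment = rng.normal(effect, sd, size=n_per_group)
        _, p = stats.ttest_ind(treatment, control, equal_var=False)
        if p < alpha:
            rejections += 1
    return rejections / n_sim

# Larger samples, larger effects, or smaller SDs all increase power.
print(simulated_power(n_per_group=6))
print(simulated_power(n_per_group=12))
```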

Effect of sample size (figure: sampling distributions with 6 per group vs. 12 per group).

Various effects
- Desired power ↑ → sample size ↑
- Stringency of statistical test ↑ → sample size ↑
- Measurement variability ↑ → sample size ↑
- Treatment effect ↑ → sample size ↓
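These qualitative relationships are summarized by the standard sample-size formula for comparing two means (a textbook result, not taken from the slides), where n is the per-group sample size, σ the measurement SD, Δ the treatment effect, α the significance level, and 1 − β the desired power:

$$ n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}} $$

Raising the desired power or lowering α increases the z terms, a larger σ inflates the numerator, and a larger Δ shrinks the required n, matching the arrows above.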