Experimental Design: Research vs. Experiment
Research
A careful search: an effort to obtain new knowledge in order to answer a question or to solve a problem. A protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions). A RESEARCH DESIGN is a plan for research.
Research Designs (in order of increasing constraint level)
Naturalistic observation
Case study
Correlational
Differential
Experimental
Research design
Observational: a design in which the levels of all the explanatory variables are determined as part of the observational process.
Experimental: a study in which the investigator selects the levels of at least one factor; an investigation in which the investigator applies treatments to experimental units and then observes the effect of the treatments by measuring one or more response variables; an inquiry in which the investigator chooses the levels (values) of the input or independent variables and observes the values of the output or dependent variable(s).
Strengths of observation
Can be used to generate hypotheses.
Can be used to negate a proposition.
Can be used to identify contingent relationships.
Limitations of observation
Cannot be used to test hypotheses.
Poor representativeness.
Poor replicability.
Observer bias.
Strengths of experiments
Causation can be determined (if properly designed).
The researcher has considerable control over the variables of interest.
Can be designed to evaluate multiple independent variables.
Limitations of experiments
Not ethical in many situations.
Often more difficult and costly.
Design of Experiments
1. Define the objectives of the experiment and the population of interest.
2. Identify all sources of variation.
3. Choose an experimental design and specify the experimental procedure.
Defining the Objectives
What questions do you hope to answer as a result of your experiment?
To what population do these answers apply?
Identifying Sources of Variation
(Diagram: input variables feeding into the output variable.)
Choosing an Experimental Design
What is an experimental design?
Experimental Design
A plan and a structure for testing hypotheses in which the analyst controls or manipulates one or more variables. A protocol for measuring the values of a set of variables. It contains independent and dependent variables.
What is a statistical experimental design?
What is the output variable?
Which (input) factors should we study?
What are the levels of these factors?
What combinations of these levels should be studied?
How should we assign the studied combinations to experimental units?
In short: determine the levels of the independent variables (factors) and the number of experimental units at each combination of these levels, according to the experimental goal.
The Six Steps of Experimental Design
1. Plan the experiment.
2. Design the experiment.
3. Perform the experiment.
4. Analyze the data from the experiment.
5. Confirm the results of the experiment.
6. Evaluate the conclusions of the experiment.
Plan the Experiment
Identify the dependent or output variable(s).
Translate the output variables into measurable quantities.
Determine the factors (input or independent variables) that potentially affect the output variables and that are to be studied.
Identify potential interactions (combined actions) between factors.
Syllabus (content by week)
Terminology and basic concepts: week 2
t-test, ANOVA, and CRD: week 3
RCBD and Latin square: week 4
Mean comparison: week 7
Midterm: weeks 8-9
Mean comparison: week 10
Factorial experiments: weeks 11-12
Special topics in factorial experiments: week 13
Grading system
Grade scale: 0-100
A: > 80
B to D: 45-80 (assigned on a normal distribution)
E: < 45
Grade composition: theory 67% (Assignment 30, Mid-term 30, Final Exam 40); Practical Work 33%
Terminology
Variable: a characteristic that varies (e.g., weight, body temperature, bill length).
Treatment / input / independent variable: a condition or set of conditions applied to experimental units in an experiment; the variable that the experimenter either controls or modifies; what you manipulate. An experiment may involve a single factor or two or more factors.
Terminology
Factors: another name for the independent variables of an experimental design; an explanatory variable whose effect on the response is a primary objective of the study; a variable upon which the experimenter believes one or more response variables may depend, and which the experimenter can control; an explanatory variable that can take any one of two or more values. The design of the experiment largely consists of a policy for determining how to set the factors in each experimental trial.
Terminology
Levels or classifications: the subcategories of the independent variable used in the experimental design; the different values of a factor.
Dependent / response / output variable: the response to the different levels of the independent variables; a characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of the treatments on experimental units.
Terminology
Treatment factor: a factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor.
Treatment design: the collection of treatments used in an experiment.
Full factorial treatment design: a treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors.
Terminology
Experimental unit: (1) the unit of the study material to which a treatment is applied; (2) the smallest unit of the study material sharing a common treatment; (3) the physical entity to which a treatment is randomly assigned and independently applied.
Observational unit (sampling unit): the smallest unit of the study material for which responses are measured; the unit on which a response variable is measured.
There is often a one-to-one correspondence between experimental units and observational units, but that is not always true.
Populations and Samples
Population: the entire collection of values for the variable being considered.
Sample: a subset of the population. Statistically, it is important for the sample to be a random sample.
Parameters vs. Statistics
Parameter: a measure that characterizes a population.
Statistic: an estimate of a population parameter, based on a sample.
Basic principles
1. Formulate the question/goal in advance.
2. Comparison/control.
3. Replication.
4. Randomization.
5. Stratification (blocking).
6. Factorial experiments.
Comparison/control
Good experiments are comparative.
Compare BP in mice fed salt water to BP in mice fed plain water.
Compare BP in strain A mice fed salt water to BP in strain B mice fed salt water.
Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).
Replication
Applying a treatment independently to two or more experimental units.
Why replicate?
To reduce the effect of uncontrolled variation (i.e., to increase precision).
To quantify uncertainty.
Randomization
Random assignment of treatments to experimental units. Experimental subjects ("units") should be assigned to treatment groups at random. "At random" does not mean haphazardly: one needs to explicitly randomize using
– a computer, or
– coins, dice, or cards.
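To make "explicitly randomize using a computer" concrete, here is a minimal sketch (not from the original slides) that randomly assigns two treatments to twelve hypothetical mice; the animal IDs, treatment names, and group sizes are assumptions for illustration.

```python
import random

random.seed(42)  # fix the seed so the allocation is reproducible

# Hypothetical experimental units and treatments (names are illustrative only).
mice = [f"mouse_{i:02d}" for i in range(1, 13)]
treatments = ["salt water"] * 6 + ["plain water"] * 6  # 6 units per group

random.shuffle(mice)                      # put the units in random order
assignment = dict(zip(mice, treatments))  # pair each unit with a treatment

for mouse in sorted(assignment):
    print(mouse, "->", assignment[mouse])
```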
Why randomize?
Avoid bias.
– For example: the first six mice you grab may have intrinsically higher BP.
Control the role of chance.
– Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.
Stratification (Blocking)
Grouping similar experimental units together and assigning the different treatments within such groups of experimental units.
If you anticipate a difference between morning and afternoon measurements:
Ensure that, within each period, there are equal numbers of subjects in each treatment group.
Take account of the difference between periods in your analysis.
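As an illustration of blocking (not part of the original slides), the sketch below randomizes treatments separately within a morning block and an afternoon block, so each period contains equal numbers of subjects per treatment group; the subject IDs and block sizes are invented.

```python
import random

random.seed(1)

# Hypothetical subjects: 8 measured in the morning, 8 in the afternoon.
blocks = {
    "morning":   [f"subject_{i:02d}" for i in range(1, 9)],
    "afternoon": [f"subject_{i:02d}" for i in range(9, 17)],
}
treatments = ["treated", "control"]

# Randomize separately within each block so that every block
# contains equal numbers of subjects in each treatment group.
assignment = {}
for block, subjects in blocks.items():
    labels = treatments * (len(subjects) // len(treatments))
    random.shuffle(labels)
    for subject, treatment in zip(subjects, labels):
        assignment[subject] = (block, treatment)

for subject, (block, treatment) in sorted(assignment.items()):
    print(f"{subject}: {block}, {treatment}")
```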
Cage positions: completely randomized design (illustration of treatment assignments to cage positions).
Cage positions: randomized block design (illustration of treatment assignments to cage positions).
Randomization and stratification
If you can (and want to), fix a variable.
– e.g., use only 8-week-old male mice from a single strain.
If you don't fix a variable, stratify it.
– e.g., use both 8-week and 12-week-old male mice, and stratify with respect to age.
If you can neither fix nor stratify a variable, randomize it.
Types of Experimental Designs
Simple designs: vary one factor at a time.
Not statistically efficient.
Can lead to wrong conclusions if the factors interact.
Not recommended.
Types of Experimental Designs
Factorial experiments
1. Full factorial design: all combinations of factor levels.
Can find the effect of all factors.
Can require too much time and money.
May try a 2^k design (each factor at two levels) first.
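The idea of "all combinations" can be sketched in a few lines of Python (an illustration, not from the slides): enumerate every combination of one level from each factor, here a 2^3 design with invented factor names and levels.

```python
from itertools import product

# Hypothetical factors, each at two levels, i.e. a 2^3 full factorial design.
factors = {
    "water":  ["plain", "salt"],
    "diet":   ["normal", "high-fat"],
    "strain": ["A", "B"],
}

# Full factorial: every combination of one level from each factor.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(f"{len(runs)} runs in the full factorial:")  # 2 * 2 * 2 = 8 runs
for run in runs:
    print(run)
```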
Types of Experimental Designs
2. Fractional factorial designs: run only a fraction of the full set of combinations.
Save time and expense.
Provide less information.
May not estimate all interactions.
Not a problem if the omitted interactions are negligible.
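One common way to construct a fractional factorial, shown here as an illustrative sketch rather than material from the slides, is a half fraction of a 2^3 design built from the defining relation C = AB on coded ±1 levels, which keeps 4 of the 8 runs at the cost of aliasing C with the AB interaction.

```python
from itertools import product

# Half fraction of a 2^3 design (a 2^(3-1) design):
# run a full 2^2 design in A and B, and generate C = A * B on coded -1/+1 levels.
half_fraction = []
for a, b in product([-1, 1], repeat=2):
    c = a * b  # generator C = AB, so C is aliased with the AB interaction
    half_fraction.append({"A": a, "B": b, "C": c})

for run in half_fraction:
    print(run)
# 4 runs instead of 8: cheaper, but the aliasing is only harmless
# if the AB interaction is negligible.
```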
Common Mistakes in Experimentation
1. The variation due to experimental error is ignored.
2. Important parameters are not controlled.
3. Effects of different factors are not isolated.
4. Simple one-factor-at-a-time designs are used.
5. Interactions are ignored.
6. Too many experiments are conducted at once. Better: run the study in two phases.
Interaction
The effect of one factor depends upon the level of the other.
(Figure panels: non-interacting factors vs. interacting factors.)
Full factorial experiments
Suppose we are interested in the effect of both salt water and a high-fat diet on blood pressure. Ideally: look at all 4 treatments in one experiment.
{Plain water, Salt water} × {Normal diet, High-fat diet}
Why?
– We can learn more.
– More efficient than doing all single-factor experiments.
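A hedged sketch of how such a 2x2 factorial might be analyzed (not part of the original lecture): simulate blood pressure data, then fit a two-way ANOVA whose water:diet term tests the interaction. The effect sizes, noise level, and group size of 6 mice per cell are arbitrary assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate BP for a 2x2 factorial: water (plain/salt) x diet (normal/high-fat),
# with 6 mice per cell; the effect sizes below are arbitrary assumptions.
rows = []
for water in ["plain", "salt"]:
    for diet in ["normal", "high_fat"]:
        mean_bp = 100 + 10 * (water == "salt") + 5 * (diet == "high_fat")
        for bp in rng.normal(mean_bp, 5, size=6):
            rows.append({"water": water, "diet": diet, "bp": bp})
df = pd.DataFrame(rows)

# Two-way ANOVA; the C(water):C(diet) row tests the interaction.
model = smf.ols("bp ~ C(water) * C(diet)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```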
Interactions (illustrative figure).
Other points
Blinding
– Measurements made by people can be influenced by unconscious biases.
– Ideally, dissections and measurements should be made without knowledge of the treatment applied.
Internal controls
– It can be useful to use the subjects themselves as their own controls (e.g., consider the response after vs. before treatment).
– Why? Increased precision.
Other points
Representativeness
– Are the subjects/tissues you are studying really representative of the population you want to study?
– Ideally, your study material is a random sample from the population of interest.
Significance test
Compare the BP of 6 mice fed salt water to 6 mice fed plain water.
Let Δ = the true difference in average BP (the treatment effect).
H0: Δ = 0 (i.e., no effect).
Compute a test statistic, D.
If |D| > C, reject H0.
C is chosen so that the chance you reject H0, if H0 is true, is 5%.
(Figure: distribution of D when Δ = 0.)
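For concreteness, a small sketch (not from the slides) that carries out this comparison as a two-sample t-test; the BP values are made up, and scipy's t statistic stands in for the generic test statistic D, with C taken as the 5% critical value.

```python
from scipy import stats

# Hypothetical blood pressure measurements (mmHg), 6 mice per group.
salt_water  = [118, 124, 121, 130, 127, 122]
plain_water = [112, 109, 115, 117, 110, 114]

# Two-sample t-test of H0: Delta = 0 (no difference in mean BP).
t_stat, p_value = stats.ttest_ind(salt_water, plain_water)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Reject H0 at the 5% level when p < 0.05, i.e. when |t| exceeds the critical value C.
```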
Statistical power
Power: the chance that you reject H0 when H0 is false (i.e., you [correctly] conclude that there is a treatment effect when there really is a treatment effect).
Power depends on…
The structure of the experiment.
The method for analyzing the data.
The size of the true underlying effect.
The variability in the measurements.
The chosen significance level (α).
The sample size.
Note: we usually try to determine the sample size that gives a particular power (often 80%).
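As an illustration of choosing a sample size for a target power (an added sketch, not from the slides), the following uses statsmodels' power calculator for a two-sample t-test; the standardized effect size of 1.0 and the 80% power target are assumptions.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 1.0   # assumed standardized effect: (true difference in means) / SD
alpha = 0.05        # significance level

# Sample size per group needed for 80% power (leave nobs1 unspecified to solve for it).
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                   power=0.80, ratio=1.0, alternative="two-sided")
print(f"about {n_per_group:.1f} subjects per group for 80% power")

# Power actually achieved with 6 per group, as in the earlier BP example.
power_with_6 = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                    nobs1=6, ratio=1.0, alternative="two-sided")
print(f"power with 6 per group: {power_with_6:.2f}")
```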
Effect of sample size (figure comparing 6 per group vs. 12 per group).
Various effects on sample size
Greater desired power → larger sample size.
More stringent statistical test → larger sample size.
Greater measurement variability → larger sample size.
Larger treatment effect → smaller sample size.