Research vs Experiment

Presentation on theme: "Research vs Experiment"— Presentation transcript:

1 Research vs Experiment

2 Research
A careful search: an effort to obtain new knowledge in order to answer a question or to solve a problem; a protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions).

3 Three Purposes of Research
Exploration, description, explanation.
RESEARCH DESIGN (a plan for research): the outline, plan, or strategy specifying the procedure to be used in answering research questions.

4 Methods of Research
Case studies; field research; archival research; aggregate data analysis (existing or field collection); surveys (observation); experiments. Use multiple methods whenever possible.

5 Research design
Observational: a design in which the levels of all the explanatory variables are determined as part of the observational process.
Experimental: a study in which the investigator selects the levels of at least one factor; an investigation in which the investigator applies treatments to experimental units and then observes the effect of the treatments by measuring one or more response variables; an inquiry in which the investigator chooses the levels (values) of the input or independent variables and observes the values of the output or dependent variable(s).

6 Strengths of observation
It can be used to generate hypotheses, to negate a proposition, and to identify contingent relationships.
Limitations of observation: it cannot be used to test hypotheses; poor representativeness; poor replicability; observer bias.

7 Strengths of experiments
Causation can be determined (if properly designed). The researcher has considerable control over the variables of interest. Experiments can be designed to evaluate multiple independent variables.
Limitations of experiments: not ethical in many situations; often more difficult and costly.

8 Design of Experiments Define the objectives of the experiment and the population of interest. Identify all sources of variation. Choose an experimental design and specify the experimental procedure.

9 Defining the Objectives
What questions do you hope to answer as a result of your experiment? To what population do these answers apply?

10 Defining the Objectives

11 Identifying Sources of Variation
(Figure: input variables and the output variable)

12 Choosing an Experimental Design

13 Experimental Design
A controlled study in which one or more treatments are applied to experimental units; a plan and a structure to test hypotheses in which the analyst controls or manipulates one or more variables; a protocol for measuring the values of a set of variables. It contains independent and dependent variables.

14 What is a statistical experimental design?
Determine the levels of independent variables (factors) and the number of experimental units at each combination of these levels according to the experimental goal. What is the output variable? Which (input) factors should we study? What are the levels of these factors? What combinations of these levels should be studied? How should we assign the studied combinations to experimental units?

15 The Six Steps of Experimental Design
Plan the experiment. Design the experiment. Perform the experiment. Analyze the data from the experiment. Confirm the results of the experiment. Evaluate the conclusions of the experiment.

16 Plan the Experiment
Identify the dependent or output variable(s). Translate output variables to measurable quantities. Determine the factors (input or independent variables) that potentially affect the output variables and are to be studied. Identify potential interactions between factors.

17 Well-planned Experiment
Simplicity Degree of Precision Absence of systematic error Range of validity of conclusion Calculation of degree of uncertainty

18 Well-planned Experiment
Simplicity: the selection of treatments and the experimental arrangement should be as simple as possible, consistent with the objectives of the experiment.
Degree of precision: the probability should be high that the experiment will be able to measure differences with the degree of precision the experimenter desires; this implies an appropriate design and sufficient replication.
Absence of systematic error: the experiment must be planned to ensure that the experimental units receiving one treatment differ in no systematic way from those receiving another treatment, so that an unbiased estimate of each treatment effect can be obtained.

19 Well-planned Experiment
Range of validity of conclusions: conclusions should have as wide a range of validity as possible. An experiment replicated in time and space would increase the range of validity of the conclusions that could be drawn from it. A factorial set of treatments is another way of increasing the range of validity of an experiment; in a factorial experiment, the effects of one factor are evaluated under varying levels of a second factor.
Calculation of degree of uncertainty: in any experiment there is always some degree of uncertainty as to the validity of the conclusions. The experiment should be designed so that it is possible to calculate the probability of obtaining the observed results by chance alone.

20 Important steps in experiment
Definition of the problem; statement of the objectives; selection of treatments; selection of experimental material; selection of experimental design; selection of the unit of observation and the number of replications; control of the effect of adjacent units on each other; consideration of the data to be collected; outlining the statistical analysis and summarization of results; conducting the experiment; analyzing the data and interpreting the results; preparation of a complete, readable and correct report of the research.

21 Important steps in experiment
Definition of the problem: state the problem clearly and concisely. Statement of the objectives: objectives should be written out in precise terms. Selection of treatments: choose treatments carefully. Selection of experimental material: the material used should be representative of the population on which the treatments will be tested. Selection of experimental design: choose the simplest design that is likely to provide the required precision. Selection of the unit of observation and the number of replications: plot size and the number of replications should be chosen to produce the required precision of treatment estimates.

22 Important steps in experiment
Control the effect of adjacent units on each other: use border rows and randomization of treatments. Consideration of the data to be collected: the data collected should properly evaluate treatment effects in line with the objectives of the experiment. Outlining the statistical analysis and summarization of results: write out the SV, DF, SS, MS and F-test. Conducting the experiment: use procedures that are free from bias. Analyzing the data and interpreting the results. Preparation of a complete, readable and correct report of the research.
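The "SV, DF, SS, MS and F-test" outline above corresponds to a one-way ANOVA table for a completely randomized design. A minimal sketch in pure Python, using invented data for three hypothetical treatments:

```python
# One-way ANOVA table (SV, DF, SS, MS, F) for a CRD, computed from first
# principles. The data below are made up for illustration only.
data = {
    "T1": [4.1, 3.8, 4.4, 4.0],
    "T2": [5.2, 5.0, 4.8, 5.5],
    "T3": [3.2, 3.5, 3.1, 3.4],
}

all_obs = [x for grp in data.values() for x in grp]
n, t = len(all_obs), len(data)
grand_mean = sum(all_obs) / n

# Sums of squares: total, treatment (between groups), error (within groups)
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)
ss_treat = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in data.values())
ss_error = ss_total - ss_treat

# Degrees of freedom, mean squares, F-statistic
df_treat, df_error = t - 1, n - t
ms_treat, ms_error = ss_treat / df_treat, ss_error / df_error
f_stat = ms_treat / ms_error

print(f"SV=Treatment DF={df_treat} SS={ss_treat:.3f} MS={ms_treat:.3f} F={f_stat:.2f}")
print(f"SV=Error     DF={df_error} SS={ss_error:.3f} MS={ms_error:.3f}")
```

The computed F is then compared against the F distribution with (DF treatment, DF error) degrees of freedom.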

23 Syllabus
Week 2: Terminology and basic concepts. Week 3: T-test, ANOVA and CRD. Week 4: RCBD and Latin Square. Week 7: Mean comparison. Weeks 8–9: Midterm. Week 10: Factorial experiment. Week 13: Special topic in factorial experiment.

24 Grading system
Grade scale: 0–100. A: > 80; B–D: 45–80 (normal distribution); E: < 45.
Grade composition: theory 67 (Assignment 30; Mid-term and Final Exam 40); Practical Work 33.

25 Terminology
Variable: a characteristic that varies (e.g., weight, body temperature, bill length, etc.).
Treatment / input / independent variable: set at predetermined levels decided by the experimenter; a condition or set of conditions applied to experimental units; the variable that the experimenter either controls or modifies; what you manipulate (the dependent variable is what you evaluate). A treatment may involve a single factor or ≥ 2 factors.

26 Terminology
Factors: another name for the independent variables of an experimental design; an explanatory variable whose effect on the response is a primary objective of the study; a variable upon which the experimenter believes that one or more response variables may depend, and which the experimenter can control; an explanatory variable that can take any one of two or more values. The design of the experiment will largely consist of a policy for determining how to set the factors in each experimental trial.

27 Terminology
Levels or classifications: the subcategories of the independent variable used in the experimental design; the different values of a factor.
Dependent / response / output variable: a quantitative or qualitative variable that represents the variable of interest; the response to the different levels of the independent variables; a characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of treatments on experimental units.

28 Terminology
Treatment factor: a factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor.
Treatment design: the collection of treatments used in an experiment.
Full factorial treatment design: a treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors.

29 Terminology
Experimental unit: the unit of the study material to which a treatment is applied; the smallest unit of the study material sharing a common treatment; the physical entity to which a treatment is randomly assigned and independently applied; a person, object, or some other well-defined item to which a treatment is applied.
Observational unit (sampling unit): the smallest unit of the study material for which responses are measured; the unit on which a response variable is measured. There is often a one-to-one correspondence between experimental units and observational units, but that is not always true.

30 Basic principles
Comparison/control; replication; randomization; stratification (blocking).

31 Comparison/control
Good experiments are comparative: comparing the effect of different nitrogen dosages on rice yield; comparing the potential yield of cassava clones; comparing the effectiveness of pesticides. Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).

32 Replication
Applying a treatment independently to two or more experimental units; the number of experimental units for which responses to a particular treatment are observed. Replication reduces the effect of uncontrolled variation (i.e., increases precision), estimates the variability in response that is not associated with treatment differences, improves the reliability of the conclusions drawn from the data, and quantifies uncertainty.

33 Replication

34 Randomization
Random assignment of treatments to experimental units. Experimental subjects ("units") should be assigned to treatment groups at random. At random does not mean haphazardly: one needs to explicitly randomize using a computer, or coins, dice or cards.
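Randomizing with a computer can be as simple as shuffling a list of treatment labels; a minimal sketch (the treatment names, group sizes, and seed are arbitrary):

```python
import random

# Randomly assign 4 treatments x 4 replications to 16 experimental units.
treatments = ["A", "B", "C", "D"] * 4
rng = random.Random(42)   # fixed seed so the allocation is reproducible
rng.shuffle(treatments)

for unit, treat in enumerate(treatments, start=1):
    print(f"unit {unit:2d} -> treatment {treat}")
```

Recording the seed keeps the allocation auditable while still being random rather than haphazard.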

35 Why randomize?
It allows the observed responses to be regarded as random samples from the appropriate population, eliminates the influence of systematic bias on the measured values, and controls the role of chance. Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.

36 Stratification (blocking)
Grouping similar experimental units together and assigning different treatments within such groups of experimental units; a technique used to eliminate the effects of selected confounding variables when comparing treatments. If you anticipate a difference between morning and afternoon measurements: ensure that within each period there are equal numbers of subjects in each treatment group, and take account of the difference between periods in your analysis.

37 Completely randomized design (4 treatments x 4 replications)
Cage positions: the four treatments A, B, C, D are assigned to the 16 cage positions completely at random.
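The cage-position layout can be generated by shuffling all 16 treatment labels at once, with no restriction on where each treatment lands; a sketch with an arbitrary seed:

```python
import random

# Completely randomized design on a 4 x 4 grid of cage positions:
# 4 treatments (A-D), 4 replications each, positions assigned at random.
rng = random.Random(7)
labels = list("ABCD") * 4
rng.shuffle(labels)

for row in range(4):
    print(" ".join(labels[4 * row: 4 * row + 4]))
```

Note that nothing prevents a treatment from clustering in one region of the grid; that is the weakness the randomized block design addresses.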

54 Randomized block design (4 treatments x 4 blocks = 16 experimental units)
Cage positions: within each block, each of the four treatments A, B, C, D appears once, randomized independently per block.
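In contrast with the completely randomized design, the block layout can be generated by randomizing the four treatments independently within each block; a sketch where each row of cage positions is taken as one block (seed arbitrary):

```python
import random

# Randomized block design: each row of cage positions is a block; each
# of the 4 treatments (A-D) appears exactly once per block, randomized
# independently within each block.
rng = random.Random(3)
layout = []
for block in range(4):
    row = list("ABCD")
    rng.shuffle(row)
    layout.append(row)

for row in layout:
    print(" ".join(row))
```

Because every treatment occurs once in every block, block-to-block differences (e.g., cage height) cancel out of the treatment comparisons.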

71 Randomization and stratification
If you can (and want to), fix a variable, e.g., use only 8-week-old male mice from a single strain. If you don't fix a variable, stratify it, e.g., use both 8-week-old and 12-week-old male mice, and stratify with respect to age. If you can neither fix nor stratify a variable, randomize it.

72 Experiment
Single factor experiment: (1) treatments consist of one factor; (2) treatments that are factors are treated as one type; (3) there is no treatment design.
Multiple factor experiment (factorial experiment): (1) treatments consist of ≥ 2 factors; (2) we are interested in identifying interactions; (3) there is a treatment design.
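The interaction that factorial experiments identify can be illustrated numerically: it is the extent to which the effect of one factor changes across the levels of the other. The 2 x 2 cell means below are invented for illustration:

```python
# Interaction in a 2 x 2 factorial: compare the effect of factor A
# (nitrogen level) at each level of factor B (variety).
means = {
    ("low_N", "var1"): 4.0, ("high_N", "var1"): 6.0,
    ("low_N", "var2"): 4.5, ("high_N", "var2"): 9.0,
}

effect_N_at_var1 = means[("high_N", "var1")] - means[("low_N", "var1")]  # 2.0
effect_N_at_var2 = means[("high_N", "var2")] - means[("low_N", "var2")]  # 4.5
interaction = effect_N_at_var2 - effect_N_at_var1                        # 2.5

print(interaction)  # nonzero -> the nitrogen effect depends on the variety
```

A single-factor experiment run at one fixed variety would miss this: it would estimate only one of the two nitrogen effects.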

73 Interactions

74 Significance test
Based on a statistical distribution which depends on the tested parameter. Δ = the true difference between the two sample averages (the treatment effect). H0: Δ = 0 (i.e., no effect). Test statistic: D. If |D| > C, reject H0. C (the critical value) is chosen so that the chance of rejecting H0 when H0 is true is 5%. (Figure: distribution of D when Δ = 0.)
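The logic of the test can be sketched by simulation: generate many datasets under H0 (true difference zero), compute D for each, and take the 95th percentile of |D| as the critical value C. The group size and standard deviation here are arbitrary:

```python
import random
import statistics

# Simulate the distribution of D (difference in sample means) when the
# true difference is 0, then find C such that P(|D| > C | H0) ~= 5%.
rng = random.Random(0)
n, sd, reps = 10, 1.0, 20_000

diffs = []
for _ in range(reps):
    a = [rng.gauss(0, sd) for _ in range(n)]
    b = [rng.gauss(0, sd) for _ in range(n)]
    diffs.append(statistics.mean(a) - statistics.mean(b))

abs_sorted = sorted(abs(d) for d in diffs)
C = abs_sorted[int(0.95 * reps)]   # 95th percentile of |D| under H0
print(round(C, 3))  # close to the normal-theory value 1.96 * sd * sqrt(2/n)
```

With known normal data the same C comes from theory directly, but the simulation makes the "5% chance under H0" definition concrete.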

75 Statistical power
Power: the chance that you reject H0 when H0 is false (i.e., you correctly conclude that there is a treatment effect when there really is one).

76 Power depends on…
The structure of the experiment; the method for analyzing the data; the size of the true underlying effect; the variability in the measurements; the chosen significance level (α); the sample size. Note: we usually try to determine the sample size to give a particular power (often 80%).
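The dependence of power on sample size can be checked by simulation; a minimal sketch assuming normal data with a known standard deviation (all numbers arbitrary):

```python
import random
import statistics

# Estimate power by simulation: the chance that |D| exceeds the critical
# value when the true effect is delta (normal data, sd assumed known).
def power(n, delta, sd=1.0, z_crit=1.96, reps=5_000, seed=0):
    rng = random.Random(seed)
    se = sd * (2 / n) ** 0.5          # standard error of the difference
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0, sd) for _ in range(n)]
        b = [rng.gauss(delta, sd) for _ in range(n)]
        d = statistics.mean(b) - statistics.mean(a)
        hits += abs(d) > z_crit * se
    return hits / reps

print(power(6, 1.0), power(12, 1.0))  # power rises with sample size
```

The same harness shows the other dependencies: increasing sd or shrinking delta lowers the returned power.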

77 Effect of sample size (figure: 6 per group vs. 12 per group)

78 Various effects
Higher desired power → larger sample size. More stringent statistical test → larger sample size. Greater measurement variability → larger sample size. Larger treatment effect → smaller sample size.

79 Determining sample size
The things you need to know: the structure of the experiment; the method for analysis; the chosen significance level, α (usually 5%); the desired power (usually 80%); the variability in the measurements (if necessary, perform a pilot study); the smallest meaningful effect.
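Given those quantities, a standard closed-form approximation for a two-sample comparison of means yields the per-group sample size; a sketch using the usual normal-approximation z-values for α = 5% and power = 80%:

```python
from math import ceil

# Per-group sample size for a two-sample comparison of means:
#   n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma / delta)^2
# with z = 1.96 (alpha = 5%, two-sided) and z = 0.84 (power = 80%).
def sample_size(sigma, delta, z_alpha=1.96, z_beta=0.84):
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

print(sample_size(sigma=1.0, delta=0.5))  # 63 per group
```

sigma is the measurement variability (possibly from a pilot study) and delta the smallest meaningful effect; halving delta quadruples the required n.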

80 Reducing sample size Reduce the number of treatment groups being compared. Find a more precise measurement (e.g., average time to effect rather than proportion sick). Decrease the variability in the measurements. Make subjects more homogeneous. Use stratification. Control for other variables (e.g., weight). Average multiple measurements on each subject.

