1 Research vs Experiment

2 Research
A careful search
A process of enquiry and investigation
An effort to obtain new knowledge in order to answer a question or to solve a problem
A protocol for measuring the values of a set of variables (response variables) under a set of conditions (study conditions)

3 Purpose of Research
Review or synthesize existing knowledge.
Investigate existing situations or problems.
Provide solutions to problems.
Explore and analyze more general issues.
Construct or create new procedures or systems.
Explain new phenomena.
Generate new knowledge.
…or a combination of any of the above! (Collis & Hussey, 2003)

4 Three Purposes of Research
Exploration, Description, Explanation
RESEARCH STRATEGY/DESIGN (a plan for research): the outline, plan, or strategy specifying the procedure to be used in answering research questions

6 Research strategies
The research process ‘onion’

7 Research strategy
FIRST, be clear about your research questions and objectives.
A strategy is a general plan of how you will go about answering your research question(s). It will contain clear objectives derived from the question.
You must specify the data sources and consider the constraints, e.g. access, time, location, money, ethical issues.

8 Research strategies
Survey
Case study
Grounded theory
Ethnography
Action research
Exploratory, descriptive and explanatory studies
Experiment
Note: they are not mutually exclusive.

9 Survey
A collection of information in standardised form from samples of known populations, to create quantifiable data on a number of variables from which correlations and possible causations can be established.

10 Main advantages of survey
Ability to collect large amounts of data.
The relatively cheap cost at which these data may be collected.
Perceived as authoritative.
The more respondents that can be involved, the easier the coding and pre-coding, the easier the quantification, comparison and measurement, the easier the statistical analysis, and the greater the likely reliability.
Reliability is about accuracy, consistency, precision and lack of error: the ability to produce results which are dependable and repeatable.

11 Disadvantages of survey
The less possibility for understanding respondents’ meanings and motives.
The greater the possibility of validity problems arising, e.g. do all respondents interpret questions the same way?
The more the richness of qualitative accounts is lost.
The less it tells us about the subjective world of the respondents… hence the need for a ‘phenomenological/naturalistic’ inquiry.
It’s easy to do a survey badly!

12 Case study
Focuses on understanding the dynamics present within a single setting. Often used in the exploratory stages.
The case can be an individual person, a single institution or organisation, a small group, a community, a nation, a decision, a policy, a particular service, a particular event, or a process.

13 Grounded theory
Data collection starts without any formal theoretical framework. Theory is developed from the data through a series of observations, which lead to the generation of predictions that are tested in further observations, which may or may not confirm the predictions.
Theory is grounded in continual reference to the data.
An attempt to impart rigour to qualitative methods. Associated with Barney Glaser.

14 Ethnography
Developed out of field work in anthropology.
The purpose is to interpret the world the way the ‘locals’ interpret it.
Is time consuming. Linked to participant observation.

15 Action research
May involve practitioners who are also researchers, e.g. professionals in training.
Research may be part of the organisation, e.g. a school, university or hospital, with the researcher actively involved in the promotion of change within it.
Raises the issue of transference of knowledge from one context to another.

16 Experiment
A study in which the investigator selects the levels of at least one factor.
An investigation in which the investigator applies some treatments to experimental units and then observes the effect of the treatments by measuring one or more response variables.
An inquiry in which an investigator chooses the levels (values) of input or independent variables and observes the values of the output or dependent variable(s).

17 Strengths of experiment
Causation can be determined (if properly designed).
The researcher has considerable control over the variables of interest.
It can be designed to evaluate multiple independent variables.
Limitations of experiment
Not ethical in many situations.
Often more difficult and costly.

18 Design of Experiments
Define the objectives of the experiment and the population of interest.
Identify all sources of variation.
Choose an experimental design and specify the experimental procedure.

19 Defining the Objectives
What questions do you hope to answer as a result of your experiment?
To what population do these answers apply?

20 Identifying Sources of Variation
[Diagram: input variables → output variables]

21 Choosing an Experimental Design

22 Experimental Design
A controlled study in which one or more treatments are applied to experimental units.
A plan and a structure to test hypotheses, in which the analyst controls or manipulates one or more variables.
A protocol for measuring the values of a set of variables.
It contains independent and dependent variables.

23 Statistical experimental design
Determine the levels of the independent variables (factors) and the number of experimental units at each combination of these levels, according to the experimental goal:
What is the output variable?
Which (input) factors should we study?
What are the levels of these factors?
What combinations of these levels should be studied?
How should we assign the studied combinations to experimental units?

24 Steps of Experimental Design
Plan the experiment.
Design the experiment.
Perform the experiment.
Analyze the data from the experiment.
Confirm the results of the experiment.
Evaluate the conclusions of the experiment.

25 Plan the Experiment
Identify the dependent or output variable(s).
Translate output (response) variables to measurable quantities.
Determine the factors (input or independent variables) that potentially affect the output variables to be studied.
Identify potential interactions (combined actions) between factors.

26 Well-planned Experiment
Simplicity
Degree of precision
Absence of systematic error
Range of validity of conclusions
Calculation of degree of uncertainty

27 Well-planned Experiment
Simplicity: the selection of treatments and the experimental arrangement should be as simple as possible, consistent with the objectives of the experiment.
Degree of precision: the probability should be high that the experiment will be able to measure differences with the degree of precision the experimenter desires. This implies an appropriate design and sufficient replication.
Absence of systematic error: the experiment must be planned to ensure that the experimental units receiving one treatment differ in no systematic way from those receiving another treatment, so that an unbiased estimate of each treatment effect can be obtained.

28 Well-planned Experiment
Range of validity of conclusions: conclusions should have as wide a range of validity as possible. An experiment replicated in time and space would increase the range of validity of the conclusions that could be drawn from it. A factorial set of treatments is another way of increasing the range of validity of an experiment; in a factorial experiment, the effects of one factor are evaluated under varying levels of a second factor.
Calculation of degree of uncertainty: in any experiment there is always some degree of uncertainty as to the validity of the conclusions. The experiment should be designed so that it is possible to calculate the probability of obtaining the observed results by chance alone.

29 Steps in Designing the Experiment
Selection of treatments (independent/input variables)
Selection of experimental material
Selection of experimental design
Selection of the unit of observation and the number of replications
Control of the effect of adjacent units on each other
Consideration of the data to be collected (output/response variables)
Outlining the statistical analysis and summarization of results

30 Important Steps in Designing the Experiment
Selection of treatments: careful selection of the treatments.
Selection of experimental material: the material used should be representative of the population on which the treatments will be tested.
Selection of experimental design: choose the simplest design that is likely to provide the required precision.
Selection of the unit of observation and the number of replications: plot size and the number of replications should be chosen to produce the required precision of treatment estimates.

31 Important Steps in Designing the Experiment
Control of the effect of adjacent units on each other: use border rows and randomization of treatments.
Consideration of the data to be collected: the data collected should properly evaluate treatment effects in line with the objectives of the experiment.
Outlining the statistical analysis and summarization of results: write out the sources of variation (SV), degrees of freedom (DF), sums of squares (SS), mean squares (MS) and the F-test, as in the sketch below.
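
As a rough illustration of that outline, here is a minimal one-way ANOVA table computed by hand; the treatment count and data values are invented for the example, not taken from the course.

```python
import numpy as np
from scipy import stats

# Hypothetical yields: 3 treatments, 4 replications each
groups = [np.array([5.1, 4.9, 5.3, 5.0]),
          np.array([5.8, 6.1, 5.9, 6.2]),
          np.array([4.6, 4.4, 4.8, 4.5])]

k = len(groups)                      # number of treatments
n = sum(len(g) for g in groups)      # total observations
grand_mean = np.concatenate(groups).mean()

# Sums of squares (SS) for the two sources of variation (SV)
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_treat, df_error = k - 1, n - k                              # degrees of freedom (DF)
ms_treat, ms_error = ss_treat / df_treat, ss_error / df_error  # mean squares (MS)
f_stat = ms_treat / ms_error                                   # F-test
p_value = stats.f.sf(f_stat, df_treat, df_error)               # upper-tail F probability

print(f"{'SV':<10}{'DF':>4}{'SS':>10}{'MS':>10}{'F':>8}")
print(f"{'Treatment':<10}{df_treat:>4}{ss_treat:>10.3f}{ms_treat:>10.3f}{f_stat:>8.2f}")
print(f"{'Error':<10}{df_error:>4}{ss_error:>10.3f}{ms_error:>10.3f}")
print(f"p = {p_value:.4f}")
```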

32 Syllabus
Content (week):
Terminology and basic concepts (week 2)
T-test, ANOVA and CRD (week 3)
RCBD and Latin Square (week 4)
Mean comparison (week 7)
Midterm (weeks 8-9)
Factorial experiment (week 10)
Special topic in factorial experiment (week 13)

33 Grading system
Grade scale (0 – 100): A > 80; B – D → 45 – 80 (normal distribution); E < 45
Grade composition: Assignment 30; Mid-term; Final Exam 40; Practical Work 33

34 Terminology
[Diagram: input variable → output variable]

35 Terminology
Variable: a characteristic that varies (e.g., weight, body temperature, bill length).
Treatment/input/independent variable: a condition or set of conditions applied to experimental units, set at predetermined levels decided by the experimenter. The variable that the experimenter controls or modifies: what you manipulate. May involve a single factor or ≥ 2 factors.

36 Terminology
Factors: another name for the independent variables of an experimental design.
An explanatory variable whose effect on the response is a primary objective of the study.
A variable upon which the experimenter believes that one or more response variables may depend, and which the experimenter can control.
An explanatory variable that can take any one of two or more values. The design of the experiment will largely consist of a policy for determining how to set the factors in each experimental trial.

37 Terminology
Levels or classifications: the subcategories of the independent variable used in the experimental design; the different values of a factor.
Dependent/response/output variable: a quantitative or qualitative variable that represents the variable of interest; the response to the different levels of the independent variables. A characteristic of an experimental unit that is measured after treatment and analyzed to assess the effects of treatments on experimental units.

38 Terminology
Treatment factor: a factor whose levels are chosen and controlled by the researcher to understand how one or more response variables change in response to varying levels of the factor.
Treatment design: the collection of treatments used in an experiment.
Full factorial treatment design: a treatment design in which the treatments consist of all possible combinations involving one level from each of the treatment factors, as in the sketch below.
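
A small sketch of enumerating such a design; the factor names and levels here are hypothetical examples, not from the slides.

```python
from itertools import product

# Hypothetical treatment factors and their levels
factors = {
    "nitrogen": [0, 50, 100],       # kg/ha
    "variety": ["A", "B"],
    "irrigation": ["low", "high"],
}

# Full factorial: one level from each factor, all possible combinations
treatments = list(product(*factors.values()))
print(f"{len(treatments)} treatment combinations")  # 3 x 2 x 2 = 12
for t in treatments:
    print(dict(zip(factors.keys(), t)))
```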

39 Terminology
Experimental unit: the unit of the study material to which a treatment is applied; the smallest unit of the study material sharing a common treatment; the physical entity to which a treatment is randomly assigned and independently applied; a person, object or some other well-defined item to which a treatment is applied.
Observational unit (sampling unit): the smallest unit of the study material for which responses are measured; the unit on which a response variable is measured. There is often a one-to-one correspondence between experimental units and observational units, but that is not always true.

40 Basic principles
Comparison/control
Replication
Randomization
Stratification (blocking)

41 Comparison/control
Good experiments are comparative:
Comparing the effect of different nitrogen dosages on rice yield
Comparing the potential yield of cassava clones
Comparing the effectiveness of pesticides
Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).

42 Replication
Applying a treatment independently to two or more experimental units; the number of experimental units for which responses to a particular treatment are observed.
Replication reduces the effect of uncontrolled variation (i.e. increases precision), allows the variability in response not associated with treatment differences to be estimated, improves the reliability of the conclusions drawn from the data, and quantifies uncertainty, as the sketch below illustrates.
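
A quick simulation of this point, under assumed numbers: the standard error of a treatment mean is sigma / sqrt(n), so it shrinks as replicates are added.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0   # assumed unit-to-unit standard deviation

for n in (2, 4, 8, 16):
    se = sigma / np.sqrt(n)   # theoretical standard error of a mean of n replicates
    # Empirical check: spread of many simulated treatment means
    means = rng.normal(10.0, sigma, size=(10_000, n)).mean(axis=1)
    print(f"n={n:2d}  theoretical SE={se:.3f}  simulated SE={means.std():.3f}")
```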

43 Replication

44 Randomization
Random assignment of treatments to experimental units.
Experimental subjects (“units”) should be assigned to treatment groups at random. At random does not mean haphazardly: one needs to explicitly randomize using a computer, or coins, dice or cards, e.g. as in the sketch below.
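
A minimal sketch of such explicit randomization, here producing a completely randomized layout like the one on the following slides; the 16 units and treatment labels are illustrative.

```python
import random

units = list(range(1, 17))               # 16 experimental units (e.g. cage positions)
treatments = ["A", "B", "C", "D"] * 4    # 4 treatments x 4 replications

random.seed(42)             # record the seed so the layout is reproducible
random.shuffle(treatments)  # every assignment of treatments to units equally likely

for unit, trt in zip(units, treatments):
    print(f"unit {unit:2d} -> treatment {trt}")
```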

45 Why randomize?
Allows the observed responses to be regarded as random samples from the appropriate populations.
Eliminates the influence of systematic bias on the measured values.
Controls the role of chance.
Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.

46 Stratification (Blocking)
Grouping similar experimental units together and assigning different treatments within such groups of experimental units.
A technique used to eliminate the effects of selected confounding variables when comparing treatments.
If you anticipate a difference between morning and afternoon measurements:
Ensure that within each period, there are equal numbers of subjects in each treatment group.
Take account of the difference between periods in your analysis.

47 Completely randomized design (4 treatments x 4 replications)
Cage positions: each of the 16 cages is equally likely to receive any treatment.
[Figure: build-up of a 4 x 4 grid of cage positions with treatments A, B, C and D each assigned to 4 cages completely at random, e.g. as generated by the sketch after slide 44]

64 Randomized block design (4 treatments x 4 blocks = 16 experimental units)
Cage positions: the cages are grouped into 4 blocks, and each block receives all of treatments A, B, C and D in a random order.
[Figure: build-up of the blocked layout, one treatment per cage within each block; a sketch of the within-block randomization follows]
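
A sketch of that within-block randomization, assuming (for illustration) that each block is one row of cage positions.

```python
import random

random.seed(3)
treatments = ["A", "B", "C", "D"]

for block in range(1, 5):
    # Each block receives every treatment once, in an independently random order
    order = random.sample(treatments, k=len(treatments))
    print(f"block {block}: {' '.join(order)}")
```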

81 Randomization and stratification
If you can (and want to), fix a variable, e.g. use only 8-week-old male mice from a single strain.
If you don’t fix a variable, stratify it, e.g. use both 8-week and 12-week-old male mice, and stratify with respect to age.
If you can neither fix nor stratify a variable, randomize it.

82 Experiment
Single Factor Experiment
1. Treatments consist of one factor.
2. Treatments which are factors are treated as one type.
3. There is no treatment design.
Multiple Factor Experiment (Factorial experiment)
1. Treatments consist of ≥ 2 factors.
2. We are interested in identifying interactions.
3. There is a treatment design.

83 Interactions

84 Significance test
Based on a statistical distribution which depends on the tested parameter.
Δ = true difference in the averages of two samples (the treatment effect).
H0: Δ = 0 (i.e., no effect).
Test statistic, D. If |D| > C, reject H0.
C (the critical value) is chosen so that the chance of rejecting H0 when H0 is true is 5%.
[Figure: distribution of D when Δ = 0]
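
A minimal sketch of this test with simulated data, taking D to be the two-sample t statistic and C its two-sided 5% critical value; the effect size and sample sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(11.0, 2.0, size=10)   # assumed true effect Δ = 1.0
control = rng.normal(10.0, 2.0, size=10)

d_stat, p_value = stats.ttest_ind(treated, control)          # test statistic D
c = stats.t.ppf(0.975, df=len(treated) + len(control) - 2)   # critical value C

print(f"D = {d_stat:.2f}, C = {c:.2f}, p = {p_value:.3f}")
print("reject H0" if abs(d_stat) > c else "fail to reject H0")
```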

85 Statistical power
Power: the chance that you reject H0 when H0 is false (i.e., you correctly conclude that there is a treatment effect when there really is one).

86 Power depends on…
The structure of the experiment
The method for analyzing the data
The size of the true underlying effect
The variability in the measurements
The chosen significance level (α)
The sample size
Note: we usually try to determine the sample size to give a particular power (often 80%).

87 Effect of sample size
[Figure: sampling distributions for 6 per group vs 12 per group]

88 Various effects
Desired power ↑ → sample size ↑
Stringency of statistical test ↑ (smaller α) → sample size ↑
Measurement variability ↑ → sample size ↑
Treatment effect ↑ → sample size ↓

89 Determining sample size
The things you need to know:
Structure of the experiment
Method for analysis
Chosen significance level, α (usually 5%)
Desired power (usually 80%)
Variability in the measurements (if necessary, perform a pilot study)
The smallest meaningful effect
A sketch of the calculation follows.
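
A sketch of a standard normal-approximation calculation for comparing two means, using the quantities listed above; the variability and smallest meaningful effect are assumed values.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
sigma = 2.0   # measurement standard deviation, e.g. from a pilot study (assumed)
delta = 1.5   # smallest meaningful difference between group means (assumed)

z_alpha = norm.ppf(1 - alpha / 2)   # ≈ 1.96 for a two-sided 5% test
z_beta = norm.ppf(power)            # ≈ 0.84 for 80% power

# n per group for a two-sample comparison of means
n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(f"about {n_per_group:.1f} subjects per group (round up in practice)")
```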

90 Reducing sample size
Reduce the number of treatment groups being compared.
Find a more precise measurement (e.g., average time to effect rather than proportion sick).
Decrease the variability in the measurements:
Make subjects more homogeneous.
Use stratification.
Control for other variables (e.g., weight).
Average multiple measurements on each subject.

