Chapter 20 General Designs of Research
The implicit purpose of all research design is to impose controlled restrictions on observations of natural phenomena. The research design tells the investigator, in effect: Do this and this; don't do that or that; be careful with this; ignore that; and so on.
Conceptual Foundations of Research Design Recall that a relation is a set of ordered pairs. Recall, too, that a Cartesian product is the set of all possible ordered pairs of two sets. The right side of Figure 20.1, labeled (b), shows the same idea in ordered-pair relation form: Figure 20.1(b) is a subset of the Cartesian product, A*B. Research designs are subsets of A*B, and the design and the research problem define or specify how the subsets are set up.
Conceptual Foundations of Research Design In sum, a research design is some subset of the Cartesian product of the independent and the dependent variables. Whenever possible, it is desirable to have “complete” designs (a complete design is a cross partition of the independent variables) and to observe the two basic conditions of disjointness and exhaustiveness. That is, the design must not have a case (a participant's score) in more than one cell of a partition or cross partition, and all the cases must be used up.
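To make the partition conditions concrete, here is a minimal sketch in Python (hypothetical variable names, levels, and scores; standard library only). It builds the Cartesian product of the levels of two independent variables and checks that the resulting cross partition is disjoint (no case in more than one cell) and exhaustive (every case used up):

```python
from itertools import product

# Hypothetical 2 x 2 cross partition: two independent variables,
# each with two levels (names are illustrative only).
a_levels = ("A1", "A2")          # e.g., treatment vs. no treatment
b_levels = ("B1", "B2")          # e.g., method 1 vs. method 2
cells = list(product(a_levels, b_levels))   # the Cartesian product of levels

# Each case (participant) carries its levels on A and B plus a score.
cases = [
    {"id": 1, "A": "A1", "B": "B1", "score": 12},
    {"id": 2, "A": "A1", "B": "B2", "score": 15},
    {"id": 3, "A": "A2", "B": "B1", "score": 9},
    {"id": 4, "A": "A2", "B": "B2", "score": 11},
]

# Assign every case to the cell matching its levels.
partition = {cell: [c["id"] for c in cases if (c["A"], c["B"]) == cell]
             for cell in cells}

# Disjointness: no case appears in more than one cell.
all_ids = [i for ids in partition.values() for i in ids]
assert len(all_ids) == len(set(all_ids)), "a case appears in two cells"

# Exhaustiveness: every case is used up (falls in some cell).
assert set(all_ids) == {c["id"] for c in cases}, "some case is unassigned"

print(partition)
```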
A Preliminary Note: Experimental Designs and Analysis of Variance As usually conceived, the rationale of research design is based on experimental ideas and conditions. Research designs are also intimately linked to analysis of variance paradigms. Although there is no hard law that says that analysis of variance cannot be applied to nonexperiments, the two are most closely wedded in experimental research. This is especially so for factorial designs in which there are equal numbers of cases in the cells of the design paradigm and the participants are assigned to the experimental conditions (or cells) at random.
A Preliminary Note: Experimental Designs and Analysis of Variance When it is not possible to assign participants at random, and when, for one reason or another, there are unequal numbers of cases in the cells of a factorial design, the use of analysis of variance is questionable, even inappropriate. This is because the use of analysis of variance assumes that the correlations between or among the independent variables of a factorial design are zero. Random assignment makes this assumption tenable, since such assignment presumably apportions sources of variance equally among the cells. But random assignment can only be accomplished in experiments. In nonexperimental research, the independent variables are more or less fixed characteristics of the participants, and they are usually systematically correlated.
A Preliminary Note: Experimental Designs and Analysis of Variance Some method of analysis that takes account of the correlations among the independent variables should be used. We will see later in the book that such a method is readily available: multiple regression. Strictly speaking, if our independent variables are nonexperimental, then analysis of variance is not the appropriate mode of analysis. There are exceptions to this statement, however. For instance, if one independent variable is experimental and one nonexperimental, analysis of variance is appropriate. Moreover, since a one-way analysis of variance has only one independent variable, it can be used with a nonexperimental independent variable, though regression analysis would probably be more appropriate.
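Because the point is statistical, a small numerical sketch may help (hypothetical variables and simulated data; NumPy assumed available). Multiple regression estimates each independent variable's contribution while explicitly taking the correlation between the predictors into account:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two hypothetical nonexperimental independent variables that are
# themselves correlated (e.g., two participant characteristics).
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)       # correlated with x1
y = 2.0 + 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)  # dependent variable

# Ordinary least squares takes the x1-x2 correlation into account:
# each coefficient is the effect of one predictor holding the other constant.
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, b1, b2:", np.round(coef, 3))
print("r(x1, x2):", np.round(np.corrcoef(x1, x2)[0, 1], 3))
```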
A Preliminary Note: Experimental Designs and Analysis of Variance Similarly, if for some reason the numbers of cases in the cells are unequal (and disproportionate), then there will be correlation between the independent variables, and the assumption of zero correlation is not tenable.
The Designs Design 20.1: Experimental Group-Control Group: Randomized Participants. Campbell and Stanley (1963) call this the posttest-only control group design; Isaac and Michael (1987) call it the randomized control-group posttest-only design. The [R] before the paradigm indicates that participants are assigned randomly to the Experimental Group and the Control Group.
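A minimal sketch of Design 20.1 in code (hypothetical participant pool and made-up posttest scores; Python standard library plus SciPy assumed): participants are assigned to the two groups at random, X is given to the experimental group only, and the posttest means are then compared.

```python
import random
from scipy import stats

random.seed(42)

# Hypothetical participant pool; [R]: random assignment to the two groups.
pool = list(range(40))
random.shuffle(pool)
experimental, control = pool[:20], pool[20:]

# Hypothetical posttest scores (Y) collected after X is given to the
# experimental group only; the values here are made up for illustration.
y_experimental = [random.gauss(55, 10) for _ in experimental]
y_control = [random.gauss(50, 10) for _ in control]

# Compare the posttest means of the two groups (a t-test is one common choice).
t, p = stats.ttest_ind(y_experimental, y_control)
print(f"t = {t:.2f}, p = {p:.3f}")
```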
The Designs Design 20.1 has a number of advantages: (1) it has the best built-in theoretical control system of any design, with one or two possible exceptions in special cases; (2) it is flexible, being theoretically capable of extension to any number of groups with any number of variables; (3) if extended to more than one variable, it can test several hypotheses at one time; and (4) it is statistically and structurally elegant.
The Notion of the Control Group and Extensions of Design 20.1 The notion of the control group needs generalization. The traditional notion that an experimental group should receive the treatment not given to a control group is a special case of the more general rule that comparison groups are necessary for the internal validity of scientific research.
Design 20.2: Experimental Group-Control Group: Matched Participants The structure of Design 20.2 is the same as that of Design 20.1. The only difference is that participants are matched on one or more attributes. For the design to take its place as an “adequate” design, however, randomization must enter the picture, as noted by the small r attached to the M (for “matched”).
Matching versus Randomization An important point to remember is that randomization, when it can be done correctly and appropriately, is generally considered better than matching. It is perhaps the only method for controlling unknown sources of variance.
Matching by Equating Participants The most common method of matching is to equate participants on one or more variables to be controlled. The major advantage of this method is that it can detect small differences (increased sensitivity) by ensuring that the participants in the various groups are equal on at least the matched variables. However, an important requirement is that the variables on which participants are matched must be correlated significantly with the dependent variable.
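As an illustration (a minimal sketch with hypothetical data, assuming a single matching variable such as a pretest ability score), participants can be equated by sorting on the matching variable, pairing adjacent cases, and then randomly assigning one member of each pair to each group, as in Design 20.2:

```python
import random

random.seed(7)

# Hypothetical participants with a matching variable assumed to correlate
# with the dependent variable (e.g., a pretest ability score).
participants = [{"id": i, "ability": random.gauss(100, 15)} for i in range(20)]

# Equate by sorting on the matching variable and pairing adjacent cases.
participants.sort(key=lambda p: p["ability"])
pairs = [participants[i:i + 2] for i in range(0, len(participants), 2)]

# Within each matched pair, assign one member to each group at random
# (the "Mr" of Design 20.2: matching plus randomization).
experimental, control = [], []
for a, b in pairs:
    if random.random() < 0.5:
        a, b = b, a
    experimental.append(a)
    control.append(b)

print("experimental:", [p["id"] for p in experimental])
print("control:     ", [p["id"] for p in control])
```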
Matching by Equating Participants Matching is most useful when the variables on which participants are matched correlate higher than 0.5 or 0.6 with the dependent variable. This method of matching has two major flaws or disadvantages. First, it is difficult to know which variables are the most important to match on. The researcher should select those variables that show the lowest correlations with each other but the highest correlations with the dependent variable.
Matching by Equating Participants A second problem is that it becomes harder to find eligible matched participants as the number of variables used for matching increases. Matching also affects the generalizability of the study: the researcher can only generalize the results to other individuals having the same characteristics as the matched sample.
The Frequency Distribution Matching Method This method matches groups as wholes rather than participant by participant: each group would be statistically equal, in that the mean, standard deviation, and skewness of each group would be statistically equivalent. The number of participants lost using this technique would not be as great as the number lost using the individual-by-individual method, because each additional participant would merely have to contribute to producing the appropriate statistical measures, rather than be identical to another participant on the relevant variables.
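A quick way to see what "statistically equal" means here is to compare the groups' descriptive statistics. The sketch below uses hypothetical scores and assumes NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical scores on the matching variable for two groups formed by
# frequency-distribution matching (values are illustrative only).
group_1 = rng.normal(100, 15, size=50)
group_2 = rng.normal(100, 15, size=50)

# The groups should be equivalent as distributions, not case by case:
# compare mean, standard deviation, and skewness.
for name, g in (("group 1", group_1), ("group 2", group_2)):
    print(name, round(g.mean(), 1), round(g.std(ddof=1), 1),
          round(stats.skew(g), 2))
```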
The Frequency Distribution Matching Method The major disadvantage of the frequency distribution method occurs only when there is matching on more than one variable. For example, the means and distributions of two matching variables might be equivalent in each group, yet the participants in one group could combine those variables quite differently from the participants in the other group.
Matching by Holding Variables Constant If we need to control the variation caused by gender differences, we can hold sex constant by using only males or only females in the study. At least two problems could affect the validity of the study. The first disadvantage is that the technique restricts the size of the participant population; consequently, it may be difficult to find enough participants for the study. The second drawback is more critical: the results of the study are generalizable only to the type of participant studied, not to participants with different characteristics.
Matching by Incorporating the Nuisance Variable Into the Research Design Another way of attempting to develop equal groups is to use the nuisance or extraneous variable as an independent variable in the research design. For a variable measured on a continuous scale, using multiple regression or analysis of covariance would be preferable to analysis of variance.
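As a rough sketch of this idea (hypothetical data; NumPy assumed), a continuous nuisance variable can be entered as a predictor alongside the treatment indicator, which is the regression/analysis-of-covariance style of analysis referred to above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Hypothetical data: a continuous nuisance variable (e.g., age) and a
# randomly assigned treatment indicator (1 = experimental, 0 = control).
nuisance = rng.normal(40, 10, size=n)
treatment = rng.integers(0, 2, size=n)
y = 5.0 + 0.2 * nuisance + 3.0 * treatment + rng.normal(size=n)

# Incorporate the nuisance variable into the analysis as a predictor,
# rather than matching on it or holding it constant.
X = np.column_stack([np.ones(n), treatment, nuisance])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("treatment effect adjusted for the nuisance variable:",
      round(float(coef[1]), 2))
```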
Participant as Own Control Since each individual is unique, it is difficult if not impossible to find another individual who would be a perfect match; the closest available match is the participant serving as his or her own control under more than one condition. However, this method does not fit all applications. Some studies, such as those involving learning, are not suitable, because what the participant gains under one condition carries over to the conditions that follow.
Additional Design Extensions: Design 20.3 Using a Pretest This design adds a pretest and is used frequently to study change. An interesting and difficult characteristic of this design is the nature of the scores usually analyzed: difference, or change, scores, Ya - Yb = D. Unless the effect of the experimental manipulation is strong, the analysis of difference scores is not advisable, because difference scores are considerably less reliable than the scores from which they are calculated.
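The unreliability of difference scores can be made concrete with the classical formula for the reliability of a difference score, stated here under the usual simplifying assumption of equal pretest and posttest variances, where r_aa and r_bb are the reliabilities of the two measures and r_ab is the correlation between them:

```latex
\[
  r_{DD} \;=\; \frac{\tfrac{1}{2}\,(r_{aa} + r_{bb}) \;-\; r_{ab}}
                    {1 \;-\; r_{ab}}
\]
```

For example, if both measures have reliability .80 and correlate .60, the difference score has a reliability of only (.80 - .60) / (1 - .60) = .50.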
Additional Design Extensions: Design 20.3 Using a Pretest This design has a troublesome aspect that decreases both the internal and the external validity of the experiment. The interaction of the pretest with the experimental manipulation, that is, the combination of increased sensitivity to the issues produced by the pretest and the manipulation itself, may decrease internal validity. There is also a lack of generalizability, or external validity, in that it may be possible to generalize to pretested groups but not to unpretested groups.
Difference Scores How should the difference scores of Design 20.3 be analyzed? One would think that applying analysis of variance to the difference scores yielded by Design 20.3 and similar designs would be effective. Such analysis can be done if the experimental effects are substantial. But difference scores, as mentioned earlier, are usually less reliable than the scores from which they are calculated.
Difference Scores The generally recommended procedure is to use so-called residualized or regressed gain scores. The effect of the pretest scores is removed from the posttest scores; that is, the residual scores are posttest scores purged of the pretest influence. The significance of the difference between the means of these residual scores is then tested. All this can be accomplished with the procedure just described and a regression equation, or by analysis of covariance.
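A minimal sketch of the residualizing procedure (hypothetical data; NumPy and SciPy assumed): regress the posttest on the pretest for the whole sample, keep the residuals, and test the group difference on those residuals. Analysis of covariance, as noted above, is the more complete way to do the same job.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 60

# Hypothetical pretest (Yb) and posttest (Ya) scores for two randomized
# groups; the treatment adds a constant to the experimental posttests.
pretest = rng.normal(50, 10, size=n)
group = np.repeat([1, 0], n // 2)                   # 1 = experimental
posttest = 10 + 0.8 * pretest + 4.0 * group + rng.normal(size=n)

# Residualize: regress posttest on pretest for the whole sample, then
# keep the residuals (posttest scores purged of the pretest influence).
slope, intercept, *_ = stats.linregress(pretest, posttest)
residual_gain = posttest - (intercept + slope * pretest)

# Test the difference between the groups on the residualized scores.
t, p = stats.ttest_ind(residual_gain[group == 1], residual_gain[group == 0])
print(f"t = {t:.2f}, p = {p:.3f}")
```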
Difference Scores Even the use of residual gain scores and analysis of covariance is not perfect, however. If participants have not been assigned at random to the experimental and control groups, the procedure will not save the situation. If, however, a pretest is used, use random assignment and analysis of covariance, remembering that the results must be treated with special care.
Difference Scores The value of Design 20.4 is doubtful. Such a design might be used when one is worried about the reactive effect of pretesting, or when, due to the exigencies of practical situations, one has no other choice. It has the weakness that other possible variables may be influential in the interval between Yb and Ya.
Difference Scores Design 20.5 provides a way to possibly avoid confounding due to the interactive effects of the pretest. If the mean of Ya of Control 2 is significantly greater than the mean of Control 1, we can assume that the pretest has not unduly sensitized the participants, or that X is sufficiently strong to override a sensitization-X interaction effect.
Difference Scores Design 20.6, proposed by Solomon (1949), is a combination of Designs 20.3 and 20.1. If the Ya of the Experimental group is significantly greater than that of Control 1, and the Ya of Control 2 is significantly greater than that of Control 3, and if the results of the two experiments are consistent, we have strong evidence indeed of the validity of our research hypothesis.
Difference Scores There appear to be only two sources of weakness. One is practicability: it is harder to run two simultaneous experiments than one, and the researcher encounters the difficulty of locating more participants of the same kind. The other difficulty is statistical. Note the lack of balance among the groups: there are four actual groups, but not four complete sets of measures. Using the first two lines, that is, Design 20.3, one can subtract Yb from Ya or do an analysis of covariance. With the last two lines, one can test the Yas against each other with a t-test or F-test, but the problem is how to obtain one overall statistical approach.
Difference Scores One solution is to test the Yas of Control 2 and Control 3 against the average of the two Ybs (the first two lines), as well as to test the significance of the difference between the Yas of the first two lines. In addition, Solomon originally suggested a 2*2 factorial analysis of variance using the four sets of Ya measures, as shown in Figure 20.5. With the analysis in Figure 20.5, we can study the main effects, X versus ~X and Pretested versus Not Pretested. More interesting, we can test the interaction of pretesting and X and get a clear answer to the previous problem.
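A sketch of Solomon's 2*2 analysis on hypothetical data (pandas and statsmodels assumed available; the effect sizes are made up): treatment (X vs. ~X) is crossed with pretesting, and the analysis yields the two main effects and the pretesting-by-X interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n_per_group = 25

# Hypothetical posttest (Ya) scores for the four Solomon groups, laid out
# as a 2 x 2: treatment (X vs. ~X) crossed with pretested (yes vs. no).
rows = []
for treated in (1, 0):
    for pretested in (1, 0):
        mean = 50 + 5 * treated + 1 * pretested   # made-up effects
        for score in rng.normal(mean, 8, size=n_per_group):
            rows.append({"posttest": score,
                         "treated": treated,
                         "pretested": pretested})
data = pd.DataFrame(rows)

# 2 x 2 factorial ANOVA on the four sets of Ya scores: main effects of
# X and pretesting, plus the pretesting-by-X interaction.
model = smf.ols("posttest ~ C(treated) * C(pretested)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```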