1
Research Design Quantitative
2
Symbolic Representations of Quantitative Designs - Shorthand
R = random assignment
O = observation
X = intervention
Superscript or subscript = numbered sequence of events
Types of experimental designs: Pre-experimental, True experimental, Quasi-experimental
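A minimal Python sketch of what the R step does (the function name, subject IDs, and group labels are illustrative, not from the slides): shuffle the subject list, then deal the subjects into groups.

```python
import random

def randomly_assign(subjects, groups=("treatment", "control"), seed=None):
    """Illustrative random assignment (the R in the design shorthand)."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)  # put subjects in a random order
    # deal the shuffled subjects into the groups in round-robin fashion
    return {subject: groups[i % len(groups)] for i, subject in enumerate(pool)}

print(randomly_assign(["s1", "s2", "s3", "s4", "s5", "s6"], seed=1))
```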
3
Pre-experimental Designs
One-Shot Experimental Design
X O1
4
Pre-Experimental Design
One-Group Pretest-Posttest Design
O1 X O2
5
Pre-Experimental Designs
Static Group Comparison
X O1
__ O1
6
True Experimental Designs
Pretest-Posttest Control Group Design
R O1 X O2
R O1 __ O2
7
True Experimental Designs
Solomon Four-Group Design
R O1 X O2
R X O2
R O1 O2
R O2
8
True Experimental Designs
Posttest-Only Control Group Design
R X O1
R O1
9
True Experimental Designs
Within-Subjects Design (only one group)
X O1 X O2
10
Other Experimental Designs
Factorial Design – used when two or more different characteristics, treatments, or events are independently varied in a single study (e.g., a 2 × 2 design crossing two treatments yields four randomized groups):
R X1 X2 O1
R X1 O1
R X2 O1
R O1
Nested Design – used when the subjects are aggregates
11
Other Experimental Designs
Repeated measures design with counterbalancing (also called crossover design) – used when more than one treatment is administered to each subject in sequence, but the sequence is varied (see the sketch after this list)
Multivariate design – used when there are multiple variables and complex relationships among the variables
Randomized clinical trials – used with a large number of subjects to test the results of a treatment and compare them with those of a control group that has not received the treatment. The study is carried out in multiple geographic locations and is "double-blind".
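As a small illustration of counterbalancing treatment order in a crossover design (the treatment labels are hypothetical), the sketch below enumerates the possible administration orders so that subjects can be split evenly across them.

```python
from itertools import permutations

treatments = ["A", "B"]                  # hypothetical treatments in a crossover study
orders = list(permutations(treatments))  # every possible administration order
print(orders)                            # [('A', 'B'), ('B', 'A')]: assign half the subjects to each order
```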
12
Strength of Experimental Designs
They eliminate factors influencing the dependent variable other than the cause (the independent variable) being studied, which gives the researcher confidence in inferring causal relationships.
Criteria for causality (Paul Lazarsfeld):
The cause must precede the effect in time
There must be an empirical relationship between the presumed cause and the presumed effect
The relationship cannot be explained as being due to a third variable
13
Weakness of Experimental Designs
Many variables are not amenable to experimental manipulation, such as human or environmental characteristics
Ethics may prohibit manipulation of some variables
It is simply impractical to manipulate some variables
Laboratory experiments are artificial
The Hawthorne effect may occur
14
Ways to Overcome “Unfairness”
Use alternative interventions
Use a placebo
Use the standard method of care
Use different doses or intensities
Use delayed treatment – give the same treatment after data have been collected for all groups
15
Quasi-Experimental Design
These designs lack at least one of the three properties that characterize true experiments:
Manipulation of the independent variable must always be present
There are usually control groups
Most of the time, the control groups are not randomly selected – these are called non-equivalent control groups
16
The Nonrandomized Control Group Design
O1 X O2
O1 __ O2
17
Reversed Treatment Design with Pre and Posttest – One Group
O1 X O2
O1 X O2
18
Nonequivalent Dependent Variables Design - One Group
DV1: O1 X O2 (DV1 changed)
DV2: O1 X O2 (DV2 not changed)
19
Simple Time Series
Jan Feb Mar Apr X May Jun Jul Aug
20
Control Group Time Series
O1 O2 X O3 O4
O1 O2 __ O3 O4
21
Reversal Time Samples Design and Alternating Treatment Design
X O1 __ O2 X O3
X O1 __ O2 X O3
22
Strengths and Weaknesses of Quasi-experimental Designs
Strengths
Practical
Feasible
Generalizable to a certain extent
Weaknesses
The absence of control makes it possible that some other external factor caused the effect, that selection influenced the effect, or that maturation influenced the effect
23
Correlational Studies
These studies examine the relationships between variables. They can describe a relationship, predict a relationship or test a relationship proposed by a theory. They do not test causality. They do not test differences between two or more groups. They examine a single group or situation in terms of two or more variables
24
Types of Correlational Designs
Descriptive correlational design – describes two or more variables and the relationships among the variables
Predictive studies – used to facilitate decision-making about individuals, such as admission of students to nursing school; retrospective data from other groups are used to predict the behavior of a similar group (a minimal prediction sketch follows below)
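A minimal sketch of the prediction idea, using entirely hypothetical retrospective data (prior GPA and a later exam score): fit a least-squares line to the earlier cohort and use it to predict a new applicant's score.

```python
import numpy as np

# hypothetical retrospective data from an earlier cohort
prior_gpa  = np.array([2.8, 3.1, 3.4, 3.6, 3.9])
exam_score = np.array([74, 78, 83, 86, 92])

slope, intercept = np.polyfit(prior_gpa, exam_score, 1)  # simple linear fit
print(intercept + slope * 3.5)                           # predicted score for an applicant with GPA 3.5
```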
25
Types of Correlational Designs
Retrospective studies – a manifestation of some phenomenon existing in the present is linked to phenomena that occurred in the past
Prospective studies – examine a presumed cause and then go forward in time to the presumed effect; this is more costly and may require a long wait, but the evidence for the relationship is stronger
26
Types of Correlational Designs
Theory-testing correlational designs – used to test propositions in a theory
Partial correlational design – mathematically removes the influence of an intervening variable to study the relationship between the two remaining variables (see the formula sketch after this list)
Cross-lagged panel design – collects data on two variables at two or more time periods to support the inference that variable 1 occurs before variable 2
Path analysis design – used to test the fit of a hypothesized causal model among several variables
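For reference, the usual first-order partial correlation formula, holding variable 3 (the intervening variable) constant, is:

$$ r_{12\cdot 3} = \frac{r_{12} - r_{13}\, r_{23}}{\sqrt{\left(1 - r_{13}^{2}\right)\left(1 - r_{23}^{2}\right)}} $$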
27
Strengths and Weaknesses of Correlational Designs
Strengths
Various constraints often limit the use of true or quasi-experimental designs
Causal relationships may not be important
A larger amount of data can be gathered than through an experimental design
They are strong in realism and address practical problems
Weaknesses
Inability to actively manipulate the IV
Inability to randomly assign individuals to treatments
Possible faulty interpretation of results
28
Simple Ex Post Facto Design
This shows the possible effects of an experience that occurred (or of a condition that was present) prior to the research.
Experience O1
__________ O1
29
Descriptive Study Designs
These studies are conducted to examine variables in naturally occurring situations. They look at relationships between variables as part of the overall descriptions but they do not examine the type or degrees of relationships. They protect against bias through conceptual and operational definitions of variables, sample selection, valid and reliable instruments, and control of the environment in which the data are collected.
30
Types of Descriptive Studies
Exploratory Study
When little is known about the phenomenon of interest, an exploratory study is used to build basic knowledge and to describe or identify the phenomenon
The approach is loosely structured and may include both quantitative and qualitative aspects, but it is still considered quantitative because the data obtained are quantified
There are usually no hypotheses
31
Types of Descriptive Studies
Purely descriptive studies – examine the variables within a particular situation with a single sample of subjects
Comparative descriptive studies – examine the differences in variables between two or more groups that occur in a particular situation
Time-dimensional studies
Prospective and retrospective
Longitudinal – changes in the same subjects over time
Cross-sectional – changes in groups of subjects at different stages of development, measured simultaneously
Trend – samples of the population taken at pre-set intervals
Event partitioning
32
Descriptive Study Designs
Case study design
Investigation of an individual, group, institution, or other social unit to determine the dynamics of why the subject thinks, behaves, or develops in a particular manner. It requires detailed study over time. Any data collection method can be used; content analysis is often a major choice.
Strength – the depth of the study; it is not superficial
Weakness – the subjectivity of the researcher
33
Descriptive Study Designs
Survey Design
Research activity that focuses on the status quo of some situation. Information is collected directly from the group that is the object of the investigation. Purposes can be to:
describe – people's characteristics, attitudes, or beliefs; sub-samples may be compared
explain – a variable of interest by examining its relationship to other variables; nothing is manipulated
predict – people report their plans or intentions and extrapolations can be made
explore – use probing, loosely formulated questions to find out background data about subjects, to gain information for formulating research questions or hypotheses, or to help develop theory for qualitative research
34
Descriptive Designs
Types of survey techniques
Personal interviews
Telephone interviews
Written questionnaires – self-administered
Internet questionnaires – self-administered
Strengths and Weaknesses of Surveys
Weaknesses – superficiality, reliance on ex post facto data, and the time and resources required
Strengths – flexibility and broad scope
35
Evaluation Research
A highly applied form of research that looks at how well a program, practice, or policy is working. Its purposes are:
To evaluate the success of a program – not why it succeeds, but whether it is succeeding
To address practical problems for persons who must make decisions
36
Evaluation Research cont.
The classical approach
Determine the objectives of the program
Develop means of measuring attainment of the objectives
Collect data
Interpret the data vis-à-vis the objectives
Goal-free evaluation
Evaluation of the outcomes of a program in the absence of information about intended outcomes
Must describe the repercussions of the program or practice on various components of the overall system
37
Categories of Evaluation
Formative evaluation – the ongoing process of providing evaluation feedback in the course of developing a program or policy – the goal is to improve the program. It is also called Process or Implementation Evaluation. Summative evaluation – the worth of a program after it is already in operation – to help decide whether it should be discarded, replaced, modified or continued. It describes the effectiveness of a program.
38
Summative Evaluation – also called Outcome Analysis
Comparative evaluation – assesses the worth of two or more programs or procedures
Absolute evaluation – assesses the effects of a program in and of itself, with no contrast with other programs; called criterion-referenced because it measures against criteria
Impact analysis – looks at the efficiency of the program according to the subgroups for whom it is most effective
Cost analysis (see the formula sketch after this list)
Cost-benefit – money estimates for both costs and benefits
Cost-effectiveness – cost to produce the impact
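Sketched in the simplest terms, the two cost-analysis ratios are usually computed as:

$$ \text{Benefit-cost ratio} = \frac{\text{total benefits (in dollars)}}{\text{total costs (in dollars)}} \qquad \text{Cost-effectiveness ratio} = \frac{\text{total cost}}{\text{units of outcome achieved}} $$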
39
Needs Assessment
Similar to evaluation research, it provides informational input in a planning process. It is usually done by an agency or group with a service component. It helps in establishing priorities. There are three approaches:
Key informant
Survey
Indicators
40
Evaluation Research Weaknesses
Threatening to individuals
Seen as a waste of time
Role conflicts if the researcher is in-house
Censorship by in-house "politicians"
When some goals are satisfied and others are not, how is the whole program evaluated?
Goals may be set for the future, so the outcome cannot be seen now
41
Other Types of Research
Secondary Analysis – studying data that have been previously gathered
Strength – it is efficient and economical
Weakness – variables may have been underanalyzed
You may want to look at different relationships among variables
You may want to change the unit of analysis
You may want data from a sub-sample
You may want to change the method of analysis
Replication Studies
42
Other Types of Research
Meta-analysis – merging findings from many studies that have examined the same phenomenon and then using statistics to determine overall findings, i.e., looking for effects (a pooled-effect formula is sketched below)
Meta-synthesis – merging findings (themes) from qualitative studies
Methodological studies – designed to develop the validity and reliability of instruments that measure constructs/variables; they are controlled investigations of ways to obtain, organize, and analyze data
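One common way the "statistics to determine overall findings" are computed is a fixed-effect, inverse-variance weighted mean of the study effect sizes (sketched here; random-effects models are another option), where $d_i$ is the effect size from study $i$ and $v_i$ is its variance:

$$ \bar{d} = \frac{\sum_i w_i d_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i} $$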
43
Research Design Considerations
Research Control – the design should maximize the control an investigator has over the research situation and the variables. Rigor in quantitative designs is exerted through the methodology used, whereas rigor in qualitative designs is exerted through bracketing and intuiting. Quantitative control requires:
Constancy of conditions – the conditions under which the data are collected must be as similar as possible
Environment
Time, day, year
One interviewer – if that is not possible, minimize the variability
Communication and treatment should be constant (the same)
44
Research Control cont.
Manipulation as control – the ability to manipulate the independent variable is very powerful
Assures that the conditions under which information was obtained were constant or at least similar – this cannot be done with ex post facto research
Allows more difficult treatments because of the control the researcher can exercise over them
Factorial designs can be used to test two independent variables at the same time, as well as their effects
45
Research Control cont.
Comparison groups as control – scientific knowledge requires some type of comparison; even case studies have an implied reference point ("normal")
Randomization as control – if you cannot randomize the subjects, then at least vary the order in which questions are asked, especially for attitude questions
46
Research Control cont.
Control over extraneous individual characteristics of subjects
Use only homogeneous subjects
Include extraneous variables as independent variables – randomly assign subjects to sub-blocks
Matching – use knowledge of subjects from comparison groups; matching on more than three characteristics is difficult, and matching may be done after the fact
Use statistical procedures (ANOVA) after the fact (see the sketch after this list)
Randomization
Use subjects themselves as their own controls
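The slide names ANOVA as an after-the-fact statistical control; a minimal sketch of one common variant, an ANCOVA-style model that adds the extraneous variable as a covariate, is shown below (the data frame, variable names, and values are hypothetical).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical data: an outcome, a treatment group, and an extraneous characteristic (age)
df = pd.DataFrame({
    "outcome": [12, 15, 14, 20, 22, 19, 13, 21],
    "group":   ["control", "control", "control", "treatment",
                "treatment", "treatment", "control", "treatment"],
    "age":     [34, 41, 29, 38, 45, 31, 36, 40],
})

# estimate the group effect on the outcome while statistically controlling for age
model = smf.ols("outcome ~ C(group) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # ANOVA table for the fitted model
```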
47
Research Design Considerations
Validity – the measure of truth or accuracy of a claim
Internal validity shows that the findings are due to the independent variable. It is maintained by using the controls on the previous slides and by preventing threats to internal validity
It is assumed that the IV causes the DV
Threats to internal validity are other possible explanations for the changes in the DV
48
Research Design Considerations
Threats to internal validity
History – external events that affect the dependent variable
Selection – biases from pre-treatment differences
Maturation – changes within the subject over time that are not due to the treatment
Testing – the effect of taking a pretest on posttest scores
Instrumentation – changes made by the researcher or mechanical changes in the measuring instrument
Mortality – loss of subjects during the study
Other factors – such as statistical regression
49
Research Design Considerations
External validity – the generalizability of research findings to other settings or samples, specifically to the population from which the sample came; there is no problem generalizing to the accessible population. Threats to external validity are:
Population factors
The Hawthorne effect – awareness of participation causes different behavior
Novelty effect – the newness of the treatment might cause an alteration in behavior
50
Research Design Considerations
Ecological factors
Interaction between history and treatment effects
Interaction between selection and treatment – too many decline to participate
Interaction between setting and treatment – some settings resist
Experimenter factors – the research is affected by characteristics of the researcher
Paradigm effect – basic assumptions and ways of conceptualizing
Loose protocol effect – step-by-step detail not planned
Mis-recording effect – especially if subjects record their own responses
Unintentional expectancy effect – influences subjects' responses
Analysis effect – deciding how to analyze after the data are collected
Fudging effect – reporting effects not obtained