Slide 1: Lecture 3 - Theory and Measurement: Causation, Validity and Reliability
Slide 2: Assignment 1
- Stating a research problem.
- Providing a short justification for the problem.
- Providing a few hypotheses that come from the research problem.
Slide 3: Today's Lecture
- Discussion of causation
- Discussion of validity
- Discussion of reliability
- Time permitting, issues in data preparation and error-checking
Slide 4: Reminder from Lectures 1 and 2: Causation versus Correlation
- Correlation: a non-directional relationship between two variables. An increase in X is associated with an increase in Y, but this could equally be stated as an increase in Y associated with an increase in X (the symmetry is illustrated in the sketch after this slide).
- Causation: a directional relationship between at least two variables. An increase in X leads to an increase in Y, but the reverse may or may not be true.
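The symmetry of correlation can be checked directly. Below is a minimal sketch, assuming Python with NumPy and entirely hypothetical age and income values; it shows only that the correlation coefficient is identical in both directions, not anything about real data.

```python
import numpy as np

# Hypothetical data: age in years and annual income (in $1000s) for ten people.
age = np.array([22, 25, 30, 35, 40, 45, 50, 55, 60, 65])
income = np.array([28, 35, 42, 50, 58, 61, 70, 72, 75, 78])

# Pearson's r is symmetric: corr(X, Y) equals corr(Y, X).
r_xy = np.corrcoef(age, income)[0, 1]
r_yx = np.corrcoef(income, age)[0, 1]
print(r_xy, r_yx)  # identical values; correlation alone says nothing about direction
```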
Slide 5: Causation
- The key to causation is directionality.
- You must be able to establish directionality theoretically, methodologically, or both.
Slide 6: Porter's (1997) Three Criteria for Cause
- The independent variable must precede the dependent variable.
- The independent variable must be related to the dependent variable.
- There must be no third variable that could explain why the independent variable is related to the dependent variable.
Slide 7: Example: Age and Income
- We know they are correlated, so does age cause income to increase?
- We know that income cannot 'cause' age.
- They certainly seem related and the direction seems clear, so is it not clear that age causes income to increase?
- The "third variable" problem: age is related to education, job experience, and other factors.
Slide 8: Five Approaches to Quantitative Research and Implications for Causality
- Descriptive
- Associational
- Comparative
- Quasi-experimental
- Randomized experimental
Slide 9: Research Types and Causality: Descriptive
- Summarizes data.
- Statistics: histograms, means, percentages.
- Cannot show causality.
Slide 10: Research Types and Causality: Associational
- Used only to relate variables.
- Predictions are made only to show that a relationship exists.
- Statistics: correlation, multiple regression (see the sketch after this slide).
- Regression can be used, to a limited degree, to infer causality.
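To make the associational toolkit concrete, here is a minimal sketch of a multiple regression fit by ordinary least squares, assuming Python with NumPy and simulated, hypothetical predictors; the coefficients describe associations, not causes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical predictors and outcome.
age = rng.uniform(20, 65, n)
education = rng.uniform(10, 20, n)                               # years of schooling
income = 5 + 0.8 * age + 2.0 * education + rng.normal(0, 10, n)  # simulated outcome

# Multiple regression: income ~ intercept + age + education.
X = np.column_stack([np.ones(n), age, education])
coefs, _, _, _ = np.linalg.lstsq(X, income, rcond=None)
print(coefs)  # estimated intercept and slopes: a relationship, not a causal claim
```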
Slide 11: Research Types and Causality: Comparative
- Compares two or more groups.
- Looks for differences between groups.
- Statistics: t-tests, ANOVA (inferential statistics); see the sketch after this slide.
- Not well suited for establishing cause because it does not meet Porter's (1997) third condition (extraneous variables).
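As a concrete illustration of the comparative statistics named above, the following is a hedged sketch assuming Python with SciPy and simulated group scores; the group means, spreads, and sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(100, 15, 50)  # e.g., scores for group A
group_b = rng.normal(105, 15, 50)  # e.g., scores for group B
group_c = rng.normal(110, 15, 50)  # e.g., scores for group C

# Independent-samples t-test: is there a difference between two group means?
t_stat, p_two_groups = stats.ttest_ind(group_a, group_b)

# One-way ANOVA: is there any difference among three or more group means?
f_stat, p_three_groups = stats.f_oneway(group_a, group_b, group_c)

print(t_stat, p_two_groups)
print(f_stat, p_three_groups)
```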
Slide 12: Research Types and Causality: Quasi-Experimental
- Compares groups.
- 'Quasi-experimental' because it does not have random assignment to groups.
- Can examine causality.
- Statistics: t-tests, ANOVA (inferential statistics).
Slide 13: Research Types and Causality: Randomized Experimental
- Used to determine causes.
- Compares groups.
- Has random assignment to groups (see the sketch after this slide).
- The best way to determine exact causes.
- Statistics: t-tests, ANOVA (inferential statistics).
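The defining step of the randomized experiment is random assignment itself. A minimal sketch, assuming Python with NumPy and a hypothetical pool of 40 participant IDs:

```python
import numpy as np

rng = np.random.default_rng(7)
participants = np.arange(40)  # hypothetical participant IDs

# Random assignment: shuffling breaks any systematic link between group
# membership and pre-existing participant characteristics (on average).
rng.shuffle(participants)
treatment_group = participants[:20]
control_group = participants[20:]

print(treatment_group)
print(control_group)
```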
Slide 14: In Summary
- The type of research you do affects your ability to describe causality.
- Whenever possible, choose a research method that gives you the most explanatory power.
Slide 15: Validity in Research: The 'Quality' or Merit of Research
Slide 16: Validity: Internal, External, Measurement
- Internal validity: "the approximate validity with which we can infer that a relationship is causal" (Cook and Campbell 1979).
- External validity: "external validity asks the question of generalizability: to what populations, settings, treatment variables, and measurement variables can this effect be generalized?" (Campbell and Stanley 1966).
- Measurement validity: Do our measures capture what we want them to capture?
Slide 17: Internal Validity
Two major threats to internal validity (i.e., to whether our study is causal):
- Equivalence of groups on participant characteristics.
- Control of extraneous experience or environment variables.
Slide 18: Equivalence of Groups
- If looking at a specific cause (X affects Y), the groups must not vary significantly on other key variables.
- Example: looking at the effect of computer use on intelligence. What if computer users and non-computer users differ on employment, age, education, etc.?
Slide 19: Control of Extraneous Experience or Environment Variables
- If looking at a specific cause (X affects Y), then no group can receive unknown stimuli or information that could affect the outcome.
- The problem is particularly troublesome if it affects groups differentially.
- Example: a study of two classrooms, one with information technology and one without. What if one of the classes also has teaching assistants who help the students?
Slide 20: Other Types of Internal Validity Problems
- Statistical regression: because of statistical variation, some individuals may be placed in the wrong group (extremes regress to the mean); see the simulation sketch after this slide.
- Experimental mortality: some individuals 'leave' the study; if this happens systematically for certain groups, it is a problem.
- Selection: the process for assigning participants to different groups.
- Interactions with participant assignment: biases in assignment to groups can also interact with other threats (e.g., environmental factors that differentially affect certain individuals who were not randomly assigned to groups).
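Regression to the mean is easy to see in simulation. The sketch below assumes Python with NumPy and arbitrary score parameters: people selected for extreme scores on a first test tend to score closer to the mean on a retest, purely because of random error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_score = rng.normal(100, 10, n)        # each person's stable 'true' score
test_1 = true_score + rng.normal(0, 5, n)  # observed score = true score + random error
test_2 = true_score + rng.normal(0, 5, n)  # independent random error on the retest

# Select the 'extreme' group: the top 5% of scores on the first test.
extreme = test_1 > np.quantile(test_1, 0.95)

print(test_1[extreme].mean())  # well above 100, partly because of lucky error
print(test_2[extreme].mean())  # closer to 100: the extremes regress toward the mean
```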
Slide 21: External Validity
How generalizable is a given study? Two major types:
- Population external validity
- Ecological external validity
Slide 22: Population External Validity
- Does the actual sample of participants represent the theoretical or target population?
- To evaluate this, you must know:
  - the theoretical population
  - the accessible population
  - the sampling design
  - the selected sample
  - the actual sample that completes the study
Slide 23: Ecological External Validity
- Are the conditions, settings, procedures, questions, etc. representative of real life?
- Ecological external validity is often in competition with experimental controls that attempt to isolate specific variables.
- Example: a study of sharing behavior in P2P-like systems (Cheshire 2005).
Slide 24: Why Care About External Validity?
- The 1936 Literary Digest poll: Franklin Roosevelt was predicted to lose the presidential election by a landslide.
- Oops: he won by a landslide.
- How could this happen? The sample was selected from automobile registrations and telephone directories, in the middle of the Great Depression.
Slide 25: Related Point: Outliers in a Sample
- What is the best undergraduate major if you want a high income (UNC-Chapel Hill survey)? Geography was #1.
- Maybe not time to switch majors just yet: one outlier, Michael Jordan, accounted for the huge skew in average salaries for graduates (roughly $80 million/year); see the sketch after this slide.
- Key point: make every effort to ensure your sample is generalizable to the population of interest. Non-representative samples will lead to inaccurate conclusions!
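The arithmetic behind the outlier example is simple to reproduce. A hedged sketch with entirely made-up salary figures (not the actual UNC survey data), showing how a single extreme case drags the mean while the median stays representative:

```python
import numpy as np

# Hypothetical: 99 graduates earning $45,000 each and one $80,000,000 outlier.
salaries = np.array([45_000] * 99 + [80_000_000])

print(np.mean(salaries))    # about $844,550: wildly unrepresentative of a typical graduate
print(np.median(salaries))  # $45,000: far closer to the typical case
```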
Slide 26: Measurement Validity
- Deals with whether the variables are appropriately defined and representative of the concepts or constructs under investigation. Also called construct validity.
- Examples: How do you measure life happiness? How do you measure technical proficiency? How do you measure one's social network?
Slide 27: Example of a Measurement Validity Problem
- The operational definition of 'supervision' is a supervisor being 10 feet or less from a worker (example from Cook and Campbell 1979).
- Problem: defined this way, the measure may capture the construct of 'stress' rather than just 'supervision'.
Slide 28: Validity: Summary
- Internal validity: Has the causal link between our concepts (or variables) been established?
- External validity: Is the study generalizable, and to what group(s)?
- Measurement validity: Do our measures capture what we want them to capture?
Slide 29: Reliability in Research
Slide 30: Reliability
- Reliability deals with the consistency of your research instrument (e.g., survey questions, experimental manipulations).
Slide 31: Reliability
- Are the findings (or a specific measure) consistent if you were to do the study over again?
- A study can be reliable but not valid; however, it cannot be valid unless it is reliable. Thus, reliability is absolutely required.
- Validity is equally important, but the degree of validity (such as external validity) may not be very high, depending on the nature of the study.
Slide 32: Reliability: The Problem of Error
- Error is the difference between the observed score and the 'true' score.
- Random error occurs:
  - due to observers
  - due to individual variation (age, mood, etc.)
  - due to inconsistent situations during data collection (e.g., a survey on patriotism right after 9/11)
Slide 33: Methods of Measuring Reliability
- Split-half or item performance: analyze half of the survey/instrument and compare it to the overall analysis to see if it is consistent.
- Cronbach's alpha is a related and common way to measure reliability (correlating performance on each item with the overall score); see the sketch after this slide.
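One common formula for Cronbach's alpha is k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch, assuming Python with NumPy and a fabricated respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-item survey answered by 6 respondents (1-5 Likert scores).
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(cronbach_alpha(scores))  # closer to 1.0 means more internally consistent items
```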
Slide 34: Three More Methods of Measuring Reliability
- Test-retest: administer the test to the same group at different times and correlate the two scores.
- Multiple or parallel forms: mix the same items on a survey and give it to the same group twice.
- Inter-rater reliability: agreement between different interviewers or coders on the same subjects/responses.
A sketch of test-retest correlation and an inter-rater statistic follows this slide.
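Two of these methods reduce to short computations. The sketch below assumes Python with NumPy, invented scores for the test-retest example, and invented category codes for the inter-rater example; Cohen's kappa is used here as one common inter-rater statistic, though the slide does not name a specific one.

```python
import numpy as np

# Test-retest: the same group measured twice; reliability is the correlation.
scores_time_1 = np.array([10, 14, 9, 17, 12, 15, 11, 16])
scores_time_2 = np.array([11, 13, 10, 18, 12, 14, 12, 15])
print(np.corrcoef(scores_time_1, scores_time_2)[0, 1])

# Inter-rater reliability via Cohen's kappa: agreement corrected for chance.
def cohen_kappa(rater_a, rater_b):
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)  # raw agreement rate
    # Chance agreement: product of the raters' marginal proportions, summed over categories.
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

codes_rater_1 = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])
codes_rater_2 = np.array([1, 2, 3, 3, 1, 2, 3, 2, 1, 2])
print(cohen_kappa(codes_rater_1, codes_rater_2))
```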
Slide 35: Summary: Reliability
- Reliability deals with the consistency of your measures. If you can show that your measures are consistent, reliability is covered.
Slide 36: Class Survey: Data Preparation and Error-Checking