I/O Psychology Research Methods
What is Science?
- Science: an approach that involves the understanding, prediction, and control of some phenomenon of interest.
- Scientific knowledge is:
  - Logical and concerned with understanding
  - Empirical
  - Communicable and precise
  - Probabilistic (disprove, NOT prove)
  - Objective / disinterested
Goals of Science
Example: we want to study absenteeism in an organization.
- Description: What is the current state of affairs?
- Prediction: What will happen in the future?
- Explanation: What is the cause of the phenomena we're interested in?
What is "research"?
- Systematic study of phenomena according to scientific principles.
- A set of procedures used to obtain empirical and verifiable information, from which we then draw informed, educated conclusions.
The Empirical Research Process
1. Statement of the Problem
2. Design of the Research Study
3. Measurement of Variables
4. Analysis of Data
5. Interpretation/Conclusions
Step 1: Statement of the Problem
- Theory: a statement that explains the relationship among phenomena; gives us a framework within which to conduct research.
- "There is nothing quite so practical as a good theory." (Kurt Lewin)
- Two approaches:
  - Inductive: theory building; use data to derive theory.
  - Deductive: theory testing; start with a theory and collect data to test that theory.
Step 1: Statement of the Problem
- Hypothesis: a testable statement about the status of a variable or the relationship among multiple variables.
- Must be falsifiable!
Step 1: Statement of the Problem
Types of variables:
- Independent variables (IV): variables that are manipulated by the researcher.
- Dependent variables (DV): the outcomes of interest.
- Predictors and criteria: the non-experimental counterparts of IVs and DVs.
- Confounding variables: uncontrolled extraneous variables that permit alternative explanations for the results of a study.
Moderator Variable
- A special type of IV that influences the relationship between two other variables: X -> Y, with M affecting the strength of that relationship.
- Example: gender and hiring rate, with M = type of job.
  - The relationship between gender and hiring rate may change depending on the type of job individuals are applying for.
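In practice, moderation is often tested by adding an interaction term to a regression model and checking whether that term is significant. Below is a minimal sketch of that idea in Python using the gender/job-type example; the data, variable names, and the use of an ordinary least-squares model are all hypothetical illustrations, not part of the original slides.

```python
# Hypothetical moderation sketch: does job type (M) change the relationship
# between applicant gender (X) and hiring outcome (Y)?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "hired":    [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],  # Y: 1 = hired, 0 = not hired
    "gender":   [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # X: group code
    "job_type": [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],  # M: 0 = clerical, 1 = technical
})

# "gender * job_type" expands to gender + job_type + gender:job_type;
# a significant gender:job_type coefficient is evidence of moderation.
model = smf.ols("hired ~ gender * job_type", data=df).fit()
print(model.summary())
```

With a binary outcome like hiring, a logistic model would usually be preferred; OLS is used here only to keep the sketch short.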
Mediator Variable
- A special type of IV that accounts for the relation between the IV and the DV.
- Mediation implies a causal chain in which the IV causes the mediator, which in turn causes the DV: IV -> MED -> DV.
- Example:
  - IV = negative feedback
  - MED = negative thoughts
  - DV = willingness to participate
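One common way to quantify mediation is the product-of-coefficients (indirect-effect) approach: estimate the IV-to-mediator path (a) and the mediator-to-DV path controlling for the IV (b), then examine a * b. A minimal sketch with made-up data and hypothetical variable names:

```python
# Hypothetical mediation sketch: negative feedback (IV) -> negative thoughts
# (MED) -> willingness to participate (DV).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "feedback":     [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],  # IV: 1 = negative feedback
    "neg_thoughts": [4, 1, 5, 4, 2, 1, 5, 2, 3, 1],  # MED: self-reported (1-5)
    "willingness":  [2, 5, 1, 2, 4, 5, 1, 4, 3, 5],  # DV: willingness (1-5)
})

# Path a: IV -> mediator
a = smf.ols("neg_thoughts ~ feedback", data=df).fit().params["feedback"]
# Path b: mediator -> DV, controlling for the IV
b = smf.ols("willingness ~ neg_thoughts + feedback", data=df).fit().params["neg_thoughts"]

print("indirect effect (a * b):", a * b)
```

In real analyses the indirect effect is usually reported with a bootstrap confidence interval rather than interpreted on its own.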
Moderator vs. Mediator
- A moderator variable is one that influences the strength of a relationship between two other variables.
- A mediator variable is one that explains the relationship between the two other variables.
Example
You are an I/O psychologist working for an insurance company. You want to assess which of two training methods is most effective for training new secretaries. You give one group of secretaries on-the-job training and a booklet to study at home. You give the second group on-the-job training and have them watch a 30-minute video.
Step 2: Research Design
- A research design is the structure or architecture for the study.
- A plan for how to treat variables that can influence results, so as to rule out alternative interpretations.
- Primary research methods:
  - Experimental (laboratory vs. field research)
  - Quasi-experimental
  - Non-experimental (observational, survey)
Step 2: Research Design
- Secondary research methods
  - Meta-analysis: a statistical method for combining/analyzing the results from many studies to draw a general conclusion about relationships among variables (p. 61).
- Qualitative research methods
  - Rely on observation, interviews, case studies, and analysis of diaries to produce narrative descriptions of events or processes.
Evaluating Research Design
- Internal validity (control)
  - Does X cause Y?
  - Lab studies eliminate distracting variables through experimental control.
  - Using statistical techniques to control for the influence of certain variables is statistical control.
- External validity (generalizability)
  - Does the relation of X and Y hold in other settings and with other participants and stimuli?
Threats to Internal Validity
- History
- Instrumentation
- Selection
- Maturation
- Mortality/attrition
- Testing
- Experimenter bias
- Awareness of being a subject
Step 3: Measurement
- Goal: quantify the IV and DV.
- Psychological measurement: the process of quantifying variables (called constructs).
  - "The process of assigning numerical values to represent individual differences, that is, variations among individuals on the attribute of interest."
- A "measure": any mechanism, procedure, tool, etc., that purports to translate attribute differences into numerical values.
Step 3: Measurement
Two classes of measured variables:
- Categorical (or qualitative): differ in type but not amount.
- Continuous (or quantitative): differ in amount.
Step 4: Data Analysis
- Statistics are what we use to summarize relationships among variables and to estimate the odds that they reflect more than mere chance.
- Descriptive statistics: summarize, organize, and describe a sample of data.
- Inferential statistics: used to make inferences from sample data to a larger population.
- Distributions
Descriptive Statistics
- Measures of central tendency: mean, median, mode
- Measures of variability: range, variance, standard deviation (SD)
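A quick illustration of these descriptive statistics in Python, using only the standard library and a small made-up sample of monthly absence days (the data are hypothetical):

```python
# Descriptive statistics for a hypothetical sample of monthly absence days.
import statistics

absences = [0, 2, 2, 3, 5, 1, 2, 8, 0, 4]

print("mean:  ", statistics.mean(absences))
print("median:", statistics.median(absences))
print("mode:  ", statistics.mode(absences))
print("range: ", max(absences) - min(absences))
print("var:   ", statistics.variance(absences))  # sample variance (n - 1 denominator)
print("sd:    ", statistics.stdev(absences))     # sample standard deviation
```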
Differences in Variance
[Figure: normal distribution curves illustrating high variance vs. low variance]
Inferential Statistics
- Compares a hypothesis to an alternative.
- Statistical significance: the likelihood that the observed difference would be obtained if the null hypothesis were true.
- Statistical power: the likelihood of finding a statistically significant difference when a true difference exists.
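As a concrete illustration, an independent-samples t-test compares two group means and returns a p-value: the probability of observing a difference at least this large if the null hypothesis were true. A minimal sketch using the secretary-training example from the earlier slide; the scores and group sizes are invented:

```python
# Hypothetical comparison of the two training groups with a t-test.
from scipy import stats

booklet_group = [72, 75, 78, 70, 74, 77, 73, 76]  # scores: on-the-job training + booklet
video_group   = [78, 81, 79, 83, 80, 77, 84, 82]  # scores: on-the-job training + video

t, p = stats.ttest_ind(booklet_group, video_group)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p: the observed difference would be unlikely under the null
```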
Correlation
- Used to assess the relationship between two variables.
- Represented by the correlation coefficient r.
- r can take on values from -1 to +1.
- The size of r denotes the magnitude of the relationship; 0 means no relationship.
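Computing r is straightforward; here is a minimal sketch with hypothetical satisfaction and absence data:

```python
# Pearson correlation between two hypothetical variables.
import numpy as np

satisfaction = [4.2, 3.8, 2.5, 4.9, 3.1, 2.2, 4.5, 3.6]  # job satisfaction (1-5)
absences     = [1,   2,   6,   0,   4,   7,   1,   3]    # absence days

r = np.corrcoef(satisfaction, absences)[0, 1]
print(f"r = {r:.2f}")  # negative r: higher satisfaction goes with fewer absences
```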
Correlation and Regression
- Correlation
- Scatterplot
- Regression line
- Linear vs. non-linear
- Multiple correlations
- Correlation and causation
Prediction of the DV with one IV
- Correlations allow us to make predictions.
[Figure: scatterplot of DV against IV with a fitted regression line]
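The prediction itself comes from the regression line fitted to the observed IV-DV pairs. A minimal sketch with invented data (the variable meanings are hypothetical):

```python
# Predicting the DV from one IV with a least-squares regression line.
import numpy as np

iv = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # e.g., weeks of training
dv = np.array([55, 60, 62, 68, 71, 75, 80, 86])  # e.g., performance score

slope, intercept = np.polyfit(iv, dv, deg=1)     # fit dv = intercept + slope * iv
print(f"predicted DV at IV = 10: {intercept + slope * 10:.1f}")
```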
Interpretation: Evaluating Measures
- How do you determine the usefulness of the information gathered from our measures?
- The answer:
  - Reliability evidence
  - Validity evidence
Interpretation: Evaluating Measures
- Reliability: the consistency or stability of a measure.
  - A measure should yield a similar score each time it is given.
- We can get a reliable measure by reducing errors of measurement: any factor that affects obtained scores but is not related to the thing we want to measure.
  - Examples of error: random factors, practice effects, etc.
Evaluating Measures: Reliability
- Test-retest (index of stability)
  - Method: give the same test on two occasions and correlate the two sets of scores (coefficient of stability).
  - Error: anything that differentially influences scores across time for the same test.
  - Issue: how long should the time interval be?
  - Limitations:
    - Not good for tests that are supposed to assess change.
    - Not good for tests of things that change quickly (e.g., mood).
    - Difficult and expensive to retest.
    - Memory/practice effects are likely.
Evaluating Measures: Reliability
- Equivalent forms (index of equivalence)
  - Method: give two versions of a test and correlate the scores (coefficient of equivalence).
  - Reflects the extent to which the two versions measure the same concept in the same way.
  - Issues: are the tests really parallel? How long should the interval be?
  - Limitations:
    - Difficult and expensive.
    - Testing time.
    - Unique estimate for each interval.
Evaluating Measures: Reliability
- Internal consistency reliability
  - Method: take a single test and look at how well the items on the test relate to each other.
  - Split-half: similar to alternate forms (e.g., odd vs. even items).
  - Cronbach's alpha: mathematically equivalent to the average of all possible split-half estimates.
  - Limitations:
    - Only usable for multiple-item tests.
    - Some "tests" are not designed to be homogeneous.
    - Doesn't assess stability over time.
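Cronbach's alpha can be computed directly from an item-score matrix with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch for a hypothetical 4-item scale:

```python
# Cronbach's alpha for a hypothetical 4-item scale (rows = respondents, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```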
Evaluating Measures: Reliability
- Inter-rater reliability
  - Method: two different raters rate the same target, and the ratings are correlated.
  - The correlation reflects the proportion of consistency among the ratings.
  - Issue: reliability doesn't imply accuracy.
  - Limitations:
    - Need informed, trained raters.
    - Ratings are not a good way to measure many attributes.
Interpretation: Evaluating Measures
- Validity: the accuracy of inferences made based on data.
  - Whether a measure accurately and completely represents what was intended to be measured.
- Validity is not a property of the test; it is a property of the inferences we make from the test scores.
Evaluating Measures: Validity
- Criterion-related
  - Predictive
  - Concurrent
- Content-related
- Construct-related
- Reliability is a necessary but not sufficient condition for validity.
Content Validity
- The extent to which a predictor provides a representative sample of the thing we're measuring.
- Example: the first exam
  - Content: history, research methods, criterion theory, job analysis, measurement in selection.
- Evidence: evaluation by subject matter experts (SMEs).
Criterion-Related Validity
- The extent to which a predictor relates to a criterion.
- Evidence: a correlation (called the validity coefficient).
  - A good validity coefficient is around .3 to .4.
- Concurrent validity
- Predictive validity
Construct Validity
- The extent to which a test is an accurate representation of the construct it is trying to measure.
- Construct validity results from the slow accumulation of evidence (multiple methods).
- Evidence:
  - Content validity and criterion-related validity can provide support for construct validity.
  - Convergent validity
  - Divergent (discriminant) validity
Step 5: Conclusions From Research
- You are making inferences!
- What if your inferences seem "wrong"?
  - The theory is wrong?
  - The information (data) is bad?
  - Bad measurement?
  - Bad research design?
  - Bad sample?
  - The analysis was wrong?
Step 5: Conclusions From Research
- A cumulative process
- Dissemination: conference presentations and journal publications
- Boundary conditions
- Generalizability
- Causation
- Serendipity
Research Ethics
- Informed consent
- Welfare of subjects
- Conflicting obligations to the organization and to the participants