Reliability and Validity of Dependent Measures

Validity of Dependent Variables Does it measure the concept? Construct Validity: Does the DV really capture what you want to measure (i.e., is the operational definition good)? Or is it contaminated by mood, cultural or gender bias, confusing wording, observational bias, etc.?

Indicators of Construct Validity Face Validity: Does it appear to be a good measure (do experts think so)? Predictive Validity: Does it predict later behavior (e.g., does the GRE predict graduate school success)? Concurrent Validity: Do groups known to differ on the construct actually score differently (e.g., Self-Monitoring)?
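
One way to make predictive validity concrete is as a correlation between the measure and the later criterion it is supposed to predict. The sketch below is illustrative only; the function and variable names are assumptions, not part of the lecture.

    import numpy as np

    def predictive_validity(measure_scores, later_criterion):
        """e.g., GRE scores vs. later first-year graduate GPA for the same people."""
        return float(np.corrcoef(measure_scores, later_criterion)[0, 1])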

Indicators of Construct Validity Convergent Validity: Do other kinds of ratings agree? Similar scales should produce similar responses. Divergent Validity: Is it distinct from other constructs (it measures intelligence, not SES or gender bias; shyness isn't loneliness)? Reactivity: knowing you are being studied changes behavior.

Reliability of the DV Are the results repeatable? Every measurement contains a true score plus measurement error. This is not an issue of replication: a reliable measure gives the same subjects the same scores.
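
In classical test theory terms, that statement can be written out as follows (a standard formulation; the notation is not from the slide itself):

    X = T + E, \qquad \operatorname{Var}(X) = \operatorname{Var}(T) + \operatorname{Var}(E), \qquad \text{reliability} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)}

A perfectly reliable measure would have zero error variance.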

Types of Reliability Inter-rater reliability: calculate r between observers, or Cohen's kappa for categorical codes. Internal consistency: split-half reliability; Cronbach's alpha is the average of all possible split-half correlations. Temporal consistency: test-retest reliability with the SAME people. Restaurant example.
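
A minimal sketch of how two of these statistics could be computed, assuming a subjects-by-items score matrix and two raters' categorical codes (the data layout and function names are illustrative, not from the lecture):

    import numpy as np

    def cronbach_alpha(items):
        """items: subjects x items array of scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def cohens_kappa(rater1, rater2):
        """Chance-corrected agreement between two raters' categorical codes."""
        r1, r2 = np.asarray(rater1), np.asarray(rater2)
        categories = np.union1d(r1, r2)
        p_observed = np.mean(r1 == r2)
        p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
        return (p_observed - p_chance) / (1 - p_chance)

    # Test-retest reliability is just the correlation of time-1 and time-2 scores:
    # r = np.corrcoef(scores_time1, scores_time2)[0, 1]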

Can a variable be reliable and not valid? Valid and not reliable? How do you know you have a good DV? Mental Measurements Yearbook

Validity of Experimental Designs

Survey Design

Internal validity Does the design test the hypothesis we want it to test? Did IV manipulation cause change in DV? Can we infer causality? What if internal validity is low?

External validity Does your study represent a broader population? Be cautious in the Discussion section if external validity is weak. Relevant techniques: random sampling, stratified sampling, block randomization.
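
A rough illustration of the sampling strategies listed above; the population structure and function names are assumptions, not from the slides.

    import random

    def simple_random_sample(population, n):
        """Every member of the population has an equal chance of selection."""
        return random.sample(population, n)

    def stratified_sample(population, stratum_of, n_per_stratum):
        """Draw the same number of people from each stratum (e.g., each class year)."""
        by_stratum = {}
        for person in population:
            by_stratum.setdefault(stratum_of(person), []).append(person)
        sample = []
        for members in by_stratum.values():
            sample.extend(random.sample(members, n_per_stratum))
        return sample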

Ecological validity Does the study reflect the real world? Do people really behave this way? Can you study anything without changing it?

Threats to Internal Validity In a pre-post design: (1) pretest participants, (2) administer the IV, (3) posttest for the effect of the IV, (4) compare pre vs. post results to look for the effect of the IV.

History World events may cause changes in attitudes or behavior over time: tests of patriotism pre/post 9/11, views of the President pre/post Katrina, attitudes of adolescents pre/post the Cobain suicide.

Maturation Individuals change over time as they mature. An issue for studies of children, but there is also huge growth during the freshman year: changes in attitudes and behavior.

Testing The act of being tested or studied may itself cause differences in behavior. Similar to REACTIVITY, but it applies to the entire study, not just the DV. (Parenting study, for example.)

Instrumentation The measuring instrument may get better or worse over time, e.g., observers in observational studies, or testing and interviewing skill.

Regression toward the mean Extreme scores tend not to be repeatable: those who score very high or very low on a test will be closer to the average if tested again. A big issue for any study in which a pretest is used to select subjects for the posttest.
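
A small simulation makes the point; the score distribution and selection cutoff below are made up for illustration, not taken from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)
    true_score = rng.normal(100, 10, size=10_000)            # stable ability
    pretest = true_score + rng.normal(0, 10, size=10_000)    # true score + error
    posttest = true_score + rng.normal(0, 10, size=10_000)   # new, independent error

    selected = pretest > pretest.mean() + 2 * pretest.std()  # chosen for extreme pretest
    print(pretest[selected].mean())    # far above 100
    print(posttest[selected].mean())   # closer to 100, with no treatment at all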

Mortality Those who drop out of your study may differ from those who choose to continue.

Placebo effect Behavior may change whenever participants are given a treatment, even if the treatment was not meaningful (fake drugs get some results).

How can we improve internal validity? Consider each threat: history, maturation, testing, instrumentation, regression toward the mean, mortality, placebo effect.

Improved Design Pre-post design: pretest participants, administer the IV, posttest for the effect of the IV, compare pre vs. post results to look for the effect of the IV. Two-group design: pretest (do you need to do this?), RANDOMIZED assignment to levels of the IV, compare posttest results of the IV and control groups.
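
A minimal sketch of the randomized-assignment step, using block randomization so the groups stay equal in size; the condition labels are assumptions, not from the slides.

    import random

    def block_randomize(participants, conditions=("treatment", "control")):
        """Within each block, every condition appears exactly once, in random order."""
        assignment = {}
        block = []
        for person in participants:
            if not block:
                block = list(conditions)
                random.shuffle(block)
            assignment[person] = block.pop()
        return assignment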

Extraneous Variables Any variable that you have not measured or controlled (e.g., through random assignment) that may impact the results of your study.

Demand Characteristics Participants behave in ways demanded by the situation or experimental set-up. Behavior does not reflect actual beliefs or attitudes. Issue of Ecological Validity

Subject Bias Bias brought on by subjects' beliefs (overhead example: mood and the menstrual cycle).

Social desirability Subjects want to do the “right thing” and try to guess what the experimenter wants, and do not behave naturally. How to reduce Subject biases?

Experimenter Bias Experimenters’ behavior and expectations can sway results of test. How to reduce these biases?

Floor & Ceiling Effects If measures are too easy or too difficult you will not see differences between groups. Pilot test with similar subjects!

Order effects In within-subjects designs, the order of presentation can affect results in several ways. Practice effects: subjects get better at the task with successive trials. Fatigue effects: subjects get tired and do worse, or lose interest. Carryover effects: a subject's experience in one condition impacts results in another condition (subject bias or anchoring-and-adjustment issues).

How to reduce order effects Counterbalancing does not get rid of order effects; it just makes them equal for all groups. Complete counterbalancing (every possible order) can be used with a small number of conditions. Latin Square counterbalancing: build the first row as A, B, skip, C, skip, D, etc., then fill back from the end, giving A, B, N, C, N-1, D, N-2, E, etc. (see the sketch after the table below).

A Latin Square for 6 conditions
Order 1: A B F C E D
Order 2:
Order 3:
Order 4:
Order 5:
Order 6:
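
The construction described on the previous slide can be sketched in code. The rule for filling Orders 2 through N (shift every condition forward by one) is the standard balanced-Latin-square completion and is an assumption here, since the slide spells out only the first row.

    from string import ascii_uppercase

    def balanced_latin_square(n):
        first = [0, 1]                      # A, B, ...
        low, high = 2, n - 1
        while len(first) < n:               # ...last, C, second-to-last, D, ...
            first.append(high); high -= 1
            if len(first) < n:
                first.append(low); low += 1
        return [[(c + shift) % n for c in first] for shift in range(n)]

    for row in balanced_latin_square(6):
        print(" ".join(ascii_uppercase[c] for c in row))
    # The first printed row is A B F C E D, matching Order 1 above.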

Pretest vs. Pilot Test When do you use a pilot test? When do you use a pretest?

Can a DV be reliable but not valid?

Experimental Validity What should you do if internal validity is low? What are the impacts of low external validity? What if ecological validity is low?