Threats to Construct Validity

Inadequate Preoperational Explication of Constructs
"Preoperational" means before translating constructs into measures or treatments. In other words, you didn't do a good enough job of defining (operationally) what you mean by the construct before you built the study around it.

Mono-Operation Bias
Pertains to the treatment or program: you used only one version of the treatment or program, so your results may reflect the quirks of that particular version rather than the construct itself.
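
A minimal way to see the problem is to simulate it. In the Python sketch below, everything is invented for illustration (the version names, effect sizes, and sample size): the "program" construct spans three versions with different true effects, so a study that implements only one version recovers that version's effect rather than the construct-level effect.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true effects of three versions of the "same" program.
version_effects = {"brief": 0.1, "standard": 0.5, "intensive": 0.9}

def estimate_effect(true_effect, n=500):
    # One randomized study: treated vs. control mean difference.
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    return treated.mean() - control.mean()

# Mono-operation study: only the "brief" version is ever implemented.
print("brief version only:", round(estimate_effect(version_effects["brief"]), 2))

# Using several versions gets closer to a construct-level answer.
multi = [estimate_effect(e) for e in version_effects.values()]
print("averaged across versions:", round(sum(multi) / len(multi), 2))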

Mono-Method Bias
Pertains especially to the measures or outcomes: you operationalized your measures in only one way. For instance, you only used paper-and-pencil tests, so you can't tell how much of your result is the construct and how much is the method.
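
The same logic applies to measurement, and a small simulation makes the shared-method problem visible. In this sketch (all loadings and variances are made up), two outcomes collected with the same method share a common method factor, which inflates their correlation relative to a cross-method comparison.

import numpy as np

rng = np.random.default_rng(1)
n = 2000

construct = rng.normal(size=n)   # the trait we actually care about
method = rng.normal(size=n)      # shared "paper-and-pencil" method factor

# Two measures taken with the same method both absorb the method factor.
a_same = construct + method + rng.normal(size=n)
b_same = 0.3 * construct + method + rng.normal(size=n)

# A measure taken with an independent method shares only the construct.
b_other = 0.3 * construct + rng.normal(size=n)

print("same-method correlation :", round(np.corrcoef(a_same, b_same)[0, 1], 2))
print("cross-method correlation:", round(np.corrcoef(a_same, b_other)[0, 1], 2))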

Hypothesis Guessing
People guess the hypothesis and respond to it rather than responding "naturally" -- often because they want to look good or look smart. This is a construct validity issue because the "cause" will be mislabeled: you'll attribute the effect to the treatment rather than to good guessing.
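
One way to make this concrete is a simulation in which part of the treatment group simply plays along. The numbers below are invented: the treatment's true effect is 0.2, but participants who guess the hypothesis add a further "demand" boost, and the naive estimate bundles the two together under the treatment's label.

import numpy as np

rng = np.random.default_rng(2)
n = 1000

true_effect = 0.2     # what the treatment actually does (assumed)
demand_boost = 0.4    # extra shift from playing along (assumed)

control = rng.normal(0.0, 1.0, n)
guessed = rng.random(n) < 0.5          # half the treated group guesses
treated = rng.normal(true_effect, 1.0, n) + demand_boost * guessed

print("labeled 'treatment effect':", round(treated.mean() - control.mean(), 2))
print("actual treatment effect   :", true_effect)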

Evaluation Apprehension
People try to make themselves look good because they know they're in a study. Or perhaps their apprehension makes them consistently respond poorly -- and you mislabel this as a negative treatment effect.

Experimenter Expectancies
The experimenter can bias results consciously or unconsciously. That bias becomes confounded (mixed up) with the treatment, and you mislabel the combined result as a treatment effect.

Confounding Constructs and Levels of Constructs
You conclude that the treatment has no effect when really it is only the level of the treatment you tested that has none. This is essentially a dosage issue -- related to mono-operation bias, because you only looked at one or two levels.
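
Because this is a dosage issue, a threshold dose-response simulation shows it directly. The response function below is hypothetical: doses at or below 2 do nothing, higher doses help, and a study that only tested a low dose would wrongly conclude that the treatment construct is inert.

import numpy as np

rng = np.random.default_rng(3)

def outcomes(dose, n=500):
    # Hypothetical threshold response: no effect at or below dose 2.
    effect = max(0.0, 0.4 * (dose - 2))
    return rng.normal(effect, 1.0, n)

control = outcomes(0)
low = outcomes(1)    # the only level the study happened to test
high = outcomes(4)   # a level the study never tried

print("low dose vs control :", round(low.mean() - control.mean(), 2))
print("high dose vs control:", round(high.mean() - control.mean(), 2))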

Interaction of Different Treatments
People get more than one treatment at the same time, which happens all the time in social ameliorative studies. Again, the construct validity issue is largely a labeling issue: the effect you observe belongs to the combination, not to the single treatment named in your conclusions.
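
A sketch of the co-treatment problem, again with invented numbers: many treated participants also receive a second program, so the estimate labeled "effect of A" is really the effect of A combined, most of the time, with B.

import numpy as np

rng = np.random.default_rng(4)
n = 1000

effect_a = 0.3   # the program under study (assumed)
effect_b = 0.5   # a second service many participants also use (assumed)

control = rng.normal(0.0, 1.0, n)
also_gets_b = rng.random(n) < 0.6      # 60% of treated also receive B
treated = rng.normal(effect_a, 1.0, n) + effect_b * also_gets_b

print("labeled 'effect of A':", round(treated.mean() - control.mean(), 2))
print("effect of A alone    :", effect_a)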

Interaction of Testing and Treatment
Does the testing itself make the groups more sensitive or receptive to the treatment? This is a labeling issue, and it differs from the testing threat to internal validity: here, the testing interacts with the treatment to make it more effective; there, the observed change is not a treatment effect at all (but rather has an alternative cause).
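
The classic check here is a Solomon four-group layout: cross pretesting (yes/no) with treatment (yes/no) and compare the two treatment effects. In this invented simulation, the pretest adds nothing on its own but makes the treatment look twice as strong -- exactly the interaction described above.

import numpy as np

rng = np.random.default_rng(5)
n = 500

base_effect = 0.2      # treatment effect without a pretest (assumed)
sensitization = 0.2    # extra effect when a pretest primes people (assumed)

def group_mean(treated, pretested):
    effect = base_effect * treated + sensitization * treated * pretested
    return rng.normal(effect, 1.0, n).mean()

with_pretest = group_mean(1, 1) - group_mean(0, 1)
without_pretest = group_mean(1, 0) - group_mean(0, 0)

print("effect with pretest   :", round(with_pretest, 2))
print("effect without pretest:", round(without_pretest, 2))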

Restricted Generalizability Across Constructs
You didn't measure your outcomes completely, and you didn't measure some key affected constructs at all (for example, unintended effects). Your conclusions therefore can't generalize beyond the constructs you actually measured.
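
A final sketch, with made-up effects: the program improves the one outcome the study measured while quietly worsening an outcome nobody measured. A conclusion drawn from the first number alone doesn't generalize to the second construct.

import numpy as np

rng = np.random.default_rng(6)
n = 1000

# Outcome the study measured (hypothetical improvement of 0.5).
measured_gain = rng.normal(0.5, 1.0, n).mean() - rng.normal(0.0, 1.0, n).mean()

# Unintended outcome the study never measured (hypothetical harm of -0.4).
unmeasured_gain = rng.normal(-0.4, 1.0, n).mean() - rng.normal(0.0, 1.0, n).mean()

print("measured outcome  :", round(measured_gain, 2))
print("unmeasured outcome:", round(unmeasured_gain, 2))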