Reliability and Validity

Chapter 5: Reliability and Validity
PowerPoint presentation to accompany Research Design Explained, 6th edition; ©2007 Mark Mitchell & Janina Jolley

Overview
- Measuring variables
  - Choosing a behavior to measure
  - Overview of types of measurement errors: bias and random error
  - Reliability
  - Validity
- Manipulating variables
  - Validity: threats to it and ways of establishing it
  - Types of manipulations

Two Types of Measurement Error
- Bias
- Random error

Three "Places" Measurement Error Can Occur
- Observer/scorer
- Participant
- Person administering the measure

Two Types of Observer Error
- Observer bias (scorer bias)
- Random observer error

Minimizing Observer Errors
- Why it is more important to reduce observer bias than random error
- Techniques for reducing observer bias

Techniques for Reducing Observer Bias
- Eliminating human observer errors by eliminating the human observer
- Limiting human observer errors by limiting the human observer's role
- Reducing observer bias by making observers "blind"
- Conclusions about reducing observer bias

Reducing Random Observer Error
- Most of the techniques that reduce observer bias also reduce random observer error

Errors in Administering the Measure
- Types
  - Experimenter (researcher) bias
  - Random error
- Solutions
  - Blind technique to reduce bias
  - Standardization to reduce both bias and random error

Errors Due to the Participant
- Bias due to the participant (subject bias)
- Random error due to the participant

Subject (Participant) Bias
- Obeying demand characteristics
- Social desirability bias

Conclusions About Reducing Subject Biases
- Blind techniques can reduce demand characteristics
- Making participants anonymous can reduce social desirability bias

Summary of Types of Measurement Error
- Try to reduce all forms of measurement error
- Focus especially on reducing bias

Reliability: The (Relative) Absence of Random Error
- The importance of being reliable: reliability as a prerequisite to validity
- Using test-retest reliability to assess overall reliability: to what degree is a measure "random-error free"? (see the sketch below)
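To make the test-retest idea concrete, here is a minimal sketch (not part of the original slides) that treats test-retest reliability as the correlation between two administrations of the same measure; the score arrays are invented for illustration.

```python
# Test-retest reliability sketch: hypothetical scores from giving the same
# measure to the same eight participants on two occasions.
import numpy as np

time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # scores at first administration
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # scores at the retest

# Test-retest reliability is typically reported as the Pearson correlation
# between the two administrations; values near 1.0 indicate little random error.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")
```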

Identifying (and Then Dealing with) the Main Source of a Measure's Reliability Problems
- Are observers to blame for low test-retest reliability? Assessing observer reliability (see the sketch below)
- Non-observer sources of random error
- Using internal consistency measures to estimate random error due to participants
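One simple way to assess observer reliability is to have two observers independently code the same behaviors and check how often they agree. The codes below are invented for illustration; scored (numeric) ratings could instead be correlated.

```python
# Illustrative inter-observer check: two observers independently categorize the
# same 10 behaviors; percent agreement is a rough index of observer reliability.
obs1 = ["hit", "push", "hit", "none", "push", "hit", "none", "none", "push", "hit"]
obs2 = ["hit", "push", "hit", "none", "hit",  "hit", "none", "push", "push", "hit"]

agreement = sum(a == b for a, b in zip(obs1, obs2)) / len(obs1)
print(f"observer agreement = {agreement:.0%}")   # 8 of 10 codes match here
```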

Internal Consistency: Test Questions Should Agree with Each Other
- Random error due to participants may cause low internal consistency

Two Solutions to Problems Caused by Random Participant Error
- Add questions to let random participant error balance out (see the sketch below)
- Ask better questions to reduce random participant error
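The standard way to quantify how much adding questions helps is the Spearman-Brown prophecy formula, which predicts the reliability of a lengthened test from its current reliability. The starting reliability below is a hypothetical value, not one from the slides.

```python
# Spearman-Brown "prophecy" sketch: predicted reliability after lengthening a
# test by a factor k, given its current reliability r.
def spearman_brown(r: float, k: float) -> float:
    return (k * r) / (1 + (k - 1) * r)

current_reliability = 0.60          # hypothetical reliability of a short scale
for k in (2, 3, 4):                 # doubling, tripling, quadrupling the items
    print(f"{k}x items -> predicted reliability {spearman_brown(current_reliability, k):.2f}")
```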

Measuring Internal Consistency
- Average inter-item correlations as indexes of internal consistency (see the sketch below)
- Split-half coefficients as indexes of internal consistency
- Additional indexes of internal consistency
- Conclusions about internal consistency's relationship to reliability
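As a rough illustration of these indexes, the sketch below computes an average inter-item correlation, a split-half coefficient (corrected with Spearman-Brown), and Cronbach's alpha for an invented data matrix; the data and scale are hypothetical, not from the textbook.

```python
# Internal-consistency sketch for an invented 6-item, 8-respondent data matrix
# (rows = participants, columns = items).
import numpy as np

scores = np.array([
    [4, 5, 4, 3, 4, 5],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 2, 1],
    [4, 4, 5, 4, 5, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 5, 5],
], dtype=float)

n_items = scores.shape[1]

# Average inter-item correlation: mean of the off-diagonal item correlations.
item_corrs = np.corrcoef(scores, rowvar=False)
avg_r = item_corrs[np.triu_indices(n_items, k=1)].mean()

# Split-half coefficient: correlate odd-item and even-item totals, then apply
# the Spearman-Brown correction to estimate full-length reliability.
odd_total = scores[:, ::2].sum(axis=1)
even_total = scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_total, even_total)[0, 1]
split_half = (2 * r_half) / (1 + r_half)

# Cronbach's alpha: based on item variances relative to total-score variance.
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_var / total_var)

print(f"average inter-item r = {avg_r:.2f}, split-half = {split_half:.2f}, alpha = {alpha:.2f}")
```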

Conclusions About Reliability
- Reliability is a prerequisite for validity
- If test-retest reliability is low, try to find out where the reliability problem is and fix it
- Reliability does not guarantee validity

Beyond Reliability: Establishing Construct Validity
- Content validity
- Internal consistency
- Convergent validity: getting evidence that you are measuring the right construct
- Discriminant validity: showing that you are not measuring the wrong construct (both illustrated in the sketch below)
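As a minimal sketch of convergent and discriminant evidence, the example below correlates a hypothetical new scale with an established measure of the same construct (where a high correlation is wanted) and with a measure of an unrelated construct (where a low correlation is wanted); all scores are invented.

```python
# Hypothetical convergent/discriminant validity check: a new anxiety scale
# should correlate strongly with an established anxiety measure and weakly
# with an unrelated measure such as vocabulary.
import numpy as np

new_scale       = np.array([10, 14, 8, 20, 16, 12, 18, 9])
established     = np.array([11, 15, 9, 19, 15, 13, 17, 10])   # same construct
unrelated_vocab = np.array([30, 22, 35, 28, 31, 25, 27, 33])  # different construct

convergent_r   = np.corrcoef(new_scale, established)[0, 1]
discriminant_r = np.corrcoef(new_scale, unrelated_vocab)[0, 1]
print(f"convergent r = {convergent_r:.2f} (want high), "
      f"discriminant r = {discriminant_r:.2f} (want low)")
```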

Manipulating Variables
- Common threats to a manipulation's validity
- Evidence used to argue for a manipulation's construct validity
- Tradeoffs among three common types of manipulations
- Conclusions

Common Threats to a Manipulation's Validity
- Random error
- Experimenter bias
- Subject biases

Evidence Used to Argue for a Manipulation's Construct Validity
- Consistency with theory
- Manipulation checks

Tradeoffs Among Three Common Types of Manipulations
- Instructional manipulations
- Environmental manipulations
- Manipulations involving stooges (confederates)

Concluding Remarks
- Operational definitions should:
  - Be consistent with dictionary/theory definitions
  - Be standardized to reduce bias and random error
  - Have evidence to support their validity