
Instrument Validity & Reliability

Why do we use instruments?
– Reliance upon our senses for empirical evidence
– Senses are unreliable
– Senses are imprecise – not valid enough
– Operational definitions are important

Validity – How much confidence do you have in the measurement of your instrument?
Reliability – How consistent is your measurement?

How much confidence do you have?
Judgmental validity
– Face validity
– Content validity
Empirical validity
– Criterion-related validity
  - Predictive
  - Concurrent
Judgmental-empirical validity
– Construct validity

Face Validity
Does the instrument look valid?
– On a survey or questionnaire, the questions seem relevant
– On a checklist, the behaviors seem relevant
– For a performance test, the task seems appropriate

Content Validity
The content of the test (the measure) is relevant to the behavior or construct being measured
– An expert, or a panel of experts, judges the content

Criterion-Related Validity
Using another independent measure to validate a test
– Typically by computing a correlation – the validity coefficient
Two types
– Predictive validity
– Concurrent validity

Criterion-Related Validity
– Predictive: ACT achievement test correlated with college GPA
– Concurrent: Coopersmith Self-Esteem Scale correlated with teacher's ratings of self-esteem
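
To make the validity coefficient concrete, here is a minimal Python sketch (not from the original slides; the ACT and GPA numbers are invented) that computes a Pearson correlation between a predictor and a criterion, as in the ACT-versus-GPA example:

```python
# A minimal sketch: a criterion-related validity coefficient computed as the
# Pearson correlation between predictor scores and a criterion measure.
# All data below are made-up illustration values.
from statistics import mean, stdev


def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))


# Hypothetical data: ACT scores and later college GPA for the same students.
act = [21, 25, 30, 18, 27, 33, 22, 29]
gpa = [2.8, 3.1, 3.6, 2.5, 3.3, 3.8, 2.9, 3.4]

validity_coefficient = pearson_r(act, gpa)   # predictive validity evidence
print(f"Validity coefficient r = {validity_coefficient:.2f}")
```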

Construct Validity
Construct – an attempt to describe or name an intangible variable
Use many different measures to validate a measure
– Self-esteem is a construct
– An instrument measures it

Construct Validity
Self-esteem is the construct; the instrument measures it (e.g., the Coopersmith)
Correlate it with:
– A behavioral checklist
– Teacher's comments
– Another accepted instrument for self-esteem
– A measure of confidence
– A locus of control measure
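
A hedged sketch of this construct-validation idea: correlate scores from a hypothetical self-esteem instrument with several other measures it should relate to. All of the data and measure names below are invented for illustration.

```python
# A minimal sketch: gathering construct-validity evidence by correlating a
# hypothetical self-esteem instrument with several other measures.
from statistics import correlation  # Pearson r; available in Python 3.10+

self_esteem = [32, 45, 28, 50, 39, 41, 25, 47]          # instrument scores
other_measures = {
    "teacher rating of self-esteem": [3, 4, 2, 5, 4, 4, 2, 5],
    "behavioral checklist":          [12, 18, 10, 20, 15, 16, 9, 19],
    "confidence measure":            [55, 70, 48, 78, 62, 66, 44, 74],
}

for name, scores in other_measures.items():
    r = correlation(self_esteem, scores)
    print(f"{name}: r = {r:.2f}")   # convergent evidence if r is substantial
```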

Reliable but is it Valid? Valid but is it Reliable? (target-shooting analogy)
– Invalid and unreliable: no confidence you'll get near the target; you have no idea where it's going to shoot
– Invalid but reliable: no confidence you'll get near the target, but you know where it's going to shoot (just not at the target!)
– Valid but unreliable: confidence that when you hit something it's what you want, but you can't depend upon consistency
– Valid and reliable: confident that when you hit the target it's what you want, and you can depend upon consistent shots

Reliability
For an instrument – consistency of scores from use to use
Types of reliability coefficients
– Test–retest
– Equivalent forms
– Internal consistency
  - Split-half
  - Alpha coefficient (Cronbach's alpha)
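
As an illustration of the internal-consistency idea, the following sketch computes Cronbach's alpha from item-level data. The formula is the standard one; the item scores are invented.

```python
# A minimal sketch: estimating internal consistency with Cronbach's alpha
# for k items scored on the same scale by the same respondents.
from statistics import pvariance


def cronbach_alpha(items):
    """items: list of per-item score lists, one list per item."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Hypothetical responses: 3 items answered by 5 respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")   # about .86
```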

Reliability Coefficient
– Value ranges from 0 to 1.00
– A value around .70 is generally considered the minimal acceptable level
– .90 is very good
– .60 is sometimes acceptable, but is really not very good
– Lower than .60 is definitely unacceptable
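
A small helper that turns the rule-of-thumb cutoffs above into labels; the .70 threshold is a common convention assumed here, not something fixed by the slides.

```python
# Rough rule-of-thumb interpreter for reliability coefficients.
# The cutoffs are conventions, not hard rules.
def interpret_reliability(r: float) -> str:
    if r >= 0.90:
        return "very good"
    if r >= 0.70:
        return "acceptable"
    if r >= 0.60:
        return "marginal - sometimes acceptable, but not very good"
    return "unacceptable"


print(interpret_reliability(0.86))   # "acceptable"
```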

Inter-Rater Reliability
Example – two teachers reading the same essay and scoring it in a similar manner – consistently
– Using the same checklist to make observations
– Can be expressed as a coefficient, often as a percentage of agreement
– A function of training, objectivity, and the rubric or checklist, i.e., the operational definition!
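
A minimal sketch of inter-rater reliability expressed as a percentage of agreement; the two teachers' rubric scores below are hypothetical.

```python
# A minimal sketch: inter-rater reliability as the simple percentage of
# agreement between two raters using the same rubric.
def percent_agreement(rater_a, rater_b):
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * agreements / len(rater_a)


# Hypothetical essay scores from two teachers using the same rubric.
teacher_1 = [3, 4, 2, 5, 4, 3, 1, 4, 5, 2]
teacher_2 = [3, 4, 3, 5, 4, 3, 1, 4, 4, 2]

print(f"Agreement: {percent_agreement(teacher_1, teacher_2):.0f}%")   # 80%
```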

Norm-referenced tests
– Comparison of an individual score to others
– Intelligence tests
– ISAT, Iowa Test of Basic Skills
– SAT aptitude test
– Personality tests
– Percentiles – derived scores
– Grading on a curve

Criterion-referenced tests
– An individual score is compared to a benchmark (a criterion)
– If the raw score is used (no conversion), it is a criterion-referenced test
– Mastery of material
– Earning a grade in my class
– Disadvantage is a potential lack of variability
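
To contrast the two interpretations, this sketch reports the same raw score as a percentile rank (norm-referenced) and against a fixed benchmark (criterion-referenced); the group scores and mastery cutoff are invented.

```python
# A minimal sketch: one raw score read two ways - norm-referenced
# (percentile rank within a group) and criterion-referenced (vs. a benchmark).
def percentile_rank(score, group):
    """Percent of the group scoring at or below this score."""
    return 100 * sum(s <= score for s in group) / len(group)


group_scores = [55, 62, 70, 71, 75, 78, 80, 85, 90, 94]
my_score = 78
criterion = 80   # hypothetical mastery benchmark

print(f"Norm-referenced: {percentile_rank(my_score, group_scores):.0f}th percentile")
print(f"Criterion-referenced: {'mastery' if my_score >= criterion else 'not yet mastery'}")
```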

Measures of Optimum Performance
– Aptitude tests – predict future performance
– Achievement tests – measure current knowledge
– Performance tests – measure current ability to complete tasks

Measures of Typical Performance
Often impacted by "social desirability" – wanting to hide undesirable traits or characteristics
One way to work around social desirability is to use projective tests
– Rorschach Ink Blot
– Thematic Apperception Test

Paper-and-pencil measures of attitudes use Likert-type scales (Strongly Agree – Strongly Disagree)
– Reverse scoring to prevent or identify "response bias"
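
A short sketch of reverse scoring for negatively worded Likert items, so that a high total always means a more positive attitude and straight-line response bias becomes easier to spot; the item keying and responses below are hypothetical.

```python
# A minimal sketch: reverse-scoring negatively worded items on a
# 5-point Likert scale before summing a total attitude score.
SCALE_MAX = 5

responses = [5, 4, 2, 5, 1]          # one respondent, items 1-5
reverse_keyed = {2, 4}               # negatively worded items (hypothetical)

scored = [
    (SCALE_MAX + 1 - r) if i in reverse_keyed else r
    for i, r in enumerate(responses, start=1)
]
print(scored)          # [5, 2, 2, 1, 1]
print(sum(scored))     # total attitude score = 11
```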