• Validity: the degree to which inferences made from data are justified or supported by evidence
• Some types of validity:
  ◦ Criterion-related
  ◦ Content
  ◦ Construct
• All are part of the unitarian view of validity
• Constructs are theoretical abstractions aimed at organizing and making sense of our environment; they are latent (not directly observed)

• A criterion is any variable you wish to explain and/or predict
• Criteria are the key to well-developed theory, good measurement, and strong research design
• Ultimate criterion
• Multidimensional nature of criteria
• Intermediate criteria

• Criterion-related validity: the process of establishing a relationship between a measure and a criterion variable
• Designs: predictive, concurrent, postdictive
• Usually based on a correlation or regression equation
• Low reliability will attenuate or mask relationships (see the sketch below)
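A minimal sketch, not from the original slides, of how low reliability attenuates an observed validity coefficient. It applies Spearman's correction for attenuation; the specific reliability and correlation values are assumptions chosen for illustration.

```python
import math

def disattenuated_r(r_xy, r_xx, r_yy):
    """Estimate the correlation between true scores (Spearman's correction
    for attenuation) from an observed predictor-criterion correlation and
    the reliabilities of the two measures."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Illustrative values: an observed validity of .30 with modest reliabilities
r_observed = 0.30   # observed predictor-criterion correlation
r_predictor = 0.70  # reliability of the predictor
r_criterion = 0.60  # reliability of the criterion

print(disattenuated_r(r_observed, r_predictor, r_criterion))  # roughly 0.46
```

The same arithmetic read in reverse shows the attenuation: a true-score correlation of about .46 shrinks to an observed .30 once measurement error in both variables is taken into account.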

• Selection ratio (SR): the proportion of individuals in the sample who are selected out of the total number considered
• Base rate (BR): the percentage of individuals who would be successful under random selection
• Range restriction
• Differential prediction across subgroups

[Figure: selection decision quadrants. A predictor cutoff (Xc) splits cases into reject vs. accept, and a criterion cutoff (Yc) splits them into unsuccessful vs. successful, yielding four outcomes: valid positives (VP), false positives (FP), valid negatives (VN), and false negatives (FN). The quadrants relate to the base rate and selection ratio as follows: FN + VP = BR, VN + FP = 1 - BR, VP + FP = SR, FN + VN = 1 - SR.]

• Even low correlations can lead to large increases in selection efficiency
• The SR and BR have strong influences:
  ◦ When the SR is small (choose few), there are fewer false positives but more false negatives
  ◦ When the SR is large, there are fewer false negatives but more false positives
  ◦ When the BR is large (many can be successful), the SR and validity have little effect on selection efficiency
• The largest gains in the success ratio occur when BR = .50 and the SR is small (e.g., .10)
• The tradeoffs depend on the purpose of selection (see the simulation sketch below)
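A rough simulation, not part of the original slides, illustrating these points: given an assumed validity coefficient, selection ratio, and base rate, it estimates the success ratio among those selected. The bivariate-normal setup and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def success_ratio(validity, sr, br, n=100_000):
    """Simulate predictor and criterion scores correlated at `validity`,
    select the top `sr` proportion on the predictor, and return the
    proportion of selected people who are 'successful' (top `br` on the
    criterion)."""
    cov = [[1.0, validity], [validity, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    x_cut = np.quantile(x, 1 - sr)   # predictor cutoff: accept above this
    y_cut = np.quantile(y, 1 - br)   # criterion cutoff: successful above this
    selected = x >= x_cut
    return np.mean(y[selected] >= y_cut)

# Even a modest validity raises the success ratio well above the base rate
# when the selection ratio is small; with a large SR the gain nearly vanishes.
print(success_ratio(validity=0.30, sr=0.10, br=0.50))  # noticeably above .50
print(success_ratio(validity=0.30, sr=0.90, br=0.50))  # close to .50
```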

[Figure: diagram of the relationship between predictor X and criterion Y, which may be direct, indirect, or ambiguous.]

[Figure: scatterplot of X against Y in which the same regression line gives the same prediction for each group.]

[Figure: scatterplot of X against Y in which different regression lines give different predictions for each group, i.e., differential prediction.]
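One common way to check for the differential prediction depicted in the last figure is a moderated regression, where the group term tests for intercept differences and the predictor-by-group interaction tests for slope differences. The sketch below, which is not from the slides, uses invented data and variable names purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Illustrative data: criterion y predicted from x, with a made-up subgroup factor
group = rng.integers(0, 2, size=n)                     # 0/1 subgroup indicator
x = rng.normal(size=n)
y = 0.5 * x + 0.3 * group + rng.normal(scale=0.9, size=n)

df = pd.DataFrame({"y": y, "x": x, "group": group})

# 'y ~ x * C(group)' expands to x, group, and their interaction:
# the group coefficient reflects intercept differences, the interaction
# coefficient reflects slope differences between subgroups.
model = smf.ols("y ~ x * C(group)", data=df).fit()
print(model.summary())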

• Content validity: the extent to which items or measures cover the content area the test purports to measure
  ◦ Expert judges determine whether a measure came from a particular content domain
  ◦ Scoring and content are based upon theory
  ◦ If measures come from the same content domain, they should demonstrate high reliability
  ◦ Low internal consistency reliability suggests low content validity (a quick reliability check is sketched below)
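As a companion to the internal-consistency point above, here is a small sketch of computing coefficient (Cronbach's) alpha from an items-by-respondents matrix. The function and the example responses are illustrative additions, not material from the slides.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a 2-D array of shape (respondents, items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 respondents answering 4 items on a 1-5 scale
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 3, 4],
          [1, 2, 2, 1]]
print(round(cronbach_alpha(scores), 2))
```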

• Construct validity: the validity of inferences about latent, unobserved variables on the basis of observed variables
• Does a measure assess what it is intended to assess? Do the variables relate in theoretically meaningful ways?
• Low reliability makes it difficult to assess the nature of a particular construct and attenuates its relationships with other constructs

Construct validity: can we generalize from the measures to the constructs?

[Figure: two-level diagram. At the theory level ("what you think"), a cause construct is linked to an effect construct by the true relationship; at the observation level ("what you see"), the measure or manipulation is linked to observed outcomes by the observed relationship, and each construct is linked to its operationalization.]

[Figure: example construct diagram linking latent constructs (Anxiety, Ability to Learn, Vegetarianism) to observed measures: a measure of anxiety (X), a test score (Y), and salads eaten (Z).]

Approaches to construct validation:
• Internal structure analysis
• Cross-structure analysis
• The nomological network (Cronbach & Meehl)

• Factor analysis
  ◦ Used to identify factors or dimensions that underlie relations among observed variables
• Exploratory factor analysis is useful when:
  ◦ No information on the internal structure is available
  ◦ Factor structures may look different from the original scale
  ◦ You have reservations about previous factor analyses
• Confirmatory factor analysis is useful when:
  ◦ You have some idea of the internal structure
  ◦ You are confirming factor structures from previous studies
• Factor analysis is necessary but not sufficient to establish construct validity (a small exploratory example follows)
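A minimal exploratory factor analysis sketch using scikit-learn's FactorAnalysis. The slides do not prescribe a particular tool, and the two-factor simulated data below are an assumption made only to show the workflow: simulate items driven by two latent factors, fit a two-factor model, and inspect the loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 300

# Simulate two latent factors, each driving three observed items plus noise
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.column_stack([
    0.8 * f1 + 0.3 * rng.normal(size=n),
    0.7 * f1 + 0.3 * rng.normal(size=n),
    0.9 * f1 + 0.3 * rng.normal(size=n),
    0.8 * f2 + 0.3 * rng.normal(size=n),
    0.7 * f2 + 0.3 * rng.normal(size=n),
    0.9 * f2 + 0.3 * rng.normal(size=n),
])

efa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(np.round(efa.components_, 2))  # loadings: rows are factors, columns are items
```

With clean data like this, the loading matrix should show the first three items loading on one factor and the last three on the other, which is the pattern an internal structure analysis looks for.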

[Figure: confirmatory factor analysis path diagram with two latent factors: Ability to Learn with indicators Z1-Z3 and Anxiety with indicators X1-X4, each indicator having its own error term (e1-e7).]

• Embedded in the nomological network (nomological validity)
• Hypotheses are tested by examining relationships between different indicators of the underlying constructs
  ◦ e.g., leadership style based on reports from subordinates and a leadership self-report inventory
• Relies on multiple methods of measurement

• The nomological network: a representation of the constructs of interest in a study, their observable manifestations (measures), and the interrelationships among and between them
• Cronbach & Meehl argued that this is necessary to establish construct validity
• Elements include:
  ◦ Specifying the linkages between constructs (hypotheses)
  ◦ Operationalizing the constructs (specifying their measurement)

• Convergent validity: convergence among different methods designed to measure the same construct
• Discriminant validity: distinctiveness of constructs, demonstrated by divergence among methods designed to measure different constructs
• Both are examined with the multitrait-multimethod (MTMM) approach

• Heterotrait-monomethod: different traits, same method
• Heterotrait-heteromethod: different traits, different methods
• Monotrait-heteromethod: same trait, different methods (the validity diagonals)
• Monotrait-monomethod: same trait, same method (the reliability diagonals)

[Table: example multitrait-multimethod matrix for three traits (A, B, C) each measured by three methods (1, 2, 3), giving nine variables A1-C3. Reliability coefficients appear in parentheses on the main diagonal: A1 = .89, B1 = .89, C1 = .76, A2 = .93, B2 = .94, C2 = .84, A3 = .94, B3 = .92, C3 = .85. Monotrait-heteromethod correlations form the validity diagonals, and heterotrait correlations fill the remaining off-diagonal cells.]
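A hedged sketch, not from the slides, of assembling an MTMM-style correlation matrix with pandas. The trait and method labels mirror the A, B, C by methods 1-3 layout above, but the scores are simulated and the noise levels are arbitrary assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 200

# Invented latent trait scores for traits A, B, C
traits = {t: rng.normal(size=n) for t in "ABC"}

# Each trait measured by three methods: trait signal plus method-specific noise
data = {
    f"{t}{m}": traits[t] + rng.normal(scale=0.8, size=n)
    for m in (1, 2, 3)
    for t in "ABC"
}
df = pd.DataFrame(data)

# Full correlation matrix; monotrait-heteromethod cells (e.g., A1 with A2)
# should be clearly higher than heterotrait cells (e.g., A1 with B2),
# which is the convergent/discriminant pattern an MTMM analysis checks.
mtmm = df.corr().round(2)
print(mtmm)
```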

Steps in establishing construct validity:
• Specify the nomological net (the expected positive and negative relationships); one informal check is sketched after this list
• Establish reliability
• Check convergence with preexisting measures of the construct (convergent validity)
• Conduct factor analyses
• Conduct empirical studies of relatedness
• Conduct empirical studies of discriminability
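One informal way to make the first step concrete, sketched below under invented variable names (the constructs, expected signs, and data are all hypothetical): write down the expected sign of each relationship in the nomological net and compare it with the observed correlations.

```python
import numpy as np
import pandas as pd

# Expected signs in the (hypothetical) nomological net
expected_signs = {
    ("need_for_cognition", "gpa"): +1,
    ("need_for_cognition", "boredom_proneness"): -1,
    ("gpa", "boredom_proneness"): -1,
}

rng = np.random.default_rng(3)
n = 150
nfc = rng.normal(size=n)
df = pd.DataFrame({
    "need_for_cognition": nfc,
    "gpa": 0.4 * nfc + rng.normal(size=n),
    "boredom_proneness": -0.5 * nfc + rng.normal(size=n),
})

observed = df.corr()
for (a, b), sign in expected_signs.items():
    match = np.sign(observed.loc[a, b]) == sign
    print(f"{a} vs {b}: expected {sign:+d}, "
          f"observed {observed.loc[a, b]:+.2f}, match={match}")
```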

Exercise:
• Take the hypotheses you developed in Assignment 2 and the variables included in them.
  ◦ Draw a picture of what you believe the nomological network of these variables would look like.
  ◦ What alternative measures of each variable (different from those specified in Assignment 3) might you use to establish convergent validity?
  ◦ Draw the MTMM construct validity chart that would include each variable in your study and the original and alternative measures you identified for each construct. Specify whether each correlation would be expected to be high, moderate, or low.