Consistency and Meaningfulness Establishing the internal validity of an experiment is an important task, but it is not the only concern in conducting good experimental research. We now turn our attention to the consistency and meaningfulness of our independent and dependent variables: reliability and validity.
Reliability Reliability refers to the consistency or repeatability of results over time. A tape measure is a very consistent means of measuring length. Measurements in psychological research are seldom as consistent (e.g., you would probably score differently if you took the ACT twice). Reliability is also a concern for independent variables, but there are no formal means by which it is assessed.
Types of Reliability There are three common methods for assessing the reliability of dependent variables.
Test-retest reliability - the correlation between scores on the same test given at two different times.
Equivalent (alternate) forms reliability - the correlation between scores on two similar versions of a test.
Split-half reliability - the correlation between the odd- and even-numbered items on the same test (an index of the test’s internal consistency).
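Because each of these estimates is just a correlation, they are easy to compute directly. Below is a minimal sketch in Python (the scores and item responses are invented for illustration, and SciPy is assumed to be available) showing test-retest reliability as the correlation between two administrations, and split-half reliability as the odd/even correlation stepped up with the Spearman-Brown correction.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: the same 6 examinees tested on two occasions
time1 = np.array([82, 75, 91, 68, 77, 88])
time2 = np.array([80, 78, 89, 71, 75, 90])

# Test-retest reliability: correlation between the two administrations
r_test_retest, _ = pearsonr(time1, time2)

# Hypothetical item responses: 6 examinees x 6 items (1 = correct, 0 = incorrect)
items = np.array([
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 0],
])

# Split-half reliability: correlate odd- and even-numbered item totals
# (items 1, 3, 5 vs. items 2, 4, 6 in 1-based numbering)...
odd_total = items[:, 0::2].sum(axis=1)
even_total = items[:, 1::2].sum(axis=1)
r_half, _ = pearsonr(odd_total, even_total)

# ...then step up to full-test length with the Spearman-Brown correction
split_half = (2 * r_half) / (1 + r_half)

print(f"Test-retest r = {r_test_retest:.2f}")
print(f"Split-half (Spearman-Brown corrected) = {split_half:.2f}")
```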
Validity Validity refers to the degree to which a measure or manipulation achieves its intended purpose. The validity of an independent variable is most often established only informally, through thoughtful examination of the manipulation and of whether the outcome indicates the manipulation had the desired effect. In contrast, there are a variety of more formal methods for establishing the validity of dependent variables.
Types of Validity The validity of a dependent variable concerns whether the DV measures what it claims to measure.
Face validity - a subjective determination that the DV measures what is claimed (i.e., it does so on the “face” of it).
Content validity - a judgment of the degree to which the items, tasks, etc. represent the area of study (e.g., an American History test that emphasizes mostly Civil War questions would have low content validity).
Types of Validity (cont’d) Criterion-related validity - the degree to which the DV is associated with some criterion within the domain of interest (e.g., is a “career paths” test correlated with “success on the job”?). There are two types of criterion-related validity:
– Predictive validity - the DV is used to predict some future criterion performance (e.g., SAT scores may predict end-of-freshman-year GPA).
– Concurrent validity - the DV is used to predict some current criterion performance (e.g., class attendance may predict current class grade).
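Criterion-related validity is likewise expressed as a correlation, here between the test and an external criterion. The sketch below uses invented admissions-test scores and end-of-freshman-year GPAs (both hypothetical) to illustrate a predictive validity coefficient; swapping in a criterion measured at the same time as the test would give concurrent validity instead.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: admissions-test scores and the same students' later freshman GPAs
test_scores  = np.array([1100, 1250, 980, 1400, 1180, 1320, 1050, 1210])
freshman_gpa = np.array([ 2.9,  3.4, 2.5,  3.8,  3.0,  3.6,  2.7,  3.2])

# Predictive validity coefficient: correlation between the test and the
# future criterion it is meant to predict
r_predictive, p_value = pearsonr(test_scores, freshman_gpa)
print(f"Predictive validity: r = {r_predictive:.2f} (p = {p_value:.3f})")
```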
Types of Validity (cont’d) Construct validity - the degree to which the IVs and DVs are part of the construct under investigation (e.g., intelligence).
– Convergent validity - variables we believe are part of the construct correlate with each other. For example, “digit span” should be correlated with “spatial ability.”
– Discriminant validity - variables we believe are part of the construct do not correlate with variables that should be unrelated to the construct. For example, “reading comprehension” should not be correlated with “religiosity.”
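A simple way to examine convergent and discriminant validity is to inspect the correlation matrix among the measures. The sketch below uses fabricated scores for the slide’s example variables; convergent validity would appear as a substantial digit-span/spatial-ability correlation, and discriminant validity as near-zero correlations between either of those and religiosity.

```python
import numpy as np

# Hypothetical scores on three measures for the same 8 participants
digit_span      = np.array([ 5,  7,  6,  9,  4,  8,  7,  6])
spatial_ability = np.array([52, 70, 61, 88, 45, 80, 68, 59])
religiosity     = np.array([ 3,  2,  4,  3,  2,  3,  1,  4])

# Correlation matrix; rows/columns: digit span, spatial ability, religiosity
corr = np.corrcoef(np.vstack([digit_span, spatial_ability, religiosity]))

# Convergent validity: measures of the same construct should correlate highly
print(f"digit span vs. spatial ability:  r = {corr[0, 1]:.2f}")
# Discriminant validity: they should not correlate with an unrelated variable
print(f"digit span vs. religiosity:      r = {corr[0, 2]:.2f}")
print(f"spatial ability vs. religiosity: r = {corr[1, 2]:.2f}")
```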
Types of Validity (cont’d) Postdictive validity - the degree to which the DV can predict some criterion performance from the past (e.g., the amount of a person’s charitable donations may predict their parents’ income).
Statistical conclusion validity - the degree to which we can conclude that the IV and DV covary, based upon statistical analysis of the data (e.g., hypothesis testing).
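Statistical conclusion validity hinges on the hypothesis test used to decide whether the IV and DV covary. A minimal sketch, assuming made-up DV scores for two hypothetical groups, is an independent-samples t-test: if p falls below the chosen alpha, we conclude the groups differ (i.e., the IV and DV covary).

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical DV scores for two levels of the IV (e.g., treatment vs. control)
treatment = np.array([14, 17, 16, 19, 15, 18, 16, 20])
control   = np.array([12, 13, 15, 11, 14, 13, 12, 15])

# Independent-samples t-test: do the group means differ beyond chance?
t_stat, p_value = ttest_ind(treatment, control)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject H0: the IV and DV appear to covary.")
else:
    print("Fail to reject H0: no reliable evidence of covariation.")
```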