Validity and Reliability
Edgar Degas: Portraits in a New Orleans Cotton Office, 1873

Validity and Reliability A more complete explanation of validity and reliability is provided at the accompanying web page for this topic. See: Validity and Reliability

1. Validity The extent to which a “test” measures what it is supposed to measure. Evaluated both non-empirically and empirically. 2. Reliability The extent to which a “test” provides consistent measurement. Negatively affected by measurement error.

Validity and Reliability 3. Importance Without validity and reliability, one cannot test a hypothesis. Without hypothesis testing, one cannot support a theory. Without a supported theory, one cannot explain why events occur. Without adequate explanation, one cannot develop effective material and non-material technologies, including programs designed for positive social change.

Validity 1. Non-Empirical Validity “Does the concept seem to measure what it is supposed to measure?” This evaluation depends entirely upon the intersubjective judgment of the community of scholars. It is not entirely subjective because it represents the consensus of many individual subjective opinions.

Validity 2. Empirical Validity An evaluation of validity based on data analysis. “Does the concept predict another concept that it is supposed to predict?” This evaluation depends upon empirical analysis, using either quantitative or qualitative data. Empirical validity does not imply content validity.

Validity 3. Outcomes of Content Validity Assessment If the community of scholars approves the content validity of a measure, then it is accepted for further use. If the community of scholars rejects the content validity of a measure, then the researcher(s) must redesign it.

Validity 4.Outcomes of Empirical Validity Assessment Assessments of construct validity imply three types of hypotheses: 1.Do the items used to measure the construct have a statistically significant correlation with the construct? [Construct Validity] 2.Does the construct have a statistically significant correlation with related constructs? [Concurrent Validity] 3.Does the construct statistically predict another construct? [Predictive Validity]

Validity 5. Outcomes of Empirical Validity Assessment If the null hypotheses are rejected, then the empirical results lend support to the empirical validity of the measure. If one or more null hypotheses are not rejected, then: 1. The measure of the construct might be invalid. 2. The measure of the predicted construct might be invalid. 3. The theory linking the two constructs might be flawed.

Reliability 1. Measurement Error Measurement errors are random events that distort the true measurement of a concept. The average of all measurement errors is equal to zero. The distribution of measurement error is incorporated within the standard error. This distribution has the form of a bell-shaped curve, identical to the distribution of the standard deviation about a mean.
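The zero-mean, bell-shaped claim can be checked with a small simulation. The true score of 100 and the error spread of 5 are arbitrary values chosen for illustration.

```python
# Sketch: measurement errors as random, zero-mean, bell-shaped noise.
# Observed score = (hypothetical) true score + Gaussian error.
import random
from statistics import mean, stdev

random.seed(42)
TRUE_SCORE = 100.0
errors = [random.gauss(0, 5) for _ in range(100_000)]
observed = [TRUE_SCORE + e for e in errors]

print(f"mean error:    {mean(errors):+.3f}")   # close to 0
print(f"mean observed: {mean(observed):.2f}")  # close to the true score
print(f"error spread:  {stdev(errors):.2f}")   # close to 5
```

Because the errors average out to roughly zero, the mean of many observed scores recovers the true score even though any single measurement is distorted.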

Reliability 2.Measurement Error and Hypothesis Testing With a small standard error, even a weak relationship between the independent and dependent variables can be judged as statistically significant. With a large standard error, even a strong relationship between the independent and dependent variables can be judged as not statistically significant. Therefore, it is important to reduce measurement error as much as possible.
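The point can be made concrete with the usual form of a test statistic, effect divided by standard error. The effect sizes and standard errors below are invented to show both cases from the slide; 1.96 is the conventional two-sided critical value for a z-test at the .05 level.

```python
# Sketch: the same logic as the slide, in one line of arithmetic.
# A significance judgment depends on effect / standard_error, so a
# small standard error can make a weak effect "significant" and a
# large one can make a strong effect "not significant".
def z_statistic(effect, standard_error):
    return effect / standard_error

weak_effect, strong_effect = 0.5, 3.0
small_se, large_se = 0.2, 2.0
CRITICAL = 1.96  # two-sided z-test, alpha = .05

print(z_statistic(weak_effect, small_se))    # 2.5  -> significant
print(z_statistic(strong_effect, large_se))  # 1.5  -> not significant
```

This is why reducing measurement error matters: it shrinks the standard error, so significance judgments track the real strength of the relationship rather than the noise in the instrument.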

Reliability 3.Achieving Reliability Concepts must be defined as precisely as possible. Researchers must develop good indicators of these concepts. This task sometimes requires much work and time to create a valid and reliable measuring instrument. Data must be collected with as much care as possible.

Reliability 4.Assessing Reliability Test-Retest Procedure. Advantages: Intuitive appeal. Disadvantages: History, Maturation, Cueing. Alternative Forms Procedure. Advantages: Little cueing. Disadvantages: History, Maturation.

Reliability 4.Assessing Reliability (Continued) Split-Halves Procedure. Advantages: No History, Maturation, or Cueing. Disadvantages: Different splits yield different assessments of reliability. Internal Consistency Procedure. Advantages: No History, Maturation, or Cueing. Disadvantages: Number of items can affect assessment of reliability.

Errors that Affect Validity and Reliability 1. Measurement Error Random, but can be reduced with care. 2. Sampling Error Errors in selecting a sample for study. 3. Random Error Things that are unknown, unanticipated, uncontrollable. 4. Data Analysis Errors Type I: falsely concluding that a relationship exists (rejecting a true null hypothesis). Type II: falsely concluding that no relationship exists (failing to reject a false null hypothesis).
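The Type I error rate can be demonstrated by simulation. Under a true null hypothesis (no real effect), a z-test at the .05 level should wrongly declare significance in about 5% of samples; the sample size and number of trials below are arbitrary choices for illustration.

```python
# Sketch: Type I errors under a true null hypothesis. Each trial draws
# a sample with NO real effect and runs a two-sided z-test at .05;
# "significant" results here are, by construction, false positives.
import random
from math import sqrt

random.seed(0)
N, TRIALS, CRIT = 30, 2000, 1.96

false_positives = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]  # true mean is 0
    sample_mean = sum(sample) / N
    z = sample_mean / (1 / sqrt(N))  # known sigma = 1
    if abs(z) > CRIT:
        false_positives += 1

print(f"Type I error rate: {false_positives / TRIALS:.3f}")  # near .05
```

The simulated rate hovers near the chosen alpha, which is exactly what alpha means: the risk of a false assertion of a relationship that the researcher accepts in advance.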