Reliability Analysis

The reliability of a measuring instrument is defined as the ability of the instrument to measure consistently the phenomenon it is designed to assess. Reliability, therefore, refers to test consistency.

Cronbach’s Alpha It is very common in research to collect multiple measures of the same construct. For example, in a questionnaire designed to measure optimism, there are typically many items that collectively measure the construct of optimism. To have confidence in a measure such as this, we need to test its reliability, the degree to which it is error-free. The type of reliability we'll be examining here is called internal consistency reliability, the degree to which multiple measures of the same thing agree with one another.

Cronbach’s alpha is a single coefficient that summarizes how strongly the items within a test correlate with one another; it can be viewed as an estimate based on the average of all the inter-item correlations. If alpha is high (close to 1), this suggests that the items are reliable and the entire test is internally consistent. If alpha is low, at least one of the items is unreliable and should be identified via an item analysis procedure.

Cronbach’s Alpha Example: Does my questionnaire measure customer satisfaction in a useful way? Using reliability analysis, you can determine the extent to which the items in your questionnaire are related to each other, you can get an overall index of the repeatability or internal consistency of the scale as a whole, and you can identify problem items that should be excluded from the scale.

What is alpha and why should we care? Cronbach’s alpha is the most commonly used measure of reliability (i.e., internal consistency). It was originally derived by Kuder & Richardson (1937) for dichotomously scored data (0 or 1) and later generalized by Cronbach (1951) to accommodate any scoring method. A high value of alpha is good, but a deeper understanding of how the statistic behaves is needed to use it properly.

Other types of reliability
Test/Re-Test: The same test is taken twice.
Equivalent Forms: Different tests covering the same topics; this can be approximated by splitting a single test into halves and correlating the half scores, as in the sketch below.
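
A minimal split-half sketch in Python (the function name and simulated data are illustrative, not part of the original example):

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Split-half reliability with the Spearman-Brown correction.

    items: (n_respondents, n_items) array of item scores.
    Correlates odd-item and even-item half scores, then steps the
    correlation up to the reliability of the full-length test.
    """
    odd = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)             # Spearman-Brown step-up

# Hypothetical demo data: 100 respondents, 10 items driven by one trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))
items = trait + rng.normal(size=(100, 10))
print(f"split-half reliability: {split_half_reliability(items):.2f}")
```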

Cronbach’s basic equation for alpha:

α = (n / (n − 1)) × (1 − ΣVi / Vtest)

where n = number of questions, Vi = variance of scores on each question, and Vtest = total variance of overall scores on the entire test.
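
A minimal sketch of this formula in Python (the function name and demo data are illustrative assumptions):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha computed directly from the variance formula.

    items: (n_respondents, n_questions) array of item scores.
    """
    n = items.shape[1]                      # number of questions
    v_i = items.var(axis=0, ddof=1)         # variance of each question
    v_test = items.sum(axis=1).var(ddof=1)  # variance of total test scores
    return (n / (n - 1)) * (1 - v_i.sum() / v_test)

# Hypothetical demo: items that share a common factor covary, which
# inflates Vtest relative to the sum of item variances and drives alpha up.
rng = np.random.default_rng(42)
shared = rng.normal(size=(200, 1))
correlated = shared + rng.normal(size=(200, 10))
independent = rng.normal(size=(200, 10))
print(f"correlated items:  alpha = {cronbach_alpha(correlated):.2f}")   # high
print(f"independent items: alpha = {cronbach_alpha(independent):.2f}")  # near zero
```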

How alpha works Vtest is the most important part of alpha. Vtest equals the sum of the item variances plus twice the sum of the item covariances, so when the items covary positively, Vtest grows large relative to ΣVi. If Vtest is large, alpha will be large as well: large Vtest → small ratio ΣVi/Vtest → subtracting this small ratio from 1 → high alpha.

What makes a question “good” or “bad” in terms of alpha? SPSS will report “alpha if item deleted”, which shows what alpha would become if that one question were not on the test. A higher “alpha if item deleted” means the question is not so good, because deleting that question would improve the overall alpha.
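
A hedged sketch of that computation (illustrative helper names; assumes item scores in a NumPy array):

```python
import numpy as np

def alpha_if_item_deleted(items: np.ndarray) -> np.ndarray:
    """Recompute Cronbach's alpha with each item left out in turn.

    Returns one alpha per item; an item whose deletion *raises* alpha
    is dragging the scale's internal consistency down.
    """
    def alpha(x: np.ndarray) -> float:
        n = x.shape[1]
        return (n / (n - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                                / x.sum(axis=1).var(ddof=1))

    return np.array([alpha(np.delete(items, j, axis=1))
                     for j in range(items.shape[1])])
```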

Obtaining Reliability of Scale - Example 105 members of the community completed a ten-item ‘attitudes-to-help-seeking’ instrument, using a 5-point Likert scale (1=Strongly disagree to 5=Strongly agree). You wish to determine the internal consistency of this scale using Cronbach’s alpha. The data are given in Attitude_Reliability.sav.

Obtaining Reliability of Scale using SPSS - Example Obtaining Cronbach’s Alpha: Analyze → Scale → Reliability Analysis. Set Model to Alpha, then move the items from the list on the left into the Items box on the right; in this case, items hs1 to hs10.

Reliability Press “Statistics”, choose “Scale if item deleted”, then press “Continue” and “OK”.
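
For readers working outside SPSS, a rough Python equivalent of this run (a sketch assuming the pyreadstat package is installed; it reads the same Attitude_Reliability.sav file):

```python
import numpy as np
import pyreadstat  # assumed available: pip install pyreadstat

# Read the SPSS data file from the example.
df, meta = pyreadstat.read_sav("Attitude_Reliability.sav")
items = df[[f"hs{i}" for i in range(1, 11)]].to_numpy()

n = items.shape[1]
v_test = items.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (n / (n - 1)) * (1 - items.var(axis=0, ddof=1).sum() / v_test)
print(f"Cronbach's alpha: {alpha:.3f}")  # SPSS reports .768 for these data
```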

Reliability The SPSS Output

Case Processing Summary

                      N     %
Cases   Valid         105   100.0
        Excluded(a)   0     .0
        Total         105   100.0

a. Listwise deletion based on all variables in the procedure.

105 cases (observations/respondents) were used in the calculation of Cronbach’s alpha.

Reliability Statistics

Cronbach's Alpha   Cronbach's Alpha Based on Standardized Items   N of Items
.768               .792                                           10

The obtained alpha is 0.768, which indicates that the scale has high internal consistency (reliability). Cronbach’s alpha is a reliability coefficient that indicates how well the items correlate positively with one another. The closer Cronbach’s alpha is to 1, the higher the internal consistency. As a rule of thumb, values below 0.6 are considered poor, values in the 0.6 to 0.7 range are acceptable, and values over 0.7 are good.

Item-Total Statistics

Item   Scale Mean        Scale Variance    Corrected Item-     Squared Multiple   Cronbach's Alpha
       if Item Deleted   if Item Deleted   Total Correlation   Correlation        if Item Deleted
hs1    17.94             21.862            .648                .528               .720
hs2    17.39             23.048            .345                .301               .764
hs3    18.19             22.348            .693                .533               .719
hs4    17.72             22.836            .379                .259               .758
hs5    17.75             21.977            .646                .464               .721
hs6    17.86             23.220            .425                .209               .749
hs7                      25.029            .246                .158               .772
hs8    18.45             24.692            .415                .339               .752
hs9    18.15             24.265            .526                .389               .743
hs10   17.44             23.710            .253                .193               .781

Note that items hs7 and hs10 have the lowest corrected item-total correlations. If either of these items were removed from the scale, the “Cronbach’s Alpha if Item Deleted” column shows that overall reliability would increase slightly (to .772 and .781, respectively). Hence, deletion of these items may be considered.
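
The “corrected item-total correlation” column can be reproduced by correlating each item with the sum of the remaining items; a brief sketch (hypothetical helper name, NumPy array of item scores assumed):

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score of the *other* items."""
    rest = items.sum(axis=1, keepdims=True) - items   # total minus own item
    return np.array([np.corrcoef(items[:, j], rest[:, j])[0, 1]
                     for j in range(items.shape[1])])
```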