Research Methodology Lecture No : 11 (Goodness Of Measures)

Recap

Measurement is the process of assigning numbers or labels to objects, persons, states of nature, or events. Scales are a set of symbols or numbers, assigned by rule, to individuals, their behaviors, or attributes associated with them.

Using these scales, we complete the development of our instrument. It remains to be seen whether these instruments measure the concept accurately.

Sources of Measurement Differences

Why do 'scores' vary? Among the reasons are legitimate differences and differences due to error (systematic or random):
1. There is a true difference in what is being measured.
2. There are differences in stable characteristics of individual respondents. For example, on satisfaction measures there are systematic differences in response based on the age of the respondent. 4/12/2017

3. Differences due to short-term personal factors – mood swings, fatigue, time constraints, or other transitory factors. For example, in a telephone survey of the same person, these factors (tired versus refreshed) may cause differences in measurement.
4. Differences due to situational factors – calling when someone is distracted by something versus giving full attention.

5. Differences resulting from variations in administering the survey – voice inflection, non-verbal communication, etc.
6. Differences due to the sampling of items included in the questionnaire.

7. Differences due to a lack of clarity in the measurement instrument (measurement instrument error), e.g. unclear or ambiguous questions.
8. Differences due to mechanical or instrument factors – blurred questionnaires, bad phone connections.

Goodness of Measure

Once we have operationalized the concepts and assigned scales, we want to make sure that the instruments developed measure the concept accurately and appropriately:
- Measure what is supposed to be measured
- Measure as well as possible

Validity: checks how well an instrument that is developed measures the concept.
Reliability: checks how consistently an instrument measures.

Ways to Check for Reliability

Reliability of measurement instruments is checked through the stability of measures and the internal consistency of measures. Two methods are discussed to check stability.

(1) Stability
(a) Test–Retest: Use the same instrument, administering the test to the same participants shortly after the first time, taking the measurement in conditions as close to the original as possible.

If there are few differences in scores between the two tests, the instrument is stable: it has shown test-retest reliability. Problems with this approach:
- It is difficult to get cooperation a second time
- Respondents may have learned from the first test, altering their responses
- Other factors may be present that alter results (environment, etc.)

(b) Equivalent Form: This approach attempts to overcome some of the problems associated with the test-retest measurement of reliability. Two questionnaires, designed to measure the same thing, are administered to the same group on two separate occasions (the recommended interval is two weeks).

If the scores obtained from these tests are correlated, the instruments have equivalent form reliability. It is tough to create two distinct forms that are equivalent, which makes this (as with test-retest) an impractical method that is not often used in applied research.

(2) Internal Consistency Reliability: This is a test of the consistency of respondents' answers to all the items in a measure. The items should 'hang together' as a set; i.e., since the items are independent measures of the same concept, they will correlate with one another.
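Internal consistency is commonly summarized with Cronbach's alpha, computed from the item variances and the variance of the total score. A from-scratch sketch on made-up four-item questionnaire data (the scores and the ~0.7 rule of thumb are illustrative, not from the lecture):

```python
# Cronbach's alpha for internal consistency, computed from scratch.
# rows = respondents, columns = items (hypothetical Likert scores).
from statistics import pvariance

items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

k = len(items[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*items)]  # variance of each item
total_var = pvariance([sum(row) for row in items])   # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually taken as acceptable
```

When the items genuinely measure the same concept, the total-score variance dominates the summed item variances and alpha approaches 1.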

Developing questions on the concept 'Enriched Job'

Validity

Definition: whether what was intended to be measured was actually measured.

Face Validity: The weakest form of validity. The researcher simply looks at the measurement instrument and concludes that it will measure what is intended; it is thus by definition subjective.

Content Validity: The degree to which the instrument items represent the universe of the concept under study. In plain English: did the measurement instrument cover all aspects of the topic at hand?

Criterion-Related Validity: The degree to which the measurement instrument can predict a variable known as the criterion variable.

There are two subcategories of criterion-related validity.

Predictive Validity: The ability of the test or measure to differentiate among individuals with reference to a future criterion. E.g., scores on an instrument that is supposed to measure aptitude can be compared with the individuals' future job performance: those who actually perform well should also have scored high on the aptitude test, and vice versa.

Concurrent Validity: Established when the scale discriminates among individuals who are known to be different; that is, they should score differently on the test. E.g., individuals who are content to draw welfare and individuals who prefer to work should score differently on a scale/instrument that measures work ethic.

Construct Validity: Does the measurement conform to some underlying theoretical expectations? If so, the measure has construct validity. I.e., if we are measuring consumer attitudes about product purchases, does the measure adhere to the constructs of consumer behavior theory? This is the territory of academic researchers.

Two approaches are used to assess construct validity.

Convergent Validity: a high degree of correlation between two different measures intended to measure the same construct.
Discriminant Validity: a low degree of correlation among variables that are assumed to be different.

Validity can be checked through correlation analysis, factor analysis, the multitrait-multimethod correlation matrix, etc.

Reflective vs. Formative Scales: In some multi-item measures, the items measuring different dimensions of a concept do not hang together. Such is the case with the Job Description Index measure, which measures job satisfaction through five items spanning different dimensions: regular promotions, fairly good chance for promotion, adequate income, highly paid, and good opportunity for accomplishment.

In this case, the items 'adequate income' and 'highly paid' can be expected to correlate, but 'opportunity for advancement' and 'highly paid' might not. Not all the items in the measure relate to each other, since its dimensions address different aspects of job satisfaction. Such a measure/scale is termed a formative scale.

In other cases, the measure's dimensions and items do correlate; the different dimensions share a common basis (a common interest). An example is the Attitude Toward the Offer scale: since the items are all focused on the price of an item, all the items are related, and such a scale is termed a reflective scale.

Recap