Chapter 14: Measurement and Data Quality
Copyright © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins

Question
Tell whether the following statement is true or false: Measurement involves assigning numbers to objects to represent the amount of an attribute.

Answer
True. Measurement involves assigning numbers to objects to represent the amount of an attribute, using a specified set of rules. Researchers strive to develop or use measurements whose rules are isomorphic with reality.

Measurement
The assignment of numbers to represent the amount of an attribute present in an object or person, using specific rules
Advantages:
–Removes guesswork
–Provides precise information
–Is less vague than words

Errors of Measurement
Obtained score = True score + Error
–Obtained score: the actual data value for a participant
–True score: the value that would be obtained with a hypothetically perfect measure of the attribute
–Error: measurement inaccuracies
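
The error model on this slide can be illustrated with a short simulation; the following is a minimal sketch in Python with invented score distributions (the variances are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

n = 500
true_scores = rng.normal(loc=50, scale=10, size=n)   # hypothetical "true" attribute values
error = rng.normal(loc=0, scale=5, size=n)           # random measurement inaccuracies
obtained = true_scores + error                       # obtained score = true score + error

# Reliability can be read as the share of obtained-score variance
# that is true-score variance (see the reliability coefficient slide).
print(round(true_scores.var() / obtained.var(), 2))  # ~0.80 with these illustrative variances
```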

Factors That Contribute to Errors of Measurement
–Situational contaminants
–Transitory personal factors
–Response-set biases

Question
Tell whether the following statement is true or false: Reliability is the degree to which an instrument measures what it is supposed to measure.

Answer
False. Validity is the degree to which an instrument measures what it is supposed to measure. Reliability is the degree of consistency or accuracy with which an instrument measures an attribute.

Key Criteria for Evaluating Quantitative Measures
–Reliability
–Validity

Reliability
The consistency and accuracy with which an instrument measures an attribute
Reliability assessments involve computing a reliability coefficient
–Most reliability coefficients are based on correlation coefficients

Question
Tell whether the following statement is true or false: Reliability coefficients usually range from .00 to 1.00, with higher values reflecting lesser reliability.

Answer
False. Reliability coefficients usually range from .00 to 1.00, with higher values reflecting greater reliability, not lesser reliability.

Correlation Coefficients
Correlation coefficients indicate the direction and magnitude of relationships between variables
Range:
–from –1.00 (perfect negative correlation)
–through 0.00 (no correlation)
–to +1.00 (perfect positive correlation)
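
Direction and magnitude can be seen in a quick computation; the numbers below are invented for illustration:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y_pos = np.array([2, 4, 5, 7, 9])      # rises with x -> strong positive correlation
y_neg = np.array([10, 8, 7, 5, 2])     # falls as x rises -> strong negative correlation

print(round(np.corrcoef(x, y_pos)[0, 1], 2))   # close to +1.00
print(round(np.corrcoef(x, y_neg)[0, 1], 2))   # close to -1.00
```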

Three Aspects of Reliability Can Be Evaluated
–Stability: the extent to which an instrument yields the same results on repeated administrations
–Internal consistency: the extent to which all of the instrument’s items measure the same attribute
–Equivalence: the extent to which different raters or observers using the instrument agree (estimated as interrater or interobserver reliability)

Stability
The extent to which scores are similar on two separate administrations of an instrument
Evaluated by test–retest reliability:
–Requires participants to complete the same instrument on two occasions
–A correlation coefficient between scores on the first and second administrations is computed
–Appropriate for relatively enduring attributes (e.g., self-esteem)
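
As a sketch of how a test–retest coefficient is obtained in practice, the scores below are hypothetical and scipy is assumed to be available:

```python
from scipy.stats import pearsonr

# Hypothetical self-esteem scores for eight participants, measured twice
time1 = [32, 28, 40, 35, 22, 30, 38, 26]
time2 = [30, 29, 41, 33, 24, 31, 37, 27]

r, _ = pearsonr(time1, time2)   # test-retest reliability coefficient
print(round(r, 2))              # values near 1.00 indicate good stability
```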

Internal Consistency
The extent to which all the instrument’s items are measuring the same attribute
Evaluated by administering the instrument on one occasion
Appropriate for most multi-item instruments
Evaluation methods:
–Split-half technique
–Coefficient alpha
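
Coefficient alpha can be computed directly from an item-score matrix; a minimal sketch, using an invented 4-item, 5-respondent data set:

```python
import numpy as np

# Rows = respondents, columns = items of a hypothetical 4-item scale
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
])

k = items.shape[1]
sum_item_variances = items.var(axis=0, ddof=1).sum()
total_score_variance = items.sum(axis=1).var(ddof=1)

# Coefficient (Cronbach's) alpha
alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(round(alpha, 2))   # ~.95 here; values of .70-.80 or higher are usually sought
```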

Equivalence
The degree of similarity between alternative forms of an instrument, or between multiple raters/observers using an instrument
Most relevant for structured observations
Assessed by comparing the observations or ratings of two or more observers (interobserver/interrater reliability)
Numerous formulas and assessment methods exist; when ratings fall into a small number of categories, the kappa statistic is often used
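
A sketch of an interrater check with the kappa statistic, using hypothetical ratings and assuming scikit-learn is available:

```python
from sklearn.metrics import cohen_kappa_score

# Two observers classify the same ten observations into categories A/B/C
rater_1 = ["A", "A", "B", "C", "B", "A", "C", "B", "A", "C"]
rater_2 = ["A", "A", "B", "C", "A", "A", "C", "B", "A", "B"]

kappa = cohen_kappa_score(rater_1, rater_2)   # agreement corrected for chance
print(round(kappa, 2))
```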

Reliability Coefficients
Represent the proportion of true variability to obtained variability: r = V_T / V_O
Should be at least .70; .80 is preferable
Can be improved by making the instrument longer (adding items)
Are lower in homogeneous than in heterogeneous samples
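
The gain from lengthening an instrument is usually projected with the Spearman-Brown prophecy formula (not named on the slide, but standard in psychometrics); a minimal sketch:

```python
def spearman_brown(current_reliability: float, length_factor: float) -> float:
    """Projected reliability after multiplying the number of comparable items by length_factor."""
    r = current_reliability
    return (length_factor * r) / (1 + (length_factor - 1) * r)

# A scale with reliability .70, doubled in length with comparable items
print(round(spearman_brown(0.70, 2.0), 2))   # ~0.82
```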

Validity
The degree to which an instrument measures what it is supposed to measure
Four aspects of validity:
–Face validity
–Content validity
–Criterion-related validity
–Construct validity

Face Validity
Refers to whether the instrument looks as though it is measuring the appropriate construct
Based on judgment; there are no objective criteria for assessment

Content Validity
The degree to which an instrument has an appropriate sample of items for the construct being measured
Evaluated by expert review, via the content validity index (CVI)
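
A common way to compute the CVI is as the proportion of experts who rate each item as relevant (3 or 4 on a 4-point relevance scale); the ratings below are invented:

```python
# Relevance ratings (1-4) from five hypothetical experts for four items
ratings = {
    "item_1": [4, 4, 3, 4, 3],
    "item_2": [4, 3, 4, 4, 4],
    "item_3": [2, 3, 4, 3, 2],
    "item_4": [4, 4, 4, 3, 4],
}

item_cvi = {}
for item, scores in ratings.items():
    relevant = sum(1 for s in scores if s >= 3)   # experts rating the item 3 or 4
    item_cvi[item] = relevant / len(scores)       # item-level CVI

scale_cvi = sum(item_cvi.values()) / len(item_cvi)   # scale-level CVI (average of item CVIs)
print(item_cvi, round(scale_cvi, 2))
```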

Criterion-Related Validity
The degree to which the instrument correlates with an external criterion
A validity coefficient is calculated by correlating scores on the instrument with scores on the criterion

Criterion-Related Validity (cont’d)
Two types of criterion-related validity:
–Predictive validity: the instrument’s ability to distinguish people whose performance differs on a future criterion
–Concurrent validity: the instrument’s ability to distinguish individuals who differ on a present criterion
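
A predictive validity coefficient is simply the correlation between instrument scores and a criterion measured later; the data below are hypothetical and scipy is assumed:

```python
from scipy.stats import pearsonr

# Hypothetical scores on a new functional-status scale at admission,
# and a criterion measured three months later (days of independent living)
scale_scores = [12, 18, 25, 9, 22, 15, 28, 11]
future_criterion = [40, 55, 80, 30, 70, 50, 85, 38]

validity_coefficient, _ = pearsonr(scale_scores, future_criterion)
print(round(validity_coefficient, 2))   # higher values indicate stronger predictive validity
```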

Construct Validity
Concerned with the questions:
–What is this instrument really measuring?
–Does it adequately measure the construct of interest?

Methods of Assessing Construct Validity
–Known-groups technique
–Relationships based on theoretical predictions
–Multitrait–multimethod matrix method (MTMM)
–Factor analysis
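
The known-groups technique, for example, compares scores for groups expected to differ on the construct; a sketch with invented anxiety-scale scores, assuming scipy:

```python
from scipy.stats import ttest_ind

# Hypothetical anxiety-scale scores for two groups expected to differ:
# patients awaiting surgery versus patients at a routine checkup
pre_surgery = [68, 72, 75, 70, 80, 66, 74]
routine_visit = [50, 55, 48, 60, 52, 57, 49]

t_stat, p_value = ttest_ind(pre_surgery, routine_visit)
# A clear difference in the expected direction supports construct validity
print(round(t_stat, 1), round(p_value, 4))
```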

Multitrait–Multimethod Matrix Method
Builds on two types of evidence:
–Convergence
–Discriminability

Convergence
Evidence that different methods of measuring a construct yield similar results
Convergent validity comes from the correlations between two different methods measuring the same trait

Discriminability
Evidence that the construct can be differentiated from other, similar constructs
Discriminant validity assesses the degree to which a single method of measuring two constructs yields different results
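
In an MTMM matrix, convergent correlations (same trait, different methods) should be high and should exceed discriminant correlations (different traits, same method); a minimal sketch with invented scores:

```python
import numpy as np

# Hypothetical scores for six participants: anxiety measured by self-report
# and by observer rating, plus depression measured by self-report
anxiety_self = [20, 25, 30, 22, 28, 35]
anxiety_obs = [19, 26, 29, 21, 30, 33]
depression_self = [10, 30, 15, 12, 25, 20]

convergent = np.corrcoef(anxiety_self, anxiety_obs)[0, 1]        # same trait, different methods
discriminant = np.corrcoef(anxiety_self, depression_self)[0, 1]  # different traits, same method

# Support for construct validity: the convergent correlation is high and
# clearly larger than the discriminant correlation
print(round(convergent, 2), round(discriminant, 2))
```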

Psychometric Assessment
Gather evidence on:
–Validity
–Reliability
–Other assessment criteria

Criteria for Assessing Screening/Diagnostic Instruments
–Sensitivity: the instrument’s ability to correctly identify a “case”
–Specificity: the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition
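
Both criteria reduce to simple proportions once screening results are cross-tabulated against true condition status; the counts below are made up for illustration:

```python
# Hypothetical screening results versus true condition status
true_positives = 45    # cases the screen correctly flags
false_negatives = 5    # cases the screen misses
true_negatives = 90    # noncases the screen correctly clears
false_positives = 10   # noncases the screen wrongly flags

sensitivity = true_positives / (true_positives + false_negatives)   # 45/50 = 0.90
specificity = true_negatives / (true_negatives + false_positives)   # 90/100 = 0.90
print(sensitivity, specificity)
```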

Developing Screening/Diagnostic Instruments
The goal is to establish a cutoff point that balances sensitivity and specificity:
–Receiver operating characteristic (ROC) curves
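
One common way to choose such a cutoff from an ROC curve is to take the threshold that maximizes sensitivity + specificity - 1 (Youden's index, a standard choice not named on the slide); a sketch with invented data, assuming scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical screening-scale scores and true condition status (1 = case)
scores = np.array([3, 5, 7, 8, 10, 12, 13, 15, 16, 18])
status = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(status, scores)

# Youden's index: sensitivity + specificity - 1 = tpr - fpr
best = np.argmax(tpr - fpr)
print("suggested cutoff:", thresholds[best])   # score at or above which to classify as a "case"
```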