Chapter 8 Flashcards.

systematic process that involves assigning labels (usually numbers) to characteristics of people, objects, or events using explicit and consistent rules so, ideally, the labels accurately represent the characteristic measured Measurement

abstraction that symbolizes a class of people (e.g., female), objects (e.g., chair), or events (e.g., baseball game) that have one or more characteristics in common Concept

definition that assigns meaning to a concept in terms of other concepts, such as in a dictionary, instead of in terms of the activities or operations used to measure it. (Contrast with Operational definition.) Conceptual definition

definition that assigns meaning to a concept in terms of the activities or operations used to measure it, ideally in a way that contains relevant features of the concept and excludes irrelevant features. (Contrast with Conceptual definition.) Operational definition

discrepancies between measured and actual (true) values of a variable caused by flaws in the measurement process (e.g., characteristics of clients or other respondents, measurement conditions, properties of measures). See also Random measurement errors and Systematic measurement errors Measurement errors

discrepancies between measured and actual (true) values of a variable that are equally likely to be higher or lower than the actual values because they are caused by chance fluctuations in measurement. They are caused by flaws in the measurement process; they tend to cancel each other out and average to zero, but they increase the variability of measured values. Also known as unsystematic measurement errors. (Contrast with Systematic measurement errors.) Random measurement errors

discrepancies between measured and actual (true) values of a variable that tend to be consistently higher or consistently lower than the actual values. They are caused by flaws in the measurement process and lead to over- or underestimates of the actual values of a variable. Also known as bias in measurement. (Contrast with Random measurement errors.) Systematic measurement errors
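
The distinction is easy to see in a quick simulation. The following minimal sketch (not from the chapter; all numbers are invented) adds random and then systematic error to a set of true scores and shows that only the random errors average out to roughly zero:

```python
# A minimal simulation sketch (invented numbers) contrasting random and
# systematic measurement error with NumPy.
import numpy as np

rng = np.random.default_rng(42)
true_scores = rng.normal(loc=50, scale=10, size=10_000)  # actual (true) values

# Random error: equally likely to be high or low, so it averages to ~0,
# but it inflates the variability of the measured values.
random_error = rng.normal(loc=0, scale=5, size=true_scores.size)
measured_random = true_scores + random_error

# Systematic error (bias): a constant shift in one direction, so every
# measured value overestimates the true value.
measured_biased = true_scores + 4.0

print(f"mean error, random:     {np.mean(measured_random - true_scores):+.2f}")  # ~0.00
print(f"mean error, systematic: {np.mean(measured_biased - true_scores):+.2f}")  # +4.00
print(f"SD of true scores:      {true_scores.std():.2f}")
print(f"SD with random error:   {measured_random.std():.2f}")  # larger than true SD
```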

statistic that indicates whether and how two variables are related. A correlation has a potential range from –1.0 to +1.0. A positive correlation means that people with higher values on one variable tend to have higher values on the other variable; a negative correlation means that people with higher values on one variable tend to have lower values on the other. A correlation of 0 means there is no linear relationship between the two variables. The absolute value of a correlation (i.e., the actual number, ignoring the plus or minus sign) indicates the strength of the relationship: the larger the absolute value, the stronger the relationship Correlation
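
As a concrete illustration, the sketch below (with made-up data) computes a Pearson correlation with NumPy and separates its sign (direction) from its absolute value (strength):

```python
# A minimal sketch (invented data) computing a Pearson correlation and
# reading off direction (sign) and strength (absolute value).
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
exam_score = np.array([52, 55, 61, 58, 70, 72, 75, 80])

r = np.corrcoef(hours_studied, exam_score)[0, 1]
print(f"r = {r:+.2f}")        # sign gives the direction of the relationship
print(f"|r| = {abs(r):.2f}")  # absolute value gives its strength
```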

general term for the consistency of measurements; unreliability means inconsistency caused by random measurement errors. See also Internal-consistency reliability, Inter-rater reliability, and Test–retest reliability Reliability

degree to which scores on a measure are consistent over time Test–retest reliability
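
In practice this is typically quantified by correlating scores from two administrations of the same measure, as in this minimal sketch with invented scores:

```python
# A minimal sketch (invented scores) of test-retest reliability: correlate
# scores from two administrations of the same measure.
import numpy as np

scores_time1 = np.array([14, 20, 11, 25, 17, 22, 9, 18])
scores_time2 = np.array([15, 19, 12, 24, 18, 21, 10, 17])  # e.g., two weeks later

r_test_retest = np.corrcoef(scores_time1, scores_time2)[0, 1]
print(f"test-retest r = {r_test_retest:.2f}")  # closer to 1.0 = more stable scores
```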

degree to which responses to a set of items on a standardized scale measure the same construct consistently Internal-consistency reliability

statistic typically used to quantify the internal-consistency reliability of a standardized scale. Also known as Cronbach’s alpha and, when items are dichotomous, Kuder-Richardson 20, KR20, or KR-20 Coefficient alpha
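
The usual computing formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. The sketch below applies it directly to an invented 6-respondent, 5-item data set:

```python
# A sketch of coefficient (Cronbach's) alpha computed from its usual
# formula: alpha = k/(k-1) * (1 - sum(item variances) / variance(total)).
# The 6-respondent, 5-item data set is invented for illustration.
import numpy as np

items = np.array([  # rows = respondents, columns = scale items
    [3, 4, 3, 4, 4],
    [2, 2, 3, 2, 2],
    [5, 5, 4, 5, 5],
    [4, 4, 4, 3, 4],
    [1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"coefficient alpha = {alpha:.2f}")
```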

degree of consistency in ratings or observations across raters, observers, or judges (e.g., a second opinion from a health care professional, judges in an Olympic competition). Also known as interobserver or interjudge reliability or agreement Inter-rater reliability
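
One common statistic for quantifying inter-rater agreement on categorical ratings (not named on this card) is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch with invented ratings:

```python
# A sketch of Cohen's kappa for two raters assigning categorical labels.
# Ratings are invented for illustration.
import numpy as np

rater_a = np.array(["yes", "yes", "no", "yes", "no", "no", "yes", "no"])
rater_b = np.array(["yes", "no", "no", "yes", "no", "yes", "yes", "no"])

p_observed = np.mean(rater_a == rater_b)  # proportion of exact agreements

# Agreement expected by chance, from each rater's marginal proportions
labels = np.union1d(rater_a, rater_b)
p_expected = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in labels)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```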

general term for the degree to which accumulated evidence and theory support interpretations and uses of scores derived from a measure. See also Concurrent validity, Construct validity, Content validity, Convergent validity, Criterion validity, Discriminant validity, Face validity, Predictive validity, and Sensitivity to change Measurement validity

degree to which a measure of a construct or other variable appears to measure a given construct in the opinion of clients, other respondents, and other users of the measure Face validity

degree to which questions, behaviors, or other types of content represent a given construct comprehensively (e.g., the full range of relevant content is represented, and irrelevant content is not) Content validity

degree to which scores on a measure can predict performance or status on another measure that serves as a standard (i.e., the criterion, sometimes called a gold standard). See also Concurrent validity and Predictive validity Criterion validity

degree to which scores on a measure can predict a contemporaneous criterion. (Contrast with Predictive validity.) See also Criterion validity Concurrent validity

degree to which scores on a measure can predict a criterion measured at a future point in time. (Contrast with Concurrent validity.) See also Criterion validity Predictive validity

complex concept (e.g., intelligence, well-being, depression) that is inferred or derived from a set of interrelated attributes (e.g., behaviors, experiences, subjective states, attitudes) of people, objects, or events; typically embedded in a theory; and oftentimes not directly observable but measured using multiple indicators Construct

degree to which scores on a measure can be interpreted as representing a given construct, as evidenced by theoretically predicted patterns of associations with measures of related and unrelated variables, group differences, and changes over time. Also reflects the accuracy of conclusions, based on evidence and reasoning, about the degree to which the cause and effect variables as operationalized in a study represent the constructs of interest (e.g., whether an intervention as implemented or an outcome as measured contains all of the relevant features and excludes irrelevant ones). See also Convergent validity and Discriminant validity Construct validity

degree to which scores derived from a measure of a construct are correlated in the predicted way with other measures of the same or related constructs or variables. (Contrast with Discriminant validity.) See also Construct validity Convergent validity

degree to which scores derived from a measure of a construct are uncorrelated with, or otherwise distinct from, measures of theoretically dissimilar or unrelated constructs or other variables. (Contrast with Convergent validity.) See also Construct validity Discriminant validity
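
Convergent and discriminant validity are often assessed together by inspecting a correlation matrix. The sketch below (all scores invented) shows the predicted pattern: two measures of the same construct correlating strongly with each other, and each correlating weakly with a theoretically unrelated variable:

```python
# A sketch (invented scores) of the predicted convergent/discriminant
# pattern: two depression measures correlate strongly with each other
# (convergent) and weakly with an unrelated variable, height (discriminant).
import numpy as np

depression_scale_a = np.array([10, 22, 15, 30, 8, 25, 18, 12])
depression_scale_b = np.array([12, 20, 17, 28, 9, 27, 16, 11])
height_cm = np.array([170, 165, 180, 172, 168, 175, 160, 182])

r_convergent = np.corrcoef(depression_scale_a, depression_scale_b)[0, 1]
r_discriminant = np.corrcoef(depression_scale_a, height_cm)[0, 1]

print(f"convergent r   = {r_convergent:+.2f}")    # expected to be large
print(f"discriminant r = {r_discriminant:+.2f}")  # expected to be near zero
```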

degree to which a measure detects genuine change in the variable measured. Also known as responsiveness to change Sensitivity to change