
Dr. Jeffrey Oescher 27 January 2014

Technical Issues
- Two technical issues
  - Validity
  - Reliability

Technical Issues
- Validity – the extent to which inferences made on the basis of scores from an instrument are appropriate, meaningful, and useful
- Characteristics of validity
  - Refers to the interpretation of the results
  - Is a matter of degree
  - Is situation specific
  - Is a unitary concept
  - Involves an overall judgment

Data Collection – Technical Issues
- Validity evidence
  - Content
    - Face
  - Construct
  - Criterion-related
    - Predictive
    - Concurrent
  - Criterion-related evidence is situationally specific
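Where criterion-related evidence is reported numerically, it is typically a validity coefficient: the correlation between scores on the instrument and scores on an external criterion. A minimal sketch with invented data follows (`statistics.correlation` requires Python 3.10+):

```python
# Minimal sketch: criterion-related (predictive) validity as the correlation
# between instrument scores and a criterion measure. All data are invented.
from statistics import correlation  # Python 3.10+

test_scores = [72, 85, 90, 65, 78, 88, 70, 95]          # scores on the instrument
job_ratings = [3.1, 4.2, 4.5, 2.8, 3.6, 4.0, 3.0, 4.8]  # criterion measure

# Higher coefficients mean inferences from the test to the criterion
# are better supported -- in that situation, for that criterion.
validity_coefficient = correlation(test_scores, job_ratings)
print(f"Criterion-related validity: r = {validity_coefficient:.2f}")
```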

Data Collection – Technical Issues
- Reliability – the extent to which scores are free from error
  - Error is measured by consistency
  - Two perspectives
    - Test – the reliability of a test
    - Agreement – the reliability of an observation

Data Collection – Technical Issues
- Test reliability evidence
  - Stability
    - Also known as test-retest
    - Measured on a scale of 0 to 1
  - Equivalence
    - Also known as parallel forms
    - Measured on a scale of 0 to 1
  - Internal consistency
    - Split half
    - KR-20
    - KR-21
    - Cronbach's alpha
    - All measured on a scale of 0 to 1
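The internal-consistency indices above are all computed from a single test administration. A minimal sketch of Cronbach's alpha on an invented item-response matrix follows; KR-20 is the special case of the same formula for items scored 0/1:

```python
# Minimal sketch of internal-consistency reliability (Cronbach's alpha).
# Rows = students, columns = items; all responses are invented.
from statistics import pvariance

responses = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
]

k = len(responses[0])                      # number of items
items = list(zip(*responses))              # column-wise item scores
totals = [sum(row) for row in responses]   # each student's total score

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
item_var = sum(pvariance(item) for item in items)
alpha = (k / (k - 1)) * (1 - item_var / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")   # 0 to 1; higher = more consistent
```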

Data Collection – Technical Issues
- Score reliability evidence
  - Standard error of measurement (SEM)
    - A statistic used to build a range around an observed score within which the student's true score is likely to fall
    - Usually reported alongside the student's score, e.g., 'SEM = +/- 2.25'
    - You can add and subtract one (1) SEM to a student's score and be confident that their score falls within that range of scores 68% of the time
    - You can add and subtract two (2) SEM to a student's score and be confident that their score falls within that range of scores 95% of the time
- Agreement reliability evidence
  - Percentage of agreement between observers
  - More commonly known as inter-rater reliability
  - Ranges on a scale from 0 to 1
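A minimal sketch of both ideas, with invented numbers: SEM bands built from the standard classical-test-theory relation SEM = SD * sqrt(1 - reliability), and percentage agreement between two observers:

```python
# Minimal sketch: SEM confidence bands around an observed score, and
# percentage agreement between two raters. All values are invented.
import math

# Classical test theory: SEM = SD * sqrt(1 - reliability)
sd, reliability = 10.0, 0.91
sem = sd * math.sqrt(1 - reliability)      # = 3.0 here

score = 75
print(f"68% band: {score - sem:.1f} to {score + sem:.1f}")      # +/- 1 SEM
print(f"95% band: {score - 2*sem:.1f} to {score + 2*sem:.1f}")  # +/- 2 SEM

# Agreement reliability: proportion of observations on which two raters agree.
rater_a = ["on", "off", "on", "on", "off", "on"]
rater_b = ["on", "off", "off", "on", "off", "on"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Inter-rater agreement = {agreement:.2f}")  # 0 to 1
```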

Score Interpretation
- Two types of interpretations: criterion-referenced and norm-referenced
- Criterion-referenced
  - You need to know the underlying scale (e.g., 0-100, 1-5, etc.) upon which the scores are based
  - The interpretation of the test score is made relative to this underlying scale
  - Example: the scores indicated the students mastered about three-fourths of the objectives
  - The scores are interpreted relative to what the students know
  - The scores easily communicate some level of performance (e.g., good, bad, moderate, etc.)
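A tiny worked example of a criterion-referenced interpretation, with invented numbers:

```python
# Minimal sketch: interpreting a score against a known underlying scale.
# The counts are invented for illustration.
objectives_mastered = 15
total_objectives = 20

mastery = 100 * objectives_mastered / total_objectives
print(f"Students mastered about {mastery:.0f}% of the objectives")  # 75%
```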

Score Interpretation
- Norm-referenced
  - You need to know the reference group (i.e., the norming sample) against which the scores are being compared
  - The interpretation of test scores is made in relation to the scores of students in the norming group
  - Example: John's score put him in the 85th percentile
  - John's score indicates he performed better than 85% of the students in the norming group
  - John's score doesn't tell us anything about what John knows in terms of content
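A minimal sketch of a norm-referenced interpretation: computing a percentile rank against an invented norming sample:

```python
# Minimal sketch: percentile rank = percentage of the norming sample
# scoring below a given student. All scores are invented.
norm_group = [50, 52, 55, 58, 60, 62, 64, 66, 68, 70,
              72, 74, 76, 78, 80, 82, 84, 88, 90, 95]
johns_score = 86

below = sum(s < johns_score for s in norm_group)
percentile = 100 * below / len(norm_group)
print(f"John scored higher than {percentile:.0f}% of the norming group")  # 85%
```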

Score Interpretation
- A note of caution
  - Which of the following represents a criterion-referenced interpretation, and which a norm-referenced one?
    - The scores for the experimental group were significantly higher than those for the control group.
    - The scores for the experimental group indicated mastery of about 95% of the objectives, while the scores for the control group indicated only 65% mastery.
  - These are common examples from the literature you will be reading
  - Be careful with the first interpretation: it only tells us which group performed better, not how well either group performed.