Chapter 10: Clarifying Measurement and Data Collection in Quantitative Research
Copyright © 2011 by Saunders, an imprint of Elsevier Inc.

Concepts of Measurement Theory
- Directness of measurement
- Measurement error
- Level of measurement
- Reliability
- Validity

Directness of Measurement
- Direct measures: concrete things such as oxygen saturation, temperature, and weight
- Indirect measures: abstract concepts such as pain, depression, coping, self-care, and self-esteem

Measurement Error
- The difference between the true measure and what is actually measured
- Systematic error: the variation in measurement is consistently in the same direction
- Random error: the difference varies without pattern

Levels of Measurement
- Nominal
- Ordinal
- Interval
- Ratio

Nominal-Scale Measurement
- Lowest of the four levels of measurement
- Categories are not more or less than one another, only different in some way
- Categories are mutually exclusive and exhaustive
- Categories are named

Example of Nominal Data
Gender:
1 = Male
2 = Female

Ordinal-Scale Measurement
- Order/ranking imposed on categories
- Numbers must preserve order:
1 = Tallest
2 = Next tallest
3 = Third tallest

Interval-Scale Measurement
- Equal numerical distances between intervals
- No absolute zero point
- Example: Likert scale scores
1 = Strongly disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly agree

Ratio-Scale Measurement
- Highest level of measurement
- Continuum of values
- Absolute zero point

Example of Ratio Data
Weight: 0 kg means a complete absence of weight, and 100 kg is twice 50 kg, so ratios between values are meaningful. (Test scores grouped into percentile thirds are ordinal, not ratio, because the groups are only ranked.)

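Because a ratio scale has an absolute zero, statements like "twice as much" are meaningful on it but not on an interval scale. A minimal sketch of the distinction, using temperature (the values are illustrative; Celsius is interval because its zero is arbitrary, Kelvin is ratio):

```python
# Ratios are meaningful only when the scale has an absolute zero.
# Celsius has an arbitrary zero (interval); Kelvin does not (ratio).
def c_to_k(celsius):
    return celsius + 273.15

# On the Celsius scale, 20 degrees looks like "twice" 10 degrees...
print(20 / 10)                              # 2.0
# ...but in Kelvin the two temperatures differ by only about 3.5%.
print(round(c_to_k(20) / c_to_k(10), 3))    # 1.035
```

The same caution applies to instrument scores: doubling a Likert total does not mean "twice as much" of the concept, because the scale has no absolute zero.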
Reference of Measurement
Norm-referenced testing:
- Compares a subject's performance against standards that have been carefully developed over years with large, representative samples, using a standardized test with extensive reliability and validity

Reference of Measurement (cont’d)
Criterion-referenced testing:
- Compares a subject's score with a criterion of achievement that includes the definition of target behaviors
- When the behaviors are mastered, the subject is considered proficient in them

Reliability Testing
- Concerned with how consistently the measurement technique measures the concept of interest
- Requires dependability, consistency, accuracy, and comparability
- If expressed as a correlation coefficient, 1.00 is perfect reliability, whereas 0.00 is no reliability
- The lowest acceptable coefficient for a well-developed measurement tool is 0.80

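The correlation coefficient described above can be made concrete with a small sketch. Here two administrations of the same instrument (test-retest) are correlated with a pure-Python Pearson computation; the scores are illustrative, not from the chapter:

```python
# Test-retest reliability sketch: Pearson correlation between two
# administrations of the same instrument. Scores are illustrative.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [10, 12, 14, 15, 18, 20]   # first administration
time2 = [11, 12, 13, 16, 17, 21]   # second administration
r = pearson_r(time1, time2)
print(round(r, 2))  # 0.97 here, above the 0.80 minimum for a well-developed tool
```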
Types of Reliability
- Stability: concerned with the consistency of repeated measures (test-retest reliability)
- Equivalence: focused on comparing two versions of the same instrument (alternate-forms reliability) or two observers measuring the same event (interrater reliability)
- Homogeneity: addresses the correlation of the items within the instrument (internal consistency); determined by split-half reliability or Cronbach's alpha coefficient

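The homogeneity statistic named above, Cronbach's alpha, can be computed directly from item scores. A minimal sketch using the standard formula, alpha = k/(k-1) × (1 − Σ item variances / variance of totals); the respondent data are illustrative:

```python
# Internal-consistency sketch: Cronbach's alpha from item scores.
# Rows are respondents, columns are items; the data are illustrative.
def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(rows):
    k = len(rows[0])                        # number of items
    items = list(zip(*rows))                # one column per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(scores), 2))  # 0.93 for these data
```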
Interrater Reliability
- Consistency among raters
- % agreement = (number of agreements ÷ number of possible agreements) × 100

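The percent-agreement calculation can be sketched as follows; the two raters' observation codes are illustrative:

```python
# Interrater agreement sketch: percentage of observations on which
# two raters record the same code. The codes are illustrative.
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent = 100 * agreements / len(rater_a)
print(percent)  # 80.0 (8 agreements out of 10 observations)
```

Note that simple percent agreement does not correct for agreement expected by chance; chance-corrected statistics such as Cohen's kappa are often reported alongside it.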
Critiquing for Reliability
1. What reliability information is provided?
2. Does the author include reports of the reliability of the instrument from previous studies?
3. Did the author perform pilot studies to examine the reliability of the instrument?
4. Did the researcher report use of data from the present study to examine instrument reliability in the discussion section of the report?

What Is Validity?
The extent to which an instrument actually reflects the concept being examined.

Physiological Measures
- Accuracy
- Selectivity
- Precision
- Sensitivity
- Sources of error

Critiquing Methods of Measurement
- Were the measurement strategies clearly identified and described?
- What level of measurement was achieved by each instrument?
- Was the reliability of each measurement method adequate?
- Was the validity of each measurement method adequate?
- If physiological instruments were used, were they accurate and precise?

Measurement Strategies in Nursing
- Physiological measurements
- Observational measurements
- Interviews

Examples of Physiological Measures
Physical measurement methods:
- ECG, BP, I&O
- SvO2, pulse oximetry
Microbiological methods:
- Smears
- Cultures, number of CFUs
- Sensitivities

Observational Measurements
- Unstructured observations
- Structured observations:
  - Category systems
  - Checklists
  - Rating scales

Critiquing Observational Measurement
1. Is the object of observation clearly identified and defined?
2. Is interrater reliability described?
3. Are the techniques for recording observations described?

Types of Interviews: Unstructured
- Uses broad questions, e.g., "Describe for me your experience with…"
- The interviewer's role is to encourage continued discussion

Examples of Unstructured or Open-Ended Interview Questions
- "Tell me about..."
- "What has been your experience with..."
- "What was it like to hear you have cancer?"

Types of Interviews: Structured
- Describing interview questions
- Pretesting the interview protocol
- Training interviewers
- Preparing for an interview
- Probing
- Recording interview data

Closed-Ended Interview Questions
- Structured
- Response alternatives are fixed
- Example: "Which would you rather do, x or y?"

Critiquing Interview Methods
1. Do the interview questions address concerns expressed in the research problem?
2. Are the interview questions relevant for the research purpose and objectives, questions, or hypotheses?
3. Does the design of the questions tend to bias subjects’ responses?
4. Does the sequence of questions tend to bias subjects’ responses?

Focus Groups
- Used to study qualitative issues
- Obtain participants’ perceptions of a narrow subject in a group interview session
- Give the group a feeling of “safety in numbers”
- Nonverbal approaches are included
- Discussion helps to provide depth of data

Focus Group Considerations
- Participants may be sorted into smaller groups with common characteristics (segmentation)
- An effective moderator is needed to keep the discussion on track
- The setting should be relaxed and comfortable
- High-quality tape recordings should be made

Critiquing the Use of Focus Groups
1. What was the aim of the focus group?
2. Was the group size appropriate for the focus group method?
3. Was the group sufficiently homogeneous to speak candidly?
4. Was the moderator successful in keeping the discussion focused?

Critiquing the Use of Focus Groups (cont’d)
5. Was the aim of the focus group achieved?
6. Did conclusions appear to be representative?
7. Were minority positions identified and explored?

Measurement Strategies
- Questionnaires
- Scales
- Q methodology
- Diaries
- Delphi technique

Questionnaires
Administration methods:
- In person or by phone
- Self-administered
- By mail

Scales
- Rating scales
- The Likert scale
- Semantic differential scales
- Visual analog scales

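Likert-scale items are typically summed into a total score, with negatively worded items reverse-coded first so that all items point in the same direction. A minimal scoring sketch; the item names, responses, and the choice of which item is reverse-coded are illustrative:

```python
# Likert-scale scoring sketch: sum item responses on a 5-point scale,
# reversing negatively worded items first. Data are illustrative.
def reverse(score, points=5):
    return points + 1 - score          # maps 1<->5 and 2<->4

responses = {"item1": 4, "item2": 2, "item3": 5}
reversed_items = {"item2"}             # item2 is negatively worded

total = sum(reverse(v) if k in reversed_items else v
            for k, v in responses.items())
print(total)  # 4 + reverse(2) + 5 = 4 + 4 + 5 = 13
```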
Example of Items in a Likert Scale

Example of Visual Analog Scale

Critiquing a Scale
1. Is the instrument clearly described?
2. Are the techniques that were used to administer and score the scale provided?
3. Is information about the validity and reliability of the scale described from previous studies?
4. Is information about the validity and reliability of the scale described for the present sample?
5. If the scale was developed for the study, was the instrument development process described?

Critiquing the Data Collection Process
- Was the data collection process clearly described?
- Was data collection conducted in a consistent way?
- Were research controls maintained?
- If data collectors were used, were they adequately trained?

Serendipity
- The accidental discovery of something useful or valuable
- Can lead to new insights