Identifying Scientific Variables


1 Identifying Scientific Variables
Chapter 4 Identifying Scientific Variables

2 Chapter Outline
Criteria for defining and measuring variables
Constructs and operational definitions
Types of variables
Scales of measurement
Reliability of a measurement
Validity of a measurement
Selecting a measurement procedure

3 Criteria for defining and measuring variables
Variable – Any value or characteristic that can change or vary from one person to another or from one situation to another
For a variable to be suitable for scientific study, it must be observable and replicable
Observable: Can be directly or indirectly measured
Replicable: Can be observed more than once

4 Criteria for defining and measuring variables
April is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
Winter kept us warm, covering
Earth in forgetful snow, feeding
A little life with dried tubers.

5 Constructs and Operational Definitions
Construct or hypothetical construct – A conceptual variable that is presumed to exist but cannot be directly observed
The operational definition of a construct specifies an observable, measurable indicator of that construct that we will observe

6 Constructs and Operational Definitions

7 Types of Variables Continuous and discrete variables
Continuous: Measured along a continuum, at any place beyond the decimal point, meaning it can be measured in whole units or fractional units
Ex. Olympic sprinters are timed to the nearest hundredth of a second, but if the Olympic judges wanted to clock them to the nearest millionth, they could
Discrete: Measured in whole units or categories that are not distributed along a continuum
Ex. The number of brothers and sisters you have; socioeconomic class (working class, middle class, upper class)

8 Types of Variables Quantitative and qualitative
Quantitative: Varies by amount
Measured as a numeric value; often collected by measuring or counting
Ex. Food intake in calories (a continuous variable), or a count of pieces of food consumed (a discrete variable)
Qualitative: Varies by class
Often a category or label for the behaviors and events researchers observe; describes nonnumeric aspects of phenomena
Ex. Socioeconomic class (working class, middle class, upper class), categories of mental disorders (such as depression)

9 Scales of Measurement
Scales of measurement: Rules for how the properties of numbers can change with different uses
Nominal
Ordinal
Interval
Ratio

10 Scales of Measurement
Nominal scales – Measurements in which a number is assigned to represent a name or category of something or someone
Ex. ZIP codes, license plate numbers
Numbers on a nominal scale are often coded values
Coding: The procedure of converting a categorical variable to numeric values
Ex. For a person's gender, researchers may code men as 1 and women as 2
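The coding step described above can be sketched in a few lines of Python. This is purely illustrative; the labels and code values follow the slide's men = 1, women = 2 example:

```python
# Coding: map each category label of a nominal variable to an
# arbitrary numeric value. The numbers are just labels -- they carry
# no quantitative meaning (2 is not "more than" 1 here).
codes = {"man": 1, "woman": 2}

responses = ["woman", "man", "woman"]
coded = [codes[r] for r in responses]
print(coded)  # [2, 1, 2]
```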

11 Scales of Measurement
Ordinal scales – Measurements that convey order or rank only
Simply indicate that one rank is greater or less than another
Ex. Finishing order in a competition, education level

12 Scales of Measurement
Interval scales – Measurements that have no true zero and are distributed in equal units
Equidistant scale: A scale distributed in units that are equidistant from one another
Ex. Rating scales
True zero: When the value 0 truly indicates nothing (the absence of the measured quantity) on a scale of measurement. Interval scales do not have a true zero
Ex. Temperature in degrees Celsius or Fahrenheit

13 Scales of Measurement
Ratio scales – Measurements that have a true zero and are equidistant
Similar to interval scales in that scores are distributed in equal units
The distribution of scores on a ratio scale has a true zero
The most informative scale of measurement
Ex. Length, height, weight, time
Scale controversy: Whether rating-scale data are truly interval is debated; subjective experience matters.
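The practical difference between interval and ratio scales can be checked with simple arithmetic: a ratio of two weights (true zero) survives a change of units, while a ratio of two temperatures (no true zero) does not. A sketch with made-up values:

```python
# Weight is a ratio scale: "twice as heavy" holds in any unit.
kg_ratio = 20 / 10
lb_ratio = (20 * 2.20462) / (10 * 2.20462)
print(round(kg_ratio, 2), round(lb_ratio, 2))  # 2.0 2.0

# Temperature (Celsius/Fahrenheit) is an interval scale: the ratio
# changes with the unit, so "twice as hot" is meaningless.
c_ratio = 20 / 10
f_ratio = (20 * 9 / 5 + 32) / (10 * 9 / 5 + 32)  # 68 F / 50 F
print(round(c_ratio, 2), round(f_ratio, 2))  # 2.0 1.36
```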

14 Reliability of a Measurement
Reliability – The consistency, stability, or repeatability of one or more measures or observations
Test-retest reliability: Extent to which a measure or observation is consistent or stable at two points in time, when a measure demonstrated at "Time 1" is repeated using the same measure or observation procedure at "Time 2"
Advantage of test-retest reliability: you can determine the extent to which items or measures are replicable or consistent over time

15 Reliability of a Measurement
Internal consistency: Extent to which multiple items used to measure the same variable are related
Reflects the extent to which multiple items for the variable give the same picture of the behavior or event being measured
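One widely used index of internal consistency is Cronbach's alpha (not named on the slide, so this is an illustrative addition): alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch with made-up questionnaire data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns over the same respondents."""
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three questionnaire items answered by five respondents (made-up data).
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(items), 3))
```

Values near 1.0 mean the items give the same picture of the variable; low values suggest the items are measuring different things.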

16 Reliability of a Measurement
Inter-rater reliability (IRR) or inter-observer reliability – Extent to which two or more raters of the same behavior or event agree on what they observed
Cohen's kappa: A statistic that estimates the consistency in the ratings of two or more raters
High IRR shows that the observations made reflect those that other observers would agree with
Low IRR indicates misunderstanding or confusion concerning the behavior or event being observed
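Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A sketch with made-up categorical ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments of the same events."""
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two observers classifying the same 10 behaviors (made-up labels).
a = ["hit", "hit", "miss", "hit", "miss", "hit", "hit", "miss", "hit", "miss"]
b = ["hit", "hit", "miss", "hit", "hit", "hit", "hit", "miss", "hit", "miss"]
print(round(cohens_kappa(a, b), 2))  # 0.78
```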

17 Validity of a Measurement
Validity – The extent to which a measurement for a variable or construct measures what it is purported or intended to measure
Construct validity – Extent to which an operational definition for a variable or construct is actually measuring that variable or construct
Convergent validity: The degree to which two measures of constructs that theoretically should be related are, in fact, related
Discriminant validity: The degree to which two measures of constructs that theoretically should be unrelated are, in fact, unrelated
For example, a suicidal ideation scale should produce scores that correlate well with scores on depression and loneliness inventories; it would be unexpected to find that this same scale correlates positively with a scale of life satisfaction.

18 Validity of a Measurement
Construct validity – Extent to which an operational definition for a variable or construct is actually measuring that variable or construct
Face validity – Extent to which a measure for a variable or construct appears to measure what it is purported to measure
Does it "look like" a valid measure?
Involves getting a general consensus among our peers that the measure we are using for a variable appears to be valid

19 Validity of a Measurement
Validity – The extent to which a measurement for a variable or construct measures what it is purported or intended to measure
Content validity – Extent to which the items or contents of a measure adequately represent all the features of the construct being measured
The more thorough a measure for a construct, the higher its content validity
Example: Think about an all-purpose psychology exam. What if the exam contained only questions covering neurophysiology and evolution? What would we be able to infer about the psychology knowledge of a student who achieves a high score?
Criterion validity: Extent to which scores obtained on some measure can be used to infer or predict a criterion or expected outcome

20 Selecting a Measurement Procedure
Researchers can be aware of, and control for, problems that can arise in measurement procedures
Four potential concerns:
Participant reactivity
Experimenter bias
Sensitivity
Range effects

21 Selecting a Measurement Procedure
Participant reactivity – The reaction or response participants have when they know they are being observed or measured

22 Selecting a Measurement Procedure
To minimize participant reactivity:
Reassure confidentiality
Use deception when ethical
Minimize demand characteristics
Demand characteristics: Any characteristic of a research setting that may reveal the hypothesis being tested or give the participant a clue regarding how he or she is expected to behave

23 Selecting a Measurement Procedure
Experimenter bias – Extent to which the behavior of a researcher or experimenter intentionally or unintentionally influences the results of a study
Expectancy effects – Preconceived ideas or expectations regarding how participants should behave or what participants are capable of doing
Expectancy effects can often lead to experimenter bias

24 Selecting a Measurement Procedure
To minimize experimenter bias:
Get a second opinion
Standardize the research procedures
Conduct a double-blind study
Double-blind study: A research study in which both the researcher collecting the data and the participants are unaware of the conditions to which participants are assigned

25 Selecting a Measurement Procedure
Sensitivity: Are the measures sensitive enough to respond to the type and magnitude of the changes that are expected? (e.g., seconds vs. milliseconds, difficult vs. easy exams)
Range effect: A limitation in the range of data measured, in which scores cluster at one extreme
Ceiling effect: The clustering of scores at the high end of a measurement scale, allowing little or no possibility of increases in value (e.g., a test that is too easy)
Floor effect: The clustering of scores at the low end of a measurement scale, allowing little or no possibility of decreases in value (e.g., a test that is too difficult)
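A ceiling effect can be spotted by checking what share of scores sit at the top of the scale. A sketch with made-up exam scores; the 50% threshold here is an arbitrary assumption for illustration, not a standard cutoff:

```python
# Made-up scores on an exam scored 0-10 that may be too easy.
scores = [9, 10, 10, 8, 10, 10, 9, 10, 10, 10]
scale_max = 10

# Proportion of scores clustered at the scale maximum.
at_ceiling = sum(s == scale_max for s in scores) / len(scores)
print(at_ceiling)  # 0.7
if at_ceiling > 0.5:  # arbitrary illustrative threshold
    print("possible ceiling effect: the test may be too easy")
```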

26 Selecting a Measurement Procedure
To maximize the sensitivity of a measure and minimize range effects:
Perform a thorough literature review
Conduct a pilot study
Pilot study: A small preliminary study used to determine the extent to which a manipulation or measure shows an effect
Use multiple measures

