Independent and Dependent Variables

Presentation on theme: "Independent and Dependent Variables"— Presentation transcript:

1 Independent and Dependent Variables
Operational Definitions
Evaluating Operational Definitions
Planning the Method Section

2 Value of experiments If it is possible to conduct one, an experiment is usually preferred because it has high internal validity. A psychology experiment has three main features: we manipulate the antecedent conditions; the antecedent condition must have at least two levels; and we measure the response or behavior of subjects under each treatment condition.

3 What is an independent variable?
An independent variable (IV) is the variable (antecedent condition) an experimenter intentionally manipulates. Levels of an independent variable are the values of the IV created by the experimenter. An experiment requires at least two levels.

4 Example An experimenter wishes to test the value of St. John's Wort as a treatment for depression. St. John's Wort is given to subjects; it is the antecedent condition. But if we have only one group taking St. John's Wort, we won't know whether the behavior change we see is due to the St. John's Wort. Therefore, we need another level of this antecedent condition: a group that receives no St. John's Wort but whose members are also depressed.

5 In a true experiment we must make sure that our treatment groups do not consist of people who are different on preexisting conditions. For instance, if the group receiving the St. John’s Wort is also in an exercise program but the group not receiving the St. John’s Wort is not, we would not know if the exercise or the St. John’s Wort caused the reaction. This problem is known as confounding.

6 Independent and Dependent Variables
Explain confounding. An experiment is confounded when the value of an extraneous variable systematically changes along with the independent variable. For example, we could confound our experiment if we ran experimental subjects in the morning and control subjects at night.

7 What is a dependent variable?
A dependent variable is the outcome measure the experimenter uses to assess the change in behavior produced by the independent variable. The dependent variable depends on the value of the independent variable. If our hypothesis is correct, different values of the independent variable should produce changes in the dependent variable.

8 Example Schachter hypothesized that if people were scared or anxious, they would seek the company of other people, or affiliate. He told one group of people that they would receive painful shocks. The other group was told they would receive shocks, but that the shocks wouldn't be painful at all; they would feel more like a tickle. While the subjects were waiting to be in the study, they were asked whether they would rather wait alone or in a room with someone else. Those who were told they would receive painful shocks chose to wait with someone else.

9 Example
Independent variable =
Levels of independent variable =

10 Example Hess tested the hypothesis that large pupils make a person more attractive. Hess asked men to look at photos of women, one of which was retouched so that the woman's pupils appeared larger. The men were asked which woman appeared more friendly, charming, and so on.

11 Example
Independent variable =
Levels of independent variable =

12 What is an operational definition?
An operational definition specifies the exact meaning of a variable in an experiment by defining it in terms of observable operations, procedures, and measurements. Operational definitions are necessary for replication. Anyone should be able to replicate any experiment.

13 What is an operational definition?
An experimental operational definition specifies the exact procedure for creating values of the independent variable. How was anxiety created in Schachter's experiment? How were faces manipulated in Hess's experiment? A measured operational definition specifies the exact procedure for measuring the dependent variable. How was affiliation measured in Schachter's experiment? How was attractiveness measured in Hess's?

14 Defining scales of measurement
Nominal
Ordinal
Interval
Ratio

15 What are the properties of a nominal scale?
A nominal scale assigns items to two or more distinct, named categories that share a feature, but does not measure their magnitude. Example: sorting canines into friendly and shy categories, or classifying people by sex (male and female), represents a nominal scale.

16 What are the properties of an ordinal scale?
An ordinal scale measures the magnitude of the dependent variable using ranks, but does not assign precise values. Example: a runner's place in a marathon allows us to make statements about relative speed, but not precise speed.

17 What are the properties of an interval scale?
An interval scale measures the magnitude of the dependent variable using equal intervals between values, with no absolute zero point. Examples: degrees Celsius or Fahrenheit, and Sarnoff and Zimbardo's (1961) scale. Survey scales, such as Likert-type scales, are treated as interval scales.

18 What are the properties of a ratio scale?
A ratio scale measures the magnitude of the dependent variable using equal intervals between values and an absolute zero. This scale allows us to state that 2 meters are twice as long as 1 meter. Examples: distance in meters, time in seconds, or weight in pounds or ounces.
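
The four scales above differ in which arithmetic operations are meaningful. A minimal sketch, with made-up data, of the key distinction between interval and ratio scales:

```python
# Hypothetical data illustrating the four scales of measurement.
dog_temperament = ["friendly", "shy", "friendly"]  # nominal: categories only
marathon_places = [1, 2, 3]                        # ordinal: order, not distance
temps_celsius   = [10.0, 20.0, 30.0]               # interval: equal steps, no true zero
distances_m     = [1.0, 2.0, 4.0]                  # ratio: true zero, ratios meaningful

# Ratios are meaningful only on a ratio scale: 2 m really is twice 1 m.
assert distances_m[1] / distances_m[0] == 2.0

# 20 degrees C is NOT "twice as hot" as 10 degrees C; the zero point is
# arbitrary, so converting to Fahrenheit changes the apparent ratio.
f = [c * 9 / 5 + 32 for c in temps_celsius]        # [50.0, 68.0, 86.0]
assert f[1] / f[0] != temps_celsius[1] / temps_celsius[0]
```

The same logic explains why ranks (ordinal) support comparisons like "faster than" but not "twice as fast."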

19 What does reliability mean?
Reliability refers to the consistency of experimental operational definitions and measured operational definitions. Example: a reliable bathroom scale should display the same weight if you measure yourself three times in the same minute.

20 Explain interrater reliability.
Interrater reliability is the degree to which observers agree in their measurement of the behavior. Example: the degree to which three observers agree when scoring the same personal essays for optimism.
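
As a rough sketch with hypothetical ratings, interrater reliability can be quantified as the simple percent agreement between two observers scoring the same essays:

```python
# Hypothetical optimism ratings of five essays by two observers.
rater_a = ["optimistic", "optimistic", "neutral", "pessimistic", "optimistic"]
rater_b = ["optimistic", "neutral",    "neutral", "pessimistic", "optimistic"]

# Proportion of essays on which the two raters gave the same score.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)  # 0.8: the raters agree on 4 of 5 essays
```

Percent agreement is the simplest index; chance-corrected statistics such as Cohen's kappa are commonly preferred because two raters will agree on some items by luck alone.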

21 Explain test-retest reliability.
Test-retest reliability means the degree to which a person's scores are consistent across two or more administrations of a measurement procedure. Example: highly correlated scores on the Wechsler Adult Intelligence Scale-Revised when it is administered twice, 2 weeks apart.

22 Explain interitem reliability.
Interitem reliability measures the degree to which different parts of an instrument (questionnaire or test) that are designed to measure the same variable achieve consistent results.
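
One standard index of interitem reliability is Cronbach's alpha: k/(k-1) times (1 minus the sum of the item variances divided by the variance of the total scores). A minimal sketch with hypothetical questionnaire data:

```python
from statistics import pvariance  # population variance

# Rows = respondents; columns = three items meant to measure the same variable.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
k = len(scores[0])  # number of items

# Variance of each item across respondents, and variance of the total scores.
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])

alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.89 for these data; ~0.7+ is conventionally acceptable
```

Alpha is high here because respondents who score high on one item tend to score high on the others, which is exactly what "consistent results across parts of the instrument" means.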

23 What does validity mean?
Validity means the operational definition accurately manipulates the independent variable or measures the dependent variable.

24 What is face validity?
Face validity is the degree to which the validity of a manipulation or measurement technique is self-evident. This is the least stringent form of validity; face validity essentially means the item "looks right." For example, using a ruler to measure pupil size, or reading a questionnaire that measures shyness and judging that the items seem appropriate.

25 What is content validity?
Content validity means how accurately a measurement procedure samples the content of the dependent variable. Does it cover all of the topic areas necessary for the phenomenon you are trying to study? Example: an exam over chapters 1-4 that contains only questions about chapter 2 has poor content validity.

26 What is predictive validity?
Predictive validity means how accurately a measurement procedure predicts future performance. Example: the ACT has predictive validity if its scores are significantly correlated with college GPA.

27 Concurrent validity Concurrent validity compares scores on the instrument in question with scores on another instrument that is a known standard for the same construct. Example: you devise a new depression test and, to establish concurrent validity, administer it alongside the Beck Depression Inventory to confirm that both yield similar results.

28 What is construct validity?
Construct validity is how accurately an operational definition represents a construct. Example: does an intelligence test really measure the construct of intelligence you intend it to, or is it measuring social-class differences or self-esteem differences?

29 Explain internal validity.
Internal validity is the degree to which changes in the dependent variable across treatment conditions were due to the independent variable. Internal validity establishes a cause-and-effect relationship between the independent and dependent variables. Internal validity is one of the most important concepts in experimentation.

30 Explain the problem of confounding.
Confounding occurs when an extraneous variable systematically changes across the experimental conditions. Example: a study comparing the effects of meditation and prayer on blood pressure would be confounded if one group exercised more.

31 Explain history threat.
History threat occurs when an event outside the experiment threatens internal validity by changing the dependent variable. Example: subjects in group A were weighed before lunch while those in group B were weighed after lunch; or a major news event, such as Magic Johnson announcing his HIV-positive status, occurs during the study.

32 Explain maturation threat.
Maturation threat is produced when internal physical or psychological changes in the subject threaten internal validity by changing the DV. These can include boredom, fatigue, or a subject going through puberty. Example: boredom may increase subject errors on a proofreading task (DV).

33 Explain testing threat.
Testing threat occurs when prior exposure to a measurement procedure affects performance on this measure during the experiment. Example: performance improves with retesting or practice on a test.

34 Explain instrumentation threat.
Instrumentation threat occurs when changes in the measurement instrument or measuring procedure threaten internal validity. Example: two different examiners rate the subjects on the dependent variable differently.

35 Explain statistical regression threat.
Statistical regression threat, also called regression to the mean, occurs when subjects are assigned to conditions on the basis of extreme scores. We may think the IV caused the later change in their scores, when it may simply be due to regression. Example: extremely high scores tend to drop a bit, and extremely low scores tend to rise a bit, so scores at both ends move closer to the mean even without any treatment at all.
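
Regression to the mean is easy to demonstrate by simulation. The sketch below (entirely made-up numbers) selects subjects with extreme pretest scores and retests them with no treatment at all; their average still drops, purely because the noise that inflated their pretest scores does not repeat:

```python
import random

random.seed(1)

# Each subject has a stable true ability plus random measurement noise.
true_ability = [random.gauss(100, 10) for _ in range(10_000)]
pretest  = [t + random.gauss(0, 10) for t in true_ability]  # ability + noise
posttest = [t + random.gauss(0, 10) for t in true_ability]  # fresh noise, NO treatment

# Select subjects on the basis of extreme (top-decile) PRETEST scores.
cutoff = sorted(pretest)[-1000]
extreme = [i for i in range(10_000) if pretest[i] >= cutoff]

mean_pre  = sum(pretest[i] for i in extreme) / len(extreme)
mean_post = sum(posttest[i] for i in extreme) / len(extreme)
print(mean_pre > mean_post)  # True: extreme scores drift back toward the mean
```

If this extreme group had also received a treatment, we might wrongly credit the treatment for a change that regression alone would have produced.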

36 Explain selection threat.
Selection threat occurs when individual differences are not balanced across treatment conditions by the assignment procedure. Example: because of nonrandom assignment, subjects in the experimental group were more extroverted than those in the control group.

37 Explain subject mortality threat.
Subject mortality threat occurs when subjects drop out of experimental conditions at different rates. Differential attrition is a red flag because something in a treatment condition is causing people to drop out. Example: if we have two weight-loss regimens and many people drop out of regimen A, yet the people remaining in regimen A lose more weight than people in regimen B, we still cannot be sure that regimen A is better. The people who remain may be more dedicated, and that may be why they lost weight, not because the regimen was better.
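
The weight-loss example above can be sketched as a simulation (all numbers hypothetical). Both regimens are equally effective by construction; regimen A "wins" only because its casual subjects drop out:

```python
import random

random.seed(0)

def subject():
    """A subject is either dedicated or casual; weight lost depends
    only on dedication, never on which regimen was assigned."""
    dedicated = random.random() < 0.5
    loss = 5.0 if dedicated else 1.0  # kg lost
    return dedicated, loss

group_a = [subject() for _ in range(1000)]
group_b = [subject() for _ in range(1000)]

# Regimen A is demanding, so only dedicated subjects finish it;
# everyone finishes the easier regimen B.
finish_a = [loss for dedicated, loss in group_a if dedicated]
finish_b = [loss for _, loss in group_b]

mean_a = sum(finish_a) / len(finish_a)
mean_b = sum(finish_b) / len(finish_b)
print(mean_a > mean_b)  # True: A looks better purely because of who dropped out
```

Comparing only completers conflates the regimen's effect with the characteristics of the people willing to stick with it, which is exactly the mortality threat.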

38 Explain selection interactions.
Selection interactions occur when a selection threat combines with at least one other threat (history, maturation, statistical regression, subject mortality, or testing). For instance, subjects were not randomly assigned into groups (selection) and this personality difference in the groups caused one group to become bored with the treatment (maturation).

39 What is the purpose of the Method section of an APA report?
The Method section of an APA research report describes the Participants, Apparatus or Materials, and Procedure of the experiment. This section provides the reader with sufficient detail (who, what, when, and how) to exactly replicate your study.

40 When is an Apparatus section needed?
An Apparatus section of an APA research report is appropriate when the equipment used in a study was unique or specialized, or when we need to explain the capabilities of more common equipment so that the reader can better evaluate or replicate the experiment.

