Chapter 9 Comparing Two Groups


Chapter 9 Comparing Two Groups Learn: How to compare two groups on a categorical or quantitative outcome using confidence intervals and significance tests

Bivariate Analyses The outcome variable is the response variable The binary variable that specifies the groups is the explanatory variable

Bivariate Analyses Statistical methods analyze how the outcome on the response variable depends on or is explained by the value of the explanatory variable

Independent Samples The observations in one sample are independent of those in the other sample Example: Randomized experiments that randomly allocate subjects to two treatments Example: An observational study that separates subjects into groups according to their value for an explanatory variable

Dependent Samples Data are matched pairs – each subject in one sample is matched with a subject in the other sample Example: set of married couples, the men being in one sample and the women in the other. Example: Each subject is observed at two times, so the two samples have the same people

Section 9.1 Categorical Response: How Can We Compare Two Proportions?

Categorical Response Variable Inferences compare groups in terms of their population proportions in a particular category We can compare the groups by the difference in their population proportions: (p1 – p2)

Example: Aspirin, the Wonder Drug Recent Titles of Newspaper Articles: “Aspirin cuts deaths after heart attack” “Aspirin could lower risk of ovarian cancer” “New study finds a daily aspirin lowers the risk of colon cancer” “Aspirin may lower the risk of Hodgkin’s”

Example: Aspirin, the Wonder Drug The Physicians’ Health Study Research Group at Harvard Medical School conducted a five-year randomized study: Does regular aspirin intake reduce deaths from heart disease?

Example: Aspirin, the Wonder Drug Experiment: Subjects were 22,071 male physicians Every other day, study participants took either an aspirin or a placebo The physicians were randomly assigned to the aspirin or to the placebo group The study was double-blind: the physicians did not know which pill they were taking, nor did those who evaluated the results

Example: Aspirin, the Wonder Drug Results displayed in a contingency table:

Example: Aspirin, the Wonder Drug What is the response variable? What are the groups to compare?

Example: Aspirin, the Wonder Drug The response variable is whether the subject had a heart attack, with categories ‘yes’ or ‘no’ The groups to compare are: Group 1: Physicians who took a placebo Group 2: Physicians who took aspirin

Example: Aspirin, the Wonder Drug Estimate the difference between the two population parameters of interest

Example: Aspirin, the Wonder Drug p1: the proportion of the population who would have a heart attack if they participated in this experiment and took the placebo p2: the proportion of the population who would have a heart attack if they participated in this experiment and took the aspirin

Example: Aspirin, the Wonder Drug Sample Statistics:

Example: Aspirin, the Wonder Drug To make an inference about the difference of population proportions, (p1 – p2), we need to learn about the variability of the sampling distribution of the difference in sample proportions, (p̂1 – p̂2)

Standard Error for Comparing Two Proportions The difference, (p̂1 – p̂2), is obtained from sample data and will vary from sample to sample. This variation is described by the standard error of the sampling distribution of (p̂1 – p̂2):
se = √[ p̂1(1 – p̂1)/n1 + p̂2(1 – p̂2)/n2 ]

Confidence Interval for the Difference between Two Population Proportions A confidence interval for (p1 – p2) is: (p̂1 – p̂2) ± z(se) The z-score depends on the confidence level This method requires: Independent random samples for the two groups Large enough sample sizes so that there are at least 10 “successes” and at least 10 “failures” in each group

Confidence Interval Comparing Heart Attack Rates for Aspirin and Placebo 95% CI: (0.005, 0.011)
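The interval on this slide can be reproduced numerically. The counts below do not appear on the slides (the contingency table was not transcribed); they are the published Physicians’ Health Study figures, assumed here for illustration, and they yield the interval quoted above:

```python
from math import sqrt

# Assumed counts (not shown on the slide): published Physicians' Health Study results
attacks1, n1 = 189, 11034   # placebo group: heart attacks, sample size
attacks2, n2 = 104, 11037   # aspirin group

p1_hat = attacks1 / n1
p2_hat = attacks2 / n2
diff = p1_hat - p2_hat

# Standard error of (p1_hat - p2_hat)
se = sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

# 95% CI uses z = 1.96
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(round(lo, 3), round(hi, 3))  # 0.005 0.011
```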

Confidence Interval Comparing Heart Attack Rates for Aspirin and Placebo Since both endpoints of the confidence interval (0.005, 0.011) for (p1- p2) are positive, we infer that (p1- p2) is positive Conclusion: The population proportion of heart attacks is larger when subjects take the placebo than when they take aspirin

Confidence Interval Comparing Heart Attack Rates for Aspirin and Placebo The population difference, estimated by the interval (0.005, 0.011), is small Even though it is a small difference, it may be important in public health terms For example, a decrease of 0.01 over a 5-year period in the proportion of people suffering heart attacks would mean 2 million fewer people having heart attacks

Confidence Interval Comparing Heart Attack Rates for Aspirin and Placebo The study used male doctors in the U.S. The inference applies to the U.S. population of male doctors Before concluding that aspirin benefits a larger population, we’d want to see results of studies with more diverse groups

Interpreting a Confidence Interval for a Difference of Proportions Check whether 0 falls in the CI If so, it is plausible that the population proportions are equal If all values in the CI for (p1- p2) are positive, you can infer that (p1- p2) >0 If all values in the CI for (p1- p2) are negative, you can infer that (p1- p2) <0 Which group is labeled ‘1’ and which is labeled ‘2’ is arbitrary

Interpreting a Confidence Interval for a Difference of Proportions The magnitude of values in the confidence interval tells you how large any true difference is If all values in the confidence interval are near 0, the true difference may be relatively small in practical terms

Significance Tests Comparing Population Proportions 1. Assumptions: Categorical response variable for two groups Independent random samples

Significance Tests Comparing Population Proportions Assumptions (continued): Significance tests comparing proportions use the sample size guideline from confidence intervals: each sample should have at least about 10 “successes” and 10 “failures” Two-sided tests are robust against violations of this condition; at least 5 “successes” and 5 “failures” is adequate

Significance Tests Comparing Population Proportions 2. Hypotheses: The null hypothesis is the hypothesis of no difference or no effect: H0: (p1 – p2) = 0 Under the presumption that p1 = p2, we create a pooled estimate of the common value of p1 and p2 This pooled estimate is the overall sample proportion for the two samples combined: p̂ = (total count of “successes”)/(n1 + n2)

Significance Tests Comparing Population Proportions 2. Hypotheses (continued): Ha: (p1- p2) ≠ 0 (two-sided test) Ha: (p1- p2) < 0 (one-sided test) Ha: (p1- p2) > 0 (one-sided test)

Significance Tests Comparing Population Proportions 3. The test statistic is: z = (p̂1 – p̂2)/se0, where se0 = √[ p̂(1 – p̂)(1/n1 + 1/n2) ] uses the pooled estimate p̂

Significance Tests Comparing Population Proportions 4. P-value: Probability obtained from the standard normal table 5. Conclusion: Smaller P-values give stronger evidence against H0 and in support of Ha
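Steps 3–5 can be sketched end to end. The counts are again the published Physicians’ Health Study figures (189 heart attacks among 11,034 on placebo; 104 among 11,037 on aspirin), assumed for illustration since the slides do not show them:

```python
from math import sqrt, erf

attacks1, n1 = 189, 11034   # placebo (assumed published counts, not on the slide)
attacks2, n2 = 104, 11037   # aspirin

p1_hat, p2_hat = attacks1 / n1, attacks2 / n2

# Pooled estimate under H0: p1 = p2
p_pool = (attacks1 + attacks2) / (n1 + n2)

# Null standard error and z test statistic
se0 = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1_hat - p2_hat) / se0

# Two-sided P-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(round(z, 1), p_value < 0.001)  # 5.0 True
```

A z-statistic of about 5 gives a tiny P-value, which is strong evidence that the population heart-attack proportions differ.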

Example: Is TV Watching Associated with Aggressive Behavior? Various studies have examined a link between TV violence and aggressive behavior by those who watch a lot of TV A study sampled 707 families in two counties in New York state and made follow-up observations over 17 years The data shows levels of TV watching along with incidents of aggressive acts

Example: Is TV Watching Associated with Aggressive Behavior?

Example: Is TV Watching Associated with Aggressive Behavior? Test the Hypotheses: H0: (p1- p2) = 0 Ha: (p1- p2) ≠ 0 Using a significance level of 0.05 Group 1: less than 1 hr. of TV per day Group 2: at least 1 hr. of TV per day

Example: Is TV Watching Associated with Aggressive Behavior?

Example: Is TV Watching Associated with Aggressive Behavior? Conclusion: Since the P-value is less than 0.05, we reject H0 We conclude that the population proportions of aggressive acts differ for the two groups The sample values suggest that the population proportion is higher for the higher level of TV watching

In 2002, the median net worth was estimated as $89,000 for white households and $6000 for black households. What is the response variable?
Net worth
Households: white or black

In 2002, the median net worth was estimated as $89,000 for white households and $6000 for black households. What is the explanatory variable?
Net worth
Households: white or black

In 2002, the median net worth was estimated as $89,000 for white households and $6000 for black households. Identify the two groups that are the categories of the explanatory variable.
White and Black households
Net worth and households

In 2002, the median net worth was estimated as $89,000 for white households and $6000 for black households. The estimated medians were based on a sample of households. Were the samples of white households and black households independent samples or dependent samples?
Independent samples
Dependent samples

Section 9.2 Quantitative Response: How Can We Compare Two Means?

Comparing Means We can compare two groups on a quantitative response variable by comparing their means

Example: Teenagers Hooked on Nicotine A 30-month study: Evaluated the degree of addiction that teenagers form to nicotine 332 students who had used nicotine were evaluated The response variable was constructed using a questionnaire called the Hooked on Nicotine Checklist (HONC)

Example: Teenagers Hooked on Nicotine The HONC score is the total number of questions to which a student answered “yes” during the study The higher the score, the more hooked on nicotine a student is judged to be

Example: Teenagers Hooked on Nicotine The study considered explanatory variables, such as gender, that might be associated with the HONC score

Example: Teenagers Hooked on Nicotine How can we compare the sample HONC scores for females and males? We estimate (µ1 – µ2) by (x̄1 – x̄2): 2.8 – 1.6 = 1.2 On average, females answered “yes” to about one more question on the HONC scale than males did

Example: Teenagers Hooked on Nicotine To make an inference about the difference between population means, (µ1 – µ2), we need to learn about the variability of the sampling distribution of the difference in sample means, (x̄1 – x̄2)

Standard Error for Comparing Two Means The difference, (x̄1 – x̄2), is obtained from sample data and will vary from sample to sample. This variation is described by the standard error of the sampling distribution of (x̄1 – x̄2):
se = √( s1²/n1 + s2²/n2 )

Confidence Interval for the Difference between Two Population Means A 95% CI: (x̄1 – x̄2) ± t.025(se) Software provides the t-score with right-tail probability of 0.025

Confidence Interval for the Difference between Two Population Means This method assumes: Independent random samples from the two groups An approximately normal population distribution for each group this is mainly important for small sample sizes, and even then the method is robust to violations of this assumption

Example: Nicotine – How Much More Addicted Are Smokers than Ex-Smokers? Data as summarized by HONC scores for the two groups: Smokers: x̄1 = 5.9, s1 = 3.3, n1 = 75 Ex-smokers: x̄2 = 1.0, s2 = 2.3, n2 = 257

Example: Nicotine – How Much More Addicted Are Smokers than Ex-Smokers? Were the sample data for the two groups approximately normal? Most likely not for Group 2: with x̄2 = 1.0 and s2 = 2.3, and scores that cannot fall below 0, the distribution must be skewed right Since the sample sizes are large, this lack of normality is not a problem

Example: Nicotine – How Much More Addicted Are Smokers than Ex-Smokers? 95% CI for (µ1 – µ2): (4.1, 5.7) We can infer that the population mean for the smokers is between 4.1 higher and 5.7 higher than for the ex-smokers
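This interval can be reproduced from the slide’s summary statistics. Because both samples are large, the t-score is close to the normal value, so 1.96 is used here as an approximation:

```python
from math import sqrt

# HONC summary statistics from the slide
x1, s1, n1 = 5.9, 3.3, 75    # smokers
x2, s2, n2 = 1.0, 2.3, 257   # ex-smokers

diff = x1 - x2                        # point estimate of mu1 - mu2
se = sqrt(s1**2 / n1 + s2**2 / n2)    # standard error of the difference

# 95% CI; with these large samples, t.025 is approximately 1.96
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(round(lo, 1), round(hi, 1))  # 4.1 5.7
```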

How Can We Interpret a Confidence Interval for a Difference of Means? Check whether 0 falls in the interval When it does, 0 is a plausible value for (µ1 – µ2), meaning that it is possible that µ1 = µ2 A confidence interval for (µ1 – µ2) that contains only positive numbers suggests that (µ1 – µ2) is positive We then infer that µ1 is larger than µ2

How Can We Interpret a Confidence Interval for a Difference of Means? A confidence interval for (µ1 – µ2) that contains only negative numbers suggests that (µ1 – µ2) is negative We then infer that µ1 is smaller than µ2 Which group is labeled ‘1’ and which is labeled ‘2’ is arbitrary

Significance Tests Comparing Population Means 1. Assumptions: Quantitative response variable for two groups Independent random samples

Significance Tests Comparing Population Means Assumptions (continued): Approximately normal population distributions for each group This is mainly important for small sample sizes, and even then the two-sided test is robust to violations of this assumption

Significance Tests Comparing Population Means 2. Hypotheses: The null hypothesis is the hypothesis of no difference or no effect: H0: (µ1- µ2) =0

Significance Tests Comparing Population Means 2. Hypotheses (continued): The alternative hypothesis: Ha: (µ1 – µ2) ≠ 0 (two-sided test) Ha: (µ1 – µ2) < 0 (one-sided test) Ha: (µ1 – µ2) > 0 (one-sided test)

Significance Tests Comparing Population Means 3. The test statistic is: t = (x̄1 – x̄2)/se, where se = √( s1²/n1 + s2²/n2 )

Significance Tests Comparing Population Means 4. P-value: Probability obtained from the t distribution 5. Conclusion: Smaller P-values give stronger evidence against H0 and in support of Ha

Example: Does Cell Phone Use While Driving Impair Reaction Times? Experiment: 64 college students 32 were randomly assigned to the cell phone group 32 to the control group

Example: Does Cell Phone Use While Driving Impair Reaction Times? Experiment (continued): Students used a machine that simulated driving situations At irregular periods a target flashed red or green Participants were instructed to press a “brake button” as soon as possible when they detected a red light

Example: Does Cell Phone Use While Driving Impair Reaction Times? For each subject, the experiment analyzed their mean response time over all the trials Averaged over all trials and subjects, the mean response time for the cell-phone group was 585.2 milliseconds The mean response time for the control group was 533.7 milliseconds

Example: Does Cell Phone Use While Driving Impair Reaction Times? Data:

Example: Does Cell Phone Use While Driving Impair Reaction Times? Test the hypotheses: H0: (µ1- µ2) =0 vs. Ha: (µ1- µ2) ≠ 0 using a significance level of 0.05

Example: Does Cell Phone Use While Driving Impair Reaction Times?

Example: Does Cell Phone Use While Driving Impair Reaction Times? Conclusion: The P-value is less than 0.05, so we can reject H0 There is enough evidence to conclude that the population mean response times differ between the cell phone and control groups The sample means suggest that the population mean is higher for the cell phone group

Example: Does Cell Phone Use While Driving Impair Reaction Times? What do the box plots tell us? There is an extreme outlier for the cell phone group It is a good idea to make sure the results of the analysis aren’t affected too strongly by that single observation Delete the extreme outlier and redo the analysis In this example, the t-statistic changes only slightly

Example: Does Cell Phone Use While Driving Impair Reaction Times? Insight: In practice, you should not delete outliers from a data set without sufficient cause (e.g., evidence that the observation was incorrectly recorded) It is, however, a good idea to check the sensitivity of an analysis to an outlier If the results change much, the inference including the outlier is on shaky ground

How much more time do women spend on housework than men? Data are hours per week.
Gender   Sample Size   Mean   St. Dev.
Women    6764          32.6   18.2
Men      4252          18.1   12.9
What is a point estimate of (µ1 – µ2)?
18.2 – 12.9
32.6 – 18.1
6764 – 4252
32.6/18.2 – 18.1/12.9

How much more time do women spend on housework than men? Data are hours per week.
Gender   Sample Size   Mean   St. Dev.
Women    6764          32.6   18.2
Men      4252          18.1   12.9
What is the standard error for comparing the means?
5.3
.076
.297
.088

How much more time do women spend on housework than men? Data are hours per week.
Gender   Sample Size   Mean   St. Dev.
Women    6764          32.6   18.2
Men      4252          18.1   12.9
What factor causes the standard error to be small compared to the sample standard deviations for the two groups?
sample means
sample standard deviations
sample sizes
genders
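The point estimate and standard error asked about above follow directly from the housework summary table:

```python
from math import sqrt

# Housework hours per week, from the summary table
x_w, s_w, n_w = 32.6, 18.2, 6764   # women
x_m, s_m, n_m = 18.1, 12.9, 4252   # men

point_est = x_w - x_m                      # point estimate of mu1 - mu2
se = sqrt(s_w**2 / n_w + s_m**2 / n_m)     # standard error of the difference
print(round(point_est, 1), round(se, 3))  # 14.5 0.297
```

The standard error (about 0.30) is tiny compared with the standard deviations (18.2 and 12.9) because the large sample sizes sit in the denominators.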

Section 9.3 Other Ways of Comparing Means and Comparing Proportions

Alternative Method for Comparing Means An alternative t-method can be used when, under the null hypothesis, it is reasonable to expect the variability as well as the means to be the same This method requires the assumption that the population standard deviations be equal

The Pooled Standard Deviation This alternative method estimates the common value σ of σ1 and σ2 by:
s = √[ ((n1 – 1)s1² + (n2 – 1)s2²)/(n1 + n2 – 2) ]

Comparing Population Means, Assuming Equal Population Standard Deviations Using the pooled standard deviation estimate s, a 95% CI for (µ1 – µ2) is: (x̄1 – x̄2) ± t.025 · s√(1/n1 + 1/n2) This method has df = n1 + n2 – 2

Comparing Population Means, Assuming Equal Population Standard Deviations The test statistic for H0: µ1 = µ2 is: t = (x̄1 – x̄2) / [ s√(1/n1 + 1/n2) ] This method has df = n1 + n2 – 2

Comparing Population Means, Assuming Equal Population Standard Deviations These methods assume: Independent random samples from the two groups An approximately normal population distribution for each group (this is mainly important for small sample sizes, and even then, the CI and the two-sided test are usually robust to violations of this assumption) Equal population standard deviations: σ1 = σ2
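As an illustration of the pooled estimate, the HONC summary statistics from Section 9.2 are reused here (the pooled method would require believing σ1 = σ2 for those groups, so this is purely a numerical example):

```python
from math import sqrt

# HONC summary statistics, reused only to illustrate the pooled formula
s1, n1 = 3.3, 75     # smokers
s2, n2 = 2.3, 257    # ex-smokers

# Pooled estimate of the common standard deviation sigma
df = n1 + n2 - 2
s_pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
print(df, round(s_pooled, 2))  # 330 2.56
```

Note that s_pooled falls between s1 and s2, closer to s2 because that group is larger.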

The Ratio of Proportions: The Relative Risk The ratio of proportions for two groups is: p̂1/p̂2 In medical applications for which the proportion refers to a category that is an undesirable outcome, such as death or having a heart attack, this ratio is called the relative risk
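For the aspirin example, again assuming the published Physicians’ Health Study counts (189 heart attacks among 11,034 on placebo; 104 among 11,037 on aspirin; not shown on the slides), the relative risk is about 1.8:

```python
p1_hat = 189 / 11034   # placebo heart-attack proportion (assumed counts)
p2_hat = 104 / 11037   # aspirin heart-attack proportion

relative_risk = p1_hat / p2_hat
print(round(relative_risk, 2))  # 1.82
```

A relative risk of 1.82 says the placebo group’s heart-attack rate was about 82% higher than the aspirin group’s, a message that is often easier to grasp than the small difference of proportions (about 0.008).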

Section 9.4 How Can We Analyze Dependent Samples?

Dependent Samples Each observation in one sample has a matched observation in the other sample The observations are called matched pairs

Example: Matched Pairs Design for Cell Phones and Driving Study The cell phone analysis presented earlier in this text used independent samples: One group used cell phones A separate control group did not use cell phones

Example: Matched Pairs Design for Cell Phones and Driving Study An alternative design used the same subjects for both groups Reaction times are measured when subjects performed the driving task without using cell phones and then again while using cell phones

Example: Matched Pairs Design for Cell Phones and Driving Study Data:

Example: Matched Pairs Design for Cell Phones and Driving Study Benefits of using dependent samples (matched pairs): Many sources of potential bias are controlled so we can make a more accurate comparison Using matched pairs keeps many other factors fixed that could affect the analysis Often this results in the benefit of smaller standard errors

Example: Matched Pairs Design for Cell Phones and Driving Study To Compare Means with Matched Pairs, Use Paired Differences: For each matched pair, construct a difference score d = (reaction time using cell phone) – (reaction time without cell phone) Calculate the sample mean of these differences: x̄d

For Dependent Samples (Matched Pairs) Mean of Differences = Difference of Means

For Dependent Samples (Matched Pairs) The difference (x̄1 – x̄2) between the means of the two samples equals the mean x̄d of the difference scores for the matched pairs The difference (µ1 – µ2) between the population means is identical to the parameter µd, the population mean of the difference scores

For Dependent Samples (Matched Pairs) Let n denote the number of observations in each sample This equals the number of difference scores The 95% CI for the population mean difference is: x̄d ± t.025 · (sd/√n), where sd is the standard deviation of the difference scores and df = n – 1
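A small sketch with made-up reaction times (the study’s actual data are not reproduced on these slides) confirms that the mean of the differences equals the difference of the means, and computes the paired 95% CI:

```python
from math import sqrt

# Hypothetical matched pairs (milliseconds); not the study's actual data
with_phone    = [636, 623, 615, 672, 601]   # reaction time using cell phone
without_phone = [604, 556, 540, 522, 459]   # same subject, no cell phone

d = [a - b for a, b in zip(with_phone, without_phone)]
n = len(d)
d_bar = sum(d) / n

# Mean of differences equals difference of means
assert abs(d_bar - (sum(with_phone) / n - sum(without_phone) / n)) < 1e-9

# Sample standard deviation of the difference scores
sd = sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))

# 95% CI for mu_d; the t.025 critical value with df = n - 1 = 4 is 2.776
lo, hi = d_bar - 2.776 * sd / sqrt(n), d_bar + 2.776 * sd / sqrt(n)
print(round(d_bar, 1), round(lo, 1), round(hi, 1))
```

With only 5 pairs the interval is wide; the actual study’s 32 pairs give a much narrower interval.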

For Dependent Samples (Matched Pairs) To test the hypothesis H0: µ1 = µ2 of equal means, we can conduct the single-sample test of H0: µd = 0 with the difference scores The test statistic is: t = (x̄d – 0)/(sd/√n)

For Dependent Samples (Matched Pairs) These paired-difference inferences are special cases of single-sample inferences about a population mean so they make the same assumptions

Paired-difference Inferences Assumptions: The sample of difference scores is a random sample from a population of such difference scores The difference scores have a population distribution that is approximately normal This is mainly important for small samples (less than about 30) and for one-sided inferences

Paired-difference Inferences Confidence intervals and two-sided tests are robust: They work quite well even if the normality assumption is violated One-sided tests do not work well when the sample size is small and the distribution of differences is highly skewed

Example: Matched Pairs Analysis for Cell Phones and Driving Study Boxplot of the 32 difference scores

Example: Matched Pairs Analysis for Cell Phones and Driving Study The box plot shows skew to the right for the difference scores Two-sided inference is robust to violations of the assumption of normality The box plot does not show any severe outliers

Example: Matched Pairs Analysis for Cell Phones and Driving Study

Example: Matched Pairs Analysis for Cell Phones and Driving Study Significance test: H0: µd = 0 (and hence equal population means for the two conditions) Ha: µd ≠ 0 Test statistic:

Example: Matched Pairs Analysis for Cell Phones and Driving Study The P-value displayed in the output is 0.000 There is extremely strong evidence that the population mean reaction times are different

Example: Matched Pairs Analysis for Cell Phones and Driving Study 95% CI for µd = (µ1 – µ2): approximately (32, 70) milliseconds

Example: Matched Pairs Analysis for Cell Phones and Driving Study We infer that the population mean when using cell phones is between about 32 and 70 milliseconds higher than when not using cell phones The confidence interval is more informative than the significance test, since it shows just how large the difference is likely to be

How Can We Adjust for Effects of Other Variables? Section 9.5 How Can We Adjust for Effects of Other Variables?

A Practically Significant Difference When we find a practically significant difference between two groups, can we identify a reason for the difference? Warning: An association may be due to a lurking variable not measured in the study

Example: Is TV Watching Associated with Aggressive Behavior? In a previous example, we saw that teenagers who watch more TV have a tendency later in life to commit more aggressive acts Could there be a lurking variable that influences this association?

Example: Is TV Watching Associated with Aggressive Behavior? Perhaps teenagers who watch more TV tend to attain lower educational levels and perhaps lower education tends to be associated with higher levels of aggression

Example: Is TV Watching Associated with Aggressive Behavior? We need to measure potential lurking variables and use them in the statistical analysis If we thought that education was a potential lurking variable, we would want to measure it

Example: Is TV Watching Associated with Aggressive Behavior?

Example: Is TV Watching Associated with Aggressive Behavior? This analysis uses three variables: Response variable: Whether the subject has committed aggressive acts Explanatory variable: Level of TV watching Control variable: Educational level

Control Variable A control variable is a variable that is held constant in a multivariate analysis (more than two variables)

Can An Association Be Explained by a Third Variable? Treat the third variable as a control variable Conduct the ordinary bivariate analysis while holding that control variable constant at fixed values Whatever association occurs then cannot be due to the effect of the control variable

Example: Is TV Watching Associated with Aggressive Behavior? At each educational level, the percentage committing an aggressive act is higher for those who watched more TV For these hypothetical data, the association observed between TV watching and aggressive acts was not due to education