CHAPTER 13 SIMPLE LINEAR REGRESSION
Opening Example
SIMPLE LINEAR REGRESSION MODEL
Simple Regression
Linear Regression
Simple Regression
Definition: A regression model is a mathematical equation that describes the relationship between two or more variables. A simple regression model includes only two variables: one independent and one dependent. The dependent variable is the one being explained, and the independent variable is the one used to explain the variation in the dependent variable.
Linear Regression
Definition: A (simple) regression model that gives a straight-line relationship between two variables is called a linear regression model.
Figure 13.1 Relationship between food expenditure and income. (a) Linear relationship. (b) Nonlinear relationship.
Figure 13.2 Plotting a linear equation.
Figure 13.3 y-intercept and slope of a line.
SIMPLE LINEAR REGRESSION ANALYSIS
Scatter Diagram
Least Squares Line
Interpretation of a and b
Assumptions of the Regression Model
SIMPLE LINEAR REGRESSION ANALYSIS
Definition: In the regression model y = A + Bx + ε, A is called the y-intercept or constant term, B is the slope, and ε is the random error term. The dependent and independent variables are y and x, respectively.
Definition: In the model ŷ = a + bx, a and b, which are calculated using sample data, are called the estimates of A and B, respectively.
Table 13.1 Incomes (in hundreds of dollars) and Food Expenditures of Seven Households
Scatter Diagram
Definition: A plot of paired observations is called a scatter diagram.
Figure 13.4 Scatter diagram.
Figure 13.5 Scatter diagram and straight lines.
Figure 13.6 Regression line and random errors.
Error Sum of Squares (SSE)
The error sum of squares, denoted by SSE, is SSE = Σe² = Σ(y – ŷ)². The values of a and b that give the minimum SSE are called the least squares estimates of A and B, and the regression line obtained with these estimates is called the least squares line.
The Least Squares Line
For the least squares regression line ŷ = a + bx,
b = SSxy / SSxx and a = ȳ – b·x̄
where
SSxy = Σxy – (Σx)(Σy)/n and SSxx = Σx² – (Σx)²/n
and SS stands for "sum of squares." The least squares regression line ŷ = a + bx is also called the regression of y on x.
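As a concrete illustration of these formulas, here is a minimal Python sketch; the function name least_squares is illustrative, not from the textbook.

# A minimal sketch of the least squares formulas above (names are illustrative).
def least_squares(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum_x * sum_y / n
    ss_xx = sum(xi * xi for xi in x) - sum_x ** 2 / n
    b = ss_xy / ss_xx                 # slope: b = SSxy / SSxx
    a = sum_y / n - b * sum_x / n     # intercept: a = ybar - b * xbar
    return a, b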
Example 13-1
Find the least squares regression line for the data on incomes and food expenditures of the seven households given in Table 13.1. Use income as the independent variable and food expenditure as the dependent variable.
Table 13.2
Example 13-1: Solution
Thus, our estimated regression model is ŷ = 1.5050 + .2525x.
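A quick Python check of Example 13-1. Because Table 13.1 itself is not reproduced in this text, the income and food-expenditure values below are assumed stand-ins chosen to be consistent with the reported line ŷ = 1.5050 + .2525x; Python 3.10's statistics.linear_regression confirms the coefficients.

from statistics import linear_regression  # Python 3.10+

# Assumed stand-ins for Table 13.1 (income and food expenditure, hundreds of dollars).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]

slope, intercept = linear_regression(income, food)
# Roughly 1.51 + 0.25x; the slide's a = 1.5050 results from rounding b to .2525 before computing a.
print(f"yhat = {intercept:.4f} + {slope:.4f} x")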
Figure 13.7 Error of prediction.
Interpretation of a and b
Interpretation of a: Consider a household with zero income. Using the estimated regression line obtained in Example 13-1, ŷ = 1.5050 + .2525(0) = $1.5050 hundred. Thus, we can state that a household with no income is expected to spend about $150.50 per month on food. Note, however, that the regression line is valid only for values of x between 33 and 83.
Interpretation of b: The value of b in the regression model gives the change in y (the dependent variable) due to a change of one unit in x (the independent variable). We can state that, on average, a $100 (or $1) increase in the income of a household will increase the food expenditure by $25.25 (or $.2525).
Figure 13.8 Positive and negative linear relationships between x and y.
Case Study 13-1 Regression of Heights and Weights of NBA Players
Assumptions of the Regression Model
Assumption 1: The random error term ε has a mean equal to zero for each x.
Assumption 2: The errors associated with different observations are independent.
Assumption 3: For any given x, the distribution of errors is normal.
Assumption 4: The distribution of population errors for each x has the same (constant) standard deviation, which is denoted σε.
Figure 13.11 (a) Errors for households with an income of $4000 per month. (b) Errors for households with an income of $7500 per month.
Figure 13.12 Distribution of errors around the population regression line.
Figure 13.13 Nonlinear relations between x and y.
STANDARD DEVIATION OF RANDOM ERRORS
Degrees of Freedom for a Simple Linear Regression Model
The degrees of freedom for a simple linear regression model are df = n – 2.
Figure 13.14 Spread of errors for x = 40 and x = 75.
STANDARD DEVIATION OF RANDOM ERRORS
The standard deviation of errors is calculated as
se = √(SSE / (n – 2)) = √((SSyy – b·SSxy) / (n – 2))
where SSyy = Σy² – (Σy)²/n.
Example 13-2
Compute the standard deviation of errors, se, for the data on monthly incomes and food expenditures of the seven households given in Table 13.1.
Table 13.3
Example 13-2: Solution
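A minimal Python sketch of this computation, using the same assumed stand-ins for Table 13.1 as in Example 13-1's sketch; with those values it gives se ≈ 1.59.

from math import sqrt

# Assumed stand-ins for Table 13.1 (see the note in Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
se = sqrt((ss_yy - b * ss_xy) / (n - 2))   # standard deviation of errors, ~1.59
print(round(se, 4))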
COEFFICIENT OF DETERMINATION
Total Sum of Squares (SST)
The total sum of squares, denoted by SST, is calculated as SST = Σ(y – ȳ)² = Σy² – (Σy)²/n. Note that this is the same formula that we used to calculate SSyy.
Figure 13.15 Total errors.
Table 13.4
Figure 13.16 Errors of prediction when the regression model is used.
COEFFICIENT OF DETERMINATION
Regression Sum of Squares (SSR)
The regression sum of squares, denoted by SSR, is SSR = SST – SSE.
The coefficient of determination, denoted by r², represents the proportion of SST that is explained by the use of the regression model. The computational formula for r² is r² = b·SSxy / SSyy, and 0 ≤ r² ≤ 1.
Example 13-3
For the data of Table 13.1 on monthly incomes and food expenditures of seven households, calculate the coefficient of determination.
Example 13-3: Solution
From earlier calculations made in Examples 13-1 and 13-2, we have b = .2525 together with the sums of squares; the computation of r² is reproduced in the sketch below.
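A quick r² check for Example 13-3 under the same assumed Table 13.1 values; it gives r² ≈ .90, i.e., about 90% of the total variation in food expenditure is explained by income.

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
r_squared = b * ss_xy / ss_yy   # ~0.90
print(round(r_squared, 2))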
INFERENCES ABOUT B
Sampling Distribution of b
Estimation of B
Hypothesis Testing About B
Sampling Distribution of b
Mean, Standard Deviation, and Sampling Distribution of b: Because of the assumption of normally distributed random errors, the sampling distribution of b is normal. The mean and standard deviation of b, denoted by μb and σb, respectively, are μb = B and σb = σε / √SSxx.
Estimation of B
Confidence Interval for B
The (1 – α)100% confidence interval for B is given by b ± t·sb, where sb = se / √SSxx and the value of t is obtained from the t distribution table for α/2 area in the right tail of the t distribution and n – 2 degrees of freedom.
Example 13-4
Construct a 95% confidence interval for B for the data on incomes and food expenditures of seven households given in Table 13.1.
Example 13-4: Solution
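A sketch of the 95% confidence interval for B under the assumed Table 13.1 values; the t value 2.571 is the standard table value for .025 in the right tail with 5 df. The interval comes out at roughly .16 to .35.

from math import sqrt

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
se = sqrt((ss_yy - b * ss_xy) / (n - 2))
sb = se / sqrt(ss_xx)                 # estimated standard deviation of b
t = 2.571                             # t table value, .025 right-tail area, df = 5
print(b - t * sb, b + t * sb)         # roughly (.16, .35)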
Hypothesis Testing About B
Test Statistic for b
The value of the test statistic t for b is calculated as t = (b – B) / sb. The value of B is substituted from the null hypothesis.
Example 13-5
Test at the 1% significance level whether the slope of the regression line for the example on incomes and food expenditures of seven households is positive.
Example 13-5: Solution
Step 1: H0: B = 0 (The slope is zero)
H1: B > 0 (The slope is positive)
Step 2: σε is not known. Hence, we will use the t distribution to make the test about B.
Step 3: α = .01. Area in the right tail = α = .01, df = n – 2 = 7 – 2 = 5. The critical value of t is 3.365.
Figure 13.17
Example 13-5: Solution
Step 4: From H0, B = 0. The value of the test statistic is t = (b – B)/sb = 6.662.
Example 13-5: Solution
Step 5: The value of the test statistic, t = 6.662, is greater than the critical value of t = 3.365 and falls in the rejection region. Hence, we reject the null hypothesis and conclude that x (income) determines y (food expenditure) positively.
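The test statistic can be checked the same way under the assumed Table 13.1 values; it comes out near 6.66.

from math import sqrt

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
se = sqrt((ss_yy - b * ss_xy) / (n - 2))
sb = se / sqrt(ss_xx)
t = (b - 0) / sb        # B = 0 under H0; ~6.66
print(round(t, 3))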
LINEAR CORRELATION
Linear Correlation Coefficient
Hypothesis Testing About the Linear Correlation Coefficient
Linear Correlation Coefficient
Value of the Correlation Coefficient: The value of the correlation coefficient always lies in the range –1 to 1; that is, –1 ≤ ρ ≤ 1 and –1 ≤ r ≤ 1.
Figure 13.18 Linear correlation between two variables. (a) Perfect positive linear correlation, r = 1. (b) Perfect negative linear correlation, r = –1. (c) No linear correlation, r ≈ 0.
Figure 13.19 Linear correlation between variables.
Linear Correlation Coefficient
The simple linear correlation coefficient, denoted by r, measures the strength of the linear relationship between two variables for a sample and is calculated as r = SSxy / √(SSxx·SSyy).
Example 13-6
Calculate the correlation coefficient for the example on incomes and food expenditures of seven households.
Example 13-6: Solution
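The correlation coefficient can be checked directly with Python 3.10's statistics.correlation under the same assumed Table 13.1 values; it comes out at about .95.

from statistics import correlation  # Python 3.10+

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
print(round(correlation(income, food), 2))   # ~0.95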
Hypothesis Testing About the Linear Correlation Coefficient
Test Statistic for r: If both variables are normally distributed and the null hypothesis is H0: ρ = 0, then the value of the test statistic t is calculated as t = r·√((n – 2) / (1 – r²)). Here n – 2 are the degrees of freedom.
Example 13-7
Using the 1% level of significance and the data from Example 13-1, test whether the linear correlation coefficient between incomes and food expenditures is positive. Assume that the populations of both variables are normally distributed.
Example 13-7: Solution
Step 1: H0: ρ = 0 (The linear correlation coefficient is zero)
H1: ρ > 0 (The linear correlation coefficient is positive)
Step 2: The population distributions of both variables are normally distributed. Hence, we can use the t distribution to perform this test about the linear correlation coefficient.
Step 3: Area in the right tail = .01, df = n – 2 = 7 – 2 = 5. The critical value of t is 3.365.
Figure 13.20
Example 13-7: Solution
Step 4: The value of the test statistic is t = r·√((n – 2)/(1 – r²)) = 6.803.
Example 13-7: Solution
Step 5: The value of the test statistic, t = 6.803, is greater than the critical value of t = 3.365 and falls in the rejection region. Hence, we reject the null hypothesis and conclude that there is a positive relationship between incomes and food expenditures.
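A quick check of this statistic under the assumed Table 13.1 values. Note that 6.803 results from first rounding r to .95; with an unrounded r the statistic is about 6.66, the same as the slope test in Example 13-5.

from math import sqrt
from statistics import correlation  # Python 3.10+

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]

n = len(income)
r = correlation(income, food)
t = r * sqrt((n - 2) / (1 - r ** 2))   # ~6.66 unrounded; ~6.80 if r is rounded to .95 first
print(round(t, 3))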
REGRESSION ANALYSIS: A COMPLETE EXAMPLE
A random sample of eight drivers insured with a company and having similar auto insurance policies was selected. The following table lists their driving experience (in years) and monthly auto insurance premiums.
Example 13-8
a) Does the insurance premium depend on the driving experience, or does the driving experience depend on the insurance premium? Do you expect a positive or a negative relationship between these two variables?
b) Compute SSxx, SSyy, and SSxy.
c) Find the least squares regression line by choosing appropriate dependent and independent variables based on your answer in part a.
d) Interpret the meaning of the values of a and b calculated in part c.
e) Plot the scatter diagram and the regression line.
f) Calculate r and r² and explain what they mean.
g) Predict the monthly auto insurance premium for a driver with 10 years of driving experience.
h) Compute the standard deviation of errors.
i) Construct a 90% confidence interval for B.
j) Test at the 5% significance level whether B is negative.
k) Using α = .05, test whether ρ is different from zero.
A Python sketch covering parts b, c, f, g, h, i, and the test statistic of part j appears after this list.
Example 13-8: Solution
a) Based on theory and intuition, we expect the insurance premium to depend on driving experience. The insurance premium is the dependent variable, and the driving experience is the independent variable.
Table 13.5
Example 13-8: Solution
d) The value of a gives the value of ŷ for x = 0; that is, it gives the monthly auto insurance premium for a driver with no driving experience. The value of b = –1.55 indicates that, on average, for every extra year of driving experience, the monthly auto insurance premium decreases by $1.55.
Figure 13.21 Scatter diagram and the regression line. The regression line slopes downward from left to right.
Example 13-8: Solution
f) The value of r = –.77 indicates that the driving experience and the monthly auto insurance premium are negatively related. The (linear) relationship is strong but not very strong. The value of r² = .59 states that 59% of the total variation in insurance premiums is explained by years of driving experience and 41% is not.
Example 13-8: Solution
g) Using the estimated regression line, we find that the predicted value of y for x = 10 is ŷ = a + b(10) = $61.18. Thus, we expect the monthly auto insurance premium of a driver with 10 years of driving experience to be $61.18.
Example 13-8: Solution
j) Step 1: H0: B = 0 (B is not negative)
H1: B < 0 (B is negative)
Step 2: Because the standard deviation of errors is not known, we use the t distribution to make the hypothesis test.
Step 3: Area in the left tail = α = .05, df = n – 2 = 8 – 2 = 6. The critical value of t is –1.943.
Figure 13.22
Example 13-8: Solution
Step 4: From H0, B = 0, so the test statistic is t = (b – 0)/sb (about –2.94 with the values assumed in the sketch above).
Example 13-8: Solution
Step 5: The value of the test statistic t falls in the rejection region. Hence, we reject the null hypothesis and conclude that B is negative: the monthly auto insurance premium decreases with an increase in years of driving experience.
Example 13-8: Solution
k) Step 1: H0: ρ = 0 (The linear correlation coefficient is zero)
H1: ρ ≠ 0 (The linear correlation coefficient is different from zero)
Step 2: Assuming that variables x and y are normally distributed, we will use the t distribution to perform this test about the linear correlation coefficient.
Step 3: Area in each tail = .05/2 = .025, df = n – 2 = 8 – 2 = 6. The critical values of t are –2.447 and 2.447.
Figure 13.23
Example 13-8: Solution
Step 4: The test statistic is t = r·√((n – 2)/(1 – r²)) (about –2.95 with the values assumed in the sketch above).
Example 13-8: Solution
Step 5: The value of the test statistic t falls in the rejection region. Hence, we reject the null hypothesis and conclude that the linear correlation coefficient between driving experience and auto insurance premium is different from zero.
USING THE REGRESSION MODEL
Using the Regression Model for Estimating the Mean Value of y
Using the Regression Model for Predicting a Particular Value of y
Figure 13.24 Population and sample regression lines.
Using the Regression Model for Estimating the Mean Value of y
Confidence Interval for μy|x
The (1 – α)100% confidence interval for μy|x for x = x0 is ŷ ± t·sŷ, where the value of t is obtained from the t distribution table for α/2 area in the right tail of the t distribution curve and df = n – 2. The value of sŷ, the standard deviation of ŷ when it estimates the mean value of y, is calculated as follows:
sŷ = se·√(1/n + (x0 – x̄)² / SSxx)
Example 13-9
Refer to Example 13-1 on incomes and food expenditures. Find a 99% confidence interval for the mean food expenditure for all households with a monthly income of $5500.
Example 13-9: Solution
Using the regression line estimated in Example 13-1, we find the point estimate of the mean food expenditure for x = 55: ŷ = 1.5050 + .2525(55) = $15.3925 hundred. Area in each tail = α/2 = (1 – .99)/2 = .005, df = n – 2 = 7 – 2 = 5, t = 4.032.
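A sketch of the interval computation for Example 13-9, using the same assumed Table 13.1 values as the earlier sketches; with them the 99% confidence interval for the mean food expenditure at x = 55 comes out at roughly 12.96 to 17.82 hundred dollars.

from math import sqrt

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n, x0, t = len(income), 55, 4.032   # t: .005 right-tail area, df = 5

xbar = sum(income) / n
ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
a = sum(food) / n - b * xbar
se = sqrt((ss_yy - b * ss_xy) / (n - 2))

y_hat = a + b * x0                                       # ~15.39 hundreds of dollars
s_mean = se * sqrt(1 / n + (x0 - xbar) ** 2 / ss_xx)     # standard deviation of y_hat for the mean
print(y_hat - t * s_mean, y_hat + t * s_mean)            # roughly (12.96, 17.82)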
Using the Regression Model for Predicting a Particular Value of y
Prediction Interval for yp
The (1 – α)100% prediction interval for the predicted value of y, denoted by yp, for x = x0 is ŷ ± t·sŷ, where the value of t is obtained from the t distribution table for α/2 area in the right tail of the t distribution curve and df = n – 2. The value of sŷ, the standard deviation of ŷ when it predicts a single value of y, is calculated as follows:
sŷ = se·√(1 + 1/n + (x0 – x̄)² / SSxx)
Example 13-10
Refer to Example 13-1 on incomes and food expenditures. Find a 99% prediction interval for the predicted food expenditure for a randomly selected household with a monthly income of $5500.
Example 13-10: Solution
Using the regression line estimated in Example 13-1, we find the point estimate of the predicted food expenditure for x = 55: ŷ = 1.5050 + .2525(55) = $15.3925 hundred. Area in each tail = α/2 = (1 – .99)/2 = .005, df = n – 2 = 7 – 2 = 5, t = 4.032.
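A sketch of the prediction interval for Example 13-10, again under the assumed Table 13.1 values; it is much wider than the confidence interval in Example 13-9, roughly 8.52 to 22.27 hundred dollars.

from math import sqrt

# Assumed stand-ins for Table 13.1 (see Example 13-1's sketch).
income = [55, 83, 38, 61, 33, 49, 67]
food = [14, 24, 13, 16, 9, 15, 17]
n, x0, t = len(income), 55, 4.032   # t: .005 right-tail area, df = 5

xbar = sum(income) / n
ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = ss_xy / ss_xx
a = sum(food) / n - b * xbar
se = sqrt((ss_yy - b * ss_xy) / (n - 2))

y_hat = a + b * x0
s_pred = se * sqrt(1 + 1 / n + (x0 - xbar) ** 2 / ss_xx)   # standard deviation for a single predicted y
print(y_hat - t * s_pred, y_hat + t * s_pred)              # roughly (8.52, 22.27)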
CAUTIONS IN USING REGRESSION
Extrapolation: The regression line estimated for the sample data is reliable only for the range of x values observed in the sample.
Causality: The regression line does not prove causality between two variables; that is, it does not establish that a change in y is caused by a change in x.
TI-84
Minitab
Excel