Download presentation
Published byGianna Wellington Modified over 9 years ago
1
Forecasting Using the Simple Linear Regression Model and Correlation
2
What is a forecast? Using a statistical method on past data to predict the future. Using experience, judgment and surveys to predict the future.
3
Why forecast? to enhance planning. to force thinking about the future.
to fit corporate strategy to future conditions. to coordinate departments to the same future. to reduce corporate costs.
4
Kinds of Forecasts Causal forecasts are when changes in a variable (Y) you wish to predict are caused by changes in other variables (X's). Time series forecasts are when changes in a variable (Y) are predicted based on prior values of itself (Y). Regression can provide both kinds of forecasts.
5
Types of Relationships
Positive Linear Relationship Negative Linear Relationship
6
Types of Relationships
(continued) Relationship NOT Linear No Relationship
7
Relationships If the relationship is not linear, the forecaster often has to use math transformations to make the relationship linear.
8
Correlation Analysis Correlation measures the strength of the linear relationship between variables. It can be used to find the best predictor variables. It does not assure that there is a causal relationship between the variables.
9
The Correlation Coefficient
Ranges between -1 and 1. The Closer to -1, The Stronger Is The Negative Linear Relationship. The Closer to 1, The Stronger Is The Positive Linear Relationship. The Closer to 0, The Weaker Is Any Linear Relationship.
10
Graphs of Various Correlation (r) Values
Y Y Y X X X r = -1 r = -.6 r = 0 Y Y X X r = .6 r = 1
11
The Scatter Diagram Plot of all (Xi , Yi) pairs
12
The Scatter Diagram Is used to visualize the relationship and to assess its linearity. The scatter diagram can also be used to identify outliers.
13
Regression Analysis Regression Analysis can be used to model causality and make predictions. Terminology: The variable to be predicted is called the dependent or response variable. The variables used in the prediction model are called independent, explanatory or predictor variables.
14
Simple Linear Regression Model
The relationship between variables is described by a linear function. A change of one variable causes the other variable to change.
15
Population Linear Regression
Population Regression Line Is A Straight Line that Describes The Dependence of One Variable on The Other Population Slope Coefficient Random Error Population Y intercept Dependent (Response) Variable Population Regression Line Independent (Explanatory) Variable
16
How is the best line found?
Y Observed Value = Random Error X Observed Value
17
Sample Linear Regression
Sample Regression Line Provides an Estimate of The Population Regression Line Sample Slope Coefficient Sample Y Intercept Residual Sample Regression Line provides an estimate of provides an estimate of
18
Simple Linear Regression: An Example
Annual Store Square Sales Feet ($1000) , ,681 , ,395 , ,653 , ,543 , ,318 , ,563 , ,760 You wish to examine the relationship between the square footage of produce stores and their annual sales. Sample data for 7 stores were obtained. Find the equation of the straight line that fits the data best
19
The Scatter Diagram Excel Output
20
The Equation for the Regression Line
From Excel Printout:
21
Graph of the Regression Line
Yi = Xi
22
Interpreting the Results
Yi = Xi The slope of means that each increase of one unit in X, we predict the average of Y to increase by an estimated units. The model estimates that for each increase of 1 square foot in the size of the store, the expected annual sales are predicted to increase by $1487.
23
The Coefficient of Determination
SSR regression sum of squares r2 = = SST total sum of squares The Coefficient of Determination (r2 ) measures the proportion of variation in Y explained by the independent variable X.
24
Coefficients of Determination (R2) and Correlation (R)
Y ^ Y = b + b X i 1 i X
25
Coefficients of Determination (R2) and Correlation (R)
(continued) r2 = .81, r = +0.9 Y ^ Y = b + b X i 1 i X
26
Coefficients of Determination (R2) and Correlation (R)
(continued) r2 = 0, r = 0 Y ^ Y = b + b X i 1 i X
27
Coefficients of Determination (R2) and Correlation (R)
(continued) r2 = 1, r = -1 Y ^ Y = b + b X i 1 i X
28
Correlation: The Symbols
Population correlation coefficient (‘rho’) measures the strength between two variables. Sample correlation coefficient r estimates based on a set of sample observations.
29
Example: Produce Stores
From Excel Printout
30
Inferences About the Slope
t Test for a Population Slope Is There A Linear Relationship between X and Y ? Null and Alternative Hypotheses H0: 1 = 0 (No Linear Relationship) H1: 1 0 (Linear Relationship) Test Statistic: Where and df = n - 2
31
Example: Produce Stores
Data for 7 Stores: Estimated Regression Equation: Annual Store Square Sales Feet ($000) , ,681 , ,395 , ,653 , ,543 , ,318 , ,563 , ,760 Yi = Xi The slope of this model is Is Square Footage of the store affecting its Annual Sales?
32
Inferences About the Slope: t Test Example
Test Statistic: Decision: Conclusion: H0: 1 = 0 H1: 1 0 .05 df = 5 Critical value(s): From Excel Printout Reject Reject Reject H0 .025 .025 There is evidence of a linear relationship. t 2.5706
33
Inferences About the Slope Using A Confidence Interval
Confidence Interval Estimate of the Slope b1 tn-2 Excel Printout for Produce Stores At 95% level of Confidence The confidence Interval for the slope is (1.062, 1.911). Does not include 0. Conclusion: There is a significant linear relationship between annual sales and the size of the store.
34
Residual Analysis Is used to evaluate validity of assumptions. Residual analysis uses numerical measures and plots to assure the validity of the assumptions.
35
Linear Regression Assumptions
1. X is linearly related to Y. 2. The variance is constant for each value of Y (Homoscedasticity). 3. The Residual Error is Normally Distributed. 4. If the data is over time, then the errors must be independent.
36
Residual Analysis for Linearity
X X e e X X Not Linear Linear
37
Residual Analysis for Homoscedasticity
X X e e X X Homoscedasticity Heteroscedasticity
38
Residual Analysis for Independence: The Durbin-Watson Statistic
It is used when data is collected over time. It detects autocorrelation; that is, the residuals in one time period are related to residuals in another time period. It measures violation of independence assumption. Calculate D and compare it to the value in Table E.8.
39
Preparing Confidence Intervals for Forecasts
40
Interval Estimates for Different Values of X
Confidence Interval for the mean of Y Confidence Interval for a individual Yi Y Yi = b0 + b1Xi _ X X A Given X
41
Estimation of Predicted Values
Confidence Interval Estimate for YX The Mean of Y given a particular Xi Size of interval vary according to distance away from mean, X. Standard error of the estimate t value from table with df=n-2
42
Estimation of Predicted Values
Confidence Interval Estimate for Individual Response Yi at a Particular Xi Addition of 1 increases width of interval from that for the mean of Y
43
Example: Produce Stores
Data for 7 Stores: Annual Store Square Sales Feet ($000) , ,681 , ,395 , ,653 , ,543 , ,318 , ,563 , ,760 Predict the annual sales for a store with 2000 square feet. Regression Model Obtained: Yi = Xi
44
Estimation of Predicted Values: Example
Confidence Interval Estimate for YX Find the 95% confidence interval for the average annual sales for stores of 2,000 square feet Predicted Sales Yi = Xi = ($000) tn-2 = t5 = X = SYX = = Confidence interval for mean Y
45
Estimation of Predicted Values: Example
Confidence Interval Estimate for Individual Y Find the 95% confidence interval for annual sales of one particular store of 2,000 square feet Predicted Sales Yi = Xi = ($000) tn-2 = t5 = X = SYX = = Confidence interval for individual Y
Similar presentations
© 2024 SlidePlayer.com. Inc.
All rights reserved.