1
Forecasting
2
Model with indicator variables
3
Forecasting Models
The choice of a forecasting technique depends on the components identified in the time series. The techniques discussed next are:
– Seasonal indexes
– Exponential smoothing
– Autoregressive models (a brief discussion)
4
Forecasting with Seasonal Indexes
Linear regression and seasonal indexes are combined to forecast a time series composed of trend and seasonality.
The model: F_t = (b_0 + b_1·t) × SI_t, where b_0 + b_1·t is the linear trend value for period t (obtained from the linear regression) and SI_t is the seasonal index for period t.
5
The procedure:
– Use simple linear regression to find the trend line.
– Use the trend line to calculate the seasonal indexes.
– To calculate F_t, multiply the trend value for period t by the seasonal index for period t.
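The procedure above can be sketched in Python. This is a minimal illustration with hypothetical function names and a quarterly (period = 4) multiplicative seasonal model; a real analysis would use a statistics package:

```python
# Seasonal-index forecasting sketch: fit a linear trend by least squares,
# estimate each season's index as the average ratio of actual to trend,
# then forecast F_t = trend_t * seasonal_index_t.

def fit_trend(y):
    """Simple linear regression of y on t = 1..n; returns (b0, b1)."""
    n = len(y)
    t = list(range(1, n + 1))
    t_bar = sum(t) / n
    y_bar = sum(y) / n
    b1 = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, y))
          / sum((ti - t_bar) ** 2 for ti in t))
    b0 = y_bar - b1 * t_bar
    return b0, b1

def seasonal_indexes(y, period=4):
    """Average the actual-to-trend ratios within each season."""
    b0, b1 = fit_trend(y)
    ratios = [[] for _ in range(period)]
    for i, yi in enumerate(y):
        trend = b0 + b1 * (i + 1)
        ratios[i % period].append(yi / trend)
    return b0, b1, [sum(r) / len(r) for r in ratios]

def seasonal_forecast(y, t, period=4):
    """F_t = (trend value for period t) * (seasonal index for period t)."""
    b0, b1, si = seasonal_indexes(y, period)
    return (b0 + b1 * t) * si[(t - 1) % period]
```

With a purely linear series such as `[2, 4, 6, 8, 10, 12, 14, 16]`, the seasonal indexes all come out to 1 and the forecast reduces to the trend value.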
6
Forecasting with Exponential Smoothing
The exponential smoothing model can be used to produce forecasts when the time series:
– exhibits a gradual (not a sharp) trend
– has no cyclical effects
– has no seasonal effects
7
The forecast for period t+k is computed by F_{t+k} = S_t, where t is the current period and S_t = αy_t + (1 − α)S_{t−1}, with smoothing constant 0 < α < 1.
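The recursion above can be sketched directly; this is a minimal illustration (function names are mine, and S_1 is initialized to y_1, one common convention):

```python
# Simple exponential smoothing: S_t = alpha*y_t + (1 - alpha)*S_{t-1}.
# The forecast for every future period t+k is the last smoothed value S_t.

def exponential_smoothing(y, alpha):
    """Return the smoothed series S_1..S_n, with S_1 = y_1."""
    s = [y[0]]
    for yt in y[1:]:
        s.append(alpha * yt + (1 - alpha) * s[-1])
    return s

def smoothing_forecast(y, alpha):
    """F_{t+k} = S_t for any horizon k >= 1 (a flat forecast)."""
    return exponential_smoothing(y, alpha)[-1]
```

Note that the forecast is the same for every horizon k, which is why the method suits series without strong trend or seasonality.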
8
Autoregressive Models
Autocorrelation among the errors of the regression model provides an opportunity to produce accurate forecasts. In a stationary time series (no trend and no seasonality), correlation between consecutive residuals leads to the following autoregressive model:
y_t = β_0 + β_1·y_{t−1} + ε_t
9
The estimated model has the form ŷ_t = b_0 + b_1·y_{t−1}. If y_t is the last observation, the forecasts for the next two periods are computed by F_{t+1} = b_0 + b_1·y_t and F_{t+2} = b_0 + b_1·F_{t+1}.
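Fitting the AR(1) model and chaining its forecasts can be sketched as follows (a minimal illustration: the model is estimated by ordinary least squares on the lagged pairs, and the function names are mine):

```python
# First-order autoregressive model: regress y_t on y_{t-1} by least
# squares, then forecast recursively, feeding each forecast back in.

def fit_ar1(y):
    """Least-squares fit of y_t = b0 + b1*y_{t-1}; returns (b0, b1)."""
    x = y[:-1]   # lagged values y_{t-1}
    z = y[1:]    # current values y_t
    n = len(x)
    x_bar = sum(x) / n
    z_bar = sum(z) / n
    b1 = (sum((xi - x_bar) * (zi - z_bar) for xi, zi in zip(x, z))
          / sum((xi - x_bar) ** 2 for xi in x))
    b0 = z_bar - b1 * x_bar
    return b0, b1

def ar1_forecasts(y, steps=2):
    """F_{t+1} = b0 + b1*y_t, F_{t+2} = b0 + b1*F_{t+1}, and so on."""
    b0, b1 = fit_ar1(y)
    forecasts, last = [], y[-1]
    for _ in range(steps):
        last = b0 + b1 * last
        forecasts.append(last)
    return forecasts
```

On a series that follows the recursion exactly (for example y_t = 1 + 0.5·y_{t−1}), the fit recovers b_0 = 1 and b_1 = 0.5 and the chained forecasts continue the pattern.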
10
There are many forecasting models available. Which model performs better?
[Chart: the actual series plotted against forecasts from Model 1 (*) and Model 2 (o)]
11
A forecasting method can be selected by evaluating its forecast accuracy using the actual time series. The two most commonly used measures of forecast accuracy are:
– Mean Absolute Deviation: MAD = Σ|y_t − F_t| / n
– Sum of Squares for Forecast Error: SSE = Σ(y_t − F_t)²
12
Measures of Forecast Accuracy
Choose SSE if it is important to avoid even a few large errors; otherwise, use MAD.
A useful procedure for model selection:
– Use some of the observations to develop several competing forecasting models.
– Run the models on the rest of the observations.
– Calculate the accuracy of each model.
– Select the model with the best accuracy measure.
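The two accuracy measures are one-liners; this minimal sketch scores a model's forecasts against the held-out actual values:

```python
# MAD = mean absolute deviation of forecast errors;
# SSE = sum of squared forecast errors (penalizes large errors more).

def mad(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def sse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast))
```

Because SSE squares each error, one large miss dominates the total, whereas MAD treats all errors proportionally; this is why SSE is preferred when even a few large errors must be avoided.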
14
Selecting a Forecasting Model
Quarterly data from 2001 through quarter two of 2005 were used to develop three forecasting models for energy consumption. Use MAD and SSE to determine which model performed best for quarter three 2005 through quarter two 2006.
15
For model 1: the actual value y_t at t = 19 is compared with the forecast for t = 19 (Q3, 2005).
16
Summary of results
17
Regression Diagnostics
The three conditions required for the validity of the regression analysis are:
– The error variable is normally distributed.
– The error variance is constant for all values of x.
– The errors are independent of each other.
How can we diagnose violations of these conditions?
18
Positive First Order Autocorrelation
Positive first order autocorrelation occurs when consecutive residuals tend to be similar.
[Plot: residuals versus time, showing long runs of same-signed residuals around zero]
19
Negative First Order Autocorrelation
Negative first order autocorrelation occurs when consecutive residuals tend to differ markedly.
[Plot: residuals versus time, alternating above and below zero]
20
Durbin–Watson Test: Are the Errors Autocorrelated?
This test detects first order autocorrelation between consecutive residuals in a time series. If autocorrelation exists, the error variables are not independent.
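The Durbin–Watson statistic itself is a simple ratio over the residuals e_1..e_n, d = Σ_{t=2..n}(e_t − e_{t−1})² / Σ_{t=1..n}e_t². A minimal sketch:

```python
# Durbin-Watson statistic: values near 2 suggest no first-order
# autocorrelation; d < d_L indicates positive autocorrelation,
# d > 4 - d_L indicates negative autocorrelation (d_L from the table).

def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den
```

Identical residuals give d = 0 (extreme positive autocorrelation), while residuals that alternate in sign push d toward 4.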
21
Testing the Existence of Autocorrelation: Example
Example 18.3. How does the weather affect the sales of lift tickets at a ski resort?
– Data on the past 20 years of ticket sales, along with the total snowfall and the average temperature during Christmas week in each year, were collected.
– The model hypothesized was TICKETS = β_0 + β_1·SNOWFALL + β_2·TEMPERATURE + ε.
– Regression analysis yielded the following results:
23
Diagnostics: The Error Distribution
The histogram of the errors suggests that the errors may be normally distributed.
24
Diagnostics: Heteroscedasticity
Plotting the residuals against the predicted values of y, there appears to be no problem of heteroscedasticity (the error variance seems to be constant).
25
Diagnostics: First Order Autocorrelation
[Plot: residuals e_t over time t]
The residuals plotted over time show a clear pattern: the errors are not independent.
26
Diagnostics: First Order Autocorrelation
Test for positive first order autocorrelation with n = 20 and k = 2. From the Durbin–Watson table, d_L = 1.10 and d_U = 1.54; the statistic is d = 0.5931.
Conclusion: Because d < d_L, there is sufficient evidence to infer that positive first order autocorrelation exists.
27
The Modified Model: Time Included
The modified regression model is TICKETS = β_0 + β_1·SNOWFALL + β_2·TEMPERATURE + β_3·TIME + ε.
All the required conditions are met for this model. The fit of this model is high (R² = 0.7410), and the model is valid (significance F = 0.0001). SNOWFALL and TIME are linearly related to ticket sales; TEMPERATURE is not.