CHAPTER 2: Forecasting
McGraw-Hill/Irwin. Copyright © 2009 by The McGraw-Hill Companies, Inc. All rights reserved.
Introduction to Forecasting
- What is forecasting? Its primary function is to predict the future.
- Why are we interested? Because forecasts affect the decisions we make today.
- Examples: who uses forecasting in their jobs?
  - forecasting demand for products and services
  - forecasting availability of manpower
  - forecasting inventory and material needs daily
Characteristics of Forecasts
- They are usually wrong!
- A good forecast is more than a single number: report a mean and standard deviation, or a range (high and low).
- Aggregate forecasts are usually more accurate.
- Accuracy erodes as we go further into the future.
- Forecasts should not be used to the exclusion of known information.
What Makes a Good Forecast
- It should be timely.
- It should be as accurate as possible.
- It should be reliable.
- It should be in meaningful units.
- It should be presented in writing.
- The method should be easy to use and understand in most cases.
Forecast Horizons in Operations Planning (Figure 2.1)
Subjective Forecasting Methods
- Sales Force Composites: aggregation of sales personnel estimates.
- Customer Surveys
- Jury of Executive Opinion
- The Delphi Method: individual opinions are compiled and reconsidered, repeating until an overall group consensus is (hopefully) reached.
Objective Forecasting Methods
Two primary methods: causal models and time series methods.
Causal Models: Let Y be the quantity to be forecasted and X_1, X_2, ..., X_n be n variables that have predictive power for Y. A causal model is Y = f(X_1, X_2, ..., X_n). A typical relationship is a linear one:
Y = a_0 + a_1 X_1 + ... + a_n X_n.
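To make the linear causal model concrete, here is a minimal sketch that fits the coefficients by ordinary least squares with NumPy; the data and the two explanatory variables are hypothetical.

```python
import numpy as np

# Hypothetical data: forecast Y from two explanatory variables X1 and X2.
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([0.5, 0.3, 0.8, 0.2, 0.9])
Y = np.array([12.1, 14.2, 17.0, 17.9, 21.3])

# Design matrix [1, X1, X2]; solve for (a0, a1, a2) minimizing squared error,
# i.e., the linear causal model Y = a0 + a1*X1 + a2*X2.
A = np.column_stack([np.ones_like(X1), X1, X2])
(a0, a1, a2), *_ = np.linalg.lstsq(A, Y, rcond=None)
print(f"Y = {a0:.2f} + {a1:.2f} X1 + {a2:.2f} X2")
```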
Time Series Methods
A time series is just a collection of past values of the variable being predicted; these methods are also known as naïve methods. The goal is to isolate patterns in past data (see the figures on the following pages):
- Trend
- Seasonality
- Cycles
- Randomness
Figure 2.2
Notation Conventions
Let D_1, D_2, ..., D_n, ... be the past values of the series to be predicted (demand). If we are making a forecast in period t, assume we have observed D_t, D_{t-1}, etc. Let F_{t, t+τ} be the forecast made in period t for the demand in period t+τ, where τ = 1, 2, 3, ... Then F_{t-1, t} is the forecast made in t-1 for t, and F_{t, t+1} is the forecast made in t for t+1 (one step ahead). Use the shorthand notation F_t = F_{t-1, t}.
Evaluation of Forecasts
The forecast error in period t, e_t, is the difference between the forecast for demand in period t and the actual value of demand in t.
For a multiple-step-ahead forecast: e_t = F_{t-τ, t} - D_t.
For a one-step-ahead forecast: e_t = F_t - D_t.
MAD = (1/n) Σ |e_i|
MSE = (1/n) Σ e_i²
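A minimal sketch of computing these error measures, using hypothetical forecasts and demands:

```python
import numpy as np

# Hypothetical one-step-ahead forecasts and the demands actually observed.
forecasts = np.array([102.0, 98.0, 105.0, 110.0, 107.0])
demands = np.array([100.0, 101.0, 103.0, 108.0, 112.0])

errors = forecasts - demands      # e_t = F_t - D_t
mad = np.mean(np.abs(errors))     # MAD = (1/n) * sum(|e_i|)
mse = np.mean(errors ** 2)        # MSE = (1/n) * sum(e_i^2)
print(f"MAD = {mad:.2f}, MSE = {mse:.2f}, mean error = {errors.mean():.2f}")
```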
Biases in Forecasts
- A bias occurs when the average value of a forecast error tends to be positive or negative.
- Mathematically, an unbiased forecast is one in which E(e_i) = 0. See Figure 2.3 on page 64 in the text (next slide).
Forecast Errors Over Time (Figure 2.3)
Forecasting for Stationary Series
A stationary time series has the form D_t = μ + ε_t, where μ is a constant and ε_t is a random variable with mean 0 and variance σ². Two common methods for forecasting stationary series are moving averages and exponential smoothing.
Moving Averages
In words: the arithmetic average of the N most recent observations. For a one-step-ahead forecast:
F_t = (1/N)(D_{t-1} + D_{t-2} + ... + D_{t-N})
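As a sketch, a one-step-ahead moving-average forecast is just the mean of the last N observations; the demand history here is hypothetical.

```python
import numpy as np

def moving_average_forecast(demand, N):
    """One-step-ahead MA(N) forecast: the average of the N most recent demands."""
    return np.mean(demand[-N:])

# Hypothetical demand history; forecast the next period with N = 3.
demand = [200, 250, 175, 186, 225, 285, 305, 190]
print(moving_average_forecast(demand, N=3))  # (285 + 305 + 190) / 3 = 260.0
```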
Summary of Moving Averages
- Advantages of the moving average method:
  - Easily understood
  - Easily computed
  - Provides stable forecasts
- Disadvantages of the moving average method:
  - Requires saving all N past data points
  - Lags behind a trend
  - Ignores complex relationships in the data
Moving Average Lags a Trend (Figure 2.4)
Exponential Smoothing Method
A type of weighted moving average that applies declining weights to past data.
1. New forecast = α(most recent observation) + (1 - α)(last forecast), or
2. New forecast = last forecast - α(last forecast error),
where 0 < α ≤ 1, and generally α is small for stability of forecasts (α around 0.1 to 0.2).
Exponential Smoothing (cont.)
In symbols:
F_{t+1} = α D_t + (1 - α) F_t
        = α D_t + (1 - α)(α D_{t-1} + (1 - α) F_{t-1})
        = α D_t + α(1 - α) D_{t-1} + α(1 - α)² D_{t-2} + ...
Hence the method applies a set of exponentially declining weights to past data. It is easy to show that the sum of the weights is exactly one. (Equivalently, F_{t+1} = F_t - α(F_t - D_t).)
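A minimal sketch of one-step-ahead exponential smoothing; the demand series is hypothetical, and the forecast is initialized to the first observation, which is one common convention.

```python
def exponential_smoothing_forecast(demand, alpha=0.2):
    """One-step-ahead ES: F_{t+1} = alpha * D_t + (1 - alpha) * F_t."""
    forecast = demand[0]  # initialize with the first observation (a common choice)
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

# Hypothetical demand history; a small alpha (0.1 to 0.2) keeps forecasts stable.
demand = [200, 250, 175, 186, 225, 285, 305, 190]
print(round(exponential_smoothing_forecast(demand, alpha=0.2), 1))
```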
Weights in Exponential Smoothing
Comparison of ES and MA
- Similarities:
  - Both methods are appropriate for stationary series.
  - Both methods depend on a single parameter.
  - Both methods lag behind a trend.
  - One can achieve the same distribution of forecast error by setting α = 2/(N + 1).
- Differences:
  - ES carries all past history; MA eliminates "bad" data after N periods.
  - MA requires storing all N past data points, while ES only requires the last forecast and the last observation.
Using Regression for Time Series Forecasting
- Regression methods can be used when a trend is present.
- Model: D_t = a + bt.
- If t is scaled to 1, 2, 3, ..., n, then the least squares estimates for a and b can be computed as follows:
  S_xx = n²(n+1)(2n+1)/6 - [n(n+1)/2]²
  S_xy = n Σ i·D_i - [n(n+1)/2] Σ D_i
  b = S_xy / S_xx and a = D̄ - b(n+1)/2
- These values of a and b provide the "best" fit of the data in a least squares sense.
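The closed-form estimates above translate directly to code; this sketch uses a hypothetical trending demand series.

```python
import numpy as np

def fit_linear_trend(demand):
    """Least squares fit of D_t = a + b*t with t = 1, 2, ..., n."""
    n = len(demand)
    t = np.arange(1, n + 1)
    d = np.asarray(demand, dtype=float)
    sxx = n**2 * (n + 1) * (2 * n + 1) / 6 - (n * (n + 1) / 2) ** 2
    sxy = n * np.sum(t * d) - (n * (n + 1) / 2) * np.sum(d)
    b = sxy / sxx
    a = d.mean() - b * (n + 1) / 2
    return a, b

# Hypothetical demand with an upward trend; extrapolate one period ahead.
demand = [52, 55, 59, 64, 66, 71]
a, b = fit_linear_trend(demand)
print(f"D_t = {a:.2f} + {b:.2f} t; forecast for t = 7: {a + b * 7:.1f}")
```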
Other Methods When Trend Is Present
Double exponential smoothing, of which Holt's method is only one example, can also be used to forecast when a linear trend is present in the data. The method requires separate smoothing constants for the slope and the intercept.
Forecasting for Seasonal Series
Seasonality corresponds to a pattern in the data that repeats at regular intervals (see figure on the next slide). Multiplicative seasonal factors: c_1, c_2, ..., c_N, where i = 1 is the first period of the season, i = 2 is the second period of the season, etc. The factors satisfy Σ c_i = N.
- c_i = 1.25 implies 25% higher than the baseline on average.
- c_i = 0.75 implies 25% lower than the baseline on average.
Figure 2.8
Quick and Dirty Method of Estimating Seasonal Factors
- Compute the sample mean of the entire data set (there should be at least several seasons of data).
- Divide each observation by the sample mean. (This gives a factor for each observation.)
- Average the factors for like periods in a season.
The resulting N numbers will exactly add to N and correspond to the N seasonal factors. A sketch of this procedure follows.
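Here is a minimal sketch of the procedure, assuming the data cover whole seasons; the two years of quarterly demand are hypothetical.

```python
import numpy as np

def seasonal_factors(demand, N):
    """Quick-and-dirty multiplicative seasonal factors for season length N."""
    d = np.asarray(demand, dtype=float)
    ratios = d / d.mean()  # divide each observation by the sample mean
    # Average the factors for like periods; assumes len(demand) is a multiple of N.
    return ratios.reshape(-1, N).mean(axis=0)

# Hypothetical two years of quarterly demand (N = 4).
demand = [10, 20, 26, 17, 12, 23, 30, 22]
print(seasonal_factors(demand, N=4))  # the four factors sum exactly to 4
```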
Deseasonalizing a Series
- To remove seasonality from a series, simply divide each observation in the series by the appropriate seasonal factor. The resulting series will have no seasonality and may then be predicted using an appropriate method.
- Once a forecast is made on the deseasonalized series, multiply that forecast by the appropriate seasonal factor to obtain a forecast for the original series.
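Continuing the sketch above (reusing `seasonal_factors` and the hypothetical quarterly data), deseasonalize, forecast the smoothed series, then reseasonalize:

```python
import numpy as np

factors = seasonal_factors(demand, N=4)  # from the sketch above
deseasonalized = [d / factors[i % 4] for i, d in enumerate(demand)]

# Forecast the deseasonalized series with any stationary-series method;
# a simple moving average of the last four periods is used here.
base_forecast = np.mean(deseasonalized[-4:])
next_position = len(demand) % 4          # position of the next period in its season
print(base_forecast * factors[next_position])  # forecast in original units
```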
Box-Jenkins Models
- Recommended when at least 72 data points of past history are available.
- Primary feature: exploits the structure of the autocorrelation function of the time series.
- Autocorrelation coefficient of lag k:
  r_k = Σ_{t=k+1..n} (D_t - D̄)(D_{t-k} - D̄) / Σ_{t=1..n} (D_t - D̄)²
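A minimal sketch of the sample autocorrelation coefficient; the short series is hypothetical.

```python
import numpy as np

def autocorrelation(series, k):
    """Sample autocorrelation coefficient of lag k."""
    d = np.asarray(series, dtype=float)
    dev = d - d.mean()
    return np.sum(dev[k:] * dev[:-k]) / np.sum(dev**2) if k > 0 else 1.0

# Hypothetical series: the alternating pattern gives a negative lag-1 value.
series = [10, 14, 9, 15, 8, 16, 9, 14]
print(round(autocorrelation(series, 1), 2))
```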
Stationary Time Series
Box-Jenkins models can only be constructed for stationary series, that is, series that exhibit no trend, seasonality, growth, etc. If the series is represented by D_1, D_2, ..., then this translates to the assumptions that E(D_i) = μ and Var(D_i) = σ², independent of i. Later we will show how differencing can convert many non-stationary series to stationary series.
The Autoregressive Process
An AR(p) process has the form
D_t = a_0 + a_1 D_{t-1} + a_2 D_{t-2} + ... + a_p D_{t-p} + ε_t.
Interpret a_1, ..., a_p as the linear regression coefficients and ε_t as the error term. Simpler and more common is the AR(1) process, given by:
D_t = a_0 + a_1 D_{t-1} + ε_t.
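To see AR(1) behavior concretely, this sketch simulates the process with hypothetical parameters and checks the lag-1 autocorrelation with the `autocorrelation` helper from the previous sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(a0, a1, sigma, n):
    """Simulate n values of D_t = a0 + a1 * D_{t-1} + eps_t."""
    d = np.zeros(n)
    d[0] = a0 / (1 - a1)  # start at the process mean
    for t in range(1, n):
        d[t] = a0 + a1 * d[t - 1] + rng.normal(0.0, sigma)
    return d

# Hypothetical parameters; requires |a1| < 1 for stationarity.
series = simulate_ar1(a0=5.0, a1=0.7, sigma=1.0, n=500)
print(round(autocorrelation(series, 1), 2))  # should be close to a1 = 0.7
```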
Theoretical Autocorrelation Function of the AR(1) Process
For a stationary AR(1) process, the autocorrelation function decays geometrically: ρ_k = a_1^k for k = 0, 1, 2, ...
The Moving Average Process
An MA(q) process has the form
D_t = μ + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2} - ... - θ_q ε_{t-q}.
Note that the weights θ_i are shown with negative signs by convention. It can be shown that an AR(1) process is equivalent to an MA(∞) process. The MA(1) model is powerful because its autocorrelation function, which has a non-zero value only at lag 1, is often observed in practice.
Typical Realizations of the MA(1) Process, with negative and positive one-period autocorrelations.
ARMA Models
An ARMA model is one that includes both AR terms and MA terms. For example, the ARMA(1,1) model is:
D_t = a_0 + a_1 D_{t-1} + ε_t - θ_1 ε_{t-1}.
By combining AR and MA terms into a single model, we are able to capture complex relationships in the data with a parsimonious model (i.e., one with as few terms as possible).
ARIMA Models
The "I" in ARIMA stands for integrated, which means applying an ARMA model to a differenced process. Differencing can convert a non-stationary time series into a stationary time series under some circumstances. One order of differencing eliminates a linear trend, and two orders of differencing eliminate a quadratic trend. First differencing is denoted:
∇D_t = D_t - D_{t-1}.
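As an illustration (not part of the slides), one widely used implementation is the ARIMA class in statsmodels; the sketch below assumes statsmodels is installed and fits a hypothetical random walk with drift.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical non-stationary series: a random walk with drift.
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.5, 1.0, size=120))

# order=(1, 1, 1): AR(1) and MA(1) terms applied to the first-differenced series.
fitted = ARIMA(series, order=(1, 1, 1)).fit()
print(fitted.forecast(steps=3))  # three-step-ahead forecasts
```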
Practical Considerations in Forecasting
- Overly sophisticated forecasting methods can be problematic, especially for long-term forecasting. (Refer to the figure on the next slide.)
- Tracking signals may be useful for indicating forecast bias (a sketch follows Figure 2.12 below).
- Box-Jenkins methods require substantial data history, use the correlation structure of the data, and can provide significantly improved forecasts under some circumstances.
Figure 2.12
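The slides mention tracking signals without defining them; one common definition (an assumption here, not taken from the slides) is the cumulative forecast error divided by the running MAD, with values far from zero suggesting bias.

```python
import numpy as np

def tracking_signal(forecasts, demands):
    """Running tracking signal: cumulative error over the running MAD."""
    errors = np.asarray(forecasts, dtype=float) - np.asarray(demands, dtype=float)
    t = np.arange(1, len(errors) + 1)
    running_mad = np.abs(errors).cumsum() / t
    return errors.cumsum() / running_mad

# Forecasts persistently above demand drive the signal steadily upward.
print(tracking_signal([105, 106, 107, 108], [100, 101, 102, 103]))  # [1. 2. 3. 4.]
```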
Case Study: Sport Obermeyer Saves Money Using Sophisticated Forecasting Methods
Problem: the company had to commit at least half of its production based on forecasts, which were often very wrong. The standard jury-of-executive-opinion method of forecasting was replaced by a type of Delphi method that could itself predict forecast accuracy from the dispersion among the forecasts received. The firm could then commit early to items whose forecasts were more likely to be accurate and hold off on items whose forecasts were probably off. Use of early information from retailers improved forecasting of difficult items. Consensus forecasting in this case was not the best method.