
Chapter 2: Forecasting. McGraw-Hill/Irwin. Copyright © 2005 by The McGraw-Hill Companies, Inc. All rights reserved.


1 Chapter 2: Forecasting. McGraw-Hill/Irwin. Copyright © 2005 by The McGraw-Hill Companies, Inc. All rights reserved.

2 Introduction to Forecasting
- What is forecasting? Its primary function is to predict the future.
- Why are we interested? Forecasts affect the decisions we make today.
- Examples: who uses forecasting in their jobs?
  - Forecasting demand for products and services
  - Forecasting availability of manpower
  - Forecasting inventory and material needs daily

3 Characteristics of Forecasts
- They are usually wrong!
- A good forecast is more than a single number: report a mean and standard deviation, or a range (high and low).
- Aggregate forecasts are usually more accurate.
- Accuracy erodes as we go further into the future.
- Forecasts should not be used to the exclusion of known information.

4 What Makes a Good Forecast
- It should be timely.
- It should be as accurate as possible.
- It should be reliable.
- It should be in meaningful units.
- It should be presented in writing.
- The method should be easy to use and understand, and easy to update as new data become available.

5 Forecast Horizons in Operation Planning (Figure 2.1)

6 Forecasting Methods
- Subjective forecasting methods: measure individual or group opinion.
- Objective forecasting methods: forecasts made from past history and a model.
  - Time series models use only the past history of the series itself and are easy to incorporate into computer models.
  - Regression uses a causal model to predict the variable of interest from other variables.

7 Subjective Forecasting Methods
- Sales force composites: aggregation of sales personnel estimates.
- Customer surveys: to understand future trends and changes in customer views.
- Jury of executive opinion: useful when, for example, there is a new product.
- The Delphi method: individual opinions are compiled and reconsidered; repeat until an overall group consensus is (hopefully) reached.

8 Objective Forecasting Methods
Two primary methods: causal models and time series methods.

Causal models: let Y be the quantity to be forecasted and X_1, X_2, ..., X_n be n variables that have predictive power for Y. A causal model is Y = f(X_1, X_2, ..., X_n): "Y depends on X_1, X_2, ..., X_n." A typical relationship is a linear one:

Y = a_0 + a_1 X_1 + ... + a_n X_n.

9 Time Series Methods
A time series is just a collection of past values of the variable being predicted; methods based only on past values are also known as naïve methods. The goal is to isolate patterns in the past data (see the figures on the following pages):
- Trend
- Seasonality
- Cycles
- Randomness

10 (Figure slide: examples of time series patterns.)

11 Notation Conventions
Let D_1, D_2, ..., D_n, ... be the past values of the series to be predicted (demand). If we are making a forecast in period t, assume we have observed D_t, D_{t-1}, etc. Let F_{t, t+τ} be the forecast made in period t for the demand in period t + τ, where τ = 1, 2, 3, .... Then F_{t-1, t} is the forecast made in t-1 for t, and F_{t, t+1} is the forecast made in t for t+1 (one step ahead). Use the shorthand notation F_t = F_{t-1, t}.

12 Evaluation of Forecasts
The forecast error in period t, e_t, is the difference between the forecast for demand in period t and the actual value of demand in t.
For a multiple-step-ahead forecast: e_t = F_{t-τ, t} - D_t.
For a one-step-ahead forecast: e_t = F_t - D_t.
Given the forecast errors e_1, e_2, ..., e_n over n periods:
- Mean Absolute Deviation: MAD = (1/n) Σ |e_i|
- Mean Squared Error: MSE = (1/n) Σ e_i²
- Mean Absolute Percentage Error: MAPE = [(1/n) Σ |e_i / D_i|] × 100

13 Biases in Forecasts
- A bias occurs when the average value of the forecast errors tends to be positive or negative (i.e., a systematic deviation from the truth).
- Mathematically, an unbiased forecast is one in which E(e_i) = 0, so that Σ e_i ≈ 0 over many periods. See Figure 2.3 (next slide).

14 Forecast Errors Over Time (Figure 2.3)

15 Ex. 2.1

Week   F1   D1   |E1|  |E1/D1|    F2   D2   |E2|  |E2/D2|
  1    92   88    4    0.0455     96   91    5    0.0549
  2    87   88    1    0.0114     89   89    0    0.0000
  3    95   97    2    0.0206     92   90    2    0.0222
  4    90   83    7    0.0843     93   90    3    0.0333
  5    88   91    3    0.0330     90   86    4    0.0465
  6    93   93    0    0.0000     85   89    4    0.0449

16 Evaluating Forecasts
- MAD1 = 17/6 = 2.83 (better)
- MAD2 = 18/6 = 3.00
- MSE1 = 79/6 = 13.17
- MSE2 = 70/6 = 11.67 (better)
- MAPE1 = 0.0325 (better)
- MAPE2 = 0.0333
No single measure tells the whole story by itself!
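The MAD, MSE, and MAPE values above can be reproduced directly from the Ex. 2.1 table. Below is a minimal Python sketch (the helper name is our own); it reports MAPE as a fraction, matching the slide's 0.0325 convention rather than the ×100 percentage form.

```python
def error_metrics(forecasts, demands):
    """Return (MAD, MSE, MAPE) for one-step-ahead forecasts, with e_t = F_t - D_t."""
    errors = [f - d for f, d in zip(forecasts, demands)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mape = sum(abs(e / d) for e, d in zip(errors, demands)) / n  # as a fraction
    return mad, mse, mape

# Method 1 and Method 2 forecasts and demands from the Ex. 2.1 table.
F1, D1 = [92, 87, 95, 90, 88, 93], [88, 88, 97, 83, 91, 93]
F2, D2 = [96, 89, 92, 93, 90, 85], [91, 89, 90, 90, 86, 89]

mad1, mse1, mape1 = error_metrics(F1, D1)
mad2, mse2, mape2 = error_metrics(F2, D2)
```

Running this shows the point of the slide: method 1 wins on MAD and MAPE while method 2 wins on MSE, so no single measure settles the comparison.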

17 Forecasting for Stationary Series
A stationary time series has the form D_t = μ + ε_t, where μ is a constant (the mean of the series) and ε_t is a random variable with mean 0 and variance σ². Two common methods for forecasting stationary series are moving averages and exponential smoothing.

18 Moving Averages
In words: the arithmetic average of the N most recent observations. For a one-step-ahead forecast:
F_t = (1/N)(D_{t-1} + D_{t-2} + ... + D_{t-N})
Here N is the parameter: how many past periods we average over. What is the effect of choosing different N?

19 Moving Average Lags a Trend (Figure 2.4)

20 Summary of Moving Averages
- Advantages of the moving average method:
  - Easily understood
  - Easily computed
  - Provides stable forecasts
- Disadvantages of the moving average method:
  - Requires saving the N most recent data points
  - Lags behind a trend
  - Ignores complex relationships in the data

21 Simple Moving Average - Example
Consider the following data. Starting from the 4th period one can begin forecasting with MA(3); likewise, MA(5) forecasts begin in the 6th period. Actual versus predicted (forecast) values are graphed on the next slide.

Period  Actual  MA3   MA5
   1      42
   2      40
   3      43
   4      40    41.7
   5      41    41.0
   6      39    41.3  41.2
   7      46    40.0  40.6
   8      44    42.0  41.8
   9      45    43.0  42.0
  10      38    45.0  43.0
  11      40    42.3  42.4
  12            41.0  42.6

22 Simple Moving Average - Example (Figure: actual demand versus the MA3 and MA5 forecasts)
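The MA3 and MA5 columns of the example can be reproduced with a short moving-average routine. A minimal sketch (the function name is our own):

```python
def moving_average_forecasts(demand, N):
    """One-step-ahead MA(N) forecasts: F_t = (1/N)(D_{t-1} + ... + D_{t-N}).
    Returns forecasts for periods N+1 through len(demand)+1 (1-indexed)."""
    return [sum(demand[t - N:t]) / N for t in range(N, len(demand) + 1)]

demand = [42, 40, 43, 40, 41, 39, 46, 44, 45, 38, 40]  # periods 1..11 from the table
ma3 = moving_average_forecasts(demand, 3)  # forecasts for periods 4..12
ma5 = moving_average_forecasts(demand, 5)  # forecasts for periods 6..12
```

For example, the first MA3 forecast is (42 + 40 + 43)/3 = 41.7, matching the table's period-4 entry.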

23 Exponential Smoothing Method
A type of weighted moving average that applies declining weights to past data. Equivalently:
1. New forecast = α(most recent observation) + (1 - α)(last forecast), or
2. New forecast = last forecast - α(last forecast error),
where 0 < α ≤ 1, and α is generally small for stability of forecasts (around 0.1 to 0.2).

24 Exponential Smoothing (cont.)
In symbols:
F_{t+1} = α D_t + (1 - α) F_t
        = α D_t + (1 - α)(α D_{t-1} + (1 - α) F_{t-1})
        = α D_t + α(1 - α) D_{t-1} + α(1 - α)² D_{t-2} + ...
Hence the method applies a set of exponentially declining weights to past data. It is easy to show that the sum of the weights is exactly one.
(Equivalently, F_{t+1} = F_t - α(F_t - D_t).)
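The recursion F_{t+1} = αD_t + (1 - α)F_t is easy to run in code. A minimal sketch, using the D1 demand series from Ex. 2.1 and assuming (our choice, for illustration) an initial forecast of 92 for period 1:

```python
def exponential_smoothing(demand, alpha, f0):
    """One-step-ahead ES forecasts: f[i] is the forecast for period i+1, f[0] = f0."""
    forecasts = [f0]
    for d in demand[:-1]:
        # F_{t+1} = alpha*D_t + (1-alpha)*F_t; equivalently F_t - alpha*(F_t - D_t)
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
    return forecasts

f = exponential_smoothing([88, 88, 97, 83, 91, 93], alpha=0.1, f0=92)
```

With α = 0.1 the second forecast is 0.1(88) + 0.9(92) = 91.6, and the error-correction form 92 - 0.1(92 - 88) gives the same number.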

25 Weights in Exponential Smoothing (Fig. 2-5, for α = 0.1)

26 Comparison of ES and MA
- Similarities:
  - Both methods are appropriate for stationary series.
  - Both methods depend on a single parameter.
  - Both methods lag behind a trend.
  - One can achieve the same distribution of forecast error by setting α = 2/(N + 1). (How so?)
- Differences:
  - ES carries all past history; MA eliminates "bad" data after N periods.
  - MA requires all N past data points, while ES only requires the last forecast and the last observation.

27 Exponential Smoothing for Different Values of Alpha
So what happens when alpha is changed?

28 Example of Exponential Smoothing

29 Picking a Smoothing Constant
(Figure: actual demand versus ES forecasts with α = 0.1 and α = 0.4)
Lower values of α are preferred when the underlying series is stable, and higher values when it is susceptible to change. Note that if α is low, the next forecast depends heavily on the previous forecasts, so feedback from new observations is weaker.

30 Using Regression for Time Series Forecasting
- Regression methods can be used when a trend is present.
- Model: D_t = a + bt.
- If t is scaled to 1, 2, 3, ..., n, then the least squares estimates for a and b can be computed as follows:
  - Set S_xx = n²(n+1)(2n+1)/6 - [n(n+1)/2]²
  - Set S_xy = n Σ i D_i - [n(n+1)/2] Σ D_i
  - Let b = S_xy / S_xx and a = D̄ - b(n+1)/2, where D̄ is the sample mean of the D_i.
- These values of a and b provide the "best" fit of the data in a least squares sense.
- A little calculus: take the partial derivatives of the sum of squared errors, set them equal to 0, and solve for a and b!
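The closed-form estimates above translate directly into code. A minimal sketch; the demand series in the usage line is made up so the data are exactly linear (a = 10, b = 2):

```python
def trend_fit(demand):
    """Least-squares fit of D_t = a + b*t for t = 1..n, using the closed forms
    S_xx = n^2(n+1)(2n+1)/6 - [n(n+1)/2]^2 and S_xy = n*sum(i*D_i) - [n(n+1)/2]*sum(D_i)."""
    n = len(demand)
    Sxx = n**2 * (n + 1) * (2 * n + 1) / 6 - (n * (n + 1) / 2) ** 2
    Sxy = n * sum(i * d for i, d in enumerate(demand, start=1)) \
          - (n * (n + 1) / 2) * sum(demand)
    b = Sxy / Sxx
    a = sum(demand) / n - b * (n + 1) / 2  # D-bar minus b*(n+1)/2
    return a, b

a, b = trend_fit([12, 14, 16, 18, 20])  # exactly linear data: expect a = 10, b = 2
```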

31 1-31 An Example of a Regression Line

32 Other Methods When Trend Is Present
Double exponential smoothing, of which Holt's method is one example, can also be used to forecast when there is a linear trend present in the data. The method requires separate smoothing constants for the slope and the intercept: it is composed of two separate exponential smoothing equations, one for the level (intercept) and one for the slope of the trend.

33 Holt's Double Exponential Smoothing
- S_t = α D_t + (1 - α)(S_{t-1} + G_{t-1})
  - The new value of the intercept (level) at time t is a weighted average of the last demand observation and the last demand forecast.
- G_t = β(S_t - S_{t-1}) + (1 - β) G_{t-1}
  - The value of the slope at time t is a weighted average of the last observed slope change and the last slope estimate.
- The τ-step-ahead forecast is then F_{t, t+τ} = S_t + τ G_t.
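The two Holt updates can be sketched as a short loop. The demand series and initial values (S_0 = 10, G_0 = 2) below are illustrative assumptions, chosen so the data are exactly linear; in that case the updates track the line perfectly for any α and β:

```python
def holt(demand, alpha, beta, s0, g0):
    """Run Holt's updates over the demand series; return the final level S and slope G."""
    s, g = s0, g0
    for d in demand:
        s_prev = s
        s = alpha * d + (1 - alpha) * (s + g)          # level update
        g = beta * (s - s_prev) + (1 - beta) * g       # slope update
    return s, g

s, g = holt([12, 14, 16, 18], alpha=0.3, beta=0.2, s0=10, g0=2)
forecast_2_ahead = s + 2 * g  # F_{t, t+tau} = S_t + tau * G_t, with tau = 2
```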

34 Forecasting for Seasonal Series
Seasonality corresponds to a pattern in the data that repeats at regular intervals (see figure on next slide). Multiplicative seasonal factors: c_1, c_2, ..., c_N, where i = 1 is the first period of the season, i = 2 is the second period of the season, etc. The factors satisfy Σ c_i = N.
- c_i = 1.25 implies demand 25% higher than the baseline on average.
- c_i = 0.75 implies demand 25% lower than the baseline on average.

35 A Seasonal Demand Series

36 Quick and Dirty Method of Estimating Seasonal Factors
- Compute the sample mean of the entire data set (there should be at least several seasons of data).
- Divide each observation by the sample mean. (This gives a factor for each observation.)
- Average the factors for like periods in a season. The resulting N numbers will add exactly to N and correspond to the N seasonal factors.
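A minimal sketch of the three steps, using a made-up demand series covering two seasons of N = 4 periods:

```python
def seasonal_factors(demand, N):
    """Quick-and-dirty factors: divide by the overall mean, then average like periods."""
    mean = sum(demand) / len(demand)
    ratios = [d / mean for d in demand]
    seasons = len(demand) // N  # assumes whole seasons of data
    return [sum(ratios[i + s * N] for s in range(seasons)) / seasons
            for i in range(N)]

c = seasonal_factors([10, 20, 30, 40, 12, 22, 28, 38], N=4)  # two seasons, N = 4
```

As the slide claims, the resulting factors sum exactly to N (here, 4).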

37 Deseasonalizing a Series
To remove seasonality from a series, simply divide each observation in the series by the appropriate seasonal factor. The resulting series will have no seasonality and may then be predicted using an appropriate method. Once a forecast is made on the deseasonalized series, one then multiplies that forecast by the appropriate seasonal factor to obtain a forecast for the original series.
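A minimal sketch of the deseasonalize / forecast / reseasonalize cycle. The factors (N = 2, summing to N) and demand series are made up, and a simple mean stands in for the forecast on the deseasonalized series:

```python
factors = [0.5, 1.5]        # illustrative seasonal factors, N = 2
demand = [10, 30, 12, 36]   # two seasons of observed demand

# Step 1: divide each observation by its seasonal factor.
deseasonalized = [d / factors[i % 2] for i, d in enumerate(demand)]

# Step 2: forecast the deseasonalized series (here, just its mean).
base = sum(deseasonalized) / len(deseasonalized)

# Step 3: multiply by the seasonal factor of the period being forecast.
next_forecast = base * factors[len(demand) % 2]
```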

38 Seasonal Series with Increasing Trend (Fig. 2-10)

39 Winters' Method for Seasonal Problems
- A triple exponential smoothing method with three smoothing equations:
  - for the base signal (intercept) of the series, with constant α
  - for the trend, with constant β
  - for the seasonal factors, with constant γ
- S_t = α(D_t / c_{t-N}) + (1 - α)(S_{t-1} + G_{t-1})
- G_t = β(S_t - S_{t-1}) + (1 - β) G_{t-1}
- c_t = γ(D_t / S_t) + (1 - γ) c_{t-N}
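One period's updates follow directly from the three equations. All values in the usage line below are illustrative assumptions, chosen so demand exactly matches the model (D_t = c_{t-N}(S_{t-1} + G_{t-1})); in that case the updates leave level, slope, and seasonal factor unchanged:

```python
def winters_step(d, s_prev, g_prev, c_old, alpha, beta, gamma):
    """Apply one period of Winters' updates; c_old is the factor from N periods ago."""
    s = alpha * (d / c_old) + (1 - alpha) * (s_prev + g_prev)  # level
    g = beta * (s - s_prev) + (1 - beta) * g_prev              # trend
    c = gamma * (d / s) + (1 - gamma) * c_old                  # seasonal factor
    return s, g, c

s, g, c = winters_step(1.2 * 105, s_prev=100, g_prev=5, c_old=1.2,
                       alpha=0.2, beta=0.1, gamma=0.05)
```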

40 Initialization for Winters' Method

41 Practical Considerations
- Overly sophisticated forecasting methods can be problematic, especially for long-term forecasting. (Refer to the figure on the next slide.)
- Tracking signals may be useful for indicating forecast bias.
- Box-Jenkins methods require substantial data history, use the correlation structure of the data, and can provide significantly improved forecasts under some circumstances.

42 The Difficulty with Long-Term Forecasts

43 Tracking the Mean When Lost Sales are Present (Fig. 2-13)

44 Tracking the Standard Deviation When Lost Sales are Present (Fig. 2-14)

45 Case Study: Sport Obermeyer Saves Money Using Sophisticated Forecasting Methods
Problem: the company had to commit at least half of its production based on forecasts, which were often very wrong. The standard jury-of-executive-opinion method of forecasting was replaced by a type of Delphi method that could itself predict forecast accuracy from the dispersion of the forecasts received. The firm could then commit early to items whose forecasts were more likely to be accurate and hold off on items whose forecasts were probably off. Use of early information from retailers improved forecasting on difficult items. Consensus forecasting in this case was not the best method.

