
2 Sales Forecasting using Dynamic Bayesian Networks. Steve Djajasaputra, SNN Nijmegen, The Netherlands

3 Table of Contents
1. Why Sales Forecasting?
2. Method
3. Results & Discussions
4. Conclusions
5. Further Research
6. Acknowledgements

4 1. Why Sales Forecasting? Sales forecasting brings advantages for your business:
– reducing logistics costs
– improving your services: targeted marketing, fewer backorders
But in practice… is this really happening?

5 The answer is… YES! An example of a success story: Bayesian statistical technology for predicting newspaper sales gave 1 to 4% more sales with the same deliveries, and 3 to 12% fewer deliveries to achieve the same total amount of sales.

6 But time-series forecasting is not always easy! So… we search for better forecasting technology:
– aggregation of different groups of products can be helpful
– a clustering methodology for aggregation
– Bayesian methodology: a generative model

7 2. Method
– Dynamic Bayesian Networks
– Forecasting
– The Inputs

8 Dynamic Bayesian Networks. Y is our observation, e.g. sales of different products: beer y1, beer y2, … The x-axis is the time t (e.g. weeks).

9 Dynamic Bayesian Networks. In our model, we assume that our observation Y is generated with this dynamic: y_t = θ_t · x_t + ε_t, where
– X are inputs, for example: sales of beer last week, weather information, prices, day labeling
– θ are hidden variables, which are unobserved/unknown
– ε is noise, ε ~ N(0, σ²)
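As an illustration, a toy simulation of this observation equation might look like the sketch below; the dimensions, inputs, and noise level are made-up values, not taken from the slides.

```python
# Toy simulation of the assumed observation equation
# y_t = theta_t . x_t + eps_t, with eps_t ~ N(0, sigma^2).
# Dimensions, inputs, and sigma are illustration values only.
import numpy as np

rng = np.random.default_rng(42)
T, d = 52, 4                        # weeks, number of inputs
X = rng.normal(size=(T, d))         # stand-in inputs: lagged sales, weather, prices, day labels
theta = rng.normal(size=(T, d))     # hidden, time-varying coefficients
sigma = 0.5

eps = rng.normal(0.0, sigma, size=T)
y = np.einsum("td,td->t", theta, X) + eps   # y_t = theta_t . x_t + eps_t
```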

10 Dynamic Bayesian Networks. Hierarchical model: our hidden variables θ depend on other unobserved/unknown hidden variables M. Several θ from different products share the same M.
– A is a transition matrix for θ
– G is a transition matrix for M
– ε_θ is noise N(0, Σ)
– ε_M is noise N(0, Σ_M)
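A sketch of one plausible reading of this hierarchy, where the shared state follows M_t = G M_{t-1} + noise and each product's θ tracks M; the exact coupling between θ and M is an assumption, not taken from the slides.

```python
# One plausible reading of the hierarchical dynamic:
#   M_t       = G M_{t-1} + noise_M
#   theta_i,t = A theta_i,{t-1} + M_t + noise_theta   (coupling form assumed)
import numpy as np

rng = np.random.default_rng(0)
d, T, n_products = 4, 260, 10       # state dim, weeks, products sharing M

A = 0.9 * np.eye(d)                 # transition matrix for theta
G = 0.95 * np.eye(d)                # transition matrix for M
M = np.zeros((T, d))
theta = np.zeros((n_products, T, d))

for t in range(1, T):
    M[t] = G @ M[t - 1] + rng.normal(0.0, 0.1, d)
    for i in range(n_products):     # several products share the same M
        theta[i, t] = A @ theta[i, t - 1] + M[t] + rng.normal(0.0, 0.1, d)
```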

11 Dynamic Bayesian Networks. Inference & Learning:
– We have Y & X data in our model.
– But we don't know the values of the hidden variables θ, M and their initial values.
– We also don't know the correct values of the parameters Σ, Σ_M, A, G and their initial values.
– We solve these problems in the Bayesian paradigm, using the EM algorithm.
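As a runnable simplification, the pykalman library can play the role of EM plus Kalman smoothing for a plain, non-hierarchical, input-free linear-Gaussian model; the hierarchical M level is omitted, so this is a stand-in for the actual software, not the authors' implementation.

```python
# EM learns the parameters, Kalman smoothing recovers the hidden states,
# for a simplified (non-hierarchical) linear-Gaussian model.
import numpy as np
from pykalman import KalmanFilter

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200)) + rng.normal(0.0, 0.5, size=200)  # toy sales series

kf = KalmanFilter(n_dim_state=2, n_dim_obs=1, em_vars="all")
kf = kf.em(y, n_iter=10)                  # learn transition/observation matrices and noise covariances
state_means, state_covs = kf.smooth(y)    # posterior over the hidden variables
```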

12 Forecasting. Steps:
– Training step: find the model parameters and hidden variables θ_{1:T} given the data from the observation window X_{1:T}, Y_{1:T}, using the EM algorithm and Kalman smoothing.
– Forecasting step: predict θ_{T+h} and Y_{T+h}, where h is the horizon of forecasting.
– Updating step: update the hidden variables θ_{1:T+h} given the real value Y_{T+h}.
– Repeat the forecasting & updating steps above in iterations.
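Schematically, this train/forecast/update cycle with a 1-week horizon might be wired up as below, again with pykalman as a stand-in and synthetic data; filter_update with no observation performs the pure prediction step.

```python
# Schematic train / forecast / update cycle, horizon h = 1:
# train on weeks 1..204, then alternately predict one week ahead
# and fold in the realized value.
import numpy as np
from pykalman import KalmanFilter

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=260))               # toy weekly sales
train, test = y[:204], y[204:]

kf = KalmanFilter(n_dim_state=2, n_dim_obs=1, em_vars="all").em(train, n_iter=10)
means, covs = kf.filter(train)
m, P = means[-1], covs[-1]                        # filtered state at the end of training

preds = []
for y_next in test:
    m_pred, _ = kf.filter_update(m, P, observation=None)   # forecasting step (no data yet)
    preds.append((kf.observation_matrices @ m_pred + kf.observation_offsets).item())
    m, P = kf.filter_update(m, P, observation=y_next)      # updating step (real value arrives)
```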

13 The Inputs. Based on autocorrelation & FFT spectrum analysis, for the inputs X_{i,t} I decided to use:
– seasonality markers
– recent sales (1 week ago)
– last month's sales (4 weeks ago)
We need to keep the number of inputs as small as possible to avoid over-fitting. Since I consider seasonality & recent sales, my model is somewhat comparable with the SC model used by Pim Ouwehand.
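For illustration, the autocorrelation and FFT amplitude spectrum that motivate these lag choices can be computed as follows, with y standing for a weekly sales series.

```python
# Screening candidate lag inputs via autocorrelation and the FFT spectrum.
import numpy as np

def autocorr(y, max_lag=60):
    y = np.asarray(y, dtype=float) - np.mean(y)
    acf = np.correlate(y, y, mode="full")[len(y) - 1:]
    return acf[: max_lag + 1] / acf[0]          # lags 0..max_lag, normalized

def spectrum(y):
    y = np.asarray(y, dtype=float)
    amp = np.abs(np.fft.rfft(y - np.mean(y)))
    freqs = np.fft.rfftfreq(len(y), d=1.0)      # cycles per week
    return freqs, amp

# Peaks at lags 1 and 4 in autocorr(y), or a spectral peak near 1/52
# cycles per week, would support the lag-1, lag-4, and seasonal inputs.
```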

14 3. Results & Discussions
– An Example of a Result
– Residual Analysis
– Nonlinear Transformation
– The Offset Problem
– Removing Outliers
– Our Bayesian Approach vs. Conventional Econometric Methods
– The Need for More Informative Inputs

15 An Example of a Result. The Mean Absolute Deviation (MAD) is 2346 beers. Y-axis: O is the real value, X is the prediction. X-axis: weeks. Training steps: weeks 5..204. Forecasting steps: weeks 205..260, with a 1-week horizon. This result is about in the range of the Winters method used by Pim Ouwehand.
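For reference, MAD here is simply the average absolute error between realized sales and one-step forecasts; y_true and y_pred stand for the week 205..260 actuals and predictions.

```python
# Mean Absolute Deviation between realized sales and forecasts.
import numpy as np

def mad(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```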

16 Residual Analysis. To validate our model: it showed that the residues (errors) are noise, as we assumed. Checks:
– predicted Y vs. error (figure on top)
– error vs. time (figure on bottom)
– autocorrelation and FFT of the error
– cross-correlation of the error vs. the inputs
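A sketch of these checks, returning the error series (for the two plots), its autocorrelation, and its cross-correlation with each input; y, y_hat, and X are placeholders for the actuals, predictions, and input matrix.

```python
# White-noise residues should show near-zero autocorrelation beyond
# lag 0 and near-zero cross-correlation with every input.
import numpy as np

def residual_checks(y, y_hat, X, max_lag=20):
    e = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    e0 = e - e.mean()
    acf = np.correlate(e0, e0, mode="full")[len(e) - 1:][: max_lag + 1]
    acf = acf / acf[0]
    xcorr = [np.corrcoef(e, X[:, j])[0, 1] for j in range(X.shape[1])]
    return e, acf, xcorr
```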

17 Nonlinear Transformation. To make the data more linear & Gaussian, since our model is linear and assumes Gaussian-distributed data. E.g. log, sigmoid.
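Two illustrative transforms; the slide names log and sigmoid but not the exact variants, so these are assumptions.

```python
# Example transforms to make sales data more linear/Gaussian.
import numpy as np

def log_transform(y):
    return np.log1p(np.asarray(y, dtype=float))      # compresses large peaks; needs y >= 0

def sigmoid_transform(y):
    y = np.asarray(y, dtype=float)
    z = (y - y.mean()) / y.std()
    return 1.0 / (1.0 + np.exp(-z))                  # squashes outliers into (0, 1)
```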

18 The Offset Problem. Due to the stationarity assumption, the software gives over- or underestimated forecasts if a trend exists. Solutions:
– removing the trend (e.g. taking differences)
– updating the parameters after the forecasting step
Legend: left, moving-averaged Beer-2 vs. weeks; right, predicted Beer-2 vs. weeks.
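The first solution, differencing, is straightforward; a sketch, including the inverse transform needed to map differenced forecasts back to the sales scale:

```python
# Trend removal by first differencing.
import numpy as np

def difference(y):
    return np.diff(np.asarray(y, dtype=float))       # z_t = y_t - y_(t-1)

def undifference(z, y0):
    return y0 + np.cumsum(z)                         # rebuild y from z and the last known value y0
```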

19 Removing Outliers. The plot (sales of 10 beers, normalized, vs. weeks) shows that the data is very noisy. Most of the outliers are below the mean, perhaps due to out-of-stock problems. Thus it would be helpful if we could get an out-of-stock label as an input to our forecasting model.
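One assumed heuristic for flagging such low outliers: points far below a rolling mean. The window and threshold are arbitrary choices, not taken from the slides.

```python
# Flag suspected out-of-stock weeks as points far below a rolling mean.
import numpy as np

def flag_low_outliers(y, window=8, k=2.0):
    y = np.asarray(y, dtype=float)
    rolling_mean = np.convolve(y, np.ones(window) / window, mode="same")
    resid = y - rolling_mean
    return resid < -k * resid.std()                  # True where sales dip suspiciously low
```

Ideally these weeks would instead carry an explicit out-of-stock label as a model input, as the slide suggests.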

20 Our Bayesian Approach vs. Conventional Econometric Methods. Econometric regression methods (e.g. the Winters method used by Pim Ouwehand) work well to fit the data.

21 However, we don't want to just fit the data. We want to understand the process behind the data that we observe (i.e. the hidden/unobserved variables). We want to have a generative model of the beer buyers. This generative model helps you to understand the hidden process in the market, which is a valuable insight for business decisions, e.g. via simulation.

22 We Need More Informative Inputs

23 4. Conclusions. This preliminary research (using only the sales data, without other informative inputs) showed that the result is about in the range of the Winters method. We need more informative input data for a better model. Manipulating the data (e.g. removing the trend, nonlinear transformation) slightly improves the result, but this is not the main purpose of this research.

24 Conclusions (continued). We are not just fitting the data but constructing a generative model, which is useful for understanding the business process behind the sales. This understanding helps you shape your strategy to achieve more profit.

25 5. Further Research
– Clustering and structural learning
– Non-stationary processes
– Nonlinear models
– Approximations: variational, factorial, Monte Carlo (MCMC)

26 6. Acknowledgements
– Our sponsor: STW
– Tom Heskes (KUN)
– Pim Ouwehand (TUE)
– Bart Bakker (Philips, formerly at KUN)
– Data providers / business partners: Schuitema, Technische Unie, OPG

27 Appendix: Clustering Insights. On the observed data Y.

28 On the hidden variables θ:
– θ1, θ2: seasonality
– θ3: last month
– θ4: last week

