1 Econ 240 C Lecture 14

2 Part I: Exponential Smoothing. Exponential smoothing is a technique that is useful for forecasting short time series where there may not be enough observations to estimate a Box-Jenkins model. Exponential smoothing can be understood from many perspectives; one perspective is a formula that could be calculated by hand.

3 Simple Exponential Smoothing. Simple exponential smoothing, also known as single exponential smoothing, is most appropriate for a time series that is a random walk with a first-order moving average error structure. The levels term, L(t), is a weighted average of the observation lagged one period, y(t-1), and the previous level, L(t-1): L(t) = a*y(t-1) + (1-a)*L(t-1).

4 Single Exponential Smoothing. The parameter a is chosen to minimize the sum of squared errors, where the error is the difference between the observation and the levels term: e(t) = y(t) - L(t). The forecast for period t+1 is given by the formula: L(t+1) = a*y(t) + (1-a)*L(t). Example from John Hanke and Arthur Reitsch, Business Forecasting, 6th Ed.

5 [Table: observations, Sales]

6 Single Exponential Smoothing. For observation #1, set L(1) = Sales(1) = 500 as an initial condition. As a trial value, use a = 0.1. So L(2) = 0.1*Sales(1) + 0.9*L(1) = 0.1*500 + 0.9*500 = 500. And L(3) = 0.1*Sales(2) + 0.9*L(2) = 0.1*350 + 0.9*500 = 485.

7 [Table: observations, Sales, Level]

8 [Table: observations, Sales, Level; a = 0.1]

9 Single Exponential Smoothing. The formula can then be used to calculate the rest of the levels values, observations #4-#24. This can be set up on a spreadsheet.
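The same spreadsheet recursion is easy to script. The sketch below is not from the lecture: the sales values after the first (500) are illustrative placeholders, so substitute the 24 observations from the Hanke and Reitsch example.

```python
# Single exponential smoothing by hand: L(t) = a*y(t-1) + (1-a)*L(t-1)
sales = [500, 350, 250, 400, 450, 350]   # illustrative; replace with the 24 observations

a = 0.1                                  # trial smoothing parameter
levels = [sales[0]]                      # initial condition: L(1) = Sales(1)
for t in range(1, len(sales)):
    levels.append(a * sales[t - 1] + (1 - a) * levels[t - 1])

errors = [y - l for y, l in zip(sales, levels)]     # e(t) = y(t) - L(t)
sse = sum(e ** 2 for e in errors)                   # sum of squared errors
forecast = a * sales[-1] + (1 - a) * levels[-1]     # one-period-ahead forecast
print(f"SSE = {sse:.1f}, forecast = {forecast:.1f}")
```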

10 [Table: observations, Sales, Level; a = 0.1]

11 Single Exponential Smoothing. The forecast for observation #25 is: L(25) = 0.1*Sales(24) + 0.9*L(24). Forecast(25) = Levels(25) = 0.1*Sales(24) + 0.9*449 = 469.1.

13 Single Exponential Smoothing. The errors can now be calculated: e(t) = Sales(t) - Levels(t).

14 [Table: observations, Sales, Level, error; a = 0.1]

15 [Table: observations, Sales, Level, error, error squared; a = 0.1]

16 [Table: observations, Sales, Level, error, error squared, sum of squared residuals; a = 0.1]

17 Single Exponential Smoothing. For a = 0.1, the sum of squared errors is Σe(t)^2 = 582,281.2. A grid search can be conducted for the parameter a, to find the value between 0 and 1 that minimizes the sum of squared errors. The next table shows the calculations of levels, L(t), and errors, e(t) = Sales(t) - L(t), for a = 0.6.

18 [Table: observations, Sales, Levels; a = 0.6]

19 Single Exponential Smoothing. Forecast(25) = Levels(25) = 0.6*Sales(24) + 0.4*Levels(24) = 0.6*Sales(24) + 0.4*465 = 776.

20 [Table: observations, Sales, Levels, error, error squared, sum of squared residuals; a = 0.6]

21 Single Exponential Smoothing. Grid search plot.
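The grid-search plot on this slide can be reproduced with a short script. A sketch, again with an illustrative sales list standing in for the full 24-observation series:

```python
# Grid search: find the smoothing parameter a in (0, 1) that minimizes the SSE.
def sse(a, sales):
    level = sales[0]                          # L(1) = Sales(1)
    total = 0.0
    for t in range(1, len(sales)):
        level = a * sales[t - 1] + (1 - a) * level
        total += (sales[t] - level) ** 2      # e(t)^2 = (y(t) - L(t))^2
    return total

sales = [500, 350, 250, 400, 450, 350]        # illustrative; use the full series
grid = [i / 100 for i in range(1, 100)]       # a = 0.01, 0.02, ..., 0.99
best = min(grid, key=lambda a: sse(a, sales))
print(f"best a = {best:.2f}, SSE = {sse(best, sales):.1f}")
```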

23 Single Exponential Smoothing. EVIEWS: algorithmic search for the smoothing parameter a. In EVIEWS, select the time series sales(t) and open it. In the sales window, go to the PROCS menu and select exponential smoothing. Select single. The best parameter is a = 0.26; EVIEWS reports the sum of squared errors and the root mean square error = (sum of squared errors/24)^1/2. The forecast, or end-of-period levels mean, is 532.4.
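A rough Python analogue of EVIEWS' single smoothing procedure is statsmodels' SimpleExpSmoothing. This is a sketch, assuming sales is a pandas Series of the 24 observations; note that statsmodels dates the smoothed level with the current observation, so its estimated smoothing constant can differ slightly from the hand recursion above.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

sales = pd.Series([500, 350, 250, 400, 450, 350])   # illustrative; use the 24 observations

fit = SimpleExpSmoothing(sales, initialization_method="estimated").fit(optimized=True)
print("estimated smoothing parameter:", fit.params["smoothing_level"])
print("one-period-ahead forecast:", fit.forecast(1).iloc[0])
```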

26 Forecast = L(25) = 0.26*Sales(24) + 0.74*L(24) = 532.4

28 Part II. Three Perspectives on Single Exponential Smoothing. The formula perspective: L(t) = a*y(t-1) + (1 - a)*L(t-1), e(t) = y(t) - L(t). The Box-Jenkins perspective. The updating-forecasts perspective.

29 Box-Jenkins Perspective. Use the error equation to substitute for L(t) in the formula L(t) = a*y(t-1) + (1 - a)*L(t-1). Since L(t) = y(t) - e(t): y(t) - e(t) = a*y(t-1) + (1 - a)*[y(t-1) - e(t-1)], so y(t) = y(t-1) + e(t) - (1-a)*e(t-1), or Δy(t) = y(t) - y(t-1) = e(t) - (1-a)*e(t-1). So y(t) is a random walk plus MA(1) noise, i.e. y(t) is a (0,1,1) process, where (p,d,q) are the orders of AR, differencing, and MA.
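This equivalence can be checked by fitting an ARIMA(0,1,1): from the slide's difference equation the MA coefficient equals -(1-a), so the implied smoothing constant is one plus that coefficient. A sketch, with a placeholder series standing in for the actual data:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

y = pd.Series([500, 350, 250, 400, 450, 350, 200, 300,
               350, 200, 150, 400, 550, 350, 250, 550])   # illustrative values only

res = ARIMA(y, order=(0, 1, 1)).fit()       # (p,d,q) = (0,1,1)
theta = res.params["ma.L1"]                 # Delta y(t) = e(t) + theta*e(t-1)
print("implied smoothing parameter a =", 1 + theta)   # since theta = -(1-a)
```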

30 Box-Jenkins Perspective. In Lab Eight, we will apply simple exponential smoothing to retail sales, a series you used for forecasting trend in Lab 3 and which can be modeled as (0,1,1).

31 Box-Jenkins Perspective. If the smoothing parameter approaches one, then y(t) is a random walk: Δy(t) = y(t) - y(t-1) = e(t) - (1-a)*e(t-1); if a = 1, then Δy(t) = y(t) - y(t-1) = e(t). In Lab Eight, we will use the price of gold, which we used in Lab 4, to make this point.

32 Box-Jenkins Perspective. The levels or forecast, L(t), is a geometric distributed lag of past observations of the series, y(t), hence the name "exponential" smoothing: L(t) = a*y(t-1) + (1 - a)*L(t-1); L(t) = a*y(t-1) + (1 - a)*Z*L(t); L(t) - (1 - a)*Z*L(t) = a*y(t-1); [1 - (1-a)Z]*L(t) = a*y(t-1); L(t) = {1/[1 - (1-a)Z]}*a*y(t-1); L(t) = [1 + (1-a)Z + (1-a)^2*Z^2 + ...]*a*y(t-1); L(t) = a*y(t-1) + (1-a)*a*y(t-2) + (1-a)^2*a*y(t-3) + ...
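The geometric-lag form can be verified numerically: truncating the weighted sum of past observations and adding the weight left on the initial condition reproduces the recursively computed level. A small check with illustrative numbers:

```python
# Check that the recursion and the geometric distributed lag give the same L(t).
a = 0.26
y = [500, 350, 250, 400, 450, 350, 200, 300]   # illustrative values only

level = y[0]                                   # L(1) = y(1)
for s in range(1, len(y)):                     # L(s+1) = a*y(s) + (1-a)*L(s)
    level = a * y[s - 1] + (1 - a) * level

t = len(y)                                     # level now equals L(t)
lag_sum = sum((1 - a) ** j * a * y[t - 2 - j] for j in range(t - 1))
lag_sum += (1 - a) ** (t - 1) * y[0]           # remaining weight on L(1) = y(1)
print(level, lag_sum)                          # identical up to rounding
```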

33 The Updating-Forecasts Perspective. Use the error equation to substitute for y(t-1) in the formula L(t) = a*y(t-1) + (1 - a)*L(t-1). Since y(t) = L(t) + e(t): L(t) = a*[L(t-1) + e(t-1)] + (1 - a)*L(t-1), so L(t) = L(t-1) + a*e(t-1), i.e. the forecast for period t is equal to the forecast for period t-1 plus a fraction a of the forecast error from period t-1.

34 Part III. Double Exponential Smoothing. With double exponential smoothing, one estimates a "trend" term, R(t), as well as a levels term, L(t), so it is possible to forecast, f(t), more than one period out: f(t+k) = L(t) + k*R(t), k >= 1; L(t) = a*y(t) + (1-a)*[L(t-1) + R(t-1)]; R(t) = b*[L(t) - L(t-1)] + (1-b)*R(t-1). So the trend, R(t), is a geometric distributed lag of the change in levels, ΔL(t).
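A minimal sketch of these two recursions, with illustrative data and simple start-up values (level set to the first observation, trend to the first change), using separate parameters a and b:

```python
# Level-and-trend smoothing: forecast f(t+k) = L(t) + k*R(t)
a, b = 0.3, 0.1                                   # illustrative smoothing parameters
y = [500, 350, 250, 400, 450, 350, 200, 300]      # illustrative values only

L, R = y[0], y[1] - y[0]                          # start-up: L(1) = y(1), R(1) = y(2) - y(1)
for t in range(1, len(y)):
    L_prev = L
    L = a * y[t] + (1 - a) * (L + R)              # L(t) = a*y(t) + (1-a)*[L(t-1) + R(t-1)]
    R = b * (L - L_prev) + (1 - b) * R            # R(t) = b*[L(t) - L(t-1)] + (1-b)*R(t-1)

print([round(L + k * R, 1) for k in (1, 2, 3)])   # forecasts one, two, three periods out
```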

35 Part III. Double Exponential Smoothing. If the smoothing parameters a = b, then we have double exponential smoothing. If the smoothing parameters are different, then it is the simplest version of Holt-Winters smoothing.

36 Part III. Double Exponential Smoothing. Holt-Winters can also be used to forecast seasonal time series, e.g. monthly: f(t+k) = L(t) + k*R(t) + S(t+k-12), k >= 1; L(t) = a*[y(t) - S(t-12)] + (1-a)*[L(t-1) + R(t-1)]; R(t) = b*[L(t) - L(t-1)] + (1-b)*R(t-1); S(t) = c*[y(t) - L(t)] + (1-c)*S(t-12).
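For a monthly series, additive Holt-Winters recursions of this form are available in statsmodels. This is a sketch under the assumption that y is a monthly pandas Series with at least two full years of observations; the placeholder series here should be replaced with real data such as retail sales:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Placeholder monthly series: trend plus a seasonal cycle plus noise.
rng = np.random.default_rng(0)
idx = pd.date_range("1970-01", periods=48, freq="MS")
y = pd.Series(100 + np.arange(48) + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
              + rng.normal(0, 2, 48), index=idx)

hw = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12,
                          initialization_method="estimated").fit()
print(hw.forecast(12))   # f(t+k) = L(t) + k*R(t) + S(t+k-12) for k = 1, ..., 12
```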

37 Part IV. Dickey-Fuller Tests: Trend

38 Stochastic Trends: Random Walks with Drift. We have discussed earlier in the course how to model the total return to the Standard and Poor's 500 index. One possibility is that this time series could be a random walk around a deterministic trend: Sp500(t) = exp{a + d*t + WN(t)/[1-Z]}. Taking logarithms,

39 Stochastic Trends: Random Walks with Drift. lnsp500(t) = a + d*t + WN(t)/[1-Z]; lnsp500(t) - a - d*t = WN(t)/[1-Z]. Multiplying through by the difference operator, Δ = [1-Z]: [1-Z][lnsp500(t) - a - d*t] = WN(t); [lnsp500(t) - a - d*t] - [lnsp500(t-1) - a - d*(t-1)] = WN(t); Δlnsp500(t) = d + WN(t).

40 So the fractional change in the total return to the S&P 500 is drift, d, plus white noise. More generally: y(t) = a + d*t + {1/[1-Z]}*WN(t); [y(t) - a - d*t] = {1/[1-Z]}*WN(t); [y(t) - a - d*t] - [y(t-1) - a - d*(t-1)] = WN(t); [y(t) - a - d*t] = [y(t-1) - a - d*(t-1)] + WN(t). Versus the possibility of an AR(1):

41 [y(t) - a - d*t] = b*[y(t-1) - a - d*(t-1)] + WN(t), or y(t) = [a*(1-b) + b*d] + [d*(1-b)]*t + b*y(t-1) + WN(t). Subtracting y(t-1) from both sides: Δy(t) = [a*(1-b) + b*d] + [d*(1-b)]*t + (b-1)*y(t-1) + WN(t). So the coefficient on y(t-1) is once again interpreted as b-1, and we can test the null that this is zero against the alternative that it is significantly negative. Note that we specify the equation with both a constant, [a*(1-b) + b*d], and a trend, [d*(1-b)]*t.
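In practice this is the augmented Dickey-Fuller regression with a constant and a trend. A sketch using statsmodels; the simulated random walk with drift below stands in for the lnsp500 series used in the lecture:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
lnsp500 = np.cumsum(0.01 + 0.04 * rng.standard_normal(500))   # stand-in for log S&P 500

stat, pvalue, lags, nobs, crit, _ = adfuller(lnsp500, regression="ct")  # constant + trend
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}, critical values = {crit}")
# Failing to reject the null (coefficient on y(t-1) equal to zero in the Delta y regression)
# is consistent with a random walk with drift rather than a trend-stationary AR(1).
```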

42 Example: lnsp500(t) from Lab 2

47 Part V. Intervention Analysis

48 Intervention Analysis. The approach to intervention analysis parallels Box-Jenkins in that the actual estimation is conducted after pre-whitening, to the extent that non-stationarity such as trend and seasonality are removed. Example: preview of Lab 8.

49 Telephone Directory Assistance. A telephone company was receiving increased demand for free directory assistance, i.e. subscribers asking operators to look up numbers. This was increasing costs, and the company changed policy, providing a number of free assisted calls to subscribers per month but charging a price per call after that number.

50 Telephone Directory Assistance. This policy change occurred at a known time, March 1974. The time series is calls with directory assistance per month. Did the policy change make a difference?

52 The simple-minded approach

56 Principle. The event may cause a change and affect the time series characteristics. Consequently, consider the pre-event period, January 1962 through February 1974, the event, March 1974, and the post-event period, April 1974 through December 1976. First difference and then seasonally difference the entire series.
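In pandas each of these differences is one line. A sketch; the placeholder series below stands in for the monthly directory-assistance calls:

```python
import numpy as np
import pandas as pd

# Placeholder for monthly directory-assistance calls, January 1962 - December 1976.
rng = np.random.default_rng(0)
idx = pd.date_range("1962-01", periods=180, freq="MS")
assist = pd.Series(300 + np.cumsum(rng.normal(2, 10, 180)), index=idx)

dassist = assist.diff()                # first difference, (1 - Z)
sddassist = dassist.diff(12)           # then seasonal difference, (1 - Z^12)(1 - Z)
pre_event = sddassist.loc[:"1974-02"]  # pre-event sample ends February 1974
print(pre_event.dropna().head())
```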

57 Analysis: Entire Differenced Series

62 Analysis: Pre-Event Differences

66 So, Seasonal Nonstationarity. It was masked in the entire sample by the variance caused by the jump at the event; the seasonality was revealed in the pre-event differenced series.

68 Pre-Event Analysis. The seasonally differenced, differenced series.

73 Pre-Event Box-Jenkins Model. [1 - Z^12][1 - Z]Assist(t) = WN(t) - a*WN(t-12)
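One way to estimate this model on the pre-event sample is statsmodels' SARIMAX, where the regular and seasonal differences and the seasonal MA(1) term are specified through the order arguments. A sketch, assuming the assist Series constructed in the earlier differencing snippet (or the actual Lab 8 data):

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# (p,d,q) x (P,D,Q,s) = (0,1,0) x (0,1,1,12):
# [1 - Z^12][1 - Z] Assist(t) = WN(t) - a*WN(t-12)
pre = assist.loc[:"1974-02"]                 # pre-event sample only
fit = SARIMAX(pre, order=(0, 1, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(fit.params)                            # the seasonal MA coefficient estimates -a
```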

77 Modeling the Event. Step function.

78 Entire Series. Assist and step; dassist and dstep; sddast and sddstep.

82 Model of Series and Event. Pre-Event Model: [1 - Z^12][1 - Z]Assist(t) = WN(t) - a*WN(t-12). In Levels Plus Event: Assist(t) = [WN(t) - a*WN(t-12)]/{[1 - Z][1 - Z^12]} + (-b)*step. Estimate: [1 - Z^12][1 - Z]Assist(t) = WN(t) - a*WN(t-12) + (-b)*[1 - Z^12][1 - Z]*step.
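In Python this can be estimated as a regression with SARIMA errors: put the step variable in as an exogenous regressor in SARIMAX, which corresponds to the levels-plus-event equation above. A sketch, again assuming the assist Series from the earlier snippets; the coefficient on step estimates the permanent shift (-b) at March 1974:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# step(t) = 0 before March 1974 and 1 from March 1974 onward
step = pd.Series((assist.index >= "1974-03-01").astype(float), index=assist.index)

# Assist(t) = beta*step(t) + [WN(t) - a*WN(t-12)] / {[1 - Z][1 - Z^12]}
res = SARIMAX(assist, exog=step, order=(0, 1, 0),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(res.params)    # the coefficient on the step variable is the estimated policy effect
```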

85 Policy Change Effect. Simple: a decrease of 387 (thousand) calls per month. Intervention model: a decrease of 397 with a standard error of 22.
