#1 EC 485: Time Series Analysis in a Nutshell

#2 Data Preparation:
1) Plot the data and examine for stationarity.
2) Examine the ACF for stationarity.
3) If not stationary, take first differences.
4) If the variance appears non-constant, take the logarithm before first differencing.
5) Examine the ACF after these transformations to determine whether the series is now stationary.

Model Identification and Estimation:
1) Examine the ACF and PACF of your (now) stationary series to get some ideas about which ARIMA(p,d,q) models to estimate.
2) Estimate these models.
3) Examine the parameter estimates, the SBC statistic, and the test of white noise for the residuals.

Forecasting:
1) Use the best model to construct forecasts.
2) Graph your forecasts against actual values.
3) Calculate the Mean Squared Error for the forecasts.

#3 Data Preparation:
1) Plot the data and examine. Do a visual inspection to determine whether your series is non-stationary.
2) Examine the Autocorrelation Function (ACF) for stationarity. The ACF of a non-stationary series shows large autocorrelations that diminish only very slowly at large lags. (At this stage you can ignore the partial autocorrelations, and you can always ignore what SAS calls the inverse autocorrelations.)
3) If not stationary, take first differences. SAS will do this automatically in the IDENTIFY VAR=y(1) statement, where the variable to be "identified" is y and the 1 refers to first differencing.
4) If the variance appears non-constant, take the logarithm before first differencing. You would take the log before the IDENTIFY statement: ly = log(y); PROC ARIMA; IDENTIFY VAR=ly(1); (see the fuller sketch after this list).
5) Examine the ACF after these transformations to determine whether the series is now stationary.
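The snippet in step 4 omits the surrounding DATA step; a minimal sketch of the full transformation, assuming the raw series is a variable y in a dataset work.mydata (dataset and variable names are illustrative):

   DATA work.prep;
      SET work.mydata;
      ly = log(y);         /* log first, to stabilize a non-constant variance */
   RUN;

   PROC ARIMA DATA=work.prep;
      IDENTIFY VAR=ly(1);  /* the (1) asks SAS to first-difference the logged series */
   RUN;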

#4 In this presentation, a variable measuring capacity utilization for the U.S. economy is modeled. The data are monthly, from 1967:1 to 2004:03. The series will be used as an example of how to carry out the three steps outlined on the previous slide. We will remove the last 6 observations (2003:10 – 2004:03) so that we can construct out-of-sample forecasts and compare the models' ability to forecast.
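Holding out those six observations can be done with a subsetting IF before estimation; a sketch, assuming the full series sits in work.capacity with a SAS date variable named date (names are illustrative):

   DATA work.est;                  /* estimation sample: 1967:1 - 2003:9 */
      SET work.capacity;
      IF date <= '30SEP2003'd;     /* hold out 2003:10 - 2004:03 for forecast comparison */
   RUN;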

#5 [Figure: Capacity Utilization, 1967:1 – 2004:03 (in levels)] This plot of the raw data indicates non-stationarity, although there does not appear to be a strong trend.

#6 This plot of the ACF clearly indicates a non-stationary series: the autocorrelations diminish only very slowly. The ACF plot is produced by SAS using the code:

   PROC ARIMA;
      IDENTIFY VAR=cu;

SAS will also produce an inverse autocorrelation plot, which you can ignore, and a partial autocorrelation plot, which we will use in the modeling stage.

#7 [Figure: First differences of Capacity Utilization, 1967:1 – 2004:03] This graph of the first differences appears stationary.

#8 This ACF was produced in SAS using the code:

   PROC ARIMA;
      IDENTIFY VAR=cu(1);
   RUN;

where the (1) tells SAS to use first differences. This ACF shows the autocorrelations diminishing fairly quickly, so we decide that the first difference of the capacity utilization rate is stationary.

#9 In addition to the autocorrelation function (ACF) and partial autocorrelation function (PACF), SAS will print out an autocorrelation check for white noise. Specifically, it prints the Ljung-Box statistics, labeled Chi-Square below, and their p-values. If the p-values are very small, as they are below, we can reject the null hypothesis that all of the autocorrelations up to the stated lag are jointly zero. For example, for our capacity utilization data (first differences):

   H0: ρ1 = ρ2 = ρ3 = ρ4 = ρ5 = ρ6 = 0  (the data series is white noise)
   H1: at least one is non-zero

   χ² = … with a p-value of less than …  ⇒ easily reject H0

A check for white noise on your stationary series is important because, if your series is white noise, there is nothing to model and thus no point in carrying out any estimation or forecasting. We see here that the first difference of capacity utilization is not white noise, so we proceed to the modeling and estimation stage. Note: we can ignore the autocorrelation check for the data before differencing because that series is non-stationary.
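For reference, the Ljung-Box statistic that SAS labels Chi-Square is the standard formula (added here; it is not shown on the slide):

   Q = T(T+2) · Σ_{k=1..m} ρ̂_k² / (T − k)

where T is the sample size and ρ̂_k is the lag-k sample autocorrelation. Under the white-noise null, Q is approximately chi-square with m degrees of freedom; when applied to residuals of an estimated ARMA model, the degrees of freedom shrink by the number of estimated parameters, which is why the first residual check on slide #18 has 0 d.o.f.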

#10 Model Identification and Estimation:
1) Examine the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) of your (now) stationary series to get some ideas about which ARIMA(p,d,q) models to estimate.

The "d" in ARIMA(p,d,q) stands for the number of times the data have been differenced to render them stationary. This was already determined in the previous section.

The "p" in ARIMA(p,d,q) measures the order of the autoregressive component. To get an idea of which orders to consider, examine the partial autocorrelation function. If the time series has an autoregressive order of 1, called AR(1), then we should see only the first partial autocorrelation coefficient as significant. If it has an AR(2), then we should see only the first and second partial autocorrelation coefficients as significant. (Note that they could be positive and/or negative; what matters is the statistical significance.) Generally, the PACF will have significant correlations up to lag p and will quickly drop to near-zero values after lag p.

#11 Here is the partial autocorrelation function (PACF) for the first-differenced capacity utilization series. Notice that the first two (maybe three) partial autocorrelations are statistically significant. This suggests an AR(2) or AR(3) model. There is a statistically significant autocorrelation at lag 24 (not printed here), but this can be ignored: remember that 5% of the time we can get an autocorrelation that is more than 2 standard deviations from zero when in fact the true one is zero.

#12 Model Identification and Estimation (cont'd):
The "q" measures the order of the moving-average component. To get an idea of which orders to consider, we examine the autocorrelation function. If the time series is a moving average of order 1, called an MA(1), we should see only one significant autocorrelation coefficient, at lag 1. This is because an MA(1) process has a memory of only one period. If the time series is an MA(2), we should see only two significant autocorrelation coefficients, at lags 1 and 2, because an MA(2) process has a memory of only two periods. Generally, for a time series that is an MA(q), the autocorrelation function will have significant correlations up to lag q and will quickly drop to near-zero values after lag q.

For the capacity utilization time series, we see that the ACF decays, but only for the first 4 lags; then it appears to drop off to zero abruptly. Therefore, an MA(4) might be considered. Our initial guess is ARIMA(2,1,4), where the 1 tells us that the data have been first-differenced to render them stationary.
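The identification rules of the last three slides are often summarized in a table like this (standard Box-Jenkins guidance, added here for reference):

   Process      ACF                        PACF
   AR(p)        tails off gradually        cuts off after lag p
   MA(q)        cuts off after lag q       tails off gradually
   ARMA(p,q)    tails off gradually        tails off gradually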

#13 2) Estimate the models. Estimating the model in SAS is fairly straightforward: go back to PROC ARIMA and add ESTIMATE statements. Here we will estimate four models: ARIMA(1,1,0), ARIMA(1,1,1), ARIMA(2,1,0), and ARIMA(2,1,4). Although we believe the last of these will be the best, it is instructive also to estimate a simple AR(1) on our differenced series (the ARIMA(1,1,0)), a model with an AR(1) and an MA(1) on the differenced series (the ARIMA(1,1,1)), and a model with only an AR(2) term (the ARIMA(2,1,0)).

   PROC ARIMA;
      IDENTIFY VAR=cu(1);     /* tells SAS that d=1 for all models */
      ESTIMATE p = 1;         /* estimates an ARIMA(1,1,0) */
      ESTIMATE p = 1 q = 1;   /* estimates an ARIMA(1,1,1) */
      ESTIMATE p = 2;         /* estimates an ARIMA(2,1,0) */
      ESTIMATE p = 2 q = 4;   /* estimates an ARIMA(2,1,4) */
   RUN;

#14 3) Examine the parameter estimates, the SBC statistic, and the test of white noise for the residuals. On the next few slides you will see the results of estimating the 4 models discussed in the previous section. We are looking at the statistical significance of the parameter estimates. We also want to compare measures of overall fit; we will use the SBC statistic. It is based on the sum of squared residuals from estimating the model, and it balances the reduction in degrees of freedom against the reduction in the sum of squared residuals from adding more variables (lags of the time series). The lower the sum of squared residuals, the better the model. SAS calculates the SBC as

   SBC = −2·ln(L) + k·ln(T)

where k = p + q + 1 is the number of parameters estimated, T is the sample size, and L is the likelihood, which essentially depends on the sum of squared residuals. The model with the lowest SBC is considered "best". SBC can be positive or negative. NOTE: SAS's formula differs slightly from the one in the textbook.

#15 This is the ARIMA(1,1,0) model:

   Δy_t = β_0 + β_1·Δy_{t-1} + ε_t

Things to notice: the parameter estimate on the AR(1) term, β_1, is statistically significant, which is good. However, the autocorrelation check of the residuals tells us that the residuals from this ARIMA(1,1,0) are not white noise (the p-value is …): we have left important information in the residuals that could be used. We need a better model. (The output shows the estimates of β_0 and β_1.)

#16 This is the ARIMA(1,1,1) model:

   Δy_t = β_0 + β_1·Δy_{t-1} + ε_t + λ_1·ε_{t-1}

Things to notice: the parameter estimates of the AR(1) term β_1 and of the MA(1) term λ_1 are statistically significant. Also, the autocorrelation check tells us that the residuals from this ARIMA(1,1,1) are white noise, since the Chi-Square statistics up to a lag of 18 have p-values greater than 10%, meaning we cannot reject the null hypothesis that the autocorrelations up to lag 18 are jointly zero (p-value = …). Also, the SBC statistic is smaller. So we might be done… (The output shows the estimates of β_0, β_1, and λ_1.)

#17 This is the ARIMA(2,1,0) model:

   Δy_t = β_0 + β_1·Δy_{t-1} + β_2·Δy_{t-2} + ε_t

This model has statistically significant coefficient estimates, but the residuals up to lag 6 reject the null hypothesis of white noise, casting some doubt on this model. We won't place much meaning in the Chi-Square statistics for lags beyond 18. The SBC statistic is larger, which is not good.

#18 This is the ARIMA(2,1,4) model:

   Δy_t = β_0 + β_1·Δy_{t-1} + β_2·Δy_{t-2} + ε_t + λ_1·ε_{t-1} + λ_2·ε_{t-2} + λ_3·ε_{t-3} + λ_4·ε_{t-4}

Two of the parameter estimates are not statistically significant, telling us the model is not "parsimonious", and the SBC statistic is larger than the SBC for the ARIMA(1,1,1) model. Ignore the first Chi-Square statistic, since it has 0 degrees of freedom due to estimating a model with 7 parameters. The Chi-Square statistics at 12 and 18 lags are statistically insignificant, indicating white noise.

#19 Forecasts:

   PROC ARIMA;
      IDENTIFY VAR=cu(1);
      ESTIMATE p=1;                                       /* any model goes here */
      FORECAST LEAD=6 ID=date INTERVAL=month OUT=fore1;
   RUN;

We calculate the Mean Squared Error for the 6 out-of-sample forecasts. Graphs appear on the next four slides. We find that the fourth model produces forecasts with the smallest MSE. SAS automatically adjusts the forecasts from first differences back into levels. Use the actual values for cu and the forecasted values below to generate a mean squared prediction error for each model estimated. The formula is

   MSE = (1/6) · Σ (fcu − cu)²

where fcu is a forecast and cu is the actual value.

[Table: observations 441–447, SEP 2003 – MAR 2004, with columns Obs, date, cu, cu2, and for each of the four models the forecast (fcu1–fcu4) and its standard error (sd1–sd4); the numeric entries were not captured in the transcript.]
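A sketch of the MSE calculation for one model, assuming the forecasts landed in work.fore1 (PROC ARIMA's OUT= dataset contains a FORECAST variable alongside the ID variable) and that the holdout actuals live in work.capacity (all names are illustrative):

   DATA work.errors;
      MERGE work.fore1 work.capacity;   /* both sorted by date; actual cu from capacity
                                           overwrites the missing holdout values in fore1 */
      BY date;
      IF date >= '01OCT2003'd;          /* keep only the 6 out-of-sample months */
      sqerr = (forecast - cu)**2;       /* squared forecast error */
   RUN;

   PROC MEANS DATA=work.errors MEAN;
      VAR sqerr;                        /* the printed mean is the MSE for this model */
   RUN;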

#20 [Graph: forecasts plotted against actual values]

#21 Granger Causality (Predictability) Test. We can test whether another variable X helps to predict our series Y_t by regressing Y_t on its own lags and on lags of X:

   Y_t = β_0 + β_1·Y_{t-1} + … + β_p·Y_{t-p} + α_1·X_{t-1} + … + α_q·X_{t-q} + ε_t

This can be done through a simple F-test on the α parameters. If these are jointly zero, then the variable X has no "predictive content" for variable Y. See the textbook, Chapter 14.
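A minimal sketch of this F-test in SAS using PROC REG, assuming a candidate predictor x, dependent variable y, and two lags of each (variable names and lag length are illustrative; both series should already be stationary):

   DATA work.gc;
      SET work.mydata;
      y1 = lag1(y);   y2 = lag2(y);    /* own lags of y */
      x1 = lag1(x);   x2 = lag2(x);    /* lags of the candidate predictor */
   RUN;

   PROC REG DATA=work.gc;
      MODEL y = y1 y2 x1 x2;
      Granger: TEST x1 = 0, x2 = 0;    /* joint F-test on the alpha parameters */
   RUN;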

#22