
Time Series EC Burak Saltoglu


1 Time Series EC 532 2017 Burak Saltoglu

2 EC 532, 2nd half: Time Series Analysis
Topic 1: Linear time series. Topic 2: Nonstationary time series. Topic 3: Cointegration and unit roots. Topic 4: Vector Autoregression (VAR). Topic 5: Volatility modeling (if time allows)

3 References

4 Time series books: Hamilton, J. (1994), Time Series Analysis
Enders, W. (2014), Applied Econometric Time Series. Chatfield, C. (2003), The Analysis of Time Series. Diebold, F. (2006), Elements of Forecasting

5 Reference books: Ruey Tsay (2013).
Walters, Applied Time Series Methods, Wiley, 2013. Granger, Long Run Economic Relationships, 1990. Hamilton, Time Series Analysis, 1994.

6 Topic 1: Linear Time Series. Outline:
Non-stationary time series; Distributed Lag Models; Nonlinear Models

7 Outline: Linear Time Series Models; ARDL Models; Granger Causality Test;
AR and MA processes; Diagnostics in Time Series: Correlogram, Box-Pierce Q Statistics, Ljung-Box (LB) Statistics; Forecasting

8 Later in Topic 2: Stationary versus non-stationary time series; testing for stationarity

9 The Reasons for using Time Series
Psychological reasons: people do not change their habits immediately. Technological reasons: the quantity of a resource needed or bought may not adjust quickly in many cases. Institutional reasons: there may be constraints on individuals' behaviour

10 Distributed Lag Models
In the distributed lag (DL) model we include not only the current value of the explanatory variable but also its past value(s). With DL models, the effect of a shock to the explanatory variable lasts longer. We can (in principle) estimate DL models with OLS, because the lags of X are also non-stochastic.

11 Autoregressive Models
In autoregressive (AR) models, the past value(s) of the dependent variable become explanatory variables. We cannot estimate an autoregressive model with OLS due to 1. the presence of stochastic explanatory variables and 2. the possibility of serial correlation

12 ARDL Models In ARDL models, we have both the AR and the DL parts in one regression.

13 Granger Causality Test
Let us consider the relation between GNP and the money supply. A regression analysis can show us the relation between these two, but it cannot tell us the direction of the relation. The Granger causality test examines the causality between series, i.e. the direction of the relation. We can test whether GNP causes the money supply to increase, or whether a monetary expansion leads GNP to rise, under the conditions defined by Granger.

14 Granger Causality Test
Steps for testing whether M (Granger) causes GNP: Regress GNP on all lagged GNP, obtain the restricted residual sum of squares RSS_R. Regress GNP on all lagged GNP and all lagged M, obtain the unrestricted residual sum of squares RSS_UR. The null is that the coefficients on lagged M are all zero. Test statistic: F = [(RSS_R − RSS_UR)/m] / [RSS_UR/(n − k)], where m is the number of lags of M and k is the number of parameters in step 2; under the null, F ~ F(m, n − k).
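The F statistic above is mechanical once the two residual sums of squares are in hand; a minimal sketch, where the RSS values, lag count, and sample size are all made-up numbers for illustration:

```python
# Worked example of the Granger-causality F statistic:
#   F = ((RSS_R - RSS_UR) / m) / (RSS_UR / (n - k))
# RSS_R / RSS_UR: residual sums of squares from the restricted (lagged GNP
# only) and unrestricted (lagged GNP + lagged M) regressions; m: number of
# lagged M terms; n: sample size; k: parameters in the unrestricted model.

def granger_f(rss_r: float, rss_ur: float, m: int, n: int, k: int) -> float:
    """F statistic for H0: all coefficients on lagged M are zero."""
    return ((rss_r - rss_ur) / m) / (rss_ur / (n - k))

# Illustrative (hypothetical) numbers: 100 observations, 4 lags of M,
# 9 parameters in the unrestricted regression.
f_stat = granger_f(rss_r=120.0, rss_ur=100.0, m=4, n=100, k=9)
print(round(f_stat, 3))  # compare with the F(4, 91) critical value
```

If the computed F exceeds the F(m, n − k) critical value, we reject the null and conclude that M Granger-causes GNP.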

15 Granger Causality Test

16 Linear Time Series Models: y(t)
Time series analysis is useful when the economic relationship is difficult to specify. Even when explanatory variables for y exist, they may not be available or usable for forecasting y(t)

17 Stationary Stochastic Process
Any time series can be thought of as being generated by a stochastic process. A stochastic process is said to be stationary if its mean and variance are constant over time, and the covariance between two time periods depends only on the distance or lag between them and not on the actual time at which the covariance is computed.

18 Time series and white noise
A process ε_t is said to be white noise if it satisfies the following properties: E[ε_t] = 0, Var(ε_t) = σ² (constant), and Cov(ε_t, ε_s) = 0 for t ≠ s
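A quick simulation sketch of these properties, using only the standard library (the seed and sample size are arbitrary choices):

```python
import random
import statistics

# Simulate Gaussian white noise and check: mean near 0, variance near
# sigma^2 = 1, and negligible autocorrelation at lag 1.
random.seed(0)
eps = [random.gauss(0.0, 1.0) for _ in range(10_000)]

mean = statistics.fmean(eps)
var = statistics.pvariance(eps)

def sample_autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    m = statistics.fmean(x)
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

print(round(mean, 3), round(var, 3), round(sample_autocorr(eps, 1), 3))
```

All three quantities should be close to their theoretical values (0, 1, and 0) in a sample this large.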

19 Stationary Time Series
If a time series is time invariant with respect to changes in time, the process can be estimated with fixed coefficients. Strict-sense stationarity: the joint distribution of (y_{t1}, …, y_{tk}) is identical to that of (y_{t1+h}, …, y_{tk+h}) for every shift h and every k

20 Stationarity. Wide-sense (weak) stationarity: E[y_t] = μ for all t, Var(y_t) = σ² < ∞, and Cov(y_t, y_{t−k}) depends only on the lag k

21 Stationarity. Strict-sense stationarity implies wide-sense stationarity (when second moments exist), but the reverse is not true. Implication of stationarity: inference obtained from a non-stationary series is misleading and wrong.

22 Linear Time Series Models-AR
Basic ARMA Models

23 Lag Operators Or we can use lag polynomials

24 Lag operators and polynomials

25 AR vs MA Representation

26 AUTOCORRELATIONS and AUTOCOVARIANCE FUNCTIONS

27 Autocorrelation

28 Partial Autocorrelation

29 Linear Time Series -AR For AR(1);

30 Linear Time Series Models-AR

31 Linear Time Series Models-AR

32 Linear Time Series Models-AR(1)
So if you have data generated by an AR(1) process, its correlogram will diminish slowly (if the process is stationary)

33 AR(1) process: y_t = 0.99 y_{t-1} + ε_t

34 AR process simulation: y_t = 0.90 y_{t-1} + ε_t
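A pure-Python sketch of this simulation, checking that the sample autocorrelations of a stationary AR(1) decay slowly, roughly like 0.9^k (the seed and sample size are arbitrary choices):

```python
import random

# Simulate y_t = 0.9*y_{t-1} + eps_t and inspect its correlogram.
random.seed(1)
n = 20_000
y = [0.0]
for _ in range(n - 1):
    y.append(0.9 * y[-1] + random.gauss(0.0, 1.0))

def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rho = [acf(y, k) for k in (1, 2, 5)]
print([round(r, 2) for r in rho])  # roughly 0.9, 0.81, 0.59: slow geometric decay
```

Re-running the block with a coefficient of 0.5 or 0.05 (as on the following slides) shows the correlogram dying out much faster.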

35 AR process simulation: y_t = 0.5 y_{t-1} + ε_t

36 AR(1) with weak predictable part
y_t = 0.05 y_{t-1} + ε_t

37 y_t = ε_t

38 y_t = 0.9 y_{t-1} + ε_t versus y_t = 0.8 y_{t-1} + ε_t

39 Linear Time Series Models-AR(p)
Autoregressive AR(p): y_t = c + φ_1 y_{t-1} + … + φ_p y_{t-p} + ε_t. Expected value of y (under stationarity): E[y] = c / (1 − φ_1 − … − φ_p)

40 Linear Time Series Models-MA
Moving Average MA(k) Process. The term 'moving average' comes from the fact that y is constructed from a weighted sum of the most recent error terms (for MA(1), the two most recent)

41

42 MA(1) Correlogram

43 Linear Time Series Models-MA(1)
So if you have data generated by an MA(1) process, its correlogram will decline to zero quickly (after one lag).

44 An MA(1) example. One major implication is that the MA(1) process has a memory of only one lag, i.e. an MA(1) process forgets immediately after one term, remembering only the single previous realization.
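This one-lag memory can be checked by simulation. In the sketch below θ = 0.5 is an arbitrary choice; the theoretical first autocorrelation of an MA(1) is θ/(1+θ²), and all higher-order autocorrelations are zero:

```python
import random

# Simulate y_t = eps_t + theta*eps_{t-1} and verify the ACF cuts off
# after lag 1 (the "memory of one lag" property).
random.seed(2)
theta = 0.5          # assumed MA coefficient, for illustration only
n = 20_000
eps = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
y = [eps[t] + theta * eps[t - 1] for t in range(1, n + 1)]

def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rho1_theory = theta / (1 + theta**2)  # = 0.4 for theta = 0.5
print(round(acf(y, 1), 2), round(acf(y, 2), 2))  # near 0.4, then near 0
```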

45 Variance-autocovariance of MA(2)
(The cross terms vanish since the error term is white noise.)

46 MA(2)

47 Linear Time Series Models-MA
Moving Average MA(k) Process. The error term is white noise. MA(k) has k+2 parameters. Variance of y: Var(y) = σ²(1 + θ_1² + … + θ_k²)

48 Homework: derive the autocorrelation function for MA(3), …, MA(k).

49 ARMA Models: ARMA(1,1)

50 ARMA(1,1)

51 ARMA(1,1)

52

53 Model Selection How well does it fit the data?
Adding additional lags for p and q will reduce the SSR. But adding new variables also reduces the degrees of freedom, and it can worsen the out-of-sample forecasting performance of the fitted model. A parsimonious model optimizes this trade-off.

54 Two Model Selection Criteria
Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC). k is the number of parameters estimated: (p+q+1) if an intercept term is allowed, else k = p+q. T: number of observations. Choose the lag order which minimizes the AIC or SBC. The AIC may be biased towards selecting an overparameterized model, whereas the SBC is asymptotically consistent
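One common textbook form of these criteria (e.g. Enders) is AIC = T·ln(SSR) + 2k and SBC = T·ln(SSR) + k·ln(T). The sketch below applies them to made-up SSR values for three candidate models:

```python
import math

# AIC and SBC in the Enders textbook form; SSR values are invented
# purely to illustrate the fit-versus-parsimony trade-off.

def aic(ssr, T, k):
    return T * math.log(ssr) + 2 * k

def sbc(ssr, T, k):
    return T * math.log(ssr) + k * math.log(T)

T = 200
# model name -> (k, SSR): richer models fit better but pay a penalty.
candidates = {
    "ARMA(1,0)": (2, 105.0),
    "ARMA(1,1)": (3, 98.0),
    "ARMA(2,2)": (5, 97.5),
}

best_aic = min(candidates, key=lambda m: aic(candidates[m][1], T, candidates[m][0]))
best_sbc = min(candidates, key=lambda m: sbc(candidates[m][1], T, candidates[m][0]))
print(best_aic, best_sbc)
```

With these numbers both criteria pick ARMA(1,1): the extra parameters of ARMA(2,2) buy too little SSR reduction, while ARMA(1,0) underfits.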

55 Characterization of Time Series
Visual inspection; autocorrelation order selection; tests for significance: Bartlett, Box-Pierce, Ljung-Box

56 Correlogram
One simple test of stationarity is based on the autocorrelation function (ACF). The ACF at lag k is ρ_k = γ_k / γ_0, the autocovariance at lag k divided by the variance. Under stationarity, the sample autocorrelations of a purely random series are approximately distributed N(0, 1/n)

57 Sample Autocorrelation

58 Correlogram. If we plot the sample autocorrelations against k, the graph is called the correlogram.
As an example let us look at the correlogram of Turkey's GDP.

59 Autocorrelation Function
Correlogram Autocorrelation Function

60 Test for autocorrelation
Bartlett test: under the null that the series is white noise, the sample autocorrelation ρ̂_k is approximately N(0, 1/n), so ρ̂_k can be compared with ±1.96/√n to test ρ_k = 0

61

62 ISE30 Return Correlation

63 Box-Pierce Q Statistics
To test the joint hypothesis that all the autocorrelation coefficients are simultaneously zero, one can use the Q statistic: Q = n Σ_{k=1}^{m} ρ̂_k², which is asymptotically χ²(m) under the null, where m = lag length and n = sample size

64 Box-Pierce Q Statistics

65 Ljung-Box (LB) Statistics
It is a variant of the Q statistic with better small-sample properties: LB = n(n+2) Σ_{k=1}^{m} ρ̂_k² / (n − k), also asymptotically χ²(m) under the null
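Both portmanteau statistics are easy to compute from the sample autocorrelations; a sketch with illustrative (made-up) ρ̂ values:

```python
# Box-Pierce:  Q  = n * sum_{k=1..m} rho_k**2
# Ljung-Box:   LB = n*(n+2) * sum_{k=1..m} rho_k**2 / (n - k)
# Both are compared with a chi-square(m) critical value under H0 that
# all m autocorrelations are zero.

def box_pierce(rho, n):
    return n * sum(r**2 for r in rho)

def ljung_box(rho, n):
    return n * (n + 2) * sum(r**2 / (n - k) for k, r in enumerate(rho, start=1))

rho_hat = [0.10, -0.05, 0.02]  # invented sample autocorrelations, lags 1..3
n = 100
print(round(box_pierce(rho_hat, n), 3), round(ljung_box(rho_hat, n), 3))
```

For small samples LB is slightly larger than Q because of the n/(n − k) weighting, which is exactly its small-sample correction.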

66 Box-Jenkins approach to time series
Data → stop if the series are non-stationary → identification: choose the ARMA order (p, q) → estimate the ARMA coefficients → diagnostic checking: is the model appropriate? → forecasting

67 Forecasting. Estimation period: t = 1, …, T (up to today).
Ex post forecasting period: T+1, …, T+R. Ex ante period: beyond T+R.

68 Introduction to forecasting

69

70 In practice. If we can consistently estimate the order via the AIC, then we can forecast the future values of y. There are alternative measures for evaluating forecast accuracy

71 Mean Square Prediction Error Method (MSPE)
Choose the model with the lowest MSPE. If there are R observations in the holdback period, the MSPE for Model 1 is defined as the average of its squared prediction errors over those R observations
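A minimal sketch of the MSPE comparison, with made-up actuals and forecasts over a holdback period of R = 4:

```python
# MSPE = (1/R) * sum_{i=1..R} (y_{T+i} - yhat_{T+i})**2

def mspe(actual, forecast):
    r = len(actual)
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / r

actual  = [1.0, 1.2, 0.8, 1.1]   # hypothetical holdback observations
model_1 = [0.9, 1.1, 1.0, 1.0]   # hypothetical Model 1 forecasts
model_2 = [1.4, 0.7, 1.2, 0.6]   # hypothetical Model 2 forecasts

print(mspe(actual, model_1), mspe(actual, model_2))  # pick the smaller
```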

72 A Forecasting example for AR(1)
Suppose we are given

73 A Forecasting example for AR(1)
Left for forecasting
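For an AR(1), y_t = c + φ y_{t-1} + ε_t, the h-step forecast is built recursively: ŷ_{T+h} = c + φ·ŷ_{T+h−1}, starting from the last observation. A sketch with assumed parameter values (c = 0, φ = 0.9, y_T = 2):

```python
# Recursive multi-step forecasts from a fitted AR(1); each forecast
# feeds into the next, and they decay geometrically toward the
# unconditional mean c/(1 - phi).

def ar1_forecasts(y_last, c, phi, horizon):
    forecasts = []
    prev = y_last
    for _ in range(horizon):
        prev = c + phi * prev
        forecasts.append(prev)
    return forecasts

fc = ar1_forecasts(y_last=2.0, c=0.0, phi=0.9, horizon=10)
print([round(v, 3) for v in fc])  # decays toward the mean (0 here)
```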

74

75 Introduction to forecasting

76 Forecast of AR(1) model
(Table of AR(1) forecasts versus actuals for y(151)–y(161); most values are missing from the transcript.)

77 AR(1) forecast

78

79 Summary. Find the AR and MA orders via autocovariances and correlogram plots.
Use AIC and SBC to choose orders. Check the LB statistics. Run the regression. Do forecasting (use RMSE or MSE) to choose the best out-of-sample forecasting model.

80 Topic II: Testing for Stationarity and Unit Roots
EC 532

81 Outline. What are unit roots? Why are they important?
Spurious regression. Tests for unit roots: Dickey-Fuller and Augmented Dickey-Fuller tests

82 Stationarity and random walk
Can we test via the ACF or Box-Ljung? Why is a formal test necessary? Source: W. Enders, Chapters 4 and 6

83 Spurious Regression. Regressions involving time series data include the possibility of obtaining spurious or dubious results. Two variables carrying the same trend will tend to move together, but this does not mean that there is a genuine or natural relationship between them.

84 Spurious regression. One of the OLS assumptions was the stationarity of the series; when it fails, we call such a regression a spurious regression (Granger and Newbold, 1974).

85 Unit roots and cointegration
Clive Granger Robert Engle

86 Spurious regression. The least squares estimates are not consistent, and the regular tests and inference do not hold. As a rule of thumb (Granger and Newbold, 1974), suspect a spurious regression when R² exceeds the Durbin-Watson statistic

87 Example Spurious Regression: two simulated random walks (Ar1.xls)
X_t = X_{t-1} + u_t, u_t ~ N(0,1). Y_t = Y_{t-1} + ε_t, ε_t ~ N(0,1). u_t and ε_t are independent. Spurious regression: Y_t = βX_t + u_t. (The coefficient table is garbled in the transcript; the reported p-value on the X variable was 9.87E-16, i.e. spuriously "significant".)

88 Examples:Gozalo

89 Unit Roots: Stationarity

90 Stationary and unit roots

91 Some Time Series Models: Random Walk Model
where the error term is white noise, with the properties given earlier

92 Random Walk. Now let us look at the dynamics of such a model: the variance grows as tσ²
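Back-substituting the recursion makes the nonstationarity explicit:

```latex
y_t = y_{t-1} + \varepsilon_t
    = y_{t-2} + \varepsilon_{t-1} + \varepsilon_t
    = \dots
    = y_0 + \sum_{i=1}^{t} \varepsilon_i,
\qquad
E[y_t] = y_0,
\qquad
\operatorname{Var}(y_t) = t\,\sigma^2 .
```

Since the variance grows without bound as t increases, the random walk violates the constant-variance requirement of stationarity.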

93 Implications of Random walk

94

95 Random Walk: BİST30 index

96 Random Walk: ISE percentage returns

97 Why a formal test is necessary?
For instance, the graph of the daily Brent oil series below shows a non-stationary time series.

98 Brent Oil: 20 years of daily data
End of lecture

99 How instructive is the ACF?

100 Does crude oil data follow a random walk? (Or: does it contain a unit root?)
Neither the graph nor the autocovariance functions can be a formal proof of the existence of a random walk. How about a standard t-test?

101 Testing for Unit Roots: Dickey Fuller
But it would not be appropriate to use this information to reject the null of a unit root: the t-test is not valid under the null of a unit root, because hypothesis tests based on non-stationary variables cannot be analytically evaluated. Dickey and Fuller (1979, 1981) developed a formal test for unit roots, whose non-standard test statistics can be obtained via Monte Carlo simulation

102 Dickey Fuller Test. There are three versions of the Dickey-Fuller (DF) unit root test. The null hypothesis is the same for all versions: the coefficient on the lagged level is zero, i.e. the series contains a unit root.


104 Dickey Fuller Test. The test involves estimating any of the specifications below

105 Dickey Fuller test. So we will run the regression and test whether the slope is significant. The test statistic is computed like a conventional t-ratio, but it must be compared with Dickey-Fuller critical values rather than the standard t distribution.
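A sketch of the simplest DF regression, Δy_t = γ y_{t−1} + e_t, run on a simulated pure random walk. The −1.95 mentioned in the comment is the approximate 5% Dickey-Fuller critical value for this no-constant case; the seed and sample size are arbitrary choices:

```python
import random
import math

# Simulate a random walk (so the null of a unit root is true), then run
# the no-constant DF regression and form the t-ratio on gamma.
random.seed(4)
n = 1_000
y = [0.0]
for _ in range(n - 1):
    y.append(y[-1] + random.gauss(0.0, 1.0))

dy = [y[t] - y[t - 1] for t in range(1, n)]
ylag = y[:-1]

sxx = sum(v * v for v in ylag)                       # no-intercept OLS
gamma = sum(a * b for a, b in zip(ylag, dy)) / sxx
resid = [b - gamma * a for a, b in zip(ylag, dy)]
s2 = sum(e * e for e in resid) / (len(dy) - 1)
t_ratio = gamma / math.sqrt(s2 / sxx)
print(round(gamma, 4), round(t_ratio, 3))  # compare t_ratio with ~ -1.95 (DF), not the t table
```

Under the null, this t-ratio follows the Dickey-Fuller distribution, so using ±1.96 from the normal/t tables would reject far too often.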

106 Running DF Regression

107 Testing DF in EVIEWS

108 DF: EVIEWS

109 Testing for DF for other specifications: RW with trend

110 Dickey Fuller F-test (1981)
Now of course the test statistic is an F-type statistic, but its distribution is non-standard: although it is calculated like a conventional F test, its critical values must be taken from the Dickey-Fuller (1981) tables.


112 Augmented Dickey Fuller

113 Augmented Dickey Fuller Test
The Augmented Dickey-Fuller (ADF) test allows us to handle the autocorrelation problem. The number of lags included, m, should be big enough that the error term is not serially correlated. The null hypothesis is again the same. Let us consider the GDP example again

114 Augmented Dickey Fuller Test

115 Augmented Dickey Fuller Test
At the 99% confidence level, we cannot reject the null. ("Not augmented")

116 Augmented Dickey Fuller Test
At the 99% confidence level, we reject the null. This time we "augmented" the regression to handle serial correlation. ***Because GDP is not stationary in levels but is stationary in first differences, it is called integrated of order one, I(1). A stationary series is then I(0).

117 Augmented Dickey Fuller Test
The Augmented Dickey-Fuller (ADF) test was proposed in order to handle the autocorrelation problem. The number of lags included, p, should be big enough that the error term is not serially correlated, so in practice we use either the SBC or the AIC to clean the residuals. The null hypothesis is again the same.

118 ADF

119 Example: Daily Brent Oil. We cannot reject the null of a unit root.
Augmented Dickey-Fuller test on BRENT (dependent variable D(BRENT), regressors BRENT(-1), a constant, and a trend; 5137 observations after adjustments): p = 0.8823, using MacKinnon (1996) one-sided p-values, so the test statistic lies inside the 1%, 5%, and 10% critical values. (Most numbers in the EVIEWS output table are missing from the transcript.)

120 Diagnostics: Monthly trl30

121

122

123 Trl30 and 360

124 I(1) and I(0) Series. If a series is stationary, it is said to be an I(0) series. If a series is not stationary but its first difference is stationary, it is said to be difference stationary, or I(1).

125 The next presentation will investigate the joint stationarity behaviour of more than one time series, known as cointegration.

126 COINTEGRATION EC332 Burak Saltoglu

127 Economic theory implies equilibrium relationships between the levels of time series variables that are best described as being I(1). Similarly, arbitrage arguments imply that the I(1) prices of certain financial time series are linked (two stocks, two emerging market bonds, etc.).

128 Cointegration If two (or more) series are themselves non-stationary (I(1)), but a linear combination of them is stationary (I(0)) then these series are said to be co-integrated. Examples: Inflation and interest rates, Exchange Rates and inflation rates, Money Demand: inflation, interest rates, income

129

130 Money demand: r: interest rates; y: income; infl: inflation.
Each series in the above equation may be nonstationary (I(1)), but the money demand relationship may be stationary. All of the above series may wander around individually, but as an equilibrium relationship money demand is stable: even though the series themselves may be non-stationary, they move closely together over time and their difference (the equilibrium error) is stationary.

131 COINTEGRATION ANALYSIS
Consider m time series variables y_1t, y_2t, …, y_mt known to be non-stationary, i.e. suppose y_it ~ I(1). Then y_t = (y_1t, y_2t, …, y_mt)' is said to form one or more cointegrating relations if there are linear combinations of the y_it that are I(0), i.e. if there exists an r×m matrix β such that βy_t ~ I(0), where r denotes the number of cointegrating vectors.

132 Testing for Cointegration: Engle-Granger Residual-Based Tests (Econometrica, 1987)
Step 1: Run an OLS regression of y_1t (say) on the rest of the variables, namely y_2t, y_3t, …, y_mt, and save the residual from this regression.

133 Dickey-Fuller (DF) unit root tests

134 Residual-Based Cointegration Test: Dickey-Fuller test
Therefore, testing for cointegration amounts to testing whether the residuals from a combination of I(1) series are I(0). If u is I(0), we conclude the series are cointegrated: even though the individual series are I(1), their linear combination is I(0). This means that there is an equilibrium vector, and if the variables diverge from equilibrium they will converge back to it at a later date. If the residuals appear to be I(1), then no cointegration relationship exists, implying that inference based on these variables is not reliable.
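The two-step idea can be sketched on simulated data: a pair of series that are individually I(1) but cointegrated with an assumed β = 2 (all numbers below are constructed for illustration; a real application would apply an ADF test, with Engle-Granger critical values, to the saved residual):

```python
import random

# y2 is a random walk (I(1)); y1 = 2*y2 + stationary noise, so the pair
# is cointegrated with cointegrating vector (1, -2).
random.seed(5)
n = 2_000
y2 = [0.0]
for _ in range(n - 1):
    y2.append(y2[-1] + random.gauss(0.0, 1.0))
y1 = [2.0 * v + random.gauss(0.0, 1.0) for v in y2]

# Step 1: OLS of y1 on y2 (with intercept); save the residual.
m1, m2 = sum(y1) / n, sum(y2) / n
beta = (sum((a - m2) * (b - m1) for a, b in zip(y2, y1))
        / sum((a - m2) ** 2 for a in y2))
resid = [b - m1 - beta * (a - m2) for a, b in zip(y2, y1)]

# Step 2 (informal check): the residual should behave like an I(0) series,
# i.e. its variance stays bounded, unlike the I(1) series y2.
def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

print(round(beta, 3), round(var(resid), 3))  # beta should land near 2
```

The estimated β is superconsistent here, which is why the residual from Step 1 is a good stand-in for the true equilibrium error in Step 2.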

135 Higher order integration
If two series are I(2), a linear combination of them may still be I(1).

136

137 Example of ECM The following is the ECM that can be formed,

138 COINTEGRATION and Error Correction Mechanism
Estimation of the ECM

139 Error Correction Term The error correction term tells us the speed with which our model returns to equilibrium for a given exogenous shock. It should have a negative signed, indicating a move back towards equilibrium, a positive sign indicates movement away from equilibrium The coefficient should lie between 0 and 1, 0 suggesting no adjustment one time period later, 1 indicates full adjustment

140 An Example. Are Turkish interest rates with different maturities (1 month versus 12 months) cointegrated? Step 1: Test each series for I(1). Step 2: Test whether the two series move together in the long run; if yes, then set up an Error Correction Mechanism.

141

142

143

144 So both of these series are non-stationary, i.e. I(1).
Now we test whether there exists a linear combination of these two series that is stationary.

145 COINTEGRATION and Error Correction Mechanism

146 Test for co-integration

147 COINTEGRATION and Error Correction Mechanism
Estimate the ECM

148

149 ECM regression

150 Use of Cointegration in Economics and Finance
Purchasing Power Parity: the FX rate difference between two countries equals the inflation difference (Big Mac index, etc.). Uncovered Interest Rate Parity: the exchange rate can be determined by interest rate differentials. Interest rate expectations: long and short rates of interest should move together. Consumption and income. HEDGE FUNDS! (ECM can be used to make money!)

151 Conclusion. Testing for cointegration via the ADF is easy but can run into problems when the relationship is more than 2-dimensional (Johansen's method is more suitable). Nonlinear cointegration, near unit roots, and structural breaks are also important. The stationarity and long-run relationships of macro time series should be investigated in detail.

152 Vector Autoregression (VAR)
Proposed by Christopher Sims in 1980, VAR is an econometric model used to capture the evolution of, and the interdependencies among, multiple economic time series. VARs generalize univariate AR models. All the variables in the VAR system are treated symmetrically (each is explained by its own lags and the lags of all the other variables in the model). VAR models are a theory-free method of estimating economic relationships; they constitute an alternative to the "identification restrictions" of structural models

153 VECTOR AUTOREGRESSION

154 Why VAR? Christopher Sims, from Princeton (Nobel prize winner, 2011). First VAR paper in 1980

155 VAR Models. In a Vector Autoregression specification, all variables are regressed on their own and the other variables' lagged values. For example, a simple VAR model is y_t = m + A y_{t-1} + ε_t, which for two variables is called a VAR(1) model with dimension 2

156 VAR Models. Generally, a VAR(p) model with dimension k is
y_t = m + A_1 y_{t-1} + … + A_p y_{t-p} + ε_t, where each A_i is a k×k matrix of coefficients, and m and ε_t are k×1 vectors. Furthermore, ε_t has no serial correlation, but there can be contemporaneous correlation across its elements

157 An example of VAR models: 1-month and 12-month TRY interest rates, monthly

158 (VAR(2) coefficient table for TRL30R and TRL360R: estimates, standard errors, and t-statistics for TRL30R(-1), TRL30R(-2), TRL360R(-1), TRL360R(-2), and the constant; the numbers are missing from the transcript.)

159 trl30 and trl360: Akaike information criterion -4.089038;
Schwarz criterion (value missing from the transcript)

160 Hypothesis testing. To test whether a VAR with lag order 8 is preferred to one with lag order 10

161 VAR Models. Impulse Response Functions: suppose we want to see the reaction of our simple initial VAR(1) model to a shock, say ε_1 = [1, 0]', with all subsequent shocks set to 0
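For a VAR(1) with coefficient matrix A, the response at horizon h to a shock e is A^h applied to e; a sketch with an assumed stable coefficient matrix (the values in A are illustrative, not estimates):

```python
# Impulse responses of a bivariate VAR(1), y_t = A*y_{t-1} + eps_t,
# to a one-unit shock in the first variable.

A = [[0.5, 0.1],
     [0.2, 0.4]]  # assumed coefficient matrix, chosen to be stable

def matvec(m, v):
    """2x2 matrix times 2-vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

shock = [1.0, 0.0]            # one-unit shock to the first variable
irf = [shock]
for _ in range(5):
    irf.append(matvec(A, irf[-1]))  # response at horizon h is A^h * shock

for h, resp in enumerate(irf):
    print(h, [round(r, 4) for r in resp])
```

Because A is stable (eigenvalues inside the unit circle), the responses die out, exactly what the impulse-response plots on the slides show for a stationary VAR.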

162

