1
Time Series EC 532 2017 Burak Saltoglu
2
Ec532 2nd half: Time Series Analysis
Topic 1 linear time series Topic 2 nonstationary time series Topic 3 Cointegration and unit roots Topic 4 Vector Autoregression (VAR) Topic 5 Volatility modeling (if time allows)
3
References
4
Main reference book for the course
Vance Martin, Stan Hurn and David Harris, 2013, Econometric Modelling with Time Series. The accompanying R, Matlab and GAUSS codes are very useful.
5
Time series books: Hamilton, J. (1994), Time Series Analysis
Enders, W. (2014), Applied Econometric Time Series; Chatfield (2003), The Analysis of Time Series; Diebold, F. (2006), Elements of Forecasting
6
Reference books: Ruey Tsay, 2013
Walters, Applied Time Series Methods, Wiley, 2013; Granger, Long Run Economic Relationships, 1990; Hamilton, Time Series Analysis, 1994.
7
Topic 1: Linear Time series Outline
Non-stationary time series Distributed Lag Models Nonlinear Models
8
Outline: Linear Time Series Models, ARDL Models, Granger Causality Test
AR and MA processes Diagnostics in Time Series Correlogram Box-Pierce Q Statistics Ljung-Box (LB) Statistics Forecasting
9
Later in topic 2 Stationary versus Non-stationary Times Series Testing for Stationarity
10
The Reasons for using Time Series
Time series: a set of observations on a variable, y, taken at equally spaced intervals over time. Why study time series? Two reasons: analysis and modelling. The aim of analysis is to summarize the characteristics of the series; the aim of modelling is to forecast future values. Why do time series models work? Psychological reasons: people do not change their habits immediately. Technological reasons: the quantity of a resource needed or bought may not adjust quickly. Institutional reasons: there may be constraints on individuals.
11
Distributed Lag Models
In the distributed lag (DL) model we include not only the current value of the explanatory variable but also its past value(s). With DL models, the effect of a shock to the explanatory variable lasts longer. We can (in principle) estimate DL models with OLS, because the lags of X are also non-stochastic.
12
Autoregressive Models
In autoregressive (AR) models, past value(s) of the dependent variable become explanatory variables. We cannot estimate an autoregressive model with plain OLS due to 1. the presence of stochastic explanatory variables and 2. the possibility of serial correlation.
13
ARDL Models: In ARDL models, we have both an AR and a DL part in one regression.
14
Granger Causality Test
Let us consider the relation between GNP and the money supply. A regression analysis can show the association between the two, but it cannot tell us the direction of the relation. The Granger causality test examines the direction of causality between series: we can test whether GNP causes the money supply to increase, or whether a monetary expansion leads GNP to rise, under the conditions defined by Granger.
15
Granger Causality Test
Steps for testing whether M (Granger-)causes GNP:
1. Regress GNP on its own lags; obtain the restricted residual sum of squares RSS_R.
2. Regress GNP on lagged GNP and lagged M; obtain the unrestricted RSS_UR.
The null is that the coefficients on lagged M are all zero. Test statistic: F = [(RSS_R - RSS_UR)/m] / [RSS_UR/(n - k)], where m is the number of lags of M and k the number of parameters in step 2; under the null, F ~ F(m, n - k).
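The two-step F-test above can be sketched in Python with NumPy (an illustrative implementation; the function name, the simulated data, and the lag choice m = 2 are our own):

```python
import numpy as np

def granger_f_test(y, x, m):
    """F-test of the null that lags of x do not help predict y.

    Restricted model:   y_t on const + m lags of y        -> RSS_R
    Unrestricted model: y_t on const + m lags of y and x  -> RSS_UR
    Returns the F statistic with (m, n - k) degrees of freedom.
    """
    T = len(y)
    Y = y[m:]
    lags_y = np.column_stack([y[m - j:T - j] for j in range(1, m + 1)])
    lags_x = np.column_stack([x[m - j:T - j] for j in range(1, m + 1)])
    const = np.ones(len(Y))
    Xr = np.column_stack([const, lags_y])            # restricted regressors
    Xu = np.column_stack([const, lags_y, lags_x])    # unrestricted regressors
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_ur = rss(Xr), rss(Xu)
    n, k = len(Y), Xu.shape[1]
    return ((rss_r - rss_ur) / m) / (rss_ur / (n - k))

# demo on simulated data where x clearly leads y (illustrative)
rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
print(granger_f_test(y, x, m=2))
```

A large F relative to the F(m, n - k) critical value rejects the null that M does not Granger-cause GNP.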
16
Granger Causality Test
17
Linear Time Series Models: y(t)
Time series analysis is useful when the economic relationship is difficult to specify. Even if there are explanatory variables that express y, their future values are unknown, so they cannot be used directly to forecast y(t).
18
Stationary Stochastic Process
Any time series data can be thought of as being generated by a stochastic process. A stochastic process is said to be stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the distance (lag) between them, not on the actual time at which the covariance is computed.
19
Times series and white noise
A process is said to be white noise if it satisfies the following properties: E(ε_t) = 0, var(ε_t) = σ² for all t, and cov(ε_t, ε_s) = 0 for t ≠ s.
20
Stationary Time Series
If a time series is invariant with respect to shifts in time, the process can be estimated with fixed coefficients. Strict-sense stationarity: the joint distribution of (y_{t1}, …, y_{tk}) is unchanged when every time index is shifted by the same amount.
21
Stationarity Wide sense stationarity
22
Stationarity: strict-sense stationarity implies wide-sense stationarity, but the reverse is not true. Implication of stationarity: inference drawn from a non-stationary series (treated as stationary) is misleading and wrong.
23
Linear Time Series Models-AR
Basic ARMA Models
24
Lag operators: the lag operator L is defined by L y_t = y_{t−1}, so L^k y_t = y_{t−k}. Equivalently, we can use lag polynomials, e.g. φ(L) = 1 − φ_1 L − … − φ_p L^p for an AR(p).
25
Lag operators and polynomials
26
AR vs MA Representation Wold Decomposition
27
AUTOCORRELATIONS and AUTOCOVARIANCE FUNCTIONS
y_t = \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j}
\mathrm{var}(y_t) = E\Big(\sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j}\Big)^2 = \sum_{j=0}^{\infty} \phi^{2j} E(\varepsilon_{t-j}^2)
\sum_{j=0}^{\infty} \phi^{2j} = 1 + \phi^2 + \phi^4 + \dots = \frac{1}{1-\phi^2}
\mathrm{var}(y_t) = \frac{1}{1-\phi^2}\,\sigma^2
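The derivation above can be checked by simulation (a sketch; φ = 0.8, σ = 1 and the sample size are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma = 0.8, 1.0
T = 200_000
eps = rng.normal(0, sigma, T)
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]      # AR(1) recursion

theoretical = sigma**2 / (1 - phi**2)   # = 1/0.36 ≈ 2.778
print(theoretical, y.var())             # sample variance should be close
```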
28
Autocorrelation
29
Sample counterpart of autocovariance function
\hat\gamma_0 = \frac{1}{T}\sum_{t=1}^{T} (y_t - \bar y)^2 = \hat\sigma^2
\hat\gamma_k = \frac{1}{T}\sum_{t=k+1}^{T} (y_t - \bar y)(y_{t-k} - \bar y), \quad k = 1, 2, \dots
Because of stationarity, \gamma_k = \gamma_{-k}, and \rho_k = \gamma_k / \gamma_0.
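The sample formulas can be written directly in Python (a sketch using the biased 1/T estimator shown above; function name is our own):

```python
import numpy as np

def acf(y, nlags):
    """Sample autocorrelations rho_k = gamma_k / gamma_0 (1/T estimator)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    yc = y - y.mean()
    gamma0 = (yc @ yc) / T
    return np.array([(yc[k:] @ yc[:T - k]) / T / gamma0
                     for k in range(nlags + 1)])

rng = np.random.default_rng(0)
e = rng.normal(size=10_000)
r = acf(e, 5)
print(r)   # r[0] = 1 by construction; higher lags near zero for white noise
```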
30
Partial Autocorrelation
The partial autocorrelation measures the correlation between the current observation and the one k periods ago, after controlling for the intermediate lags. An AR process has a geometrically decaying ACF, and the number of non-zero points of the PACF equals the AR order. An MA process has a geometrically decaying PACF, and the number of non-zero points of the ACF equals the MA order. At the first lag the PACF and ACF are equal.
31
Linear Time Series -AR For AR(1);
32
Linear Time Series Models-AR
33
Linear Time Series Models-AR
34
Linear Time Series Models-AR(1)
So if you have data generated by an AR(1) process, its correlogram will diminish slowly (provided the process is stationary).
35
y_t
36
y_t = 0.95 y_{t-1} + ε_t
37
AR(1) process: y_t = 0.99 y_{t-1} + ε_t
38
AR process simulation: y_t = 0.90 y_{t-1} + ε_t
39
AR process simulation: y_t = 0.5 y_{t-1} + ε_t
40
AR(1) with weak predictable part
y_t = 0.05 y_{t-1} + ε_t
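The AR(1) processes simulated on the preceding slides can be reproduced with a short script (illustrative; the sample size and seed are our own choices):

```python
import numpy as np

def simulate_ar1(phi, T, seed=0):
    """Simulate y_t = phi * y_{t-1} + eps_t with standard normal shocks."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=T)
    y = np.empty(T)
    y[0] = eps[0]
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]
    return y

for phi in (0.99, 0.95, 0.90, 0.50, 0.05):
    y = simulate_ar1(phi, 5000)
    r1 = np.corrcoef(y[1:], y[:-1])[0, 1]
    print(f"phi={phi:.2f}  lag-1 sample autocorrelation={r1:.3f}")
```

The closer φ is to 1, the more persistent the simulated path and the slower the correlogram dies out.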
41
Turkish GDP growth
42
Turkish GDP and inflation
Turkish GDP (quarterly): mean 4.6635, st. deviation 5.6062, skewness —, kurtosis 4.3208
43
US GDP: (Quarterly)
44
Turkish GDP quarterly: Autocorrelations
45
AR(1) Application on Turkish growth rate
Turkish GDP: estimate / SE / t-stat — constant 0.93 / 0.46 / 2.05; AR(1) 0.80 / 0.07 / 11.99; variance 11.86 / 1.87 / 6.33. Sample: mean 4.6635, st. deviation 5.6062, kurtosis 4.3208. var(y_t) = σ²/(1 − φ²); E(y_t) = δ/(1 − φ) = 4.562 (using unrounded estimates), close to the sample mean.
46
Turkish inflation autocorrelations: persistence
47
AR(1) Application on Turkish inflation
Turkish inflation: estimate / SE / t-stat — constant 1.45 / 1.74 / 0.8; AR(1) 0.959 / 0.02 / 45.9; variance 78.2 / 2.8 / 27. Inflation (monthly): mean 35, st. deviation 31, skewness 0.94, kurtosis 3.13. E(y_t) = δ/(1 − φ) = 1.45/(1 − 0.959) ≈ 35.36
48
Turkish inflation
49
y_t = ε_t
50
y_t = 0.9 y_{t-1} + ε_t versus y_t = -0.8 y_{t-1} + ε_t
51
Linear Time Series Models-AR(p)
Autoregressive AR(p) process: y_t = δ + φ_1 y_{t−1} + … + φ_p y_{t−p} + ε_t. Expected value of y: E(y_t) = δ/(1 − φ_1 − … − φ_p).
52
Linear Time Series Models-MA
Moving Average MA(k) process. The term 'moving average' comes from the fact that y is constructed as a weighted sum of the most recent error terms.
54
MA(1) Correlogram
55
Linear Time Series Models-MA(1)
So if you have data generated by an MA(1) process, its correlogram will decline to zero quickly (after one lag).
56
An MA(1) example: one major implication is that the MA(1) process has a memory of only one lag, i.e. it forgets immediately after one term, remembering only the single previous realization.
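A simulation illustrates the one-lag memory: for an MA(1) with θ = 0.6 (our choice), the lag-1 autocorrelation is θ/(1 + θ²) ≈ 0.441 and higher-order autocorrelations are zero:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, T = 0.6, 100_000
eps = rng.normal(size=T + 1)
y = eps[1:] + theta * eps[:-1]          # MA(1): y_t = eps_t + theta*eps_{t-1}

def rho(y, k):
    """Sample autocorrelation at lag k."""
    yc = y - y.mean()
    return (yc[k:] @ yc[:-k]) / (yc @ yc)

print(rho(y, 1))   # ≈ theta/(1+theta^2) = 0.6/1.36 ≈ 0.441
print(rho(y, 2))   # ≈ 0 — no memory past one lag
```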
57
Variance-autocovariance MA(2)
** since the error term is white noise
58
MA(2)
59
Linear Time Series Models-MA
Moving Average MA(k) process: the error term is white noise. MA(k) has k + 2 parameters. Variance of y: need to derive γ_2, …, γ_k.
60
Homework: derive the autocorrelation function for MA(3), …, MA(k).
61
ARMA Models: ARMA(1,1)
62
ARMA(1,1)
63
ARMA(1,1)
65
Maximum likelihood estimation
66
Deriving the likelihood function
67
Log-likelihood
69
Estimation of AR(1): since T and the other constant terms do not depend on the parameters, we can ignore them in the optimization and maximize the remaining (conditional sum-of-squares) terms.
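Conditional on the first observation, maximizing the AR(1) likelihood is equivalent to minimizing the sum of squared residuals, so the ML estimate coincides with OLS. A minimal check on simulated data (parameter choices are our own):

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true, T = 0.8, 5000
eps = rng.normal(size=T)
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + eps[t]

# conditional MLE = OLS slope of y_t on y_{t-1} (no intercept here)
phi_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
sigma2_hat = np.mean((y[1:] - phi_hat * y[:-1]) ** 2)   # MLE of sigma^2
print(phi_hat, sigma2_hat)
```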
70
Likelihood function for ARMA(1,1) process
71
Model Selection How well does it fit the data?
Adding additional lags for p and q will reduce the SSR, but adding new variables decreases the degrees of freedom and may worsen the forecasting performance of the fitted model. A parsimonious model optimizes this trade-off.
72
Two Model Selection Criteria
Akaike Information Criterion (AIC) and Schwarz Bayesian Criterion (SBC): AIC = T ln(SSR) + 2k and SBC = T ln(SSR) + k ln(T), where k is the number of parameters estimated (p + q + 1 if an intercept is allowed, else p + q) and T is the number of observations. Choose the lag order which minimizes the AIC or SBC. The AIC may be biased towards selecting an overparameterized model, whereas the SBC is asymptotically consistent.
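With the forms above (as in Enders), the criteria are trivial to compute; the SSR values below are hypothetical:

```python
import numpy as np

def aic_sbc(ssr, T, k):
    """AIC and SBC in the T*ln(SSR) + penalty form; smaller is better."""
    return T * np.log(ssr) + 2 * k, T * np.log(ssr) + k * np.log(T)

a1, s1 = aic_sbc(ssr=102.0, T=100, k=2)   # e.g. AR(1) with intercept
a2, s2 = aic_sbc(ssr=100.0, T=100, k=3)   # AR(2): lower SSR, one more parameter
# here both prefer the smaller model, AIC only barely; SBC penalizes the
# extra parameter much harder because ln(100) > 2
print(a1, a2, s1, s2)
```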
73
Characterization of Time Series
Visual inspection; autocorrelation order selection; tests for significance: Bartlett (individual), Ljung-Box (joint)
74
Correlogram. One simple test of stationarity is based on the autocorrelation function (ACF). The ACF at lag k is ρ_k = γ_k/γ_0. Under stationarity, the sample autocorrelations are approximately N(0, 1/T).
75
Sample Autocorrelation
76
Correlogram: if we plot ρ̂_k against k, the graph is called the correlogram.
As an example let us look at the correlogram of Turkey’s GDP.
77
Correlogram: Autocorrelation Function
78
Test for autocorrelation
Bartlett test: under the null ρ_k = 0, the sample autocorrelation ρ̂_k is approximately N(0, 1/T), so |ρ̂_k| > 1.96/√T is significant at the 5% level.
80
ISE30 Return Correlation
81
Box-Pierce Q Statistics
To test the joint hypothesis that all autocorrelation coefficients up to lag m are simultaneously zero, one can use the Q statistic Q = T Σ_{k=1}^{m} ρ̂_k², which is asymptotically χ²(m) under the null, where m is the lag length and T the sample size.
82
Ljung-Box (LB) Statistics
The Ljung-Box statistic is a finite-sample variant of the Q statistic: LB = T(T + 2) Σ_{k=1}^{m} ρ̂_k²/(T − k), again asymptotically χ²(m) under the null.
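Both statistics follow directly from the sample autocorrelations; a sketch of the Ljung-Box version (function name is our own):

```python
import numpy as np

def ljung_box(y, m):
    """LB = T(T+2) * sum_{k=1}^{m} rho_k^2 / (T-k),
    asymptotically chi-square(m) under H0: no autocorrelation."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    yc = y - y.mean()
    gamma0 = yc @ yc
    rho = np.array([(yc[k:] @ yc[:-k]) / gamma0 for k in range(1, m + 1)])
    return T * (T + 2) * np.sum(rho**2 / (T - np.arange(1, m + 1)))

rng = np.random.default_rng(0)
wn = rng.normal(size=2000)
print(ljung_box(wn, 10))   # under H0, typically near the chi2(10) mean of 10
```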
83
Box-Pierce Q Statistics
84
Box Jenkins approach to time series
1. Data. Stop if the series are non-stationary.
2. Identification: choose the ARMA orders p and q.
3. Estimation: estimate the ARMA coefficients.
4. Diagnostic checking: is the model appropriate?
5. Forecasting.
85
Forecasting timeline: estimation period t = 1, …, T (today = T); ex post forecasting period T + 1, …, T + R; ex ante period beyond T + R.
86
Introduction to forecasting
88
In practice, if we can consistently select the order via AIC, then we can forecast future values of y. There are alternative measures for assessing forecast accuracy.
89
Mean Square Prediction Error Method (MSPE)
Choose the model with the lowest MSPE. If there are H observations in the holdback period, the MSPE for Model 1 is MSPE₁ = (1/H) Σ_{h=1}^{H} (y_{T+h} − ŷ_{T+h})².
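A minimal MSPE helper (the holdback arrays below are illustrative):

```python
import numpy as np

def mspe(actual, forecast):
    """Mean squared prediction error over the holdback period."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean((actual - forecast) ** 2)

print(mspe([1.0, 2.0, 4.0], [1.0, 3.0, 3.0]))   # (0 + 1 + 1)/3
```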
90
A Forecasting example for AR(1)
Suppose we are given
91
A Forecasting example for AR(1)
Left for forecasting
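For an AR(1), the h-step forecast has a closed form, ŷ_{T+h} = δ(1 + φ + … + φ^{h−1}) + φ^h y_T, which decays toward the unconditional mean δ/(1 − φ). A sketch using the Turkish GDP estimates δ = 0.93, φ = 0.80 reported earlier (the last observation y_T = 10 is hypothetical):

```python
def ar1_forecast(y_T, delta, phi, h):
    """h-step-ahead forecast of an AR(1):
    E[y_{T+h} | y_T] = delta*(1 + phi + ... + phi^{h-1}) + phi**h * y_T,
    converging to the unconditional mean delta/(1-phi) as h grows."""
    geom = (1 - phi**h) / (1 - phi)
    return delta * geom + phi**h * y_T

print(ar1_forecast(10.0, 0.93, 0.80, 1))    # 0.93 + 0.8*10 = 8.93
print(ar1_forecast(10.0, 0.93, 0.80, 50))   # ≈ delta/(1-phi) = 4.65
```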
93
Introduction to forecasting
94
Forecast of the AR(1) model: forecasts versus actual values for y(151) through y(161) (first forecast: −6.452201702; the remaining table values did not survive extraction).
95
AR(1) forecast
97
Forecast Combinations
Assume there are two competing forecast models, a and b. The combined forecast is a linear combination of the two, and the forecast error inherits the same linear combination. Assume no correlation between the errors of models a and b.
98
So, if model a has a smaller prediction error variance than b, we give more weight to a.
99
Forecasting combination
Voting example: company A forecasts the vote for party X at 40%, company B at 50%. Past survey performances: σ²_a = 30%, σ²_b = 20%.
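With uncorrelated errors, the optimal weight on each forecaster is inversely proportional to its own error variance: w_a = σ²_b/(σ²_a + σ²_b). Applying this to the voting example:

```python
# inverse-variance weights for combining two unbiased, uncorrelated forecasts
var_a, var_b = 30.0, 20.0     # past squared-error variances of firms A and B
f_a, f_b = 40.0, 50.0         # forecasts of party X's vote share (%)

w_a = var_b / (var_a + var_b)   # weight is inverse to own error variance
w_b = var_a / (var_a + var_b)
combined = w_a * f_a + w_b * f_b
print(w_a, w_b, combined)       # weights 0.4 and 0.6; combined forecast 46%
```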
100
Using regression for forecast combinations
Run the following regression and then form the combined forecast from the estimated coefficients.
101
Summary: find the AR and MA orders via autocovariances and correlogram plots.
Use AIC and SBC to choose the orders; check the LB statistics; run the regression; do forecasting (use RMSE or MSPE) to choose the best out-of-sample forecasting model.
102
Topic II: Testing for Stationarity and Unit Roots
EC 532
103
Outline: What are unit roots? Why are they important?
Spurious regression. Testing for unit roots: Dickey-Fuller and Augmented Dickey-Fuller tests.
104
Stationarity and random walk
Can we test via the ACF or Ljung-Box? Why is a formal test necessary? Source: W. Enders, Chapters 4 and 6.
105
Spurious regression: regressions involving non-stationary time series data can produce spurious or dubious results — a high R² with strongly autocorrelated residuals signals a spurious regression. Two variables carrying the same trend will move together, but this does not mean there is a genuine or natural relationship between them.
106
Spurious regression: one of the OLS assumptions was stationarity of the series; when it fails, we call such a regression spurious (Granger and Newbold, 1974).
107
Unit roots and cointegration
Clive Granger Robert Engle
108
Spurious regression: the least squares estimates are not consistent, and the usual tests and inference do not hold. As a rule of thumb (Granger and Newbold, 1974), suspect a spurious regression when R² exceeds the Durbin-Watson statistic.
109
Example of spurious regression: two simulated random walks (Ar1.xls)
X_t = X_{t−1} + u_t, u_t ~ N(0,1); Y_t = Y_{t−1} + ε_t, ε_t ~ N(0,1); u_t and ε_t independent. Spurious regression: Y_t = βX_t + u_t. The slope on X appears highly significant (reported p-value 9.87E−16) even though the series are unrelated.
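The experiment is easy to replicate: regress one simulated random walk on an independent one, and the t-statistic is typically far outside ±2 while the residuals are strongly autocorrelated (the seed and sample size are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
x = np.cumsum(rng.normal(size=T))   # random walk X
y = np.cumsum(rng.normal(size=T))   # independent random walk Y

X = np.column_stack([np.ones(T), x])             # OLS of Y on const and X
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (T - 2)
se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))   # std. error of the slope
t_stat = beta[1] / se
rho1 = np.corrcoef(resid[1:], resid[:-1])[0, 1]  # residual autocorrelation
print(t_stat, rho1)   # slope often looks "significant"; residuals near I(1)
```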
110
Examples:Gozalo
111
Unit Roots: Stationarity
112
Stationary and unit roots
113
Some Time Series Models: Random Walk Model
The random walk model is y_t = y_{t−1} + ε_t, where the error term is white noise.
114
Random walk: now let us look at the dynamics of the model. Substituting back, y_t = y_0 + Σ_{i=1}^{t} ε_i, so var(y_t) = tσ², which grows with t.
115
Implications of Random walk
117
Random Walk: BİST30 index
118
Random Walk: ISE percentage returns
120
Why a formal test is necessary?
For instance, the graph of the daily Brent oil series below shows a non-stationary time series.
121
Brent Oil: 20 years of daily data
End of lecture
122
How instructive is the ACF?
123
Does crude oil data follow a random walk (i.e. contain a unit root)?
Neither the graph nor the autocovariance functions can formally establish the existence of a random walk. How about a standard t-test?
124
Testing for Unit Roots: Dickey Fuller
But it would not be appropriate to use this information to reject the null of a unit root: the t-test is not valid under the null of a unit root, because hypothesis tests based on non-stationary variables cannot be evaluated analytically. Dickey and Fuller (1979, 1981) developed a formal test for unit roots; its non-standard test statistics are obtained via Monte Carlo.
125
Dickey Fuller Test: there are three versions of the Dickey-Fuller (DF) unit root test. The null hypothesis is the same in all versions: the coefficient on the lagged level (β₁) is zero, i.e. the series has a unit root.
127
Dickey Fuller Test: the test involves estimating any of the specifications below.
128
Dickey Fuller test: so we run the regression and test whether the slope is significant. The test statistic is computed like a conventional t-ratio, but it must be compared with Dickey-Fuller critical values rather than the standard t tables.
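A sketch of the simplest (no-constant) DF regression Δy_t = γ·y_{t−1} + u_t; the t-ratio on γ is compared with DF critical values (about −1.95 at 5% for this specification), not the normal tables. The simulated series are our own:

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t-ratio from the regression  dy_t = gamma * y_{t-1} + u_t.
    Compare with DF critical values, not the standard t tables."""
    dy = np.diff(y)
    ylag = y[:-1]
    gamma = (ylag @ dy) / (ylag @ ylag)
    resid = dy - gamma * ylag
    s2 = resid @ resid / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return gamma / se

rng = np.random.default_rng(4)
rw = np.cumsum(rng.normal(size=500))        # unit-root series
eps = rng.normal(size=500)
ar = np.empty(500)
ar[0] = 0.0
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + eps[t]        # stationary AR(1)
print(df_stat(rw), df_stat(ar))
```

The stationary series produces a strongly negative statistic (reject the unit root); the random walk does not.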
129
Running DF Regression
130
Testing DF in EVIEWS
131
DF: EVIEWS
132
Testing for DF for other specifications: RW with trend
133
Dickey Fuller F-test (1981)
Now the test statistic follows a non-standard F distribution, whose critical values are found in the Dickey-Fuller (1981) tables; the statistic itself is calculated as a conventional F test.
135
Augmented Dickey Fuller
136
Augmented Dickey Fuller Test
The Augmented Dickey-Fuller (ADF) test handles the autocorrelation problem. The number of lags included, m, should be large enough that the error term is not serially correlated. The null hypothesis is the same as before. Let us consider the GDP example again.
137
Augmented Dickey Fuller Test
138
Augmented Dickey Fuller Test
At the 99% confidence level, we cannot reject the null. (This regression is "not augmented".)
139
Augmented Dickey Fuller Test
At the 99% confidence level, we reject the null. This time we "augmented" the regression to handle serial correlation. Because GDP is non-stationary in levels but stationary in first differences, it is called integrated of order one, I(1). A stationary series is I(0).
140
Augmented Dickey Fuller Test
The Augmented Dickey-Fuller (ADF) test was proposed to handle the autocorrelation problem. The number of lags included, p, should be large enough that the error term is not serially correlated; in practice we use the SBC or AIC to whiten the residuals. The null hypothesis is again the same.
141
ADF
142
Example: daily Brent oil. We cannot reject the null of a unit root: the ADF test statistic has p = 0.8823 (MacKinnon (1996) one-sided p-values). Test equation: dependent variable D(BRENT), 5137 observations after adjustments, regressors BRENT(−1), a constant and a trend (remaining output values not recovered).
143
Diagnostics: Monthly trl30
146
Trl30 and 360
147
I(1) and I(0) Series: if a series is stationary, it is said to be I(0). If a series is not stationary but its first difference is stationary, it is said to be difference stationary, or I(1).
148
The next presentation investigates the joint stationarity behaviour of more than one time series, known as cointegration.
149
COINTEGRATION EC332 Burak Saltoglu
150
Economic theory implies equilibrium relationships between the levels of time series variables that are best described as I(1). Similarly, arbitrage arguments imply that the I(1) prices of certain financial time series are linked (two stocks, two emerging market bonds, etc.).
151
Cointegration: if two (or more) series are themselves non-stationary (I(1)), but a linear combination of them is stationary (I(0)), then these series are said to be cointegrated. Examples: inflation and interest rates; exchange rates and inflation rates; money demand: inflation, interest rates, income.
153
Brent vs wti
154
Crude oil futures
155
USD Treasury 2-year vs 30-year
156
Money demand: r = interest rate, y = income, infl = inflation.
Each series in the above equation may be non-stationary (I(1)), but the money demand relationship may be stationary. Each series may wander around individually, yet as an equilibrium relationship money demand is stable: even though the series themselves are non-stationary, they move closely together over time and their difference is stationary.
157
COINTEGRATION ANALYSIS
Consider the m time series variables y_{1t}, y_{2t}, …, y_{mt}, known to be non-stationary, i.e. I(1). Then y_t = (y_{1t}, y_{2t}, …, y_{mt})′ is said to form one or more cointegrating relations if there are linear combinations of the y_{it} that are I(0), i.e. if there exists an m × r matrix β such that β′y_t is I(0), where r denotes the number of cointegrating vectors.
158
Testing for Cointegration Engle – Granger Residual-Based Tests Econometrica, 1987
Step 1: run an OLS regression of y_{1t} (say) on the rest of the variables, y_{2t}, y_{3t}, …, y_{mt}, and save the residuals from this regression.
159
Dickey-Fuller (DF) unit root tests
160
Residual Based Cointegration test: Dickey Fuller test
Therefore, testing for cointegration amounts to testing whether the residuals from a combination of I(1) series are I(0). If u is I(0), we conclude the series are cointegrated: even though the individual series are I(1), their linear combination is I(0). This means there is an equilibrium vector, and if the variables diverge from equilibrium they will converge back to it at a later date. If the residuals appear to be I(1), then no cointegration relationship exists, and inference based on these variables in levels is not reliable.
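The two-step Engle-Granger procedure can be sketched as follows (the simulated cointegrated pair and its data-generating process are our own):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 1000
common = np.cumsum(rng.normal(size=T))    # shared I(1) trend
y1 = common + rng.normal(size=T)          # cointegrated pair:
y2 = 0.5 * common + rng.normal(size=T)    # y1 - 2*y2 is stationary

# Step 1: OLS of y1 on a constant and y2; save the residuals
X = np.column_stack([np.ones(T), y2])
b = np.linalg.lstsq(X, y1, rcond=None)[0]
u = y1 - X @ b

# Step 2: DF-type t-ratio on the residuals; compare with Engle-Granger
# critical values, which are more negative than the plain DF ones
du, ulag = np.diff(u), u[:-1]
g = (ulag @ du) / (ulag @ ulag)
res = du - g * ulag
se = np.sqrt((res @ res) / (len(du) - 1) / (ulag @ ulag))
tstat = g / se
print(tstat)   # strongly negative when the pair is cointegrated
```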
161
Higher order integration
If two series are I(2), a linear combination of them may still be I(1).
162
Examples of cointegration: brent wti regression
Null hypothesis: RESID01 has a unit root. Exogenous: constant; lag length 0 (automatic, based on SIC, maxlag = 12). The ADF test statistic has p = 0.0009, so we reject the unit root in the residuals: Brent and WTI are cointegrated (critical values not recovered).
163
Example of ECM: the following ECM can be formed.
164
COINTEGRATION and Error Correction Mechanism
Estimation of the ECM
165
Error Correction Term The error correction term tells us the speed with which our model returns to equilibrium for a given exogenous shock. It should have a negative signed, indicating a move back towards equilibrium, a positive sign indicates movement away from equilibrium The coefficient should lie between 0 and 1, 0 suggesting no adjustment one time period later, 1 indicates full adjustment
166
An example: are Turkish interest rates of different maturities (1 month versus 12 months) cointegrated? Step 1: test each series for I(1). Step 2: test whether the two series move together in the long run; if yes, set up an error correction mechanism.
170
So both of these series are non-stationary, i.e. I(1).
Now we test whether there exists a linear combination of the two series that is stationary.
171
COINTEGRATION and Error Correction Mechanism
172
Test for co-integration
173
COINTEGRATION and Error Correction Mechanism
Estimate the ECM
175
ECM regression
176
Use of Cointegration in Economics and Finance
Purchasing power parity: the FX rate change between two countries equals the inflation difference (Big Mac index, etc.). Uncovered interest rate parity: the exchange rate is determined by interest rate differentials. Interest rate expectations: long and short rates should move together. Consumption and income. HEDGE FUNDS! (ECM can be used to make money!)
177
Conclusion: testing for cointegration via the ADF is easy but can be problematic when the relationship is more than two-dimensional (Johansen's method is more suitable). Nonlinear cointegration, near unit roots and structural breaks are also important. In any case, the stationarity and long-run relationships of macro time series should be investigated in detail.
178
Vector Autoregression (VAR)
Proposed by Christopher Sims in the 1980s, the VAR is an econometric model used to capture the evolution of, and interdependencies among, multiple economic time series. VARs generalize univariate AR models: all variables in the system are treated symmetrically (each is explained by its own lags and the lags of all the other variables in the model). VAR models are a theory-free method of estimating economic relationships, an alternative to the "identification restrictions" of structural models.
179
VECTOR AUTOREGRESSION
180
Why VAR? Christopher Sims, from Princeton (Nobel prize winner, 2011); first VAR paper in 1980.
181
VAR Models: in a vector autoregression, all variables are regressed on their own and the other variables' lagged values. For example, a simple bivariate system in which each variable depends on one lag of both variables is called a VAR(1) model of dimension 2.
182
VAR Models: generally, a VAR(p) model of dimension k is y_t = m + A_1 y_{t−1} + … + A_p y_{t−p} + ε_t, where each A_i is a k × k matrix of coefficients and m and ε_t are k × 1 vectors. Furthermore, ε_t has no serial correlation, but there can be contemporaneous correlation across its elements.
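A bivariate VAR(1) can be simulated and estimated equation by equation with OLS (an illustrative coefficient matrix A, chosen with eigenvalues 0.6 and 0.3 inside the unit circle so the system is stationary):

```python
import numpy as np

# simulate the 2-dimensional VAR(1):  y_t = A @ y_{t-1} + eps_t
rng = np.random.default_rng(6)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])      # eigenvalues 0.6 and 0.3 -> stationary
T = 5000
y = np.zeros((T, 2))
eps = rng.normal(size=(T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + eps[t]

# estimate A by OLS, equation by equation (regress y_t on y_{t-1});
# lstsq solves Y ≈ X @ B, and since Y = X @ A.T, we recover A as B.T
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(A_hat)   # should be close to A
```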
183
An example of VAR models: 1-month and 12-month TRY interest rates, monthly data. We fit the VAR(p) specification above with k = 2.
184
VAR(2) estimates for TRL30R and TRL360R: coefficients on TRL30R(−1), TRL30R(−2), TRL360R(−1), TRL360R(−2) and a constant in each equation (standard errors in parentheses, t-statistics in brackets; numerical values not recovered).
185
trl30 and trl360: Akaike information criterion −4.089038; Schwarz criterion (value not recovered).
186
Hypothesis testing To test whether a VAR with a lag order 8 is preferred to a lag order 10
187
VAR Models, impulse response functions: suppose we want to see the reaction of our simple initial VAR(1) model to a shock, say ε₁ = [1, 0]′ with all later shocks zero, tracing the response of each variable over time.