ARMA Models
Gloria González-Rivera, University of California, Riverside
Jesús Gonzalo, U. Carlos III de Madrid
White Noise
A sequence of uncorrelated random variables is called a white noise process: $E[a_t] = 0$, $Var(a_t) = \sigma^2$, and $Cov(a_t, a_{t-k}) = 0$ for all $k \neq 0$.
[Figure: ACF of a white noise process; the autocorrelations at lags $k = 1, 2, 3, 4, \ldots$ are all zero.]
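As a quick illustration (not part of the original slides), here is a minimal Python/NumPy sketch that simulates Gaussian white noise and checks that its sample autocorrelations at the first few lags are close to zero; the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
a = rng.normal(0.0, sigma, size=10_000)   # white noise: mean 0, variance sigma^2, uncorrelated

def sample_acf(x, max_lag):
    """Sample autocorrelations rho_1, ..., rho_{max_lag}."""
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

print(sample_acf(a, 4))   # all entries should be close to zero
```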
The Wold Decomposition
If $\{Z_t\}$ is a nondeterministic stationary time series, then
$$Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j} + V_t,$$
where $\psi_0 = 1$, $\sum_{j=0}^{\infty} \psi_j^2 < \infty$, $\{a_t\}$ is white noise, $a_t$ is the error from the linear projection of $Z_t$ on its own past, and $V_t$ is a deterministic component uncorrelated with the $a_t$.
Some Remarks on the Wold Decomposition
What the Wold theorem does not say
- The $a_t$ need not be normally distributed, and hence need not be iid.
- Though the linear projection $P[a_t \mid Z_{t-j}, j \geq 1] = 0$, it need not be true that $E[a_t \mid Z_{t-j}, j \geq 1] = 0$ (think about the possible consequences).
- The shocks $a_t$ need not be the "true" shocks to the system. When will this happen?
- The uniqueness result only states that the Wold representation is the unique linear representation where the shocks are linear forecast errors. Non-linear representations, or representations in terms of non-forecast-error shocks, are perfectly possible.
Birth of the ARMA models
Under general conditions the infinite lag polynomial of the Wold decomposition can be approximated by the ratio of two finite lag polynomials:
$$\psi(L) \approx \frac{\theta_q(L)}{\phi_p(L)}.$$
Therefore $\phi_p(L) Z_t = \theta_q(L) a_t$, with an AR(p) polynomial $\phi_p(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$ and an MA(q) polynomial $\theta_q(L) = 1 + \theta_1 L + \cdots + \theta_q L^q$.
MA(1) processes
Let $\{a_t\}$ be a zero-mean white noise process with variance $\sigma^2$, and consider $Z_t = \mu + a_t + \theta a_{t-1}$.
Expectation: $E[Z_t] = \mu$.
Variance: $\gamma_0 = Var(Z_t) = (1 + \theta^2)\sigma^2$.
Autocovariance: $\gamma_1 = E[(Z_t - \mu)(Z_{t-1} - \mu)] = \theta \sigma^2$.
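A minimal simulation sketch in Python/NumPy (the values $\mu = 2$, $\theta = 0.6$, $\sigma = 1.5$ are only illustrative) that compares the sample mean, variance and first autocovariance of a simulated MA(1) with the formulas above:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, theta, sigma = 2.0, 0.6, 1.5
n = 200_000

a = rng.normal(0.0, sigma, size=n + 1)
Z = mu + a[1:] + theta * a[:-1]            # MA(1): Z_t = mu + a_t + theta * a_{t-1}

Zc = Z - Z.mean()
gamma0_hat = np.mean(Zc * Zc)
gamma1_hat = np.mean(Zc[1:] * Zc[:-1])

print("mean  :", Z.mean(),   " theory:", mu)
print("gamma0:", gamma0_hat, " theory:", (1 + theta**2) * sigma**2)
print("gamma1:", gamma1_hat, " theory:", theta * sigma**2)
```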
MA(1) processes (cont.)
Autocovariance of higher order: $\gamma_k = 0$ for $k > 1$.
Autocorrelation: $\rho_1 = \theta/(1 + \theta^2)$ and $\rho_k = 0$ for $k > 1$.
The MA(1) process is covariance-stationary because its mean and autocovariances do not depend on $t$.
The MA(1) process is ergodic because $\sum_k |\gamma_k| < \infty$.
If $\{a_t\}$ were Gaussian, then $\{Z_t\}$ would be ergodic for all moments.
Both the MA(1) with parameter $\theta$ and the MA(1) with parameter $1/\theta$ share the same autocorrelation function, since $\rho_1 = \theta/(1+\theta^2) = (1/\theta)/(1+(1/\theta)^2)$.
[Figure: plot of $\rho_1$ as a function of $\theta$; $\rho_1$ ranges between $-0.5$ and $0.5$, with the maximum $0.5$ at $\theta = 1$ and the minimum $-0.5$ at $\theta = -1$.]
Hence the MA(1) is not uniquely identifiable, except for $\theta = \pm 1$.
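The identification problem can be checked numerically; the short Python sketch below (illustrative values only) evaluates $\rho_1 = \theta/(1+\theta^2)$ for a parameter $\theta$ and for its reciprocal $1/\theta$ and shows that both give the same ACF:

```python
import numpy as np

def ma1_acf(theta, max_lag=5):
    """Theoretical ACF of an MA(1): rho_1 = theta / (1 + theta^2), rho_k = 0 for k > 1."""
    rho = np.zeros(max_lag)
    rho[0] = theta / (1.0 + theta**2)
    return rho

theta = 0.4
print(ma1_acf(theta))        # rho_1 = 0.3448..., rest zero
print(ma1_acf(1 / theta))    # identical: theta and 1/theta are observationally equivalent
```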
Invertibility
Definition: An MA(q) process defined by the equation $Z_t = \theta(L) a_t$ is said to be invertible if there exists a sequence of constants $\{\pi_j\}$ with $\sum_{j=0}^{\infty} |\pi_j| < \infty$ and $a_t = \sum_{j=0}^{\infty} \pi_j Z_{t-j}$.
Theorem: Let $\{Z_t\}$ be an MA(q). Then $\{Z_t\}$ is invertible if and only if $\theta(z) \neq 0$ for all $|z| \leq 1$.
The coefficients $\{\pi_j\}$ are determined by the relation $\pi(z) = \sum_{j=0}^{\infty} \pi_j z^j = 1/\theta(z)$, $|z| \leq 1$.
Identification of the MA(1)
If we identify the MA(1) through the autocorrelation structure, we need to decide which value of $\theta$ to choose, the one greater than one or the one less than one. Requiring the condition of invertibility (think why?) we will choose the value $|\theta| < 1$.
Another reason to choose the value less than one can be found by paying attention to the error variance of the two "equivalent" representations: if $Z_t = a_t + \theta a_{t-1}$ with $Var(a_t) = \sigma_a^2$, the representation with parameter $1/\theta$ has error variance $\sigma_e^2 = \theta^2 \sigma_a^2$, and only in the invertible representation does the shock coincide with the one-step-ahead linear forecast error.
MA(q)
$Z_t = \mu + a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}$
Moments: $E[Z_t] = \mu$; $\gamma_0 = (1 + \theta_1^2 + \cdots + \theta_q^2)\sigma^2$; $\gamma_k = (\theta_k + \theta_{k+1}\theta_1 + \cdots + \theta_q \theta_{q-k})\sigma^2$ for $k = 1, \ldots, q$ and $\gamma_k = 0$ for $k > q$.
The MA(q) is covariance-stationary and ergodic for the same reasons as the MA(1).
MA($\infty$)
$Z_t = \mu + \sum_{j=0}^{\infty} \psi_j a_{t-j}$ (note the change of notation from $\theta$ to $\psi$).
Is it covariance-stationary? The process is covariance-stationary provided that $\sum_{j=0}^{\infty} \psi_j^2 < \infty$ (a square-summable sequence).
Some interesting results
Proposition 1. If $\sum_{j=0}^{\infty} |\psi_j| < \infty$ (absolutely summable), then $\sum_{j=0}^{\infty} \psi_j^2 < \infty$ (square summable).
Proposition 2. If $\sum_{j=0}^{\infty} |\psi_j| < \infty$, then $\sum_{k=0}^{\infty} |\gamma_k| < \infty$, and the MA($\infty$) process is ergodic for the mean.
Proof 1.
Since $\sum_j |\psi_j| < \infty$, there exists $N$ such that $|\psi_j| < 1$ for all $j \geq N$. Then
(1) $\sum_{j < N} \psi_j^2$ is finite because $N$ is finite;
(2) $\sum_{j \geq N} \psi_j^2 \leq \sum_{j \geq N} |\psi_j|$ (since $|\psi_j| < 1$), which is finite because $\{\psi_j\}$ is absolutely summable;
then $\sum_{j=0}^{\infty} \psi_j^2 < \infty$.
Proof 2.
$\gamma_k = \sigma^2 \sum_{j=0}^{\infty} \psi_j \psi_{j+k}$, so $|\gamma_k| \leq \sigma^2 \sum_{j=0}^{\infty} |\psi_j||\psi_{j+k}|$ and
$$\sum_{k=0}^{\infty} |\gamma_k| \leq \sigma^2 \sum_{k=0}^{\infty} \sum_{j=0}^{\infty} |\psi_j||\psi_{j+k}| \leq \sigma^2 \Big(\sum_{j=0}^{\infty} |\psi_j|\Big)^2 < \infty,$$
which is the condition for ergodicity for the mean.
AR(1)
$Z_t = c + \phi Z_{t-1} + a_t$
Using backward substitution:
$$Z_t = c(1 + \phi + \phi^2 + \cdots) + a_t + \phi a_{t-1} + \phi^2 a_{t-2} + \cdots$$
(a geometric progression).
Remember: $|\phi| < 1$ is the condition for stationarity and ergodicity, since then $\{\phi^j\}$ is absolutely summable.
AR(1) (cont.)
Hence, this AR(1) process has a stationary solution if $|\phi| < 1$.
Alternatively, consider the solution of the characteristic equation $1 - \phi z = 0$: its root is $z = 1/\phi$ with $|1/\phi| > 1$, i.e. the roots of the characteristic equation lie outside of the unit circle.
Mean of a stationary AR(1): $\mu = E[Z_t] = c/(1 - \phi)$.
Variance of a stationary AR(1): $\gamma_0 = \sigma^2/(1 - \phi^2)$.
Autocovariance of a stationary AR(1)
Rewrite the process in deviations from the mean: $Z_t - \mu = \phi(Z_{t-1} - \mu) + a_t$. Multiplying by $Z_{t-k} - \mu$ and taking expectations gives $\gamma_k = \phi \gamma_{k-1} = \phi^k \gamma_0$.
Autocorrelation of a stationary AR(1)
ACF: $\rho_k = \phi^k$, decaying exponentially.
PACF: from the Yule-Walker equations, $\phi_{11} = \rho_1 = \phi$ and $\phi_{kk} = 0$ for $k > 1$.
(Make a graph of the autocorrelations of an AR(1); see the sketch below.)
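As a rough check of the AR(1) ACF, the following Python/NumPy sketch (the value $\phi = 0.8$ is arbitrary) simulates a long AR(1) path and compares the sample autocorrelations with the theoretical values $\rho_k = \phi^k$:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma, n = 0.8, 1.0, 100_000

# Simulate a stationary AR(1): Z_t = phi * Z_{t-1} + a_t
a = rng.normal(0.0, sigma, size=n)
Z = np.zeros(n)
for t in range(1, n):
    Z[t] = phi * Z[t - 1] + a[t]

def sample_acf(x, max_lag):
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

lags = np.arange(1, 6)
print("sample ACF:", sample_acf(Z, 5))
print("theory ACF:", phi ** lags)   # rho_k = phi^k: exponential decay
# PACF: phi_11 = phi and phi_kk = 0 for k > 1, i.e. it cuts off after lag 1
```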
Causality and Stationarity
Definition: An AR(p) process defined by the equation $\phi(L) Z_t = a_t$, with $\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$, is said to be causal, or a causal function of $\{a_t\}$, if there exists a sequence of constants $\{\psi_j\}$ with $\sum_{j=0}^{\infty} |\psi_j| < \infty$ and $Z_t = \sum_{j=0}^{\infty} \psi_j a_{t-j}$.
Causality is equivalent to the condition $\phi(z) \neq 0$ for all $|z| \leq 1$.
Definition: A stationary solution $\{Z_t\}$ of the equation $\phi(L) Z_t = a_t$ exists (and is also the unique stationary solution) if and only if $\phi(z) \neq 0$ for all $|z| = 1$.
From now on we will be dealing only with causal AR models.
AR(2)
$Z_t = c + \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + a_t$
Stationarity: study the roots of the characteristic equation $1 - \phi_1 z - \phi_2 z^2 = 0$.
(a) Multiply by $-1$: $\phi_2 z^2 + \phi_1 z - 1 = 0$.
(b) Divide by $z^2$ and write $\lambda = 1/z$: $\lambda^2 - \phi_1 \lambda - \phi_2 = 0$, whose solutions $\lambda_1, \lambda_2$ are the inverses of the roots of the characteristic equation.
For a stationary causal solution it is required that $|\lambda_1| < 1$ and $|\lambda_2| < 1$, i.e. that the roots of the characteristic equation lie outside of the unit circle.
Necessary conditions for a stationary causal solution: $\phi_1 + \phi_2 < 1$, $\phi_2 - \phi_1 < 1$, $|\phi_2| < 1$.
Roots can be real or complex.
(1) Real roots: $\phi_1^2 + 4\phi_2 \geq 0$.
(2) Complex roots: $\phi_1^2 + 4\phi_2 < 0$.
[Figure: stationarity triangle of the AR(2) in the $(\phi_1, \phi_2)$ plane, with $\phi_1$ running from $-2$ to $2$ and $\phi_2$ from $-1$ to $1$; the region above the curve $\phi_1^2 + 4\phi_2 = 0$ gives real roots and the region below it gives complex roots.]
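The root conditions can be verified directly; the sketch below (Python/NumPy, with illustrative parameter values) computes the roots of $1 - \phi_1 z - \phi_2 z^2 = 0$ with np.roots and reports whether both lie outside the unit circle:

```python
import numpy as np

def ar2_is_stationary(phi1, phi2):
    """Check whether 1 - phi1*z - phi2*z^2 = 0 has both roots outside the unit circle."""
    # np.roots takes coefficients from highest power to lowest: -phi2*z^2 - phi1*z + 1
    roots = np.roots([-phi2, -phi1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0)), roots

print(ar2_is_stationary(0.5, 0.3))    # stationary, real roots
print(ar2_is_stationary(1.0, -0.5))   # stationary, complex roots (damped sine wave ACF)
print(ar2_is_stationary(0.5, 0.6))    # not stationary: phi1 + phi2 >= 1
```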
Mean of AR(2): $\mu = E[Z_t] = c/(1 - \phi_1 - \phi_2)$.
Variance and Autocorrelations of AR(2): multiplying $Z_t - \mu = \phi_1(Z_{t-1} - \mu) + \phi_2(Z_{t-2} - \mu) + a_t$ by $Z_{t-k} - \mu$ and taking expectations,
$\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \sigma^2$ and $\gamma_k = \phi_1 \gamma_{k-1} + \phi_2 \gamma_{k-2}$ for $k \geq 1$, so that
$\rho_1 = \phi_1/(1 - \phi_2)$, $\rho_2 = \phi_2 + \phi_1^2/(1 - \phi_2)$, and $\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2}$ for $k \geq 2$.
The autocorrelations follow the difference equation $\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2}$, whose solution takes different shapes according to the roots, real or complex: a mixture of decaying exponentials for real roots, a damped sine wave for complex roots. (Show correlograms of AR(2).)
Partial autocorrelations: from the Yule-Walker equations, $\phi_{11} = \rho_1$, $\phi_{22} = (\rho_2 - \rho_1^2)/(1 - \rho_1^2)$, and $\phi_{kk} = 0$ for $k > 2$. (Ask the students to prove the PACF.)
AR(p)
$Z_t = c + \phi_1 Z_{t-1} + \cdots + \phi_p Z_{t-p} + a_t$
Stationarity: all $p$ roots of the characteristic equation $1 - \phi_1 z - \cdots - \phi_p z^p = 0$ lie outside of the unit circle.
ACF: $\rho_k = \phi_1 \rho_{k-1} + \cdots + \phi_p \rho_{k-p}$ for $k \geq 1$. Evaluating this at $k = 1, \ldots, p$ gives the Yule-Walker system to solve for the first $p$ autocorrelations: $p$ unknowns and $p$ equations. The ACF decays as a mixture of exponentials and/or damped sine waves, depending on whether the roots are real or complex.
PACF: $\phi_{kk} = 0$ for $k > p$ (it cuts off after lag $p$).
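A small Python/NumPy sketch of the Yule-Walker computation described above: it assembles the $p \times p$ linear system for $\rho_1, \ldots, \rho_p$ and then extends the ACF with the recursion $\rho_k = \phi_1 \rho_{k-1} + \cdots + \phi_p \rho_{k-p}$ (the AR(2) coefficients used are illustrative):

```python
import numpy as np

def ar_acf_yule_walker(phi, max_lag=10):
    """First autocorrelations of a stationary AR(p) from the Yule-Walker equations
    rho_k = phi_1*rho_{k-1} + ... + phi_p*rho_{k-p}, with rho_0 = 1 and rho_{-j} = rho_j."""
    p = len(phi)
    A = np.zeros((p, p))
    b = np.zeros(p)
    for k in range(1, p + 1):            # one equation per lag k = 1, ..., p
        A[k - 1, k - 1] += 1.0
        for i in range(1, p + 1):
            m = abs(k - i)
            if m == 0:
                b[k - 1] += phi[i - 1]   # term phi_i * rho_0 with rho_0 = 1
            else:
                A[k - 1, m - 1] -= phi[i - 1]
    rho = list(np.linalg.solve(A, b))    # rho_1, ..., rho_p
    for k in range(p, max_lag):          # extend recursively beyond lag p
        rho.append(sum(phi[i] * rho[k - 1 - i] for i in range(p)))
    return np.array(rho)

print(ar_acf_yule_walker([0.5, 0.3], max_lag=6))   # rho_1 = 0.714..., rho_2 = 0.657..., ...
```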
Relationship between AR(p) and MA(q)
A stationary AR(p) can be written as an MA($\infty$): $\phi(L) Z_t = a_t \Rightarrow Z_t = \phi(L)^{-1} a_t = \psi(L) a_t$ with $\psi(L) = 1/\phi(L)$.
Example: for the AR(1), $Z_t = \sum_{j=0}^{\infty} \phi^j a_{t-j}$.
Invertible MA(q)
An invertible MA(q) can be written as an AR($\infty$): $Z_t = \theta(L) a_t \Rightarrow \theta(L)^{-1} Z_t = \pi(L) Z_t = a_t$ with $\pi(L) = 1/\theta(L)$.
(Write an example, e.g. an MA(2), and proceed as in the previous example; ask the students to calculate the $\pi_j$ from an MA(2). A sketch of the recursion is given below.)
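A minimal sketch (Python/NumPy, illustrative MA(2) coefficients) of the recursion that delivers the $\pi_j$ by matching powers of $L$ in $\pi(L)\theta(L) = 1$:

```python
import numpy as np

def ma_to_ar_weights(theta, n_weights=10):
    """AR(infinity) weights pi_j of an invertible MA(q) Z_t = (1 + theta_1 L + ... + theta_q L^q) a_t,
    obtained by matching powers of L in pi(L) * theta(L) = 1, so that a_t = sum_j pi_j Z_{t-j}."""
    q = len(theta)
    pi = np.zeros(n_weights)
    pi[0] = 1.0
    for j in range(1, n_weights):
        # coefficient of L^j: pi_j + theta_1*pi_{j-1} + ... + theta_q*pi_{j-q} = 0
        pi[j] = -sum(theta[i - 1] * pi[j - i] for i in range(1, min(j, q) + 1))
    return pi

# Example: MA(2) with theta_1 = 0.4, theta_2 = 0.2 (invertible)
print(ma_to_ar_weights([0.4, 0.2], n_weights=8))
```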
ARMA(p,q)
$\phi(L)(Z_t - \mu) = \theta(L) a_t$, i.e. $Z_t = c + \phi_1 Z_{t-1} + \cdots + \phi_p Z_{t-p} + a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}$.
If all roots of $\phi(z) = 0$ lie outside the unit circle, the process is stationary (and causal) and has an MA($\infty$) representation; if all roots of $\theta(z) = 0$ lie outside the unit circle, it is invertible and has an AR($\infty$) representation.
Autocorrelations of ARMA(p,q)
Multiplying $Z_t - \mu$ by $Z_{t-k} - \mu$ and taking expectations: for $k > q$, $\gamma_k = \phi_1 \gamma_{k-1} + \cdots + \phi_p \gamma_{k-p}$, so after lag $q$ the ACF decays like that of the AR(p) part (a mixture of exponentials and/or damped sine waves); the first $q$ autocorrelations depend on both the AR and the MA parameters. (Picture the autocorrelograms of ARMA.)
PACF: it does not cut off at any finite lag; it decays gradually because of the MA component.
ARMA(1,1)
$(1 - \phi L)(Z_t - \mu) = (1 + \theta L) a_t$, i.e. $Z_t = c + \phi Z_{t-1} + a_t + \theta a_{t-1}$.
Stationary if $|\phi| < 1$, invertible if $|\theta| < 1$.
ACF of ARMA(1,1)
Multiplying by $Z_{t-k} - \mu$ and taking expectations:
$\gamma_0 = \phi \gamma_1 + \sigma^2 [1 + \theta(\phi + \theta)]$,
$\gamma_1 = \phi \gamma_0 + \theta \sigma^2$,
$\gamma_k = \phi \gamma_{k-1}$ for $k \geq 2$.
ACF: solving the previous equations, $\rho_1 = \dfrac{(1 + \phi\theta)(\phi + \theta)}{1 + 2\phi\theta + \theta^2}$ and $\rho_k = \phi \rho_{k-1}$ for $k \geq 2$, so the ACF decays exponentially from $\rho_1$.
PACF: it does not cut off; it decays gradually, dominated by a damped exponential, as in an MA process.
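The ARMA(1,1) ACF formulas can be coded directly; the Python sketch below (illustrative values $\phi = 0.7$, $\theta = 0.4$) computes $\rho_1$ from the closed form and then applies $\rho_k = \phi \rho_{k-1}$:

```python
import numpy as np

def arma11_acf(phi, theta, max_lag=10, sigma2=1.0):
    """Theoretical ACF of Z_t = phi * Z_{t-1} + a_t + theta * a_{t-1}."""
    gamma0 = sigma2 * (1 + 2 * phi * theta + theta**2) / (1 - phi**2)
    gamma1 = sigma2 * (1 + phi * theta) * (phi + theta) / (1 - phi**2)
    rho = np.empty(max_lag)
    rho[0] = gamma1 / gamma0          # rho_1 = (1 + phi*theta)(phi + theta) / (1 + 2*phi*theta + theta^2)
    for k in range(1, max_lag):
        rho[k] = phi * rho[k - 1]     # rho_k = phi * rho_{k-1} for k >= 2
    return rho

print(arma11_acf(0.7, 0.4, max_lag=5))
```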
ACF and PACF of an ARMA(1,1)
ACF and PACF of an MA(2)
ACF and PACF of an AR(2)
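The three correlogram figures above can be reproduced with a short script; the sketch below assumes matplotlib and statsmodels are available (arma_generate_sample, plot_acf and plot_pacf) and uses arbitrary parameter values for the three models:

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Sign convention in statsmodels: ar = [1, -phi_1, ...], ma = [1, theta_1, ...]
models = {
    "ARMA(1,1)": ([1, -0.7], [1, 0.4]),
    "MA(2)":     ([1],       [1, 0.4, 0.2]),
    "AR(2)":     ([1, -0.5, -0.3], [1]),
}

fig, axes = plt.subplots(len(models), 2, figsize=(10, 9))
for row, (name, (ar, ma)) in enumerate(models.items()):
    z = arma_generate_sample(ar, ma, nsample=2000)
    plot_acf(z, lags=20, ax=axes[row, 0], title=f"ACF of {name}")
    plot_pacf(z, lags=20, ax=axes[row, 1], title=f"PACF of {name}")
plt.tight_layout()
plt.show()
```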
Problems
P1: Determine which of the following ARMA processes are causal and which of them are invertible (in each case $a_t$ denotes a white noise):
P2: Show that the two MA(1) processes have the same autocovariance functions.
Problems (cont.)
P3: Let $\{Z_t\}$ denote the unique stationary solution of the autoregressive equations $Z_t = \phi Z_{t-1} + a_t$, where $\{a_t\} \sim WN(0, \sigma^2)$ and $|\phi| > 1$. Then $Z_t$ is given by the expression $Z_t = -\sum_{j=1}^{\infty} \phi^{-j} a_{t+j}$. Define the new sequence $W_t = Z_t - \phi^{-1} Z_{t-1}$ and check that it is white noise. These calculations show that $\{Z_t\}$ is the (unique stationary) solution of the causal AR equations $Z_t = \phi^{-1} Z_{t-1} + W_t$.
Problems (cont.)
P4: Let $Y_t$ be the AR(1) plus noise time series defined by $Y_t = Z_t + W_t$, where $\{W_t\} \sim WN(0, \sigma_w^2)$, $\{Z_t\}$ is the stationary AR(1) process $Z_t = \phi Z_{t-1} + a_t$ with $\{a_t\} \sim WN(0, \sigma_a^2)$ and $|\phi| < 1$, and $E[W_s a_t] = 0$ for all $s$ and $t$.
Show that $\{Y_t\}$ is stationary and find its autocovariance function.
Show that the time series $U_t = Y_t - \phi Y_{t-1}$ is an MA(1).
Conclude from the previous point that $\{Y_t\}$ is an ARMA(1,1) and express the three parameters of this model in terms of $\phi$, $\sigma_a^2$ and $\sigma_w^2$.
Appendix: Lag Operator L
Definition: $L Z_t = Z_{t-1}$, and more generally $L^k Z_t = Z_{t-k}$.
Properties: it is linear, $L(aZ_t + bY_t) = a Z_{t-1} + b Y_{t-1}$; applied to a constant it leaves it unchanged, $Lc = c$; powers multiply, $L^i L^j = L^{i+j}$.
Examples: $(1 - \phi L) Z_t = Z_t - \phi Z_{t-1}$; the AR(1) can be written $(1 - \phi L) Z_t = c + a_t$ and the MA(1) as $Z_t = \mu + (1 + \theta L) a_t$.
Appendix: Inverse Operator
Definition: provided that $|\phi| < 1$, $(1 - \phi L)^{-1} = \lim_{j \to \infty} (1 + \phi L + \phi^2 L^2 + \cdots + \phi^j L^j)$.
Note that if $|\phi| \geq 1$ this definition does not hold because the limit does not exist.
Example: for the AR(1), $(1 - \phi L) Z_t = c + a_t \Rightarrow Z_t = (1 - \phi L)^{-1}(c + a_t) = \frac{c}{1 - \phi} + \sum_{j=0}^{\infty} \phi^j a_{t-j}$.
Appendix: Inverse Operator (cont.)
Suppose you have the ARMA model $\phi(L) Z_t = \theta(L) a_t$ and want to find the MA representation $Z_t = \psi(L) a_t$. You could try to crank out $\phi(L)^{-1}\theta(L)$ directly, but that's not much fun. Instead you could find $\psi(L)$ from $\theta(L) = \phi(L)\psi(L)$, matching terms in $L^j$ to make sure this works.
Example: Suppose $\phi(L) = 1 - \phi L$ and $\theta(L) = 1 + \theta L$, so $(1 + \theta L) = (1 - \phi L)(\psi_0 + \psi_1 L + \psi_2 L^2 + \cdots)$. Multiplying both polynomials and matching powers of $L$:
$\psi_0 = 1$, $\psi_1 - \phi \psi_0 = \theta$, $\psi_j - \phi \psi_{j-1} = 0$ for $j \geq 2$,
which you can easily solve recursively for the $\psi_j$: $\psi_0 = 1$ and $\psi_j = \phi^{j-1}(\phi + \theta)$ for $j \geq 1$. TRY IT!!!
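A minimal Python/NumPy sketch of the matching-powers recursion for the ARMA(1,1) example above, checked against the closed form $\psi_j = \phi^{j-1}(\phi + \theta)$ (parameter values are illustrative):

```python
import numpy as np

def arma11_psi_weights(phi, theta, n_weights=10):
    """MA(infinity) weights of (1 - phi L) Z_t = (1 + theta L) a_t,
    found by matching powers of L in (1 + theta L) = (1 - phi L) psi(L)."""
    psi = np.empty(n_weights)
    psi[0] = 1.0                       # psi_0 = 1
    if n_weights > 1:
        psi[1] = phi + theta           # from psi_1 - phi*psi_0 = theta
    for j in range(2, n_weights):
        psi[j] = phi * psi[j - 1]      # from psi_j - phi*psi_{j-1} = 0, j >= 2
    return psi

phi, theta = 0.7, 0.4
print(arma11_psi_weights(phi, theta, 6))
print([1.0] + [(phi + theta) * phi ** (j - 1) for j in range(1, 6)])   # closed-form check
```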
Appendix: Factoring Lag Polynomials
Suppose we need to invert the polynomial $1 - \phi_1 L - \phi_2 L^2$. We can do that by factoring it:
$$1 - \phi_1 L - \phi_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L), \qquad \lambda_1 + \lambda_2 = \phi_1, \quad \lambda_1 \lambda_2 = -\phi_2.$$
Now we need to invert each factor and multiply:
$$(1 - \phi_1 L - \phi_2 L^2)^{-1} = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1} = \Big(\sum_{j=0}^{\infty} \lambda_1^j L^j\Big)\Big(\sum_{j=0}^{\infty} \lambda_2^j L^j\Big) = \sum_{j=0}^{\infty} \Big(\sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k}\Big) L^j.$$
Check the last expression!!!!
Appendix: Partial Fraction Tricks
There is a prettier way to express the last inversion by using the partial fraction tricks. Find the constants $a$ and $b$ such that
$$\frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{a}{1 - \lambda_1 L} + \frac{b}{1 - \lambda_2 L}.$$
The numerator on the right hand side must be 1, so $a(1 - \lambda_2 L) + b(1 - \lambda_1 L) = 1$ for all $L$, which gives $a + b = 1$ and $a\lambda_2 + b\lambda_1 = 0$, i.e.
$$a = \frac{\lambda_1}{\lambda_1 - \lambda_2}, \qquad b = \frac{\lambda_2}{\lambda_2 - \lambda_1},$$
and therefore $(1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1} = a \sum_{j=0}^{\infty} \lambda_1^j L^j + b \sum_{j=0}^{\infty} \lambda_2^j L^j$.
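A quick numerical check (Python/NumPy, arbitrary values of $\lambda_1$ and $\lambda_2$) that the partial-fraction weights $a\lambda_1^j + b\lambda_2^j$ reproduce the convolution weights $\sum_{k=0}^{j}\lambda_1^k \lambda_2^{j-k}$ from the previous slide:

```python
import numpy as np

lam1, lam2 = 0.8, 0.5                        # distinct inverse roots, illustrative values
a = lam1 / (lam1 - lam2)                     # partial fraction constants
b = lam2 / (lam2 - lam1)

J = 12
j = np.arange(J)

# Weights from multiplying the two geometric series (convolution)
psi_convolution = np.array([sum(lam1**k * lam2**(m - k) for k in range(m + 1)) for m in j])

# Weights from the partial fraction representation: a * lam1^j + b * lam2^j
psi_partial_fraction = a * lam1**j + b * lam2**j

print(np.allclose(psi_convolution, psi_partial_fraction))   # True
```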
Appendix: More on Invertibility
Consider an MA(1): $Z_t = (1 + \theta L) a_t$.
Definition: an MA process is said to be invertible if it can be written as an AR($\infty$).
For an MA(1) to be invertible we require $|\theta| < 1$, so that $(1 + \theta L)^{-1} = \sum_{j=0}^{\infty} (-\theta)^j L^j$ exists.
For an MA(q) to be invertible, all roots of the characteristic equation $1 + \theta_1 z + \cdots + \theta_q z^q = 0$ should lie outside of the unit circle.
MA processes have an invertible and a non-invertible representation.
Invertible representation: the optimal forecast depends on past information.
Non-invertible representation: the forecast depends on the future!!!