Chapter 3 ARMA Time Series Models

NOTE: Some slides have blank sections. They are based on a teaching style in which the corresponding blanks (derivations, theorem proofs, examples, …) are worked out in class on the board or overhead projector.

Moving Average Process of Order q: MA(q)
Operator Notation:
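For reference, a plain-text rendering of the omitted MA(q) equations, assuming the sign convention θ(B) = 1 - θ1B - ⋯ - θqB^q that matches the θ(z) root conditions used later in these slides:
Xt - μ = at - θ1at-1 - ⋯ - θqat-q, i.e. Xt - μ = θ(B)at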

Note: An MA(q) is a GLP (general linear process) with a finite number of terms

Stationarity Region for an MA(q): the range of parameters for which the resulting model is stationary

Variance of an MA(q)
γk and ρk for an MA(q)
MA(1):
MA(2):
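A quick numerical check is possible in base R (a sketch, not from the slides). ARMAacf() computes theoretical autocorrelations but uses the opposite sign convention (Xt = at + θ1at-1 + ⋯), so the text's θ's must be negated; the MA(2) values below are purely illustrative:
# MA(1) with theta1 = .9 in the text's convention X_t = a_t - .9 a_{t-1}
ARMAacf(ma = -.9, lag.max = 5)          # rho_1 = -.9/(1+.81) ≈ -.497; rho_k = 0 for k > 1
# MA(2) with illustrative theta1 = .9, theta2 = -.4
ARMAacf(ma = c(-.9, .4), lag.max = 5)   # rho_k = 0 for k > 2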

Spectrum of an MA(q)
Two Forms:
1. From GLP:
2. Using the spectrum formula directly, i.e. the form given in Problem 1.19(b) on page 58

Spectral Density of an MA(q)

Spectral Density of an MA(1)
Using Form 1:
Using Form 2:

tswge demo
The following command generates and plots a realization from an MA, AR, or ARMA model:
gen.arma.wge(n,phi,theta,vara,sn)
Note: sn=0 (default) generates a new (randomly obtained) realization each time. Setting sn>0 lets you generate the same realization each time you apply the command with the same sn.
gen.arma.wge(n=100,theta=-.99)
gen.arma.wge(n=100,theta=-.99,sn=5)

tswge demo
The following command plots a realization, the true autocorrelations, and the spectral density for an MA, AR, or ARMA model:
plotts.true.wge(n,phi,theta,lag.max)
plotts.true.wge(theta=c(.99))
plotts.true.wge(theta=c(-.99))

Autoregressive Process AR(p)
Note that this can be written
i.e. Xt is a linear combination of the last p values plus random noise
____ is called the moving average constant

AR(p) Process
Operator Notation:
Characteristic Equation:
φ(z) is called the characteristic polynomial
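For reference, a plain-text rendering of the omitted equations (hedged; consistent with Definition 3.2 and the factor tables below):
Xt - φ1Xt-1 - ⋯ - φpXt-p = at, i.e. φ(B)Xt = at (zero mean form), where φ(B) = 1 - φ1B - ⋯ - φpB^p
Characteristic Equation: φ(z) = 1 - φ1z - ⋯ - φpz^p = 0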

Definition 3.2 Suppose Xt is a causal, stationary process satisfying φ(B)(Xt - μ) = at. Then Xt will be called an AR(p) process.
Note: we will often use the zero mean form for notational simplicity.

Theorem 3.1

General Linear Process Form (from Theorem 3.1)
AR(1) Model: (zero mean form of model)

AR(1) (continued) Stationarity Characteristic Equation

AR(1) (continued)
Inverting the Operator
Note: As operators, ψ(B) is the "inverse" of φ(B)
Notation: ψ(B) = φ^(-1)(B)

In general:
Note: an AR(p) is an "infinite order MA"
The inverse operator is defined analogously to its algebraic counterpart. The general result is proved in Brockwell and Davis (Time Series: Theory and Methods, 1991; the "yellow book").
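As an illustration (a sketch, not from the slides), base R's ARMAtoMA() returns the ψ-weights of this infinite order MA representation; for an AR(1) with φ1 = .9 they are simply .9^j:
ARMAtoMA(ar = .9, lag.max = 8)   # psi_j = .9^j: .9, .81, .729, ...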

AR(1) (continued) Autocovariance (shown earlier) Spectrum

AR(1) Models: φ1 positive

AR(1) Models: φ1 negative

tswge Demo plotts.true.wge(phi=c(.95)) # generate AR(1) realizations gen.arma.wge(n=200,phi=c(.95)) gen.arma.wge(n=200,phi=c(.95),sn=6) x=gen.arma.wge(n=200,phi=c(.95)) # plot realization in x plotts.sample.wge(x)

Which is AR(1)? Which is MA(1)?

AR(1): Xt - φ1Xt-1 = at
Stationarity: stationary iff |φ1| ______ , where r1 is the root of the characteristic equation
Autocorrelation: for k > 0, a damped exponential (oscillating if φ1 < 0)
Realizations look like: φ1 > 0 : φ1 < 0 :
Spectrum: has a peak at f = __ (φ1 > 0) or f = ____ (φ1 < 0)

Theorem 3.2: An AR(p) process is stationary iff all roots rj, j = 1, …, p, of the characteristic equation fall outside the unit circle, i.e. |rj| > 1.
Proof: Case in which the roots are distinct.

Checking an AR(p) for Stationarity
Example: (1 - 1.9B + 1.7B^2 - 0.72B^3)Xt = at
Question: Is this a stationary model?
1. Use Theorem 3.2
2. Find the roots of the characteristic equation (numerically)
Factor Table - tswge (factor.wge)
Factor           Root          Abs. Recip. of Root   System Frequency (f0)
1 - 0.9B         1.11          0.9                   0.0
1 - B + 0.8B^2   .63 ± .93i    0.89                  0.156
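This factor table can be reproduced with factor.wge; note the sign conversion to tswge's phi convention for the model above:
factor.wge(phi=c(1.9,-1.7,.72))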

AR(5) Example: Question: Is this a stationary model?

Example: (1 - 1.4B + .75B^2 - 0.72B^3 + 1.215B^4 - .729B^5)Xt = at
Factor Table
Factor             Abs. Recip. of Root   System Frequency (f0)
1 - 1.5B + 0.9B^2  0.949                 0.105
1 + B + 0.9B^2     0.949                 0.338
1 - 0.9B           0.90                  0.0
Note: Factor tables contain much more information than whether the model is stationary. This will be discussed later.

tswge demo
Are the following models stationary? Use factor.wge(phi) to find the factors:
factor.wge(phi=c(1.6,-1.85,1.44,-.855))
factor.wge(phi=c(1,-.5,1.5,-.9))

Stationary AR(p): φ(B)(Xt - μ) = at
E(Xt) =
Autocovariances (μ = 0):

Autocorrelations of an AR(p) Yule-Walker Equations
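A standard rendering of the omitted equations (hedged; this is the same recursion the later slides apply to γk): dividing the autocovariance recursion by γ0 gives
ρk = φ1ρk-1 + φ2ρk-2 + ⋯ + φpρk-p, k ≥ 1,
with ρ0 = 1 and ρ-k = ρk.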

Uses for Yule-Walker Equations
1. Can be used to solve for the autocorrelations given the coefficients φ1, ..., φp
2. Can be used to solve for the coefficients given (estimated) autocorrelations; the results are referred to as Yule-Walker estimates of the parameters φ1, ..., φp (Chapter 7)

AR(p)

AR(p) Spectrum

Linear Homogeneous Difference Equations with Constant Coefficients
Recall(?) from Differential Equations: the mth order linear homogeneous differential equation with constant coefficients.

To solve this differential equation, we use a characteristic or auxiliary equation: substituting z^k gives the characteristic equation.
Example:
If the roots r1, ..., rm of the characteristic equation are all distinct, then ____ is a solution to the differential equation above.

Our interest is in Difference Equations, where the bj's are constants and the subscripts are integers.

Differential Equation Difference Equation

Our interest is in Difference Equations, where the bj's are constants and the subscripts are integers.
Notes:
1. γk as defined above is said to satisfy a linear homogeneous difference equation with constant coefficients.
2. For the above equation, the characteristic equation (obtained by substituting γk-j with z^j) is

Results:

Results: (continued) The constants in (3) and (4) are uniquely determined by m initial conditions

Recall: For an AR(p) process and for k > 0,
i.e. ρk satisfies a linear homogeneous difference equation with constant coefficients for k > 0. Substituting ρk-j with z^j gives the characteristic equation we previously examined, i.e.

General Solution for ρk for an AR(p)
Note: In both cases, the Ci and Cij can be found if r1, ..., rp are known
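A hedged reconstruction of the distinct-roots case, in the notation of Theorem 3.2: if the roots r1, ..., rp of the characteristic equation are distinct, then
ρk = C1 r1^(-k) + C2 r2^(-k) + ⋯ + Cp rp^(-k), k ≥ 0.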

Question: Given an AR(5) model, how would you find ρ12?
Two ways:
1.

2. Do the following: (this assumes the roots are distinct)
Note: In general, the starting values will satisfy the general solution of the difference equation but not the difference equation itself.
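For a purely numerical route (a sketch using base R rather than tswge; the AR(5) coefficients are borrowed from the factor-component demo later in this deck and are assumed stationary there):
phi <- c(1.64, -.794, .734, -1.389, .802)
rho <- ARMAacf(ar = phi, lag.max = 12)   # solves the Yule-Walker recursion out to lag 12
rho["12"]                                # rho_12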

Recall for AR(1) Models (φ1 > 0):
- "wandering" realizations
- damped exponential autocorrelations
- spectral density has a peak at f = 0

(φ1 < 0):
- oscillatory realizations
- oscillating damped exponential autocorrelations
- spectral density has a peak at f = .5

Nonstationary "AR(1)-type" Models
- Root on the unit circle
- Root inside the unit circle
tswge demo
gen.arma.wge(n=100,phi=c(.9999))
gen.arma.wge(n=100,phi=c(1))
gen.aruma.wge(n=100,d=1) # covered in Chapter 5
gen.arma.wge(n=50,phi=c(1.1))

Functions in tswge will not generate realizations from the explosively nonstationary process. Using R code:
n=50
a=rnorm(n)
x=numeric(n)  # allocate the series
x[1]=0
for(k in 2:n) { x[k]=1.1*x[k-1]+a[k] }
plotts.wge(x)

AR(2): (1 - φ1B - φ2B^2)Xt = at
Characteristic Equation:
Note:

Behavior of ρk for an AR(2)
Question: What does the autocorrelation function look like for an AR(2)?
Recall: For an AR(1), (1 - φ1B)Xt = at, ρk looks like a "damped exponential"
φ1 > 0    φ1 < 0

Behavior of ρk for an AR(2)
Case 1: Distinct Roots

Recall: AR(1) Model (1-.95B)Xt = at Rescaled version of above realization

AR(2) Model with Real Roots: (1 - .95B)(1 - λB)Xt = at
Varies slightly from a damped exponential

(1 - .95B)(1 + .7B)Xt = at
tswge demo
Use mult.wge(fac1,fac2,…,fac6) to find the full model coefficients:
phi2=mult.wge(fac1=.95,fac2=-.7)
phi2
plotts.true.wge(phi=phi2$model.coef)
factor.wge(phi=phi2$model.coef)

Complex roots:
Notes:
(1) The 3rd factor in ρk above is periodic with frequency f0
(2) ρk is a "damped sinusoidal" function
See ATSA Text: Section 3.2.7.1, pp. 107-108

Spectral Density of an AR(2)
Note: In the case of complex roots, the peak occurs at a frequency fs where fs ≠ f0, but fs is close to f0

φ2 = -.7    φ2 = -.9    φ2 = -.98

φ2 = -.7    φ2 = -.9    φ2 = -.98

Behavior of ρk for an AR(2) (continued)
Case 2: Repeated Roots
Notes:
(a) In this case the roots must be real
(b) ρk is given by C3 r^(-k) + C4 k r^(-k)
(c) Visually, ρk looks (somewhat) like a damped exponential

AR(2) Model: (1 - .95B)(1 - λB)Xt = at
Repeated Root Case: λ = .95

AR(2) Model: (1 - .95B)(1 - λB)Xt = at
Repeated Root Case: λ = .95

Canadian Lynx Data (or simply Lynx Data) Plausible AR(2) model: System frequency: f0 = .10

tswge demo
data(llynx)
plotts.sample.wge(llynx)
factor.wge(phi=c(1.38,-.75))
plotts.true.wge(phi=c(1.38,-.75))

Key Concept: The AR(1) and AR(2) serve as "building blocks" of an AR(p) model

AR(p)
Roots of φ(z) = 1 - φ1z - ⋯ - φpz^p = 0 can be:
- real
- complex (conjugate pairs)
Note: φ(z) can be factored so that each factor has real coefficients and is either:
- a linear factor
- an irreducible quadratic factor
Example: 1 - 1.95z + 1.85z^2 - .855z^3 = (1 - .95z)(1 - z + .9z^2)

Linear Factors: 1 - α1j z (i.e. α2j = 0)
- associated with real roots
- contribute AR(1)-type behavior
- system frequency: f0 = 0 if α1j > 0; f0 = .5 if α1j < 0
Quadratic Factors: 1 - α1j z - α2j z^2
- associated with complex roots
- contribute cyclic AR(2)-type behavior
- system frequency:
Realizations, autocorrelations, and spectra will show a mixture of these behaviors
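For reference, the omitted system frequency formula for an irreducible quadratic factor 1 - α1z - α2z^2 (as given in the ATSA text) is
f0 = (1/2π) cos^(-1)( α1 / (2√(-α2)) )
For example, for 1 - z + .9z^2 this gives f0 ≈ .16, matching the factor tables in these slides.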

Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Is this a stationary process? What are its characteristics?
Procedure:
- Find the roots of the characteristic equation (usually numerically)
- Display the information in a Factor Table (we previously used the Factor Table to check for stationarity)

Factor Table
Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Characteristic Equation: 1 - 1.95z + 1.85z^2 - .855z^3 = 0
Factored: (1 - .95z)(1 - z + .9z^2) = 0
Factor          Root           Abs. Recip. of Root   System Frequency (f0)
1 - .95B        1.053          0.95                  0.0
1 - B + .9B^2   .556 ± .896i   0.95                  0.16
⇒ the process is stationary

Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
What are the characteristics of this stationary process?
Autocorrelations: behave like a mixture of
- a damped exponential (associated with the positive real root)
- a damped sinusoid (associated with the complex roots)
Spectrum: will tend to show peaks at frequencies near the system frequencies
Realizations: behave like a mixture of the characteristics induced by the individual factors
Factor Tables display these features for a given process.

Factor Table
Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Factor          Root           Abs. Recip. of Root   f0
1 - .95B        1.053          0.95                  0.0
1 - B + .9B^2   .556 ± .896i   0.95                  0.16

Factor Table
Xt - .2Xt-1 - 1.23Xt-2 + .26Xt-3 + .66Xt-4 = at
Characteristic Equation: 1 - .2z - 1.23z^2 + .26z^3 + .66z^4 = 0
Factored: (1 - 1.8z + .95z^2)(1 + 1.6z + .7z^2) = 0
AR Factor          Root           Abs. Recip. of Root   System Frequency (f0)
1 - 1.8B + .95B^2  .95 ± .39i     0.97                  0.06
1 + 1.6B + .7B^2   -1.15 ± .35i   0.83                  0.45

Xt - .2Xt-1 - 1.23Xt-2 + .26Xt-3 + .66Xt-4 = at
AR Factor          Root           Abs. Recip. of Root   System Frequency (f0)
1 - 1.8B + .95B^2  .95 ± .39i     0.97                  0.06
1 + 1.6B + .7B^2   -1.15 ± .35i   0.83                  0.45

tswge demo
Xt - .2Xt-1 - 1.23Xt-2 + .26Xt-3 + .66Xt-4 = at
factor.wge(phi=c(.2,1.23,-.26,-.66))
plotts.true.wge(phi=c(.2,1.23,-.26,-.66))

To Summarize: The autocorrelation function of an AR(p) process satisfies a linear homogeneous difference equation with constant coefficients.
What do these solutions look like?
- AR(1): damped exponentials or damped oscillating exponentials
- AR(2) with complex roots: damped sinusoids
- AR(p): a mixture of the above two behaviors

General Solution for ρk for an AR(p)

Recall: (1 - 1.95B + 1.85B^2 - .855B^3)Xt = at (call this Model A)
Factored: (1 - .95B)(1 - B + .9B^2)Xt = at
Factor Table
Factor          Root           Abs. Recip. of Root   System Frequency (f0)
1 - .95B        1.053          0.95                  0.0
1 - B + .9B^2   .556 ± .896i   0.95                  0.16
Note: The real root and the pair of complex conjugate roots are the same distance from the unit circle, so both behaviors are easily visible

(1 - .95B)(1 - B + .9B^2)Xt = at

Model A-r: (1 - .95B)(1 - .76B + .5B^2)Xt = at
AR Factor          Root          Abs. Recip. of Root   System Frequency
1 - .95B           1.053         0.95                  0.0
1 - .76B + .5B^2   .76 ± 1.19i   0.7                   0.16
Model A    Model A-r

Note: If the root of the first order factor is much closer to the unit circle:
- the first order factor dominates
- the behavior of the autocorrelations (and the realization) becomes "first-order like", i.e. damped exponentials

Model A-c: (1 - .7B)(1 - B + .9B^2)Xt = at
AR Factor       Root           Abs. Recip. of Root   System Frequency
1 - .7B         1.43           0.7                   0.0
1 - B + .9B^2   .556 ± .896i   0.95                  0.16
Model A-c    Model A

When the roots of the second order factor are much closer to the unit circle, the second order factor dominates:
- the autocorrelations behave more like a damped sinusoid, which is characteristic of a second order model
- realizations appear more "pseudo-sinusoidal"
General Property: Roots closest to the unit circle dominate the behavior of autocorrelation functions, realizations, and spectral densities

Factor Table
Xt - .48Xt-1 + .01Xt-2 - .24Xt-3 - .245Xt-4 = at
Factored Form: (1 - .98B)(1 + .5B^2)(1 + .5B)Xt = at
AR Factor   Root      Abs. Recip. of Root   System Frequency (f0)
1 - .98B    1.02      0.98                  0.0
1 + .5B^2   ± 1.4i    0.70                  0.25
1 + .5B     -2.0      0.50                  0.50
The "near unit root" associated with the factor (1 - .98B) dominates the behavior of the process
- makes more subtle features difficult to identify
- the standard procedure is to difference such data

tswge demo
Xt - .48Xt-1 + .01Xt-2 - .24Xt-3 - .245Xt-4 = at
Use factor.wge, as sketched below.
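For example, with signs converted to tswge's phi convention for the model above:
factor.wge(phi=c(.48,-.01,.24,.245))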

Invertibility
Recall: When the roots of φ(z) = 0 are all outside the unit circle, the AR(p) model φ(B)Xt = at can be written in GLP form, i.e. as an infinite order MA.
Question: When can an MA(q) process Xt = θ(B)at be rewritten as an infinite order AR process, i.e. as

Definition 3.4 If an MA(q) process Xt can be expressed as then Xt is said to be invertible.
Theorem 3.3 An MA(q) process Xt = θ(B)at is invertible if and only if all roots of θ(z) = 0 lie outside the unit circle.

Invertibility Example: MA(1)
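A hedged sketch of the omitted algebra: if Xt = (1 - θ1B)at, then formally
at = (1 - θ1B)^(-1)Xt = Xt + θ1Xt-1 + θ1^2 Xt-2 + ⋯,
and this series converges (so the MA(1) is invertible) if and only if |θ1| < 1.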

tswge demo
Check for Invertibility: use factor.wge
factor.wge(phi=c(1.6,-.9))
factor.wge(phi=c(1.6,.9))
Note: The frequency f0 shown in the factor table for an MA factor is not a system frequency; in fact it is a frequency at which there is a "dip" in the spectrum instead of a peak.

Demo: dip in the spectral density
Consider the MA(2) model:
First recall the spectral density of the AR(2):
plotts.true.wge(phi=c(1.1,-.9))
Now, what does the spectral density of the corresponding MA(2) look like?
plotts.true.wge(theta=c(1.1,-.9))

Why worry about invertibility? 1. Invertibility assures that the present is related to the past in a reasonable manner. 2. Removes model multiplicity

tswge demo
1. Xt = at - .9at-1
2. Xt = at - 1.111at-1
Note: 1/.9 = 1.1111…
Use plotts.true.wge to compare the characteristics of these two models (see the cross-check below).
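A base R cross-check of this model multiplicity (a sketch; ARMAacf() uses the opposite MA sign convention, so the text's θ1 enters negated): the two models have identical autocorrelation functions.
ARMAacf(ma = -.9, lag.max = 3)
ARMAacf(ma = -1.111, lag.max = 3)   # same rho_k as the line above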

Summary
MA(q) Process:
- always stationary (if the |θi|'s < ∞)
- invertible if and only if the roots of θ(z) = 0 lie outside the unit circle
AR(p) Process:
- stationary if and only if the roots of φ(z) = 0 lie outside the unit circle
- always invertible (if the |φi|'s < ∞)
In general, we will restrict our attention to stationary and invertible models.

Autoregressive-Moving Average Process of Orders p and q: ARMA(p, q)
where at is white noise
φ1, …, φp are real constants, φp ≠ 0
θ1, …, θq are real constants, θq ≠ 0
φ(z) and θ(z) have no common factors

Notes:
1. I will typically use the "zero mean" form of the model
2. Operator Notation:

Example
(1 - B + .9B^2)Xt = (1 - .95B)at
(1 - B + .9B^2)Xt = at

Example
k    ρk
1    .5
2    .25
3    .125

ARMA(p, q) Model: φ(B)Xt = θ(B)at
- φ(B) and θ(B) introduce characteristics of the same type as in the AR(p) and MA(q) cases
- "near cancellation" may "hide" some characteristics
Theorem 3.4: Let Xt be a causal process specified by φ(B)Xt = θ(B)at. Then Xt is an ARMA(p, q) process iff
(i) the roots of φ(z) = 0 are all outside the unit circle
(ii) the roots of θ(z) = 0 are all outside the unit circle

Note: If Xt (ARMA) is stationary and invertible, then

Autocovariances and Autocorrelations of an ARMA(p, q)

Calculating the Autocovariance γK for an ARMA(p, q) Model
1. Solve a (v+1) × (v+1) system of equations for γ0, ..., γv, where v = max(p, q)
Two ways to proceed beyond step 1:
(a) calculate γK by recursively computing γk = φ1γk-1 + ⋯ + φpγk-p until we reach K
(b) since γk satisfies a linear homogeneous difference equation for k > q, use the general solution to solve for γK
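A minimal R sketch of approach (a), assuming γ0, ..., γv have already been found from step 1 (the numerical values below are placeholders, not from the slides):
phi <- c(1.6, -.9)        # hypothetical AR coefficients
gam <- c(10, 9.1, 7.2)    # placeholder gamma_0, gamma_1, gamma_2 (v = 2)
K <- 10
for (k in length(gam):K) {                            # gam[k+1] stores gamma_k
  gam[k + 1] <- sum(phi * gam[k:(k - length(phi) + 1)])
}
gam[K + 1]                # gamma_K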

Example

ARMA(p, q) Spectral Density
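For reference (hedged; this is the standard form, up to the variance normalization used for the spectral density in these slides):
PX(f) = σa^2 |θ(e^(-2πif))|^2 / |φ(e^(-2πif))|^2, 0 ≤ f ≤ .5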

Example
(1 - .95B)Xt = (1 - .9B)at
(1 - .95B)Xt = at
Xt = (1 - .9B)at

What can you tell about the underlying ARMA model from these plots?

tswge demo
What can you tell about the underlying ARMA model from these plots?
# AR factors
factor.wge(phi=c(.3,.9,.1,-.8075))
# MA factors
factor.wge(phi=c(-.9,-.8,-.72))

Calculating ψ-weights
(1 - B + .9B^2)Xt = (1 + .8B)at
Question: What are the ψ-weights?
3 methods are discussed in the book (Chapter 3, pages 144-146):
(a) Equating coefficients
(b) Using division
(c) General expression
Result: For j > max(p - 1, q): ψj = φ1ψj-1 + ⋯ + φpψj-p

Calculating ψ-weights (continued)
(1 - B + .9B^2)Xt = (1 + .8B)at
Result: For j > max(p - 1, q): ψj = φ1ψj-1 + ⋯ + φpψj-p

tswge demo
To calculate ψ-weights use psi.weights.wge(phi,theta,lag.max)
(1 - B + .9B^2)Xt = (1 + .8B)at
psi.weights.wge(phi=c(1,-.9),theta=-.8,lag.max=5)
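A base R cross-check (hedged; stats::ARMAtoMA uses the sign convention Xt = φ1Xt-1 + ⋯ + at + θ1at-1, so the text's (1 + .8B) MA factor enters as ma = .8):
ARMAtoMA(ar=c(1,-.9), ma=.8, lag.max=5)   # psi_1 = 1.8, etc.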

Decomposing an AR(p) Realization into its Components
It is useful to be able to decompose the realization xt into additive components, i.e. to write
Result: If xt is a realization from an AR(p) for which there are no repeated roots of the characteristic equation, then
- corresponds to the jth factor in the factor table
- is a realization from an AR(1) for first order factors
- is a realization from an ARMA(2,1) for second order factors

Example: Consider the case in which φ(B) = φ1(B)φ2(B), where
• φ1(B) = 1 - α1B is a first order factor
• φ2(B) = 1 - δ1B - δ2B^2 is an irreducible second order factor.

Factor             Abs. Recip. of Root   System Frequency (f0)
1 - .95B           .95                   0      (Component 1: 0 frequency)
1 - 1.2B + 0.9B^2  .95                   .14    (Component 2: .14 frequency)
1 + B + 0.9B^2     .95                   .34    (Component 3: .34 frequency)
Note: Realization = C1 + C2 + C3

[Figure: three realizations, each decomposed into its AR(1) component and its ARMA(2,1) component]

Decomposition of an AR Process into its Additive Components
The important points to be understood are that:
- each component corresponds to a row in the factor table
- the components add to give Xt, i.e.
- the components provide a visual representation of the information in the factor table.

tswge demo: Additive Factors
Use factor.comp.wge(x,p,ncomp)
1. x=gen.arma.wge(n=500,phi=c(1.64,-.794,.734,-1.389,.802))
y=factor.comp.wge(x,p=5,ncomp=3)
Note: You may sometimes get a warning from the base R function arima
2. Sunspot Data
data(ss08)
y=factor.comp.wge(ss08,p=9,ncomp=4)

Seasonal Models - most important seasonal models are nonstationary - we will discuss these in Chapter 5

Generating Realizations from an ARMA (p, q) Process MA(q) AR(p)

General Procedure for Generating a Realization of Length n from an ARMA(p, q) Model
1. Starting values:
2. Generate n + K observations, where K is "large"; discard the first K and keep the last n as the realization

Note: You may want to vary K based on how slowly the autocorrelations damp.
For distinct roots and for k > q,
We want |ρK+1| to be small. This depends on how close the roots ri are to the unit circle.
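Outside tswge, base R's arima.sim() implements this same burn-in idea through its n.start argument (a sketch, not the deck's function):
# n = 200 values from (1 - .95B)X_t = a_t, discarding K = 100 startup values
x <- arima.sim(model = list(ar = .95), n = 200, n.start = 100)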

Transformations
1. Memoryless: for variance stabilizing, normalizing, etc.
Examples:

Airline Data Log Airline Data
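These two plots can be approximated with base R's built-in AirPassengers series (an assumption: the deck's airline data set may differ slightly from it):
plotts.wge(as.numeric(AirPassengers))        # airline data
plotts.wge(log(as.numeric(AirPassengers)))   # log airline data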

Autoregressive transformations Basic Result:

Given the ARMA Process:

tswge demo
Generate data from (1+.95B)(1-1.6B+.9B^2)Xt=at
- transform by 1+.95B
- transform by 1-1.6B+.9B^2
- transform by both
To perform autoregressive transformations use artrans.wge(x,phi.tr,lag.max,plottr='TRUE')
x=gen.arma.wge(n=200,phi=c(.65,.62,-.855),sn=6)
plotts.wge(x)
# y1 is (1+.95B)X(t)
y1=artrans.wge(x,phi.tr=-.95)
# y2 is (1-1.6B+.9B^2)X(t)
y2=artrans.wge(x,phi.tr=c(1.6,-.9))
# y12 is (1+.95B)y2(t)=(1+.95B)(1-1.6B+.9B^2)X(t)
y12=artrans.wge(y2,phi.tr=-.95)

Note: Since roots close to the unit circle dominate the behavior of the process - it is sometimes necessary to remove these model components using a transformation (such as differencing for a real root close to +1) in order to more clearly see the more subtle features of the model

Factor Table
Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at
Characteristic Equation: 1 - 1.59z + .38z^2 + .70z^3 - .49z^4 = 0
Factored: (1 - .99z)(1 - 1.2z + .7z^2)(1 + .7z) = 0
AR Factor        Root         Abs. Recip. of Root   System Frequency (f0)
1 - .99B         1.01         0.99                  0.0
1 - 1.2B + .7B^2 .93 ± .76i   0.84                  0.11
1 + .7B          -1.42        0.70                  0.50

Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at

Factor Table
Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at
Factored Form: (1 - .99B)(1 - 1.2B + .7B^2)(1 + .7B)Xt = at
AR Factor        Root         Abs. Recip. of Root   System Frequency (f0)
1 - .99B         1.01         0.99                  0.0
1 - 1.2B + .7B^2 .93 ± .76i   0.84                  0.11
1 + .7B          -1.42        0.70                  0.50
The "near unit root" associated with the factor (1 - .99B) dominates the behavior of the process
- makes more subtle features difficult to identify
- the standard procedure is to difference such data

Procedure:
• Difference the data: i.e. calculate Yt = Xt - Xt-1 = (1 - B)Xt
• Model the differenced series and estimate the coefficients
Model for Differenced Data: Yt - .56Yt-1 - .15Yt-2 + .40Yt-3 = at
Again, not much information is available from the form of the resulting model about the effect of the differencing, so we factor.

Original Model: Xt - .38Xt-1 - .09Xt-2 - .19Xt-3 - .29Xt-4 = at
Factor Table
AR Factor        Root         Abs. Recip. of Root   System Frequency (f0)
1 - .99B         1.01         0.99                  0.0
1 - 1.2B + .7B^2 .93 ± .76i   0.83                  0.11
1 + .7B          -1.42        0.70                  0.50
Model for Differenced Data: Yt - .56Yt-1 - .15Yt-2 + .40Yt-3 = at
Factor Table
AR Factor          Root         Abs. Recip. of Root   System Frequency (f0)
1 - 1.20B + .62B^2 .97 ± .82i   0.79                  0.11
1 + .64B           -1.56        0.64                  0.50
Note: the secondary factors are very similar to those in the original model

Original Data Differenced Data

Note: Differencing can be used to remove the effects of a factor close to 1 - B. This topic is discussed further in Chapters 5, 8, and 9.
tswge demo
To apply a difference filter to data set x use artrans.wge(x,phi.tr=1)

Recall: Linear Filter
We have looked at linear filters: (Input) → (Output)
Autoregressive Transformations are Linear Filters
(for a difference, the transforming operator is 1 - B)
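A hedged note on the omitted relationship: if Yt = φtr(B)Xt for a transforming operator φtr(B), standard linear filter theory gives
PY(f) = |φtr(e^(-2πif))|^2 PX(f), 0 ≤ f ≤ .5,
so an autoregressive transformation suppresses the frequencies at which |φtr(e^(-2πif))| is small.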

Types of Filters
Low-pass filters: "pass" low frequency behavior and "filter out" higher frequency behavior
High-pass filters: "pass" high frequency behavior and "filter out" lower frequency behavior
Band-pass filters: "pass" frequencies in a certain "frequency band"
Band-stop (notch) filters: "pass" all frequencies except those in a certain "frequency band"

What type of filter is a difference? Original Data Differenced Data

What type of filter is Yt = Xt - 1.2Xt-1 + .9Xt-2 ?