1
Chapter 3 ARMA Time Series Models
NOTE: Some slides have blank sections. They are based on a teaching style in which the corresponding blanks (derivations, theorem proofs, examples, …) are worked out in class on the board or overhead projector.
2
Moving Average Process of order q MA(q)
Operator Notation:
3
Note: An MA(q) is a GLP with a finite number of terms
4
Stationarity Region for an MA(q):
- the range of parameter values for which the resulting model is stationary
5
Variance of an MA(q); γk and ρk for an MA(q). MA(1): MA(2):
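The MA(1) and MA(2) results worked on this slide follow directly from the GLP representation. As an illustration (plain Python/NumPy rather than tswge; the function name ma_autocorr is my own), the autocorrelations of an MA(q) under the book's convention Xt = at - θ1 at-1 - … - θq at-q can be computed from the finite GLP weights:

```python
import numpy as np

def ma_autocorr(theta, lag_max):
    """Autocorrelations of X_t = a_t - theta_1 a_{t-1} - ... - theta_q a_{t-q}."""
    psi = np.concatenate(([1.0], -np.asarray(theta, float)))  # finite GLP weights
    gamma = [float(psi[k:] @ psi[:len(psi) - k]) if k < len(psi) else 0.0
             for k in range(lag_max + 1)]
    return np.array(gamma) / gamma[0]

rho = ma_autocorr([0.99], lag_max=3)
# For an MA(1), rho_1 = -theta_1/(1 + theta_1^2), and rho_k = 0 for k > 1
```

For θ1 = .99 this gives ρ1 ≈ -.5 and ρ2 = ρ3 = 0, the cutoff behavior that distinguishes MA models.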
7
Spectrum of an MA(q) Two Forms: 1. From GLP:
2. Using the spectrum formula directly, in the form given in Problem 1.19(b) on page 58
8
Spectral Density of an MA(q)
9
Spectral Density of an MA(1)
Using Form 1 Using Form 2
12
tswge demo: The following command generates and plots a realization from an MA, AR, or ARMA model:
gen.arma.wge(n,phi,theta,vara,sn)
Note: sn=0 (default) generates a new (randomly obtained) realization each time. Setting sn>0 generates the same realization each time you apply the command with the same sn.
gen.arma.wge(n=100,theta=-.99)
gen.arma.wge(n=100,theta=-.99,sn=5)
13
tswge demo: The following command plots a realization, the true autocorrelations, and the spectral density for an MA, AR, or ARMA model:
plotts.true.wge(n,phi,theta,lag.max)
plotts.true.wge(theta=c(.99))
plotts.true.wge(theta=c(-.99))
14
Autoregressive Process AR(p)
Note that this can be written: i.e., Xt is a linear combination of the last p values plus a random noise term. ____ is called the moving average constant.
15
AR(p) Process Operator Notation
Characteristic Equation: φ(z) = 0, where φ(z) = 1 - φ1z - ··· - φpz^p is called the characteristic polynomial
16
Definition 3.2: Suppose Xt is a causal, stationary process satisfying φ(B)(Xt - μ) = at. Then Xt will be called an AR(p) process. (We will often use the zero-mean form for notational simplicity.)
17
Theorem 3.1
18
General Linear Process Form (from Theorem 3.1)
AR(1) Model: (zero mean form of model) General Linear Process Form (from Theorem 3.1)
19
AR(1) (continued) Stationarity Characteristic Equation
20
AR(1) (continued): Inverting the Operator
Note: As operators, ψ(B) is the "inverse" of φ(B). Notation: ψ(B) = φ^-1(B)
21
In general: an AR(p) is an "infinite-order MA".
The inverse operator is defined analogously to its algebraic counterpart. The general result is proved in Brockwell and Davis (Time Series: Theory and Methods, the "yellow book").
22
AR(1) (continued) Autocovariance (shown earlier) Spectrum
23
AR(1) Models: φ1 positive
24
AR(1) Models: φ1 negative
25
tswge demo
plotts.true.wge(phi=c(.95))
# generate AR(1) realizations
gen.arma.wge(n=200,phi=c(.95))
gen.arma.wge(n=200,phi=c(.95),sn=6)
x=gen.arma.wge(n=200,phi=c(.95))
# plot realization in x
plotts.sample.wge(x)
26
Which is AR(1)? Which is MA(1)?
27
Realizations look like:
AR(1): Xt - φ1Xt-1 = at
Stationarity: stationary iff |φ1| ______ , where r1 is the root of the characteristic equation
Autocorrelation: for k > 0, a damped exponential (oscillating if φ1 < 0)
Realizations look like: φ1 > 0 : φ1 < 0 :
Spectrum: has a peak at f = __ (φ1 > 0) or f = ____ (φ1 < 0)
28
Theorem 3.2: An AR(p) process is stationary iff all roots rj, j = 1, …, p, of the characteristic equation fall outside the unit circle, i.e., |rj| > 1. Proof: case in which the roots are distinct.
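Theorem 3.2 can be checked numerically by finding the roots of the characteristic polynomial. A minimal sketch in Python/NumPy (an illustration, not the tswge implementation; ar_is_stationary is my own name):

```python
import numpy as np

def ar_is_stationary(phi):
    """Theorem 3.2: stationary iff all roots of 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    char_poly = np.concatenate(([1.0], -np.asarray(phi, float)))  # coefficients of 1, z, ..., z^p
    roots = np.roots(char_poly[::-1])                             # np.roots wants highest degree first
    return bool(np.all(np.abs(roots) > 1.0))

# ar_is_stationary([0.95]) -> True  (root 1/.95 ≈ 1.053)
# ar_is_stationary([1.1])  -> False (root 1/1.1 inside the unit circle)
```

The same two-line root check underlies the factor tables used throughout this chapter.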
30
Checking an AR(p) for Stationarity
Example: (1 - 1.9B + 1.7B^2 - .72B^3)Xt = at
Question: Is this a stationary model?
1. Use Theorem 3.2
2. Find the roots of the characteristic equation (numerically)
Factor Table - tswge (factor.wge)
Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .9B | 1.11 | 0.9 | 0.0
1 - B + .8B^2 | .63 ± .93i | 0.89 | 0.156
31
AR(5) Example: Question: Is this a stationary model?
32
Example: (1 - 1.4B + .75B^2 - .72B^3 + 1.215B^4 - .729B^5)Xt = at
Factor Table
Factor | Absolute Reciprocal of Root | System Frequency (f0)
1 - 1.5B + .9B^2 | 0.949 | 0.105
1 + B + .9B^2 | 0.949 | 0.338
1 - .9B | 0.90 | 0.0
Note: Factor tables contain much more information than whether the model is stationary. This will be discussed later.
33
Are the following models stationary?
tswge demo Use factor.wge(phi) to find factors factor.wge(phi=c(1.6,-1.85,1.44,-.855)) factor.wge(phi=c(1,-.5,1.5,-.9))
34
Stationary AR(p): φ(B)(Xt - μ) = at , E(Xt) =
Autocovariances (μ = 0)
35
Autocorrelations of an AR(p)
Yule-Walker Equations
36
Uses for Yule-Walker Equations
1. Can be used to solve for the autocorrelations given the coefficients φ1, …, φp.
2. Can be used to solve for the coefficients given (estimated) autocorrelations; the resulting estimates are referred to as Yule-Walker estimates of the parameters φ1, …, φp (Chapter 7).
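The first use is a small linear system. The code below (Python/NumPy sketch; yule_walker_rho is my own name) builds the p Yule-Walker equations ρk = Σj φj ρ|k-j| and solves them for ρ1, …, ρp:

```python
import numpy as np

def yule_walker_rho(phi):
    """Solve the Yule-Walker equations rho_k = sum_j phi_j rho_{|k-j|} for rho_1..rho_p."""
    phi = np.asarray(phi, float)
    p = len(phi)
    A = np.zeros((p, p))
    b = np.zeros(p)
    for k in range(1, p + 1):
        b[k - 1] = phi[k - 1]                      # the rho_0 = 1 term (j = k)
        for j in range(1, p + 1):
            if j != k:                             # phi_j multiplies rho_{|k-j|}
                A[k - 1, abs(k - j) - 1] += phi[j - 1]
    return np.linalg.solve(np.eye(p) - A, b)

rho = yule_walker_rho([1.38, -0.75])   # the AR(2) used later for the lynx data
# For an AR(2): rho_1 = phi_1/(1 - phi_2), rho_2 = phi_1*rho_1 + phi_2
```

Given the first p autocorrelations, all higher lags then follow from the difference equation ρk = φ1ρk-1 + ··· + φpρk-p.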
37
AR(p)
38
AR(p) Spectrum
39
Linear Homogeneous Difference Equations with Constant Coefficients
Recall(?) from Differential Equations m th order linear homogeneous differential equation with constant coefficients.
40
To solve this differential equation, we use a characteristic or auxiliary equation
with zk gives the characteristic equation Example: If the roots, r 1, ... , rm of the characteristic equation are all distinct, then is a solution to the differential equation above.
41
Our interest is in Difference Equations
where the b j’s are constants and the subscripts are integers.
42
Differential Equation
Difference Equation
43
Our interest is in Difference Equations
where the bj's are constants and the subscripts are integers. Notes: 1. γk as defined above is said to satisfy a linear homogeneous difference equation with constant coefficients. 2. For the above equation, the characteristic equation (obtained by substituting γk-j with z^j) is
44
Results:
45
Results: (continued) The constants in (3) and (4) are uniquely determined by m initial conditions
46
Recall: For an AR(p) process and for k > 0
i.e., ρk satisfies a linear homogeneous difference equation with constant coefficients for k > 0. Substituting ρk-j with z^j gives the characteristic equation we previously examined, i.e.
47
General Solution for ρk for an AR(p)
Note: In both cases, the constants Ci and Cij can be found if r1, …, rp are known
48
Question: Given an AR(5) model, how would you find ρ12?
Two ways:
49
2. Do the following: (This assumes the roots are distinct) Note: In general, the starting values will satisfy the general solution of the difference equation but not the difference equation itself.
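The recursive route can be sketched for an AR(2), where ρ1 = φ1/(1 - φ2) is available in closed form and every higher lag follows the difference equation (plain Python; the coefficients 1.38 and -.75 are the AR(2) used later for the lynx data):

```python
# AR(2): rho_0 = 1, rho_1 = phi1/(1 - phi2);
# then rho_k = phi1*rho_{k-1} + phi2*rho_{k-2} for k >= 2
phi1, phi2 = 1.38, -0.75
rho = [1.0, phi1 / (1 - phi2)]
for k in range(2, 13):
    rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])
rho12 = rho[12]   # the lag-12 autocorrelation, obtained purely by recursion
```

Note that only the starting values ρ0 and ρ1 are needed; no general solution of the difference equation is ever written down.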
50
Recall for AR(1) Models: "wandering" realizations
damped exponential autocorrelations spectral density has a peak at f = 0
51
oscillatory realizations
oscillating damped exponential autocorrelations spectral density has a peak at f = .5
52
Nonstationary “AR(1)-type” Models
Root on the unit circle / root inside the unit circle
tswge demo:
gen.arma.wge(n=100,phi=c(.9999))
gen.arma.wge(n=100,phi=c(1))
gen.aruma.wge(n=100,d=1) # covered in Chapter 5
gen.arma.wge(n=50,phi=c(1.1))
53
Functions in tswge will not generate realizations from the explosively nonstationary process
Using R code:
n=50
a=rnorm(n)
x=numeric(n)
x[1]=0
for(k in 2:n) { x[k]=1.1*x[k-1]+a[k] }
plotts.wge(x)
54
AR(2): (1 - φ1B - φ2B^2)Xt = at
Characteristic Equation: Note:
55
Behavior of ρk for an AR(2)
Question: What does the autocorrelation function look like for an AR(2)? Recall: For an AR(1), ρk looks like a "damped exponential": (1 - φ1B)Xt = at, with φ1 > 0 or φ1 < 0
56
Behavior of ρk for an AR(2)
Case 1: Distinct Roots
57
Recall: AR(1) Model (1-.95B)Xt = at
Rescaled version of above realization
58
AR(2) Model with Real Roots: (1 - .95B)(1 - λB)Xt = at
Varies slightly from a damped exponential
59
(1-.95B)(1+.7B)Xt = at tswge demo
Use mult.wge(fac1,fac2,…,fac6) to find the full model.
phi2=mult.wge(fac1=.95,fac2=-.7)
phi2 # full model coefficients
plotts.true.wge(phi=phi2$model.coef)
factor.wge(phi=phi2$model.coef)
60
Complex roots: Notes:
(1) The third factor in ρk above is periodic with frequency f0
(2) ρk is a "damped sinusoidal" function
See ATSA text: pp. 107-108, Section 3.2.7.1
61
Spectral Density of an AR(2)
Note: In the case of complex roots, the spectral density peak occurs at a frequency fs where fs ≠ f0, but fs is close to f0
62
φ2 = -.7   φ2 = -.9   φ2 = -.98
63
φ2 = -.7   φ2 = -.9   φ2 = -.98
64
Behavior of ρk for an AR(2) -- continued
Case 2: Repeated roots. Notes:
(a) In this case the roots must be real
(b) ρk is given by C3 r^-k + C4 k r^-k
(c) Visually, ρk looks (somewhat) like a damped exponential
65
AR(2) Model: (1 - .95B)(1 - λB)Xt = at
Repeated Root Case: λ = .95
66
AR(2) Model: (1 - .95B)(1 - λB)Xt = at
Repeated Root Case: λ = .95
67
Canadian Lynx Data (or simply Lynx Data)
Plausible AR(2) model: System frequency: f0 = .10
68
tswge demo
data(llynx)
plotts.sample.wge(llynx)
factor.wge(phi=c(1.38,-.75)) plotts.true.wge(phi=c(1.38,-.75))
69
Key Concept: The AR(1) and AR(2) serve as "building blocks" of an AR(p) model
70
AR(p): Roots of φ(z) = 1 - φ1z - ··· - φpz^p = 0 can be
- real
- complex (conjugate pairs)
Note: φ(z) can be factored so that each factor has real coefficients and is either
- a linear factor
- an irreducible quadratic factor
Example: 1 - 1.95z + 1.85z^2 - .855z^3 = (1 - .95z)(1 - z + .9z^2)
71
Linear Factors: 1 - a1j z (i.e., a2j = 0)
- associated with real roots
- contribute AR(1)-type behavior
- system frequency: f0 = 0 if a1j > 0; f0 = .5 if a1j < 0
Quadratic Factors: 1 - a1j z - a2j z^2
- associated with complex roots
- contribute cyclic AR(2)-type behavior
- system frequency: f0
Realizations, autocorrelations, and spectra will show a mixture of these behaviors
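For an irreducible quadratic factor 1 - a1 B - a2 B^2, the system frequency is f0 = (1/2π) cos⁻¹( a1 / (2√(-a2)) ), which is the value tabulated in the factor tables. A one-line Python check (system_frequency is my own name, not a tswge function):

```python
import numpy as np

def system_frequency(a1, a2):
    """f0 = (1/(2*pi)) * arccos(a1 / (2*sqrt(-a2))) for the factor 1 - a1*B - a2*B^2."""
    return float(np.arccos(a1 / (2.0 * np.sqrt(-a2))) / (2.0 * np.pi))

f0 = system_frequency(1.0, -0.9)   # the factor 1 - B + .9B^2 used repeatedly in these slides
# f0 ≈ 0.16, matching the factor tables
```

Checking a few of the tabulated factors this way is a good exercise in reading factor tables.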
72
Is this a stationary process? What are its characteristics?
Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Is this a stationary process? What are its characteristics?
Procedure:
- Find the roots of the characteristic equation (usually numerically)
- Display the information in a Factor Table (we previously used the Factor Table to check for stationarity)
73
Factor Table Characteristic Equation:
Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Characteristic Equation: 1 - 1.95z + 1.85z^2 - .855z^3 = 0
Factored: (1 - .95z)(1 - z + .9z^2) = 0
Factor Table
Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .95B | 1.053 | 0.95 | 0.0
1 - B + .9B^2 | .556 ± .896i | 0.95 | 0.16
=> the process is stationary
74
What are the characteristics of this stationary process?
Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
What are the characteristics of this stationary process?
Autocorrelations: behave like a mixture of
- a damped exponential (associated with the positive real root)
- a damped sinusoid (associated with the complex roots)
Spectrum: will tend to show peaks at frequencies near the system frequencies
Realizations: behave like a mixture of the characteristics induced by the individual factors
Factor tables display these features for a given process.
75
Factor Table: Xt - 1.95Xt-1 + 1.85Xt-2 - .855Xt-3 = at
Factor | Root | Abs. Recip. of Root | f0
1 - .95B | 1.053 | 0.95 | 0.0
1 - B + .9B^2 | .556 ± .896i | 0.95 | 0.16
76
Factor Table Characteristic Equation: Factored:
Xt - .2Xt-1 - 1.23Xt-2 + .26Xt-3 + .66Xt-4 = at
Characteristic Equation: 1 - .2z - 1.23z^2 + .26z^3 + .66z^4 = 0
Factored: (1 - 1.8z + .95z^2)(1 + 1.6z + .7z^2) = 0
Factor Table
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - 1.8B + .95B^2 | .95 ± .39i | 0.97 | 0.06
1 + 1.6B + .7B^2 | -1.15 ± .35i | 0.83 | 0.45
77
X t – .2 Xt – 1 – 1.23Xt – 2 + .26 Xt –3 + .66Xt – 4 = at
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - 1.8B + .95B^2 | .95 ± .39i | 0.97 | 0.06
1 + 1.6B + .7B^2 | -1.15 ± .35i | 0.83 | 0.45
78
tswge demo X t – .2 Xt – 1 – 1.23Xt – 2 + .26 Xt –3 + .66Xt – 4 = at
factor.wge(phi=c(.2,1.23,-.26,-.66)) plotts.true.wge(phi=c(.2,1.23,-.26,-.66))
79
To Summarize: The autocorrelation function of an AR(p) process satisfies a linear homogeneous difference equation with constant coefficients What do these solutions look like? - AR(1) : damped exponentials or damped oscillating exponentials - AR(2) with complex roots: damped sinusoidals - AR(p) : mixture of the above two behaviors
80
General Solution for ρk for an AR(p)
81
Recall: Factor Table
(1 - 1.95B + 1.85B^2 - .855B^3)Xt = at - call this Model A
Factored: (1 - .95B)(1 - B + .9B^2)Xt = at
Factor Table
Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .95B | 1.053 | 0.95 | 0.0
1 - B + .9B^2 | .556 ± .896i | 0.95 | 0.16
Note: The real root and the pair of complex conjugate roots are the same distance from the unit circle - both behaviors are easily visible
82
(1 - .95B)(1 - B + .9B^2)Xt = at
83
Model A-r: (1 - .95B)(1 - .76B + .5B^2)Xt = at
Factor Table
AR Factor | Root | Absolute Reciprocal of Root | System Frequency
1 - .95B | 1.053 | 0.95 | 0.0
1 - .76B + .5B^2 | .76 ± 1.19i | 0.7 | 0.16
(plots compare Model A with Model A-r)
84
Note: If the root of the first order factor is much closer to the unit circle, -- the first order factor dominates -- the behavior of the autocorrelations (and the realization) becomes “first-order like” – i.e. damped exponentials
85
Model A-c: (1 - .7B)(1 - B + .9B^2)Xt = at
Factor Table
AR Factor | Root | Absolute Reciprocal of Root | System Frequency
1 - .7B | 1.43 | 0.7 | 0.0
1 - B + .9B^2 | .556 ± .896i | 0.95 | 0.16
(plots compare Model A-c with Model A)
86
When the roots of the second-order factor are much closer to the unit circle, the second-order factor dominates:
- the autocorrelations behave more like a damped sinusoid, which is characteristic of a second-order model
- realizations appear more "pseudo-sinusoidal"
General Property: Roots closest to the unit circle dominate the behavior of autocorrelation functions, realizations, and spectral densities
87
Xt - .48Xt-1 + .01Xt-2 - .24Xt-3 - .245Xt-4 = at
Factored Form: (1 - .98B)(1 + .5B^2)(1 + .5B)Xt = at
Factor Table
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .98B | 1.02 | 0.98 | 0.0
1 + .5B^2 | ±1.4i | 0.70 | 0.25
1 + .5B | -2.0 | 0.50 | 0.50
The "near unit root" associated with the factor (1 - .98B) dominates the behavior of the process
- makes more subtle features difficult to identify
- standard procedure is to difference such data
88
tswge demo Use factor.wge
Xt - .48Xt-1 + .01Xt-2 - .24Xt-3 - .245Xt-4 = at
Use factor.wge
89
Invertibility Recall:
When the roots of φ(z) = 0 are all outside the unit circle, the AR(p) model φ(B)Xt = at can be written in GLP form, i.e., as an infinite-order MA. Question: When can an MA(q) process Xt = θ(B)at be rewritten as an infinite-order AR process, i.e., as
90
Definition 3.4: If an MA(q) process Xt can be expressed in infinite-order AR form, then Xt is said to be invertible. Theorem 3.3: An MA(q) process Xt = θ(B)at is invertible if and only if all roots of θ(z) = 0 are outside the unit circle.
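Theorem 3.3 can be checked the same way as stationarity, but on the roots of θ(z). A Python/NumPy sketch (ma_is_invertible is my own name, not a tswge function):

```python
import numpy as np

def ma_is_invertible(theta):
    """Theorem 3.3: MA(q) is invertible iff all roots of theta(z) = 0 lie outside the unit circle."""
    ma_poly = np.concatenate(([1.0], -np.asarray(theta, float)))  # coefficients of 1, z, ..., z^q
    roots = np.roots(ma_poly[::-1])                               # highest degree first for np.roots
    return bool(np.all(np.abs(roots) > 1.0))

# ma_is_invertible([0.5]) -> True  (root 1/.5 = 2)
# ma_is_invertible([2.0]) -> False (root 1/2 inside the unit circle)
```

Note the duality with Theorem 3.2: the AR condition governs stationarity, the MA condition governs invertibility.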
91
Invertibility Example: MA(1)
93
tswge demo: Check for invertibility using factor.wge
factor.wge(phi=c(1.6,-.9))
factor.wge(phi=c(1.6,.9))
Note: When factoring an MA polynomial, the frequency f0 shown in the factor table is not a system frequency - in fact, it is a frequency at which there is a "dip" in the spectrum instead of a peak
94
Demo – dip in spectral density
Consider the MA(2) model. First recall the spectral density of the corresponding AR(2):
plotts.true.wge(phi=c(1.1,-.9))
Now, what does the spectral density of the MA(2) look like?
plotts.true.wge(theta=c(1.1,-.9))
95
Why worry about invertibility?
1. Invertibility assures that the present is related to the past in a reasonable manner. 2. Removes model multiplicity
96
tswge demo
1. Xt = at - .9at-1
2. Xt = at - (1/.9)at-1
Note: 1/.9 = 1.111…
Use plotts.true.wge to compare the characteristics of these two models.
97
Summary
MA(q) Process:
- always stationary (if the |θi| are finite)
- invertible if and only if the roots of θ(z) = 0 lie outside the unit circle
AR(p) Process:
- stationary if and only if the roots of φ(z) = 0 lie outside the unit circle
- always invertible (if the |φi| are finite)
In general, we will restrict our attention to stationary and invertible models.
98
Autoregressive – Moving Average Process of Orders p and q
ARMA(p, q), where
- at is white noise
- φ1, …, φp are real constants, with φp ≠ 0
- θ1, …, θq are real constants, with θq ≠ 0
- φ(z) and θ(z) have no common factors
99
Notes: 1. I will typically use the “zero mean” form of the model
2. Operator Notation:
100
Example: (1 - B + .9B^2)Xt = (1 - .95B)at, compared with (1 - B + .9B^2)Xt = at
101
Example:
k | ρk
1 | .5
2 | .25
3 | .125
102
ARMA(p, q) Model j (B )X t = q (B )a t
φ(B) and θ(B) introduce characteristics of the same type as in the AR(p) and MA(q) cases; "near cancellation" may "hide" some characteristics.
Theorem 3.4: Let Xt be a causal process specified by φ(B)Xt = θ(B)at. Then Xt is an ARMA(p,q) process iff
(i) the roots of φ(z) = 0 are all outside the unit circle
(ii) the roots of θ(z) = 0 are all outside the unit circle
103
Note: If X t (ARMA) is stationary and invertible, then
104
Autocovariances and Autocorrelations
of an ARMA(p, q)
105
Calculating the Autocovariance γK for an ARMA(p, q) Model
1. Solve a (v+1) × (v+1) system of equations for γ0, …, γv, where v = max(p, q)
Two ways to proceed beyond step 1:
(a) calculate γK by recursively computing γk = φ1γk-1 + ··· + φpγk-p until we reach K
(b) since γk satisfies a linear homogeneous difference equation for k > q, use the general solution to solve for γK
106
Example
107
ARMA (p, q) Spectral Density
108
Example: (1 - .95B)Xt = (1 - .9B)at
Compare with: (1 - .95B)Xt = at and Xt = (1 - .9B)at
110
What can you tell about the underlying ARMA model from these plots?
111
tswge demo: What can you tell about the underlying ARMA model from these plots?
# AR factors
factor.wge(phi=c(.3,.9,.1,-.8075))
# MA factors
factor.wge(phi=c(-.9,-.8,-.72))
112
Calculating ψ-weights
(1 - B + .9B^2)Xt = (1 + .8B)at
Question: What are the ψ-weights?
Three methods are discussed in the book (Chapter 3):
(a) equating coefficients
(b) using division
(c) a general expression
Result: For j > max(p - 1, q): ψj = φ1ψj-1 + ··· + φpψj-p
113
Calculating ψ-weights
(1 - B + .9B^2)Xt = (1 + .8B)at
Question: What are the ψ-weights?
Result: For j > max(p - 1, q): ψj = φ1ψj-1 + ··· + φpψj-p
114
tswge demo: To calculate ψ-weights use
psi.weights.wge(phi,theta,lag.max)
For (1 - B + .9B^2)Xt = (1 + .8B)at:
psi.weights.wge(phi=c(1,-.9),theta=-.8,lag.max=5)
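The recursion ψj = φ1ψj-1 + ··· + φpψj-p behind this demo can be sketched in plain Python (psi_weights is my own name; this illustrates the recursion rather than reproducing tswge's psi.weights.wge):

```python
import numpy as np

def psi_weights(phi, theta, lag_max):
    """psi-weights of an ARMA(p,q): psi_0 = 1, psi_j = -theta_j + sum_i phi_i psi_{j-i} (theta_j = 0 for j > q)."""
    phi = np.asarray(phi, float)
    theta = np.asarray(theta, float)
    psi = [1.0]
    for j in range(1, lag_max + 1):
        val = -theta[j - 1] if j <= len(theta) else 0.0
        for i in range(1, min(j, len(phi)) + 1):
            val += phi[i - 1] * psi[j - i]
        psi.append(val)
    return np.array(psi)

# (1 - B + .9B^2) X_t = (1 + .8B) a_t  ->  phi = (1, -.9), theta_1 = -.8
w = psi_weights([1.0, -0.9], [-0.8], lag_max=5)
# first few weights: 1, 1.8, 0.9, -0.72, ...
```

Past lag max(p - 1, q) the weights follow the pure AR recursion, as the Result above states.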
115
Decomposing an AR(p) Realization into its Components
It is useful to be able to decompose the realization xt into additive components.
Result: If xt is a realization from an AR(p) for which there are no repeated roots of the characteristic equation, then each component
- corresponds to the jth factor in the factor table
- is a realization from an AR(1) for first-order factors
- is a realization from an ARMA(2,1) for second-order factors
116
Example: Consider the case in which φ(B) = φ1(B)φ2(B), where
• φ1(B) = 1 - a1B is a first-order factor
• φ2(B) = 1 - d1B - d2B^2 is an irreducible second-order factor.
117
Factor Table
Factor | Absolute Reciprocal of Root | Component (System) Frequency (f0)
1 - .95B | .95 | 0.0
1 - 1.2B + .9B^2 | .95 | .14
1 + B + .9B^2 | .95 | .34
Note: Realization = C1 + C2 + C3 (one component per row of the factor table)
118
(Figures: three realizations, each shown with its AR(1) component and its ARMA(2,1) components)
119
Decomposition of an AR Process into its Additive Components
The important points to be understood are that:
- each component corresponds to a row in the factor table
- the components add to give Xt
- the components provide a visual representation of the information in the factor table
120
tswge demo: Additive Factors
Use factor.comp.wge(x,p,ncomp)
1. x=gen.arma.wge(n=500,phi=c(1.64,-.794,.734,-1.389,.802))
y=factor.comp.wge(x,p=5,ncomp=3)
Note: You may sometimes get a warning from the base R function arima
2. Sunspot Data:
data(ss08)
y=factor.comp.wge(ss08,p=9,ncomp=4)
121
Seasonal Models - most important seasonal models are nonstationary
- we will discuss these in Chapter 5
122
Generating Realizations from an ARMA (p, q) Process
MA(q) AR(p)
123
General Procedure for Generating a Realization of Length n from an ARMA(p, q) Model
1. Starting values:
2. Generate n + K observations, where K is "large"; keep the last n and discard the first K
124
Note: You may want to vary K based on how slowly the autocorrelations damp
For distinct roots and for k > q, we want |ρK+1| to be small. This depends on how close the roots ri are to the unit circle.
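The burn-in idea on these slides, generate n + K values from zero starting values and discard the first K, can be sketched as follows (Python/NumPy; gen_ar is my own name, and the defaults K=200 and seed=0 are arbitrary choices):

```python
import numpy as np

def gen_ar(n, phi, K=200, seed=0):
    """Generate an AR(p) realization: zero starting values, n + K points, discard the first K."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    total = n + K
    x = np.zeros(total + p)                    # p leading zeros serve as starting values
    a = rng.standard_normal(total + p)         # white noise
    for t in range(p, total + p):
        x[t] = sum(phi[j] * x[t - 1 - j] for j in range(p)) + a[t]
    return x[-n:]                              # keep the last n, discard the transient

x = gen_ar(200, [0.95])
```

For roots very close to the unit circle the autocorrelations damp slowly, so a larger K is needed before the transient from the zero starting values dies out.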
125
Transformations
Memoryless: for variance stabilizing, normalizing, etc.
Examples:
126
Airline Data Log Airline Data
127
Autoregressive transformations
Basic Result:
128
Given the ARMA Process:
132
tswge demo Generate data from (1+.95B)(1-1.6B+.9B2)Xt=at
- transform by 1+.95B
- transform by 1-1.6B+.9B2
- transform by both
To perform autoregressive transformations use artrans.wge(x,phi.tr,lag.max,plottr='TRUE')
x=gen.arma.wge(n=200,phi=c(.65,.62,-.855),sn=6)
plotts.wge(x)
# y1 is (1+.95B)X(t)
y1=artrans.wge(x,phi.tr=-.95)
# y2 is (1-1.6B+.9B^2)X(t)
y2=artrans.wge(x,phi.tr=c(1.6,-.9))
# y12 is (1+.95B)y2(t) = (1+.95B)(1-1.6B+.9B^2)X(t)
y12=artrans.wge(y2,phi.tr=-.95)
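An autoregressive transformation is just a finite linear filter, so it is easy to sketch without tswge (Python/NumPy; artrans is my own name, mimicking the effect of artrans.wge under the assumption that the first p points are simply dropped):

```python
import numpy as np

def artrans(x, phi_tr):
    """y_t = x_t - phi_tr[0] x_{t-1} - ... - phi_tr[p-1] x_{t-p}; the first p points are lost."""
    x = np.asarray(x, float)
    p = len(phi_tr)
    y = x[p:].copy()
    for j, c in enumerate(phi_tr, start=1):
        y -= c * x[p - j:len(x) - j]           # subtract c * x_{t-j}, aligned with y
    return y

d = artrans([1.0, 2.0, 4.0, 7.0], [1.0])       # phi_tr = [1] is a first difference
# d == [1.0, 2.0, 3.0]
```

Applying two transformations in succession, as in the demo above, is the same as applying the product of the two operators.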
133
Note: Since roots close to the unit circle dominate the behavior of the process - it is sometimes necessary to remove these model components using a transformation (such as differencing for a real root close to +1) in order to more clearly see the more subtle features of the model
134
Factor Table
Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at
Characteristic Equation: 1 - 1.59z + .38z^2 + .70z^3 - .49z^4 = 0
Factored: (1 - .99z)(1 - 1.3z + .7z^2)(1 + .7z) = 0
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .99B | 1.01 | 0.99 | 0.0
1 - 1.3B + .7B^2 | .93 ± .76i | 0.84 | 0.11
1 + .7B | -1.42 | 0.70 | 0.50
135
X t – 1.59Xt – 1 + .38 Xt –2 + .70 Xt –3 –.49Xt – 4 = at
136
Factor Table, Factored Form
Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at
Factored Form: (1 - .99B)(1 - 1.3B + .7B^2)(1 + .7B)Xt = at
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .99B | 1.01 | 0.99 | 0.0
1 - 1.3B + .7B^2 | .93 ± .76i | 0.84 | 0.11
1 + .7B | -1.42 | 0.70 | 0.50
The "near unit root" associated with the factor (1 - .99B) dominates the behavior of the process
- makes more subtle features difficult to identify
- standard procedure is to difference such data
137
Procedure:
- Difference the data: i.e., calculate Yt = Xt - Xt-1 = (1 - B)Xt
- Model the differenced series and estimate the coefficients
Model for Differenced Data: Yt - .56Yt-1 - .15Yt-2 + .40Yt-3 = at
Again, not much information is available from the form of the resulting model about the effect of the differencing - so we factor
138
Note: the secondary factors are very similar to those in the original model
Original model: Xt - 1.59Xt-1 + .38Xt-2 + .70Xt-3 - .49Xt-4 = at
Factor Table (original model)
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - .99B | 1.01 | 0.99 | 0.0
1 - 1.3B + .7B^2 | .93 ± .76i | 0.84 | 0.11
1 + .7B | -1.42 | 0.70 | 0.50
Model for Differenced Data: Yt - .56Yt-1 - .15Yt-2 + .40Yt-3 = at
Factor Table (differenced data)
AR Factor | Root | Absolute Reciprocal of Root | System Frequency (f0)
1 - 1.2B + .62B^2 | .97 ± .82i | 0.79 | 0.11
1 + .64B | -1.56 | 0.64 | 0.50
139
Original Data Differenced Data
140
Note: Differencing can be used to remove the effects of a factor close to 1 - B; this topic is discussed further in Chapters 5, 8, and 9.
tswge demo: To apply a difference filter to a data set x use artrans.wge(x,phi.tr=1)
141
Recall: Linear Filters
(Input) -> (Output)
We have looked at linear filters. Autoregressive transformations are linear filters (e.g., the first difference).
142
Types of Filters
Low-pass filters: "pass" low-frequency behavior and "filter out" higher-frequency behavior
High-pass filters: "pass" high-frequency behavior and "filter out" lower-frequency behavior
Band-pass filters: "pass" frequencies in a certain "frequency band"
Band-stop (notch) filters: "pass" all frequencies except those in a certain "frequency band"
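Which category a given filter falls into can be read off its squared frequency-response (gain) function. For the first difference Yt = Xt - Xt-1 the squared gain is |1 - e^(-2πif)|^2 = 2 - 2cos(2πf), which a short NumPy check confirms is zero at f = 0 and largest at f = .5:

```python
import numpy as np

# Squared gain of the difference filter y_t = x_t - x_{t-1}:
#   |1 - exp(-2*pi*i*f)|^2 = 2 - 2*cos(2*pi*f),  for 0 <= f <= 0.5
f = np.linspace(0.0, 0.5, 101)
gain = np.abs(1.0 - np.exp(-2j * np.pi * f)) ** 2
# gain rises monotonically from 0 at f = 0 to 4 at f = 0.5
```

Low frequencies are suppressed and high frequencies are passed, i.e., differencing acts as a high-pass filter.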
143
What type of filter is a difference?
Original Data Differenced Data
144
What type of filter is Yt = Xt - 1.2Xt-1 + .9Xt-2?