Econ 240 C Lecture 4
Outline
Part I: Time Averages
Part II: Autocovariance Function
Part III: Random Walk
Part IV: Deciding Between a Random Walk and an ARONE (first-order autoregressive process)
Practicum: Lab Two
Natural Logarithm of the Rotterdam Import Price for Dark Northern Spring Wheat
Trace
Histogram
Autocorrelation function
Practicum: Lab Two
First Difference of the Natural Logarithm of the Rotterdam Import Price for Dark Northern Spring Wheat
Trace
Histogram
Autocorrelation function
Part I: Stationarity and Time Averages
Many Observations of White Noise
Using simulation, we can create many different white noise time series of arbitrary length, say 1000. Averaging these series at each point in time gives the average of the group, or ensemble, at that time, which traces out the mean function m(t) = E[WN(t)].
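A minimal simulation sketch of the ensemble average (assuming NumPy; the series length and ensemble size are illustrative choices, not values from the lecture):

import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 5                          # length of each series, number of series

# Each row is one simulated white noise series, drawn from N(0, 1)
ensemble = rng.standard_normal((N, T))

# Ensemble average at each point in time: an estimate of m(t) = E[WN(t)]
m_t = ensemble.mean(axis=0)
print(m_t[:10])                         # hovers around zero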
Time Averages
In economics, as a practical matter, we usually have only one observation of a time series. In that case it is necessary to average over time instead. If the time series is stationary, averaging over time makes sense.
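A companion sketch of the time average for a single simulated series (again assuming NumPy, for illustration only):

import numpy as np

rng = np.random.default_rng(1)
wn = rng.standard_normal(1000)          # one realization of white noise

# With only one realization, average over time rather than over the ensemble
time_average = wn.mean()
print(time_average)                     # close to the mean function value m = 0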
Five Simulated White Noise Time Series
[Figure: traces of five simulated white noise series, labeled WN, WN2, WN3, WN4, WN5]
Trace of Five Simulated White Noise Time Series
Ensemble Average
Appeal to the Central Limit Theorem
If I had generated 100 or 1000 simulated white noise time series, the ensemble average would come close to zero for every time period. Mean function: m(t) = E[X(t)].
Ensemble Average
Ensemble Averages Are a Luxury Good in Economics
We need to consider the time average instead. Think of the time average for the first white noise time series.
Time Average
Trace of Time Average
Mean Function For a Time Average
m = E[WN(t)] = constant = 0
Time Average Will Not Work Well for an Evolutionary Time Series
Mean Function For An Evolutionary Time Series
m(t) = E[x(t)] is not independent of time
Part II: Autocovariance Function
γx,x(t, u) = E{[x(t) - E x(t)][x(t-u) - E x(t-u)]}
If a time series is covariance stationary, then γx,x(t, u) = γx,x(u), i.e. the autocovariance depends only on the lag u. Only relative time, not absolute time, counts.
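A plain NumPy sketch of the sample analogue of this definition (the function name sample_autocov is mine, not from the lecture):

import numpy as np

def sample_autocov(x, u):
    """Sample version of γx,x(u): average cross product of deviations u periods apart."""
    x = np.asarray(x, dtype=float)
    dev = x - x.mean()
    if u == 0:
        return np.mean(dev * dev)
    return np.mean(dev[u:] * dev[:-u])

rng = np.random.default_rng(2)
wn = rng.standard_normal(1000)
print(sample_autocov(wn, 0), sample_autocov(wn, 1))   # roughly 1.0 and 0.0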
Autocovariance of White Noise At Lag Zero
Autocovariance function at lag zero:
E{[wn(t) - E wn(t)][wn(t-0) - E wn(t-0)]}
= E{[wn(t) - m(t)][wn(t) - m(t)]}
= E{[wn(t) - 0][wn(t) - 0]}
= E{wn(t) · wn(t)}, which is the variance of the white noise.
Autocovariance of White Noise At Lag Zero
Variance For Entire Series
Autocovariance of White Noise at Lag 1 = ?
Autocovariance function: E{wn(t) · wn(t-1)} = γx,x(u=1) = ?
Average Cross Product for the WN Series (999 cross products at lag one)
Autocovariance of White Noise at Lag 1 = ?
γx,x(u=1) = 0, since white noise is independent as well as identically distributed.
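A quick numerical check of this claim (NumPy assumed; a series of 1000 observations gives 999 cross products at lag one):

import numpy as np

rng = np.random.default_rng(3)
wn = rng.standard_normal(1000)

# Average of the 999 cross products wn(t) * wn(t-1)
gamma_1 = np.mean(wn[1:] * wn[:-1])
print(gamma_1)                          # close to zero: the draws are independent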
Autocovariance of White Noise at Lag 2 = ?
γx,x(u=2) = 0, since white noise is independent as well as identically distributed.
Theoretical Autocovariance, WN
Autocovariance, WN: Theoretical vs. Simulated
Autocorrelation Function
The autocorrelation function is just a standardized autocovariance function, i.e. the autocovariance function divided by the variance:
ρx,x(u) = γx,x(u) / γx,x(0)
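A sketch of this standardization in NumPy (the helper name acf is mine):

import numpy as np

def acf(x, max_lag):
    """ρ(u) = γ(u) / γ(0) for u = 0, ..., max_lag."""
    dev = np.asarray(x, dtype=float) - np.mean(x)
    gamma0 = np.mean(dev * dev)
    return np.array([1.0] + [np.mean(dev[u:] * dev[:-u]) / gamma0
                             for u in range(1, max_lag + 1)])

rng = np.random.default_rng(4)
wn = rng.standard_normal(1000)
print(np.round(acf(wn, 5), 3))          # roughly [1, 0, 0, 0, 0, 0] for white noise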
Simulated Autocorrelation Function, WN
[EViews correlogram of the simulated WN series, 1000 included observations: the sample autocorrelations and partial autocorrelations are essentially zero at every lag shown.]
Part III: Random Walk as an Evolutionary Process
RW(t) = RW(t-1) + WN(t)
Lag by one: RW(t-1) = RW(t-2) + WN(t-1)
Substitute for RW(t-1): RW(t) = RW(t-2) + WN(t) + WN(t-1)
Random Walk
Lag again to obtain RW(t-2) = RW(t-3) + WN(t-2), and substitute for RW(t-2):
RW(t) = RW(t-3) + WN(t) + WN(t-1) + WN(t-2), etc.
So RW(t) is the current shock plus all past shocks, with past shocks weighted equally to the current shock, and hence a random walk has infinite memory.
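Because RW(t) is just the accumulated sum of all shocks up to time t, it can be synthesized from white noise with a cumulative sum. A NumPy sketch (length illustrative):

import numpy as np

rng = np.random.default_rng(5)
wn = rng.standard_normal(1000)

# RW(t) = WN(t) + WN(t-1) + WN(t-2) + ...  (every past shock weighted equally)
rw = np.cumsum(wn)

# Equivalent recursion: RW(t) = RW(t-1) + WN(t), starting from RW(1) = WN(1)
rw_rec = np.zeros_like(wn)
rw_rec[0] = wn[0]
for t in range(1, len(wn)):
    rw_rec[t] = rw_rec[t-1] + wn[t]
assert np.allclose(rw, rw_rec)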
Random Walk
RW(t) - RW(t-1) = WN(t)
In lag operator notation, Z^0 RW(t) - Z RW(t) = WN(t), so
RW(t) = {1/[1 - Z]} WN(t)
RW(t) = [1 + Z + Z^2 + …] WN(t)
RW(t) = WN(t) + Z WN(t) + Z^2 WN(t) + …
RW(t) = WN(t) + WN(t-1) + WN(t-2) + …
Random Walk, Synthesis from White Noise
RW(t) = {1/[1 - Z]} WN(t) = [1 + Z + Z^2 + …] WN(t)
Autocovariance of RW in Theory
γRW,RW(u) = E{[RW(t) - E RW(t)][RW(t-u) - E RW(t-u)]}
E RW(t) = E[WN(t) + WN(t-1) + …] = 0
γRW,RW(u=0) = E[RW(t) · RW(t)]
γRW,RW(0) = E{[WN(t) + WN(t-1) + WN(t-2) + …] · [WN(t) + WN(t-1) + WN(t-2) + …]}
γRW,RW(0) = σ² + σ² + σ² + …, which grows with the length of the series, so the random walk is evolutionary, not stationary.
Autocovariance of RW in Practice, Length 100
γRW,RW(u=1) = E[RW(t) · RW(t-1)]
γRW,RW(1) = E{[WN(t) + WN(t-1) + WN(t-2) + …] · [WN(t-1) + WN(t-2) + WN(t-3) + …]}
γRW,RW(1) = σ² + σ² + σ² + … = 99 σ²
ρRW,RW(1) = γRW,RW(1) / γRW,RW(0)
ρRW,RW(1) = 99/100
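A rough numerical check of this calculation on one simulated random walk of length 100 (NumPy assumed; the sample lag-one autocorrelation of a single realization will be close to, though not exactly, 99/100):

import numpy as np

rng = np.random.default_rng(6)
rw = np.cumsum(rng.standard_normal(100))    # random walk of length 100

dev = rw - rw.mean()
rho_1 = np.sum(dev[1:] * dev[:-1]) / np.sum(dev * dev)
print(rho_1)                                # close to one, in the spirit of 99/100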
Simulated Autocorrelation, RW
[EViews correlogram of the simulated random walk, sample 1 to 100, 100 included observations: the autocorrelations start near one and decline slowly with the lag.]
Part IV: Random Walk vs. ARONE
x(t) = b·x(t-1) + wn(t). How close to one is b?
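A sketch of estimating b by least squares on a simulated series (NumPy assumed; the true process here is a random walk, so b = 1 and the estimate typically lands just below one):

import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(500))     # simulate x as a random walk (b = 1)

y, ylag = x[1:], x[:-1]
b_hat = np.sum(ylag * y) / np.sum(ylag * ylag)   # OLS slope, no intercept
print(b_hat)                                     # typically just below 1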
Exchange Rate $/Euro
Exchange Rate, $ Per Euro
Sample: 1999:01 2003:03; included observations: 51
[EViews correlogram of the $ per euro exchange rate: the autocorrelations start near one, decline steadily with the lag, and turn negative at longer lags.]
Hong Kong $ Per US $
Hong Kong $ Per US $
Sample: 1981:01 2003:03; included observations: 267
[EViews correlogram: the autocorrelations start near one and decline only slowly with the lag.]
Correlogram of the Ratio of Inventory to Sales
Autocorrelation of Residuals from an ARONE Model of the Ratio of Inventory to Sales
First Difference of Ratio
dratinvsale = ratinvsale - ratinvsale(-1)
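The same first difference in a NumPy sketch (the values of ratinvsale below are hypothetical stand-ins for the inventory-to-sales ratio, used only to show the operation):

import numpy as np

ratinvsale = np.array([1.52, 1.50, 1.49, 1.47, 1.48])   # hypothetical values
# dratinvsale(t) = ratinvsale(t) - ratinvsale(t-1)
dratinvsale = np.diff(ratinvsale)
print(dratinvsale)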
Correlogram of dratinvsale
Correlogram of Residuals from an ARONE Model of the First Difference of the Ratio of Inventory to Sales
First Order Autoregressive (ARONE)
arone(t) = b·arone(t-1) + wn(t)
Root of the deterministic difference equation arone(t) - b·arone(t-1) = 0: let x^(1-u) = arone(t-u), so x - b·x^0 = 0, and x = b is the root.
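A small NumPy illustration that this root governs the deterministic part: the homogeneous solution is arone(t) = b^t · arone(0), so |b| < 1 dies out while b = 1 persists (the value of b and the initial condition below are arbitrary):

import numpy as np

b = 0.8                                   # illustrative root
arone = np.empty(20)
arone[0] = 1.0                            # arbitrary initial condition
for t in range(1, 20):
    arone[t] = b * arone[t-1]             # deterministic difference equation only

# The solution is arone(t) = b**t * arone(0): the root b governs persistence
assert np.allclose(arone, b ** np.arange(20) * arone[0])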
Is b = 1?
Regression: arone(t) = b·arone(t-1) + wn(t), H0: b = 1
Equivalently, subtract arone(t-1) from both sides: (1 - Z)·arone(t) = (b - 1)·arone(t-1) + wn(t), H0: (b - 1) = 0, i.e. b = 1
The Fly in the Ointment
As b approaches 1, the usual t-statistic for the estimated parameter no longer has Student's t-distribution. Dickey and Fuller used simulation to derive the appropriate distribution.
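In practice the test can be run with the adfuller routine from statsmodels, sketched here on a simulated random walk rather than the lecture's inventory-to-sales data (NumPy and statsmodels assumed to be available):

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(8)
rw = np.cumsum(rng.standard_normal(200))      # simulated random walk

# Null hypothesis: a unit root (b = 1); the statistic is compared with the
# Dickey-Fuller distribution, not Student's t
adf_stat, pvalue, usedlag, nobs, crit, icbest = adfuller(rw, regression='c')
print(adf_stat, pvalue)                       # typically fails to reject the unit root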
Ratio of Inventory to Sales: Example of the Dickey-Fuller Test