Christopher Dougherty EC220 - Introduction to econometrics (chapter 13) Slideshow: nonstationary processes
Presentation transcript:

1 [Stationary process shown at top: X_t = β_2 X_{t–1} + ε_t, with –1 < β_2 < 1] In the last sequence, the process shown at the top was shown to be stationary. The expected value and variance of X_t were shown to be (asymptotically) independent of time, and the covariance between X_t and X_{t+s} was also shown to be independent of time.

2 [Random walk: X_t = X_{t–1} + ε_t] The condition –1 < β_2 < 1 was crucial for stationarity. Suppose β_2 = 1, as above. Then the value of X in one time period is equal to its value in the previous time period, plus a random adjustment. This is known as a random walk.

3 [Figure: a single realization of a random walk] The figure shows an example realization of a random walk for the case where ε_t has a normal distribution with zero mean and unit variance.

4 [Figure: 50 realizations of a random walk] This figure shows the results of a simulation with 50 realizations. It is obvious that the ensemble distribution is not stationary: the distribution changes as t increases, becoming increasingly spread out. We will confirm this mathematically.
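The kind of simulation shown in the figure can be reproduced with a few lines of code. This is a minimal sketch, not the author's original simulation: the horizon T, the number of realizations, and the seed are illustrative choices; only the unit-normal innovations are taken from the previous slide.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)   # illustrative seed
T, n_paths = 100, 50             # 50 realizations, as in the figure; the horizon is arbitrary

eps = rng.standard_normal((n_paths, T))  # innovations eps_t ~ N(0, 1)
X = np.cumsum(eps, axis=1)               # random walk X_t = X_{t-1} + eps_t, with X_0 = 0

plt.plot(X.T, color="grey", linewidth=0.5)
plt.xlabel("t")
plt.ylabel("X_t")
plt.title("50 realizations of a random walk")
plt.show()
```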

5 [Random walk] If the process X_t = X_{t–1} + ε_t is valid for time t, it is valid for time t – 1: X_{t–1} = X_{t–2} + ε_{t–1}.

6 [Random walk] Hence X_t can be expressed in terms of X_{t–2} and the innovations ε_{t–1} and ε_t: X_t = X_{t–2} + ε_{t–1} + ε_t.

7 [Random walk] Thus, continuing to lag and substitute, X_t is equal to its value at time 0, X_0, plus the sum of the innovations in periods 1 to t: X_t = X_0 + ε_1 + ... + ε_t.

8 [Random walk] If expectations are taken at time 0, the expected value at any future time t is fixed at X_0, because the expected values of the future innovations are all 0: E(X_t) = X_0. Thus E(X_t) is independent of t and the first condition for stationarity remains satisfied.

9 This can be seen from the 50 realizations. The distribution of the values of X_t spreads out as t increases, but there is no tendency for the mean of the distribution to change. (In this example X_0 = 0, but this is unimportant; it would be true for any value of X_0.)

10 However, it is also clear from the figure that the ensemble distribution is not constant over time, and therefore that the process is nonstationary. The distribution of the values of X_t spreads out as t increases, so the variance of the distribution is an increasing function of t.

11 [Random walk] We will demonstrate this mathematically. We have seen that X_t is equal to X_0 plus the sum of the innovations ε_1, ..., ε_t. X_0 is an additive constant, so it does not affect the variance.

12 [Random walk] The variance of the sum of the innovations is equal to the sum of their individual variances: var(X_t) = var(ε_1) + ... + var(ε_t). The covariances are all zero because the innovations are assumed to be generated independently.

13 [Random walk] The variance of each innovation is equal to σ_ε², by assumption. Hence the population variance of X_t is directly proportional to t: var(X_t) = t σ_ε². As we have seen from the figure, its distribution spreads out as t increases.
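A quick Monte Carlo check of these two results, E(X_t) = X_0 and var(X_t) = t σ_ε², can be run as follows. The sketch assumes σ_ε = 1 and X_0 = 0; the horizon and number of replications are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)           # illustrative seed
n_paths, T, sigma = 100_000, 50, 1.0     # assumed settings, not from the slides

eps = rng.normal(0.0, sigma, size=(n_paths, T))
X = np.cumsum(eps, axis=1)               # random walk with X_0 = 0

t = np.arange(1, T + 1)
print("mean at t = 50:    ", X[:, -1].mean())   # close to 0, i.e. X_0
print("variance at t = 50:", X[:, -1].var())    # close to 50 * sigma^2
print("largest gap between var(X_t) and t*sigma^2:",
      np.abs(X.var(axis=0) - t * sigma**2).max())  # small sampling error only
```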

14 [Stationary process shown at top: X_t = β_1 + β_2 X_{t–1} + ε_t, with –1 < β_2 < 1] A second process considered in the last sequence is shown above. The presence of the intercept β_1 on the right side gave the series a nonzero mean but did not lead to a violation of the conditions for stationarity.

15 [Random walk with drift: X_t = β_1 + X_{t–1} + ε_t] If β_2 = 1, however, the series becomes a nonstationary process known as a random walk with drift.

16 [Random walk with drift] If the process is valid for time t, it is valid for time t – 1: X_{t–1} = β_1 + X_{t–2} + ε_{t–1}.

17 [Random walk with drift] Hence X_t can be expressed in terms of X_{t–2}, the innovations ε_{t–1} and ε_t, and an intercept: X_t = 2β_1 + X_{t–2} + ε_{t–1} + ε_t. The intercept is 2β_1: irrespective of whatever else is happening to the process, a fixed quantity β_1 is added in every time period.

18 [Random walk with drift] Thus, lagging and substituting t times, X_t is now equal to X_0 plus the sum of the innovations, as before, plus the constant β_1 multiplied by t: X_t = X_0 + β_1 t + ε_1 + ... + ε_t.

19 [Random walk with drift] As a consequence, the mean of the process becomes a function of time, E(X_t) = X_0 + β_1 t, violating the first condition for stationarity.

20 [Random walk with drift] (The second condition for stationarity remains violated, since the variance of the distribution of X_t is proportional to t; the variance is unaffected by the inclusion of the constant β_1.)

21 This process is known as a random walk with drift, the drift referring to the systematic change in the expectation from one time period to the next.

22 [Figure: 50 realizations of a random walk with drift] The figure shows 50 realizations of such a process. The underlying drift line is highlighted in yellow. It can be seen that the ensemble distribution changes in two ways with time.
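The drift case can be simulated in the same way by adding a constant each period. Again this is only a sketch: the drift β_1 = 0.5, the horizon, and the seed are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)    # illustrative seed
T, n_paths, beta1 = 100, 50, 0.5  # beta1 > 0 chosen only for illustration

t = np.arange(1, T + 1)
eps = rng.standard_normal((n_paths, T))
X = beta1 * t + np.cumsum(eps, axis=1)   # X_t = X_0 + beta1*t + (eps_1 + ... + eps_t), X_0 = 0

plt.plot(X.T, color="grey", linewidth=0.5)
plt.plot(t, beta1 * t, color="gold", linewidth=2, label="drift line")  # shown in yellow on the slide
plt.xlabel("t")
plt.ylabel("X_t")
plt.legend()
plt.show()
```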

23 The mean changes. In this case it is drifting upwards because β_1 has been taken to be positive. If β_1 were negative, it would be drifting downwards.

24 And, as in the case of the random walk with no drift, the distribution spreads out around its mean.

25 [Deterministic trend: X_t = β_1 + β_2 t + ε_t] Random walks are not the only type of nonstationary process. Another common example of a nonstationary time series is one possessing a time trend.

26 This type of trend is described as a deterministic trend, to differentiate it from the trend found in a model of a random walk with drift.

27 It is nonstationary because the expected value of X_t is not independent of t. Its population variance is not even defined.

28 [Figure: 50 realizations of a trend with an AR(1) disturbance] The figure shows 50 realizations of a variation where the disturbance term is the stationary process u_t = 0.8u_{t–1} + ε_t. The underlying trend line is shown in white.
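A sketch of that variation is given below. The trend parameters β_1 and β_2 are arbitrary illustrative values; only the AR(1) coefficient 0.8 is taken from the slide.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)     # illustrative seed
T, n_paths = 100, 50
beta1, beta2 = 1.0, 0.2            # trend parameters chosen only for illustration

t = np.arange(1, T + 1)
X = np.empty((n_paths, T))
for i in range(n_paths):
    u = 0.0
    for s in range(T):
        u = 0.8 * u + rng.standard_normal()   # AR(1) disturbance u_t = 0.8*u_{t-1} + eps_t
        X[i, s] = beta1 + beta2 * t[s] + u    # deterministic trend plus stationary disturbance

plt.plot(X.T, color="grey", linewidth=0.5)
plt.plot(t, beta1 + beta2 * t, color="black", linewidth=2)  # trend line (white on the original slide)
plt.xlabel("t")
plt.ylabel("X_t")
plt.show()
```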

29 [Deterministic trend: X_t = β_1 + β_2 t + ε_t; random walk with drift: X_t = X_0 + β_1 t + (ε_1 + ... + ε_t)] Superficially, this model looks similar to the random walk with drift, when the latter is written in terms of its components from time 0.

30 The key difference between a deterministic trend and a random walk with drift is that in the former, the series must keep coming back to a fixed trend line.

31 In any given observation, X_t will be displaced from the trend line by an amount u_t but, provided that the disturbance is stationary, it must otherwise adhere to the trend line.

32 By contrast, in a random walk with drift, the displacement from the underlying trend line at time t is the accumulated sum of the innovations, which is itself a random walk. Since the displacement is a random walk, there is no reason why X_t should ever return to its trend line.

33 It is important to make a distinction between the concepts of difference-stationarity and trend-stationarity.

34 If a nonstationary process can be transformed into a stationary process by differencing, it is said to be difference-stationary. A random walk, with or without drift, is an example.

35 The first difference, ΔX_t = X_t – X_{t–1}, is simply equal to the sum of β_1 and ε_t: ΔX_t = β_1 + ε_t.

36 This is a stationary process with population mean β_1 and variance σ_ε², both independent of time. It is actually iid, and the covariance between ΔX_t and ΔX_{t+s} is zero.

37 If a nonstationary time series can be transformed into a stationary process by differencing once, as in this case, it is described as integrated of order 1, or I(1).
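This can be checked directly: difference a simulated random walk with drift and the result behaves like an iid series with mean β_1 and variance σ_ε². The parameter values and seed below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)       # illustrative seed
T, beta1, sigma = 10_000, 0.5, 1.0   # assumed drift and innovation standard deviation

eps = rng.normal(0.0, sigma, T)
X = np.cumsum(beta1 + eps)           # random walk with drift, X_0 = 0

dX = np.diff(X)                      # first difference: Delta X_t = beta1 + eps_t
print("mean of dX:", dX.mean())      # close to beta1
print("variance of dX:", dX.var())   # close to sigma^2
print("lag-1 autocorrelation:", np.corrcoef(dX[:-1], dX[1:])[0, 1])  # close to 0
```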

38 The reason that the series is described as 'integrated' is that the shock in each time period is permanently incorporated in it. There is no tendency for the effects of the shocks to attenuate with time, as in a stationary process or in a model with a deterministic trend.

39 If a series can be made stationary by differencing twice, it is known as I(2), and so on. To complete the picture, a stationary process, which by definition needs no differencing, is described as I(0). In practice most series are I(0), I(1), or, occasionally, I(2).

40 The stochastic component ε_t is iid. More generally, the stationary process reached after differencing may be ARMA(p, q): autoregressive of order p and moving average of order q.

41 The original series is then characterized as an ARIMA(p, d, q) time series, where d is the number of times it has to be differenced to render it stationary.
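As a concrete illustration of the notation, the sketch below simulates an ARIMA(1, 1, 0) series, whose first difference is a stationary AR(1), and fits it with statsmodels. This is not from the slides; it assumes statsmodels is available, and the AR coefficient 0.5 and series length are arbitrary.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels is installed

rng = np.random.default_rng(0)   # illustrative seed
n = 500
eps = rng.standard_normal(n)

# build stationary AR(1) first differences, then integrate once to get an I(1) series
d = np.empty(n)
d[0] = eps[0]
for t in range(1, n):
    d[t] = 0.5 * d[t - 1] + eps[t]
X = np.cumsum(d)                 # ARIMA(1, 1, 0): differencing once gives an AR(1)

res = ARIMA(X, order=(1, 1, 0)).fit()  # d = 1 tells ARIMA to difference once before fitting
print(res.params)                      # the estimated AR(1) coefficient should be close to 0.5
```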

42 A nonstationary time series is described as being trend-stationary if it can be transformed into a stationary process by extracting a time trend.

43 [Equations shown: X_t = β_1 + β_2 t + ε_t; fitted trend b_1 + b_2 t; detrended series = X_t – (b_1 + b_2 t)] For example, the very simple model given by the first equation can be detrended by fitting it (second equation) and defining a new variable with the third equation. The new, detrended variable is of course just the residuals from the regression of X on t.
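A minimal detrending sketch, assuming a simulated trend-stationary series with arbitrary parameter values; the fitted line comes from an ordinary least-squares regression of X on t via np.polyfit.

```python
import numpy as np

rng = np.random.default_rng(0)     # illustrative seed
T, beta1, beta2 = 200, 1.0, 0.2    # assumed trend parameters

t = np.arange(1, T + 1)
X = beta1 + beta2 * t + rng.standard_normal(T)   # trend-stationary series

# regress X on t by least squares; np.polyfit returns (slope, intercept) for deg=1
b2, b1 = np.polyfit(t, X, deg=1)
X_detrended = X - (b1 + b2 * t)    # residuals from the regression of X on t

print("fitted intercept and slope:", b1, b2)             # close to beta1 and beta2
print("mean of detrended series:", X_detrended.mean())   # essentially zero by construction
```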

44 The distinction between difference-stationarity and trend-stationarity is important for the analysis of time series.

45 At one time it was conventional to assume that macroeconomic time series could be decomposed into trend and cyclical components.

46 It was thought that the trend components were determined by real factors, such as the growth of GDP, and the cyclical components were determined by transitory factors, such as monetary policy.

47 Typically the cyclical component was analyzed using detrended versions of the variables in the model.

48 However, this approach is inappropriate if the process is difference-stationary. Although detrending may remove any drift, it does not affect the increasing variance of the series, and so the detrended component remains nonstationary.
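The point can be seen numerically: detrend a simulated random walk with drift and the residual variance keeps growing with the length of the series, instead of settling down as it would for a trend-stationary process. The drift, sample lengths, number of replications, and seed below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed
beta1 = 0.5                      # assumed drift

def detrended_variance(T, n_reps=200):
    """Average, over n_reps simulated random walks with drift of length T,
    the variance of the residuals from regressing each series on t."""
    t = np.arange(1, T + 1)
    total = 0.0
    for _ in range(n_reps):
        X = beta1 * t + np.cumsum(rng.standard_normal(T))
        slope, intercept = np.polyfit(t, X, deg=1)
        total += (X - (intercept + slope * t)).var()
    return total / n_reps

# the residual variance grows roughly in proportion to T: detrending has not
# removed the nonstationarity of a difference-stationary series
for T in (100, 1_000, 10_000):
    print(T, detrended_variance(T))
```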

49 As will be seen in the next slideshow, this gives rise to problems of estimation and inference.

50 Further, because the approach ignores the contribution of real shocks to economic fluctuations, it causes the role of transitory factors in the cycle to be overestimated.

Copyright Christopher Dougherty. These slideshows may be downloaded by anyone, anywhere for personal use. Subject to respect for copyright and, where appropriate, attribution, they may be used as a resource for teaching an econometrics course. There is no need to refer to the author. The content of this slideshow comes from Section 13.1 of C. Dougherty, Introduction to Econometrics, fourth edition 2011, Oxford University Press. Additional (free) resources for both students and instructors may be downloaded from the OUP Online Resource Centre. Individuals studying econometrics on their own who feel that they might benefit from participation in a formal course should consider the London School of Economics summer school course EC212 Introduction to Econometrics or the University of London International Programmes distance learning course 20 Elements of Econometrics.