Semi-structural approach: the Kalman Filter
Output Gap detection: all the different approaches
Luxembourg, 8-10 June 2016
CONTRACTOR IS ACTING UNDER A FRAMEWORK CONTRACT CONCLUDED WITH THE EUROPEAN COMMISSION
Where are we heading to? We would like to measure potential output and the output gap; hence we have to go back to their economic notion. In doing so, bear in mind that economic notions are often the outcome of a theory, and are hence subject to "theory-uncertainty". Just to stress this point: "That measure—known as potential output—is an estimate of "full-employment" gross domestic product, or the level of GDP attainable when the economy is operating at a high rate of resource use." [U.S. Congressional Budget Office, 2004] The ECB adopts a similar notion, but in addition emphasizes ambiguities already at the theoretical level: "Concerning the output gap, measurement errors are likely to be significant in view of the unavailability of a precise theoretical definition of this aggregate variable." [Angeloni et al., 2001]
Where are we heading to? Thus, analysts and institutions adopt a notion of potential output and output gap corresponding to a theory, and then proceed to solve the measurement problem. Semi-structural and multivariate methods allow one to estimate the variable of interest while also exploiting information coming from the underlying theory. Next, we focus on one such method, the multivariate Kalman Filter (KF), used by most institutions: the OECD, the EU Commission and the ECB to estimate the NAIRU; the IMF for the output gap.
Preliminaries. The goal is to obtain an estimate of a variable of interest x (e.g. the NAIRU) that cannot be observed directly, but that is economically and statistically related to some other, observable variable z (e.g. inflation dynamics and actual unemployment, related to the NAIRU through a Phillips curve). The problem is one of statistical inference. Under a Bayesian viewpoint, the idea is to recursively exploit the conditional probability density f(x|z). To illustrate the KF logic, we start from a simple graphical example. A short and intuitive explanation of conditional densities and moments: suppose we know the statistical relationship linking x and z. This is summarized by the joint density function f(x,z), which tells us the probability that (x,z) assumes any pair of values in their admissible domain. The conditional density f(x|z) is the probability density of x given a certain realization/value of z. Bayes' theorem tells us: f(x|z) = f(x,z)/h(z), where h(z) = ∫f(x,z)dx is the marginal density of z. If the two variables are statistically independent, the probability associated to any value of x is insensitive to the value assumed by z; equivalently, f(x|z) is insensitive to z. Indeed, for independent variables: f(x|z) = f(x,z)/h(z) = [g(x)·h(z)]/h(z) = g(x), the marginal density of x. Thus, unless the variables are statistically independent, conditional moments will differ from the moments of the marginal densities. In particular, E(x) = ∫x·g(x)dx differs from E(x|z) = ∫x·f(x|z)dx.
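The conditional-density mechanics above can be checked numerically on a discrete toy example (all probabilities below are invented for illustration, not taken from the slides):

```python
# Toy discrete joint distribution over (x, z): f[(x, z)] = P(x, z).
# Illustrative numbers only; x and z are deliberately dependent.
f = {
    (0, 0): 0.3, (0, 1): 0.1,
    (1, 0): 0.2, (1, 1): 0.4,
}

def marginal_z(z):
    # h(z) = sum over x of f(x, z)
    return sum(p for (x_, z_), p in f.items() if z_ == z)

def conditional(x, z):
    # Bayes' theorem: f(x|z) = f(x, z) / h(z)
    return f[(x, z)] / marginal_z(z)

def cond_mean(z):
    # E(x|z) = sum over x of x * f(x|z)
    return sum(x_ * conditional(x_, z) for x_ in (0, 1))

marg_mean = sum(x_ * p for (x_, _), p in f.items())  # E(x)
print(marg_mean)     # E(x)     = 0.6
print(cond_mean(0))  # E(x|z=0) = 0.4
print(cond_mean(1))  # E(x|z=1) = 0.8
```

Because x and z are dependent here, the conditional mean E(x|z) differs from the marginal mean E(x), which is exactly why observing z is informative about x.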
Example: x is the current position of a caravan along a trail in the desert (one coordinate). The position is unknown but can be estimated by observing the position of the polar star; the estimate is z. For simplicity, the caravan is not moving. There is some measurement error, which makes the observers unable to determine x exactly after z is observed. Measurements can be made by several people (indexed by t), independently, each of whom is subject to a different error: v_t ~ iid N(0, σ_t²), with z_t = x + v_t. The (conditional) probability that the caravan is in x given a measurement z = z_t is f(x|z_t).
The first person makes a star observation and concludes z_1. The best estimate is E(x|z_1) = z_1, and the error variance is E[(x − z_1)²] = E[v_1²] = σ_1². (figure 1)
Next, a trained navigator friend takes an independent measurement (remember the position is fixed) and obtains a measurement with a lower variance, σ_2² < σ_1² (figure 2). Note the higher density peak due to the smaller variance, indicating you have higher confidence that z_2 is the actual position. At this point, you have two measurements available for estimating the position. What would you do: would you try to combine these data, or simply use the latest measurement?
Best is to exploit all information and compute the expected position and the error variance according to the conditional density; given that the measurements are independent, f(x|z_1, z_2) ∝ f(x|z_1)·f(x|z_2). This density has mean μ_2 and variance σ²(2), where μ_2 = [σ_2²/(σ_1² + σ_2²)]·z_1 + [σ_1²/(σ_1² + σ_2²)]·z_2 and 1/σ²(2) = 1/σ_1² + 1/σ_2². (figure 3) The mean has an intuitive interpretation: an average of the measurements weighted by their relative variances. The precision is the sum of the precisions obtained in the two measurements. The new conditional distribution has less variance!
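This fusion step can be sketched in a few lines of pure Python (the measurement values and variances below are invented for illustration):

```python
def fuse(z1, var1, z2, var2):
    """Combine two independent noisy measurements of the same quantity.

    Precision-weighted mean and combined variance:
      mu    = (var2*z1 + var1*z2) / (var1 + var2)
      1/var = 1/var1 + 1/var2
    """
    mu = (var2 * z1 + var1 * z2) / (var1 + var2)
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mu, var

# Illustrative values: a noisy reading (variance 4) and a sharper one (variance 1).
mu, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mu, var)  # 11.6 0.8 -- closer to the more precise measurement,
                # with less variance than either measurement alone.
```

Note how the combined variance (0.8) is smaller than both input variances, matching the "the new conditional distribution has less variance" point above.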
We are ready to define the KF. Just rewrite the last step and generalize:
This defines the KF, a data processing algorithm with the following properties:
[Recursivity] It can be computed at every measurement step using only the new measurement to update the last estimate. The coefficient K (Kalman gain) is updated at every step t: K_{t+1} = σ²(t) / (σ²(t) + σ_{t+1}²).
[Efficiency] It solves an inference problem optimally (minimizing estimation errors). It produces maximum likelihood estimates; it is the minimum variance estimator among all linear unbiased ones (BLUE). We shall look at estimated variances in the next slide. Because z (and v) are normally distributed, the conditional distributions are also normally distributed.
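The recursivity property can be sketched in pure Python for the static caravan example (the measurement stream and variances are invented; only the previous estimate is carried from step to step):

```python
def kf_static(measurements):
    """Recursive estimate of a fixed scalar state from (z_t, var_t) pairs.

    Each step applies the gain K = var_est / (var_est + var_t), then
      x_est  += K * (z - x_est)      (measurement update of the mean)
      var_est = (1 - K) * var_est    (variance shrinks every step)
    Only the last estimate is kept: this is the recursivity property.
    """
    z, var = measurements[0]
    x_est, var_est = z, var                 # initialize with the first measurement
    for z, var in measurements[1:]:
        K = var_est / (var_est + var)       # Kalman gain
        x_est = x_est + K * (z - x_est)     # updated mean
        var_est = (1.0 - K) * var_est       # updated (smaller) variance
    return x_est, var_est

# Illustrative data: three observers with different error variances.
print(kf_static([(10.0, 4.0), (12.0, 1.0), (11.0, 2.0)]))
```

Running the recursion over all three measurements gives the same answer as combining them in one batch via precision weighting, which is what makes the sequential form attractive.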
We want to compute the estimate variance σ²(t). Consider again step t = 2. From the definition of precision, 1/σ²(2) = 1/σ²(1) + 1/σ_2². Using the definition of K_2 = σ²(1)/(σ²(1) + σ_2²) to substitute into the latter, we obtain σ²(2) = (1 − K_2)·σ²(1).
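A quick numerical check of this identity (the variance pairs below are chosen arbitrarily):

```python
# Verify that (1 - K2) * var1 equals the precision-sum formula
# 1/var(2) = 1/var1 + 1/var2, for a few arbitrary variance pairs.
for var1, var2 in [(4.0, 1.0), (2.5, 2.5), (0.3, 9.0)]:
    K2 = var1 / (var1 + var2)                        # Kalman gain at step 2
    via_gain = (1.0 - K2) * var1                     # variance via the gain form
    via_precision = 1.0 / (1.0 / var1 + 1.0 / var2)  # variance via precisions
    print(var1, var2, via_gain, via_precision)       # last two columns agree
```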
KF in a more general form
Our example was static and very basic; we can generalize it in many directions. The caravan can move along the track, so that its position changes over time (a dynamic model). We can extend it to more variables of interest, e.g. the caravan's velocity (x becomes a vector). We can also consider that the caravan's position and/or velocity may be subject to random shocks (vector u), e.g. a weather change. We can have some control variables c affecting the states (food/water given to the camels, camel reins/brakes). Finally, we may have more measurement variables z.
The model becomes: x_t = F_t·x_{t−1} + B_t·c_t + u_t (state equation) and z_t = H_t·x_t + v_t (measurement equation), where:
■ x is the "state vector" of variables of interest;
■ F is the "state transition matrix", measuring the effect of each realization of x at time t−1 on the system state at time t;
■ c is the vector containing any "control inputs" (e.g. policy instruments), and B is the control input matrix;
■ u is the vector of process noise terms for each variable in x: u_t ~ multivariate N(0, Q_t), where Q_t is the variance-covariance matrix;
■ z is the vector of measurements;
■ H is the transformation matrix mapping x into z;
■ v is the vector of measurement noise terms for each measurement variable in z: v_t ~ multivariate N(0, R_t), white noise with variance-covariance matrix R_t.
Comparison: general algorithm vs. example algorithm. In the static example, the gain reduces to K_t = σ²(t−1) / (σ²(t−1) + σ_t²).
Algorithm functioning
[step t], given P_{t|t−1}, x_{t|t−1}:
Measurement update in t:
- compute K_t using (∎);
- observe z_t and update the estimates to x_{t|t} and P_{t|t} using (∗).
Time update in t:
- P_{t+1|t} = F_{t+1}·P_{t|t}·F'_{t+1} + Q_{t+1}
- x_{t+1|t} = F_{t+1}·x_{t|t} + B_{t+1}·c_{t+1}
[step t+1], given P_{t+1|t}, x_{t+1|t}: repeat.
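The predict/update loop above can be sketched for a scalar state in pure Python (the model x_t = F·x_{t−1} + u_t, z_t = H·x_t + v_t; the parameter values, diffuse prior, and measurement stream are all invented, and the B·c control term is omitted):

```python
def kf_loop(zs, F=1.0, Q=0.5, H=1.0, R=1.0, x0=0.0, P0=10.0):
    """Scalar Kalman filter: alternate measurement update and time update.

    Returns the sequence of filtered estimates x_{t|t}.
    x0, P0 play the role of x_{1|0}, P_{1|0} (a deliberately diffuse prior).
    """
    x, P = x0, P0
    filtered = []
    for z in zs:
        # measurement update in t
        K = P * H / (H * P * H + R)   # Kalman gain (scalar version of (∎))
        x = x + K * (z - H * x)       # x_{t|t}
        P = (1.0 - K * H) * P         # P_{t|t}
        filtered.append(x)
        # time update in t
        x = F * x                     # x_{t+1|t} (no B*c term in this sketch)
        P = F * P * F + Q             # P_{t+1|t}
    return filtered

print(kf_loop([1.2, 0.8, 1.1, 0.9]))  # filtered estimates track the data
```

With the diffuse prior, the first update lands almost on the first measurement; later updates weigh the prediction and the new observation according to their variances.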
The time update (derivation)
Summary and conclusions
The Kalman Filter is a relatively simple algorithm which allows one to estimate unobservable variables efficiently, using the available statistical information (it is the BLUE we can use to solve an inference problem). It also allows one to forecast variables. It is semi-structural: it can be used to exploit structural equations representing economic (deterministic) relationships among variables. These relationships correspond to some theory; hence, they can be judged and discussed conceptually. Main drawbacks: it reacts strongly to starting values and to the specification of the estimated equations; and in small samples, it tends to underestimate the variance matrix of the state variables x, which implies that the estimated x may be too smooth. One way out is to revise the specification: either constrain the variance of the state variables or the signal-to-noise ratio (the smoothness of the state variables relative to that of the observation variable), or add extra equations/information.