1
A word on metadata sheets and observed heterogeneity in ad hoc quality indicators of BCS data
Presentation by Christian Gayer, DG ECFIN A.4.2, Business and consumer surveys and short-term forecast
BCS Workshop, 15-16 November 2012, Brussels
2
Background (1)
- Transparency calls for regular updating of metadata sheets
- Apart from contact data etc., metadata sheets contain valuable info on sampling (universe, frame, sampling method, sample size, sampling error, response rates, treatment of non-response, weighting, etc.)
- Ideally, the sheets should enable users to gauge the "quality" of survey data
3
Background (2)
- Quality is multi-faceted (relevance, accuracy, timeliness, accessibility, comparability, …)
- Focus here on accuracy
- Components are: sampling errors and non-sampling errors (frame, measurement, processing, non-response, assumptions)
- Sampling error depends on 1) the inherent variability of the figures to be measured and 2) the sampling design, especially the sample size ("sample 4 times larger → sampling error 2 times smaller"; see the sketch below)
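To make the "4 times larger → 2 times smaller" rule concrete, here is a minimal sketch (not part of the original slides; the answer shares and sample sizes are purely illustrative) of the approximate standard error of a balance statistic under simple random sampling:

```python
import math

def balance_standard_error(p_plus: float, p_minus: float, n: int) -> float:
    """Approximate standard error (in percentage points) of a survey balance
    B = 100 * (p_plus - p_minus), assuming simple random sampling and
    multinomial answer shares."""
    variance = (p_plus + p_minus - (p_plus - p_minus) ** 2) / n
    return 100 * math.sqrt(variance)

# Quadrupling the sample size roughly halves the sampling error:
for n in (500, 2000):
    print(n, round(balance_standard_error(0.35, 0.25, n), 2))
# 500 -> ~3.4 pp, 2000 -> ~1.7 pp
```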
4
Background (3)
- A high sampling error reduces the accuracy of point estimates
- It also leads to higher volatility of the estimates over time
- Month-on-month changes are therefore subject to noise, which increases with the sampling error
- Currently: particular interest in signals of turning points
- The more noise, the more difficult it is to detect turning points
- We look at ad hoc quality indicators of BCS data
5
Outline
Focus here on:
1. Sample sizes
2. Volatility
3. Tracking performance
6
1. Gross sample sizes
- Top (green) and lowest (red) 10
- Samples vary broadly as a function of country size
7
Gross sample sizes (total)
- Outliers particularly visible for large countries
- Response rates have to be taken into account
8
Effective sample sizes
- Top (green) and lowest (red) 10
- Effective samples can be significantly reduced, reflecting low response rates (→ bias and higher sampling error; illustrated below)
- Largest effective samples in industry & services, smallest in retail & building
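For illustration (the numbers here are hypothetical, not taken from the charts): a gross sample of 2,500 firms with a 40% response rate leaves an effective sample of 1,000; since the sampling error scales roughly with 1/√n, it is about √2.5 ≈ 1.6 times larger than the gross sample size alone would suggest.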
9
Effective sample sizes (total)
- Outliers persist for at least two countries
10
2. Measure of the volatility/noise in the series: months for cyclical dominance (MCD)
- MCD = time span (in months) that one has to wait before a change of direction can be attributed to cyclical rather than irregular (noise) factors
- Based on a time series decomposition into a trend-cycle and an irregular component
- Computation of month-on-month, 2-month, 3-month, etc. changes
- Computation of the absolute averages of these n-period changes
- Comparison of the two components (ratio irregular/trend-cycle); see the worked example and sketch below
11
Example:
mean|IR − IR(−1)| = 2.19 > mean|TC − TC(−1)| = 0.88
mean|IR − IR(−2)| = 2.18 > mean|TC − TC(−2)| = 1.74
mean|IR − IR(−3)| = 2.16 < mean|TC − TC(−3)| = 2.57
→ MCD = 3
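A minimal sketch of the MCD logic in this example (an assumption, not the official ECFIN computation: the trend-cycle TC is approximated here by a centred 13-term moving average and the irregular component IR is the remainder):

```python
import numpy as np

def mcd(series: np.ndarray, window: int = 13, max_span: int = 6) -> int:
    """Months for cyclical dominance: smallest span n for which the average
    absolute n-period change of the trend-cycle exceeds that of the
    irregular component."""
    # Crude trend-cycle estimate: centred moving average (an assumption)
    kernel = np.ones(window) / window
    tc = np.convolve(series, kernel, mode="valid")
    ir = series[window // 2: window // 2 + len(tc)] - tc  # irregular = remainder

    for span in range(1, max_span + 1):
        mean_abs_ir = np.mean(np.abs(ir[span:] - ir[:-span]))
        mean_abs_tc = np.mean(np.abs(tc[span:] - tc[:-span]))
        if mean_abs_ir / mean_abs_tc < 1:  # cycle dominates the noise
            return span
    return max_span  # noise still dominates after max_span months
```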
12
MCDs for confidence indicators
- 1 or 2: green; 4 or more: red
- EU/EA aggregates are smoother
- ESIs are smoother than CIs
- Some CIs indicate a change in cyclical conditions immediately (MCD=1)
- Order of MCDs across surveys in line with sample sizes (industry & services < consumers < retail & building)
- Strong variation across countries
- The irregular component can dominate the cycle even after 4, 5 or 6 months
- Caveat: not always the same sample
13
Some examples (1): n=2660 → MCD=1; n=700 → MCD=5; n=1625 → MCD=1; n=601 → MCD=5
14
Some examples (2): n=651 → MCD=2; n=180 → MCD=4; n=2400 → MCD=1; n=190 → MCD=5
15
But… n=743 → MCD=2, while n=2438 → MCD=5
16
Plotting sample sizes (effective) vs. MCDs
17
Continuous measure: ratio of the average absolute 2-month changes in the irregular component to those in the trend-cycle
- Overall: no strong evidence, but very large samples correspond with low volatility (exceptions: retail in FR and PL; services in RO)
18
Summary on volatility
- MCDs as a materialisation of the sampling error
- Wide differences in the usefulness of the results for detecting trends and turning points
- In some cases volatility has to be reduced, otherwise short-term noise buries the cyclical information we are interested in
- Ways to reduce volatility: larger samples, higher response rates, better stratification/weighting, stabilisation of (panel) responses, … (a weighting sketch follows below)
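As a small illustration of the weighting point (the +1/0/−1 answer coding and the weighting scheme are assumptions for illustration, not a description of the harmonised methodology), a stratum- or size-class-weighted balance could be computed along these lines:

```python
import numpy as np

def weighted_balance(answers: np.ndarray, weights: np.ndarray) -> float:
    """Balance (% positive minus % negative answers) with respondent-level
    weights, e.g. stratum or size-class weights; answers coded +1/0/-1."""
    w = weights / weights.sum()
    return 100 * float(np.sum(w * (answers == 1)) - np.sum(w * (answers == -1)))
```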
19
3. Tracking performance
- Behaviour with respect to the hard reference series the indicators are supposed to track (an illustrative correlation sketch follows below)
- Reference series: growth in GDP, IP, value added in services, private consumption, building production index
- Biased estimates (due to e.g. frame errors or systematic non-response) should show worse tracking performance than unbiased estimates
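A rough sketch of how such a tracking measure could be computed (an assumption for illustration, not the exact procedure behind the tables that follow): correlate the quarterly-averaged monthly confidence indicator with year-on-year growth of the quarterly reference series.

```python
import pandas as pd

def tracking_correlation(confidence: pd.Series, reference: pd.Series) -> float:
    """Correlation between a monthly confidence indicator and y-o-y growth
    of a quarterly reference series (both assumed to carry a DatetimeIndex,
    with the reference series indexed at quarter-end dates)."""
    ci_q = confidence.resample("Q").mean()       # monthly -> quarterly average
    ref_growth = reference.pct_change(4) * 100   # year-on-year growth in %
    aligned = pd.concat([ci_q, ref_growth], axis=1).dropna()
    return aligned.corr().iloc[0, 1]
```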
20
Correlation with reference series
- >75%: green, <50%: red
- Correlation of EU/EA aggregates is higher, except for retail
- The ESI is more strongly correlated with GDP than the CIs with their sector reference series
- CIs in industry, services and building on average better than consumers and retail (can also point to worse CI composition)
- Some countries fare much better than others
- There should be some positive correlation between broad sector CIs and the respective output data
- Link to sample sizes?
21
Correlations vs. sample sizes
- Some visual correspondence, but not significant
- Non-sampling errors likely more important (frames, systematic non-response, …)
22
Conclusions (1)
- Few institutes provide information about the sampling error of their estimates
- We need some measure of the volatility / the margin of error around the balance results
- Here we looked at MCDs instead
- Strong differences across countries w.r.t. sample sizes, smoothness/volatility and tracking performance (and these are already aggregated indicators for total sectors, combining several questions!)
23
Conclusions (2)
- Which factors are driving these differences?
- Volatility: sample size and other characteristics of the sampling method, …
- Tracking performance/bias: inappropriate frame, treatment of non-response, …
- Need to develop a common framework for the assessment…
- … with a view to a necessary improvement of the results in some cases
- We propose to set up a Taskforce "BCS quality assessment framework"
24
Thanks for your attention