QUALITY 2008 - European Conference on Quality in Official Statistics, Rome, 8-11 July 2008

Revision analysis to detect possible weaknesses in the estimation procedures: an application to the Italian IIP
A. Ciammola, T. Gambuti and A. Mancini (ISTAT)
Outline
- Introduction
- Why revision analysis?
- A case study
- Next steps
Quality and some of its dimensions
- Accuracy: closeness of the estimate to the true (but unknown) value of the variable to be measured
- Timeliness: the span between the reference period and the publication period
- Reliability: closeness between the preliminary estimate and subsequent estimates; revisions provide a measure of reliability
Revision analysis
- Real-time databases: collection of vintages, computation of revisions
- Revisions (P = preliminary estimate, L = later estimate): Rt = Lt - Pt, or in relative terms Rt = (Lt - Pt) / Lt
- Revision measures: size (MAR, RMAR, ...), bias (MR, t-test), efficiency (news or noise?, MSR, ...)
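The size and bias measures listed above can be sketched in a few lines of Python. This is an illustrative implementation, not ISTAT's code; in particular, RMAR is taken here as MAR divided by the mean absolute value of the later estimates, which is one common convention.

```python
# Illustrative sketch (not ISTAT's code): revision measures for a vector of
# preliminary estimates P and later estimates L of y-o-y growth rates.
def revision_measures(preliminary, later):
    revisions = [l - p for p, l in zip(preliminary, later)]  # R_t = L_t - P_t
    n = len(revisions)
    mar = sum(abs(r) for r in revisions) / n             # Mean Absolute Revision
    rmar = mar / (sum(abs(l) for l in later) / n)        # Relative MAR (one convention)
    mr = sum(revisions) / n                              # Mean Revision (bias)
    return {"MAR": mar, "RMAR": rmar, "MR": mr}

# Toy vintages of y-o-y growth rates (illustrative numbers)
prelim = [1.2, -0.5, 0.8, 2.1]
later = [1.4, -0.4, 0.7, 2.3]
print(revision_measures(prelim, later))
```

A near-zero MR with a sizeable MAR indicates noisy but unbiased preliminary estimates; a MR far from zero points to a systematic component in the revision process.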
Useful references
OECD / Eurostat Guidelines on Revisions Policy and Analysis (http://www.oecd.org)
Themes related to revision policy and analysis:
- Recommendations for revisions policy and analysis
- Guidelines for establishing a real-time database
- Recommended statistical measures
- Pre-programmed software for performing revision analysis
- A framework for the revisions policy of key economic indicators
- A comprehensive framework of reasons for revisions and their timing
- Guidelines on how to decompose total revision into the different reasons for revisions
- Guidelines on how to use the results of revision analyses to improve compilation methods
- Case studies on the relationship between timeliness of release and size of revisions
Why do we measure revisions? For users
Objective: make all the relevant information available for using the estimates of short-term indicators appropriately at different stages of the revision process
- provision of information about past revisions
- schedule of future revisions (statistical and definitional)
- real-time databases gathering all the vintages
- analysis of the size, bias and efficiency of revisions
Why do we measure revisions? For producers
Underlying issues: bias in the revision process; inefficiency in the compilation of preliminary estimates
Targets: reduction of (the size of) "avoidable" revisions; detection of the sources of bias / inefficiency
A case study: the Italian Index of Industrial Production (IIP)
1. Sources and timing of revisions
2. Revision analysis
3. Identification of specific sources of bias
4. Some evidence
1. Sources and timing of revisions
[Diagram: revision calendar over the reference months (January-December) of the current year Y(t) and of years Y(t-1) to Y(t-3), showing when the first estimate, the second estimate, the six-month revision and the annual revision incorporate each source of revision]
Sources of revision: LR = late respondents, CE = correction of errors, PC = productivity coefficients
2. Revision analysis
IIP - revisions on year-on-year growth rates (raw indices), period Jan-03 / Dec-07

                        h=1      h=12
# of revisions          60       48
MAR                     0.142    0.246
RMAR                    0.053    0.087
MR                      0.075    0.083
SD of MR (HAC)          0.021    0.056
T-value                 3.564    1.489
Significance of MR      Yes      No

Legend: h=1 = after one month; h=12 = after 12 months; MAR = Mean Absolute Revision; RMAR = Relative MAR; MR = Mean Revision; SD = standard deviation; HAC = heteroskedasticity- and autocorrelation-consistent
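The "SD of MR (HAC)" and t-value rows can be reproduced in spirit with a Newey-West (Bartlett-kernel) estimator of the long-run variance of the revisions. The sketch below is illustrative: the slide does not state the bandwidth used, so the lag choice here is an assumption.

```python
import math

def hac_t_stat(revisions, lags=4):
    """t-statistic for H0: mean revision = 0, using a Newey-West
    (Bartlett kernel) HAC standard error; `lags` is illustrative."""
    n = len(revisions)
    mr = sum(revisions) / n
    c = [r - mr for r in revisions]
    # long-run variance: gamma_0 + 2 * sum_k w_k * gamma_k, w_k = 1 - k/(lags+1)
    lrv = sum(x * x for x in c) / n
    for k in range(1, lags + 1):
        gamma_k = sum(c[t] * c[t - k] for t in range(k, n)) / n
        lrv += 2.0 * (1.0 - k / (lags + 1)) * gamma_k
    return mr / math.sqrt(lrv / n)
```

A |t| above roughly 2 marks the mean revision as significant at about the 5% level, which is how the Yes/No row of the table is read (significant at h=1, not at h=12).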
2. Revision analysis
[Chart: IIP revisions after one month on year-on-year growth rates (raw indices)]
2. Revision analysis
Why this systematic component?
- Late respondents?
- Correction of errors?
- Productivity coefficients? (in revisions after one month, only July 2004, January 2005, January 2006 and January 2007 are affected)
Which sectors? All sectors, or some specific sector?
How to proceed? A simulation exercise, a top-down approach and quality indicators
3. Identification of specific sources of bias
Simulation exercise: removal of the effect of the productivity coefficients
- isolates the sources of revisions external to the survey
- fulfils the condition necessary to compute the average contribution of each component to the IIP revisions
Note: revisions are computed on year-on-year growth rates
3. Identification of specific sources of bias
[Diagram describing the top-down approach]
3. Identification of specific sources of bias
Quality indicators:
- revision measures
- contribution of each component to the mean revision of the higher-level aggregate
- response rates
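The "contribution of each component to the mean revision" indicator can be sketched as follows. Names and numbers are illustrative, and the sketch assumes (as the simulation exercise is designed to ensure) that the aggregate growth rate is a fixed-weight average of the component growth rates, so that the aggregate revision splits exactly into weighted component revisions.

```python
# Illustrative decomposition (names assumed, not ISTAT's code): with fixed
# weights, each component's contribution to the aggregate mean revision is
# weight * component mean revision.
def contributions_to_mr(component_revisions, weights):
    # component_revisions: {name: [revisions per period]}, weights: {name: share}
    contrib = {}
    for name, revs in component_revisions.items():
        mr = sum(revs) / len(revs)
        contrib[name] = weights[name] * mr
    return contrib

# Toy example with two groupings and their index weights
revs = {"INT": [0.2, 0.1, 0.15], "ENE": [-0.05, 0.0, 0.05]}
w = {"INT": 0.355, "ENE": 0.117}
print(contributions_to_mr(revs, w))
```

Summing the contributions over all components recovers the aggregate mean revision, which is what makes the top-down search for the source of bias possible.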
4. Some evidence
MIGs - revisions after one month on raw year-on-year growth rates, period Jan-03 / Dec-07

                        CND      CDU      CAP      INT      ENE
Weights %               22.9     6.1      23.8     35.5     11.7
MAR                     0.272    0.415    0.378    0.223    0.149
RMAR                    0.084    0.081    0.088    0.073    0.040
MR                      0.092    0.072    0.042    0.143    -0.003
Contribution to MR °    0.019    0.006    0.010    0.047    -0.003
SD of MR (HAC)          0.047    0.103    0.071    0.030    0.040
T-value                 1.962    0.694    0.589    4.724    -0.079
Significance of MR      No       No       No       Yes      No

Legend: CND = consumer non-durables; CDU = consumer durables; CAP = capital goods; INT = intermediate goods; ENE = energy
° Period Jan-04 / Dec-07
4. Some evidence
[Chart: revisions after one month on raw year-on-year growth rates, by main industrial grouping]
4. Some evidence
Average weighted response rates (%)

Year   Estimate   IIP     CND     CDU     CAP     INT     ENE
2004   First      91.5    93.9    94.3    90.3    88.3    97.4
2004   Second     95.0    95.7    96.1    93.4    93.8    99.6
2005   First      90.2    90.6    93.5    88.1    87.9    98.7
2005   Second     93.3    93.1    95.4    91.3    92.4    100.0
2006   First      88.7    89.0    90.4    87.4    86.4    97.3
2006   Second     91.7    91.4    92.5    90.1            99.9
2007   First      83.7    84.7    82.4    80.8    80.6    97.6
2007   Second     87.6    88.4    86.0    85.6    84.9    99.0
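The rates in the table above are weighted averages. A minimal sketch of how such a figure can be computed, with illustrative sectoral rates and weights rather than the survey's actual data:

```python
# Hedged sketch: weighted average response rate from sectoral response
# rates and index weights (illustrative numbers, not ISTAT's data).
def weighted_response_rate(rates, weights):
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# e.g. two sectors with weights 3:1
print(weighted_response_rate([90.0, 100.0], [3.0, 1.0]))  # 92.5
```

Weighting matters here because a low response rate in a heavily weighted sector (such as INT) degrades the first estimate more than the same rate in a small sector.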
4. Some evidence
Revisions after one month on raw year-on-year growth rates, period Jan-04 / Dec-07

                      S               S^c(INT)   S^c(IIP)
Weights %             32.3 / 11.5     67.7       88.5
MAR                   0.362           0.263      0.159
RMAR                  0.100           0.082      0.055
MR                    0.263           0.071      0.056
Contribution to MR    0.088 / 0.030   0.047      0.049
T-value               3.985           1.407      1.766
P-value               0.000           0.166      0.084

Legend: S = selected subset of INT (19 classes); S^c(INT) = complement of S in INT (S U S^c(INT) = INT); S^c(IIP) = complement of S in IIP (S U S^c(IIP) = IIP)
Next steps
- Checking the stability of the results over time
- Experimenting with possible countermeasures to biased revisions: treatment of late respondents with different estimators; assessment of their effects on revisions
- An ISTAT web page on revisions: a real-time database for several short-term indicators, with revision analysis
THANK YOU!