Calibrated imputation of numerical data under linear edit restrictions
Jeroen Pannekoek, Natalie Shlomo, Ton de Waal
Missing data
Data may be missing from collected data sets.
Unit non-response: data from entire units are missing; often dealt with by means of weighting.
Item non-response: some items from units are missing; usually dealt with by means of imputation.
Linear edit restrictions
Data often have to satisfy edit restrictions. For numerical data most edits are linear.
Balance equations: a_1 x_1 + a_2 x_2 + … + a_n x_n + b = 0
Inequalities: a_1 x_1 + a_2 x_2 + … + a_n x_n + b ≥ 0
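As a minimal sketch, linear edits can be checked mechanically: each edit is a coefficient vector plus a constant, and a record passes when every balance equation evaluates to zero and every inequality is non-negative. The variable names (net, tax, gross) anticipate the worked example used later in the presentation; the function name is illustrative, not from the paper.

```python
def satisfies_edits(record, balance_edits, inequality_edits, tol=1e-9):
    """balance edit: (coeffs, b) meaning sum(a_i * x_i) + b == 0;
       inequality edit: (coeffs, b) meaning sum(a_i * x_i) + b >= 0."""
    def lhs(coeffs, b):
        return sum(a * record[v] for v, a in coeffs.items()) + b
    return (all(abs(lhs(c, b)) <= tol for c, b in balance_edits)
            and all(lhs(c, b) >= -tol for c, b in inequality_edits))

balance = [({"net": 1, "tax": 1, "gross": -1}, 0)]   # net + tax - gross = 0
inequal = [({"net": 1, "tax": -1}, 0),               # net >= tax
           ({"net": 1}, 0)]                          # net >= 0

print(satisfies_edits({"net": 80, "tax": 20, "gross": 100}, balance, inequal))  # True
print(satisfies_edits({"net": 80, "tax": 30, "gross": 100}, balance, inequal))  # False
```

The tolerance matters in practice: imputed values are floating-point, so balance equations should be tested approximately rather than exactly.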
Totals
Sometimes totals are also known:

x_11  x_12  x_13
x_21  x_22  x_23
 …     …     …
x_r1  x_r2  x_r3
----  ----  ----
X_1   X_2   X_3

Each column total X_j is the known sum of the values x_ij in column j.
Eliminating balance equations
We can “eliminate” balance equations. Example: consider the set of edits
net + tax − gross = 0
net ≥ tax
net ≥ 0
Eliminating the balance equation (substitute net = gross − tax) gives
gross − tax ≥ tax
gross − tax ≥ 0
Eliminating balance equations
By eliminating all balance equations we only have to deal with inequality edits. If we impute variables sequentially, we only have to ensure that each imputed value lies in an interval L_i ≤ x_i ≤ U_i. We can then focus on satisfying the totals.
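Once only inequality edits remain, the feasible interval for one variable follows by rearranging each edit a·x + c ≥ 0, where c collects all the terms that are already known. A small sketch, with illustrative function and variable names (real edit systems would use Fourier–Motzkin elimination):

```python
def feasible_interval(edits):
    """edits: list of (a, c) meaning a*x + c >= 0. Returns (L, U)."""
    lo, hi = float("-inf"), float("inf")
    for a, c in edits:
        if a > 0:                 # a*x + c >= 0  ->  x >= -c/a
            lo = max(lo, -c / a)
        elif a < 0:               # a*x + c >= 0  ->  x <= -c/a
            hi = min(hi, -c / a)
    return lo, hi

# After eliminating net = gross - tax, with gross = 100 observed:
# gross - tax >= tax  ->  -2*tax + 100 >= 0
# gross - tax >= 0    ->  -1*tax + 100 >= 0
# tax >= 0            ->   1*tax + 0   >= 0
print(feasible_interval([(-2, 100), (-1, 100), (1, 0)]))  # (0.0, 50.0)
```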
Imputation methods
Adjusted predicted mean imputation
Adjusted predicted mean imputation with random residuals
MCMC approach
Adjusted predicted mean imputation
We use sequential imputation: all missing values for a variable (the target variable) are imputed simultaneously. To impute target column x_t we use the model x_t = β_0 + β x_p + e and impute x_t = β_0 + β x_p. These imputed values satisfy neither the edits nor the totals.
Satisfying totals
The totals of the missing data for the target variable (X_t,mis) and for the predictor (X_p,mis) are known. We construct the following model for the observed data:
x_t,obs = β_0 + β x_p,obs + e
X_t,mis = β_1 m + β X_p,mis
where m is the number of missing values. We apply OLS to estimate the model parameters and impute x_t,mis = β_1 + β x_p,mis. The sum of the imputed values then equals the known value of the total.
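To illustrate why the benchmark constraint pins down the intercept: given a slope β, the second equation X_t,mis = β_1 m + β X_p,mis can be solved directly for β_1, and the imputations then reproduce the known total exactly. This simplified sketch takes the slope as already estimated (the presentation estimates both parameters jointly by OLS); all values are made up for illustration.

```python
b1 = 1.0                                 # slope, assumed already estimated
x_p_mis = [15.0, 25.0]                   # predictors of records with missing target
X_t_mis = 50.0                           # known total of the missing target values
m = len(x_p_mis)

# Solve X_t_mis = b1_star * m + b1 * sum(x_p_mis) for the intercept b1_star.
b1_star = (X_t_mis - b1 * sum(x_p_mis)) / m
imputed = [b1_star + b1 * x for x in x_p_mis]

print(imputed, sum(imputed))             # [20.0, 30.0] 50.0
```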
Satisfying totals and intervals (edits)
We impute x_t,mis = β_1 + β x_p,mis + a_t, where the adjustments a_t,i are chosen in such a way that the imputed values lie in their feasible intervals and Σ_i a_t,i = 0. Appropriate values for a_t,i can be found by means of an operations research technique; for a simple alternative technique, see the paper.
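One simple way to realise zero-sum adjustments is to clip each raw imputation into its interval and then redistribute the resulting discrepancy over the records that still have slack. This greedy clip-and-redistribute loop is an illustrative stand-in for the operations research formulation mentioned on the slide, not the paper's method; the data are made up.

```python
def adjust(values, lows, highs, tol=1e-9):
    """Move values into [lows, highs] while preserving their sum."""
    target = sum(values)                   # total that must be preserved
    vals = [min(max(v, lo), hi) for v, lo, hi in zip(values, lows, highs)]
    for _ in range(100):
        d = target - sum(vals)             # discrepancy created by clipping
        if abs(d) <= tol:
            break
        # Slack towards the relevant bound for each record.
        slack = [(hi - v) if d > 0 else (v - lo)
                 for v, lo, hi in zip(vals, lows, highs)]
        total_slack = sum(slack)
        # Spread the discrepancy proportionally to the available slack.
        vals = [v + d * s / total_slack for v, s in zip(vals, slack)]
    return vals

raw = [60.0, -10.0]                        # raw imputations, sum = 50
adjusted = adjust(raw, lows=[0.0, 0.0], highs=[55.0, 55.0])
print(adjusted, sum(adjusted))             # [50.0, 0.0] 50.0
```

Because each move is bounded by the slack, the adjusted values stay inside their intervals whenever a feasible solution exists.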
Satisfying totals and intervals (edits)
Alternatively, draw m residuals by acceptance/rejection sampling from a normal distribution (with zero mean and the residual variance of the regression model) such that they satisfy the interval constraints, and then adjust the random residuals to meet the sum constraint, as was done for the a_t,i.
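The acceptance/rejection step can be sketched as follows: keep drawing Normal(0, σ) residuals until the fitted value plus the residual lands inside the interval. Here sigma stands in for the residual standard deviation of the regression model, and all numbers are illustrative.

```python
import random

def truncated_normal_residual(fitted, lo, hi, sigma, rng):
    """Draw e ~ Normal(0, sigma) until fitted + e lies in [lo, hi]."""
    while True:
        e = rng.gauss(0.0, sigma)
        if lo <= fitted + e <= hi:
            return e

rng = random.Random(1)
residuals = [truncated_normal_residual(25.0, 20.0, 30.0, 4.0, rng)
             for _ in range(1000)]
print(all(20.0 <= 25.0 + e <= 30.0 for e in residuals))  # True
```

Rejection sampling is adequate when the interval covers most of the distribution's mass; for narrow intervals far from the mean, a dedicated truncated-normal sampler is more efficient.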
MCMC approach
Start with a pre-imputed consistent dataset. Randomly select two records, and select a variable in these records. Note that the sum of the two values of this variable is known for the two records.
MCMC approach
We then apply the following two steps:
1. Determine the feasible intervals for the two values.
2. Draw a value for one missing value; the other value then follows immediately.
Steps 1 and 2 are repeated until “convergence”. In Step 2 we draw a value from the posterior predictive distribution implied by a linear regression model under an uninformative prior, conditional on it lying inside the corresponding interval.
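One such update can be sketched as follows: for two records whose values of a variable must keep a fixed pair sum s, the feasible interval for the first value combines its own bounds with the bounds implied by the second value, and drawing one value determines the other. For simplicity this sketch draws uniformly on the interval, whereas the presentation draws from a truncated posterior predictive distribution; all bounds are made up.

```python
import random

def pair_update(s, lo1, hi1, lo2, hi2, rng):
    """Redraw (x1, x2) with x1 + x2 = s, respecting both records' intervals."""
    lo = max(lo1, s - hi2)        # x2 = s - x1 must satisfy lo2 <= x2 <= hi2
    hi = min(hi1, s - lo2)
    x1 = rng.uniform(lo, hi)      # Step 2: draw one value...
    return x1, s - x1             # ...the other follows from the fixed sum

rng = random.Random(7)
x1, x2 = pair_update(s=100.0, lo1=0.0, hi1=80.0, lo2=30.0, hi2=90.0, rng=rng)
print(round(x1 + x2, 9))          # 100.0 (the pair sum is preserved)
```

Because every update preserves the pair sum, the chain never leaves the set of datasets that satisfy the known totals.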
Evaluation study: methods
The evaluated imputation methods were:
UPMA: unbenchmarked simple predictive mean imputation, with adjustments so that the imputations satisfy the interval constraints
BPMA: benchmarked predictive mean imputation, with adjustments so that the imputations satisfy the interval constraints and the totals
MCMC: the MCMC approach, using BPMA with adjustments as the pre-imputed dataset
Evaluation study: data set
11,907 individuals aged 15 and over who responded to all questions in the 2005 Israel Income Survey and earned more than 1,000 Israeli Shekels in monthly gross income. Item non-response was introduced randomly into the income variables: 20% of records were selected at random and their net income variable deleted, and 20% of records were selected at random and their tax variable deleted, with 10% of those records overlapping the records with missing net income. The totals of each of the income variables are known.
Evaluation study: data set
We focus on three variables from the Income Survey:
gross: gross income from earnings
net: net income from earnings
tax: tax paid
Edits:
net + tax = gross
net ≥ tax
gross ≥ 3 × tax
gross ≥ 0, net ≥ 0, tax ≥ 0
A log transform was carried out on the variables to ensure normality of the data.
Evaluation criteria
d_L1: average distance between imputed and true values
Z: number of imputed records on the boundary of the feasible region defined by the edits
K-S: Kolmogorov–Smirnov statistic comparing the empirical distribution of the original values to the empirical distribution of the imputed values
Sign: sign test carried out on the differences between original and imputed values
Kappa: kappa statistic for a two-dimensional contingency table; compares agreement against that which might be expected by chance
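Two of these criteria are simple enough to sketch directly: the average L1 distance between true and imputed values, and the (unnormalised) Kolmogorov–Smirnov distance between their empirical distributions. Function names and data are illustrative, not from the paper.

```python
def d_l1(true, imputed):
    """Average absolute distance between true and imputed values."""
    return sum(abs(t, ) if False else abs(t - i) for t, i in zip(true, imputed)) / len(true)

def ks_distance(a, b):
    """Max gap between the empirical CDFs of samples a and b."""
    def ecdf(xs, x):
        return sum(1 for v in xs if v <= x) / len(xs)
    grid = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

true    = [10.0, 20.0, 30.0, 40.0]
imputed = [12.0, 18.0, 33.0, 40.0]
print(d_l1(true, imputed))         # 1.75
print(ks_distance(true, imputed))  # 0.25
```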
Results: net income

         UPMA     BPMA      MCMC
d_L1     2266.1   2132.6    4304.8
Z        204      11        1
K-S      3.535    5.129     9.100
Sign     0.0147   < 0.0001  0.0001
Kappa    0.161    0.178     0.117
Results: tax

         UPMA      BPMA      MCMC
d_L1     786.8     821.7     1393.7
Z        123       12        0
K-S      3.521     9.129     11.158
Sign     < 0.0001  < 0.0001  < 0.0001
Kappa    0.418     0.421     0.226
Conclusions
The MCMC approach performs worse than the other methods on all criteria except the number of records that lie on the boundary. However, MCMC allows multiple imputation, in order to take imputation uncertainty into account in variance estimation. BPMA appears to be slightly better than UPMA, except for the K-S statistic. The number of records that lie on the boundary for UPMA is a cause for concern; in this respect the MCMC approach does slightly better than BPMA.
Future research
Improving the MCMC approach. Carrying out multiple imputation using the MCMC approach to obtain proper variance estimation. In our study a log transformation was carried out on the variables to ensure normality of the data; a correction factor was introduced into the constant term of the regression model to correct for this log transformation, and a better approach to this problem will be investigated. Extending the problem to situations with unequal sampling weights.