Examining the Trade-Off between Sampling and Non-response Error in a Targeted Non-response Follow-up
Sarah Tipping and Jennifer Sinibaldi, NatCen
Background
- Groves and Heeringa (2006) drew a second-phase sample as part of the responsive design for NSFG Cycle 6.
- They estimated that this increased sampling variance by approximately 20%.
- We want to introduce responsive design into NatCen surveys.
Overview of Methods
- Two months of data from the Health Survey for England 2009 were used to simulate a responsive design protocol.
- A cut-off was set for Phase 1 and response propensities were modelled.
- Three Phase 2 samples were drawn.
- The designs were assessed by comparing response, bias, variance and mean square error.
Objectives
If we implement responsive design…
- Will we reduce bias?
- Will the inflation in variance outweigh the gains in bias?
Phase 1
- Phase 1 of the simulation ended after all cases had been called four times.
- Data from Phase 1 were used to model response: a discrete-time hazard model fitted to call-level data.
- The model included information about the calls, interviewer characteristics and area-level information (census and other measures).
- The predicted probabilities were saved and used at Phase 2 to draw the sample (see the sketch below).
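A minimal sketch of this propensity-modelling step, assuming illustrative variable names (case_id, call_number, area_deprivation, responded) and a plain logistic fit standing in for the full discrete-time hazard specification; the actual HSE 2009 covariates and model are richer.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical call-level records: one row per call attempt in Phase 1.
calls = pd.DataFrame({
    "case_id":          [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
    "call_number":      [1, 2, 3, 1, 2, 1, 2, 3, 4, 1],
    "area_deprivation": [0.2, 0.2, 0.2, 0.8, 0.8, 0.5, 0.5, 0.5, 0.5, 0.1],
    "responded":        [0, 0, 1, 0, 1, 0, 0, 0, 0, 1],
})

# Discrete-time hazard idea: model the probability of responding at each
# call attempt, given the case has not yet responded, from call-level data.
X = calls[["call_number", "area_deprivation"]]
model = LogisticRegression().fit(X, calls["responded"])
calls["p_respond"] = model.predict_proba(X)[:, 1]

# Case-level propensity (probability of responding at least once over the
# recorded calls), carried forward to draw the Phase 2 sample.
propensity = calls.groupby("case_id")["p_respond"].apply(lambda p: 1 - (1 - p).prod())
```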
Phase 2
- Select PSUs for re-issue at Phase 2.
- Three approaches:
  1. Cost-effective sample
  2. Pure bias reduction
  3. Cost-effective bias reduction
- All three designs selected PSUs with unequal selection probabilities, so selection weights are needed (a sketch of such a draw follows this list).
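One way an unequal-probability PSU draw and its selection weights might look, as a sketch with made-up inputs; psu_score, n_select and the scoring rule are assumptions for illustration, not the actual HSE design.

```python
import numpy as np

rng = np.random.default_rng(2009)

# Hypothetical PSU-level scores, e.g. expected yield or bias-reduction
# potential derived from the Phase 1 propensities; 40 PSUs, 10 re-issued.
psu_score = rng.uniform(0.2, 0.9, size=40)
n_select = 10

# Inclusion probabilities proportional to the score (all below 1 here).
pi = n_select * psu_score / psu_score.sum()

# Systematic PPS selection: one standard unequal-probability scheme.
cum = np.cumsum(pi)
selected = np.searchsorted(cum, rng.uniform(0, 1) + np.arange(n_select))

# Selection weights compensate for the unequal probabilities at analysis.
weights = 1.0 / pi[selected]
```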
Results
- The three Phase 2 sample designs were evaluated by comparing response, bias, variance and mean square error.
- The cost-effective design had the highest Phase 2 response rate, at 61% (n = 250).
- Pure bias reduction: 51% (n = 227).
- Cost-effective bias reduction: 53% (n = 236).
Interviewer observations by sample type
Household type
Other demographics
Bias – survey estimates (women)
Bias – difference in survey estimates (women)
Variance of survey estimates (women)
Mean Square Error
- MSE was generated for a selection of key health estimates.
- MSE = Variance + Bias²
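As a toy illustration of that decomposition (the numbers below are invented, not HSE 2009 results):

```python
# MSE = Variance + Bias^2 for a single survey estimate (made-up values).
benchmark = 0.22    # "true" value, e.g. the full-sample estimate
estimate  = 0.20    # weighted estimate under one Phase 2 design
variance  = 0.0004  # estimated sampling variance of that estimate

bias = estimate - benchmark    # -0.02
mse  = variance + bias ** 2    # 0.0004 + 0.0004 = 0.0008
```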
MSE for survey estimates (women)
Conclusions
- Results are positive!
- Focusing on cost effectiveness increases the bias of estimates.
- 'Pure' bias reduction does not perform much better than cost-effective bias reduction in terms of bias, variance inflation and MSE.
Discussion points
- There were discrepancies between the interviewer observations and the actual survey data (see the table below).
- Selection weights need careful consideration; we want to avoid large weights (a trimming sketch follows the table).
Actual data and interviewer observations
Table: interviewer observations (None / Smoke only / Kids only / Smoke & kids / Total) cross-tabulated against survey data (same categories).
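A minimal sketch of one common way to limit large weights, trimming at a cap and rescaling; the cap rule and values are illustrative assumptions, not the approach used in the study.

```python
import numpy as np

# Hypothetical Phase 2 selection weights with a couple of large values.
weights = np.array([1.5, 2.0, 2.2, 3.1, 9.5, 14.0])

# Trim (cap) weights at an illustrative threshold, then rescale so the
# trimmed weights preserve the original weighted total.
cap = np.percentile(weights, 90)
trimmed = np.minimum(weights, cap)
trimmed *= weights.sum() / trimmed.sum()
```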