Response Rates and Results of the Advance Letter Experiment 2004 RRFSS Workshop Toronto, June 23, 2004 David A. Northrup, Renée Elsbett-Koeppen and Andrea Noack ISR, York University

Outline
 general comments on response rates
   - how response rates are calculated
   - a very brief history of response rates
   - what strategies have been / are being put in place to deal with declining response rates

Outline (continued)
 response rates and RRFSS
   - what did it take to get the 62% rate for 2003 RRFSS
   - number of calls
   - refusal conversions
 results of the advance letter experiment

Calculating Response Rates
 completions / estimate of the number of eligible households (HH)
   - eligible HHs include completions, refusals, callbacks, and a % of the "never answered"
 ISR method is the same as BRFSS, aka "CASRO 3"
 RRFSS 2003 = 62%; excluding callbacks = 71%
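The calculation above can be sketched as follows. This is a minimal illustration of the CASRO-style formula; the disposition counts and the eligibility rate in the example are made up, not actual RRFSS figures.

```python
# Hedged sketch of a CASRO-style response rate from final call
# dispositions. All counts below are illustrative.
def response_rate(completions, refusals, callbacks, never_answered,
                  eligibility_rate):
    """Completions divided by the estimated number of eligible
    households: completions, refusals, and callbacks all count as
    eligible, plus an estimated-eligible share of the numbers that
    were never answered."""
    eligible = (completions + refusals + callbacks
                + never_answered * eligibility_rate)
    return completions / eligible

rate = response_rate(completions=620, refusals=240, callbacks=100,
                     never_answered=200, eligibility_rate=0.20)
print(f"{rate:.1%}")  # 620 / (620 + 240 + 100 + 40) -> 62.0%
```

Dropping the callbacks from the denominator, as in the 71% figure above, simply removes that term from `eligible`.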

Response Rates for American Election Study

Response Rates for BRFSS

Strategies for Improving Response Rates
 interviewer training
 increase call attempts
 "convert" refusals
 use advance letters
 payments (as a lottery, to completers, to the whole sample)

Data Collection at ISR for RRFSS Response Rates
 minimum of 14 calls (more when there is reason to think extra calls might obtain a completion)
   - the limitation of a one-month sample release costs about 3 to 7 points on the response rate
 at least one attempt to convert almost all refusals

390,106

RRFSS: Fun with Numbers 1 (2003 Data)
 number of calls: 390,106
 percent of interviews completed on the first call: 21
 number of interviews completed on the 10th or subsequent calls: 3,158
 number of interviews completed after a refusal: 2,678

RRFSS: Fun with Numbers 2 (2003 Data)
 average number of calls per completed interview: 4.65
 most calls made for a single completion: 33 (for two different interviews)
 response rate if 10-plus calls and refusal conversions are dropped: 48.2%
 number of complaints about interviewer calls registered at ISR: 13
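Statistics like these fall out of a per-household tally of the call log. A minimal sketch, where the log records and field names are hypothetical:

```python
# Hedged sketch: deriving call-effort statistics from a call log.
# The records below are illustrative, not RRFSS data.
from collections import Counter

call_log = [  # (household_id, outcome), in call order
    ("h1", "no answer"), ("h1", "complete"),
    ("h2", "complete"),
    ("h3", "refusal"), ("h3", "complete"),  # a converted refusal
]

calls_per_hh = Counter(hh for hh, _ in call_log)
completed = {hh for hh, out in call_log if out == "complete"}
# completions that followed an earlier refusal ("conversions")
converted = {hh for hh, out in call_log if out == "refusal"} & completed

first_call = sum(1 for hh in completed if calls_per_hh[hh] == 1)
mean_calls = sum(calls_per_hh[hh] for hh in completed) / len(completed)
print(first_call, len(converted), round(mean_calls, 2))  # -> 1 1 1.67
```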

Characteristics of Refusers: 2003 RRFSS Data
(comparison of standard and converted respondents; the cell values were not preserved in this transcript)
 variables: mean age; education, more than high school (%); university (%); % employed; % saying health fair or poor; % doctor told high blood pressure; % smoked 100 cigarettes
 number of cases: standard = 24,700, converted = 2,640
 all differences significant

Characteristics of Easy and Hard to Reach: 2003 RRFSS Data
(comparison of easy-to-reach and hard-to-reach respondents; the cell values were not preserved in this transcript)
 variables: mean age; education, more than high school (%); university (%); % employed; % saying health fair or poor; % doctor told high blood pressure; % smoked 100 cigarettes
 number of cases: easy = 17,000, hard = 3,150
 all differences significant

Letter Experiment: 1
 six Health Units participated (Durham, London, Grey Bruce, Halton, Waterloo, Sudbury)
 tested two versions of the letter: ISR and HU
 needed to work with our monthly target and wanted to acknowledge random variation in response rates per HU per month
 used sample "replicates" to implement the experiment

Letter Experiment: 2
 month one: replicate 1, ISR letter; replicate 2, HU letter; replicates 3 and 4 (when used), control group
   - presentation changed in months 2 and 3
 copy of the letter at the end of this set of handouts
 exactly the same text; different letterhead, signature & envelope (except Halton)
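The replicate-based assignment can be sketched as below. The rotation rule used here is an illustrative assumption for how the month-to-month change in presentation might work, not the documented ISR scheme.

```python
# Hedged sketch of replicate-based treatment assignment. Month one
# matches the slide (replicate 1 -> ISR, 2 -> HU, 3 and 4 -> control);
# the monthly rotation is an assumption.
def letter_for(month, replicate):
    arms = ["ISR letter", "HU letter", "control", "control"]
    # shift the assignment by one arm each month
    return arms[(replicate - 1 + (month - 1)) % 4]

month_one = {r: letter_for(1, r) for r in (1, 2, 3, 4)}
print(month_one)
```

Assigning whole replicates rather than individual numbers keeps the monthly sample-release machinery intact while still randomizing the letter condition.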

Letter Experiment: 3
 survey introduction exactly the same except for one additional sentence: "Recently, we sent a letter to your household about an important research project."
 questions about the letter the same (except Durham)

Why the Letter Might Improve Response Rates to RDD Surveys
 reduces the possibility that the telephone call catches people by surprise
 increases the legitimacy of the research project in the eyes of potential respondents
 demonstrates social value
 improves the confidence of the interviewer

Why Advance Letters Might Not Improve Response Rates to RDD Surveys
 letter does not reach, or is not read by, the respondent
 ceiling effects
 survey topic & subpopulations
 they give "timid" participants a chance to prepare to say "no"

Response Rates for Months 1 & 2 of the Experiment
 p value = .035 for letter (n = 1,200) versus no letter (n = 1,345)
 p value = .025 for ISR (n = 600) versus HU (n = 600)
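P values like these are typically obtained from a two-sample test of proportions. A minimal sketch: only the group sizes (1,200 with a letter, 1,345 without) come from the slide; the completion counts in the example call are made up.

```python
# Hedged sketch: two-sided pooled z test for comparing two response
# rates. Completion counts in the example are illustrative.
from math import erf, sqrt

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p value for H0: the two groups share one response rate."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) # pooled standard error
    z = (p1 - p2) / se
    # normal-tail probability via the error function
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# e.g. hypothetical completions: 780 of 1,200 with a letter,
# 810 of 1,345 without
p = two_prop_p(780, 1200, 810, 1345)
```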

Response Rates, Months 1 & 2: All Six Health Units
 see next slide for numbers & p values

RR by HU and Treatment
(response rates (%) by treatment (none, ISR, HU) and p values for the pairwise comparisons none/ISR, none/HU, and ISR/HU, for Durham, London, Grey Bruce, Halton, Waterloo, and Sudbury; the cell values were not preserved in this transcript)
 number of cases per HU: ISR = 50, HU = 50, none = 100

Mean Calls per Completion
(mean number of calls by treatment (none, ISR, HU) and p values for the pairwise comparisons none/ISR, none/HU, and ISR/HU, for Durham, London, Grey Bruce, Halton, Waterloo, and Sudbury; the cell values were not preserved in this transcript)
 number of cases per HU: ISR = 50, HU = 50, none = 100

Mean Calls per Completion by Letter Status
 letter/no letter p = .890; ISR/HU p = .230; did not see/saw p = .001

% of First Call Attempts Leading to Completions & Refusals
 letter/no letter p = .204; ISR/HU p = .008

At the Start of the Interview

Awareness of Letter (based on 602 cases)
 R indicated saw letter at intro: 30
 R indicated letter came to house: 21
 total respondents aware of letter: 51
 personally read the letter: 40
 got more info (web site, 1-800): 1
 letter made a lot of difference to decision to participate: 26

Data Characteristics
(comparison of the ISR (n = 300), HU (n = 300), and no-letter (n = 620) groups, with p values; the cell values were not preserved in this transcript)
 variables: year of birth; male (%); employed (%); health excellent; smoked at least 100 cigarettes (%)

Costs: Month One
 cost of materials: $314; staff cost: $1,919; total: $2,233
 per-case cost: $3.62
 buys 72 interviews, or 12 per HU
 need to estimate savings from making fewer calls and fewer refusal-conversion calls
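The cost arithmetic above checks out as follows; note that the per-case figure implies roughly 617 mailed cases in month one, which is an inference from the slide's numbers rather than a stated count.

```python
# Hedged check of the month-one cost figures from the slide.
materials, staff = 314, 1919
total = materials + staff      # $2,233, matching the slide
per_case = 3.62
cases = total / per_case       # implied number of mailed cases, ~617
print(total, round(cases))
```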

Conclusions
 the HU letter (seems to) increase response and warrants consideration as a tool to improve RRFSS response rates
 effect on variable distributions is minimal, but the small sample size limits the scope of examination
 the social-political distance between respondent and sender probably matters
 letters may have value beyond just increasing response rates

Questions