Can we trust the opinion polls? A panel discussion
Patrick Sturgis, University of Southampton
Jouni Kuha, London School of Economics
Nick Moon, GfK NOP
Joel Williams, TNS UK
Will Jennings, University of Southampton
The final polls

| Pollster      | Mode | Fieldwork | n     | Con  | Lab  | Lib | UKIP | Green | Other |
|---------------|------|-----------|-------|------|------|-----|------|-------|-------|
| Populus       | O    | 5–6 May   | 3917  | 34   |      | 9   | 13   | 5     | 6     |
| Ipsos-MORI    | P    |           | 1186  | 36   | 35   | 8   | 11   |       |       |
| YouGov        |      | 4–6 May   | 10307 |      |      | 10  | 12   | 4     |       |
| ComRes        |      |           | 1007  |      |      |     |      |       |       |
| Survation     |      |           | 4088  | 31   |      |     | 16   |       | 7     |
| ICM           |      | 3–6 May   | 2023  |      |      |     |      |       |       |
| Panelbase     |      | 1–6 May   | 3019  |      | 33   |     |      |       |       |
| Opinium       |      | 4–5 May   | 2960  |      |      |     |      |       |       |
| TNS UK        |      | 30/4–4/5  | 1185  |      | 32   |     | 14   |       |       |
| Ashcroft*     |      |           | 3028  |      |      |     |      |       |       |
| BMG*          |      | 3–5 May   | 1009  |      |      |     |      |       |       |
| SurveyMonkey* |      | 30/4–6/5  | 18131 |      | 28   |     |      |       |       |
| Result        |      |           |       | 37.8 | 31.2 | 8.1 | 12.9 | 3.8   | 6.3   |
| MAE (=1.9)    |      |           |       | 4.1  | 2.5  | 1.0 | 1.4  | 0.9   |       |
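The per-party errors in the table come from comparing each poll's published shares with the result. A minimal sketch of the mean absolute error (MAE) calculation, using the result row above and a purely illustrative set of poll shares (not any specific pollster's figures):

```python
def mean_absolute_error(poll, result):
    """Average absolute gap, in percentage points, across party vote shares."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

# Result shares from the table; the poll shares below are illustrative only.
result = {"Con": 37.8, "Lab": 31.2, "Lib": 8.1, "UKIP": 12.9, "Green": 3.8}
poll = {"Con": 34, "Lab": 34, "Lib": 9, "UKIP": 13, "Green": 5}

mae = mean_absolute_error(poll, result)  # about 1.76 points for this example
```

The same calculation, averaged over the final polls, produces the MAE row of the table.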
Flash bulb memory moment for many
Error on Con/Lab lead
Frequency of GB Polls 1940-2015
Unlikely to have had an effect:
- Postal voting
- Voter registration
- Overseas voters
- Question wording/framing
- Turnout weighting
- Mode of interview
- Late swing
- Deliberate misreporting
Unrepresentative samples
Final polls vs. Post-election surveys
Conservative lead by GB region, polls v election result
Age distribution among those aged 65+ (three polls)
Herding
Variability of final polls
Conclusions
- Polls remain the only viable way of forecasting election results.
- But they are prone to a range of errors, not just sampling error – the methodology is fragile.
- Within the existing paradigm, there is little scope for radical change.
Recommendations
- Include questions during the short campaign to determine whether respondents have already voted by post. Respondents who have already voted by post should not be asked the likelihood-to-vote question.
- Review existing methods for determining turnout probabilities. Too much reliance is currently placed on self-report questions that ask respondents to rate how likely they are to vote, with no strong rationale for allocating a turnout probability to the answer choices.
- Review current allocation methods for respondents who say they don't know, or refuse to disclose, which party they intend to vote for. Existing procedures are ad hoc and lack a coherent theoretical rationale; model-based imputation procedures merit consideration as an alternative to current approaches.
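For concreteness, here is a minimal sketch of the self-report turnout weighting this recommendation criticises, assuming the common (but, as the slide notes, weakly justified) practice of mapping a 0–10 likelihood-to-vote answer straight to a probability. All respondents and the mapping itself are illustrative:

```python
def turnout_weight(likelihood):
    """Ad hoc mapping of a 0-10 self-reported likelihood to vote to a
    turnout probability -- the kind of rule the inquiry says lacks a
    strong rationale."""
    return likelihood / 10

# Hypothetical respondents: (stated vote intention, 0-10 likelihood to vote).
respondents = [("Con", 10), ("Lab", 5), ("Con", 8), ("Lab", 9)]

weights = {}
for party, likelihood in respondents:
    weights[party] = weights.get(party, 0.0) + turnout_weight(likelihood)

total = sum(weights.values())
# Turnout-weighted vote shares in percent (Con ~56.25, Lab ~43.75 here).
shares = {party: 100 * w / total for party, w in weights.items()}
```

The point of the recommendation is that the `likelihood / 10` step has no empirical grounding, yet it directly moves the headline shares.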
Recommendations
BPC members should:
- take measures to obtain more representative samples within the weighting cells they employ.
- investigate new quota and weighting variables that are correlated with propensity to be observed in the poll sample and with vote intention.
Recommendations
The Economic & Social Research Council should:
- fund a pre- as well as a post-election random probability survey as part of the British Election Study in the 2020 election campaign.
Recommendations
- State explicitly which variables were used to weight the data, including the population totals weighted to and the source of those totals.
- Clearly indicate where changes have been made to the statistical adjustment procedures applied to the raw data since the previous published poll, including any changes to sample weighting, turnout weighting, and the treatment of Don't Knows and Refusals.
- Commit, as a condition of membership, to releasing anonymised poll micro-data, at the request of the BPC management committee, to the Disclosure Sub-Committee and any external agents that it appoints.
Recommendations
- Pre-register vote intention polls with the BPC prior to the commencement of fieldwork, including basic information about the survey design such as mode of interview, intended sample size, quota and weighting targets, and intended fieldwork dates.
Recommendations
- Provide confidence (or credible) intervals for each separately listed party in their headline share of the vote.
- Provide statistical significance tests for changes in vote shares for all listed parties compared to their last published poll.
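A sketch of what the first recommendation could look like in practice: a normal-approximation confidence interval for a single party's headline share, with an optional design-effect inflation for weighted samples. The function name and defaults are illustrative, not a BPC-prescribed method:

```python
import math
from statistics import NormalDist

def share_interval(share_pct, n, level=0.95, deff=1.0):
    """Normal-approximation CI for a vote share given in percent.
    deff > 1 widens the interval to reflect weighting/clustering."""
    p = share_pct / 100
    z = NormalDist().inv_cdf(0.5 + level / 2)
    se_pct = 100 * math.sqrt(deff * p * (1 - p) / n)
    return share_pct - z * se_pct, share_pct + z * se_pct

lo, hi = share_interval(34.0, 2000)
```

For a 34% share on n = 2000 this gives roughly 34 ± 2.1 points; a design effect of 1.5 widens it to about ±2.5, which is why the raw ±2 or ±3 "margin of error" often quoted understates the real uncertainty in weighted polls.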
Reflections on the EU Referendum Polls
Will Jennings
Department of Politics & International Relations, University of Southampton
w.j.jennings@soton.ac.uk | @drjennings
http://www.ncrm.ac.uk/RMF2016/programme/session.php?id=K5
Outline
- How did the final polls perform?
- How did the polls line up with the final result?
- How (and when) did pollsters adjust their methods?
- How did this impact on the accuracy of the polls?
University of Bath
The final polls (BPC members)

| Pollster    | Fieldwork  | Sample | Remain | Leave | MAE |
|-------------|------------|--------|--------|-------|-----|
| ORB         | 14–19 June | 877    | 54     | 46    | 5.9 |
| Survation   | 20 June    | 1003   | 51     | 49    | 2.9 |
| ComRes      | 17–22 June | 1032   |        |       |     |
| Opinium     | 20–22 June | 3011   |        |       | 0.9 |
| YouGov      | 20–23 June | 3766   |        |       |     |
| Ipsos MORI  | 21–22 June | 1592   | 52     | 48    | 3.9 |
| Populus     |            | 4740   | 55     | 45    | 6.9 |
| TNS*        | 16–22 June | 2320   | 48.8   | 51.2  | 0.7 |
| Result      |            |        |        |       |     |

Average MAE: 3.8
http://www.britishpollingcouncil.org/performance-of-the-polls-in-the-eu-referendum/
*TNS did not remove/reallocate undecided voters, so were not included in the British Polling Council list of final polls (where the average error is 4.3).
Polling errors, 2015–16 (e.g. Lab/Con in GB 2015, SNP/Lab in Scotland 2015, etc.)
Mean absolute error
Net error
Online vs. phone
Pollster adjustments

| Pollster   | Date           | Change                                                                                      | Reported effect                        |
|------------|----------------|---------------------------------------------------------------------------------------------|----------------------------------------|
| ORB        | 14–19 June*    | Only those who indicate they are definite to vote; assume DKs break 3:1 to Remain            | +2 Remain, −2 Leave                    |
| Survation  |                | n/a                                                                                          |                                        |
| ComRes     | 17–22 June*    | DKs reallocated on economy question; target population includes Northern Ireland (UK not GB) | +1 Remain, −1 Leave                    |
| Opinium    | 31 May–3 June  | Weighting targets include attitudinal questions (via BES)                                    | +3 Remain, −3 Leave (31 May–3 June poll) |
| YouGov     | 20–22 June*    | Target population includes NI (UK not GB); weighted by reported probability of voting        |                                        |
| Ipsos MORI | 21–22 June*    | Only those included for whom outcome of the referendum is very or fairly important           |                                        |
| Populus    |                |                                                                                              |                                        |
| TNS        | 16–22 June*    | Not weighted by estimated likelihood to vote (in contrast to previous two polls)             | −3 Remain, +3 Leave                    |

*Adjustment to final poll.
http://www.britishpollingcouncil.org/performance-of-the-polls-in-the-eu-referendum/
http://ourinsight.opinium.co.uk/opinium-blog/methodology-update
http://whatukthinks.org/eu/questions/should-the-united-kingdom-remain-a-member-of-the-eu-or-leave-the-eu/
http://whatukthinks.org/eu/questions/should-the-united-kingdom-remain-a-member-of-the-eu-or-leave-the-eu/?notes
Opinium
Herding?
Summary
Another polling miss… lessons to be learned:
- Systematic bias, not just error.
- Online polls told a more accurate story than telephone.
- Errors were more spread out than in May 2015.
- Adjustments to final polls increased error.
Exit polls
Exit poll for a referendum – hypothetical examples
- Stratified sample of m polling stations, with design effect D = 0.9
- At each station, a systematic sample of n = 150 voters
- Power calculation: what is the smallest m such that P( p̂ − 1.96·SE(p̂) > 0.5 ) ≥ 1 − β, where p̂ is the estimated proportion of the winning side, 1 − β is the desired power when the true proportion of the winning side is p, and the standard deviation of the station-level proportions p_i across polling stations is 0.104 (as for districts in the actual referendum)?
Exit poll for a referendum – hypothetical examples
What is the smallest number m of polling stations such that P( p̂ − 1.96·SE(p̂) > 0.5 ) ≥ 1 − β?
(For comparison, the General Election exit poll includes m = 140 polling stations.)
For p = 0.52 and 1 − β = 0.5, the required sample size for a simple random sample of voters would be 2398.

| True p | 1 − β = 0.9 | 1 − β = 0.5 |
|--------|-------------|-------------|
| 0.50   | –           | –           |
| 0.51   | 1174        | 430         |
| 0.52   | 294         | 107         |
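A sketch of this power calculation, assuming the estimated winning share must be significantly above 0.5 at the two-sided 5% level with the desired power, under a normal approximation with per-station variance D(σ² + p(1−p)/n)/m. Function names are illustrative, and small rounding differences from the slide's figures are expected:

```python
import math
from statistics import NormalDist

def stations_needed(p, power, sigma=0.104, n=150, deff=0.9, alpha=0.05):
    """Smallest number of polling stations m so that the estimated winning
    share is significantly above 0.5 (two-sided alpha) with the desired
    power, given between-station s.d. sigma and n sampled voters/station."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # 0 for power = 0.5
    var_one = deff * (sigma**2 + p * (1 - p) / n)   # per-station variance
    return math.ceil(var_one * ((z_alpha + z_power) / (p - 0.5)) ** 2)
```

For p = 0.51 at power 0.9 this gives roughly 1180 stations, and for p = 0.52 at power 0.5 roughly 108, close to the tabulated 1174 and 107; the intuition is that a 1-point winning margin demands about four times as many stations as a 2-point margin.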