Examining alternative ways of predicting election results using online research
Jon Puleston & Alex Wheatley, Lightspeed Research

We spent two years experimenting: the 2015 UK general election, the Brexit referendum, and the 2016 US election.
In that time we have run about 40 online experiments, with mixed success.
We conducted an extensive review of the causes of errors in opinion polls. The cast of suspects:
Mr Undecided
The waverer
The man fed up with answering the phone
The shy voter
The lady who says what she thinks you want to hear
The guy who doesn't go out if it's raining
The news anchorman
The client with just a small PR budget
The tinkering pollster
The tactical voter
This much we have learnt.

Some of the issues, from our perspective:
It's an emotional decision: not just who to vote for, but whether or not to vote
For some it's not a case of either/or; many people are in two minds
Some people don't know
Some people are shy
Some people change their minds
We are asking people to speculate about their own behaviour, which is notoriously difficult
The people who participate in online research are not necessarily representative of the whole electorate
We have to predict two inter-connected things: who people will vote for, and whether or not they will vote.
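These two inter-connected quantities can be combined by weighting each respondent's stated preference by their stated likelihood of turning out. A minimal sketch; the party labels and turnout probabilities below are invented for illustration, not from the study:

```python
# Sketch: combine "who will you vote for" with "how likely are you to vote".
# All respondents and numbers are illustrative assumptions.

def headline_share(respondents):
    """Each respondent is (preferred_party, turnout_probability 0.0-1.0).
    Returns turnout-weighted vote shares."""
    weighted, total = {}, 0.0
    for party, turnout_prob in respondents:
        weighted[party] = weighted.get(party, 0.0) + turnout_prob
        total += turnout_prob
    return {party: w / total for party, w in weighted.items()}

# Party A leads 3 to 2 on raw preference, but its supporters are
# less likely to turn out, so its weighted share narrows.
sample = [("A", 0.9), ("A", 0.3), ("A", 0.2), ("B", 1.0), ("B", 0.2)]
print(headline_share(sample))
```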
Dealing with people in two minds about who to vote for
Switching from a binary voting choice to relative chances of voting for each party gives more clarity, lets us track movements, and still delivers the same headline voting-intention figures.
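As a sketch of how relative-chance scores can still deliver a headline figure: normalise each respondent's scores so they sum to one, then average across the sample. The 0-10 scale and the panel data are assumptions for illustration:

```python
# Sketch: roll per-respondent relative-chance scores (0-10 per party)
# up into a headline voting-intention figure. Data is illustrative.

def headline_from_chances(ratings):
    """ratings: list of dicts, party -> chance score, one dict per respondent."""
    shares = {}
    for r in ratings:
        total = sum(r.values())
        if total == 0:
            continue  # respondent gave no party any chance of their vote
        for party, score in r.items():
            shares[party] = shares.get(party, 0.0) + score / total
    n = len([r for r in ratings if sum(r.values()) > 0])
    return {party: s / n for party, s in shares.items()}

panel = [{"Con": 8, "Lab": 2},   # leaning Conservative
         {"Con": 5, "Lab": 5},   # genuinely in two minds
         {"Con": 0, "Lab": 10}]  # certain Labour
print(headline_from_chances(panel))
```

Unlike a forced binary choice, the respondent in two minds contributes half a vote to each side rather than being pushed into one column.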
Measuring feelings
Implicit research methods
Attributes tested: intelligent, honest, fun, caring
Implicit research methods
An Implicit Association Test (I.A.T.) showed a clear lead for Cameron across the positive attributes
Investigating the emotions of Brexit
A much less clear message. *Percentage thumbs up among undecided voters
Investigating the emotions of Brexit
A clearer picture? (Percentage thumbs up)
Simple speed evaluation of various visual stimuli…
The US election: a very close call (percentage thumbs up)
The emotional factors paint a picture
Scope to model?
Measuring decisions and passions
Using open-ended data
Open-ended responses provide an understanding of the reasoning behind voters' choices, and reveal a gap in passion and reasons between the two camps:
Trump voters talking about: making America great again; he is a smart businessman; an opportunity for change; Hillary is corrupt, criminal, a liar. Average word count: 20. IBM Watson sentiment analysis: -16%.
Clinton voters talking about: she is not Trump; Trump is dangerous; she has experience. Average word count: 16. IBM Watson sentiment analysis: -59%.
Understanding the repulsions
Who was the lesser of two evils? “He seems like the lesser of two evils” “I think she is the lesser of two evils”
Measuring how hard you have to pinch your nose
The ' Scandal' and the 'Sexism Scandal'. Chart categories: both forgivable; both unforgivable; Trump unforgivable; Hillary unforgivable
Exploring attitudes towards issues as proxy voting indicators
The ‘ Scandal’ and ‘Sexism Scandal’
An added factor: Trump was the anti-establishment underdog
Trying to find the experts
Using prediction research
Can we predict the result?
Longitudinally measuring predictive abilities to find the experts. They got it right!
Emotions bias prediction
More work is needed: it is hard to separate predictions from preferences, emotions bias prediction, and the 2015 Conservative victory made the sample self-selecting
Asking consumers: a non-expert prediction
Can we make consumer predictions more informed?
Breaking down the problem
Stubbornness of views: a red flag for confirmation bias
Percentage predicting a Remain victory
The same again in the US: consumers predicting a Clinton victory
More informed predictions
Breaking down the US election into 7 swing state puzzles
More informed predictions
Breaking down the problem measures the confidence gap and raises a red flag for confirmation bias. 6% got all seven states correct: 'expert predictors' of the future?
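For context on that 6% figure, a quick baseline is useful: if each of the seven swing states were an independent coin flip, a perfect score would be far rarer. (Real state races are neither 50/50 nor independent, so this is only a rough yardstick, not a claim from the talk.)

```python
# Chance of calling all 7 swing states correctly by pure 50/50 guessing.
p_perfect_by_chance = 0.5 ** 7
print(f"{p_perfect_by_chance:.2%}")  # well below the observed 6%
```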
Finding the unbiased:
Near-field prediction methods
Predict who will win; predict how your friends or family will vote
The issues with all alternative forms of election prediction…
A question of calibration: you would need 20 directionally accurate data points to prove the effectiveness of any one technique, i.e. predict 20 elections correctly, before you could take it seriously, and upwards of 100 data points to start to calibrate any one technique effectively (consumer researchers would argue you need far more; the Millward Brown Link test, for example, was calibrated on 110,000 ads).
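The arithmetic behind that 20-election bar is worth spelling out: even a genuinely good technique will often fail a perfect-record test. A sketch, with illustrative accuracy figures and the simplifying assumption that elections are independent trials:

```python
# Probability that a technique with a given per-election accuracy
# calls n elections in a row correctly (independent trials assumed).
def prob_perfect_record(accuracy, n=20):
    return accuracy ** n

for acc in (0.5, 0.8, 0.95):
    print(f"accuracy {acc:.0%}: P(20/20) = {prob_perfect_record(acc):.4f}")
```

Even a 95%-accurate technique passes the 20-in-a-row bar only about a third of the time, which underlines why serious calibration needs upwards of 100 data points.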
Not a lot wrong with classical polling techniques if they are executed effectively:
Using carefully crafted sample sources
Using large enough, regionally and demographically balanced samples
Using quota-invite sampling techniques
Carefully calibrating intention-to-vote responses
Being mindful of the shy-voter issue, measuring and weighting accordingly
Using thoughtful weighting procedures calibrated against larger-scale establishment surveys and historical voting data
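As an illustration of the weighting step, here is a minimal cell-weighting (post-stratification) sketch. The age bands and population targets are invented, and real pollsters use more elaborate schemes such as raking against multiple margins:

```python
# Weight each demographic cell so the weighted sample matches
# known population shares (simple cell weighting). Targets are assumed.
def cell_weights(sample_counts, population_shares):
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] * n / count
            for cell, count in sample_counts.items()}

sample = {"18-34": 100, "35-54": 250, "55+": 150}      # online-panel skew
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed census shares
print(cell_weights(sample, targets))
```

Under-represented cells get a weight above 1 and over-represented cells a weight below 1, so the weighted sample reproduces the target demographic shares.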
Where next? Recognise that nearly all of our research represents anecdotal evidence. We are starting to test some of these alternative methods more widely.