Why Did Pollsters Get the UK Election So Wrong? Adland Weighs In...

11/05/2015
London, UK
Mindshare’s Jeremy Pounder, Isobar’s Patricia McDonald and Hometown’s Don Larotonda on what the UK election polling misstep means for advertising researchers

A hung parliament with no overall majority; kingmakers the Liberal Democrats with a respectable if unimpressive 27 seats; the prospect of a Labour-SNP Odd Couple-style team-up. According to pre-election opinion polls, that’s what millions of Brits expected to wake up to on the morning of May 8.

And they couldn’t have been more wrong. In fact, they were so off the mark that the British Polling Council has announced an independent inquiry into the polling firms.

Instead, the British public woke up to a (slim) Conservative majority, a Scottish National Party landslide (the SNP took all but three of the seats available in Scotland), the annihilation of the Lib Dems and the resignations of party leaders Ed Miliband, Nick Clegg and Nigel Farage.

Politics, like the advertising industry, is increasingly beguiled by the power of data both to understand consumer tastes and opinions and to predict future behaviour. So where did the pollsters go wrong? And are there any lessons the ad industry can learn from the mistakes and missteps of the political number-crunchers? 


Jeremy Pounder, Head of Research at Mindshare


Most of the pollsters are feeling a little sheepish this week as the Tories outperformed all expectations, with Stephan Shakespeare, the CEO of YouGov, capturing the mood in a tweet.


In their defence, the actual vote shares that both Labour and the Conservatives achieved were within the margins of error of the individual polls, typically +/- 3%. Labour ended up on 31% (the BBC’s poll of polls on 6 May had them on 33%) and the Conservatives on 37% (versus 34%). And the pollsters had the UKIP, Green and Lib Dem shares about right.

Nonetheless, given that nine companies published polls which all underestimated the Conservative share (in effect creating a combined sample of 9,000+), something more fundamental than sampling error is going on.
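
As a rough sanity check (a sketch assuming simple random sampling and the usual 95% confidence level, which real quota-based polls only approximate), the standard margin-of-error formula shows why a pooled sample makes pure sampling error implausible:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A single poll of roughly 1,000 respondents, Conservative share around 34%:
print(f"n=1,000: +/- {margin_of_error(0.34, 1000):.1%}")  # about +/- 2.9%

# Nine polls pooled into a combined sample of 9,000+:
print(f"n=9,000: +/- {margin_of_error(0.34, 9000):.1%}")  # about +/- 1.0%
```

A three-point underestimate across a combined sample of that size sits well outside the roughly +/- 1% pooled margin, which is what points to a systematic bias rather than bad luck.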

Is it about the methodology? Probably not the online bias that has been suggested, since ICM’s phone poll also overestimated Labour’s share. Perhaps, as in 1992, we will find that the geographical sampling approach contains some biases.

But, perhaps more fundamentally, we are placing too much pressure (hope? faith?) on the conversion of voting intention into voting behaviour. This link may have been undermined by a number (or combination) of factors:

1) The ‘Shy Vote’ – the reluctance to admit to voting Conservative in a survey, whether in person or online, which was also witnessed to an extent in the failure of the 1992 polls (a rough illustration of the arithmetic follows this list).

2) The tactical vote – the rise of the smaller parties made tactical voting much more prevalent, particularly with online vote exchanges. We may have seen last-minute switching of votes from ex-Tory Kippers back to the Tories in seats where UKIP were unlikely to win.

3) The flight to safety – the Conservatives’ relentless focus on the spectre of the SNP calling the shots may have converted some last-minute waverers back to the Conservative fold.
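
Purely as an illustration of the ‘shy vote’ arithmetic (hypothetical numbers, assuming shy Conservatives name another party to pollsters rather than refusing to answer; this is not any pollster’s actual adjustment model):

```python
# Back-of-the-envelope: how large would the 'shy' fraction need to be
# to explain the gap between the final polls and the result?
true_con = 0.37    # actual Conservative vote share
polled_con = 0.34  # final poll-of-polls figure

# If a fraction s of Conservative voters name another party when surveyed,
# the reported share is true_con * (1 - s). Solve for s:
shy_rate = 1 - polled_con / true_con
print(f"A 'shy' rate of {shy_rate:.1%} would fully account for the miss")  # ~8.1%
```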

No doubt the pollsters will be carrying out their own post-mortems. But next time perhaps we should all just follow the money. The bookies had Cameron overtaking Miliband as favourite to be PM in the last 48 hours before polling day…


Patricia McDonald, Chief Strategy Officer, Isobar UK


Perhaps the biggest losers on Thursday night were the pollsters. Apart from the Lib Dems and, in a gift to headline writers everywhere, Ed Balls. And other Cabinet and Shadow Cabinet scalps too numerous to mention. But apart from that, it was the pollsters and the pundits. Even the legendary Nate Silver, oft suspected of possessing supernatural powers, failed to come close to predicting a Tory majority. 

So just how did the pollsters get it so badly wrong, and what does it mean for the future of research? As many an indignant marketer demanded on Friday morning, if we can't trust researchers to accurately predict a national election, why should we trust them with the important things, like brand tracking?!

There are any number of hypotheses, of course. But they ultimately boil down to one thing: claimed behaviour. As the industry has known for a long time, people are seldom entirely honest about their intentions in research, consciously or unconsciously seeking to present the best possible image of themselves. This is a particularly acute problem when dealing with sensitive issues, and surely none are more sensitive than an election in which concern for the public good (the NHS, education and the price of austerity) battled private fears expertly stoked (fear of chaos, of the unknown, of economic implosion).

There are any number of reasons people’s answers may not reflect their real intentions; they may wish to please the interviewer, or have perceptions of the “right” answer. The questionnaire design may itself skew responses: people respond differently when asked to rank things on a five-point scale than on a seven- or ten-point scale, for example.

There are techniques, of course, for finding ways through the self-delusion (or self-preservation) we all employ. Smart qualitative researchers have been using them for years. We use gestalt or displacement techniques to encourage respondents to express things they can’t comfortably articulate. These are difficult to scale, however, and no matter how carefully designed the research, it is a bold respondent (or researcher) who would conclude: “I know it’s not what you want to hear. I know it makes me sound scared and selfish. Heck, maybe I am scared and selfish.”

So what is the answer? Perhaps we should stop looking at what people say and start looking at what they do. Start looking at what digital data, aggregated and at scale, can tell us about behaviour and beliefs.

One of my favourite ideas is John Battelle's description of search log data as ‘The database of intentions’, an incredibly prescient observation made back in 2003: 

"The Database of Intentions is simply this: The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward." 

It’s an extraordinary (and somewhat daunting) idea and, as anyone who’s ever been amused, appalled or bewildered by Google’s autocomplete feature can attest, it rings painfully true. The search bar is at once confessional and confidant, guardian of our silliest questions and most irrational fears. The search bar knows that while we may claim to eat organic, we’re ordering pizza. It knows that while we claim to read The Economist we love a sneaky peek at Heat magazine.

So what does that have to do with polling? Anonymised and aggregated, at scale, the 3.5 billion search queries Google handles each day send a powerful signal. A signal that, filtered properly, can predict a number of things. Perhaps most famously, Google used search queries to predict where flu was due to strike, although some problems with that model have crept in in recent years. Analysis we’ve undertaken within the Dentsu Aegis group shows that digital data (search volumes, social buzz) accurately anticipates slower-moving brand and business metrics. Why not voting patterns?
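
As a minimal sketch of the idea (the data and column names here are illustrative, not the Dentsu Aegis model), one way to test whether search interest leads a slower-moving brand metric is a simple lagged correlation:

```python
import pandas as pd

# Illustrative weekly series; in practice these would come from a search-volume
# export (e.g. Google Trends) and a brand tracker. All values are hypothetical.
df = pd.DataFrame({
    "search_volume": [52, 55, 61, 64, 70, 68, 74, 80, 78, 85],
    "brand_metric":  [30, 31, 31, 33, 34, 36, 35, 38, 40, 39],
})

# Correlate the brand metric against search volume shifted forward by 0..4 weeks;
# at lag k we pair brand_metric[t] with search_volume[t - k].
for lag in range(5):
    r = df["brand_metric"].corr(df["search_volume"].shift(lag))
    print(f"search leads by {lag} week(s): r = {r:.2f}")
```

If the correlation peaks at a positive lag, search volume is moving ahead of the tracker, which is exactly the property a poll-replacement signal would need.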

In the age of big data and predictive algorithms it seems strangely old-fashioned to keep asking people what they intend, with all the potential for conscious or unconscious obfuscation. In five years’ time we may have swapped polling data for intent modelling.

What then does all this mean for our brand tracking studies? There’s a decent chance that by then they too may be replaced, or at least supplemented, by real-time digital data. Of course, there’s also the chance that people simply don’t care enough about brand image to lie. Now there’s a truly frightening thought.


Don Larotonda, Strategy Director at Hometown London


So, not for the first time, the polls were wrong.  For months and months, data was collected, interpreted and reported, only for it to be exposed as erroneous when it came down to the business of voters actually voting.

What happened is simple: people happened. People who said they would vote for this party or that simply changed their minds when they got into the polling booth. We can speculate as to why, but the truth is we will never really know, not on an individual basis anyway.

What we can be certain of is that data is unlikely ever to overcome the very human problem of our not acting in private as we profess to act in public. Whatever honest judgments we articulated to pollsters before the big day are often overtaken, in the polling booth, by the intuitive and automatic part of the brain when push comes to shove. Maybe the thought process changed when given a moment to consider, or the gut just took over, or sheer panic set in; either way the vote was cast, and not in the way we were led to expect.

Whatever the reason, poll data alone will never be able to predict any outcome with real certainty, as rational empirical data cannot (yet) replicate the irrationality of human behaviour.

