October 20, 2016

Why did British pollsters get the election wrong? Sampling methods blamed

File photograph of an electoral worker counting ballots as polls close in Britain’s general election, at a counting centre in Sheffield

A post-mortem into why opinion polls failed spectacularly to predict a Conservative Party victory in Britain’s 2015 parliamentary election has blamed sample recruitment methods and, potentially, unintended herd behaviour by pollsters.

For months, right up to the eve of the May election, opinion polls were showing a dead heat between the ruling Conservatives and opposition Labour Party. In the event, Prime Minister David Cameron’s party was ahead by well over 6 percentage points.

The polling fiasco angered many in politics and the media because expectations that neither party would win an overall majority in parliament meant the campaign was dominated by speculation about potential alliances with smaller parties.

This may have influenced the outcome, the argument goes, because there was little scrutiny of what a one-party Conservative government would do.

It also bolstered Conservative warnings that Labour would need support from the pro-independence Scottish National Party to govern, an unpopular scenario in England.

The accuracy and impact of opinion polls remain live issues ahead of a referendum on whether Britain should remain in the European Union that could take place this year. Polls suggest a tight race between the ‘in’ and ‘out’ campaigns.

After the May election, an association of pollsters commissioned an inquiry by a panel of statistics experts, who reported on Tuesday that the main problem was the make-up of the samples.

In particular, the samples had too many young voters and too few elderly ones. Young people are more likely to support Labour but less likely to vote, while the elderly are overwhelmingly Conservative-leaning and more likely to vote.
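The mechanics of that bias are simple to illustrate. The sketch below, in Python, uses entirely made-up support, turnout and sample figures (not the inquiry's data) to show how an age-skewed sample drags down the Conservative share among likely voters, and how weighting respondents back to population age shares partly corrects it.

```python
# Illustrative only: hypothetical support, turnout and sample shares,
# not figures from the inquiry or from any real poll.

# Population share, probability of voting, and Conservative share among
# voters, for two broad age groups.
groups = {
    # name: (population_share, turnout_prob, con_share_among_voters)
    "under_40": (0.40, 0.55, 0.35),
    "over_40":  (0.60, 0.75, 0.55),
}

# Suppose the raw sample over-represents younger respondents.
sample_share = {"under_40": 0.55, "over_40": 0.45}

def con_share(weights):
    """Conservative share among likely voters, given group weights."""
    num = sum(w * groups[g][1] * groups[g][2] for g, w in weights.items())
    den = sum(w * groups[g][1] for g, w in weights.items())
    return num / den

raw = con_share(sample_share)                            # skewed raw sample
weighted = con_share({g: groups[g][0] for g in groups})  # weighted to population

print(f"Conservative share, raw sample:      {raw:.1%}")
print(f"Conservative share, after weighting: {weighted:.1%}")
```

With these invented numbers the unweighted sample puts the Conservatives several points lower than the population-weighted figure, which is the shape of the error the inquiry described.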

“We’ll be suggesting some things that we think will reduce the risk of being wrong, but there’s no silver bullet,” inquiry chairman Patrick Sturgis, professor of research methodology at the University of Southampton, told Reuters.

The panel also said it could not rule out ‘herding’, where pollsters considering ways to adjust their raw data select the option that produces the most ‘reasonable’ results, which they may perceive to be those in line with other polls.

“When they’re choosing what to do, they may be influenced by what they think the result should be. That comes from the other polls,” Sturgis said, adding herding was unintentional.

Ben Page, chief executive of Ipsos MORI, said the pollster would make changes, such as asking extra questions to better filter out data from respondents unlikely to vote.
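As a rough illustration of what such a screen might look like (a minimal sketch under assumed survey questions, not Ipsos MORI's actual methodology), pollsters can ask respondents to rate their likelihood of voting and discount or drop those at the bottom of the scale before computing the headline shares.

```python
# Minimal sketch of a likelihood-to-vote screen; the records and the
# 0-10 scale are hypothetical, not Ipsos MORI's questions or data.

respondents = [
    # (stated likelihood to vote, 0-10, party preference)
    (10, "Conservative"),
    (9,  "Conservative"),
    (10, "Labour"),
    (4,  "Labour"),
    (2,  "Labour"),
    (8,  "Other"),
]

def headline_shares(data, threshold=9):
    """Vote shares counting only respondents at or above the threshold."""
    likely = [party for score, party in data if score >= threshold]
    return {p: likely.count(p) / len(likely) for p in sorted(set(likely))}

# Dropping the least committed respondents shifts the headline figures,
# here away from Labour, mirroring the differential-turnout problem above.
print(headline_shares(respondents))
```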

Page said it was almost impossible to prove whether the inaccuracy of the polls had affected the result.

“It may have influenced the media and that’s why they’re so cross about it,” he told Reuters. “But would more accurate polls have prompted the voters to conduct forensic scrutiny of Conservative policies? I think that’s much less likely.”
