Welcome to the 106th edition of The Week in Polls, which sees this newsletter enter its third year. To mark TWIP’s birthday, readers of the free edition can sign up for the paid-for version and get the first 90 days free:
This edition welcomes reality to centre stage as we have a set of election results against which to compare traditional polls, an MRP survey and other models and projections. Who is ordering in the brass bands and who is applying egg to their face? Let’s see…
Then it’s a look at the latest voting intention polls followed by, for paid-for subscribers, 10 insights from the last week’s polling and analysis. (If you’re a free subscriber, sign up for a free trial here to see what you’re missing.)
Been forwarded this email by someone else? Sign up to get your own copy here.
Want to know more about political polling? Get my book Polling UnPacked: the history, uses and abuses of political opinion polling.
Mayor contests: how the polls did
Pollsters 1 - 0 Political journalists: the verdict from London?
It wasn’t a good election for political journalists speculating on results based on insider information at the early stages of counts.
To be clear, it is possible for excellent count teams for a party or candidate to get to be sure of the result - or know that it is going to be close - quickly. When I used to do this for the Liberal Democrats in Parliamentary by-elections, my record was 10.46pm, and pre-midnight was the norm.1
But whatever was fuelling the punditry bubbling over from Thursday evening into Saturday morning was as well founded as all those stories about how the Prime Minister was going to call a general election last Monday. The speculation about the Mayor of London race being close did not age at all well when reality appeared in the form of vote totals being announced.
So how did the pollsters do? There are three ways of answering that: ‘brilliantly’, ‘pretty well’ and ‘a bit of a mixed bunch’.
Number one: who did the pollsters say would win? Answer: all the pollsters with polls close to the end of the election said that Sadiq Khan would win, and all said he would win by a large margin. They were all right.
Number two: how did the vote share figures for each pollster compare with the result? A good way to measure this is the average vote share error, e.g. if in a three-way race a pollster is 2 points over on one party, 3 points under on another and 1 point over on a third, then the average error is 2 points (2+3+1 = 6, divided by 3).
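For anyone who wants to reproduce the calculation, here is a minimal Python sketch of that average-error arithmetic (the party shares are the illustrative three-way example above, not figures from any real poll):

```python
# Hypothetical three-way race: polled shares vs actual shares (in points).
poll = {"A": 42, "B": 33, "C": 25}
result = {"A": 40, "B": 36, "C": 24}

# Take the absolute error for each party (2, 3 and 1 points here)...
errors = [abs(poll[party] - result[party]) for party in poll]

# ...then average them: (2 + 3 + 1) / 3 = 2 points.
average_error = sum(errors) / len(errors)
print(average_error)  # → 2.0
```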
On this score, the four pollsters with polls close to the end of the campaign all did pretty well:
Those are pretty good scores, with the average error being in line with the usual admonitions that polls are typically accurate to within 2 or 3 points on average.
But…
There’s another way of looking at it. The simple big picture on lead. Pollsters, with some justification, sometimes dislike being judged on the lead (as polls are designed to get vote share right, and the lead is the incidental outcome, with errors on vote share - if they are in opposite directions - being magnified on lead figures).
Yet the lead is the simple thing that most people most of the time want to know, at least about those in contention to win.
Last time, the polls didn’t do well on this, with the final polls from the five pollsters polling close to the end of the campaign getting Labour leads of between 11% and 21%, averaging 15%. The result was a lead of just 5%.
This time, we had four pollsters to judge. Their average Labour lead was 16%, and reality was 11%. Half the error of 2021, but still a chunky error. However, the average hides the fact that Redfield & Wilton at 13% and Savanta at 11% were close to reality, while Survation (19%) and YouGov (22%) were well out.
Overall, Savanta was therefore the best performing pollster: a good average error and very close on the lead, getting both the big picture right and the details pretty close.
It would be fair to say, though, that overall the polls painted a picture of a bigger Labour win than the one we saw, and YouGov in particular will need to look at their figures in some detail.
West Midlands
This was the big drama of the multi-day election count saga, and one in which the pollsters came out well:
All four pollsters with polls near the end of the campaign rightly had the result as close, and three of the four had it as very close indeed. Average errors were all decent and, unlike London, none were way out on the lead. So although two got the winner wrong, their polls had been presented as ‘the race is very close’ and so didn’t end up misleading.
One note of warning, though: all the pollsters were significantly under on Akhmed Yakoob. (Redfield & Wilton gave a 3% share for others without listing Yakoob, and therefore I allocated all 3% to him for the mean error calculation.) None of the pollsters really picked up the story of the independent candidate’s challenge.
Given the likely distinctive demographic make-up of his support (which, I think reasonably, is generally assumed to skew heavily Muslim), that’s a warning about how good, or not, polls are at getting representative samples from all communities.
The other Mayor polls
For brevity, let’s look at the lead2 for the other Mayor polls:
Tees Valley: 12.3% Conservative lead in reality, with YouGov having given a 7% lead and Redfield & Wilton having it tied.
York and North Yorkshire Mayor: 7.8% Labour lead in reality, Labour Together (who are a pollster as well as a pressure group) had a 14% Labour lead.
East Midlands: an 11.5% Labour lead in reality versus a 13% lead with More in Common.
Greater Manchester: a 53.0% Labour lead in reality versus a 51% lead with More in Common.
North East: a 13.1% Labour lead in reality versus a 2% lead with More in Common.
No pollster then got it out and out wrong, though R&W had one race tied that was won comfortably and More in Common had one race as close that wasn’t.
Less good is that the errors on the leads averaged over 6 points: the individual errors were 1, 2, 5, 6, 11 and 12 points.
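The same arithmetic as before applies, this time to the six lead errors just listed; a quick Python check:

```python
# The six lead errors (in points) from the Mayor contests above.
lead_errors = [1, 2, 5, 6, 11, 12]

# Mean error: 37 / 6 ≈ 6.2 points, i.e. 'over 6 points' on average.
average_lead_error = sum(lead_errors) / len(lead_errors)
print(round(average_lead_error, 1))  # → 6.2
```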
An overall verdict on the traditional polls
A decent set of polls then, pretty good though not brilliant. In the defence of pollsters, turnout in Mayor contests is lower than in general elections and the general pattern is that the lower the turnout, the harder polling becomes.
Moreover, these are contests where, outside of London, there is little track record of polling and so the development of the best methodology is still in its infancy.
One possible reason for the polls not being more accurate is that they were not done that close to polling day itself. In general elections, the final poll on which the reputation of pollsters rests is nearly always done right up against the end of the campaign. That guards against being caught out by late swing, where reality shifts between the poll and the result. Doing polls earlier lets pollsters get more media coverage and social media publicity out of them - and if that helps encourage them to do the polls, let’s not begrudge that.
The polls did, however, broadly paint the right picture. The main lesson, if anything, is that more polls are a good idea, as that helps guard against the occasional misfire and gives more opportunity to hone techniques.
More polls, please.
UPDATE: Andy Lawton has done a comparison of the polls versus the Mayor results.
Local elections: how MRP did
As with last year, YouGov ran an MRP model to predict local election outcomes in a clutch of councils.
After their debut last year I wrote:
It got the big picture right - and so once again showed the value of polling over poll-free punditry in understanding what’s going on - and it got the direction of political movement right in many places. But, it struggled more at picking up how far that movement was going, especially where the movement was coming from a party bucking national trends.
How did it do this time?
For each of 16 councils, YouGov gave both a ‘note’ (e.g. ‘Significant Labour gains’) and a ‘call’ (e.g. ‘Conservative hold’ or ‘too close to call’).
Of those 16 calls, 11 were right, 1 reasonably right though one can quibble3, 3 a little wrong but not horribly so (all being ‘too close to call’ with notes showing Labour gains and in reality Labour making enough gains to win by decent margins)4 and only 2 being both wrong and suggesting the wrong basic dynamics of the result:
Reigate & Banstead was down as ‘too close to call’ and ‘some Conservative gains’, but actually saw them lose 5 seats and finish with 9 councillors fewer than the others.
Walsall was down as ‘too close to call’ and ‘significant Labour gains’, but actually saw them lose 2 seats and the Conservatives keep control with an unchanged majority of 14.
Overall then, the right picture in 14 out of 16 cases. That’s pretty impressive and is an improvement on my 14/18 score for them last year.
However, as was also the case last year, the ‘notes’ fared less well. I scored them at 12 right (e.g. predicting ‘little change’ when indeed few seats changed hands); of the others, three got it wrong and in the wrong direction (e.g. predicting ‘some gains’ for a party that went on to lose seats),5 and one was right about the gains, just not the scale.
Those figures are certainly good enough to make it a good thing if YouGov does more local election MRPs in future. And while local and general election MRPs are not quite the same beasts - turnout modelling is a much bigger issue for the former, and the latter has a much smaller range of winners - it is perhaps a sign that YouGov’s MRP for the general election will perform well too.
One suggestion if those in YouGov Towers are reading this: now you’ve proven your success twice, how about putting more precise range figures on what counts as ‘some’, ‘significant’, ‘little’ and so on? If the model does badly, that would make the misses more obvious. But if it does well, it’ll make the successes all the more impressive.
Let me end on some praise: a successful local elections MRP giving results for individual councils is a major innovation in British polling. Chapeau, YouGov.
Seat totals: how the models and projections did
With no Electoral Calculus predictions this time around, there were five main predictions, either based on models and data or on expertise and punditry.
Colin Rallings and Michael Thrasher are the doyens of gathering local election data over the decades. Although they no longer make predictions for May elections based on previous council by-election results, they still use their data and expertise to make more qualitative predictions. Rob Hayward, Conservative peer and former Parliamentary by-election candidate (Christchurch), does a media briefing each year that impacts much of the media coverage. Newer on the scene is Sam Freedman with his comprehensive election previews, now in their second year.
Then turning to harder data, Ben Walker of Britain Elects and the New Statesman has a model that churns out specific numbers, while Stephen Fisher, one of the brains behind exit polls and national vote share projections, uses national polls to model local election results.
Lots of different approaches, then. How did they do?
Overall, everyone got roughly the right picture.
It’s notable that Labour consistently underperformed the predicted gains. The question of why is a more useful thing to dig into than some of the eccentric ‘these results show the Conservatives could win the general election’ claims.
Rob Hayward also made a prediction for how the Rallings & Thrasher national vote projections would turn out: Conservative 26%, Labour 40%, Lib Dem 13%, Reform 11%. The actual figures6 are: Conservative 27%, Labour 34%, Lib Dem 16%. So he overestimated Labour’s share quite significantly (6 points) and was also 3 points low on the Lib Dem vote.
Sam Freedman meanwhile made predictions for how the BBC national vote share projections would turn out: Conservative 24%, Labour 36%, Lib Dem 20%. The actual figures are Conservative 25%, Labour 34%, Lib Dem 17%. Close on both Labour and Conservatives, then, and in a neat symmetry with Rob Hayward, 3 points high on the Lib Dem vote.
This newsletter is usually mean to political punditry when it is saying something different from the polls. Only fair therefore to point out that Rob and Sam - helped by data and extensive local knowledge, something not all punditry shares - did well compared to the purer data approaches.
National voting intention polls
No sign of ‘Rishi Sunak’s best week’ having moved the polls in the government’s favour in the week of polling since then:
For more details and updates through the week, see my daily updated table here and for all the historic figures, including Parliamentary by-election polls, see PollBase.
Last week’s edition
Mayor elections polling bonanza.
My privacy policy and related legal information is available here. Links to purchase books online are usually affiliate links which pay a commission for each sale. Please note that if you are subscribed to other email lists of mine, unsubscribing from this list will not automatically remove you from the other lists. If you wish to be removed from all lists, simply hit reply and let me know.
Has the publicity about Angela Rayner’s tax affairs hurt her public standing?, and other polling news
The following 10 findings from the most recent polls and analysis are for paying subscribers only, but you can sign up for a free trial to read them straight away.