How did the MRPs cope with the Liberal Democrats?
Welcome to the 120th edition of The Week in Polls (TWIP), which dives into how the general election MRPs did at foretelling the Liberal Democrats’ performance.
Then it’s a summary of the latest national voting intention polls followed by, for paid-for subscribers, 10 insights from the last week’s polling and analysis. (If you’re a free subscriber, sign up for a free trial here to see what you’re missing.)
First though this week’s mild stare of disapproval goes to the Daily Telegraph for reporting an open access, unrepresentative online vote on its own website as an “exclusive Telegraph poll” of the Conservative leadership election. (Fun fact: ahead of the general election, more Telegraph readers told a proper poll that they were going to vote Labour than Conservative.)
Been forwarded this email by someone else? Sign up to get your own copy here.
Want to know more about political polling? Get my book Polling UnPacked: the history, uses and abuses of political opinion polling.
MRPs and the Lib Dems: how did they get on?
Why the Liberal Democrats matter
Many answers I could give… but in this context, the Liberal Democrats are a good test for MRPs as to whether they can tell us something that national data cannot. That is, it is a party whose performance is sufficiently decoupled from national vote shares for the national polls to be a weak guide to its fortunes but also a party whose performance is on a sufficiently large scale that it is reasonable to expect MRPs to be able to get it right.
It is fair enough if MRPs stumbled over whether, say, Jeremy Corbyn would win in Islington North. He was an expelled party leader standing against their former party, a genuine one-off contest.
But for the Lib Dems? A smaller party trying to buck national trends by concentrating heavily in target seats - that’s a long-running, familiar part of the British electoral system. So too is a party winning Parliamentary by-elections on huge swings and then trying to keep them at the succeeding general election. And at 72 MPs, holding more than 1-in-10 of the House of Commons, it is a problem if MRPs are so imprecise that they cannot accurately track the fortunes of a party of that size.
While the Lib Dem result was spectacular - the best for a century and a dramatically larger decoupling of seat numbers and national vote share than ever before - it was also just the sort of thing an MRP should be able to get right.
So did they?
In what follows, I’ve assessed MRPs by how they did in calling seats for the right party. Of course, pollsters often argue that their polls are best judged by how close they are on vote shares rather than on calling the winners correctly. But given the way MRPs were presented and used, along with what details the pollsters chose to publish, I think seats rather than vote shares is a good yardstick.1
The MRPs analysed
I make it that 22 MRPs2 were published during the general election from 11 different providers.3 All but two provided readily available seat by seat figures, with one not providing individual constituency figures4 and one providing them in an inaccessible way via a clickable map.5
It is tempting to regret that these two did not provide convenient spreadsheet downloads of seat-by-seat results. But to be fair, there is a decent argument that this is the preferable approach, as it diverts attention from individual seat figures. If you think MRPs are really about the overall projected seat totals, with significant margins of doubt around individual seats, and that people should not fixate on individual seat figures or imbue them with an undue apparent accuracy, then it makes sense not to flaunt your individual constituency results.
What I found least convincing were the messages from some MRP modellers who both keenly promoted links to their seat-by-seat details and also warned about not paying too much attention to the results for any individual seat. That is not just a matter of wanting both to have your cake and to eat it, but rather of building a new bakery, making some cakes, sticking them in a shop window, plastering the shop with posters about the exciting new cakes just arrived from the bakery… and then bemoaning the levels of cake eating that follow. Either have confidence in your seat-by-seat figures or don’t dangle them enticingly in front of us.
So for those who did publish and promote individual seat results, it’s fair enough to judge them by those.
Seat totals
Before we do that, how did all those MRPs do on the Lib Dem seat totals? (For those who produced more than one MRP, I have taken the one closest to polling day.)
It is a mixed picture:
In one respect, all the MRPs were right about the Lib Dems. For a party that was projected to have won 8 seats on the new boundaries if they had been in place in 2019, and which went into the election with 15 MPs, even the Ipsos figure of 38 wins would have been an extremely good result. That net gain of 30 on the 2019 notionals would still have been the party’s biggest gains in its history, and the best for any of its predecessor parties since 1923.
And yet while the MRPs were all correct that it was going to be a very good result for the Liberal Democrats, and three were within 5 seats of the correct total, a further 4 were 20 or more seats off. That is the size of error that in other elections would have been catastrophic for telling the right story about the Lib Dems. Moreover, not one MRP over-estimated the Lib Dem seat number, which hints at a possible systematic problem too.
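One way to gauge how surprising that one-sided pattern is: if each model’s error were equally likely to be high or low, and the models were independent (a simplifying assumption - they draw on overlapping polling data), the chance that every one of n models undershoots is 0.5 to the power n. A hedged back-of-envelope sketch:

```python
# Hedged back-of-envelope: if over- and under-shooting were equally likely
# and the models were independent, how surprising is an all-undershoot run?
def prob_all_undershoot(n_models: int) -> float:
    """Probability that n independent, unbiased models all undershoot."""
    return 0.5 ** n_models

# Using the 11 providers mentioned above (one final MRP each) as the count:
p = prob_all_undershoot(11)
print(f"{p:.5f}")  # 0.00049, i.e. about 1 in 2,048
```

A run that unlikely under the "coin flip" assumption is consistent with a shared cause, such as similar tactical-voting modelling, rather than independent bad luck.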
However, once again this newsletter has to doff its cap to Patrick English and the YouGov team who were spot on with the Lib Dem seat tally.
During the campaign, the YouGov and Survation MRPs were treated by people such as myself in the Lib Dems as the ones to pay the most attention to for tracking Lib Dem progress, and they did indeed get two of the top four slots.
Stonehaven’s MRP has always had the lowest profile of those listed, and with only one set of data from it coming out in a low-key way, it is not surprising that those figures got little attention. Their good performance, though, does reinforce my rule of thumb that the most accurate pollsters often come from the ranks of the lowest profile pollsters (as with Verian on vote share).
Perhaps the surprise entrant in the top quartet is Electoral Calculus/Find Out Now as, although EC is one of the highest profile election modelling outfits, it has also at times produced eyebrow-raising individual constituency figures. It did though talk before and during the election about refining its model, and when the results were all in, that was an impressively accurate seat total projection. (Though note that the other Electoral Calculus tie-up, with Savanta, did much less well.)
Those are seat totals. What about the individual seat projections? These, it turns out, shed more light on the Electoral Calculus performance.
Seat hits and misses
Excluding the two MRPs without (accessible) individual seat-by-seat projections, this is how the error rates - either putting the Lib Dems ahead in a seat the party didn’t win or putting the party behind in a seat it did win - pan out:
The contrast between YouGov and Electoral Calculus/Find Out Now is instructive. The former was not only dead on at 72 seats, but also called only four seats wrongly regarding the Lib Dems. The latter, however, had the highest rate of ‘false positives’, putting the Lib Dems ahead in seats the party did not win. Its headline figure was so close not because of a low number of individual seat errors but because the errors it made mostly cancelled out.
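That cancellation effect is worth spelling out: a model can post a near-perfect seat total while getting many individual seats wrong, so long as its false positives and false negatives offset. A minimal sketch with made-up seat names and numbers (illustrative only, not the actual MRP figures):

```python
# Illustrative only: hypothetical seat-by-seat calls, not real MRP data.
# A 'false positive' is a seat the model gave the Lib Dems but they lost;
# a 'false negative' is a Lib Dem win the model missed.

def seat_accounting(predicted_ld_wins, actual_ld_wins):
    """Compare a model's Lib Dem seat calls with the actual results."""
    predicted, actual = set(predicted_ld_wins), set(actual_ld_wins)
    false_positives = predicted - actual   # called for the LDs, LDs lost
    false_negatives = actual - predicted   # LDs won, model missed it
    return {
        "projected_total": len(predicted),
        "actual_total": len(actual),
        "seat_errors": len(false_positives) + len(false_negatives),
    }

# Ten false positives offset by ten false negatives: the headline total
# looks spot on even though twenty individual calls were wrong.
actual = {f"seat_{i}" for i in range(72)}
predicted = {f"seat_{i}" for i in range(62)} | {f"ghost_{i}" for i in range(10)}

result = seat_accounting(predicted, actual)
print(result)  # projected 72 vs actual 72, but 20 seat errors
```

This is why judging an MRP on its headline total alone can flatter a model whose constituency-level calls are weak.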
One regular mis-call by MRPs was Godalming and Ash: 85% of the MRPs published during the election had the Lib Dems winning there, but Jeremy Hunt won. This was also one of the few Lib Dem - Conservative constituency battles where there was an intense local Conservative operation, helped by Hunt’s own personal financial generosity. He seems to have got value for money from at least his own donations, and this was the sort of genuine outlier that it would be unreasonable to expect MRPs to get spot on. (Plus he only won by the slim margin of 891 votes. The headline win/loss prediction score on this seat very nearly looked very different.)
By-election seats also proved challenging for some MRPs. Here I am - with apologies to my readers who were involved in some of these MRPs - much less generous about MRPs getting such seats badly wrong. Parliamentary by-elections are a regular, long-standing feature of British elections, and it is common to see a good handful of seats change hands between parties in a by-election. Lib Dems making several gains on huge swings from a Conservative government is a pattern we’ve seen several times since 1945. There are enough by-elections for political scientists to be able to model them for other purposes. So if you are going to do an MRP, I think it’s fair to say you should be able to cope with Lib Dem by-election gains.
Going into this election, there had been four Lib Dem gains, with all four winners restanding. In addition, boundary changes pretty much chopped in half two of those seats, so there were another two seats, with other candidates, which were also by-election defences of a sort. The Lib Dems won both of those too, making for a total of six out of six successful by-election ‘defences’.
But what did the MRPs say? This was not a happy story for many, with, for example, Labour being repeatedly and greatly over-rated in Frome and East Somerset, put in the top two several times and even first more than once. Labour’s actual result was to finish fourth on 14%.
What perhaps should have been the easiest one to get right, North Shropshire, was a bit of a disaster for MRPs. Helen Morgan, the Lib Dem MP, did not just win, she won easily. She got a 31% majority, higher than in her by-election win, and polled 53%, again higher than in her by-election win. A very clear win, in other words. Yet five MRPs got it wrong, thinking that she would not win, with only three getting it right. Moreover, those that got it wrong had some spectacularly wrong figures, including one with the Conservatives first and Reform second, and one with Conservatives first and Labour second.
The exit poll, by the way, didn’t do much better, giving the Conservatives a 48% chance and Labour a 45% chance. To emphasise: this is a seat where the Lib Dem candidate won with a 15,311 majority, a 31% margin. And Labour - given a 45% chance of winning in the exit poll - got 7%.
The ninth MRP, Survation, decided to omit the constituency from its seat-by-seat details, perhaps wisely concluding that its model could not cope with that seat. Of course, you can argue that such special manual intervention comes with methodological problems. Though conversely, I think it’s also fair to question whether some other modellers should have looked at their North Shropshire figures pre-publication and gone back to revise their modelling before publishing, taking their North Shropshire problem as a warning sign of wider issues.
After all, while Helen Morgan’s win was spectacular, it was not some weird outlier. It was part of a pattern of Lib Dems holding multiple by-election gains and of doing extremely well in seats heavily targeted all through the Parliament.
And although North Shropshire was the most eye-catching example of MRPs struggling to get Lib Dem figures right, it was not alone. There were some other very odd patterns in some of the MRPs. It wasn’t just that some overall had the Lib Dem performance too low, perhaps due to fieldwork being too early or a general under-estimation of tactical voting, and so undershot on seat numbers. The internal story within several was hard to figure out.
Take Woking. It was a ‘classic’ Lib Dem gain this time. Lib Dems in second place already, big local government gains during the Parliament, clearly a Lib Dem target with Ed Davey visits and the like - and a big Lib Dem win in the end, by 21%, with Labour squeezed, down to fourth on 9%.
Ipsos, however, had this as a three-way fight, with the Conservatives first, Labour second and the Lib Dems third (though not by much, to be fair). Nor was that a one-off. There were a dozen seats in total which the Lib Dems won with the Conservatives in second place where Ipsos had the top two as Conservatives and Labour.
If you look at just those dozen, you would think the story was an easy one: a resurgent Labour Party sweeping aside an under-performing Liberal Democrat campaign. Except that… elsewhere, Ipsos still had the Lib Dems doing well enough to make the most gains in their history.
You might predict one or the other, fair enough, but to be predicting both at the same time - both record breaking gains and yet also crashing and burning in top target seats - makes it hard to see what the internally consistent story was in that model’s results.
It is of course a little unfair to pick on the one MRP that came out bottom in the first table. But there were similar puzzles with other MRPs too. What was weird - and I think it worthy of more work by MRP modellers as they refresh their models for future use - was the number of MRPs showing the Lib Dems crashing and burning in marquee target seats like Woking at the same time as suggesting the party was going to be making many gains.
And, especially after the 2024 election result, there are more than enough Lib Dem MPs that it is a problem if MRPs cannot properly understand what is happening to the party.
There is also a lesson here for journalists. Not all MRPs are equal. Different models have different characteristics. It may be hard to predict in advance which will be the most accurate. For example, it’s certainly arguable that some of the things which made YouGov so accurate on Lib Dem numbers this time may have led it astray in inflating Lib Dem projected numbers in, say, 2015.6
But there are also clear variations between different MRP models, such as over how proportional the swing they project is or the way that some adjust for unwinding and others do not. Or the way that some are more aggressive in predicting tactical voting further ahead of polling day than others. While there was some good commentary by pollsters themselves on why their models differed, media coverage gave almost no clue as to why MRPs varied and which ones might be better or worse in particular circumstances.
It is not only some MRP teams who should be looking at how to raise their game next time. So too the media.
National voting intention polls
A thin table so far, as I’ve reset it to include only post-general election polls:
For more details and updates through the week, see my daily updated table here and for all the historic figures, including Parliamentary by-election polls, see PollBase.
Last week’s edition
Do long-term trends mean doom for the Conservatives?
My privacy policy and related legal information is available here. Links to purchase books online are usually affiliate links which pay a commission for each sale. Please note that if you are subscribed to other email lists of mine, unsubscribing from this list will not automatically remove you from the other lists. If you wish to be removed from all lists, simply hit reply and let me know.
How voters view the Conservative leadership contenders, and other polling news
The following 10 findings from the most recent polls and analysis are for paying subscribers only, but you can sign up for a free trial to read them straight away.