Welcome to the 142nd edition of The Week in Polls (TWIP), which takes a look at the House of Commons seat projections starting to appear again from pollsters and asks, should we pay attention to them? And if so, how to get the best from them?
Then it’s a summary of the latest national voting intention polls and a round-up of party leader ratings, followed by, for paid-for subscribers, 10 insights from the last week’s polling and analysis.
This week, that includes views from pollsters on the prospects for Reform.
Finally, before we get down to the serious business, this week my eyebrows were raised by some polling from the archives. History has not been kind to Dr. Beeching’s report that resulted in large-scale railway closures. Here, courtesy of NOP, is what the public thought at the time in 1963:
Want to know more about political polling? Get my book Polling UnPacked: the history, uses and abuses of political opinion polling.
What to make of the Parliamentary seat projections?
We have already had 55 standard voting intention polls so far in this Parliament. But we now also have the first set of seat projections from an MRP, from More in Common, as well as the first set of seat projections from a new analysis of local council by-elections, by JL Partners.
Generally, I view such polling data as similar to a firm’s stock price. A firm’s stock price is at once a useful indicator of how things are going, a hint of what might be to come and a measure that is in itself consequential. The share price is not the same as a company’s fundamentals, and paying attention to the latter is often a good way both of predicting and indeed of influencing the former. But there are still good reasons why people look at share prices.
Similarly with polls. So early in a Parliament they are not a firm prediction of what is going to happen at the next Westminster general election by any means, but they do give a rough idea of the basic political landscape.
And please, disregard the Westminster-centric, top-down centralising myopia of ‘but the next election is years away’. Polls impact the media, politicians and the public well ahead of an election. And anyway, there are big elections coming up every May, with the fate of thousands of politicians and billions of pounds of public expenditure at stake.1 Sure, voting intention polls for a Westminster general election are not polling for those elections specifically. Yet there is a consistent decent relationship between the national picture from the polls and the results in at least council elections (though of course, for Scottish Parliament and Welsh Senedd elections, Scottish and Welsh specific polls are far more useful).
Before I get sucked into writing yet another piece about why the answer to problems with polls is more polling and better polling commentary,2 let’s look instead at what More in Common and JL Partners have to offer.
Both in their own ways give us seat projections.
That is useful because if in 2020 a psephological genie had popped up in your toaster one morning to reveal to you that the next general election would see Labour on 34% and the Lib Dems on 12%, I think it’s fair to say you would not have gone on buttering your toast expecting a Labour majority of 174 and a Lib Dem seat tally of 72. First past the post can do all sorts of things when matching up vote share to seats.
Moreover, that genie popping up in your toaster tomorrow with the next general election vote share figures would most likely be an even less sure guide to seat numbers given the complexity of the political contests between different pairs of parties (and independents) in different parts of the country and types of seats.
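To see that disproportionality in miniature, here is a deliberately toy sketch (in Python, using made-up figures for two imaginary ten-seat countries, not anything from a real poll) of how the same national vote share can deliver anywhere from no seats to a majority under first past the post, depending purely on how the vote is spread:

```python
# Illustrative only: two imaginary ten-seat countries where Party A wins
# 40% of the national vote in both, but with that vote spread differently.

def seats_won(constituencies):
    """Count seats where Party A beats Party B (first past the post)."""
    return sum(1 for a, b in constituencies if a > b)

# Scenario 1: Party A's 40% is spread evenly, so it narrowly loses everywhere.
even_spread = [(40.0, 60.0)] * 10

# Scenario 2: the same 40% is concentrated: solid wins in six seats,
# heavy losses in the other four.
concentrated = [(55.0, 45.0)] * 6 + [(17.5, 82.5)] * 4

for name, seats in [("Evenly spread", even_spread), ("Concentrated", concentrated)]:
    national_share = sum(a for a, _ in seats) / len(seats)
    print(f"{name}: {national_share:.0f}% of the vote, {seats_won(seats)}/10 seats")
```

Same 40% national vote share both times; zero seats in one case, six out of ten in the other. Scale that up to 650 constituencies and several parties, and the need for seat projections rather than vote shares alone becomes clear.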
So seat projections, if done well, are useful. How useful, though, are those from More in Common and JL Partners?
More in Common
Starting with More in Common’s MRP model, in the words of Peter Kellner, it is a model “whose prediction of the seats won by the main parties was the closest of the MRP polls”. Or in my less charitable words in a comment on Peter’s post: “their prediction of Lib Dem seat numbers was one of the most inaccurate”.
To give the full picture, More in Common’s final MRP had Labour on 430 seats (18 seats too high), Conservative 126 seats (5 too high), Lib Dems 52 (20 too low), SNP 16 (7 too high), Reform 2 (3 too low), Plaid 2 (2 too low) and Greens 1 (3 too low).
The right overall picture, certainly - Labour big win, Lib Dems up a lot, Reform making a showing, SNP hammered. But the Lib Dem figure was the joint third worst of all the MRPs and, for example, putting the Greens on 1 rather than 4 may have only been an error of 3 but it was also the difference between a mediocre Green result and one at the top end of their hopes.
All of which shows the value in a nuanced approach to understanding the polls, paying attention to the detailed figures while remembering not to expect them to have a degree of precision that is beyond their track record. That MRP got the big picture right, but had plenty of non-trivial errors in the details. A point that matters all the more if you are interested in parties other than Conservatives and Labour.
Which is why I think it is fair to say that More in Common rather oversold their new MRP, writing it up themselves in a way that implies an extremely high degree of precision and accuracy. Quoting their own words:
The model estimates that an election today would produce a highly fragmented and unstable Parliament with 5 parties holding over 30 seats.
Big picture, fair enough. But their words went on:
Key findings
… Labour would lose 87 seats to the Conservatives, 67 to Reform UK, and 26 to the SNP, and their Red Wall gains would be almost entirely reversed.
Reform UK would be third largest party, increasing their seat total 14-fold to 72 seats.
I think it’s fair to say that reading that, you are being lured into thinking that pinpoint predictions to the nearest seat are the right level of accuracy to expect from an MRP and that it is at that level of detail that you can reasonably discuss their results.
Especially as - to their credit3 - the caveats More in Common add, albeit further down the story, include this explicit point:
We also don’t have assumptions about tactical voting in this model as we would during a General Election campaign as these tend to not be useful without an imminent election.
Yet it is not as if tactical voting was a minor issue at the general election.
Or as Owen Winter, polling expert for The Economist, put it:
More in Common's MRP has the predictable mid-term attenuation, hurting the Lib Dems and Greens most of all. In other words: Lib Dem MPs will surely outperform mid-term MRPs after dropping a kajillion leaflets during an election campaign.
There is a respectable methodological debate to be had about the extent to which MRP models well ahead of a general election should ‘force’ levels of tactical voting to appear in their results, based on knowledge of past electoral patterns (just as there is over modelling turnout too: how far should you take at face value what people say?). But if you are going to acknowledge that you are not fully accounting for tactical voting, I think it’s fair to question whether you should be headlining seat numbers for parties down to the very last single seat.
More in Common’s Luke Tryl was generous enough to respond in some detail to this point, explaining their approach:
… was to do a “point in time” snapshot, but stressing uncertainty and range in write up and tables. Think risk if we’d just done ballparks was lack of transparency? On TV [tactical voting] we used in the GE, but I just don’t think we know outside of an election if patterns would be 2024 strong or 2019 weak.
For my money, the headlines and introductory text in their own write-ups did the opposite of stressing uncertainty, though. Whether you agree with me or not on that, it certainly shows the value of checking the details of an MRP before deciding what it tells us.
What the poll does show us is that the removal of the Conservatives from office hasn’t resulted in a collapse in anti-Conservative voting. Far from it: even with the model acknowledged to downplay the likely tactical voting impact at the next election, it still shows an efficient Labour vote distribution (winning more seats than the Conservatives, just, on a lower vote share, just), an effective Lib Dem vote distribution (with a seat tally that in any context before 4 July would have been seen as stupendous) and overall a continuation of a fragmented political picture.
Useful conclusions then which add to what the national vote share polls tell us, as long as you do not try to force undue precision on them.
JL Partners
If that is MRPs, what about JL Partners? Their POLARIS model uses local council by-elections. To simplify greatly, they look at how various demographic and political variables relate to changes in support for different parties in council by-elections. They then apply those changes to voting at the last general election, showing what it would mean for constituency results if there was a similar pattern of demographic and political change as seen in the council contests. More detail, much more detail, is in their description.
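To be clear, the sketch below is my own grossly simplified illustration of that general idea in Python, not JL Partners’ actual model: it applies a single, invented, nationwide set of changes to invented constituency results, whereas POLARIS conditions the changes on demographic and political variables. But it shows the basic mechanics of projecting seats from estimated changes in support:

```python
# A crude illustration of the general idea, not JL Partners' model itself:
# estimate how each party's support has changed (here, one invented set of
# nationwide changes; POLARIS conditions this on demographic and political
# variables), apply those changes to the last general election's constituency
# results, and re-count the winners. All numbers below are invented.

baseline_2024 = {
    "Seat A": {"Lab": 45.0, "Con": 30.0, "Reform": 15.0, "LD": 10.0},
    "Seat B": {"Lab": 38.0, "Con": 35.0, "Reform": 20.0, "LD": 7.0},
    "Seat C": {"LD": 40.0, "Con": 38.0, "Lab": 12.0, "Reform": 10.0},
}

# Invented changes in support (percentage points) inferred from by-elections.
estimated_change = {"Lab": -10.0, "Con": +2.0, "Reform": +12.0, "LD": -4.0}

def project(shares, change):
    """Apply the estimated changes to one constituency's baseline shares."""
    return {party: share + change.get(party, 0.0) for party, share in shares.items()}

for seat, shares in baseline_2024.items():
    projected = project(shares, estimated_change)
    winner = max(projected, key=projected.get)
    print(f"{seat}: projected winner {winner} ({projected})")
```

The real model is, of course, far more sophisticated than that, but the underlying logic - estimated change applied to a known baseline, then winners re-counted - is the same.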
JL Partners too cannot avoid the temptation of specific headlines on their results:
Reform UK is on track to win over 70 seats, given local election results. Meanwhile, Labour stand to lose 155 seats across the country, leaving them without a majority.
Which again could be read as implying a degree of precision that is beyond what is reasonable to expect of their model, though they do revert to more general language:
This method currently projects a very different House of Commons, with the Labour Party losing its majority and no party in a position to form a majority government … Coastal seats across the North East would be lost by Labour to Reform UK, including Easington and Hartlepool. Further losses would be metered out by Reform in the Thames Estuary and large parts of the North West.
And to sensible caveats:
Starting with the 2019 general election, we found the model fared very well calling 92% of constituencies for the correct party in England and Wales.
Such good news was not the case for the 2024 general election back test. The model drastically overestimated the Conservative position and drastically underestimated Labour’s position — Labour were predicted to win a slim majority. However, the main reason for this is due to the insurgent nature of Reform UK. The party had very little success in local election in the run up to the general election, despite having major support in national polls.
In other words, the big picture painted by the model seems plausible given its (back tested) prior performance, and again tells us things about the political landscape that national voting intention polls cannot.
But also, the caveats mean that these seat projections really aren’t point estimates to be taken seriously to the last single seat. Nor are they estimates immune to significant changes in the political landscape, as with the pre-2024 problem the model had with Reform.
As with More in Common’s write-up, you could debate whether it is good that JL Partners say this at all, or unfortunate that they do not say it until near the end of the piece:
That being said, this method likely predicts the worst-case reasonable scenario for Labour right now. It is likely that Labour would outperform these numbers.
Again then, the lesson is that the details are an essential part of making sense of the headlines. That is something which, alas, most polling coverage in the media in the UK is not very good at.4 Sometimes, not good at all.
A particular weakness is that so much of the polling coverage is single-sourced, that is, a story covering just the one poll, rather than (as this newsletter tries to do, for example) using a new poll as a starting point to talk about poll data more broadly.
Single-sourced news stories are generally deprecated except in special circumstances. Single-poll stories, though, exist in abundance.
Though I would not go quite so far as Amy Walter, who said:
Journalists talking about polls are like pre-teens talking about sex. They know all the words. They talk about it a lot. But they have no idea what they’re talking about.
Amy Walter, Editor-in-Chief, The Cook Political Report
An important factor to consider when trying to do better and to make sense of the details is how well seat models reflect the levels of tactical voting that it is reasonable to expect at the next general election. This is a point which led both newsletter favourite Professor Rob Ford and a journalist who definitely does get wisdom out of polling details, Stephen Bush, to be fairly critical of the value of the JL Partners and More in Common figures.
But once you understand the limitations from the details, there is real value in both approaches. As I put it on social media:
The seat total numbers from More in Common and JL Partners are of limited use.
But... there are two things that are very useful about this pair of projections.
1. We've got two different methodologies (MRP, local election data) to add to the national polls. Having evidence of multiple sorts helps give confidence in any common overall picture they draw, and
2. The patterns, rather than the levels, are useful. There are good reasons to think, e.g., that the MRP underplays the level of anti-Con tactical voting if there was a general election right now, and that the local by-election data may overplay it.
So 'x party loses y seats as tactical voting recedes' isn't a useful take. But 'widespread pattern of anti-Conservative tactical voting even though they're no longer in government' is a useful insight. The pattern can be identified even if the strength/level of it can't.
More of both therefore, please.
And lots more of the caveats and context when talking about both, double please.
I have not included the Stonehaven MRP that is just out in the above analysis, as at the time of writing all that is public so far is a newspaper story with minimal details. As the above shows, it is best to have more details first. A point that is reinforced by the newspaper’s account which, for example, suggests the MRP shows a mere 0.1% swing from the Liberal Democrats to the Conservatives since July, yet that goes with the Lib Dems losing 25 seats. Without more details there is no way of telling whether that surprising pair of statistics goes together because the MRP is producing insightfully unexpected or implausibly incongruous figures.5
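To show why that pairing raises eyebrows, here is the back-of-the-envelope arithmetic: under a simple uniform swing (which, to be clear, an MRP is not), a Lib Dem seat only falls to the Conservatives if the Lib Dem lead over the Conservatives is smaller than twice the swing. A toy sketch, with invented leads:

```python
# Under a simple uniform swing (which an MRP is not), a Lib Dem seat only
# falls to the Conservatives if the Lib Dem lead over the Conservatives is
# less than twice the swing. The leads below are invented for illustration.

swing = 0.1  # percentage-point swing from the Lib Dems to the Conservatives

# Invented Lib Dem leads over the Conservatives (percentage points).
ld_leads_over_con = [0.1, 0.5, 1.2, 3.4, 7.8, 12.0]

lost = [lead for lead in ld_leads_over_con if lead < 2 * swing]
print(f"Seats lost on a uniform {swing}% swing: {len(lost)} of {len(ld_leads_over_con)}")
```

On that crude arithmetic a 0.1% swing should barely shift any seats at all, which is exactly why the 25-seat figure needs the underlying detail before anyone can judge whether it is insight or incongruity.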
Voting intentions and leadership ratings
Here are the latest national general election voting intention polls, sorted by fieldwork dates:
Next, a summary of the leadership ratings, sorted by name of pollster:
For more details, and updates during the week as each new poll comes out, see my regularly updated tables here and follow The Week in Polls on Bluesky.
For the historic figures, including Parliamentary by-election polls, see PollBase.
Catch-up: the previous two editions
My privacy policy and related legal information is available here. Links to purchase books online are usually affiliate links which pay a commission for each sale. Quotes from people’s social media messages sometimes include small edits for punctuation and other clarity. Please note that if you are subscribed to other email lists of mine, unsubscribing from this list will not automatically remove you from the other lists. If you wish to be removed from all lists, simply hit reply and let me know.
Glass half empty or glass half full? Government approval ratings, and other polling news
The following 10 findings from the most recent polls and analysis are for paying subscribers only, but you can sign up for a free trial to read them straight away.
YouGov’s write-up of its end of year polling takes a very negative line about