by Adrian Worton
Since last week's General Election, we have been trying to use the results to understand how the model we built for it could have been set up in a way which would have allowed our predictions to be closer to the truth.
After looking at how much we favour the favourite within our model, this time we consider the use of a party-based swing, and the nature of independence between constituencies.
How would swing work within the model?
To answer this we need to go back to how we calculate our probabilities. This is quite maths-heavy, so feel free to skip to the next section. We will use the randomly-selected example of Gedling, which is displayed in the slideshow below.

We start with the odds given to us by bookies (Fig 1a), which naturally give a lower value the better a party is expected to do. In our example, the favourite is Labour, as their price is the lowest at 1/10 (0.1). However, we are going to be looking at probabilities, and therefore want a higher value to represent the favourite. So we transform each odd using the formula 1/(odd+1) (Fig 1b).

This formula is unchanged from the method we used in our Premier League simulator, where we explain the reason for using it. However, we now raise each value to the same power, which we have dubbed ϕ and set equal to 1.8. As we see in Fig 1c, this reduces the size of all the values, but affects the lower ones much more dramatically, increasing the favourite's value relative to the other parties.
We then sum up these values (in our example this is 0.842 + 0.030 + 0.002 = 0.874) and divide all of our values by this sum. In other words, we are just stretching the values (which stay the same size relative to each other) to make sure they add up to 1 (Fig 1d). This means we now have our probabilities.
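The three steps above can be sketched as a short function. Labour's 1/10 price comes from the Gedling example; the 6/1 and 30/1 prices for the Conservatives and UKIP are illustrative assumptions, chosen so the intermediate values roughly match the figures quoted above.

```python
# Convert bookmakers' odds into win probabilities for one constituency.
# Labour's 1/10 price is from the Gedling example; the 6/1 and 30/1
# prices for the Conservatives and UKIP are illustrative assumptions.
PHI = 1.8

def odds_to_probs(odds, phi=PHI):
    # Fig 1b: invert so the favourite gets the highest value.
    values = [1 / (o + 1) for o in odds]
    # Fig 1c: raise to the power phi, boosting the favourite further.
    values = [v ** phi for v in values]
    # Fig 1d: rescale so the values sum to 1.
    total = sum(values)
    return [v / total for v in values]

gedling = odds_to_probs([0.1, 6.0, 30.0])  # LAB, CON, UKIP
print([round(p, 3) for p in gedling])  # roughly [0.963, 0.034, 0.002]
```

With these assumed prices, the pre-rescaling values are about 0.842, 0.030 and 0.002, matching the example sum of 0.874.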
However, we now want to apply a swing to each party which affects all their probabilities in the same way. To do this we go back to the stage shown in Fig 1c, before we rescale our values. We now multiply each value by a new variable, shown in Fig 1e. These values are consistent across all constituencies. For example, sLAB will take on the same value in Gedling as it will in Gordon. Each s term will be a random number between 0.5 and 2. In other words, it could halve or double each party's value, or somewhere in between.
We then rescale our values to sum to 1 as before. Despite these swings sounding drastic, the effect within each constituency is small. In Fig 1f we see that setting sLAB to 0.5 (with sCON and sUKIP left at 1, so those values are unaltered) only reduces Labour's chances by 3.4%. The change is even less drastic when we set sLAB to 2 (Fig 1g) or sCON to 2 (Fig 1h). However, even small changes like these can be quite drastic when applied across all 650 constituencies.
Our s variables are identical to the multipliers we used in an earlier article looking at which parties could influence the result of the election.
Is such a swing necessary?
We had realised that the odds may not be perfectly weighted to reflect the chances of a favourite winning a seat, and this is why we introduced ϕ ahead of the election. However, we had always assumed that the bookmakers had no (unintentional) bias built into their models in favour of one party or another. This meant we could assume all 650 constituencies were independent.
This was naïve, as we discovered on results day that the polls (on which the odds, and therefore our probabilities, were based) had horrendously miscalculated the national support for a lot of parties. Most notably, the Conservatives performed much better than expected, whilst Labour and the Liberal Democrats did far worse. It is easy to look back and credit these discrepancies to factors such as the "shy Tory" or "lazy Labour" effects, but these are not new factors, and we would expect bookmakers to have factored them into the calculation of their odds. Indeed, such faith in the level of analysis done by bookies is exactly why we have based so many models on their odds.
This is why our introduction of swing is needed: to allow our model to include scenarios where unforeseen factors influence the results. Aside from the two previously mentioned, there could be others not considered at all. For example, it is likely that the supporters of smaller parties such as UKIP and the Greens are going to be more vocal in their support, meaning their actual votes could be lower than the polls suggest. But as these are two parties reaching new heights in national profile, it is hard for psephologists to get an accurate measure of this effect, meaning they could easily under- or overestimate their chances in all seats. Therefore it is entirely necessary that we include such randomness in our model.
What would our updated model have predicted?
We had actually toyed with the idea before the election, but decided against it. We can now retrospectively fit our new mechanism to the final version of our model and run multiple simulations to see what we could have been predicting.
As there are far more factors to include in our model, we ran substantially more simulations: 10,000, rather than the 2,000 used in the pre-election update. We will just look at the three parties whose results we did not get remotely close to with our predictions: the Conservatives, Labour and the Liberal Democrats. The results of these simulations are shown below.
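The simulation loop can be sketched as follows. The three toy constituencies stand in for the real 650 and are invented purely for illustration, as is the assumption that each s is drawn uniformly from [0.5, 2] (the article only says it is a random number in that range).

```python
import random

PHI = 1.8
PARTIES = ["CON", "LAB", "LD"]
# Toy stand-ins for the 650 real constituencies: bookmakers' odds per
# party. These three rows are invented purely for illustration.
CONSTITUENCIES = [
    [0.5, 1.2, 8.0],
    [2.0, 0.3, 15.0],
    [0.8, 6.0, 1.5],
]

def simulate_election(rng):
    # One simulated election: draw one swing factor per party (shared
    # across every constituency), then sample a winner in each seat.
    swings = [rng.uniform(0.5, 2.0) for _ in PARTIES]  # assumed uniform
    seats = {p: 0 for p in PARTIES}
    for odds in CONSTITUENCIES:
        values = [((1 / (o + 1)) ** PHI) * s for o, s in zip(odds, swings)]
        total = sum(values)
        winner = rng.choices(PARTIES, weights=[v / total for v in values])[0]
        seats[winner] += 1
    return seats

rng = random.Random(0)
totals = {p: [] for p in PARTIES}
for _ in range(10000):
    for party, n in simulate_election(rng).items():
        totals[party].append(n)
# The spread of each party's simulated seat totals gives its range.
for p in PARTIES:
    print(p, min(totals[p]), max(totals[p]))
```

Because every constituency shares the same swing factors within a simulation, the seats are no longer independent: a good draw for one party lifts its chances everywhere at once, which is exactly what widens the prediction ranges.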
On each graph, the green bars indicate the minimum and maximum of our previous predictions, and the star indicates the total each party actually finished on.
In all three cases, our prediction ranges are unsurprisingly much wider than before, and are now wide enough to include the real result.
Clearly, the real results all occur a very small proportion of the time, and had these actually been the results of our simulation, we would have been focusing on the centre of each graph, where the vast majority of simulations ended up. But regardless, having prediction ranges which include the real result is very important, as it would have potentially lessened the shock of the exit poll and subsequent results. And the model should never be configured such that the real result is actually impossible to attain.
Conclusion
What this shows is why a partybased swing is a hugely useful addition to our model, and one which should have been implemented. Had it been done, we would have been a mile ahead of other predictors of the election, none of whom had produced any predictions which hinted at the true result.
An interesting development of this is that it would also have allowed us to make final constituencybased predictions based on the exit poll, by picking s values for each party which give expected seat results in line with the exit polls.
So, had this mechanism been in place when the exit poll was released, we would have been able to set the swing values shown in the table on the right.
Whilst it wouldn't have given us perfect scores (not helped by the lack of exit poll data for every party), it would have seen a large improvement on our initial predictions. And what this means is that we would have been able to provide improved probabilities for parties winning each seat by applying this swing.
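One way this exit-poll calibration could work is a simple bisection on a party's s value, relying on the fact that its expected seat total (the sum of its per-seat probabilities) increases monotonically in s. The toy odds and the target of 2 expected seats are invented for illustration.

```python
PHI = 1.8
# Toy per-constituency odds [party of interest, main rival]; invented
# purely for illustration.
CONSTITUENCIES = [[0.5, 1.2], [2.0, 0.3], [0.8, 6.0], [4.0, 0.6]]

def expected_seats(s):
    # Expected seat total for the first party under swing factor s:
    # the sum of its swing-adjusted win probabilities across seats.
    total = 0.0
    for odds in CONSTITUENCIES:
        values = [(1 / (o + 1)) ** PHI for o in odds]
        values[0] *= s
        total += values[0] / sum(values)
    return total

def calibrate(target, lo=0.5, hi=2.0, iters=50):
    # Bisect on s until the expected seat total matches the exit poll.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if expected_seats(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = calibrate(target=2.0)  # hypothetical exit-poll expectation
```

Repeating this per party (and renormalising) would give a full set of s values consistent with the exit poll, which could then be fed back into the per-seat probabilities.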
This would have substantially improved our live coverage of the election, and will be a very useful tool in future use of this model.
General Election articles
Previous: Phinding Phi