Veritium Political Insights

Why the Pundits Are (Occasionally) Wrong about Elections

Updated: Jan 24, 2022

Nate Silver rose to fame quickly as his predictions impressed the world. In 2008, Silver and his website, FiveThirtyEight, correctly predicted the presidential outcome in 49 of 50 states as well as the victor in all 35 Senate races. Four years later, he did even better, calling all 50 states correctly in the presidential election. Even in 2016, when the nation was reeling in shock after Donald Trump defeated Hillary Clinton to become the 45th president of the United States, Silver bested his peers. Some forecasters gave Clinton as high as a 99.9% chance to win on election night, but FiveThirtyEight was considerably more conservative, giving her only a 71.4% chance. In retrospect, 71.4% was not a bad forecast, and it was roughly in line with betting markets.

But FiveThirtyEight also erroneously predicted that Senate control was likely to be split 50-50 that year (Republicans won 52 seats). In 2018, it erred again, predicting Democrats would win Senate races in Indiana and Missouri, only for them to lose both by about six points. In 2020, FiveThirtyEight’s Senate forecast missed to the left for the third straight election: Democrats won only 50 seats instead of the expected 52. Some individual forecasts were badly inaccurate, missing to the left by 5.2 points in Iowa’s Senate race, 10.6 in Maine, 5.1 in South Carolina, 6.6 in Kentucky, and 6.8 in Montana.

For the third Senate election in a row, FiveThirtyEight underperformed bettors on PredictIt, who used common sense to conclude that Democrats had effectively no chance to flip Senate races in deep-red states across America. Polling has repeatedly missed to the left, no matter how much FiveThirtyEight adjusts it. Other forecasters were similarly incorrect in 2020: The Economist went so far as to favor Democrat Theresa Greenfield in Iowa (she lost by six points), and RaceToTheWH favored Biden to win Ohio over Trump (he lost by ten). Both were handily beaten by bettors on PredictIt, who gave Trump a 72% chance to win Ohio and Joni Ernst (R) a 65% chance to win Iowa. These are especially stark examples of a consistent industry problem: forecasts have joined the polls in understating Republican electoral support. After all, Ernst had trailed Greenfield in five of the final eight polls, and Biden led the final three polls in Ohio.

Polls were so misleading that Veritium Insights’ Senate forecasts made a full year before the 2020 election achieved 91.4% accuracy, tying the number of races predicted correctly by election-night predictions from FiveThirtyEight, RaceToTheWH, DecisionDeskHQ, and bettors on PredictIt (and beating The Economist and the Princeton Election Consortium). We believe this is an aberration (polls were especially bad), and we would never bet on our forecasts from a year out over FiveThirtyEight’s forecasts from the night before, but it should never happen at all. Our own forecasts from the night before the election led the field with 97.1% accuracy, failing only to predict Jon Ossoff’s shocking victory over David Perdue. While 2020 may have been an aberration, it is still a clear sign that models have been relying too heavily on conventional polling as a predictive tool.
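For clarity, the accuracy figures above are simply the share of races where a forecast’s favorite went on to win. Here is a minimal sketch of that metric; the races and probabilities are invented for illustration, not any forecaster’s real 2020 numbers:

```python
# Toy illustration of the accuracy metric used in this post: the share of
# races where the forecast's favorite actually won. Races and probabilities
# below are hypothetical, not any forecaster's real data.

forecasts = {
    # race: (probability given to the Republican, did the Republican win?)
    "Iowa Senate":     (0.35, True),
    "Maine Senate":    (0.40, True),
    "Montana Senate":  (0.45, True),
    "Colorado Senate": (0.20, False),
}

correct = sum(
    (p_rep > 0.5) == rep_won  # the favorite matches the actual winner
    for p_rep, rep_won in forecasts.values()
)
print(f"Accuracy: {correct / len(forecasts):.1%}")  # 25.0% in this toy set
```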


Polls Have a Consistent Sampling Bias and Need to Be Adjusted

Republicans have outperformed their average generic-ballot polling margin in the final House popular-vote result in 17 of the last 20 congressional elections, stretching back to 1980, by an average of 2.5 points. Although polling has gotten plenty of bad press in the modern era for underestimating Republicans, the pattern long predates it. The problem has been exacerbated in the Trump era, but it has been consistent enough that forecasts need to account for it. Our models, which are built on historical data, naturally adjust polling an average of 2.41 points toward Republicans. Forecasts built normatively (on assumptions rather than on real data) will not make this important adjustment; because most forecasts do not shift their polls toward Republicans, they end up slanting too far toward Democrats.
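As a rough illustration of what such an adjustment looks like in practice, the sketch below shifts a generic-ballot margin by the ~2.41-point average from our models; the raw poll margin itself is a made-up example:

```python
# Minimal sketch of the adjustment described above: shift the generic-ballot
# margin toward Republicans by the historical average polling error. The
# 2.41-point figure is the one cited in the text; the raw margin is invented.

HISTORICAL_LEFT_BIAS = 2.41  # avg. points by which polls have favored Democrats

def adjust_generic_ballot(dem_margin_pts: float) -> float:
    """Return the D-minus-R poll margin shifted toward Republicans."""
    return dem_margin_pts - HISTORICAL_LEFT_BIAS

print(adjust_generic_ballot(4.0))  # polls say D+4.0 -> adjusted D+1.59
```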


Race Polling Is Not Good Enough to Tell the Whole Story

Our modeling has also demonstrated that the public weighs state and single-race polling too heavily. These smaller-sample polls (covering one House or Senate race, or a single state in a presidential election) are significantly more predictive when combined with other data rooted in historical elections. Although historical election data is weaker than polling in the sense that it is outdated, it is stronger in the sense that it consists of real votes and real-world turnout. Our modeling concludes that race polling should make up only about 61% of a race’s prediction on average (more or less depending on the quality of polling in that race), with the remaining 39% a combination of other statistically significant factors such as past election results, approval ratings, incumbency, and national polling. National polling is generally more accurate and far less volatile than state polling. Other forecasting services rely almost exclusively on polls to predict races; polls, for example, made up over 98% of FiveThirtyEight’s 2018 Florida Senate forecast, from the most recent year FiveThirtyEight published its methodology. That over-reliance on volatile, low-quality state and district polling led forecasters to make preposterous predictions in red states in 2020 (and led Democratic donors to pour $220 million into Amy McGrath and Jaime Harrison). Analysis of past data shows that polls tell only part of the story, and more complex modeling is needed to increase accuracy.
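A simplified sketch of that kind of blend appears below, assuming the ~61% average poll weight cited above. The particular fundamentals chosen, their equal weighting, and all input margins are illustrative assumptions, not our production model:

```python
# Simplified sketch of blending race polling with non-poll fundamentals.
# POLL_WEIGHT reflects the ~61% average cited in the text; the fundamentals
# mix (and its equal weighting) is a stand-in for the real model, which
# also folds in factors like incumbency. All margins are D-minus-R points.

def blend_margin(race_polls: float,
                 past_results: float,
                 approval: float,
                 national_polls: float,
                 poll_weight: float = 0.61) -> float:
    """Weighted blend of race polling (~61%) and fundamentals (~39%)."""
    fundamentals = (past_results + approval + national_polls) / 3
    return poll_weight * race_polls + (1 - poll_weight) * fundamentals

# Example: polls show D+2, but the fundamentals lean Republican.
print(blend_margin(race_polls=2.0, past_results=-6.0,
                   approval=-3.0, national_polls=1.0))  # ~= D+0.18
```

Because the fundamentals consist of real votes and slower-moving indicators, a blend like this pulls an outlier poll back toward what the race’s history says is plausible.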


Bottom Line

Pollsters are struggling to predict election results accurately. Until they can reconcile the differences between their polls and actual results, forecasters must be wary of relying too heavily on polling. Strong, accurate forecasting models must not only adjust polls based on historical misses, but also account for other data points such as past election results, incumbency, and the broader political environment.

