How polling got it wrong this year is a complicated tale

Published 8:00 am Thursday, December 8, 2016

DHM RESEARCH - John Horvick

Monday, Nov. 7 was a simpler time, with a simpler story. All anyone could talk about was what a Clinton presidency would look like, and what sound such a lofty glass ceiling might make when it finally broke. The Democrats seemed likely to eke out a majority in the Senate, while the Republican party teetered on the edge.

So much for the story being a simple one. One of the many looming questions in this newly complicated tale: How could we have been so wrong?

As a pollster, I’ll take some of the arrows.

A common critique in the aftermath is that polls were far off, and that these errors created a false belief Clinton was sure to win, which may have depressed Democratic turnout.

There is some truth to this narrative, especially looking at the Midwest and Rust Belt states. But taking a national view reveals that not much has changed from 2012 to 2016 in terms of the overall quality of polls.

National polling was quite good this year, and even better than four years ago. In 2012, national polling averages had Barack Obama with a 0.7 percent lead over Mitt Romney. Obama ended up winning the national popular vote by 3.9 percent, which was 3.2 percentage points higher than predicted. In this election, Clinton led in national polling averages by 3.2 percent. As of this writing, she leads Trump in the popular vote by 1.8 percent, a difference of 1.4 percentage points: a smaller gap than between the predictions and results of 2012.
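The comparison above reduces to simple arithmetic: the polling "miss" is the gap between the polling-average margin and the final popular-vote margin. A minimal sketch, using only the national figures cited in this article:

```python
# Gap, in percentage points, between the polling-average margin
# and the final popular-vote margin. Figures are the national
# numbers cited in the text.

def polling_miss(predicted_margin, actual_margin):
    """Absolute error of the polling-average margin, rounded to
    one decimal place."""
    return round(abs(actual_margin - predicted_margin), 1)

# 2012: polls had Obama +0.7; he won the popular vote by +3.9.
miss_2012 = polling_miss(0.7, 3.9)   # 3.2 points

# 2016: polls had Clinton +3.2; she led the popular vote by +1.8.
miss_2016 = polling_miss(3.2, 1.8)   # 1.4 points

print(miss_2012, miss_2016)  # the national miss shrank from 2012 to 2016
```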

Looking at the state level admittedly reveals another story, and polling errors in Michigan, Wisconsin, Pennsylvania, and elsewhere have received much scrutiny. In my professional opinion, what’s been too long ignored — by pollsters, media outlets, and citizens alike — are some inherent uncertainties in election polling.

First, we must acknowledge that predicting close races with few undecided voters is much easier than doing so for close races with many undecided voters.

This election, significantly more poll respondents reported that they were “undecided” or planning to vote for a third-party candidate. Nationally, about 10 percent of voters were not committed to voting for a major party candidate, as compared to around 5 percent in both 2008 and 2012. In many states, this number was even higher than 10 percent. The impact of this larger pool of uncertain voters was often downplayed or ignored in media commentary about the polls.

Another source of uncertainty is in modeling election turnout, where pollsters project who will vote. A pollster’s best estimate of turnout, sometimes called their likely voter model, is based on historical trends, screening questions, and their own best guesses. Inherently, this practice is built on a set of assumptions that can differ from what happens on election day.

In close elections, if turnout varies even slightly from expectations, assumptions can become sources of error.

In Wisconsin, Michigan, and Pennsylvania, pollsters assumed that turnout among Democrats would match or exceed that of previous elections. Instead, turnout was lower.

If pollsters had presented a range of polling numbers based on a range of turnout assumptions — including the possibility of depressed turnout among Democrats — the public would have had a better appreciation for the unknowns.
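One way to picture that suggestion: publish the projected margin under several turnout scenarios rather than a single likely voter model. The support shares and turnout rates below are purely hypothetical, chosen only to show how a modest turnout shift can flip a projected lead:

```python
# Illustrative sketch: how a projected margin moves with turnout
# assumptions. All inputs are hypothetical, not from any actual poll.

def projected_margin(dem_share, rep_share, dem_turnout, rep_turnout):
    """Projected Dem-minus-Rep margin (percentage points) among those
    who actually vote, given support shares in the registered-voter
    pool and assumed turnout rates for each side."""
    dem_votes = dem_share * dem_turnout
    rep_votes = rep_share * rep_turnout
    total = dem_votes + rep_votes
    return 100 * (dem_votes - rep_votes) / total

# Hypothetical pool: 48% lean Democratic, 47% lean Republican.
scenarios = {
    "Dem turnout matches past elections": (0.62, 0.60),
    "Dem turnout slips":                  (0.56, 0.60),
}
for name, (dem_t, rep_t) in scenarios.items():
    print(f"{name}: {projected_margin(0.48, 0.47, dem_t, rep_t):+.1f}")
```

With these made-up inputs, a lead of roughly +2.7 points under the optimistic turnout assumption becomes a deficit of about 2.4 points when Democratic turnout slips, which is the kind of range a reader would have found informative.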

Election polling should not be viewed as an infallible crystal ball.

Instead, by presenting multiple outcomes, pollsters could reframe our work in a more accurate light: as a tool to help make informed predictions about electoral outcomes.

Finally, election forecasting platforms such as fivethirtyeight.com reveal an undercurrent to all these critiques: humans are bad at understanding probabilities and reckoning with uncertainty.

These platforms use cutting-edge modeling and the aggregation of poll results to provide estimated win percentages. On the eve of the election, fivethirtyeight.com forecast a 71 percent probability of a Clinton win and a 29 percent probability of a Trump win. Since then, the site has been criticized for being "wrong," when in fact it said all along that a Trump win was a distinct, if less-than-likely, possibility.
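A quick simulation makes the point concrete: an outcome assigned a 29 percent probability should occur, in the long run, nearly three times in ten. Calling such a forecast "wrong" after one such outcome misreads what the probability means:

```python
import random

# Simulate many hypothetical "elections" in which the underdog's true
# win probability is 0.29, and count how often the underdog wins.
random.seed(1)  # fixed seed so the sketch is reproducible

trials = 100_000
underdog_wins = sum(random.random() < 0.29 for _ in range(trials))

# The observed rate lands close to 0.29: roughly three wins in ten.
print(underdog_wins / trials)
```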

From overestimating the risks of flying to underestimating the risks of driving, people of all sorts, pollsters included, often shelve inconvenient probabilities to maintain preferred storylines, preserve assumptions, and shield themselves from the notion of uncertainty itself.

A Trump win was always possible. While at the time it may have been convenient to downplay smaller uncertainties — the high level of undecided voters, the impossibility of perfectly predicting turnout — we now find ourselves faced with uncertainty of a much larger magnitude: What does a Trump presidency actually look like?

John Horvick is vice president and political director of DHM Research, a leader in public opinion and policy research. DHM is a non-partisan and independent firm located in Portland, Seattle, and Washington, D.C., and has been conducting research for almost 40 years. Learn more at www.dhmresearch.com and find us on Twitter @DHMresearch.
