Those of us locked into the political world like to say there are “no days off” when it comes to working on and thinking about elections. But for those who are not at the mercy of the 24-hour news cycle, let me set the stage for you: We are mere months away from the start of the Republican presidential primaries, in which Donald Trump is almost certain to clinch the Republican nomination, after which he will face off with Joe Biden in November (possibly from federal prison) in an inevitably close election. Feeling any déjà vu?
I certainly am feeling a familiar sense of dread, which is why staying committed to being an informed voter is more important than ever. Voting, and convincing others to vote, is necessary for democracy. Voting is how we, the people, ensure our government reflects and protects our best interests. Staying informed, however, can be easier said than done. We are constantly bombarded with information from all sides, and sifting accurate facts and sound analysis from the noise to draw our own conclusions is increasingly difficult. Especially when those facts have failed us before, it is our responsibility to account for the context in which they appear.
“Voting is how we, the people, ensure our government reflects and protects our best interests.”
Voting is a hallmark of democracy, and to keep the government accountable, public polling can be used to gauge citizens’ views on a range of issues and, in turn, how well the government is operating. Although many citizens rely on polls to gauge how their interests align with their parties and candidates, public polling is not a perfect science. The imperfection of public polls was on full display in the 2016 election, when Trump’s win over Hillary Clinton shocked the left and contradicted numerous polls that had all but put Clinton in the Oval Office. The disappointment and distrust that followed warrant a closer look at how polls are conducted and what conclusions can be drawn from them.
First, we need to understand how polls are conducted in the first place. Media outlets use differing techniques to run their own public polls. Pew Research compared six different sources and their respective polling techniques, and the study found that opt-in survey samples yielded largely inaccurate pictures of public opinion. Although opt-in surveys are more cost-efficient and, thanks to the internet, accessible to a broader population, they are more likely than probability surveys to yield inaccurate and unreliable results: their samples are self-selected rather than designed to represent the population a particular question targets. In the realm of U.S. politics, opt-in surveys are additionally susceptible to bogus responses from non-citizens who intentionally answer surveys multiple times in order to skew results. If pollsters insist on using opt-in polls for their convenience and reach, it should also be their responsibility to scrutinize panel samples to ensure that the results accurately reflect the attitudes of respondents. Considering a news source, and how that source conducts its research, is essential to an accurate understanding of the information it generates. Taking the time to go one step further and investigate a source’s methods is always worthwhile.
The samples from which survey conclusions are drawn are additionally important, because targeting or failing to account for specific demographics can distort results. In 2016, several polls showed Clinton with a healthy lead over Trump, and the wide margin between them led to widespread belief that Clinton would easily secure the win on election day. Pew Research identified several possible explanations for the disparities in polling, estimating that the likely culprits were nonresponse bias and errors in the methods pollsters used to identify whom to poll. Both of these factors produced polls that did not account for the people who would actually go on to vote in the 2016 election. I remember that when I woke up on Nov. 9, my heart dropped, because I had been so certain that Clinton would not just win, but win easily. Like the polls, I failed to account for those too disillusioned to vote or to respond truthfully to pollsters. I assumed certain demographics would vote for Clinton without doing the research to back that assumption up with evidence. Encased in a blue bubble, my false confidence led me to see Clinton’s win as inevitable. Needless to say, I’m doing what I can to prevent my heart from dropping like that again: fool me once, shame on you; fool me twice, shame on me.
Besides the polls themselves, current events and the time between the poll date and the election date should influence how we interpret the results. Polls released a year out from an election are extremely unlikely to predict its results. Any poll that claims with certainty that a race has already been decided, without accounting for a year’s worth of campaigning and of national and global events, is not to be taken seriously. Even polls released considerably closer to Election Day may not predict an election accurately. A Reuters poll released in August 2008, three months before the November 2008 presidential election, showed John McCain with a five-point lead over Barack Obama. But by mid-September of 2008, the worst financial crisis since the Great Depression dominated headlines and continued to do so through Election Day, leading to increasingly negative coverage of McCain. In 2013, Pew Research studied the effect of the crisis on the 2008 election. They found that in the critical five weeks leading up to the election, negative coverage of McCain largely outweighed the positive. Examining their own polls from those weeks, they found that McCain and Obama were polling evenly one week before the financial crisis, but every poll conducted afterward found McCain trailing Obama by at least six points. This example further illustrates how current events shape public opinion.
“Polling is not perfect.”
Bringing it back to the 2024 election, we should be using past mistakes to sharpen our analysis of the information generated today. We can comfortably predict that Trump will be the Republican nominee, and many polls have shown him neck and neck with Biden. One poll in particular, from ABC News and The Washington Post, drew national attention by giving Trump a nine-point lead. If one were to read only the headline of that poll, without reading up on its methodology or accounting for its context, it might seem to spell the beginning of the end. But even as pollsters continue to investigate and correct the flaws in their methods, outlier polls are unavoidable. Identifying an outlier poll can be tricky, but one good tip is to compare it to other polls that ask the same question. ABC News and The Washington Post did the work of investigating why their poll differed so greatly from the average of national polls taken by higher-quality polling institutions, identifying partisan nonresponse bias and sampling error as possible factors. Interpreting outlier polls can be slightly more complicated. Although it is inadvisable to build our political understanding on a single poll, there is also a danger in cherry-picking polls whose results we find favorable and discarding those we do not. Exploring an outlier poll in the context of others, comparing not just results but methodology and margins of error, can help us either situate it within the range of plausible results or conclude that we can discard it entirely.
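The comparison described above can be made concrete. The following is a minimal sketch, using entirely hypothetical poll numbers (none of these figures come from real polls), of one simple way to flag a poll whose margin sits far outside the average of its peers, once their margins of error are taken into account:

```python
# Illustrative sketch with hypothetical numbers: flag a poll as a likely
# outlier when its candidate margin falls outside the peer-poll average
# by more than its own margin of error plus a small tolerance.

def average_margin(polls):
    """Mean head-to-head margin across a list of polls, in points."""
    return sum(p["margin"] for p in polls) / len(polls)

def is_outlier(poll, peers, tolerance=1.0):
    """True if the poll's margin differs from the peer average by more
    than the poll's own margin of error plus `tolerance` points."""
    return abs(poll["margin"] - average_margin(peers)) > poll["moe"] + tolerance

# Hypothetical polls asking the same question:
# margin = candidate A minus candidate B, moe = margin of error (points).
peers = [
    {"name": "Poll A", "margin": 1.0, "moe": 3.0},
    {"name": "Poll B", "margin": -2.0, "moe": 3.5},
    {"name": "Poll C", "margin": 0.5, "moe": 3.0},
]
headline_poll = {"name": "Poll D", "margin": 9.0, "moe": 3.5}

print(is_outlier(headline_poll, peers))  # a nine-point margin sits far outside these peers
```

This is a crude heuristic, not how polling analysts actually model house effects or nonresponse bias; its only point is that a single poll is best read against the distribution of comparable polls rather than on its own.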
Polling is not perfect. For every election, not just presidential ones, pollsters can only measure the error in their methods and results after every submitted vote has been counted. No poll will ever capture the entirety of public opinion. If you want your voice heard, my advice is to vote. What polls cannot account for in public opinion, we must make up for at the ballot box. Vote in local elections, vote in state elections, vote in national elections. Just vote.