Trusting Polls in 2018: What Makes a Poll Worth Looking At
by Adam Ciampaglio
As we move into the 2018 election cycle, polls are being questioned and scrutinized across the nation. With new polls coming out every day on federal, state, and local races, many wonder how much these numbers can actually be trusted. Consumers and journalists alike face the difficult decision of which polls to trust. This isn't a new problem, either: polls from the 2004 election cycle faced similar critique as the Internet took on a growing role in elections. By reflecting on the 2016 election through in-depth analysis and examining the methodology of pollsters, we can gain valuable insights. Moving forward, it is critical to apply this knowledge to our evaluation of polls in the current cycle.
When evaluating a poll, the first crucial step is identifying what the poll is trying to measure. The two most common objectives in political polling are gauging the general public's sentiment on issues and predicting how voters will vote on Election Day. To accurately represent the general population, pollsters use various methodologies to sharpen the predictive power of their polls and to reduce subjective decision-making in the weighting of respondents.
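To make the weighting idea concrete, here is a minimal sketch of post-stratification weighting in Python. All of the demographic shares below are invented for illustration; they do not come from any actual poll or census figure.

```python
# Post-stratification sketch: each respondent group is weighted by the
# ratio of its share of the target population to its share of the sample.

# Hypothetical age-group shares (illustrative numbers only)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

# Groups underrepresented in the sample receive weights above 1.0,
# overrepresented groups receive weights below 1.0
for group, w in sorted(weights.items()):
    print(f"{group}: weight = {w:.2f}")
```

In this toy example, 18-34 respondents are scarce in the sample and get weighted up, while 55+ respondents are overrepresented and get weighted down. The less a sample resembles the population, the larger these corrections, and the more the poll's precision depends on the pollster's weighting choices.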
The most important requirement for any poll is disclosure: information about the poll and how it was conducted. If a pollster does not release its methodology, there is very little room for evaluation or trust.
When looking at methodologies, the first and most influential factor to consider is how the poll was administered to its participants, otherwise known as respondents. The most common delivery methods include internet surveys, automated interactive voice response (IVR), and live interviews. With 16% of Americans lacking Internet access, polls conducted online have been shown to under-represent certain demographics and to suffer much lower response rates, both of which undermine their accuracy. With IVR, many new delivery methods have surfaced to navigate federal and state laws on automated calling. These laws prohibit auto-dialing cellphones, which leaves certain demographics underrepresented in polls that use this delivery method. These methods have a spotty track record and have been shown to decrease both initial and full response rates. Paired with the inability to ensure that the targeted respondent is the one actually reached, and a history of poor execution across the industry, IVR surveys are another mark of less accurate polling. In contrast, the American Association for Public Opinion Research (AAPOR) counts live interviewing among the best practices of survey research, as it allows more nuanced questions and yields a higher overall response rate. Live polling is crucial to reaching the correct respondents and best representing their opinions.
Various other factors should also play a role in poll evaluation. The best polls on upcoming elections target likely voters, not just registered voters, as respondents. With only 56% of voting-age Americans voting in the 2016 presidential election, and even fewer turning out in non-presidential years and primaries, polling registered voters alone yields an easier-to-execute but less accurate poll. Likely-voter polls pair historical turnout data with respondents' stated intention to vote to create a more accurate picture of what Election Day results will look like. Balancing these factors to discern who qualifies as a likely voter is key, as the model can unintentionally skew the results.
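The blend of vote history and stated intention described above can be sketched roughly as follows. The 60/40 split and the input scales are hypothetical, invented for illustration; no actual pollster's likely-voter model is being reproduced here.

```python
# A toy likely-voter screen: blend past turnout with self-reported
# intention into a single 0-1 likelihood score.

def likely_voter_score(elections_voted_last4: int, stated_intention: int) -> float:
    """Combine vote history (0-4 of the last four elections) with
    self-reported intention to vote (1-10 scale)."""
    history = elections_voted_last4 / 4    # past behavior, normalized to 0-1
    intention = stated_intention / 10      # self-report, normalized to 0-1
    return 0.6 * history + 0.4 * intention # illustrative weights only

# A respondent who voted in 3 of the last 4 elections and rates their
# intention to vote a 9 out of 10:
score = likely_voter_score(3, 9)
print(f"{score:.2f}")  # 0.6*0.75 + 0.4*0.9 = 0.81
```

The point of the sketch is the trade-off it exposes: lean too heavily on history and you miss newly energized voters; lean too heavily on stated intention and you over-count respondents who say they will vote but will not. Where a pollster sets those weights is exactly the judgment call the paragraph above describes.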
Cellphone vs Landline Mix
With the demographics of cellphone-only voters and landline voters diverging, mixing respondents from both pools is critical. As various polling groups increase their share of cellphone-based respondents, polls without any such respondents often compensate heavily in their weighting and produce a less precise result. Still, this demographic difference can sometimes be used strategically, as when pollsters choose a higher landline mix for races they expect to turn out older voters. In the end, deciding the landline-to-cellphone ratio requires expertise and objective reasoning.
In addition to all of these factors, the way a poll is modeled, meaning who the poll counts as a likely voter, is a key ingredient. Assessing the area being surveyed, historical data, and respondents' reported motivation before conducting the poll helps avoid overweighting any demographic and gives likely-voter polls an edge. Blurring the line between art and science, this skill is hard to measure by anything other than a poll's variance from actual returns. In an effort to measure this and the other factors discussed above, the website FiveThirtyEight rates pollsters as part of its extensive poll-aggregation process. In calculating its pollster ratings, FiveThirtyEight looks at methodology, accuracy, and various other statistical factors. The key is to use these ratings as a guide rather than the deciding factor when evaluating polls or polling firms. Using all of this information moving forward is vital to not just critiquing, but fully understanding, the multitude of polls coming out every day.
At Data Orbital we take all of these factors into consideration whenever we conduct survey research. By using the best methodologies and maintaining the AAPOR standards of disclosure, we strive for accuracy in everything we do. We accomplish this through a high level of intentionality in our survey structures and careful analysis of the results. We take great pride in having predicted the outcomes of the 2016 U.S. Senate and presidential races with less than a 1% margin of error, a record reflected in our B+ rating from FiveThirtyEight. We look ahead with excitement to continuing to provide accurate, insightful predictive surveys throughout the 2018 election cycle.
Adam Ciampaglio is a Data and Marketing Analyst at Data Orbital, working to pull useful nuggets out of data and transform them into understandable, actionable information for clients and the public.