Public Opinion Poll Data Are Valuable for Public Policy Planners, but They Can Prove Counterproductive if the Polling Data Are Not Representative of the Opinions Sought


July 17, 2020   |   Kenneth F. Warren


The rise of scientific polling: The quest for representativeness

Scientific polling began in the United States in 1936, when George Gallup, using “scientific” quota sampling methods, correctly predicted that Franklin D. Roosevelt, the Democratic candidate, would win the presidential election (Hillygus, 2011). Gallup’s success embarrassed The Literary Digest, whose unscientific straw poll sampled people unrepresentative of the US electorate: mostly telephone and car owners and Literary Digest subscribers, excluding the less affluent Americans of the Great Depression who could not afford phones, cars, or a subscription (Warren, 2001; 2016).

Although The Literary Digest surveyed roughly ten million people, with 2.27 million responding (Freedman, 2007), it predicted that Alfred Landon, the Republican candidate, would win; instead, Landon lost to Roosevelt, 36.5% to 60.8% (Leip, 2020). The gross inaccuracy of the Literary Digest poll destroyed the credibility of unscientific straw polls, ushering in new pollsters dedicated to “getting it right” through the development of scientific polling methodologies. The lesson of 1936 was that what matters is not how many people are interviewed, provided a reasonable minimum (say, 400) is reached, but whether the sample is representative.
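Standard sampling arithmetic makes the point concrete. The minimal Python sketch below applies the textbook margin-of-error formula for a simple random sample (only the 2.27 million response figure comes from the sources above; everything else is illustrative): beyond a modest sample size, extra interviews buy very little precision, and no number of interviews cures a biased sample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1_000, 2_270_000):  # 2.27M = Literary Digest responses
    print(f"n = {n:>9,}: +/- {margin_of_error(n):.1%}")
# n =       400: +/- 4.9%
# n =     1,000: +/- 3.1%
# n = 2,270,000: +/- 0.1%
# Yet the Digest still called the wrong winner: its error came from a
# biased sample, which no sample size can fix.
```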

Although political polls attract much attention, most polls are commissioned by policy planners in the private and public sectors. For example, businesses rely heavily on market surveys to measure the likely appeal of their products. In democracies, public policy planners use polls to help them design and implement policies that promote the public welfare. Planners must be shrewd enough to weigh the value of public opinion, realizing that the public often has only a superficial understanding of issues. As a practical matter, however, public policy planners should not ignore public opinion, because doing so induces higher implementation failure rates driven by citizen resistance, often leading to costly delays and lawsuits (Bourdeaux, 2008).

 

Getting a representative sample has always been the goal of scientific polling

Policy planners should be aware that pollsters today face enormous challenges in trying to get a representative sample. Landline telephone sampling, for decades the “gold standard” of survey research, has been undermined by new technologies, especially caller-screening devices and cell phones. Response rates declined sharply, from 36% in 1997 to 6% by the end of 2018, forcing Pew Research and most other polling firms to abandon landline phone sampling as cost-ineffective (Kennedy and Hartig, 2019). Pollsters have desperately sought to transition from telephone surveys to alternative survey methodologies, but questions remain as to whether any are sound enough to generate reliable, representative samples (Kennedy and Hartig, 2019). Policy makers relying on poll data for planning purposes should understand the different polling methods in use today so they can determine whether the results are reasonably representative. It is fair to say that an unrepresentative poll is not a reliable poll and therefore should not be used by public policy planners.

Quota Sampling: Pollsters today, out of necessity, have returned to an old yet flawed polling methodology, quota sampling, first used by George Gallup in the 1936 presidential election. Gallup predicted FDR would win with 54% of the two-party vote, yet FDR received 61% (Cantril, 1991), quite far off the mark, indicating that his survey methodology needed improvement. In theory, quota sampling makes a lot of sense: pollsters sample respondents in proportions that mirror the demographics of the target population (e.g., 51% female, 49% male; 71% white, 12% black, 13% Latino, 4% “other”). However, as Gallup eventually came to realize, this methodology rests on the erroneous assumption that the various demographics are accurately known. In reality, the mass data sources that pollsters rely upon (e.g., census data, various residency and utility lists) are imperfect from the beginning and become outdated rapidly. Consequently, if pollsters seek to interview 13% Latinos but the true percentage is 19%, their poll results reflecting Latino opinion will be wrong, since the poll underrepresents Latinos.
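To see how a mistaken quota biases a result, consider this minimal Python sketch. The 13% and 19% Latino shares come from the example above; the support levels are invented purely for illustration.

```python
def poll_estimate(latino_share, latino_support=0.70, other_support=0.45):
    """Overall support implied by a given Latino share of the sample.
    Support levels are hypothetical numbers for illustration only."""
    return latino_share * latino_support + (1 - latino_share) * other_support

reported = poll_estimate(0.13)  # what the mis-specified quota poll reports
actual   = poll_estimate(0.19)  # what the true population would say
print(f"quota poll: {reported:.1%}  population: {actual:.1%}  "
      f"bias: {100 * (reported - actual):+.1f} points")
# -> quota poll: ~48.2%  population: ~49.8%  bias: ~ -1.5 points
```

Even this modest demographic error shifts the topline by about a point and a half; larger errors, or questions on which groups disagree more sharply, shift it further.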

Quota sampling results will always be compromised somewhat by pollsters’ inability to mirror the demographics perfectly in their samples, but conscientious pollsters using quota sampling should produce results that are “good enough.” Any reputable polling firm will include the percentages of the various demographics when presenting poll results, usually at the end of the poll’s results. Public policy planners should know enough to scrutinize the demographics used to draw the sample because, if the demographics are clearly off, so are the poll results.

Random Sampling: Because pollsters using quota sampling failed to predict Harry Truman’s victory over Thomas Dewey in the 1948 presidential election, the industry turned to random sampling. Ideally, random sampling is the best way to poll because nothing is assumed about the population universe. According to probability mathematics, if everyone in the population universe stands an equal chance of being selected, those randomly sampled should represent a microcosm of the total population, within sampling error. From the 1960s through the early 2000s (polling’s gold-standard era), when 95-96% of Americans used landline phones, random sampling of landlines made polling both accurate and inexpensive.
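A quick simulation shows the principle at work. This hedged Python sketch (the 19% figure is an arbitrary population value chosen for the demo) draws a simple random sample and recovers the population proportion within ordinary sampling error, with no demographic assumptions at all.

```python
import random

random.seed(42)  # reproducible demo

# Hypothetical population of 1,000,000 people, 19% of whom hold some opinion.
population = [1] * 190_000 + [0] * 810_000

# Simple random sample: every member has an equal chance of selection.
sample = random.sample(population, 1_000)
print(f"sample estimate: {sum(sample) / len(sample):.1%}")
# Prints a value close to 19% (typically within ~2.5 points), even though
# nothing about the population's makeup was assumed in advance.
```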

However, polling firms have had to abandon traditional phone polling en masse because it has become cost-ineffective (Kennedy and Deane, 2019): by 2020 about 60% of Americans had only cell phones (Zagorsky, 2019), creating a very serious problem for pollsters.

The obvious advantage of reaching people through landline phones is that respondents are at home, where it is more convenient to answer a poll; cell phone users are often away from home and called when it is inconvenient to respond, greatly increasing the rejection rate. Cell phones are also subject to dropped calls; US federal law prohibits calling cell phones with automated dialers;[i] and cell phone numbers are often associated not with the owner’s current address but with where the owner lived when the phone was first purchased, frustrating pollsters trying to reach respondents in a targeted area (e.g., Ohio rather than New York, where the phone was first bought). Pollsters have experimented with merging landline and cell phone numbers, but this strategy has proved too costly for reaching the “right people.”

Robo-polls: Having been forced to abandon traditional phone polling, many polling companies have resorted to interactive voice response (IVR), or robo-polls. Polling firms using IVR can reach many thousands of people for only pennies a call and, despite the enormous rejection rate, obtain, say, 700 responses within a few hours. But this methodology has serious flaws. One problem is that there is no guarantee that an eligible respondent answers the phone, although this has not proved to be a major problem, since age and other eligibility questions are asked as “filter” questions and respondents seem to answer them about as accurately as in live interviewing. The most serious problem is that the demographics by age, race, income, education, and domicile are quite different for landline/broadband phone users than for cell-phone-only users (Anderson, 2018). Since, as noted, federal law prohibits automated calling of cell phones, younger people are grossly underrepresented in robo-polls because only disproportionately older landline users are reached,[ii] presenting a very serious representativeness problem.

Pollsters try to fix this representativeness problem by weighting the data. Virtually all poll data are weighted, so in theory this is not a problem. Even in the golden years of random landline polling, some weighting was normally used to adjust for age and gender, since disproportionately more females and older people were almost always interviewed. However, in robo-polls younger people are so underrepresented that it defies common sense to weight up the few younger respondents who did answer (say, 5 respondents when the demographic calls for 57; the opinions of 5 cannot accurately represent the opinions of 57) (Cohn, 2016). Consequently, policy makers should be aware of the problems associated with robo-polling, particularly when they are presented with only the weighted poll results. It is a good idea to ask the polling firm for both the unweighted and weighted data so the representativeness of the poll can be assessed.
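A back-of-the-envelope sketch shows why extreme weights are so costly. The Python below builds cell weights as target share divided by sample share, echoing the text’s 5-versus-57 age problem in miniature (the other cell counts and targets are invented), and then applies Kish’s standard effective-sample-size formula, one common way, though not necessarily any particular firm’s, to quantify the precision lost to unequal weights.

```python
# Hypothetical robo-poll with 100 completes; the 18-29 cell has only
# 5 respondents against a 21% target (all figures illustrative).
sample  = {"18-29": 5, "30-49": 25, "50-64": 30, "65+": 40}   # completes
targets = {"18-29": 0.21, "30-49": 0.33, "50-64": 0.26, "65+": 0.20}

n = sum(sample.values())
weights = []
for cell, count in sample.items():
    w = targets[cell] * n / count   # e.g., 18-29: 0.21 * 100 / 5 = 4.2
    weights += [w] * count

# Kish's effective sample size: (sum of weights)^2 / sum of squared weights.
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"nominal n = {n}, effective n = {n_eff:.0f}")
# -> nominal n = 100, effective n = 61: heavy weights on 5 young
# respondents throw away roughly 40% of the poll's precision.
```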

Policy makers should also be aware that imperfect weighting of demographics may not make a significant difference, depending upon the questions asked. Scrutinizing cross-tabulations often shows no real difference in how questions are answered by, say, males and females, or by younger and older adults. This is common in ratings of city services, for example, which males and females and younger and older adults tend to rank about the same. However, scrutiny is advised, since some poll questions are very demographically sensitive. For example, when a city wants to know how many residents support building a new hockey rink, results will likely differ significantly along age and gender lines, with younger males much more likely than older residents, especially older female residents, to support the proposed rink. Likewise, resident surveys almost always find that renters are more likely than homeowners to favor proposed property tax increases, because homeowners must pay the tax directly while renters do not.
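For planners who receive respondent-level data, checking demographic sensitivity is straightforward. This illustrative Python/pandas sketch (the data frame and its columns are entirely hypothetical) produces the kind of cross-tabulation described above.

```python
import pandas as pd

# Hypothetical respondent-level poll data (invented for illustration).
df = pd.DataFrame({
    "age_group":    ["18-29", "18-29", "30-49", "50-64", "65+", "65+"] * 50,
    "rink_support": ["yes", "yes", "yes", "no", "no", "no"] * 50,
})

# Row-normalized crosstab: share of each age group answering yes/no.
# Large differences across rows mean the question is demographically
# sensitive, so weighting errors will matter for this question.
print(pd.crosstab(df["age_group"], df["rink_support"], normalize="index"))
```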

Finally, it is worth noting that robo-polls require calling many thousands of landline numbers to reach a proper sample size, so robo-calling is feasible only in quite large communities where pollsters have enough phone numbers to complete the poll.

Internet Polling: With landline phone polling now cost-prohibitive, pollsters have been experimenting with Internet panel polls in recent years, but policy planners should also be aware of their serious methodological weaknesses, which led the American Association for Public Opinion Research (AAPOR) to conclude in 2010 that such polls are inherently unrepresentative. Internet panel surveys are less problematic than they once were, because today over 90% of American adults use the Internet (Clement, 2019); unfortunately, AAPOR’s basic criticism still applies, and panel results remain suspect. The chief problem with Internet panel polls is that panelists are chosen non-randomly to participate and are often given rewards for participating. Even though efforts are made to make the panel representative of different demographics, the fact that panelists essentially self-select makes them inherently unrepresentative and, in a real sense, as AAPOR charges, “professional interviewees” (AAPOR, 2010; AAPOR, 2019). As with robo-polls, a real concern for public policy analysts is that Internet panel polls are often impossible to administer in smaller communities (below, say, 500,000 residents, often the focus of policy makers) because polling firms have trouble recruiting enough panelists to reflect the community’s demographics.

In sum, policy planners should use polls because they are essential to intelligent planning. Polls provide quantified public opinion preferences on public policy that can offer useful guidance to planners, especially in democracies, where public opinion does and should play a vital role. But policy planners must be informed users of public opinion polls, understanding that today all pollsters, regardless of their methodologies, face representativeness problems; I have highlighted the major reasons why. The good news is that pollsters, determined to get a representative sample, have worked diligently to minimize biases and generate polling data that are still “representative enough” to be useful in public policy planning. Despite well-publicized mispredictions (e.g., Clinton winning the 2016 presidential election; Brexit being defeated), recent research by scholars from the University of Southampton and the University of Texas at Austin concludes that “although the polling industry faces a range of substantial challenges, we find no evidence to support claims of a crisis in the accuracy of polling” (Jennings and Wlezien, 2018).

 


Kenneth F. Warren is Professor of Political Science at Saint Louis University and President of The Warren Poll.

 

[i] Telephone Consumer Protection Act of 1991, as amended, 47 USC § 227.

[ii] Technically, US federal law allows cell phones to be called if a live interviewer initially reaches the cell phone user and then switches to IVR, but this technique has proved too impractical and expensive to be useful.

 

References

AAPOR (2019). Report of the AAPOR Task Force on Transition from Telephone Surveys to Self-Administered and Mixed-Mode Surveys, Task Force Report. https://www.aapor.org/getattachment/Education-Resources/Reports/Report-of-the-Task-Force-on-Transitions-from-Telephone-Surveys-FULL-REPORT-FINAL.pdf.aspx

AAPOR (2010). AAPOR Report on Online Survey Panels, AAPOR, March 25, 2010, https://www.aapor.org/Communications/Press-Releases/AAPOR-Releases-Report-on-Online-Survey-Panels.aspx

Anderson, Chris (2018). “2017 was a good year for (good) polls,” Beacon Research, January 20, 2018, https://beaconresearch.com/2017-was-a-good-year-for-good-polls/

Bourdeaux, Carolyn (2008). “Politics versus professionalism: The effect of institutional structure on democratic decision making in a contested policy arena,” Journal of Public Administration Research and Theory, 18,3: 349-373.

Cantril, Albert H. (1991). The Opinion Connection: Polling, Politics, and the Press (Washington, DC: CQ Press), 93-106.

Clement, J. (2019). “Internet usage in the United States – Statistics & Facts,” statista, August 20, 2019, https://www.statista.com/topics/2237/internet-usage-in-the-united-states/

Cohn, Nate (2016). “How one 19-year-old Illinois man is distorting national polling averages,” The New York Times, October 12, 2016, https://www.nytimes.com/2016/10/13/upshot/how-one-19-year-old-illinois-man-is-distorting-national-polling-averages.html

Freedman, David, Robert Pisani, and Roger Purves (2007). Statistics, 4th ed. (New York: Norton), 335-336.

Jennings, Will, and Christopher Wlezien (2018). “Election polling errors across time and space,” Nature Human Behaviour, Vol. 2, April 2018, 276-283. Quote, 283.

Kennedy, Courtney, and Claudia Deane (2019). “What our transition to online polling means for decades of phone survey trends,” Pew Research Center, https://www.pewresearch.org/fact-tank/2019/02/27/what-our-transition-to-online-polling-means-for-decades-of-phone-survey-trends/

Kennedy, Courtney, and Hannah Hartig (2019). “Response rates in telephone surveys have resumed their decline,” Pew Research Center, https://www.pewresearch.org/fact-tank/2019/02/27/response-rates-in-telephone-surveys-have-resumed-their-decline/

Leip, David (2020). “1936 United States Presidential Election Result,” https://uselectionatlas.org/RESULTS/

Pew Research (2019). “Mobile Fact Sheet,” https://www.pewresearch.org/internet/fact-sheet/mobile/

Warren, Kenneth F. (2001). In Defense of Public Opinion Polling (Boulder, CO: Westview Press), 87-88.

Warren, Kenneth F. (2016). “Public Opinion Polls,” Wiley StatsRef: Statistics Reference Online, 2.

Zagorsky, Jay L. (2019). “Rise and fall of the landline: 143 years of telephones becoming more accessible – and smart,” https://phys.org/news/2019-03-fall-landline-years-accessible-smart.html

 

