April 8, 2019   |   Donald F. Kettl


American President Donald Trump has created a tsunami of labeling things he disagrees with as “fake news.” But it is neither an American nor a new phenomenon. A search in Google News for “fake news” produces more than 65 million hits from across the globe. The European Commission has charged that a campaign by the Hungarian government to suggest that the EC supported illegal immigration was fake news. Meanwhile, the British government has created a permanent “Rapid Response Unit” to counter stories it believes are wrong or dangerous.

The “fake news” label sticks for two reasons. First, it is an irresistible trademark: it turns the traditional notion of “news,” with its connotation of trustworthiness, on its head by suggesting that uncomfortable stories are simply wrong.

Second, some news unquestionably is fake, or at least erroneous. No news organization gets its stories right all the time. Social media has become intermingled with traditional news sources, and there is no filter on social media. Facebook and Twitter posts never come with “believe this” or “not true” labels. Facebook founder Mark Zuckerberg in 2018 apologized for a “breach of trust” in sharing users’ data with Cambridge Analytica, which in turn was alleged to have used the information to target voters. Nation states have actively used social media to undermine democratic institutions, and individuals tend to share information on social media with friends in ways that reinforce their views and create echo chambers, unmediated by any test for accuracy.

In an online world where traditional news outlets have websites posted next to social media sites of uncertain parentage, it has gotten far harder to define what constitutes “news”—and what news is “fake.” Even 24-hour news channels intersperse straight news coverage, which they try to get right, with talking-head panels, which they stage to boost ratings. It is impossible to draw the line between their efforts to report facts, through reporting and editing and fact checking, and their efforts to fuel viewership, through unvetted opinions, the louder the better.

Fact-checking organizations themselves have come under attack. Trump has attacked fact-checkers for inaccuracies. So has one of America’s new-breed liberal leaders, Rep. Alexandria Ocasio-Cortez (New York).

All of these deep disputes point to the same issue: it is increasingly the case that no one believes everything from anyone, and many people believe nothing at all. For the profession of public policy and policy analysis, devoted to the pursuit of the best answers for society’s biggest problems, that is a fundamental—indeed, existential—problem, for five reasons:

  • Hired guns. There is a sense among many critics of policy analysis that analyses steer toward the views of those who pay for the work.
  • Investigator bias. Critics often assume that analysts carry their own biases and that these predispositions shape the conclusions. After all, many policy analysts work for think tanks identified as “left-leaning” or “right-leaning,” and there is a presumption that the work they produce is never purely fact-focused, and that it is spun to fit the predilections of analysts or those who fund them.
  • Fuzzy knowledge. No piece of policy analysis on a complex issue produces a firm conclusion. Indeed, analysts are taught about the importance of significance tests and sample sizes, and they press their findings as hard as possible to make the statistical tests as significant as possible. But the conclusions are never certain and, wherever there is uncertainty, there is always room for quibbling—especially when, as opponents of an analysis might argue, the uncertainty itself is the product of partisan assumptions that analysts made to begin with.
  • Rear-view mirrors. Compounding the problem is that policymakers need to look ahead, to figure out what to do. Policy analysts often build their work by looking backwards, to get as much data as they can find. In fact, the best way to reduce uncertainty in a policy analysis is to look as far back as possible but, of course, that often only reduces the analyst’s insight into the future. 
  • Solving the wrong problem. All of the previous issues roll into a far bigger challenge. The easiest way for analysts to escape these traps is to define problems for their work that are distant from the problems that policymakers need to solve. But that, in turn, only compounds the fake-news problem. A gap between what analysts say and what policymakers need leaves a gulf that only values and assumptions can fill. And that, in turn, only opens the way to more charges of fake news.

 

The awful paradox is that the very professionals—and academics—devoted to reducing the scourge of fake news find themselves inescapably caught up in it. Finding the way out requires truly innovative and clever strategies.

  • Vaccinating against hubris. The first step is to recognize that there is no particular reason why policymakers need to pay any attention at all to analysts or their analysis. After all, policymakers have made decisions—sometimes not good ones, of course—for thousands of years, without recourse to advanced benefit-cost or evaluation analysis. In a world full of fake news, there is no lack of views to reinforce any policy decision. So the route to pushing back against fake news begins by recognizing that policymakers do not have to listen, and that analysts must earn their attention.
  • Getting the problems right. One of the reasons why policymakers do not pay attention to analysis is that they do not see useful answers in the work that analysts produce. There is an inherent tension between advancing theory, which requires careful, sometimes painstaking building on past academic work, and problem solving, which requires answering the questions that policymakers most want answered. Rarely are they the same. Analysts and academics, often two sides of the same brain, need to build better bridges between those sides. The first step is to tune the research radar to be more attentive to the questions where policymakers most need answers. That is connected to the next point.
  • Asking questions that need answering, not questions where there happen to be data. The research imperatives of the academic community drive researchers to work where they can advance theory through methodologically sophisticated tours de force. That creates a strong incentive to focus on small questions, use existing datasets, and seek high degrees of causality. The problems with which policymakers are wrestling, more often, tend to be big puzzles, where data are scarce and causality is elusive. That shunts the research agenda away from the policy questions on which policymakers most need answers, and pushes the work that researchers produce out of sync with when policymakers need it. Researchers need to focus far more on the questions for which policymakers need answers, rather than on the questions their data allow them to investigate.
  • Relying more on big data. One way to find the data that researchers need is to mine the data thrown off by large-scale activities. There is the truly big supply of big data that flows from data mining by organizations like Google. But there is also the opportunity to meld data collected for other purposes (like Yelp restaurant reviews) with public problems (like where best to deploy restaurant inspections), as the sketch after this list illustrates. There is the thoughtful review of large-scale operations for important findings, like patterns of fraud that can be teased from thousands (or millions) of health insurance claims. Unexpected data sources can provide evidence for exploring important problems in insightful, unexpected ways.
  • Talking in a language convincing to policymakers—and the public. Sophisticated policy analysis sometimes gets in its own way by couching its findings in prose that is difficult for outsiders to penetrate and by backing up its conclusions with statistics that only a narrow band of cognoscenti can understand. New technologies make the visualization of data far more effective and persuasive. And good data pictures are far more memorable.
  • Telling a good story. Many policymakers, of course, remember stories far more often than they remember regression coefficients, and they tell stories to connect with their stakeholders. Highly trained analysts sometimes scoff at such storytelling, but telling good stories has to become a more integral part of policy analysis. This does not mean that the research needs to be unsophisticated or unnuanced. Just as the central tendency of a very complex dataset can be captured in a single statistic, like the mean, a very complicated problem can be captured in a single anecdote, if that anecdote captures what the evidence finds. Analysts sometimes see the instinct for storytelling as a conflict with their goal of conveying sophisticated meaning. They are far more likely to be effective in the battle against fake news by embracing storytelling—and by ensuring that the stories that get told are the ones that capture their best understanding of what works and what does not.
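
To make the melding idea in the big-data point above concrete, here is a minimal sketch in Python of how crowd-sourced review data might be joined with a city’s inspection records to decide where inspectors should go first. The file names, column names, and scoring weights are hypothetical illustrations under assumed data, not a description of any real system.

```python
# A toy illustration of melding data collected for other purposes
# (crowd-sourced restaurant reviews) with a public problem (deciding
# where to send restaurant inspectors). All inputs are hypothetical.
import pandas as pd

# Hypothetical inputs: one row per restaurant.
reviews = pd.read_csv("reviews.csv")          # restaurant_id, avg_rating, n_reviews
inspections = pd.read_csv("inspections.csv")  # restaurant_id, past_violations, days_since_visit

merged = reviews.merge(inspections, on="restaurant_id", how="inner")

# Toy risk score: low ratings, prior violations, and long gaps since the
# last visit all push a restaurant up the queue. The weights are arbitrary.
merged["risk_score"] = (
    (5 - merged["avg_rating"]) * 0.4
    + merged["past_violations"] * 0.4
    + (merged["days_since_visit"] / 365) * 0.2
)

# Send inspectors to the highest-scoring establishments first.
priority = merged.sort_values("risk_score", ascending=False)
print(priority[["restaurant_id", "risk_score"]].head(10))
```

The point is not the particular weights but the join itself: data gathered for one purpose can, with a simple merge, inform a very different public decision.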

 

The “fake news” debate is real, and it is not going away. The roots of the debate are ageless, but the pressures fueling it have intensified and made it even hotter. At the same time, it remains possible to know things. And the more complex public problems become, the more important it is for societies to make smart choices, because the landscape is less and less forgiving of errors.

Smarter policy analysis surely cannot solve all of these problems. It can never tell policymakers what to do with any real certainty. But good policy analysis can vastly improve the odds of success and protect the system from failure. And that is a very good and important thing, ever more precious in an ever more contentious world.

 


Donald F. Kettl is Professor and Academic Director at the LBJ Washington Center, Lyndon B. Johnson School of Public Affairs, The University of Texas at Austin. He is also a nonresident senior fellow at the Volcker Alliance, the Brookings Institution, and the Partnership for Public Service.

Kettl has authored or edited numerous books, including Can Governments Earn Our Trust? (2017), Little Bites of Big Data for Public Policy (2017), The Politics of the Administrative Process (7th edition, 2017), Escaping Jurassic Government: Restoring America’s Lost Commitment to Competence (2016), System under Stress: The Challenge to 21st Century Governance (3rd edition, 2013), System under Stress: Homeland Security and American Politics (2004), The Next Government of the United States: Why Our Institutions Fail Us and How to Fix Them (2008), and The Global Public Management Revolution (2005).

He has received three lifetime achievement awards: the American Political Science Association’s John Gaus Award, the Warner W. Stockberger Achievement Award of the International Public Management Association for Human Resources, and the Donald C. Stone Award of the American Society for Public Administration.

