After the Commission on Evidence-Based Policymaking (CEP) released its final report last fall, Congress introduced bipartisan legislation, with support from the executive branch, to implement its recommendations, all of which aim to elevate the availability and use of data to build evidence about government programs while protecting privacy and confidentiality.
The emphasis on evidence-based policymaking is welcome, but progress in this arena will have to contend with fundamental tensions between the academic model of evidence-based policymaking and the unpredictable, sometimes messy world of politics. Researchers need to address these tensions in order to reach policymakers and shape policy.
Politicians don’t always have clear-cut goals. Research should reflect that.
The academic model of building policy from evidence assumes that policymakers have explicit goals and that the weights they place on different outcomes are known. Our framework for evaluating policies presumes that the criteria for success have been specified. In reality, politicians are often quite fuzzy about the specific outcomes they want to achieve. They have to be—building coalitions requires balancing disparate, sometimes conflicting agendas and constituencies. In response, researchers should evaluate policies with a range of outcomes in mind. Rather than expecting policymakers to state their goals explicitly, researchers should be explicit about how their results stack up against a range of different outcomes.
Politicians and the public pay attention to the mechanisms that allow a policy to achieve its intended goal. Researchers should, too.
Building policy from evidence often means being relatively agnostic about how a policy achieves its intended result. But people and politicians often have strong beliefs about how policies work, and evidence requires them to confront those beliefs. For example, many policymakers believe in free markets not simply as a way to get things done, but as an ideology. Sometimes—as with health insurance—perfectly free, unregulated markets may not be the best way to achieve the obvious goal of expanding coverage. Researchers may find their work more valued if they directly assess and acknowledge the implications of the mechanisms used to achieve a given result.
Spell out consequences for non-actors.
Politicians and the press thrive on individual stories and anecdotes. Indeed, storytelling is hard-wired into all of us. Not infrequently, quantitative evidence is inconsistent with compelling narratives, even when both the narrative and the evidence are true and valid. One reason is structural. Narrative is all about action—“I did this or that because of this or that”—but quantitative evidence also captures the behavior of people who chose not to act because a policy was in place.
Stories of these non-actors rarely show up in the dominant narrative, which usually focuses on people joining programs or taking up coverage because of an expansion or a mandate. But the quantitative evidence is often driven by people who stay on coverage a little longer or fail to drop it when they otherwise would have. Individuals in this latter group are not only invisible; they may not even be aware that their inaction is the result of specific policies. They have no narrative, and yet there they are—faceless and anonymous—driving the statistics. Researchers should, wherever they can, explicitly address how both actors and non-actors drive their results. That will help policymakers understand the disconnect between dominant narratives and policy outcomes.
Don’t just focus on the average experience, but consider the outliers.
Finally, another disjunction between narrative and evidence has to do with magnitudes. This is a country of more than 320 million people, so it is nearly inevitable that some story will conflict with the evidence of a policy’s impact on the average experience. It’s difficult for policymakers and voters to weigh one compelling anecdote against millions of statistical lives. One way to address this disjunction is to quantify it. Researchers can go beyond estimates of the average effect and use statistical methods to estimate the variation in responses to a policy and the likely number of people who fall into each category.