Cult of the Irrelevant: National Security Eggheads & Academics
For decades, international relations scholars have increasingly worried that American foreign policymakers aren’t buying what they’re selling. From the Vietnam War to NATO expansion to the Iraq War, the Beltway foreign policy elite has frequently ignored the work of academics who study those subjects, often at great cost to the nation. Why do foreign policymakers so rarely pay attention to scholarship on the regions they are bombing and seeking to dominate?
Michael C. Desch, a political science professor at the University of Notre Dame, lays the blame at the feet of the academy. In his new book, Cult of the Irrelevant: The Waning Influence of Social Science on National Security, Desch writes that “the privileging of complex methods and universal models over engaging substantive issues…reduced the policy relevance of the work of many academic defense intellectuals.” In other words, by moving toward abstruse ontological questions (“Sovereignty and the UFO” comes to mind), complex statistics, or formal modeling (coefficients or Greek letters), incentives inside the academy have shifted the field in the direction of policy irrelevance.
Desch poses the tradeoff between “rigor” and “relevance” by granting the premise that statistics and formal models define rigor. As Desch catalogs, from the Progressive Era through the Behavioral Revolution to modern times, social science has been punching up at the hard sciences by aping their methods. Too frequently, though, this view romanticizes what even the most rigorous science can actually do.
For example, hard scientists have the ability to run experiments, which social scientists mostly lack. But the work of meta-researcher John Ioannidis suggests that social scientists may not be missing out on much. According to a 2010 Atlantic article describing Ioannidis’s efforts to test the validity of medical research: “80 percent of non-randomized studies turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”
Further, when Ioannidis later tested “49 of the most highly regarded research findings in medicine over the previous 13 years,” he found that of those that had suggested effective treatments, “41 percent had been convincingly shown to be wrong or significantly exaggerated.” Cutting-edge medical research: a bit better than a coin toss. On a related note, the scholarly journal The American Statistician recently devoted an entire issue to the question of abolishing the concept of statistical significance. No science produces capital-T truth with much frequency.
What this suggests is that all science oversells its rigor. Although the hope may be fanciful, a sense of limits among both the producers and consumers of science would be better than granting that math wizards are scientists and qualitative scholars are poets.
Cult of the Irrelevant exposes serious pathologies in the social sciences, most vividly the disgraceful treatment of Richard Betts by Harvard University. For his part, Desch calls for a more pluralistic, problem-driven political science. I agree, but I fear this will require more punching back from qualitative scholars along the lines of meta-researchers in medical research or the introspective statisticians described above.
The related question is whether political science’s flaws explain its irrelevance to policy. Here, in my view, Desch blames the victim.
There is a surfeit of excellent scholarship, written in clear prose and easy to find, that policymakers would benefit from consuming. Some of it is Desch’s. The journal International Security, to which Desch points, publishes relevant scholarship frequently accompanied by explicit policy recommendations. Just the last two issues saw articles on the subjects of India’s nuclear doctrine, how demographics affect countries’ war-proneness, whether China can reverse-engineer U.S. military technology to catch up quickly, how Chinese public opinion is likely to affect its crisis behavior, how best to measure national power and what it tells us about the U.S. position internationally, and what makes drone-led counterterrorism effective. This is only one journal. One should also include blogs like the Monkey Cage and War on the Rocks, as well as efforts, like Bridging the Gap, that offer to take policymakers by the hand and lead them to illuminating scholarship.
In short, a wealth of relevant scholarship lies at the fingertips of any interested policymaker or staffer. The question becomes why the resulting policy so rarely shows any evidence of its consumption.
The evidence suggests that foreign policymakers do not seek insight from scholars, but rather support for what they already want to do. As Desch quotes a World War II-era U.S. Navy anthropologist, “the administrator uses social science the way the drunk uses a lamppost, for support rather than illumination.” Scholars’ disinclination to be used in this way explains much of the distance between the two camps.
It also explains the rise of think tanks, which are more pliant than academics but provide similar marketing support. As Benjamin Friedman and I wrote in a 2015 article on the subject, think tanks undertake research with an operational mindset: that is, “the approach of a passenger riding shotgun who studies the map to find the ideal route, adjusts the engine if need be, and always accepts the destination without protest.”
As former senator Olympia Snowe once put it, “you can find a think tank to buttress any view or position, and then you give it the aura of legitimacy and credibility by referring to their report.” Or consider the view of Rory Stewart, now a Member of Parliament in the UK, but once an expert on Afghanistan who was consulted on the Afghan surge and opposed it:
It’s like they’re coming in and saying to you, “I’m going to drive my car off a cliff. Should I or should I not wear a seatbelt?” And you say, “I don’t think you should drive your car off the cliff.” And they say, “No, no, that bit’s already been decided—the question is whether to wear a seatbelt.” And you say, “Well, you might as well wear a seatbelt.” And then they say, “We’ve consulted with policy expert Rory Stewart, and he says…”
Or look at how policymakers themselves define relevance. Stephen Krasner, an academic who became a policymaker, lamented the uselessness of much academic security studies literature because “[e]ven the most convincing empirical findings may be of no practical use because they do not include factors that policy makers can manipulate.”
The explicit claim here is that for scholarship to be of any practical use, it must include factors that policymakers can manipulate. This reflects a strong bias toward action, even in relatively restrained presidencies.
To take two recent examples, the Obama administration blew past a voluminous academic literature suggesting the Libya intervention was likely to disappoint. President Barack Obama himself asked the CIA to analyze the record of arming insurgencies before deciding what to do in Syria. The CIA replied with a study showing that arming and financing insurgencies rarely works. Shortly thereafter, Obama launched a billion-dollar effort to arm and finance insurgents in Syria.
As Desch tracks the influence of scholars on foreign policy across the 20th century, a pattern becomes clear: where scholars agree with policy, they are relevant. Where they do not, they are not.
In several of the cases Desch identifies where scholars disagreed with policy, the scholars were right and the policymakers were tragically, awfully wrong. Yet in the instances where scholars differed with policy at high levels, Desch blames their “unrealistic expectations” for causing “wartime social scientists to overlook the more modest, but real, contribution they actually made” to policy. But why would we want scholars to trim their sails in this way? And why should social scientists want to be junior partners in doomed enterprises?
Social scientists have produced reams of qualitative and historically focused research with direct relevance to policy. They publish blog posts, tweets, excerpts, op-eds, and video encapsulations of their work. The only thing left for them to do is to convey their findings via interpretive dance, and a plan for doing that is probably in the works already. In the meantime, it should be simultaneously heartening and discouraging for policy-inclined scholars to realize that It’s Not Us, It’s Them.
In a country as powerful and secure as the United States, elites can make policy built on shaky foundations. Eventually, the whole thing may collapse. Scholars should focus on pointing out these fundamental flaws—and thinking about how they might help rebuild.
Justin Logan is director of programs and a research associate at the Center for the Study of Statesmanship at Catholic University.