Mickey Rooney, who passed away earlier this week, had such an astonishingly long and varied career, from vaudeville to Broadway and from silent film to digital video, that it’s hard to sum up other than with banalities related to its very longevity. Rather than try to do any kind of justice to his entire career, I want to focus on a single, terrifyingly powerful performance that has stuck with me for years and, I suspect, will stick with me as long as I live, and that, to me, exemplifies something about his chosen profession.
That’s his role as Fugly Floom in “Babe: Pig in the City.” If you haven’t seen the film, stop what you’re doing right now and download it – but know what you’re getting into. This isn’t really a kids’ film; it’s like a cross between “Charlotte’s Web” and “Taxi Driver.” There are so many astonishing moments in the film – moments of terror, like Flealick the terrier’s near-death experience; of wincingly painful need, like the Pink Poodle’s shameless display for the dog catcher; and of deep pathos, like Thelonius the orangutan’s gratitude for the simple dignity of clothing. But a key anchor for the whole experience is Rooney’s performance as Floom, an aging, creepy clown.
The key to that performance is its sincerity. There’s not a moment in which Rooney mugs, or trades off his persona, not a moment in which he solicits our sympathy. He is completely inside this terrifyingly remote and strange character, a man who has become the clown, who is no longer performing because he is never not performing. The closest I can come to the kind of pain I felt watching it is the experience of watching Emil Jannings in “The Blue Angel” – but there we have the consolation of narrative, of understanding how he became the pathetic character he is at the end, and we are able, to some extent, to distance ourselves from him. I couldn’t do that with Rooney; he is, as Lear said of Poor Tom, “the thing itself” – not unaccommodated man, but social man, man repetitively performing the clownish role of being human, which is what, in this film, life in the city reduces us to.
Not every actor, approaching Lear’s four score years and continuing more than a decade past them, would have such commitment to his art that he would undertake a role like Floom, put himself on the line like that, emotionally and artistically. Only a real artist would do that. Which is what Rooney was.
I’ve been following with interest the condemnatory thread engendered by my old friend Reihan Salam’s Slate piece, “Why I Am Still a Neocon.” I’m sorry to have to tell Salam that I think many of the criticisms are justified. If you want to see a good roundup of such criticisms, go visit Daniel Larison’s blog; he has been fairly exhaustive.
I’m joining the thread to try to introduce some analytical clarity, and to see whether any case can be made for neoconservatism as such – rather than make a case for internationalism more broadly and then simply impute that case to the much more specific views of neoconservatism.
First of all, “Neoconservative” and “Bush administration” shouldn’t be treated as synonyms; by the same token, “moralized foreign policy” isn’t a synonym either. The Bush administration, for all its ideological zeal, had to deal with the actual world, and inevitably strayed from whatever one might identify as the “one true path” of neoconservatism. Moreover, many key figures in the administration – Donald Rumsfeld, for instance – have never really been thought of as proper neoconservatives (though Rumsfeld has been lionized by plenty of neoconservative figures).
As well, plenty of foreign policy types have undertaken actions for moral reasons without thereby becoming neoconservatives. Old-fashioned liberal internationalism is one morally-inflected foreign policy stream that should not be identified as neoconservative; so is more contemporary humanitarian interventionism. Realists can also make room for morally-motivated actions, like the rescue of the Ethiopian Jews which was substantially made possible by George H. W. Bush (not that he got much credit for it).
Moreover, virtually everyone involved in any stream of foreign policy thinking embraces the concept of collective security to some degree. There are virtually no true isolationists out there – most definitely including Rand Paul. So nobody should say, in effect, “I’m a neoconservative because I believe that NATO retards the development of intra-European rivalries, or the American alliance with Japan reassures some other Asian countries that we will restrain any revival of Japanese nationalism.” Plenty of realists would say the same.
Salam’s initial column suffers from a surfeit of confusion about all of the above, but particularly on the question of morality in foreign policy. Salam’s example of immoral American behavior relates to Nixon’s support for Pakistan during the 1971 crackdown in East Pakistan. He decries that support, and wishes America had stood up for democracy and human rights. But Pakistan was at the time an American ally, and India, whose intervention ultimately led to independence for Bangladesh, was considered a vague hanger-on of the Soviet bloc. Realistically, he’s not complaining that America didn’t intervene against Pakistan; he’s complaining that America didn’t reduce its level of support for Pakistan in the wake of the crackdown – or use its leverage to induce Pakistan to act with more restraint. Neither action sounds remotely like neoconservatism either in theory or in practice. What they sound most like is the Carter policy in the late stages of the Shah’s reign in Iran – a policy that absolutely can be defended on the merits, but for which I strongly doubt you can find a single neoconservative defender.
His follow-up column at National Review makes clearer the true heart of his argument, to wit: that American hegemony is good for the world and, hence, for America, and needs to be maintained even at a high cost. To maintain that hegemony, we need to retain a massive military advantage over any plausible combination of adversaries, define our interests globally, and reassure our allies that our primary needs from them are support functions rather than building substantial independent military capabilities.
That is a perspective very worth debating – I’ll hopefully debate it later today – but it should not be identified with neoconservatism but rather with what I would call the “Washington consensus” that has obtained for roughly 25 years, and that is only recently coming under any kind of serious scrutiny. The neoconservative persuasion antedates the “unipolar moment” of the 1990s, and the reason that lots of people who do not call themselves neocons refuse to associate themselves with the label is not merely a matter of avoiding unpleasant associations but because they do not agree with certain views that are quite central to neoconservatism as it actually exists.
If we’re to be more precise, then, neoconservatism should be characterized by three attributes in particular.
First, neoconservatism’s main analytical insight is that the internal character of a regime can have a material effect on its foreign policy. Specifically, the mid-century totalitarian regimes in Germany, Italy, Japan and the Soviet Union derived their legitimacy in part from their status as revisionist, expansionist powers, and hence could not adopt a policy of peaceful coexistence without succumbing to internal contradictions. A foreign policy aimed not merely at deterrence but at changing those regimes’ character was the only solution to the threat they posed to international order. Neoconservatives don’t want to spread democracy simply because they are nice. They want to spread democracy because they believe that democracies will be naturally more aligned with each other and because democracies will be naturally less inclined to undertake expansionist wars that threaten the international system.
Second, neoconservatism is fundamentally activist, by which I mean not merely that it has an expansive view of national interests or that it has no moral problem with intervening in other countries, but that it holds as an article of faith that power cannot be husbanded. On the contrary, a vigorously activist and successful power will grow more powerful simply by virtue of having demonstrated such vigor. Another way of putting it is that neoconservatives don’t really believe that an aggressive power will trigger balancing by lesser powers; rather, they believe that an aggressive power will more likely trigger bandwagoning. Therefore, inasmuch as the United States wants to grow in power and not shrink, it needs to err on the side of action.
Third, neoconservatives have a strong bias against the legitimacy and value of international law. Skeptical of the restraining power of custom or tradition, neoconservatives tend to see law as meaningful only as an expression of an entity with a monopoly of violence. As such, in the international sphere there is only “law” if some entity is willing to use overwhelming force to ensure that said law is obeyed. The United States is unique in today’s world in potentially occupying the role of that entity, and recurrent dreams of a “league of democracies” or some such are attempts to come up with an entity that would have many of the characteristics of the United States without obviously being a single, hegemonic nation.
There is an insightful kernel of truth in each of the above tenets, but that insight is often badly abused in practice. To take the first, Nazi Germany in particular probably could not have endured without continually being on the attack, and the best evidence of that fact is that it launched a thoroughly mad war on the Soviet Union when it had not yet forced Britain into a separate peace (and at the same time that its ally, Japan, attacked the United States in a similarly mad expansion of the war). And more generally, the notion of some degree of separation between regime interests and the national interest is a valuable one for thinking about how other states behave.
But the insight is badly abused when we conclude that democracies will never be aggressive or expansionist. Britain, France and the United States all have expansionist and imperialist histories, and they continue to have expansive views of their national interests and prerogatives to intervene that they do not apply to other actors in the international system. Modern India and Israel should also be added to the list of such democracies. Populist, illiberal democracies may be among the most conflict-prone regimes on earth. But the insight is even more badly abused when truths about Nazi Germany and imperial Japan are applied to other powers that may be hostile and unfree, but are not obviously expansionist or even revisionist. Traditional authoritarian regimes are among the most cautious in terms of their foreign policy, and even some highly ideological regimes, like Iran, have not been nearly as aggressive as neoconservative theory suggests they must be. Indeed, the neoconservatives may well have been wrong about the Soviet Union itself, and George Kennan, who saw more continuity than discontinuity with pre-Soviet Russian history, more correct.
The second insight also contains a kernel of truth. There are indeed times when an active power provokes bandwagoning rather than balancing – plenty of realists would agree. But the opposite is also true. The United States easily assembled a broad coalition to fight the Gulf War because Iraq had aggressively conquered and absorbed another sovereign state. Countries all over the world saw that behavior as a threat – and rather than seek to placate the aggressor, rushed to support (and even goaded) a power that proposed to reverse the aggression. By contrast, the coalition assembled to fight the Iraq War was much more limited, precisely because America was viewed in much of the world as the aggressor. There was some bandwagoning by minor countries around the American banner, but much more widespread concern about what our actions portended about America’s global aims. Today, concerns about Chinese revisionist pretensions have driven a number of Pacific Rim states into closer alliance with the United States. This is balancing behavior. But if the United States began to support Japanese nationalist pretensions to revisionism with the enthusiasm with which the neocons supported Georgia’s 2008 war, or launched an unprovoked attack on North Korea comparable to our war against Iraq, that calculus would undoubtedly change – quickly, dramatically, and not in our favor.
As for the last insight, yes, international law lacks a police power to back it up definitively. But that does not mean that it has no value or meaning. Law and respect for law is a signaling mechanism to other states about the character of the state they are dealing with. Cavalierly asserting that the law can’t stand in the way of our righteous action sends a very clear signal: that we recognize no restraint. That is not going to make any other state comfortable unless they agreed with us in our assessment of our own absolute righteousness. And that discomfort poses actual costs to our ability to conduct an effective foreign policy, whether for humanitarian purposes or for the protection of our national interest.
In other words, neoconservatism’s genuine insights are modest and contingent. We can’t cavalierly assume that Iran’s regime interests are identical to its national interests (rightly considered, Iran and the United States have no material interests in conflict), and should take into account the ideological basis of the regime when we consider its likely foreign policy. But we also can’t cavalierly assume that, because it is an ideological regime, it is inherently aggressive and expansionist – particularly when there is almost no actual evidence of such ambitions. We should not assume that, say, Russia’s actions in the Crimea will “automatically” generate balancing by European powers, and that we can therefore take a blasé attitude towards events in a far off country of which we know nothing. But by the same token we should not assume that there will be a bandwagoning effect around any attempt to “lead” a coalition to “force” Russia to rescind its annexation and withdraw from that territory. A law-based approach to both conflicts may be emotionally unsatisfying, and may fail, but may still be more responsible and more likely to achieve success than an approach – favored by actual card-carrying neoconservatives if not by Salam – that emphasizes the threat or use of force, unilaterally if necessary.
In actual practice, neoconservatives have a tendency to be stopped clocks, hammers that see every problem as a nail. And stopped clocks and hammers are not good guides to policy, regardless of where they are stopped or how hard the hammer. They would add more value to the foreign policy debate if they would return to the empirical rigor of the original neoconservatives in domestic policy, and stop behaving as if they had found some kind of eternal truths.
Salam, given his intellect and his preexisting sympathies, is an excellent person to begin that kind of change within the self-identified neoconservative ranks. But to change, you first have to acknowledge that you have a problem.
Damon Linker has a deliberately-provocative column out today, arguing that the GOP has made a distinct turn against democracy as such:
This was the week, of course, when the Supreme Court’s five-member conservative majority knocked down limits on aggregate contributions to federal political campaigns, opening the door for the rich to exercise even more influence on the political system than they already do. It was also the week when Rep. Paul Ryan unveiled his latest budget proposal, which would gut food stamps and other aid to the poor. And as I wrote about the other day, this is a political season that has seen the Republican Party working to make it harder for poor people and members of minority groups to vote.
Then there was venture capitalist Tom Perkins suggesting a couple of months ago that only taxpayers should be permitted to vote — and that those who pay more in taxes should be given more votes to cast in elections. And that came less than two years after Mitt Romney was caught kissing up to wealthy GOP donors by denigrating the “moochers” who make up 47 percent of the country’s population.
Ladies and gentlemen, that many data points make a pattern. We seem to be living in an era in which the Republican Party is turning against democracy in an increasingly explicit and undeniable way.
That list of data points includes some considerable stretches – cutting food stamps may be both cruel and foolish, but is it really credible to call it anti-democratic? – but I think Linker has a real point about the current trend on the right. But I don’t think he’s at all correct in saying that this turn is “unprecedented” in American history. And I wish he’d taken the anti-democratic point of view a little more seriously, so that its profound flaws might be effectively exposed.
To take the first point: the United States has turned away from majoritarianism repeatedly in our history. The dramatic expansion of slavery in the South, and the antebellum efforts to extend the legal reach of slavery into new territories and even into free states, represented a turn away from democracy. The elimination of the franchise for African Americans in the South after Reconstruction, the institution of Jim Crow laws, and their tightening during the Progressive era (Woodrow Wilson is the one who brought Jim Crow to the nation’s capital), the imposition of the poll tax – all these represented turns away from democracy. One might characterize the various 19th- and early-20th century anti-Catholic campaigns as anti-democratic as well – that certainly would be less of a stretch than Linker’s point about food stamps. Ditto for the Lochner-era Supreme Court decisions striking down democratically-enacted laws, intended to protect working people, for abridging “freedom of contract.” The point being: while the evolution of the written Constitution may reflect a monotonic expansion of the franchise and an ever-expanding circle of citizenship, the lived experience of Americans has not been so linearly progressive.
So we may be in one of those regressive periods again.
Presumably because of space constraints, Linker doesn’t discuss why we might have entered one of those periods. I suspect that demographic change coupled with the agonizingly slow recovery from the financial crisis do much to explain the turn to zero-sum thinking in politics, which in turn explains much of the appeal of anti-democratic arguments on the part of those who see themselves as the true proprietors of the state and the country. Nor does he do much to ask whether the anti-democratic stance makes sense in its own terms, other than to say that Aristotle would have recognized it.
Myself, I don’t think it does, and I don’t think Aristotle would (though Coriolanus might). Aristotle’s case for aristocracy very plainly implies a kind of reciprocal obligation that is completely foreign to the Randite arguments so common on the American right these days. And those arguments rarely take the explicit form of arguing that the wealthy should rule because they are more virtuous. Rather, the two most common forms of the argument are: that it’s unfair for one’s representation to be less than proportional to one’s contribution (therefore people who don’t pay income taxes should not be allowed to vote), and that it’s dangerous to give power to the unpropertied (because they don’t have a sufficient stake in stable property rights that promote productive enterprise).
And both of these arguments are transparently absurd. If the question is fairness – that one’s representation should be proportional to one’s contributions – wouldn’t you have to account for the contributions that were never compensated for properly? This country was substantially built by the coerced contributions of African slaves. Should those slaves’ descendants get “extra” votes to compensate for that manifest unfairness? And shouldn’t the benefit one derives from the state also be included as part of the calculus of one’s contribution? The state protects the distribution of property, after all, with the threat of violence. Should heirs, therefore, be disenfranchised, because they benefit from the state’s monopoly of violence, but have contributed nothing themselves?
And why should contributions be measured in monetary terms? Only veterans explicitly risk their lives to protect the country as a whole. Perhaps only veterans should be allowed to vote? Without mothers, there would be no next generation of Americans at all. Perhaps only women with children should be granted the vote? (Or perhaps they should just pay much less in taxes.) Once we start debating who deserves more votes, it’s obvious that the debate will not be resolved by reason, but by force or sheer weight of numbers. Which is a pretty good case all by itself for the universal franchise.
Meanwhile, if the question is the voters’ stake in the state, why should this incline anyone toward restricting the franchise? Is there any evidence that the road to stability and prosperity lies in that direction? Read your Livy. Or take a look at the history of Latin America. I’m not saying that there isn’t a coherent argument that ownership of property is important to virtuous citizenship – but assuming that we actually care about the well-being of the population as a whole, that argument leads logically not to plutocracy but to some version of distributism.
Now, distributism has other problems – most particularly, that it’s not obvious how it would work in a modern non-agricultural context. (Broad distribution of property in the form of shares of large national enterprises is a variant of socialism.) But at least it is a response to the problem that those who worry about “the 47%” are concerned with that doesn’t simply write that half the country out of the sphere of moral concern.
Readers may wonder why I bother even to dispute an argument against democracy, as I have done before in this space. The reason is: that’s what arguments are for. Giving up on that idea of reasoned deliberation and dispute is very close kin to giving up on democracy itself, which is a problem on the left these days as well as on the right.
Well, it all depends on what data you emphasize.
Gallup put out two recent pieces suggesting the answer is: yes. The first demonstrated that, over the course of time, whites as a whole have gotten more Republican, and more reliably so:
In recent years, party preferences have been more polarized than was the case in the 1990s and most of the 2000s. For example, in 2010, nonwhites’ net party identification and leanings showed a 49-point Democratic advantage, and whites were 12 percentage points more Republican than Democratic. The resulting 61-point racial and ethnic gap in party preferences is the largest Gallup has measured in the last 20 years. Since 2008, the racial gaps in party preferences have been 55 points or higher each year; prior to 2008, the gaps reached as high as 55 points only in 1997 and 2000.
The increasing racial polarization in party preferences is evident when comparing the data by presidential administration. Nonwhites’ average party preferences have been quite stable across the last three administrations, consistently showing a roughly 47-point Democratic advantage under Clinton, Bush, and Obama. On average, 69% of nonwhites have identified as Democrats or said they were independents who leaned Democratic, and 21% have identified as Republicans or leaned Republican.
Meanwhile, whites have become increasingly Republican, moving from an average 4.1-point Republican advantage under Clinton to an average 9.5-point advantage under Obama.
And a subsequent piece noted, more specifically, that voters over age 65 have trended strongly toward the Republicans, and identified that trend with the fact that the 65+ group is much whiter than the electorate as a whole:
Gallup’s analysis reveals that the changes in seniors’ party preferences are attributable in part to attitudinal change among today’s seniors as they have aged. This is evident in survey results from 1993 and 2003 that show the party preferences of today’s seniors when they were 10 or 20 years younger.
In 1993, Americans then aged 45 to 79 represented the age group that today is 65 to 99. At that time, 20 years ago, those 45 to 79 were highly Democratic, with a 12-point advantage in favor of the Democrats. That gap was larger than the average seven-point Democratic advantage among younger age groups that year.
Ten years later, all age cohorts had become more Republican and were fairly balanced politically. Today’s seniors, who were aged 55 to 89 in 2003, were the only age cohort to tilt Democratic at that time. The 2013 results show that today’s seniors have continued to move in a Republican direction, while the younger age cohorts have gone back in a Democratic direction.
U.S. party preferences are strongly polarized along racial lines, and one reason seniors are more Republican now is that they are racially distinct from other age groups. Eighty-five percent of those 65 and older are non-Hispanic whites, according to Gallup estimates, compared with 77% of 50- to 64-year-olds, 66% of 30- to 49-year-olds, and 54% of 18- to 29-year-olds.
So: whites are trending Republican, and seniors are trending Republican, and those two groups overlap substantially, all of which is driving increasing racial polarization in voting.
But there’s another way to slice the same data from Gallup:
Across different age cohorts, whites show something like a 12-point advantage for Republicans – except for the youngest cohort of white voters, which shows a 2-point advantage for the Democrats. Meanwhile, across all age cohorts non-white voters show a marked preference for the Democratic Party. But that advantage shrinks with every cohort: from a 58-point Democratic advantage among non-white seniors, to a mere 37-point advantage among the youngest non-white cohort.
In other words: white Democrats and non-white Republicans both skew young relative to their racially-similar counterparts of the opposite party. That suggests a possible counter-narrative, whereby racial polarization in voting is actually weakening over time.
Here’s a possible way to reconcile both readings of the data. Racial solidarity is a more substantial vote-motivator for older Americans than for younger Americans – both for white and non-white groups. Assuming that currently-young voters don’t grow more racially-motivated over time, that means that, over time, the electorate as a whole will be less-motivated by racial solidarity in voting. However, in the Obama era, the racial identity of each party has become more sharply defined in voters’ minds, with the Republicans being understood as the white-identified party and the Democrats being identified as the non-white-identified party. The latter effect dominated over the former in the Obama era, resulting in a higher degree of racial polarization in voting. But the weaker identification of young voters, white and non-white, with the party that “represents” them racially, suggests that this polarization could be temporary, and could be quickly reversed if events weakened the racial identity of either or both parties in a future election.
If you haven’t been following the debate between Ta-Nehisi Coates and Jonathan Chait about the legitimacy or illegitimacy of a “critique of black culture” as part of a rhetorical strategy against crime/unemployment/teen pregnancy/etc., then you must not be on the internet. To catch up, start here and continue here, here, here, here, and here. Basically, Al Gore invented the internet so we could do this.
Meanwhile, Ross Douthat has entered the lists with a phenomenal post that demonstrates a welcome attentiveness and appreciation for Coates’s perspective:
Looking back on the debates of the 1990s, Coates says that “there was really no doubt” that a neoliberal magazine would use a photo of a black single mom to illustrate its Clinton-era case for welfare reform, and I know well what he thinks about the excerpt from “The Bell Curve” that ran in TNR in that same era. But it’s at least noteworthy that a generation later, the name “Charles Murray” is mainly associated with a controversial argument about cultural collapse in downscale white America, and the most recent cover story on poverty, culture and welfare in a political magazine was Kevin Williamson’s grim essay on Appalachia in National Review. Nor are these examples really outliers: Murray’s “Coming Apart” raised the argument’s profile and enriched it with a searching look at social indicators, but the idea of a pan-racial social crisis with its roots in the decline of the two-parent family has featured prominently in conservative discussions since the Bush era, if not before.
And the story that some of us on the right, at least, would tell about that crisis is one that’s actually reasonably consonant with Coates’s grim account of the African-American experience on these shores. Beginning in the 1960s, we would argue, a combination of cultural, economic and ideological changes undercut the institutions — communal, religious, familial — that sustained what you might call the bourgeois virtues among less-educated Americans. Precisely because blacks had been consistently brutalized throughout their history in this country, they were more vulnerable than whites to these forces, and so the social crisis showed up earlier, and manifested itself more sweepingly, in African-American communities than it did among the white working class and among more recent immigrants. This pattern inclined a lot of people, right and left, to see the crisis as an essentially inner-city, black-underclass problem, and prompted the kinds of Reagan and Clinton-era debates which ultimately gave us welfare reform, tough-on-crime policies, and a national campaign against teen pregnancy. But now we know differently: However one assesses the wisdom and justice of those policies (and Coates and I would have some major disagreements there, I’m sure), the racialized framework in which they were debated and implemented does not fit the lived reality of America in 2014.
By which I mean that (just as Coates suggests) we don’t have a black culture of poverty; we have an American culture of poverty. We don’t have an African-American social crisis; we have an American social crisis. We aren’t dealing with “other people’s pathologies” (the title of Coates’s post) in the sense of “other people” who exist across a color line from “us.” We’re dealing with pathologies that follow (and draw) the lines of class, but implicate every race, every color, every region and community and creed.
In this landscape, certain ways of talking about culture and poverty really are inappropriate, and for roughly the reasons Coates suggests — because they essentially involve a flight into the more comforting (for white people) patterns of the recent past, into a reassuring Othering of social pathology, into a conversation that has why can’t those poor black people get their act together? written over and over again between its lines. In this landscape, it’s usually a mistake — no, not a “racist” mistake, but still a mistake — for white Republican politicians interested in poverty to overstress the “inner city” in their rhetoric. In this landscape, forms of moral exhortation around sex and marriage and work and responsibility that are really just outsiders’ critiques of “black culture” are even less defensible than usual.
Before adding my own 2c of criticism, I just want to acknowledge how smart this is.
Douthat goes on to make two objections to Coates’ apparent perspective: first, that he seems to veer close to denying that culture is any kind of an independent variable in sociology, a stance he calls “radical[ly] reductionist” and presumptively uninteresting; second, that he doesn’t acknowledge the existence of, well, Ross Douthat, and other supporters of the Bush-era social agenda who made a conscious effort both to talk a talk and walk a walk that was post-racial in its analysis. (Uncharitably, one might describe it as seeking to supplant America’s traditional racial identity politics with a trans-racial Christian identity politics.)
I think Douthat has a legitimate point there – but my main objection would be something like the following. Most people would agree that the church had a more central place in African American life in 1965 than it did in most white communities. And yet, in 1965, whatever forces were driving the breakdown of the traditional family had a greater impact on the African American community than they did in white communities. Shouldn’t that suggest that exhortatory moralizing is perhaps not the strongest line of defense?
Moreover, Douthat argues that any kind of re-moralization, to work, would need to be driven by leaders that are exceptionally credible with those on the receiving end of the sermon. But he identifies the social pathologies that concern him as more class-based than race-based. In which case, to achieve his own goals of re-moralization, doesn’t he need an authentic working-class leadership? It’s worth noting that the closest Charles Murray came to a remedy for our national “coming apart” was for elites to try living closer to working-class people. He doesn’t suggest any adjustment of our national political and economic arrangements that would cede more power to the working class.
And I stress the word “power” deliberately. It is entirely possible to simultaneously experience more consumer choice, and more consumer comfort, while experiencing a diminishment of power, a lack of control over one’s own life, and a lack of involvement in collective decision-making.
My 2c for Coates comes from a somewhat different direction. To wit: what is the politics implied by his critique?
The most obvious political thrust of a narrative of communal subjugation is nationalist and revolutionary. You make the case that your people has been brutalized and stolen from and raped and murdered with impunity. That case motivates the determination to rise up and prove your collective manhood by throwing the foreigner out of power. Depending on the circumstances, that might mean expelling an occupier (Kenya, for example), or toppling a minority regime (South Africa, for example), or carving one’s own state out from a larger structure (South Sudan, for example). Nationalism, of course, doesn’t necessarily solve the inequities associated with the legacy of the historic injustice. But it makes it possible to act communally on a formally independent basis. And there’s a vital dignity in that – or so many of the world’s peoples have concluded.
True nationalism has never been a particularly practical option for the African-American community, though. And Coates himself is emphatic about his Americanness, his stake in a collective experiment in which he will likely always be a minority. He just wants more white Americans to love America without treating it as exceptional or objectively superior.
The point I want to make is that this agenda is itself a variety of exhortatory moralism, aimed at the other, just as Paul Ryan’s is. It’s just that the pathology in question is not crime or teen pregnancy but unexamined white supremacist premises. And that’s why I ask Coates the question I asked with regard to last year’s Best Picture: what kind of politics are implied by that kind of searing indictment divorced from any gesture toward action? Chait’s increasing irritation at Coates isn’t really about feeling misrepresented, but about the feeling that Coates’s is a counsel of despair.
Which brings me back to the original basis of the argument – do Barack Obama and Paul Ryan agree about something fundamental? Of course they do. They are both American politicians. So the fundamental thing that they agree on is: words are an instrument of power.
Why does Barack Obama exhort “Cousin Pookie” to “get off the couch” and vote? Because if he gets to the polls, Cousin Pookie will vote for him. He is not an analyst, trying to be fair to Cousin Pookie. If African-Americans were a disproportionate percentage of voters in 2008, he wanted them to be an even bigger disproportionate percentage of voters in 2012. Because he wanted to win. It has nothing to do with justice.
If Coates is disappointed that the election of Barack Obama has not radically improved racial dynamics in America, he should remember that Barack Obama is just the President of the United States. Coates complained in one of his pieces that Chait was treating the President as if he were the coach of “team Negro” – which would make exhortation to “try harder” appropriate – whereas in fact he’s the commissioner of the league. But if it’s not the commissioner’s job to give morally exhortatory speeches to “his” team, it’s also not the commissioner’s job to rail against the unfair advantage of the Yankees’ payroll. And it’s important not to forget that the commissioner is chosen not by the players or by the fans, but by the owners.
Which doesn’t mean that some commissioners aren’t more favorable to the interests of the players, and some less.
UPDATE: Here’s another way to put my question to Coates. The dominant narrative in speaking about black poverty could be described as “up and out.” The conservative variant emphasizes the personal responsibility element in making that happen, and the liberal variant emphasizes the economic and social policy assistance element, and there are further variations on variations to include conservative reformers and so forth – but the commonality is “up and out.” I don’t read Coates as denying that personal responsibility is important – I read him as denying that African Americans deserve any special notice in that regard, that they exhibit any special deficiency.
But I also read something else: an objection to that narrative as such, regardless of where the emphasis is placed. Because his own inheritance, from his father, is a narrative not of “up and out” but of “up and over.”
And my question is: what, in the context of America in 2014, does “up and over” mean to him?
Just a minor point to add to Daniel Larison’s typically sensible post about the folly of issuing empty threats over Ukraine.
I hope that everyone agrees that bluffing is dangerous, because a bluff can be called and, if it is, the bluffer must either make good on the bluff – which, presumably, is very strongly counter to his interest, else he wouldn’t be bluffing but making threats in earnest – or suffer exposure as someone whose threats are not to be taken seriously. If we say, “don’t cross this line in the sand or else” and the person we are threatening crosses it, and we do nothing, then he’ll be that much less inclined to pay any attention when we draw such lines in future sands. (Note: I’m not arguing that our credibility is some unitary factor independent of the characteristics of individual conflicts; other actors in the system can presumably make rational estimates of where our “real” interests lie. Nonetheless, it isn’t a good thing to get a reputation for making empty threats.)
On the other hand, bluffing is a useful tool because, as in poker, it enables you to “play” a somewhat stronger hand than the one you actually have. If there is some uncertainty about whether a threat is a bluff or not, the threat may be accepted as real, and you gain the benefit of the threat at a lower cost than it would take to accumulate the cards necessary to make it good. Moreover, the acceptance of the bluff as true itself sends a signal to other potential opponents: our last opponent backed down in the face of our threats. He thought we were serious. Maybe you should, too?
Looked at this way, there’s a case for judicious bluffing – that is to say, as a matter of calculated risk. If I bluff, my opponent may call – but he may treat the bluff as serious, and back off, and if he does so then the “power” of my declared threats has been enhanced. In other words, there’s a risk of loss, but also a risk of gain. If the action we’re trying to deter is sufficiently damaging, it becomes relatively easy to make the case for bluffing – because a successful bluff deters the action and also enhances one’s credibility, while a failed bluff only results in a loss of credibility; the negative action would presumably happen anyway if the bluff hadn’t been made in the first place.
A bit of simple math might be helpful to explain this point of view. Assume, for simplicity’s sake, that the loss or gain to credibility (“c”) is symmetric – we gain just as much from a successful bluff as we lose from a bluff being called – and that the opponent’s action (“a”) is certain if either the bluff is called or no threat is made. We’ll use P(s) to represent the probability of the bluff’s success. In that case, you get the following:
No bluff: cost = a (opponent takes the action)
Bluff called: cost = a+c (opponent takes the action *plus* we lose credibility)
Bluff successful: cost = -c (i.e. we gain credibility because our opponent backed down)
Total cost of bluffing = P(s)*(-c) + (1-P(s))*(a+c) = a+c-P(s)*a-2*P(s)*c
Since the cost of not bluffing is “a,” to compare bluffing to not bluffing we subtract “a” from both sides. Result: bluffing makes sense if c is less than the sum P(s)*a+2P(s)*c.
That looks like a pretty big number relative to c. To illustrate, take the following example: c is twice as large as a – i.e., the cost to credibility of a failed bluff is twice as large as the cost of the action we’re trying to deter in the first place – and the probability of success is only 50%. Should you bluff?
No bluff: cost = 1
Bluff called: cost = 3
Bluff successful: gain = 2
Total cost of bluffing = 50% * 3 – 50% * 2 = 1.5-1.0 = 0.5
Your indifference point in this ridiculously simplified analysis would be a 40% chance of success. In other words, this analysis leads to the conclusion that you should bluff in circumstances where the bluff is 50% more likely to fail than to succeed, and where the total cost of a failed bluff is three times as large as the cost of never making a threat.
You can see how someone might rationally conclude that bluffing is a pretty good strategy, in a lot more cases than you might initially suspect. Indeed, notwithstanding the many excellent points in Paul Pillar’s refresher course in Cold War deterrence, he gives the inaccurate impression that America did not do a lot of bluffing in that multi-decade standoff. Whereas, in fact, there is considerable question whether America’s core deterrent ever was truly credible, in the sense that it was never clearly rational to actually escalate to a nuclear exchange for the sake of Western Europe or Japan, and yet America threatened first use of nuclear weapons in response to a conventional Soviet assault.
Obviously, there are a dozen ways to poke holes in my analysis above (and I should be clear, that analysis is not something I’m defending, just something I cooked up to illustrate a point that I then wanted to debate). The effect on credibility could be asymmetric, for example – the gain from a successful bluff could be of much lower magnitude than the loss from a called bluff. Or you can question the whole framework by emphasizing the inherent uncertainty of all the numbers involved (which, after all, will most likely be pulled out of the analyst’s posterior). But one hole that should get poked more often is the unwarranted assumption that threats can only decrease, and not increase, the likelihood of the action you’re attempting to deter.
Suppose your opponent is contemplating action “a” that would accrue some gain to him at some cost to you – but not a large enough cost to be worth fighting him over. Nonetheless, you threaten to fight if he takes that action. If he allows himself to be deterred, in our analysis above your credibility is enhanced – you experience a gain in power. But at whose expense?
First and foremost: your opponent’s. After all, every other actor in the system can rationally conclude that you might well be bluffing just as easily as your opponent can. They can’t be certain – but they know there’s a good chance. If your opponent backs down in a situation where a bluff is fairly probable, that results in a substantial blow to his credibility. Even if the cost of fighting with you is sufficiently high that it would mean a substantial cost to the opponent to take an action that leads to war, he can’t afford simply to absorb the cost of backing down in the face of a possible bluff. He has to play the odds.
Well, let’s run the odds from his perspective, using the same kind of over-simplified analysis. Assume that calling our bluff or backing down generates symmetrical gain and loss, and that the value of the action itself is still 1/2 the cost of backing down to a bluff. Assume, further, that the cost of war is 10x the value of the action. We’ll use P(b) to indicate the opponent’s estimate of the probability that we are bluffing. (We already know that the true probability is 1.) Well?
Back down: loss = 2 (loss in credibility)
Call bluff, no war: gain = 3 (1 for action itself, 2 for gain in credibility)
War: loss = 10
Total value of calling bluff: P(b)*3-(1-P(b))*10 = P(b)*13-10
Since the loss due to backing down is 2 (a value of -2), it’s worth calling the bluff if P(b)*13 is greater than 8, or, in other words, if there’s a greater than 8/13 chance we are bluffing (in which case the expected loss from calling the bluff is also 2).
Think about that. In this ridiculously over-simplified, zero-sum analysis, we should bluff if we think there’s at least a 40% chance of the bluff succeeding, even if we rule out in advance the option of making good on the threat and even though, if our bluff is called, we’ll lose 3 times what we would have lost if we had never bluffed at all. And our opponent should call our bluff if they think there’s at least a 60% chance of it being a bluff, notwithstanding that if they’re wrong and we go to war they’ll lose 10 times what they would have gained from taking the action if we’d never threatened them. Moreover, if our opponent estimates at least a 70% chance that we are bluffing, the relative value of taking the action becomes *higher* to them than it would have been had we never bluffed in the first place, because of the incremental value to their prestige and credibility in having defied our threats. Now, recall that we’re likely to have positively-biased estimates of our own ability to bluff. Does it still seem reasonable to assume that threats will at least reduce, and not increase, the likelihood of our opponent taking a given action?
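The opponent’s side of the same toy model can be sketched the same way. Again, the function names and default parameters are my own illustrative rendering of the numbers used above (action worth 1, credibility stake 2, war costing 10), not anything more:

```python
# The opponent's side of the toy model above.
# pb = opponent's estimate of the probability that the threat is a bluff.

def value_of_calling(pb, action=1, cred=2, war=10):
    """Expected value of calling: with prob pb it's a bluff (gain action + credibility);
    otherwise war (pay the cost of war). With defaults: 13*pb - 10."""
    return pb * (action + cred) - (1 - pb) * war

def should_call(pb):
    """Call iff the expected value beats backing down, which is a sure loss of 2."""
    return value_of_calling(pb) > -2

# The 8/13 threshold from the text:
print(should_call(8/13 + 0.01))  # True: just above the threshold, calling pays
print(should_call(8/13 - 0.01))  # False: just below, backing down is cheaper

# And the 70% point: for pb > 9/13, the payoff of acting relative to backing down,
# (13*pb - 10) + 2, exceeds the unthreatened value of the action (1) - the threat
# has made the action *more* attractive than it was before.
```

Running the thresholds this way makes the perverse result easy to see: the same symmetric-credibility assumption that makes bluffing attractive to us makes defiance attractive to the opponent once his estimate that we are bluffing gets high enough.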
Again, there are a dozen holes that can be poked in such an admittedly over-simplified analysis. But the important point is that there are entirely rational reasons to suspect that issuing a threat can increase the likelihood that the opponent takes the action you are trying to deter. For any such action, “a,” there are a variety of potential costs and benefits to the actor – a vast penumbra of uncertainty about outcomes that might in itself be sufficient to deter many actors from many potentially beneficial actions. By issuing a threat, you’ve made one of those potentialities much more concrete: inaction will definitely result in some loss. Depending on how large that loss looms, and what the opponent figures are the odds that you’re bluffing, the threat itself could be sufficient to motivate the action you intended to deter – or some other action of equal or greater cost to you.
With America defining its interests in such a global fashion, it’s very likely that this dynamic plays an important part in our opponents’ responses. It certainly seems to have been relevant in Georgia, where part of Russia’s motivation in provoking Georgia into launching a war was precisely the desire to call America’s bluff. (How much, after all, is South Ossetia itself really worth to anybody, even the South Ossetians?) The same might prove true in Eastern Ukraine if we handle the situation in the way that some hawks prefer.
First of all, I think the overall orientation of Douthat’s column is exactly right: the illusions of liberal internationalism and hawkish neo-conservatism were congruent – sufficiently so that both the Russians and we ourselves sometimes had trouble telling them apart. And I think his conclusion is strong as well: we need a realistic response, one that recognizes Russia’s revisionism and the real limits to our power to respond.
But a realistic response also needs to be clear-headed about what our interests actually are here. From Douthat’s column, I sense an underlying assumption that the two illusory programs were intended to advance American interests, but, because they were based on illusions, could not succeed. That is to say: it would be good for America if Russia were to become a “normal” country and good for America if we expanded our “sphere of influence” into places like Georgia and Ukraine, but we miscalculated what was possible. I think he’s right about the limits of the possible, but the implicit assumption – that our original objectives were even in our interests – needs to be examined.
Let’s take the second goal. Assuming we’re going to accept terms like “sphere of influence,” what would be the advantage to America of expanding ours into Ukraine? A Ukrainian manpower contribution to NATO? The economic benefits of greater trade with Ukraine? Diplomatic support for American initiatives? None of these is obviously of substantial benefit. Meanwhile, we (or, more correctly, the European Union) would take on the burden of Ukraine’s substantial political and economic deficits.
Assuming the goal of expanding NATO and the EU eastward isn’t specifically to weaken Russia, then – which I’ll assume for the sake of argument – the purpose would primarily be to improve the political and economic situation in Ukraine, which would then have ancillary benefits for us and our allies in terms of both avoiding the costs of instability on the edge of Europe (refugees, the need for humanitarian assistance, the possibility of being dragged into an actual conflict) and reaping the benefits of trade with a more prosperous partner.
The rather unfavorable offer that the EU made to Ukraine prior to the crisis strongly suggests that our European partners didn’t think these benefits were worth the costs. And since the collateral benefits of a successful “expansion” would accrue primarily to them, why would we want to pay more than they would? Other than to gratify us with the sheer size of our “sphere,” why would we want to add Ukraine?
Now, the first goal – the “normalization” of Russia – would certainly be in American interests, inasmuch as a revisionist power is necessarily some degree of threat to all status-quo powers. But I don’t see why “normalcy” requires submission to an American-led security architecture. A Russia analogous to South Africa or Brazil, that sought to play a positive regional role but kept aloof from or even actively questioned America’s grander pretensions, would presumably qualify for “normalcy.” And such an end-game would seem to be far more “realistic” than assuming Russia would ever become an outright supporter of American hegemony.
Moreover, it would arguably be more congruent with our interests. Again, even assuming Russia would ever consider subordination to an American-led global security architecture (unlikely), that implies that we would undertake the responsibility for assuring that Russia’s legitimate grievances were addressed satisfactorily, and would implicate us in its handling of its own internal problems. It’s clear to me why we would want Russia to handle these matters the way we would prefer, but it’s not clear to me why we would want the responsibility for assuring that they would be so handled. If we don’t have a good reason for taking on Ukraine as the next Italy, why would we want to take on Russia?
What all of that adds up to is to say that prior to the intervention in Crimea, America’s primary interest with respect to Russia was surely in avoiding a resumption of international tension between Russia and the West, such as is now taking place. We had many other secondary interests – assistance in pursuing our war against al Qaeda, stability in the energy markets, cooperation in reaching a verifiable negotiated solution to the Iranian nuclear program, mediation of the Syrian civil war, etc. And we had some interests that would have conflicted with Russia’s, including an interest in establishing the international norm that “spheres of influence” as such are an outdated concept incompatible with allowing all states sovereign freedom of action (which is not the same thing as saying that NATO should expand to include any country we like). But what is happening now is surely what we most wanted to avoid.
Now that it is upon us, though, our interests are somewhat different. Nobody should have been surprised that Russia was unwilling to simply sit back and accept the overthrow of the Ukrainian government. I wasn’t terribly surprised by direct Russian intervention either – Russia had done much the same in South Ossetia, Abkhazia and Trans-dniestria, and Serbia had done much the same in the various wars associated with the breakup of Yugoslavia. But the hastily-organized and highly questionable referendum and annexation have raised the stakes considerably. Russia’s legal position is extremely weak. The Crimea was transferred to Ukraine entirely legally per the law that prevailed in the Soviet Union; Russia agreed to respect the territorial integrity of Ukraine when the Soviet Union broke up; and secession should properly require not only a referendum but a negotiated agreement with the parent country (as was the case with the breakup of Czechoslovakia, and will be the case if Belgium, Canada, Spain, the UK or any other Western country perpetually at risk of splitting finally takes the plunge). If the annexation of Crimea is accepted, then the entire post-Cold War settlement is up for forcible revision. Given that our primary interest in the region is in the maintenance of stability and order, we should not be sanguine about that prospect.
The issue, then, isn’t how to “punish” Russia – we’re not Russia’s nanny. Ideally, what we’d want to do is walk back some of the decisions that got us where we are now. Unfortunately, I don’t see a viable way to do that. In theory, Ukraine could agree to allow Crimea to separate for a price (I think it would be sensible for them to do that), and for Russia to agree to allow a new referendum to be conducted under independent international auspices (I think it would be sensible for them to do that as well), which would pave the way for a legitimate separation from Ukraine. But neither of those things is likely to happen. Ukraine isn’t going to ask for or accept a bribe; the new government is nationalist in orientation and to do so would undermine the basis of their authority. Russia isn’t going to offer a bribe – they already have Crimea, so why would they pay for it? – and they aren’t going to accept the principle that outsiders picked by the West have any legitimate role in arbitrating the dispute. This is, ultimately, the great cost of the Clinton-Bush years with respect to Russia: the Russians are, very reasonably, convinced that any concessions made to the West will be pocketed, but that anything they get in exchange may be withdrawn at any time.
Given that there’s no obvious way to walk back the annexation, and that accepting the annexation would amount to opening Pandora’s box of wholesale revision of the post-Cold War settlement, I suspect that the real choices are outright war with Russia (which nobody wants) or a persistently high level of tension. But high levels of tension make conflict more likely. Douthat mentions two things that America should not do in response to the situation in Crimea, specifically because they would be provocative: deploy troops to Estonia or send arms to Kyiv. I don’t disagree – but how should we respond if Ida-Viru (which is over 70% Russian, and which contains over a third of Estonia’s Russian population, and also most of Estonia’s natural resources, such as they are) starts talking about seceding from Estonia, with Russian encouragement? How should we respond if outright civil war erupts in Ukraine and Russia moves in to “keep the peace”? Those are not rhetorical questions – we need to know what our answers would be. My point being, “containment” is not a condition of peace.
And deterrence is a fragile thing. Is it really credible that the United States would go to war with Russia over Estonia? Or with China over Taiwan? Ultimately, deterrence is not about making the other side certain of its defeat but uncertain of its victory – sufficiently uncertain to be unwilling to risk war. Which implies, as a corollary, convincing them that peace is safer than war. War over Taiwan remains relatively low-likelihood because China still reasonably believes that it will get Taiwan peacefully at some point in the future. The moment that belief comes under serious question, war becomes much more attractive – but nothing we do then will make it “worth” war with China to defend Taiwan, and the Chinese know that.
In Crimea, Russia decided that war was safer than peace – that if it did not use force, it would be very likely to lose. So it used force. Responding simply by raising the stakes of future conflict heightens the conditions that led Russia to that conclusion in the first place, making further conflict more likely. Responding weakly undermines deterrence directly, and would encourage Russia to see what further gains it can make by boldness. Restoring deterrence without provoking additional conflict will therefore not be easy, because we have to simultaneously raise the cost of further provocations and provide a credible basis for Russia to believe that enough of its interests could be secured without the use of force.
To put it bluntly: there is no good reason ever to expand NATO to include Ukraine. Realism means not only recognizing limits, but setting them. To say that now, though, in the current context, is to confirm to Russia that their approach to Crimea was effective, and should be repeated. Therefore, the objective of our diplomacy should be to create a context within which saying such a thing is possible again, because it is part of a more general resolution of outstanding issues. And in the meantime, we should expect a persistently higher level of tension in the region.
A number of people sent me the David Atkins piece that Rod Dreher linked to, but I think Dreher takes the discussion in a not-very-fruitful direction. Basically, he suggests that if the Left really cares about economics, they should let the Right have its way on cultural issues, and if the Right really cares about social issues, they should let the Left have its way on economics.
Which – sure, if those were the true preferences of real entities battling for supremacy. But there is no Left and no Right. Those are abstractions according to which we choose to divide individuals.
Here’s how I would describe things:
- Economic elites really care about preserving their privileges.
- Elected officials really care about reducing the risk of losing office.
- The culture war – for both the nominal Left and the nominal Right – is an extremely effective way of serving the interests of both economic elites and elected officials.
Why? Because the culture war turns politics into a question of identity, of tribalism, and hence narrows the effective choice in elections. We no longer vote for the person who better represents our interests, but for the person who talks our talk, sees the world the way we do, is one of us. That contest is a cheap and easy one for politicians of any stripe to enter – and, usually, an easy one to win. It sorts the overwhelming majority of the population into easy-to-count-on camps who will not demand that politicians do anything for them, because they’re too afraid the hated “other team” might get into power.
And it’s a good basis for politics from the perspective of economic elites. If the battle between Left and Right is fundamentally over social questions like abortion and gay marriage, then it is not fundamentally over questions like who is making a killing off of government policies and who is getting screwed. Economic elites may lean to one or the other side on any cultural question (they can be found on both sides), but they can maintain their privileges no matter which side wins any particular battle. So whoever they want to win, that’s the ground on which they want the battle to be fought.
Atkins focuses on the Left-wing version of identity politics – the way in which putting so much energy into fighting for adequate representation for every tribal group has drained energy away from the fight to shift the terms of the social contract overall. It’s much easier to get corporations to agree to adopt affirmative action policies than to get them to agree to recognize a union. So if activist energy goes mostly into fighting for the former, by definition it won’t focus on the latter.
But the same thing is true of Right-wing identity politics. If you can get out the votes by decrying the unfairness of affirmative action, then you won’t need to call for tougher anti-trust enforcement, or for patent and copyright reform, or for breaking up the mega-banks, or for reducing corporate welfare, or for a trade policy organized around moving American manufacturing up the value chain, or any other policy – and I deliberately picked policies that at various points in history have been or could plausibly be part of the Republican “mix” – that might change the terms on which our economy functions in a broad sense, rather than just jockeying for position against other groups within the existing arrangements.
That doesn’t mean social issues don’t matter. It means that they should not be the organizing basis of large political coalitions.
Successful single-issue lobbies work both sides of the aisle. The NRA wants Democrats to be pro-gun as well as Republicans – and lo, while the Democratic Party is less pro-gun than the Republican, it’s also less anti-gun than it used to be, and there are plenty of pro-gun Democrats in the West. AIPAC wants both parties to support the interests of the State of Israel – and lo, there’s only a slight difference between the two parties in terms of their willingness to support the pro-Israel agenda. If I were an activist motivated primarily by a desire to restrict abortion, my top political question would be: where and how can I get Democrats to listen to me? Who’s the most anti-abortion (or least pro-abortion) candidate in every Democratic primary? That’s who we want to throw our support to in that primary – to show that our votes can be won. If anybody wants to win them.
The evidence is overwhelming that winning this or that election doesn’t determine the shape of the culture – and in a healthy political culture the parties are going to take turns holding power fairly regularly anyway. A strategy to change the culture by always voting Republican or always voting Democrat is guaranteed not only not to change the culture, but to throw away the chance of your vote affecting anything else. Which is one reason why I am not primarily motivated by social issues, as compared with issues of war and peace, the general welfare, and good governance.
I really believe the following:
- If you believe that the country needs broader access to government-supported (or -provided) health care, more welfare spending generally, stronger unions, stricter environmental regulation, and so forth, and think these things are worth paying higher taxes for, then you should vote for the Democrats, even if you think affirmative action is folly and abortion is wrong and the Second Amendment is sacred. And you should fight – hard – within the party and in the media to make more space for your views on social issues within the Democratic Party and the country as a whole.
- And if you believe that the country needs lower taxes, more streamlined and flexible regulations, more flexible labor markets, and so forth, and think these things are worth living with greater inequality for, then you should vote for the Republicans even if you believe in the importance of workforces that “look like America” and that abortion is a civil right and that guns should be more tightly controlled. And you should fight – hard – within the party and in the media to make more space for your views on social issues within the Republican Party and the country as a whole.
- With the caveat that you should sometimes vote against the party whose views you share on matters related to economics and the general welfare if that party (or candidate) is corrupt, or incompetent, or has dangerous views on foreign policy, or is simply exhausted and incapable of meeting the challenges of the moment – if, for whatever reason, you think they will do a distinctly worse job than the other party that is more closely aligned with what you see as the national interest (and/or your own interest).
Andrew Sullivan probably did more for the movement for gay marriage than any other single individual. And he has never been a Democrat, and has prominently endorsed both Democrats and Republicans at different points in time, without changing his views on the issue which, undoubtedly, is closer to his heart than any other.
My advice to people like Rod Dreher who are on the other side is neither to withdraw from politics nor to keep their shoulder to the wheel for their partisan “side,” but to follow Andrew Sullivan’s example.
That’s how the children of Israel responded when presented with the words of the Lord at Sinai – not, “we hear, and we will do” but “we will do, and we will hear.” (Exodus 24:7)
The two glosses I’ve heard on that particular verse are: first, that it’s the ultimate testament to the faith of the Israelites at that moment, that they agree to perform the divine command even though they haven’t really “heard” it yet (that is to say, they haven’t absorbed its meaning). Second, that it’s a statement about the nature of hearing the divine command – that we can’t really hear it until we’ve performed it.
I was thinking about this apropos of David Sessions’s mild but firm objections to the way an essay of his about the non-rational bases of what he calls his “de-conversion” has been understood by some religious readers, including our own Rod Dreher, who juxtaposed his essay with the story of Rosaria Champagne Butterfield, an ex-lesbian convert to evangelical Christianity. The discussion ties back to a post Ross Douthat put up last week about whether (and how) secularism by its nature changes our experience of reality, which I’ve been meaning to touch on but haven’t gotten around to.
Here’s the heart of Sessions’s objection to Dreher’s juxtaposition, and, more generally, to those who read the narrative of his de-conversion as equivalent, in some sense, to conversion narratives:
[Butterfield's experience] is something much different than what I meant to say while channeling Charles Taylor. There is a superficial similarity in the sense that Butterfield and I both had experiences that changed us before we had a full explanation or argument for what happened. What Butterfield describes in this passage is essentially her embrace of obscurantism, a “truth” that either defies or ignores well-established scholarship—and even her own previous experience—on human sexual orientation. But the fact that experience drives intellectual transformation is not a license to abandon intellectual rigor. For example, how does she know God has a point of view about homosexuality, or that it’s negative? Why does she think Christianity requires her to obey it before she understands? What if Christians disagree about what that view is, or think that view is something that’s obviously misinformed? Does it make sense that a Christian God would want a convert to break up a happy family? For a former scholar, Butterfield shows remarkably little philosophical skepticism; she also seems to cast aside her training in how to review and evaluate the available evidence to determine if these views she’s been introduced to are reasonable or even widely considered to be Christian.
In fact, it’s her theological incuriosity that’s perhaps most surprising. As Patrol’s Kenneth Sheppard wrote, analyzing the problems with Butterfield’s conversion narrative: “the question of how to read the Bible, how to determine what it teaches on subjects such as sin (or if it is in fact univocal on such questions), and how to embody that teaching, never seems to arise; this is a rather glaring omission for someone who used to be a literature professor.”
If I understand his objection, what he’s saying is that while his own de-conversion was motivated by experience, social context, and emotion, and not merely by intellectual argument, he feels like Butterfield’s conversion is explicitly a rejection of the process of intellection. And, for that reason, he finds it problematic and troubling, quite apart from not being parallel to his own experience.
I see his point, but I’m not sure he’s really grasping the nettle. It’s comforting to think that the liberal, secular mind is simply more open than the religious, but in my experience you can find plenty of closed-minded people in both camps, and the more open-minded have different points of stress where they turn away from the possibility of uncomfortable truths. There are very, very few individuals who approximate a truly Socratic level of openness to doubt about their own knowledge.
The nettle, I think, is that the qualities of their respective experiences are incommensurate. What I hear when I read the descriptions of Butterfield’s experience is, most primally, the experience of being commanded. The feeling that an authority has instructions for her, and that she must obey them. Sessions’s de-conversion contained no trace of that feeling.
Is that feeling a good thing or a bad thing? Something to be embraced or something to be analyzed and demystified? That question is central to adherents of (and objectors to) the Abrahamic religious traditions. But you won’t get anywhere in trying to understand it if you start from the proposition that God’s commands ought to be reasonable.
Would God want to break up a happy family? Well, God tests his first prophet, Abraham, by ordering him to sacrifice his only son, and on the plain reading of the text Abraham passed the test by showing his willingness to obey right up to the last possible moment. There are other readings of the text, but it seems to me, as it did to Kierkegaard, that this is a story about what obedience to God really means. It means obedience when God commands you to do something that flatly contradicts everything else you believe: your rational self-interest, your deepest feelings, your innate moral sense, even the apparent meaning of God’s own prior promises (Abraham was promised a glorious posterity through Isaac, after all). What’s leaving your beloved partner compared to that? And if you want a Christian text, how about Luke 14:26? Apparently, you can’t really love Jesus unless you prefer him to your father, mother, spouse, children, brothers, sisters, and are willing to abandon them all to follow him. That’s a pretty explicit proof-text response to Sessions’s question, isn’t it? Pragmatically, the meaning of saying that this or that religious practice is God’s command is to say that we do not question it by asking whether it is reasonable.
Now, you can accept that position intellectually, or tacitly, because you were brought up to do so, without feeling the experience of divine command. And that, I assume, is where Sessions started out his life. And then he had other experiences that led him to question whether he still wanted to accept that position – and, ultimately, led him to reject it. But those experiences that led to his de-conversion were not qualitatively similar to Butterfield’s; they were not experiences of being commanded.
Again, I’m not saying that this makes Butterfield’s experience more authentic or powerful than Sessions’s. I’m not even saying that I know how you are supposed to respond to that kind of experience. I’m just asserting the primacy of experience itself as an explanation of Butterfield’s behavior, and saying that moralizing about her response is harder than you might think.
An analogy: the experience of falling in love. Can we trust it? How should we understand it? How should we respond to it? These are not easy questions to answer. Should you marry the person for whom you experience that feeling? What if the feeling doesn’t last? What if you’re already married – should you leave your spouse for this new love? What if you never experienced that feeling with your spouse – now should you consider leaving them for this other person? Should you shun this person you’ve fallen in love with, lest the experience cause you to do something irrational or morally wrong? Or should you cultivate that feeling of blind devotion while, simultaneously, abjuring any socially or morally forbidden expression of affection? (The medievals developed an entire quasi-religious system around the latter, and since Dreher is so deep in Dante these days I’d really like him to investigate the relationship of Dante’s idolatry of Beatrice to the courtly love tradition.) These aren’t easy questions to answer – unless you answer that the experience of falling in love is a bad one, to be shunned, categorically, which, it seems to me, devolves into answering that experience as such should have no bearing on our actions. Which, to my mind, is an untenable approach to life.
All of which brings me back to Douthat, who asks a very good question about the whole business of religious experience:
[M]y question . . . is whether the buffered self/porous self distinction is supposed to describe a difference in the lived, felt substance of religious experience itself, or whether it’s ultimately an ideological superstructure that imposes an interpretation after the fact. Taylor’s argument seems to be that the substance of experience itself changes in modernity: He leans hard on the idea that (as he puts it) “the whole situation of the self in experience is subtly but importantly different” for people who fully inhabit the secular age. Which would seem to imply that when Verhoeven was in that church, his actual experience of what felt like the dove descending was “subtly but importantly different” from the experiences that the not-as-secularized believers around him might have been having — more attenuated, more unreal, and thus easier to respond to in the way he ultimately did. And it would imply, as well, that if Takeshi Ono’s worldview had been more secular to begin with, he wouldn’t just have reacted to his visions differently (by, say, visiting a therapist rather than a Buddhist priest); he would have had a different experience, period, in which he somehow felt more buffered and less buffeted throughout.
This isn’t just an academic distinction; it has significant implications for the actual potency of secularism. To the extent that the buffered self is a reading imposed on numinous experience after the fact, secularism looks weaker (relatively speaking), because no matter how much the intellectual assumptions of the day tilt in its favor, it’s still just one possible interpretation among many: On a societal level, its strength depends on the same mix of prejudice, knowledge, fashion and reason as any other world-picture, and for the individual there’s always the possibility that a mystical experience could come along (as Verhoeven, for instance, seemed to fear it might) that simply overwhelms the ramparts thrown up to keep alternative interpretations at bay.
But if the advance of the secular world-picture actually changes the nature of numinous experience itself, by making it impossible to fully experience what Taylor calls “enchantment” in the way that people in pre-secular contexts did and do, then the buffered self is a much more literal reality, and secularism is self-reinforcing in a much more profound way. It doesn’t just close intellectual doors, it closes perceptual doors as well.
I think this is a very good way of describing a key question, and my answer is to reject the dichotomy as presented. That is to say: I don’t believe that secularism is a mere “ideological superstructure,” that we have fundamentally similar experiences of the numinous or uncanny as did our more “enchanted” ancestors, but merely have learned how to explain them away after the fact. Necessarily, our worldview interpenetrates our experience; we can only experience things that we can perceive, and we perceive through categories that we have already formed. But I also don’t believe that secularism merely “buffers” us from those experiences. Indeed, I’m not sure it buffers us at all.
All sorts of people have uncanny experiences, and in my experience they make all sorts of different kinds of sense of them. There is no rule that says that because you are not a deeply religious person you have to dismiss those experiences as signs of incipient madness. To pick an extreme example, there are people who are convinced they have been abducted by aliens from outer space – people who, generally, do not manifest other signs of psychosis. But there are plenty of other people who experience hauntings, prophetic dreams, out-of-body experiences, and so forth. In my experience, there’s no particular pattern suggesting that these experiences are less common among the non-religious, and no particular pattern suggesting that non-religious people are more inclined to discredit these experiences as “obviously” intra-psychic as opposed to being in some way mysterious.
By the same token, most of the people I know who’ve had these experiences don’t take them particularly deeply to heart, though some of them do, sometimes. But who’s to say that more religious people find them profoundly transforming? Can you even experience profound transformation on a regular basis?
I’ve only had one experience that comes close to the kinds of things Ross is talking about, about twenty years ago. I had the most profound, visceral feeling that I was trapped in a container or box and was suffocating – the experience of feeling buried alive. The experience was triggered by something trivial – I think I was glancingly watching a television show about ghosts or something like that – but the experience itself was absolutely overwhelming. And it affected my life deeply; I felt I had to change my life, and quickly. My now-wife, then girlfriend, was a profound comfort to me through the experience, and that kindness profoundly shaped my feelings for and about her. Many of the decisions I made after that, from my career choices to my marriage to my turn toward greater religiosity, can be traced back to that experience.
Now, if you ask me how I’d describe that experience, I’d say it was a severe panic attack. But that is just a label; it’s not a phenomenology. I can imagine, if I were living in a more “enchanted” world, that I might have understood the experience somewhat differently at the very time it was happening, and not only afterwards – that the precise manifestation of the experience might have differed in various ways. But that doesn’t mean I was “closed off” to that kind of experience on account of modernity. I certainly didn’t feel “buffered” in any way.
And the experience affected me independently of my “understanding” of it. Just because I could say, “that was just a panic attack” – that didn’t make the experience any less powerful, or blunt the urgency of responding to it. Explanations don’t necessarily drain experience of power. (I believe William James had something to say about that.)
And that, too, is not an artifact of modernity. Imagine if you were a girl living in fifteenth-century France, and the archangel Michael told you to lead an army to expel the English. That experience felt absolutely real to you. Now, suppose your betters – priests, magistrates, and so forth – told you that it wasn’t the archangel Michael, it was a demon tempting you to sin, and you must recant your testimony and accept that understanding of your experience – that is to say, not let it affect you. Would you recant? Could you? Isn’t that pretty analogous to me telling myself not to worry about the feeling of being buried alive, that it was “just” a panic attack, and not something to take to heart? Or to Butterfield’s partner telling her that she’s being brainwashed by reading the Bible, or Sessions questioning why she’d given up her critical faculties all of a sudden?
Primal experience is possible within all ideological frameworks, secular and religious alike. It can be rejected or “explained away” within all ideological frameworks, secular and religious alike. And it is potentially disruptive of all ideological frameworks, secular and religious alike.
Walter Russell Mead connects the Russian incursion in the Crimea to the Libyan war to draw a lesson about the general utility of a nuclear deterrent:
When Ukraine escaped from the Soviet Union in 1990, Soviet nukes from the Cold War were still stationed on Ukrainian territory. After a lot of negotiation, Ukraine agreed to return those nuclear weapons to Russia in exchange for what (perhaps naively) its leaders at the time thought would be solid security guarantees from the United States and the United Kingdom. The “Budapest Memorandum” as this agreement is called, does not in fact require the United States to do very much. We can leave Ukraine twisting in the wind without breaking our limited formal obligations under the pact.
If President Obama does this, however, and Ukraine ends up losing chunks of territory to Russia, it is pretty much the end of a rational case for non-proliferation in many countries around the world. If Ukraine still had its nukes, it would probably still have Crimea. It gave up its nukes, got worthless paper guarantees, and also got an invasion from a more powerful and nuclear neighbor.
The choice here could not be more stark. Keep your nukes and keep your land. Give up your nukes and get raped. This will be the second time that Obama administration policy has taught the rest of the world that nuclear weapons are important things to have. The Great Loon of Libya gave up his nuclear program and the west, as other leaders see it, came in and wasted him.
It is almost unimaginable after these two powerful demonstrations of the importance of nuclear weapons that a country like Iran will give up its nuclear ambitions. Its heavily armed, Shiite-persecuting neighbor Pakistan has a hefty nuclear arsenal and Pakistan’s links with Iran’s nemesis and arch-rival Saudi Arabia grow closer with every passing day. What piece of paper could Obama possibly sign—especially given that his successor is almost certainly going to be more hawkish—that would replace the security that Iran can derive from nuclear weapons? North Korea would be foolish not to make the same calculation, and a number of other countries will study Ukraine’s fate and draw the obvious conclusions.
This analysis is, on the surface, extremely persuasive. Which is exactly why I think it deserves a closer, more critical look.
First, let’s look at the proposition with respect to Ukraine specifically. Was it even plausible that Ukraine could have held on to an independent nuclear deterrent after the collapse of the Soviet Union? The answer is almost certainly, “no.” Indeed, it’s hard to imagine any action that would more greatly have imperiled the stabilization of the post-Soviet order than such a determination on Ukraine’s part. Western and Russian interests were aligned in wanting to see Ukraine denuclearized; an independent nuclear Ukraine would have been treated as a dangerous rogue state. Russia’s ability to project power in the immediate aftermath of the collapse of the Soviet Union was extremely limited, but Ukraine’s ability to defend itself was even more ephemeral. The best evidence that Ukraine had no real choice but to denuclearize is precisely that Ukraine got almost nothing in exchange for agreeing to hand over its Soviet nuclear weapons.
Assuming, for the sake of argument, that a nuclear Ukraine was a real possibility, how would it have responded differently to the events of the past few weeks? Nuclear weapons would not have changed the election results that brought a pro-Russian premier to power – though they would have dramatically increased Russian interest in ensuring a pro-Russian Ukraine. By the same token, nuclear weapons would not have deterred ethnic Ukrainians from taking to the streets. If events had continued to play out as they did, and Russia had sent troops to Crimea, nuclear saber-rattling against Russia would have been completely specious; nobody would believe such a transparently suicidal threat. How would nuclear weapons avail Ukraine in the current crisis? What seems most likely to me is that, if Ukraine had an independent nuclear deterrent, Putin would have intervened much earlier to make sure that Yanukovich remained in power. He certainly wouldn’t have risked a Ukrainian nuclear deterrent falling into the hands of an anti-Russian party.
That point can be generalized. There is considerable evidence that a nuclear deterrent does not suffice to prevent either conventional conflict with other states, or violations of a country’s sovereignty, or regime change. Israel’s nuclear deterrent dates to the 1960s, but did not prevent the surprise Syrian-Egyptian attack in 1973. South Africa’s nuclear deterrent dates to the 1970s, but did not prevent the dissolution of the apartheid regime (which voluntarily denuclearized to prevent its arsenal falling into the hands of the ANC). Pakistan’s nuclear deterrent dates to the 1990s, but did not prevent America from toppling its Afghan ally, or conducting drone warfare and engaging in covert operations within Pakistani territory, including the assassination of Osama Bin Laden. Most obviously, the enormous Soviet nuclear arsenal was of no utility in preventing the sudden and spectacular collapse of the Soviet regime. (Nor, for that matter, was the Russian nuclear deterrent useful in deterring Western intervention to dismember Serbia, a traditional Russian ally.)
Dictators may well learn the lesson from Libya that denuclearization will not bring Western protection – which is true. It does not therefore follow that a nuclear Libya could have done anything different to defeat its insurgency. It is likely that Western powers would have been much more reluctant to initiate a bombing campaign – a suicidal threat would have more credibility coming from a man literally fighting for his life – but, on the other hand, Western powers would have a much, much greater incentive to be involved in a Libyan civil war if there was a question of the ultimate disposition of nuclear weapons. Consider: would America take a hands-off attitude to civil war in Pakistan? It seems to me we would likely be more involved in such a civil war than we are in Syria’s, precisely because the disposition of a nuclear arsenal would be at issue.
The primary utility of nuclear weapons is to deter other nuclear powers from escalating to nuclear warfare. Secondarily, nuclear weapons are useful as a deterrent to conventional war if they can be plausibly deployed in a tactical fashion against a foreign invasion. So: U.S. plans for fighting World War III involved the first use of tactical nuclear weapons against Soviet armor, either in Germany or in Poland. Would the Soviet Union have escalated from that event to a suicidal strategic nuclear exchange? American war planners obviously thought not. Similarly, Pakistan could use tactical nuclear weapons on its own territory against an invading Indian army. That prospect may have deterred India from launching a massive invasion of Pakistan in response to any number of provocations. Nuclear weapons are not useless, in other words, but their utility is distinctly limited.
What are the implications for Iran or North Korea? I doubt either Iran or North Korea were particularly inclined to trust pieces of paper in the first place. The rational case for Iran to go nuclear can only be countered by a rational case to not go nuclear – concrete interests that could be secured by an agreement, concrete risks to refusing to come to an agreement. Perhaps the prospect of full normalization – which would have very substantial economic benefits for Iran – would be enough carrot, while the prospect of continued isolation, being the object of covert warfare, and the risk of Saudi Arabia going nuclear in response to an Iranian bomb are sufficient stick. Perhaps not. North Korea is much tougher because there is no plausible path forward for the regime within the world community of nations, so there’s not much that can be offered as a carrot.
But the more important point lies elsewhere. Mead says that if Obama “allows” Ukraine to be dismembered, then other countries will draw the lesson that if you want to prevent the same from happening to you, you’d better get a bomb. But even if an aggressive American response were effective in securing Crimea for an independent, pro-Western Ukraine (which I don’t believe it would be), the lesson a country like Iran would draw is certainly that America successfully intervened to secure regime change in Ukraine, and is therefore undoubtedly still determined to do the same in Iran. That is to say, it’s more likely the Iranian regime would identify with Russia than with Ukraine in this situation.
The conclusions are not mutually exclusive, of course. Iranian hard-liners could interpret any attempt to find a negotiated solution to the crisis in Crimea as proof that the West responds well to force, while also finding any attempt to force Russia to withdraw from Crimea as proof that negotiations with the West are pointless (after all, Yanukovych negotiated a deal with the opposition under Western auspices, and the opposition simply broke the deal). But if Mead is trying to make a case that a more forceful Administration response to the situation in Crimea would be reassuring to Iranians inclined to negotiate in good faith, I think he’s kidding himself.