I want to give one cheer for Damon Linker’s essay about pluralism and how it gets in the way of young moderns approaching traditional religion. I think he’s onto something, but I would frame the problem differently than he does.
Here’s the heart of Linker’s argument:
But perhaps the most daunting obstacle to getting the nones to treat traditional religion as a viable option is the sense that it simplifies the manifest complexity of the world. Yes, we long for a coherent account of the whole of things. But we don’t want that account to be a fairy tale. We want it to reflect and make sense of the world as it is, not as we childishly wish it to be.
The tendency toward oversimplification is a perennial temptation for all forms of human thinking, but it’s especially acute in matters of religion. My former boss Fr. Richard John Neuhaus exemplified it quite vividly when he grandly pronounced on numerous occasions that economics is a function of politics, politics is a function of culture, and culture is a function of religion. It sounds nice and tidy, but it’s too nice and tidy.
Each of Neuhaus’ nested spheres — economics, politics, culture, religion — has its own dignity and logic. Each can and should be understood on its own terms — and the tendency of each to subsume the others under its own categories and assumptions can and should be resisted.
There is a whole, and it can be grasped. But it is a complex whole. A pluralistic whole. A differentiated whole shot through with contradiction and paradox. This is something that modern men and women intuitively understand, even if they’ve never read a word of the great philosophical pluralists (Daniel Bell, Isaiah Berlin, and Michael Oakeshott), and even if they choose to devote their lives to fighting it in a futile and self-defeating embrace of fundamentalism.
Until religion comes to grips with and responds creatively to the fact of pluralism, it will find itself embroiled in a battle against reality.
And that is a battle it is bound to lose.
Linker conflates two questions here, neither of which is, in my opinion, the heart of the matter.
The first question is whether a given traditional religion’s account of reality is a “fairy tale” or whether it grapples with “the world as it is.” But it’s trivially easy to indict secular or even anti-religious types for holding a “simplified” view of reality, and equally easy to find great philosophical diversity within any of the world’s great religious traditions. Most of the things most people believe – whether they are religious or not – are not particularly firmly “grounded” and do not give a particularly coherent and comprehensive account of reality. But we make do. Meanwhile, the Book of Job grapples with theodicy in a far more sophisticated manner than moralistic therapeutic deism does. If fairy tales were the problem, why do so many young people – including “nones” – gravitate towards what Linker himself has described as an “insipid” theology?
The second question relates to the problem of consilience, whether all our different ways of apprehending reality ultimately “fit together.” Linker expresses a kind of eschatological faith in such consilience – ultimately, yes, everything fits together – but right here and now we can’t see quite how, and so, in the here and now, we have to respect the distinctive languages and assumptions of different disciplines and not wave them away by trying to reduce them to a single, “fundamental” mode of understanding. I have no objection to this stance, which I share, but again, I’m puzzled why it’s a problem for traditional religion. Yes, there have been philosophers within various religious traditions who have striven mightily to impose a kind of consilience on all of knowledge – within Christianity, Thomas Aquinas comes to mind – but if we conclude they failed, well, so they failed. They are only human, and no religious tradition should be reduced to those individual philosophical labors.
Linker’s experience of a certain brand of Christianity was deeply marked by the need to answer all fundamental questions – I rather suspect he shared that need at the time that he first started working for First Things – but I don’t know that most traditionally religious people, including traditional Catholics, are similarly defined by that need, which is peculiar to a certain variety of intellectual who can be found in religious and non-religious quarters. Rather more common for those who adhere to traditional religion, I think, is a desire to know how to live, whether in the form of an actual manual for living or a community that manifests distinctive and specific norms. That desire, I think, is closer to the heart of what attracts most people who come to traditional religion, as well as being closer to the heart of what keeps within the fold those who do not leave.
So why do I want to give Linker one cheer? Because I do think that pluralism poses a fundamental challenge to traditional religion. But it’s not the pluralism of modes of knowledge that poses a challenge, but the pluralism of identity. It’s not that traditional religion can’t “handle” natural selection, or psychopharmacology, or biblical source-criticism; it’s certainly not that it can’t explain the evil of the Holocaust. It’s that traditional religion – Abrahamic religion, anyway – demands that you identify yourself definitively as an adherent. It demands an unequivocal commitment. And contemporary young people, according to all the evidence, are very wary of making commitments of that kind in any walk of life: in love, in work, or in terms of religious identity.
Why that resistance to commitment exists is a topic for another time – but that, I think, is the fundamental challenge of pluralism to traditional religion among young people today.
Meanwhile, it’s worth remembering that not all religious traditions work the way the Abrahamic religions do. Hinduism is something you are born into, not something you adhere to. And if you aren’t born into it, you can perfectly well dabble in Hindu traditions or practices in a way that you can’t dabble in Christianity or Islam, and, over time, assimilate into Hindu society – or not. Buddhism, historically, has spread spatially by conversion, and not just genealogically by descent, but there’s nothing comparable to baptism, circumcision or the shahadah to mark one’s “conversion” to Buddhism, and little sense of exclusivity vis-a-vis other traditions such as is taken for granted in Abrahamic religions. The various indigenous Chinese “religions” – if they are even properly characterized as such – are even less exclusive, as are the various practices that fall under the rubric of “animism.”
Plenty of non-Abrahamic religious traditions, including Western religious ideas that pre-date the encounter with Jerusalem, have sophisticated things to say about the nature of the divine, as well as about how to live, but do not demand the kind of allegiance that the God of Abraham does. It may be that these modes of religion will hold greater appeal in the future because they “work better” with the kind of pluralism of identity that I described above. Or maybe American Christianity will “evolve” in that direction; Christianity has taken divergent forms in the past, from the Cathars to the Shakers to the Mormons, so who’s to say what future Christianities will emerge. Or maybe Christianity will emphasize its exclusivity of allegiance more than ever, pushing against the tenor of the times to make itself more distinct, and winning its share of adherents precisely for that reason.
Or maybe all of these things will happen. By definition, pluralism implies that there is no one thing that any religion “must” do lest it be consigned to irrelevancy. To any question or problem, including the one identified by Linker, there exist a plurality of answers.
Although I have fallen away from observance in many areas, Passover remains a special case where I remain a bit medakdek – not by comparison with somebody strictly observant, but in comparison with my year-round standards of observance. In particular, we always do a fairly complete rendition of the Passover seder, reading (and singing) the complete text of the haggadah.
As with many traditional liturgical texts, the combination of strangeness and familiarity in the haggadah both attracts and repels readers, particularly young readers. If they attend to it, children inevitably ask questions – the text itself is organized around the concept of questions, though the questions it assumes will be asked (“why on this night do we dip twice?”) are not necessarily the ones a modern child would think to ask, and the answers tend to prompt additional questions rather than satisfying them. But attending to the text is a tall order – for great stretches, the text itself seems to be dwelling on irrelevancies (“One might think that the obligation to relate the story of the Exodus begins with the first day of the month of Nissan”) or being mind-numbingly pedantic (“‘Our oppression’ – this refers to the persecution, as it is said, ‘I have also seen how the Egyptians are oppressing them’”).
Updating the text to make it more modern or relevant does not tend to be a very effective strategy – it tends to make it less interesting rather than more so. A better strategy is to make a game of the whole process – a favorite in our family is “seder bingo,” which requires that participants listen carefully for some of the more obscure words in the haggadah so that they can score them first on their cards.
The other funny thing about the haggadah is that, while supposedly we are “telling the story,” the story doesn’t actually get told anywhere in the text. Rather, the text presumes familiarity with both the story of the Exodus and the words of the biblical telling, and then riffs on it – and on the obligation to tell – in various ways. In our family, we’ve remedied that omission over the past few years by acting out the story in a kind of “pre-show” before sitting down to the table. My son typically played Moses, until last year when he decided he wanted to play Pharaoh.
This year, though, he announced that he’s getting too old for the “kids’ play” and wanted to do something different. What we settled on was: a mock trial. Of God. We would put God on trial – for the crime of inflicting the plagues on the Egyptians.
The idea turned out to be inspired, as everybody, adults and children alike, got into the spirit of the thing. My wife did a fabulously over-the-top turn as Pharaoh’s daughter, called as a witness for the prosecution (“such a nice boy, until he got into GOD – after that, oy, such troubles!”); a friend of my son’s was a splendidly self-regarding Pharaoh, and both the prosecution and defense did a fine job making their respective cases.
Trials of God have been undertaken before, in life and in literature, but the typical plaintiff is someone who asserts his or her own innocence – the great problem of theodicy is typically described as the problem of the suffering of the innocent, particularly children who cannot be plausibly understood to be guilty. In our trial, the prosecution pursued this tack to a certain extent, but not as much as I expected. The focus was much more on the violent and aggressive nature of the crimes than on the innocence of the victims.
The defense’s case proceeded even further from the form that I expected. I expected them either to claim justifiable homicide (the plagues were just punishment for the crimes of the Egyptians, and constituted a kind of self-defense on the part of God’s people) or lack of jurisdiction (who has the standing to prosecute God?). Instead, they claimed there was no evidence that God had anything to do with the crimes attributed to Him. Who, after all, had seen God perform any of the miracles? Moses, yes; Aaron, yes; but God? Pure hearsay.
The more I thought about both the prosecution’s and the defense’s tacks, the more I liked them. I have never much liked the Deuteronomic logic of reward and punishment, which is so obviously at variance with the way reality actually works, nor seen the appeal of approaching prayer and sacrifice as a way of propitiating the deity. I’ve also never been much moved by assertions of God’s injustice – because they seem animated by that same Deuteronomic conviction that I never shared in the first place. The question to my mind has always been: how do you relate to reality? The prosecution’s case – that terrible things happen in the universe – is unarguable. The question is whether one still sees the hand of God in the universe, or whether one does not.
Oh, and the result of the trial? It ended in a hung jury.
Mickey Rooney, who passed away earlier this week, had such an astonishingly long and varied career, from vaudeville to Broadway and from silent film to digital video, that it’s hard to sum up other than with banalities related to its very longevity. Rather than try to do any kind of justice to his entire career, I want to focus on a single, terrifyingly powerful performance that has stuck with me for years and, I suspect, will stick with me as long as I live, and that, to me, exemplifies something about his chosen profession.
That’s his role as Fugly Floom in “Babe: Pig in the City.” If you haven’t seen the film, stop what you’re doing right now and download it – but know what you’re getting into. This isn’t really a kids’ film; it’s like a cross between “Charlotte’s Web” and “Taxi Driver.” There are so many astonishing moments in the film – moments of terror, like Flealick the terrier’s near-death experience; of wincingly painful need, like the Pink Poodle’s shameless display for the dog catcher; and of deep pathos, like Thelonius the orangutan’s gratitude for the simple dignity of clothing. But a key anchor for the whole experience is Rooney’s performance as Floom, an aging, creepy clown.
The key to that performance is its sincerity. There’s not a moment in which Rooney mugs, or trades off his persona, not a moment in which he solicits our sympathy. He is completely inside this terrifyingly remote and strange character, a man who has become the clown, who is no longer performing because he is never not performing. The closest comparison I can offer for the kind of pain I felt watching it is the experience of watching Emil Jannings in “The Blue Angel” – but there we have the consolation of narrative, of understanding how he became the pathetic character he is at the end, and we are able, to some extent, to distance ourselves from him. I couldn’t do that with Rooney; he is, as Lear said of Poor Tom, “the thing itself” – not unaccommodated man, but social man, man repetitively performing the clownish role of being human, which is what, in this film, life in the city reduces us to.
Not every actor, approaching Lear’s four score years, let alone continuing more than a decade past them, would have such commitment to his art that he would undertake a role like Floom, put himself on the line like that, emotionally and artistically. Only a real artist would do that. Which is what Rooney was.
I’ve been following with interest the condemnatory thread engendered by my old friend Reihan Salam’s Slate piece, “Why I Am Still a Neocon.” I’m sorry to have to tell Salam that I think many of the criticisms are justified. If you want to see a good roundup of such criticisms, go visit Daniel Larison’s blog; he has been fairly exhaustive.
I’m joining the thread to try to introduce some analytical clarity, and to see whether any case can be made for neoconservatism as such – rather than make a case for internationalism more broadly and then simply impute that case to the much more specific views of neoconservatism.
First of all, “neoconservative” and “Bush administration” shouldn’t be treated as synonyms; by the same token, “moralized foreign policy” isn’t a synonym for neoconservatism either. The Bush administration, for all its ideological zeal, had to deal with the actual world, and inevitably strayed from whatever one might identify as the “one true path” of neoconservatism. Moreover, many key figures in the administration – Donald Rumsfeld, for instance – have never really been thought of as proper neoconservatives (though Rumsfeld has been lionized by plenty of neoconservative figures).
As well, plenty of foreign policy types have undertaken actions for moral reasons without thereby becoming neoconservatives. Old-fashioned liberal internationalism is one morally-inflected foreign policy stream that should not be identified as neoconservative; so is more contemporary humanitarian interventionism. Realists can also make room for morally-motivated actions, like the rescue of the Ethiopian Jews, which was substantially made possible by George H. W. Bush (not that he got much credit for it).
Moreover, virtually everyone involved in any stream of foreign policy thinking embraces the concept of collective security to some degree. There are virtually no true isolationists out there – most definitely including Rand Paul. So nobody should say, in effect, I’m a neoconservative because I believe that NATO retards the development of intra-European rivalries, or the American alliance with Japan reassures some other Asian countries that we will restrain any revival of Japanese nationalism. Plenty of realists would say the same.
Salam’s initial column suffers from a surfeit of confusion about all of the above, but particularly on the question of morality in foreign policy. Salam’s example of immoral American behavior relates to Nixon’s support for Pakistan during the 1971 crackdown in East Pakistan. He decries that support, and wishes America had stood up for democracy and human rights. But Pakistan was at the time an American ally, and India, whose intervention ultimately led to independence for Bangladesh, was considered a vague hanger-on of the Soviet bloc. Realistically, he’s not complaining that America didn’t intervene against Pakistan; he’s complaining that America didn’t reduce its level of support for Pakistan in the wake of the crackdown – or use its leverage to induce Pakistan to act with more restraint. Neither action sounds remotely like neoconservatism either in theory or in practice. What they sound most like is the Carter policy in the late stages of the Shah’s reign in Iran – a policy that absolutely can be defended on the merits, but for which I strongly doubt you can find a single neoconservative defender.
His follow-up column at National Review makes clearer the true heart of his argument, to wit: that American hegemony is good for the world and, hence, for America, and needs to be maintained even at a high cost. To maintain that hegemony, we need to retain a massive military advantage over any plausible combination of adversaries, define our interests globally, and reassure our allies that what we primarily need from them is support functions rather than substantial independent military capabilities.
That is a perspective very worth debating – I’ll hopefully debate it later today – but it should not be identified with neoconservatism but rather with what I would call the “Washington consensus” that has obtained for roughly 25 years, and that is only recently coming under any kind of serious scrutiny. The neoconservative persuasion antedates the “unipolar moment” of the 1990s, and the reason that lots of people who do not call themselves neocons refuse to associate themselves with the label is not merely a matter of avoiding unpleasant associations but because they do not agree with certain views that are quite central to neoconservatism as it actually exists.
If we’re to be more precise, then, neoconservatism should be characterized by three attributes in particular.
First, neoconservatism’s main analytical insight is that the internal character of a regime can have a material effect on its foreign policy. Specifically, the mid-century totalitarian regimes in Germany, Italy, Japan and the Soviet Union derived their legitimacy in part from their status as revisionist, expansionist powers, and hence could not adopt a policy of peaceful coexistence without succumbing to internal contradictions. A foreign policy aimed not merely at deterrence but at changing those regimes’ character was the only solution to the threat they posed to international order. Neoconservatives don’t want to spread democracy simply because they are nice. They want to spread democracy because they believe that democracies will be naturally more aligned with each other and because democracies will be naturally less inclined to undertake expansionist wars that threaten the international system.
Second, neoconservatism is fundamentally activist, by which I mean not merely that it has an expansive view of national interests or that it has no moral problem with intervening in other countries, but that it holds as an article of faith that power cannot be husbanded. On the contrary, a vigorously activist and successful power will grow more powerful simply by virtue of having demonstrated such vigor. Another way of putting it is that neoconservatives don’t really believe that an aggressive power will trigger balancing by lesser powers; rather, they believe that an aggressive power will more likely trigger bandwagoning. Therefore, inasmuch as the United States wants to grow in power and not shrink, it needs to err on the side of action.
Third, neoconservatives have a strong bias against the legitimacy and value of international law. Skeptical of the restraining power of custom or tradition, neoconservatives tend to see law as meaningful only as an expression of an entity with a monopoly of violence. As such, in the international sphere there is only “law” if some entity is willing to use overwhelming force to ensure that said law is obeyed. The United States is unique in today’s world in potentially occupying the role of that entity, and recurrent dreams of a “league of democracies” or some such are attempts to come up with an entity that would have many of the characteristics of the United States without obviously being a single, hegemonic nation.
There is an insightful kernel of truth in each of the above tenets, but that insight is often badly abused in practice. To take the first, Nazi Germany in particular probably could not have endured without continually being on the attack, and the best evidence of that fact is that it launched a thoroughly mad war on the Soviet Union when it had not yet forced Britain into a separate peace (and at the same time that its ally, Japan, attacked the United States in a similarly mad expansion of the war). And more generally, the notion of some degree of separation between regime interests and the national interest is a valuable one for thinking about how other states behave.
But the insight is badly abused when we conclude that democracies will never be aggressive or expansionist. Britain, France and the United States all have expansionist and imperialist histories, and they continue to have expansive views of their national interests and prerogatives to intervene that they do not apply to other actors in the international system. Modern India and Israel should also be added to the list of such democracies. Populist, illiberal democracies may be among the most conflict-prone regimes on earth. But the insight is even more badly abused when truths about Nazi Germany and imperial Japan are applied to other powers that may be hostile and unfree, but are not obviously expansionist or even revisionist. Traditional authoritarian regimes are among the most cautious in terms of their foreign policy, and even some highly ideological regimes, like Iran, have not been nearly as aggressive as neoconservative theory suggests they must be. Indeed, the neoconservatives may well have been wrong about the Soviet Union itself, and George Kennan, who saw more continuity than discontinuity with pre-Soviet Russian history, more correct.
The second insight also contains a kernel of truth. There are indeed times when an active power provokes bandwagoning rather than balancing – plenty of realists would agree. But the opposite is also true. The United States easily assembled a broad coalition to fight the Gulf War because Iraq had aggressively conquered and absorbed another sovereign state. Countries all over the world saw that behavior as a threat – and rather than seek to placate the aggressor, rushed to support (and even goaded) a power that proposed to reverse the aggression. By contrast, the coalition assembled to fight the Iraq War was much more limited, precisely because America was viewed in much of the world as the aggressor. There was some bandwagoning by minor countries around the American banner, but much more widespread concern about what our actions portended about America’s global aims. Today, concerns about Chinese revisionist pretensions have driven a number of Pacific Rim states into closer alliance with the United States. This is balancing behavior. But if the United States began to support Japanese nationalist pretensions to revisionism with the enthusiasm with which the neocons supported Georgia’s 2008 war, or launched an unprovoked attack on North Korea comparable to our war against Iraq, that calculus would undoubtedly change – quickly, dramatically, and not in our favor.
As for the last insight, yes, international law lacks a police power to back it up definitively. But that does not mean that it has no value or meaning. Law and respect for law is a signaling mechanism to other states about the character of the state they are dealing with. Cavalierly asserting that the law can’t stand in the way of our righteous action sends a very clear signal: that we recognize no restraint. That is not going to make any other state comfortable unless it agrees with our assessment of our own absolute righteousness. And that discomfort poses actual costs to our ability to conduct an effective foreign policy, whether for humanitarian purposes or for the protection of our national interest.
In other words, neoconservatism’s genuine insights are modest and contingent. We can’t cavalierly assume that Iran’s regime interests are identical to its national interests (rightly considered, Iran and the United States have no material interests in conflict), and should take into account the ideological basis of the regime when we consider its likely foreign policy. But we also can’t cavalierly assume that, because it is an ideological regime, it is inherently aggressive and expansionist – particularly when there is almost no actual evidence of such ambitions. We should not assume that, say, Russia’s actions in the Crimea will “automatically” generate balancing by European powers, and that we can therefore take a blasé attitude towards events in a far off country of which we know nothing. But by the same token we should not assume that there will be a bandwagoning effect around any attempt to “lead” a coalition to “force” Russia to rescind its annexation and withdraw from that territory. A law-based approach to both conflicts may be emotionally unsatisfying, and may fail, but may still be more responsible and more likely to achieve success than an approach – favored by actual card-carrying neoconservatives if not by Salam – that emphasizes the threat or use of force, unilaterally if necessary.
In actual practice, neoconservatives have a tendency to be stopped clocks, hammers that see every problem as a nail. And stopped clocks and hammers are not good guides to policy, regardless of where the clock is stopped or how hard the hammer strikes. They would add more value to the foreign policy debate if they would return to the empirical rigor of the original neoconservatives in domestic policy, and stop behaving as if they had found some kind of eternal truths.
Salam, given his intellect and his preexisting sympathies, is an excellent person to begin that kind of change within the self-identified neoconservative ranks. But to change, you first have to acknowledge that you have a problem.
Damon Linker has a deliberately provocative column out today, arguing that the GOP has made a distinct turn against democracy as such:
This was the week, of course, when the Supreme Court’s five-member conservative majority knocked down limits on aggregate contributions to federal political campaigns, opening the door for the rich to exercise even more influence on the political system than they already do. It was also the week when Rep. Paul Ryan unveiled his latest budget proposal, which would gut food stamps and other aid to the poor. And as I wrote about the other day, this is a political season that has seen the Republican Party working to make it harder for poor people and members of minority groups to vote.
Then there was venture capitalist Tom Perkins suggesting a couple of months ago that only taxpayers should be permitted to vote — and that those who pay more in taxes should be given more votes to cast in elections. And that came less than two years after Mitt Romney was caught kissing up to wealthy GOP donors by denigrating the “moochers” who make up 47 percent of the country’s population.
Ladies and gentlemen, that many data points make a pattern. We seem to be living in an era in which the Republican Party is turning against democracy in an increasingly explicit and undeniable way.
That list of data points includes some considerable stretches – cutting food stamps may be both cruel and foolish, but is it really credible to call it anti-democratic? – but I think Linker has a real point about the current trend on the right. But I don’t think he’s at all correct in saying that this turn is “unprecedented” in American history. And I wish he’d taken the anti-democratic point of view a little more seriously, so that its profound flaws might be effectively exposed.
To take the first point: the United States has turned away from majoritarianism repeatedly in our history. The dramatic expansion of slavery in the South, and the antebellum efforts to extend the legal reach of slavery into new territories and even into free states, represented a turn away from democracy. The elimination of the franchise for African Americans in the South after Reconstruction, the institution of Jim Crow laws, and their tightening during the Progressive era (Woodrow Wilson is the one who brought Jim Crow to the nation’s capital), the imposition of the poll tax – all these represented turns away from democracy. One might characterize the various 19th- and early-20th-century anti-Catholic campaigns as anti-democratic as well – that certainly would be less of a stretch than Linker’s point about food stamps. Ditto for the Lochner-era Supreme Court decisions striking down democratically-enacted laws, intended to protect working people, for abridging “freedom of contract.” The point being: while the evolution of the written Constitution may reflect a monotonic expansion of the franchise and an ever-expanding circle of citizenship, the lived experience of Americans has not been so linearly progressive.
So we may be in one of those regressive periods again.
Presumably because of space constraints, Linker doesn’t discuss why we might have entered one of those periods. I suspect that demographic change coupled with the agonizingly slow recovery from the financial crisis do much to explain the turn to zero-sum thinking in politics, which in turn explains much of the appeal of anti-democratic arguments on the part of those who see themselves as the true proprietors of the state and the country. Nor does he do much to ask whether the anti-democratic stance makes sense in its own terms, other than to say that Aristotle would have recognized it.
Myself, I don’t think it does, and I don’t think Aristotle would (though Coriolanus might). Aristotle’s case for aristocracy very plainly implies a kind of reciprocal obligation that is completely foreign to the Randite arguments so common on the American right these days. And those arguments rarely take the explicit form of arguing that the wealthy should rule because they are more virtuous. Rather, the two most common forms of the argument are: that it’s unfair for one’s representation to be less than proportional to one’s contribution (therefore people who don’t pay income taxes should not be allowed to vote), and that it’s dangerous to give power to the unpropertied (because they don’t have a sufficient stake in stable property rights that promote productive enterprise).
And all of these arguments are transparently absurd. If the question is fairness – that one’s representation should be proportional to one’s contributions – wouldn’t you have to account for the contributions that were never compensated for properly? This country was substantially built by the coerced contributions of African slaves. Should those slaves’ descendants get “extra” votes to compensate for that manifest unfairness? And shouldn’t the benefit one derives from the state also be included as part of the calculus of one’s contribution? The state protects the distribution of property, after all, with the threat of violence. Should heirs, therefore, be disenfranchised, because they benefit from the state’s monopoly of violence, but have contributed nothing themselves?
And why should contributions be measured in monetary terms? Only veterans explicitly risk their lives to protect the country as a whole. Perhaps only veterans should be allowed to vote? Without mothers, there would be no next generation of Americans at all. Perhaps only women with children should be granted the vote? (Or perhaps they should just pay much less in taxes.) Once we start debating who deserves more votes, it’s obvious that the debate will not be resolved by reason, but by force or sheer weight of numbers. Which is a pretty good case all by itself for the universal franchise.
Meanwhile, if the question is the voters’ stake in the state, why should this incline anyone toward restricting the franchise? Is there any evidence that the road to stability and prosperity lies in that direction? Read your Livy. Or take a look at the history of Latin America. I’m not saying that there isn’t a coherent argument that ownership of property is important to virtuous citizenship – but assuming that we actually care about the well-being of the population as a whole, that argument leads logically not to plutocracy but to some version of distributism.
Now, distributism has other problems – most particularly, that it’s not obvious how it would work in a modern non-agricultural context. (Broad distribution of property in the form of shares of large national enterprises is a variant of socialism.) But at least it is a response to the problem that worries those concerned about “the 47%” – one that doesn’t simply write half the country out of the sphere of moral concern.
Readers may wonder why I bother even to dispute an argument against democracy, as I have done before in this space. The reason is: that’s what arguments are for. Giving up on that idea of reasoned deliberation and dispute is very close kin to giving up on democracy itself, which is a problem on the left these days as well as on the right.
Well, it all depends on what data you emphasize.
Gallup put out two recent pieces suggesting the answer is: yes. The first demonstrated that, over the course of time, whites as a whole have gotten more Republican, and more reliably so:
In recent years, party preferences have been more polarized than was the case in the 1990s and most of the 2000s. For example, in 2010, nonwhites’ net party identification and leanings showed a 49-point Democratic advantage, and whites were 12 percentage points more Republican than Democratic. The resulting 61-point racial and ethnic gap in party preferences is the largest Gallup has measured in the last 20 years. Since 2008, the racial gaps in party preferences have been 55 points or higher each year; prior to 2008, the gaps reached as high as 55 points only in 1997 and 2000.
The increasing racial polarization in party preferences is evident when comparing the data by presidential administration. Nonwhites’ average party preferences have been quite stable across the last three administrations, consistently showing a roughly 47-point Democratic advantage under Clinton, Bush, and Obama. On average, 69% of nonwhites have identified as Democrats or said they were independents who leaned Democratic, and 21% have identified as Republicans or leaned Republican.
Meanwhile, whites have become increasingly Republican, moving from an average 4.1-point Republican advantage under Clinton to an average 9.5-point advantage under Obama.
And a subsequent piece noted, more specifically, that voters over age 65 have trended strongly toward the Republicans, and identified that trend with the fact that the 65+ group is much whiter than the electorate as a whole:
Gallup’s analysis reveals that the changes in seniors’ party preferences are attributable in part to attitudinal change among today’s seniors as they have aged. This is evident in survey results from 1993 and 2003 that show the party preferences of today’s seniors when they were 10 or 20 years younger.
In 1993, Americans then aged 45 to 79 represented the age group that today is 65 to 99. At that time, 20 years ago, those 45 to 79 were highly Democratic, with a 12-point advantage in favor of the Democrats. That gap was larger than the average seven-point Democratic advantage among younger age groups that year.
Ten years later, all age cohorts had become more Republican and were fairly balanced politically. Today’s seniors, who were aged 55 to 89 in 2003, were the only age cohort to tilt Democratic at that time. The 2013 results show that today’s seniors have continued to move in a Republican direction, while the younger age cohorts have gone back in a Democratic direction.
U.S. party preferences are strongly polarized along racial lines, and one reason seniors are more Republican now is that they are racially distinct from other age groups. Eighty-five percent of those 65 and older are non-Hispanic whites, according to Gallup estimates, compared with 77% of 50- to 64-year-olds, 66% of 30- to 49-year-olds, and 54% of 18- to 29-year-olds.
So: whites are trending Republican, and seniors are trending Republican, and those two groups overlap substantially, all of which is driving increasing racial polarization in voting.
But there’s another way to slice the same data from Gallup:
Across different age cohorts, whites show something like a 12-point advantage for Republicans – except for the youngest cohort of white voters, which shows a 2-point advantage for the Democrats. Meanwhile, across all age cohorts non-white voters show a marked preference for the Democratic Party. But that advantage shrinks with every cohort: from a 58-point Democratic advantage among non-white seniors, to a mere 37-point advantage among the youngest non-white cohort.
In other words: white Democrats and non-white Republicans both skew young relative to their racially-similar counterparts of the opposite party. That suggests a possible counter-narrative, whereby racial polarization in voting is actually weakening over time.
Here’s a possible way to reconcile both readings of the data. Racial solidarity is a more substantial vote-motivator for older Americans than for younger Americans – both for white and non-white groups. Assuming that currently-young voters don’t grow more racially-motivated over time, that means that, over time, the electorate as a whole will be less-motivated by racial solidarity in voting. However, in the Obama era, the racial identity of each party has become more sharply defined in voters’ minds, with the Republicans understood as the white-identified party and the Democrats as the non-white-identified party. The latter effect dominated over the former in the Obama era, resulting in a higher degree of racial polarization in voting. But the weaker identification of young voters, white and non-white, with the party that “represents” them racially, suggests that this polarization could be temporary, and could be quickly reversed if events weakened the racial identity of either or both parties in a future election.
If you haven’t been following the debate between Ta-Nehisi Coates and Jonathan Chait about the legitimacy or illegitimacy of a “critique of black culture” as part of a rhetorical strategy against crime/unemployment/teen pregnancy/etc., then you must not be on the internet. To catch up, start here and continue here, here, here, here, and here. Basically, Al Gore invented the internet so we could do this.
Meanwhile, Ross Douthat has entered the lists with a phenomenal post that demonstrates a welcome attentiveness and appreciation for Coates’s perspective:
Looking back on the debates of the 1990s, Coates says that “there was really no doubt” that a neoliberal magazine would use a photo of a black single mom to illustrate its Clinton-era case for welfare reform, and I know well what he thinks about the excerpt from “The Bell Curve” that ran in TNR in that same era. But it’s at least noteworthy that a generation later, the name “Charles Murray” is mainly associated with a controversial argument about cultural collapse in downscale white America, and the most recent cover story on poverty, culture and welfare in a political magazine was Kevin Williamson’s grim essay on Appalachia in National Review. Nor are these examples really outliers: Murray’s “Coming Apart” raised the argument’s profile and enriched it with a searching look at social indicators, but the idea of a pan-racial social crisis with its roots in the decline of the two-parent family has featured prominently in conservative discussions since the Bush era, if not before.
And the story that some of us on the right, at least, would tell about that crisis is one that’s actually reasonably consonant with Coates’s grim account of the African-American experience on these shores. Beginning in the 1960s, we would argue, a combination of cultural, economic and ideological changes undercut the institutions — communal, religious, familial — that sustained what you might call the bourgeois virtues among less-educated Americans. Precisely because blacks had been consistently brutalized throughout their history in this country, they were more vulnerable than whites to these forces, and so the social crisis showed up earlier, and manifested itself more sweepingly, in African-American communities than it did among the white working class and among more recent immigrants. This pattern inclined a lot of people, right and left, to see the crisis as an essentially inner-city, black-underclass problem, and prompted the kinds of Reagan and Clinton-era debates which ultimately gave us welfare reform, tough-on-crime policies, and a national campaign against teen pregnancy. But now we know differently: However one assesses the wisdom and justice of those policies (and Coates and I would have some major disagreements there, I’m sure), the racialized framework in which they were debated and implemented does not fit the lived reality of America in 2014.
By which I mean that (just as Coates suggests) we don’t have a black culture of poverty; we have an American culture of poverty. We don’t have an African-American social crisis; we have an American social crisis. We aren’t dealing with “other people’s pathologies” (the title of Coates’s post) in the sense of “other people” who exist across a color line from “us.” We’re dealing with pathologies that follow (and draw) the lines of class, but implicate every race, every color, every region and community and creed.
In this landscape, certain ways of talking about culture and poverty really are inappropriate, and for roughly the reasons Coates suggests — because they essentially involve a flight into the more comforting (for white people) patterns of the recent past, into a reassuring Othering of social pathology, into a conversation that has why can’t those poor black people get their act together? written over and over again between its lines. In this landscape, it’s usually a mistake — no, not a “racist” mistake, but still a mistake — for white Republican politicians interested in poverty to overstress the “inner city” in their rhetoric. In this landscape, forms of moral exhortation around sex and marriage and work and responsibility that are really just outsiders’ critiques of “black culture” are even less defensible than usual.
Before adding my own 2c of criticism, I just want to acknowledge how smart this is.
Douthat goes on to make two objections to Coates’ apparent perspective: first, that he seems to veer close to denying that culture is any kind of an independent variable in sociology, a stance he calls “radical[ly] reductionist” and presumptively uninteresting; second, that he doesn’t acknowledge the existence of, well, Ross Douthat, and other supporters of the Bush-era social agenda who made a conscious effort both to talk a talk and walk a walk that was post-racial in its analysis. (Uncharitably, one might describe it as seeking to supplant America’s traditional racial identity politics with a trans-racial Christian identity politics.)
I think Douthat has a legitimate point there – but my main objection would be something like the following. Most people would agree that the church had a more central place in African American life in 1965 than it did in most white communities. And yet, in 1965, whatever forces were driving the breakdown of the traditional family had a greater impact on the African American community than they did in white communities. Shouldn’t that suggest that exhortatory moralizing is perhaps not the strongest line of defense?
Moreover, Douthat argues that any kind of re-moralization, to work, would need to be driven by leaders that are exceptionally credible with those on the receiving end of the sermon. But he identifies the social pathologies that concern him as more class-based than race-based. In which case, to achieve his own goals of re-moralization, doesn’t he need an authentic working-class leadership? It’s worth noting that the closest Charles Murray came to a remedy for our national “coming apart” was for elites to try living closer to working-class people. He doesn’t suggest any adjustment of our national political and economic arrangements that would cede more power to the working class.
And I stress the word “power” deliberately. It is entirely possible to simultaneously experience more consumer choice, and more consumer comfort, while experiencing a diminishment of power, a lack of control over one’s own life, and a lack of involvement in collective decision making.
My 2c for Coates comes from a somewhat different direction. To wit: what is the politics implied by his critique?
The most obvious political thrust of a narrative of communal subjugation is nationalist and revolutionary. You make the case that your people has been brutalized and stolen from and raped and murdered with impunity. That case motivates the determination to rise up and prove your collective manhood by throwing the foreigner out of power. Depending on the circumstances, that might mean expelling an occupier (Kenya, for example), or toppling a minority regime (South Africa, for example), or carving one’s own state out from larger structure (South Sudan, for example). Nationalism, of course, doesn’t necessarily solve the inequities associated with the legacy of the historic injustice. But it makes it possible to act communally on a formally independent basis. And there’s a vital dignity in that – or so many of the world’s peoples have concluded.
True nationalism has never been a particularly practical option for the African-American community, though. And Coates himself is emphatic about his Americanness, his stake in a collective experiment in which he will likely always be a minority. He just wants more white Americans to love America without treating it as exceptional or objectively superior.
The point I want to make is that this agenda is itself a variety of exhortatory moralism, aimed at the other, just as Paul Ryan’s is. It’s just that the pathology in question is not crime or teen pregnancy but unexamined white supremacist premises. And that’s why I ask Coates the question I asked with regard to last year’s Best Picture: what kind of politics are implied by that kind of searing indictment divorced from any gesture toward action? Chait’s increasing irritation at Coates isn’t really about feeling misrepresented, but about the feeling that Coates’s is a counsel of despair.
Which brings me back to the original basis of the argument – does Barack Obama agree with Paul Ryan about something fundamental? Of course he does. They are both American politicians. So the fundamental thing that they agree on is: words are an instrument of power.
Why does Barack Obama exhort “Cousin Pookie” to “get off the couch” and vote? Because if he gets to the polls, Cousin Pookie will vote for him. He is not an analyst, trying to be fair to Cousin Pookie. If African-Americans were a disproportionate percentage of voters in 2008, he wanted them to be an even bigger disproportionate percentage of voters in 2012. Because he wanted to win. It has nothing to do with justice.
If Coates is disappointed that the election of Barack Obama has not radically improved racial dynamics in America, he should remember that Barack Obama is just the President of the United States. Coates complained in one of his pieces that Chait was treating the President as if he were the coach of “team Negro” – which would make exhortation to “try harder” appropriate – whereas in fact he’s the commissioner of the league. But if it’s not the commissioner’s job to give morally exhortatory speeches to “his” team, it’s also not the commissioner’s job to rail against the unfair advantage of the Yankees’ payroll. And it’s important not to forget that the commissioner is chosen not by the players or by the fans, but by the owners.
Which doesn’t mean that some commissioners aren’t more favorable to the interests of the players, and some less.
UPDATE: Here’s another way to put my question to Coates. The dominant narrative in speaking about black poverty could be described as “up and out.” The conservative variant emphasizes the personal responsibility element in making that happen, and the liberal variant emphasizes the economic and social policy assistance element, and there are further variations on variations to include conservative reformers and so forth – but the commonality is “up and out.” I don’t read Coates as denying that personal responsibility is important – I read him as denying that African Americans deserve any special notice in that regard, that they exhibit any special deficiency.
But I also read something else: an objection to that narrative as such, regardless of where the emphasis is placed. Because his own inheritance, from his father, is a narrative not of “up and out” but of “up and over.”
And my question is: what, in the context of America in 2014, does “up and over” mean to him?
Just a minor point to add to Daniel Larison’s typically sensible post about the folly of issuing empty threats over Ukraine.
I hope that everyone agrees that bluffing is dangerous, because a bluff can be called and, if it is, the bluffer must either make good on the bluff – which, presumably, is very strongly counter to his interest, else he wouldn’t be bluffing but making threats in earnest – or suffer exposure as someone whose threats are not to be taken seriously. If we say, “don’t cross this line in the sand or else” and the person we are threatening crosses it, and we do nothing, then he’ll be that much less inclined to pay any attention when we draw such lines in future sands. (Note: I’m not arguing that our credibility is some unitary factor independent of the characteristics of individual conflicts; other actors in the system can presumably make rational estimates of where our “real” interests lie. Nonetheless, it isn’t a good thing to get a reputation for making empty threats.)
On the other hand, bluffing is a useful tool because, as in poker, it enables you to “play” a somewhat stronger hand than the one you actually have. If there is some uncertainty about whether a threat is a bluff or not, the threat may be accepted as real, and you gain the benefit of the threat at a lower cost than it would take to accumulate the cards necessary to make it good. Moreover, the acceptance of the bluff as true itself sends a signal to other potential opponents: our last opponent backed down in the face of our threats. He thought we were serious. Maybe you should, too?
Looked at this way, there’s a case for judicious bluffing – that is to say, as a matter of calculated risk. If I bluff, my opponent may call – but he may treat the bluff as serious, and back off, and if he does so then the “power” of my declared threats has been enhanced. In other words, there’s a risk of loss, but also a risk of gain. If the action we’re trying to deter is sufficiently damaging, it becomes relatively easy to make the case for bluffing – because a successful bluff deters the action and also enhances one’s credibility, while a failed bluff only results in a loss of credibility; the negative action would presumably happen anyway if the bluff hadn’t been made in the first place.
A bit of simple math might be helpful to explain this point of view. Assume, for simplicity’s sake, that the loss or gain to credibility (“c”) is symmetric – we gain just as much from a successful bluff as we lose from a bluff being called – and that the opponent’s action (“a”) is certain if either the bluff is called or no threat is made. We’ll use P(s) to represent the probability of the bluff’s success. In that case, you get the following:
No bluff: cost = a (opponent takes the action)
Bluff called: cost = a+c (opponent takes the action *plus* we lose credibility)
Bluff successful: cost = -c (i.e. we gain credibility because our opponent backed down)
Total cost of bluffing = P(s)*(-c) + (1-P(s))*(a+c) = a+c-P(s)*a-2*P(s)*c
Since the cost of not bluffing is “a,” to compare bluffing to not bluffing we subtract “a” from both sides. Result: bluffing makes sense if c is less than the sum P(s)*a+2P(s)*c.
That looks like a pretty big number relative to c. To illustrate, take the following example: c is twice as large as a – i.e., the cost to credibility of a failed bluff is twice as large as the cost of the action we’re trying to deter in the first place – and the probability of success is only 50%. Should you bluff?
No bluff: cost = 1
Bluff called: cost = 3
Bluff successful: gain = 2
Total cost of bluffing = 50% * 3 – 50% * 2 = 1.5-1.0 = 0.5
Your indifference point in this ridiculously simplified analysis would be a 40% chance of success. In other words, this analysis leads to the conclusion that you should bluff in circumstances where the bluff is 50% more likely to fail than to succeed, and where the total cost of a failed bluff is three times as large as the cost of never making a threat.
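For readers who want to check the arithmetic, the toy model above is easy to restate in a few lines of Python. The function and variable names are mine, and this is exactly the ridiculously simplified model from the text, nothing more:

```python
# Toy expected-cost model from the text.  a = cost of the opponent's
# action, c = credibility gained or lost, p_s = probability the bluff works.
def expected_cost_of_bluff(a, c, p_s):
    # Success (prob p_s): we gain credibility, so cost = -c.
    # Called (prob 1 - p_s): the opponent acts anyway and we lose face,
    # so cost = a + c.
    return p_s * (-c) + (1 - p_s) * (a + c)

a, c = 1.0, 2.0  # the worked example: c is twice as large as a
print(expected_cost_of_bluff(a, c, 0.5))  # 0.5, cheaper than the 1.0 of never bluffing

# Indifference point: bluffing breaks even with not bluffing when
# c = p * (a + 2c), i.e. p = c / (a + 2c).
p_indiff = c / (a + 2 * c)
print(p_indiff)  # 0.4
```

With a = 1 and c = 2 the break-even success probability comes out to 2/5, matching the 40% figure above.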
You can see how someone might rationally conclude that bluffing is a pretty good strategy, in a lot more cases than you might initially suspect. Indeed, notwithstanding the many excellent points in Paul Pillar’s refresher course in Cold War deterrence, he gives the inaccurate impression that America did not do a lot of bluffing in that multi-decade standoff. Whereas, in fact, there is considerable question whether America’s core deterrent ever was truly credible, in the sense that it was never clearly rational to actually escalate to a nuclear exchange for the sake of Western Europe or Japan, and yet America threatened first use of nuclear weapons in response to a conventional Soviet assault.
Obviously, there are a dozen ways to poke holes in my analysis above (and I should be clear, that analysis is not something I’m defending, just something I cooked up to illustrate a point that I then wanted to debate). The effect on credibility could be asymmetric, for example – the gain from a successful bluff could be of much lower magnitude than the loss from a called bluff. Or you can question the whole framework by emphasizing the inherent uncertainty of all the numbers involved (which, after all, will most likely be pulled out of the analyst’s posterior). But one hole that should get poked more often is the unwarranted assumption that threats can only decrease, and not increase, the likelihood of the action you’re attempting to deter.
Suppose your opponent is contemplating action “a” that would accrue some gain to him at some cost to you – but not a large enough cost to be worth fighting him over. Nonetheless, you threaten to fight if he takes that action. If he allows himself to be deterred, in our analysis above your credibility is enhanced – you experience a gain in power. But at whose expense?
First and foremost: your opponent’s. After all, every other actor in the system can rationally conclude that you might well be bluffing just as easily as your opponent can. They can’t be certain – but they know there’s a good chance. If your opponent backs down in a situation where a bluff is fairly probable, that results in a substantial blow to his credibility. Even if the cost of fighting with you is sufficiently high that it would mean a substantial cost to the opponent to take an action that leads to war, he can’t afford simply to absorb the cost of backing down in the face of a possible bluff. He has to play the odds.
Well, let’s run the odds from his perspective, using the same kind of over-simplified analysis. Assume that calling our bluff or backing down generates symmetrical gain and loss, and that the value of the action itself is still 1/2 the cost of backing down to a bluff. Assume, further, that the cost of war is 10x the value of the action. We’ll use P(b) to indicate the opponent’s estimate of the probability that we are bluffing. (We already know that the true probability is 1.) Well?
Back down: loss = 2 (loss in credibility)
Call bluff, no war: gain = 3 (1 for action itself, 2 for gain in credibility)
War: loss = 10
Total value of calling bluff: P(b)*3 - (1-P(b))*10 = P(b)*13 - 10
Since the loss due to backing down is 2 (a value of -2), it’s worth calling the bluff if P(b)*13 is greater than 8, or, in other words, if there’s a greater than 8/13 chance we are bluffing (in which case the expected loss from calling the bluff is also 2).
Think about that. In this ridiculously over-simplified, zero-sum analysis, we should bluff if we think there’s at least a 40% chance of the bluff succeeding, even if we rule out in advance the option of making good on the threat and even though, if our bluff is called, we’ll lose 3 times what we would have lost if we had never bluffed at all. And our opponent should call our bluff if they think there’s at least a 60% chance of it being a bluff, notwithstanding that if they’re wrong and we go to war they’ll lose 10 times what they would have gained from taking the action if we’d never threatened them. Moreover, if our opponent estimates at least a 70% chance that we are bluffing, the relative value of taking the action becomes *higher* to them than it would have been had we never bluffed in the first place, because of the incremental value to their prestige and credibility in having defied our threats. Now, recall that we’re likely to have positively-biased estimates of our own ability to bluff. Does it still seem reasonable to assume that threats will at least reduce, and not increase, the likelihood of our opponent taking a given action?
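The opponent’s side of the ledger can be sketched the same way (again, purely illustrative: the names and default numbers simply restate the toy example above):

```python
# The opponent's side of the toy model.  action = value of taking the
# action, cred = credibility at stake, war = cost of war if the threat
# turns out to be real, p_b = opponent's estimate that we are bluffing.
def value_of_calling(p_b, action=1.0, cred=2.0, war=10.0):
    # Bluff (prob p_b): opponent takes the action and gains credibility.
    # Real threat (prob 1 - p_b): war.
    return p_b * (action + cred) - (1 - p_b) * war

BACK_DOWN = -2.0  # fixed credibility loss for backing down

# Call rather than back down when 13*p - 10 > -2, i.e. p > 8/13 (~61.5%).
print(value_of_calling(8 / 13))  # ~ -2.0: the indifference point

# The net value of acting (relative to backing down) exceeds the +1 the
# action was worth in a no-threat world when 13*p - 8 > 1, i.e. p > 9/13.
print(value_of_calling(9 / 13) - BACK_DOWN)  # ~ 1.0
```

Running the numbers confirms the thresholds in the text: calling beats backing down above roughly a 61.5% estimated chance of a bluff, and above roughly 69% the threat has actually made the action *more* attractive than it was before the threat was issued.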
Again, there are a dozen holes that can be poked in such an admittedly over-simplified analysis. But the important point is that there are entirely rational reasons to suspect that issuing a threat can increase the likelihood that the opponent takes the action you are trying to deter. For any such action, “a,” there are a variety of potential costs and benefits to the actor – a vast penumbra of uncertainty about outcomes that might in itself be sufficient to deter many actors from many potentially beneficial actions. By issuing a threat, you’ve made one of those potentialities much more concrete: inaction will definitely result in some loss. Depending on how large that loss looms, and what the opponent figures are the odds that you’re bluffing, the threat itself could be sufficient to motivate the action you intended to deter – or some other action of equal or greater cost to you.
With America defining its interests in such a global fashion, it’s very likely that this dynamic plays an important part in our opponents’ responses. It certainly seems to have been relevant in Georgia, where part of Russia’s motivation in provoking Georgia into launching a war was precisely the desire to call America’s bluff. (How much, after all, is South Ossetia itself really worth to anybody, even the South Ossetians?) The same might prove true in Eastern Ukraine if we handle the situation in the way that some hawks prefer.
First of all, I think the overall orientation of Douthat’s column is exactly right: the illusions of liberal internationalism and hawkish neo-conservatism were congruent – sufficiently so that both the Russians and we ourselves sometimes had trouble telling them apart. And I think his conclusion is strong as well: we need a realistic response, one that recognizes Russia’s revisionism and the real limits to our power to respond.
But a realistic response also needs to be clear-headed about what our interests actually are here. From Douthat’s column, I sense an underlying assumption that the two illusory programs were intended to advance American interests, but, because they were based on illusions, could not succeed. That is to say: it would be good for America if Russia were to become a “normal” country and good for America if we expanded our “sphere of influence” into places like Georgia and Ukraine, but we miscalculated what was possible. I think he’s right about the limits of the possible, but the implicit assumption – that our original objectives were even in our interests – needs to be examined.
Let’s take the second goal. Assuming we’re going to accept terms like “sphere of influence,” what would be the advantage to America of expanding ours into Ukraine? A Ukrainian manpower contribution to NATO? The economic benefits of greater trade with Ukraine? Diplomatic support for American initiatives? None of these is obviously of substantial benefit. Meanwhile, we (or, more correctly, the European Union) would take on the burden of Ukraine’s substantial political and economic deficits.
Assuming the goal of expanding NATO and the EU eastward isn’t specifically to weaken Russia, then – which I’ll assume for the sake of argument – the purpose would primarily be to improve the political and economic situation in Ukraine, which would then have ancillary benefits for us and our allies in terms of both avoiding the costs of instability on the edge of Europe (refugees, the need for humanitarian assistance, the possibility of being dragged into an actual conflict) and reaping the benefits of trade with a more prosperous partner.
The rather unfavorable offer that the EU made to Ukraine prior to the crisis strongly suggests that our European partners didn’t think these benefits were worth the costs. And since the collateral benefits of a successful “expansion” would accrue primarily to them, why would we want to pay more than they would? Other than to gratify us with the sheer size of our “sphere,” why would we want to add Ukraine?
Now, the first goal – the “normalization” of Russia – would certainly be in American interests, inasmuch as a revisionist power is necessarily some degree of threat to all status-quo powers. But I don’t see why “normalcy” requires submission to an American-led security architecture. A Russia analogous to South Africa or Brazil, that sought to play a positive regional role but kept aloof from or even actively questioned America’s grander pretensions, would presumably qualify for “normalcy.” And such an end-game would seem to be far more “realistic” than assuming Russia would ever become an outright supporter of American hegemony.
Moreover, it would arguably be more congruent with our interests. Again, even assuming Russia would ever consider subordination to an American-led global security architecture (unlikely), that implies that we would undertake the responsibility for assuring that Russia’s legitimate grievances were addressed satisfactorily, and would implicate us in its handling of its own internal problems. It’s clear to me why we would want Russia to handle these matters the way we would prefer, but it’s not clear to me why we would want the responsibility for assuring that they would be so handled. If we don’t have a good reason for taking on Ukraine as the next Italy, why would we want to take on Russia?
What all of that adds up to is to say that prior to the intervention in Crimea, America’s primary interest with respect to Russia was surely in avoiding a resumption of international tension between Russia and the West, such as is now taking place. We had many other secondary interests – assistance in pursuing our war against al Qaeda, stability in the energy markets, cooperation in reaching a verifiable negotiated solution to the Iranian nuclear program, mediation of the Syrian civil war, etc. And we had some interests that would have conflicted with Russia’s, including an interest in establishing the international norm that “spheres of influence” as such are an outdated concept incompatible with allowing all states sovereign freedom of action (which is not the same thing as saying that NATO should expand to include any country we like). But what is happening now is surely what we most wanted to avoid.
Now that it is upon us, though, our interests are somewhat different. Nobody should have been surprised that Russia was unwilling to simply sit back and accept the overthrow of the Ukrainian government. I wasn’t terribly surprised by direct Russian intervention either – Russia had done much the same in South Ossetia, Abkhazia and Transnistria, and Serbia had done much the same in the various wars associated with the breakup of Yugoslavia. But the hastily-organized and highly questionable referendum and annexation have raised the stakes considerably. Russia’s legal position is extremely weak. The Crimea was transferred to Ukraine entirely legally per the law that prevailed in the Soviet Union; Russia agreed to respect the territorial integrity of Ukraine when the Soviet Union broke up; and secession should properly require not only a referendum but a negotiated agreement with the parent country (as was the case with the breakup of Czechoslovakia, and will be the case if Belgium, Canada, Spain, the UK or any other Western country perpetually at risk of splitting finally takes the plunge). If the annexation of Crimea is accepted, then the entire post-Cold War settlement is up for forcible revision. Given that our primary interest in the region is the maintenance of stability and order, we should not be sanguine about that prospect.
The issue, then, isn’t how to “punish” Russia – we’re not Russia’s nanny. Ideally, what we’d want to do is walk back some of the decisions that got us where we are now. Unfortunately, I don’t see a viable way to do that. In theory, Ukraine could agree to allow Crimea to separate for a price (I think it would be sensible for them to do that), and Russia could agree to allow a new referendum to be conducted under independent international auspices (I think it would be sensible for them to do that as well), which would pave the way for a legitimate separation from Ukraine. But neither of those things is likely to happen. Ukraine isn’t going to ask for or accept a bribe; the new government is nationalist in orientation, and to do so would undermine the basis of its authority. Russia isn’t going to offer a bribe – it already has Crimea, so why would it pay for it? – and it isn’t going to accept the principle that outsiders picked by the West have any legitimate role in arbitrating the dispute. This is, ultimately, the great cost of the Clinton-Bush years with respect to Russia: the Russians are, very reasonably, convinced that any concessions made to the West will be pocketed, but that anything they get in exchange may be withdrawn at any time.
Given that there’s no obvious way to walk back the annexation, and that accepting the annexation would amount to opening the Pandora’s box of wholesale revision of the post-Cold War settlement, I suspect that the real choices are outright war with Russia (which nobody wants) or a persistently high level of tension. But high levels of tension make conflict more likely. Douthat mentions two things that America should not do in response to the situation in Crimea, specifically because they would be provocative: deploy troops to Estonia or send arms to Kyiv. I don’t disagree – but how should we respond if Ida-Viru (which is over 70% Russian, which contains over a third of Estonia’s Russian population, and which holds most of Estonia’s natural resources, such as they are) starts talking about seceding from Estonia, with Russian encouragement? How should we respond if outright civil war erupts in Ukraine and Russia moves in to “keep the peace”? Those are not rhetorical questions – we need to know what our answers would be. My point being, “containment” is not a condition of peace.
And deterrence is a fragile thing. Is it really credible that the United States would go to war with Russia over Estonia? Or with China over Taiwan? Ultimately, deterrence is not about making the other side certain of its defeat but uncertain of its victory – sufficiently uncertain to be unwilling to risk war. Which implies, as a corollary, convincing them that peace is safer than war. War over Taiwan remains relatively low-likelihood because China still reasonably believes that it will get Taiwan peacefully at some point in the future. The moment that belief comes under serious question, war becomes much more attractive – but nothing we do then will make it “worth” war with China to defend Taiwan, and the Chinese know that.
In Crimea, Russia decided that war was safer than peace – that if it did not use force, it would be very likely to lose. So it used force. Responding simply by raising the stakes of future conflict reinforces the conditions that led Russia to that conclusion in the first place, making further conflict more likely. Responding weakly undermines deterrence directly, and would encourage Russia to see what further gains it can make by boldness. Restoring deterrence without provoking additional conflict will therefore not be easy, because we have to simultaneously raise the cost of further provocations and provide a credible basis for Russia to believe that enough of its interests could be secured without the use of force.
To put it bluntly: there is no good reason ever to expand NATO to include Ukraine. Realism means not only recognizing limits, but setting them. To say that now, though, in the current context, is to confirm to Russia that its approach to Crimea was effective, and should be repeated. Therefore, the objective of our diplomacy should be to create a context within which saying such a thing becomes possible again, because it would be part of a more general resolution of outstanding issues. And in the meantime, we should expect a persistently higher level of tension in the region.
A number of people sent me the David Atkins piece that Rod Dreher linked to, but I think Dreher takes the discussion in a not-very-fruitful direction. Basically, he suggests that if the Left really cares about economics, they should let the Right have its way on cultural issues, and if the Right really cares about social issues, they should let the Left have its way on economics.
Which – sure, if those were the true preferences of real entities battling for supremacy. But there is no Left and no Right. Those are abstractions according to which we choose to divide individuals.
Here’s how I would describe things:
- Economic elites really care about preserving their privileges.
- Elected officials really care about reducing the risk of losing office.
- The culture war – for both the nominal Left and the nominal Right – is an extremely effective way of serving the interests of both economic elites and elected officials.
Why? Because the culture war turns politics into a question of identity, of tribalism, and hence narrows the effective choice in elections. We no longer vote for the person who better represents our interests, but for the person who talks our talk, sees the world the way we do, is one of us. That contest is a cheap and easy one for politicians of any stripe to enter – and, usually, an easy one to win. It sorts the overwhelming majority of the population into easy-to-count-on camps who will not demand that politicians do anything for them, because they’re too afraid the hated “other team” might get into power.
And it’s a good basis for politics from the perspective of economic elites. If the battle between Left and Right is fundamentally over social questions like abortion and gay marriage, then it is not fundamentally over questions like who is making a killing off of government policies and who is getting screwed. Economic elites may lean to one or the other side on any cultural question (they can be found on both sides), but they can maintain their privileges no matter which side wins any particular battle. So whoever they want to win, that’s the ground on which they want the battle to be fought.
Atkins focuses on the Left-wing version of identity politics – the way in which putting so much energy into fighting for adequate representation for every tribal group has drained energy away from the fight to shift the terms of the social contract overall. It’s much easier to get corporations to agree to adopt affirmative action policies than to get them to agree to recognize a union. So if activist energy goes mostly into fighting for the former, by definition it won’t focus on the latter.
But the same thing is true of Right-wing identity politics. If you can get out the votes by decrying the unfairness of affirmative action, then you won’t need to call for tougher anti-trust enforcement, or for patent and copyright reform, or for breaking up the mega-banks, or for reducing corporate welfare, or for a trade policy organized around moving American manufacturing up the value chain, or any other policy – and I deliberately picked policies that at various points in history have been or could plausibly be part of the Republican “mix” – that might change the terms on which our economy functions in a broad sense, rather than just jockeying for position against other groups within the existing arrangements.
That doesn’t mean social issues don’t matter. It means that they should not be the organizing basis of large political coalitions.
Successful single-issue lobbies work both sides of the aisle. The NRA wants Democrats to be pro-gun as well as Republicans – and lo, while the Democratic Party is less pro-gun than the Republican, it’s also less anti-gun than it used to be, and there are plenty of pro-gun Democrats in the West. AIPAC wants both parties to support the interests of the State of Israel – and lo, there’s only a slight difference between the two parties in terms of their willingness to support the pro-Israel agenda. If I were an activist motivated primarily by a desire to restrict abortion, my top political question would be: where and how can I get Democrats to listen to me? Who’s the most anti-abortion (or least pro-abortion) candidate in every Democratic primary? That’s who we want to throw our support to in that primary – to show that our votes can be won. If anybody wants to win them.
The evidence is overwhelming that winning this or that election doesn’t determine the shape of the culture – and in a healthy political culture the parties are going to take turns holding power fairly regularly anyway. A strategy to change the culture by always voting Republican or always voting Democrat is guaranteed not only not to change the culture, but to throw away the chance of your vote affecting anything else. Which is one reason why I am not primarily motivated by social issues, as compared with issues of war and peace, the general welfare, and good governance.
I really believe the following:
- If you believe that the country needs broader access to government-supported (or -provided) health care, more welfare spending generally, stronger unions, stricter environmental regulation, and so forth, and think these things are worth paying higher taxes for, then you should vote for the Democrats, even if you think affirmative action is folly and abortion is wrong and the Second Amendment is sacred. And you should fight – hard – within the party and in the media to make more space for your views on social issues within the Democratic Party and the country as a whole.
- And if you believe that the country needs lower taxes, more streamlined and flexible regulations, more flexible labor markets, and so forth, and think these things are worth living with greater inequality for, then you should vote for the Republicans even if you believe in the importance of workforces that “look like America” and that abortion is a civil right and that guns should be more tightly controlled. And you should fight – hard – within the party and in the media to make more space for your views on social issues within the Republican Party and the country as a whole.
- With the caveat that you should sometimes vote against the party whose views you share on matters related to economics and the general welfare if that party (or candidate) is corrupt, or incompetent, or has dangerous views on foreign policy, or is simply exhausted and incapable of meeting the challenges of the moment – if, for whatever reason, you think they will do a distinctly worse job than the other party that is more closely aligned with what you see as the national interest (and/or your own interest).
Andrew Sullivan probably did more for the movement for gay marriage than any other single individual. And he has never been a Democrat, and has prominently endorsed both Democrats and Republicans at different points in time, without changing his views on the issue that is undoubtedly closer to his heart than any other.
My advice to people like Rod Dreher who are on the other side is neither to withdraw from politics nor to keep their shoulder to the wheel for their partisan “side,” but to follow Andrew Sullivan’s example.