That’s what the children of Israel responded when presented with the words of the Lord at Sinai – not, “we hear, and we will do” but “we will do, and we will hear.” (Exodus 24:7)
The two glosses I’ve heard on that particular verse are: first, that it’s the ultimate testament to the faith of the Israelites at that moment, that they agree to perform the divine command even though they haven’t really “heard” it yet (that is to say, they haven’t absorbed its meaning). Second, that it’s a statement about the nature of hearing the divine command – that we can’t really hear it until we’ve performed it.
I was thinking about this apropos of David Sessions’s mild but firm objections to the way an essay of his about the non-rational bases of what he calls his “de-conversion” has been understood by some religious readers, including our own Rod Dreher, who juxtaposed his essay with the story of Champagne Butterfield, an ex-lesbian convert to evangelical Christianity. The discussion ties back to a post Ross Douthat put up last week about whether secularism by its nature changes our experience of reality, which I’ve been meaning to touch on but haven’t gotten around to.
Here’s the heart of Sessions’s objection to Dreher’s juxtaposition, and, more generally, to those who read the narrative of his de-conversion as equivalent, in some sense, to conversion narratives:
[Butterfield's experience] is something much different than what I meant to say while channeling Charles Taylor. There is a superficial similarity in the sense that Butterfield and I both had experiences that changed us before we had a full explanation or argument for what happened. What Butterfield describes in this passage is essentially her embrace of obscurantism, a “truth” that either defies or ignores well-established scholarship—and even her own previous experience—on human sexual orientation. But the fact that experience drives intellectual transformation is not a license to abandon intellectual rigor. For example, how does she know God has a point of view about homosexuality, or that it’s negative? Why does she think Christianity requires her to obey it before she understands? What if Christians disagree about what that view is, or think that view is something that’s obviously misinformed? Does it make sense that a Christian God would want a convert to break up a happy family? For a former scholar, Butterfield shows remarkably little philosophical skepticism; she also seems to cast aside her training in how to review and evaluate the available evidence to determine if these views she’s been introduced to are reasonable or even widely considered to be Christian.
In fact, it’s her theological incuriosity that’s perhaps most surprising. As Patrol’s Kenneth Sheppard wrote, analyzing the problems with Butterfield’s conversion narrative: “the question of how to read the Bible, how to determine what it teaches on subjects such as sin (or if it is in fact univocal on such questions), and how to embody that teaching, never seems to arise; this is a rather glaring omission for someone who used to be a literature professor.”
If I understand his objection, what he’s saying is that while his own de-conversion was motivated by experience, social context, and emotion, and not merely by intellectual argument, he feels that Butterfield’s conversion is explicitly a rejection of the process of intellection. And, for that reason, he finds it problematic and troubling, quite apart from its not being parallel to his own experience.
I see his point, but I’m not sure he’s really grasping the nettle. It’s comforting to think that the liberal, secular mind is simply more open than the religious, but in my experience you can find plenty of closed-minded people in both camps, and the more open-minded have different points of stress where they turn away from the possibility of uncomfortable truths. There are very, very few individuals who approximate a truly Socratic level of openness to doubt about their own knowledge.
The nettle, I think, is that the qualities of their respective experiences are incommensurate. What I hear when I read the descriptions of Butterfield’s experience is, most primally, the experience of being commanded. The feeling that an authority has instructions for her, and that she must obey them. Sessions’s de-conversion contained no trace of that feeling.
Is that feeling a good thing or a bad thing? Something to be embraced or something to be analyzed and demystified? That question is central for adherents of (or objectors to) the Abrahamic religious traditions. But you won’t get anywhere in trying to understand it if you start from the proposition that God’s commands ought to be reasonable.
Would God want to break up a happy family? Well, God tests his first prophet, Abraham, by ordering him to sacrifice his only son, and on the plain reading of the text Abraham passed the test by showing his willingness to obey right up to the last possible moment. There are other readings of the text, but it seems to me, as it did to Kierkegaard, that this is a story about what obedience to God really means. It means obedience when God commands you to do something that flatly contradicts everything else you believe: your rational self-interest, your deepest feelings, your innate moral sense, even the apparent meaning of God’s own prior promises (Abraham was promised a glorious posterity through Isaac, after all). What’s leaving your beloved partner compared to that? And if you want a Christian text, how about Luke 14:26? Apparently, you can’t really love Jesus unless you prefer him to your father, mother, spouse, children, brothers, and sisters, and are willing to abandon them all to follow him. That’s a pretty explicit proof-text response to Sessions’s question, isn’t it? Pragmatically, the meaning of saying that this or that religious practice is God’s command is to say that we do not question it by asking whether it is reasonable.
Now, you can accept that position intellectually, or tacitly, because you were brought up to do so, without feeling the experience of divine command. And that, I assume, is where Sessions started out in life. And then he had other experiences that led him to question whether he still wanted to accept that position – and, ultimately, led him to reject it. But those experiences that led to his de-conversion were not qualitatively similar to Butterfield’s; they were not experiences of being commanded.
Again, I’m not saying that this makes Butterfield’s experience more authentic or powerful than Sessions’s. I’m not even saying that I know how you are supposed to respond to that kind of experience. I’m just asserting the primacy of experience itself as an explanation of Butterfield’s behavior, and saying that moralizing about her response is harder than you might think.
An analogy: the experience of falling in love. Can we trust it? How should we understand it? How should we respond to it? These are not easy questions to answer. Should you marry the person for whom you experience that feeling? What if the feeling doesn’t last? What if you’re already married – should you leave your spouse for this new love? What if you never experienced that feeling with your spouse – now should you consider leaving them for this other person? Should you shun this person you’ve fallen in love with, lest the experience cause you to do something irrational or morally wrong? Or should you cultivate that feeling of blind devotion while, simultaneously, abjuring any socially or morally forbidden expression of affection? (The medievals developed an entire quasi-religious system around the latter and since Dreher is in such deep Dante these days I’d really like him to investigate the relationship of Dante’s idolatry of Beatrice to the courtly love tradition.) These aren’t easy questions to answer – unless you answer that the experience of falling in love is a bad one, to be shunned, categorically, which, it seems to me, devolves into answering that experience as such should have no bearing on our actions. Which, to my mind, is an untenable approach to life.
All of which brings me back to Douthat, who asks a very good question about the whole business of religious experience:
[M]y question . . . is whether the buffered self/porous self distinction is supposed to describe a difference in the lived, felt substance of religious experience itself, or whether it’s ultimately an ideological superstructure that imposes an interpretation after the fact. Taylor’s argument seems to be that the substance of experience itself changes in modernity: He leans hard on the idea that (as he puts it) “the whole situation of the self in experience is subtly but importantly different” for people who fully inhabit the secular age. Which would seem to imply that when Verhoeven was in that church, his actual experience of what felt like the dove descending was “subtly but importantly different” from the experiences that the not-as-secularized believers around him might have been having — more attenuated, more unreal, and thus easier to respond to in the way he ultimately did. And it would imply, as well, that if Takeshi Ono’s worldview had been more secular to begin with, he wouldn’t just have reacted to his visions differently (by, say, visiting a therapist rather than a Buddhist priest); he would have had a different experience, period, in which he somehow felt more buffered and less buffeted throughout.
This isn’t just an academic distinction; it has significant implications for the actual potency of secularism. To the extent that the buffered self is a reading imposed on numinous experience after the fact, secularism looks weaker (relatively speaking), because no matter how much the intellectual assumptions of the day tilt in its favor, it’s still just one possible interpretation among many: On a societal level, its strength depends on the same mix of prejudice, knowledge, fashion and reason as any other world-picture, and for the individual there’s always the possibility that a mystical experience could come along (as Verhoeven, for instance, seemed to fear it might) that simply overwhelms the ramparts thrown up to keep alternative interpretations at bay.
But if the advance of the secular world-picture actually changes the nature of numinous experience itself, by making it impossible to fully experience what Taylor calls “enchantment” in the way that people in pre-secular contexts did and do, then the buffered self is a much more literal reality, and secularism is self-reinforcing in a much more profound way. It doesn’t just close intellectual doors, it closes perceptual doors as well.
I think this is a very good way of describing a key question, and my answer is to reject the dichotomy as presented. That is to say: I don’t believe that secularism is a mere “ideological superstructure,” that we have experiences of the numinous or uncanny fundamentally similar to those of our more “enchanted” ancestors but have merely learned how to explain them away after the fact. Necessarily, our worldview interpenetrates our experience; we can only experience things that we can perceive, and we perceive through categories that we have already formed. But I also don’t believe that secularism merely “buffers” us from those experiences. Indeed, I’m not sure it buffers us at all.
All sorts of people have uncanny experiences, and in my experience they make all sorts of different kinds of sense of them. There is no rule that says that because you are not a deeply religious person you have to dismiss those experiences as signs of incipient madness. To pick an extreme example, there are people who are convinced they have been abducted by aliens from outer space – people who, generally, do not manifest other signs of psychosis. But there are plenty of other people who experience hauntings, prophetic dreams, out-of-body experiences, and so forth. In my experience, there’s no particular pattern suggesting that these experiences are less common among the non-religious, and no particular pattern suggesting that non-religious people are more inclined to discredit these experiences as “obviously” intra-psychic as opposed to being in some way mysterious.
By the same token, most of the people I know who’ve had these experiences don’t take them particularly to heart, though some of them do, sometimes. But who’s to say that more religious people find them profoundly transforming? Can you even experience profound transformation on a regular basis?
I’ve only had one experience that comes close to the kinds of things Ross is talking about, about twenty years ago. I had the most profound, visceral feeling that I was trapped in a container or box and was suffocating – the experience of feeling buried alive. The experience was triggered by something trivial – I think I was glancingly watching a television show about ghosts or something like that – but the experience itself was absolutely overwhelming. And it affected my life deeply; I felt I had to change my life, and quickly. My now-wife, then girlfriend, was a profound comfort to me through the experience, and that kindness profoundly shaped my feelings for and about her. Many of the decisions I made after that, from my career choices to my marriage to my turn toward greater religiosity, can be traced back to that experience.
Now, if you ask me how I’d describe that experience, I’d say it was a severe panic attack. But that is just a label; it’s not a phenomenology. I can imagine, if I were living in a more “enchanted” world, that I might have understood the experience somewhat differently at the very time it was happening, and not only afterwards – that the precise manifestation of the experience might have differed in various ways. But that doesn’t mean I was “closed off” to that kind of experience on account of modernity. I certainly didn’t feel “buffered” in any way.
And the experience affected me independently of my “understanding” of it. Just because I could say, “that was just a panic attack” – that didn’t make the experience any less powerful, or blunt the urgency of responding to it. Explanations don’t necessarily drain experience of power. (I believe William James had something to say about that.)
And that, too, is not an artifact of modernity. Imagine if you were a girl living in fifteenth-century France, and the archangel Michael told you to lead an army to expel the English. That experience felt absolutely real to you. Now, suppose your betters – priests, magistrates, and so forth – told you that it wasn’t the archangel Michael, it was a demon tempting you to sin, and you must recant your testimony and accept that understanding of your experience – that is to say, not let it affect you. Would you recant? Could you? Isn’t that pretty analogous to me telling myself not to worry about the feeling of being buried alive, that it was “just” a panic attack, and not something to take to heart? Or to Butterfield’s partner telling her that she’s being brainwashed by reading the Bible, or Sessions questioning why she’d given up her critical faculties all of a sudden?
Primal experience is possible within all ideological frameworks, secular and religious alike. It can be rejected or “explained away” within all ideological frameworks, secular and religious alike. And it is potentially disruptive of all ideological frameworks, secular and religious alike.
Walter Russell Mead connects the Russian incursion in the Crimea to the Libyan war to draw a general lesson about the utility of a nuclear deterrent:
When Ukraine escaped from the Soviet Union in 1990, Soviet nukes from the Cold War were still stationed on Ukrainian territory. After a lot of negotiation, Ukraine agreed to return those nuclear weapons to Russia in exchange for what (perhaps naively) its leaders at the time thought would be solid security guarantees from the United States and the United Kingdom. The “Budapest Memorandum” as this agreement is called, does not in fact require the United States to do very much. We can leave Ukraine twisting in the wind without breaking our limited formal obligations under the pact.
If President Obama does this, however, and Ukraine ends up losing chunks of territory to Russia, it is pretty much the end of a rational case for non-proliferation in many countries around the world. If Ukraine still had its nukes, it would probably still have Crimea. It gave up its nukes, got worthless paper guarantees, and also got an invasion from a more powerful and nuclear neighbor.
The choice here could not be more stark. Keep your nukes and keep your land. Give up your nukes and get raped. This will be the second time that Obama administration policy has taught the rest of the world that nuclear weapons are important things to have. The Great Loon of Libya gave up his nuclear program and the west, as other leaders see it, came in and wasted him.
It is almost unimaginable after these two powerful demonstrations of the importance of nuclear weapons that a country like Iran will give up its nuclear ambitions. Its heavily armed, Shiite-persecuting neighbor Pakistan has a hefty nuclear arsenal and Pakistan’s links with Iran’s nemesis and arch-rival Saudi Arabia grow closer with every passing day. What piece of paper could Obama possibly sign—especially given that his successor is almost certainly going to be more hawkish—that would replace the security that Iran can derive from nuclear weapons? North Korea would be foolish not to make the same calculation, and a number of other countries will study Ukraine’s fate and draw the obvious conclusions.
This analysis is, on the surface, extremely persuasive. Which is exactly why I think it deserves a closer, more critical look.
First, let’s look at the proposition with respect to Ukraine specifically. Was it even plausible that Ukraine could have held on to an independent nuclear deterrent after the collapse of the Soviet Union? The answer is almost certainly, “no.” Indeed, it’s hard to imagine any action that would more greatly have imperiled the stabilization of the post-Soviet order than such a determination on Ukraine’s part. Western and Russian interests were aligned in wanting to see Ukraine denuclearized; an independent nuclear Ukraine would have been treated as a dangerous rogue state. Russia’s ability to project power in the immediate aftermath of the collapse of the Soviet Union was extremely limited, but Ukraine’s ability to defend itself was even more ephemeral. The best evidence that Ukraine had no real choice but to denuclearize is precisely that Ukraine got almost nothing in exchange for agreeing to hand over its Soviet nuclear weapons.
Assuming, for the sake of argument, that a nuclear Ukraine was a real possibility, how would it have responded differently to the events of the past few weeks? Nuclear weapons would not have changed the election results that brought a pro-Russian president to power – though they would have dramatically increased Russian interest in ensuring a pro-Russian Ukraine. By the same token, nuclear weapons would not have deterred ethnic Ukrainians from taking to the streets. If events continued to play out as they have, and Russia sent troops to Crimea, nuclear saber-rattling against Russia would be completely specious; nobody would believe such a transparently suicidal threat. How would nuclear weapons avail Ukraine in the current crisis? What seems most likely to me is that, if Ukraine had an independent nuclear deterrent, Putin would have intervened much earlier to make sure that Yanukovych remained in power. He certainly wouldn’t have risked a Ukrainian nuclear deterrent falling into the hands of an anti-Russian party.
That point can be generalized. There is considerable evidence that a nuclear deterrent does not suffice to prevent either conventional conflict with other states, or violations of a country’s sovereignty, or regime change. Israel’s nuclear deterrent dates to the 1960s, but did not prevent the surprise Syrian-Egyptian attack in 1973. South Africa’s nuclear deterrent dates to the 1970s, but did not prevent the dissolution of the apartheid regime (which voluntarily denuclearized to prevent its arsenal falling into the hands of the ANC). Pakistan’s nuclear deterrent dates to the 1990s, but did not prevent America from toppling its Afghan ally, or conducting drone warfare and engaging in covert operations within Pakistani territory, including the assassination of Osama Bin Laden. Most obviously, the enormous Soviet nuclear arsenal was of no utility in preventing the sudden and spectacular collapse of the Soviet regime. (Nor, for that matter, was the Russian nuclear deterrent useful in deterring Western intervention to dismember Serbia, a traditional Russian ally.)
Dictators may well learn the lesson from Libya that denuclearization will not bring Western protection – which is true. It does not therefore follow that a nuclear Libya could have done anything different to defeat its insurgency. It is likely that Western powers would have been much more reluctant to initiate a bombing campaign – a suicidal threat would have had more credibility coming from a man literally fighting for his life – but, on the other hand, Western powers would have had a much, much greater incentive to be involved in a Libyan civil war if there were a question of the ultimate disposition of nuclear weapons. Consider: would America take a hands-off attitude to a civil war in Pakistan? It seems to me we would likely be more involved in such a civil war than we are in Syria’s, precisely because the disposition of a nuclear arsenal would be at issue.
The primary utility of nuclear weapons is to deter other nuclear powers from escalating to nuclear warfare. Secondarily, nuclear weapons are useful as a deterrent to conventional war if they can be plausibly deployed in a tactical fashion against a foreign invasion. So: U.S. plans for fighting World War III involved the first use of tactical nuclear weapons against Soviet armor, either in Germany or in Poland. Would the Soviet Union have escalated from that event to a suicidal strategic nuclear exchange? American war planners obviously thought not. Similarly, Pakistan could use tactical nuclear weapons on its own territory against an invading Indian army. That prospect may have deterred India from launching a massive invasion of Pakistan in response to any number of provocations. Nuclear weapons are not useless, in other words, but their utility is distinctly limited.
What are the implications for Iran or North Korea? I doubt either Iran or North Korea was particularly inclined to trust pieces of paper in the first place. The rational case for Iran to go nuclear can only be countered by a rational case not to go nuclear – concrete interests that could be secured by an agreement, concrete risks to refusing to come to an agreement. Perhaps the prospect of full normalization – which would have very substantial economic benefits for Iran – would be enough carrot, while the prospect of continued isolation, being the object of covert warfare, and the risk of Saudi Arabia going nuclear in response to an Iranian bomb are sufficient stick. Perhaps not. North Korea is much tougher, because there is no plausible path forward for the regime within the world community of nations, so there’s not much that can be offered as a carrot.
But the more important point lies elsewhere. Mead says that if Obama “allows” Ukraine to be dismembered, then other countries will draw the lesson that if you want to prevent the same from happening to you, you’d better get a bomb. But even if an aggressive American response were effective in securing Crimea for an independent, pro-Western Ukraine (which I don’t believe it would be), the lesson a country like Iran would draw is certainly that America successfully intervened to secure regime change in Ukraine, and is therefore undoubtedly still determined to do the same in Iran. That is to say, it’s more likely the Iranian regime would identify with Russia than with Ukraine in this situation.
The conclusions are not mutually exclusive, of course. Iranian hard-liners could interpret any attempt to find a negotiated solution to the crisis in Crimea as proof that the West responds well to force, while also finding any attempt to force Russia to withdraw from Crimea as proof that negotiations with the West are pointless (after all, Yanukovych negotiated a deal with the opposition under Western auspices, and the opposition simply broke the deal). But if Mead is trying to make a case that a more forceful Administration response to the situation in Crimea would be reassuring to Iranians inclined to negotiate in good faith, I think he’s kidding himself.
I hope Daniel McCarthy is right that Russia doesn’t “want” Crimea. He certainly seems very sure of himself about that. But the history of Abkhazia, South Ossetia and Transdniestria doesn’t leave me particularly sanguine about his prediction for where things go from here.
The thing is, Russia no doubt has a preference hierarchy with regard to Crimea. At the top of the hierarchy is, undoubtedly, a unified Ukraine closely aligned with Russia. Next in the hierarchy might be a unified Ukraine aligned neither with Russia nor with the West. But it might not – and even if that choice is next on their list, at some point Russia undoubtedly prefers keeping a firm hold on Crimea even at the cost of hostile relations with Kiev.
The country that probably puts stability in Ukraine at the top of its preference hierarchy is Germany. A hostile Russia and a nationalist Ukraine are both problematic from a German perspective; a Ukrainian civil war would produce an exodus of refugees, many of whom would wind up in Germany; and it’s not 100% clear that Germany is that interested in EU expansion anyway, since it has to foot the biggest bill for bringing in poorer countries.
What’s most troubling to me is that I can make a reasonable case that Ukrainian nationalists should welcome Russian intervention in Crimea. The status quo ante meant a large Russian bloc, and a large Russian naval base, within Ukraine. The former makes it harder for Ukrainian nationalists to dominate the country electorally; the latter makes it harder to maintain a policy of distancing from Russia. Lose Crimea, and both problems are solved. Of course, nationalists can’t simply allow sovereign territory to be seized by enemy forces. But what if Crimea achieves de facto independence, but is not annexed by Russia and independence is not recognized by any other country? Kiev could demand an end to the violation of its sovereignty. And Russia could refuse to accede to that demand. And this could become the new status quo. Wouldn’t that, in the short term anyway, be optimal from the perspective of a Ukrainian nationalist?
There are examples beyond the Russian periphery that suggest these kinds of informal arrangements can last for quite a long time. The Krajina region of Bosnia, for example. Or Turkish Cyprus.
We should not assume, in other words, that Russia is playing a sophisticated game aimed at a known outcome. They are, undoubtedly, making the moves they think are optimal for preserving what they can of their influence in their periphery. But they don’t hold all the cards, and a negative-sum outcome may appear to be optimal for both sides in the conflict if a positive-sum outcome – which, in this case, would be a unified Ukraine with friendly relations both with Russia and with the EU – appears unlikely.
That’s the kind of situation where honest, engaged mediation by trusted, powerful outsiders can, potentially, make a difference. Unfortunately, I can’t think of any outside powers that are powerful, trusted, and honest.
Ibsen is a playwright with whom I have a conflicted relationship. On the one hand, he is the progenitor of a type of theater that I think has largely run its course: a theater of realistic characters and pressing social “issues” and everything played out under the proscenium arch. On top of that, he’s neither Chekhov nor Strindberg; I rarely sense the fineness of perception of the Russian, or the uninhibited ferocity of the Swede.
On the other hand, he basically invented theater for the modern age. There’s something a little ridiculous about someone as artistically insignificant as myself venturing to wonder whether I like or don’t like Ibsen. The question, ultimately, isn’t his stature; it’s whether he still works for audiences today.
Well, seeing the electric Young Vic production of A Doll’s House at the Brooklyn Academy of Music’s Harvey Theater this past week has settled the question for me. Yes, he emphatically does speak to contemporary audiences – if you treat him, his characters, his form of theater, and his text as though they are contemporary.
Start with the form of theater. A Doll’s House plays out in the Helmers’ apartment, and the play doesn’t provide an obvious opportunity to step outside the four walls of that space and engage the audience directly (as, say, the recent Broadway production of An Enemy of the People did). So what set designer Ian MacNeil did was to put the whole apartment on a turntable, so that we feel not like spectators at a scene being played out on a stage, for us, but like people spying on a drama playing out in an actual space. That’s more a cinematic than a theatrical feeling – but, for a contemporary audience, making us feel like we’re behind a camera is, ironically, probably a good way of making us feel like something is real. (The ravishing production of Lady Windermere’s Fan that I saw this past summer in Canada similarly used elements that recalled film conventions to marvelous effect. It’s almost the opposite of actually putting film and video into a stage production, which often makes the film look stagy and fake.)
Next: the translation, by Simon Stephens. It feels alive, present, and does so without being exactly colloquial. And the performances match that feeling: we never forget that we’re looking at people in the 19th century, but we’re also never explicitly reminded of it. They act as if we belong in the same room together. And so we feel like we are.
It’s an extraordinarily subtle thing. It’s in the way that Nora, played by Hattie Morahan – who looks, moves, even breathes like a young Teri Garr – flirts with her dear friend, Dr. Rank, played by a strikingly handsome Steve Toussaint. She shows him a bit of ankle – that’s all – and the moment is alive with possibility. And, see, that’s how flirting still works – it doesn’t have to be very explicit to be very real. But it’s exciting, not mortifyingly forbidden the way it so often seems in period-y nineteenth-century drama. It’s in the way that Nora’s husband, played by a convincingly clueless Dominic Rowan, paws his wife when he’s drunk, and the way she resists without wanting to make a “thing” of resisting – and the sitcom humor of the way he reacts when interrupted, over and over, by unwelcome visitors. That’s all real to our life now – but it doesn’t feel false to theirs. It’s true to the relationship we’ve come to see, and know, on stage.
So, too, with Nora’s big finish. In Ibsen’s time, her decision to suddenly walk out of her marriage because she discovered her husband was a stranger to her – because of the horrible way he reacts when he discovers she forged her father’s signature on a document so she could take out a loan, which she needed for – well, that doesn’t really matter; that’s so much plot mechanics – when she walked out, it was scandalous. But here’s the thing: it’s still scandalous. It doesn’t matter that divorce happens all the time – when a happily married mother of three suddenly walks out of her life, we’re shocked. And we are – because Morahan doesn’t sound like she’s making a statement. She sounds like her life has come apart, all of a sudden. Which is just what is happening. Which is something that still happens, now, and is just as wrenching and disorienting when it does.
A Doll’s House was understood at the time as a feminist text, which Ibsen objected to – and Joyce did as well; I believe his line was “if he’s a feminist then I’m an archbishop.” Which is a fabulous line, because, you know, it’s also a compliment: Joyce probably knew as much Catholic doctrine and history as many archbishops. And Ibsen’s objection may, similarly, have been that he wanted to understand why a woman would do such a thing, not to make a point about women or marriage or structural oppression. But that – to try to understand a woman from the inside – is probably the most feminist thing a male artist can do.
It seems the director, Carrie Cracknell, sees things very much the same way – so I saw the play she wanted me to see. Her comment on the translation:
The intention of Simon and I, when we were working on the new version, was to really respect the original and not to try to make a radically departing version, but to release the original play for a contemporary audience. The way Simon approached that was to cut some of the slightly more over-expressed text in the translation, so that it felt more psychologically attuned to the way people speak now. We were also interested in uncovering certain elements of the play — for example, the relationship between Nora and the children, and the sexual dynamics between Nora and Torvald, which in its day was slightly more guarded in the way it was written. Simon made that more expressed and visceral in his version. But we also imagined our version like revealing layers of dirt from an old painting, nothing any more radical than that — trying to find the polish and shine of the original play.
And on the feminism of the ending:
The play has rightly been cast as a feminist play because it’s the first time we really staged a woman breaking out of the destructive confines of marriage. But I also appreciate the fact that Ibsen felt the play was more than that, and that he was trying to express something bigger or deeper about the individual within societal structures. It just so happens that heroine was a woman, and a woman breaking out of those structures. I also feel that it’s important that the final door slam [which occurs at the end of the play] isn’t a moment of triumph, not a moment of catharsis. It has to be the beginning of an unraveling of a life lived — of the lives of the three small children, of the lives of the staff, of the life of Torvald, and the life of Nora. They all have to wake up the next morning and work out who they are in this new perspective, and Nora has to head off into an uninhabitable world and find out who she is. So on one level she’s a feminist heroine, but the play is also darker and murkier and more complicated than a sort of triumphant finale.
The finale isn’t triumphant – but the production is a triumph. Go see it.
A Doll’s House plays at BAM’s Harvey Theater through March 16th.
Well, I suppose not obligatory – nobody’s making me write it – but this is a year where there is a modest amount of drama in the “Best Picture” category, and it’s also a rare year where I’ve seen almost all the major-category nominees. So I should probably say something. This, then, is something.
This is a funny year in which there were a large number of worthy films and no single film that is obviously a “Best Picture” film. Compare “12 Years a Slave” to last year’s “Lincoln.” I found Steve McQueen’s film to be far more interesting than Spielberg’s, but also much less satisfying, precisely because McQueen seems aggressively uninterested in providing the satisfactions of a traditional narrative.
Or compare “American Hustle” with “Silver Linings Playbook,” David O. Russell’s 2013 nominee. His newer film is much more ambitious, much more complex – and much more of a narrative mess. That makes it more interesting in many ways – but also much less of a “Best Picture” type of film.
Both of these films make you think about what they are doing even as you experience them. They don’t exactly carry you along. But that “on a great ride” feeling is a big part of what people love about the movies. So I think both “12 Years a Slave” and “American Hustle” have a wall to get over to win Best Picture that another film – which I liked less – doesn’t.
That film is “Gravity.” Compare “Gravity” to last year’s “Life of Pi,” another technically pathbreaking, spiritually oriented film about an individual adrift in a hostile environment. “Life of Pi” had a metafictional frame that contained the “message” of the movie, while the main story was a frankly fantastical one. That metafictional layering was clearly intended to make you think, even as the story of the boy and the tiger had a visceral power. “Gravity,” by contrast, keeps you rooted in the experience of the film itself; the “message” is the weakest, least interesting aspect of the film. That slightness might hurt it, of course; “Best Picture” films are supposed to be important. But I’m betting not.
Best Picture is expected to be a contest between these three films, with the other six nominees as dark horses. I have a hard time seeing “American Hustle” win. I definitely preferred “12 Years a Slave” to “Gravity,” but I recognize the substantial technical achievement of the latter. (That long “shot” toward the beginning of the film deserves an Oscar all of its own.) There’s some talk that they may split the Picture and Director honors, Cuarón winning for Director while “12 Years a Slave” wins for Picture, but the funny thing is that what I liked best about both films was the direction, while other elements (particularly the screenplays) struck me as relatively weak.
If I were voting, from these nine films, I’d probably vote for “12 Years a Slave,” which is a consequential, powerful but flawed film. There are individual scenes that are going to stay with me forever, even if the film as a whole felt like less than the sum of those scenes.
But if I’m predicting, I’d predict “Gravity.”
My thoughts on the rest of the “Best Picture” nominees:
“Captain Phillips” (which I wrote up here) has stayed with me more for the performance of Barkhad Abdi than for anything else.
“Dallas Buyers Club” is the only “Best Picture” nominee I haven’t seen.
“Her” (which I wrote up here) has a great production design and a set of really compelling performances, but it is so, so sad, and not, ultimately, in a cathartic way.
“Nebraska” (which I took two cracks at, here and here) hasn’t stayed with me as powerfully as I thought it might have. I still think Bruce Dern gave a great performance, and I did love June Squibb, but I worry that the film wasn’t a challenge for Alexander Payne – that it took him places that, mostly, he already knew.
“Philomena” I haven’t had a chance to write up, other than in passing in my post from yesterday on religious films. I don’t have too much to say about it; it’s a sweet little film, well-written and well-structured. It certainly benefitted from low expectations on my part; it didn’t sound like something I’d like, and lo and behold, I liked it. I’m sure it’s thrilled to be nominated.
“The Wolf of Wall Street” I also haven’t had a chance to write up – and I should. DiCaprio’s performance is technically amazing – that scene where he has to get out the door, down the stairs and into his car while unable to stand up because he’s taken too many quaaludes is a comic tour de force. And Scorsese is absolutely in control of his film. But I found myself falling between the “love” and “hate” camps with respect to the film, in a place of relative indifference. Why? Two reasons. First, the film is too short. I’m entirely serious. People chortled when Thelma Schoonmaker said it was really hard to edit the film down from four hours, but I felt like I could see what she meant. They managed to preserve all these set pieces, but I felt sometimes like multiple peripheral characters never got defined, or got lost, because there wasn’t time to let us understand who they were. And I assume that’s because too much was left on the cutting room floor. The second, more important reason, though, is that Jordan Belfort just isn’t a very interesting person. His story is a boringly self-aggrandizing one. This isn’t really a story about Wall Street, because Belfort was a petty criminal who just made it much bigger than you’d ever expect. It’s like, what would happen if Ricky Roma from “Glengarry Glen Ross” somehow made hundreds of millions of dollars. So, he’d be a jerk on a colossal scale. What else? Not much else.
Now, for the other categories:
Best Director: Cuarón, for “Gravity.” He’ll get this one whether “Gravity” gets Best Picture or not.
Best Actor: Everybody says it’s McConaughey’s to lose, and since I didn’t see “Dallas Buyers Club,” I can’t really venture an opinion. Of the other four nominees, I would probably pick Bruce Dern.
Best Actress: Everybody says it’s Cate Blanchett, who has swept every prior award this year. I saw “Blue Jasmine,” but haven’t written it up. I thought she was fantastic, and single-handedly saved the film from being kind of unbearable. I would certainly vote for her.
Best Supporting Actor: Everybody says it’s Jared Leto, and again, I didn’t see “Dallas Buyers Club,” so I can’t say. I’d vote for Michael Fassbender from the other four nominees, but I wouldn’t be upset if either Barkhad Abdi or Bradley Cooper won.
Best Supporting Actress: I predict Lupita Nyong’o. I’d also vote for her, even though I adored Jennifer Lawrence and think June Squibb is a hoot and a half.
Best Original Screenplay: this will probably go to “American Hustle,” and I’m not sure how I feel about that because I feel like the screenplay has loads of marvelous stuff but also real structural problems. On the other hand, it’s a much more interesting screenplay than “Nebraska,” and I actively disliked the writing of “Blue Jasmine” – so maybe I’d vote for it after all. Or maybe I’d vote for “Her,” just for sheer cussedness. Yeah, I’d probably vote for “Her.” I wish I could write in “All Is Lost” – a screenplay with essentially no dialogue. Just for total cussedness.
Best Adapted Screenplay: this will surely go to “12 Years a Slave,” which I’m not thrilled about since I think the screenplay is the weakest part of the film. I would probably vote for “Before Midnight.”
I really hope “The Act of Killing” wins Best Documentary, because that film knocked me flat – it was by far my favorite film of the year. Would have written it up except Eve Tushnet got there first with the best headline ever (and an excellent review under it).
I’m embarrassed to say that I’ve seen none of the Foreign Film nominees.
Technical awards: “Gravity” should take the lion’s share of these: Cinematography, Editing, Sound Editing and Mixing, Visual Effects. People say it will also win Best Score; I admit, I don’t remember the score. I do remember the score for “Her,” which drove me nuts, and which suited the film perfectly, so I’d vote for “Her.” “Gravity” might also win Best Production Design, but I would definitely vote for “Her.” Costume Design I would vote for “American Hustle”; I don’t really have a view on who will win. What else? Makeup?
Feel free to tell me your own predictions in comments. I can still change mine for the pool up until Sunday night.
Matt Yglesias wants us to stop worrying because the national debt is still less than our national assets:
[O]n a net basis the United States of America does not have any public debt and perhaps never did.
The conventional way for debt scaremongers to measure the national debt is to compare gross public debt to GDP. But the normal way you measure the debt load of a business or a household is to ask for a net figure. Just because you have hundreds of thousands of dollars in mortgage debt doesn’t mean you’re a pauper. In fact it probably means you’re a rich person who owns an expensive house. It is of course possible to take out a large mortgage and then end up “underwater” because house prices decline, but it’s simply not the case that a large amount of gross debt is a sign of overextension. It’s typically a sign of prosperity and creditworthiness.
But “net debt” doesn’t mean the difference between assets and liabilities – that’s the definition of net assets. When liabilities are greater than assets, you are technically insolvent. I doubt that anyone is reassured by the fact that the United States is not technically insolvent – I should hope nobody thinks that we needn’t worry about public indebtedness until that happens.
“Net debt” on the other hand means liabilities minus current assets – assets that are either cash or readily convertible to cash. If Yglesias’s chart is labeled correctly – and I can’t see his underlying data, so I may simply be misinterpreting what he’s trying to show – then we’re looking at overall public assets versus public debt. Most public assets are certainly not readily convertible to cash. And the public debt is frequently quoted as a net debt figure – that is, debt minus short-term receivables, cash and other marketable securities – so I wouldn’t be entirely surprised if the red line is already a net debt figure. Without access to the underlying data and clear definitions thereof, I can’t be sure, but it certainly doesn’t look like a chart showing that the United States never had any net debt – and it couldn’t be, because the US did, and does, have net outstanding debt.
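The distinction between the two definitions can be made concrete with a toy balance sheet. The numbers below are hypothetical, chosen purely to illustrate the arithmetic, not to describe the actual U.S. fiscal position:

```python
# Toy balance sheet, in $ trillions. All figures are hypothetical,
# chosen only to illustrate the definitions discussed above.
total_liabilities = 20.0   # gross public debt
total_assets = 25.0        # land, buildings, infrastructure, cash, etc.
current_assets = 2.0       # cash and assets readily convertible to cash

# Net assets: what would be left if everything were sold at book value.
net_assets = total_assets - total_liabilities   # 5.0

# Net debt: liabilities minus only the liquid assets. This is the figure
# analysts usually mean, and it can be large and positive even when
# net assets are positive.
net_debt = total_liabilities - current_assets   # 18.0

print(net_assets, net_debt)   # 5.0 18.0
```

On these toy numbers the country has positive net assets (it is not insolvent) and, at the same time, a substantial net debt – which is exactly why conflating the two terms is misleading.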
Yglesias disparages the debt-to-GDP ratio, but it’s a pretty good rough-and-ready tool for measuring fiscal health, because your GDP is your tax base, and you service your debt out of taxes. If interest rates are very low, you can service a higher debt more cheaply, but over the long term rates should have a close relationship to nominal GDP, so if you’re projecting very low rates for a very long time, you’re also probably forecasting very low nominal GDP for a very long time. Which would be another reason to worry about a high degree of indebtedness.
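The arithmetic behind that point is simple: annual interest cost as a share of GDP is just the interest rate times the debt-to-GDP ratio. A toy calculation, again with hypothetical numbers:

```python
# Interest cost as a share of GDP = interest rate x (debt / GDP).
# Figures are hypothetical, for illustration only.
debt_to_gdp = 1.0                          # debt equal to 100% of GDP

cost_at_low_rate = 0.02 * debt_to_gdp      # 2% of GDP goes to interest
cost_at_high_rate = 0.05 * debt_to_gdp     # 5% of GDP goes to interest

# The same debt load costs 2.5x as much to service at the higher rate,
# which is why low rates flatter the picture for a heavily indebted state.
print(cost_at_low_rate, cost_at_high_rate)
```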
For a business or household, you’d also pay attention to leverage – that is to say, not merely how above water you are, but how close to the water line. If you own a house at 5% down, you have a more risky financial position than if you own a house at 25% down, regardless of the value of your house. If you’re a bank with only 3% true equity capital, you’re running a riskier bank than if you have 10% true equity capital. To a certain extent, this is true for countries as well; there’s generally no way for foreigners to foreclose if the value of national assets drops below net debt, but currency crises aren’t exactly a picnic either. In the context of banking, Yglesias understands the importance of leverage. Why, here, does he suggest that the only thing that really matters is whether you’re in the money or not?
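The down-payment comparison can be made concrete with a toy calculation (a hypothetical house price and a hypothetical 10% price decline, purely illustrative):

```python
# Two hypothetical buyers of a $400,000 house, identical except for
# the down payment. A 10% price decline puts the 5%-down buyer
# underwater while leaving the 25%-down buyer comfortably solvent.
price = 400_000
equity_after_drop = {}

for down_pct in (0.05, 0.25):
    equity = price * down_pct              # initial equity (down payment)
    debt = price - equity                  # mortgage balance
    value_after_drop = price * 0.90        # the house falls 10% in value
    equity_after_drop[down_pct] = round(value_after_drop - debt, 2)

print(equity_after_drop)   # {0.05: -20000.0, 0.25: 60000.0}
```

Same house, same price decline: the thinly capitalized buyer is wiped out, the well-capitalized one merely dented. That is the sense in which leverage, not just net worth, measures risk.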
Here’s what I see when I look at the chart: from 1850 to 1950, both public debt and the value of public assets grew at a much faster rate than the economy as a whole. Debt expanded rapidly in wartime (the Civil War, World War I, World War II), and tended to contract (as a percent of GDP) thereafter. Since 1950, however, the value of public assets has been relatively more stable (rising modestly through 1970, then falling through 1990, then rising again to somewhat above the 1970 peak) while the public debt first dropped dramatically (by nearly 50% to 1970, measured as a percentage of GDP) and then shot back up to around its 1950 peak. We’re not yet as leveraged as we were after the Civil War or after World War II, but our “net national equity” is smaller than it was in any other period, and (if you extrapolate out) shrinking.
Piketty’s argument is (in part) that the 1970-2010 period is much more representative of what trends in wealthy countries are going to look like for the foreseeable future than was the period from 1900 to 1970. That’s a period in which public indebtedness was rising rapidly while public assets rose barely at all. Why that’s a basis for complacency, I have no idea.
Michael Cieply, writing in The New York Times, tells an anecdote to illustrate Hollywood’s aversion to religious films for a mass market audience:
It was in the mid-1990s, and a good writer, earlier nominated for an Oscar, had an earnest modern-day Christ story about a damaged man in Los Angeles who might or might not be the Messiah. “The Greatest Story Ever Told” meets “Falling Down,” more or less.
We tried it out on Columbia executives, but four minutes into the pitch the studio’s production president ran out to take calls. A remaining vice president nodded off in his seat. “At least I’ve got an anecdote,” the writer muttered.
With a few exceptions that have generally skewed toward humor or horror — the God comedy “Bruce Almighty,” the angel romance “Michael” and the exorcism film “The Rite” come to mind — it has been that way for decades. Major studios suddenly get distracted when anyone suggests tackling serious religious subjects.
Hmmm, I thought. Mid-1990s. I seem to recall that there were films made more or less around that time, with a decidedly similar theme. Here’s one. Here’s another. And another. And another. Is it possible that Cieply’s anecdote of Hollywood indifference is one that could be told about, well, almost any kind of script?
Now, each of the movies I linked to has a distinct sensibility of its own. But that “Christ may be walking among you, where you least expect to find him” theme is common to all. Only one of them, the Canadian “Jesus of Montreal,” is a small-scale film. The other three were solid mid-budget films aimed at a mass audience, and all of them did well by one measure or another. “The Shawshank Redemption,” which probably did the worst of the bunch in its initial box office, is now widely cited as one of people’s favorite movies of all time.
Cieply complains that Hollywood used to make films like “The Ten Commandments,” but won’t take that kind of risk anymore. Did he manage to miss “The Prince of Egypt,” a book-of-Exodus-based biblical epic put out by DreamWorks in 1998? Or does it not count because it is animated – notwithstanding that animated films have been some of the most successful, both financially and in terms of cultural impact, of the past twenty years?
As Damon Linker points out, films like “The Ten Commandments” are hardly serious takes on religious themes. Have there been many genuinely serious films about religion? No – but there never have been, and there certainly weren’t piles of them in the 1950s. Meanwhile, he identifies “The Chosen,” “Shadowlands,” “The End of the Affair” and, especially, “The Tree of Life” as serious films about religious questions and living a religiously serious life. If I were making my own list, I’d add the powerful ’90s indie, “Household Saints,” the searchingly skeptical Coen Brothers film, “A Serious Man” (which, as I’ve noted before, makes a fascinating double-feature with Malick’s “The Tree of Life”), and much of the career of Bruce Beresford, with particular emphasis on “Black Robe” and “Tender Mercies,” the latter one of my favorite films of all time. If you add in films that appeal to a spiritually-minded audience without having anything explicitly religious about them, the list is longer. In honor of the passing of Harold Ramis, I’ll mention only one, “Groundhog Day,” the “It’s a Wonderful Life” of this generation.
But Cieply’s dismissive response to two serious films on religious themes reveals that he’s looking for something specific in a religious film. Those are current Oscar nominee “Philomena” and Martin Scorsese’s monumental 1988 drama, “The Last Temptation of Christ.”
“Philomena” is a sweet little film about a sentimental and culturally-sheltered old Irish lady (Judi Dench) searching for the boy she gave up for adoption fifty years ago. The story is told through the eyes of a cynical journalist (Steve Coogan) who agrees to help her on her quest, and in the process uncovers exactly the tale of Catholic corruption and mendacity that he expected to find. But it’s not his story, and the film is only incidentally an exposé of the Catholic church. It’s Philomena’s story, and her story is the story of someone who holds fast to her faith even in the face of real wrong done to her by her church. Our own journey, over the course of the film, follows the journalist’s – we start out condescending to her, to some degree, then feel sympathy, and finally we’re put in our place by her. To read this simply as an “anti-Catholic” film in the vein of “Agnes of God,” as the New York Post did (which is all Cieply tells us about the film) strikes me as willful misreading – unless you define “anti-Catholic” to mean “honest about the sins of the Catholic hierarchy.” (The film is based on a true story.)
Similarly, “The Last Temptation of Christ,” adapted by Paul Schrader from the novel by Nikos Kazantzakis, is a very searching examination of Jesus’s experience, and specifically the idea that, as Christ is supposed to be fully human, he faced the ultimate temptation from Satan in the form of an offer of a normal, happy human life. It’s not only a serious film about the central Christian myth, it’s one that takes that myth deeply seriously. It’s not a skeptical film in any meaningful sense of the word. But, of course, it was protested by Protestant and Catholic groups who didn’t like its emphasis on the human side of Christ’s dichotomous identity, the notion that temptation was something Jesus actually experienced, in a deep way, a test he didn’t pass easily.
These films are in no sense anti-religious; they are obviously serious; and they clearly deal with religion, religious people, and religious themes – and they take the faith of the religious seriously. But they aren’t necessarily films that will make religious people comfortable – they aren’t obviously flattering to religious sensibilities, and may indeed offend those sensibilities. Is that the standard of what counts as a “religious” film – something pious and flattering to religious believers?
I think so, because Cieply’s complaint seems to be about marketing rather than about substance – he’s interested in films that “appeal to a Christian audience.” As Cieply knows, there is a whole industry of Christian filmmaking out there providing that kind of product. Hollywood is perfectly good at flattering its audience – that’s its standard modus operandi, so I wouldn’t be at all surprised if Hollywood tried to break into a lucrative niche market. And if it doesn’t, then Christian filmmakers will fill the void – they are already doing so, much as Tyler Perry has done with a different lucrative niche market that Hollywood has had trouble cracking.
But what Cieply seems to want is a variety of mass-market films with a sensibility that flatters a specifically religious audience. The barriers to that, though, aren’t some kind of anti-religious bias in Hollywood, which was likely as secular in the 1950s as it is now, and just as focused on the bottom line. They’re changes in film economics – and cultural changes in the larger society.
As Cieply surely knows, there are more movies being made than ever, covering a wider variety of stories and aimed at a more diverse audience. But the studios are making fewer and fewer films, and the ones they are making keep getting bigger. What this has meant is that the middle-budget film is becoming a thing of the past. Films either get made for under $20 million (and usually much less than that), or for more than $100 million. Films in the former category have trouble achieving the epic scale that Cieply clearly wants to see. Films in the latter category have to be essentially immune from box-office failure, which is why we see so many blockbusters based on properties with a pre-existing audience, so many films with the same formulaic story line, and with protagonists and villains that will play everywhere from Johannesburg to Jakarta.
Meanwhile, in the 1950s American culture was broadly but shallowly Christian; it was not the 1850s. Nor was it like today, when there is a much more substantial non-Christian or even anti-Christian segment of the population, while conservative religious groups are much more engaged in active resistance to the culture at large. The 1950s Waco, Texas family in Terrence Malick’s “The Tree of Life,” for example, is Christian, but not counter-culturally so; their Christianity doesn’t stand out as a fact. This change in American religious demographics has implications for the marketing of a film; you can’t just assume that a mass audience shares common religious assumptions, which means that if your film partakes of a particular set of them you risk confusing or alienating part of your audience. Far safer to stick to Oprah-approved spiritual sentiments without much content.
So, on one level, Linker is right that “Hollywood doesn’t have a religion problem. It has a quality problem.” But to the extent that this is true, it is substantially a function of economics.
I think about this question a lot, because one of my scripts, probably the one I’m most attached to, deals very centrally with religious themes, and takes religion quite seriously indeed. It’s substantially inspired by “Tender Mercies” – and if anybody reading this knows Bruce Beresford, please put us in touch. But I worry whether it could get made, precisely because it isn’t exactly flattering; it’s neither a comfortably secular film nor a comfortably feel-good religious or “spiritual” film. It doesn’t obviously fit in a box. And the movie industry likes its boxes. (Also I worry that it just isn’t good enough – but it’s my baby; I believe it is!)
Meanwhile, the more obvious complaint to make about Hollywood with respect to religion is precisely that it goes in a box – that it’s an “issue” rather than being portrayed as simply a part of life for the majority of characters, whereas in fact this is the reality for much of America (though certainly not all of it). Making movies with serious religious themes is hard because making any film is hard, and making anything spiritually serious is hard, so doing both is almost impossible. Including religion as a routine component of character just requires research.
A variety of states are contemplating statutes that would affirmatively allow various kinds of discrimination by private actors against gay couples. The asserted concern is that same-sex marriage violates deep and sincere religious beliefs of many people, and that, in the absence of a specific immunity carved out in law, private individuals may be required to provide services that, to the provider, feel tantamount to an endorsement of such a marital state.
Assuming, for the sake of argument, that such efforts are sincere (they may or may not be, but I think that’s the right assumption to start with), the problem of course is that an explicit carve-out in law allowing discrimination on the basis of sexual orientation is prima facie invidious discrimination. It’s worth pointing out that the Supreme Court has already struck down laws manifesting such discrimination for failing the rational basis level of scrutiny – the lowest level of scrutiny. I’m skeptical of their reasoning, which appears to me more to suggest a heightened level of scrutiny is being applied, comparable to sex- or race-related cases; but it’s not terribly material why exactly the Court feels such laws are illegitimate; they clearly do.
And, whether or not you endorse the entire edifice of equal protection jurisprudence, it’s a bit of a hard lift, I think, to argue that there is nothing wrong with singling out a class of people as being ok to discriminate against. It would be a much easier lift to provide religious protections without identifying a uniquely “unprotected” class.
So, for example, if the issue is being coerced to provide services for marriage ceremonies that violate one’s religious beliefs, why not write a law specifying that notwithstanding any anti-discrimination statutes, nobody can be required to provide services for a wedding ceremony which violates their religious beliefs? Would that allow florists to discriminate against gay weddings? Yes. It would also allow florists to refuse to provide flowers for a Catholic who was getting re-married after a divorce, or for a Jew marrying a non-Jew, or for an Indian wedding that involved pagan idolatry, or for a polyamorous ceremony taking place on a cruise ship. If providing flowers for a wedding amounts to endorsement, then I can see very good reasons for religious believers of various stripes to object to one or more of the weddings described. (Or maybe not – maybe there is only one group anyone cares about actually discriminating against; notwithstanding what may or may not happen, the law at least would be neutral.) If the issue is protecting florists from feeling they are endorsing weddings that they believe are wrong, then the statute should address that issue generally, and need make no specific reference to gay couples.
Some of these laws are being written even more broadly, in that they cover not just services for a wedding ceremony but any services to gay couples. So, a hotel owner might, under such statutes, be able to refuse a room to two men who are married, even though he would not refuse a room to a man and a woman who are married, or (possibly) even a man and woman who were unmarried. Ditto for restaurateurs, etc. Here, again, it’s unclear why gay couples should be singled out uniquely for being the object of discrimination.
If the issue is that the guest professes something that is religiously objectionable to the proprietor, and promotes it publicly by participating in a ceremony such as marriage (or simply by letting people know he is gay), then presumably there are other such professions that might be made that are equally deserving of protection. For example, I know many people who find proselytism religiously objectionable. Why shouldn’t a proprietor be allowed to discriminate against individuals who engage in such activity? What is the difference between endorsing the legitimacy of a gay union and endorsing the legitimacy of Islam, or Mormonism, or even mainstream Christianity? If merely providing a room to a married gay couple counts as endorsing their marriage, then surely providing rooms to a Mormon mission counts as endorsing that mission. Right? A properly worded statute not invidiously aimed at stigmatizing gay couples by singling them out would need to allow for general discrimination against any individual whose declared conduct or identity poses a religious objection to the proprietor or service-provider.
This is roughly what Arizona did. Actually, Arizona went considerably further, making an asserted “substantial burden” on an individual’s religious freedom a legitimate defense against individual violations of any state law, regardless of whether it is generally and neutrally applicable. If I understand the law correctly, not only would it legalize a wide variety of types of private discrimination, not limited to my examples above, but would do much more. It would legalize polygamy and marriage with underage girls (both sanctioned by so-called fundamentalist Mormon groups). It would permit public school teachers to explicitly proselytize to their students (I’m quite certain you could find fringe Protestant groups or individuals who hold that such witnessing is mandatory at all times). I’m not sure, but I think if you founded a Church of Nude Defecation, and declared that God told you the Arizona state legislature was your temple, the state of Arizona could not expel you for practicing your faith in the place that God had designated.
Even if the law isn’t quite as nuts as that, it’s pretty nuts. Most people don’t actually want to repeal the process of balancing different interests by making one principle an absolute trump card. They just want to adjust the balance slightly when they don’t like a particular result. Which is completely fine – continual readjustment is exactly what that balancing act requires.
And this is a balancing act. The principle of non-discrimination is plainly in conflict with the principle that people should be free to deal with whomever they damn well please, and not with anybody else. Both principles are weighty and valuable. If the law required you to provide flowers for your ex-wife’s wedding to the guy who used to be your best friend, you would obviously suffer an injury. Well, somebody morally appalled by gay marriage who is coerced, by the law, into providing flowers for a gay wedding (or else exiting the florist business) has also suffered a real injury. But so has somebody who is disgusted by black people eating alongside white people when he is prohibited by law from running his restaurant according to the rules of racial purity to which he subscribes. The question is whether there is any remedy for that injury that doesn’t cause a much greater injury to others.
There is nothing wrong with adjusting the balance of equality-versus-freedom. Of course, as the Arizona law suggests, doing so may get you a lot more than you bargained for. But adjusting the balance only to permit discrimination against married gay couples transparently singles out those couples as uniquely unprotected. It’s practically a textbook example of invidious discrimination in law. If you want to adjust the balance, you have to adjust the balance generally. You don’t just make an exception for people you don’t like.
I am pleased to see that the Pentagon is looking at meaningful force reductions and making some tough choices about what equipment is necessary, and that there is some recognition that this will necessitate some change in mission. But I strongly suspect that the “pre-World War II Army” headlines are designed to alarm, rather than inform.
A few reasons why:
- The Army is the service branch that is being shrunk significantly. There are cuts elsewhere, of course, but we’re hardly going back to a pre-World War II Navy.
- A much bigger reduction in the size of the Army took place after the end of the Cold War and the Gulf War. Over the course of the 1990s, active personnel shrank by roughly 1/3. There was some ramp-up over the course of the 2000s, but the service never approached the Cold War levels of the late 1970s and 1980s, to say nothing of the wartime peaks of Vietnam or Korea. “Win-hold-win” was a doctrine that took hold in the 1990s, and is not a consequence of the proposed reductions.
- The proposed reduction takes active Army personnel down to a level only modestly below its 2000 level. That level is more than twice the size of the active Army circa 1940.
- Comparisons to the pre-World War II Army are specious anyway because the modern Army operates in such a wildly different technological environment.
So what’s the reason for describing the proposed reductions that way? My base-case assumption is that “lowest levels since 1940” is just a lot more dramatic than “below the levels of 2000” or “largest reductions since 1992.” But it is potentially deceptive precisely because it is more dramatic.
The proposed changes in force structure do not imply a shift toward non-interventionism. They will make it even more difficult to contemplate long-term, large-scale occupations, but such occupations would have been difficult to contemplate even with a 500,000-person Army. That still leaves very much open the use of force in more “discrete” ways – drones, Special Forces, etc. – that have been the hallmark of the Obama Administration since the beginning of the drawdown in Afghanistan. We should also remember that fighter jock Donald Rumsfeld also advocated a lean and mean Army, and planned the Iraq War precisely as a demonstration of how much we could achieve without deploying an occupation-scale force. We all know how that turned out, but while some learned the lesson, “don’t do that again,” others learned the lesson, “we need to learn how to do that better before we do that again.”
I’m not suggesting that advocates of a more restrained foreign policy shouldn’t be pleased by the proposal. This is the way you turn an aircraft carrier: slowly. It should just be clear that this is another incremental turn away from the Cold War force structure. It’s compatible with a reorientation of American foreign policy, but it doesn’t constitute such a reorientation.
Damon Linker asks whether gay marriage isn’t the next logical step in a cultural progression that begins with Christianity’s radical egalitarianism:
For Tocqueville, the march of equality was upending age-old institutions and moral habits “in all the Christian world.” It was a “providential fact,” by which he meant that there was nothing anybody could do to stop it.
The ultimate source of the democratic revolution — the motor behind its inexorable unfolding — is the figure of Jesus Christ, who taught the equal dignity of all persons, and declared in the Sermon on the Mount that the last shall be first and the first shall be last, and that the meek shall inherit the earth.
These are among the most subversive teachings ever uttered — and according to Tocqueville, Western civilization has been working out their logic for the better part of two millennia, as political communities have applied Christ’s egalitarian teachings in stricter and stricter terms.
An interesting argument. But what about the other radically egalitarian monotheist religion: Islam?
Islam is supposed to be a brotherhood of believers in which all are regarded equally by the divine. Pharaoh is the Quranic representation of outrageous arrogance, the man who would make a god of himself rather than submit to the only true divinity. Just as Christianity was an appealing religion to the slaves of the Roman Empire, when Islam reached India, it became a primary mode of escape from the caste system for many low-caste Hindus.
There is no hereditary priesthood in Islam (unlike in, say, ancient Israelite religion). The hierarchy of Islamic jurisprudence, at least in its dominant Sunni variety, is in theory both meritocratic and libertarian; you gain authority as an interpreter of Islamic law by convincing other interpreters to follow your interpretations. (This is the way rabbinic authority historically worked as well.) De Tocqueville’s argument that the openness of the priesthood to different classes paved the way for modern democracy should be even more true of Islam.
While most Islamic societies have been monarchies for most of their history, the same is true of most Christian societies; Islam, however, does not have a doctrine comparable to the “divine right of kings.” Indeed, for most of the history of Christianity, the Catholic Church proclaimed that monarchy was the only political system in harmony with Christian principles, reversing course on this point only very recently. Similarly, yes, most Islamic societies were slave societies for most of their histories – but most Christian societies embraced slavery and/or its close cousin, serfdom. Meanwhile, Islam has been far more consistent historically in rejecting religious sanction for race-based slavery or a natural hierarchy among humanity. The same cannot be said of Christian societies.
Finally, Islam emphatically believes in bringing the actual social world into line with its ideal conception thereof, while Christianity frequently gestures in the opposite direction, declaring its kingdom to be “not of this world” and talking about rendering unto Caesar what is Caesar’s. If gay marriage is the “logical” result of a leveling egalitarianism, then surely the Islamic religious matrix is where it would emerge.
Or, you know – not.
When I first read the title of Linker’s piece, I thought he was going to make a different argument for Christianity’s implication in the movement for gay marriage by pointing to Christianity’s rejection of the cycle of procreation and death, and its valorization of celibacy and friendship over the value of marriage and clan continuity, and then showing how these ideas make it hard to argue against gay relationships as somehow against the proper order of things as ordained by God. The Cathars were a Christian heresy – and a very popular one – that embraced non-procreative sexual activity; the association was strong enough that the coarse term, “bugger,” originates as an epithet for the Cathars. Maybe this is the true intellectual genealogy of gay marriage: not in Christianity’s radical egalitarianism, but in its rejection of procreation? I certainly think you could write a persuasive column based on that argument.
Except . . . what about the Buddhists? Who also have a celibate priestly class. Who also focus on an escape from the cycle of procreation and death. If this is the intellectual genealogy, why should gay marriage have originated in the secularizing Christian West rather than in, say, Thailand?
Maybe the problem with all these kinds of arguments is that ideas don’t have consequences – at least, not in the way that Linker wants them to. The older I get, the less Hegelian and the more Darwinian I get about the way that culture changes over time. That is to say: I am less and less convinced that a conversation about ideas is the motor of history, and more and more convinced that cultures prove themselves more or less adapted to challenges – material or ideational – that they could not possibly have foreseen.
The political democracy that de Tocqueville studied emerged in the Christian west after the discovery and settlement of the New World; the ructions of the Protestant Reformation and the Catholic Counter-Reformation (was the former the “true” Christian development, and the latter somehow “false”?); the transformation of both the British and French monarchies into more autocratic, less-feudal systems; the emergence of capitalism as an economic system; the rise of the trans-Atlantic slave trade; and a world war between Britain and France. Which of these momentous developments do we fully understand? Which is the inevitable outgrowth of the Christian idea, as opposed to a contingent historical development?
Christianity was around for two millennia before the idea of gay marriage reared its head. Moreover, Christianity arose in a Roman world that was not exactly reticent about sexuality or about the existence of same-sex attraction. Heck, the Emperor Nero married a man! And yet, we are having this argument now, not in Nero’s time. Why, then, should we think that Christianity has anything to do with it?
When I read Augustine’s argument that marriage is a sacrament, I see an argument equally applicable to gay couples as to straight. But that merely demonstrates that his ideas are – in my view, not the view of an orthodox Christian – more readily adaptable to the challenge posed by gay couples demanding recognition than, say, the views of comparable figures in my own religious tradition (Judaism). It doesn’t prove that those gay couples are making their demand because of the inevitable working out of Christian principles, because there is no such thing as the inevitable working out of principles, Christian or otherwise. History just doesn’t work like that.