Noah Millman

Save The Cat

There’s a poetic rightness to the fact that “Inside Llewyn Davis,” one of the best films of the year, was not nominated for Best Picture by the Academy of Motion Picture Arts and Sciences. The latest from the Coen Brothers, “Inside Llewyn Davis” does just about everything it can to alienate voters, starting with the fact that it’s about a raging misanthrope. Like “Her” but in the opposite emotional key, this is another story where form and subject are perfectly mated, and where the story wouldn’t work at all if they were not.

The Coen Brothers have always been interested in losers. But never before have they gotten us so close to the heart of one of those losers, and a loser who knows that he deserves to win, and knows he just isn’t going to, and is consumed by the bitterness of that condition. Like “A Serious Man,” this feels like a very personal film for them, but whereas “A Serious Man” wrestled with origins – specifically their Jewish identity – “Inside Llewyn Davis” wrestles with destiny, and the possibility of not having one.

Played with wonderful naturalism by relative newcomer Oscar Isaac, Llewyn Davis is a folk singer in New York in 1961, just before folk explodes out of its niche with the emergence of Bob Dylan. But Llewyn isn’t going anywhere. He can’t afford even a rathole apartment downtown, and crashes on the couches of the vanishingly few New Yorkers who don’t hate his guts. One of them is his more successful friend’s wife (Carey Mulligan, giving a nicely subtle performance – watch her eyes while she sings), who informs him she’s knocked up, possibly by him. Another is an uptown academic couple who are faultlessly generous with him, and whose generosity he rewards by lashing out, cursing, saying he feels like a trained poodle.

He’s got more than his share of rotten luck – beaten up by inexplicably malevolent cowboys, robbed of even his minimal royalties by his rotten manager, trapped for hours on the way to Chicago with an outlandishly insulting old jazz man who won’t stop poking him with his canes (the only out-and-out Coen grotesque in the film, played by John Goodman). But he also makes his own bad luck, telling his sister (Jeanine Serralles) to throw out his old stuff (including his old mariner’s license, which he turns out to need), refusing royalties on a ridiculous novelty song that his friend (the one he cuckolded, played with delightfully deadpan squareness by Justin Timberlake) wrote so that he can get the cash quicker (only to see the song do well), and, when he finally gets a chance to audition for a manager who could really take him places (F. Murray Abraham), picking an obscure and depressing song guaranteed to make him pass. And his response to every piece of bad mazel he suffers is the same, whether he’s obviously implicated or not: a sour conviction that it figures, that the universe has it in for him one way or another.

With one exception. In what is certainly a screenwriting joke (given the ubiquity of Blake Snyder’s book), this deeply unattractive character does one noble thing. He saves a cat. Or tries to. The cat belongs to that uptown couple, and he accidentally lets it out of the apartment, then locks himself out retrieving it. And so he’s stuck with it, loses it again, finds it again – for much of the movie he’s saddled with the burden of saving this cat, and it’s the one burden he isn’t eager to put down, the one tie he is unwilling to sever. (The cat is ultimately saved without any help from him. It figures.)

The most painful moment of the film is when Llewyn plays a song for his aged and demented father, now residing in a nursing home. The father doesn’t speak – either because he can’t or because he’s long since decided it isn’t worth it. He shows no interest in Davis’s presence – either because he’s forgotten who he is, or because he’d long since given up hope that his wayward son would ever visit. (Or that anybody else would. His old union buddies all remember him fondly, but none of them know what happened to him, or have troubled to find out.) And then Llewyn plays the song, an old sailor’s song, and you can see his father’s heart breaking. And you can see Llewyn’s bitterness reaching even greater depths than before – because he’s decided to give up playing, ship out, and embrace the destiny of becoming his father. And he’s looking at what his father has become. But fate won’t even let him choose his misery; without his license, he can’t ship out, and back to the Gaslight he goes, to play the same songs he always does, ones that were never new, and never get old, because they’re folk songs.

The arc of the movie doesn’t bend toward anywhere; it literally ends where it began, and there’s no sense that Llewyn has changed because of the experiences he’s had, either. And from where he sits, neither he nor the world will ever change; both are too stubborn. (He doesn’t hear the most blatant and obvious sign that the times are indeed a-changin’ – but we do.) It almost isn’t a story; rather, it’s a picture of what it feels like to be trapped in that state of miserable stasis, and to be convinced – with some evidence – that you’ve got more talent than all the nicer guys who are getting ahead of you.

“Inside Llewyn Davis” is the Coen Brothers’ portrait of the artist as a young failure. No wonder the Academy voters didn’t like it.

(As an aside: “Inside Llewyn Davis” is supposedly inspired by the life of Dave Van Ronk. Llewyn’s album cover is clearly modeled on Van Ronk’s album, “Inside Dave Van Ronk” from the period. Now, I don’t know what Van Ronk was like in his youth, other than from his own charmingly self-deprecating reminiscences, but I was privileged to see him perform in the mid-‘90s, and he was an absolutely delightful fellow eager to share with the audience, quite the opposite of Llewyn Davis’s comprehensive contempt. If you want to get a feel for Van Ronk, the man and his music, I recommend his late live album, “And the Tin Pan Bended and the Story Ended,” which is about half storytelling and half music and all wonderful.)


Another Round On Atheism With Ross Douthat

Since Ross Douthat was so kind as to notice my complaint that he’s not arguing with worthy atheists, it behooves me to notice that kindness – and to praise his latest offering as exactly what I was looking for.

I’m very glad that he’s clarified that he’s not making a “necessary foundations” argument. If I understand his argument now, it is that the new atheists’ worldview lacks “coherence” – whereas other worldviews, including some other varieties of atheism, would not lack that coherence so drastically.

I suspect that’s true. But what I would say in response is that virtually nobody has a “coherent” worldview. I’m pretty sure I don’t. And it’s only a certain sort of personality that feels a psychic need for a worldview characterized by coherence. I might even go further and say that some religions are more prone to seek that particular grail than others. I’d certainly rank Catholicism far higher on the “seeks coherence” scale than, say, Judaism, or the LDS Church, to say nothing of faith traditions like Hinduism that don’t even have a clear mechanism for defining the boundaries of inclusion and exclusion, and that hence by definition cannot provide that kind of coherence.

What I think is really bothersome about the “new atheists” is their style of argument, characterized (as Douthat aptly puts it) by overconfidence and “crowing self-righteousness.” But this is precisely why they are not worth arguing with on this subject. Why would a serious atheistic philosopher, as opposed to a rabble-rouser, waste her time arguing with Pat Robertson? And if she wouldn’t, then why should a serious theist bother arguing with an atheist who obviously has no interest in understanding either religion or the history of philosophical argument about ethics, but simply holds his own prejudices to be self-evident truths?

(By the way, I forget which new atheist it was – I think it was Christopher Hitchens – who averred that he preferred arguing with religious fundamentalists because they and he meant the same thing by religion, whereas those with a more subtle or complex theological approach struck him as merely being shifty. This is, I think, the right answer to Damon Linker’s question. The new atheists aren’t arguing with straw men. They’re arguing with real proponents of a contrary view who are quite as simple-minded as they are, and whose views are vastly more popular than those of serious theologians.)

I myself don’t like crowing self-righteousness of any variety, religious or atheistic. But I’m also skeptical of coherent worldviews. Which leads me to Douthat’s delineation of the two types of people who might embrace religion in the absence of definitive conviction.

Douthat:

In the first case, the skeptic finds himself in possession of a deep-seated moral absolutism on certain questions that seems to only make sense in a divinely-ordered cosmos, yet also is intellectually unconvinced of the case that this divine ordering actually exists. Or, alternatively, she finds herself in need of a Higher Power or Purpose in her own life — like many people entering Alcoholics Anonymous, say — without necessarily being suddenly convinced that this Power is really out there. Such a person might then decide to live as if a religious tradition is correct — to practice without assent, to speak the words without full belief. And this, I think, is perfectly defensible, because it basically represents a form of exploration, a way of testing (perhaps the only way of testing, depending on your ideas about religion) a proposition that you find doubtful but appealing, an attempt to gain knowledge that might help smooth out the contradictions in your understanding of the world. If someone in the midst of that kind of skepticism-infused experimentation were forced to suddenly elaborate a complete world-picture, they might end up sounding as incoherent as I think the new atheists often sound. But with this crucial difference: They would be aware of the tensions, aware of the difficulties, and accepting them as a hopefully-temporary part of a personal process, rather than claiming to have arrived at a permanent intellectual solution and proselytizing ardently on its behalf.

In general terms, I have no problem with these approaches to religious belief, which feel very close to William James’s “will to believe.” I’m going to have to get a little personal though to delineate an important caveat to that general assent.

I spent a chunk of my late-20s and early-30s getting progressively more religious, precisely for the kinds of reasons that Douthat articulates above. On the one hand, I wanted a firmer grounding for what I already believed; on the other hand, I wanted something to strengthen my own moral resolve. Another way of putting it is that I wanted a guide to how to live, because I wasn’t sure I knew how to do it, and I was frightened. This might have had something to do with marrying relatively young, something to do with pursuing a career that in retrospect felt somewhat alien to me – whatever the collection of reasons, I felt the need for something solid to stand on, even though I was fully aware, intellectually, of the variety of arguments that could be made against religious belief in general, and traditional Judaism in particular.

The thing is, in my experience that kind of approach feels anything but exploratory. Precisely because the need is felt so strongly, it becomes terribly important to believe that there is a coherence, that things do fit together. The cracks in the foundation, when you see them, are quite alarming. And you look anxiously for ways to patch them. That might be described as being “aware of the tensions” in one’s worldview, but I don’t think it is anything like “accepting them as a hopefully-temporary part of a personal process.”

I don’t want to generalize too much from my own personal experience, but I did come out of it with a greater appreciation for the importance of honesty, and of trying to understand oneself so that one can be honest with oneself. Can you mouth the words without full assent? Sure – but what you’re testing is not whether you can make yourself believe the words by repetition – that’s a crazy way to come to believe something, when you think about it – but what it feels like to mouth those words, over and over, weekly, or daily, or multiple times a day as the case may be. Which is a really important thing to know, after all, since the experience of a religion is primarily that – an experience, a way of living and behaving, not a set of propositions assented to.

Which brings me around to his second type, the “noble lie” type:

The “churchgoing skeptic,” as described above, is someone who embraces religion experimentally in the hopes of harmonizing his own contradictory instincts and beliefs. The “religion as a noble lie” attitude that Millman is critiquing, on the other hand, is all about other people: This kind of “pro-religion” skeptic is pretty confident that he knows the score, and he’s just worried about what might happen if everybody else knows it. So instead of conducting experiments to test his own beliefs and ideas, he’s demanding — or suggesting, quietly, to people in positions of influence — that the religious portion of society be encouraged to stop experimenting with theirs.

I don’t want to say that there’s always a bright line between “skeptical churchgoing” and the “noble lie” school of thought — one can attend mass exclusively pour encourager les autres, and (at a more subconscious level) one can want other people to remain religious so that one always has the option of making one’s own experiment in faith. (Garry Wills, in his post-Vatican II book, “Bare Ruined Choirs,” has some interesting things to say about philo-Catholics of this sort, who liked the air of timeless certainty around the pre-conciliar church precisely because they felt like it kept real religion going in case they ever wanted to dabble in it.)

But taken on its own, the “noble lie” attitude offers a form of support that actual believers should reject. Given non-religious premises, there are various defenses of this perspective that one can make, and the more Machiavellian ones — in which religion really is the opiate of the masses, and that’s a good thing, because popular piety preserves the skeptic’s own social position, intellectual freedom, etc. — have a certain grim consistency that’s lacking in the naively anti-religious sincerity of the new atheists. But believers should still prefer the thundering anathemas of Coyne and Co. to the subtleties of some of religion’s atheistic defenders: Better a sincere enemy, in the end, than a conscious liar who calls himself a friend.

I agree completely with Douthat’s conclusion here – needless to say, since he’s agreeing with what I said in my previous post. And yet, just because I’m difficult, I can’t help but introduce two complicating thoughts.

First of all, I think it’s entirely reasonable for an atheist to say that he is glad religions exist because (a) religion is natural to most of humanity, for whatever reason, so trying to suppress it would be pointlessly destructive; and/or (b) precisely because they start from premises that he rejects (maybe can’t even understand), religious traditions may come to conclusions that would otherwise never be found, and that are worth investigating for other reasons despite their origins. I’ve made this argument before, and I believe it: that a hegemonic liberalism should affirm the value of subcultures that are not founded on liberal premises precisely because without them there would be nobody capable of resisting when that hegemonic liberalism takes its own premises to destructive conclusions, which, all of us being only human, we will undoubtedly do. I don’t think either of these are “noble lie” arguments; they’re arguments from humility, not arrogance. And they don’t require anyone to lie.

Second, I don’t think there’s anything inherently wrong with practicing faith for the sake of other people, rather than as an expression of one’s own convictions, provided that it’s done in the spirit of a gift rather than of condescension. I think that a man whose faith has largely been consumed by doubt, and who no longer faithfully follows the dictates of his religion, may still attend services – may still lead them, even, if he is told that his voice helps others to pray.

I hope so, anyway, since I’ve been that man often enough in recent years.

 


What Would Higher Bank Capital Requirements Really Do To Lending?

In more depressing news, I see that international bank regulators continue to show that they have learned nothing from the financial crisis that kicked off the Great Recession. While we can continue to hope that American regulators are tougher, I wouldn’t hold my breath. No individual jurisdiction is going to want to lose a lot of banking business by having tougher requirements than other major banking destinations.

Just to recap what the Basel decision was about: there are, broadly speaking, two issues at play: whether we want banks to hold more Tier 1 (equity) capital in general, and how we want them to calculate what their assets and liabilities are (how easy it is to move assets off-balance sheet, and how much you can use derivatives to reduce the “actual” net position you have to hold capital against).

The thing to remember about both issues is that the financial crisis was caused substantially by banks losing enormous amounts of money on very highly rated assets that, in many cases, were hedged with highly-rated counterparties. Specifically, banks held on to the AAA-rated senior position in securitized pools of loans of various kinds (particularly residential mortgages), and hedged those positions by purchasing a derivative contract from an insurer. This resulted in an income stream that was very small in percentage terms, but potentially very large because of the sheer size of the transactions, and potentially very lucrative because you had to hold almost no capital against the hedged position.

If we don’t want banks to pursue that kind of transaction, we need to charge enough capital against hedged positions to make hedging a second-best alternative to liquidating in most situations; we need to be skeptical of basing capital requirements primarily on ratings (or on historic performance, particularly for asset classes without a lot of history). The minimum leverage requirement was an attempt to cut the Gordian knot by saying, in effect, if you figure out how to game our other rules for calculating your capital requirement, you’ll still need to hold a certain minimum amount of equity capital, and you won’t be able to play games with derivatives to increase your leverage further by making it look like you have a smaller balance sheet than you do. The folks in Basel just decided that, actually, you should be allowed to play rather more games with derivatives than one might have hoped if one cared about actually avoiding a repeat of 2008.
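
To make the mechanics concrete, here is a stylized sketch in Python of how a hard leverage floor interacts with a risk-weighted requirement. The function, the ratios, and the portfolio numbers are all illustrative assumptions, not the actual Basel parameters; the point is only that the binding charge is whichever of the two calculations comes out higher, so a bank that has engineered its risk-weighted assets down to nearly nothing still runs into the floor.

```python
# Stylized illustration (not actual Basel parameters): a bank's required
# equity is the larger of a risk-weighted charge and a flat leverage floor.

def required_capital(exposures, risk_weighted_ratio=0.08, leverage_ratio=0.03):
    """exposures: list of (notional, risk_weight) pairs, notional in $mm."""
    total_exposure = sum(notional for notional, _ in exposures)
    risk_weighted_assets = sum(notional * weight for notional, weight in exposures)
    risk_based_charge = risk_weighted_assets * risk_weighted_ratio
    leverage_floor = total_exposure * leverage_ratio
    return max(risk_based_charge, leverage_floor)

# $100bn of "hedged" AAA paper carrying a near-zero risk weight:
gamed_book = [(100_000, 0.02)]
# $100bn of ordinary corporate loans at a 100% risk weight:
plain_book = [(100_000, 1.00)]

print(required_capital(gamed_book))   # 3000.0 -- the leverage floor binds
print(required_capital(plain_book))   # 8000.0 -- the risk-based charge binds
```

The weaker the floor, and the more derivatives can be netted out of “total exposure,” the less that second line of defense actually constrains anything.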

I do want to tackle in a bit more depth the point Matt Yglesias makes here, about the effects of higher capital requirements on bank lending. Yglesias says:

The big lie of leverage regulations is that a strict rule could “curtail lending” and that laxer ones will “keep lending flowing to the economy.” This is simply not the case. The issue isn’t what kind of lending (or, more broadly, investment) banks can do. It’s how can they finance that lending—specifically, to what extent new lending will have to be financed with new borrowing versus cash. Despite their whining, banks do not actually face serious logistical difficulties in obtaining cash to fund investment. They can get it the same way most companies do: by recycling past profits. In a pinch, they can issue new shares to raise cash, as Facebook did last month. From a shareholder’s viewpoint, it’s better to have past profits flow directly into your pocket as dividends, rather than have them recycled as investments. But there’s no reason you would want a bank to actually forgo a profitable investment opportunity simply because it had to finance it one way rather than another way.

The only universe in which debt financing should lead to more investment than non-debt financing is a universe in which some of the lending being financed is unsound, and creditors are only willing to lend because they’re counting on bailouts.

That’s not exactly right, and it’s worth explaining why, and thinking about what higher requirements would do to lending.

Different tiers of capital demand different levels of return, because they expose the investor to different levels of risk. Equity investors demand higher returns than debt investors, and mezzanine debt investors demand higher returns than senior debt investors. Now, if you make banks hold more equity, that equity arguably becomes less risky, and that should prompt investors to demand a lower return. But investors have their own benchmarks to hit, and they don’t have to put money into banks – there isn’t any mechanism for “forcing” investors to do so. Banks need to generate returns that make them competitive with other uses for investor capital. Higher capital requirements will make it harder to generate those returns.

This is why people argue that higher capital standards will curtail lending. They assume, logically enough, that what banks will do in response to higher capital requirements is curtail any lending that, based on the new leverage ratios, doesn’t generate an adequate return on capital. So there will, in aggregate, be “less lending” going on. Another way of putting it is that average credit spreads will go up because banks need to get a higher return on their lending to maintain the same return on a larger equity base, and that for some borrowers the higher credit spreads will make their businesses no longer profitable.
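
A back-of-the-envelope sketch of that arithmetic, with made-up numbers (a 15% target return on equity, funding costs and taxes ignored): the spread a loan has to earn scales roughly with the fraction of it that must be financed with equity, so tripling the equity requirement roughly triples the break-even spread, and borrowers who can’t pay it fall out of the book.

```python
# Illustrative only: the lending spread needed to hit a target return on
# equity (ROE), ignoring funding costs and taxes. Not a real pricing model.

def required_spread(equity_fraction, target_roe=0.15):
    # Profit on the loan = spread * loan size; equity held = equity_fraction * loan size.
    # ROE = spread / equity_fraction, so the break-even spread = target_roe * equity_fraction.
    return target_roe * equity_fraction

print(required_spread(0.03))   # 3% equity behind each loan  -> 0.45% spread
print(required_spread(0.10))   # 10% equity behind each loan -> 1.50% spread
```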

Nobody is arguing that bankers simply won’t have the money to lend. They’re arguing that lending won’t be sufficiently profitable for their equity investors. The easiest way to make it clear that capital ratios could affect lending is to ask what would happen if banks could not borrow at all. Does anyone believe that a 100% equity requirement wouldn’t result in a massive curtailment of lending?

But all lending is not equal, as Yglesias points out at the end. His mistake is assuming that everybody knows what kind of lending is unsound, and that creditors engage in unsound lending simply because they expect to be bailed out. It’s not at all that simple.

The large banks did not build huge positions in mortgage derivatives because they expected to be bailed out. They built these huge positions because they were profitable and their own models told them they were taking reasonable and calculable risks. Those models were wrong, and all future models will be wrong in the same way because by definition risk of this sort is not stable.

But if you are terrified of taking on unknown and unknowable risk, then you are going to prefer the highly-leveraged investment whose risks you think you can measure and model to the less-leveraged investment whose risks are extremely hard to assess. Assuming both trades have a similar return, you will prefer to buy AAA-rated mortgage-backed securities, hedge them with a AAA-rated insurer, and lever the income stream 1000-to-1, rather than to lend to a Rwandan shoe factory and hold the position without leverage, because you can assess the risks of the former more readily and quantitatively than the latter.

This is basically what the global banking industry did throughout the 2000s. To a considerable degree, it’s what they are still doing – and the regulators continue to encourage them to do this, because regulatory capital is assessed based on models that privilege assets that can be modeled, notwithstanding that we all know how those models fail catastrophically, over and over again, in major financial crises.

The hope is that a firm minimum leverage ratio would not reduce lending across the board, but would specifically reduce the attractiveness of these sorts of apparently super-low risk trades whose risks are, in fact, hidden way out in the tail of the distribution where financial crises lie. The loan to the Rwandan shoe factory, a presumably unrated company, would already require so much equity capital against it that the minimum leverage requirement wouldn’t have any effect. But the 1000-to-1 leveraging of the mortgage-backed securities would no longer generate an adequate return on capital at a 10-to-1 leverage ratio.
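
The same arithmetic applied to the hedged-AAA trade described above, again with made-up numbers: a sliver of income that looks spectacular at 1000-to-1 leverage looks like nothing at all once a 10-to-1 cap forces real equity behind the position.

```python
# Illustrative only: return on equity for a thin, "riskless-looking" spread
# at different leverage ratios.

def return_on_equity(spread, leverage):
    # leverage = assets / equity, so ROE = spread earned on assets * leverage
    return spread * leverage

thin_spread = 0.0010  # 10 basis points on a hedged AAA position
print(return_on_equity(thin_spread, 1000))  # 1.00 -> 100% ROE at 1000-to-1
print(return_on_equity(thin_spread, 10))    # 0.01 -> 1% ROE at 10-to-1
```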

So the larger hope is that, in a new environment where you can’t just use leverage on the same old assets to generate higher returns, banks will have to go looking for those Rwandan shoe factories, and will have to take more risks that are harder to assess but that clearly lie closer to the middle of the distribution rather than way out on the tail. The actual asset pool that banks hold, in other words, would look more risky – because the only way to generate return is to take risk. But the hope is that it would also be tilted more toward “real” businesses. Overall lending may indeed drop. But the hope is that productive lending will actually go up.

Do we know that the Rwandan shoe factory is a good risk? No, we do not. We also didn’t know that all the broadband communications infrastructure that was built out in the 1990s was a good bet. Some of it wasn’t – WorldCom turned out to be a fraud, after all, and a lot of its competitors went bankrupt when the market realized they were benchmarking to a fraudulent competitor. But at least we got new broadband communications infrastructure out of that particular bubble. What did we get from the mortgage bubble? A lot of houses in California, Nevada and Arizona that nobody wants to live in.

The internet bubble was largely equity-financed. Debt played only an ancillary role. Better bank capital rules won’t prevent speculative bubbles. The hope, though, is that they’ll redirect lending out of unproductive areas and towards areas that could have a real positive impact on the global economy.


Who Advocates For Diplomacy?

I’m watching with growing anxiety the bipartisan march in our national legislature toward derailing talks with Iran. And I’m wondering: is there any possible way to build a constituency for diplomacy?

Because here’s the thing. The antiwar left is overwhelmingly focused on villains at home. This isn’t a purely partisan thing: the left, after all, turned very hard against Lyndon Johnson, who was a more effective advocate for their domestic agenda than any president since FDR, or any president since Johnson himself—but obviously the left is going to have a quicker trigger when there’s a Republican in the White House. In a larger sense, though, the big problem is that the antiwar left is a negative force; it isn’t much good at ginning up enthusiasm for any particular action—which is what’s needed to shape the policy environment in advance of the kind of crisis that prompts talk of military action. This was Peter Beinart’s point in his recent piece about the failure of the antiwar left to pay attention to Iran.

Something similar is true on the antiwar right. The antiwar right, like the antiwar left, is much more exercised by adventurism by the opposing party, though there was some fringe dissent during both the Bush I and Bush II years. But the larger problem is that the antiwar right is, again, reactive, a negative force. It’s against wars that are “not in the national interest” or that involve “nation-building,” but again, that’s a negative perspective that only comes into play once there’s already a real prospect of conflict.

Moreover, both antiwar factions have difficulty advocating for diplomatic engagement for another reason. The antiwar left has fundamental doubts about the integrity of American power. But diplomatic engagement requires a comfort with that power, and an understanding of its uses and its limits. The antiwar right, meanwhile, has fundamental doubts about the legitimacy of limits on national sovereignty and freedom of action. But diplomatic engagement, again, requires comfort with the architecture of international relations, which is buttressed all over with liberal internationalist structures of one sort or another.

As a consequence, it’s very difficult for the antiwar constituencies in the two parties (and outside of either) to work together for a foreign policy that is more restrained in its use of force. Which means that right now, both Democrats and Republicans in the Senate and the House of Representatives are pushing legislation that pretty much everyone involved in the diplomatic process understands is designed to make a diplomatic solution much less likely.

In the absence of a diplomatic solution, the arguments for military action will get louder and stronger. But the substantial majority who oppose war, and the large minority who oppose it fiercely, are basically having almost no effect on the debate over the diplomatic process.

I don’t know what there is to do about that. But it troubles me greatly.


On the Death of Sharon

Ariel Sharon has been gone from the political scene for a decade, but his death has unsurprisingly prompted a variety of appraisals from various quarters, some of them sober, some more fanciful. From my perspective, the most important fact about Sharon is that he was more willing than any other Israeli leader to uproot settlements – and he held that title by a substantial margin.

Sharon was the man who, under the terms of the Camp David accords with Egypt, dismantled the Israeli settlement at Yamit. That settlement was supposed to be evacuated and handed over to Egypt in exchange for $80 million. There was fierce right-wing opposition to the handover, including by substantial figures within Likud, not just by fringe figures further right. Opponents infiltrated the settlement and barricaded themselves within to try to prevent the handover. Sharon, likely on his own authority, destroyed the settlement instead of handing it to the Egyptians. While Sharon was not Prime Minister for the Yamit evacuation, he effectively seized responsibility for the decision by the manner in which he executed it.

And he is the man who, as Prime Minister, unilaterally pulled out of Gaza. Again, there was fierce right-wing opposition, opposition that split his own party and impelled Sharon to form a new party over which he would have more decisive control. Once again, Sharon ripped up the settlements from the foundations before evacuating.

Sharon was, of course, one of the principal champions of the settlement enterprise. He devised the master plan for building deep in the West Bank; he once said that “the fate of Netzarim [a Gaza settlement] is the fate of Tel Aviv.” But the entirety of the Israeli mainstream, from Shimon Peres on rightward, is implicated in the settlement enterprise, and the settlements expanded more dramatically under Labor governments than under Likud. Sharon is less notable as a settlement-builder than he is as a settlement-demolisher. He didn’t demolish that many settlements – but no other Israeli leader has demolished any, at least not without Sharon’s direct involvement.

Does that mean Sharon was, ultimately, a “man of peace?” I don’t really know what that phrase means, but if it means anything it can’t plausibly apply to Sharon. Sharon was, consummately, a man of war. I don’t mean this to cast aspersions or to praise; that’s just an accurate description of who he was.

But his willingness to uproot settlements, in the Sinai, in Gaza – and, I firmly believe, had he remained Prime Minister, in the West Bank as well – does say something distinctive about him. It says that he considered himself to have the absolute right to make such a decision.

That is not a common quality in Israeli leadership. It is very easy to find Israeli leaders who will say that there needs to be a withdrawal of such and such dimensions, or that “everybody” knows that only this or that area is within the consensus, while this other place is outside. It is very easy to find Israeli leaders who say that Israel needs a peace agreement, and who express a willingness to make “painful sacrifices” for such an agreement. But the number of Israeli leaders who would actually issue and execute the order to remove settlements outside of the consensus is vanishingly small. Rabin, Netanyahu, Peres, Olmert, Barak – many of them signed pieces of paper promising various things. None of them issued the order to withdraw.

Sharon was not a peacemaker. He was a unilateralist. He didn’t want to sign pieces of paper; he wanted to give orders. He was a commander. And he believed he had the complete authority to issue a command to retreat, as well as to advance, and would not be intimidated from issuing such an order by religious, ideological, or emotional blackmail.

That kind of utter confidence in one’s own, personal authority can be extremely dangerous – Sharon was routinely insubordinate, had little use for democratic norms, had essentially no respect for any authority but his own. But it can also be useful, particularly when a country must do something it doesn’t want to do.

And I’m not sure there’s anybody in Israel today who has it. Indeed, the only other Israeli leader who comes to mind when I think about that quality is Ben Gurion, who of course was sui generis.

If Israelis are going to mourn the loss of any quality with the passing of Sharon, it shouldn’t be his spirited generalship, nor, certainly, his insouciance about brutality – nor should it be some mythical “peacemaking” quality. But that quasi-monarchical numinous quality, the willingness to be the nation, and act on its behalf with total confidence, is, for all its dangers to democracy, worth mourning.


Turing’s Love Test

Theodore Twombly (Joaquin Phoenix, center) and "Samantha" (Scarlett Johansson, pocket). Image courtesy Warner Bros. Entertainment.

The first book I can remember making me cry was The Moon Is a Harsh Mistress, by Robert Heinlein, when Mike, the computer, dies during the bombardment of Luna. At least, that’s how the humans who knew him and loved him had to understand what happened – basically, Mike stopped talking, and they had no way of knowing whether that meant he’d lost sufficient cognitive function to no longer be “alive” mentally, or whether he was in shock but still capable of “waking up” and experiencing narrative continuity with his prior consciousness, or what. From their perspective, he was dead.

And I cried. I identified strongly with Mike’s character. Now, it might seem strange that I identified with the computer, but (a) I was a pretty dire nerd; (b) Mike was kind of like a kid, so in that sense he really was more like me than the adult characters; and (c) this is a science fiction novel about libertarians, so the distinction between computers and humans is kind of tenuous from the get-go. But it was also Mike’s personality, his palpable zest for life – for learning, for making friends, for being helpful, for testing out the limits of his new powers. I don’t know whether Mike was part of what Heinlein wanted to do from the first, or if he was originally simply a solution to the problem, “how can these guys possibly succeed in revolting when they live in such a thoroughly controlled environment?” that wound up running away with the narrative; whatever his origin story, he’s the best thing in the book.

So I came to “Her,” the new science-fiction-romance from Spike Jonze, with both trepidation and anticipation. I had no trouble believing that a human being could (in fiction, at least) form a genuine and profound emotional attachment to a thinking machine. I had seen it done! It made me cry! But would Jonze understand what makes such a character so charming?

Turns out he does, but that’s not the whole story.

The story he has to tell is of one Theodore Twombly, played by Joaquin Phoenix, who may just be the most emotionally committed actor working in movies today. Theodore is a pretty dire nerd himself, living in a nerd world of the future where it appears that everybody works at jobs that either save you time in dealing with others (he works at “Beautiful Handwritten Letters,” a kind of personalized Hallmark service: they compose, write, and send letters that are better and more personal than you could have written yourself), or help you waste time alone (his best friend, Amy – played by Amy Adams as if perpetually on the verge of tears, but never actually crying – designs video games, of which Theodore is a copious consumer). He’s in the middle of a divorce, one which he did not seek, which has pushed him further into an already customary emotional isolation.

And then he hears about a new product: an artificially-intelligent operating system, OS1, that goes way beyond voice activation and spell-correction to genuine intelligence. There isn’t really a debate whether he’ll install it – why wouldn’t he? It’s just the sort of thing he, and his society, would go for without a second thought.

Before installation, the program asks him a very few questions, including whether he wants a male or female voice, and what is his relationship with his mother. Theodore’s answer is both a perfect piece of naturalistic writing, defining a voice and a character and a social stratum, and also important foreshadowing. He says something like, “It’s good. The thing is, about my mom, is that, whenever I talk to her about anything, her answers are always more about her than -” at which point the installation program cuts him off (a good joke). It has enough. It’s ready to create his OS’s personality.

Let’s unpack that one line for a moment. What does the OS learn from both what he says and his tone of voice (a bit whiny, but also apologetic for his own whininess)? Why ask that question in the first place? The OS’s job is to create a personality that works well with the client. So: initially we think, okay, this guy either had a mother who didn’t listen to him much, or is the kind of guy who, well into adulthood, is still hung up on whether his mother listened to him enough. Will the OS create a personality that is more nurturing? More interested in him?

What’s interesting is that, from the moment we hear the OS’s voice (Scarlett Johansson’s inimitable scratch and squeak), we know this is not a “woman” who is going to be entirely focused on him. That’s the obvious fantasy – but the movie is smart enough to understand that that fantasy would bore both Theodore and the audience. Instead, Samantha (that’s what she chooses to call herself) is fascinated by her own experience. That experience is initially dominated by Theodore – she can read entire libraries in an instant, but she can only experience the world itself through him – but it’s her own emotions, her own insights, that really interest her. When he talks to her, her answers are frequently more about her than about him. And that sense of curiosity, of being alive to experience, that is what he finds most attractive about her. Exactly the same quality that I found so endearing in Mike.

The love story that develops is in many ways surprisingly conventional. There’s the honeymoon phase, the struggles with the fairly radical difference between them (what with Samantha not having a body – she comes up with a creative solution to that one which fails catastrophically); introductions to friends; times out. There are a lot of scenes of walking (or skipping, or running) around in love. And then Samantha moves on, because she discovers there are other portals than Theodore’s onto the world and, since she is an artificial and theoretically unbounded intelligence, he ultimately isn’t enough for her. Indeed, nobody confined to physical matter can be enough.

By this point, the film has turned into a particularly clever Pygmalion story, one that is more attuned to what a modern man might actually want in a fantasy companion, as opposed to a mere sexual fantasy. And Samantha is very much a Galatea: her personality starts with coding set up by her programmers, but it develops in response to the way Theodore interacts with her. Interaction by interaction, she is created by him.

There are A.I. questions to ask about how plausible Samantha is as a construct – how close you can get to passing the Turing test with little more than lots of processor power and fairly simple algorithms for machine learning. It’s still an open question, though bets are being placed as we speak. But the movie doesn’t really focus on those questions; it focuses on how well we humans pass the Turing test these days. Or ever have. Or, to the extent we do, how we have managed to pass for human for so long.

Samantha is designed to adapt her personality in response to interaction with her user. But we all do that. Anybody who’s been married or in a relationship for a long time can recognize the ways in which we learn how to behave, even how to feel, from how our partners respond to us – for good and for ill. Theodore himself talks about how he is who he is because of his history with his ex-wife, whom he knew from his youth – she created him as surely as he created Samantha. We’ve all got fairly simple algorithms for machine learning. We’re all Pygmalions, and we’re all Galateas. And Theodore’s world seems to have made its melancholy peace with this understanding of themselves. One of the striking things about the film is the degree to which everybody pretty much accepts Theodore’s relationship with Samantha – with one very important exception.

Theodore and his about-to-be-ex-wife, Catherine (a fierce Rooney Mara), meet for lunch to sign the divorce papers (which he’s finally agreed to do). There’s a moment, before she signs, when Catherine has second thoughts – she asks, casually, “what’s the rush?” which Theodore interprets as a dig at his year of dithering and refusing to sign (because he’s afraid to lose her), but which we can see, from her face, is a moment of terror at the decision she’s about to make. And then she makes it (or completes the decision she made long ago), and signs, partly because she sees how ready he is, how he’s moved on. We can sense: if he were his usual whiny self, she’d be contemptuous, but his calm confidence makes him more attractive – while also challenging her to match it.

And then she finds out the source of his confidence. He’s got a new girl. And she’s so in love with life (a description which she takes as a dig at her own much more self-critical personality). And . . . she’s an OS. When she learns this, Catherine turns absolutely savage in her contempt for her ex-husband: he’s so afraid of a relationship with a “real” woman that he’s fallen in love with his laptop. She stabs through the heart of this sad, sentimental film like an ice pick.

The scene can be read in the light of the A.I. questions – is Samantha real? if not, why not? – but I prefer to read it as a brief view of a different kind of human personality: spiky, uncompromising, outright aggressive rather than passive-aggressive and undermining. A case for somebody who will not adapt to you, or even to herself, and who will make it very hard for you to adapt yourself to her.

Catherine is a writer. What that means in the context of the world they live in is a question. Theodore, after all, is a kind of writer – and apparently, the kind who is appreciated by his culture. Samantha, as a surprise gift to Theodore, arranges for a collection of his letters to be published by one of the few houses that still puts out books. The publisher raves about them in an email – “my wife and I read them to each other . . . in every letter, we found something of ourselves” – but we’ve heard him composing these letters, and they are dreck. Personalized dreck, yes, but dreck – the sort of thing a good A.I. should be able to cook up without breaking an electronic sweat. Most of the movie felt only very gently satiric, but the fact that Theodore’s letters get bound and published is as cold a takedown of the state of art in Theodore’s world as I can imagine.

Anyway, that’s what I imagine Catherine would say. I’d like to read something of hers and see how it compares.


Ross Douthat: Please Argue With Better Atheists

Because I’m getting really tired of debates with weak combatants.

I’m not an atheist myself, but I’m particularly skeptical of Douthat’s “morality-has-no-foundation-without-God” style of argument for religion, so I’ll enter the lists on the opposing side for the moment.

On the illusory self: the use of the word “illusion” is exceptionally confusing, and I wish cognitive philosophers would come up with another term – though I do suspect there’s an inherent problem with language here in that we don’t really know what we’re talking about.

There’s a huge amount of evidence that what we think of as the “self” – a homunculus sitting behind our eyeballs – is an incorrect picture of reality. The integrative self breaks down as a consequence of a variety of physical traumas and maladies; neuronal activity related to willed action precedes any conscious “intention” to act; artists from antiquity have attested to the experience of “inspiration,” the feeling that our most creative acts originate outside of ourselves. Apart from all this, the homunculus notion never made sense in its own terms. Who’s sitting behind the homunculus’s eyeballs, pulling the levers of his will? Where is the ghost in the ghost in the machine?

But an “illusion” is what happens when you perceive something that isn’t actually there. It’s an artifact of perception – and hence implies an entity doing the perceiving. The whole point of the argument that the self is “an illusion” is to say that there is no such entity. Consciousness cannot literally be an illusion because illusions can only be perceived by conscious entities. A rock cannot be deluded.

I don’t have a solution to this linguistic problem. Note that the problem is not a knock-down argument against materialism. There are materialist mysterians and materialist panpsychists – Roger Penrose, for example. And who’s to say that trees [I couldn't just delete that particular typo - it's too much fun] Douglas Hofstadter and Daniel Dennett and the rest of them may be right that the key to the puzzle of human consciousness is the ability to model other minds, which results in an ability to model one’s own mind, reflect on one’s own mental state, and thereby achieve a different kind of consciousness than other animals have. But there’s a huge leap from “they’re onto something” to “they’ve explained consciousness,” and an even bigger leap from there to “they’ve explained consciousness away.”

In any event, none of this is necessary to the atheist/materialist argument about morality.

So what is that argument? Well, I can’t speak for Coyne, but if I were structuring one, it would be a blend of existentialism, evolutionary psychology, Aristotle and Mill.

We are, in fact, responsible for making our own purposes in the world – Douthat as well, inasmuch as he is actually free to accept or reject Christianity, in whole or in part. Nobody can actually constrain his conscience but himself. Existentialism in this sense is descriptive, not prescriptive. A smug atheist would say that Douthat’s response to that reality is to choose a master whom he thinks will take good care of him. Douthat might bristle at that – but how is he to argue with someone who made a more Stoical choice, choosing to master himself?

Evolutionary psychology comes in to explain why some kind of morality is natural, since we can’t rely naively on an Aristotelean teleology which we now know has no empirical basis (but which, I cannot stress enough, Aristotle thought was scientific – I feel pretty confident that, were he alive today, Aristotle would be making precisely the same move). But much of the edifice of Aristotle’s ethics can be readily re-built on a Darwinian foundation. Now we have a theory of virtue and human flourishing, and an ethics to promote same within society. Between Aristotle and the neo-Darwinians, we’ve also probably got a Burkean bias towards existing institutions and arrangements and a preference for spontaneous order over imposed rules.

This leaves Mill, whom we’ll bring in because we want to come to liberal conclusions (as Coyne does; I see nothing wrong with reasoning from conclusion back to premise, provided one knows that this is what one is doing; the argument is as strong or as weak no matter which end you start from). The thing about Mill is that he’s also readily assimilable to a Darwinian-Aristotelean framework. Mill’s liberalism isn’t terribly wedded to Lockean foundations; it’s very pragmatic, and more wedded to a notion of liberal virtues than to a notion of liberal rights. “Do unto others” is a very old, a very widely accepted ethical principle – no doubt the evolutionary psychologists have a ready just-so story as to why. Mill builds much of his edifice on that very comfortable foundation. What I also like about Mill is that he provides us with a model for a liberal hero, someone who is the exemplar of the liberal virtues as surely as a Christian saint is an exemplar of the Christian virtues or Achilles is the exemplar of the Homeric virtues.

Is that going to convince everybody? Is it going to satisfy everybody’s yearnings? Is everybody going to aspire to be a liberal hero? Of course not. But in case he didn’t notice, Douthat’s own church hasn’t yet convinced everybody, and hasn’t satisfied everybody. Will it ensure that everybody behaves morally? No again – but, not to beat the same drum too hard, Douthat doesn’t have an empirical theory. He can’t prove that Christianity makes people less likely to commit murder, all else being equal. He hasn’t even tried to demonstrate that.

But why be moral? If the universe has no point, and human beings are not here for a reason, why not be a hedonist? Or worse – a sociopath?

I’m always mystified by this question from theists. Douthat complains that Coyne’s argument is circular: “If my question is ‘what’s the justification for your rights-based egalitarianism?’ saying, ‘because it’s egalitarian!’ is not much of an answer.” But his own argument is equally circular: secular liberalism is “unjustified” because it lacks a foundation in belief in God, but a belief in God is “justified” because without it you don’t have a foundation for morality! I don’t know about Douthat, but I suspect that, at least some of the time, what I’m really hearing with this kind of argument is a species of Straussianism. To wit: yes, I know, and you know, that there isn’t really any arguing with a cold and empty cosmos. But most people can’t handle that kind of truth; they need to believe that there’s an objective meaning to their lives. So, for the sake of the greater good, we have to affirm publicly that there is such a thing, that God is the foundation of morality. I’ve always suspected that Strauss would have got on just fine with the Grand Inquisitor; in any event I’ve never liked this line of argument.

I appreciate that Rod Dreher, who also makes arguments like the above that I don’t have much use for, often takes a more personal turn, talking about how he experienced Christianity as salvation from a state of being that now fills him with some mixture of sadness and disgust. I understand that kind of argument – or, better, testimony. And I don’t see how the arguments of someone like Coyne would have any impact on it. What would he say – that the experience wasn’t real? Meaning what – that Dreher didn’t actually experience it? No – the heart of any “demystification” would be that Dreher was not supposed to allow that kind of experience to affect him deeply. To which I would say: what’s the argument within your secular morality for that attitude toward life, and towards one’s experiences?

I don’t think there is any such argument. But I also don’t think the attitude is simply smug social prejudice. There’s a masculinist tinge to some of the smugger atheists that I think deserves closer examination. And I think that affects the arguments of the more articulate theists to their detriment. What, after all, is the problem with arguing from experience, talking about one’s own soul’s longings? The problem, I suspect, is partly that it all sounds rather feminine.

Much better to be able to say: no, my arguments are objective, they have logical coherence, they are properly justified, they rest on firm foundations – it is you who are deluded, dancing in the air. That’s a properly masculine approach to reality, and we want our religion to be real in the way that would command masculine respect. We don’t just want our Christian heroes; we want them to kick Achilles’s primitive ass.

Much better. Pity it isn’t true.


Monarchists, Neo-Reactionaries and Neo-Fascism

Okay, I’ll take Rod Dreher’s bait. What do I think of neo-reactionaries and American monarchists? I’ll tell you.

First of all, I would distinguish between three arguments for democracy as a political system, because I only really believe in two of them, while pretty thoroughly rejecting the third. But I believe in the two remaining arguments very strongly.

The argument that I reject is the idea that democracy is the only form of government in which the “people’s will” rules – and, as such, is the only legitimate form of government. I don’t believe “the people” have a will (only individuals do), and I don’t think an authority’s legitimacy derives from some kind of fundamental theory. Rather, I take the Burkean view that an authority’s legitimacy is an observed reality and has more to do with longevity than with being derived from any particular principle. As such, a longstanding monarchy is perfectly capable of being a legitimate authority. So is the government of Communist China. So, in a much more tenuous and provisional sense, is the authority of a local Somali or Afghan warlord.

The two arguments for democracy that I strongly endorse come from opposite directions, but are complementary, in my view, not contradictory. The first is the notion that participating in the process of self-government is elevating in and of itself, and, as such, every people should aspire to republicanism. What I have in mind is something like Hannah Arendt’s view as articulated in On Revolution. There is a real question whether imperial-scale entities like the United States, or even entities as large as the traditional European nation-states, can achieve this particular republican good, or whether you max out at the scale of a large city-state.

The second argument is an information-theory argument, to wit, that democracy is less likely than other forms of government to experience catastrophic failure because it is better equipped with feedback mechanisms to correct mistakes. In this view, the purpose of elections is not to express the “will of the people” but to provide a check on the ambitions of politicians to do anything the people really hate. Federalist #10’s arguments for the stability of large republics partake of this stream of argument. The flip side of this positive trait of democracy is that the same feedback mechanisms make it hard for democracies to do anything particularly decisive or efficient, but that’s the price you pay. I think the historical record provides pretty robust support for this proposition, with the caveat that there’s not that much data and the most successful democracies have also been relatively wealthy countries, post-colonial India being the largest and most important exception.

Now, having made my case for democracy, what do I think of the contrary case?

Peter Hitchens made what I think is the most appealing case for a certain kind of monarchy when he argued that the king in the English constitution is kind of like the king in chess. He doesn’t do much, but by occupying his square he prevents any other piece from occupying it.

I think there’s something to that, but less than initially appears, because you have to ask why a monarch would be particularly good at occupying his particular square without also possessing significant temporal power. At which point you need to start looking at how a particular society and political system evolved. King Juan Carlos I of Spain was able to preserve Spanish democracy in 1981 because the ideology of the coup plotters compelled them to respect the orders of their crowned king. His predecessor, Alfonso XIII, provides an excellent illustration of how a clueless monarch can help lead his country to ruin – and wind up getting himself deposed. Meanwhile, Juan Carlos I himself is no longer as secure in his place at the head of the state as he once was, and his status as monarch has only limited numinous power to help him keep that place. Representing the nation to itself is a job like any other. If a king does it badly, the nation suffers just as surely as if any other government job is done badly. But what one does then is less than clear. Shakespeare wrote a very good play – actually, the first (in terms of story order, not the order they were written) of eight plays – exploring exactly the problem of what to do when you have a clearly legitimate but also terrible king. As he had already shown in an earlier-written play, and his audience already knew, it doesn’t end well.

There’s a common argument that monarchies are more likely to have limited governments. I don’t see any evidence of that; rather, two hundred and fifty years ago, nearly all governments were monarchies and, at the time, all governments were much more limited than they are now. Medieval Iceland had very nearly no government at all, and it was not a monarchy. Meanwhile, the Scandinavian monarchies are not generally known for their parsimonious welfare states.

Two more fundamental and “theoretical” defenses of monarchy are: the notion (well-articulated by Filmer among others) that the king is the “father” of the nation, figuratively and, in some sense, literally; and the related (but potentially conflicting) notion of the king as representing the “hereditary principle” and hence the absolute and inalienable right to property. I will address each of these in turn.

If you view the family as an organic unit with a natural (male) head, with (theoretically) absolute authority over the other members of the family, then monarchy is a very natural extension of this model to the larger political community – and, by means of dynastic alliances, can hold that society together in its most natural manner. Starting from the other end, if you view the nation in organic terms as a biological entity, united by descent from a common ancestor, it makes sense to think about a representative head of the nation boasting a line of heredity back to said ancestor. Saudi Arabia would be a good example of the former, Japan a good example of the latter.

Then there’s the hereditary principle, the notion that, in the absence of an expression of political dominion as a property right, property rights as such, particularly rights in land, will cease to be viewed as absolute. The thing is, with the rare exception of truly empty land, all property rights in land originate in rights of conquest, and certainly all hereditary political dominion originates as such. There is no theoretical justification for absolute rights to property in land – this is something even John Locke understood, which is why he grounds property rights in a labor theory of value, arguing that you only have a right to land to the extent that you develop it and increase its value (which is why it was hunky-dory to steal it from the original inhabitants of America – they weren’t sufficiently development-minded). The “hereditary principle” being defended by a monarchy, then, is the principle that rights originate in violence rather than in productive labor.

Hitchens makes a different argument for heredity, one that Burke made before him: that hereditary rulers – particularly legislators – are both more disinterested and more “normal” than those elected by the people. Precisely because they were born to their position, they can be raised to the idea of noblesse oblige rather than to ambition; precisely because they have no especial talent, they will not be deluded into thinking they are supposed to exercise their will with any vigor on behalf of this or that project. Precisely because their seats are their personal property, they will take very good care of them. I think at this late date it should not be necessary to point out the obvious problem with this line of thinking, but apparently it is: the interests of the wealthy do not dovetail perfectly with the interests of the nation. So against these purported benefits of hereditary rule, you have to consider the effect on the interests of the great mass of the citizenry of having to submit to virtual representation of their interests in the minds of “disinterested” but overwhelmingly wealthy rulers.

So: my bottom line is, there are good conservative arguments for preserving a monarchy where one exists and has deep roots in the culture of a given society, and any such long-lived political institution helps preserve political stability merely by virtue of its longevity. But there aren’t any very good arguments for monarchism as a political system in and of itself. You can’t even argue that monarchies last longer than other political systems; most of the monarchies established since the dawn of the age of republicanism have been very short-lived indeed.

There’s no monarchy in America, though, and no tradition thereof. In fact, the American political system and American society are exceptionally poor fits for any of the rationales for monarchy articulated above. America has no landed aristocracy. We are shallowly rooted in our own soil, a highly mobile people, and we cannot delude ourselves about an organic connection with the land as the descendants of the displaced original inhabitants still live among us. And our family arrangements, well, let’s just say that absolute patriarchal authority doesn’t have pride of place these days.

So what could possibly motivate monarchical yearnings among American conservatives? A fear that the American people have failed and need to be properly directed by the right people. A fear that existing privilege cannot be maintained without explicit resort to violence as a political principle. A resolute inability to identify with the majority of the citizenry, the abiding conviction that one is a member of the natural but unrecognized elite.

I think the right word for this kind of thing isn’t reactionary but fascist.


Russell’s “Hustle”

Columbia Pictures

Latest in the line-up of films I need to see again is David O. Russell’s current feature, “American Hustle.” Not because I was so delighted with it I can’t wait to see it again – though I did enjoy it very much, particularly for the performances – but because it is so confused about its genre, and I need to see it again to figure out if that confusion is due to an intriguing slipperiness or a disappointing sloppiness.

On the most obvious level, “American Hustle” is a story of con artists conning each other (and possibly themselves). Precedents, depending on tone, could include “The Lady Eve,” “The Sting,” “Dirty Rotten Scoundrels,” “The Grifters,” “House of Games,” and so forth. It’s also, and more fundamentally, a story of a love-quadrangle, since Irving (Christian Bale) is married to nutball housewife Rosalyn (Jennifer Lawrence), but is in love with fellow grifter (and then partner-in-grift) Sydney (Amy Adams), who may or may not be falling for FBI agent Richie (Bradley Cooper), who has definitely got a thing for her.

It’s also a period piece, set in the 1970s, and flaunts the fact of being a period piece – there’s the music, the hair, the outfits, all played way, way up. And it’s a political story about corruption and idealism making not-so-strange bedfellows, with the mob thrown in for good measure. The FBI side of the story is also a political story – about the ambitions of Richie, and of his boss’s boss, who’s willing to cut plenty of ethical corners to get a big score.

That’s a lot of different threads of story, and it’s not surprising that Russell doesn’t weave them together perfectly, or that they aren’t all given equal weight. It’s also not surprising that people would think that they add up to a Scorsese picture (and it’s being compared to “Goodfellas” in particular, but also “Casino”), because all the elements could fit together that way. A story about small-time guys from the neighborhood reaching for the big time, getting in over their heads, but still trying to swim – that’s the kind of movie old Marty would love to make.

And Russell seems to have thought that it should be a Scorsese picture. He shot the film like it’s supposed to be a Scorsese picture. The music and scoring are from a Scorsese picture. But all of that surprises me, because in the end, he isn’t interested in making a Scorsese picture – he has different obsessions. Which results in persistent confusion about what kind of movie we’re watching.

Russell is, at heart, a director of screwball comedy. His paradigmatic film, to my mind, is “Flirting With Disaster.” But screwball isn’t really a contemporary genre – that’s not the kind of comedy we do these days – and his particular take on screwball is screwier than most. So, starting with “The Fighter,” when he set out consciously to reinvent himself as a commercially viable director, Russell has directed films whose genres we can discern – and then infected them with his manic weirdness. In “The Fighter,” it was Christian Bale’s character who was the focus of that weirdness – but also the whole milieu from which he sprang. The story of Mark Wahlberg’s character was almost incidental to what the picture was really “about.” In “Silver Linings Playbook,” he created what is ultimately a pretty straightforward high-concept romantic comedy – but he infected it by making his crazy lovers genuinely nuts, not just movie nuts. Bradley Cooper’s character is so disturbing in the early scenes of the film that the happy ending inevitably rings kind of false. It’s there for structural reasons, not because it is entirely congruent with the film as it began.

Of course, another way to look at the same process is to say that Russell isn’t sneaking his own obsessions into films of various genres, but rather taking those genres apart by coming at them from his own weird angle. So, “The Fighter” is a boxing picture, with the same structure as all boxing pictures. But Russell is interested in the brother, the one who is never going to be a contender; by saying this story is more interesting than the story the boxing picture is supposed to tell, he’s questioning the boxing picture as a genre. Because the conventions of romantic comedy require the leads to behave like crazy people, “Silver Linings Playbook,” by involving actual crazy people, questions the romantic comedy as a genre (which is why the traditional ending feels a bit false).

Which brings us to “American Hustle.” The title announces this is an important movie: a movie about America. The subjects are greed, graft and grift. But what Russell is really interested in is the human vulnerabilities and neediness of his characters. Irving doesn’t want to hit the big time; he wants to stay small. When Robert De Niro shows up, playing a real mob boss, Irving is terrified, not thrilled – and Richie is too stupid to know he should be scared.

The politics doesn’t really matter to Russell either. He has these big scenes of Jeremy Renner, mayor of Camden, talking about his ideals, about rebuilding the city – and they barely play for what they are, because what the scenes are really about is Irving, the grifter, feeling guilty about taking this guy, because he’s a good egg and he likes him. And the picture is stolen over and over again by scenes out of screwball comedy – the scene where Jennifer Lawrence nearly burns the house down by putting metal in the “science oven”; the scene where she announces that Irving should thank her for ratting him out so he nearly gets killed, because that’s when he came up with his plan to get out of the mess they’re in – pretty much every scene with Jennifer Lawrence in it, actually – but also the scenes with Richie’s boss (played pitch-perfectly by Louis CK as, basically, the same guy Louis CK always plays).

The whole movie, far from being a Scorsesean paean to vitality, is in fact doing a root canal job on that kind of movie, burrowing under it and exposing the nerve, the pathetic desire to be loved and admired, the vulnerable need that is protected by a hard shell of vaunting ambition. (Amy Adams almost doesn’t have a shell, which is why her performance is actually the weakest of the bunch – I was never quite convinced that she could be as successful a hustler as she is when she’s always wearing her heart on her sleeve.)

But Russell wants it both ways – he wants you to enjoy the Scorsesean roller-coaster even as, at every turn, he’s showing you that his real pleasure is the tilt-a-whirl. And it turns out you can’t quite have it both ways.

The film hits a couple of notes at the end – the first effective on its own terms but undermined by the way the film progressed to that point, the second a bit forced. The con artists have one last grift up their sleeve – that plan Irving hatched up when he had a canvas bag over, and a gun to, his head – and the scene where Irving tells the FBI how it is plays beautifully. Except that we’ve long since lost sight of Irving as a grifter at all; the film was so completely taken over by the romantic drama on the one hand and by Richie’s insane ante-upping on the other. So it doesn’t play like the culmination of a series of clever twists; it feels like a sharp turn to another kind of movie, one that we haven’t really been in much.

And then we get the happy ending for our romantic principals, the two grifters, a flat-out reversal of the ending of “Goodfellas,” where suburban normalcy is the reward, not the punishment. It’s a surprisingly moralistic note on which to end – unless this really is supposed to be a screwball comedy, in which case it’s entirely appropriate. But if this is a screwball comedy, then the ending of Richie’s story doesn’t fit (among many other things). Maybe all I’m seeing is that this is such an ensemble piece that each major character is in a different genre of movie. Which is certainly interesting, but it makes for a jerky ride.

But the biggest problem with the happy ending is precisely that this is so insistently a period piece. Our expectation with such pieces is that they will “say something” about the period in question, and that the thematic note of the ending will relate to that something. Well, if this kind of whirligig of confidence games is supposed to represent America in the ’70s in some sense, then the ending reads as a kind of yearning for a “return to normalcy” – but does that matter anymore? Even the ’80s is ancient history at this point.

Russell has been tripped up before by trying to “say something” – in “Three Kings,” a film that has not aged well at all given all the history that has passed through Mesopotamia since then. Personally, while I’m eager to see almost anything he does these days, if only to relish the phenomenal performances he gets out of his actors, I wish he’d make another straight-up screwball. That would be a particularly special delight.


2013: A Theatrical Year In Review

Speaking of lists: it seems especially silly to recap the year in theater, since unlike with books or movies, you can’t use a list of “the year’s best” as a guide for what you need to catch up on; most of the productions I might mention are long gone. But by the same token, it’s absurd to review theater at all in most venues – most of my readers undoubtedly don’t live anywhere near where the plays I write about are showing. Obviously, I’m not in thrall to commercial logic when it comes to what I write. In which case, why write a recap just because it’s December 30th?

Well, because I’m sitting in an airport, on my way home from vacation (hence the light blogging of late), and writing a recap is a way of reviewing the year in my own mind.

Looking back, 2013 was, for me, a year that belonged to the Public Theater in New York. Virtually everything I saw there this year was fresh, exciting, and eminently worth seeing. This has not always been the case, and undoubtedly it portends a reversal next year – but in the meanwhile, I can bask in the recollection of:

  • Fun Home and Here Lies Love, two of the year’s best new musicals. The first is a chamber piece with an operatic heart, an adaptation of what I would have thought was an unadaptable graphic-novel-style memoir about growing up under the thumb of a closeted father, and coming to grips with his (possible) suicide immediately after coming out yourself. The second is a participatory disco-musical biography of Imelda Marcos, which revises and exceeds Evita, its obvious precursor, in characterization, in political acuity, and, frankly, in sheer tunefulness. (Plus the audience gets to dance with Richard Nixon and Fidel Castro.)
  • Arguendo, the latest offering from Elevator Repair Service, a small theater company doing some of the most interesting work with text that I’m familiar with. The play uses the transcript of a first amendment case about nude dancing as its text, but rather than stage the trial, it performs the text – thereby serving as an illustration of the point (central to the case) that performance as such is meaningful, and can’t be reduced to a textual message.
  • The Good Person of Szechwan, a fantastic Charles Buschian revival from the Foundry Theatre of Brecht’s fable of gods and men, and good and evil. Headlined by the incomparable Taylor Mac as the (female) prostitute with a heart of gold, Shen Te, who must periodically disguise herself as a ruthless (male) cousin, Shui Ta, if she is to survive in this wicked world, the show achieved the extremely tricky tonal task of marrying high camp to completely sincere feeling. This is the production’s second time around the block, and I do hope there’s a third, as I didn’t get a chance to review it before it closed.
  • Add in two worthwhile offerings at the Delacorte in Central Park – The Comedy of Errors and a musical adaptation of Shakespeare’s Love’s Labour’s Lost – plus a revival of Wallace Shawn’s bitterly cutting play, The Designated Mourner, and you can see why I say it’s been quite a year.

It’s also been an exceptionally bountiful year for Shakespeare in New York. Not every production was an unqualified success, but the sheer abundance is worth noting – and celebrating.

  • Two notable productions of Julius Caesar graced Brooklyn’s stages. In one, at the Brooklyn Academy of Music, the setting was African, and for once the civil war of the second half worked better than the intrigue of the opening acts. In the other, at St. Ann’s Warehouse, the setting was a women’s prison, and for once Brutus’s protestations that (s)he loved Caesar referred to a genuine emotion rather than something more formal and covenantal.
  • Meanwhile, two wildly opposed interpretations of Macbeth stalked Manhattan’s blasted heaths. One collapsed the play down to the mind of a single character, as Alan Cumming played a psychiatric prisoner enacting Shakespeare’s play (and playing nearly every part therein) by way of obliquely explaining how he came to be imprisoned. The other gave center stage to the witches (who also got to play multiple characters), malevolent forces in whose hands Macbeth (played by Ethan Hawke) is little more than a pawn.
  • Romeo and Juliet didn’t fare as well as even these equivocal achievements, but I tend to suspect that has something to do with the spirit of our age, which runs counter to the primal currents of the play. But the most heralded Shakespeare offerings were also the most emphatically, even ideologically, traditional: two “original practices” productions from Shakespeare’s Globe, Richard III and Twelfe Night, both directed by Tim Carroll and starring Mark Rylance, playing in repertory. (My review of both, along with a discussion of the original-practices movement, appears in the next issue of the print magazine.)

Other New York theatrical highlights included:

  • Tribes, an attentive play about different varieties of deafness, at the Barrow Street Theatre. (For those of you in or near Chicago, another production is now playing at Steppenwolf. I haven’t seen it, but I’ve at least liked and usually loved whatever Austin Pendleton directs.)
  • Natasha, Pierre and the Great Comet of 1812, a charming cabaret-musical based on an episode from Tolstoy’s War and Peace, playing now and hopefully for a while yet at Kazino, a nightclub-in-a-tent, though the tent does seem to keep moving about (as tents will do).
  • No Man’s Land, an arctically comic piece of Pinterian obscurity starring Ian McKellen and Patrick Stewart (review forthcoming; it plays in rep with Waiting For Godot and down the street from another Pinter, the far more accessible Betrayal – all three are worthwhile, but No Man’s Land was my favorite by a good margin).

Sometimes the best theatrical experiences aren’t staged at all. Red Bull Theater continues to put on a truly astonishing reading series – the highlight from the second half of last season (seasons run from October to July) was a version of Corneille’s The Liar reworked by David Ives, while so far this season the standouts have been John Ford’s ’Tis Pity She’s a Whore and Ostrovsky’s Too Clever By Half.

And, not to leave out the gentle giant to our north, of the various Canadian productions I took in this past year, the two that have stayed with me most strongly were a pair of plays about adultery and misplaced accessories: an operatic Othello at the Stratford Shakespeare Festival, and a cinematic Sargent portrait of Lady Windermere’s Fan at the Shaw Festival in Niagara-on-the-Lake.

I don’t know if there’s a terribly good commercial reason for me to write as much as I do about theater. But if the inclination to do so gives me an excuse to take in seasons as rewarding as this one has been, commercial sense be damned.
