A few months ago, I wrote a piece arguing that “natural law” arguments didn’t make sense to me because they depended on a concept of “nature” rooted in Aristotle, who thought he was doing science. Advocates of such arguments today, though, don’t try to rest them on a scientific understanding of human nature (or natures), but on a dogmatic interpretation of Aristotle. If you want to rescue natural law reasoning, I argued, you’d have to find a way to base your ethics in contemporary scientific understandings of human nature, which probably means evolutionary psychology.
Now, via Andrew Sullivan, I am directed to a piece by Thomas de Zengotita arguing that this won’t work. This is probably a mistake on my part, since I’m not a trained philosopher (“kids, don’t try this at home”), but I guess I can’t help going another round.
In a nutshell, Zengotita argues that the problem is that Darwinian evolution is purposeless; our natures are the result not of design but of a series of accidents. And it’s impossible (he argues) to root an ethical system in a nature that originates in accident:
No matter how advanced the natural science—the naturalistic fallacy—the assumption that something is morally good because it is natural—is philosophically secure. . . . More broadly, given the claim that “Action X is good because the genetic program that triggers it, and our approval of it, was naturally selected for,” one can still ask whether it is good to do what we are genetically inclined to do. That is, asking that question still makes sense because—even using examples favored by evolutionary psychologists—the answer would appear to be: sometimes yes (help a friend) and sometimes no (kill the “other”).
It comes down to this: we cannot find truly ethical guidance in a nature shaped by evolution. Natural selection is random—random as to the mutations that produce variation, random as to the accidents of circumstance that make one variant adaptive and another fatal. Natural selection may indeed be responsible for something like a “mother instinct” that inspires tender mammalian behaviors of which we all approve. But natural selection may also be responsible for our instinctive tendency to fear what is strange and attack what is feared, thus contributing to the pageant of slaughter that has been human history. Ethical thought must take into account what Darwinian nature has made of us, and political provision must be made for that. But nothing ethical per se—nothing good or bad or even meaningful is to be found there.
Sullivan interprets him to be saying that you can’t derive an “ought” – an ethical system – from an “is” – an explanation for why we act the way we do. But that’s not what Zengotita is saying, because his preferred ethical approach is phenomenological – rooted in the experience of psychology rather than a genealogy thereof. And this, it seems to me, also involves deriving an “ought” from an “is” – it’s just a different “is” that we care about.
(Actually, I’ve never understood what you are supposed to derive “ought” from if not “is.” What, apart from “is,” is there to derive from? Deriving “ought” from “should be” without any reference to “is” amounts to making “ought” axiomatic. Why that is better than deriving “ought” from “is” escapes me.)
The “is” that Zengotita cares about is the experience of consciousness. That is to say, he wants to derive ethical “oughts” from how people actually experience their own actions and the actions of others. But this activity itself presumes capacities that not all humans possess. Autistic individuals are often described as not being able “properly” to model other minds. In a very different way, neither are sociopaths. But both autistic people and sociopaths are conscious. So Zengotita’s phenomenology is really a phenomenology of “normal” minds – which is right and proper, but brings us back to the question of what makes such minds “normal.” And the answer to that has to be something like “working the way they were supposed to work” – which, if you drop the idea of an intelligent designer, brings us back, ultimately, to Darwinian genealogies of function.
But so what? Why is that a problem?
Zengotita describes the early modern outlook that Darwin overthrew as follows:
What early moderns saw in nature was purpose—rational purpose, divine purpose. When they looked at an equation in classical mechanics, they saw a “law” in the full sense of the word, and when they looked at the relevant experimental results, they saw something like obedience to that law. “Let there be light” made for beautiful poetry, but F = MA was the word of God. When they looked at a healthy body, early moderns also saw conformity to a designer’s intentions. But, in this realm, one also encountered mortality and disease. Here, for some reason, a sort of disobedience came to pass, a malfunctioning. Why that should be so was the subject of debate, but almost no one questioned the framework of interpretation. Modern medicine was founded on the metaphor of repairing malfunctions of bodily mechanisms.
And so it was, and all the more so, when early moderns looked upon human history—the carnage, the absurd superstitions, the institutionalized barbarities. The conclusion was inevitable. Here was disease of another order, a malfunctioning of another kind. Again, there was much debate over why this should be, but the basic framework of interpretation remained. The question became: what were the Maker’s designs for His human creatures as social beings, what were those natural laws and how could His creatures cure the diseases of history in accordance with them? Modern political and ethical thought took shape on that foundation, dependent on the idea that nature was, in the words of Thomas Hobbes, “the art whereby God hath made and governs the world.”
He then asserts that the recognition that so much order arose spontaneously out of chaos makes it impossible to think in this manner anymore. But why? Our bodies are designed to function in certain ways – designed as the result of a spontaneous process, yes, but designed nonetheless. I don’t think there’s a lot of controversy about how our lungs, say, are supposed to work, or what constitutes poor lung functionality. Why does it matter whether the lung was designed by intelligence or whether it is an orderly arrangement that emerged from chaos according to the operation of simple rules on a diverse array of molecules? The lung’s job, with respect to the human organism, remains the same, either way.
As does the role of the physician. Whether you posit a designing intelligence with a transcendent purpose for humanity or you don’t, a pulmonary surgeon will face the question of whether surgery is “worth it” in a particular case of lung disease. Factors will include the likelihood of a cure, the likelihood of complications, the life expectancy of the patient, the expense (and how that expense will be reimbursed, if at all), etc. None of those factors go away if you say that God has a transcendent purpose for the patient, as He does for every human being. Nor do they go away if you do not posit any such purpose from outside the universe. You can still tell whether an organism is functioning well or poorly even if its only “purpose” is to function well.
You may be noticing that, under the surface, I’m pretty skeptical of deontological approaches to ethics. And yes, I think Darwin’s theory makes such approaches less-credible-sounding. But not all ethics are deontological, and I don’t think deontological approaches to ethics are ultimately that much more credible even if you posit a rational agent that designed the universe, including human nature.
Aristotle understood man to be a social animal, who can only flourish within the context of a community. His ethical concern was therefore the operation of that community such that it contributes to human flourishing. That approach strikes me as entirely compatible with an understanding of human nature updated to reflect the (still infant) science of evolutionary psychology.
As, indeed, is Zengotita’s preferred phenomenological approach to psychology, and whatever ethics may be derived therefrom. Zengotita, in the last part of his essay, outlines such an approach by boiling down Jonathan Haidt’s five moral foundations “to see if there is an aspect of the phenomena that might bring ethical unity to the modules—an aspect that would not need explaining, an aspect we simply understand as the rightness or wrongness in them all.” Which sounds like a fine idea – I like starting with phenomenology just fine, even if I don’t think it’s any more “grounded” than anything else.
Here’s what he comes up with:
The regions of being-in-the-world in which “my” is justly placed are vast and varied and caught up in constant improvisation as well—for they follow the contingent logic of Wittgenstein’s language games; they are as historical as we are. People are not poeticizing, still less are they mistaken, when they speak of “my neighborhood” or “our song” or “her mother.” In all those cases, beyond the merely legal, we are talking about ways of being in the world that have property dimensions, as it were—an ethical aspect that subsists in all embodiments of mind.
This continuum highlights the aspect of human deeds and situations that we recognize as essentially ethical, and irreducibly so. In those violations, we understand wrongness immediately, and in their complements, we apprehend a rightness in the arrangement of things. The ethical aspect of the human condition emerges with consciousness itself, constitutive of Being-in-the-world and the human form of life. And where consciousness is not—in brain modules, for example—there is no ethics either.
Back to Haidt’s moral taste buds . . . it is impossible to miss what these responses have in common—and that is what actually makes them wrong. They are all violations of embodiment, of an arrangement of things “possessed” in various ways and to various degrees by the sources of the intentions that constitute them, by the agents embodied in them.
Right: that’s why arranged marriage, according to which a woman is prepared by her parents for bodily penetration by a man she just met, is wrong, while the hookup culture, according to which a woman is prepared by her classmates for bodily penetration by a man she just met, is right. Or, possibly, the other way around. Or possibly either is right, or wrong, depending on the woman’s expectations going into the encounter. Which would reduce our ethical insight to “violating someone’s understanding of what is right is wrong.” Not even “do unto others as you would have them do unto you” but “don’t make any sudden moves.” That’s not a bad ethical insight, as it happens, but I don’t think it gets us terribly far.
A naive, bloggy evol-psych approach to the same question would ask, “how have men and women evolved different strategies for sexual success?” and would look at the question, “what social arrangements will contribute to human happiness?” through the lens of the (inevitably tentative) answer to the prior question. A less-naive, more-scientific-than-bloggy approach would not jump directly from hypothesis to prescription, but would use the hypothesis as the basis for a research program. The plasticity of human nature, including our sexual nature, is neither infinite nor zero, and mapping the contours of that plasticity will help us understand the practical limits of any program to “reform” our ethics, whatever philosophical stream that program originates in.
Personally, I view much of Aristotle’s approach to ethics as relatively easy to rescue from the “death of God” – that is to say, from dropping the assumption of a single intelligence that serves as the ground of Being. For that matter, I view much of Hegel’s approach as relatively easy to rescue from the residue of eschatology implied by an “end state” to history. Most of my conversations never end (or threaten not to, anyway); why should the dialectic of history be any different? Meanwhile, a scientific study of human nature, based, as any study of biological entities must be, in an appreciation of natural selection, strikes me as not only compatible with an Aristotelean-Hegelian framework for thinking about ethics but kind of necessary to make such a framework effective in dealing with real problems.
Ah, Julius Caesar. That old middle school torture. Shakespeare’s most-heady, least-sexy play, the one Sam Johnson called “cold and unaffecting.” If you set it in ancient Rome, it’s apt to be a crashing bore. If you do it in modern dress, you can do interesting, even powerful things with the politics – but after Antony’s funeral oration, when actual civil war breaks out, the play falls apart, and the situation is apt to seem comical rather than affecting. How to teach this rather arthritic dog of war to once again cry havoc?
Well, the Royal Shakespeare Company’s Gregory Doran had a thought: set it in Africa, with an all-black African cast. Caesar is the reigning populist dictator; Antony his energetic and good-looking right-hand man; and the conspirators are plotting a coup that will make them the new big men of the country.
At first glance, it sounds promising. Where, in a modern Western setting, the plotting and fighting would inevitably come off as metaphorical, in Africa coups and counter-coups are an all-too-present reality. And then there are the supernatural elements; the soothsayer, certainly, should play better than usual. But the conceit runs into trouble almost immediately.
We open with the people celebrating Caesar’s triumph and anticipated ascension to the throne. (This process begins before the lights go down, actually – a move I always like, no matter how well-worn.) Then in come two men – “tribunes” in the text, but they are in uniform here, either gendarmes or soldiers of some sort – who chastise the people for their enthusiasm. Caesar’s triumph, after all, was over a fellow Roman – Pompey, whom once they loved. Why then do they cheer?
In the context of the play, the scene is intended to set the stage for Cassius’s entreaty to Brutus. The threat Caesar poses to the Roman constitution is potentially genuine – we know that because of the tribune scene; and the Roman people cannot be counted upon to do any more than follow the top dog – we know that from the same. Thus Brutus has a serious dilemma: the only way to save the constitution may be by violence, and yet violence may also fail. But, in this production, who do these uniformed scolders work for if not for Caesar himself? And if for Caesar, then why do they object to the public’s adoration of him? Right from the start, the African setting has muddled the politics, and that’s a huge problem for a play whose tragic hero takes politics, and political principles, very seriously indeed.
It gets worse with Cassius’s appeal to Brutus. Cassius (a bug-eyed, nervous Cyril Nri) talks his usual envious line of how he was a greater soldier than Caesar – but Brutus is supposed to be animated by some kind of noble idea. What is it? Why do these men talk about being reduced to the status of slaves by Caesar? What was their status before his rise? Who were they? Who are they? Whence comes their power? Brutus is supposed to be the reluctant conspirator, who loves Caesar but fears his ambition; if we don’t know what the idea is that animates him, then how is he any different from Cassius?
He isn’t, in this production. He’s “nobler” only in that he doesn’t take bribes and has a far more winning personality. But his trust in the nobility of Antony looks like pure political stupidity (which it is, of course, but it shouldn’t be only that) rather than principle; and his big speech before Caesar’s funeral saying that, though he loved and honored Caesar, he served him with “death for his ambition” falls grossly flat. What, other than ambition, has Brutus himself been fighting for?
The plus side of the conceit begins to show itself with Antony’s funeral oration, and carries through strongly into the tricky tent scene in Act IV. Ray Fearon makes an exceptionally plausible demagogue as Antony, and his segue from dictator’s right hand to would-be dictator coldly checking off names for execution has never seemed more natural. And for once, the tent scene really works. Brutus is as hot under the collar as Cassius is, and Cassius’s distress at apparently losing Brutus’s favor feels completely unfeigned. And when Brutus composes himself to “receive” the news of his wife’s death (news he already knows), it’s not a moment of chilly put-on stoicism. It’s deeply affecting.
Paterson Joseph’s Brutus turns out to be the most interesting performance in the show, as well it should be, but I can’t decide whether I finally thought it worked. This Brutus is anything but reserved – and, frankly, there’s not much about him that’s “noble” as we usually construe that word. He is honest, open-hearted, sincere; but his most shocking attribute is a sense of humor. He jokes (to the audience!) that he can read by the light of the comets and other celestial exhalations that prophesy Caesar’s doom; he laughs at his somnolent servant, Lucius (Simon Manyonda); he chuckles mordantly at his own army’s prospects in the coming fight at Philippi. He’s a very winning character – much more personally appealing than Antony, which is quite a novel reversal. For this novelty alone, the production is worth seeing.
It felt like this approach was a necessary alternative to making Brutus the noble defender of Roman liberties, because, in this production, we cannot identify what Roman liberties are to be defended, nor any obvious threat to them from this Caesar (as played by Jeffrey Kissoon, a rather out-of-touch old dictator, more Duncan than Julius, which only further suggests that Brutus is animated by ambition just like the other conspirators). But if we have nothing but personal qualities to draw us to Brutus, no higher purpose, then we can only be drawn to him if we forget, for a moment, that he’s an assassin, a traitor, a murderer. And if we forget that, then what’s left of Shakespeare’s play?
What’s left is a very affecting second half, which is not my usual reaction to productions of Julius Caesar. I don’t want to be too hard on Gregory Doran. Julius Caesar is an exceptionally tricky play. Most successful productions I have seen were actually successful only in the first half, and fell apart in Acts IV and V. This production seems consciously to have worked backwards, starting with the civil war and its end, and how to make that play effectively. As a consequence, for once the play gets stronger as it wears on. Which, come to think of it, is more fun than having a strong opening.
Julius Caesar plays at the Brooklyn Academy of Music’s Harvey Theatre through April 28th.
Rod Dreher comes to Andrew Sullivan’s defense on the subject of taking Islamic violence seriously.
What distinguishes Islam is that its founder practiced violence, whereas Jesus quite obviously favored the exact opposite – nonviolence to the point of accepting one’s own death. Unlike Christianity, but like Judaism, Islam also claims sacred land, and, along with extremist forms of Judaism, the divine right to repel intruders from it. Religion is dangerous enough. A religion founded by a violent figure, with territorial claims, and whose values are at direct odds with modernity is extra-dangerous. Which other major world religion believes that apostates should be killed? Or regards negative depictions of the Prophet as worthy of a death sentence?
This is true, and it’s important to say. It gives Islam the respect of taking it seriously. When a Christian murders, as many have done, sometimes with church sanction, he acts in direct contravention of Christ’s example and command. When a Muslim murders, he sometimes carries out Muhammad’s command, which is to say, Allah’s.
To which I can only say: yeah, well, that’s just, like, your opinion, man.
I have learned to be wary of people who say that their opinions are “serious” and other people’s aren’t. “Serious” people are the ones who “knew” that Iraq had an active nuclear program and who “know,” today, that Iran has similar ambitions and cannot be deterred. “Serious” people are the ones who know that we are at “war” with terrorists, and that other metaphorical understandings of our situation aren’t “serious.” Where the rubber meets the road, “serious” means “expecting violence.” It isn’t the same thing at all as “knowledgeable,” but rather the mirror image in ignorance of the platitudinous cotton candy of multiculturalism that Dreher, Sullivan and I alike disdain.
We don’t need more seriousness. Nor do we need more sugary platitudes. We need knowledge.
Samuel Huntington’s line – “Islam has bloody borders” – struck me as correct at the time it was made. But correct or not, it was an observation of reality, and consequently subject to empirical verification. You can actually count up inter-communal conflicts and see how many involve Muslims. Then the question becomes: why?
If we were to test the proposition, “Islam is inherently more violent than other religions,” we’d need to compare Islamic civilization across time and space to other civilizations (and control properly for other factors). Are Dreher and Sullivan quite sure of what the result of such a comparison would be? Are they quite sure that, say, things like cousin marriage, or a burgeoning population of underemployed males, or the legacy of Cold War-era arms races, or the coincidence of massive oil wealth in the hands of a particularly puritanical sect on the Arabian peninsula, or the intrusion of Zionism, or the demographic decline of Christian Europe (and Russia), or the ructions of modernization meeting a subordination of women that pre-dates Islam, or . . . well, there’s a long list of theories for why Islam’s borders are bloody now. Are we quite sure that those theories are less-correct than the theory, “they are getting their ideas from a bad book?”
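To be concrete about the shape of that test, here’s a minimal sketch in Python (statsmodels assumed; every number below is invented for illustration – real work would need an actual conflict dataset and far more careful controls). The structure is just a regression of some violence measure on religion plus the candidate confounders, to see whether the religion coefficient survives the controls:

```python
# A sketch of the comparison's structure, not real research: every number
# below is invented, and the variable list is only a sample of the
# confounders mentioned above.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "conflict_deaths_per_100k": [3.1, 0.4, 7.9, 1.2, 5.5, 0.8, 2.2, 0.6],
    "muslim_majority":          [1,   0,   1,   0,   1,   0,   1,   0],
    "young_male_unemployment":  [22,  8,   31,  9,   27,  11,  14,  12],
    "oil_rents_pct_gdp":        [18,  1,   2,   3,   25,  0,   6,   2],
})

# Regress the violence measure on religion *and* the candidate confounders;
# the question is whether the religion coefficient survives the controls.
X = sm.add_constant(df[["muslim_majority",
                        "young_male_unemployment",
                        "oil_rents_pct_gdp"]])
model = sm.OLS(df["conflict_deaths_per_100k"], X).fit()
print(model.params)  # is there anything left for "bad book" to explain?
```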
Dreher says that when a Christian “murders,” he acts in direct contravention of divine command. Fine: but what is murder? Is it “murder” to wage war to liberate the Holy Land? Or to obliterate the Cathars? Or to convert the Lithuanians? Or to reconquer Spain? I’m quite sure those who prosecuted those wars in the divine name would have been distinctly puzzled by the suggestion that their actions constituted murder – as opposed to justified killing. And, of course, “murder” is prohibited in every civilized society.
Meanwhile, it’s my people who wrote Psalm 137, a prayer for vengeance that ends with glee at the thought of dashing our enemies’ children’s brains on rocks. And yet, over the sweep of history since the rabbinic period, one would have to call the Jewish people among the least-prone to extreme inter-communal violence. We can debate the reasons for that historical fact, but what it should show at a minimum is that the proposition, “violent texts are a primary cause of inter-communal violence,” needs some work.
Dreher and Sullivan alike are Christians. I’m not. They assume that Jesus’s call to “turn the other cheek” means that Christianity has acted as a historic brake on violence. As a Jew, I have to question that assumption. After all, the number of Christian countries in history that have been governed according to principles of non-violence is exactly zero. Someone from a religious tradition whose founding texts articulated rules about when violence is justified or permitted might look at the long history of Christian violence – not just violence by Christians, but violence undertaken with the Church’s encouragement and undertaken in the name of Jesus – and say: gee, maybe saying “turn the other cheek” backfires, makes all violence seem equally sinful, and therefore opens the gate to truly horrific behavior?
I’m not endorsing that view – I’m just saying that there are perfectly logical arguments that can be made that completely reverse the Christian apologetic claim that because Jesus preached non-violence and Muhammad (like Moses) led an army, therefore Christian civilization is inherently less-violent than Muslim (or Jewish?) civilization. Obviously, if you’re a Christian, you’ll find a Christian apologetic argument congenial. But that doesn’t mean it has analytical value.
For that matter, the United States was founded by genocidal racist slave-trading colonialists. Does that mean the Constitution is essentially and irredeemably racist? Isn’t that where the “bad book” theory logically leads?
Again, I’m not saying that religious (or other foundational) texts are irrelevant. I’m certainly not saying that all religious (or political) traditions are the same. I’m saying that the inference, “bad book = bad acts,” is highly suspect, and obviously so. There may be a very good argument that Islamic civilization has a distinctive problem with modernity that will be very difficult for it to solve, precisely because of the nature of its founder and the historic understanding of its founding revelation. I would expect to hear that argument from liberal Muslims first and foremost, because they are the ones who would be most interested in solving it.
Which brings me to one last note. In passing, Dreher says:
Obviously many, many Muslims choose less bloodthirsty interpretations of these verses, and this is the sort of thing that non-Muslims should encourage, for the sake of peace.
This is another formulation I think we should properly suspect. There is very, very little that non-Muslims can “encourage” with regard to Muslim interpretation of their sacred texts. We can “encourage” Muslim leaders to silence, jail or kill individuals we consider to be a threat. And by all means, we should ask questions – heck, we should sometimes ask impolite questions if it’s necessary to do so to get real answers. But I think Dreher would be quite offended by the suggestion that the proper role of Muslim leaders is to “encourage” Christians to interpret their own religion in a way that is more congenial to Muslim interests or feelings. Why wouldn’t Muslims feel the same way about Christians “encouraging” them to interpret their holy book the way Christians prefer? And if they would, then isn’t the kind of “encouragement” that Dreher says we should engage in more than likely to backfire?
UPDATE: Dreher’s later post’s title gets it right: “Uncle Ruslan, a Good American.” Exactly. A good American – because the identity we share is “American,” not Muslim. So it’s not for us to say who is a good Muslim and who isn’t, and to encourage Muslims to be the “right sort” of Muslims from our perspective. But it is for us to say what makes a good American and what doesn’t – and to be as firm (or, if we prefer, as lax) as we like about applying our own standards in that regard, as Americans.
It’s tough for me to be objective about Jesse Berger’s Red Bull Theatre. His track record really has been extraordinary. His company first came to my attention with their electrically theatrical (and humanely moving) production of The Duchess of Malfi from 2010, and with pretty much everything they’ve done since – The Witch of Edmonton, Genet’s The Maids, and, earlier this season, Ben Jonson’s Volpone – they’ve gone from strength to strength. (And in between shows, they stage “Revelation Readings” of off-the-run classics that frequently outshine other venues’ full-scale productions; highlights for me have included David Ives’s adaptation of Corneille’s The Liar and Tom Stoppard’s adaptation of Pirandello’s Henry IV, but I’m quite sure I’ve missed much of the company’s best work; they do so much.) So it goes without saying that you should see their current offering, Strindberg’s Dance of Death as adapted by Mike Poulton, directed by Joseph Hardy.
I’m coming late to Strindberg, but I’m trying to make up for tardiness with enthusiasm. And there’s been some excellent Strindberg on offer in New York these days, from the Creditors that BAM brought over from London three years ago to last year’s South African adaptation staged at St. Ann’s Warehouse, Mies Julie. What’s striking to me is how perfectly contemporary Strindberg feels once a little adaptation shakes off the surface period trappings; like Chekhov, he opens onto a frighteningly accessible internal world – accessible to our own sensibilities, and frightening in its depths, which give us pause.
Dance of Death is another story of a couple’s descent into games of mutual torture. Edgar (Daniel Davis) and Alice (Laila Robins) have been living together for twenty-five years on a Swedish coastal island where Edgar is captain of artillery. But is this living? To marry Edgar, Alice gave up a promising stage career. His career, meanwhile, stalled midway. Before long, marriage had decayed into trench warfare, with the children periodically sent over the top for fruitless charges on the enemy’s guns. Now the kids are grown and moved to the mainland. They have no friends; they can’t even keep servants anymore, because who would want to live in such a miserable house? She yearns for his death; he alternates between gleeful anticipation of same and spiteful determination to live forever.
Cutting into this fatal waltz is Gustav (Derek Smith), a former lover of Alice’s who may (or may not) have introduced her to Edgar, and who has (improbably) moved back to this bleak island seeking refuge from his own mediocre career and failed marriage (which may in fact have been wrecked by Edgar). Immediately, Alice and Edgar assail him with a view to winning his allegiance in the marital contest. For a while, Gustav believes that he’s an active player in the game, capable of either pitying Edgar or destroying him, of loving Alice or rejecting her, but it becomes clear by the end that he’s been nothing but another piece of ground for them to fight over, and that he’ll be destroyed long before either of them gives way.
Comparisons to Edward Albee’s Who’s Afraid of Virginia Woolf? are entirely apropos; indeed, particular scenes from that masterwork feel like they were directly inspired by scenes from Dance of Death. Another point of comparison, though, would be Bergman, particularly “Scenes From a Marriage.” In Who’s Afraid, the games George and Martha play are ultimately a survival strategy – specifically, their mutual strategy for maintaining her sanity, a strategy that, by the time the play begins, has just about run its course; and the play builds to revelations that are intended to be purgative (the last act is called an “exorcism” for a reason). Edgar’s arc is superficially similar; he is also revealed to be far more in control of events than it appeared at first. But there is no purgation; Edgar and Alice end, largely, where they began, but having renewed their vitality through combat, and from the blood of Gustav, their latest victim – the only kind of life either of them can actually participate in. Bergman’s couple similarly can’t stop dancing around each other, can’t stop being cruel to each other and can’t stop loving each other, and expressing their love in part through cruelty. Even divorce doesn’t alter their dynamic except superficially. This is part of what I mean by saying that Strindberg feels contemporary – there is no sense that social arrangements are to blame for, or that alterations in those arrangements could cure, a malady written on the fabric of the soul.
I am less and less interested in “reviewing” productions. I suppose I’d say this one dragged a bit in the latter half of Act I, partly because there was never enough electricity between Smith and Robins to spark a flame, partly because the text has them dancing in a circle for a bit there. But I found Davis’s Edgar fascinatingly elusive; he led me around by the nose quite as effectively as Edgar led Alice and Gustav. In the end, I saw why she would choose to stay with the old vampire, and that’s the bloody heart of the play.
The Dance of Death plays at the Lucille Lortel Theatre through May 4th.
I’ve refrained from saying much about Boston because I don’t have any special information, expertise or knowledge, and I’m not particularly interested in yoking that tragedy to my own political hobby-horses (or straining to prevent others from doing same). Our culture’s penchant for gnawing on the bloody bones of tragedy is one of our least-attractive qualities.
I’ve said this before, and the more I say it, the more I like it. (Some might say that is characteristic of my own self-involvement.) I like it because it continues to accrue new meanings.
The nice thing about thinking we’re in “The Battle of Algiers” is that it opens up space for a discussion of policy. Thus, we can debate whether the problem is an imperial foreign policy (or whether the problem is that, like France in the 1960s, we’re inviting in immigrants from the very regions affected by that foreign policy). We can debate whether we are being insufficiently vigilant in the war on terror, or the war on radical Islam, or whether our excessive vigilance is precisely what is fueling radicalism and terrorism.
But I’m increasingly skeptical that this line of thinking has any utility. The Boston bombers were legal immigrants who came as children from Dagestan, a region minimally affected by American foreign policy. The older Tsarnaev, Tamerlan, appears to have been introduced to radicalism by an American citizen who converted to Islam. His own Imam seems to have been among those alarmed by his radical turn. So far, all the evidence suggests that he was inspired by groups like al Qaeda, but not actively recruited by any such group. The brothers appear to have seen themselves and their resort to terrorism as part of something much bigger than themselves. But it’s not clear that this was true anywhere but in their own heads. Regardless of their professed ideology or inspiration, fundamentally they were space monkeys.
There’s probably a policy question to be asked about whether we’re making more space monkeys than we might, whether some cultures or countries are producing them in especially alarming quantities these days. There are undoubtedly things to say about the pace of job creation relative to changes in the size of the labor force, about the fate of masculinity in an era of global deindustrialization, the effect of mass-communications on traditional cultures, etc. But cultures and economies are slow-to-turn ships, so any policy questions thereby implicated are not “solutions” to any near-term security concern.
I guess my point is: the quest for perfect security is just as foolish when it’s pursued under the banner of anti-imperialism as it is when it’s pursued under the banner of neoconservatism. We should be advocating a more restrained foreign policy because our current, highly forward defense posture is wastefully expensive in blood and treasure, eroding our constitutional order, and creating more problems overseas than it solves. But even after a hypothetical rethinking of American foreign policy, we’ll still be the richest and most powerful country on earth, a symbol of the order of things as they are. And as such, a potent target for space monkeys everywhere.
I am angry at George Eliot today. I just finished reading The Mill On the Floss, which, as per usual with me, took me far too long to finish in spite of my being thoroughly caught up in the story. When I finish reading Rod’s wonderful book (which is going much more swiftly) I may have some thoughts on the small-town panopticon inspired by both books, but right now I am too busy being angry.
I’m angry at the ending. (If you don’t like to have a 150-year-old novel “spoiled,” you should stop reading now.) Eliot, having taken us with her heroine, Maggie, on a tempestuous journey guided by (mostly noble, if misguided) passions, has finally brought her home to port, and to the question of how, after such a journey, and the knowledge that comes with it, one is to live. The important relations of her life are largely resolved – peace is made with her cousin, Lucy, with her faithful if resentful lover, Philip, and with her seemingly most-unlikely aunt, Mrs. Glegg; no such peace appears possible with her fierce brother, Tom. And I looked to learn whether Maggie would, in fact, have to leave St. Ogg’s, or whether she would carve out a place for herself there in spite of the town’s disapprobation; whether she would “settle” for Philip after all, or resolve on a solitary life; whether she would carry a torch for the plainly undeserving Stephen, or whether, as she aged out of passionate intensity, she would come to see what was so manifestly unattractive about him to this reader.
And then she up and throws a calamity at her characters in the form of a massive flood that drives Tom and Maggie, the estranged brother and sister, briefly together before killing them.
And I’m still fuming about it! It felt like she took people I believed in as people and had long since stopped reading as schematic figures – Tom all pride and judgment, Maggie all passion and sentiment – and roughly yoked them back into an allegorical relation that could not but kill them. Why would she do this to them? To me?
I found myself meditating on Mary Anne Evans’s own life (with which I fear I am insufficiently familiar to meditate adequately) to try to make sense of her decision, wondering where she might have been in her own journey. Evans’s life attests to an appreciation for setting one’s star by love – physical and spiritual – above either personal or societal interest, and Maggie Tulliver, who so fruitlessly struggles to orient herself by some other star, is a transparent author-surrogate. But it felt to me as though something wasn’t yet worked out – that would be worked out later, in Middlemarch - about this character; that, in 1860, there was still a reason she could only be redeemed through annihilation.
It’s a testament to the power of the writing that I could be invested enough to be angry at it. But I am curious whether others have had that experience with this novel. It felt, to me, like a very late failure of moral imagination on Eliot’s part – but, even if I hold to the notion that it’s a failure, that doesn’t mean I’ve got the cause right. The reason could as easily have been commercial – which would tell us something about the sensibilities of the Victorian reading public – or an instance of structural considerations predominating over questions of character (I realize that where I “wanted” the novel to go, in its last pages, would amount more to settling in or even petering out, which is usually less-satisfying than ending with a bang).
Any Eliot scholars among my readers care to help me out?
The 16th-Century Inflation Caused By Spanish Silver Drove A Real Increase In Wealth. Just Not In Spain.
I begin to feel like a broken record, but Matt Yglesias has a cool post up about “Game of Thrones” and (what else) monetary policy:
As you watch members of House Lannister and House Tyrell scheme for control over King’s Landing, here’s something to keep in mind. The Westeroi conventional wisdom that the Lannisters are the richest house in the Seven Kingdoms is dead wrong. House Tyrell is number one in all the ways that count. . . .
Gold is useful primarily because it’s a convenient medium of exchange (who wants to carry all that wheat around) and a durable store of value (keeping a whole bunch of horses alive and healthy is itself a resource-intensive process). So people with claims over valuable real resources will often end up accumulating gold. But though the Lannisters have more gold than anyone else, that’s not how they got their gold. They just own gold mines.
Now don’t get me wrong, you’d rather own gold mines than not own them. But the ability to pull shiny metal out of the ground is trivial compared to the power of a well-fed army. Imagine a scenario in which the Westerlands are out of food, and the Reach is out of gold. The Tyrells and their bannermen will need to curtail their consumption of luxury goods until they can manage to sell food for gold, but the austerity will be survivable if a bit unpleasant. The Lannisters, by contrast, are going to find that if they try to trade a whole big pile of gold for a whole big pile of food, the price of food will skyrocket. The illusion of Lannister wealth is based on the idea that we can take the marginal price of an ounce of gold, then multiply that by the total quantity of the Lannister gold supply, and then conclude that the Lannisters are hyper-wealthy. In reality, any effort to mobilize all that metallic wealth will lead to inflation rather than the ability to mobilize vast quantities of real resources.
You can see this historically from the Spanish conquest of the New World and the ensuing influx of newly mined “treasure.” This appeared to give the Habsburg dynasty a decisive wealth edge vis-à-vis its European rivals, but the Habsburgs’ struggles with France led to the inflationary “price revolution” and ultimately the victory of a French state built on the control of real resources—productive agricultural land and a large population. My guess is that by the time A Song of Ice and Fire is concluded we’ll see something similar. Real resources—not shiny gold—are the true test of wealth and the real source of power.
Indeed! I suspect most goldbugs would agree – that’s why they argue that you can’t create real resources by debasing the currency.
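Yglesias’s marginal-price point, by the way, is easy to make concrete. Here’s a toy sketch (Python, all numbers invented): if demand for gold slopes downward, dumping a large hoard on the market fetches far less than the last ounce’s price times the total stock.

```python
# Toy illustration (all numbers invented) of why "marginal price x total
# stock" overstates what a big hoard can actually buy.

MARGINAL_PRICE = 100.0    # price of one more ounce at current volumes
TOTAL_STOCK = 1_000_000   # ounces the Lannisters want to mobilize
SLOPE = 0.00008           # hypothetical price drop per additional ounce sold

def price(q: float) -> float:
    """Linear inverse demand: the price fetched by the q-th ounce sold."""
    return max(MARGINAL_PRICE - SLOPE * q, 0.0)

# Paper wealth: value every ounce at today's marginal price.
paper_wealth = MARGINAL_PRICE * TOTAL_STOCK

# Actual proceeds: the sale itself depresses the price, so integrate along
# the demand curve (for a linear curve, average the first and last prices).
proceeds = (price(0) + price(TOTAL_STOCK)) / 2 * TOTAL_STOCK

print(f"Paper wealth:  {paper_wealth:,.0f}")   # 100,000,000
print(f"Sale proceeds: {proceeds:,.0f}")       # 60,000,000 - much less
```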
Yglesias’s rejoinder to the goldbugs would be that the optimal level of inflation is the one that corresponds to complete mobilization of real resources. If you have lots of people sitting around unemployed, that’s prima facie evidence that people are hoarding money rather than investing it in productive activity (or purchasing goods and services that cause others to invest in the productive capacity necessary to deliver those goods and services). Make holding money less-attractive, and you make that investment (or consumption) relatively more-attractive, and the economy moves in the direction of complete mobilization.
And this feeds back into the economy’s productive capacity in two ways. First, idle human resources decay in value. Apart from the human tragedy, that’s the reason why long-term unemployment is a serious national economic problem. So if you mobilize those resources, you are preserving – or enhancing – national wealth by maintaining or increasing the productive capacity of the workforce. Second, when the economy is operating at or near full-employment, the incentives to develop productivity-enhancing innovations are maximized – because this is the only reliable way to increase profits. By contrast, when labor markets are slack, it’s more possible to squeeze profits out of labor concessions, and more possible to scale up production simply by hiring more people to operate the same processes at larger scale. And those productivity-enhancing innovations are what primarily drive national wealth in a modern economy.
So Yglesias believes that the amount of money – whether made of metal or paper – circulating in an economy does affect the mobilization of real resources. And he believes that the mobilization of real resources does affect the productivity of the economy, and thereby contributes to increases in real wealth. So why did a massive infusion of silver from the mines of Bolivia leave Spain economically hollowed out, with other powers – France, England, the Netherlands – having leaped ahead in terms of productive capacity?
That’s a complicated question, but part of the answer is surely that Bolivian silver enabled Spain to purchase more goods – consumer and capital goods – abroad. It therefore created powerful incentives in, for example, the Netherlands to invest in productivity improvements. Spain felt wealthier, because they actually were wealthier – they had a larger share than previously of claims on the globe’s productive capacity, because they had a bigger share of the world’s mediums of exchange (gold and silver). But because, at the margins, their society was less-well-positioned to increase its productive capacity, capital flowed to better-positioned societies, and those societies saw a durable increase in wealth.
Moreover, the distributional consequences of the silver infusion made the situation worse. After all, the increase in wealth didn’t accrue to Spanish society as a whole – because Spanish artisans and peasants weren’t much involved in generating the wealth. Rather, it accrued to the top of Spanish society: the court and those who depended on the court for power and profit. Ordinary Spaniards suffered from the inflation but didn’t participate directly in the influx of wealth, so the infusion of silver made them poorer, and therefore even less-able to invest in improving their productive capacity.
How is that relevant to us today? Well, this was kind of my point in my Dark Matter post. If America isn’t actually a net-debtor because we’ve invested successfully in profit-making enterprises abroad while foreign investors in America have overwhelmingly preferred low-yielding government bonds, that should reassure us that we’re not going to have a currency crisis any time soon. But it shouldn’t reassure us at all about what we’re doing to our nation’s productive capacity in the long term. Indeed, it should make us more worried if selling bonds (not much different from printing money or digging up silver) and investing the proceeds abroad is an economically viable strategy for a good long while. Because that means we might keep doing it long after its social costs become manifest.
The subject came up at lunch recently, apropos of a writer for Mondoweiss who is apparently the son of people some of our guests knew. The young fellow spent some time in Gaza and has become a professional pro-Palestinian and anti-Israeli (not necessarily the same thing) activist. There was much clucking around the table about the shame, until someone asked the question: well, would it be better or worse to have a son who became an extreme left-wing anti-Zionist—versus having a son who became a right-wing settler?
I would not describe the people around the table as right-wing in general, nor right-wing within the specific spectrum of Jewish opinion about the “situation” in the territories. In the American context, these were liberal Democrats; in the Israeli context, these were probably Yesh Atid types with no love for Netanyahu. But the immediate answer of the bulk of the group was: the settler would be obviously preferable. He would, after all, still be “family” in some sense, even if wayward.
But the mere fact that the question could be asked suggests that, on some level, the group understood that the settlement project as a whole occupies extreme ground. That a “settler son” was the appropriate hypothetical to compare to the “traitor son.” And how do we really decide when, and in defense of what, or whom, extremism actually is a vice? And what are we supposed to do then, when the extremist is “in the family?”
This is an abstraction for most of us, because most of us aren’t in situations where extremism presents as a realistic option. If my son, when grown, decides to become a settler in the West Bank, or decides to become a pro-Palestinian agitator, either decision would require a conscious distancing from the life he was actually living, in America. If you are closer, physically, to an intense conflict where extremism naturally finds fertile soil, you will inevitably wind up with extremists “in the family.” And, probably, “traitors” as well, at least in the eyes of extremists.
Of course, the whole point of abstractions like “nationalism” is to create an emotional affinity that substitutes for actual proximity—to make us “feel with” people we don’t know, and see them as virtually family. Ditto with abstractions like “the proletariat” or “the victims of imperialism/colonialism”—the “traitor” has, in a sense, chosen not so much to affiliate with the “enemy” as to affiliate with an alternative imagined community, with its own rules for inclusion and exclusion, and its own extremists and traitors.
It’s probably possible to understand all of our substantive commitments as choices of who our families—literal or figurative—are.
So, returning to the question around the table: I think the settlement enterprise as a whole has been a catastrophe for the State of Israel. It’s obvious, to me, that a nice Jewish boy digging in somewhere in the West Bank is a far bigger problem for the State of Israel than a nice Jewish boy blogging about how Zionism was a historic crime and Hamas is part of the proletarian vanguard (or whatever). But, conceptually, “flipping” the valence on the two hypotheticals—calling the settler the “traitor” and the Mondoweiss-nik an “extremist”—doesn’t really work. Or, if it does, what it really amounts to is changing one’s own allegiances—from an allegiance to the Jewish nation to an allegiance to the “international left” or some similar abstraction.
There is always the alternative of simply writing off anyone who stays off the yellow line in the middle of the road. But inevitably this implies a thinning of all of one’s allegiances. Until you don’t really have a family, literal or figurative, left.
Far more difficult to say: no, these fellows are both, in some sense, family. But the harder path makes for more interesting discussions around the table.
The term comes from this paper from 2005, which argues, in so many words, that America’s apparent massive net-debtor status is an accounting illusion caused by not marking our assets to market.
Here’s the argument, somewhat simplified.
The United States has been running trade deficits since, basically, forever. What that means is that America purchases more stuff abroad than we sell. A trade deficit implies a capital surplus – the “extra” money we send abroad must be “recycled” by investment back in the United States. What that looks like, on the surface, is that Americans are buying stuff abroad, and financing their purchases by mortgaging our country. And, in the traditional green-eyeshade mode of looking at this, one day it’ll all come crashing down when our foreign creditors call in their debts and we have to sell Alaska or Apple Computer or whatever to the Chinese to pay back the loans we took out to buy all that cool stuff.
The “Dark Matter” theory says that this is incorrect because it assumes that the net capital surplus (equal to the net trade deficit) is the only number that matters. What should matter, rather, is the relative value of national assets and liabilities. And that requires looking at foreign investment in the U.S., and U.S. investment abroad, separately, and looking not only at the annual investment numbers but at the change in value of those portfolios over time.
We don’t have mark-to-market numbers for these investment portfolios, but we do have data on the earnings from those portfolios. We know how much we pay to service our debt (public and private) owned by foreign entities, and we know how much we receive on the assets we’ve accumulated abroad. And it turns out that the net income number is positive, relatively stable, and has been rising over a long period of time. What the paper concludes from this information is that the real value of America’s investments abroad has risen faster than our liabilities have accumulated, and that the apparently massive accumulated trade deficit is just an accounting fiction. It’s not that we mortgaged Alaska to buy a Lexus or a Mercedes. It’s that we mortgaged Alaska to buy Eurodisney, with a little money left over to buy a Lexus or a Mercedes; and Eurodisney has proved so profitable that if we sold it we could easily pay off the Alaskan mortgage.
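The paper’s arithmetic is simple enough to sketch. Here’s a minimal illustration of the capitalization logic – the numbers are hypothetical placeholders, not actual figures, and the 5% capitalization rate is a stylized assumption for illustration: if the net income stream is reliably positive, the implied net asset position is positive too, however large the reported deficit.

```python
# A minimal sketch of the "dark matter" capitalization logic, with
# hypothetical placeholder numbers (billions of dollars).

CAP_RATE = 0.05  # stylized rate for capitalizing an income stream

reported_net_position = -2500.0  # official net international investment position
net_investment_income = 30.0     # income earned on assets abroad, minus
                                 # payments on foreign-owned assets here

# Capitalize the income stream: an asset that reliably yields $30B a year
# at a 5% rate is "worth" $600B, whatever the official books say.
implied_net_position = net_investment_income / CAP_RATE

# "Dark matter" is the gap between the implied and reported positions.
dark_matter = implied_net_position - reported_net_position

print(f"Implied net position: {implied_net_position:+,.0f}")  # +600
print(f"Dark matter:          {dark_matter:+,.0f}")           # +3,100
```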
That’s a really interesting argument! The obvious question to ask is how such a thing could be – why should America earn so much more on its investments abroad than foreigners earn on their investments in America? Hausmann and Sturzenegger (the authors of the paper) argue that the primary driver of America’s higher returns abroad is America’s knowledge advantage. We’re applying intellectual property and expertise (brands, technology, management skill) to foreign assets, causing them to yield more than could be achieved by another investor; by contrast, foreign investors in the U.S. overwhelmingly are buying government bonds, and so are not leveraging any intellectual property or expertise of their own to increase the value of their investment. But they acknowledge other possibilities – for example, that the immaturity of the Chinese financial markets creates a large preference for safe American securities, so that (in effect) the Chinese are using the American financial system as an intermediary, investing in American bonds to indirectly finance American companies’ investments in (among other places) China; in effect, we’re earning profits (as a nation) simply by virtue of being middlemen. I’d also argue that the dollar’s status as the premier global reserve currency plays a big role here – it’s a lot easier to make profits on your international investments when you pay a lower interest rate on the money you borrow than your competitors do, because of the currency you’re borrowing in.
If the theory is true, it goes a long way to discounting fears that we’re “turning into Greece” because we’re living beyond our national means. With proper accounting, we see that we’re not a net-debtor at all; we’re a net creditor. And the only thing that would change that picture materially would be a sharp drop in returns on our investments abroad, something that has not been observed in recent history (possibly because, in recent years, we haven’t seen any major global economies go Communist – the easiest way to imagine a sudden and precipitous drop in the value of our foreign investments is to assume that they are simply seized by foreign governments). But the Greek scenario has never been particularly applicable to America; countries that borrow exclusively in their own currency can have lots of financial problems, but literally replicating the Greek experience isn’t one of them.
On the other hand, I’m not sure how much it should reassure anyone concerned about the effects of global imbalances like these on America’s social structure. After all, if I understand the argument correctly, America is borrowing more money abroad than we’re investing abroad, and earning sufficient return on the investment that we can easily pay the interest on what we borrow. But a great deal is hidden in that “we.” The entity doing the net borrowing is overwhelmingly the Federal government. The entities doing the investing abroad are overwhelmingly American corporations. And the distributional consequences of rising Federal debt and rising corporate returns are not identical. America may be behaving like a successful giant hedge fund, but we’re not all investors in the hedge fund – some of us are just creditors to it. And the political consequences that flow from such an arrangement would tend to cement it in place.
Instead, it should reassure folks who are concerned about those distributional consequences that there’s plenty of return sloshing around to redress them. After all, if there’s so much return from Eurodisney that we can afford to borrow more than we need to pay for that investment, and spend the excess on a Lexus or a Mercedes, maybe we could think about better ways to deploy that “excess” borrowing for something other than current consumption. Something that would upgrade America’s physical and human capital, say.
What is Stanley Kubrick’s horror masterpiece “The Shining” about? That is to say, where does the horror come from?
Is it about writer’s block? (“All work and no play makes Jack a dull boy.”) Alcoholism? (We know Jack had a drinking problem before they get to the hotel, and he shows all the brittle signs of a dry drunk.) Autism? (Danny is certainly a special child, and that can take a toll.) The claustrophobia of the Oedipal triangle? (What finally sets Jack permanently on a demonic path is when Wendy believes he caused the bruises on Danny’s neck, and runs from him carrying her boy, screaming, “you son-of-a-bitch – how could you?”) The anomie of modern existence? (All Danny seems to do up there in that hotel is ride in circles and watch television.)
Or does the horror come from the hotel, from echoes of the things that happened there before that were “not all good” – like the grisly murder of twin girls by their father, Delbert Grady? (Grady is the one who ultimately seduces Jack to murder.) Or from horrors that date from before the hotel’s construction – such as the Indian burial ground that, we are told, lies beneath the hotel’s foundations? (Native American motifs abound in the hotel, from the stained-glass windows to the pictures on the walls to the cans of Calumet baking powder in the storage room.) Or from the infernal regions themselves? (The bartender, Lloyd, first appears when Jack offers to sell his soul for a drink; when Jack tries to pay, Lloyd tells him his money isn’t good, that the drink is courtesy of “management” and that Jack, who wants to know who’s buying the drinks, needn’t concern himself with that question – at this point.)
Or is it just cabin fever?
The answer would appear to be, “yes,” which is to say, “no.” Coleridge referred to Iago’s “motiveless malignity,” but this is deduced from the fact that Iago supplies us with too many motives for his actions – his injured pride at being passed over for promotion in favor of Cassio; his contempt for Othello’s own undeserved reputation; his conviction that Othello – and Cassio as well – have been carrying on with his wife, Emilia. Precisely because so many motives are readily supplied, we see that we are to distrust them all, and stop looking for a proper motive.
The same is true of the source of the horror in “The Shining.” If we look for it, we find it with alarming ease – indeed, we find a plethora of plausible sources. Which makes us doubt that any of them can be it. After all, if Jack’s alcoholism is to blame, then why tell us about the Indian burial ground? And, as with Iago in Othello, this should lead us to conclude that this movie isn’t playing by horror rules; that to search for a cause of the horror is to miss the point. It is the cause.
“The Shining” is a very cold film, rarely putting us “with” the victim or creating the pulse-quickening suspense of seeing the knife as it approaches the victim from behind (there is one such shot, and it’s a notable exception). Shots tend to be long and symmetric; geometry predominates over anything organic. The hedge maze is the emblem of the film.
But it’s not a puzzle to be solved, and its unsolvability is what engenders the intellectual horror. The film contains numerous perplexing continuity gaps. Some might be written off as errors – the chair that disappears between one shot and the next, for example, or the characters who enter one pantry and emerge from another. And if the hotel has an impossible geography, well, that’s the movies for you – a set is not a real place. But why should a typewriter change color? And some discontinuities are in dialogue. Why should Jack, at one moment, tell Lloyd he’s been dry for five dreadful months, and then tell him at another point that the injury to his son happened three years ago, when we know that he gave up drink after he hurt Danny one drunken night?
You don’t notice these discontinuities when you watch the film, but the cumulative effect is for the hotel to become a dreamlike environment. We don’t ask why that’s what it is – because we are experiencing being in that nightmare state, which is how the horror is brought home to us. In much the same way, Othello puts us in the psychological position of falling under Iago’s spell, as Othello does, in part by having Iago spin a literally impossible tale – if you timeline the play, there was literally no opportunity for the supposed affair between Cassio and Desdemona to happen, and yet nobody in the play seems to notice this.
All of which is to say that Kubrick, in his intellectual way, is offering us the experience of the mind breaking down, rather than telling a story about a mental breakdown. As such, if we are to keep our heads we have to surrender to the experience, for a time, but remember, as Dick Hallorann says to Danny about his nightmare visions, that it’s just pictures. It isn’t real.
But not everybody can keep their heads.
I saw “The Shining” again recently at a midnight showing at IFC, prompted by a new documentary about crackpot theories of the “real” meaning behind the movie, called “Room 237” (a reference to the room that appears to be the epicenter of horror in the hotel, for reasons that – again – are not explained). Some of these theories make category errors about elements that really are in the film. So yes, there are a bunch of Native American motifs and references, but no, “The Shining” is not an allegory of the genocide of the American Indians; the whole Indian burial ground under the hotel is a horror cliché, which is why it’s in the film, but “The Shining” would be a far more conventional horror movie if it were simply a story of Native American ghosts exacting revenge. And yes, the use of the hedge maze should recall the legend of the minotaur, but that’s what we call an allusion, not a secret, esoteric meaning.
But then there are the theories that make more than a category error. Like the fellow who thinks “The Shining” is a secret confession of Stanley Kubrick’s involvement in faking the moon landing. (Though, as he takes pains to make clear, he isn’t saying that the moon landing itself was faked; he’s saying that the footage of the moon landing was faked.)
Cranks, of course, will be cranks. But what would induce someone to make a movie like this? To step through the film frame by frame, play it backwards, put all these crank theories out there as serious efforts to grapple with a work of art? Even if it’s true that Kubrick liked to put allusions in the corners and backgrounds of the frame; even if it’s true that he liked to pepper his films with the visual equivalent of Joycean puns, that doesn’t mean there’s a “secret message” in the film. Why would there be?
The resort to esoteric, secret meanings behind reality is a psychological comfort when the capriciousness of that reality is too threatening. When we badly need reality to make sense – to be sending us a message – secret codes and vast conspiracy theories provide that sense.
So in a way, the existence of “Room 237” is a testament to the success of “The Shining” in capturing the unassimilable horror of reality. If it weren’t so terrifying, nobody would see the need to tame it by explaining what it’s really about.