Reihan Salam is a traitor to his alma mater:
Whenever critics have griped about the way Stuyvesant does business, my inclination has long been to say, essentially, “Screw you.” Going to Stuyvesant is one of the best things to have ever happened to me. I met two of my lifelong best friends there, and being surrounded by thousands of the city’s scrappiest strivers, most of whom were immigrants or the children of immigrants from New York’s outer boroughs, taught me more than I ever learned from any teacher. The same goes for most of the alums with whom I’ve kept in touch over the years.
Yet recently, as Mayor Bill de Blasio, state lawmakers in Albany, and the United Federation of Teachers have called for scrapping Stuyvesant’s current admissions formula, I’ve come to the reluctant conclusion that Stuyvesant should close its doors. The same goes for elite public high schools like it across the country.
As a Bronx Science alum with a bit of a chip on his shoulder about people (wrongly) thinking I went to the second-best high school in the city, I say: them’s fighting words. How did Salam come to this conclusion?
Well, Stuyvesant has a student population that doesn’t look very much like New York. New York’s public schools are 70% African-American or Latino, but those groups make up only 3% of Stuyvesant’s student body. But it’s not majority-white – far from it. Stuyvesant, in keeping with a longstanding tradition of catering to intellectually gifted immigrant strivers, is over 70% Asian.
This is a longstanding political problem – but that’s not the reason Salam has jumped ship. Instead, he argues that Stuy’s pedagogical model just isn’t very good:
Pedro Noguera, also a professor at the Steinhardt School . . . raised an obvious but largely neglected point, namely that Stuyvesant and the other specialized schools aren’t actually that great: “I would not tell a top African-American student to go to one of those schools.” Rather, Noguera explained, he’d encourage such a student to attend a school that offered a more supportive environment and a higher quality of education. He told Capital that the specialized high schools offer “a total sink-or-swim environment,” which he would not hold up as a model.
Noguera is exactly right. The politicians and the education experts who are so fixated on the racial balance at Stuyvesant neglect the fact that Stuyvesant is not built to support and nurture students who need care and attention to excel academically and socially. It is a school that allows ambitious students who know how to navigate their way around a maddening, complex bureaucracy to connect with other students with the same skill sets. Being in a fiercely competitive environment spurs a small number of sleep-deprived students to stretch themselves to the limit, to compete for admission to elite universities. The truth is that while Stuyvesant certainly does send many hyperaggressive students to the Stanfords and MITs and Princetons, students who find themselves in the bottom half of the class often languish without the support they’d get at other schools.
Giving some number of black and Latino students a boost in the admissions process won’t suddenly vault them into the top of the class or erase their need for a supportive environment. It is all too easy to imagine that the locus of segregation would simply shift. Stuyvesant High School as a whole might look more like New York City. But would the top quarter of the class look like it, or would it still be dominated by the kind of students who don’t need a supportive environment to max out their GPAs? Like Noguera, I strongly suspect that the kind of very good black and Latino students who might be admitted to Stuyvesant if grades and attendance were taken into account would be better off elsewhere—and I think the same is probably true of many Asian and white students as well, if not most.
I agree with this – but I’m not sure why that’s a reason to close Stuyvesant, unless Salam believes that a “total sink-or-swim environment” isn’t a good model for any gifted students.
In my experience, only a fraction of gifted students truly benefit from such an environment. But that fraction can benefit to a great extent. As I’ve written about in this space before, the formative experience of my youth was participating in competitive high school debate, which I did at a very high level. I learned more from debate than I did from any class, and I learned so much precisely because it was a ruthlessly competitive activity, pitting me against my peers around the country in contests with unambiguous winners and losers.
Is that the only beneficial pedagogical experience? Certainly not. Is it the best way for most students to learn? Again, I strongly suspect not. But it’s invaluable for certain kinds of kids – and not just for debate nerds. It’s a valuable experience for gifted athletes, musicians, math whizzes, etc. But, unavoidably, a ruthlessly competitive environment will produce losers as well as winners.
That’s an argument for a diversity of institutions, for there not being a single “crown jewel” in the system that everyone acknowledges is the “best” school to be from. And guess what? New York has a lot of other excellent schools that don’t select the way Stuyvesant does – as Salam acknowledges:
I have a theory about declining white representation at Stuyvesant. I seriously doubt that it’s because New York City is no longer home to white eighth-graders from affluent families who have expansive vocabularies and solid critical thinking skills and who are more than capable of scoring well on the entrance exam. I’ve met more than my share of such young people. My gut tells me that Stuyvesant has grown steadily less attractive to white families with the kind of social and cultural capital that helps people get ahead in America. These families are seeking out other options, and so have savvy families of all ethnic backgrounds. Over the past three decades, New York’s wealth boom has contributed to soaring endowments at the city’s elite independent schools, virtually all of which are keen to attract talented black and Latino students and which obviously cater to academically gifted white students as well.
More consequential still has been the rise of smaller public high schools, which offer well-defined curriculums that are a better fit for the large majority of students, gifted or otherwise, who need a bit of hand-holding. If you were a college-educated native-born parent living in New York who knows your way around the local high schools, is it obvious that you’d want your child to go to Stuyvesant instead of an excellent school with a mellow, hippie-ish vibe, or one that offers intensive instruction in Mandarin? Would it be obvious if it entailed a grueling commute, like the hour-and-a-half one-way commutes that were routine for friends of mine traveling from the far reaches of Staten Island, Queens, and the Bronx? It might have been obvious from the 1970s to the 1990s, when middle-class flight devastated the city’s local high schools, and when getting your nerdy kid into a specialized high school was the only way to ensure that she wouldn’t get beaten up every day at lunch. Fortunately, New York City has come a long way since then.
Right: there are more and more alternatives, both within the public school system and outside of it, and therefore Stuyvesant is less and less the “best” school in the system, and more and more the exemplar of a particular model. Why does that make it obsolete? If it’s obvious that, for many bright and talented students, the sink-or-swim environment of Stuyvesant would be less than constructive, isn’t it similarly obvious that “an excellent school with a mellow, hippie-ish vibe” might not be ideal for the kind of student who might thrive at Stuyvesant?
Of course, there’s also this:
There is another reason why in-the-know parents appear to be turning away from Stuyvesant. These days, it doesn’t seem to be doing a good job of keeping its students on the ethical straight-and-narrow. In 2012, dozens of Stuyvesant students were caught cheating on a statewide Regents exam, the results of which were utterly inconsequential for the students involved. These were bright kids with bright futures, and they thought nothing of texting the questions on the (totally meaningless) Regents exam to their fellow students. The reporting that followed the scandal, from Vivian Yee of the New York Times and others, made it clear that this particular cheating incident was part of a larger pattern. The students involved in the scandal had grown so accustomed to cheating that it was second nature. And why wouldn’t it be? When you get enough bright young people together and you tell them that academic achievement is everything but that you’re going to load them with enough homework to last several lifetimes, it’s inevitable that corners will be cut.
I am genuinely surprised that Salam’s response to the cheating scandal is to say: the problem is ruthless competition rather than lack of consequences for cheating. Where else in American public life would he apply that wisdom? Stuyvesant has been a ruthlessly competitive place for a long time. Has it also been a hive of corruption? And is he convinced that there is no corruption in the less-nerdy redoubts of the American meritocracy?
There is an enormous difference between saying “we don’t care about your social graces or your family background – all we care about is your academic achievement” and “we don’t care about whether you earned it or stole it – all we care about is your score.” Salam surely knows the difference. Does he see no value in an institution based on the former? Does he really think it will inevitably devolve into the latter, that there’s no way to build an institution that is both highly competitive and ethical?
The core argument against specialized schools is integrationist: that public education is supposed to build a citizenry bound by a common experience of equality of treatment. Note that this is very different from what Salam articulates as the goal of integration: “Traditionally, desegregation efforts have been designed to get students from deprived backgrounds to rub shoulders with students from more affluent and stable families, in the hopes of fostering meaningful interracial friendships and spreading the norms that contribute to success later in life.” This is both historically and practically incorrect. Desegregation was fundamentally about assuring equality of treatment. Schools that disproportionately drew wealthier students, brighter students, students from the dominant class or the dominant ethnic or racial background, were overwhelmingly likely to get more resources and attention from the system. Schools that had the opposite character, whether because of legislated segregation or simply as a consequence of patterns of residential segregation, would be relatively neglected. And on top of that, the experience of segregation would teach all parties that segregation was natural, normal, a matter of desert – which, in turn, undermines democratic norms.
This is not a trivial objection to selective public schools – it has real teeth. Unfortunately, it’s also true that large, socially-integrated institutions can quickly become internally segregated – kids are extremely good at seeking out their own “kind” and ostracizing outsiders. And it’s also true that large, socially-integrated institutions will, perforce, have an institutional character that is amorphous, one that is not optimally suited to bringing out the best in many of their students – including, quite possibly, the kinds of students who would thrive at Stuyvesant.
There’s an inevitable tension between promoting the democratic experience of equal treatment for all, and promoting the kind of diversity between institutions that makes both for institutional strength and the opportunity for different kinds of students to find a more optimal environment. That tension cannot finally be resolved; we just have to live with it, sometimes leaning more one way, sometimes more the other.
But as long as we have institutional diversity, why shouldn’t the nerds get a school of their own?
My review of Thomas Piketty’s book, which appears in the current issue of TAC, has been on-line for the past week. Please do check it out if you haven’t already. I’m particularly interested to hear from knowledgeable readers of the book whether I am right about the importance of the tail-off in demographic growth in the developed world to the predictions Piketty makes for future growth and inequality. Piketty alludes to the subject a number of times, but never really focuses on it.
There were a few points I made in the review that I couldn’t elaborate on adequately because of space (and because they would be too tangential to the main topic). This probably won’t be the last post I write to pick up on one of those threads – in this case, the question of extreme levels of executive pay.
One of the much-noted oddities of Piketty’s analysis is that his macro thesis is that our future will be one of “patrimonial capitalism” where inheritance matters more than it did for much of the 20th century, whereas his data demonstrate that, particularly in the U.S., the growth in inequality over the past three decades has been driven substantially by growth in wage income at the top. This is due in part to the huge pay packages earned by top performers in finance, but only in part; there just aren’t enough people in finance to dominate the trend. Rather, Piketty asserts, most of the extreme pay packages are in the corporate sector, and accrue to people he dubs “super-managers.” This is a problem for his thesis, because while class origin may be a very important leg up in becoming a “super-manager,” these positions are not actually inherited.
How much this micro-disparity matters to the macro thesis depends on your theory of why pay packages at the top have risen so dramatically. Piketty argues that it reflects self-dealing on the part of the managers, who are able to cow insufficiently independent boards into over-paying them – and he argues in favor of that proposition through a variety of demonstrations that pay appears not to be well-linked to productivity. But this is not an uncontested position. Scott Sumner, in a post that largely deals with another interesting topic to which I may return – ethnicity and productivity – suspects higher productivity really is the driver, and links to a paper that argues that because pay has increased dramatically at the top of a variety of different professions – finance, law, executives of public corporations, executives of private corporations, and athletics – these increases reflect a kind of structural change in the economy toward “winner-take-all” dynamics, possibly driven by technology.
But what do we mean by “productivity” in this context?
A hedge fund manager earns huge fees for managing capital. Assume, for the sake of argument, that the business is ruthlessly meritocratic: returns are strictly a function of how much money the manager makes for investors in a given year. Now, assume that hedge fund managers as a category make a lot more money than other comparable finance professionals. What you’d expect, in that case, is a migration of talent from the rest of finance towards hedge fund management – and, as a consequence, some erosion of hedge fund returns and/or hedge fund fees as competition both for investment opportunities and for investor dollars increased.
But another thing you’d expect to happen is for other finance professionals to see their pay increase – because banks and brokerages would need to pay more to prevent their employees from defecting to hedge funds. You’d also expect to see banks and brokerages trying to get into the hedge fund game themselves, chasing those higher returns – which, in turn, would require competing head-to-head with hedge funds for talent. And that would, once again, put upward pressure on finance packages generally.
Now, if we assume that finance is a normal industry, then all of the above should be unproblematic. Talent should migrate to higher-margin activity, and the rest of the industry should adjust. If finance as a whole is more lucrative than other industries, then, similarly, there should be an adjustment as talent pours into finance, and finance would represent a larger fraction of employment and of the economy.
But finance isn’t like other industries. Finance is just a mechanism for allocating resources efficiently. It doesn’t “produce” any goods or services that anybody wants for their own sake. It’s more comparable to law or accounting than to industries like health care, computer software, automobile manufacturing, retailing or education. If finance is growing as a percentage of the economy, that’s prima facie a problem, not a neutral fact.
One way it might be a problem is that pay scales in finance inevitably affect pay scales in other industries, for the same reason that pay scales for one activity within finance inevitably affect other parts of finance. If a trader can make much more money at a hedge fund than at a traditional bank or broker, then she’ll leave unless the bank or brokerage finds a way to raise her pay so she will stay. If traders make much more money than traditional bankers, people will start to leave traditional banking unless pay scales increase to encourage them to stay. So, similarly, if finance is an obviously more-lucrative route than other aspects of business, then pay scales for non-finance executives will have to rise to keep talent from flowing into finance.
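The spillover mechanism can be made concrete with a toy model – entirely my own construction for illustration, with invented numbers, not anything drawn from Piketty or from my review. Suppose a pool of mobile executives can work in either finance or the corporate sector: per-head finance pay falls as talent crowds in (fees and opportunities get competed away), while per-head corporate pay rises as executives leave (the remaining talent is scarcer), and migration continues until neither side offers a premium.

```python
# Toy model of pay spillover from finance to the corporate sector.
# All functional forms and numbers are invented for illustration only.

def finance_pay(n):
    """Per-head finance pay ($M) with n executives in finance.
    Starts at 8.0 and declines as entry competes returns away."""
    return 8.0 / (1 + 0.15 * n)

def corporate_pay(n):
    """Per-head corporate executive pay ($M) after n executives
    have defected to finance. Starts at 1.0 and rises with scarcity."""
    return 1.0 + 0.02 * n

n = 0  # executives who have moved into finance
while finance_pay(n + 1) > corporate_pay(n):
    n += 1  # one more executive defects while finance still pays a premium

print(n, round(finance_pay(n), 2), round(corporate_pay(n), 2))
```

In this run, 27 executives migrate: finance pay is eroded from 8.0 down to about 1.58, while corporate pay is dragged up from 1.0 to about 1.54 – the two roughly converge, which is the point of the argument above: a lucrative financial sector depresses its own pay relative to the counterfactual even as it pulls up executive pay everywhere else, for reasons unrelated to corporate-sector productivity.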
This is what I meant when I said the following in my review: “I suspect this income escalator is driven secondarily by self-dealing, but primarily by competition for talent with a fantastically remunerative financial sector.” [Note: there is a typo in the review where "with" was replaced with "within," which, obviously, changes the meaning.] If the financial sector becomes incredibly lucrative, it will draw more and more talent to it, which will depress pay scales in finance (relative to what they would otherwise have been) but which will also raise pay scales for executives in other areas who have (or had, earlier in their careers) the requisite skills to choose to move into finance. Financialization may, therefore, be one important driver of increasing inequality generally between executives and other salaried employees.
Is financialization another species of rent-seeking, though? I suspect it is – but this analysis would still hold even if it isn’t. Finance could grow as a percentage of national income if a large percentage of financial services are, effectively, being exported – if we’re capturing a larger and larger percentage of the world’s demand for financial services. If that were true, then the rise of finance would not be evidence of some kind of corruption in the heart of the American economy. But it would still drive inequality in other sectors of the economy in ways unrelated to productivity, as described above.
I should stress, I’m not sure I’m right about this by any means – I’m really just speculating. But finance loomed so large in the change in the American economy since 1980, and the internal dynamics of finance are sufficiently different from many other industries, that I think it’s always worth raising questions about whether financialization is implicated, even if, on its face, the phenomenon in question looks much broader-based.
Because I haven’t had the time to write anything of my own, and because he’s already done a better job than I would have done, allow me to associate myself with Damon Linker’s recent column on the current Israel-Gaza war, which he dubs The Stupid War:
Note that I didn’t say The Immoral War. With Hamas and smaller jihadi groups hurling rockets at Israeli cities from the Gaza Strip, Israel is clearly justified in responding. . . And though the lopsided body count — over 150 Palestinian dead compared to zero Israeli casualties — is striking, it’s not Israel’s fault that its Iron Dome defensive shield has been so effective at protecting Israeli citizens from the more than 800 missiles that have been launched at the country in the past two weeks. If militants in Gaza had better weaponry or Israel was less adept at protecting itself, many would be dead on the Israeli side.
So yes, Israel is morally justified in defending itself against incoming missiles. But that tells us nothing at all about whether the war is wise. And it most certainly is not.
Why not? Because the only reason the war is happening at all is because of Netanyahu’s political miscalculation, and because there is no realistic and concrete aim to be achieved:
Instead of responding like a statesman to the kidnapping and murder of the three Israeli teenagers, by announcing the facts of the case right away and seeking to dissipate the predictable rage, [Netanyahu] went out of his way to encourage it, hoping he could marshal it for political purposes.
He was wrong. And that appalling error of judgment is what has brought us The Stupid War, which will accomplish absolutely nothing beyond creating yet more suffering, mostly on the Palestinian side. What can Israel possibly hope to gain from its ferocious bombing campaign? It certainly doesn’t seem to be stopping the volley of Hamas rocket attacks into Israel. Does Netanyahu expect Palestinians to be cowed into submission? You can’t send an effective realpolitik threat when your opponent considers the status quo worse than any bombing campaign Israel dares engage in.
And what if Israel went farther and all but leveled the Gaza Strip and killed thousands of Palestinians? They might be cowed into submission then, but at the cost of inspiring worldwide condemnation the likes of which Israel has never seen. Even Netanyahu surely knows better than to turn Israel into one of the world’s foremost pariah states in this way.
So what can Israel possibly hope to achieve?
Maybe a brief suspension of Hamas rocket attacks. Maybe. But soon enough, the region will find itself in a new, even more volatile status quo, weighed down even more heavily by anger and injustice, grievance and fear. Israel’s air strikes can lead nowhere but more provocation, more retaliation, and more tragedy for all sides.
And that’s why this war is so stupid.
Indeed, if the Swedish Academy gave a Nobel Prize for political idiocy, Benjamin Netanyahu’s performance over the past month would make him a shoo-in.
The only thing I would add is that Operation Protective Edge shouldn’t be called The Stupid War. Operation Cast Lead and Operation Pillar of Cloud were similarly campaigns that Israel backed itself into without a clear plan or objective, and which reached predictably equivocal and unsatisfying conclusions. The Second Lebanon War might be characterized similarly. Netanyahu has some real competition for his Nobel. And no doubt will have more competition in the future.
The problem is that what the social secessionists are asking for does not seem all that reasonable, especially to young Americans. When Christian businesses boycott gay weddings and pride celebrations, and when they lobby and sue for the right to do so, they may think they are sending the message “Just leave us alone.” But the message that mainstream Americans, especially young Americans, receive is very different. They hear: “What we, the faithful, really want is to discriminate. Against gays. Maybe against you or people you hold dear. Heck, against your dog.”
I wonder whether religious advocates of these opt-outs have thought through the implications. Associating Christianity with a desire—no, a determination—to discriminate puts the faithful in open conflict with the value that young Americans hold most sacred. They might as well write off the next two or three or 10 generations, among whom nondiscrimination is the 11th commandment.
To which Rod Dreher responds:
If that’s how it has to be, that’s how it has to be. Fidelity to what one believes to be religious and moral truth is more important than popularity. We live in a post-Christian society. It’s going to get much worse for non-conforming Christians before it gets better. How Obama responds to this letter will be a critical bellwether.
But that “if” is really just changing the subject. Does Dreher believe that’s how it has to be? That is to say, does he believe that “fidelity to . . . religious and moral truth” requires Christians to, say, refuse to bake cakes for gay weddings, or any of the other “secessions” that Rauch is talking about?
I think the answer is, mostly, “no.” That is to say, I think Dreher’s belief is that traditional teachings about homosexuality are non-negotiable, but that this doesn’t imply that Christians are obliged in any way to “secede” from a society that rejects those teachings. Christians may be obliged to believe that physical love outside of lifelong marriage between a man and a woman is sinful; they may even be obliged to believe that the determination to pursue such a love and to deny its sinfulness is even more sinful. Does that mean Christians are obliged not to take pictures of such (to them) sinful unions? That they are obliged not to hire gay people to teach their children?
I’m not a Christian, but if I understand correctly, the traditional view would be that “writing off” generations of people would literally be consigning them to hellfire. I’m pretty sure that, for a traditional Christian, that’s an abhorrent choice to make. So how can Dreher blithely accept that as merely a regrettable necessity if the culture at large becomes “post-Christian”?
Rauch, it seems to me, is much more correct in the way he formulates the question. The question traditional Christians are faced with is a pragmatic one, a question about which course will lead to better results, not a question of fundamental principle. The question is whether they should bend over backwards as far as they can, without violating their essential teachings, to welcome gay Christians and non-Christians, and meet them wherever they are, or whether they should “build a fence around the law” to make sure that they themselves are not contaminated by a too-close relationship with a certain category of (from a traditional Christian perspective) obstinate sinners.
I don’t mean to suggest that the latter approach is obviously false. I’m Jewish, and though I am very critical of this aspect of my religion, “build a fence around the law” is a venerable Jewish concept. A traditionally Orthodox Jew not only would not attend a gay wedding – he wouldn’t attend a Christian wedding, because that would (from his perspective) be participating in an idolatrous ceremony. But traditional Orthodox Jews also do not live under a religious obligation to spread their teachings to the ends of the earth.
My point is that the question itself is one of consequences. If the former approach is correct, then the folks who are looking for broad exemptions to allow discrimination against sexual minorities are actively harming their own cause. That’s not something to champion.
Dreher could be right that how the Obama Administration deals with these kinds of requests will be a bellwether for how the Democratic Party, or even the government more generally, is perceived by traditional Christians. But it’s equally true that these choices to opt for secession are bellwethers for how traditional Christians are perceived by liberals, and even by the general society. That’s not a point to be brushed aside by saying, “that’s how it has to be.”
I resist, in general, the tired conservative complaint that the ’60s ruined everything. Except in one area: the great American musical. From Hello, Dolly! to Hair, it’s a parade of false, manipulative, overwrought sentiment.
Sometimes, a production will be so good that I forget my objections, at least for the duration of the show – such was the case with last year’s Fiddler on the Roof at Stratford. And sometimes, no matter how good the production, it still won’t be enough. Such was the case with this year’s Man of La Mancha.
I should stress that Don Quixote is one of my favorite books of all time, one I’m overdue to re-read and which has served as inspiration for a number of (mostly unfinished) projects of my own. But the musical adaptation could not be further from it in spirit. Gone is the picaresque, and with it all social commentary. Gone is the fruitful ambivalence we feel towards the Don and his madness. Gone is Sancho’s peasant cunning. Gone, most indefensibly, is any motion, any activity at all – the musical begins with Miguel de Cervantes in prison, and fundamentally never leaves those claustrophobic confines.
Instead, what Leigh, Darion and Wasserman give us is a hymn to self-aggrandizing fantasy. And for all the Don’s pretensions to chivalry, self-aggrandizing is the right word. How else can one account for the appalling treatment of Aldonza – by the writers, not the muleteers?
This woman is idolized by Don Quixote as his chaste and pure Dulcinea. He doesn’t see who she could be, her potential; he sees a pure and total fantasy, with no relationship to reality – indeed, he knows nothing at all about her reality, and he does not want to know. She’s disturbed by his devotion, then angered, asking him for the simple recognition of seeing her as she is, loving her for that, if he can, not for a fantasy. The Don won’t budge. But he does come to her rescue when she is treated brutally by that night’s john, and as a consequence of his intervention she is gang-raped and beaten. And even when she throws this abuse at his feet, the Don won’t change: he sees what he wants to see.
But who shows up at the Don’s bedside, to rekindle the fires of imagination when he has made his Christian peace with the necessity of dying? Aldonza, who now proclaims that he enabled her to see something more true than her reality, and that all the abuse she suffered doesn’t matter so long as she can dream that impossible dream. That we’re told, by the musical itself, that this ending has been invented by Cervantes to satisfy the demands of his audience of fellow-prisoners, only makes the insult to the actual audience – who, of course, are similarly gratified – commendably plain and direct.
Man of La Mancha must surely be Paul Wolfowitz’s favorite musical. They should perform it in DC some time with a cast of Iraqi refugees, and see how people react.
As for this production: the principal artistic choice of director Robert McQueen and designer Douglas Paraschuk is to keep the reality of the prison where Cervantes is staging his play continually present, à la Marat/Sade. If the underlying argument of the musical were more persuasive, this could be a powerful choice; given the extreme weakness of that argument, the main result I discerned was that I was not carried away by the Don’s fantasies. I never experienced the delight that Sancho or the Innkeeper clearly feel in being charmed by the mad knight, and never saw the world as Quixote sees it. Tom Rooney’s performance similarly leans toward the Cervantes end of his dual role – I never forgot that this was a man playing Quixote rather than Quixote himself. On the plus side, his was a very real-feeling Cervantes – I felt his need to play Quixote, which was quite touching, more than any authentic madness.
Steve Ross was an absolute delight as Sancho, the real highlight of the show from my perspective – he seemed completely genuine, never mugging, and his comic timing was impeccable. Shane Carty was also a pleasure to watch and listen to as the Innkeeper and the judge of Cervantes’s trial by his fellow prisoners, and I could listen to Sean Alexander Hauk (the Padre) sing just about anything. I didn’t find Robin Hutton’s Aldonza terribly convincing, but I really do think it’s a thankless role – I almost think I’d find a performance more alarming if it did convince me.
I understand that the big musicals are a key financial tent pole for the Festival, and that therefore they have every reason to be especially conservative about programming in this area. And there’s no question, the popularity of the big ’60s meatballs appears to be undiminishable. But in the past twenty years, Stratford has programmed Camelot twice, Fiddler on the Roof twice, and now Man of La Mancha for the second time. By contrast, they haven’t done Carousel since 1991. They haven’t done Candide since 1978. They’ve never programmed A Little Night Music, nor have they ever done The Most Happy Fella.
Of course, in the past few years Stratford has also programmed musicals like Jesus Christ Superstar and Jacques Brel Is Alive and Well and Living in Paris - I’m not saying they are in a rut. I could just use a break from the ’60s altogether.
Man of La Mancha runs at Stratford’s Avon Theatre through October 11.
Because I just don’t have much more to say on the whole religious freedom question and I’m tired of saying what I’ve already said.
On the subject of “are corporations people,” meanwhile, I feel like Jacob T. Levy makes a pretty interesting argument why, even if you believe corporations have a robust set of constitutional rights, that not only doesn’t imply supporting the result in Hobby Lobby but may well cut the other way:
The general doctrine of corporate personhood is right: corporations can enter into contracts, own property, and be held liable for wrongdoing or debts *as separate entities* from the various natural biological persons involved– and this is a necessary and valuable organizational innovation.
The particular doctrine of corporate persons as holders of constitutional rights is right: the corporation qua property owner has, for example, 4th Amendment rights against its property being unreasonably warrantlessly searched, and 5th Amendment rights against it being taken for public use without compensation, or against being deprived of it without due process of law. . . .
Hobby Lobby seems to me to stand for a very different proposition: “[P]rotecting the free-exercise rights of corporations like Hobby Lobby, Conestoga, and Mardel protects the religious liberty of the humans who own and control those companies.” . . .
The judgment today maintains that a closely-held corporation like Hobby Lobby is so close to the natural persons behind it that it’s not really a distinct corporate person at all; it’s just a costume that the Green family puts on and takes off as it suits them.
The decision has to pierce that veil because corporations qua corporations have no particular reason to hold religious views of any kind:
Notice that the right of a corporation to freedom of the press or to be secure in its property against searches or expropriation makes perfectly good sense in terms of the corporate person’s own interests, regardless of who its owners happen to be. Corporate religious liberty isn’t like that. The reason we have the emphasis here on “closely-held” corporations is because the corporate veil is being pierced in order to look directly at the natural persons behind it. . . .
[H]obby Lobby, a for-profit corporation like IBM, can’t be described as itself having a religious belief. Making sense of that idea requires making the corporate person disappear from the description and talking about the Green family, treating the “closely held” corporation as if it were a partnership or sole proprietorship that doesn’t have a corporate-style separateness from the natural persons. Try as I might, I can’t persuade myself that that’s right. Corporations are persons, or corporations are made out of people– the two thoughts lead to very different conclusions, and I think protecting the former requires rejecting this kind of easy recourse to the latter.
His view is basically congruent with Patrick Deneen’s view of the place of corporations in our collective life, but coming from the opposite end.
I’m not sure I agree with that view, because it presumes a radical dichotomy between for-profit entities, which can only have financial interests, and actual people, who can have a variety of interests and values. (Not-for-profit entities can, presumably, also have interests and values other than profit, by definition.) You can believe in the idea of corporate personhood without believing that private corporations must be purely profit-maximizing entities; they can instead have some characteristics of a community, albeit a hierarchical one rather than a democratically-organized one. But it’s still an interesting counter to the line that Hobby Lobby was yet another decision in favor of corporate power.
(Ok: I’ll talk about the religious freedom stuff briefly. I think it’s appropriate for the government to guarantee access to contraceptive services – I think it’s a positive good. I’d like that to be achieved in a way that doesn’t make religious believers feel they are directly providing a service they consider profoundly abhorrent, because I believe in a robust conception of freedom of religion. I see a clear distinction between that and the cakes-for-gay-weddings business, because there is nothing abhorrent about providing a cake – the abhorred (by the baker) act is the wedding, and the baker is not providing that; allowing her to refuse service is pretty plainly discrimination against people whose behavior she disapproves of, and the only question is whether we think it’s invidious and whether the class discriminated against deserves any protections. Whether we should be more or less vigilant about policing discrimination in general is another matter. I think my hypothetical Scientology school network is a tougher nut because Scientologists believe mental health services do active harm, and I really do tend to think that the reason the Court wouldn’t recognize a right to deny mental health services in such a case is that it simply wouldn’t treat the moral logic of the Church of Scientology with the dignity that it accords the views of the Catholic Church. Which is pretty much what the Court said in Hobby Lobby when it disclaimed any possible application of this decision to minority religions that object to transfusions, etc. And yes, that troubles me.)
After taking in last year’s excellent Blithe Spirit at the Stratford Festival, I argued that Coward’s play anticipated Seinfeld in its characters’ utter self-involvement and the play’s fundamental misanthropy. Well, the director of this year’s Coward, Alisa Palmer, seems to have been of the same mind about her play: “Hay Fever reminds me of Seinfeld, a show whose creators, like Coward, pre-empted their own critics by declaring, cheekily, that their show was ‘about nothing.’”
I should be delighted that we agree, but, you know, I’ve got my critic hat on. And the thing is, generally nothing will come of nothing. Palmer’s production comes perilously close to proving the truth of Lear’s statement, though the play is redeemed by certain key performances.
Hay Fever‘s plot (which Coward himself admitted was minimal to the point of near-nonexistence) revolves around the by-now well-worn scenario of throwing a bunch of squares and a bunch of cool, artistic types into close proximity. Judith Bliss (Lucy Peacock), retired stage actress, her children, Sorel (Ruby Joy) and Simon (Tyrone Savage), and her novelist husband, David (Kevin Bundy), have each, unbeknownst to the others, invited a romantic prospect down for the weekend. They are each appalled by having their respective plans upset by the others, and respond by treating each other’s guests with outrageous rudeness.
Over the course of the first evening, the opening pairings, which are obviously mis-matched in terms of both age and temperament, get more-plausibly recombined. These recombinations prompt Judith to bouts of extreme theatrical excess in which her family, knowing the routine, join her, to their guests’ distinct alarm. Not so much Blissed-out as simply exhausted, the invitees decamp collectively first thing the next morning, leaving their hosts to comment on their rudeness, and resume their normal, quarrelsome family life.
The play could be read as a satire on the artistic sensibility – or, alternatively, as a satire on the lack of sensibility of the squares – or both, something like the movie, “Impromptu.” But Coward isn’t engaged in social satire, because satire requires an affirmative set of values against which a society may be judged. Hay Fever has no such values – it’s blissfully relativistic. Instead of values, it has manners. But nobody agrees what those manners ought to be – and it is here where the play approaches the Seinfeldian.
It’s significant that what characters on all sides of Hay Fever are primarily concerned with is the rudeness of the other characters, a rudeness that can be manifested by too little attention or too much or simply the very wrong kind. Seinfeld‘s plots frequently revolve around characters asserting or violating norms of behavior, and engaging in wild theatrics around the necessity of upholding said norms, notwithstanding that said norms are invariably completely spurious. The Bliss family inhabit a somewhat similar world, inasmuch as they, by virtue of their position as artists, have a kind of professional responsibility to be able to play all sides, emotionally, in a scene, and hence can’t take any of them seriously. Each member of the family plays this out a bit differently; the novelist likes to see things as they really are, and then pretend that they are otherwise, while the actress dives right in without bothering to discern whether there is a reality of any kind at question. But it amounts to much the same thing either way: they all believe that sincerity is the key thing in art; once you can fake that, you can fake anything.
That’s what I see in the play, at any rate. To realize that vision requires giving each visitor to the Bliss household a distinct integrity, a vision of how one ought to behave, that will be upended by the Blisses. The only character whom I saw manifesting such a vision was Sanjay Talwar’s professional diplomatist, Richard, and it’s not an accident, I think, that Talwar’s touching performance gets the most heartfelt laughs of the night. (It’s also not an accident, I think, that the diplomatist is the only character definitively to transgress against propriety, in making a pass at the married Judith Bliss.) Gareth Potter’s bluff boxer, Sandy, is perfectly plausible, as is Cynthia Dale’s predatory minx, Myra. But they don’t come into sufficiently sharp focus; we don’t see clearly how their respective senses of the way people ought to behave are disturbed by the ways in which the Blisses transgress. (Dale, in particular, seems more put out that she isn’t getting over than furious that David has already read – indeed, written, many times – the script she’s reading from as she tries to seduce him.)
Something’s off on the Bliss side of the fence as well, though Peacock’s antics never failed to bring a smile to my face, and Bundy got at the heart of the matter in his scene with Dale. I have a sneaking suspicion that Palmer harbors sentimental feelings for the family, and that she’s directed them to play big even when they aren’t formally “playing” a role so that we’ll get that these are lovable eccentrics, and fall for them as she has. But if she has sentimentalized them, it must be because they are artists, like she is. And the thing to remember about the Blisses is that they aren’t really great artists – they aren’t even really that good. They’re successful pros; that’s all. The play Judith loved doing so much sounds ghastly – her children tell her it’s ghastly, and even she knows it’s ghastly. But it was a great role because it gave her so much scenery to chew. David has no pretensions to greatness; he’s a hack novelist and he knows he’s a hack novelist. There is no Chopin here, no Delacroix.
Coward is honest enough to see the Blisses’ eccentric manners as tribal markers rather than signs of any higher calling. But if that’s all they are, then this can’t be a story about hapless squares running afoul of charmingly outrageous artists. Which is a good thing, actually, because that particular story just isn’t terribly funny, and no amount of “amping up” of the acting nor layering-on of slapstick will really make it so.
Hay Fever plays through October 11th at Stratford’s Avon Theatre.
Eugene Ionesco’s play, The Killer, is rarely produced, and I think I understand the reason. It’s over-long (particularly the third act, which feels interminable), highly abstract (the principal character is given to speechifying in airy generalities about his experience), and yet also distinctly dated (the intimations of incipient fascism in Ma Piper’s political campaign feel rooted in France in the 1950s, and have nothing like the universal resonance of the transformations in Rhinoceros).
But the core of the play is a meditation on original sin understood as perversity, a notion ideally suited to Ionesco’s theater of the absurd. And the Theater for a New Audience’s current production is an excellent opportunity to experience this notion played out on stage.
The Killer begins with Ionesco’s everyman character, Berenger (Michael Shannon), arriving in a new “Radiant City,” a thoroughly planned urban development in a part of his city that he’s never been to. The name and era suggest a Le Corbusier-esque modernist utopia, but the town as described sounds more like something out of “The Truman Show” – roses, manicured lawns, beautiful brickwork, and a crystal dome that keeps out all foul weather (the roses are watered by drip irrigation). (Wisely, designer Suttirat Larlarb shows us none of this – the Radiant City exists entirely in our mind’s eye.) Berenger, escorted by the Architect (Robert Stanton), the civil servant responsible for creating this utopia, comes to life in the space, connecting it with an experience in his childhood of a kind of euphoria in which he perceived the world as radiant, an experience that, although he never felt anything like it again, kept him going through the pointless tedium of mundane life. But here, perhaps, here he could have such an experience on a daily basis.
Berenger is so carried away, he falls in love, or something resembling it, with the architect’s pretty blonde assistant, Dennie (Stephanie Bunch), who arrives on the scene announcing she’s going to quit – against the Architect’s most strenuous advice. By the end of the scene, during which Dennie not only doesn’t return Berenger’s affections but barely takes note of his existence, Berenger is convinced they are engaged. He determines to buy a house in this perfect city in which they might live together.
And then, immediately after agreeing to purchase a house in the community, the whole vision comes crashing down. It turns out there’s a serial killer on the loose in this ideal environment, whose method is to lure people (always in threes: a man, a woman and a child) to a lagoon, and shove them in. People have grown so frightened that they generally don’t leave their houses, and are moving out of the neighborhood en masse, but still the killer never lacks for victims. His most recent victim: Dennie.
Berenger is appalled, heartbroken, distraught that his vision of happiness has been so thoroughly violated. He demands that something be done – but the Architect breezily avers that all possible steps have already been taken, to no avail. After a depressing dinner with the Architect at a pub by the bus station, he trudges home, back to the dreary rainy city that he left – but determined to bring the killer to justice, somehow. End of Act I.
It’s a marvelous beginning to the play, anchored by two splendid performances. Stanton’s Architect is a picture of punctilious perfectionism, quietly proud of his creation but never smug, smiling blandly at Berenger even as we can tell that he desperately wants to get back to the office. (The Architect’s dialogue with Berenger is repeatedly interrupted by calls from the office, which the Architect answers by pulling a ’50s-era corded phone out of his pocket. Whether this is Ionesco’s prescient original direction, or director Darko Tresnjak’s brainstorm, it brilliantly revitalizes what has become a cliche in our cellular age.) And I applaud Tresnjak for the decision to cast Shannon against type as Berenger. Looking at Shannon’s craggy, pitted face, we feel how he has been oppressed by ordinary life, and there’s something so incongruous about seeing Shannon skip about the stage in glee and prostrate himself before his beloved Dennie – he makes Berenger come alive as a specific character, and therefore makes him more universal than he would be if played by a more obvious naif.
You can see, I’m sure, why I describe this play as a meditation on original sin, the Radiant City alluding obviously to the Garden of Eden, the killer as the serpent in the grass (who seduces his victims rather than merely surprising them). Ionesco would seem to be satirizing our efforts to get back to that garden, mocking, along with our Promethean presumption, our specifically male presumption to see every woman as a potential Eve. (Berenger’s overtures to Dennie come off as especially creepy in the age of Elliot Rodger, and connect him, surreptitiously, to the killer, who, Berenger suspects in Act III, has his own “issues” with having been rejected by women.)
But that kind of satire is secondary to Ionesco’s primary aim. From the moment we learn about the killer’s existence, Ionesco unsettles us by making him seem, well, silly. How does the killer lure his victims to their deaths? He tries begging, and selling them trifles, but this never works. What works is offering to show them a picture of “the Colonel.” This, the Architect informs us, nobody is able to resist. Why? We have no idea, and the play has no interest in telling us. The point is its absurdity.
We finally get to see this Colonel’s picture in Act II, when Berenger comes home to his dingy flat (under the management of the Concierge, played by the always delightful Kristine Nielsen, who doubles as the proto-fascist politician Ma Piper in Act III), and finds his perennially unwell friend, Edward (a deliciously Renfieldian Paul Sparks), waiting for him. Edward, to Berenger’s surprise, seems to know all about the killer. This knowledge becomes less surprising when we learn that Edward’s briefcase, which he is loath to let leave his hand, contains all the materials connected to the killer – a map of the Radiant City marked where the killer has struck, a diary detailing his attacks, the trinkets the killer tries to sell, and dozens of photos of that Colonel. Edward expresses mystification as to how all of this material came to be in his possession – and Berenger, surprisingly, never suspects him. Instead he enlists his aid to bring these materials to the police so they can finally catch the killer.
Act III, the weakest act by far, consists of Berenger’s continually-frustrated attempts to reach the police, and to recover the briefcase (which Edward sneakily avoids taking with him when they leave Berenger’s flat), and then of his solitary confrontation with the killer. Berenger attempts, at length, to convince the killer that his career of crime is absurd, using a variety of psychological and philosophical frameworks, even venturing into Christian theology, to no avail. The killer does nothing but laugh in response. This, along with the satire of Ma Piper, is the most thuddingly obvious part of the play, the point being that the primal urge to kill not only cannot be reasoned out of us, but cannot even be comprehended.
But to me, the heart of the matter is that photo of the Colonel, which points to a different absurdity, a different perversity. It’s not only that our urge to kill is irrational and perverse. Our victimhood, our susceptibility to temptation, is even more ludicrous. What does us in is not the promise of worldly wealth and fame, not sexual or delirious experience. What none of us can resist is a completely unexceptional photo of a mustachioed officer.
The Killer plays at the Polonsky Shakespeare Center through June 29th.
I’ve been enjoying the back-and-forth between Ross Douthat and Matt Yglesias over the true strength of the Democratic coalition. I thought Yglesias was getting the better of the argument in the first round, but in his most recent contribution I think Douthat lands a blow on himself, whether he realizes it or not. Here’s the last paragraph of Douthat’s post from this morning:
I would add, as a coda, I’m not at all persuaded by Yglesias’s initial premise either — the idea that Clinton’s polling advantages are the result of ideological unity, rather than a case of her brand covering over disagreements that would matter more if she weren’t running. Give me an Andrew Cuomo-versus-Elizabeth Warren tilt for the nomination, for instance, and I’d wager that all manner of intra-Democratic divisions would suddenly matter much more than they seem to today. For that matter, give me a candidate exactly like Hillary who doesn’t have her mystique and history, and it’s easy to imagine the issues she’d get challenged on. (Let’s just say there’s a reason Robert Kagan likes her.) I agree with Yglesias that the Democrats are relatively unified, especially by their party’s historical standards … but it’s her aura that’s sealing that unity, rather than the other way around.
I think Douthat is entirely right about the point I bolded. Indeed, I’d go further: without her mystique, she’d be a long-shot for the nomination. Consider her resume. First Lady is a ceremonial post. Her one substantial task in that post, organizing health care reform, was a political failure of the first order. Next, she was Senator from New York, where she compiled a respectable record of constituent service, but did not distinguish herself either as a legislator or in intra-party policy debates. Next, she was a failed Presidential candidate. And finally, she was Secretary of State, where, again, you really have to squint to see a substantial record of accomplishment. And the State Department hasn’t been a stepping-stone to the White House for quite some time. She’s an organized and disciplined politician, but she’s rarely noted for her political charm or acumen.
The one thing that distinguishes her from your typical Democrat is that she is substantially more hawkish, having taken the hawkish side in essentially every political debate from Bosnia and Kosovo through Afghanistan and Iraq and into the Obama-era debates over Libya, Syria and Ukraine. If she weren’t Hillary Clinton, that fact would not only make her a long shot; it would probably be disqualifying.
But I think that cuts against his ultimate point, that it is only Hillary Clinton’s mystique that is holding the coalition together. On the contrary, it’s the high degree of policy consensus, combined with a steadily-strengthening conviction of the perfidy of the opposing party, that holds the Democratic coalition together. The mystique is what holds the Clinton campaign together, not the party.
If Hillary Clinton had died in 2013 after sustaining that concussion, there would be a real tussle within the Democratic Party over who is best positioned to lead the party to another Presidential victory, and potentially a real debate over how far to the left to lean on economic issues. But I have a hard time picturing the kind of debate over fundamental direction that characterized the 1984, 1988 or 1992 Democratic races, or the 1976 or 1980 GOP contests. In Hillary Clinton’s absence, the 1988 Republicans would seem to me to be a likely model for what we’d expect from Democrats leading up to 2016.
Instead, it’s overwhelmingly likely that Clinton will be the nominee. And she probably has as good a shot at winning the Presidency as any other Democrat, or better. But what then? The biggest risk to the future of the Democratic coalition is events, and the decisions future Democratic Presidents make to respond to them. Given her hawkish inclinations, it may be that, far from being what holds the Democratic coalition together, Hillary Clinton’s decisions in office, if she becomes President, could be a significant risk factor for the future of that coalition. And an entirely avoidable one, were it not for that mystique.
My point was not that we are obliged to “fix” Iraq, or that the Iraqis have an infinite claim against us. You can’t be obliged to do the impossible, and obviously claims can’t be infinite. But claims can be very large without being infinite, and we shouldn’t pretend they don’t exist.
Nor was my point that there is “no difference” between action and inaction – those are Sullivan’s words, not mine. Obviously, there is an enormous difference between killing somebody and not preventing their death – and a really very big difference between the two if you don’t have any plausible means of prevention ready to hand. What I said is that inaction, for a hegemonic power assumed to be engaged essentially everywhere, is a kind of action. A policy of indifference is also a policy. That’s the difference between the United States and Sweden, and that difference is a consequence of the differences in our relative power.
Nor did I argue for military intervention, which I think would be counterproductive. As I said in the piece, leaving a residual force would have given us little leverage to drive a political settlement in Iraq, and in the absence of a political settlement violence was likely to resume, and escalate, as it has. I agree with Tom Ricks: we should not be surprised at how Iraq has deteriorated, and recent events not only don’t prove we should have left a residual force, but arguably prove the opposite – that leaving a residual force would have put us in an even worse situation.
I made the following analogy:
ISIS may be likened to the Khmer Rouge, who might never have come to power in Cambodia had we not bombed that country as part of our failed effort to defeat North Vietnam. Then, of course, it was our old enemy, Vietnam, that kicked out the Khmer Rouge from Cambodia. Similarly, if ISIS is prevented from overrunning Iraq, it will probably be because of intervention by Iran.
That doesn’t sound to me like a call for America to engage in air strikes or re-insert troops. Because it isn’t.
I will note in passing that I strongly opposed any intervention in Syria, among other reasons because I strongly suspected we’d wind up, unintentionally or not, supporting precisely the kinds of groups that have coalesced into ISIS.
Now, having said that, here’s how Sullivan ends his post:
Leave it alone. And do what we can to protect ourselves. That doesn’t guarantee anything. But intervention guarantees far worse.
That’s the attitude I’m arguing against. No, we can’t “fix” Iraq, and renewed military intervention would be counterproductive. But we do owe the Iraqis more than a determination simply to “protect ourselves.” We owe it to them to do what we can to ameliorate the situation.
So what can we actually do?
The single most helpful thing we can do, it seems to me, is to work to prevent this from becoming a regional war. That means working with Turkey, Iran and Saudi Arabia to get them to see that all of their interests will be harmed by such a war, and that instead their interests lie in laboring to produce a political settlement in Iraq. We have less influence than we once did in Turkey, and we have very little influence in Iran. One would hope that after all this time we have some influence in Saudi Arabia, though there’s only intermittent evidence of this. Nonetheless, we should use what we have, and try to bring these powers, potentially enemies of each other, into something resembling concert. This is basically what Leon Hadar advocates, and, as he notes, we won’t be able to force any of these powers to do what we wish - all we can do is try to influence them through diplomatic engagement, both with carrots and sticks.
But here’s the thing: there will be costs associated with both those carrots and those sticks. We have other goals with all of these countries; we can’t get everything we want. Whether we should pay those costs or not is partly a function of how much we feel we owe Iraq.
Daniel Larison also responded to my post, and also seems to think I favor renewed military intervention, which I do not. In fact, I agree with much of what he writes, particularly this:
The question is not whether the U.S. has done a great deal to create the current situation in Iraq–obviously it has–but what the U.S. can constructively do to remedy the country’s many woes. A government may be responsible for something and nonetheless be completely unqualified to repair the damage it has done. While there is a certain justice to the idea that the people responsible for breaking something are obliged to fix it, that takes for granted that they have the first clue how to rebuild what they’ve destroyed.
But I have a bone to pick with this:
If we took this definition of “indirect responsibility” seriously and applied it consistently, there is almost no event in the world for which the U.S. would not be somehow “indirectly responsible.” That way lie madness, endless conflict, and exhaustion.
I understand what he means, but I think he’s confusing what I intended as description for prescription. My definition of “indirect responsibility” is simply to say that once you have positioned yourself as a global hegemon, declared yourself “indispensable” and arrogated to yourself rights that are not granted to any other state, of course you are indirectly responsible for just about everything that happens, to a greater or lesser degree depending on the situation. The madness lies not in my description of reality but in the reality itself.
We should seek to change that reality. Perhaps I am overly pessimistic, but I assume that this will be a difficult and lengthy labor, with many setbacks along the way. I am hard-pressed to name another hegemonic power that acceded peacefully to a more multi-polar reality. Most empires decay before crumbling with catastrophic speed. Moreover, a policy of – let’s call it “diplomatically-engaged restraint” – may well produce some results that look practically indistinguishable from that crumbling. Indeed, that’s exactly what John McCain sees in Iraq right now. We should be aware of that fact – and that was the point of my parting line about “minding our own business” not necessarily leading to any kind of solution to the world’s conflicts.
In the meantime, we emphatically do not need to intervene everywhere to solve every problem. But that’s not the same thing as saying that we don’t need to have a policy – or that our policy can plausibly be tailored to a narrow vision of the national interest. We are simply too powerful, and too enmeshed in too many commitments.