This is school break week for my son, so we’re away on a family vacation. And because I don’t understand the concept of “beach read,” I brought along Michel Houellebecq’s acclaimed novel, The Elementary Particles. (Soumission is not yet available in English; this is preparatory reading for when it is.) I hope to finish it today.
I have a bunch of thoughts about the book, which has struck me by turns as touchingly sharp in its portrait of one very sad character (Bruno, who feels like something of an author-surrogate) and quite dull in its sweeping indictments, which amount to the assertion that this character, in his boredom and misery, is the exemplary hero of our age. It strikes me that the need to assert this – to pontificate upon the nature of society – is an index of failure as a novelist, the inability to see the world through any lens but that of his deeply alienated protagonist. That’s not how Dostoevsky’s Underground Man or Gogol’s mad diarist – or Roth’s Portnoy – earn our empathy, and through that empathy our appreciation for their understanding of the world.
I keep feeling that what Houellebecq sees as a novel of ideas is really just a novel of punditry, a degenerate notion of an idea that really amounts to an attitude or orientation. When Tolstoy or George Eliot lecture me in the midst of a novel, I put up with it because I know I am in the presence of actual ideas worth grappling with. With Houellebecq, so far at least, all I get is the feeling that I know the type, I’ve heard this before, and if I haven’t already been convinced I’m not going to be.
Perhaps that just means he was still searching for his ideal form, and perhaps now he has found it: what Mark Lilla, in his thought-provoking review of Soumission, called a “dystopian conversion tale.” We’ll see what I think when that novel comes out in English. But color me skeptical: I don’t think that ideas, the currents of civilization, can actually be blamed for one’s own personal inability to connect with other human beings or find meaning. The Elementary Particles is best when it stares unflinchingly at that condition as embodied in a single person, and worst when it engages in the evasion of blaming the world for that condition.
There’s a lesson in that for novelists – but also for pundits.
Let’s take a quick look back at my before-they’re-hatched analysis of the latest Israeli election’s chickens, and see how I did.
1. This election is not about the rise of the left or the fall of the right, but about a reshuffling of the left and right into new configurations.
Likud is now projected to win about 30 seats. Habayit Hayehudi and Yisrael Beiteinu are now projected to win 14 seats, for a national bloc total of 44 seats.
That same bloc has 43 seats in the current Knesset.
2. This election is about the fickle center.
In the prior Knesset, there were two “centrist” parties: Hatenuah and Yesh Atid. They had 25 seats between them, and sat in the government in the last Knesset. This time, Hatenuah ran on a joint list with Labor as the Zionist Union, and Yesh Atid was widely expected to be a partner of the Zionist Union rather than Likud.
Well, Yesh Atid is projected to have lost 8 seats. It’s harder to say how the center-left fared as a bloc, but if you add Meretz, Hatenuah and Labor together, that coalition went from 27 seats to a projected 28 seats.
However, a new center-right party – Kulanu – was born in this election, led by another defector from Likud unhappy with Netanyahu. Kulanu won 10 seats.
So, in broad strokes, the centrist leaders from the last Knesset have decided to switch from center-right to center-left. As a consequence, they lost a bunch of seats to a new center-right party.
3. This election is about the rise of the Joint Arab List.
The Joint Arab List did indeed come in third, with a projected 13 seats. What this means for the future depends on what the Arab parties do with their newfound clout.
If I were to advise them, it would undoubtedly come off as trolling. But it seems to me that the only way ultimately to effect change is to make political demands: we will sit in coalition if you agree to such and such. The Arab parties are now a large enough bloc, as a joint force, that they should start saying what that “if” consists of. Then the left-wing Zionist parties can debate whether it’s worth acceding to those demands or not.
I can guarantee you that the gap will be too large right now. But that doesn’t mean it will be too large forever. It won’t start narrowing, though, until someone lays down a plausible marker for where the negotiation begins.
4. This election proves the Israeli electorate doesn’t really believe Netanyahu’s scare-mongering on Iran.
Correct: but it also proves that the Israeli electorate does believe his scare-mongering on the Israeli Arab vote and on negotiations with the Palestinians. His final pitch was: you have to stop the left from winning because the left is soft on the national question, and the only way to stop the left from winning is to vote Likud, not for any other right-wing party. The pitch worked: the other right-wing parties dropped, in some cases by more than projected, but Likud surged.
5. The election proves that parliamentary systems aren’t all that.
Consider the rise of the Arab bloc. Why isn’t the third-largest party in the Knesset a plausible coalition partner?
Well, perhaps the problem is a structural one. So long as Israel has a proportional-rep system, there’s a natural logic to having Arab parties. So long as there are Arab parties, they will gravitate toward the center of gravity of Arab politics – which is decisively opposed to Zionism as such. So any Zionist party that forms a coalition with them has to navigate, as part of the coalition agreement, the question of the nature of the state.
By contrast, if Israel had a quasi-Presidential system, the President could campaign for Arab votes along with Jewish votes. The Arab vote would naturally gravitate toward the left side of the aggregate Israeli spectrum, Jewish and Arab together. And the Arab vote would, by definition, matter in a way that it still does not today.
On the other hand, it’s possible that under a Presidential system the Jewish vote would simply move more to the right as the left became more associated with Arab votes, so that Arab voters were just as effectively shut out of the system. After all, Mississippi voters divide very neatly on racial lines, resulting in right-wing and white domination at the gubernatorial level (regardless of whether the party expressing that domination is Democratic or Republican).
But that result would just reinforce my point that the decisive questions relate to the underlying structure of a given society, and that good institutions can at best marginally ameliorate deep divisions in that society. Israel’s institutions aren’t doing an obviously better job of that than American institutions.
Tomorrow, Israelis go to the polls, in an election called by the sitting prime minister to strengthen his hand, and that looks increasingly likely to weaken it, and possibly to cost him his office. But the true import of the election may lie elsewhere. Herewith five matters to consider about the upcoming election.
1. This election is not about the rise of the left or the fall of the right, but about a reshuffling of the left and right into new configurations. Currently, Labor and Hatenuah (“The Movement”), a moderate party founded by a Likud defector, have 21 seats between them. These two parties are now running as an alliance, the Zionist Union, projected to win about 25 seats. Meretz, the most left-wing Zionist party, has 6 seats, and Yesh Atid (“There Is a Future”), a centrist (on security and national questions), secularist and liberal party has 19 seats. These are the most-plausible coalition partners for the Zionist Union, and they are projected to win about 5 and 12 seats respectively. In other words, a coalition currently totaling 46 seats is expected to drop to 42 seats, plus or minus.
The right is a mirror image. Likud currently has 18 seats. Yisrael Beiteinu (“Israel is Our Home”) has 13 seats, and Habayit Hayehudi (“The Jewish Home”) has 12. In the new Knesset, Likud is projected to have around 22 seats, Yisrael Beiteinu to shrink to 5 seats, and Habayit Hayehudi to 11 seats – but a new center-right party, Kulanu (“All of Us”), founded by a breakaway Likud MK, is expected to garner about 9 seats, and is a likely partner for a Likud-led coalition. So a bloc currently numbering 43 seats is expected to grow to 47 seats.
Meanwhile, in the same polls that show the Zionist Union outpolling Likud, Isaac Herzog, head of the Labor party that forms the largest part of the Zionist Union, trails Benjamin Netanyahu by 14 points on the question of who was the “most appropriate” candidate to be prime minister.
And that’s supposed to be a big left-wing victory.
2. This election is about the fickle center. What’s wrong with the above analysis is that Yesh Atid and Hatenuah were both members of the 19th Knesset governing coalition that Netanyahu led, but now they are expected to be part of a center-left government. As a result, both parties are projected to lose seats – but their rumps will be better coalition partners for the left. So this election does mean a shift to the left: a shift of centrist party leaders who now prefer a center-left coalition to a center-right one.
More to the point, for the first time in a while these centrist parties are not running as alternatives to left and right, but as logical partners for the left. Yesh Atid expects to join a Labor-led government. Hatenuah is running on a joint list with Labor. The typical pattern recently is for such parties to collapse completely – as Kadima and Shinui did. If that doesn’t happen, it’s a kind of victory for the left.
So perhaps the better way to describe the election is: the left was so weak going into this election that even a significant set of defections from the center-right is not enough to get them close to being a governing coalition.
3. This election is about the rise of the Joint Arab List. Israel’s Arab citizens have full political rights, and there are several parties representing largely Arab voters in the Knesset, including an Arab nationalist party, two Islamist parties, and a formally non-sectarian party that has some Jewish members and leadership, a descendant of Israel’s old Communist party. This year, for the first time, all of these parties have united into a single joint list under the new leadership of Ayman Odeh of Hadash (the former Communists).
These diverse (indeed, ideologically contradictory) parties have united because of a new law raising the vote-percentage threshold for inclusion in the Knesset. Every one of them was at some risk of not clearing the new bar; if they did not hang together, they might very well hang separately. As a result, the joint list, though projected to win not many more seats than the constituent parties currently hold, may wind up being the third-largest party in the Knesset.
Does that matter? In terms of coalition politics, probably not. The list has said they would not serve in a Zionist government, and the major parties have all ruled out including the list in their governments anyway. That doesn’t preclude the Arab-dominated parties from supporting a congenial governing coalition from the outside – but if they would do so together, they likely would have done so separately.
The real significance will be if the parties are able to work together over a longer period of time, and thereby transform Israeli Arab politics into something with a bit more heft. There are really only two plausible happy futures for Israel’s Arab population: recognition as a national minority and concomitant greater autonomy for Arab-dominated areas within Israel; or greater integration into an Israel that has evolved in an explicitly post-Zionist direction.
Hadash, the leading partner in the joint list, explicitly advocates the latter, but either goal will require the existence of a party with substantial heft that can put itself in a position to make demands of the Israeli state. Whether the joint list evolves in that direction remains to be seen.
4. This election proves the Israeli electorate doesn’t really believe Netanyahu’s scare-mongering on Iran. If there were any validity to that scare-mongering, and Israel faced an unprecedented risk to its very existence, many things would be happening, none of which are. The major parties would be talking about the necessity of a unity government. There would be across-the-board support for the prime minister in his efforts to rally international support for Israel in its hour of crisis. There would be grim debate about what Israel will do if it cannot rally that support, and must act alone. You would see, in other words, all the signs Israel exhibited in the run-up to war in 1967.
Instead, Netanyahu has been undermined by economic discontent, but even more so by a sense of fatigue with his schtick, which hasn’t changed over his entire tenure in politics. Netanyahu’s pitch has always emphasized warning of the enormous threats Israel faces, and the unreliability of all other candidates in standing up to those threats. He has made no progress in actually addressing those threats because, from Netanyahu’s ideological perspective, progress is not actually possible; the threats are permanent and need to be faced with implacable resolve forever. Unless one exercises the kind of totalitarian control of a North Korean god-king, this is a viewpoint that wears thin after some time, even if the objective threats are significant and longstanding.
5. The election proves that parliamentary systems aren’t all that. Matt Yglesias made a bit of a stir a couple of weeks ago predicting the doom of American democracy because we have a presidential rather than a parliamentary system. Under a presidential system, you have two branches – executive and legislative – that each have substantial authority, and that each get a mandate directly from the people. As a result, when they clash, there’s no “principled” way to resolve the conflict, raising the specter of endless gridlock, extra-constitutional attempts to do an end-run around that gridlock, and finally civil strife between partisans of each opposing faction.
The fact that France’s presidential Fifth Republic has proven much more stable than the two preceding parliamentary republics should have been enough to raise real questions about this thesis, but Israel provides another useful counter-example. Because of proportional representation, Israel has always had a riot of parties representing distinct demographic segments or ideological interests, and has never had a majority government (every government has been a coalition). But in this next election, the most popular candidate for head of government (Netanyahu still is that) may lose the election, the winning party may prove unable to form a government (because of a lack of plausible coalition allies), and any coalition that does form may well have to reject the views of a substantial majority of voters on certain crucial issues (for example, the relationship between religion and state) in order to build a functional coalition.
Moreover, apart from the parties representing discrete segments of the population (ultra-Orthodox Jews, settlers, Israeli Arabs), no party will be able to identify a constituency to which it is accountable, as would be the case in a district-based system. This is one reason for the perennial instability of Israel’s political system: the voters do not have any clear way of assigning blame and punishing those who have failed to deliver on lunchbox issues, and so increasingly vote on identity-based questions.
Israel tried to resolve these longstanding problems two decades ago by switching to a quasi-presidential system involving direct popular election of the prime minister. This, of course, only made the problems worse.
All of which suggests that, relative to the underlying social structure, institutional design may not be quite as important as some political scientists think.
Rod Dreher is convinced the GOP establishment has finally decided to ditch social conservatism entirely:
A large group of Republicans have signed a “friend of the court” brief supporting same-sex marriage. The conservative friend who tipped me off to this says:
It’s quite the litany of GOP staffers, politicians, and consultants. It’s getting harder and harder to contend that social conservatives have any home in the GOP. I think we’re in the process of transitioning from “useful idiots” to “political liability.” Social conservatives like to ask, “Why would you want a ‘bigot’ to bake you a cake anyway?” Well, why would you want to be part of a liberal party anyway?
Let any social and religious conservatives who have eyes to see recognize that this list represents the GOP Establishment. You have likely heard of only a few of these folks, but the list represents the people who actually make the party work. The social liberals have won decisively. Let there be no more illusions.
And from this he concludes . . . that social conservatives would “still be compelled to vote Republican,” based on the “hope that there are some principled libertarians in the GOP.”
Principled liberals who care about actual religious liberty are simply presumed not to exist. Similarly, no thought whatsoever is given to the possible existence of unprincipled liberals who could be swayed by the prospect of a large number of votes.
But why not? Wouldn’t it be worth a bit of effort to find out if they exist? Wouldn’t it be worth a great deal of effort to try to cultivate such people? Wouldn’t it seem to be a better use of energy than continuing a strategy that, by Dreher’s own lights, has gone from failure to failure?
The analogy is often made between social conservative support of the GOP and African-American support of the Democrats. But the analogy is imperfect. Both groups fret periodically that overwhelming loyalty to one party makes it easy for that party to take them for granted. But most African-American voters have a set of bedrock economic interests that dovetail better with the Democratic Party’s positions than with those of the GOP. If the GOP were the party of reversing mass incarceration, greater scrutiny of the power of the police, etc., with the Democrats taking the opposite side on all these matters, and if the GOP overwhelmingly rejected appeals to white racial solidarity, then, over time, the African-American vote might split between those who voted more based on economic interest and those who voted more based on issues related to personal freedom – roughly the way the African-American vote split after FDR and before LBJ. Until that day, there’s no contest.
Now, I think it is very much in the African-American community’s interests to try to bring that day about. I think it’s more in their interest than it is in the GOP’s interest, which is why I think African-American leaders should be courting Rand Paul and other GOP libertarians rather than the other way around. But precisely because the interests of the African-American community are diverse, and because so many of them fit better within the Democratic issue matrix, I can understand why most African-American leaders don’t see the point in trying to cultivate options outside the Democratic Party.
But if Dreher is correct (and I’m not saying he is), then social conservative support for the GOP is single-issue driven. It’s “all about religious freedom,” he says. If that one issue were taken off the table, socially conservative voters would be free to vote their consciences – and their economic interests. If he really believes that, why is he so sure that there is no trade to be done there? That there is no point in even trying to negotiate with the other party?
Let’s imagine a handful of significant figures in the religious right stating publicly that their support was up for grabs based on a single issue: religious freedom. Come up with legislation providing adequate protection, and any candidate signing on would earn their support – or their firm neutrality if both candidates signed on. This would be true regardless of those candidates’ other views – including on social issues. Meanwhile, absolutely no further support would be provided to either party generally.
Does Dreher really think that the Democratic Party wouldn’t take a serious look at somebody credible who said that? Or that such a statement wouldn’t at least prompt a real debate in Democratic ranks about how to respond?
Dreher talks a lot about the Benedict Option, the building of independent communities organized around a common moral and spiritual commitment, which could keep the flame of Christian civilization alive during a new dark age. He always stresses that he doesn’t mean withdrawing from the world, but merely seceding from the larger culture. Well, there are such communities in America, and it’s worth noting how some of them play the political game.
Consider Kiryas Joel. This village in Orange County, New York, was designed as an enclave of the Satmar Hasidic sect. Satmar are the most insular of Hasidic sects, going to enormous lengths to keep themselves uncontaminated by the larger culture. But they participate in commerce – and they most certainly participate in politics. Specifically, they vote as a bloc for whichever candidate best supports the narrow interests of the community.
And, funny thing, but politicians respond to incentives. This is a community that rigidly separates the sexes and imposes a draconian standard of personal modesty – and that strives mightily to impose that norm as a public matter in their community. Don’t even talk about homosexuality. But none of that prevented a Democratic candidate for Congress from earning their support by promising to facilitate the community’s growth. And with their help, he narrowly won his election against a Republican who had previously earned the Satmar community’s favor.
I am not writing a brief for Kiryas Joel or Satmar. I think that kind of insulation is extremely destructive, not only for the individuals involved but for any kind of authentic spiritual life. But it seems to me that this is what the Benedict Option looks like in the real world – or, rather, this is a somewhat extreme end of what it might mean.
And my real point is that that approach – a focus on nurturing a spiritual community, maintaining however much integration with the rest of the world as is compatible with that priority, and orienting one’s politics on the specific needs of your community – is completely compatible with playing the two parties off against each other. Satmar stands opposed to basically everything the Democratic Party stands for. Heck, it stands opposed to basically everything America stands for. For that matter, it stands opposed to basically everything the rest of the American Jewish community stands for as well – it’s resolutely anti-Zionist, extremely socially conservative, and refuses to cooperate with non-Hasidic groups – it has a hard time getting along even with other Hasidic groups. And it still gets courted by Democrats.
And their freedom is pretty darned secure.
If that’s really the only issue, well, there you go. And if it isn’t the only issue, then let’s talk about the whole panoply of possible questions that might rightly affect somebody’s vote, and make an open-minded assessment of the relative merits of different candidates and parties. Anything is better than, in the wake of what you see as another round of betrayal and abuse, working to convince yourself you have no alternatives.
Hamilton, the wildly unlikely new hip-hop musical about the “ten-dollar founding father without a father” based on the Ron Chernow biography, has been hyped so much I almost didn’t want to see it. But believe the hype: Lin-Manuel Miranda’s new musical is truly revolutionary – and also a deeply moving work of art, and a sincere love letter to a particular vision of America.
Start with the music. Yes, it’s been 20 years since Rent established the viability of the musical in a contemporary musical idiom, almost 30 years since a rap-influenced song from a Broadway musical first charted, and Mr. Miranda himself has done musicals before in a hip-hop/R&B mode. But Hamilton takes it to a whole new level, in part because the individual numbers don’t have sharp corners, but weave into each other. This is musical storytelling par excellence, and the music feels supremely at home in our world. And on top of that it’s really good.
That would be impressive enough an achievement in telling a contemporary story. But this is a historical play, about a period far removed from our mores as well as our music. Or is it? The most unexpected achievement of Hamilton is that it genuinely bridges that huge gap in time. It doesn’t make us feel like we are in a period piece, nor does it engage in cutesy anachronism. Instead, it makes us feel like what happened then, with these people, could be happening right next door; that the founding fathers were our close cousins in spirit; that we would know each other if we met.
We see the fetish for dueling not as some romantic archaism, but as very akin to the contests for honor and supremacy that bloody streets today; we see the tomcatting and consequent sex scandals and we know not just “it was ever thus with men in power” but that we could have a conversation with these 18th-century types, and we’d know what we were talking about. By the time Thomas Jefferson (Daveed Diggs) and Alexander Hamilton are engaging in rap battles about policy in front of George Washington (Christopher Jackson), it not only feels completely appropriate, the substance of discussion is actually clearer than it would be from, well, treating it like a seminar.
The sheer amount of territory covered is breathtaking. We start with a panorama of colonial New York, and Hamilton’s arrival from the West Indies. We have the drama of the revolutionary war. The debate over the Constitution. Then the conflict between Hamilton and Jefferson in the Washington administration. Then the “revolution of 1800” and Aaron Burr. It’s a huge canvas – and a vast amount of expository information is imparted at lightning speed. And we hear it. We take it in. Just as an educational achievement, it’s a wonder.
But this isn’t “Schoolhouse Rock!” There’s a beautiful, even heartbreaking three-sided love story with Hamilton’s wife, Eliza (Philippa Soo) and her sister, Angelica Schuyler (Reneé Elise Goldsberry). There’s a powerful story of fathers and sons, with Washington playing Hamilton’s surrogate father and Hamilton struggling to set his own son, Philip (Anthony Ramos) on an honorable path. And there’s Burr (Leslie Odom, Jr.), the chorus figure as well as the nemesis, who comes off as just as much an American exemplar as Hamilton, who poignantly represents the rage of the second-place finisher in a winner-take-all meritocracy, the would-be Gatsby who couldn’t quite pull off the trick of total self-creation.
And then there’s Hamilton himself. Played by Mr. Miranda (when was the last time that happened on Broadway – book by, music by, lyrics by, and starring?), he is both perpetually young and prematurely old. (“I’m just 19 but my mind is older,” he sings, and sounds like he’s about 12.) His story is a quintessentially American one, of a young guy on the make and determined to make it to the top on his own merits and in his own way, and because this is his story and America’s story, that turns out to be what America was about all along. Yes, there’s lots of high-minded talk of revolution and freedom, but what that comes down to when you strip away the pretense is a bunch of guys who didn’t want to wait, who were ready to take their shot and weren’t going to give it away.
That’s not the only way to understand America by any means, but it is a pretty good way in for a contemporary audience – and not just a New York audience. More to the point, Hamilton, a young hustler who couldn’t hide his own ambition if he wanted to, but who also had a profound sense of honor and integrity for which he was willing to sacrifice, well, ultimately everything, is the perfect figure to deliver the message that those are attributes that can coexist in a single person, and a single nation.
Oh, and a brief word about the casting. Nearly the entire cast is non-white (the big exception among named roles is George III, played by Jonathan Groff), but this isn’t exactly black Shakespeare either. This is emphatically not color-blind casting, nor (obviously) traditionally color-conscious casting. Every actor on stage plays his or her part(s) (there’s quite a bit of double-casting – some of it very clever) in his or her own skin, and voice, while also playing these characters from a very different era. But because the language moves so smoothly between the contemporary and the period, the actors can embody their characters without projecting double consciousness. It can almost make you believe in an America where, though we bear the monumental stain of slavery (which comes up repeatedly in the show), we’ve managed to wash out the twin stain of white supremacy, and see these founding fathers also as founding brothers. Almost.
The show isn’t perfect – nothing is – but I think it will only get better as it transitions to Broadway. They’ll have the opportunity to tighten up a second act that gets a bit episodic now (always a risk in a biopic). The show will benefit, I think, from the bigger sound you can create in a Broadway house, and some of the choreography will benefit from a bit more elbow room – though, honestly, what it would really benefit from is bringing it to the audience, Great Comet-style. (I get chills imagining how the duels would play if the audience were between the combatants.)
But they’ll really have achieved something if they bring it all the way to the audience – if schools from East New York to West Hollywood decide that instead of showing “1776,” this year they’ll have the kids perform the Hamilton-Jefferson rap duels. Then we’ll know Mr. Miranda didn’t just reach the bourgeois – he rocked the boulevard.
Hamilton runs at New York’s Public Theater through May 3rd, but good luck getting a ticket – it’s way sold out. Tickets for the Broadway run beginning July 13th at the Richard Rodgers Theatre just went on sale.
Daniel Larison is not nearly outraged enough by the Senate’s intrusion into the negotiations with Iran. What Senator Cotton and his colleagues are doing is deliberately trying to cripple America’s ability to conduct foreign policy. And, at least in terms of domestic politics, it may well work.
First of all, let’s dispense with the notion that there is any principled Constitutional question at issue whatsoever. Senator Tom Cotton does not believe for one instant that the President is incapable of entering into binding agreements with foreign governments, nor does he believe that Congress or subsequent administrations can dispense with such agreements without cost. We know this because he believes that the Budapest Memorandum – which was not a treaty and was not submitted to the Senate for ratification – constitutes a binding promise to support Ukraine in its conflict with Russian-backed separatists, and that he believes failing to live up to this promise poses grave danger to the credibility of U.S. foreign policy generally. Here is video of Senator Cotton saying as much.
So imagine, if you will, a Senate faction opposed to the Budapest Memorandum (perhaps out of fear that signing it would oblige America to do precisely what Senator Cotton thinks it does oblige us to do) sending a letter to the Ukrainian president in 1994 alerting him that such an agreement would properly have to be submitted to the Senate for ratification, and might well fail, and that a subsequent president could revoke its guarantees at will. A hypothetical Senator Cotton-equivalent would readily understand that the purpose of such an effort was to obstruct the president’s ability to conduct foreign policy, and would be alarmed at the implications for the credibility of America’s commitments around the world.
Right? Of course right.
This has nothing whatsoever to do with the Senate’s responsibility with respect to “advice and consent,” and nothing to do with asserting the legislature’s proper role in foreign affairs – a role that the legislature continues conspicuously to abdicate. (The Obama administration has been in regular violation of the War Powers Resolution for years, but while individual senators and representatives have pointed this out from time to time, Congress has taken no action, and I don’t expect it ever will.) I don’t expect much from most of the crowd of signatories, but Senator Paul in particular should be excoriated for participating in this stunt. If he thinks this is a “constitutionally conservative” move, he needs to have his head handed to him by people who actually know what they are talking about.
Substantively, the view of the Republican leadership appears to be that any of America’s threats to use force, however ambiguous or slight, must be backed up vigorously for fear of a loss of “credibility.” Diplomatic agreements, however, are not to be taken seriously, because they may be discarded whenever a new leadership disagrees with what a previous administration agreed to. They affirmatively wish such agreements not to be credible, so that they are never entered into. And, funnily enough, if you cripple America’s diplomacy you’ll have lots of opportunities to demonstrate the “credibility” of America’s threats to use force. Which is exactly the goal – because such situations play to the GOP’s strengths as a brand.
And unfortunately, the Republican leadership may well be able to achieve their goals. Not internationally – I doubt the Iranians did more than roll their eyes at this stunt – but domestically. It turns out to be relatively easy to manipulate the public into supporting a more aggressive foreign policy. If talks with Iran fail (which they might do regardless), it is vanishingly unlikely that the American people will blame the GOP leadership in any way that matters. On the contrary: if the talks fail, the country will be more supportive of a more aggressive stance toward Iran, which will redound to the benefit of the GOP generally and its more hawkish members in particular. So long as that’s the electoral dynamic, there are literally no disincentives for this kind of outrageous behavior.
To paraphrase the immortal words of Senator McConnell, the negotiations with Iran may prove to have been a hostage worth shooting.
A few weeks ago, I learned that students are exposed to this sort of [morally relativistic] thinking well before crossing the threshold of higher education. When I went to visit my son’s second grade open house, I found a troubling pair of signs hanging over the bulletin board. They read:
Fact: Something that is true about a subject and can be tested or proven.
Opinion: What someone thinks, feels, or believes.
This comes from Common Core standards, McBrayer says. What’s wrong with this? McBrayer goes on:
First, the definition of a fact waffles between truth and proof — two obviously different features. Things can be true even if no one can prove them. For example, it could be true that there is life elsewhere in the universe even though no one can prove it. Conversely, many of the things we once “proved” turned out to be false. For example, many people once thought that the earth was flat. It’s a mistake to confuse truth (a feature of the world) with proof (a feature of our mental lives). Furthermore, if proof is required for facts, then facts become person-relative. Something might be a fact for me if I can prove it but not a fact for you if you can’t. In that case, E=MC2 is a fact for a physicist but not for me.
But second, and worse, students are taught that claims are either facts or opinions. They are given quizzes in which they must sort claims into one camp or the other but not both. But if a fact is something that is true and an opinion is something that is believed, then many claims will obviously be both.
It’s pretty shocking to read the examples he found, and the evidence from his own child’s moral reasoning that this instruction is having a corrosive effect. McBrayer concludes:
In summary, our public schools teach students that all claims are either facts or opinions and that all value and moral claims fall into the latter camp. The punchline: there are no moral facts. And if there are no moral facts, then there are no moral truths.
This kind of nihilism cannot work in the real world, the world that they will encounter, the philosopher says. Read the whole thing. It’s important.
Well, I suppose it is important – but are we entirely sure that second-graders are prepared to handle Gettier counterexamples?
More seriously, let’s look at a very real-world, second-grade-type example, and see where facts and opinions enter into the discussion.
Eddie takes Billy’s cookie without permission. Billy protests to the teacher. What are the facts of the case?
- Eddie took the cookie.
- The cookie belonged to Billy.
- Billy did not give Eddie permission.
These are all facts of the case. If they can be established, then we know what happened.
To know what consequence follows, you need to know some other facts – facts of law. In this case, these are:
- Taking other people’s things without permission is not allowed.
- The teacher is the one who determines what happened, and what the consequence is if a student does something that is not allowed.
Those are the facts of the law: matters of the rules and who has jurisdiction over a given question. The teacher duly establishes the facts, and sends Eddie to the principal’s office.
What would be an example of an opinion? Well, Eddie could say that his punishment of being sent to the principal’s office is unfair, because he gave Billy a cookie last week so Billy has to give him a cookie this week. That’s an opinion. It’s certainly not a fact.
It’s also moral reasoning, and it should properly be engaged, so as to develop Eddie’s moral reasoning further. The teacher should say that he understands it feels unfair that Billy didn’t reciprocate in the cookie exchange, but that this still doesn’t justify taking Billy’s cookie without permission because – and here the teacher would have to give a second-grade level explanation of why this is wrong. For example: if everybody took whatever they thought they deserved, people would be taking from each other all the time, and there would be lots of fights. Or: how would you feel if you were Billy and somebody took your cookie without permission because he felt you owed it to him? Wouldn’t you feel that was wrong? Regardless of what he said, he would need to present an argument – which could be debated. Because that’s how moral reasoning works.
But he would also have to say: even if it feels unfair, Eddie has to suck it up, because the fact is that he, the teacher, gets to decide this question.
The point is: a debate about whether or not it’s wrong for Eddie to take Billy’s cookie is different in kind from a debate about whether or not Eddie actually took Billy’s cookie, and we need some kind of nomenclature for distinguishing the two questions. “Fact” versus “opinion” will do fine.
If there’s a problem here at all, it’s not with what constitutes a fact but with what constitutes an opinion – that is to say, a failure to distinguish between an opinion and a preference. For example: the statement “vanilla is the best flavor of ice cream” is an opinion. It’s also a stupid opinion because “best flavor of ice cream” is not really a thing. What the speaker really means is “I like vanilla ice cream best” or possibly “most people like vanilla ice cream best.” Either of those statements is a statement of fact, not opinion – a fact about individual preferences.
Now, what about “George Washington was the greatest American President?” That’s clearly a question of opinion, not fact, right? Ok – but is it a stupid opinion like “vanilla is the best flavor of ice cream?” The answer depends on whether a word like “greatest” has any social meaning. If it doesn’t – if we can’t reason together about what makes for greatness – then it’s a stupid opinion, because the only statements we can actually make are factual statements about personal preferences, our own or others’. But if it does – if we can reason together about what greatness means, its relationship to goodness, or to sheer historical importance – then it’s not a stupid opinion, because we can share it, debate it, and have our minds changed about it.
Which brings me to a final test proposition.
“There is no God but God, and Muhammad is His Prophet.” Fact? Or opinion?
Obviously, it’s a problem if you teach the above proposition as an example of fact. And if it isn’t a fact, then it’s an opinion. But if you were a pious Muslim parent, and learned that your child was taught that a central tenet of your religion was “just a matter of opinion,” you’d be unhappy, right? That statement is certainly more than a statement of fact about personal preference (“I like Islam best!”) – but it’s also not really something subject to public dispute, by which I don’t mean that such dispute is blasphemous or forbidden but that it’s a category error, at least within modernity, to argue with a proposition like the above in the way that you might argue about whether it was right or wrong for Eddie to take Billy’s cookie, or whether George Washington was the greatest President.
The statement, “There is no God but God, and Muhammad is His Prophet,” is a creedal statement, an affirmation. It’s something more forceful and substantial than a preference, not really subject to public reason like an opinion, and not subject to verification like a fact. It belongs in its own category of statements.
My question is whether McBrayer thinks moral truths belong in that same category. If so, then I would say that he is the one arguing against moral reasoning – arguing, in fact, that moral reasoning is impossible and that therefore what we need to teach children is obedience to moral commands. That view has a venerable history in Western and non-Western philosophy, but I dissent from it.
If he doesn’t think moral truths belong in that category, then I think he is just objecting to the specific words chosen by the Common Core, and not to the distinction itself. Because the distinction hanging over the bulletin board is entirely valid and even essential to moral reasoning. If it’s not being used that way, to promote the development of moral reasoning, but instead to wall us all off from each other with our indisputable personal preferences, the problem isn’t with the distinction itself, but with the fact (if it is the case) that we’re taking our view of opinion from Jeffrey Lebowski.
Once upon a time, Berlin was where the world was most likely to end, the point where the armies of freedom and of tyranny — or, if you prefer, the armies of progress and of reaction — stood eyeball to eyeball, wondering who would blink first.
At the height of superpower tension, right as the Berlin Wall was being constructed, Billy Wilder directed the film One, Two, Three, about the divided city and continent — and one Coca-Cola executive’s schemes to conquer both sides of the Iron Curtain. The film’s satire was wide-ranging, encompassing conniving American executives, spoiled Southern belles, and inadequately de-Nazified German workers. And our great superpower rival — against which America stood ready to incinerate half the world, and against which we were enjoined by our new president to “pay any price, bear any burden” — was portrayed as poor and incompetent, its officials petty, lustful, backstabbing, and clownish. In other words: not much different from the folks at Coca-Cola.
It is unfortunately difficult to imagine a similar film being made today. And that’s a shame. It would be helpful if we could remember that our rivals and enemies share with us a full respective measure of human stupidity and vice. It would be even more helpful if we could remember just how extraordinarily weak our current enemies are, relative to ourselves and relative to those we’ve faced in the past.
I say this not because I believe knowledge of our common humanity will enable us to see past our differences, nor because if we realized how weak our opponents are we would be bolder in confronting them. On the contrary: every single war fought by humanity was fought between groups of human beings, and most of the time both sides recognized that fact. And substantially weaker opponents are frequently able to deny their would-be conquerors victory — just ask George III. Or, for that matter, George W. Bush.
But if we had a more realistic view of our opponents, then we would realize that our conflicts with them are far less existential than we are often led to believe. Which would be comforting, because many of them are also far less likely to be resolvable than we would like to believe, either by diplomacy or by force.
Andrew Bacevich begins his book, Washington Rules, with a meditation on Berlin similarly intended to call attention to how much we got wrong about the Cold War. Specifically, right after the wall came down, he crossed over into East Berlin – and he saw, suddenly, just how weak an opponent the Communist East was. That insight led him to question the verities of much of his prior Cold War thinking. In much of the rest of the American establishment, it led instead to triumphalism. And triumphalism has now turned to an existential crisis as we realize that we cannot actually dictate terms to the entire world.
My own view is that the situation with Russia is hopeless. We have very few levers to change Russian behavior in the short term. Risking war over Crimea or eastern Ukraine would be absurd, sanctions are unlikely to have any material effect, and arming the Ukrainian government will just escalate the scale and cost of civil war. Meanwhile, Russia under Putin or under a successor is unlikely to be ready to admit that it has come to a stable accommodation with the West even if one were offered. Neither carrots nor sticks are likely to be efficacious.
But the situation is also not very serious. Russian revanchism is bad news for Ukrainians, Moldovans, Georgians, etc. Their independence just got much more expensive than they can afford. But the international system will not fall apart if we are unable to reverse Russia’s intervention in Ukraine, nor is Russia crazy enough to attack Germany, or even Finland. We should keep the situation in perspective and set policy accordingly.
The situation in Iran is not similarly hopeless, but we shouldn’t get our hopes up too much either. Iran is not going to be “turned” into a U.S. ally – because that wouldn’t actually serve either American or Iranian interests. But Iran does have a lot to gain from normal relations with the United States, and very little to gain from actually building a nuclear weapon. It’s possible that there is a window currently open to ending a period of fruitless hostility, and if there is, it behooves us to make every effort to go through it.
But it’s also possible that there is no such window, that Iran’s regime depends too much for its legitimacy on active hostility to the West and to the United States specifically, and that therefore we really are in a zero-sum situation. That may be the most likely scenario, in fact. But even in that case, we can’t lose sight of the overwhelming disparity in power and resources between the United States and Iran, and the relative insignificance of the latter in the larger scheme of world affairs. A failure to improve relations with Iran would be a disappointment. It would not be a catastrophe.
There is really only one country on earth of which one could say that whether we manage our relationship with it well or poorly has potentially existential implications, and that is China, whose importance to the world economy and to the future environmental health of the planet rivals ours, and whose potential military strength does as well. Fortunately, we don’t seem to be doing as catastrophic a job on that front as we sometimes seem to be elsewhere.
So on China, I’m nervously optimistic. On the rest of the world, a cheerful pessimism strikes me as a useful tonic.
Teachout is reviewing a new Hope biography by Richard Zoglin that calls Hope the “entertainer of the century.” Why is it that Hope was phenomenally popular for much of the 20th century, but is now virtually unknown by people under the age of 60? Teachout says:
But Zoglin, for all his admirable thoroughness, inexplicably fails to emphasize the central fact about Hope and his career—one that not only goes a long way toward explaining why he was so successful, but also why we no longer find him funny.
Simply: He wasn’t Jewish.
What was missing from his style? Even though Hope was a first-generation European immigrant, there was nothing remotely ethnic about his stage manner. He was among the few successful WASP comics of his generation, and despite the fact that he hired such Jewish writers as Larry Gelbart and Mel Shavelson, the jokes they penned for him lacked the sharp ironic tang of Jewish humor that is to this day one of the essential ingredients in American comedy.
So, clearly we need a list of currently living non-Jewish comedians of note.
I’m tempted to start with Eddie Murphy and Dave Chappelle and Chris Rock and so forth, but I understand that this would be met with “That’s not what I meant.” Ditto if I decided to mention Tina Fey or Amy Poehler or heck, Carol Burnett, who is still out there doing great work and whose classic show has aged marvelously well. Anybody who’s plausibly an outsider is, by a kind of magic switcheroo, an insider. Which is completely unfair. You don’t get much more all-American than Bill Cosby (and the cloud he’s now under has no bearing on this particular question).
I guess I could mention Patton Oswalt or Louis CK or Zach Galifianakis or Stephen Colbert but I’m sure they’d all get axed for being too interesting. So what are we really saying here? That anything non-bland is implicitly Jewish, like the way anybody from New York is implicitly Jewish? That’s ridiculous, right? What – are we going to posthumously circumcise Charlie Chaplin because he’s aged better than Hope has?
Fine: are there any genuinely non-threatening but perfectly successful comics who are white, male, not Jewish, all-American, still living, and who, let’s put it this way, just don’t seem like Lenny’s children?
Here’s my very quickly assembled, too-short list of extremely well-known names:
- Johnny Carson. Check out his classic bits. Are they the funniest bits in history? No. Have they aged well? Surprisingly.
- Throw in Jay Leno and Conan O’Brien – heck, throw in David Letterman. Contrary to popular belief, Jews did not invent irony – just ask the British. We invented anger.
- Dana Carvey. You don’t make a career playing George H. W. Bush by seeming even vaguely Jewish.
- Will Ferrell. Leader of the Frat Pack. Not Jewish. Not even Italian.
- Steve Martin. An oddball? Sure. Threatening? Not really. Jewish? Not at all.
- Hey, what about Drew Carey? By any reasonable measure, Carey is extremely successful and well-known. If you don’t think he’s funny, you’re moving the goal posts.
Give me a bit of time with the Google and I’m sure I could put together a much longer list. American comedy, like America, is much more diverse now than it was in Hope’s day. That doesn’t mean there are no non-ethnic comedians, or that white bread comedians can only be funny by aping a “Jewish” style. It just means that there’s no one style, and no one comic, who can be as overwhelmingly dominant as Hope was in his day.
Bob Hope’s work may have aged poorly because that just sometimes happens. Somebody who is just made for a particular era doesn’t age well into another. Somebody else ages better. But that applies to Jewish comedians as well. Mel Brooks and Carl Reiner will be funny forever. You know who also hasn’t aged so great? Lenny Bruce, the original Jewish rebel comedian.
You know who else feels more and more dated as time goes by?
(h/t Steve Sailer for that delightful bit of Canadian humor. Hey – do Canucks count?)
UPDATE: For those who are interested, Adam Gopnik’s take on Hope can be found here.
Damon Linker has a new column about Jeb Bush and the Iraq War that really should be several columns – it goes in so many fruitful directions, but isn’t able fully to explore any of them. But that’s the nice thing about having an old-school blog – I can spend as much space as I like exploring whatever I wish.
The first potential column is about how Jeb Bush will have a hard time addressing the Iraq War:
[The Iraq War] remains very unpopular outside the fever swamps of the far right, so defending the decision to launch it could be a kiss of death in the general election. Calling it a mistake, on the other hand, would be viewed as a swipe at his brother, which would risk looking peevish and threaten to ignite a GOP civil war.
I have no idea how Jeb will finagle the issue. (Judging by his statements so far, he’ll try to have it both ways by asserting he’s his “own man” while hiring a bunch of retreads from his brother’s old neocon-ish foreign policy team.)
I actually think this is going to be much easier than Linker thinks. Bush doesn’t need to come out full-throatedly in favor of or against the Iraq War to win the GOP primary. He can just say that his brother was somewhat naive about how tenacious our enemies would be, and hence the first couple of years after the invasion went really badly. But by the end of his term he had won the war that Obama then lost. This appears to be exactly what Republican primary voters want to hear: they don’t want to re-litigate that war, but rather to frame any discussion about that war in the context of where they want to go from here, which is toward crushing ISIS mercilessly.
In the general election, then, he will face a candidate – Clinton – who supported the Iraq War wholeheartedly, and the Libyan war, and pushed for direct intervention in Syria. So Bush will not have to do any “defending” of the decision to invade Iraq. On the contrary – he’ll be able to go on the offensive with respect to more recent foreign policy failures. Clinton, after all, has a track record. Jeb Bush does not. It’s just a depressing fact that so long as there are two solid hawks up there on stage, we will not get a meaningful foreign policy debate in the general election.
That’s why we badly need a peace candidate in the race. But I only grow firmer in my conviction that there isn’t really a constituency for one.
Linker’s second potential column is about how, to-date, we’ve re-litigated the Iraq War the wrong way:
Ever since it became clear in the first months after the 2003 invasion of Iraq that there were no weapons of mass destruction in the country, the Iraq War debate has focused on intelligence failures and how the administration of George W. Bush (aided and abetted by mainstream media outlets) supposedly misled the public into supporting the war.
This has always been a distraction and a misconstrual of the state of the argument prior to the invasion.
The fact is that just about every intelligence agency in the world (and not just the Pentagon’s Office of Special Plans) believed that Saddam Hussein possessed WMDs. The debate hinged on whether these weapons constituted a threat sufficiently large enough to justify toppling Hussein. I came down firmly on the side of No, along with Barack Obama, Pat Buchanan, Dominique de Villepin, and a few staffers at The Nation. (That’s an exaggeration, but not by much. Or so it felt at the time.)
This is a hugely important point, but one that substantially undercuts his first potential column. We’ve re-litigated Iraq this way precisely because it is very comfortable ground for hawks. And that’s exactly why the question “well, if Iraq really did have WMD, would the invasion have been justified then?” will not be asked in 2016.
We know that because we’re actively debating the Iranian nuclear program, and how we should deal with it, right now. That program is a known fact. We don’t know whether the Iranians intend to build a bomb, but it is reasonable to assume that the goal of the program is to make the country nuclear-capable. From the perspective of the hawks, the goal of diplomacy is to prevent that outcome – and if we can’t prevent that outcome through some combination of diplomacy and sanctions, we have to be prepared to use force in the “last resort.” Of course, that threat itself shapes the contours of diplomacy in ways that may prevent diplomacy’s success.
To make the contrary case, the case for ruling out the use of force to prevent Iran from developing nuclear weapons, you have to make the case that war would be worse than allowing Iran to succeed. That doesn’t mean giving up – there are other carrots and sticks that can be deployed to, hopefully, find a middle ground that allows Iran to say it is developing a nuclear capability while allowing the rest of the international community to say that it has proper safeguards in place to prevent Iran from easily “breaking out” and building nuclear weapons – and that provides proper incentives (positive as well as negative) for Iran to want to remain a non-nuclear state. But it means recognizing that the hawks might be right about Iran’s intentions with respect to its nuclear program – that it intends to get a bomb eventually one way or another – and concluding that preemptive war is still not worth it.
That’s not a case that anybody running in the general election in 2016 is likely to make – including Rand Paul in the unlikely event he gets the GOP nod.
Linker’s third potential column is about what really motivated going into Iraq:
Immediately after the attacks of Sept. 11, there seemed to be a realization on the part of some in the foreign policy establishment both inside and outside the Bush administration that the events of that morning signaled something new: sub-state actors could declare war and inflict levels of harm we formerly assumed only a state could accomplish.
That’s what was going to make the War on Terror different. After Afghanistan, it wouldn’t be waged against states. It would target sub-state actors within states — usually states too weak to combat terrorists operating within their borders. This meant the war would be largely covert, with victories unheralded and defeats unannounced. Its signature would be special-ops raids, surgical missile strikes, and drone warfare.
But as we had already learned by the summer of 2002, when planning for the invasion of Iraq really got rolling, this new kind of war could be frustrating. It didn’t produce enormous casualties, like traditional land wars often do, but it also produced little glory. Victory was muddy, indeterminate. The war’s governing mood was ambivalence. The enemy could easily melt away into obscurity only to crop up in another country thousands of miles away. It could be maddening, like a global game of Whack-a-Mole.
And that, more than anything else, is why we found it so tempting to declare war on a country. Finally something familiar! Something satisfying!
Except that the Iraq War wasn’t just a distraction. It actively set back the War on Terror by creating a new failed state, right under our noses, where Islamist terrorism could breed. (As everyone now knows, the Islamic State was incubated in the chaos of the U.S. occupation of Iraq.)
There are several problems with this chronology – starting with the fact that the planning for the Iraq War began in the 1990s, with support from both parties, and that a big part of the motivation for the Iraq War was that the Persian Gulf War – a classic war between states – had such an “unsatisfying” outcome.
Linker suggests that we invaded Iraq because that fit our “Cold War” mindset, a mindset we were more comfortable with than we were with the ambiguous War on Terror. But this is highly problematic. The Soviet Union itself was only intermittently seen as simply a state in a world of states; at least as often – and almost exclusively if we’re talking about the American right – it was viewed as the head, or, better, the lead instrument, of a transnational, ideological force known as Communism. And many of the conflicts of the Cold War were fought not between but within states. The Soviet Union supported revolutionary movements around the world; the United States sponsored counter-revolutionary forces, including coups (Iran, Chile, Guatemala, etc.) and, by the 1980s, insurgencies (Angola, Nicaragua, Afghanistan, etc.).
More importantly, of the three largest inter-state conflicts of the period – Korea, Vietnam and Afghanistan – two resulted in total defeat for the more directly-engaged superpower, and one resulted in a bloody and painful stalemate. There’s a reason why many Americans, particularly but not exclusively on the right, were never comfortable with the Cold War – precisely because it was “unsatisfying” in all the ways that Linker describes the War on Terror as being.
These problems reading the past lead Linker to a questionable conclusion:
Barack Obama seemed to understand all of this. He strongly opposed the Iraq War and as president quickly returned the War on Terror to its original strategy of employing mainly covert ops and drone strikes.
And yet, as if to prove that he could be just as foolish as George W. Bush, Obama repeated his predecessor’s mistake when he approved air strikes against Muammar Gaddafi’s government in Libya. This time it wasn’t fear that tempted a president to act. Obama’s a Democrat, after all, so he was motivated by a bleeding heart — by the humanitarian imperative to protect the rebellious civilians of Benghazi against the Libyan air force.
And it worked. Until it didn’t.
Just like in Baghdad.
The problem here is not the analysis of Libya (I can quibble over how “humanitarian” the motives really were, but that would truly be a quibble), but the implicit argument that President Obama got the War on Terror “right” prior to or apart from Libya. The evidence from Afghanistan, Pakistan, Yemen, west Africa and elsewhere is at best equivocal on that point. Covert ops and drone strikes have indeed killed lots of bad guys. It’s not obvious that we are making “progress” though.
This, of course, is Linker’s point about the War on Terror being ambiguous, without clear metrics for victory, etc. But, as with the Iranian nuclear program, if we really want to have a debate about this question, we have to ask not only what means are more successful and what are more counterproductive, but how we respond if all options present a real likelihood of further destabilization.
Linker’s ultimate conclusion – that someone needs to stand up for the devils we know (Assad, Qaddafi, Hussein, Mubarak, etc.) against the devils we don’t (al Qaeda, ISIS, etc.) – sounds world-wearily serious, but it’s actually comforting, because it suggests that there is actually a clear choice to be made, just one that leaves us feeling morally ambiguous and unsatisfied. But I don’t think the choices are nearly that clear.
That doesn’t mean we should opt for insane-but-clear over less-insane-but-muddled.
It just means that we shouldn’t hold out much hope for a robust debate that includes the deeply pessimistic perspective on our current conflict that, I have come to suspect, Linker and I share.
My latest column for The Week is up. It’s about the Marquette brouhaha that Rod Dreher, among others, has been blogging about. In it, I bravely blame the mob of on-line harassers for everything that’s wrong in the world.
Absent the mob, the initial recording by the student would likely have been of interest only to the philosophy department’s faculty. “My teacher wouldn’t let me make a valid argument” is hardly a page-one news story. Absent the mob, the professor’s blog post would similarly have raised few hackles; it would have been no worse a breach of etiquette than saying the same things out loud in the faculty lounge.
That mob is what transformed this situation from a routine and largely uninteresting ivory-tower spat into something darker: a precedent against academic freedom. It has become all too difficult to draw clear contours around the new implicit restrictions on academic speech, which would appear to put professors in the distinctly odd position of being less free to criticize one another in print than civilians not crowned with the blessing of tenure.
And, ironically, the original complaint of the student — that he wasn’t allowed to discuss a particular topic because the teacher feared he would offend other students — is likely in part a consequence of the toxic debate environment the online mob has helped create. It is probably not an accident that demands for “safe spaces” and ever-expanding definitions of harassment are features of the same landscape as 4chan and Reddit.
Check it out there.
Assuming Prime Minister Netanyahu actually winds up speaking to Congress at all, he’s made it clear that he will be speaking “as a representative of the entire Jewish people.” This has already prompted a campaign by Jewish opponents (including liberal Zionists) to declare in response: no, Mr. Prime Minister, you don’t. But does Bibi have a point?
From my perspective, yes, and no.
Yes, in the sense that Israel, by its own lights and for its entire history, represents the satisfaction of the legitimate national aspirations of the Jewish people. Even if you, as a Jew, deny the existence of such national aspirations; even if you, as a person of Jewish descent, deny affiliation with anything called “the Jewish people,” inasmuch as such a people exists (whether or not you affiliate with it) and inasmuch as that people has any legitimate national aspirations (whether or not you ascribe to them), Israel has declared itself to be their satisfaction, and a wide array of national governments around the world have concurred.
So what Netanyahu is saying has some logic to it. He’s the head of the Israeli government. The State of Israel is the nation-state of the Jewish people. Hence, he is a representative of the entire Jewish people. Saying “you don’t represent me” sounds an awful lot like the “not in my name” crowd from the Bush years, but you don’t get to disown a government that you don’t like and take it back when you like it better. It’s either yours or it isn’t.
No, however, in at least three senses. First, “representative” is a very specific word. Israel may undertake, on its own initiative and for its own reasons, the defense of the interests of Jews outside of Israel. Indeed, by its own ideological lights, it is obliged to do so. But that doesn’t mean Israel is in any meaningful sense representing those interests. Those interests are presumably already represented by the governments where those individuals reside – indeed, they must be, unless we are to understand Jews in the diaspora as merely temporarily resident aliens.
To make an analogy, Vladimir Putin may see himself as the secular champion of Orthodox Christians everywhere. The Russian electorate may decide to endorse this ambition, and endorse a foreign policy of intervening to defend Serbs and Bulgarians and Greeks and so forth against their enemies. Serbs, Bulgarians and Greeks might even be pleased to have Vladimir Putin in their corner. But Vladimir Putin would not, thereby, become the representative of Serbs and Bulgarians and Greeks. And it would be very weird for, say, Angela Merkel to consult with Vladimir Putin as if he were – rather than talking to the leaders of Bulgaria, Greece and Serbia directly.
Second, Benjamin Netanyahu is not the head of the Israeli state. He’s the head of the Israeli government. If someone from Israel were to posture as the “representative” of the Jewish people around the world, logically that would be the head of state, which in Israel is the President.
Israel’s Presidency is a fairly weak office, but that’s not actually the problem with this formulation. The problem is that the Presidency, elected by the Knesset and generally parceled out to a well-regarded over-the-hill politician, is not an office with a lot of symbolic heft. If you really want someone to be the living symbol of an organic nation, someone members of that nation can look to and love even if they are not citizens, then what you are looking for is a monarch.
Third, Bibi is not coming to Congress to say: I, as the representative and defender of Jews worldwide say: you must protect your Jewish citizens better, or you’ll have to deal with me. That’s an argument that, perhaps, he could make in France. On the contrary: he is coming to Congress to say: I, Prime Minister of Israel, say that the negotiations with Iran must be scuttled, lest their nuclear program develop into an existential threat to Israel. And because I am the representative of Jews worldwide, you can be assured that America’s Jewish citizens back me up.
That’s not “representation.” That’s a demand for fealty. To which the response, “no, Mr. Prime Minister, you don’t represent me” is singularly appropriate in a way that it would absolutely not be coming from Bibi’s fellow citizens.
I understand where Damon Linker is coming from in his latest column on President Obama’s predilection for playing professor-in-chief, but I think it behooves him to consider the possibility that the President is not confused about his role, but is consciously trying to do something different with it – possibly something foolish, but possibly not. It’s hard to know without examining what that “something” is.
But let me start with a cranky quibble. Linker says:
What Obama’s comments demonstrate is that he lacks a sufficient appreciation of the crucial difference between politics and morality.
Broadly speaking, morality is universalistic in scope and implication, whereas politics is about how a particular group of people governs itself. Morality is cosmopolitan; politics is tribal. Morality applies to all people equally. Politics operates according to a narrower logic — a logic of laws, customs, habits, and mores that bind together one community at a specific time and place. Morality dissolves boundaries. Politics is about how this group of people lives here, as distinct from those groups over there.
That’s certainly one way of understanding morality – but far from the only one. Etymologically, “morality” comes from a Latin root that relates to manners – and manners are indisputably a historically- and culturally-rooted matter, and not at all universalistic. Aristotle, and modern-day Aristotelians, would surely agree that it’s specious to talk about ethics and morality as something independent of a community’s self-understanding. Historically, in the United States, “morals” legislation has been overwhelmingly particularistic in orientation, either referring explicitly to (Protestant) Christian conceptions of morality or more generally to “community standards” that turn out to be rooted in same.
What Linker appears to have in mind is Kantian conceptions of morality. I know he’s read his Hegel, so I know he knows how this conception can be attacked – but more to the point, he knows that even within the liberal tradition there are other ways of coming at the problem.
The same criticism can be leveled at Linker’s account of politics: this is one way of understanding the realm of politics, but hardly the only one. For Aristotle, politics was distinguished from ethics inasmuch as the former treated questions of collective organization, the latter questions of individual good (said good being only truly discoverable within the context of a collective). The two realms, though, were inextricably related; politics wasn’t just a matter of tribalism, but of discovering truths about the best way to organize groups of human beings in harmony with their natures. And the Greeks eagerly exported this conception across the empire Alexander conquered. In modern terms, virtually all politics have appealed to and advanced some conception of the good. The American Revolution was a political movement, but it was dedicated to a bunch of propositions that went beyond “we don’t like paying taxes so we’re going to stop now.”
I’m aware that Linker is, himself, dedicated to certain propositions about the distinction between these spheres. I merely ask that he recognize that his is a distinctive program with both political and moral implications, as opposed to something everyone agrees on and for which the President of the United States merely “lacks appreciation.”
So: Linker thinks politics should be tribal, while morality should be Kantian. And he’s upset that the President, in his remarks . . . did what exactly?
If the president truly believes that ISIS poses a dire threat to the United States — one requiring a military response that puts the lives of American soldiers at risk, costs billions of dollars, and leads to the death of hundreds or thousands of people on the other side of the conflict — then it makes no sense at all for him simultaneously to encourage Americans to adopt a stance of moral ambiguity toward that threat.
Does Obama want us to kill the bloodthirsty psychopaths of ISIS? Or does he want us to reflect dispassionately on the myriad ways that they’re really not that different from the grandfather of my friend from Mississippi?
I’ll say it again: as an intellectual exercise, Obama’s remarks weren’t wrong. Christianity has been invoked to justify a wide range of moral atrocities down through the millennia, and the Crusades, Inquisition, and Jim Crow are all excellent examples. I would welcome and praise an essay by Ta-Nehisi Coates making that exact point.
But Ta-Nehisi Coates isn’t the president of the United States, and Barack Obama isn’t a writer for The Atlantic.
A wise president understands that his role is categorically different from that of a journalist, a scholar, a moralist, or a theologian. It’s not a president’s job to gaze down dispassionately on the nation, rendering moral judgments from the Beyond. His job is to defend our side. Yes, with intelligence and humility. But the time for intelligence and humility is in crafting our policies, not in talking about them after the fact.
As I say, I get completely where Linker is coming from. He wants the President to make a practical, not a moral, case for engagement in a war against ISIS, and to leave it at that. Here’s a threat, here’s how we’re going to address the threat, and our boys have the threat well in hand. All in a day’s work.
But let’s consider a variety of possible reasons why the President might have taken the rhetorical tack he took beyond what I acknowledge is a personal preference for the lectern.
First of all, he may genuinely be concerned not only about ISIS but about the possibility of inflaming American moralistic nationalism by engaging with ISIS. The President may feel that, properly aroused, our country might very quickly get behind a far more robust effort to “kill the bloodthirsty psychopaths” and might not, in fact, be so particular about who else gets killed in the process. The reaction to his prayer breakfast speech, and the disposition of the opposition party on these matters, suggest that such fears are not entirely specious. And so, he wants to let us know, this is not a great crusade against evil. It’s more like a police action against a particularly monstrous group of criminals. He may want us to understand ISIS as more the heirs to the Manson family than to Saladin.
Second, he may be concerned about diplomacy. The best – likely the only – effective response to ISIS must be one rooted in the Sunni world. Perhaps America can help, but we can’t lead except from behind. And so, he repeatedly returns to formulations of the conflict calculated not to offend the sensibilities of allies in the region who we need to occupy the front lines. Part of that ritual formulation is to say: this is not a conflict between tribes; it’s a conflict between good and evil within another tribe; and we’re taking the side of good not because we are the good tribe but because taking the side of good is good.
Third, he may be thinking not as President of the United States but as Leader of the Free World. Linker may decry the fact – plenty of writers here at TAC decry it daily – but the United States occupies a quasi-imperial position in the world system. We’re the global hegemon, the hyper power, the indispensable nation. Whether that is a good thing or a bad thing, a crown we should covet to keep or a poisoned chalice we should try to put down, it’s still a fact. And it’s obvious that the President views his role as managing that position as effectively as possible. From the perspective of that position, the Muslim world is not a foreign tribe but a difficult and restive province far from the imperial center.
Or, you know, maybe he was thinking that this was a prayer breakfast, a singularly appropriate place, one would think, to speak from a position “beyond” tribal politics. Of course, if politics and morality are to be treated as strictly separate realms, then a prayer breakfast is a singularly inappropriate place for a President to speak at all. Maybe Linker’s problem isn’t the President’s failure to conform to the “bully pulpit” expectations Americans have, but those expectations in the first place. But those expectations have a long lineage.
President Obama’s warnings about the danger of self-righteousness owe an obvious debt to Niebuhr, but they also trace back to President Lincoln’s warnings about Northern self-righteousness in the cause of anti-slavery. Lincoln was acutely aware that the South’s cause was self-interested, but that awareness led him not to condemnation but to compassion, because he understood that it implied that the anti-slavery North, if it had the climate and history of the South, would likely have adopted the same stance. Right was still right, and wrong was still wrong, but judgment belonged to someone more exalted than the President. Nobody should pat themselves on the back for choosing right; very likely, the choice was less costly for them than for somebody who chose wrong. Our proper stance is charity for all, malice toward none.
Perhaps that puts a finger on the real problem. Lincoln used complex moral language to address a nation wracked by civil war, but ISIS is not us, and perhaps wars are fought more effectively when the people are encouraged to see the enemy as uniquely evil, ourselves as uniquely good. But if so, that’s an argument against American hegemonism, against limited war – and against engaging with ISIS. It’s an argument that a properly “tribal” politics is inconsistent with our quasi-imperial position, and that if we want to avoid carrying the flag of the crusades we had better carry no flag at all so far from our territorial waters. It strikes me as strange to choose to marry a policy argument of that sort to a critique of the President for being insufficiently solicitous of the sentiments of Johnny Jingo.
Or maybe Lincoln was just confused about the distinction between morality and politics.
Readers of this space are familiar with my attachment to The Book of Job. My personal favorite “double feature feature” film pairing was anchored by a discussion of how each film – “The Tree of Life” and “A Serious Man” – related to that masterwork of religious philosophy.
Well, if I wanted to, I could now revise that piece to a triple feature feature – because one of the more powerful films of the past year is the Oscar-nominated Russian film, “Leviathan,” from director Andrey Zvyagintsev – and guess what? At its heart, this movie is also a meditation on Jobian themes.
The story of the film is simple. A fairly ordinary Russian man, Kolya (Aleksey Serebryakov), denizen of a town on the Barents Sea coast, is facing the loss of his property. Using whatever the Russian equivalent of eminent domain is, the corrupt local mayor (Roman Madyanov) plans to summarily kick Kolya off the land he and his family have lived on for decades. Kolya is convinced that the mayor plans to build himself a palace on the land, and is determined to do whatever is necessary to keep what is his. More specifically, he invites his old army buddy, Dmitriy (Vladimir Vdovichenkov), a Moscow lawyer, to come up and not-so-subtly threaten the mayor with exposure of his many misdeeds if he doesn’t back off – or at least offer a fair market rate of compensation to Kolya for the loss of his valuable beachfront property.
At first, it looks like the plan is going relatively well. The mayor is intimidated by the august names Dmitriy casually drops, and even more intimidated by the dossier he has compiled. Though he rages at his flunkies, his rage feels impotent – he’s clearly seriously considering caving, at least on the point of compensation.
But the local bishop (sounding very like a proponent of the “Orthodox Jihad” that Rod Dreher talked about on his blog) tells him, in so many words, to gird up his loins like a man. God gave you any power or authority you may have. If you are using it for God’s ends, you should not flinch, doubt or hesitate – because it was for these ends that you were entrusted power in the first place. At the same time, Kolya’s camp unravels with startling rapidity. His young wife, Lilya (Elena Lyadova), sleeps with Dmitriy, and is caught in the act by Kolya’s son, her stepson. Kolya beats him up, and before Dmitriy has recovered from these injuries he finds himself threatened by the mayor’s goons. Dmitriy flees back to Moscow, leaving Lilya to her guilt and Kolya to face the mayor’s wrathful vengeance alone. And, if you can believe it, this is only the beginning of Kolya’s troubles.
Why does Lilya cheat on her husband? Her actions are never really explained, but my sense is that we are supposed to see Dmitriy through her eyes as Kolya’s natural superior. He’s younger, better-looking, smarter. He’s also the true savior of the family if the family is to find one – Kolya cannot save them himself. He is a man who, he says, believes in facts, in objective reality – he is not deluded that God is going to engineer an outcome that sentiment might favor. If we are to see the world in that way – which is how Kolya, defying the “righteous” authority of the mayor, implicitly does – is Dmitriy not a more appropriate man to cleave to than Kolya? That’s my sense of what her actions signify. And in the end they leave her utterly lost.
In the depths, facing the loss of everything he ever cared about, and facing yet further loss to come, Kolya turns to a priest, who tells him the story of Job as a parable of obvious relevance to Kolya’s life. “To whom do you pray?” he asks him – this, to the priest is the decisive question. He offers Kolya no answers, but only a choice: whose authority do you accept, as total and absolute? Kolya doesn’t see the point of praying at all if there is no promise of reward – the reward that Job received, at the end of the biblical book. And so he goes to meet his end with no consolation.
The foregoing may make the film sound a bit pat. It isn’t. This is a film rich in life, from the stunning cinematography (by Mikhail Krichman), to the powerful ensemble acting, to the painful cross-currents of these characters’ lives (particularly the fault line that divides stepmother from stepson), to the humor provided by the ensemble of peripheral characters, particularly a corrupt police officer who leans on Kolya for free repairs of his truck, and Lilya’s mouthy best friend from the local fish packing plant. One can appreciate the film fully without paying any attention to the way in which it uses the philosophical and theological themes that I’m focusing on.
But I’m going to focus on them anyway, because they interest and move me – and because I love the Book of Job too much not to.
“Leviathan” presents a fairly bleak reading of the Book of Job, one that emphasizes the absolute and unfathomable scope of God’s power and authority. Faced with such awesome majesty, the only proper attitude is utter submission, with which the reservation of any personal pride or status is incompatible. It is, frankly, a reading that doesn’t sit well with me. But then, I am disinclined to identify divine authority with any temporal, human authority, whether the state, religious authorities, or my own conscience (and I, like any good scholar but also like the devil, can cite scripture to my purpose if I’m so inclined). That, indeed, is precisely part of the point I take from God’s voice from the whirlwind: God’s authority is different from, incommensurate with, temporal, human authority. Human authorities you may critique for being unjust, and demand satisfaction of them. Human authority proceeds from and may be bound by and shaped by positive law, because it aims at the satisfaction of human ends, like fairness and justice. But to demand these things of God is to make a category error. You cannot critique a whirlwind.
I don’t see the whirlwind demanding submission – I see it urging Job to raise his eyes, not lower them. When the Book of Job talks of Behemoth and Leviathan, I imagine quasars and black holes, the monsters of physics; I imagine a universe of laws, but laws the depths of which will never be sounded to the bottom. I am, I suppose, more like Dmitriy than not.
But in the context of autocracy, which has deep roots in Russian soil, the priest’s interpretation has perhaps more resonance. What does the voice from the whirlwind sound like to a mind conditioned to understand law as proceeding from authority rather than the other way around? From such a mind’s perspective, the only way to know the law is to know whether authority is righteous, meaning whether it aims at ends that God approves. Which is precisely how the bishop tutors the mayor. And from such a mind’s perspective, the thuggish, abusive, cruel mayor is in fact more humble than poor, suffering, Job-like Kolya.
That, to my mind, is the point of the ending, which reveals that the mayor was indeed, from a certain perspective, aiming to serve God’s ends. Many Western viewers are reading the film as a satiric story of corruption in modern Russia. But perhaps this is not the only way to read it. Indeed, perhaps it is not the way that Russia’s Ministry of Culture originally read it – which would explain why they initially supported the film, facilitated its financing and production, and promoted it internationally, only to turn on it when they saw it described in the Western press as a critique of Putinism. Because, with just a little turn of the head, the film can be read not as an indictment, but instead as a tragedy, the very tragedy that Hobbes identified when he first contemplated the problem of authority: that, once you establish the necessity of authority as your bedrock political principle, you immediately establish the necessity of absolutism, and the impossibility of any formal reservation for the individual against that authority.
And as long as we’re talking about 2016, I’ve got a peeve to air. It’s become a commonplace that the GOP has huge “bench strength” coming into the 2016 presidential contest, while the Democrats have a much thinner bench. But I wonder how we’re measuring this.
Presidential candidates tend to emerge from other high political office – the Vice Presidency, governors’ mansions, the Senate – or, much more occasionally, from other exalted perches of our national life. The GOP currently controls significantly more governorships than the Democrats do, so that immediately gives them a larger bench. But apart from that, I’m hard-pressed to identify what makes the GOP bench stronger.
Here’s a list – not exhaustive, but extensive – of names frequently bandied about for the GOP line in 2016:
- Jeb Bush – former Governor of Florida, brother and son of the 43rd and 41st Presidents, respectively.
- Ben Carson – retired neurosurgeon and media personality.
- Chris Christie – Governor of New Jersey.
- Ted Cruz – Senator from Texas.
- Carly Fiorina – former CEO of Hewlett-Packard.
- Mike Huckabee – former Governor of Arkansas and media personality.
- Bobby Jindal – Governor of Louisiana.
- John Kasich – Governor of Ohio.
- Sarah Palin – former Governor of Alaska, former Vice Presidential nominee.
- Rand Paul – Senator from Kentucky.
- Mike Pence – Governor of Indiana.
- Rick Perry – former Governor of Texas.
- Mitt Romney – former Governor of Massachusetts, former Presidential nominee.
- Marco Rubio – Senator from Florida.
- Rick Santorum – former Senator from Pennsylvania.
- Donald Trump – businessman and media personality.
- Scott Walker – Governor of Wisconsin.
It’s a broad field – even George Pataki – former Governor of New York – and Lindsey Graham – Senator from South Carolina – are making noises about running. Clearly, there are a lot of Republicans who see themselves as potential Presidents, and who think this is a year to at least consider going for it.
Now, here’s a list of Democrats, with comparable credentials to the GOP list above:
- Joe Biden – Vice President.
- Cory Booker – Senator from New Jersey.
- Jerry Brown – Governor of California.
- Sherrod Brown – Senator from Ohio.
- Andrew Cuomo – Governor of New York.
- Russ Feingold – former Senator from Wisconsin.
- John Hickenlooper – Governor of Colorado.
- Amy Klobuchar – Senator from Minnesota.
- Gary Locke – former Governor of Washington, former ambassador to China.
- Martin O’Malley – former Governor of Maryland.
- Deval Patrick – former Governor of Massachusetts.
- Ed Rendell – former Governor of Pennsylvania.
- Bernie Sanders – Senator from Vermont.
- Brian Schweitzer – former Governor of Montana.
- Elizabeth Warren – Senator from Massachusetts.
- James Webb – former Senator from Virginia.
This list has some names that look tired to me and some who have a record of accomplishment – but so does the GOP list. It has some currently serving Governors and Senators, and some who served in years past – but so does the GOP list. It has some who won reelection by large margins and some who lost their last bids for office – but so does the GOP list. Like the GOP list, it has current or former Governors and Senators from some of the largest states – New York, California, Ohio, Pennsylvania – as well as from smaller states.
Jim Webb scrambles categories in ways as fascinating as Rand Paul’s. Russ Feingold, on the merits, seems to me as plausible and interesting a candidate as Mike Huckabee – both are very unlikely to win the nomination, after all, but I can more easily see Russ Feingold making interesting waves on his way to losing than I could Mike Huckabee (at least this time around). I don’t know why current Governor Andrew Cuomo (leaving aside the shadow cast by Sheldon Silver’s recent arrest) is any less plausible or formidable a candidate than former Governor Jeb Bush. And is Jerry Brown really more extreme or absurd a candidate than Rick Perry? Why, exactly?
Of course, the Democratic list is essentially irrelevant, because Hillary Clinton is the overwhelmingly dominant figure in the Democratic field. Anybody who chooses to challenge her is trying to slay a giant – or simply to make a point. That either rules out or diminishes a lot of candidates who might otherwise seem like plausible contenders, from Joe Biden to Brian Schweitzer. There’s nobody on the Democratic side who has remotely the kind of institutional and popular support that Hillary Clinton does. But if she had been felled by a piece of falling masonry in 2013, it’s not clear to me that the Democratic bench would look so terribly weak.
What looks relatively weak is the Democratic agenda – which is pretty normal after going on eight years in power, with the usual mix of accomplishment, compromise and failure.
But it’s not like the GOP is distinguishing itself on that score.
I’ve been thinking more about Scott Walker and his potential staying power. And the more I think about it, the more I think he’s got a real shot at the whole thing. I would certainly not call him the front-runner. He’s got a lot to learn, and a lot to prove before it’s worth talking about him in those terms. But he’s got a really valuable card to play that I can’t quite figure out how his major opponents are going to answer effectively. And that card could make him quite dangerous.
Unlike Jeb Bush, Chris Christie, Marco Rubio, Ted Cruz, Bobby Jindal, Rick Perry, Rand Paul or the various also-rans, Scott Walker picked a high-profile battle over a core issue that both the establishment and more insurgent types care about – the status and position of public sector unions. His opponents rose to the challenge, and threw everything they had into the battle to defeat him – to the point of trying to get him recalled before the next scheduled election. The showdown went down in a purple-to-blue state. And Walker won, unequivocally.
Jindal and Perry can point to very conservative things they did as governors – but Louisiana and Texas are very conservative states. Could they do the same in Washington? Ted Cruz can tout his purism – but he’s accomplished literally less than nothing, with his antics having demonstrably backfired in multiple instances. Chris Christie and Jeb Bush can tout their own records – but their opponents can turn around and point to things in those same records that offend the faithful, including not merely compromises but issues that they ran on and advocated forcefully. Rand Paul . . . well, Rand Paul is Rand Paul.
Scott Walker can say to anyone touting their conservative bonafides: “you talk the talk, but I walked the walk.” But he can also credibly say, “you’ve got to know when to hold ‘em and know when to fold ‘em,” without sounding like a moderate squish – because in one very high profile situation, he held ‘em, and he won.
And sometimes, you win the game when you fold a hand. In his most recent confrontation – an attempt to change the mission of the Wisconsin public university system – Walker folded – partially. The changes to the mission statement (which would have reduced the mission of the university system to “meet[ing] the state’s workforce needs”) have been scrapped. So Latin isn’t dead yet. But the $300 million in funding cuts remain, so Latin’s probably going to have a tough time surviving, ultimately. Backing down on a symbolic issue may in fact take the pressure off the more substantive changes. If so, Walker may have another victory to tout.
I don’t know whether he’s skillful enough to do so, but if he is, he can play this card over and over again against every one of his primary opponents. And I can’t think of a really solid answer any of them can make. (Well, other than his policies are bad ones, but I somehow think that answer won’t go over well in a GOP primary.)
That doesn’t mean Walker wins – you need more than one good card to win. But this card really is a killer.
Apropos of some of the discussion happening on this site about the nature of poverty, and at the risk of sounding horrifically upper-middle-SWPL-whatever, I wanted to relate some thoughts about a recent visit I made to the island of Dominica.
Dominica is a very small country. It has only 70,000 people, and though it is physically tiny it is not especially dense. (96 people per square kilometer, versus 349 for neighboring Martinique, 300 for Grenada, 258 for Trinidad and Tobago.) It’s not a deeply impoverished country – on a purchasing power parity basis, per capita GDP is up there with Romania and Venezuela, and three times the level of the Philippines. But it is far from wealthy. And in the rural southeast, where we were, people live very simply; in rural areas, every other household is impoverished, according to the UNDP.
It’s hard to get ahead in Dominica. The country is very mountainous – it’s quite young, geologically, and still volcanically active – and its poor system of roads mostly goes straight up and down with the gradient of the mountains. It takes a long time to get anywhere, and there are few expanses of flat land suitable for large-scale agriculture. And because it’s a small island, imported goods are relatively expensive – and a lot of different kinds of goods need to be imported. Combined with the limited amount of local capital, what you have is a formula for underdevelopment. It’s just very hard to see what Dominica could produce that would integrate readily into the global supply chain.
Wisely, I suspect, the government is not trying to encourage industrialization, and is trying to move away from agricultural products (mostly bananas) that have a hard time competing with industrial-scale operations elsewhere anyway. It’s focused on eco-tourism and on specialized services a small island could plausibly provide: a cheap medical education (import medical students, export doctors), and the kinds of financial services that cluster around tax havens. All of which seems to be working reasonably well, but none of which is going to provide a rapid rise in living standards in the countryside.
But the thing is: this impoverished countryside looks quite healthy. People walk everywhere; they have no choice but to stay fit. Life expectancy at birth is 76.5 years, comparable to Poland and Uruguay. Food grows everywhere, free for the taking, and fresh water is abundant; the island is quite edenic in that regard. It’s never cold. As one of our guides told us repeatedly, even if you don’t have two nickels to rub together, you’ll never die of hunger or thirst on the island.
Moreover, precisely because the island isn’t suited for massive luxury development, there was less of a sense than in some parts of the Caribbean of a steep power gradient between tourists and locals. I wouldn’t go so far as to say we felt like guests rather than tourists, but the master-servant dynamic that kicks in easily in such situations was far more attenuated than usual.
We tend to think of wealth as luxury – having lots of stuff – and poverty as the lack. But that may not be the best way of thinking about it, at least once we’re talking about people above the subsistence level. Absolute poverty and its ills – starvation, malnutrition, etc. – are still rampant in parts of the world, and even in pockets of the developed world, and deserve serious attention. But once we get above that level, measuring poverty just in terms of income or in terms of goods becomes more problematic. Agatha Christie once wrote that she never thought she would be so rich as to be able to afford a car, nor so poor that she could not afford servants. What constitutes a lot of valuable “stuff” is always relative, both to the standards of the time and to what the Joneses have.
Perhaps a better way of thinking about poverty above subsistence is in terms of the experience of freedom. Someone who is literally imprisoned is living a deeply impoverished life even if they have three squares a day. Someone who cannot afford not to work 16 hours a day is also impoverished – even if they treat as necessities some consumer goods (a cell phone, say) that previous generations would have considered unimaginable luxuries. Ditto for someone whose prospects and opportunities are so narrowly circumscribed that they feel they have no choice about their future. In the ancient world, someone who could not afford to feed himself might sell himself into slavery; in the 17th and 18th centuries, he might bind himself to indentured servitude. How different are the “choices” many of the world’s poor make today, particularly the ones lucky enough to be integrated into the global supply chain?
How does Dominica fare from that perspective? The barriers to self-improvement are, as noted, quite steep; that’s one reason the island has had a perpetually high emigration rate. If one of our guides touted the fact that it was impossible to starve, another guide – who had lived in America – complained of the island’s provincialism, and missed both the consumer paradise of America and the opportunity America afforded him to make a living doing something he wanted to do (tinkering with cars, as it happened), instead of having to fit into one of the few roles available on the island. Unemployment is very high among Dominica’s poor; there are clearly many who don’t have the choice to do much of anything.
But it is possible merely to live quite freely in Dominica, free of many of the fears that stalk the poor of wealthier countries. That’s not a quality to ignore when we think about poverty.
I’m not discounting the seriousness and importance of absolute levels of poverty. Nor am I dignifying the notion of someone earning $150,000 per year being “house-poor” because they bought a brownstone in Clinton Hill for $2,000,000. The latter is a real thing, but it is not poverty; it’s a conscious (possibly wise, possibly foolish) choice to spend a lot of money on one consumer good-slash-investment, leaving much less for other goods and investments.
All I’m saying is that the frame that defines poverty as having relatively less stuff than most people is, itself, impoverished. And that a richer definition would focus on how we live, and how much choice we have in how we live, and not just on what we have.
Reihan Salam has written an interestingly revealing essay for Slate about how he came to be conscious of his class identity – and, implicitly, how that shaped his emerging political consciousness. He begins:
I first encountered the upper middle class when I attended a big magnet high school in Manhattan that attracted a decent number of brainy, better-off kids whose parents preferred not to pay private-school tuition. Growing up in an immigrant household, I’d felt largely immune to class distinctions. Before high school, some of the kids I knew were somewhat worse off, and others were somewhat better off than most, but we generally all fell into the same lower-middle- or middle-middle-class milieu. So high school was a revelation. Status distinctions that had been entirely obscure to me came into focus. Everything about you—the clothes you wore, the music you listened to, the way you pronounced things—turned out to be a clear marker of where you were from and whether you were worth knowing.
By the time I made it to a selective college, I found myself entirely surrounded by this upper-middle-class tribe. My fellow students and my professors were overwhelmingly drawn from comfortably affluent families hailing from an almost laughably small number of comfortably affluent neighborhoods, mostly in and around big coastal cities. Though virtually all of these polite, well-groomed people were politically liberal, I sensed that their gut political instincts were all about protecting what they had and scratching out the eyeballs of anyone who dared to suggest taking it away from them. I can’t say I liked these people as a group. Yet without really reflecting on it, I felt that it was inevitable that I would live among them, and that’s pretty much exactly what’s happened.
So allow me to unburden myself. I’ve had a lot of time to observe and think about the upper middle class, and though many of the upper-middle-class individuals I’ve come to know are good, decent people, I’ve come to the conclusion that upper-middle-class Americans threaten to destroy everything that is best in our country. And I want them to stop.
Allow me to say, as a member of this same class, that I completely know where Salam is coming from, as well as where he’s arrived. There’s a reason why SWPL is a thing, and there’s a reason why getting that it’s a thing is yet another SWPL class marker. There is something especially annoying about the smugness of those who mistake status markers for virtue, and who act as if these badges of virtue put them above any interrogation of their class interests. And it’s especially awful when you feel in danger of becoming one of these people.
And I understand where he’s coming from in terms of personal history as well. My high school experience was more like his experience prior to high school: we Bronx Scientists all felt like broadly middle-middle urban kids. (The difference has something to do with the Bronx versus Manhattan, and more to do with the near-decade difference in our ages; New York changed a lot in ten years.) But I had something more akin to Reihan’s shock when I got to college, where I regularly felt like the guy who didn’t own a proper jacket among the swells in their tuxedos.
When I interrogate myself honestly, though, that had very little to do with the actual class backgrounds of my college classmates, and mostly to do with my own psychosocial development. Yes, there were plenty of prep school grads and the like. But my roommates freshman year were from: Columbia, Missouri; Youngstown, Ohio; and Temple, Texas. Not a son of Groton among them; I, the son of a public high school teacher in the Bronx, was the sophisticated urbanite of the bunch. And frankly, they fit in, socially, better than I did. It wasn’t that I didn’t come from the right class; it was that I didn’t have any class – that, frankly, I was an argumentative slob.
It didn’t feel that way at the time, though. And that feeling – the feeling that I had been sized up by my betters and found wanting – was an important undercurrent leading me in a more right-wing direction politically a few years after graduation. Pecuniary interest was obviously important as well – I was working on Wall Street, and though my career had not yet taken off that was all the more reason to want to keep the path of that potential career as smooth as possible. But that kind of self-interest only makes it more important to find “objective” justifications for one’s opinions.
I’m not suggesting that Salam is animated by those kinds of resentments. Frankly, he has always presented to me as remarkably free of resentments. And he is nothing if not socially adept, in his unique, Salam-y way. I can’t even picture him being a slob. I’m just saying that a certain cliche – the bright, socially-inept striver identifying with a right-wing political program so as to stick it to his social “betters” – is a cliche for a reason: because it’s pretty common. But it’s highly destructive of sensible analysis.
And that’s because any sensible analysis has to start with objective class interests, not the status markers of class.
Indeed, the heart of Salam’s complaint is precisely that the upper middle class have too much influence:
We often hear about the political muscle of the ultrarich. Billionaires like the libertarians Charles and David Koch and Tom Steyer, the California environmentalist who’s been waging a one-man jihad against the Keystone XL pipeline, have become bogeymen for the left and right respectively. The influence of these machers is considerable, no doubt. Yet the upper middle class collectively wields far more influence. These are households with enough money to make modest political contributions, enough time to email their elected officials and to sign petitions, and enough influence to sway their neighbors. Upper-middle-class Americans vote at substantially higher rates than those less well-off, and though their turnout levels aren’t quite as high as those even richer than they are, there are far more upper-middle-class people than there are rich people. One can easily turn the Kochs or the Steyers of the world into a big fat political target. It’s harder to do the same to the lawyers, doctors, and management consultants who populate the tonier precincts of our cities and suburbs.
Salam proceeds to lay out a detailed brief against the mass upper class in policy terms: from their support for unproductive tax breaks like the mortgage interest deduction, to restrictive zoning rules that keep housing prices high in urban areas, to cartel-preserving licensure requirements that keep dental assistants from hanging out their own shingles, to a backwards immigration system that lets in nannies but keeps out doctors.
It’s all very Institute for Justice – but it’s also the kind of stuff that Matt Yglesias, liberal scion of the mass upper class, has been writing about for years. In other words, you can contextualize the kind of policy criticism Salam is making within a general libertarian critique (government will always be co-opted by those who already have power; here are examples of how upper-middle-class professionals use government to shut the gate on the middle class; we need less government so nobody can rig the game that way). Or you can contextualize it within a general left-wing critique (here are examples of how upper-middle-class liberals act to protect their class interests to the detriment of the poor and middle class; we can’t let a left-wing politics be compromised by the need to keep a large and wealthy class on-side just because it makes the right sounds; we need a class-based politics that doesn’t get hijacked by cultural politics). These are both frameworks for talking about how to reduce the political influence of a favored class, and create an opening for new entrants.
But Salam doesn’t make either argument. Instead, he says we need to guilt the upper middle class into being a more civically responsible gentry:
What can we do to break the stranglehold of the upper middle class? I have no idea. Having spent so much time around upper-middle-class Americans, and having entered their ranks in my own ambivalent way, I’ve come to understand their power. The upper middle class controls the media we consume. They run our big bureaucracies, our universities, and our hospitals. Their voices drown out those of other people at almost every turn. I fear that the only way we can check the tendency of upper-middle-class people to look out for their own interests at the expense of others is to make them feel at least a little guilty about it. It’s not much, but it’s a start.
It reminds me of the way that Charles Murray ended Coming Apart with a similarly exhortatory plea – successful people just shouldn’t shut the gate; instead, they should spend more time in Fishtown because . . . well, because they should.
Salam’s complaint up front is that upper middle class liberals act like they are distinctly virtuous – they obey the law, pay taxes, raise their kids right, and have all the right political opinions – and that this virtue exempts them, in their own minds, from criticism, allowing them to be as ruthless as they like in protecting their individual interests. But by calling for a more virtuous gentry, Salam isn’t puncturing their pretensions – he’s implicitly endorsing them. Because if they aren’t actually any better than anybody else, then why expect them to be?
The thing is, it’s completely normal for people to pursue their own economic interests. That’s precisely what you’d expect people to do under most circumstances. It’s acting against interest that requires explanation. So why the fury that the upper middle class, as Salam sees it, acts out of selfish motives? Why say that this behavior threatens to destroy everything great about America? Why make this about the kind of people the upper middle class are? If the problem is that they have too much power, then that’s the problem. And let’s talk about how to tackle that.
Because here’s the thing: there is no virtuous class out there. Contra William F. Buckley, a collection of random names from the Boston phone book would do a terrible job running the country. Take a look at the fate of lottery winners and reality television stars if you want to see what happens when fortune and fame descend on individuals nearly at random. The urban mass upper class has a host of ridiculous pretensions about itself – and more’s the pity for them. (Er, us.) But if you think the pretensions to virtue of other distinct classes that are more generally endorsed by the culture don’t have pernicious political and social effects, well, I’ve got a military-industrial complex to sell you.
My advice to would-be class traitors like Salam is: don’t let your predispositions get in the way of your analysis. Here’s a good example of what I mean from right here in New York. Our Mayor, Bill de Blasio, hails from Park Slope, the capital of mass upper class Brooklyn. (That’s my neighborhood as well.) It would be very easy to assume that, as such, he must be following precisely the playbook that Salam describes: posturing as a liberal but in fact acting to preserve the prerogatives of the mass upper class.
But, at least in terms of housing development, the mayor has done something rather different, and has pursued a very pro-development line. I cannot tell you the number of conversations I’ve had with brownstone owners raging about how this mayor is worse than Bloomberg, how he doesn’t care about preserving the historical character of neighborhoods or about the opinions of the local community – he just wants to build. This is not what Park Slope thought it was buying.
Now, de Blasio is trying to yoke that pro-development stance to an affordable housing plan that perhaps Salam would be skeptical of as being too regulatory in nature – but that’s not my point. My point is that de Blasio is acting against precisely the entrenched class interests that Salam thinks are so problematic – against the people who want to pull up the drawbridge. But his cultural politics line up perfectly with the kinds of liberals Salam knows dominate the mass upper class.
Which matters more? That, it seems to me, is the question.
I’ve got a new column up at The Week:
Economically, allowing Greece to leave the euro and default on its debt might be the best thing for all parties. After a period of disruption, Greece would be able to grow again. The eurozone, meanwhile, would have demonstrated that it can distinguish between risks worth taking (Ireland) and risks not worth taking (Greece), and that it is not as brittle as might have been thought. But even if it is economically sensible — indeed, arguably because it may be economically sensible — a Grexit would have deeper implications for the trajectory and meaning of the European project.
That project was originally intended to be something new, neither a traditional state nor a mere customs union, but a kind of supra-national governance that would supplant nationalism, and end the possibility of intra-European conflict. Entry into the EU would tutor Italians and Portuguese in German thrift, and would cement Western democratic norms in countries like Ukraine and Turkey. It was a mechanism for defining — and expanding — the meaning and boundaries of European civilization.
A Grexit would redefine both the meaning and those boundaries. Countries like Poland are going to be properly leery of adopting the euro once it is clear that handing over control of monetary policy does not come with any implicit fiscal guarantees. Greece’s new government is already providing Moscow with diplomatic support as Europe debates the possibility of further sanctions in response to Russian intervention in Ukraine. They will presumably only adopt a more pro-Russian line in the wake of a Grexit.
If it happens, a Grexit will make it clear that there are not only rules for becoming “European” but also rules you have to abide by to remain a European in good standing — rules over which supplicant states have little influence. Rationally, every state — even those in the heart of Europe — will necessarily recall their primary, national allegiances, knowing that these are all they can count on when the chips are down.
The point is not that a Greek departure from the euro would be catastrophic, or that Brussels (or Berlin) ought to see itself as in some kind of competition with Moscow for the allegiance of peripheral European states. Russia’s willingness to waste blood and money on such a competition probably does it more harm than benefit; that was certainly the lesson Gorbachev took from the Brezhnev years. The point is that a willingness to let Greece leave signals precisely that Brussels — and Berlin — do not see themselves as being in that kind of competition. That the European project is no longer about defining a civilization.
Is that a good thing? I’m not sure – my feelings about the European Union are complex.
On the one hand, I understand and even admire the aims of the European project. European civilization was almost destroyed by national rivalries, so I can understand why Europe’s leaders wanted to find a political arrangement that made war in the heart of Europe seem impossible. I understand why France wanted an arrangement that magnified her potential influence, and I understand why Germany wanted an arrangement that made it possible for her to have influence again. And I also see some ancillary effects of the European project that may be positive. Union has made it more possible for smaller European nationalisms – Catalonia, Scotland – to assert themselves in a less traumatizing manner than would probably be the case without that superstructure.
On the other hand, Europe is kind of obviously ridiculous, with poor democratic accountability, no clear definition of what is properly Brussels’s business and what belongs at the national level, and what in practice amounts to a consensus-based method of governance that makes it impossible to take difficult decisions. In the past, Germany was the strongest advocate of remedying these deficits and moving towards a proper European federalism, but in the wake of the financial crisis that is much less true.
I have a great deal of sympathy for Germany’s attitude toward Europe generally – that if they are going to be on the hook fiscally, there had better be fiscal accountability – but in practice this attitude has been enforced in a punitive manner rather than being the spur to institutional reform. As a consequence, the Euro has become something of a Hobson’s choice for small countries – join and you become a German colony; don’t join and you potentially forfeit a substantial competitive advantage vis-à-vis your neighbors. That’s not a structure that is going to achieve the goals that the founders of the Union intended.
Where I wind up is that America needs a strong Europe, whether it’s a federal Europe or a Europe of states. We need a strong Europe because a weak Europe will be an American dependency and will encourage us in our worst imperial pretensions, while a strong Europe will be both a more useful ally and a check on those ambitions. But the current arrangements leave Europe institutionally weak, and historically the United States has abetted that weakness by focusing almost exclusively on pushing for a larger Europe rather than a more functional one.
So I’m kind of hoping that Greece forces a reckoning, and that the reckoning doesn’t burn the house down completely.
Anyway, check out the column there.
Forgive me if I see Andrew Sullivan’s departure from blogging as more than just a routine retirement by a pioneer in a new media field. Rather, I see it as an extremely negative omen for that very field.
Andrew Sullivan was not just one of the pioneers in creating the blogging form, and in demonstrating how you create a personal brand on the web. Beyond that, he was one of the first to understand that what he was doing, most fundamentally, was not writing, or even editing, but curating – organizing the vast trackless swamp of the internet into material that his audience would be interested in.
And beyond that, he was pioneering a business model that I believed held the best hope for anybody getting paid for producing “content” in the age of on-line distribution. He asked his audience to pay, to subscribe to what amounts to “the web as I see it.”
I say that that is the best hope for anybody getting paid for producing content based on the following syllogism.
First, there are only three ways to monetize traffic. Either you give everything away for free and sell advertising. Or you get people to pay for specific content. Or you get people to pay for a subscription to a whole suite of content.
The problem with the first is that on-line advertising is massively deleterious to the on-line reading (and watching and listening) experience. And it doesn’t work very well in terms of motivating purchases. And most efforts to mitigate the one or the other are massively corrupting of the creative or journalistic enterprise (as Sullivan was well aware).
The problem with paying for specific content is that you don’t know whether the content is worth purchasing until after you’ve purchased it, which creates a substantial barrier to purchase. If you’re talking about a feature film for which you can consult Metacritic or Rotten Tomatoes or whatever to learn whether it’s likely to suit, that’s one thing. But if you’re talking about a news article, or a web short, or a poem, that’s not an option.
The problem with subscriptions is that, generally, the way they are enforced is by creating a paywall around the content. Nothing gets inside the wall unless it was worth paying for up-front. And once it’s inside the wall, the only way to access it is to be a subscriber. This creates a two-tier world where most people are producing and distributing stuff without compensation, hoping to get it “hosted” by sites that don’t pay them, and eventually to “graduate” to paid work. But the prevalence of so much free work means that there is constant, brutal pressure on compensation for content-creators.
The solution to this dilemma is one that I’ve described – with apologies to “Big Bill” Haywood – as One Big Paywall. In very broad strokes, this would be a scheme whereby content-creators band together to require micro-payments from content aggregators for traffic driven their way, in exchange for not cluttering up access to that content with extraneous advertising and the like. Such a scheme would make it possible for content-creators to put their material out there for general consumption without worrying about either hiding it behind a paywall or getting paid nothing.
Without going into a great deal of detail of my thoughts about how to bootstrap into such a scheme, I’ve long felt that it depended, ultimately, on the success of curators in turning themselves into subscription services. A free curator is always going to pursue a mass audience, and this will skew the kind of content (and advertising) that it features toward the lowest common denominator. A subscription service has the possibility of pursuing a niche audience – and niches can be quite lucrative. And much of the most interesting content is going to be aimed at some kind of niche.
Andrew Sullivan was my test case, in a way. If he was able to “make it” on a standalone basis, with a subscription model, then it was possible. If it’s possible, other people will do it – not exactly the same way, but with variations. And once it’s clear that it’s a “thing,” one could pursue my idea of One Big Paywall – because there would be moneymaking curators to negotiate with, and with whom the content-creators signing up for such a scheme would have a natural symbiosis.
But we don’t know whether he made it. It’s too soon to know. All we know is that it didn’t go bust immediately, and that there was no way to keep the venture going without Andrew Sullivan consistently and obsessively at the helm.
That’s a very negative fact for the future of that model. There just aren’t very many people like Sullivan in the world, who combine his speed as a writer, his breadth of taste, his skills as an editor, his manic energy, his head for the business side – it’s just a huge conglomeration of valuable traits. And he didn’t institutionalize them the way Steve Jobs or Walt Disney or Harold Ross did in their own various ways. Even though Andrew Sullivan did only a small fraction of the writing or the curating of the Daily Dish, without him blogging full time, apparently, there is no Dish.
I’ve been told that, in order to build a real, monetizable audience on the web, I need to post at least three times a day. Obviously, some of my colleagues here do exactly that. But it’s a completely insane demand. Virtually none of the critics or opinion-writers of yore could have met it – and those that could have would probably have destroyed themselves doing so, to say nothing of destroying their lives. The only reason anyone adheres to such a standard is precisely that there is no reliable way to monetize good work as such.
I probably sound like Leon Wieseltier here, but I could not disagree with him more. I have no interest – none – in preening lamentations for the great age of culture now past and gone. I loathe nostalgia – but I also recognize that culture is shaped by market structure, and that the market structure we have – and which is a consequence of decisions made a long time ago, some consciously but many unconsciously – is exceptionally brutal to anyone trying to make a living writing, making music, shooting movies, while also providing more ready opportunities to “break in” than ever before. I want to retain the latter while mitigating the former. That means changing the market structure, not posing as a solon while manifesting mostly ignorance.
Andrew Sullivan’s retirement is a blow personally, because, while I never met him, he has always been generous in linking to me, both here and at my prior perches. That he has been so generous in spite of the fact that my first on-line interaction with him was acrimonious in the extreme (and unnecessarily personal – on my part) is a testament to his admirable ability to look past the sort of thing that would lead many successful people (particularly in media) to hold a lifelong grudge, simply because what Sullivan cared about most was whether the work was interesting – to him and to the readership he cultivated. But that’s not the main reason his departure is a blow. The main reason is that the torch has not been passed. There is nobody else out there doing what Andrew Sullivan did, nor is there any prospect for someone to do it. There’s nobody else I can think of about whom I would say: if he or she links to me regularly, then people will read what I write.
Which makes me very sad, for any younger versions of myself out there looking to give what they have to give, creatively, and to get something for it, without being fatally consumed by the endeavor.