You can’t spend much time in right-of-center circles without hearing, often in the comfort of an open bar, that America’s poor don’t have it too bad. Yes, there are about 45 million people below the official poverty line. But that doesn’t mean that they’re suffering under the conditions we see in photos of the Dust Bowl or the old industrial slums. A Heritage Foundation report observes that “When LBJ launched the War on Poverty, about a quarter of poor Americans lacked flush toilets and running water in the home. Today, such conditions have all but vanished. According to government surveys, over 80 percent of the poor have air conditioning, three quarters have a car, nearly two thirds have cable or satellite TV, half have a computer, and 40 percent have a wide screen HDTV.”
Megan McArdle makes a version of the same argument in her comment on Joni Ernst’s State of the Union response. She reminds Internet snarks who mocked Ernst’s story about using plastic bags to protect her only pair of shoes that this was a common practice until pretty recently. According to McArdle, “we forget how much poorer we used to be, and then we forget that we have forgotten.” These days, even people without much money enjoy a material abundance of which their grandparents could only have dreamed. (Rod Dreher remembers the story of his own family here.)
That’s true, as far as it goes. Pretty much anything that’s made in a factory is cheaper and higher-quality than it used to be. I admit to moments of SWPL enthusiasm for craftsmanship (or the Brooklyn facsimile). But let’s get real: expanded access to consumer products is a good thing.
But that doesn’t mean poverty is exaggerated by ungrateful whiners. Goods and services that depend on skilled human labor cost more than they used to. Curiously, McArdle relies on figures from 1987 to make her case that American households face lighter expenses for necessities than they used to. That ignores the increase in prices for childcare, healthcare, and higher education over the last decade or so.
So people can accumulate possessions while maintaining a relatively low standard of living. Indoor plumbing won’t take care of your kids, and an Xbox won’t send them to college. The poor are also more likely to suffer from “diseases of affluence” such as obesity and diabetes. Unlike the truly affluent, however, they can’t afford to have them treated.
Material deprivation is also not always the most wrenching aspect of poverty. As Karl Polanyi argues in his study of the Industrial Revolution, the lack of meaningful work and a secure social position can be worse than low wages or high consumer prices.
It’s important to question depictions of Dickensian poverty in the media, which often focus on exceptional cases. And we should resist nostalgia for a mythical time when folks didn’t have much but their dignity. But the grinding, uncertain lives of poor Americans today are a problem that new shoes and air conditioning won’t solve.
Literary Addendum: McArdle draws several examples of the bad old days from Laura Ingalls Wilder’s Little House novels for children. She claims:
…what really strikes you is how incredibly poor these people were. The Ingalls family were in many ways bourgeoisie: educated by the standards of the day, active in community leadership, landowners. And they had nothing.
This is a serious misreading of the books. As Wilder’s autobiography makes clear, the reason that the Ingalls family seems poor is that they were poor. There was nothing bourgeois about them, except perhaps Ma’s (relatively) advanced education. Even in the idealized version presented in the books, the Ingallses fail, time and again, to realize their dream of becoming independent farmers. That’s why the story ends, rather tragically, with Pa working as a clerk in a railroad town, a fate that he’d dragged his family thousands of miles to avoid.
Jonathan Chait burned up the Internet this week with his critique of so-called political correctness. Among many responses, Amanda Taub’s stands out for its denial of Chait’s basic premise. According to Taub:
…there’s no such thing as “political correctness.” The term’s in wide use, certainly, but has no actual fixed or specific meaning. What defines it is not what it describes but how it’s used: as a way to dismiss a concern or demand as a frivolous grievance rather than a real issue.
This is a curious response. Sure, people use the term in different ways. But Chait provides a perfectly serviceable definition: “political correctness is a style of politics in which the more radical members of the left attempt to regulate political discourse by defining opposing views as bigoted and illegitimate.”
I don’t think Taub would deny that this political style exists, although one may quibble with some of Chait’s examples. What she objects to is the way Chait describes it. In her view, calling denunciations of putatively bigoted opinions “political correctness” allows their advocates to avoid taking those criticisms seriously. So, in a feat of rhetorical jujitsu, Chait becomes guilty of the same tendency he opposes: ruling views he rejects out of respectable conversation.
This dispute is an object lesson in the pernicious effect of political correctness—or whatever you want to call it—on intellectual and political debate. Arguments about ideas devolve into wrangling about words. The conduct of politics by means of semantics sometimes reaches comic heights. In his piece, Chait reports an incident in which,
UCLA students staged a sit-in to protest microaggressions such as when a professor corrected a student’s decision to spell the word indigenous with an uppercase I—one example of many “perceived grammatical choices that in actuality reflect ideologies.”
But there’s nothing important at stake in the phrase “political correctness”. So let’s drop it, at least provisionally, and focus on the phenomenon that Chait describes. Contrary to popular perception, it’s not just a product of youthful exuberance among student activists or the ease and enforced brevity of Twitter. It’s rooted in a philosophical critique of the liberal theory of discourse.
Although it has precedents in Kant, this theory received a definitive formulation in John Stuart Mill’s On Liberty. According to Mill, the truth is most likely to emerge from unrestricted debate. Although Mill did not use the metaphor, such a debate is conventionally described as a “marketplace of ideas,” in which vendors are free to offer their wares and customers are at liberty to purchase only the best goods.
There are two problems with this image. The first is that it assumes that consumers of ideas are in a position to judge which most closely approximate the truth. But that may not be the case. In order to make good purchasing decisions, customers need a certain level of background information and capacity for comparison.
In order to make the intellectual market function properly, Mill proposed that participation be restricted to “human beings in the maturity of their faculties.” In the most obvious sense, that means that we should not rely on judgments by children or the insane.
But Mill did not stop with ruling out those who had not yet reached the age of majority, or whose reason was in some way deranged. He also argued that the liberty of thought and discussion was not appropriate for “those backward states of society in which the race itself may be considered as in its nonage.” When it comes to “barbarians,” Mill reasoned, it is appropriate to use coercion, just as it is appropriate for parents to monitor their children’s reading. The implication for contemporary politics was that Britain was justified in practicing a kind of tutelary imperialism.
That conclusion might rule Mill off the syllabus at some universities today. But it actually reflects an important and potentially damaging tension in his argument. Mill defends the unrestricted exchange of ideas. Yet he also accords to those he judges fully rational the authority to determine who gets to participate in that exchange—and to enforce the education of those who don’t make the cut. For Mill, in other words, intellectual freedom presupposes a period of enlightened despotism.
The second problem emerges more directly from the quasi-commercial dimensions of Mill’s epistemological model. Mill assumed that all normally-constituted adults who had received a basic education were capable of reliably picking and choosing among intellectual offerings. That assumes they are unaffected by the sellers’ attempts to influence their choices.
But consumer preferences are influenced by advertising, reputation, the way products are presented, habit, and so on. In practice, it’s not easy to get shoppers to consider buying something new and different, even if it really is better than its competitors. Most of the time they buy the same products from familiar brands.
Some Marxists call the factors that interfere with judgment “false consciousness.” They argue that false consciousness accounts for the failure of revolutionary ideology to attract adherents among the working class in the developed world. On this view, it wasn’t outright repression or censorship that prevented the workers from adopting a Marxist perspective. It was the subtle and concealed influence of capital on their ability to exercise their capacity to make their own decisions.
These tensions in Mill’s defense of intellectual freedom were recognized in the 19th century. What we now call political correctness was first articulated in the 1960s by the brilliant German-born philosopher Herbert Marcuse. Marcuse’s achievement was to turn Mill’s argument for free discussion, at least in a modern Western society, against its explicit conclusion.
Marcuse undertakes this inversion, worthy of a black belt in dialectical reasoning, in the 1965 essay “Repressive Tolerance.” In it, Marcuse argues that the marketplace of ideas can’t function as Mill expected, because the game is rigged in favor of those who are already powerful. Some ideas enjoy undeserved appeal due to tradition or the prestige of their advocates. And “consumers” are not really free to choose, given the influence of advertising and the pressures of social and economic need. Thus the outcome of formally free debate is actually predetermined. The ideas that win will generally be those that justify the existing order; those that lose will be those that challenge the structure.
This prong of the argument is close to the standard critique of false consciousness. But Marcuse links it to Mill’s distinction between those who are and are not capable of participating in and benefitting from the unrestricted exchange of ideas.
According to Marcuse, many people who appear to be rational, self-determining men and women are actually in a condition of ideologically enforced immaturity. They are therefore incapable of exercising the kind of judgment that Mill’s argument presumes. In order to make debate meaningful, they need to be properly educated. This education is the responsibility of those who have already shown themselves to be capable of thinking for themselves—in this case, left-wing intellectuals rather than Victorian colonial administrators.
One might wonder how either Mill or Marcuse could be so sure that their kind of people knew what was best for others. The answer is that they regarded the truth as obvious. Mill was convinced that progress had demonstrated the obsolescence of non-Western culture, just as it had exposed the falsity of geocentric astronomy. In a postscript to the original essay, Marcuse expressed similar confidence in the rationality, if not the linearity, of history:
As against the virulent denunciations that such a policy would do away with the sacred liberalistic principle of equality for ‘the other side’, I maintain that there are issues where either there is no ‘other side’ in any more than a formalistic sense, or where ‘the other side’ is demonstrably regressive…
In Marcuse’s hands, Mill’s justification of enlightened despotism in undeveloped societies becomes a justification of enlightened despotism over the majority of undeveloped individuals. The central difference between Mill and Marcuse is that the former believed that the necessity of despotism had passed, at least in the West. Marcuse contended that intellectual freedom had to be deferred until more people were likely to develop the correct opinions:
…the ways should not be blocked on which a subversive majority could develop, and if they are blocked by organized repression and indoctrination, their reopening may require apparently undemocratic means. They would include the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc. Moreover, the restoration of freedom of thought may necessitate new and rigid restrictions on teachings and practices in the educational institutions which, by their very methods and concepts, serve to enclose the mind within the established universe of discourse and behavior—thereby precluding a priori a rational evaluation of the alternatives. And to the degree to which freedom of thought involves the struggle against inhumanity, restoration of such freedom would also imply intolerance toward scientific research in the interest of deadly ‘deterrents’, of abnormal human endurance under inhuman conditions, etc.
This passage is remarkable for the degree to which it prefigures so-called political correctness. Marcuse’s thought is that it is impossible for radical ideas to win a “free debate” in a society characterized by many forms of inequality. Therefore, debate should be restructured in ways that favor the weak and lowly. Marcuse goes on to speculate:
While the reversal of the trend in the education enterprise at least could conceivably be enforced by the students and teachers themselves, the systematic withdrawal of tolerance toward regressive and repressive opinions and movements could only be envisaged as the results of large-scale pressure which would amount to an upheaval.
Marcuse’s emphasis on students and professors encouraged the transformation of the universities that’s been exhaustively discussed by writers such as Roger Kimball. But his hopes for “large scale” pressure were disappointed until fairly recently, partly because the repressive tolerance thesis is as offensive to ordinary people as it is attractive to academics.
The advent of social media changed that dynamic. In addition to tilting public discourse toward the young, who are more likely to use these platforms, they make it easier for those whom Marcuse frankly described as subversives to organize and target the withdrawal of tolerance.
To be clear, I’m not suggesting that Gawker commenters are secret Marcusians. Actually, they’d probably benefit from reading this extraordinarily learned, subtle thinker. But they have absorbed a simplified version of Marcuse’s critique of Mill. In Marcuse, this critique culminates in an endorsement of legal as well as social pressure to hasten progress:
Different opinions and ‘philosophies’ can no longer compete peacefully for adherence and persuasion on rational grounds: the ‘marketplace of ideas’ is organized and delimited by those who determine the national and the individual interest….The small and powerless minorities which struggle against the false consciousness and its beneficiaries must be helped: their continued existence is more important than the preservation of the rights and liberties which grant constitutional powers to those who oppress these minorities. It should be evident by now that the exercise of civil rights by those who don’t have them presupposes the withdrawal of civil rights from those who prevent their exercise…
How long until his unwitting heirs come to the same conclusion?
Samuel Goldman is assistant professor of political science at The George Washington University.
The LORD is a man of war: the LORD is his name.
There is an old story that the Archangel Michael and the devil feuded over Moses’ remains. While Michael aimed to convey the prophet’s body up to heaven, Satan was determined to keep him buried in the dirt.
If the comparison is not impious, we may speak of a similar contest for custody of Leo Strauss. According to his admirers, Strauss earned a place among the angels by promoting Greek-inspired rationalism and a cautious liberalism. Strauss’s critics contend that he was a demonic figure, who encouraged his acolytes to disregard both scholarly probity and basic morality in favor of a Nietzschean will to power.
Strauss’s supporters held the upper hand so long as debate was focused on works that Strauss prepared for publication after his arrival in the United States in 1937. Yet they have struggled to explain the early works in German that have come to light over the last decade. These texts do not appear to be the work of a liberal rationalist. In a notorious letter to philosopher of history Karl Löwith, Strauss even expressed support for “the principles of the Right, fascist, authoritarian, imperialist principles…”
Robert Howse, who teaches law at New York University, is the latest combatant in the Strauss wars. In Leo Strauss: Man of Peace, Howse defends Strauss from his enemies while distancing him from some of his self-appointed friends. Howse acknowledges that Strauss flirted with extremism. But he argues that Strauss devoted the rest of his career to t’shuvah, a Hebrew word that is usually translated “repentance.”
Howse’s study has the merit of drawing on newly available sources from Strauss’s intellectual maturity: the archive of seminars made available by the Leo Strauss Center at the University of Chicago. And Howse is among very few writers on Strauss who are sympathetic to their subject without being sycophantic. Despite these virtues, I do not think Howse wins the battle for Strauss’s legacy, at least if this means distancing him from the politics of national self-assertion. That is because Howse does not draw the connection between Strauss’s early critique of liberalism and his lifelong Zionism.
Howse focuses on political violence. Departing from Strauss’s alleged influence on supporters of the Iraq War, Howse asks whether Strauss thought violence should be regulated by a normative standard or deployed according to its user’s interest. He answers that Strauss sought “a middle way between strict morality and sheer Machiavellian[ism].”
In itself, this conclusion is not very interesting. Every significant political theorist, including Machiavelli, has tried in some way to steer between the rocks of moral absolutism and political solipsism. Howse’s contribution is an argument about the character of the “middle way” that Strauss preferred. In courses on Thucydides, Kant, and Grotius from the 1960s, Howse finds Strauss praising the Nuremberg trials, United Nations, and nascent European Community. He argues that Strauss was essentially a Cold War liberal internationalist.
To make his case, Howse has to refute an interpretation of Strauss that has become dominant over the last decade or so. According to this interpretation, the most important influence on Strauss was the reactionary legal philosopher Carl Schmitt. In seminal works from the 1920s, Schmitt argued that the basis of politics is the distinction between friend and foe realized in mortal combat.
The Schmitt connection has been a centerpiece of attacks on Strauss since the mid-1990s. But Howse is more interested in confronting Strauss’s allies than in rehashing old debates. In particular, Howse accuses Heinrich Meier, the German scholar who edited Strauss’s Gesammelte Schriften, of “misreading Strauss as a hyper-Schmittian.” According to Howse, Meier not only inflates the significance of Schmitt for Strauss but also presents Strauss as agreeing with Schmitt’s politics of existential opposition.
Howse’s response to Meier has several dimensions. On the textual level, Howse shows that there is not enough evidence to support claims that Schmitt was among Strauss’s most important interlocutors. Strauss wrote a 1932 review of Schmitt’s seminal work, The Concept of the Political, that Schmitt recognized as the most searching he received. On that basis, he wrote a letter of recommendation for the Rockefeller Foundation grant that allowed Strauss to leave Germany. But these facts demonstrate no more than a professional relationship between scholars. And Schmitt’s anti-Semitism may have given him personal reasons to wish that Jewish intellectuals would make their careers elsewhere.
On the philosophical level, Howse reaffirms that Strauss was deeply critical of Schmitt’s approach. Although Schmitt claimed that he was distinguishing politics from morality, his argument was based on the assumption that a life devoted to existential confrontation is more worthy than one devoted to peace and prosperity. Although opposed to Christian and bourgeois norms, this assumption is inextricably normative. Strauss exposed Schmitt’s hard-boiled realism as a cover for his own brand of moralism.
Finally, Howse also offers a plausible if not novel account of the historical setting in which Strauss could have been attracted to antiliberalism without endorsing Schmitt’s theory of enmity. Given the failure of the Weimar republic, it seemed that a politics of militant self-assertion was necessary to protect Germany’s Jews. Strauss’s praise of “the principles of the Right, fascist, authoritarian, imperialist principles…” has to be read with this consideration in mind. The full context of the letter makes it clear that Strauss believed that only such principles were capable of standing up to the National Socialist regime.
When he wrote those words in 1933, Strauss may have been thinking of Mussolini, who was at the time an opponent of Hitler. As the 1930s continued, however, he associated them with Churchill. After Strauss arrived in the United States, he was known for insisting that “I am not liberal, I am not conservative, I always follow Churchill.” As Paul Gottfried pointed out in Leo Strauss and the Conservative Movement in America, Strauss was far more enthusiastic about England than the land of his birth.
Howse suggests that Strauss’s admiration for Churchill indicates his growing appreciation for liberal democracy. This is true, but only partly. What Strauss admired in Churchill and England was not bourgeois virtue or popular government. Rather, it was the “Roman” element he praised in the letter to Löwith.
This element consisted, in the first place, of a defiant militarism. In the letter to Löwith, Strauss quotes Virgil’s exhortation to the Romans “to rule with authority … spare the vanquished and crush the proud.” He elides the interstitial phrase “impose the way of peace.” The elision suggests that for Strauss the Roman way is the way of war and empire.
In 1934, Strauss identified a Roman quality in British parliamentary debate. But what he was praising was fairly specific: Churchill’s eloquent support for rearmament. There were compelling political and personal reasons for Strauss’s enthusiasm for military resistance to Germany. Nevertheless, it is worth recalling this view was deeply unpopular in the mid-’30s. Strauss’s praise for Parliament rests on its senatorial rather than its plebiscitary character.
Strauss’s affection for patricians was an important part of his near-worship of Churchill. As an aristocrat, soldier, and enthusiastic imperialist, Churchill personally represented the survival of premodern virtues within liberal democracy. It is easy to forget that Churchill’s critics often castigated him in terms similar to those Strauss used in his letter: imperialist, authoritarian, even fascist. In Strauss’s view, however, these were precisely the qualities that enabled Churchill to take a lonely stand against Hitler.
For Strauss, then, “the principles of the Right” were not so distant as they might now seem from the values that helped save Europe from Nazi domination and its Jews from extinction. In 1941, he explained to an audience at the New School for Social Research that “it is the English, and not the Germans, who deserve to be, and to remain, an imperial nation: for only the English … have understood that in order to deserve to exercise imperial rule, regere imperio populos, one must have learned for a very long time to spare the vanquished and to crush the arrogant: parcere subjectis et debellare superbos.”
Howse argues that the lecture on “German Nihilism” from which this passage is quoted shows Strauss continuing his critique of Schmitt. And this is true as far as it goes: Strauss rejects the “warrior morality” that finds meaning in confrontation with a mortal enemy.
But that does not mean Strauss rejected violence as such. Like the letter to Löwith, “German Nihilism” culminates in a defense of war and empire. For Strauss, the problem with Schmitt was not that he placed violence at the center of politics. It was that he did so for the wrong reasons.
What did Strauss believe were the right reasons for violence? From the crisis of the 1920s, he learned that coercion was necessary to secure order. In the intellectual autobiography that he added to the 1965 English translation of his book on Spinoza, Strauss explained that the Weimar Republic “presented the sorry spectacle of justice without a sword or of justice unable to use the sword.” Without endorsing dictatorship, Strauss followed Machiavelli in regarding reliable execution as the central responsibility of the state.
But this is an argument about domestic politics. In a seminar on Thucydides taught just a few years before the Spinoza preface was published, Howse finds Strauss insisting that “foreign relations cannot be the domain of vindictive justice.” Force must be used in international affairs, but only to the extent necessary to secure a minimum of justice.
Howse observes that Strauss’s view that justice must be tempered by moderation was reflected in his assessment of the Nuremberg trials. While the Versailles treaty after World War I was a vindictive application of collective responsibility, the Nuremberg trials attempted to distinguish individual criminals from collaborators.
There is an implicit contrast here to Schmitt, who rejected the Nuremberg tribunal as an exercise in hypocritical moralism. As Strauss had shown, Schmitt himself was a moralist when that suited his purposes. For Strauss, on the other hand, the fight against the Nazis was unquestionably a just war. He had argued in 1941 that England was not only permitted to fight Germany but had a moral right to do so.
At the time, Strauss had expressed this position using the rhetoric of empire. He was far from the only one to describe just causes in now politically incorrect terms. In 1942, Churchill explained that “I have not become the King’s First Minister in order to preside over the liquidation of the British Empire.” In Churchill’s view as much as Strauss’s, the war against Germany was a war for empire.
After the war, old-fashioned empire became untenable. In addition to economic and military obstacles, the principle of self-determination that encouraged resistance to the Nazis made it impossible for former imperial powers to justify their domination of other peoples.
Surprisingly, Howse finds Strauss relatively accepting of this change. In the Thucydides seminar, Strauss juxtaposes “empire” and “freedom from foreign domination” as the two greatest goals of politics. The suggestion is that the strong cannot be blamed for seeking empire. On the other hand, the weak cannot be blamed for resisting it. After about 1945, however, the moral and technological balance of power shifted in such a way as to give the resisters the upper hand. The imperial powers could no longer claim the right of the stronger because they were no longer stronger.
Howse argues that the Kant and Grotius seminars show Strauss searching for an acceptable order for the world of nation-states that replaced the old empires. He finds that Strauss, although resolutely anticommunist, expressed enthusiasm for a federation of republican states similar to the one suggested by Kant. Nevertheless Strauss, like Kant, rejected a world state as unavoidably tyrannical. The best alternative would be a federal arrangement involving shared sovereignty combined with respect for national particularity—perhaps in ways comparable to the European Union.
Howse thus concludes that the mature Strauss was a liberal internationalist. Although not naïve about the necessity of war, he believed that war should be waged for the sake of a more just order. According to Howse, this Strauss is far from the belligerent nationalist who is supposed to have inspired the neoconservatives. Like Socrates, he is a man of peace.
Yet there is something missing from Howse’s portrait of Strauss as a liberal internationalist. That is a detailed consideration of the role of Zionism in Strauss’s thought about violence.
In his intellectual autobiography, Strauss describes his earliest political decision as a commitment to “simple, straightforward political Zionism” at the age of 17. Throughout the 1920s, he was active in the Revisionist movement led by Vladimir Jabotinsky. In the 1930s, Strauss endorsed “the principles of the Right, fascist, authoritarian, imperialist principles…” as the only basis for defense of Germany’s Jews. In the 1940s, he offered a moral defense of the British Empire partly because of the mercy it offered to the vanquished—including the Jews settled in Palestine. In the 1950s and 1960s, Strauss lectured and wrote extensively on Jewish themes, rarely failing to voice his admiration and gratitude for the foundation of the State of Israel.
These facts are barely mentioned in Leo Strauss: Man of Peace. In fact, the only explicit reference to the State of Israel that I have found comes in the conclusion, when Howse mentions Strauss’s 1957 letter to National Review defending Israel from accusations of racism. As part of his polemic against the neoconservative appropriation of Strauss, Howse assures readers that, “This was an act of loyalty to the Jewish people, not to the political right.”
Howse may be correct about Strauss’s intentions. But Strauss’s personal relationship to the American conservative movement is not the most important issue. Strauss’s lifelong commitment to Zionism tells us something important about his views on political violence. In this decisive case, he endorsed the politics of national self-assertion that Howse contends he had rejected by the end of his career.
Strauss makes this point obliquely but unmistakably in the “Note on Maimonides’ Letter on Astrology” that he composed in 1968. In the letter, Maimonides attributes the destruction of the Second Temple to the fact that the Jews relied on magic to provide their defense, rather than practicing the art of war and conquest like the Romans who defeated them.
Strauss describes the remark as “a beautiful commentary on the grand conclusion of the Mishneh Torah: the restoration of Jewish freedom in the Messianic age is not to be understood as a miracle.” The Mishneh chapters that Strauss cites clarify this statement, explaining that the only difference between the current age and the Messianic era will be “emancipation from our subjugation to the gentile kingdoms.”
For the mature Strauss, in other words, the redemption of the Jewish people was not a mystical event. It was a political condition, defined by the reestablishment of the Jews’ sovereignty in their own land. That achievement depended on much the same unsettling principles that Strauss endorsed in the infamous letter to Löwith. It may not be a coincidence that Strauss wrote these words almost exactly one year after Israel won control of the Temple Mount.
Strauss may have hoped the Jewish State could eventually become a respected member of a peaceful international federation. Nevertheless, this passage suggests that t’shuvah may not have been the central theme of Strauss’s career. Rather than enacting a return from extremism to moderation, Strauss’s thought about political violence was remarkably consistent concerning the nation that he cared most about. When it came to the Jewish people, Strauss felt that he had nothing to repent.
Samuel Goldman is assistant professor of political science at The George Washington University.
The infamous former D.C. mayor Marion Barry died yesterday. Most of the obituaries and reminiscences that have appeared so far are properly respectful. Even so, they can’t avoid mentioning Barry’s reputation for corruption and his troubles with the law, especially his 1990 arrest on drug charges. Despite these problems, which might have doomed a lesser politician, Barry remained beloved in many parts of the city. How could citizens of the District continue to support him?
Part of the answer, as Adam Serwer points out, is that Barry was a very good politician. At the beginning of his career, he cultivated an image as an advocate for the District’s black majority, while reassuring the white elite that he was ready to do business. It’s easy to forget now, but Barry won election in 1978 largely due to votes from the Northwest quadrant. He lost much of that support when he ran for reelection. But by then he could rely on other allies.
But there was more to Barry’s success than tactical brilliance. He practiced urban politics like an old-fashioned ward boss, dispensing jobs and contracts as personal favors to his supporters. In some ways, the benefits were real: Washington’s black middle class depended heavily on municipal jobs. But they also promoted cronyism and incompetence, and helped bankrupt the city.
It’s tempting to conclude from these results that Barry was a very, very, very bad mayor. Although true in some ways, this assessment misses just how historically typical Barry was. If Barry had been born white, in a different place, and in 1876 rather than 1936, he would likely be remembered as a lovable rogue who used government to help out people to whom other roads out of poverty were closed.
There is no better exemplar of this type than James Michael Curley, the four-time mayor of Boston, Congressman, Governor, and two-time jailbird immortalized by Spencer Tracy in “The Last Hurrah” (based on Edwin O’Connor’s novel). As Jack Beatty shows in his riveting biography, from the beginning of his political career before World War I to its end in the 1950s, Curley used government as an instrument in his lifelong mission to improve the lives of Boston’s Irish working class.
Setting a pattern that Barry would follow, Curley started out as a reformer. As challenges to his autocratic practices emerged, however, Curley relied increasingly on appeals to ethnic resentment and a bullying, macho style. In his 1942 Congressional race against the Brahmin Thomas Eliot, Curley asserted that “There is more Americanism in one half of Jim Curley’s ass than in that pink body of Tom Eliot.”
Curley’s divisive rhetoric was coarse but not very harmful in itself. Much worse were his policies, which relied on ever-increasing property taxes to pay for public-sector jobs. Boston boomed along with the rest of the country in the 1920s. By the time Curley left office, however, it had entered a decline from which it emerged only under the late Thomas Menino, who broke Curley’s record as Boston’s longest-serving mayor.
Like Barry, then, Curley was by objective standards a lousy mayor (particularly in his later terms). Nevertheless, he remained a hero to his people, who turned out in the thousands at his funeral. Were they sentimental about Curley’s big achievements, such as the construction of the municipal hospital? Grateful for the cash envelopes and no-show jobs Curley distributed around election time? Or unaware how Curley had damaged their city?
The answer probably involved all of these elements. Even in combination, however, they’re inadequate to explain Curley’s role in Boston. Rather than a conventional politician, Curley was a kind of tribal chieftain. More than any particular benefits, he offered his followers the sense that there was someone in power who was like them, who cared about them, and who would do whatever he could to help them.
Before World War II, there were plenty of white chiefs in the Curley mode, among rural Southerners as well as urban immigrants. By the ’70s, however, urban politics had been so thoroughly racialized that personal leadership and endemic corruption were seen as black pathologies rather than the historical norm. In the final analysis, Barry was a crook who hurt his city. But his greatest crime was being born black, and too late to be crowned a “rascal king.”
Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.
Last night was a big win for Republicans. And I have nothing much to add to the already enormous literature documenting just what a romp it was (see Michael Brendan Dougherty’s after-action report in The Week). Most of the outcomes were consistent with predictions. But it was surprising how far Republicans outperformed the polls in accumulating large margins of victory.
Even so, Republicans should resist the temptation to conclude that the results give them an enduring advantage at the national level. This was a “wave election” only in the sense that American politics have been stormy for the last decade. I have to run off to teach, so I’ll make my case by means of a listicle. Here are six reasons for caution:
- The president’s party usually loses seats in midterm elections.
- Obama’s approval, while low, is higher than Bush’s at the same point in his presidency.
- We’ve seen this movie before. Remember the “permanent majority” of 2004? How about the “thumping” of 2006? Then there was the “new majority” of 2008. Of course, that was followed by the “Tea Party wave” of 2010. Which didn’t stop Obama from becoming the first president since Eisenhower to win a majority of the vote for a second time in 2012.
- The midterm electorate skews older, whiter, and richer than in presidential years. These are Republican demographics, so Republicans tend to do better. The 2016 electorate, on the other hand, will probably look more like 2008 than 2010. Republicans probably won’t ever win many votes from blacks or single women, but they need to continue doing better among the young and Hispanics (as several candidates did last night).
- The standard explanation of the results is that the election was a referendum on Obama’s policies. That’s not true for the simple reason that most voters have only the foggiest notion of what Obama’s policies are. (Polls on these matters can be misleading because they often ask respondents to choose from a predetermined set of responses to a leading question, which encourages unrepresentative, off-the-cuff answers.) Rather than voting on the success or failure of specific programs, many voters rely on a vague sense that things are going well or badly for the country.
- The biggest factor in voters’ assessment of the direction of the country is the condition of the economy. Right now it’s pretty lousy, despite relatively favorable growth and employment trends. But if these trends continue over the next two years—and they’re far less dependent on Washington than either party likes to admit—they may start to pay off for ordinary people. Should that occur, many will discover that they liked Democrats more than they thought.
The bottom line is that the results show Republicans making good use of favorable conditions. But there’s no reason yet to think those conditions are the basis of a durable coalition.
Indulging in what seems to be their regional pastime, Texans are fighting about schoolbooks again. While previous debates centered on science instruction, this time it’s proposed history texts under scrutiny (I blogged about the curriculum standards they’re supposed to meet here). At a hearing last week, books under review by the state Board of Education were blasted for saying too many nice things about Hillary Clinton, not enough nice things about Reagan, and too much about Moses altogether. In widely reported testimony, the SMU historian Kathleen Wellman argued that the books treat Moses as an honorary founding father—so much so that “I believe students will believe Moses was the first American.”
I haven’t seen the proposed texts, so Wellman’s criticisms may be justified. But non-academics should be aware that one of the most exciting movements in intellectual history in the last decade or so has discovered the extraordinary prevalence of “Hebraic” rhetoric and symbolism during the revolution and in the early republic.
Needless to say, Moses was not the first American. As Eran Shalev shows in his fine survey American Zion, however, early Americans were remarkably likely to think that their political struggles followed a pattern set by the Biblical Israelites. Washington, for example, was routinely identified as a modern Joshua. And the Union was often compared to federal arrangements among the Hebrew tribes.
The discourse of Hebrew republicanism is an important supplement to more familiar stories about the influence of Locke or the civic republicanism inherited from the Renaissance. At the same time, it’s hard to teach to beginners.
It’s important to avoid reductive arguments that America has a theological foundation. As far as we can tell, Hebraic models were more common in popular discourse than in elite deliberation. So while they played an important role in the public justification of political decisions and institutions, they had little direct influence on their design. And the republican interpretations of scripture on which the patriots relied in the 1770s and ’80s were not only selective, but also fairly novel. As recently as the 1750s, American clergy and laymen had usually argued that the Davidic monarchy was God’s paradigm of good government.
On the other hand, the politics of the revolution and early republic really were infused with Biblical rhetoric and examples. Neglect of this fact promotes the more fashionable dogma that the America revolution and constitutions were products of a secular society.
Textbook writers are thus in a bit of a pickle. Lacking the space to address complicated topics in detail, they can hardly avoid trafficking in simplifications. Perhaps the Texas books err on the side of evangelical conservatives by making the American founding more religious than it really was. But plenty of history writing makes the opposite error, stressing a few prominent skeptics at the expense of a richly Biblical political culture.
Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.
Although I’m a card-carrying political scientist, I’ve never been entirely comfortable with the state of the discipline. Even so, I like to see my profession in the news. So I’ve been amused to find that political science has become the bone of contention in a spat between Ezra Klein and Thomas Frank. Klein argues that social science research has improved political journalism; Frank contends that it’s become an alibi for the status quo. Jonathan Chait and Freddy DeBoer weigh in on behalf of Klein and Frank, respectively. Klein’s followup is here.
The dispute is nominally about the relation between political science and the left. But it actually revolves around the right. Specifically, it’s about explaining Republicans’ electoral success since 1964, which both sides treat as a mystery on the order of Fermat’s last theorem. Basically, Frank attributes conservatives’ success to a decades-long campaign of “organizing and proselytizing and signing people up for yet another grievance-hyping mass movement.” Klein, on the other hand, argues that it’s mostly about the partisan realignment of the South, which has always been conservative, but used to vote for Democrats. Although Klein focuses on the House of Representatives, Chait makes a similar case about presidential elections.
The academic debate has strategic implications. If the parties’ relative strength is determined mostly by structural considerations, there’s not much Democrats can do to take control of Congress. On the other hand, Republicans will have a hard time winning the presidency with a coalition based in the inland South and Mountain West. Essentially, the parties will have reversed the positions they held for most of the period between World War II and 1994, when Republicans owned the White House and Democrats dominated Capitol Hill.
Considered in this broader context, the progressive heyday of the mid-’60s was profoundly aberrant. A temporary constellation of factors—including America’s overwhelming economic advantage following the war, political participation by the youngest baby boomers, and the halo conferred on Johnson by his predecessor’s assassination—combined in a political moment that was without parallel before or since. Since the end of Reconstruction, the American norm has been regionally and institutionally divided, relatively conservative politics. The Republican ascendance since 1964 is in many ways a return to that norm, rather than a puzzling deviation from it.
Frank finds that conclusion too upsetting even to contemplate. Klein accepts it with weary resignation. But neither has any right to be as surprised as he seems to be. Rather than more training in statistics, progressives might benefit from a refresher course in history—which used to have a much more prominent place in the study of politics than it does today.
In his 1981 classic After Virtue, Notre Dame philosophy professor Alasdair MacIntyre offers a provocative diagnosis of the modern condition. Rejecting the assumption that secular modernity is the culmination of centuries of improvement, MacIntyre contends that we live amidst the ruins of Western civilization.
There can be no restoration of the past. Even so, MacIntyre urges readers to take their bearings from a previous experience of loss by pursuing “local forms of community within which civility and the intellectual and moral life can be sustained through the new dark ages which are already upon us.” As he puts it in a famous sentence, “We are waiting not for a Godot, but for another—doubtless very different—St. Benedict.”
Benedict is considered the founder of the monasteries from which Christian Europe would eventually emerge. MacIntyre does not claim to be a successor to Benedict. But his suggestion that civilization can be preserved only by dropping out of modern life has become influential among conservatives with traditional religious commitments.
Rod Dreher has summarized the “Benedict Option” as “communal withdrawal from the mainstream, for the sake of sheltering one’s faith and family from corrosive modernity and cultivating a more traditional way of life.” And small but vibrant communities around the country are already putting the Benedict Option into practice. Without being rigorously separatist, these communities do aim to be separate. Some merely avoid morally subversive cultural influences, while others seek physical distance from mainstream society in rural isolation.
But a neo-Benedictine way of life involves risks. Communal withdrawal can construct a barrier against the worst facets of modern life—the intertwined commodification of personal relationships, loss of meaningful work to bureaucratic management, and pornographic popular culture—yet it can also lead to isolation from the stimulating opposition that all traditions need to avoid stagnation.
American religious history offers a clear example of this danger. Between World War I and the 1970s, conservative Protestants pursued strategies of withdrawal that impoverished their intellectual and cultural lives in ways they have only recently begun to remedy. In MacIntyre’s telling, the Benedict Option is a detour that leads back into the center of history as civilization eventually re-emerges from its refuges. But it can just as easily become a dead end.
The Benedict Option is not the only means of spiritual and cultural survival, however. As a Catholic, MacIntyre searches for models in the history of Western Christendom. The Hebrew Bible and Jewish history suggest a different strategy, according to which exiles plant roots within and work for the improvement of the society in which they live, even if they never fully join it.
This strategy lacks the historical drama attached to the Benedict Option. It promises no triumphant restoration of virtue, in which values preserved like treasures can be restored to their original public role. But the Jews know a lot about balancing alienation from the mainstream with participation in the broader society. Perhaps they can offer inspiration not only to Christians in the ruins of Christendom but also to a secular society that draws strength from the participation of religiously committed people and communities. Call it the Jeremiah Option.
On March 16, 597 BC, the Babylonian king Nebuchadnezzar sacked Jerusalem after a long siege. In addition to the riches of the city and Temple, he claimed as spoils of war thousands of Judeans, including the king and court. Many of the captive Judeans were settled on tributaries of the Euphrates, which inspired the words of Psalm 137: “By the rivers of Babylon, there we sat down, yea, we wept, when we remembered Zion.”
The Judeans had reason to weep. In addition to the shame of defeat, few had direct experience of foreign cultures. The language and customs of their captors were alien and in some ways—particularly the practice of idolatry—abhorrent. More importantly, the captives could no longer practice their own religion. Banned by ritual law from making sacrifices outside the Promised Land, the exiles were unable to engage in public worship.
Under these circumstances, two ways of dealing with Babylonian society presented themselves. First, the exiles could accommodate themselves to the norms of the victors. They could learn Aramaic and adopt local manners. But this would mean the loss of their national and religious identities. In becoming honorary Babylonians, they would forfeit their status as God’s chosen people. On the other hand, the captives could resist. Taken from their homes by force, they might use force to get back again. Proposals for resistance behind enemy lines were seriously considered. In fact, several Judean leaders seem to have been executed for subversive plotting.
But any military campaign was doomed to failure. The empire was too strong to overthrow or escape. So how should its prisoners conduct themselves? How should they live in a society that they could not fully join without giving up their fundamental commitments?
The question was important enough to attract the attention of God himself. Speaking through the prophet Jeremiah, who remained back in Jerusalem, the Lord commanded the captives to steer a course between extremes of assimilation and violent resistance. In his famous letter to the leaders of the Judean community, Jeremiah reports God’s orders as follows:
Thus saith the Lord of hosts, the God of Israel, unto all that are carried away captives, whom I have caused to be carried away from Jerusalem unto Babylon; build ye houses, and dwell in them; and plant gardens, and eat the fruit of them; take ye wives, and beget sons and daughters; and take wives for your sons, and give your daughters to husbands, that they may bear sons and daughters; that ye may be increased there, and not diminished. And seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.
What is God saying? In the first place, he insists that the captives unpack their bags and get comfortable. True, God goes on to promise to redeem the captives in 70 years. But this can be interpreted to mean that none of the exiles then living would ever see their homes again. After all, the span that the Bible allots to a human life is threescore years and ten.
So the captives are to await redemption in God’s time rather than seeking to achieve it by human means. But this does not mean that they are to keep their distance from Babylonian society until the promised day arrives. On the contrary, God commands them to “seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.”
“Peace” could be read as the absence of conflict. But this doesn’t fully express God’s directive. In the Hebrew Bible and Jewish tradition more broadly, peace refers to flourishing and right order. What God is saying is that the exiles cannot prosper unless their neighbors do as well. For the time they are together, they must enjoy the blessings of peace in common.
By what means are these blessings to be secured? The reference to prayer suggests that God wants the Judeans to promote peace by spiritual means. But that is not all. God also enjoins the Judeans to promote the common good by means of ordinary life. His very first instruction is to build houses. In other words, the Judeans are to conduct themselves like long-term residents—if also resident aliens.
Reinforcing the point that captivity is for the long haul, God reminds the captives that the dwellings they are to build are not for themselves alone. Instead, they must shelter generations of children and grandchildren, multiplying the community. God’s plan is for expansion and growth, not marginal existence.
The emphasis on securing peace through ordinary life does not absolve the exiles of their responsibility to remain holy. But theirs is to be a holiness based on upright life rather than the independence of a homogeneous community. Reassuring those who feared that they could not continue their relationship with God in exile, God explains, “ye shall seek me, and find me, when ye shall search for me with all your heart.” The Babylonian captivity is thus the origin of Judaism as a law-based religion that can be practiced anywhere, rather than a sacrificial cult focused on the sacred temple.
The piety that God encourages, therefore, can be practiced by ordinary people living ordinary lives under difficult circumstances. God enjoins the captives not only to live in Babylon, but also to live in partnership with Babylon. Without assimilating, they are to lay down roots, multiply, and contribute to the good of the greater society.
The Babylonian captives addressed by Jeremiah bear comparison to the traditionalist dissenters in MacIntyre’s stylized history. Both groups are minorities. Both are prisoners of empires that are unwilling or unable to support their moral and religious commitments. Yet both know that violence is an unacceptable means to achieve their communal goals. It would not work—and more importantly, it offends God.
Where Jeremiah counsels engagement without assimilation, Benedict represents the possibility of withdrawal. The former goal is to be achieved by the pursuit of ordinary life: the establishment of homes, the foundation of families, all amid the wider culture. The latter is to be achieved by the establishment of special communities governed by a heightened standard of holiness.
Although it can be interpreted as a prophecy of doom, the Jeremiah Option is fundamentally optimistic. It suggests that the captives can and should lead fulfilling lives even in exile. The Benedict Option is more pessimistic. It suggests that mainstream society is basically intolerable, and that those who yearn for decent lives should have as little to do with it as possible. MacIntyre is careful to point out that the new St. Benedict would have to be very different from the original and might not demand rigorous separation. Even so, his outlook remains bleak.
MacIntyre’s pessimism conceals what can almost be called an element of imperialism—at least when considered in historical perspective. Embedded in his hope for a new monasticism is the dream of a restoration of tradition. The monks of the dark ages had no way of knowing that they would lay the foundation of a new Europe. But MacIntyre is well aware of the role that they played in the construction of a fresh European civilization—and subtly encourages readers to hope for a repetition.
Jeremiah’s message to the captives is not devoid of grandiose hopes: the prophet assures them that they or their progeny will ultimately be redeemed. But this does not require the spiritual or cultural conversion of the Babylonians.
The comparison between the options represented by Jeremiah and by Benedict has some interest as an exercise in theologico-political theorizing. But it is much more important as a way of getting at a central problem for members of traditional religious and moral communities today. How should they conduct themselves in a society that seems increasingly hostile to their values and practices? Can they in good conscience seek the peace of a corrupt and corrupting society?
In the 2013 Erasmus Lecture sponsored by First Things, Jonathan Sacks, the former chief rabbi of the United Kingdom’s United Hebrew Congregations of the Commonwealth, took up this question with specific reference to Jeremiah. Rejecting Jeremiah’s reputation as a prophet of doom, Sacks argued that Jeremiah’s letter to the exiles fundamentally expresses a message of hope. Despite their uncomfortable situation, the captives are not to resist or separate themselves from Babylonian society. Rather, they are to pursue the fulfillments of ordinary life, practice holiness, and work and pray for the prosperity of the society in which God placed them.
As Sacks pointed out, this pattern has governed much of Jewish history in the diaspora. Between the destruction of the Second Temple in AD 70 and the foundation of the State of Israel in 1948, nearly all Jews found themselves in a condition comparable to that of the Babylonian captives. A small and often despised minority, they nevertheless took to heart God’s insistence that their peace depends on the peace of their captors.
This is not a solution to all problems of communal survival, however. The appeal of assimilation has been considerable. Descendants of the captives often took Babylonian names and adopted Aramaic. In modern times, many Jews have not only modified religious practice but rejected Jewish identity altogether. Recent surveys show that Jewishness in America is seriously endangered by indifference and intermarriage. So advocates of more rigorous separation have a point.
Nevertheless, there may be lessons in Jeremiah and Jewish history for Christians and others concerned about their place in modern society. They can be sketched in three ideas.
First, internal exiles should resist the temptation to categorically reject the mainstream. That does not mean avoiding criticism. But it must be criticism in the spirit of common peace rather than condemnation. Jeremiah is famous as the etymological root of the jeremiad. Yet his most scathing criticisms are directed against his own people who have failed in their special calling of righteousness, not the “mainstream” culture.
Second, Jeremiah offers a lesson about the organization of space. Even though the captives were settled in self-governing towns outside Babylon itself, God encourages them to conduct themselves as residents of that city, which implies physical integration. There need be no flight to the hinterlands.
Finally, Jewish tradition provides a counterpoint to the dream of restoring sacred authority. At least in the diaspora, Jews have demanded the right to live as Jews—but not the imposition of Jewish law or practices on others. MacIntyre evokes historical memories of Christendom that are deeply provocative to many good people, including Jews. The Jeremiah option, on the other hand, represents a commitment to pluralism: the only serious possibility in a secular age like ours.
I offer these arguments against communal withdrawal from a somewhat idiosyncratic motive. An heir to the Jewish diaspora, I am a relatively comfortable inhabitant of secular modernity. By what right do I counsel people whose first loyalty is to God?
The answer is: self-interest. While not a member of a traditional religious community myself, I am convinced that the rest of society is immeasurably enriched by the presence of such communities in political, cultural, and intellectual life. So while I do fear that practices of separation will be bad for those communities themselves—as the fundamentalist experience of the last century indicates—I am certain that they will be bad for the rest of us. If demanding, traditional forms of religion disappear from mainstream culture, that culture may actually become the caricature of a destitute age on which MacIntyre builds his analysis.
At the same time, it would be cynical to offer a merely instrumental argument for the continued engagement of religious communities with secular society. Although not very observant myself, I found Jeremiah’s letter to the exiles helpful in thinking through this problem. God reminds the captives that they will find peace only in the peace of the Babylonians, that they are to promote the good of the rest of society as well as their own. The Jeremiah Option gives me reason to hope that Jews, Christians, and the rest of us can find peace together.
Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.
In a post for The Immanent Frame responding to the Hobby Lobby and Wheaton College decisions, Indiana University professor Winnifred Fallers Sullivan challenges the idea of religious freedom on which those decisions are based. Although a liberal, Sullivan does not deny that private firms and religious colleges are engaged in a kind of religious practice. Rather, she argues that because religion means different things to different people, it’s impossible to systematically distinguish legitimate “religious freedom” from mere rejection of the law:
The need to delimit what counts as protected religion is a need that is, of course, inherent in any legal regime that purports to protect all sincere religious persons, while insisting on the legal system’s right to deny that protection to those it deems uncivilized, or insufficiently liberal, whether they be polygamist Mormons, Native American peyote users, or conservative Christians with a gendered theology and politics. Such distinctions cannot be made on any principled basis…Both the majority and dissenting Justices in these two cases affirm—over and over again—a commitment to religious liberty and to the accommodation of sincere religious objections. Where they disagree is on what counts as an exercise of religion. Their common refusal, together with that of their predecessors, to acknowledge the impossibility of fairly delimiting what counts as religion has produced a thicket of circumlocutions and fictions that cannot, when all is said and done, obscure the absence of any compelling logic to support the laws that purport to protect religious freedom today.
The whole post rewards careful reading. One issue that Sullivan leaves unexamined is what counts as a “principled basis”. I take it that she means a non-historical, more or less universal definition, which would make it possible to distinguish religion from non-religion in a logically consistent way. And she’s right that no such definition exists. To mention only one example, the state cult of the Romans had little in common with what we understand by “religion” today.
But why should American law be based on universal principles that can be applied in a quasi-Kantian manner? A considerable historical literature suggests that the religion clauses of the Constitution emerged from the historical experience of Anglo-Protestantism. They were developed and applied in a society that was assumed to be overwhelmingly Christian and organized, for the most part, into recognizable denominations. Of course, there were always communities whose religion was inconsistent with these assumptions. But it was assumed that they would either be demographically marginal, or identifiable under Christian theological categories.
The problem, of course, is that this world no longer exists. And not only because of secularization or immigration by Catholics, Jews, and more recently Muslims. As Sullivan observes, the American brand of evangelicalism encourages individuals to decide for themselves what religion means to an historically unprecedented degree. So we face the challenge of applying historically and theologically specific concepts of religion, liberty, and so on in a way that obscures their limits and contingency. Thus the knots into which both sides of the Court have had to twist their arguments not only in Hobby Lobby, but also in cases such as Kiryas Joel Village School District v. Grumet.
There’s no obvious solution to this problem. We can neither revive Anglo-Protestant categories in a pluralistic society, nor can we formulate a definition of religion that will satisfy everyone. My own preference is for giving as much deference as possible, consistent with public order, to congregations, non-profit institutions, and yes, private firms, to act in ways that reflect their beliefs about what they owe to God and the world. But that means giving up the dream of cultural hegemony that today inspires the secular Left at least as strongly as it once did the religious Right.
(h/t Samuel Moyn, via Facebook).
Disinvitation season has come and gone. In this year’s enactment of a now familiar exercise, Haverford, Rutgers, and Brandeis, among other schools, were forced by opposition from students and faculty to alter plans for commencement speakers.
Parallel denunciations of creeping authoritarianism are part of the ritual. But the truth is that critics of the university on both the left and the right get what they really want out of these tiny fiascos: an opportunity to make vehement public statements when little of significance is at stake. That’s because commencement addresses are, with a few notable exceptions, emissions of immense quantities of hot air. Here are the deep thoughts with which Condoleezza Rice favored the graduates of Southern Methodist University in 2012.
Rather than lamenting the arrogance of administrators or the immaturity of students, it’s worth considering how to reform the institution of commencement speeches altogether. After all, there’s no requirement that universities import boldface names. Columbia, for example, allows only its president to speak. So here are some suggestions for preventing future commencements from becoming occasions for embarrassing disinvitations.
First, give students a role in choosing speakers. This would help gauge potential controversy early in the selection process, as well as building a constituency for the choice. One reason it’s so easy for a relatively small group of critics to push out a speaker is that the rest of the student body has no stake in keeping him. Allowing them to exercise some influence over the initial decision could change that.
But maybe students would use their influence to pick popular culture figures rather than serious types who convince parents and taxpayers that their money is well-spent. That risk could be avoided if universities stopped paying large honoraria. If potential speakers have something important to say, they’ll be willing to do so in exchange for reasonable expenses. Don’t subsidize celebrities—or high-priced “thought leaders” flogging their books.
Next, separate the conferral of honorary degrees from speechgiving. The former implies collective endorsement of the speaker’s career. The latter does not. One of Rod’s readers claims that Haverford opponents of Berkeley chancellor Robert Birgeneau objected to his honorary degree more than they did to his speaking invitation. Whether or not that’s true in this case, there’s a morally and politically relevant difference between hearing someone out and allowing an honor to be given in one’s own name.
Finally, revive the old practice of allowing a student elected by his or her peers to speak at commencement. This would allow students to express criticism or disapproval of other speakers in precisely the kind of dialogue that both lefties and conservatives claim to endorse.
Any or all of these suggestions would help prevent silly controversies without giving in to the heckler’s veto. But maybe the best solution would be to cancel the speeches altogether. Does anyone really want to be lectured in the inevitable commencement weather of blistering sun or pouring rain?
Few things annoy academics more than being told that their work is irrelevant. So there’s nothing surprising about the backlash against Nicholas Kristof’s column in Sunday’s New York Times. Kristof contended that America’s professors, especially political scientists, have “marginalized themselves” by focusing on technical debates at the expense of real problems, relying too heavily on quantitative methods, and preferring theoretical jargon to clear prose. An outraged chorus of responses (round-up here) rejected Kristof’s generalization as a reflection of the very anti-intellectualism that he intended to criticize.
Some of the responses to Kristof reflect the expectation of public recognition for every contribution to the debate that makes so much academic writing a chore to read. Even so, Kristof ignores fairly successful efforts to make scholarship more accessible—even if you don’t count every blog by every holder of a Ph.D. The Tufts professor and Foreign Policy contributor Daniel Drezner has more than 25,000 Twitter followers—partly due to the success of a book that uses a zombie apocalypse scenario to compare theories of international relations. The Washington Post recently picked up The Monkey Cage (full disclosure: several of my colleagues at George Washington University are contributors) and The Volokh Conspiracy, which are populated by political scientists and legal academics, respectively. Not to mention Kristof’s own employer. Just the day before Kristof’s piece ran, the Times hired Lynn Vavreck to contribute to a new site concentrating on social science and public policy.
At least when it comes to political science, then, it’s just not plausible that “there are fewer public intellectuals on American university campuses today than a generation ago.” On the contrary, there are probably more academics who try to communicate with non-specialist audiences than there were in 1994. One difference is that the public intellectuals of past decades were more likely to engage directly with normative and historical Big Questions. That change reflects the declining influence of political theory in comparison with causal analysis, as well as the weakening of the Cold War imperative of justifying liberal democracy.
Of course, writing for non-specialist readers isn’t encouraged in graduate programs, and doesn’t often align with the requirements for hiring and promotion. But there are anecdotal reasons to think that expectations are slowly changing, as departments struggle to prove their ‘relevance’ in a period of financial retrenchment. If Kristof’s piece promotes these changes, it will have served a valuable function whether or not its argument is compelling.
In fairness to Kristof, however, none of these observations refutes his basic claim. That’s because he isn’t actually talking about “public intellectuals”. Rather, he means old-fashioned mandarins, who move easily between Harvard Yard and Washington, usually without encountering many members of the public along the way. In a followup on Facebook, Kristof observes that “Mac Bundy was appointed professor of government at Harvard and then dean of the faculty with only a B.A.—impossible to imagine now.” After nearly a decade as dean, Bundy joined the Kennedy administration as national security advisor, where his vast intellectual firepower led him to promote and defend the Vietnam War.
What Kristof really offers, then, is less an argument for public engagement by scholars than a plea for another crop of Wise Men who lend conventional wisdom the authority of the academy. Not coincidentally, he presents as an exception to the trend toward academic self-marginalization the former Princeton professor and State Department official Anne-Marie Slaughter, whose resume is as perfect a reflection of the meritocratic elite that Bundy helped create as Bundy’s own pedigree was of the old Establishment. More professors should learn to participate in public debate—including political theorists frustrated by the increasingly technical orientation of political science. But if ‘relevance’ means becoming mouthpieces of our new ruling class, then Kristof can keep it.
Reports that A- is the median grade in Harvard College have reopened the debate about grade inflation. Many of the arguments offered in response to the news are familiar. The venerable grade hawk Harvey “C-” Mansfield, who brought the figures to public attention, describes the situation as an “indefensible” relaxation of standards.
More provocative are defenses of grade inflation as the natural result of increased competition for admission to selective colleges and universities. A new breed of grade doves point out that standards have actually been tightened in recent years. But the change has been made to admissions standards rather than expectations for achievement in class.
According to the editorial board of the Harvard Crimson, “high grades could be an indicator of the rising quality of undergraduate work in the last few decades, due in part to the rising quality of the undergraduates themselves and a greater access to the tools and resources of academic work as a result of technological advances, rather than unwarranted grade inflation.” Matt Yglesias, ’03, agrees, arguing that “it is entirely plausible that the median Harvard student today is as smart as an A-minus Harvard student from a generation ago. After all, the C-minus student of a generation ago would have very little chance of being admitted today.”
There’s a certain amount of self-congratulation here. It’s not surprising that Harvard students, previous and current, think they’re smarter than their predecessors—or anyone else. But they also make an important point. The students who earned the proverbial gentleman’s Cs are rarely found at Harvard or its peers. Dimwitted aristocrats are no longer admitted. And even the brighter scions of prominent families can’t take their future success for granted. Even with plenty of money and strong connections, they still need good grades to win places in graduate school, prestigious internships, and so on.
The result is a situation in which the majority of students really are very smart and very ambitious. Coursework is not always their first priority. But they are usually willing to do what’s necessary to meet their professors’ expectations. The decline of core curricula has also made it easier for students to pick courses that play to their strengths while avoiding subjects that are tough for them. It’s less common to find Chemistry students struggling through Shakespeare than it was in the old days.
According to the Harvard College Handbook for Students, an A- reflects “full mastery of the subject” without “extraordinary distinction”. In several classes I taught as an instructor and teaching fellow at Harvard and Princeton, particularly electives, I found that around half the students produced work on this level. As a result, I gave a lot of A-range grades.
Perhaps my understanding of “mastery” reflects historically lower demands. For example, I don’t expect students writing about Aristotle to understand Greek. Yet it’s not my impression that standards in my own field of political theory have changed a lot in the last fifty years or so. In the absence of specific evidence of lowered standards, then, there’s reason to think that grade inflation at first-tier universities has some objective basis.
But that doesn’t mean grade inflation isn’t a problem. It is: just not quite the way some critics think. At least at Harvard and similar institutions, grades are a reasonably accurate reflection of what students know or can do. But they are a poor reflection of how they compare to other students in the same course. In particular, grade inflation makes it difficult to distinguish truly excellent students, who are by definition few, from the potentially much larger number who are merely very good.
Here’s my proposal for resolving that problem. In place of the traditional system, students should receive two grades. One would reflect their mastery of specific content or skills. The other would compare their performance to the rest of the class. Read More…
Bill de Blasio is mayor-elect of New York. According to many of de Blasio’s critics as well as his supporters, the unsurprising outcome of Tuesday’s election reflects a decisive turn in the city’s politics. The Nation claims that “Bill de Blasio’s exhilarating landslide victory over Joe Lhota in New York’s mayoral election offers a once-in-a-generation chance for progressives to take the reins of power in America’s largest—and most iconic—city.” In National Review, Kevin D. Williamson evokes John Carpenter’s b-movie classic, Escape from New York.
I say: not so fast. Neither turnout and polls nor de Blasio’s career so far support hopes (or fears) that he’ll try to transform New York into Moscow on the Hudson. Mayor de Blasio will not cultivate the chummy relationship with Wall Street that Michael Bloomberg did. But there’s likely to be more continuity between their mayoralties than most people expect. In fact, that continuity is a bigger threat to the city’s future than the immediate collapse that some conservatives fear.
First, the election data. As Nicole Gelinas points out, de Blasio’s election was not the mandate for change that the margin of victory suggests.
Preliminary results show that about 1 million New Yorkers voted yesterday. That’s 13 percent lower than four years ago. Back then, remember, many voters disillusioned with the choices—Mayor Michael R. Bloomberg was running for a third term against the uninspiring Bill Thompson, Jr.—just stayed home. Turnout this year was as much as 30 percent below 12 years ago, when Bloomberg won his first victory. Voters supposedly so eager for change this year didn’t show that eagerness by voting. And a slim majority of the people who did vote—51 percent—told exit pollsters that they approve of Bloomberg, anyway. Though de Blasio’s victory margin was impressive, the scale of the win looks less stellar when put into recent historical context. As of early Wednesday, de Blasio had 752,605 votes—a hair shy of Bloomberg’s 753,089 votes in 2005.
So de Blasio did not win the votes of an unprecedented number of New Yorkers. And many of those who did vote for him also supported Bloomberg. That doesn’t mean that they like everything Bloomberg did. But there’s no evidence here of a progressive tsunami.
What about de Blasio’s career? The tabloid press paid a great deal of attention to de Blasio’s visits to communist Nicaragua and the Soviet Union as a young man. More recently, however, de Blasio worked as a HUD staffer under Andrew Cuomo, and as campaign manager for Hillary Clinton. De Blasio took liberal positions during his tenure on the city council, particularly on symbolic issues involving gay rights. But this is not the resume of a professional radical.
It’s true that de Blasio made “a tale of two cities” the central theme of his campaign. As many observers have pointed out, however, he lacks the authority to enact his signature proposals: a tax increase on high earners, to be used to fund universal pre-K. Nothing’s impossible, but the chances of the state legislature approving such a tax hike are slim. The same goes for several of de Blasio’s other ideas, including a city-only minimum wage higher than the state’s minimum and the issuance of driver’s licenses to illegal immigrants.
The real issues under the de Blasio administration will be matters over which the mayor has some direct control. That means, above all, contracts with city workers, and policing. Will de Blasio blow the budget to satisfy public employee unions? And will he keep crime under control after eliminating stop-and-frisk?
I’m cautiously optimistic about public safety. New Yorkers didn’t like living in fear, and de Blasio is smart enough to know that he’s finished if crime returns. Stop-and-frisk is not the only weapon in the NYPD’s arsenal.
The unions are a bigger problem. New York’s short-term fiscal outlook is reasonably good, so it’s hard to imagine de Blasio playing Scrooge to faithful supporters.
Even so, de Blasio’s mayoralty is unlikely to be a revolutionary moment. As Walter Russell Mead has argued, the biggest risk is that he pursues the same high-tax, high-regulation, high-service style of government on which Bloomberg relied. Those policies have helped transform much of New York into a gleaming shopping mall, which is admittedly a lot better than an urban jungle. But they are a bad strategy for prosperity in the years and decades to come.
Niccolo Machiavelli is often described as arguing that morality has no place in politics. That’s not quite right. Machiavelli believes that morality is crucial to political success. He just thinks that the important thing is to seem to possess the moral virtues, rather than actually to practice them. In a famous passage of The Prince, Machiavelli puts it this way:
Thus, it is not necessary for a prince to have all the above-mentioned qualities in fact, but it is indeed necessary to appear to have them. Nay, I dare say this, that by having them and always observing them, they are harmful; and by appearing to have them, they are useful…
In a column for Bloomberg, Ramesh Ponnuru implicitly encourages social conservatives to take Machiavelli’s advice. Reflecting on the likely failure of Ken Cuccinelli’s campaign for governor of Virginia, Ponnuru observes that current governor Bob McDonnell won a big victory in 2009 even though he agrees with Cuccinelli on many social issues. So:
Why do they seem to be succeeding now when they failed then? It’s partly a matter of countenance: McDonnell was cheerful (if boring), and Cuccinelli often appears dour and argumentative…Another difference, though, is that Cuccinelli made his name as a conservative crusader, especially on social issues, where McDonnell made his as a bipartisan problem-solver. McDonnell’s Democratic critics had to dig up a 20-year-old grad-school thesis he had written to make him look out of the mainstream; Cuccinelli’s have more recent initiatives and statements to work with.
Ponnuru goes on to contrast Cuccinelli’s likely failure to win election tomorrow with Chris Christie’s likely success:
Socially conservative positions on hot-button issues don’t seem to be a deal-breaker even for the much more liberal voters of New Jersey. Christie has vetoed legislation to grant state recognition to same-sex marriage—a judge later ordered it, though Christie briefly appealed—and vetoed bills to fund Planned Parenthood five times.
Ponnuru’s conclusion is that social conservatives shouldn’t be too upset by Cuccinelli’s defeat, since McDonnell’s and Christie’s examples show that social conservatives are not necessarily losers in blue and purple states. That’s true, but the distinction between seeming and being is important here. Ponnuru is right that socially conservative views are not, in themselves, electoral poison. In other words, seeming to be a social conservative is not a problem—and may in some cases be good politics. Yet actually being one, in the sense of making serious attempts to promote socially conservative policies, is and will remain a serious obstacle to victory in places like Virginia and New Jersey.
Chris Christie is a good example of this dynamic. Christie knew quite well that his challenge to the gay marriage bill was purely symbolic, since the liberal state supreme court was certain to reinstate the law. What’s more, Christie dropped his opposition as soon as he could credibly claim that the court had forced his hand. This, too, was inevitable in a state in which a considerable majority of voters, including Republicans, favor gay marriage.
Christie’s Machiavellian approach isn’t popular with dedicated social conservatives. The National Organization for Marriage and the Family Research Council have both condemned Christie’s handling of gay marriage. But symbolic conservatism is popular with more moderate voters, who want to express disapproval of gay marriage and abortion, but are uncomfortable with policies that seem intrusive or intolerant.
The lesson of today’s election, then, will not be that social conservatives can compete in moderate and liberal areas if they offer more explicit and articulate defenses of their views. It’s that they can get away with expressing social conservative beliefs so long as they do nothing to suggest that those beliefs are likely to end up enshrined in law. Ponnuru points out that “If Christie wants to run for president, he may find that pointing this out is a low-cost way of appealing to a national constituency that matters a lot in his party.” Somewhere, Machiavelli smiles.
In a post for Prospect, Christopher Fear asks why academic political theory is so remote from political practice. He concludes that it’s because political theorists devote themselves to eternal riddles that he dubs “Wonderland questions” rather than today’s problems. Consider justice, perhaps the original topic of political theorizing:
One of the central questions of academic political philosophy, the supposedly universal question “What is justice?” is a Wonderland question. That is why only academics answer it. Its counterpart outside the rabbit-hole is something like “Which of the injustices among us can we no longer tolerate, and what shall we now do to rectify them?” A political thinker must decide whether to take the supposedly academic question, and have his answers ignored by politicians, or to answer the practically pressing question and win an extramural audience.
Fear is right about the choice that political theorists face between philosophical abstraction and making an impact on public affairs. But he doesn’t understand why they usually pick the former. The reason is simple. Academic political theorists ask academic questions because…they’re academics.
In other words, political theorists are members of a closed guild in which professional success depends on analytic ingenuity, methodological refinement, and payment of one’s intellectual debts through copious footnoting. They devote their attention to questions that reward these qualities. Winning an extramural audience for political argument requires different talents, including a lively writing style, an ear for the public discourse, and the ability to make concrete policy suggestions. But few professors have ever won tenure on the basis of those accomplishments.
Another reason academic political theorists avoid the kind of engagement Fear counsels is that they have little experience of practical politics. Most have spent their lives in and around universities, where they’ve learned much about writing a syllabus, giving a lecture, or editing a manuscript—but virtually nothing about governing or convincing lay readers. How does expertise on theories of distributive justice, say, prepare one to make useful suggestions about improving the healthcare system? Better to stick with matters that can be contemplated from the comfort of one’s desk.
In this respect, political theorists are at a considerable disadvantage compared to professors of law or economics. Even when their main work is academic, lawyers and economists have regular chances to practice in the fields they study. Within political science, many scholars of international relations pass through a smoothly revolving door that connects the university with the policy community. Political theorists have few such opportunities.
Fear points out that it wasn’t always this way. Before the 20th century, many great political theorists enjoyed extensive political influence. But Fear forgets the main difference between figures like Machiavelli, Locke, Montesquieu, Madison, Burke, Hume, Mill, or Marx and their modern epigones. The former were not professors. Although all were devoted to philosophical truth as they understood it, they were also men of affairs with long experience of practical politics.
The “brilliant and surreal tragedy of academic political theory,” then, is not that political theorists have been diverted into the wrong questions. It’s that political theory is an uncomfortable fit with the university. Academic political theorists gravitate toward the kind of questions that career scholars are in a position to answer.
August 20th was the birthday of the beloved “weird fiction” author H.P. Lovecraft. Like many writers, I confess that I failed to honor the occasion. But Peter Damien did notice that Lovecraft would have turned 123 this year. He wonders why anyone cares:
Because the fact is…he was a godawful writer. He was so bad. I really cannot stress this enough. I’m aware that the quality of a writer’s fiction is very much a matter of personal taste and not objective (and those people who mistakenly believe it is objective and matches up to their own tastes are always wrong). Still, I think we can safely agree that he was really awful as a writer, given that even people who are fans of Lovecraft don’t seem to defend his writing very much.
Damien’s aesthetic judgment is irreproachable. There’s not much to be said in defense of passages like this one from “The Call of Cthulhu”:
Cthulhu still lives, too, I suppose, again in that chasm of stone which has shielded him since the sun was young. His accursed city is sunken once more, for the Vigilant sailed over the spot after the April storm; but his ministers on earth still bellow and prance and slay around idol-capped monoliths in lonely places. He must have been trapped by the sinking whilst within his black abyss, or else the world would by now be screaming with fright and frenzy. Who knows the end? What has risen may sink, and what has sunk may rise. Loathsomeness waits and dreams in the deep, and decay spreads over the tottering cities of men. A time will come – but I must not and cannot think! Let me pray that, if I do not survive this manuscript, my executors may put caution before audacity and see that it meets no other eye.
Yet Damien’s definition of good writing is too narrow. He argues that Lovecraft’s “ideas were themselves amazing things. It’s just that Lovecraft lacked the capability to do anything useful with them himself.” If that were simply true, however, no one would read Lovecraft—or remember anything that he wrote. Lovecraft was a terrible crafter of sentences and had a distinctly brute-force approach to exposition. Despite these shortcomings, the fictional universe he created is unforgettable, right down to the ludicrous pseudo-languages he invented for his various creations.
Lovecraft’s profound influence as a creator of worlds suggests that he was a better writer, in the crucial sense of articulating and communicating his ideas, than his more technically accomplished competitors. To criticize his stilted dialogue or Gothic affectations is to miss the point. Clumsy as he is, Lovecraft is remarkably good not only at transporting his readers to places that don’t exist, but at bringing them back with mementos from the journey. What more can we ask of a writer in the overlapping group of genres that includes SF, horror, and fantasy?
Lovecraft has a recent counterpart in George R.R. Martin, the author of the Game of Thrones series. Like Lovecraft, although in a rather different style, Martin writes terrible sentences. But he has other virtues, particularly the ability to balance and pace dozens of parallel storylines and viewpoints.
I’m not suggesting that Lovecraft or Martin belong to the first rank of English literature. Still, I’d rather read their stuff than exercises in technical perfection inspired by, say, Raymond Carver. Is it “good” writing? Who cares. Ph’nglui mglw’nafh Lovecraft R’lyeh wgah’nagl fhtagn!
Jamelle Bouie argues that conservatives are simultaneously obsessed with and oblivious to race. Citing Glenn Beck’s accusation that President Obama hates white people and some conservatives’ public delight in the Zimmerman verdict, Bouie contends that conservatives have adopted a distorted and distorting understanding of racism according to which “anyone who treats race as a social reality is a racist.” It follows that:
Because Obama acknowledges race as a force in American life—and because he even suggests that there are racists among us—he becomes the “real racist,” a construction designed to give conservatives moral high ground, while allowing them to insult Obama. After all, for them, “racist” is the worst accusation in American life.
Bouie is right to criticize the naïveté about race that Dan McCarthy mentioned in his defense of Jack Hunter, which is characteristic of talk radio and other political entertainment. But he misunderstands the conceptual frame that many conservatives apply to these issues.
The background assumption in many conservative arguments about criminal justice or affirmative action is not precisely that any acknowledgement of “race as a force in American life” is racist. Rather, it’s that racism refers only to the kind of eye-popping bigotry recently on display in the film “Django”.
Conservatives correctly observe that this kind of overt hatred is rare today. They wrongly conclude from this that legacies of slavery and segregation are not relevant to modern life–and that anyone who says they are must therefore have ulterior motives.
Jonah Goldberg offers a representative sample of this view. In a post several months ago, Goldberg argued that racism “should be defined as knowing and intentional ill-will or negative actions aimed at an individual or group solely because of their race.”
Note the qualifiers: “knowing and intentional”; “ill-will”; “solely”. According to Goldberg, racism is limited to conscious malice independent of any non-racial considerations. And racism, on this definition, is no longer a big problem.
But this definition seriously obscures the role of race in American society, past and present. To mention only an obvious defect, it excludes the ideas about black inferiority that informed the “positive good” defense of slavery. John C. Calhoun was not Calvin Candie, the psychopathic plantation owner in “Django.” But it is obtuse to deny his racism on the grounds that he believed the slave system was beneficial to blacks.
In more contemporary inquiries, the restrictive definition of racism Goldberg suggests conceals systematic inequities in the economy and other spheres of activity. One need not regard every racially disproportionate outcome as the result of discrimination to understand that it is not simply a coincidence that blacks, who have within living memory been excluded by law and custom from the vehicles of upward mobility, tend to be poorer and less educated than whites. Colorblindness on these issues is more like simple blindness.
Clumsy as it was, Rand Paul’s speech at Howard University in April was a step toward more serious conservative reflection on race. Although he relied implicitly on a definition of racism as conscious bigotry, Paul at least acknowledged that the bigotry of the past has had unconscious and enduring consequences, which have to be the starting point for arguments about policy. The task for conservatives is to make a plausible case that the policies they favor will be more effective in ameliorating those consequences than either the status quo or progressive alternatives. Until then, our black fellow citizens will be correct in their judgment that we are either hopelessly naïve or playing dumb about the unique and heavy burdens that they continue to bear.
The Atlantic ran a curious article last week. Under the headline “Why Americans All Believe They Are Middle Class”, Anat Shenker-Osorio argues that Americans are encouraged by politicians, the media, and even colloquial language to believe that they belong to the middle class, when in fact they do not. Shenker-Osorio observes:
The puzzle is why so many who do not fit the category (as median family income reported as just above $50,000 defines it) believe they do. Why does the description “middle-class nation” continue to feel appropriate, desirable, or both?
There’s not much of a puzzle here. According to a Pew poll that Shenker-Osorio cites in the piece, Americans don’t all think they’re middle class. On the contrary, they’re pretty good judges of where they belong in the class structure, at least as defined by the distribution of income.
In 2012, the poll showed 32 percent of Americans identifying as lower class or lower-middle class, 49 percent identifying as middle class, 15 percent identifying as upper-middle class, and 2 percent identifying as upper class. That correlates reasonably well with the 25 percent of American households with incomes of less than $25,000 a year, the 50 percent who earn between $25,000 and about $90,000 a year, the 20 percent who earn between $90,000 and $180,000, and the 5 percent who earn more than that.
The similarities are particularly striking when it comes to the top tier. The top 2 percent of American households have incomes of more than $450,000. That seems like a plausible cutoff for the 2 percent of Americans who call themselves “upper class”.
Regional variations are extremely important in understanding differences at the margins. Well-paid professionals are clustered in cities like New York and Washington. They tend to call themselves middle class or upper-middle class because taxes and high prices, especially for housing, constrain their standard of living. The reason is not that they’re trying to keep up with the Kardashians. In low-cost areas, on the other hand, it isn’t necessarily crazy to consider oneself middle class on a household income of, say, $30,000.
Moreover, class is about more than income. As the great sociologist Robert Nisbet observed, the concept of class developed to make sense of Victorian Britain. It referred partly to differences of wealth and economic activity: businessmen were middle class no matter how rich they were. But it also indicated different mores, speech, and even physical characteristics. When Britain imposed military conscription during the First World War, it discovered that working-class recruits were significantly shorter than soldiers from the upper class.
Charles Murray and others have argued that the United States is moving toward a similarly visible class system. But it’s still not easy to tell at a glance (or a listen) to which class Americans belong. Most look and sound like they belong somewhere in the middle. That’s the historically consistent and still distinctively American phenomenon that the Pew poll reflects.
So it’s reasonable to see the U.S. as a middle-class society, even though far from all Americans identify as middle class. The question is whether it will remain one. The Pew numbers show the number of Americans who identify as middle class declining from 53 percent in 2008 to 49 percent in 2012. Far from indicating that Americans are victims of false consciousness, that figure suggests that the norms and expectations that defined American social reality in the 20th century are eroding fast.
Among the interesting debates of a slow summer is the argument about Ben Bernanke’s replacement as Fed chairman. The White House has not yet indicated its choice. But the main contenders are thought to be Larry Summers, the former Treasury Secretary and Harvard president, and Janet Yellen, the current vice chair of the Fed.
Both Summers and Yellen have the credentials we expect for the position. But Yellen is favored by progressives because she is seen as an advocate of softer money and stricter financial regulation. In addition, many like the idea of appointing a woman to run the Fed.
I have nothing to contribute to the technical aspects of this debate. And the gender issue has more to do with academic politics than any significant public concern.
Nevertheless, the fact that Summers is even in the running for such an important job despite a record of bad judgment tells us something important about the norms that define America’s current ruling class. While the old WASP elite justified its influence on grounds of prudence and respectability, the ostensible meritocracy that replaced it sees intellectual brilliance as the authoritative claim to rule.
My point is not that members of the midcentury Establishment always possessed the qualities they claimed, or that our current masters are actually the smartest guys in the room. Rather, it’s that our ideas about qualifications for power have been transformed–and that Summers embodies this transformation.
Summers is a notorious jerk and slob, and has been wrong about almost every major economic issue for the last twenty years. But he is, by most accounts, a bold and provocative thinker. For his supporters, that outweighs possible limitations of character or judgment.
I am not convinced of Summers’ originality. On big issues such as the repeal of Glass-Steagall or derivatives regulation, he seems a reliable advocate of the conventional wisdom that links Ivy League economics departments to Wall Street.
Yet the novelty of Summers’ ideas isn’t the interesting thing in this debate. Rather, it’s the assumption built into his supporters’ arguments: that aptitude for politics is closely allied with academic ability. The old elite proverbially understood public service on the model of a country club. The new one sees it as a sort of massive graduate seminar.
Although they are rival conceptions of politics, these views are not absolutely opposed. In fact, both are obsessed with pedigree. They just look for different qualities in their exemplary specimens. The archetypal WASP wanted to know where a job applicant prepped; his meritocratic successor will ask where she did her Ph.D. (It is assumed that all the players in these scenarios were undergraduates at Harvard.)
Both conceptions also have blindspots. The country club model elevates superficial collegiality over actual results. The graduate school vision, by contrast, prizes abstract knowledge at the expense of practical wisdom. American history is littered with failures by each approach.
On the whole, however, the defects of gentility are less dangerous to common interests than the defects of intelligence run amok. The great problem with the application of the academic mind to politics is that policies that work in theory are notoriously ineffective in practice. Summers is, without question, a smart man. As Fed chairman, however, he would be a brilliant mistake.
The University of Wisconsin political scientist Andrew Kydd offers an interesting critique of the spread of concealed-carry and stand-your-ground statutes. Departing from a Weberian definition of the state as a monopoly on the legitimate use of force, Kydd suggests that “[t]he United States is now embarked on an unprecedented experiment, in that it is a strong state, fully capable of suppressing private violence, but it is increasingly choosing not to.” Kydd attributes loosening restrictions on the possession and use of guns to a libertarian fantasy that “the absence of the state will lead to a paradise for individuals.” But he follows Hobbes in predicting grimmer consequences: the replacement of violence under law by anarchic clashes between mercenaries, clans, and vigilantes.
I share Kydd’s concern about the decriminalization of gun violence, which looks to me like a risky solution to an exaggerated problem (violent crime has been falling for years). But his thinking about the relation between violence and the state is too Hobbesian to be convincing. For Hobbes, the “state of war” and the juridical state were mutually exclusive; violence was subject either to monopoly control or anarchic diffusion. For Kydd, similarly, the choice is between, say, the modern UK, in which firearms are very tightly regulated, and Afghanistan, where the strong do what they can, and the weak do what they must.
In the history of political thought, however, this is a false alternative. Following Hobbes, Kydd ignores the (small “r”) republican model of organized violence, in which the law is executed by an armed citizen body. The classical republic is not a state in the Weberian sense because it lacks a standing army or regular police force. On the other hand, it is not simply anarchic: citizens who possess the means of coercion cooperate on a relatively informal basis to enforce laws whose authority they all recognize.
The republic, in this sense, has always been more ideal than reality. But it is an ideal that has played an important role in the development of American political culture, particularly in connection with guns. For the republican tradition, particularly as transmitted by the Country party in British politics, a well-armed, self-organized citizenry poses less of a threat to safety and liberty than a strong state. That is the reasoning behind the 2nd Amendment.
There are serious and perhaps insurmountable obstacles to the revival of this tradition today. Apart from technological changes since the 18th century, the republican theory of violence presumes a relatively small, mostly agrarian society with a strong conception of public virtue. The contemporary United States, by contrast, is more like a multinational empire: a political form that has historically required much more coercive practices of government. Even so, the republican tradition reminds us that Leviathan is not the only possible source of order. We can acknowledge its necessity without forgetting its evil.