State of the Union

Democrats Are Not Socialists, and Neither Is Bernie Sanders

Debbie Wasserman Schultz was recently mocked for flubbing a question from Chris Matthews. Asked the difference between Democrats and socialists, Wasserman Schultz tried to talk about the difference between Democrats and Republicans.

The exchange doesn't reflect well on Wasserman Schultz, who plows through her talking points as if the question had never been asked. But it has a pretty easy answer. Historically, the essential feature of socialism is the demand for public ownership or direct government control of major sectors of the economy. A bit more abstractly, socialists have aimed to eliminate considerations of profit from as many areas of life as possible. They used to describe this goal as "revolution," which didn't necessarily mean violence.

The modern Democratic Party isn’t about revolution. Since FDR, Democrats have consistently supported regulated competition and redistributive policies that direct private profits toward the relative losers in market exchange. These strategies are better understood as “welfarism” than socialism. A concrete example? Compare Britain’s NHS before Thatcher’s reforms to Medicare…or Obamacare, for that matter.

There’s something of a spectrum between these positions. Even so, you don’t meet many socialists in mainstream politics these days. Most “Socialist” parties in Europe abandoned their revolutionary dreams a long time ago. And the self-declared socialist Bernie Sanders offers a welfarist agenda that’s barely updated from the ’50s.

So no, Democrats aren’t socialists. We might be able to have a less stupid discussion of their actual positions if welfarists, and their critics, knew the difference.

Samuel Goldman is assistant professor of political science at The George Washington University.

It’s the Donald’s World, We’re Just Living In It


In late summer and early autumn of 1858, Abraham Lincoln and Stephen Douglas conducted seven debates around the State of Illinois. Thousands of people attended the contests, at which each speaker got 90 minutes to make his case. The result is among the classics of American political oratory. The style is folksy and occasionally silly. But the language is clear and honest, and the arguments are deadly serious.

Last night’s spectacle wasn’t like that. Although we call it a debate, it was really a group interview, with candidates answering questions from the moderators rather than developing and contrasting their positions. This was probably unavoidable, given the number of candidates and short attention spans of viewers. But the reality show format doesn’t inspire pride in American civic life.

Pious lament aside, however, the debate was vastly entertaining. I watched with a boisterous crowd of undergraduates at an ISI Honors Conference. They were, by turns, amused, provoked, and inspired. As the discussion continued late (too late) into the night, they also resisted any consensus about the winners. That’s important: pretty much all the candidates had at least one effective moment.

Rubio was probably the biggest surprise. Although he's been overshadowed recently, he was the most likeable and relaxed man on the stage. John Kasich made a pretty good impression. He got tangled up in statistics and his dad's resume, but many of the conservative students I watched with were sympathetic to his faith-based argument for expanded healthcare. Huckabee also remains fantastic at doing what he does.

But the big story is still Trump. The Fox News commentators seemed pretty sure that his schtick turned people off. We'll get a sense from polling over the next few days whether that's true. For my part, I thought his responses to several attempted gotcha questions were pretty successful. Remember: a lot of Americans have had trouble with lenders or wavered in their party affiliations. What looks like irresponsibility to media and activist crowds might seem pretty normal beyond those circles.

The biggest losers were Bush, Walker, and Paul. They seemed uncomfortable and didn’t offer any memorable lines. Bush can afford a rope-a-dope strategy while the field clears. Walker and Paul need to provide justifications for their campaigns.

Last come the C’s. I thought Cruz and Carson made little impression. Carson’s a nice man who doesn’t belong in the race. And Cruz is expert at impressing his supporters… and no one else. Several people I spoke to thought Christie turned in a strong performance. Maybe. But I was not impressed by his hug-a-thon with Rand Paul.

The truth is, debates don’t matter much to election results, and early debates matter even less. So it’s not worth a lot of mental energy to analyze the details. As long as Trump remains in the race, he’ll be the center of attention, encouraging the carnival atmosphere. It’s the Donald’s world, we’re just living in it.

Samuel Goldman is assistant professor of political science at The George Washington University.

Can America Learn from German Universities?

Until Hitler took power, German universities were the envy of the world. They had the best facilities, offered the best training, and employed the best researchers. Between 1901 and 1932, scholars based in Germany won 33 Nobel Prizes for academic work (counting the historian Theodor Mommsen, but excluding other winners in Literature). Americans won just five.

The academic balance of power has changed. American universities dominate international rankings. And German officials periodically warn of a “brain drain” toward the United States. It’s a sad decline for the land of Humboldt, Hegel, and Heisenberg.

You might reasonably conclude that German universities have something to learn from their American counterparts. The Notre Dame professor Mark Roche made that case in a recent book. Last week, he turned the argument around. In a piece for FAZ (link in German), Roche suggests that American universities emulate seven features of German universities: the intellectual independence they offer students; the seminar system; a place of honor for the traditional lecture; double majoring; professors who take a broad view of their subject; respect for the humanities; and a generous attitude toward academic training for non-academic careers.

Roche could have mentioned another appealing aspect of German universities: they’re much cheaper to run. As Rebecca Schuman reminds progressives impressed by the fact that they don’t charge tuition,

German universities consist almost entirely of classroom buildings and libraries—no palatial gyms with rock walls and water parks; no team sports facilities (unless you count the fencing fraternities I will never understand); no billion-dollar student unions with flat-screen TVs and first-run movie theaters. And forget the resort-style dormitories. What few dorms exist are minimalistic, to put it kindly—but that's largely irrelevant anyway, as many German students still live at home with their parents, or in independent apartment shares, none of which foster the kind of insular, summer-camp-esque experience Americans associate closely with college life (and its hefty price tag)…There is also little in the way of academic advising, which in the U.S. is now so hands-on that it has become its own cottage industry within the administration. Over there, you're expected to know what you need to take, and to take it.

Roche provides useful reminders of the shortcomings of American higher education, which is quite expensive and not all that effective for undergraduates. But I’m not convinced Germany has many lessons to teach.

To begin with, American colleges and universities already do several of the things Roche recommends. Double majors, for example, are pretty common.

Some of Roche’s other suggestions are in tension with each other. You can emphasize small seminars and traditional lectures, but probably not both. In any case, the relation between lectures and specialized study in Germany is determined by a model of secondary education that few Americans would accept. German students are ready for advanced work because they attended tracked high schools that rigorously separate the college-bound minority from those destined for trades.

Finally, funding structures make a difference. Because they depend on enrollment rather than direct subsidies, American universities have to compete for customers.  Although they don’t always pay off, football, fancy dorms, and other amenities that attract students are often attempts to balance the books.

The main problem, however, is that Roche thinks too much like a German. His argument implies that there’s just one model of well-run university. This approach goes back to Humboldt himself, who conceived the research university as a rational synthesis of ancient and modern, theory and practice, institution and individual.

American universities have never achieved this ideal, or even seriously pursued it. The truth is, the set of responsibilities they’ve acquired doesn’t make a lot of sense.

That’s not so terrible, though. What we lack in coherence we gain in diversity. In Germany, one university is about the same as another. Americans, on the other hand, can choose public or private, secular or religious, technical or humanistic, urban or rural, and so on. Rather than trying to fix colleges by making them more similar, we should resist standardization, whether it’s justified by economic, political, or even academic considerations. The Germans will always do that kind of thing better, anyway.

Samuel Goldman is assistant professor of political science at The George Washington University.

Another Misguided MOOC

Last week, Arizona State University and edX announced a new program to offer freshman instruction online. Unlike most MOOCs, these courses would offer graduation credits that students could use to continue at ASU or to transfer elsewhere. Although it questions the logistics, Walter Russell Mead's blog argues that "this kind of experiment is promising, and shows how the mainstreaming of MOOCs could help lower costs."

Lowering higher ed costs is an important goal, but MOOCification is the wrong way to go. The first reason, as Matt Reed points out, is that community colleges are already extremely cheap. ASU/edX  would charge $200 per credit. Yet students can take similar courses at Maricopa Community College for just $84 per credit.  Students in these courses would also have the benefit of “an actual instructor [to] provide actual guidance and feedback  throughout the course.”

Supporters of the ASU plan might observe that students who enroll in four-year colleges are more likely to get a degree than those with similar SAT scores who start at two-year institutions. But getting a degree is not an end in itself. Even romantics like me think it ought to promote students’ intellectual, cultural, and yes, economic flourishing after graduation.

Thanks to a survey by Gallup and Purdue, we now have a pretty good idea of which aspects of college students think helped them in their postgrad lives. Here are the summary results:

[Chart: summary results from the Gallup-Purdue Index survey]

So what matters to students, basically, is having personal relationships with professors and participating in extracurricular activities. In other words: the very experiences that MOOCs can’t provide, even if they’re taught by superstar lecturers.

As far as costs go, there’s good news and bad news in these results. The good news is that students don’t need posh dorms, elaborate food, and luxurious gyms. Although we can’t get the money back for monuments of indulgence that have already been built, universities can safely cut back on facilities in the future. If they’re worried that they’ll have trouble attracting paying customers without a lazy river, they might try emphasizing their commitment to what “science shows” really matters.

The bad news is that instructors who aren’t too overworked and stressed out to do real teaching and mentoring don’t come cheap. No one becomes an academic to get rich. In order to do their jobs, however, they need decent compensation, some job security, and reasonable teaching loads and research expectations.

Yet these are the costs that administrators and disruption theorists reliably attack. Somehow, there's always money for high-tech gimmicks and bigwigs' salaries… but not for the people who do the most important work. MOOCs may be useful in providing instruction in specific areas, particularly for adult students. But they're a distraction from the real problem of higher ed: how to offer serious instruction in real subjects to more of the students who want them, and to figure out something else to do with those who don't.

Samuel Goldman is assistant professor of political science at The George Washington University.


The Mirage of a Classless Society


In a recent post, Paul Krugman reiterated his view that conservative critics of the welfare state are petty authoritarians. Citing Corey Robin’s The Reactionary Mind, Krugman explains:

It’s fundamentally about challenging or sustaining traditional hierarchy. The actual lineup of positions on social and economic issues doesn’t make sense if you assume that conservatives are, as they claim, defenders of personal liberty on all fronts. But it makes perfect sense if you suppose that conservatism is instead about preserving traditional forms of authority: employers over workers, patriarchs over families. A strong social safety net undermines the first, because it empowers workers to demand more or quit; permissive social policy undermines the second in obvious ways.

In contrast to conservatism, Krugman argues:

…modern liberalism is in some sense the obverse — it is about creating a society that is more fluid as well as fairer. We all like to laugh at the war-on-Christmas types, right-wing blowhards who fulminate about the liberal plot to destroy family values. We like to point out that a country like France, with maternity leave, aid to new mothers, and more, is a lot more family-friendly than rat-race America. But if “family values” actually means traditional structures of authority, then there’s a grain of truth in the accusation. Both social insurance and civil rights are solvents that dissolve some of the restraints that hold people in place, be they unhappy workers or unhappy spouses. And that’s part of why people like me support them.

I’ve written about Robin’s widely-misunderstood argument in the past. But Krugman’s post is a good opportunity to revisit and summarize my critique. In short, Robin is right that classic conservative theorists were defenders of  economic, social, and political hierarchy against modern liberation movements. But he misunderstands the basis of the position.

The conservative position has never been simply that a hierarchical society is better than an egalitarian one. It’s that an egalitarian society is impossible. Every society includes rulers and ruled. The central question of politics, therefore, is not whether some will command while others obey. It’s who gives the orders.

Radical leftists understand this. That’s why Lenin’s “who, whom?” question became an unofficial motto of Bolshevism. The Bolsheviks promised that a classless society would one day emerge. In the meantime, however, they were open and enthusiastic practitioners of power politics.

Modern liberals find this vision upsetting. So they pretend that their policies are about reducing inequality and promoting freedom rather than empowering some people at the expense of others. They associate inequality with wealth and freedom with liberation from religion and family. So they assume that a society in which rich people, churches, and fathers have less power is ipso facto freer and more equal.

Notice how Krugman’s hostility to these traditional hierarchies blinds him to other kinds of inequality. He praises France because social insurance and stronger protections for employees make it easier for mothers and workers to stand up to patriarchs and bosses. Do they really make France “fairer and more fluid”? In cultural terms, perhaps. But not politically or even economically.

The defining feature of French life is that the welfare and regulatory state Krugman admires is administered by graduates of elite educational institutions. These aristocrats of the universities and civil service are geographically concentrated in Paris and anecdotally quite “inbred.” France is not a class society in the Marxist sense. But it could be described with only minimal exaggeration as an ENAligarchy.

Krugman doesn't see the énarques as a ruling class that needs to be knocked down a peg, because their authority isn't traditional. They wield power over other people's lives because they got good grades, not because they have a lot of money or are heads of households or leaders of religious communities. But academic meritocracy is not the same thing as a fluid and fairer society. It's certainly no fairer that some people are lucky enough to be smart than that others are good at making a fortune.

And France is no star when it comes to economic mobility. According to a review of the literature by the economist Miles Corak, France joins the U.S. and the UK as the Western countries with the least intergenerational mobility. Krugman also doesn’t mention that France is a very good place to have a job, but not so hospitable to people looking for work. That’s especially a problem for young people who didn’t go to the best schools.

There are serious arguments in favor of rule by a highly-trained administrative class within a moderately redistributive capitalist economy. Those arguments were a crucial source of the modern liberalism that Krugman endorses, and have recently been reiterated by Francis Fukuyama. What modern liberals really want, however, isn't freedom or equality—terms that have no meaning before it's determined for what and by whom they will be enjoyed. As conservatives have long understood, it's a society in which people like themselves and their favored constituencies have more power while the old elites of property, church, and family have less.

Samuel Goldman is assistant professor of political science at The George Washington University.

What Libertarians (and Conservatives) Don’t Understand About Poverty


You can’t spend much time in right-of-center circles without hearing, often in the comfort of an open bar, that America’s poor don’t have it too bad. Yes, there are about 45 million people below the official poverty line. But that doesn’t mean that they’re suffering under the conditions we see in photos of the dustbowl or the old industrial slums. A Heritage Foundation report observes that “When LBJ launched the War on Poverty, about a quarter of poor Americans lacked flush toilets and running water in the home. Today, such conditions have all but vanished. According to government surveys, over 80 percent of the poor have air conditioning, three quarters have a car, nearly two thirds have cable or satellite TV, half have a computer, and 40 percent have a wide screen HDTV.”

Megan McArdle makes a version of the same argument in her comment on Joni Ernst’s State of the Union response. She reminds Internet snarks who mocked Ernst’s story about using plastic bags to protect her only pair of shoes that this was a common practice until pretty recently. According to McArdle, “we forget how much poorer we used to be, and then we forget that we have forgotten.” These days, even people without much money enjoy a material abundance of which their grandparents could only have dreamed. (Rod Dreher remembers the story of his own family here.)

That’s true, as far as it goes. Pretty much anything that’s made in a factory is cheaper and higher-quality than it used to be. I admit to moments of SWPL enthusiasm for craftsmanship (or the Brooklyn facsimile). But let’s get real: expanded access to consumer products is a good thing.

But that doesn’t mean poverty is exaggerated by ungrateful whiners. Goods and services that depend on skilled human labor cost more than they used to. Curiously, McArdle relies on figures from 1987 to make her case that American households face lighter expenses for necessities than they used to. That ignores the increase in prices for childcare, healthcare, and higher education over the last decade or so.

So people can accumulate possessions while maintaining a relatively low standard of living. Indoor plumbing won’t take care of your kids, and an Xbox won’t send them to college. The poor are also more likely to suffer from “diseases of affluence” such as obesity and diabetes. Unlike the truly affluent, however, they can’t afford to have them treated.

Material deprivation is also not always the most wrenching aspect of poverty. As Karl Polanyi argues in his study of the Industrial Revolution, the lack of meaningful work and a secure social position can be worse than low wages or high consumer prices.

It's important to question depictions of Dickensian poverty in the media, which often focus on exceptional cases. And we should resist nostalgia for a mythical time when folks didn't have much but their dignity. But the grinding, uncertain lives of poor Americans today are a problem that new shoes and air conditioning won't solve.

Literary Addendum: McArdle draws several examples of the bad old days from Laura Ingalls Wilder’s Little House novels for children. She claims:

…what really strikes you is how incredibly poor these people were. The Ingalls family were in many ways bourgeoisie: educated by the standards of the day, active in community leadership, landowners. And they had nothing.

This is a serious misreading of the books. As Wilder’s autobiography makes clear, the reason that the Ingalls family seems poor is that they were poor. There was nothing bourgeois about them, except perhaps Ma’s (relatively) advanced education. Even in the idealized version presented in the books, the Ingallses fail, time and again, to realize their dream of becoming independent farmers. That’s why the story ends, rather tragically, with Pa working as a clerk in a railroad town, a fate that he’d dragged his family thousands of miles to avoid.

Yes, Political Correctness Really Exists

Marcuse family photo. CC BY-SA 3.0, via Wikimedia Commons.

Jonathan Chait burned up the Internet this week with his critique of so-called political correctness. Among many responses, Amanda Taub's stands out for its denial of Chait's basic premise. According to Taub:

…there’s no such thing as “political correctness.” The term’s in wide use, certainly, but has no actual fixed or specific meaning. What defines it is not what it describes but how it’s used: as a way to dismiss a concern or demand as a frivolous grievance rather than a real issue.

This is a curious response. Sure, people use the term in different ways. But Chait provides a perfectly serviceable definition: “political correctness is a style of politics in which the more radical members of the left attempt to regulate political discourse by defining opposing views as bigoted and illegitimate.”

I don’t think Taub would deny that this political style exists, although one may quibble with some of Chait’s examples. What she objects to is the way Chait describes it. In her view, calling denunciations of putatively bigoted opinions “political correctness” allows their advocates to avoid taking those criticisms seriously. So, in a feat of rhetorical jujitsu, Chait becomes guilty of the same tendency he opposes: ruling views he rejects out of respectable conversation.

This dispute is an object lesson in the pernicious effect of political correctness—or whatever you want to call it—on intellectual and political debate. Arguments about ideas devolve into wrangling about words. The conduct of politics by means of semantics sometimes reaches comic heights. In his piece, Chait reports an incident in which,

UCLA students staged a sit-in to protest microaggressions such as when a professor corrected a student’s decision to spell the word indigenous with an uppercase I—one example of many “perceived grammatical choices that in actuality reflect ideologies.”

But there’s nothing important at stake in the phrase “political correctness”. So let’s drop it, at least provisionally, and focus on the phenomenon that Chait describes. Contrary to popular perception, it’s not just a product of youthful exuberance among student activists or the ease and enforced brevity of Twitter. It’s rooted in a philosophical critique of the liberal theory of discourse.

Although it has precedents in Kant, this theory received a definitive formulation in John Stuart Mill's On Liberty. According to Mill, the truth is most likely to emerge from unrestricted debate. Although Mill did not use the metaphor, such a debate is conventionally described as a "marketplace of ideas," in which vendors are free to offer their wares and customers are at liberty to purchase only the best goods.

There are two problems with this image. The first is that it assumes that consumers of ideas are in a position to judge which most closely approximate the truth. But that may not be the case. In order to make good purchasing decisions, customers need a certain level of background information and capacity for comparison.

In order to make the intellectual market function properly, Mill proposed that participation be restricted to “human beings in the maturity of their faculties.” In the most obvious sense, that means that we should not rely on judgments by children or the insane.

But Mill did not stop with ruling out those who had not yet reached the age of majority, or whose reason was in some way deranged. He also argued that the liberty of thought and discussion was not appropriate for “those backward states of society in which the race itself may be considered as in its nonage.” When it comes to “barbarians,” Mill reasoned, it is appropriate to use coercion, just as it is appropriate for parents to monitor their children’s reading. The implication for contemporary politics was that Britain was justified in practicing a kind of tutelary imperialism.

That conclusion might rule Mill off the syllabus at some universities today. But it actually reflects an important and potentially damaging tension in his argument. Mill defends the unrestricted exchange of ideas. Yet he also accords to those he judges fully rational the authority to determine who gets to participate in that exchange—and to enforce the education of those who don’t make the cut. For Mill, in other words, intellectual freedom presupposes a period of enlightened despotism.

The second problem emerges more directly from the quasi-commercial dimensions of Mill’s epistemological model. Mill assumed that all normally-constituted adults who had received a basic education were capable of reliably picking and choosing among intellectual offerings. That assumes they are unaffected by the sellers’ attempts to influence their choices.

But consumer preferences are influenced by advertising, reputation, the way products are presented, habit, and so on. In practice, it's not easy to get shoppers to consider buying something new and different, even if it really is better than its competitors. Most of the time they buy the same products from familiar brands.

Some Marxists call the factors that interfere with judgment "false consciousness." They argue that false consciousness accounts for the failure of revolutionary ideology to attract adherents among the working class in the developed world. On this view, it wasn't outright repression or censorship that prevented the workers from adopting a Marxist perspective. It was the subtle and concealed influence of capital on their ability to make their own decisions.

These tensions in Mill’s defense of intellectual freedom were recognized in the 19th century. What we now call political correctness was first articulated in the 1960s by the brilliant German-born philosopher Herbert Marcuse. Marcuse’s achievement was to turn Mill’s argument for free discussion, at least in a modern Western society, against its explicit conclusion.

Marcuse undertakes this inversion, worthy of a black belt in dialectical reasoning, in the 1965 essay "Repressive Tolerance." In it, Marcuse argues that the marketplace of ideas can't function as Mill expected, because the game is rigged in favor of those who are already powerful. Some ideas enjoy undeserved appeal due to tradition or the prestige of their advocates. And "consumers" are not really free to choose, given the influence of advertising and the pressures of social and economic need. Thus the outcome of formally free debate is actually predetermined. The ideas that win will generally be those that justify the existing order; those that lose will be those that challenge the structure.

This prong of the argument is close to the standard critique of false consciousness. But Marcuse links it to Mill’s distinction between those who are and are not capable of participating in and benefitting from the unrestricted exchange of ideas.

According to Marcuse, many people who appear to be rational, self-determining men and women are actually in a condition of ideologically enforced immaturity. They are therefore incapable of exercising the kind of judgment that Mill's argument presumes. In order to make debate meaningful, they need to be properly educated. This education is the responsibility of those who have already shown themselves to be capable of thinking for themselves—in this case, left-wing intellectuals rather than Victorian colonial administrators.

One might wonder how either Mill or Marcuse could be so sure that their kind of people knew what was best for others. The answer is that they regarded the truth as obvious. Mill was convinced that progress had demonstrated the obsolescence of non-Western culture, just as it had exposed the falsity of geocentric astronomy. In a postscript to the original essay, Marcuse expressed similar confidence in the rationality, if not the linear character, of history:

As against the virulent denunciations that such a policy would do away with the sacred liberalistic principle of equality for ‘the other side’, I maintain that there are issues where either there is no ‘other side’ in any more than a formalistic sense, or where ‘the other side’ is demonstrably regressive…

In Marcuse's hands, Mill's justification of enlightened despotism in undeveloped societies becomes a justification of enlightened despotism over the majority of undeveloped individuals. The central difference between Mill and Marcuse is that the former believed that the necessity of despotism had passed, at least in the West. Marcuse contended that intellectual freedom had to be deferred until more people were likely to develop the correct opinions:

…the ways should not be blocked on which a subversive majority could develop, and if they are blocked by organized repression and indoctrination, their reopening may require apparently undemocratic means. They would include the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc. Moreover, the restoration of freedom of thought may necessitate new and rigid restrictions on teachings and practices in the educational institutions which, by their very methods and concepts, serve to enclose the mind within the established universe of discourse and behavior—thereby precluding a priori a rational evaluation of the alternatives. And to the degree to which freedom of thought involves the struggle against inhumanity, restoration of such freedom would also imply intolerance toward scientific research in the interest of deadly 'deterrents', of abnormal human endurance under inhuman conditions, etc.

This passage is remarkable for the degree to which it prefigures so-called political correctness. Marcuse's thought is that it is impossible for radical ideas to win a "free debate" in a society characterized by many forms of inequality. Therefore, debate should be restructured in ways that favor the weak and lowly. Marcuse goes on to speculate:

While the reversal of the trend in the education enterprise at least could conceivably be enforced by the students and teachers themselves, the systematic withdrawal of tolerance toward regressive and repressive opinions and movements could only be envisaged as the results of large-scale pressure which would amount to an upheaval.

Marcuse’s emphasis on students and professors encouraged the transformation of the universities that’s been exhaustively discussed by writers such as Roger Kimball. But his hopes for “large scale” pressure were disappointed until fairly recently, partly because the repressive tolerance thesis is as offensive to ordinary people as it is attractive to academics.

The advent of social media changed that dynamic. In addition to tilting public discourse toward the young, who are more likely to use these platforms, they make it easier for those whom Marcuse frankly described as subversives to organize and target the withdrawal of tolerance.

To be clear, I’m not suggesting that Gawker commenters are secret Marcusians. Actually, they’d probably benefit from reading this extraordinarily learned, subtle thinker. But they have absorbed a simplified version of Marcuse’s critique of Mill. In Marcuse, this critique culminates in an endorsement of legal as well as social pressure to hasten progress:

Different opinions and 'philosophies' can no longer compete peacefully for adherence and persuasion on rational grounds: the 'marketplace of ideas' is organized and delimited by those who determine the national and the individual interest….The small and powerless minorities which struggle against the false consciousness and its beneficiaries must be helped: their continued existence is more important than the preservation of the rights and liberties which grant constitutional powers to those who oppress these minorities. It should be evident by now that the exercise of civil rights by those who don't have them presupposes the withdrawal of civil rights from those who prevent their exercise…

How long until his unwitting heirs come to the same conclusion?

Samuel Goldman is assistant professor of political science at The George Washington University.

Leo Strauss: Hawk or Dove?


The LORD is a man of war: the LORD is his name.
—Exodus 15:3

There is an old story that the Archangel Michael and the devil feuded over Moses’ remains. While Michael aimed to convey the prophet’s body up to heaven, Satan was determined to keep him buried in the dirt.

If the comparison is not impious, we may speak of a similar contest for custody of Leo Strauss. According to his admirers, Strauss earned a place among the angels by promoting Greek-inspired rationalism and a cautious liberalism. Strauss’s critics contend that he was a demonic figure, who encouraged his acolytes to disregard both scholarly probity and basic morality in favor of a Nietzschean will to power.

Strauss’s supporters held the upper hand so long as debate was focused on works that Strauss prepared for publication after his arrival in the United States in 1937. Yet they have struggled to explain the early works in German that have come to light over the last decade. These texts do not appear to be the work of a liberal rationalist. In a notorious letter to philosopher of history Karl Löwith, Strauss even expressed support for “the principles of the Right, fascist, authoritarian, imperialist principles…”

Robert Howse, who teaches law at New York University, is the latest combatant in the Strauss wars. In Leo Strauss: Man of Peace, Howse defends Strauss from his enemies while distancing him from some of his self-appointed friends. Howse acknowledges that Strauss flirted with extremism. But he argues that Strauss devoted the rest of his career to t’shuvah, a Hebrew word that is usually translated “repentance.”

Howse’s study has the merit of drawing on newly available sources from Strauss’s intellectual maturity: the archive of seminars made available by the Leo Strauss Center at the University of Chicago. And Howse is among very few writers on Strauss who are sympathetic to their subject without being sycophantic. Despite these virtues, I do not think Howse wins the battle for Strauss’s legacy, at least if this means distancing him from the politics of national self-assertion. That is because Howse does not draw the connection between Strauss’s early critique of liberalism and his lifelong Zionism.

Howse focuses on political violence. Departing from Strauss’s alleged influence on supporters of the Iraq War, Howse asks whether Strauss thought violence should be regulated by a normative standard or deployed according to its user’s interest. He answers that Strauss sought “a middle way between strict morality and sheer Machiavellian[ism].”

In itself, this conclusion is not very interesting. Every significant political theorist, including Machiavelli, has tried in some way to steer between the rocks of moral absolutism and political solipsism. Howse’s contribution is an argument about the character of the “middle way” that Strauss preferred. In courses on Thucydides, Kant, and Grotius from the 1960s, Howse finds Strauss praising the Nuremberg trials, United Nations, and nascent European Community. He argues that Strauss was essentially a Cold War liberal internationalist.

To make his case, Howse has to refute an interpretation of Strauss that has become dominant over the last decade or so. According to this interpretation, the most important influence on Strauss was the reactionary legal philosopher Carl Schmitt. In seminal works from the 1920s, Schmitt argued that the basis of politics is the distinction between friend and foe realized in mortal combat.

The Schmitt connection has been a centerpiece of attacks on Strauss since the mid-1990s. But Howse is more interested in confronting Strauss’s allies than in rehashing old debates. In particular, Howse accuses Heinrich Meier, the German scholar who edited Strauss’s Gesammelte Schriften, of “misreading Strauss as a hyper-Schmittian.” According to Howse, Meier not only inflates the significance of Schmitt for Strauss but also presents Strauss as agreeing with Schmitt’s politics of existential opposition.

Howse’s response to Meier has several dimensions. On the textual level, Howse shows that there is not enough evidence to support claims that Schmitt was among Strauss’s most important interlocutors. Strauss wrote a 1932 review of Schmitt’s seminal work, The Concept of the Political, that Schmitt recognized as the most searching he received. On that basis, he wrote a letter of recommendation for the Rockefeller Foundation grant that allowed Strauss to leave Germany. But these facts demonstrate no more than a professional relationship between scholars. And Schmitt’s anti-Semitism may have given him personal reasons to wish that Jewish intellectuals would make their careers elsewhere.

On the philosophical level, Howse reaffirms that Strauss was deeply critical of Schmitt’s approach. Although Schmitt claimed that he was distinguishing politics from morality, his argument was based on the assumption that a life devoted to existential confrontation is more worthy than one devoted to peace and prosperity. Although opposed to Christian and bourgeois norms, this assumption is inextricably normative. Strauss exposed Schmitt’s hard-boiled realism as a cover for his own brand of moralism.

Finally, Howse also offers a plausible if not novel account of the historical setting in which Strauss could have been attracted to antiliberalism without endorsing Schmitt’s theory of enmity. Given the failure of the Weimar republic, it seemed that a politics of militant self-assertion was necessary to protect Germany’s Jews. Strauss’s praise of “the principles of the Right, fascist, authoritarian, imperialist principles…” has to be read with this consideration in mind. The full context of the letter makes it clear that Strauss believed that only such principles were capable of standing up to the National Socialist regime.

When he wrote those words in 1933, Strauss may have been thinking of Mussolini, who was at the time an opponent of Hitler. As the 1930s continued, however, he associated them with Churchill. After Strauss arrived in the United States, he was known for insisting that “I am not liberal, I am not conservative, I always follow Churchill.” As Paul Gottfried pointed out in Leo Strauss and the Conservative Movement in America, Strauss was far more enthusiastic about England than the land of his birth.

Howse suggests that Strauss’s admiration for Churchill indicates his growing appreciation for liberal democracy. This is true, but only partly. What Strauss admired in Churchill and England was not bourgeois virtue or popular government. Rather, it was the “Roman” element he praised in the letter to Löwith.

This element consisted, in the first place, of a defiant militarism. In the letter to Löwith, Strauss quotes Virgil’s exhortation to the Romans “to rule with authority … spare the vanquished and crush the proud.” He elides the interstitial phrase “impose the way of peace.” The elision suggests that for Strauss the Roman way is the way of war and empire.

In 1934, Strauss identified a Roman quality in British parliamentary debate. But what he was praising was fairly specific: Churchill’s eloquent support for rearmament. There were compelling political and personal reasons for Strauss’s enthusiasm for military resistance to Germany. Nevertheless, it is worth recalling this view was deeply unpopular in the mid-’30s. Strauss’s praise for Parliament rests on its senatorial rather than its plebiscitary character.

Strauss’s affection for patricians was an important part of his near-worship of Churchill. As an aristocrat, soldier, and enthusiastic imperialist, Churchill personally represented the survival of premodern virtues within liberal democracy. It is easy to forget that Churchill’s critics often castigated him in terms similar to those Strauss used in his letter: imperialist, authoritarian, even fascist. In Strauss’s view, however, these were precisely the qualities that enabled Churchill to take a lonely stand against Hitler.

For Strauss, then, "the principles of the Right" were not so distant as they might now seem from the values that helped save Europe from Nazi domination and its Jews from extinction. In 1941, he explained to an audience at the New School for Social Research that "it is the English, and not the Germans, who deserve to be, and to remain, an imperial nation: for only the English … have understood that in order to deserve to exercise imperial rule, regere imperio populos, one must have learned for a very long time to spare the vanquished and to crush the arrogant: parcere subjectis et debellare superbos."

Howse argues that the lecture on “German Nihilism” from which this passage is quoted shows Strauss continuing his critique of Schmitt. And this is true as far as it goes: Strauss rejects the “warrior morality” that finds meaning in confrontation with a mortal enemy.

But that does not mean Strauss rejected violence as such. Like the letter to Löwith, “German Nihilism” culminates in a defense of war and empire. For Strauss, the problem with Schmitt was not that he placed violence at the center of politics. It was that he did so for the wrong reasons.

What did Strauss believe were the right reasons for violence? From the crisis of the 1920s, he learned that coercion was necessary to secure order. In the intellectual autobiography that he added to the 1965 English translation of his book on Spinoza, Strauss explained that the Weimar Republic “presented the sorry spectacle of justice without a sword or of justice unable to use the sword.” Without endorsing dictatorship, Strauss followed Machiavelli in regarding reliable execution as the central responsibility of the state.

But this is an argument about domestic politics. In a seminar on Thucydides taught just a few years before the Spinoza preface was published, Howse finds Strauss insisting that “foreign relations cannot be the domain of vindictive justice.” Force must be used in international affairs, but only to the extent necessary to secure a minimum of justice.

Howse observes that Strauss’s view that justice must be tempered by moderation was reflected in his assessment of the Nuremberg trials. While the Versailles treaty after World War I was a vindictive application of collective responsibility, the Nuremberg trials attempted to distinguish individual criminals from collaborators.

There is an implicit contrast here to Schmitt, who rejected the Nuremberg tribunal as an exercise in hypocritical moralism. As Strauss had shown, Schmitt himself was a moralist when that suited his purposes. For Strauss, on the other hand, the fight against the Nazis was unquestionably a just war. He had argued in 1941 that England was not only permitted to fight Germany but had a moral right to do so.

At the time, Strauss had expressed this position using the rhetoric of empire. He was far from the only one to describe just causes in now politically incorrect terms. In 1942, Churchill explained that "I have not become the King's First Minister in order to preside over the liquidation of the British Empire." In Churchill's view as much as Strauss's, the war against Germany was a war for empire.

After the war, old-fashioned empire became untenable. In addition to economic and military obstacles, the principle of self-determination that encouraged resistance to the Nazis made it impossible for former imperial powers to justify their domination of other peoples.

Surprisingly, Howse finds Strauss relatively accepting of this change. In the Thucydides seminar, Strauss juxtaposes “empire” and “freedom from foreign domination” as the two greatest goals of politics. The suggestion is that the strong cannot be blamed for seeking empire. On the other hand, the weak cannot be blamed for resisting it. After about 1945, however, the moral and technological balance of power shifted in such a way as to give the resisters the upper hand. The imperial powers could no longer claim the right of the stronger because they were no longer stronger.

Howse argues that the Kant and Grotius seminars show Strauss searching for an acceptable order for the world of nation-states that replaced the old empires. He finds that Strauss, although resolutely anticommunist, expressed enthusiasm for a federation of republican states similar to the one suggested by Kant. Nevertheless Strauss, like Kant, rejected a world state as unavoidably tyrannical. The best alternative would be a federal arrangement involving shared sovereignty combined with respect for national particularity—perhaps in ways comparable to the European Union.

Howse thus concludes that the mature Strauss was a liberal internationalist. Although not naïve about the necessity of war, he believed that war should be waged for the sake of a more just order. According to Howse, this Strauss is far from the belligerent nationalist who is supposed to have inspired the neoconservatives. Like Socrates, he is a man of peace.

Yet there is something missing from Howse’s portrait of Strauss as a liberal internationalist. That is a detailed consideration of the role of Zionism in Strauss’s thought about violence.

In his intellectual autobiography, Strauss describes his earliest political decision as a commitment to "simple, straightforward political Zionism" at the age of 17. Throughout the 1920s, he was active in the Revisionist movement led by Vladimir Jabotinsky. In the 1930s, Strauss endorsed "the principles of the Right, fascist, authoritarian, imperialist principles…" as the only basis for defense of Germany's Jews. In the 1940s, he offered a moral defense of the British Empire partly because of the mercy it offered to the vanquished—including the Jews settled in Palestine. In the 1950s and 1960s, Strauss lectured and wrote extensively on Jewish themes, rarely failing to voice his admiration and gratitude for the foundation of the State of Israel.

These facts are barely mentioned in Leo Strauss: Man of Peace. In fact, the only explicit reference to the State of Israel that I have found comes in the conclusion, when Howse mentions Strauss’s 1957 letter to National Review defending Israel from accusations of racism. As part of his polemic against the neoconservative appropriation of Strauss, Howse assures readers that, “This was an act of loyalty to the Jewish people, not to the political right.”

Howse may be correct about Strauss's intentions. But Strauss's personal relationship to the American conservative movement is not the most important issue. Strauss's lifelong commitment to Zionism tells us something important about his views on political violence. In this decisive case, he endorsed the politics of national self-assertion that Howse contends he had rejected by the end of his career.

Strauss makes this point obliquely but unmistakably in the “Note on Maimonides’ Letter on Astrology” that he composed in 1968. In the letter, Maimonides attributes the destruction of the Second Temple to the fact that the Jews relied on magic to provide their defense, rather than practicing the art of war and conquest like the Romans who defeated them.

Strauss describes the remark as "a beautiful commentary on the grand conclusion of the Mishneh Torah: the restoration of Jewish freedom in the Messianic age is not to be understood as a miracle." The Mishneh Torah chapters that Strauss cites clarify this statement, explaining that the only difference between the current age and the Messianic era will be "emancipation from our subjugation to the gentile kingdoms."

For the mature Strauss, in other words, the redemption of the Jewish people was not a mystical event. It was a political condition, defined by the reestablishment of Jews' sovereignty in their own land. That achievement depended on much the same unsettling principles that Strauss endorsed in the infamous letter to Löwith. It may not be a coincidence that these words were written almost exactly one year after Israel won control of the Temple Mount.

Strauss may have hoped the Jewish State could eventually become a respected member of a peaceful international federation. Nevertheless, this passage suggests that t’shuvah may not have been the central theme of Strauss’s career. Rather than enacting a return from extremism to moderation, Strauss’s thought about political violence was remarkably consistent concerning the nation that he cared most about. When it came to the Jewish people, Strauss felt that he had nothing to repent.

Samuel Goldman is assistant professor of political science at The George Washington University.

Marion Barry: D.C.’s Rascal King

The infamous former D.C. mayor Marion Barry died yesterday. Most of the obituaries and reminiscences that have appeared so far are properly respectful. Even so, they can't avoid mentioning Barry's reputation for corruption and troubles with the law, especially his 1990 arrest on drug charges. Despite these problems, which might have doomed a lesser politician, Barry remained beloved in many parts of the city. How could citizens of the District continue to support him?

Part of the answer, as Adam Serwer points out, is that Barry was a very good politician. At the beginning of his career, he cultivated an image as an advocate for the District’s black majority, while reassuring the white elite that he was ready to do business. It’s easy to forget now, but Barry won election in 1978 largely due to votes from the Northwest quadrant. He lost much of that support when he ran for reelection. But by then he could rely on other allies.

But there was more to Barry’s success than tactical brilliance. He practiced urban politics like an old-fashioned ward boss, dispensing jobs and contracts as personal favors to his supporters. In some ways, the benefits were real: Washington’s black middle class depended heavily on municipal jobs. But they also promoted cronyism and incompetence, and helped bankrupt the city.

It's tempting to conclude from these results that Barry was a very, very, very bad mayor. Although true in some ways, this assessment misses just how historically typical Barry was. If Barry had been born white, in a different place, and in 1876 rather than 1936, he would likely be remembered as a lovable rogue, who used government to help out people to whom other roads out of poverty were closed.

There is no better exemplar of this type than James Michael Curley, the four-time mayor of Boston, Congressman, Governor, and two-time jailbird immortalized by Spencer Tracy in "The Last Hurrah" (based on Edwin O'Connor's novel). As Jack Beatty shows in his riveting biography, from the beginning of his political career before World War I to its end in the 1950s, Curley used government as an instrument in his lifelong mission to improve the lives of Boston's Irish working class.

Setting a pattern that Barry would follow, Curley started out as a reformer. As challenges to his autocratic practices emerged, however, Curley relied increasingly on appeals to ethnic resentment and a bullying, macho style. In his 1942 Congressional race against the Brahmin Thomas Eliot, Curley asserted that, "There is more Americanism in one half of Jim Curley's ass than in that pink body of Tom Eliot."

Curley’s divisive rhetoric was coarse but not very harmful in itself. Much worse were his policies, which relied on ever-increasing property taxes to pay for public-sector jobs. Boston boomed along with the rest of the country in the 1920s. By the time Curley left office, however, it had entered a decline from which it emerged only under the late Thomas Menino, who broke Curley’s record as Boston’s longest-serving mayor.

Like Barry, then, Curley was by objective standards a lousy mayor (particularly in his later terms). Nevertheless, he remained a hero to his people, who turned out in the thousands at his funeral. Were they sentimental about Curley’s big achievements, such as the construction of the municipal hospital? Grateful for the cash envelopes and no-show jobs Curley distributed around election time? Or unaware how Curley had damaged their city?

The answer probably involved all of these elements. Even in combination, however, they’re inadequate to explain Curley’s role in Boston. Rather than a conventional politician, Curley was a kind of tribal chieftain. More than any particular benefits, he offered his followers the sense that there was someone in power who was like them, who cared about them, and who would do whatever he could to help them.

Before World War II, there were plenty of white chiefs in the Curley mode, among rural Southerners as well as urban immigrants. By the '70s, however, urban politics had been so thoroughly racialized that personal leadership and endemic corruption were seen as black pathologies rather than the historical norm. In the final analysis, Barry was a crook who hurt his city. But his greatest crime was being born black and too late to be crowned a "rascal king."

Samuel Goldman's work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.


Republicans Ride an Empty Wave

Last night was a big win for Republicans. And I have nothing much to add to the already enormous literature documenting just what a romp it was (see Michael Brendan Dougherty’s after-action report in The Week). Most of the outcomes were consistent with predictions. But it was surprising how far Republicans outperformed the polls in accumulating large margins of victory.

Even so, Republicans should resist the temptation to conclude that the results give them an enduring advantage at the national level. This was a “wave election” only in the sense that American politics have been stormy for the last decade. I have to run off to teach, so I’ll make my case by means of a listicle. Here are six reasons for caution:

  1. The president’s party usually loses seats in midterm elections.
  2. Obama’s approval, while low, is higher than Bush’s at the same point in his presidency.
  3. We’ve seen this movie before. Remember the “permanent majority” of 2004? How about the “thumping” of 2006? Then there was the “new majority” of 2008. Of course, that was followed by the “Tea Party wave” of 2010. Which didn’t stop Obama from becoming the first president since Eisenhower to win a majority of the vote for a second time in 2012.
  4. The midterm electorate skews older, whiter, and richer than in presidential years. These are Republican demographics, so Republicans tend to do better. The 2016 electorate, on the other hand, will probably look more like 2008 than 2010. Republicans probably won’t ever win many votes from blacks or single women, but they need to continue doing better among the young and Hispanics (as several candidates did last night).
  5. The standard explanation of the results is that the election was a referendum on Obama’s policies. That’s not true for the simple reason that most voters have only the foggiest notion of what Obama’s policies are. (Polls on these matters can be misleading because they often ask respondents to choose from a predetermined set of responses to a leading question, which encourages unrepresentative, off-the-cuff answers.) Rather than voting on the success or failure of specific programs, many voters rely on a vague sense that things are going well or badly for the country.
  6. The biggest factor in voters’ assessment of the direction of the country is the condition of the economy. Right now it’s pretty lousy, despite relatively favorable growth and employment trends. But if these trends continue over the next two years—and they’re far less dependent on Washington than either party likes to admit—they may start to pay off for ordinary people. Should that occur, many will discover that they liked Democrats more than they thought.

The bottom line is that the results show Republicans making good use of favorable conditions. But there’s no reason yet to think they’re the basis of a durable coalition.


Was Moses the First American?

Charlton Heston as Moses in "The Ten Commandments." Paramount Pictures

Indulging in what seems to be a regional pastime, Texans are fighting about schoolbooks again. While previous debates centered on science instruction, this time it’s proposed history texts under scrutiny (I blogged about the curriculum standards they’re supposed to meet here). At a hearing last week, books under review by the state Board of Education were blasted for saying too many nice things about Hillary Clinton, not enough nice things about Reagan, and too much about Moses altogether. In widely reported testimony, the SMU historian Kathleen Wellman argued that the books treat Moses as an honorary founding father—so much so that “I believe students will believe Moses was the first American.”

I haven’t seen the proposed texts, so Wellman’s criticisms may be justified. But non-academics should be aware that one of the most exciting movements in intellectual history in the last decade or so has discovered the extraordinary prevalence of “Hebraic” rhetoric and symbolism during the revolution and in the early republic.

Needless to say, Moses was not the first American. As Eran Shalev shows in his fine survey American Zion, however, early Americans were remarkably likely to think that their political struggles followed a pattern set by the Biblical Israelites. Washington, for example, was routinely identified as a modern Joshua. And the Union was often compared to federal arrangements among the Hebrew tribes.

The discourse of Hebrew republicanism is an important supplement to more familiar stories about the influence of Locke or the civic republicanism inherited from the Renaissance. At the same time, it’s hard to teach to beginners.

It’s important to avoid reductive arguments that America has a theological foundation. As far as we can tell, Hebraic models were more common in popular discourse than in elite deliberation. So while they played an important role in the public justification of political decisions and institutions, they had little direct influence on their design. And the republican interpretations of scripture on which the patriots relied in the 1770s and ’80s were not only selective, but also fairly novel. As recently as the 1750s, American clergy and laymen had usually argued that the Davidic monarchy was God’s paradigm of good government.

On the other hand, the politics of the revolution and early republic really were infused with Biblical rhetoric and examples. Neglect of this fact promotes the more fashionable dogma that the American revolution and constitutions were products of a secular society.

Textbook writers are thus in a bit of a pickle. Lacking the space to address complicated topics in detail, they almost unavoidably traffic in simplifications. Perhaps the Texas books err on the side of evangelical conservatives by making the American founding more religious than it really was. But plenty of history writing makes the opposite error, stressing a few prominent skeptics at the expense of a richly Biblical political culture.

Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.

Political Science, History, and the Right

Although I’m a card-carrying political scientist, I’ve never been entirely comfortable with the state of the discipline. Even so, I like to see my profession in the news. So I’ve been amused to find that political science has become the bone of contention in a spat between Ezra Klein and Thomas Frank. Klein argues that social science research has improved political journalism; Frank contends that it’s become an alibi for the status quo. Jonathan Chait and Freddie deBoer weigh in on behalf of Klein and Frank, respectively. Klein’s followup is here.

The dispute is nominally about the relation between political science and the left. But it actually revolves around the right. Specifically, it’s about explaining Republicans’ electoral success since 1964, which both sides treat as a mystery on the order of Fermat’s last theorem. Basically, Frank attributes conservatives’ success to a decades-long campaign of “organizing and proselytizing and signing people up for yet another grievance-hyping mass movement.” Klein, on the other hand, argues that it’s mostly about the partisan realignment of the South, which has always been conservative, but used to vote for Democrats. Although Klein focuses on the House of Representatives, Chait makes a similar case about presidential elections.

The academic debate has strategic implications. If the parties’ relative strength is determined mostly by structural considerations, there’s not much Democrats can do to take control of Congress. On the other hand, Republicans will have a hard time winning the presidency with a coalition based in the inland South and Mountain West. Essentially, the parties will have reversed the positions they held in most of the period between World War II and 1994, when Republicans owned the White House and Democrats dominated Capitol Hill.

Considered in this broader context, the progressive heyday of the mid-’60s was profoundly aberrant. A temporary constellation of factors—including America’s overwhelming economic advantage following the war, political participation by the youngest baby boomers, and the halo conferred on Johnson by his predecessor’s assassination—combined in a political moment that was without parallel before or since. Since the end of Reconstruction, the American norm has been regionally and institutionally divided, relatively conservative politics. The Republican ascendance since 1964 is in many ways a return to that norm, rather than a puzzling deviation from it.

Frank finds that conclusion too upsetting even to contemplate. Klein accepts it with weary resignation. But neither has any right to be as surprised as he seems to be. Rather than more training in statistics, progressives might benefit from a refresher course in history—which used to have a much more prominent place in the study of politics than it does today.


What Would Jeremiah Do?

Sistine Chapel, prophet Jeremiah / Wikimedia Commons

In his 1981 classic After Virtue, Notre Dame philosophy professor Alasdair MacIntyre offers a provocative diagnosis of the modern condition. Rejecting the assumption that secular modernity is the culmination of centuries of improvement, MacIntyre contends that we live amidst the ruins of Western civilization.

There can be no restoration of the past. Even so, MacIntyre urges readers to take their bearings from a previous experience of loss by pursuing “local forms of community within which civility and the intellectual and moral life can be sustained through the new dark ages which are already upon us.” As he puts it in a famous sentence, “We are waiting not for a Godot, but for another—doubtless very different—St. Benedict.”

Benedict is considered the founder of the monasteries from which Christian Europe would eventually emerge. MacIntyre does not claim to be a successor to Benedict. But his suggestion that civilization can be preserved only by dropping out of modern life has become influential among conservatives with traditional religious commitments.

Rod Dreher has summarized the “Benedict Option” as “communal withdrawal from the mainstream, for the sake of sheltering one’s faith and family from corrosive modernity and cultivating a more traditional way of life.” And small but vibrant communities around the country are already putting the Benedict Option into practice. Without being rigorously separatist, these communities do aim to be separate. Some merely avoid morally subversive cultural influences, while others seek physical distance from mainstream society in rural isolation.

But a neo-Benedictine way of life involves risks. Communal withdrawal can construct a barrier against the worst facets of modern life—the intertwined commodification of personal relationships, loss of meaningful work to bureaucratic management, and pornographic popular culture—yet it can also lead to isolation from the stimulating opposition that all traditions need to avoid stagnation.

American religious history offers a clear example of this danger. Between World War I and the 1970s, conservative Protestants pursued strategies of withdrawal that impoverished their intellectual and cultural lives in ways that they have only recently begun to remedy. In MacIntyre’s telling, the Benedict Option is a detour that leads back into the center of history as civilization eventually re-emerges from its refuges. But it can just as easily become a dead end.

The Benedict Option is not the only means of spiritual and cultural survival, however. As a Catholic, MacIntyre searches for models in the history of Western Christendom. The Hebrew Bible and Jewish history suggest a different strategy, according to which exiles plant roots within and work for the improvement of the society in which they live, even if they never fully join it.

This strategy lacks the historical drama attached to the Benedict Option. It promises no triumphant restoration of virtue, in which values preserved like treasures can be restored to their original public role. But the Jews know a lot about balancing alienation from the mainstream with participation in the broader society. Perhaps they can offer inspiration not only to Christians in the ruins of Christendom but also to a secular society that draws strength from the participation of religiously committed people and communities. Call it the Jeremiah Option.

 ♦♦♦

On March 16, 597 BC, the Babylonian king Nebuchadnezzar sacked Jerusalem after a long siege. In addition to the riches of the city and Temple, he claimed as spoils of war thousands of Judeans, including the king and court. Many of the captive Judeans were settled on tributaries of the Euphrates, which inspired the words of Psalm 137: “By the rivers of Babylon, there we sat down, yea, we wept, when we remembered Zion.”

The Judeans had reason to weep. In addition to the shame of defeat, few had direct experience of foreign cultures. The language and customs of their captors were alien and in some ways—particularly the practice of idolatry—abhorrent. More importantly, the captives could no longer practice their own religion. Banned by ritual law from making sacrifices outside the Promised Land, the exiles were unable to engage in public worship.

Under these circumstances, two ways of dealing with Babylonian society presented themselves. First, the exiles could accommodate themselves to the norms of the victors. They could learn Aramaic and adopt local manners. But this would mean the loss of their national and religious identities. In becoming honorary Babylonians, they would forfeit their status as God’s chosen people. On the other hand, the captives could resist. Taken from their homes by force, they might use force to get back again. Proposals for resistance behind enemy lines were seriously considered. In fact, several Judean leaders seem to have been executed for subversive plotting.

But any military campaign was doomed to failure. The empire was too strong to overthrow or escape. So how should its prisoners conduct themselves? How should they live in a society that they could not fully join without giving up their fundamental commitments?

The question was important enough to attract the attention of God himself. Speaking through the prophet Jeremiah, who remained back in Jerusalem, the Lord commanded the captives to steer a course between extremes of assimilation and violent resistance. In his famous letter to the leaders of the Judean community, Jeremiah reports God’s orders as follows:

Thus saith the Lord of hosts, the God of Israel, unto all that are carried away captives, whom I have caused to be carried away from Jerusalem unto Babylon; build ye houses, and dwell in them; and plant gardens, and eat the fruit of them; take ye wives, and beget sons and daughters; and take wives for your sons, and give your daughters to husbands, that they may bear sons and daughters; that ye may be increased there, and not diminished. And seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.
(Jeremiah 29:4-7)

What is God saying? In the first place, he insists that the captives unpack their bags and get comfortable. True, God goes on to promise to redeem the captives in 70 years. But this can be interpreted to mean that none of the exiles then living would ever see their homes again. After all, the span that the Bible allots to a human life is threescore years and ten.

So the captives are to await redemption in God’s time rather than seeking to achieve it by human means. But this does not mean that they are to keep their distance from Babylonian society until the promised day arrives. On the contrary, God commands them to “seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.”

“Peace” could be read as the absence of conflict. But this doesn’t fully express God’s directive. In the Hebrew Bible and Jewish tradition more broadly, peace refers to flourishing and right order. What God is saying is that the exiles cannot prosper unless their neighbors do as well. For the time they are together, they must enjoy the blessings of peace in common.

By what means are these blessings to be secured? The reference to prayer suggests that God wants the Judeans to promote peace by spiritual means. But that is not all. God also enjoins the Judeans to promote the common good by means of ordinary life. His very first instruction is to build houses. In other words, the Judeans are to conduct themselves like long-term residents—if also resident aliens.

Reinforcing the point that captivity is for the long haul, God reminds the captives that the dwellings they are to build are not for themselves alone. Instead, they must shelter generations of children and grandchildren, multiplying the community. God’s plan is for expansion and growth, not marginal existence.

The emphasis on securing peace through ordinary life does not absolve the exiles of their responsibility to remain holy. But theirs is to be a holiness based on upright life rather than the independence of a homogeneous community. Reassuring those who feared that they could not continue their relationship with God in exile, God explains, “ye shall seek me, and find me, when ye shall search for me with all your heart.” The Babylonian captivity is thus the origin of Judaism as a law-based religion that can be practiced anywhere, rather than a sacrificial cult focused on the sacred temple.

The piety that God encourages, therefore, can be practiced by ordinary people living ordinary lives under difficult circumstances. God enjoins the captives not only to live in Babylon, but also to live in partnership with Babylon. Without assimilating, they are to lay down roots, multiply, and contribute to the good of the greater society.

 ♦♦♦

The Babylonian captives addressed by Jeremiah bear comparison to the traditionalist dissenters in MacIntyre’s stylized history. Both groups are minorities. Both are prisoners of empires that are unwilling or unable to support their moral and religious commitments. Yet both know that violence is an unacceptable means to achieve their communal goals. It would not work—and more importantly, it offends God.

Where Jeremiah counsels engagement without assimilation, Benedict represents the possibility of withdrawal. The former goal is to be achieved by the pursuit of ordinary life: the establishment of homes, the foundation of families, all amid the wider culture. The latter is to be achieved by the establishment of special communities governed by a heightened standard of holiness.

Although it can be interpreted as a prophecy of doom, the Jeremiah Option is fundamentally optimistic. It suggests that the captives can and should lead fulfilling lives even in exile. The Benedict Option is more pessimistic. It suggests that mainstream society is basically intolerable, and that those who yearn for decent lives should have as little to do with it as possible. MacIntyre is careful to point out that the new St. Benedict would have to be very different from the original and might not demand rigorous separation. Even so, his outlook remains bleak.

MacIntyre’s pessimism conceals what can almost be called an element of imperialism—at least when considered in historical perspective. Embedded in his hope for a new monasticism is the dream of a restoration of tradition. The monks of the dark ages had no way of knowing that they would lay the foundation of a new Europe. But MacIntyre is well aware of the role that they played in the construction of a fresh European civilization—and subtly encourages readers to hope for a repetition.

Jeremiah’s message to the captives is not devoid of grandiose hopes: the prophet assures them that they or their progeny will ultimately be redeemed. But this does not require the spiritual or cultural conversion of the Babylonians.

The comparison between the options represented by Jeremiah and by Benedict has some interest as an exercise in theologico-political theorizing. But it is much more important as a way of getting at a central problem for members of traditional religious and moral communities today. How should they conduct themselves in a society that seems increasingly hostile to their values and practices? Can they in good conscience seek the peace of a corrupt and corrupting society?

In the 2013 Erasmus Lecture sponsored by First Things, Jonathan Sacks, the former chief rabbi of the United Kingdom’s United Hebrew Congregations of the Commonwealth, took up this question with specific reference to Jeremiah. Rejecting Jeremiah’s reputation as a prophet of doom, Sacks argued that Jeremiah’s letter to the exiles fundamentally expresses a message of hope. Despite their uncomfortable situation, the captives are not to resist or separate themselves from Babylonian society. Rather, they are to pursue the fulfillments of ordinary life, practice holiness, and work and pray for the prosperity of the society in which God placed them.

As Sacks pointed out, this pattern has governed much of Jewish history in the diaspora. Between the destruction of the second Temple in AD 70 and the foundation of the State of Israel in 1948, nearly all Jews have found themselves in a condition comparable to that of the Babylonian captives. A small and often despised minority, they have nevertheless taken to heart God’s insistence that their peace depend on the peace of their captors.

This is not a solution to all problems of communal survival, however. The appeal of assimilation has been considerable. Descendants of the captives often took Babylonian names and adopted Aramaic. In modern times, many Jews have not only modified religious practice but rejected Jewish identity altogether. Recent surveys show that Jewishness in America is seriously endangered by indifference and intermarriage. So advocates of more rigorous separation have a point.

Nevertheless, there may be lessons in Jeremiah and Jewish history for Christians and others concerned about their place in modern society. They can be sketched in three ideas.

First, internal exiles should resist the temptation to categorically reject the mainstream. That does not mean avoiding criticism. But it must be criticism in the spirit of common peace rather than condemnation. Jeremiah is famous as the etymological root of the jeremiad. Yet his most scathing criticisms are directed against his own people who have failed in their special calling of righteousness, not the “mainstream” culture.

Second, Jeremiah offers a lesson about the organization of space. Even though the captives were settled in self-governing towns outside Babylon itself, God encourages them to conduct themselves as residents of that city, which implies physical integration. There need be no flight to the hinterlands.

Finally, Jewish tradition provides a counterpoint to the dream of restoring sacred authority. At least in the diaspora, Jews have demanded the right to live as Jews—but not the imposition of Jewish law or practices on others. MacIntyre evokes historical memories of Christendom that are deeply provocative to many good people, including Jews. The Jeremiah Option, on the other hand, represents a commitment to pluralism: the only serious possibility in a secular age like ours.

I offer these arguments against communal withdrawal from a somewhat idiosyncratic motive. An heir to the Jewish diaspora, I am a relatively comfortable inhabitant of secular modernity. By what right do I counsel people whose first loyalty is to God?

The answer is: self-interest. While not a member of a traditional religious community myself, I am convinced that the rest of society is immeasurably enriched by the presence of such communities in political, cultural, and intellectual life. So while I do fear that practices of separation will be bad for those communities themselves—as the fundamentalist experience of the last century indicates—I am certain that they will be bad for the rest of us. If demanding, traditional forms of religion disappear from mainstream culture, that culture may actually become the caricature of a destitute age on which MacIntyre builds his analysis.

At the same time, it would be cynical to offer a merely instrumental argument for the continued engagement of religious communities with secular society. Although not very observant myself, I found Jeremiah’s letter to the exiles helpful in thinking through this problem. God reminds the captives that they will find peace only in the peace of the Babylonians, that they are to promote the good of the rest of society as well as their own. The Jeremiah Option gives me reason to hope that Jews, Christians, and the rest of us can find peace together.

Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.

Is Religious Freedom Possible?

American Life League / cc

In a post for The Immanent Frame responding to the Hobby Lobby and Wheaton College decisions, Indiana University professor Winnifred Fallers Sullivan challenges the idea of religious freedom on which those decisions are based. Although a liberal, Sullivan does not deny that private firms and religious colleges are engaged in a kind of religious practice. Rather, she argues that because religion means different things to different people, it’s impossible to systematically distinguish legitimate “religious freedom” from mere rejection of the law:

The need to delimit what counts as protected religion is a need that is, of course, inherent in any legal regime that purports to protect all sincere religious persons, while insisting on the legal system’s right to deny that protection to those it deems uncivilized, or insufficiently liberal, whether they be polygamist Mormons, Native American peyote users, or conservative Christians with a gendered theology and politics. Such distinctions cannot be made on any principled basis…Both the majority and dissenting Justices in these two cases affirm—over and over again—a commitment to religious liberty and to the accommodation of sincere religious objections. Where they disagree is on what counts as an exercise of religion. Their common refusal, together with that of their predecessors, to acknowledge the impossibility of fairly delimiting what counts as religion has produced a thicket of circumlocutions and fictions that cannot, when all is said and done, obscure the absence of any compelling logic to support the laws that purport to protect religious freedom today.

The whole post rewards careful reading. One issue that Sullivan leaves unexamined is what counts as a “principled basis”. I take it that she means a non-historical, more or less universal definition, which would make it possible to distinguish religion from non-religion in a logically consistent way. And she’s right that no such definition exists. To mention only one example, the state cult of the Romans had little in common with what we understand by “religion” today.

But why should American law be based on universal principles that can be applied in a quasi-Kantian manner? A considerable historical literature suggests that the religion clauses of the Constitution emerged from the historical experience of Anglo-Protestantism. They were developed and applied in a society that was assumed to be overwhelmingly Christian and organized, for the most part, into recognizable denominations. Of course, there were always communities whose religion was inconsistent with these assumptions. But it was assumed that they would either be demographically marginal, or identifiable under Christian theological categories.

The problem, of course, is that this world no longer exists. And not only because of secularization or immigration by Catholics, Jews, and more recently Muslims. As Sullivan observes, the American brand of evangelicalism encourages individuals to decide for themselves what religion means to an historically unprecedented degree. So we face the challenge of applying historically and theologically specific concepts of religion, liberty, and so on in a way that obscures their limits and contingency. Thus the knots into which both sides of the Court have had to twist their arguments not only in Hobby Lobby, but also in cases such as Kiryas Joel Village School District v. Grumet.

There’s no obvious solution to this problem. We can neither revive Anglo-Protestant categories in a pluralistic society, nor can we formulate a definition of religion that will satisfy everyone. My own preference is for giving as much deference as possible, consistent with public order, to congregations, non-profit institutions, and yes, private firms, to act in ways that reflect their beliefs about what they owe to God and the world. But that means giving up the dream of cultural hegemony that today inspires the secular Left at least as strongly as it once did the religious Right.

(h/t Samuel Moyn, via Facebook).


How to Stop Next Year’s Disinvitation Season

Disinvitation season has come and gone. In this year’s enactment of a now familiar exercise, Haverford, Rutgers, and Brandeis, among other schools, were forced by opposition from students and faculty to alter plans for commencement speakers.

Parallel denunciations of creeping authoritarianism are part of the ritual. But the truth is that critics of the university on both the left and the right get what they really want out of these tiny fiascos: an opportunity to make vehement public statements when little of significance is at stake. That’s because commencement addresses are, with a few notable exceptions, emissions of immense quantities of hot air. Here are the deep thoughts with which Condoleezza Rice favored the graduates of Southern Methodist University in 2012.

Rather than lamenting the arrogance of administrators or the immaturity of students, it’s worth considering how to reform the institution of commencement speeches altogether. After all, there’s no requirement that universities import boldface names. Columbia, for example, allows only its president to speak. So here are some suggestions for preventing future commencements from becoming occasions for embarrassing disinvitations.

First, give students a role in choosing speakers. This would help gauge potential controversy early in the selection process, as well as build a constituency for the choice. One reason it’s so easy for a relatively small group of critics to push out a speaker is that the rest of the student body has no stake in keeping him. Allowing them to exercise some influence over the initial decision could change that.

But maybe students would use their influence to pick popular culture figures rather than the serious types who convince parents and taxpayers that their money is well spent. That risk could be avoided if universities stopped paying large honoraria. If potential speakers have something important to say, they’ll be willing to say it in exchange for reasonable expenses. Don’t subsidize celebrities—or high-priced “thought leaders” flogging their books.

Next, separate the conferral of honorary degrees from speechgiving. The former implies collective endorsement of the speaker’s career. The latter does not. One of Rod’s readers claims that Haverford opponents of Berkeley chancellor Robert Birgeneau objected to his honorary degree more than they did to his speaking invitation. Whether or not that’s true in this case, there’s a morally and politically relevant difference between hearing someone out and allowing an honor to be given in one’s own name.

Finally, revive the old practice of allowing a student elected by students to speak at commencement. This would allow students to express criticism or disapproval of other speakers in precisely the kind of dialogue that both lefties and conservatives claim to endorse.

Any or all of these suggestions would help prevent silly controversies without giving in to the heckler’s veto. But maybe the best solution would be to cancel the speeches altogether. Does anyone really want to be lectured in the inevitable commencement weather of blistering sun or pouring rain?


Where Have All the Public Intellectuals Gone?

Dan Drezner on the radio. eszter / cc

Few things annoy academics more than being told that their work is irrelevant. So there’s nothing surprising about the backlash against Nicholas Kristof’s column in Sunday’s New York Times. Kristof contended that America’s professors, especially political scientists, have “marginalized themselves” by focusing on technical debates at the expense of real problems, relying too heavily on quantitative methods, and preferring theoretical jargon to clear prose. An outraged chorus of responses (round-up here) rejected Kristof’s generalization as a reflection of the very anti-intellectualism that he intended to criticize.

Some of the responses to Kristof reflect the expectation of public recognition for every contribution to the debate that makes so much academic writing a chore to read. Even so, Kristof ignores fairly successful efforts to make scholarship more accessible—even if you don’t count every blog by every holder of a Ph.D. The Tufts professor and Foreign Policy contributor Daniel Drezner has more than 25,000 Twitter followers—partly due to the success of a book that uses a zombie apocalypse scenario to compare theories of international relations. The Washington Post recently picked up The Monkey Cage (full disclosure: several of my colleagues at George Washington University are contributors) and The Volokh Conspiracy, which are populated by political scientists and legal academics, respectively. Not to mention Kristof’s own employer. Just the day before Kristof’s piece ran, the Times hired Lynn Vavreck to contribute to a new site concentrating on social science and public policy.

At least when it comes to political science, then, it’s just not plausible that “there are fewer public intellectuals on American university campuses today than a generation ago.” On the contrary, there are probably more academics who try to communicate with non-specialist audiences than there were in 1994. One difference is that the public intellectuals of past decades were more likely to engage directly with normative and historical Big Questions. That change reflects the declining influence of political theory in comparison with causal analysis, as well as the weakening of the Cold War imperative of justifying liberal democracy.

Of course, writing for non-specialist readers isn’t encouraged in graduate programs, and doesn’t often align with the requirements for hiring and promotion. But there are anecdotal reasons to think that expectations are slowly changing, as departments struggle to prove their ‘relevance’ in a period of financial retrenchment. If Kristof’s piece promotes these changes, it will have served a valuable function whether or not its argument is compelling.

In fairness to Kristof, however, none of these observations refutes his basic claim. That’s because he isn’t actually talking about “public intellectuals”. Rather, he means old-fashioned mandarins, who move easily between Harvard Yard and Washington, usually without encountering many members of the public along the way. In a followup on Facebook, Kristof observes that “Mac Bundy was appointed professor of government at Harvard and then dean of the faculty with only a B.A.—impossible to imagine now.” After nearly a decade as dean, Bundy joined the Kennedy administration as national security advisor, where his vast intellectual firepower led him to promote and defend the Vietnam War.

What Kristof really offers, then, is less an argument for public engagement by scholars than a plea for another crop of Wise Men who lend conventional wisdom the authority of the academy. Not coincidentally, he presents as an exception to the trend toward academic self-marginalization the former Princeton professor and State Department official Anne-Marie Slaughter, whose resume is as perfect a reflection of the meritocratic elite that Bundy helped create as Bundy’s own pedigree was of the old Establishment. More professors should learn to participate in public debate—including political theorists frustrated by the increasingly technical orientation of political science. But if ‘relevance’ means becoming mouthpieces of our new ruling class, then Kristof can keep it.



How to Fix Grade Inflation at Harvard

Reports that A- is the median grade in Harvard College have reopened the debate about grade inflation. Many of the arguments offered in response to the news are familiar. The venerable grade hawk Harvey “C-” Mansfield, who brought the figures to public attention, describes the situation as an “indefensible” relaxation of standards.

More provocative are defenses of grade inflation as the natural result of increased competition for admission to selective colleges and universities. A new breed of grade doves point out that standards have actually been tightened in recent years. But the change has been made to admissions standards rather than expectations for achievement in class.

According to the editorial board of the Harvard Crimson, “high grades could be an indicator of the rising quality of undergraduate work in the last few decades, due in part to the rising quality of the undergraduates themselves and a greater access to the tools and resources of academic work as a result of technological advances, rather than unwarranted grade inflation.” Matt Yglesias, ’03, agrees, arguing that “it is entirely plausible that the median Harvard student today is as smart as an A-minus Harvard student from a generation ago. After all, the C-minus student of a generation ago would have very little chance of being admitted today.”

There’s a certain amount of self-congratulation here. It’s not surprising that Harvard students, previous and current, think they’re smarter than their predecessors—or anyone else. But they also make an important point. The students who earned the proverbial gentleman’s Cs are rarely found at Harvard or its peers. Dimwitted aristocrats are no longer admitted. And even the brighter scions of prominent families can’t take their future success for granted. Even with plenty of money and strong connections, they still need good grades to win places in graduate school, prestigious internships, and so on.

The result is a situation in which the majority of students really are very smart and very ambitious. Coursework is not always their first priority. But they are usually willing to do what’s necessary to meet their professors’ expectations. The decline of core curricula has also made it easier for students to pick courses that play to their strengths while avoiding subjects that are tough for them. It’s less common to find Chemistry students struggling through Shakespeare than it was in the old days.

According to the Harvard College Handbook for Students, an A- reflects “full mastery of the subject” without “extraordinary distinction”. In several classes I taught as an instructor and teaching fellow at Harvard and Princeton, particularly electives, I found that around half the students produced work on this level. As a result, I gave a lot of A-range grades.

Perhaps my understanding of “mastery” reflects historically lower demands. For example, I don’t expect students writing about Aristotle to understand Greek. Yet it’s not my impression that standards in my own field of political theory have changed a lot in the last fifty years or so. In the absence of specific evidence of lowered standards, then, there’s reason to think that grade inflation at first-tier universities has some objective basis.

But that doesn’t mean grade inflation isn’t a problem. It is: just not quite the way some critics think. At least at Harvard and similar institutions, grades are a reasonably accurate reflection of what students know or can do. But they are a poor reflection of how they compare to other students in the same course. In particular, grade inflation makes it difficult to distinguish truly excellent students, who are by definition few, from the potentially much larger number who are merely very good.

Here’s my proposal for resolving that problem. In place of the traditional system, students should receive two grades. One would reflect their mastery of specific content or skills. The other would compare their performance to the rest of the class.


No Left Turn in New York City

Bill de Blasio is mayor-elect of New York. According to many of de Blasio’s critics as well as his supporters, the unsurprising outcome of Tuesday’s election reflects a decisive turn in the city’s politics. The Nation claims that “Bill de Blasio’s exhilarating landslide victory over Joe Lhota in New York’s mayoral election offers a once-in-a-generation chance for progressives to take the reins of power in America’s largest—and most iconic—city.” In National Review, Kevin D. Williamson evokes John Carpenter’s b-movie classic, Escape from New York.

I say: not so fast. Neither turnout and polls nor de Blasio’s career so far support hopes (or fears) that he’ll try to transform New York into Moscow on the Hudson. Mayor de Blasio will not cultivate the chummy relationship with Wall Street that Michael Bloomberg did. But there’s likely to be more continuity between their mayoralties than most people expect. In fact, that continuity is a bigger threat to the city’s future than the immediate collapse that some conservatives fear.

First, the election data. As Nicole Gelinas points out, de Blasio’s election was not the mandate for change that the margin of victory suggests.

Preliminary results show that about 1 million New Yorkers voted yesterday. That’s 13 percent lower than four years ago. Back then, remember, many voters disillusioned with the choices—Mayor Michael R. Bloomberg was running for a third term against the uninspiring Bill Thompson, Jr.—just stayed home. Turnout this year was as much as 30 percent below 12 years ago, when Bloomberg won his first victory. Voters supposedly so eager for change this year didn’t show that eagerness by voting. And a slim majority of the people who did vote—51 percent—told exit pollsters that they approve of Bloomberg, anyway. Though de Blasio’s victory margin was impressive, the scale of the win looks less stellar when put into recent historical context. As of early Wednesday, de Blasio had 752,605 votes—a hair shy of Bloomberg’s 753,089 votes in 2005.

So de Blasio did not win the votes of an unprecedented number of New Yorkers. And many of those who did vote for him also supported Bloomberg. That doesn’t mean that they like everything Bloomberg did. But there’s no evidence here of a progressive tsunami.

What about de Blasio’s career? The tabloid press paid a great deal of attention to de Blasio’s visits to communist Nicaragua and the Soviet Union as a young man. More recently, however, de Blasio worked as a HUD staffer under Andrew Cuomo, and as campaign manager for Hillary Clinton. De Blasio took liberal positions during his tenure on the city council, particularly on symbolic issues involving gay rights. But this is not the resume of a professional radical.

It’s true that de Blasio made “a tale of two cities” the central theme of his campaign. As many observers have pointed out, however, he lacks the authority to enact his signature proposals: a tax increase on high earners, to be used to fund universal pre-K. Nothing’s impossible, but the chances of the state legislature approving such a tax hike are slim. The same goes for several of de Blasio’s other ideas, including a city-only minimum wage higher than the state’s minimum and the issuance of driver’s licenses to illegal immigrants.

The real issues under the de Blasio administration will be matters over which the mayor has some direct control. That means, above all, contracts with city workers, and policing. Will de Blasio blow the budget to satisfy public employee unions? And will he keep crime under control after eliminating stop-and-frisk?

I’m cautiously optimistic about public safety. New Yorkers didn’t like living in fear, and de Blasio is smart enough to know that he’s finished if crime returns. Stop-and-frisk is not the only weapon in the NYPD’s arsenal.

The unions are a bigger problem. New York’s short-term fiscal outlook is reasonably good, so it’s hard to imagine de Blasio playing Scrooge to faithful supporters.

Even so, de Blasio’s mayoralty is unlikely to be a revolutionary moment. As Walter Russell Mead has argued, the biggest risk is that he pursues the same high-tax, high-regulation, high-service style of government on which Bloomberg relied. Those policies have helped transform much of New York into a gleaming shopping mall, which is admittedly a lot better than an urban jungle. But they are a bad strategy for prosperity in the years and decades to come.



Machiavellian Social Conservatives?

Wikimedia Commons

Niccolo Machiavelli is often described as arguing that morality has no place in politics. That’s not quite right. Machiavelli believes that morality is crucial to political success. He just thinks that the important thing is to seem to possess the moral virtues, rather than actually to practice them. In a famous passage of The Prince, Machiavelli puts it this way:

Thus, it is not necessary for a prince to have all the above-mentioned qualities in fact, but it is indeed necessary to appear to have them. Nay, I dare say this, that by having them and always observing them, they are harmful; and by appearing to have them, they are useful…

In a column for Bloomberg, Ramesh Ponnuru implicitly encourages social conservatives to take Machiavelli’s advice. Reflecting on the likely failure of Ken Cuccinelli’s campaign for governor of Virginia, Ponnuru observes that current governor Bob McDonnell won a big victory in 2009 even though he agrees with Cuccinelli on many social issues. So:

Why do they seem to be succeeding now when they failed then? It’s partly a matter of countenance: McDonnell was cheerful (if boring), and Cuccinelli often appears dour and argumentative…Another difference, though, is that Cuccinelli made his name as a conservative crusader, especially on social issues, where McDonnell made his as a bipartisan problem-solver. McDonnell’s Democratic critics had to dig up a 20-year-old grad-school thesis he had written to make him look out of the mainstream; Cuccinelli’s have more recent initiatives and statements to work with.

Ponnuru goes on to contrast Cuccinelli’s likely failure to win election tomorrow with Chris Christie’s likely success:

Socially conservative positions on hot-button issues don’t seem to be a deal-breaker even for the much more liberal voters of New Jersey. Christie has vetoed legislation to grant state recognition to same-sex marriage—a judge later ordered it, though Christie briefly appealed—and vetoed bills to fund Planned Parenthood five times.

Ponnuru’s conclusion is that social conservatives shouldn’t be too upset by Cuccinelli’s defeat, since McDonnell and Christie’s examples show that social conservatives are not necessarily losers in blue and purple states. That’s true, but the distinction between seeming and being is important here. Ponnuru is right that social conservative views are not, in themselves, electoral poison. In other words, seeming to be a social conservative is not a problem—and may in some cases be good politics. Yet actually being one, in the sense of making serious attempts to promote social conservative policies, is and will remain a serious obstacle to victory in places like Virginia and New Jersey.

Chris Christie is a good example of this dynamic. Christie knew quite well that his challenge to the gay marriage bill was purely symbolic, since the liberal state supreme court was certain to reinstate the law. What’s more, Christie dropped his opposition as soon as he could credibly claim that the court had forced his hand. This, too, was inevitable in a state in which a considerable majority of voters, including Republicans, favor gay marriage.

Christie’s Machiavellian approach isn’t popular with dedicated social conservatives. The National Organization for Marriage and the Family Research Council have both condemned Christie’s handling of gay marriage. But symbolic conservatism is popular with more moderate voters, who want to express disapproval of gay marriage and abortion, but are uncomfortable with policies that seem intrusive or intolerant.

The lesson of today’s election, then, will not be that social conservatives can compete in moderate and liberal areas if they offer more explicit and articulate defenses of their views. It’s that they can get away with expressing social conservative beliefs so long as they do nothing to suggest that those beliefs are likely to end up enshrined in law. Ponnuru points out that “If Christie wants to run for president, he may find that pointing this out is a low-cost way of appealing to a national constituency that matters a lot in his party.” Somewhere, Machiavelli smiles.



Political Theorists Ask Academic Questions Because They’re Academics

In a post for Prospect, Christopher Fear asks why academic political theory is so remote from political practice. He concludes that it’s because political theorists devote themselves to eternal riddles that he dubs “Wonderland questions” rather than today’s problems. Consider justice, perhaps the original topic of political theorizing:

One of the central questions of academic political philosophy, the supposedly universal question “What is justice?” is a Wonderland question. That is why only academics answer it. Its counterpart outside the rabbit-hole is something like “Which of the injustices among us can we no longer tolerate, and what shall we now do to rectify them?” A political thinker must decide whether to take the supposedly academic question, and have his answers ignored by politicians, or to answer the practically pressing question and win an extramural audience.

Fear is right about the choice that political theorists face between philosophical abstraction and making an impact on public affairs. But he doesn’t understand why they usually pick the former. The reason is simple. Academic political theorists ask academic questions because…they’re academics.

In other words, political theorists are members of a closed guild in which professional success depends on analytic ingenuity, methodological refinement, and payment of one’s intellectual debts through copious footnoting. They devote their attention to questions that reward these qualities. Winning an extramural audience for political argument requires different talents, including a lively writing style, an ear for the public discourse, and the ability to make concrete policy suggestions. But few professors have ever won tenure on the basis of those accomplishments.

Another reason academic political theorists avoid the kind of engagement Fear counsels is that they have little experience of practical politics. Most have spent their lives in and around universities, where they’ve learned much about writing a syllabus, giving a lecture, or editing a manuscript—but virtually nothing about governing or convincing lay readers. How does expertise on theories of distributive justice, say, prepare one to make useful suggestions about improving the healthcare system? Better to stick with matters that can be contemplated from the comfort of one’s desk.

In this respect, political theorists are at a considerable disadvantage compared to professors of law or economics. Even when their main work is academic, lawyers and economists have regular chances to practice in the fields they study. Within political science, many scholars of international relations pass through a smoothly revolving door that connects the university with the policy community. Political theorists have few such opportunities.

Fear points out that it wasn’t always this way. Before the 20th century, many great political theorists enjoyed extensive political influence. But Fear forgets the main difference between figures like Machiavelli, Locke, Montesquieu, Madison, Burke, Hume, Mill, or Marx and their modern epigones. The former were not professors. Although all were devoted to the philosophical truth as they understood it, they were also men of affairs with long experience of practical politics.

The “brilliant and surreal tragedy of academic political theory,” then, is not that political theorists have been diverted into the wrong questions. It’s that political theory is an uncomfortable fit with the university. Academic political theorists gravitate toward the kind of questions that career scholars are in a position to answer.

