State of the Union

Adversary Politics and the Invisible Establishment

The Bush Center / Flickr

Earlier this week, Damon Linker diagnosed an odd phenomenon. No matter how much influence or popularity they enjoy, grandees of the conservative movement, including the editor of National Review and Rush Limbaugh, deny being part of the “establishment”. Linker questions their grasp of reality rather than their judgment:

I don’t doubt that Lowry and his ideological compatriots really believe this. But it’s nonsense, and the nonsense isn’t benign. It obscures the reality of who exercises power on the right — and allows those who wield it to avoid taking responsibility for the consequences of their reckless rhetoric and foolish mistakes.

This is an important point that helps explain conservatives’ refusal to take responsibility for a series of political disasters over the last few decades. But the cognitive dissonance isn’t limited to the right. Planned Parenthood threw a fit when Bernie Sanders described it as “part of the establishment”. According to the executive vice president of an organization that possesses assets worth close to two billion dollars and is supported by virtually every Democratic politician,

It’s regrettable and surprising to hear Sen. Sanders describe the very groups that fight on behalf of millions of often marginalized Americans — people who still have to fight for their most basic rights — as representing the “establishment”[.]

So there’s more going on than conservative illusions. The denialism that characterizes our politics is a consequence of the fact that both wings of today’s elite have roots in adversary politics.

As Linker points out, the Goldwater movement was the seminal moment for conservatives. For progressives, it was the personal liberation and anti-war campaigns that emerged just a few years later. Members of today’s establishment can’t be honest with themselves because they either came of political age as opponents of the status quo or nostalgically identify with those who did.

It’s impossible to talk people out of their basic self-understanding. What we can do is refuse to play along when the rich, influential, or well-connected pose as outsiders. That goes for Ivy League professors, national media figures, and presidential candidates on the right and the left. And…maybe even Donald Trump.

Why Isn’t My Professor Conservative?

The Heterodox Academy blog is circulating an overview of political opinion among college faculty. As the graph shows, more professors lean to the left today than even a few decades ago. At National Review, Michael Strain raises questions about this trend. As a member of the mere 5% of professors who identify as conservative, I have some ideas about the answers. My thoughts are interspersed with Strain’s questions below.

1. What drives this? Is there much actual discrimination against conservatives in hiring and tenure decisions at universities? Or is the relative absence of conservatives in humanities and social science departments almost entirely driven by self-selection — is it instead the case that people who go into Ph.D. programs are majority liberal, and that people who graduate with Ph.D.s and who choose to go into faculty positions are (nearly) exclusively liberal?

There’s no single cause. As the original post points out, this is partly a matter of generational replacement. The cohort of professors who started their careers in the ’50s and early ’60s was more balanced, with a lot of moderates as well as some conservatives. When they retired, they were replaced by Baby Boomers who came of age in the heyday of the student movement. Some radical activists and sympathizers liked college so much they stayed on. That explains part of the shift around the early ’90s.

Paul Krugman raises a second possibility: that the right took a turn for the extreme that alienated erstwhile sympathizers. The problem with Krugman’s analysis is that it depends on a conflation of conservatism with the Republican caucus in the House of Representatives. A more plausible explanation that emphasizes political events is that the big dip in conservative identification after 2004 reflects opposition to the Iraq War.

Like most conservatives, Strain wonders whether discrimination plays a role. My sense is that there’s not much intentional exclusion. In the natural sciences and many professional fields, politics would be very unlikely to come up in the hiring and promotion process.

Ideology is more obvious in the humanities and social sciences. When talking about discrimination in these disciplines, it’s important to distinguish among “flavors” of conservatism. Speaking broadly, economic libertarianism or foreign-policy hawkishness are considered eccentric but tolerable. Public criticism of the sexual revolution, on the other hand, is not okay. Of all the tribes of the right, conservative Christians face the biggest obstacles.

There may be another contributing factor: the adjunctification of the faculty. During the same period the graph covers, instructors working off the tenure track have become a considerable majority. Adjuncting is not an experience that promotes enthusiasm for conservative principles. A more precarious faculty is likely to be a more left-leaning one.

2. Let’s say it’s driven by selection. Then why are progressives so much more likely than conservatives to get Ph.D.s? What is it about being a professor and doing research and teaching that are more attractive to liberals than conservatives? What is it about the university environment?

All these considerations have to be taken into account when we think about self-selection. Conservatives are less likely to pursue academic careers because they don’t think they’ll find success in an already Darwinian job market.

They’re probably right, and not just because of discrimination. A more fundamental issue is that conservatives tend to be skeptics about the progressive epistemology that defines the modern university. According to this vision, the goal is to “discover new knowledge”. As a result, research is treated as more important than teaching, and teaching is understood as an assault on prejudice rather than the continuation of tradition.

This conception of the academic enterprise makes it tough to get through grad school if you see teaching as your main work or are inclined toward curatorial forms of scholarship (even though research is a relatively small element of most academic positions). Conservative social scientists may have fewer objections to this bias toward novelty. But it’s a real challenge for conservatives in the humanities.

3. Is overwhelming liberalism among humanities and social science faculty actually a significant problem? Does it affect research and teaching in the social sciences and the humanities in a non-trivial way?

It is a problem. The absence of conservatives means important questions won’t be asked and possible answers won’t be proposed and tested. A conservative presence is also important for ensuring that the curriculum includes certain classic works and unfashionable topics or methods. Finally, in a monolithically leftist academy, students won’t be exposed to a wide range of arguments and perspectives, leaving them dependent on conventional wisdom. In this respect, a stronger conservative presence is actually essential to the progressive task of challenging prejudice.

On the other hand, these are not the biggest problems the academy faces. More serious than the relative absence of political conservatives is the double threat to liberal education posed by corporatization and grievance politics. Conservatives might wish that students would read more Dante, say, or Tocqueville. But the real danger is that administrators and social justice warriors will agree that they don’t have to read anything they don’t want to.

The real question is what to do about this. Strain argues—and I agree—that ideological affirmative action is a bad idea. A more promising strategy is to reinvigorate conservative intellectual life outside the university, paying more attention to scholarship and the arts and less to politics. We’ll have a stronger case for admission to the academy when more of us make arguments or create works that can’t be ignored.

Samuel Goldman is assistant professor of political science at The George Washington University.

Two Threats to Intellectual Freedom

Matej Kastelic /

Intellectual freedom is in danger on American campuses. Although they’ve highlighted some real grievances, the recent wave of protests includes unprecedented demands for uniformity on controversial issues related to race. Big changes may take longer than protestors would like, particularly on matters subject to faculty governance. But residential life administrators are enthusiastically proclaiming the new orthodoxy in areas under their influence.

It’s important to publicize attempts to exclude politically incorrect beliefs and practices, which sometimes collapse under scrutiny. But social justice activists and their bureaucratic allies are not the only threats to free thought and speech. At many universities, presidents and trustees with business backgrounds are trying to replace relatively unsupervised instruction with scripted content delivery.

A recent incident at the University of Iowa reflects this development. At a meeting of the Staff Council, university president Bruce Harreld reportedly told faculty that there was “one way” to prepare for class and that instructors who failed to do so “should be shot”.

The ensuing controversy has revolved around the shooting comment, which Harreld now denies having made. Worse than this stupid but essentially harmless dead metaphor, however, is Harreld’s background assumption that education is about offering up previously prepared goods for students to consume. It’s worth noting that Harreld, a former computer executive who was chosen for the Iowa job through a murky selection process, has minimal academic experience and appears confused about what universities do.

The two threats to intellectual freedom are not equally distributed. Political correctness is a bigger problem at elite private universities that enjoy enormous prestige but enroll only a tiny fraction of students (these schools are also not subject to the First Amendment). The corporatization of the curriculum is a greater risk at public universities, particularly below the flagship level, that are less glamorous but do most of the actual teaching. Conservatives who care about higher education can, and should, oppose both tendencies. It is no victory to prevent the rule of fanatics by transferring power to philistines.

How to Fix College Admissions

Flickr Commons

The Supreme Court heard Fisher v. University of Texas at Austin again yesterday. Since the arguments were much the same as the first time, it’s hard to predict what the justices will do. The paradox, essentially, is that the Court has said universities have a constitutionally permissible interest in enrolling a racially diverse class, but prohibited them from using numerical quotas. So they have to design admissions policies that just happen to produce the desired level of diversity, which cannot actually be defined without violating the 14th Amendment.

The problem with this strategy is not that it lets in vast numbers of unqualified students. It is that universities’ commitment to maintaining a specific demographic balance without applying quotas encourages opacity, and even downright dishonesty, in the admissions process.

You might say: universities simply should be prohibited from pursuing racial diversity. The thing is, they’re going to do it anyway, using indirect means if necessary.

Moreover, it’s not just “underrepresented minorities” who have a better shot at admission than their grades and scores suggest. So do athletes, legacies, and students from rural states, among others. Should universities also be banned from pursuing a social and geographic mix? Some critics of affirmative action argue admission should be strictly based on academic criteria. But I’m not convinced our great universities would be improved if they were more like CalTech.

There’s a better way to make college admissions more open and more fair. We could replace the mysterious art of crafting a class with a lottery for all qualified applicants.

In its simplest version, the process would work like this. The application would involve a checklist of more or less objective, externally verifiable criteria. These might include GPA above a certain cutoff, scores of 4 or 5 on a given number of AP tests, and so on. Extracurricular achievements could be considered. For example, there might be a box to be checked by applicants who played a varsity sport. The application could even ask about socio-economic status, allowing applicants to indicate that their parents had not attended college or that they grew up in a high-poverty census tract.

Suppose the checklist contained ten criteria. Applicants who satisfied, say, six of them would be entered into a lottery for admission. Universities would then draw an appropriate number of admits. The whole exercise would take about two seconds.
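The mechanism really is that simple. As a minimal sketch in Python (the ten criterion names below are hypothetical stand-ins, and each application is just a checklist of booleans; a real university would set its own list and cutoff):

```python
import random

# Hypothetical checklist criteria -- placeholders for whatever boxes a
# university's application would actually ask about.
CRITERIA = [
    "gpa_above_cutoff", "ap_scores_met", "class_rank_top_decile",
    "varsity_sport", "arts_award", "community_service",
    "leadership_role", "first_generation_college", "high_poverty_tract",
    "strong_recommendation",
]
THRESHOLD = 6  # check at least six of the ten boxes to qualify


def eligible(applicant):
    """An applicant qualifies by checking enough boxes, not by ranking."""
    return sum(bool(applicant.get(c)) for c in CRITERIA) >= THRESHOLD


def admit(applicants, seats, seed=None):
    """Draw the class uniformly at random from the qualified pool."""
    pool = [a for a in applicants if eligible(a)]
    rng = random.Random(seed)
    return rng.sample(pool, min(seats, len(pool)))
```

Everything contestable lives in the checklist and the threshold; the draw itself involves no discretion at all, which is the point.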

In addition to its appealing transparency, a lottery would be extremely cheap. Under this plan, universities wouldn’t have to maintain a large and highly paid admissions office. All they’d need would be a good website on which applicants could enter their information and a few IT workers to manage the database.

A lottery would also relieve stress on applicants and their parents. Rather than driving themselves nuts pursuing all possible achievements, high school students could concentrate on doing well in their strongest subjects or activities.

Critics might argue a lottery would reduce academic quality. But there’s no reason to think students taken at random from a qualified pool would be worse than those selected in head-to-head comparisons. In fact, Harvard already attracts applications from more valedictorians than it can accept.

What about diversity? In the long run, the lottery would produce a student body proportional to the demographics of the applicant pool (which would not necessarily be the same as the general population). If universities aimed to enroll classes that more closely reflected the country as a whole, they could encourage applications from underrepresented groups or regions and work with K-12 schools to increase the number of applicants who met the benchmarks. Such efforts would correspond to the original meaning of affirmative action and would not require invidious racial distinctions.

The lottery plan isn’t perfect. One concern is that alumni would hate it, since it would make snobbery about where they went to college harder to justify. This could reduce donations, forcing universities to make more prudent use of their existing resources.

More broadly, elite universities might lose a bit of their cachet. They would still attract some of the world’s most brilliant students. But they could no longer claim that their careful selection of an exquisitely curated class gives them special moral authority.

Now that I think them over, these might not be such terrible problems.


Universities Meet Protest With Process

Chinese mandarins Wikimedia Commons

The protests that have struck many college campuses over the last few months have been a disturbing spectacle. The exaggerated “demands” of student activists and the groveling of many administrators and some faculty reflect an institutional culture that is, at best, ambivalent about traditional activities of teaching and learning. Rod Dreher and many other bloggers have discussed specific examples: a more comprehensive list can be found here.

But things are not quite as bad as outsiders to academia imagine. One reason is that the byzantine structure of academic governance makes it difficult to change anything, let alone impose the sweeping mandates that the protestors have in mind. At many universities, administrators seem to be counting on this tendency toward inertia to rescue them from keeping their own promises.

Take Emory, which Rod discussed here. In response to a demand for mandatory reporting of professors’ alleged racism, the Emory administration issued the following statement:

Modifications to the course evaluation form are a core component of faculty governance in each school/college.

Each academic Dean will be asked to establish a process in the school/college to review and revise current course evaluations (e.g., add the recommended open-ended questions) as well as make other revisions identified as part of the review. Next, these revised course evaluations will be shared through existing mechanisms such as the Council of Deans, the University Senate, and the ongoing assessments on student learning…The Office of Planning and Budgeting will collect information on the faculty annual evaluations as part of the annual reporting requirement for each school, specifically the nature and number of negative actions regarding faculty members.

Rod describes this statement as a surrender to the thought police. And it may turn out that way… I don’t know. But that’s not actually what the statement says. Allow me to provide a translation from academese into English.

Modifications to the course evaluation form are a core component of faculty governance in each school/college.

Changes to the course evaluation forms have to be approved by the various faculties, on a separate basis in each school or college. The administration cannot impose such changes unilaterally, and takes no responsibility for doing so.

Each academic Dean will be asked to establish a process in the school/college to review and revise current course evaluations (e.g., add the recommended open-ended questions) as well as make other revisions identified as part of the review.

The dean of each college will appoint a committee to overhaul the evaluations. The revisions might include the questions mentioned in the demand, but they also might not. Any changes submitted to the relevant faculty under that college’s existing procedures would be products of the committee report, not read off the demands.

Next, these revised course evaluations will be shared through existing mechanisms such as the Council of Deans, the University Senate, and the ongoing assessments on student learning…

The meaning of this sentence isn’t totally clear, but it suggests an additional layer of consultation and coordination.

The Office of Planning and Budgeting will collect information on the faculty annual evaluations as part of the annual reporting requirement for each school, specifically the nature and number of negative actions regarding faculty members.

The Office of Planning and Budgeting will record whatever information the evaluations collect, whether they include new questions or not.

So what’s the bottom line? That Emory will establish a procedure that is expected to last months or years, with lots of veto points along the way. It’s possible that this will lead to the kind of oversight the protestors want. But it’s more likely to yield vague guidelines that will allow the administration to preen about “diversity” without provoking a revolt of the tenured faculty, who are rarely conservative but usually don’t like meddling with their classrooms.

Smothering illiberal demands in process is a risky strategy. It would be better to reassert a core element of academic freedom: the right of instructors to present controversial ideas in their own classrooms without risking official sanction. But that would require a reconception of the university as a place for serious study rather than a playland for personal exploration and progressive politics. In the meantime, we’ll have to hope the process is as long and convoluted as possible.


The RJC Farce

Gage Skidmore/Flickr

The Republican Jewish Coalition held its forum for presidential candidates on Thursday. With the exception of Rand Paul, all of the 14 remaining candidates made appearances. The speeches were mostly predictable expressions of opposition to Islamism and support for Israel. But they also included amusing gaffes, such as Jim Gilmore explaining that he prepared for the event by watching Schindler’s List.

What was the point of this venture into the absurd? Not to attract Jewish voters. Fewer than one in three American Jews identifies as a Republican. While that number has increased slightly over the last decade, Jews overall are declining as a portion of the electorate. Politicians don’t usually risk making fools of themselves to win over a small portion of a shrinking demographic. There had to be other targets.

One of those targets can be found on the RJC’s masthead. The casino magnate Sheldon Adelson has donated millions of dollars to Republican candidates in previous elections, and is likely to do so again. Adelson himself was not present. But the forum is widely considered a kind of public audition for his support.

The other target audience was evangelical conservatives, who will make a real difference in the general election as well as the primaries. For many of these voters, “love” for Jews is an article of faith, as well as a litmus test for candidates.

Successful appeals to these audiences are more likely to reduce Republicans’ share of the Jewish vote than to increase it. Israel is less important to most American Jews than it is to Adelson and his allies. And polls suggest they actually support the administration’s Middle East policies, including the Iran deal. So Ted Cruz’s argument that Jews should ignore their disagreements with his social views in favor of his foreign policy probably won’t work.

Jews are concerned about the resurgence of anti-Semitism around the world, so they might be expected to welcome Christians’ sincere opposition to Jew hatred. The problem is that they find effusions of eternal love creepy rather than reassuring. Marco Rubio’s invocation of the “Judeo-Christian tradition” was more measured, but is unlikely to be more effective. Although it was evidently well-intentioned, this rhetoric reminds many Jews of a long history of religious cooptation.

Republicans have been expecting a breakthrough in Jewish support for decades. Despite or because of the RJC’s efforts, it probably won’t happen any time soon.


Why America Isn’t Socialist

Before it became a favorite trope of Republican presidential candidates, “American exceptionalism” belonged to the left. The phrase referred to the United States’ puzzling divergence from the pattern of development proposed by Karl Marx. According to Marx, powerful socialist movements or labor parties should arise in advanced economies as workers recognized the opposition of interests between themselves and capital. Why did this fail to occur in America?

Jack Ross is the latest to raise this question, joining an honor roll of intellectuals including Friedrich Engels, Werner Sombart, H.G. Wells, Daniel Bell, and more recently Gary Marks and Seymour Martin Lipset. His book—ostensibly a history of the Socialist Party of America (SPA), which existed formally from 1901 to 1972—reconsiders the successes and, mostly, failures of socialist organizing from the end of the Civil War up to the present. Ross argues that the fundamental error of American socialism was its leaders’ refusal to build a political party on a foundation of unions and agricultural associations, as was done in Britain and Germany. The resulting split between labor and politics condemned the SPA to marginality.

Resisting the social and cultural approaches that dominate academic historiography, Ross argues that the main obstacle to American socialism was bad decisions taken by professional activists in conventions or meeting rooms. In Ross’s view, a terrible precedent was set in 1896 when the Populist Party endorsed the Democrat William Jennings Bryan rather than nominating Eugene Debs as its candidate for president. Debs then took his followers into the Social Democratic Party, a direct ancestor of the SPA, as a dissenting rump. Although it enjoyed electoral success in a few regional strongholds, the SPA would never really get established at the national level.

As the story proceeds into the 20th century, Ross blames sinister forces for pushing the socialist movement toward cooperation with the two-party system. Reversing the conventional interpretation, he associates this “conservative” tendency with the Communists. Although they are usually remembered as the radical wing of American socialism, Ross presents them as its collaborationist “right.”

Because Ross pays little attention to ideas and proceeds chronologically rather than analytically, it is not easy to understand the basis for this characterization. Apparently it rests on the observation that leading figures in the SPA were less enthusiastic about the centralized state than we might expect socialists to be. Drawing on the Jeffersonian tradition, they envisioned a socialist America as an economic democracy embodied by cooperative enterprises and local government rather than a centralized bureaucracy.

These Jeffersonian socialists knew from the experience of the Civil War that military conflict has centralizing and bureaucratizing consequences. They also recognized that overseas expansion was a recipe for permanent militarism. So anti-imperialism, if not outright pacifism, was an important part of their socialist vision. In a phrase Ross repeats throughout the book, they wanted America to be a republic, not an empire.

This vision was appealing to the Northern European immigrants, theologically liberal Protestants, and skilled workers who were central to America’s socialist movements before the First World War. But it was anathema to those self-styled radicals who took their cues from The Communist Manifesto. For enthusiasts of the early Marx, war and imperialism were actually desirable because they hastened the final crisis of capitalism and promoted economic rationalization.

Leon Trotsky was the seminal theorist of this position. During his sojourn in the United States in early 1917, Trotsky mocked American socialists as petit bourgeois dreamers who thought socialism could be achieved by winning elections and who opposed American entry to the First World War. He believed the true road to socialism lay in temporary support for elected governments until a tiny cadre of militants could seize control of the vast powers already consolidated in the state apparatus.

Ross argues that this strategy of boring from within explains the failure of the Socialist Party after World War I. A strong showing in the 1920 presidential election demonstrated the SPA’s resilience in the face of unprecedented harassment by the Wilson administration. But the SPA was hobbled by the defection of Communists to their own party and by the internal influence of doctrinaire Marxists who saw the best opportunity for promoting socialism in cooperation with the Democrats.

Most historians present the Popular Front of the 1930s, which combined support for the New Deal with opposition to Nazi Germany, as the apogee of the American left. Ross, on the other hand, sees it as the moment when American socialists sold their Jeffersonian birthright for a mess of Bolshevik pottage.

The hero of this part of Ross’s enormous book is Norman Thomas, the former Presbyterian clergyman from Ohio whose political and cultural background made him more sympathetic to the isolationists than to FDR in the run-up to World War II, particularly after Roosevelt’s election to an unprecedented third term. Thomas was among the most prominent supporters of the America First movement, which drew part of its membership from the Socialist-led Keep America Out of War Committee.

Noting this almost forgotten overlap between the Old Left and the Old Right, Ross speculates that “a Labor or Farmer-Labor Party, had it emerged before the Second World War, would have profoundly differed from postwar liberalism. It would have in all likelihood been a progressive-isolationist major party, having much more in common with so-called rightwing populism than Cold War liberalism.” Ross contends that such a party would have been more consistent with the historic aspirations of American socialism, and perhaps more appealing to ordinary citizens, than the vision of a welfare-warfare state that Communists shared with Wilson and Roosevelt.

Perhaps. But Ross’s focus on party leaders to the exclusion of the wider political setting obscures the hurdles that such a party would have faced. To begin with, noninterventionism was popular between the world wars. This does not mean socialism was popular. Ross doesn’t see how marginal the SPA was because he emphasizes its positions on foreign policy to the exclusion of its domestic agenda. He forgets that antimilitarism was not a goal in itself for the Socialists but part of a larger ideological package.

Certainly this package included Jeffersonian elements. But it also called for public ownership of large portions of the economy. Despite their shared noninterventionism, then, the SPA was not as close to the Old Right as Ross suggests. It wasn’t as close to the mainstream of progressive politics either. For many Socialists, the New Deal was objectionable less because it was centralizing as such than because it addressed some of the side effects of capitalism without replacing the profit system. In this respect, Roosevelt’s policies really were closer to European corporatism than to Marxism.

But these distinctions held little interest to actual voters. The Socialists lost support to Communists and Democrats because these parties supported policies that appeared to be helping people in their everyday lives. This was particularly true for union members. In addition to FDR’s White House support for labor organization, unionists did well in the military buildup that preceded Pearl Harbor.

So there is little reason to think a party rooted in organized labor would have been consistently antimilitarist. It would also have been weak in the South, where unions were rare and farmers were not isolationists. Rather than a national party, the formation Ross imagines might not have been much bigger than the actual Farmer-Labor organization that the progressive Republican Robert La Follette and his family established in the upper Midwest. It would have been literally a middle-American radicalism.

Beyond counterfactuals, it is not obvious that the transformation of the SPA into a broad-based Labor-Farm party would have been a good thing. Norman Thomas’s noninterventionism and opposition to what Ross calls “state capitalism” were based on impeccable motives. But Thomas was wrong to think the United States could avoid war with Nazi Germany in the long run or that doing so was better than fighting. Ross quotes as a kind of prophecy the SPA’s 1940 platform, according to which:

Defeat of Hitler will be welcomed by all anti-fascists. But defeat of Hitler will mean the defeat of Hitlerism and a victory for democracy only if the roots of fascism and the war system are destroyed. The United States cannot contribute toward that end nor vindicate real democracy if it loses itself in the processes of war. If America enters the war, we shall be subjected to military dictatorship, the regimentation of labor and the ultimate economic collapse that must follow war. In an effort to ‘save democracy,’ we shall have destroyed its only remaining citadel.

Despite its many shortcomings, it is difficult to see Truman’s America in this grim forecast.


This book should probably end in 1952, when the SPA ran its last presidential campaign. It was a sad affair: Norman Thomas was unavailable to run because he was touring Asia at the expense of the CIA-funded American Committee for Cultural Freedom, having become a fellow traveler of Cold War liberalism. But Ross follows events up to 1972, when the party formally dissolved. This is because he wants to trace another genealogy: the emergence of neoconservatism from the sects that persisted after Thomas’s defection.

It is a kind of demonic-possession story. Having fatally weakened the SPA in the ’20s and ’30s, the Bolsheviks return to reanimate the corpse after World War II. This time the villain is Max Shachtman, a Trotskyist who argued that the Soviet Union had become an obstacle to the very revolution that it had initiated. Shachtman and his followers urged socialists to work within the Democratic Party to promote a hard line against the USSR, as well as socialization of the domestic economy, rather than offering an electoral alternative. Their strategy of “realignment” attracted younger figures who became the public faces of socialism in the 1960s, most notably Michael Harrington and Bayard Rustin.

The key fact for Ross is that the Shachtmanites belied their socialist rhetoric and the SPA’s legacy of antimilitarism by offering political cover for the welfare-warfare policies of Democrats such as Lyndon Johnson, Hubert Humphrey, and Scoop Jackson. By 1972, many of them supported Richard Nixon. Following the lead of former Communists who played prominent roles in the conservative movement of the 1950s, the strategic “right” of postwar socialism eventually found a new home on the political right in the 1970s.

This is a fascinating story, which Ross may be the first to treat as a continuation of the odyssey that begins with the Populists and the Knights of Labor. But it fits awkwardly with the four or five hundred pages that precede it. By the time Ross turns to the post-history of the SPA—a sequence of almost totally insignificant paper organizations—the number of names, groups, and journals in play has become overwhelming. Early socialists dreamed of a broad-based party that could attract support from millions of ordinary Americans. By the end, there were more factions than members.

So why wasn’t that dream realized? Strategic choices clearly played some part in the failure to establish a viable socialist party. So did official repression. Later on, socialists had to contend with the problem of co-optation, whether by Communists or Democrats. Ross documents all these factors. Yet an even more important consideration is almost entirely absent from his analysis: America’s ethnic and religious diversity. Socialism appeals to class as the defining fact of politics. It is most successful when people have few other major differences from each other.

This has never been the case in the United States. Even before the wave of immigration from Eastern and Southern Europe that began in the 1870s, many Americans saw themselves as old stock or German or Irish, Catholic or Protestant, rather than as workers. There were some examples of pan-ethnic cooperation. But they were generally limited to specific industries or periods of economic depression.

Race was even more important. Blacks were America’s most exploited laborers, but they never embraced socialism to any significant extent. This was partly because their political activity was sharply restricted and partly because many early populists and socialists were white supremacists who limited black participation to racially segregated cells or discouraged it altogether. Due to the tragedy of America’s racial history, socialists lost a major source of potential support.

Finally, the leading representatives of American socialism may have been too American for their own good. The Jeffersonian elements Ross prizes were most appealing to middle-class Midwestern Protestants. As Lipset and Marks have pointed out, anti-statist themes were less exciting to a working class composed of new immigrants, particularly Catholics. They responded to Father Coughlin, not Norman Thomas.

It is possible that better luck and more skillful tactics could have overcome these obstacles. But it is not clear how much that would have mattered in the end. Despite their antimilitarist beginnings, socialist parties in most of Europe supported both World Wars and then embraced much the same blend of social welfare, economic corporatism, and militarized internationalism that has defined the Democratic Party at least since FDR.

Perhaps America is not exceptional after all.

Samuel Goldman is assistant professor of political science at The George Washington University.

Democrats Are Not Socialists, and Neither Is Bernie Sanders

Debbie Wasserman Schultz was recently mocked for flubbing a question from Chris Matthews. Asked the difference between Democrats and socialists, Wasserman Schultz tried to talk about the difference between Democrats and Republicans.

The exchange doesn’t reflect well on Wasserman Schultz, who plows through her talking points as if the question had never been asked. But it has a pretty easy answer. Historically, the essential feature of socialism is the demand for public ownership or direct government control of major sectors of the economy. A bit more abstractly, socialists have aimed to eliminate considerations of profit from as many areas of life as possible. They used to describe this goal as “revolution”, which didn’t necessarily mean violence.

The modern Democratic Party isn’t about revolution. Since FDR, Democrats have consistently supported regulated competition and redistributive policies that direct private profits toward the relative losers in market exchange. These strategies are better understood as “welfarism” than socialism. A concrete example? Compare Britain’s NHS before Thatcher’s reforms to Medicare…or Obamacare, for that matter.

There’s something of a spectrum between these positions. Even so, you don’t meet many socialists in mainstream politics these days. Most “Socialist” parties in Europe abandoned their revolutionary dreams a long time ago. And the self-declared socialist Bernie Sanders offers a welfarist agenda that’s barely updated from the ’50s.

So no, Democrats aren’t socialists. We might be able to have a less stupid discussion of their actual positions if welfarists, and their critics, knew the difference.

Samuel Goldman is assistant professor of political science at The George Washington University.

It’s the Donald’s World, We’re Just Living In It

Christopher Halloran /

In late summer and early autumn of 1858, Abraham Lincoln and Stephen Douglas conducted seven debates around the State of Illinois. Thousands of people attended the contests, at which each speaker got 90 minutes to make his case. The result is among the classics of American political oratory. The style is folksy and occasionally silly. But the language is clear and honest, and the arguments are deadly serious.

Last night’s spectacle wasn’t like that. Although we call it a debate, it was really a group interview, with candidates answering questions from the moderators rather than developing and contrasting their positions. This was probably unavoidable, given the number of candidates and short attention spans of viewers. But the reality show format doesn’t inspire pride in American civic life.

Pious lament aside, however, the debate was vastly entertaining. I watched with a boisterous crowd of undergraduates at an ISI Honors Conference. They were, by turns, amused, provoked, and inspired. As the discussion continued late (too late) into the night, they also resisted any consensus about the winners. That’s important: pretty much all the candidates had at least one effective moment.

Rubio was probably the biggest surprise. Although he’s been overshadowed recently, he was the most likeable and relaxed man on the stage. John Kasich made a pretty good impression. He got tangled up in statistics and his dad’s resume, but many of the conservative students I watched with were sympathetic to his faith-based argument for expanded healthcare. Huckabee also remains fantastic at doing what he does.

But the big story is still Trump. The Fox News commentators seemed pretty sure that his schtick turned people off. We’ll get a sense from polling over the next few days whether that’s true. For my part, I thought his response to several attempted gotcha questions was pretty successful. Remember: a lot of Americans have had trouble with lenders or wavered in their party affiliations. What looks like irresponsibility to media and activist crowds might seem pretty normal beyond those circles.

The biggest losers were Bush, Walker, and Paul. They seemed uncomfortable and didn’t offer any memorable lines. Bush can afford a rope-a-dope strategy while the field clears. Walker and Paul need to provide justifications for their campaigns.

Last come the C’s. I thought Cruz and Carson made little impression. Carson’s a nice man who doesn’t belong in the race. And Cruz is expert at impressing his supporters… and no one else. Several people I spoke to thought Christie turned in a strong performance. Maybe. But I was not impressed by his hug-a-thon with Rand Paul.

The truth is, debates don’t matter much to election results, and early debates matter even less. So it’s not worth a lot of mental energy to analyze the details. As long as Trump remains in the race, he’ll be the center of attention, encouraging the carnival atmosphere. It’s the Donald’s world, we’re just living in it.

Samuel Goldman is assistant professor of political science at The George Washington University.

Can America Learn from German Universities?

Until Hitler took power, German universities were the envy of the world. They had the best facilities, offered the best training, and employed the best researchers. Between 1901 and 1932, scholars based in Germany won 33 Nobel Prizes for academic work (counting the historian Theodor Mommsen, but excluding other winners in Literature). Americans won just five.

The academic balance of power has changed. American universities dominate international rankings. And German officials periodically warn of a “brain drain” toward the United States. It’s a sad decline for the land of Humboldt, Hegel, and Heisenberg.

You might reasonably conclude that German universities have something to learn from their American counterparts. The Notre Dame professor Mark Roche made that case in a recent book. Last week, he turned the argument around. In a piece for FAZ (link in German), Roche suggests that American universities emulate seven features of German universities: the intellectual independence they offer students; the seminar system; a place of honor for the traditional lecture; double majoring; professors who take a broad view of their subject; respect for the humanities; and a generous attitude toward academic training for non-academic careers.

Roche could have mentioned another appealing aspect of German universities: they’re much cheaper to run. As Rebecca Schuman reminds progressives impressed by the fact that they don’t charge tuition,

German universities consist almost entirely of classroom buildings and libraries—no palatial gyms with rock walls and water parks; no team sports facilities (unless you count the fencing fraternities I will never understand); no billion-dollar student unions with flat-screen TVs and first-run movie theaters. And forget the resort-style dormitories. What few dorms exist are minimalistic, to put it kindly—but that’s largely irrelevant anyway, as many German students still live at home with their parents, or in independent apartment shares, none of which foster the kind of insular, summer-camp-esque experience Americans associate closely with college life (and its hefty price tag)…There is also little in the way of academic advising, which in the U.S. is now so hands-on that it has become its own cottage industry within the administration. Over there, you’re expected to know what you need to take, and to take it.

Roche provides useful reminders of the shortcomings of American higher education, which is quite expensive and not all that effective for undergraduates. But I’m not convinced Germany has many lessons to teach.

To begin with, American colleges and universities already do several of the things Roche recommends. Double majors, for example, are pretty common.

Some of Roche’s other suggestions are in tension with each other. You can emphasize small seminars or traditional lectures, but probably not both. In any case, the relation between lectures and specialized study in Germany is determined by a model of secondary education that few Americans would accept. German students are ready for advanced work because they attended tracked high schools that rigorously separate the college-bound minority from those destined for trades.

Finally, funding structures make a difference. Because they depend on enrollment rather than direct subsidies, American universities have to compete for customers. Although they don’t always pay off, football, fancy dorms, and other amenities that attract students are often attempts to balance the books.

The main problem, however, is that Roche thinks too much like a German. His argument implies that there’s just one model of well-run university. This approach goes back to Humboldt himself, who conceived the research university as a rational synthesis of ancient and modern, theory and practice, institution and individual.

American universities have never achieved this ideal, or even seriously pursued it. The truth is, the set of responsibilities they’ve acquired doesn’t make a lot of sense.

That’s not so terrible, though. What we lack in coherence we gain in diversity. In Germany, one university is about the same as another. Americans, on the other hand, can choose public or private, secular or religious, technical or humanistic, urban or rural, and so on. Rather than trying to fix colleges by making them more similar, we should resist standardization, whether it’s justified by economic, political, or even academic considerations. The Germans will always do that kind of thing better, anyway.

Samuel Goldman is assistant professor of political science at The George Washington University.

Another Misguided MOOC

Last week, Arizona State University and edX announced a new program to offer freshman instruction online. Unlike most MOOCs, these courses would offer graduation credits that students could use to continue at ASU or to transfer elsewhere. Although it questions the logistics, Walter Russell Mead’s blog argues that “this kind of experiment is promising, and shows how the mainstreaming of MOOCs could help lower costs.”

Lowering higher ed costs is an important goal, but MOOCification is the wrong way to go. The first reason, as Matt Reed points out, is that community colleges are already extremely cheap. ASU/edX would charge $200 per credit. Yet students can take similar courses at Maricopa Community College for just $84 per credit. Students in these courses would also have the benefit of “an actual instructor [to] provide actual guidance and feedback throughout the course.”

Supporters of the ASU plan might observe that students who enroll in four-year colleges are more likely to get a degree than those with similar SAT scores who start at two-year institutions. But getting a degree is not an end in itself. Even romantics like me think it ought to promote students’ intellectual, cultural, and yes, economic flourishing after graduation.

Thanks to a survey by Gallup and Purdue, we now have a pretty good idea which aspects of college students think helped them in their postgrad lives. Here are the summary results:


So what matters to students, basically, is having personal relationships with professors and participating in extracurricular activities. In other words: the very experiences that MOOCs can’t provide, even if they’re taught by superstar lecturers.

As far as costs go, there’s good news and bad news in these results. The good news is that students don’t need posh dorms, elaborate food, and luxurious gyms. Although we can’t get the money back for monuments of indulgence that have already been built, universities can safely cut back on facilities in the future. If they’re worried that they’ll have trouble attracting paying customers without a lazy river, they might try emphasizing their commitment to what “science shows” really matters.

The bad news is that instructors who aren’t too overworked and stressed out to do real teaching and mentoring don’t come cheap. No one becomes an academic to get rich. In order to do their jobs, however, they need decent compensation, some job security, and reasonable teaching loads and research expectations.

Yet these are the costs that administrators and disruption theorists reliably attack. Somehow, there’s always money for high-tech gimmicks and bigwigs’ salaries… but not for the people who do the most important work. MOOCs may be useful in providing instruction in specific areas, particularly for adult students. But they’re a distraction from the real problem of higher ed: how to offer serious instruction in real subjects to more of the students who want them, and to figure out something else to do with those who don’t.

Samuel Goldman is assistant professor of political science at The George Washington University.

The Mirage of a Classless Society

Paul Krugman (flickr / 00Joshi)

In a recent post, Paul Krugman reiterated his view that conservative critics of the welfare state are petty authoritarians. Citing Corey Robin’s The Reactionary Mind, Krugman explains:

It’s fundamentally about challenging or sustaining traditional hierarchy. The actual lineup of positions on social and economic issues doesn’t make sense if you assume that conservatives are, as they claim, defenders of personal liberty on all fronts. But it makes perfect sense if you suppose that conservatism is instead about preserving traditional forms of authority: employers over workers, patriarchs over families. A strong social safety net undermines the first, because it empowers workers to demand more or quit; permissive social policy undermines the second in obvious ways.

In contrast to conservatism, Krugman argues:

…modern liberalism is in some sense the obverse — it is about creating a society that is more fluid as well as fairer. We all like to laugh at the war-on-Christmas types, right-wing blowhards who fulminate about the liberal plot to destroy family values. We like to point out that a country like France, with maternity leave, aid to new mothers, and more, is a lot more family-friendly than rat-race America. But if “family values” actually means traditional structures of authority, then there’s a grain of truth in the accusation. Both social insurance and civil rights are solvents that dissolve some of the restraints that hold people in place, be they unhappy workers or unhappy spouses. And that’s part of why people like me support them.

I’ve written about Robin’s widely-misunderstood argument in the past. But Krugman’s post is a good opportunity to revisit and summarize my critique. In short, Robin is right that classic conservative theorists were defenders of economic, social, and political hierarchy against modern liberation movements. But he misunderstands the basis of the position.

The conservative position has never been simply that a hierarchical society is better than an egalitarian one. It’s that an egalitarian society is impossible. Every society includes rulers and ruled. The central question of politics, therefore, is not whether some will command while others obey. It’s who gives the orders.

Radical leftists understand this. That’s why Lenin’s “who, whom?” question became an unofficial motto of Bolshevism. The Bolsheviks promised that a classless society would one day emerge. In the meantime, however, they were open and enthusiastic practitioners of power politics.

Modern liberals find this vision upsetting. So they pretend that their policies are about reducing inequality and promoting freedom rather than empowering some people at the expense of others. They associate inequality with wealth and freedom with liberation from religion and family. So they assume that a society in which rich people, churches, and fathers have less power is ipso facto freer and more equal.

Notice how Krugman’s hostility to these traditional hierarchies blinds him to other kinds of inequality. He praises France because social insurance and stronger protections for employees make it easier for mothers and workers to stand up to patriarchs and bosses. Do they really make France “fairer and more fluid”? In cultural terms, perhaps. But not politically or even economically.

The defining feature of French life is that the welfare and regulatory state Krugman admires is administered by graduates of elite educational institutions. These aristocrats of the universities and civil service are geographically concentrated in Paris and anecdotally quite “inbred.” France is not a class society in the Marxist sense. But it could be described with only minimal exaggeration as an ENAligarchy.

Krugman doesn’t see the énarques as a ruling class that needs to be knocked down a peg because their authority isn’t traditional. They wield power over other people’s lives because they got good grades, not because they have a lot of money or are heads of households or leaders of religious communities. But academic meritocracy is not the same thing as a fluid and fairer society. It’s certainly no fairer that some people are lucky enough to be smart than that others are good at making a fortune.

And France is no star when it comes to economic mobility. According to a review of the literature by the economist Miles Corak, France joins the U.S. and the UK as the Western countries with the least intergenerational mobility. Krugman also doesn’t mention that France is a very good place to have a job, but not so hospitable to people looking for work. That’s especially a problem for young people who didn’t go to the best schools.

There are serious arguments in favor of rule by a highly-trained administrative class within a moderately redistributive capitalist economy. Those arguments were a crucial source of the modern liberalism that Krugman endorses, and have recently been reiterated by Francis Fukuyama. What modern liberals really want, however, isn’t freedom or equality—terms that have no meaning before it’s determined for what and by whom they will be enjoyed. As conservatives have long understood, it’s a society in which people like themselves and their favored constituencies have more power while the old elites of property, church, and family have less.

Samuel Goldman is assistant professor of political science at The George Washington University.

What Libertarians (and Conservatives) Don’t Understand About Poverty

Photo by Jeremy Brooks (creative commons)

You can’t spend much time in right-of-center circles without hearing, often in the comfort of an open bar, that America’s poor don’t have it too bad. Yes, there are about 45 million people below the official poverty line. But that doesn’t mean that they’re suffering under the conditions we see in photos of the dustbowl or the old industrial slums. A Heritage Foundation report observes that “When LBJ launched the War on Poverty, about a quarter of poor Americans lacked flush toilets and running water in the home. Today, such conditions have all but vanished. According to government surveys, over 80 percent of the poor have air conditioning, three quarters have a car, nearly two thirds have cable or satellite TV, half have a computer, and 40 percent have a wide screen HDTV.”

Megan McArdle makes a version of the same argument in her comment on Joni Ernst’s State of the Union response. She reminds Internet snarks who mocked Ernst’s story about using plastic bags to protect her only pair of shoes that this was a common practice until pretty recently. According to McArdle, “we forget how much poorer we used to be, and then we forget that we have forgotten.” These days, even people without much money enjoy a material abundance of which their grandparents could only have dreamed. (Rod Dreher remembers the story of his own family here.)

That’s true, as far as it goes. Pretty much anything that’s made in a factory is cheaper and higher-quality than it used to be. I admit to moments of SWPL enthusiasm for craftsmanship (or the Brooklyn facsimile). But let’s get real: expanded access to consumer products is a good thing.

But that doesn’t mean poverty is exaggerated by ungrateful whiners. Goods and services that depend on skilled human labor cost more than they used to. Curiously, McArdle relies on figures from 1987 to make her case that American households face lighter expenses for necessities than they used to. That ignores the increase in prices for childcare, healthcare, and higher education over the last decade or so.

So people can accumulate possessions while maintaining a relatively low standard of living. Indoor plumbing won’t take care of your kids, and an Xbox won’t send them to college. The poor are also more likely to suffer from “diseases of affluence” such as obesity and diabetes. Unlike the truly affluent, however, they can’t afford to have them treated.

Material deprivation is also not always the most wrenching aspect of poverty. As Karl Polanyi argues in his study of the Industrial Revolution, the lack of meaningful work and a secure social position can be worse than low wages or high consumer prices.

It’s important to question depictions of Dickensian poverty in the media, which often focus on exceptional cases. And we should resist nostalgia for a mythical time when folks didn’t have much but their dignity. But the grinding, uncertain lives of poor Americans today are a problem that new shoes and air conditioning won’t solve.

Literary Addendum: McArdle draws several examples of the bad old days from Laura Ingalls Wilder’s Little House novels for children. She claims:

…what really strikes you is how incredibly poor these people were. The Ingalls family were in many ways bourgeoisie: educated by the standards of the day, active in community leadership, landowners. And they had nothing.

This is a serious misreading of the books. As Wilder’s autobiography makes clear, the reason that the Ingalls family seems poor is that they were poor. There was nothing bourgeois about them, except perhaps Ma’s (relatively) advanced education. Even in the idealized version presented in the books, the Ingallses fail, time and again, to realize their dream of becoming independent farmers. That’s why the story ends, rather tragically, with Pa working as a clerk in a railroad town, a fate that he’d dragged his family thousands of miles to avoid.

Yes, Political Correctness Really Exists

Marcuse family photo. CC BY-SA 3.0. via Wikimedia Commons.

Jonathan Chait burned up the Internet this week with his critique of so-called political correctness. Among many responses, Amanda Taub‘s stands out for its denial of Chait’s basic premise. According to Taub:

…there’s no such thing as “political correctness.” The term’s in wide use, certainly, but has no actual fixed or specific meaning. What defines it is not what it describes but how it’s used: as a way to dismiss a concern or demand as a frivolous grievance rather than a real issue.

This is a curious response. Sure, people use the term in different ways. But Chait provides a perfectly serviceable definition: “political correctness is a style of politics in which the more radical members of the left attempt to regulate political discourse by defining opposing views as bigoted and illegitimate.”

I don’t think Taub would deny that this political style exists, although one may quibble with some of Chait’s examples. What she objects to is the way Chait describes it. In her view, calling denunciations of putatively bigoted opinions “political correctness” allows their advocates to avoid taking those criticisms seriously. So, in a feat of rhetorical jujitsu, Chait becomes guilty of the same tendency he opposes: ruling views he rejects out of respectable conversation.

This dispute is an object lesson in the pernicious effect of political correctness—or whatever you want to call it—on intellectual and political debate. Arguments about ideas devolve into wrangling about words. The conduct of politics by means of semantics sometimes reaches comic heights. In his piece, Chait reports an incident in which,

UCLA students staged a sit-in to protest microaggressions such as when a professor corrected a student’s decision to spell the word indigenous with an uppercase I—one example of many “perceived grammatical choices that in actuality reflect ideologies.”

But there’s nothing important at stake in the phrase “political correctness”. So let’s drop it, at least provisionally, and focus on the phenomenon that Chait describes. Contrary to popular perception, it’s not just a product of youthful exuberance among student activists or the ease and enforced brevity of Twitter. It’s rooted in a philosophical critique of the liberal theory of discourse.

Although it has precedents in Kant, this theory received a definitive formulation in John Stuart Mill’s On Liberty. According to Mill, the truth is most likely to emerge from unrestricted debate. Although Mill did not use the metaphor, such a debate is conventionally described as a “marketplace of ideas,” in which vendors are free to offer their wares and customers are at liberty to purchase only the best goods.

There are two problems with this image. The first is that it assumes that consumers of ideas are in a position to judge which most closely approximate the truth. But that may not be the case. In order to make good purchasing decisions, customers need a certain level of background information and capacity for comparison.

In order to make the intellectual market function properly, Mill proposed that participation be restricted to “human beings in the maturity of their faculties.” In the most obvious sense, that means that we should not rely on judgments by children or the insane.

But Mill did not stop with ruling out those who had not yet reached the age of majority, or whose reason was in some way deranged. He also argued that the liberty of thought and discussion was not appropriate for “those backward states of society in which the race itself may be considered as in its nonage.” When it comes to “barbarians,” Mill reasoned, it is appropriate to use coercion, just as it is appropriate for parents to monitor their children’s reading. The implication for contemporary politics was that Britain was justified in practicing a kind of tutelary imperialism.

That conclusion might rule Mill off the syllabus at some universities today. But it actually reflects an important and potentially damaging tension in his argument. Mill defends the unrestricted exchange of ideas. Yet he also accords to those he judges fully rational the authority to determine who gets to participate in that exchange—and to enforce the education of those who don’t make the cut. For Mill, in other words, intellectual freedom presupposes a period of enlightened despotism.

The second problem emerges more directly from the quasi-commercial dimensions of Mill’s epistemological model. Mill assumed that all normally-constituted adults who had received a basic education were capable of reliably picking and choosing among intellectual offerings. That assumes they are unaffected by the sellers’ attempts to influence their choices.

But consumer preferences are influenced by advertising, reputation, the way products are presented, habit, and so on. In practice, it's not easy to get shoppers to consider buying something new and different, even if it really is better than its competitors. Most of the time, they buy the same products from familiar brands.

Some Marxists call the factors that interfere with judgment "false consciousness." They argue that false consciousness accounts for the failure of revolutionary ideology to attract adherents among the working class in the developed world. On this view, it wasn't outright repression or censorship that prevented the workers from adopting a Marxist perspective. It was the subtle and concealed influence of capital on their ability to make their own decisions.

These tensions in Mill's defense of intellectual freedom were recognized in the 19th century. But what we now call political correctness was first articulated in the 1960s by the brilliant German-born philosopher Herbert Marcuse. Marcuse's achievement was to turn Mill's argument for free discussion against its explicit conclusion, at least as applied to a modern Western society.

Marcuse undertakes this inversion, worthy of a black belt in dialectical reasoning, in the 1965 essay "Repressive Tolerance." In it, Marcuse argues that the marketplace of ideas can't function as Mill expected, because the game is rigged in favor of those who are already powerful. Some ideas enjoy undeserved appeal due to tradition or the prestige of their advocates. And "consumers" are not really free to choose, given the influence of advertising and the pressures of social and economic need. Thus the outcome of formally free debate is actually predetermined. The ideas that win will generally be those that justify the existing order; those that lose will be those that challenge it.

This prong of the argument is close to the standard critique of false consciousness. But Marcuse links it to Mill’s distinction between those who are and are not capable of participating in and benefitting from the unrestricted exchange of ideas.

According to Marcuse, many people who appear to be rational, self-determining men and women are actually in a condition of ideologically enforced immaturity. They are therefore incapable of exercising the kind of judgment that Mill's argument presumes. In order to make debate meaningful, they need to be properly educated. This education is the responsibility of those who have already shown themselves capable of thinking for themselves—in this case, left-wing intellectuals rather than Victorian colonial administrators.

One might wonder how either Mill or Marcuse could be so sure that their kind of people knew what was best for others. The answer is that they regarded the truth as obvious. Mill was convinced that progress had demonstrated the obsolescence of non-Western culture, just as it had exposed the falsity of geocentric astronomy. In a postscript to the original essay, Marcuse expressed similar confidence in the rationality, if not the linear character, of history:

As against the virulent denunciations that such a policy would do away with the sacred liberalistic principle of equality for ‘the other side’, I maintain that there are issues where either there is no ‘other side’ in any more than a formalistic sense, or where ‘the other side’ is demonstrably regressive…

In Marcuse's hands, Mill's justification of enlightened despotism in undeveloped societies becomes a justification of enlightened despotism over the majority of undeveloped individuals. The central difference between Mill and Marcuse is that the former believed the necessity of despotism had passed, at least in the West. Marcuse contended that intellectual freedom had to be deferred until more people were likely to develop the correct opinions:

…the ways should not be blocked on which a subversive majority could develop, and if they are blocked by organized repression and indoctrination, their reopening may require apparently undemocratic means. They would include the withdrawal of toleration of speech and assembly from groups and movements which promote aggressive policies, armament, chauvinism, discrimination on the grounds of race and religion, or which oppose the extension of public services, social security, medical care, etc. Moreover, the restoration of freedom of thought may necessitate new and rigid restrictions on teachings and practices in the educational institutions which, by their very methods and concepts, serve to enclose the mind within the established universe of discourse and behavior—thereby precluding a priori a rational evaluation of the alternatives. And to the degree to which freedom of thought involves the struggle against inhumanity, restoration of such freedom would also imply intolerance toward scientific research in the interest of deadly ‘deterrents’, of abnormal human endurance under inhuman conditions, etc.

This passage is remarkable for the degree to which it prefigures so-called political correctness. Marcuse's thought is that it is impossible for radical ideas to win a "free debate" in a society characterized by many forms of inequality. Therefore, debate should be restructured in ways that favor the weak and lowly. Marcuse goes on to speculate:

While the reversal of the trend in the education enterprise at least could conceivably be enforced by the students and teachers themselves, the systematic withdrawal of tolerance toward regressive and repressive opinions and movements could only be envisaged as the results of large-scale pressure which would amount to an upheaval.

Marcuse’s emphasis on students and professors encouraged the transformation of the universities that’s been exhaustively discussed by writers such as Roger Kimball. But his hopes for “large scale” pressure were disappointed until fairly recently, partly because the repressive tolerance thesis is as offensive to ordinary people as it is attractive to academics.

The advent of social media changed that dynamic. In addition to tilting public discourse toward the young, who are more likely to use these platforms, social media make it easier for those whom Marcuse frankly described as subversives to organize and target the withdrawal of tolerance.

To be clear, I’m not suggesting that Gawker commenters are secret Marcusians. Actually, they’d probably benefit from reading this extraordinarily learned, subtle thinker. But they have absorbed a simplified version of Marcuse’s critique of Mill. In Marcuse, this critique culminates in an endorsement of legal as well as social pressure to hasten progress:

Different opinions and ‘philosophies’ can no longer compete peacefully for adherence and persuasion on rational grounds: the ‘marketplace of ideas’ is organized and delimited by those who determine the national and the individual interest….The small and powerless minorities which struggle against the false consciousness and its beneficiaries must be helped: their continued existence is more important than the preservation of the rights and liberties which grant constitutional powers to those who oppress these minorities. It should be evident by now that the exercise of civil rights by those who don’t have them presupposes the withdrawal of civil rights from those who prevent their exercise…

How long until his unwitting heirs come to the same conclusion?

Samuel Goldman is assistant professor of political science at The George Washington University.

Leo Strauss: Hawk or Dove?

University of Chicago

The LORD is a man of war: the LORD is his name.
—Exodus 15:3

There is an old story that the Archangel Michael and the devil feuded over Moses’ remains. While Michael aimed to convey the prophet’s body up to heaven, Satan was determined to keep him buried in the dirt.

If the comparison is not impious, we may speak of a similar contest for custody of Leo Strauss. According to his admirers, Strauss earned a place among the angels by promoting Greek-inspired rationalism and a cautious liberalism. Strauss’s critics contend that he was a demonic figure, who encouraged his acolytes to disregard both scholarly probity and basic morality in favor of a Nietzschean will to power.

Strauss’s supporters held the upper hand so long as debate was focused on works that Strauss prepared for publication after his arrival in the United States in 1937. Yet they have struggled to explain the early works in German that have come to light over the last decade. These texts do not appear to be the work of a liberal rationalist. In a notorious letter to philosopher of history Karl Löwith, Strauss even expressed support for “the principles of the Right, fascist, authoritarian, imperialist principles…”

Robert Howse, who teaches law at New York University, is the latest combatant in the Strauss wars. In Leo Strauss: Man of Peace, Howse defends Strauss from his enemies while distancing him from some of his self-appointed friends. Howse acknowledges that Strauss flirted with extremism. But he argues that Strauss devoted the rest of his career to t’shuvah, a Hebrew word that is usually translated “repentance.”

Howse’s study has the merit of drawing on newly available sources from Strauss’s intellectual maturity: the archive of seminars made available by the Leo Strauss Center at the University of Chicago. And Howse is among very few writers on Strauss who are sympathetic to their subject without being sycophantic. Despite these virtues, I do not think Howse wins the battle for Strauss’s legacy, at least if this means distancing him from the politics of national self-assertion. That is because Howse does not draw the connection between Strauss’s early critique of liberalism and his lifelong Zionism.

Howse focuses on political violence. Departing from Strauss’s alleged influence on supporters of the Iraq War, Howse asks whether Strauss thought violence should be regulated by a normative standard or deployed according to its user’s interest. He answers that Strauss sought “a middle way between strict morality and sheer Machiavellian[ism].”

In itself, this conclusion is not very interesting. Every significant political theorist, including Machiavelli, has tried in some way to steer between the rocks of moral absolutism and political solipsism. Howse’s contribution is an argument about the character of the “middle way” that Strauss preferred. In courses on Thucydides, Kant, and Grotius from the 1960s, Howse finds Strauss praising the Nuremberg trials, United Nations, and nascent European Community. He argues that Strauss was essentially a Cold War liberal internationalist.

To make his case, Howse has to refute an interpretation of Strauss that has become dominant over the last decade or so. According to this interpretation, the most important influence on Strauss was the reactionary legal philosopher Carl Schmitt. In seminal works from the 1920s, Schmitt argued that the basis of politics is the distinction between friend and foe realized in mortal combat.

The Schmitt connection has been a centerpiece of attacks on Strauss since the mid-1990s. But Howse is more interested in confronting Strauss’s allies than in rehashing old debates. In particular, Howse accuses Heinrich Meier, the German scholar who edited Strauss’s Gesammelte Schriften, of “misreading Strauss as a hyper-Schmittian.” According to Howse, Meier not only inflates the significance of Schmitt for Strauss but also presents Strauss as agreeing with Schmitt’s politics of existential opposition.

Howse’s response to Meier has several dimensions. On the textual level, Howse shows that there is not enough evidence to support claims that Schmitt was among Strauss’s most important interlocutors. Strauss wrote a 1932 review of Schmitt’s seminal work, The Concept of the Political, that Schmitt recognized as the most searching he received. On that basis, he wrote a letter of recommendation for the Rockefeller Foundation grant that allowed Strauss to leave Germany. But these facts demonstrate no more than a professional relationship between scholars. And Schmitt’s anti-Semitism may have given him personal reasons to wish that Jewish intellectuals would make their careers elsewhere.

On the philosophical level, Howse reaffirms that Strauss was deeply critical of Schmitt’s approach. Although Schmitt claimed that he was distinguishing politics from morality, his argument was based on the assumption that a life devoted to existential confrontation is more worthy than one devoted to peace and prosperity. Although opposed to Christian and bourgeois norms, this assumption is inextricably normative. Strauss exposed Schmitt’s hard-boiled realism as a cover for his own brand of moralism.

Finally, Howse also offers a plausible if not novel account of the historical setting in which Strauss could have been attracted to antiliberalism without endorsing Schmitt’s theory of enmity. Given the failure of the Weimar republic, it seemed that a politics of militant self-assertion was necessary to protect Germany’s Jews. Strauss’s praise of “the principles of the Right, fascist, authoritarian, imperialist principles…” has to be read with this consideration in mind. The full context of the letter makes it clear that Strauss believed that only such principles were capable of standing up to the National Socialist regime.

When he wrote those words in 1933, Strauss may have been thinking of Mussolini, who was at the time an opponent of Hitler. As the 1930s continued, however, he associated them with Churchill. After Strauss arrived in the United States, he was known for insisting that “I am not liberal, I am not conservative, I always follow Churchill.” As Paul Gottfried pointed out in Leo Strauss and the Conservative Movement in America, Strauss was far more enthusiastic about England than the land of his birth.

Howse suggests that Strauss’s admiration for Churchill indicates his growing appreciation for liberal democracy. This is true, but only partly. What Strauss admired in Churchill and England was not bourgeois virtue or popular government. Rather, it was the “Roman” element he praised in the letter to Löwith.

This element consisted, in the first place, of a defiant militarism. In the letter to Löwith, Strauss quotes Virgil’s exhortation to the Romans “to rule with authority … spare the vanquished and crush the proud.” He elides the interstitial phrase “impose the way of peace.” The elision suggests that for Strauss the Roman way is the way of war and empire.

In 1934, Strauss identified a Roman quality in British parliamentary debate. But what he was praising was fairly specific: Churchill's eloquent support for rearmament. There were compelling political and personal reasons for Strauss's enthusiasm for military resistance to Germany. Nevertheless, it is worth recalling that this view was deeply unpopular in the mid-'30s. Strauss's praise for Parliament rested on its senatorial rather than its plebiscitary character.

Strauss’s affection for patricians was an important part of his near-worship of Churchill. As an aristocrat, soldier, and enthusiastic imperialist, Churchill personally represented the survival of premodern virtues within liberal democracy. It is easy to forget that Churchill’s critics often castigated him in terms similar to those Strauss used in his letter: imperialist, authoritarian, even fascist. In Strauss’s view, however, these were precisely the qualities that enabled Churchill to take a lonely stand against Hitler.

For Strauss, then, “the principles of the Right” were not so distant as they might now seem from the values that helped save Europe from Nazi domination and its Jews from extinction. In 1941, he explained to an audience at the New School for Social Research that “it is the English, and not the Germans, who deserve to be, and to remain, an imperial nation: for only the English … have understood that in order to deserve to exercise imperial rule, regere imperio populos, one must have learned for a very long time to spare the vanquished and to crush the arrogant: parcere subjectis et debellare superbos.”

Howse argues that the lecture on “German Nihilism” from which this passage is quoted shows Strauss continuing his critique of Schmitt. And this is true as far as it goes: Strauss rejects the “warrior morality” that finds meaning in confrontation with a mortal enemy.

But that does not mean Strauss rejected violence as such. Like the letter to Löwith, “German Nihilism” culminates in a defense of war and empire. For Strauss, the problem with Schmitt was not that he placed violence at the center of politics. It was that he did so for the wrong reasons.

What did Strauss believe were the right reasons for violence? From the crisis of the 1920s, he learned that coercion was necessary to secure order. In the intellectual autobiography that he added to the 1965 English translation of his book on Spinoza, Strauss explained that the Weimar Republic “presented the sorry spectacle of justice without a sword or of justice unable to use the sword.” Without endorsing dictatorship, Strauss followed Machiavelli in regarding reliable execution as the central responsibility of the state.

But this is an argument about domestic politics. In a seminar on Thucydides taught just a few years before the Spinoza preface was published, Howse finds Strauss insisting that “foreign relations cannot be the domain of vindictive justice.” Force must be used in international affairs, but only to the extent necessary to secure a minimum of justice.

Howse observes that Strauss’s view that justice must be tempered by moderation was reflected in his assessment of the Nuremberg trials. While the Versailles treaty after World War I was a vindictive application of collective responsibility, the Nuremberg trials attempted to distinguish individual criminals from collaborators.

There is an implicit contrast here to Schmitt, who rejected the Nuremberg tribunal as an exercise in hypocritical moralism. As Strauss had shown, Schmitt himself was a moralist when that suited his purposes. For Strauss, on the other hand, the fight against the Nazis was unquestionably a just war. He had argued in 1941 that England was not merely permitted to fight Germany but morally right to do so.

At the time, Strauss had expressed this position using the rhetoric of empire. He was far from the only one to describe just causes in now politically incorrect terms. In 1942, the newly promoted Churchill explained that “I have not become the King’s First Minister in order to preside over the liquidation of the British Empire.” In Churchill’s view as much as Strauss’s, the war against Germany was a war for empire.

After the war, old-fashioned empire became untenable. In addition to economic and military obstacles, the principle of self-determination that encouraged resistance to the Nazis made it impossible for former imperial powers to justify their domination of other peoples.

Surprisingly, Howse finds Strauss relatively accepting of this change. In the Thucydides seminar, Strauss juxtaposes “empire” and “freedom from foreign domination” as the two greatest goals of politics. The suggestion is that the strong cannot be blamed for seeking empire. On the other hand, the weak cannot be blamed for resisting it. After about 1945, however, the moral and technological balance of power shifted in such a way as to give the resisters the upper hand. The imperial powers could no longer claim the right of the stronger because they were no longer stronger.

Howse argues that the Kant and Grotius seminars show Strauss searching for an acceptable order for the world of nation-states that replaced the old empires. He finds that Strauss, although resolutely anticommunist, expressed enthusiasm for a federation of republican states similar to the one suggested by Kant. Nevertheless Strauss, like Kant, rejected a world state as unavoidably tyrannical. The best alternative would be a federal arrangement involving shared sovereignty combined with respect for national particularity—perhaps in ways comparable to the European Union.

Howse thus concludes that the mature Strauss was a liberal internationalist. Although not naïve about the necessity of war, he believed that war should be waged for the sake of a more just order. According to Howse, this Strauss is far from the belligerent nationalist who is supposed to have inspired the neoconservatives. Like Socrates, he is a man of peace.

Yet there is something missing from Howse’s portrait of Strauss as a liberal internationalist. That is a detailed consideration of the role of Zionism in Strauss’s thought about violence.

In his intellectual autobiography, Strauss describes his earliest political decision as a commitment to “simple, straightforward political Zionism” at the age of 17. Throughout the 1920s, he was active in the Revisionist movement led by Vladimir Jabotinsky. In the 1930s, Strauss endorsed “the principles of the Right, fascist, authoritarian, imperialist principles…” as the only basis for defense of Germany’s Jews. In the 1940s, he offered a moral defense of the British Empire partly because of the mercy it offered to the vanquished—including the Jews settled in Palestine. In the 1950s and 1960s, Strauss lectured and wrote extensively on Jewish themes, rarely failing to voice his admiration and gratitude for the foundation of the State of Israel.

These facts are barely mentioned in Leo Strauss: Man of Peace. In fact, the only explicit reference to the State of Israel that I have found comes in the conclusion, when Howse mentions Strauss’s 1957 letter to National Review defending Israel from accusations of racism. As part of his polemic against the neoconservative appropriation of Strauss, Howse assures readers that, “This was an act of loyalty to the Jewish people, not to the political right.”

Howse may be correct about Strauss’s intentions. But Strauss’s personal relationship to the American conservative movement is not the most important issue. Strauss’s lifelong commitment to Zionism tells us something important about his views on political violence. In this decisive case, he endorsed the politics of national self-assertion that Howse contends he had rejected by the end of his career.

Strauss makes this point obliquely but unmistakably in the “Note on Maimonides’ Letter on Astrology” that he composed in 1968. In the letter, Maimonides attributes the destruction of the Second Temple to the fact that the Jews relied on magic to provide their defense, rather than practicing the art of war and conquest like the Romans who defeated them.

Strauss describes the remark as “a beautiful commentary on the grand conclusion of the Mishneh Torah: the restoration of Jewish freedom in the Messianic age is not to be understood as a miracle.” The Mishneh Torah chapters that Strauss cites clarify this statement, explaining that the only difference between the current age and the Messianic era will be “emancipation from our subjugation to the gentile kingdoms.”

For the mature Strauss, in other words, the redemption of the Jewish people was not a mystical event. It was a political condition, defined by the reestablishment of Jews’ sovereignty in their own land. That achievement depended on much the same unsettling principles that Strauss endorsed in the infamous letter to Löwith. It may not be a coincidence that these lines were written almost exactly one year after Israel won control of the Temple Mount.

Strauss may have hoped the Jewish State could eventually become a respected member of a peaceful international federation. Nevertheless, this passage suggests that t’shuvah may not have been the central theme of Strauss’s career. Rather than enacting a return from extremism to moderation, Strauss’s thought about political violence was remarkably consistent concerning the nation that he cared most about. When it came to the Jewish people, Strauss felt that he had nothing to repent.

Samuel Goldman is assistant professor of political science at The George Washington University.

Marion Barry: D.C.’s Rascal King

The infamous former D.C. mayor Marion Barry died yesterday. Most of the obituaries and reminiscences that have appeared so far are properly respectful. Even so, they can’t avoid mentioning Barry’s reputation for corruption and troubles with the law, especially his 1990 arrest on drug charges. Despite these problems, which might have doomed a lesser politician, Barry remained beloved in many parts of the city. How could citizens of the District continue to support him?

Part of the answer, as Adam Serwer points out, is that Barry was a very good politician. At the beginning of his career, he cultivated an image as an advocate for the District’s black majority, while reassuring the white elite that he was ready to do business. It’s easy to forget now, but Barry won election in 1978 largely due to votes from the Northwest quadrant. He lost much of that support when he ran for reelection. But by then he could rely on other allies.

But there was more to Barry’s success than tactical brilliance. He practiced urban politics like an old-fashioned ward boss, dispensing jobs and contracts as personal favors to his supporters. In some ways, the benefits were real: Washington’s black middle class depended heavily on municipal jobs. But they also promoted cronyism and incompetence, and helped bankrupt the city.

It’s tempting to conclude from these results that Barry was a very, very, very bad mayor. Although true in some ways, this assessment misses just how historically typical Barry was. If Barry had been born white, in a different place, and in 1876 rather than 1936, he would likely be remembered as a lovable rogue who used government to help out people to whom other roads out of poverty were closed.

There is no better exemplar of this type than James Michael Curley, the four-time mayor of Boston, Congressman, Governor, and two-time jailbird immortalized by Spencer Tracy in “The Last Hurrah” (based on Edwin O’Connor’s novel). As Jack Beatty shows in his riveting biography, from the beginning of his political career before World War I to its end in the 1950s, Curley used government as an instrument in his lifelong mission to improve the lives of Boston’s Irish working class.

Setting a pattern that Barry would follow, Curley started out as a reformer. As challenges to his autocratic practices emerged, however, Curley relied increasingly on appeals to ethnic resentment and a bullying, macho style. In his 1942 Congressional race against the Brahmin Thomas Eliot, Curley asserted that, “There is more Americanism in one half of Jim Curley’s ass than in that pink body of Tom Eliot.”

Curley’s divisive rhetoric was coarse but not very harmful in itself. Much worse were his policies, which relied on ever-increasing property taxes to pay for public-sector jobs. Boston boomed along with the rest of the country in the 1920s. By the time Curley left office, however, it had entered a decline from which it emerged only under the late Thomas Menino, who broke Curley’s record as Boston’s longest-serving mayor.

Like Barry, then, Curley was by objective standards a lousy mayor (particularly in his later terms). Nevertheless, he remained a hero to his people, who turned out in the thousands at his funeral. Were they sentimental about Curley’s big achievements, such as the construction of the municipal hospital? Grateful for the cash envelopes and no-show jobs Curley distributed around election time? Or unaware how Curley had damaged their city?

The answer probably involved all of these elements. Even in combination, however, they’re inadequate to explain Curley’s role in Boston. Rather than a conventional politician, Curley was a kind of tribal chieftain. More than any particular benefits, he offered his followers the sense that there was someone in power who was like them, who cared about them, and who would do whatever he could to help them.

Before World War II, there were plenty of white chiefs in the Curley mode, among rural Southerners as well as urban immigrants. By the ’70s, however, urban politics had been so thoroughly racialized that personal leadership and endemic corruption were seen as black pathologies rather than the historical norm. In the final analysis, Barry was a crook who hurt his city. But his greatest crime was being born black and too late to be crowned a “rascal king”.

Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.

Republicans Ride an Empty Wave

Last night was a big win for Republicans. And I have nothing much to add to the already enormous literature documenting just what a romp it was (see Michael Brendan Dougherty’s after-action report in The Week). Most of the outcomes were consistent with predictions. But it was surprising how far Republicans outperformed the polls in accumulating large margins of victory.

Even so, Republicans should resist the temptation to conclude that the results give them an enduring advantage at the national level. This was a “wave election” only in the sense that American politics have been stormy for the last decade. I have to run off to teach, so I’ll make my case by means of a listicle. Here are six reasons for caution:

  1. The president’s party usually loses seats in midterm elections.
  2. Obama’s approval, while low, is higher than Bush’s at the same point in his presidency.
  3. We’ve seen this movie before. Remember the “permanent majority” of 2004? How about the “thumping” of 2006? Then there was the “new majority” of 2008. Of course, that was followed by the “Tea Party wave” of 2010. Which didn’t stop Obama from becoming the first president since Eisenhower to win a majority of the vote for a second time in 2012.
  4. The midterm electorate skews older, whiter, and richer than in presidential years. These are Republican demographics, so Republicans tend to do better. The 2016 electorate, on the other hand, will probably look more like 2008 than 2010. Republicans probably won’t ever win many votes from blacks or single women, but they need to continue doing better among the young and Hispanics (as several candidates did last night).
  5. The standard explanation of the results is that the election was a referendum on Obama’s policies. That’s not true for the simple reason that most voters have only the foggiest notion of what Obama’s policies are. (Polls on these matters can be misleading because they often ask respondents to choose from a predetermined set of responses to a leading question, which encourages unrepresentative, off-the-cuff answers.) Rather than voting on the success or failure of specific programs, many voters rely on a vague sense that things are going well or badly for the country.
  6. The biggest factor in voters’ assessment of the direction of the country is the condition of the economy. Right now it’s pretty lousy, despite relatively favorable growth and employment trends. But if these trends continue over the next two years—and they’re far less dependent on Washington than either party likes to admit—they may start to pay off for ordinary people. Should that occur, many will discover that they liked Democrats more than they thought.

The bottom line is that the results show Republicans making good use of favorable conditions. But there’s no reason yet to think those conditions are the basis of a durable coalition.

Was Moses the First American?

Charlton Heston as Moses in "The Ten Commandments." Paramount Pictures

Indulging in what seems to be their regional pastime, Texans are fighting about schoolbooks again. While previous debates centered on science instruction, this time it’s proposed history texts under scrutiny (I blogged about the curriculum standards they’re supposed to meet here). At a hearing last week, books under review by the state Board of Education were blasted for saying too many nice things about Hillary Clinton, not enough nice things about Reagan, and too much about Moses altogether. In widely reported testimony, the SMU historian Kathleen Wellman argued that the books treat Moses as an honorary founding father—so much so that “I believe students will believe Moses was the first American.”

I haven’t seen the proposed texts, so Wellman’s criticisms may be justified. But non-academics should be aware that one of the most exciting movements in intellectual history in the last decade or so has discovered the extraordinary prevalence of “Hebraic” rhetoric and symbolism during the revolution and in the early republic.

Needless to say, Moses was not the first American. As Eran Shalev shows in his fine survey American Zion, however, early Americans were remarkably likely to think that their political struggles followed a pattern set by the Biblical Israelites. Washington, for example, was routinely identified as a modern Joshua. And the Union was often compared to federal arrangements among the Hebrew tribes.

The discourse of Hebrew republicanism is an important supplement to more familiar stories about the influence of Locke or the civic republicanism inherited from the Renaissance. At the same time, it’s hard to teach to beginners.

It’s important to avoid reductive arguments that America has a theological foundation. As far as we can tell, Hebraic models were more common in popular discourse than in elite deliberation. So while they played an important role in the public justification of political decisions and institutions, they had little direct influence on their design. And the republican interpretations of scripture on which the patriots relied in the 1770s and ’80s were not only selective, but also fairly novel. As recently as the 1750s, American clergy and laymen had usually argued that the Davidic monarchy was God’s paradigm of good government.

On the other hand, the politics of the revolution and early republic really were infused with Biblical rhetoric and examples. Neglect of this fact promotes the more fashionable dogma that the American revolution and constitutions were products of a secular society.

Textbook writers are thus in a bit of a pickle. Because they lack the space to address complicated topics in detail, they can hardly avoid trafficking in simplifications. Perhaps the Texas books err on the side of evangelical conservatives by making the American founding more religious than it really was. But plenty of history writing makes the opposite error, stressing a few prominent skeptics at the expense of a richly Biblical political culture.

Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.

Political Science, History, and the Right

Although I’m a card-carrying political scientist, I’ve never been entirely comfortable with the state of the discipline. Even so, I like to see my profession in the news. So I’ve been amused to find that political science has become the bone of contention in a spat between Ezra Klein and Thomas Frank. Klein argues that social science research has improved political journalism; Frank contends that it’s become an alibi for the status quo. Jonathan Chait and Freddie deBoer weigh in on behalf of Klein and Frank, respectively. Klein’s followup is here.

The dispute is nominally about the relation between political science and the left. But it actually revolves around the right. Specifically, it’s about explaining Republicans’ electoral success since 1964, which both sides treat as a mystery on the order of Fermat’s last theorem. Basically, Frank attributes conservatives’ success to a decades-long campaign of “organizing and proselytizing and signing people up for yet another grievance-hyping mass movement.” Klein, on the other hand, argues that it’s mostly about the partisan realignment of the South, which has always been conservative, but used to vote for Democrats. Although Klein focuses on the House of Representatives, Chait makes a similar case about presidential elections.

The academic debate has strategic implications. If the parties’ relative strength is determined mostly by structural considerations, there’s not much Democrats can do to take control of Congress. On the other hand, Republicans will have a hard time winning the presidency with a coalition based in the inland South and Mountain West. Essentially, the parties will have reversed the positions they held for most of the period between World War II and 1994, when Republicans owned the White House and Democrats dominated Capitol Hill.

Considered in this broader context, the progressive heyday of the mid-’60s was profoundly aberrant. A temporary constellation of factors—including America’s overwhelming economic advantage following the war, political participation by the youngest baby boomers, and the halo conferred on Johnson by his predecessor’s assassination—combined in a political moment that was without parallel before or since. Since the end of Reconstruction, the American norm has been regionally and institutionally divided, relatively conservative politics. The Republican ascendance since 1964 is in many ways a return to that norm, rather than a puzzling deviation from it.

Frank finds that conclusion too upsetting even to contemplate. Klein accepts it with weary resignation. But neither has any right to be as surprised as he seems to be. Rather than more training in statistics, progressives might benefit from a refresher course in history—which used to have a much more prominent place in the study of politics than it does today.

What Would Jeremiah Do?

Sistine Chapel, prophet Jeremiah / Wikimedia Commons

In his 1981 classic After Virtue, Notre Dame philosophy professor Alasdair MacIntyre offers a provocative diagnosis of the modern condition. Rejecting the assumption that secular modernity is the culmination of centuries of improvement, MacIntyre contends that we live amidst the ruins of Western civilization.

There can be no restoration of the past. Even so, MacIntyre urges readers to take their bearings from a previous experience of loss by pursuing “local forms of community within which civility and the intellectual and moral life can be sustained through the new dark ages which are already upon us.” As he puts it in a famous sentence, “We are waiting not for a Godot, but for another—doubtless very different—St. Benedict.”

Benedict is considered the founder of the monasteries from which Christian Europe would eventually emerge. MacIntyre does not claim to be a successor to Benedict. But his suggestion that civilization can be preserved only by dropping out of modern life has become influential among conservatives with traditional religious commitments.

Rod Dreher has summarized the “Benedict Option” as “communal withdrawal from the mainstream, for the sake of sheltering one’s faith and family from corrosive modernity and cultivating a more traditional way of life.” And small but vibrant communities around the country are already putting the Benedict Option into practice. Without being rigorously separatist, these communities do aim to be separate. Some merely avoid morally subversive cultural influences, while others seek physical distance from mainstream society in rural isolation.

But a neo-Benedictine way of life involves risks. Communal withdrawal can construct a barrier against the worst facets of modern life—the intertwined commodification of personal relationships, loss of meaningful work to bureaucratic management, and pornographic popular culture—yet it can also lead to isolation from the stimulating opposition that all traditions need to avoid stagnation.

American religious history offers a clear example of this danger. Between World War I and the 1970s, conservative Protestants pursued strategies of withdrawal that impoverished their intellectual and cultural lives in ways they have only recently begun to remedy. In MacIntyre’s telling, the Benedict Option is a detour that leads back into the center of history as civilization eventually re-emerges from its refuges. But it can just as easily become a dead end.

The Benedict Option is not the only means of spiritual and cultural survival, however. As a Catholic, MacIntyre searches for models in the history of Western Christendom. The Hebrew Bible and Jewish history suggest a different strategy, according to which exiles plant roots within and work for the improvement of the society in which they live, even if they never fully join it.

This strategy lacks the historical drama attached to the Benedict Option. It promises no triumphant restoration of virtue, in which values preserved like treasures can be restored to their original public role. But the Jews know a lot about balancing alienation from the mainstream with participation in the broader society. Perhaps they can offer inspiration not only to Christians in the ruins of Christendom but also to a secular society that draws strength from the participation of religiously committed people and communities. Call it the Jeremiah Option.


On March 16, 597 BC, the Babylonian king Nebuchadnezzar sacked Jerusalem after a long siege. In addition to the riches of the city and Temple, he claimed as spoils of war thousands of Judeans, including the king and court. Many of the captive Judeans were settled on tributaries of the Euphrates, which inspired the words of Psalm 137: “By the rivers of Babylon, there we sat down, yea, we wept, when we remembered Zion.”

The Judeans had reason to weep. Beyond the shame of defeat, few of the captives had any direct experience of foreign cultures. The language and customs of their captors were alien and in some ways—particularly the practice of idolatry—abhorrent. More importantly, the captives could no longer practice their own religion. Banned by ritual law from making sacrifices outside the Promised Land, the exiles were unable to engage in public worship.

Under these circumstances, two ways of dealing with Babylonian society presented themselves. First, the exiles could accommodate themselves to the norms of the victors. They could learn Aramaic and adopt local manners. But this would mean the loss of their national and religious identities. In becoming honorary Babylonians, they would forfeit their status as God’s chosen people. On the other hand, the captives could resist. Taken from their homes by force, they might use force to get back again. Proposals for resistance behind enemy lines were seriously considered. In fact, several Judean leaders seem to have been executed for subversive plotting.

But any military campaign was doomed to failure. The empire was too strong to overthrow or escape. So how should its prisoners conduct themselves? How should they live in a society that they could not fully join without giving up their fundamental commitments?

The question was important enough to attract the attention of God himself. Speaking through the prophet Jeremiah, who remained back in Jerusalem, the Lord commanded the captives to steer a course between extremes of assimilation and violent resistance. In his famous letter to the leaders of the Judean community, Jeremiah reports God’s orders as follows:

Thus saith the Lord of hosts, the God of Israel, unto all that are carried away captives, whom I have caused to be carried away from Jerusalem unto Babylon; build ye houses, and dwell in them; and plant gardens, and eat the fruit of them; take ye wives, and beget sons and daughters; and take wives for your sons, and give your daughters to husbands, that they may bear sons and daughters; that ye may be increased there, and not diminished. And seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.
(Jeremiah 29:4-7)

What is God saying? In the first place, he insists that the captives unpack their bags and get comfortable. True, God goes on to promise to redeem the captives in 70 years. But this can be interpreted to mean that none of the exiles then living would ever see their homes again. After all, the span that the Bible allots to a human life is threescore years and ten.

So the captives are to await redemption in God’s time rather than seeking to achieve it by human means. But this does not mean that they are to keep their distance from Babylonian society until the promised day arrives. On the contrary, God commands them to “seek the peace of the city whither I have caused you to be carried away captives, and pray unto the Lord for it: for in the peace thereof shall ye have peace.”

“Peace” could be read as the absence of conflict. But this doesn’t fully express God’s directive. In the Hebrew Bible and Jewish tradition more broadly, peace refers to flourishing and right order. What God is saying is that the exiles cannot prosper unless their neighbors do as well. For the time they are together, they must enjoy the blessings of peace in common.

By what means are these blessings to be secured? The reference to prayer suggests that God wants the Judeans to promote peace by spiritual means. But that is not all. God also enjoins the Judeans to promote the common good by means of ordinary life. His very first instruction is to build houses. In other words, the Judeans are to conduct themselves like long-term residents—if also resident aliens.

Reinforcing the point that captivity is for the long haul, God reminds the captives that the dwellings they are to build are not for themselves alone. Instead, they must shelter generations of children and grandchildren, multiplying the community. God’s plan is for expansion and growth, not marginal existence.

The emphasis on securing peace through ordinary life does not absolve the exiles of their responsibility to remain holy. But theirs is to be a holiness based on upright life rather than the independence of a homogeneous community. Reassuring those who feared that they could not continue their relationship with God in exile, God explains, “ye shall seek me, and find me, when ye shall search for me with all your heart.” The Babylonian captivity is thus the origin of Judaism as a law-based religion that can be practiced anywhere, rather than a sacrificial cult focused on the sacred Temple.

The piety that God encourages, therefore, can be practiced by ordinary people living ordinary lives under difficult circumstances. God enjoins the captives not only to live in Babylon, but also to live in partnership with Babylon. Without assimilating, they are to lay down roots, multiply, and contribute to the good of the greater society.


The Babylonian captives addressed by Jeremiah bear comparison to the traditionalist dissenters in MacIntyre’s stylized history. Both groups are minorities. Both are prisoners of empires that are unwilling or unable to support their moral and religious commitments. Yet both know that violence is an unacceptable means to achieve their communal goals. It would not work—and more importantly, it offends God.

Where Jeremiah counsels engagement without assimilation, Benedict represents the possibility of withdrawal. The former is to be achieved by the pursuit of ordinary life: the establishment of homes, the foundation of families, all amid the wider culture. The latter is to be achieved by the establishment of special communities governed by a heightened standard of holiness.

Although it can be interpreted as a prophecy of doom, the Jeremiah Option is fundamentally optimistic. It suggests that the captives can and should lead fulfilling lives even in exile. The Benedict Option is more pessimistic. It suggests that mainstream society is basically intolerable, and that those who yearn for decent lives should have as little to do with it as possible. MacIntyre is careful to point out that the new St. Benedict would have to be very different from the original and might not demand rigorous separation. Even so, his outlook remains bleak.

MacIntyre’s pessimism conceals what can almost be called an element of imperialism—at least when considered in historical perspective. Embedded in his hope for a new monasticism is the dream of a restoration of tradition. The monks of the dark ages had no way of knowing that they would lay the foundation of a new Europe. But MacIntyre is well aware of the role that they played in the construction of a fresh European civilization—and subtly encourages readers to hope for a repetition.

Jeremiah’s message to the captives is not devoid of grandiose hopes: the prophet assures them that they or their progeny will ultimately be redeemed. But this does not require the spiritual or cultural conversion of the Babylonians.

The comparison between the options represented by Jeremiah and by Benedict has some interest as an exercise in theologico-political theorizing. But it is much more important as a way of getting at a central problem for members of traditional religious and moral communities today. How should they conduct themselves in a society that seems increasingly hostile to their values and practices? Can they in good conscience seek the peace of a corrupt and corrupting society?

In the 2013 Erasmus Lecture sponsored by First Things, Jonathan Sacks, the former chief rabbi of the United Kingdom’s United Hebrew Congregations of the Commonwealth, took up this question with specific reference to Jeremiah. Rejecting Jeremiah’s reputation as a prophet of doom, Sacks argued that Jeremiah’s letter to the exiles fundamentally expresses a message of hope. Despite their uncomfortable situation, the captives are not to resist or separate themselves from Babylonian society. Rather, they are to pursue the fulfillments of ordinary life, practice holiness, and work and pray for the prosperity of the society in which God placed them.

As Sacks pointed out, this pattern has governed much of Jewish history in the diaspora. Between the destruction of the second Temple in AD 70 and the foundation of the State of Israel in 1948, nearly all Jews found themselves in a condition comparable to that of the Babylonian captives. A small and often despised minority, they nevertheless took to heart God’s insistence that their peace depend on the peace of their captors.

This is not a solution to all problems of communal survival, however. The appeal of assimilation has been considerable. Descendants of the captives often took Babylonian names and adopted Aramaic. In modern times, many Jews have not only modified religious practice but rejected Jewish identity altogether. Recent surveys show that Jewishness in America is seriously endangered by indifference and intermarriage. So advocates of more rigorous separation have a point.

Nevertheless, there may be lessons in Jeremiah and Jewish history for Christians and others concerned about their place in modern society. These lessons can be sketched in three ideas.

First, internal exiles should resist the temptation to categorically reject the mainstream. That does not mean avoiding criticism. But it must be criticism in the spirit of common peace rather than condemnation. Jeremiah is famous as the etymological root of the jeremiad. Yet his most scathing criticisms are directed against his own people, who have failed in their special calling of righteousness, not against the “mainstream” culture.

Second, Jeremiah offers a lesson about the organization of space. Even though they were settled in self-governing towns outside Babylon itself, God encouraged the captives to conduct themselves as residents of that city, which implies physical integration. There need be no flight to the hinterlands.

Finally, Jewish tradition provides a counterpoint to the dream of restoring sacred authority. At least in the diaspora, Jews have demanded the right to live as Jews—but not the imposition of Jewish law or practices on others. MacIntyre evokes historical memories of Christendom that are deeply provocative to many good people, including Jews. The Jeremiah Option, on the other hand, represents a commitment to pluralism: the only serious possibility in a secular age like ours.

I offer these arguments against communal withdrawal from a somewhat idiosyncratic motive. An heir to the Jewish diaspora, I am a relatively comfortable inhabitant of secular modernity. By what right do I counsel people whose first loyalty is to God?

The answer is: self-interest. While not a member of a traditional religious community myself, I am convinced that the rest of society is immeasurably enriched by the presence of such communities in political, cultural, and intellectual life. So while I do fear that practices of separation will be bad for those communities themselves—as the fundamentalist experience of the last century indicates—I am certain that they will be bad for the rest of us. If demanding, traditional forms of religion disappear from mainstream culture, that culture may actually become the caricature of a destitute age on which MacIntyre builds his analysis.

At the same time, it would be cynical to offer a merely instrumental argument for the continued engagement of religious communities with secular society. Although not very observant myself, I found Jeremiah’s letter to the exiles helpful in thinking through this problem. God reminds the captives that they will find peace only in the peace of the Babylonians, that they are to promote the good of the rest of society as well as their own. The Jeremiah Option gives me reason to hope that Jews, Christians, and the rest of us can find peace together.

Samuel Goldman’s work has appeared in The New Criterion, The Wall Street Journal, and Maximumrocknroll.
