In a post for The Immanent Frame responding to the Hobby Lobby and Wheaton College decisions, Indiana University professor Winnifred Fallers Sullivan challenges the idea of religious freedom on which those decisions are based. Although a liberal, Sullivan does not deny that private firms and religious colleges are engaged in a kind of religious practice. Rather, she argues that because religion means different things to different people, it’s impossible to systematically distinguish legitimate “religious freedom” from mere rejection of the law:
The need to delimit what counts as protected religion is a need that is, of course, inherent in any legal regime that purports to protect all sincere religious persons, while insisting on the legal system’s right to deny that protection to those it deems uncivilized, or insufficiently liberal, whether they be polygamist Mormons, Native American peyote users, or conservative Christians with a gendered theology and politics. Such distinctions cannot be made on any principled basis…Both the majority and dissenting Justices in these two cases affirm—over and over again—a commitment to religious liberty and to the accommodation of sincere religious objections. Where they disagree is on what counts as an exercise of religion. Their common refusal, together with that of their predecessors, to acknowledge the impossibility of fairly delimiting what counts as religion has produced a thicket of circumlocutions and fictions that cannot, when all is said and done, obscure the absence of any compelling logic to support the laws that purport to protect religious freedom today.
The whole post rewards careful reading. One issue that Sullivan leaves unexamined is what counts as a “principled basis”. I take it that she means a non-historical, more or less universal definition, which would make it possible to distinguish religion from non-religion in a logically consistent way. And she’s right that no such definition exists. To mention only one example, the state cult of the Romans had little in common with what we understand by “religion” today.
But why should American law be based on universal principles that can be applied in a quasi-Kantian manner? A considerable historical literature suggests that the religion clauses of the Constitution emerged from the historical experience of Anglo-Protestantism. They were developed and applied in a society that was assumed to be overwhelmingly Christian and organized, for the most part, into recognizable denominations. Of course, there were always communities whose religion was inconsistent with these assumptions. But it was assumed that they would either be demographically marginal, or identifiable under Christian theological categories.
The problem, of course, is that this world no longer exists. And not only because of secularization or immigration by Catholics, Jews, and more recently Muslims. As Sullivan observes, the American brand of evangelicalism encourages individuals to decide for themselves what religion means to an historically unprecedented degree. So we face the challenge of applying historically and theologically specific concepts of religion, liberty, and so on in a way that obscures their limits and contingency. Thus the knots into which both sides of the Court have had to twist their arguments not only in Hobby Lobby, but also in cases such as Kiryas Joel Village School District v. Grumet.
There’s no obvious solution to this problem. We can neither revive Anglo-Protestant categories in a pluralistic society, nor can we formulate a definition of religion that will satisfy everyone. My own preference is for giving as much deference as possible, consistent with public order, to congregations, non-profit institutions, and yes, private firms, to act in ways that reflect their beliefs about what they owe to God and the world. But that means giving up the dream of cultural hegemony that today inspires the secular Left at least as strongly as it once did the religious Right.
(h/t Samuel Moyn, via Facebook).
Disinvitation season has come and gone. In this year’s enactment of a now familiar exercise, Haverford, Rutgers, and Brandeis, among other schools, were forced by opposition from students and faculty to alter plans for commencement speakers.
Parallel denunciations of creeping authoritarianism are part of the ritual. But the truth is that critics of the university on both the left and the right get what they really want out of these tiny fiascos: an opportunity to make vehement public statements when little of significance is at stake. Because commencement addresses are, with a few notable exceptions, emissions of immense quantities of hot air. Here are the deep thoughts with which Condoleezza Rice favored the graduates of Southern Methodist University in 2012.
Rather than lamenting the arrogance of administrators or the immaturity of students, it’s worth considering how to reform the institution of commencement speeches altogether. After all, there’s no requirement that universities import boldface names. Columbia, for example, allows only its president to speak. So here are some suggestions for preventing future commencements from becoming occasions for embarrassing disinvitations.
First, give students a role in choosing speakers. This would help gauge potential controversy early in the selection process, as well as build a constituency for the choice. One reason it’s so easy for a relatively small group of critics to push out a speaker is that the rest of the student body has no stake in keeping him. Allowing them to exercise some influence over the initial decision could change that.
But maybe students would use their influence to pick popular culture figures rather than serious types who convince parents and taxpayers that their money is well-spent. That risk could be avoided if universities stopped paying large honoraria. If potential speakers have something important to say, they’ll be willing to do so in exchange for reasonable expenses. Don’t subsidize celebrities—or high-priced “thought leaders” flogging their books.
Next, separate the conferral of honorary degrees from speech-giving. The former implies collective endorsement of the speaker’s career. The latter does not. One of Rod Dreher’s readers claims that Haverford opponents of Berkeley chancellor Robert Birgeneau objected to his honorary degree more than they did to his speaking invitation. Whether or not that’s true in this case, there’s a morally and politically relevant difference between hearing someone out and allowing an honor to be given in one’s own name.
Finally, revive the old practice of allowing a student, elected by his or her classmates, to speak at commencement. This would allow students to express criticism or disapproval of other speakers in precisely the kind of dialogue that both lefties and conservatives claim to endorse.
Any or all of these suggestions would help prevent silly controversies without giving in to the heckler’s veto. But maybe the best solution would be to cancel the speeches altogether. Does anyone really want to be lectured in the inevitable commencement weather of blistering sun or pouring rain?
Few things annoy academics more than being told that their work is irrelevant. So there’s nothing surprising about the backlash against Nicholas Kristof’s column in Sunday’s New York Times. Kristof contended that America’s professors, especially political scientists, have “marginalized themselves” by focusing on technical debates at the expense of real problems, relying too heavily on quantitative methods, and preferring theoretical jargon to clear prose. An outraged chorus of responses (round-up here) rejected Kristof’s generalization as a reflection of the very anti-intellectualism that he intended to criticize.
Some of the responses to Kristof reflect the expectation of public recognition for every contribution to the debate that makes so much academic writing a chore to read. Even so, Kristof ignores fairly successful efforts to make scholarship more accessible—even if you don’t count every blog by every holder of a Ph.D. The Tufts professor and Foreign Policy contributor Daniel Drezner has more than 25,000 Twitter followers—partly due to the success of a book that uses a zombie apocalypse scenario to compare theories of international relations. The Washington Post recently picked up The Monkey Cage (full disclosure: several of my colleagues at George Washington University are contributors) and The Volokh Conspiracy, which are populated by political scientists and legal academics, respectively. Not to mention Kristof’s own employer. Just the day before Kristof’s piece ran, the Times hired Lynn Vavreck to contribute to a new site concentrating on social science and public policy.
At least when it comes to political science, then, it’s just not plausible that “there are fewer public intellectuals on American university campuses today than a generation ago.” On the contrary, there are probably more academics who try to communicate with non-specialist audiences than there were in 1994. One difference is that the public intellectuals of past decades were more likely to engage directly with normative and historical Big Questions. That change reflects the declining influence of political theory in comparison with causal analysis, as well as the weakening of the Cold War imperative of justifying liberal democracy.
Of course, writing for non-specialist readers isn’t encouraged in graduate programs, and doesn’t often align with the requirements for hiring and promotion. But there are anecdotal reasons to think that expectations are slowly changing, as departments struggle to prove their ‘relevance’ in a period of financial retrenchment. If Kristof’s piece promotes these changes, it will have served a valuable function whether or not its argument is compelling.
In fairness to Kristof, however, none of these observations refutes his basic claim. That’s because he isn’t actually talking about “public intellectuals”. Rather, he means old-fashioned mandarins, who move easily between Harvard Yard and Washington, usually without encountering many members of the public along the way. In a followup on Facebook, Kristof observes that “Mac Bundy was appointed professor of government at Harvard and then dean of the faculty with only a B.A.—impossible to imagine now.” After nearly a decade as dean, Bundy joined the Kennedy administration as National Security Advisor, where his vast intellectual firepower led him to promote and defend the Vietnam War.
What Kristof really offers, then, is less an argument for public engagement by scholars than a plea for another crop of Wise Men who lend conventional wisdom the authority of the academy. Not coincidentally, he presents as an exception to the trend toward academic self-marginalization the former Princeton professor and State Department official Anne-Marie Slaughter, whose resume is as perfect a reflection of the meritocratic elite that Bundy helped create as Bundy’s own pedigree was of the old Establishment. More professors should learn to participate in public debate—including political theorists frustrated by the increasingly technical orientation of political science. But if ‘relevance’ means becoming mouthpieces of our new ruling class, then Kristof can keep it.
Reports that A- is the median grade in Harvard College have reopened the debate about grade inflation. Many of the arguments offered in response to the news are familiar. The venerable grade hawk Harvey “C-” Mansfield, who brought the figures to public attention, describes the situation as an “indefensible” relaxation of standards.
More provocative are defenses of grade inflation as the natural result of increased competition for admission to selective colleges and universities. A new breed of grade doves points out that standards have actually been tightened in recent years. But the change has been made to admissions standards rather than expectations for achievement in class.
According to the editorial board of the Harvard Crimson, “high grades could be an indicator of the rising quality of undergraduate work in the last few decades, due in part to the rising quality of the undergraduates themselves and a greater access to the tools and resources of academic work as a result of technological advances, rather than unwarranted grade inflation.” Matt Yglesias, ’03, agrees, arguing that “it is entirely plausible that the median Harvard student today is as smart as an A-minus Harvard student from a generation ago. After all, the C-minus student of a generation ago would have very little chance of being admitted today.”
There’s a certain amount of self-congratulation here. It’s not surprising that Harvard students, previous and current, think they’re smarter than their predecessors—or anyone else. But they also make an important point. The students who earned the proverbial gentleman’s Cs are rarely found at Harvard or its peers. Dimwitted aristocrats are no longer admitted. And even the brighter scions of prominent families can’t take their future success for granted. Even with plenty of money and strong connections, they still need good grades to win places in graduate school, prestigious internships, and so on.
The result is a situation in which the majority of students really are very smart and very ambitious. Coursework is not always their first priority. But they are usually willing to do what’s necessary to meet their professors’ expectations. The decline of core curricula has also made it easier for students to pick courses that play to their strengths while avoiding subjects that are tough for them. It’s less common to find Chemistry students struggling through Shakespeare than it was in the old days.
According to the Harvard College Handbook for Students, an A- reflects “full mastery of the subject” without “extraordinary distinction”. In several classes I taught as an instructor and teaching fellow at Harvard and Princeton, particularly electives, I found that around half the students produced work on this level. As a result, I gave a lot of A-range grades.
Perhaps my understanding of “mastery” reflects historically lower demands. For example, I don’t expect students writing about Aristotle to understand Greek. Yet it’s not my impression that standards in my own field of political theory have changed a lot in the last fifty years or so. In the absence of specific evidence of lowered standards, then, there’s reason to think that grade inflation at first-tier universities has some objective basis.
But that doesn’t mean grade inflation isn’t a problem. It is: just not quite the way some critics think. At least at Harvard and similar institutions, grades are a reasonably accurate reflection of what students know or can do. But they are a poor reflection of how they compare to other students in the same course. In particular, grade inflation makes it difficult to distinguish truly excellent students, who are by definition few, from the potentially much larger number who are merely very good.
Here’s my proposal for resolving that problem. In place of the traditional system, students should receive two grades. One would reflect their mastery of specific content or skills. The other would compare their performance to the rest of the class.
Bill de Blasio is mayor-elect of New York. According to many of de Blasio’s critics as well as his supporters, the unsurprising outcome of Tuesday’s election reflects a decisive turn in the city’s politics. The Nation claims that “Bill de Blasio’s exhilarating landslide victory over Joe Lhota in New York’s mayoral election offers a once-in-a-generation chance for progressives to take the reins of power in America’s largest—and most iconic—city.” In National Review, Kevin D. Williamson evokes John Carpenter’s b-movie classic, Escape from New York.
I say: not so fast. Neither the turnout and polling data nor de Blasio’s career so far supports hopes (or fears) that he’ll try to transform New York into Moscow on the Hudson. Mayor de Blasio will not cultivate the chummy relationship with Wall Street that Michael Bloomberg did. But there’s likely to be more continuity between their mayoralties than most people expect. In fact, that continuity is a bigger threat to the city’s future than the immediate collapse that some conservatives fear.
First, the election data. As Nicole Gelinas points out, de Blasio’s election was not the mandate for change that the margin of victory suggests.
Preliminary results show that about 1 million New Yorkers voted yesterday. That’s 13 percent lower than four years ago. Back then, remember, many voters disillusioned with the choices—Mayor Michael R. Bloomberg was running for a third term against the uninspiring Bill Thompson, Jr.—just stayed home. Turnout this year was as much as 30 percent below 12 years ago, when Bloomberg won his first victory. Voters supposedly so eager for change this year didn’t show that eagerness by voting. And a slim majority of the people who did vote—51 percent—told exit pollsters that they approve of Bloomberg, anyway. Though de Blasio’s victory margin was impressive, the scale of the win looks less stellar when put into recent historical context. As of early Wednesday, de Blasio had 752,605 votes—a hair shy of Bloomberg’s 753,089 votes in 2005.
So de Blasio did not win the votes of an unprecedented number of New Yorkers. And many of those who did vote for him also supported Bloomberg. That doesn’t mean that they like everything Bloomberg did. But there’s no evidence here of a progressive tsunami.
What about de Blasio’s career? The tabloid press paid a great deal of attention to de Blasio’s visits to communist Nicaragua and the Soviet Union as a young man. More recently, however, de Blasio worked as a HUD staffer under Andrew Cuomo, and as campaign manager for Hillary Clinton. De Blasio took liberal positions during his tenure on the city council, particularly on symbolic issues involving gay rights. But this is not the resume of a professional radical.
It’s true that de Blasio made “a tale of two cities” the central theme of his campaign. As many observers have pointed out, however, he lacks the authority to enact his signature proposal: a tax increase on high earners, to be used to fund universal pre-K. Nothing’s impossible, but the chances of the state legislature approving such a tax hike are slim. The same goes for several of de Blasio’s other ideas, including a city-only minimum wage higher than the state’s minimum and the issuance of driver’s licenses to illegal immigrants.
The real issues under the de Blasio administration will be matters over which the mayor has some direct control. That means, above all, contracts with city workers, and policing. Will de Blasio blow the budget to satisfy public employee unions? And will he keep crime under control after eliminating stop-and-frisk?
I’m cautiously optimistic about public safety. New Yorkers didn’t like living in fear, and de Blasio is smart enough to know that he’s finished if crime returns. Stop-and-frisk is not the only weapon in the NYPD’s arsenal.
The unions are a bigger problem. New York’s short-term fiscal outlook is reasonably good, so it’s hard to imagine de Blasio playing Scrooge to faithful supporters.
Even so, de Blasio’s mayoralty is unlikely to be a revolutionary moment. As Walter Russell Mead has argued, the biggest risk is that he pursues the same high-tax, high-regulation, high-service style of government on which Bloomberg relied. Those policies have helped transform much of New York into a gleaming shopping mall, which is admittedly a lot better than an urban jungle. But they are a bad strategy for prosperity in the years and decades to come.
Niccolo Machiavelli is often described as arguing that morality has no place in politics. That’s not quite right. Machiavelli believes that morality is crucial to political success. He just thinks that the important thing is to seem to possess the moral virtues, rather than actually to practice them. In a famous passage of The Prince, Machiavelli puts it this way:
Thus, it is not necessary for a prince to have all the above-mentioned qualities in fact, but it is indeed necessary to appear to have them. Nay, I dare say this, that by having them and always observing them, they are harmful; and by appearing to have them, they are useful…
In a column for Bloomberg, Ramesh Ponnuru implicitly encourages social conservatives to take Machiavelli’s advice. Reflecting on the likely failure of Ken Cuccinelli’s campaign for governor of Virginia, Ponnuru observes that current governor Bob McDonnell won a big victory in 2009 even though he agrees with Cuccinelli on many social issues. So:
Why do they seem to be succeeding now when they failed then? It’s partly a matter of countenance: McDonnell was cheerful (if boring), and Cuccinelli often appears dour and argumentative…Another difference, though, is that Cuccinelli made his name as a conservative crusader, especially on social issues, where McDonnell made his as a bipartisan problem-solver. McDonnell’s Democratic critics had to dig up a 20-year-old grad-school thesis he had written to make him look out of the mainstream; Cuccinelli’s have more recent initiatives and statements to work with.
Ponnuru goes on to contrast Cuccinelli’s likely failure to win election tomorrow with Chris Christie’s likely success:
Socially conservative positions on hot-button issues don’t seem to be a deal-breaker even for the much more liberal voters of New Jersey. Christie has vetoed legislation to grant state recognition to same-sex marriage—a judge later ordered it, though Christie briefly appealed—and vetoed bills to fund Planned Parenthood five times.
Ponnuru’s conclusion is that social conservatives shouldn’t be too upset by Cuccinelli’s defeat, since the examples of McDonnell and Christie show that social conservatives are not necessarily losers in blue and purple states. That’s true, but the distinction between seeming and being is important here. Ponnuru is right that social conservative views are not, in themselves, electoral poison. In other words, seeming to be a social conservative is not a problem—and may in some cases be good politics. Yet actually being one, in the sense of making serious attempts to promote social conservative policies, is and will remain a serious obstacle to victory in places like Virginia and New Jersey.
Chris Christie is a good example of this dynamic. Christie knew quite well that his challenge to the gay marriage bill was purely symbolic, since the liberal state supreme court was certain to reinstate the law. What’s more, Christie dropped his opposition as soon as he could credibly claim that the court had forced his hand. This, too, was inevitable in a state in which a considerable majority of voters, including Republicans, favor gay marriage.
Christie’s Machiavellian approach isn’t popular with dedicated social conservatives. The National Organization for Marriage and the Family Research Council have both condemned Christie’s handling of gay marriage. But symbolic conservatism is popular with more moderate voters, who want to express disapproval of gay marriage and abortion, but are uncomfortable with policies that seem intrusive or intolerant.
The lesson of today’s election, then, will not be that social conservatives can compete in moderate and liberal areas if they offer more explicit and articulate defenses of their views. It’s that they can get away with expressing social conservative beliefs so long as they do nothing to suggest that those beliefs are likely to end up enshrined in law. Ponnuru points out that “If Christie wants to run for president, he may find that pointing this out is a low-cost way of appealing to a national constituency that matters a lot in his party.” Somewhere, Machiavelli smiles.
In a post for Prospect, Christopher Fear asks why academic political theory is so remote from political practice. He concludes that it’s because political theorists devote themselves to eternal riddles that he dubs “Wonderland questions” rather than today’s problems. Consider justice, perhaps the original topic of political theorizing:
One of the central questions of academic political philosophy, the supposedly universal question “What is justice?” is a Wonderland question. That is why only academics answer it. Its counterpart outside the rabbit-hole is something like “Which of the injustices among us can we no longer tolerate, and what shall we now do to rectify them?” A political thinker must decide whether to take the supposedly academic question, and have his answers ignored by politicians, or to answer the practically pressing question and win an extramural audience.
Fear is right about the choice that political theorists face between philosophical abstraction and making an impact on public affairs. But he doesn’t understand why they usually pick the former. The reason is simple. Academic political theorists ask academic questions because…they’re academics.
In other words, political theorists are members of a closed guild in which professional success depends on analytic ingenuity, methodological refinement, and payment of one’s intellectual debts through copious footnoting. They devote their attention to questions that reward these qualities. Winning an extramural audience for political argument requires different talents, including a lively writing style, an ear for the public discourse, and the ability to make concrete policy suggestions. But few professors have ever won tenure on the basis of those accomplishments.
Another reason academic political theorists avoid the kind of engagement Fear counsels is that they have little experience of practical politics. Most have spent their lives in and around universities, where they’ve learned much about writing a syllabus, giving a lecture, or editing a manuscript—but virtually nothing about governing or convincing lay readers. How does expertise on theories of distributive justice, say, prepare one to make useful suggestions about improving the healthcare system? Better to stick with matters that can be contemplated from the comfort of one’s desk.
In this respect, political theorists are at a considerable disadvantage compared to professors of law or economics. Even when their main work is academic, lawyers and economists have regular chances to practice in the fields they study. Within political science, many scholars of international relations pass through a smoothly revolving door that connects the university with the policy community. Political theorists have few such opportunities.
Fear points out that it wasn’t always this way. Before the 20th century, many great political theorists enjoyed extensive political influence. But Fear forgets the main difference between figures like Machiavelli, Locke, Montesquieu, Madison, Burke, Hume, Mill, or Marx and their modern epigones. The former were not professors. Although all were devoted to the philosophical truth as they understood it, they were also men of affairs with long experience of practical politics.
The “brilliant and surreal tragedy of academic political theory,” then, is not that political theorists have been diverted into the wrong questions. It’s that political theory is an uncomfortable fit with the university. Academic political theorists gravitate toward the kind of questions that career scholars are in a position to answer.
August 20th was the birthday of the beloved “weird fiction” author H.P. Lovecraft. Like many writers, I confess that I failed to honor the occasion. But Peter Damien did notice that Lovecraft would have turned 123 this year. He wonders why anyone cares:
Because the fact is…he was a godawful writer. He was so bad. I really cannot stress this enough. I’m aware that the quality of a writer’s fiction is very much a matter of personal taste and not objective (and those people who mistakenly believe it is objective and matches up to their own tastes are always wrong). Still, I think we can safely agree that he was really awful as a writer, given that even people who are fans of Lovecraft don’t seem to defend his writing very much.
Damien’s aesthetic judgment is irreproachable. There’s not much to be said in defense of passages like this one from “The Call of Cthulhu”:
Cthulhu still lives, too, I suppose, again in that chasm of stone which has shielded him since the sun was young. His accursed city is sunken once more, for the Vigilant sailed over the spot after the April storm; but his ministers on earth still bellow and prance and slay around idol-capped monoliths in lonely places. He must have been trapped by the sinking whilst within his black abyss, or else the world would by now be screaming with fright and frenzy. Who knows the end? What has risen may sink, and what has sunk may rise. Loathsomeness waits and dreams in the deep, and decay spreads over the tottering cities of men. A time will come – but I must not and cannot think! Let me pray that, if I do not survive this manuscript, my executors may put caution before audacity and see that it meets no other eye.
Yet Damien’s definition of good writing is too narrow. He argues that Lovecraft’s “ideas were themselves amazing things. It’s just that Lovecraft lacked the capability to do anything useful with them himself.” If that were simply true, however, no one would read Lovecraft—or remember anything that he wrote. Lovecraft was a terrible crafter of sentences and had a distinctly brute-force approach to exposition. Despite these shortcomings, the fictional universe he created is unforgettable, right down to the ludicrous pseudo-languages he invented for his various creations.
Lovecraft’s profound influence as a creator of worlds suggests that he was a better writer, in the crucial sense of articulating and communicating his ideas, than his more technically accomplished competitors. To criticize his stilted dialogue or Gothic affectations is to miss the point. Clumsy as he is, Lovecraft is remarkably good not only at transporting his readers to places that don’t exist, but at bringing them back with mementos from the journey. What more can we ask of a writer in the overlapping group of genres that includes SF, horror, and fantasy?
Lovecraft has a recent counterpart in George R.R. Martin, the author of the Game of Thrones series. Like Lovecraft, although in a rather different style, Martin writes terrible sentences. But he has other virtues, particularly the ability to balance and pace dozens of parallel storylines and viewpoints.
I’m not suggesting that Lovecraft or Martin belong to the first rank of English literature. Still, I’d rather read their stuff than exercises in technical perfection inspired by, say, Raymond Carver. Is it “good” writing? Who cares. Ph’nglui mglw’nafh Lovecraft R’lyeh wgah’nagl fhtagn!
Jamelle Bouie argues that conservatives are simultaneously obsessed with and oblivious to race. Citing Glenn Beck’s accusation that President Obama hates white people and some conservatives’ public delight in the Zimmerman verdict, Bouie contends that conservatives have adopted a distorted and distorting understanding of racism according to which “anyone who treats race as a social reality is a racist.” It follows that:
Because Obama acknowledges race as a force in American life—and because he even suggests that there are racists among us—he becomes the “real racist,” a construction designed to give conservatives moral high ground, while allowing them to insult Obama. After all, for them, “racist” is the worst accusation in American life.
Bouie is right to criticize the naïveté about race that Dan McCarthy mentioned in his defense of Jack Hunter, which is characteristic of talk radio and other political entertainment. But he misunderstands the conceptual frame that many conservatives apply to these issues.
The background assumption in many conservative arguments about criminal justice or affirmative action is not precisely that any acknowledgement of “race as a force in American life” is racist. Rather, it’s that racism refers only to the kind of eye-popping bigotry recently on display in the film “Django Unchained”.
Conservatives correctly observe that this kind of overt hatred is rare today. They wrongly conclude from this that legacies of slavery and segregation are not relevant to modern life, and that anyone who says they are must therefore have ulterior motives.
Jonah Goldberg offers a representative sample of this view. In a post several months ago, Goldberg argued that racism “should be defined as knowing and intentional ill-will or negative actions aimed at an individual or group solely because of their race.”
Note the qualifiers: “knowing and intentional”; “ill-will”; “solely”. According to Goldberg, racism is limited to conscious malice independent of any non-racial considerations. And racism, on this definition, is no longer a big problem.
But this definition seriously obscures the role of race in American society, past and present. To mention only an obvious defect, it excludes the ideas about black inferiority that informed the “positive good” defense of slavery. John C. Calhoun was not Calvin Candie, the psychopathic plantation owner in “Django Unchained.” But it is obtuse to deny his racism on the grounds that he believed the slave system was beneficial to blacks.
In more contemporary inquiries, the restrictive definition of racism Goldberg suggests conceals systematic inequities in the economy and other spheres of activity. One need not regard every racially disproportionate outcome as the result of discrimination to understand that it is not simply a coincidence that blacks, who have within living memory been excluded by law and custom from the vehicles of upward mobility, tend to be poorer and less educated than whites. Colorblindness on these issues is more like simple blindness.
Clumsy as it was, Rand Paul’s speech at Howard University in April was a step toward more serious conservative reflection on race. Although he relied implicitly on a definition of racism as conscious bigotry, Paul at least acknowledged that the bigotry of the past has had unconscious and enduring consequences, which have to be the starting point for arguments about policy. The task for conservatives is to make a plausible case that the policies they favor will be more effective in ameliorating those consequences than either the status quo or progressive alternatives. Until then, our black fellow citizens will be correct in their judgment that we are either hopelessly naïve or playing dumb about the unique and heavy burdens that they continue to bear.
The Atlantic ran a curious article last week. Under the headline “Why Americans All Believe They Are Middle Class”, Anat Shenker-Osorio argues that Americans are encouraged by politicians, the media, and even colloquial language to believe that they belong to the middle class, when in fact they do not. Shenker-Osorio observes:
The puzzle is why so many who do not fit the category (as median family income reported as just above $50,000 defines it) believe they do. Why does the description “middle-class nation” continue to feel appropriate, desirable, or both?
There’s not much of a puzzle here. According to a Pew poll that Shenker-Osorio cites in the piece, Americans don’t all think they’re middle class. On the contrary, they’re pretty good judges of where they belong in the class structure, at least as defined by the distribution of income.
In 2012, the poll showed 32 percent of Americans identifying as lower class or lower-middle class, 49 percent identifying as middle class, 15 percent identifying as upper-middle class, and 2 percent identifying as upper class. That correlates reasonably well with the 25 percent of American households with incomes of less than $25,000 a year, the 50 percent who earn between $25,000 and about $90,000 a year, the 20 percent who earn between $90,000 and $180,000, and the 5 percent who earn more than that.
The similarities are particularly striking when it comes to the top tier. The top 2 percent of American households have incomes of more than $450,000. That seems like a plausible cutoff for the 2 percent of Americans who call themselves “upper class”.
Regional variations are extremely important in understanding differences at the margins. Well-paid professionals are clustered in cities like New York and Washington. They tend to call themselves middle class or upper-middle class because taxes and high prices, especially for housing, constrain their standard of living. The reason is not that they’re trying to keep up with the Kardashians. In low-cost areas, on the other hand, it isn’t necessarily crazy to consider oneself middle class on a household income of, say, $30,000.
Moreover, class is about more than income. As the great sociologist Robert Nisbet observed, the concept of class developed to make sense of Victorian Britain. It referred partly to differences of wealth and economic activity: businessmen were middle class no matter how rich they were. But it also indicated different mores, speech, and even physical characteristics. When Britain imposed military conscription during the First World War, it discovered that working class recruits were significantly shorter than soldiers from the upper class.
Charles Murray and others have argued that the United States is moving toward a similarly visible class system. But it’s still not easy to tell at a glance (or a listen) to which class Americans belong. Most look and sound like they belong somewhere in the middle. That’s the historically consistent and still distinctively American phenomenon that the Pew poll reflects.
So it’s reasonable to see the U.S. as a middle-class society, even though far from all Americans identify as middle class. The question is whether it will remain one. The Pew numbers show the share of Americans who identify as middle class declining from 53 percent in 2008 to 49 percent in 2012. Far from indicating that Americans are victims of false consciousness, that figure suggests that the norms and expectations that defined American social reality in the 20th century are eroding fast.