When I was growing up in Birmingham in the Sixties and Seventies, my parents and I lived with my paternal grandparents — or did they live with us? I actually don’t know quite how to put it, and that is a telling fact.
By the time I was old enough to notice, my grandfather had been forced into retirement after suffering a stroke while driving. In those days nobody wore seat belts, so when, unconscious, he smashed his car he also smashed his body. I do not understand how he survived, but his rehabilitation lasted months and months and he never walked again.
A series of events then occurred in an order that I cannot confidently recall today. My father spent several years in prison, as did his younger brother — for unrelated reasons, I believe, though I know little about either circumstance; my grandfather, as if he didn’t have enough troubles, was diagnosed with advanced lung cancer and, after holding on for longer than anyone thought possible, thanks largely to the devoted care of my grandmother, died. That same grandmother also cared for my younger sister and me, since my mother was working long hours to keep a roof over our heads and supper on the table.
Eventually my father got out of prison and got a job as dispatcher for a trucking company. He worked the night shift: when I left for school in the morning he was still at work, and when I returned in the afternoon he was sleeping, and on the weekends he tended to be drunk, so I didn’t see much of him. (His temper was sufficiently unpredictable that I was just fine with that.) But by this time I would have said that my grandmother lived with us, because it was obvious that my father, for all his shortcomings, was the head of the household.
And yet we were all living in the same house that my father had grown up in, and at some point, after spending ten years in the Navy and marrying and divorcing and marrying again, had returned to. Presumably, had I been fully aware in those earlier days, I would have said that we were all living in my grandfather’s house, and he would have been the head of the household.
I found myself thinking about my own history after reading this thoughtful and moving essay by Navneet Alang. It is, among other things, a meditation on getting what feels like a late, an impossibly late, start in life: “How do you start to actually live your life when you’re already an old man?”
There’s much in this essay worthy of discussion, but this is the passage that set my memory working: “Now, it’s hard to shake the feeling that I have roundly and thoroughly f*ed up my life. After all, though it’s true I started and then cultivated a writing career while completing the dissertation, I am still living out that most obvious North American symbol of a life gone wrong: I’m single, nearly 40, and typing this in my parents’ home, where I have lived for the past four years.”
I think Alang is right to say that his experience is an “obvious North American symbol of a life gone wrong” — Failure to Launch and all that — but as I read his words I realized that it never would have occurred to my father to be embarrassed about living with his parents. It was too common in that era. Other families in our neighborhood were structured like ours, and our own extended family contained several similar groupings.
And of course if you move further back in time, or look elsewhere in the world today, you’ll find that the multi-generational family sharing living quarters is, if anything, the norm. And it’s a norm that, though it certainly has its shortcomings, works well in various dimensions of “home economics”: if one wants to look at it in the most grossly utilitarian terms, through living as an extended family my parents got free child care, my grandparents got free rent, and I grew up surrounded by family members who loved me, even when my father was in prison. How did living this way become an image of “a life gone wrong”?
Well, for one thing, the separation of the extended family is good for many industries, especially those that are housing-related and, of course, restaurants. (Few of those young adults who move out of the house learn to cook right away.) The more atomized people are, the more they need to buy — which also means: the more they need to work outside the home, in order to make the money to do the buying.
But it seems to me that the single most important contributing factor here — the most important by far — is the sexual revolution. Young adults need to live apart from their parents in order to be free to hook up without interference, explanation, or embarrassment. This is why we’re seeing increasing interest in “co-living” arrangements, especially in megalopolises — here’s a London example, and here’s one in Manhattan: the benefits of sharing at least some resources coupled with sexual freedom, which sounds like a great trade-off except that you’re basically living in a college dorm.
Some people like living in dorms, I guess, and de gustibus non est disputandum; but another way to interpret all this is to see it as an indication of the significant sacrifices in general quality of life that people are willing to make in order to ensure maximal sexual freedom and avoid the “failure to launch” stigma — which, again, is largely a stigma because it suggests an insufficiently high valuation of sexual freedom. In the social world most younger adults inhabit, it’s simply unthinkable to change the hierarchy of values in such a way that erotic opportunity drops down the list.
But for other people, in other times and places, it was and is thinkable — which is worth remembering, however you explain it. The place of sex in current hierarchies of value is not a given of human nature; it’s an artifact of a particular socio-economic era, a particular ideology, a particular set of “hidden persuaders.” You may genuinely like your Controllers, but it’s always good to know who they are and what they want you to do.
I generally have little patience for denunciations of “cultural appropriation” — I tend to think that appropriation is at the very heart of all culture, and absolutely intrinsic to creativity — but yesterday’s tweetstorm by artist/writer/photographer Jon Tsuei gives a sense of the complications.
Tsuei was writing about the forthcoming adaptation of the classic manga Ghost in the Shell and explaining why he doesn’t like it. After explaining that Japan in that era was a disarmed country whose identity was largely invested in technological pre-eminence, he wrote:
Ghost In The Shell plays off all of these themes. It is inherently a Japanese story, not a universal one.
— Jon Tsuei (@jontsuei) April 15, 2016
You can “Westernize” the story if you want, but at that point it is no longer Ghost In The Shell because the story is simply not Western. — Jon Tsuei (@jontsuei) April 15, 2016
Then this final warning:
Respect the work for what it is and don’t bastardize it into what you want it to be. pic.twitter.com/ob6ZXOS2Qi
— Jon Tsuei (@jontsuei) April 15, 2016
I want to take Tsuei’s argument seriously here, but I also have some questions. Let me focus on two of them.
1) Would casting Japanese, or Japanese-American, actors address any of the issues Tsuei raises? (I ask this question because much of the controversy here seems to be focused on the casting of Scarlett Johansson in the lead role.) If the actors were ethnically Japanese but the film were in English, and marketed to a Western audience, would that not be removing the story from its proper context just as much as the current adaptation-in-progress would? Is there a way to present this story to a non-Asian audience that wouldn’t, in Tsuei’s view, fundamentally “bastardize” it?
2) In light of the argument that Tsuei makes here, what should we say about Kurosawa’s Throne of Blood and Ran? How would Tsuei respond to, say, a Shakespearean actor who denounced Kurosawa for wrenching the stories out of their proper context, who said, “Macbeth is inherently a British story, not a universal one … Respect the work for what it is and don’t bastardize it into what you want it to be”?
Please understand that these are not “gotcha” questions. I think one could legitimately argue that Kurosawa indicated that he was using rather than translating Shakespeare’s plays by creating his own titles and taking significant liberties with the story, and that therefore he’s not simply “bastardizing.” (In that case, would a film “based on” Ghost in the Shell but with a different title and a somewhat different story line be acceptable to Tsuei?) One could also distinguish, as Tsuei is implicitly doing, between stories that are “universal” and hence theoretically translatable into other cultures and stories that are so deeply embedded in a particular culture that they are simply incapable of translation — but how can we tell whether a given story is one or the other? Or whether some elements of it are translatable and others are not?
Again, I’m a let-a-thousand-flowers-bloom kind of guy — to use a phrase that may be untranslatable — and am not inclined to think that harm is done to any original culture or original work of art by even the schlockiest and crassest Hollywood adaptation. But Tsuei raises some fascinating issues. I hope to return to them later.
I am not a Roman Catholic — I am not even what is usually called an Anglo-Catholic — but I do think of myself as a Catholic Christian in the Anglican tradition. And if you can’t figure that one out … my apologies. One day I’ll explain myself. I mention all this now only in order to explain that while I am not an insider to the kerfuffle over Pope Francis’s new apostolic exhortation, I am not altogether an outsider either. I think what the Pope says matters, in one way or another, to all Christians, especially in the Western world, and even more especially to me; so it’s worth taking the time and trouble to understand him as well as we can.
Many — not all but many — of the conservative responses to Amoris Laetitia are exercises in uncharitable interpretation. For instance, Father Raymond J. De Souza: “From the first pages of Amoris Laetitia to the last, the exhortation evidently yearns to declare what it never declares: that the teaching on marriage and holy Communion can change.” Antonio Socci goes further, claiming that Francis is engaged in a “continuous demolition of Catholic doctrine” and is doing so by “cancelling the notion of ‘mortal sin.’”
Indeed, not only does Pope Francis not demolish Church teaching, or declare that such teaching can change, he insists upon the opposite:
In order to avoid all misunderstanding, I would point out that in no way must the Church desist from proposing the full ideal of marriage, God’s plan in all its grandeur: “Young people who are baptized should be encouraged to understand that the sacrament of marriage can enrich their prospects of love and that they can be sustained by the grace of Christ in the sacrament and by the possibility of participating fully in the life of the Church”. A lukewarm attitude, any kind of relativism, or an undue reticence in proposing that ideal, would be a lack of fidelity to the Gospel and also of love on the part of the Church for young people themselves. To show understanding in the face of exceptional situations never implies dimming the light of the fuller ideal, or proposing less than what Jesus offers to the human being. [p. 238 in the English version of Amoris Laetitia]
It turns out that you cannot avoid misunderstanding when people are determined to misunderstand — or rather, are insistent that they have read your heart and know what you truly intend, even if it’s the precise opposite of what you say. There’s no defense against that kind of reading.
Now, I suspect that conservative critics of Francis will say that they are reading a passage such as the above in light of what he says elsewhere, including elsewhere in the same document — as when he quotes with approval the Relatio Finalis of the recent Synod:
Under certain circumstances people find it very difficult to act differently. Therefore, while upholding a general rule, it is necessary to recognize that responsibility with respect to certain actions or decisions is not the same in all cases. Pastoral discernment, while taking into account a person’s properly formed conscience, must take responsibility for these situations. Even the consequences of actions taken are not necessarily the same in all cases. [quoted on p. 234]
The argument here relies on two principles.
The first is the principle of equity, which goes all the way back to Aristotle’s Nicomachean Ethics. Aristotle points out that it is in the nature of law, and of any particular law, to be deficient insofar as it is general. Equity lies in the discerning, prudential application of a general law to particular cases. Thomas Aquinas agrees with Aristotle on this, and in a passage that Francis quotes (p. 235), says
Although there is necessity in the general principles, the more we descend to matters of detail, the more frequently we encounter defects… In matters of action, truth or practical rectitude is not the same for all, as to matters of detail, but only as to the general principles; and where there is the same rectitude in matters of detail, it is not equally known to all… The principle will be found to fail, according as we descend further into detail.
From this principle follows a second one: subsidiarity, which requires that issues that can be dealt with locally should be dealt with locally. Now, in Catholic teaching — see the Catechism, citing Pius XI’s Quadragesimo anno — this principle is usually applied to governmental matters, but it seems, certainly in Francis’s view, to have relevance to the Church as well. The relevant passage from Quadragesimo anno: “A community of a higher order should not interfere in the internal life of a community of a lower order, depriving the latter of its functions, but rather should support it in case of need and help to co-ordinate its activity with the activities of the rest of society, always with a view to the common good.” So for Francis, doctrine (in this case moral doctrine) is established by the Magisterium, the only place within Catholicism where it can be established, but the pastoral application of discipline in light of that doctrine belongs to a subsidiary realm, usually the parish.
Francis is simply declining the (inevitably fruitless) attempt to settle at the level of the Papal office, and by further specification of law, issues that can better be settled at the local level by pastors who, knowing the people they serve, can apply prudential, equitable judgment. The principles he recommends are, when they’re invoked in other contexts, universally recognized as classically Catholic and classically conservative.
So why the hostility from so many conservative Catholics? I am not certain, but from what I have heard over the years, I don’t think many conservative Catholics have much trust in the average parish priest, especially here in America. But then, they don’t seem to have much trust in bishops, taken as a class, either. And from these responses to Amoris Laetitia they clearly don’t trust the Pope. So I find myself wondering in whom, or in what, they do place their trust.
I crave clarification or correction on all these matters.
Every philosopher must practice the arts of critique and construction: dismantling the arguments one disagrees with and building one’s own intellectual edifices. Few thinkers are equally good at both of these practices, which call for quite different skills, though no great thinker practices only one. As in so many other philosophical conversations, in this one we can trace ancestry back to the same original pair: Plato, through his mouthpiece Socrates, is the champion of all dismantlers; Aristotle the king of the architects.
Continental philosophy over the past half-century and more has specialized in the arts of disassembly—in what Martin Heidegger called Destruktion, Jacques Derrida deconstruction. It has not always been easy to tell what such thinkers would put in the place of the edifices they dismantled; at times some of them have simply disavowed the need to do so. (After delivering a paper at a conference early in his career, Derrida was once asked where he was going with his arguments, to which he replied that he was trying to put himself in a position where he no longer knew where he was going.)
By contrast, the English philosopher Roger Scruton has devoted much of his career to the articulation of a complex and highly positive account of conservatism: of what resources the conservative disposition brings to the challenge of sustaining the social order—through politics, yes, but especially through the mediating social forces of religion, community, and the arts. But in Fools, Frauds and Firebrands: Thinkers of the New Left, he largely sets aside constructive philosophical work in order to dismantle the dismantlers. This he does with rhetorical vigor and flair, and though he often paints with the broadest of brushes and does not always make the distinctions perfect fairness would call for, his critique is a powerful one indeed.
The major figures Scruton explores span a wide range of disciplines: there are historians (E.P. Thompson and Eric Hobsbawm), an economist (John Kenneth Galbraith), a legal theorist (Ronald Dworkin), philosophers of various stripes (Jean-Paul Sartre, Jürgen Habermas), a psychoanalyst (Jacques Lacan), a literary critic (Edward Said), and various unclassifiable figures (Michel Foucault, Slavoj Žižek). Do they all belong in the same book? Are they rightly subject to the same general critique?
Scruton gives two reasons for bringing them together here. The first is that they have all identified themselves as leftists—a claim that I do not believe to be true. The second is that “they illustrate an enduring outlook on the world, and one that has been a permanent feature of Western civilization at least since the Enlightenment.” That outlook is composed of two major commitments, or proclaimed commitments anyway: to liberation of individuals from oppressive existing structures, especially political, familial, and religious; and to social justice, usually conceived as requiring the elimination of political and economic systems that create inequality.
Scruton rightly notes that much of the internal tension, at times exploding into hatred, among figures of the New Left arises because these two commitments are pretty clearly not compatible: the more fully people are liberated, the more energetically they will create and sustain various forms of inequality, while equality can only be enforced at the cost of placing strict limits on personal freedom.
The one figure Scruton treats who simply does not fit is Foucault. Scruton calls him an “anarchist,” a view for which there is, as far as I know, no evidence, though he may have been a kind of libertarian. Foucault is hard to place on any political map because, as he himself frequently said, he was not interested in providing solutions to problems but rather a history of problems: his relentless emphasis on the failures of all schemes of liberation, their absorption into new forms of power—“Meet the new boss, same as the old boss,” as Pete Townshend of The Who wrote many years ago—has caused a good many leftists to insist that Foucault is actually a conservative.
While I am complaining, I will also note that Scruton has nothing to say about how several of these figures—especially Žižek and Alain Badiou, along with Jacques Derrida, who is barely mentioned here—have played a role in the so-called “religious turn” of humanistic studies, in which various movements generally called “postmodern” find a significant place for religion in their reflections, if not in their beliefs or practices. This marks a significant departure from the relentless secularism of most earlier forms of European leftism, and that deserves note. Nor does Scruton account fully for Jürgen Habermas’s reputation as a centrist figure in the German and more generally the European context. (Habermas too has spoken more warmly of religion in recent years.)
But Habermas’s current centrist reputation provides a very nice illustration of one of the key themes of this edition of Scruton’s book. Fools, Frauds and Firebrands is a reissue, with revisions and additions, of a book originally published in 1985 under the title Thinkers of the New Left. At that time, scarcely anyone imagined that the collapse of the Soviet Union was imminent, and the New Left of that era was often implicitly—less often explicitly—communist in orientation. In the intervening three decades things have changed, and in an important passage I must cite at length, Scruton explains why:
we must recognize that the Marxist spectacles are no longer on the left-wing nose. Why they were removed, and by whom, it is hard to say. But for whatever cause, left-wing politics has discarded the revolutionary paradigm advanced by the New Left, in favour of bureaucratic routines and the institutionalization of the welfare culture. The two goals of liberation and social justice remain in place: but they are promoted by legislation, committees and government commissions empowered to root out the sources of discrimination. Liberation and social justice have been bureaucratized. In looking back at the left intellectuals in the decades before the collapse of the Soviet Union, therefore, I am observing a culture that now survives largely in its academic redoubts, feeding from the jargon-ridden prose that it amassed in university libraries, in the days when universities were part of the anti-capitalist ‘struggle.’
One need look no further than the current student protests on American college campuses to see evidence for Scruton’s interpretation. Unlike their radical counterparts from the 1960s, these students have no thought of breaking down and rebuilding the fundamental structures of the university along egalitarian lines; rather, in their dream world, universities are still hierarchical, bureaucratic, and driven by administrative diktat, but the bosses doing the dictating are sympathetic to the students’ politics and willing to persecute the students’ enemies. By the very same logic, figures like Habermas and Derrida, whom we might call post-leftists, seek not revolution or even socialism as such but rather the elevation of the European Union into an imitative bureaucratic rival of the United States. (Just before Derrida’s death in 2004, he and Habermas oversaw a manifesto calling for just this; Old Europe, New Europe, Core Europe, with contributions from several other intellectuals, was published in 2005.)
This altered emphasis can be seen quite clearly in the first two figures Scruton treats in Fools, Frauds and Firebrands: E.P. Thompson and Eric Hobsbawm. Thompson, author of the enormously influential 1963 study The Making of the English Working Class, resigned from the Communist Party of Great Britain in 1956 because the party would not repudiate the Soviet invasion of Hungary, and he later left his academic position at the University of Warwick because he thought that the university had become the merest tool of capitalism. Scruton thinks Thompson a biased and unreliable historian but clearly has a grudging respect for his integrity. Hobsbawm, by contrast, though remaining a communist all his life — in 1956 he even said that he was “approving, with a heavy heart, of what is now happening in Hungary,” and in an autobiography published in 2002 he commented that “To this day I notice myself treating the memory and tradition of the USSR with … indulgence and tenderness” — was perfectly happy to work for the University of London and to accept academic honors from all sorts of capitalist enterprises. In 1998, he was even pleased to be received by Her Majesty the Queen into the Order of the Companions of Honour.
If we understand the nature of this transformation—this move from the necessity of a fundamental restructuring of the Western political order to a mere consolidation of its bureaucratic order with a few minor directional tweaks, or even (in the case of Hobsbawm) just a few rhetorical gestures, as though reciting an ancient liturgy in a long-forgotten language—then the value of Scruton’s book becomes clear. Without an account of this transformation, one might reasonably ask why it would be worth our time to read about these figures whose real political influence, if it ever existed at all, ended a quarter-century ago. The answer is that Scruton shows how even the most seemingly radical language can easily, painlessly be absorbed into the very social and political institutions it is supposed to be opposing. Fools, Frauds and Firebrands is, among other things, a skillful ethnographic account of an intellectual subculture whose words and deeds run always on parallel tracks.
Let me conclude by returning to my opening theme of building and dismantling. Late in his book Scruton comments that he has searched these thinkers’ writings “in vain for a description of how the ‘equality of being’ advocated in their fraught manifestoes is to be achieved.” In the end, he says, “We know nothing of the socialist future, save only that it is both necessary and desirable.” But for that reason the socialist faith is a blind faith. Scruton knows perfectly well what he thinks is worth building — or, more to the point, conserving — and in his final chapter, titled “What is Right?”, he gives as clear and concise a summation of his own political position as one could ask for. It is only 15 pages long, but it is by itself worth the price of admission to this book.
Alan Jacobs is distinguished professor of the humanities in the honors program at Baylor University and the author of The Book of Common Prayer: A Biography.
Some years ago I was conversing with a Christian academic who is very conservative theologically — and conservative in his social views as well. We were trading ideas about a possible conference in which Christian scholars from a variety of disciplines would talk about what distinctively Christian approaches to their work might look like.
One person we discussed was the eminent economist Deirdre McCloskey, who at that time had recently written an extremely thoughtful essay on what it’s like to do economics as a Christian. My friend had read the essay and thought highly of it — but then he grew meditative. And I knew why. It was because Deirdre McCloskey had once been Donald McCloskey, but had undergone sex-reassignment surgery in the mid–1990s, an experience documented in the memoir Crossing. But when my friend finally spoke, he said something that surprised me:
“I’m reluctant to ask McCloskey to participate in such a conference — I’m not dead-set against it, and I’m open to persuasion, but I’m reluctant. Here’s the thing, though: it’s not because I think her gender change was sinful. It’s because the fact that Deirdre used to be Donald, or Donald is pretending to be Deirdre if you want to put it that way, is the only thing we’d hear about. Nothing that any of us actually said at the conference, including what McCloskey said, would even be noticed. All we’d get to do is field questions about what her presence there meant.
“Was it sinful for Donald to transform into Deirdre? Honestly, I don’t know what to think. I mean, yes, ‘male and female created he them,’ but to use that to settle the question would be the grossest kind of proof-texting. I think we have clear biblical evidence that homosexual behavior is wrong, but if we say that we can’t use surgery to change sex because we are altering the way we’re made, then is cosmetic surgery also sinful? Maybe … but sometimes the line between ‘merely’ cosmetic surgery and surgery to restore full bodily function, or achieve it for the first time, is hard to draw. You get dictatorial about this kind of thing and you end up becoming a Christian Scientist. I’m just not sure where to stand on this. This is probably not what you expected to hear from me, is it?”
No, it wasn’t. I was somewhat taken aback. But in his view — and as soon as he stated it I realized that he was correct — the theological and moral issues raised by homosexuality are completely different from those raised by transgenderism. And if we’ve all had a long time to think about homosexuality, we haven’t had much time at all to think about transgenderism, which is effectively new. (There have been crossdressers forever, of course, and men who have “passed” as women and women as men, but surgery and the role of the law in establishing identity create a new situation.)
And nobody is thinking about it now. Nor are many people likely to think about it, because it has become the new battleground in our endlessly absurd and absurdly endless culture war.
For some a rejection of transgenderism is now intrinsic to Christianity, on theological grounds that are almost never articulated (and that they almost certainly could not articulate); but they are largely responding to others for whom making every accommodation to transgender people has become The Great Civil Rights Issue of Our Time and for whom the nuclear option is always the first and only option — because error has no rights, remember? Among these people there’s no interest in thinking, or talking, or persuading; the demand is instant capitulation, or else.
And, it should be noted, most of those making such demands are people who, to put it in the mildest terms I can manage, didn’t give a rat’s mangy ass about transgendered people until about six months ago.
This is no way to live. And it’s certainly no way to think. It’s impossible, I believe, to read Deirdre McCloskey’s memoir with even a modicum of charity and not be moved by the plight Donald McCloskey found himself in for decades; but it is also impossible to take seriously casual cross-dressers who insist that their moral and legal situation must be precisely the same as McCloskey’s. If we were to think about all this, we’d realize that a great many experiences, beliefs, and behaviors are currently being lumped together under amorphous notions of “transgenderism” and “gender fluidity” — but meanwhile the idea that anyone, even a child, might have temporary gender dysphoria is not just rejected but punished to the fullest extent.
Such oversimplifications serve and promote extremism, which in turn promotes more ingroup/outgroup thinking, which leads to intensified culture wars. And I’m not at all convinced that all this is going to help the tiny, tiny number of transgendered people in America; I suspect we will end up with more people like Don/Dawn Ennis.
Not sure about that. But I do have a firm prediction: We will eventually see the Supreme Court rule on whether the Constitution allows separate public restrooms for men and women. The arguments against such “separate but equal” provisions will mimic, as that phrase suggests, the case against school segregation in the Civil Rights era, though perhaps the more central arguments will be based on the claim that we have an inalienable right to self-definition. Because at bottom that’s what all these fights are about: not sex or gender, but whether society is morally and legally obliged to defer to each individual’s self-understanding, and to do everything possible to enable each person to achieve a public identity that matches that internal one.
The existence of separate but equal public restrooms is an intolerable imposition on this right to self-definition, this right to live out to the fullest my own sense of self, however fluid that sense may be — so the argument before the Court will go. And I further predict that SCOTUS will accept that argument and order the elimination of men’s and women’s public restrooms.
The media will rejoice at this great victory for human flourishing — and perhaps most Americans will join them. After all, it will be an excellent excuse to forget about poverty, war, infrastructure collapse, porn addiction, universal governmental surveillance, and all the other problems we don’t have the first idea how to approach, much less to solve.
When will this court case happen? I’m setting the over/under at nine years.
One of the most troubling features of our current political and social climate is how powerfully it is shaped by sheer animus.
A couple of years ago, Scott Alexander wrote a post titled “I Can Tolerate Anything Except the Outgroup.” I strongly recommend that you read the whole thing, but essentially Alexander sets out to answer a question: How is it that, say, straight white men can be gracious and kind to, say, lesbian black women while being unremittingly bitter towards other straight white men? What has happened here to the old distinction between ingroups and outgroups? His answer is that “outgroups may be the people who look exactly like you, and scary foreigner types can become the in-group on a moment’s notice when it seems convenient.”
Then Alexander gives a powerful example. He mentions being chastised by readers who thought he was “uncomplicatedly happy” when he expressed relief that Osama bin Laden was dead.
Of the “intelligent, reasoned, and thoughtful” people I knew, the overwhelming emotion was conspicuous disgust that other people could be happy about his death. I hastily backtracked and said I wasn’t happy per se, just surprised and relieved that all of this was finally behind us. […]
Then a few years later, Margaret Thatcher died. And on my Facebook wall – made of these same “intelligent, reasoned, and thoughtful” people – the most common response was to quote some portion of the song “Ding Dong, The Witch Is Dead”. Another popular response was to link the videos of British people spontaneously throwing parties in the street, with comments like “I wish I was there so I could join in.” From this exact same group of people, not a single expression of disgust or a “c’mon, guys, we’re all human beings here.”
Even when he pointed this out, none of his readers saw a problem with their joy in Thatcher’s death. And that’s when Alexander realized that “if you’re part of the Blue Tribe, then your outgroup isn’t al-Qaeda, or Muslims, or blacks, or gays, or transpeople, or Jews, or atheists – it’s the Red Tribe.”
Since Alexander wrote that post, an article has appeared based on research that confirms his hypothesis. “Fear and Loathing across Party Lines: New Evidence on Group Polarization,” by Shanto Iyengar and Sean J. Westwood, indicates that Americans today do not simply feel animus towards those who disagree with them politically, but are prepared to act on it. Their research uncovered a good deal of racial prejudice, which is to be expected and which is likely to grow worse in the coming years, but people seem to think that they shouldn’t be racists, or at least shouldn’t show it. However, people of one Tribe evidently believe, quite openly, that members of the other Tribe deserve whatever nastiness comes to them — and are willing to help dish out the nastiness themselves. “Despite lingering negative attitudes toward African Americans, social norms appear to suppress racial discrimination, but there is no such reluctance to discriminate based on partisan affiliation.”
That is, many Americans are happy to treat other people unfairly if those other people belong to the alien Tribe. And — this is perhaps the most telling finding of all — their desire to punish the outgroup is significantly stronger than their desire to support the ingroup. Through a series of games, Iyengar and Westwood discovered that “Outgroup animosity is more consequential than favoritism for the ingroup.”
One of my consistent themes over the years — see, for instance, here and here — has been the importance of acting politically with the awareness that people who agree with you won’t always be in charge. That is, I believe that it is reasonable and wise, in a democratic social order, to make a commitment to proceduralism: to agree with my political adversaries to abide by the same rules. That belief is on its way to being comprehensively rejected by the American people, in favor of a different model: Error has no rights.
What is being forgotten in this rush to punish the outgroup is a wise word put forth long ago by Orestes Brownson: “Error has no rights, but the man who errs has equal rights with him who errs not.”
In a strong column, Michael Brendan Dougherty writes that “there are reasons to believe this moment is unsustainable and the basic status quo of the Republican Party will eventually emerge triumphant.”
The party has been unable to co-opt Trump, unable to endorse him, unable to oppose him effectively. But Trump is not going to bequeath to Republicans a squadron of Trumpistas in Congress. He is not going to build the kind of institutions that ideologues use to pressure a major political party. He’s just going to make a spectacle of himself. Barring a black swan event, he will fade away.
The GOP will still have all the problems that pre-dated Trump and that only he exposed. A plurality of its voters will still be unsatisfied with the party’s agenda, especially on the economy. It will still have problems with young voters, and still have a long-term demographic problem. Its philosophy will still be outdated.
But by 2018, all the talk of bloodbaths and schisms and fracturing will go away. The party will have powers to exercise, sinecures to offer, and orthodoxies to protect again. The party will get out of its hospital ward and move on. It will shock you how much it never happened.
As might be expected, I like this argument, since I have made it myself. In that post I wrote, “I would be very grateful if some wise person who is convinced that Trump is permanently changing the GOP would explain to me the mechanisms by which this change will be effected.” The power of incumbency is great, and not just in elected office. I expect the people who run the GOP now to be running it in 2020.
But I want to suggest one factor that might complicate the long-term effects of Trump. If he wins the party nomination — and Dougherty’s argument assumes that that is likely, though not inevitable — Republican politicians will have to decide whether to support him. And anyone who does, anyone who speaks up on Trump’s behalf and encourages voters to choose him in preference to Hillary Clinton, will carry that albatross around his or her neck permanently.
Among the major figures in the Republican Party, the one least likely to defend, endorse, or support Trump is surely Marco Rubio. (A point recently reinforced.) And if Rubio denounces Trump, or just stays silent, then that will significantly increase the likelihood of the scenario I imagined in that earlier post: an essentially intact GOP leadership in 2018 wheeling out as their preferred candidate a four-years-older, four-years-wiser, four-years-more-seasoned Marco Rubio.
B. What do you mean, “Easy for me to say”?
A. “Erst kommt das Fressen, dann kommt die Moral,” as Brecht said — grub first, then ethics. It’s a false dichotomy: either economic prosperity or human flourishing. People who are economically deprived don’t have the leisure to develop their … to start a Finer Things Club.
B. So we’re back to bread and circuses, then. In your model — in your Brave New World, if you want to keep the literary references going — we give up democracy, give up self-rule to the World Controllers, and in return get the bread that allows us to pursue the Finer Things.
A. That’s a rather dismissive way of —
Subjection in small affairs manifests itself every day and makes itself felt without distinction by all citizens. It does not make them desperate; but it constantly thwarts them and brings them to renounce the use of their wills. Thus little by little, it extinguishes their spirits and enervates their souls, whereas obedience, which is due only in a few very grave but very rare circumstances, shows servitude only now and then and makes it weigh only on certain men. In vain will you charge these same citizens, whom you have rendered so dependent on the central power, with choosing the representatives of this power from time to time; that use of their free will, so important but so brief and so rare, will not prevent them from losing little by little the faculty of thinking, feeling, and acting by themselves, and thus from gradually falling below the level of humanity.
— Hey, can you turn that down?
B. I’m not doing it. What is that?
A. That’s not coming from your laptop?
B. Nope. What was it saying? Something about obedience and servitude? Losing the faculty of thinking?
A. There must be a radio around here somewhere.
B. Yeah, must be. That was interesting, though. Hang on, let me google this… Oh. Hmmm. It’s from Tocqueville’s Democracy in America. Near the end of the second volume.
A. You’re kidding.
B. No. Apparently we have an eavesdropper.
The moral empire of the majority is founded in part on the idea that there is more enlightenment and wisdom in many men united than in one alone, in the number of legislators than in their choice. It is the theory of equality applied to intellects. This doctrine attacks the pride of man in its last asylum: so the minority accepts it only with difficulty; it habituates itself to it only in the long term. Like all powers, and perhaps more than any of them, therefore, the power of the majority needs to be lasting in order to appear legitimate. When it begins to establish itself, it makes itself obeyed by constraint; it is only after having lived for a long time under its laws that one begins to respect it.
B. Hmm. Someone seems to want to start an argument. Let me try to find out who said that … Well, that’s interesting. That’s Tocqueville too.
A. Odd. It sounded like almost the opposite of that first thing. Or different, anyway.
B. Yeah. You know, I think our eavesdropper might be on to something.
A. In what sense?
B. Well, we’ve been talking a lot about democracy, and we’ve been talking about America, but we haven’t had too much to say about democracy in America. Our models have been from ancient China, or England a century ago, or whatever world it is that the neoreactionaries are living in —
A. Now hang on a minute —
B. Sorry. Clerk, strike that last comment from the record. But you know what I mean, yes? Maybe we should think more about the particular circumstances of American democracy.
A. And Tocqueville is supposed to be The Guy about that, yes? He sounds interesting … That comment about how people “sink below the level of humanity” when experiencing “subjection in small affairs” — that reminds me of your emphasis on giving people more power, more “voice,” in matters close to home — because that’s going to help them flourish in ways you were talking about last time.
B. Exactly. But then the passage also makes me wonder what to do when people have been “subjected” in this way so long that they have fallen “below the level of humanity” — can they be brought back? Or must others rule for them? And to what extent does our particular social and political order create this problem that it’s supposed to be solving?
A. That second passage, on “the theory of equality applied to intellects,” is relevant too — what if that “theory” is wrong?
B. Well, you’ve been saying it’s wrong, and it may be, though I’m not sure its wrongness has the political implications you think it does. Maybe you haven’t been living long enough under its laws!
A. Maybe … but in any case, I think we have some reading to do.
B. I think we do.
A. Big book, though.
B. It sure is.
A. May take a while to get through.
B. We have time. Let’s start reading. We can check in with each other later.
A. You have a deal.
A. Changing the size of our political units? Is that really possible? Surely any serious reduction of the scope of the modern nation-state is a pipe dream.
B. Well, we’re both dreaming here, aren’t we? It’s not as though there’s any real chance of your creating a class of specially educated technocrats to rule the world. We’re both speaking in ideal terms, and neither of us has any real plan for bringing our hopes to realization.
A. Maybe — but since our world is already effectively ruled by technocrats, my model, of having it ruled by the right technocrats, a genuine neo-aristocracy, has a good deal more practical plausibility than any distributist fantasy.
B. The similarity between your ideal and the current reality is not a feature, it’s a bug. That’s what I meant last time when I said that your position is less a radical alternative to democracy than a mere CMS (Capitalism Management Strategy). My ideal can be pursued with less danger to the common good than yours can. Every step towards devolution, however small, is a step in the right direction. But I don’t think you can create a new aristocracy without first implementing a structure that will strengthen the old one — which will make the rule of our current idiocracy stronger. You would leave so much of the old system firmly in place that a successful replacement of the current social hierarchy by the one Moldbug et al. want would simply mean a repetition of the scene at the end of Animal Farm: “They looked from man to pig, and from pig to man, but already it was impossible to tell which was which.”
And don’t we already see the truth of this if we just look at the absorption of Silicon Valley radicals into the power elite? A plutocrat in a t-shirt and sneakers is a plutocrat all the same.
A. I don’t think —
B. Hang on, hang on — sorry, please bear with me. We’re on a tangent here, and I need to finish the argument I started last time, before I forget what I was saying. I said earlier that I had two critiques of the current model, and I haven’t gotten to the second one yet. My first critique, you may recall, is that I don’t think international capitalism can be effectively ruled by anyone, no matter how perfectly educated, because it operates at a transhuman scale. The second … well, I was put in mind of it by some of the things Kevin Williamson has been writing lately about “failed communities”:
The truth about these dysfunctional, downscale communities is that they deserve to die. Economically, they are negative assets. Morally, they are indefensible. Forget all your cheap theatrical Bruce Springsteen crap. Forget your sanctimony about struggling Rust Belt factory towns and your conspiracy theories about the wily Orientals stealing our jobs. Forget your goddamned gypsum, and, if he has a problem with that, forget Ed Burke, too. The white American underclass is in thrall to a vicious, selfish culture whose main products are misery and used heroin needles. Donald Trump’s speeches make them feel good. So does OxyContin. What they need isn’t analgesics, literal or political. They need real opportunity, which means that they need real change, which means that they need U-Haul.
This is a recurrent theme for Williamson. Last year he wrote, “Some towns are better off dead…. My own experience in Appalachia and the South Bronx suggests that the best thing that people trapped in poverty in these undercapitalized and dysfunctional communities could do is — move. Get the hell out of Dodge, or Eastern Kentucky, or the Bronx.”
A. My guess is that the answer a lot of those people would give Williamson is “I would if I could.” People with no jobs and no credit can’t rent a U-Haul; and even if they could, where would they unload it? You can’t get an apartment without a job and some credit, either.
B. I think that’s right, but I want to look at something a little deeper.
A. Oh, excuse me for being so shallow.
B. Certainly. But what I mean by “deeper” is, something Williamson doesn’t openly acknowledge. Some years ago now the economist Richard Thaler made a distinction I think very useful, between Econs and Humans. Econs are the “rational agents” of rational choice theory, who always make decisions based on dispassionate calculation “expected utility” and suchlike criteria. Humans are the rest of us.
Thaler (along with his sometime collaborator Daniel Kahneman) suspects that the Econ is a fictitious species, but Kevin Williamson, at least in these articles about “dysfunctional, downscale communities,” sure sounds like one. For him economic calculations seem to be the only ones, and if you raise any other considerations he just calls you “sentimental.” That someone might stay in an economically failing town because he loves it, or because his mother lives there, evidently strikes Williamson as rationally indefensible behavior.
I think such an assumption, if Williamson really holds it, is based on a pretty impoverished notion of what counts as “rational.” And I think your system does too. Maximizing expected utility is not how to live the good life. I mean, you know this, we all know this. We need a social and political system that doesn’t elevate economic considerations above all others — and doesn’t confine “economic” to matters involving money alone. There’s such a thing as human flourishing, eudaimonia, that cannot be measured in monetary terms.
A. Easy for you to say.
In July of 1977 a young man named Richard Herrin murdered his ex-girlfriend, Bonnie Garland, by smashing her head in with a hammer as she slept. Herrin and Garland had met as students at Yale, and had dated for two years, but she had told him she wanted to be free to see other people.
Amazingly, Herrin was not convicted of murder, but only of manslaughter, in large part because of the support he received from the Yale community, including the University’s Catholic chaplain. Herrin’s supporters argued that as a Latino from a Los Angeles barrio, he felt out of place and marginalized at Yale, which put him in a precarious emotional state even before Bonnie Garland rejected him. He was, these supporters said, a young man of extremely good character.
A psychiatrist named Willard Gaylin followed the case closely and eventually wrote an extremely powerful book about it. I read that book when it came out, more than thirty years ago, and have never forgotten it. The puzzle Gaylin set out to explore was this: Why were Herrin’s supporters so completely lacking in sympathy for Bonnie Garland and her grieving family? Why were so many of them contemptuous of her father’s insistence that the man who used the claw of a hammer to smash his daughter’s skull into fragments needed to be convicted of murder? Why did they so loftily lecture him on the necessity of forgiveness, and tell him that he just needed to “get over it”?
Gaylin’s answer — and this is what makes the book unforgettable — is that it happened this way because Bonnie Garland wasn’t there. Had Herrin severely injured her, but in such a way that she could appear in the courtroom, the sympathy of the jury and the audience would have flowed inevitably and naturally towards her. But by killing her — erasing her presence, turning her into a thing — he left the human sympathy that naturally arises in response to horror nowhere to go but towards him. “Poor young man. How he must have been suffering to do that.”
Gaylin’s book has come to my mind in the aftermath of this week’s Brussels bombings, as it has after previous, simliar attacks. From the right we hear: Look at the terrible people who did this! From the left we hear: Look at the innocent people who will suffer because of this! But from no one do we hear: Look at the dead — because they cannot be seen. They have left our world; images of them may remain, but they are definitely in the past tense. It is more natural and more interesting for us to stare at security-camera footage of the bombers. We are drawn towards those who breathe the air we breathe.
Therefore, since we can no longer see the dead, we must make a special effort to remember them. This is the great genius of Maya Lin’s Vietnam Veterans Memorial: it does no more — and no less — than to name the dead. Their bodies are buried, or lost; but their names we can still speak. And the deep implication of the Memorial is that we should.
We do not yet know all the names of those who were killed in Brussels. Some are missing and feared dead, but their families still await definitive news. A few we know: Adelma Tapia Ruiz, Olivier Delespesse, Léopold Hecht. May they rest in peace.
But this exercise can only remind us of those who have died by violence whose names we cannot find: so, so many of those killed by Daesh, or by Boko Haram, or in wars around the world. Confronted by so much destruction and misery, we may find it easier to turn our faces towards the perpetrators, or towards those we fear might be targeted. These are temptations to be resisted. We must, for a time at least, turn our faces towards those who have died, and say their names, when we can discover them. It is the least we can do for those whom evil has found.
This is a somewhat rambling follow-up to my recent post on conservatives in the academy and a post from a couple of months back on academic freedom and religious education.
It’s important to note that “academia” is not a single, monolithic thing, but rather a network of interrelated institutions that function in varying ways and with various boundaries. Let me take my own career as an example.
About fifteen years ago I became restless at Wheaton College and started wondering whether I wouldn’t be happier teaching somewhere else — especially in my native South. Also, I had myself been educated wholly in secular, public institutions and thought that it might be good for me to return to that world. So I started applying for jobs.
I had a fine job at Wheaton, so I didn’t apply that widely — I was especially concerned that, having taught academic high-achievers for so long, I wouldn’t be a good teacher for marginally qualified students. And as a member of many hiring committees over the years I had learned the fruitlessness of applying for positions you don’t really fit (in disciplinary speciality or rank or institutional mission). So that first year I applied only for jobs I thought I would enjoy and clearly had the qualifications for.
But in putting together my application, I found myself in a bit of a moral dilemma. I had more than enough publications in peer-reviewed journals, and books on peer-reviewed presses, to make a good case for myself … but I also had many publications in religious and politically conservative magazines and journals. Should I include those? They almost certainly wouldn’t help my case and might actually hurt it. But then I thought: No, this is who I am, I can’t pretend that I haven’t written this stuff. I decided to clearly distinguish my general-interest writing from my peer-reviewed stuff, but put it all on my CV. (They could google me, anyway.)
I sent off my applications, and within a couple of weeks I received polite letters of rejection from every school I applied to.
The next year I was still restless, so I applied a little more widely — and got the same result. At this point my professional pride was slightly wounded, but I was also curious about how this was working out. By the time the next hiring season rolled around, I was less inclined to move — but I applied anyway, still more widely than before. In fact, for a period of about ten years I applied to positions for which I was qualified, all over the United States, in every kind of college and university, including places where I definitely didn’t want to teach. I did my homework; I tailored my letters to the specifics of each school, each department; I presented my work with care. Moreover, as each year passed my list of publications was growing longer.
In all those years, in which I sent out over one hundred letters of application, I received not one inquiry. Either I got a polite rejection letter or no response at all. (And when I got no response I followed up with emails to department chairs; those weren’t answered either.) I wondered whether diversity initiatives might be at work, but when I could discover who had eventually been hired for the positions I applied for, I learned that most of the hires had been white men, like me.
Draw your own conclusions from all this. I am not sure what conclusions to draw. In the op-ed I mention in my previous post, Jon A. Shields and Joshua M. Dunn, Sr. mention the work of George Yancey: “Yancey found that 15 percent of political scientists and 24 percent of philosophers would discriminate against Republican job applicants, and at least 29 percent of professors in all disciplines surveyed would disfavor members of the National Rifle Association. He found that professors are even less tolerant of evangelicals, partly because that identity is a proxy for social conservatism.” But are we sure that that’s the way the arrow points? Maybe those professors are intolerant of political conservatives because they perceive political conservatism as a proxy for Christian belief. When hiring committees saw my CV, was the red flag my writing for conservative publications, or religious ones? Or were both equally alarming? Was my experience teaching at a religious college more alarming than either?
But while you’re thinking about all this, keep something in mind: While those rejections were piling up, I was publishing books — peer-reviewed books — on some of the better university presses: Oxford (once) Princeton (three times); and my next book will be with Harvard. I don’t know why the presses are so much more open-minded, but I suspect it has something to do with the thinness of the relationship: if Princeton University Press publishes my book, they don’t commit to putting me in a office down the hall for the next twenty years.
We can speculate about all that as well. But for now I just want to say that when people decry academia’s prejudice against Christians and conservatives, it’s important to ask: “What do you mean by ‘academia’?”
“Forget what the Right says,” the headline instructs me: things aren’t so bad for conservatives in the academy. Jon A. Shields and Joshua M. Dunn, Sr. write,
As two conservative professors, we agree that right-wing faculty members and ideas are not always treated fairly on college campuses. But we also know that right-wing hand-wringing about higher education is overblown. After interviewing 153 conservative professors in the social sciences and humanities, we believe that conservatives survive and even thrive in one of America’s most progressive professions.
Now, it’s true, they go on to say, that these thriving conservatives had to conceal their political views throughout graduate school, the hiring process, and their teaching careers up to tenure, and were only free to express themselves after getting tenure. (Perhaps Shields and Dunn did not interview conservative professors who failed to receive tenure.) And they admit that “being in the closet is not easy.” They further acknowledge that the faculty they interviewed agree that “too many disciplines and subfields — including sociology, literature and modern American history — are ‘unsafe spaces’ for right-wing thinkers. ”
Shields and Dunn also admit that conservatives
remain more poorly represented there than all current targets of affirmative action, and some studies suggest that their numbers are falling. There is bias, too: In one study by George Yancey, a sociologist at the University of North Texas, for example, some 30 percent of sociologists acknowledged that they would be less likely to hire a job applicant if they knew she was a Republican. Yancey found that 15 percent of political scientists and 24 percent of philosophers would discriminate against Republican job applicants, and at least 29 percent of professors in all disciplines surveyed would disfavor members of the National Rifle Association. He found that professors are even less tolerant of evangelicals, partly because that identity is a proxy for social conservatism.
And they note that “Stanley Rothman and Robert Lichter found that socially conservative professors tend to work at lower-ranked institutions than their publication records would predict. In addition, a study of elite law schools shows that libertarian and conservative professors publish more than their peers, which suggests that conservatives must outshine liberals to reach the summit of their profession. The finding is especially striking given that other research suggests it is more difficult for scholars to publish work that reflects conservative perspectives.”
See? Conservative complaints about the unfairness of the academy are clearly wildly overblown. One question, though: if all this adds up to “not so bad,” what would “bad” look like?
Now, I understand what Shields and Dunn are doing here. They want to let progressives in the academy know that there are already conservatives in their midst, some of whom are quite decent people. And they want to encourage conservative academics to make the choice they have made to teach in “the progressive academy.”
But when they write “The answer is not to segregate conservative professors at schools like Grove City College or Liberty University, which most conservative professors regard as academic Siberia,” what they actually mean is that most conservative professors who have chosen not to teach in such schools regard them as academic Siberia. The viewpoint of conservatives who have chosen to teach in schools more hospitable to conservatism is simply not considered by Shields and Dunn. So they try to put the very best face possible on some very unpleasant facts, including facts their own research has uncovered.
There’s a lot to unpack here. More on all this in subsequent posts.
As I write, this dialogue on the splendors and miseries of democracy has been going on for nine installments, with a tenth soon to appear. I’m writing this post first of all to provide a place where every entry will be linked; I’ll update here every time I post something new, so please bookmark this URL if you want to keep track.
But I also want to take this opportunity to answer a few questions that I’ve been getting about the series. Herewith:
So how long is this going to go on?
Honestly, I have no idea. I don’t have a plan; I’m just watching the conversation unfold according to its own internal logic and impetus. At least, that’s how it seems to me. At some point, like all conversations, this one will peter out, but whether that’s in two weeks or a year from now, I can’t say.
Why these two characters, A and B?
A while back, when I started reflecting on my own discouragement about the state of our democracy, I thought: Well, what are the alternatives? And, believing as I do that socialism, communism, and absolute monarchy have been tried and found wanting, I decided that the two most interesting alternatives to our current system are those promoted by the neoreactionaries and the distributists. And then I wondered what might happen if I put those two in conversation with each other.
But you’re B, right?
Not really, no.
So you’re A?
Not him (or her) either. Maybe I’ll know better what I am when I finish with this. My preoccupation at the moment is to try to allow each to make the best case possible for his (or her) position, and in the process to exemplify, as best I can manage it, constructive and charitable debate.
Do you think that neoreactionaries and distributists will agree that you’ve portrayed them fairly?
Many will not, I’m sure. But you know, not all neoreactionaries are the same, and not all distributists are the same. Each movement has a bigger tent than the outsider might at first think. I’ve tried to create figures who are more irenic than strident, willing to concede flaws in their preferred systems, and disinclined to assume bad faith in those who disagree with them; and in that sense, sad to say, these may not be typical figures. I’m sure many people will think I haven’t provided the best arguments, and that may well be true. I just hope the conversation will be interesting and useful.
But these two will continue to be the only participants?
I’m not sure about that. Someone else may be listening in.
Kinda curious that this post is also a dialogue.
Yes, but in this one I’m definitely speaking for myself. Anyway, here are the links to the installments so far:
- One: Is democracy a failed experiment?
- Two: The possibility of a new aristocracy
- Three: Can the neoreactionaries help?
- Four: Lessons from Old China
- Five: Edmund Burke and aristocracy
- Six: Voice, exit, and piety
- Seven: Maybe people aren’t as ignorant as we think
- Eight: Does the rise of Trumpery prove that democracy is dead?
- Nine: Can distributism be revived?
- Ten: On Poor Whites, Econs, and Humans
- Eleven: A Voice from the Past Makes Itself Heard
Previous installment here.
A. Subsidiarity? At this stage of the game?
B. Especially at this stage of the game.
A. I suppose next you’ll be advocating distributism.
B. And why not?
A. I can’t believe we’re even having this conversation. Say what you will about my neoaristocracy, dude, at least it’s an ethos calibrated to the world we actually live in.
B. That, in a nutshell, is my problem with it. You think of it as a radical alternative to democracy, but I see it as little more than a CMS (Capitalism Management Strategy). You want to leave the structures of modern international capitalism in place and just hand over the control of those structures to a different, presumably more qualified, group of people. By contrast, I think modern international capitalism has done for us pretty much all it can do and needs to be dialed back significantly.
A. Are you serious about distributism? You’re really envisioning some beautiful quasi-medieval world of local pubs that brew their own beer, mom-and-pop shops featuring homespun woolens and tallow candles made according to an ancient recipe handed down through the generations?
B. To be honest, that’s always been my problem with distributism: its absurdly nostalgic character, its idealizing of an often dirty and unpleasant past, its refusal to acknowledge what modernity has done for us. Too often the distributist vision is like a historical theme park, fun to visit as long as at the end of the day we can have hot showers and central heating.
But it doesn’t have to be that way. It’s possible to have distributism and subsidiarity without nostalgia. It’s possible to argue that a concerted program of devolution was not as good an idea in the time of Chesterton and Belloc as it is in ours.
A. Very convenient! It turns out that now is the time for returning power to the local, not a hundred years ago. But what if a hundred years in the future someone is saying the same? “Man, am I glad that no one implemented distributism in 2016 — think of all we would have missed out on!”
B. That might happen. I can’t rule it out. But aren’t you and I agreed that democracy as it’s currently practiced in America is in crisis? Don’t we think that we’re digging ourselves into a political and economic hole and at the very least we need to stop digging?
A. Indeed we do.
B. Okay. So what you’re saying, basically, is that there’s nothing wrong with international capitalism that a shift of power from an ill-informed rabble of a “democracy” to a well-informed neoaristocracy can’t fix.
A. I don’t think I would put it quite that way, but keep going. Your ideas are intriguing to me and I wish to subscribe to your newsletter.
B. First issue will be out soon. But for now let’s say that, at minimum, you believe that the dynamism of international capitalism can be used to make the world a better place if it’s directed by the right people. I have two major criticisms of that model. The first, which I described to you the last time we were together, is that I don’t think even “the right people” are capable of making dramatically better decisions than hoi polloi, because of the scale at which they’d have to operate and the general human inability to master their own cognitive biases.
A. May I ask you something about that? It’s certainly sobering to hear Daniel Kahneman say that he has not been able to overcome his own cognitive biases, at least not to a great degree. But might that not be, at least in part, a function of his own education? Isn’t it possible that an educational system fundamentally based on what we’ve learned from Kahneman and other scholars in his field could produce a genuine neoaristocracy? I guess I’m asking: Don’t you believe in education?
B. I do believe in education, but within constraints established by our creatureliness. Nobody thinks that a person can run 100 meters in four seconds if he just trains hard enough. We have to understand and embrace our natural limits and work within them, rather than always striving to transcend them. That’s why I mentioned Dunbar’s number last time. What if that’s not some historically contingent phenomenon but rather a fundamental constraint on our cognitive capacities, a constraint as fixed and non-negotiable as the speed of light? If that’s the case then there is something intrinsically unmanageable about international capitalism, even with all the computing power we now have — especially since the computers are always programmed by us.
See, you’re trying to adapt our mode of governance to the size of today’s hypertrophied global nation-state; but what if we went the other way? What if we retained a commitment to democracy, but changed the size of our basic political units in such a way that democracy is actually realizable?
If there’s one point on which the political commentariat is currently agreed, it’s that the rise of Donald Trump means the end of Business as Usual for the GOP — that there’s no going back to the way things have been done since the days of Ronald Reagan.
I’m not convinced.
I’m not saying that these people are wrong; I’m just not sure that they’re right. Will the party bosses and apparatchiks fall on their swords? Surely not. Will the highly-paid consultants admit that they don’t know what they’re talking about and gracefully withdraw from the stage? Don’t make me laugh.
So here’s the question that matters: Will the big donors demand the resignation of the party leaders, and turn to different consultants? Maybe.
But maybe not. Here’s what the people currently in charge will tell them: “Trump is a black swan. Nobody could have predicted that his candidacy would take off like this. Yes, he tapped into some legitimate resentment, but those people were pretty quiet before he came on the scene and they’ll fall back into that same quietness when he’s no longer around stirring them up. No one else is going to rise up to be their champion, because such a person would need not only to share their point of view, but also be famous enough to win their admiration and rich enough to run his own campaign without help from any of us. Ain’t gonna happen. By the time the next campaign rolls around, the long-standing patterns of political affiliation will have reasserted themselves. Plus, Rubio has earned a great deal of respect for how he has run his campaign, especially in these last few weeks; he has learned from the mistakes he made this time around; and four more years of age will only make him look more Presidential. We’ve got this. Just give us one more chance.”
Will the donors buy — figuratively and literally — this argument? I think there’s a pretty good chance they will. After all, to reject it would be to acknowledge that they’ve been throwing money away for years, in some cases decades; and to accept that they’ll have to build a whole new set of relationships with people they don’t know very well, or at all. The narrative the incumbent party bosses will tell them is a narrative they have massive incentive to believe.
I would be very grateful if some wise person who is convinced that Trump is permanently changing the GOP would explain to me the mechanisms by which this change will be effected. This inquiring mind wants to know.
P.S. Of course, the above assumes that Trump will not win the Presidency. If he does, all bets are off.
B. Wow, haven’t seen you in a while.
A. Yeah, sorry about that. I’ve been checking out the Seasteading Institute.
B. Goodness, you are serious, aren’t you?
A. Yes, serious — but also critical. Nothing if not critical. But when we left off you were beginning to make a rousing pitch for the renewal of democracy, or something like that —
B. Something like that.
A. — Right. But surely you’ll admit that the rise of Trumpism makes your case harder to sustain, yes? I mean, how could anyone who would vote for a blustering, arrogant, narcissistic, pathologically dishonest blowhard like Trump ever be said to have a legitimate right to vote? I think it was Camus who said that democracy is devised and maintained by people who know that they don’t know everything — but these people do think they know everything when in fact they know nothing!
B. My views of Trump are close to your own, though I think your description of him errs on the side of generosity. But I would account for Trumpism, for the Trump phenomenon, rather differently than you do. I want to remind you of something I said the last time we were together: “people are interested in and even knowledgeable about politics – but politics only on a human scale, a scale appropriate to the range of their experience and interests. Many, perhaps most, of the pathologies of our current political order are products of inhuman scale.” Critics of Trumpism rarely acknowledge, much less reckon with, the uncomfortable fact that the very people who make shockingly bad decisions when they’re participating in national politics — and make no mistake, I think the decision to vote for President Trump is shockingly bad — are quite reliable, even shrewd, within their local sphere of action. I would be willing to bet that many of the people who say they want Trump as President wouldn’t even think about supporting him if he were running for Town Dogcatcher.
Yes, really! I mean, you need to know very little about Trump to realize that he simply couldn’t be counted on to do what he promises — even vows — to do. If Trump were dogcatcher, wild dogs would terrorize the town as he was explaining to his personal stylist that he’d like to be just a teeny bit more orange. (Though he’d put the whole power of his office behind the search for a Pekinese who growled at him once.) The question is: Why do people know this kind of thing on the local level but fail to grasp it on the national or international? It’s because none of us understand matters that get beyond the human scale. We’re all in the iron grip of Dunbar’s number.
A. But this is precisely why I’ve made my case for a new aristocracy — and why I think most people would not just accept but prefer it, in that they would be delivered from the need to think about things they can’t understand.
B. Your argument would be plausible were it not for some assumptions planted deep within it: namely that your neoaristoi can make themselves invulnerable to such constraints as Dunbar’s number, and can overcome the cognitive biases that other, lesser people are subject to. But we have strong reasons to question those assumptions. There’s a really sobering moment near the end of Thinking, Fast and Slow where Daniel Kahneman, who has done more than anyone else in the world to help us understand the range and extent of cognitive bias, writes:
What can be done about biases? How can we improve judgments and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues.
Let me stick with Kahneman’s work for a minute. You have repeatedly commented on how irrational most people are, and I have sometimes agreed with you. But Kahneman suggests that we tend to make that claim because we have a deeply unrealistic sense of what counts as rational:
The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition, but they should not be branded as irrational for that reason. Irrational is a strong word, which connotes impulsivity, emotionality, and a stubborn resistance to reasonable argument. I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model.
Kahneman goes on to say that “Although Humans” — Humans being actual people, as opposed to the “rational agents” often posited by economists, whom Kahneman calls Econs — “are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help.” Indeed they can.
A. They can? Didn’t expect you to go there! Are you about to become a proponent of paternalistic nudge theory?
B. Well, that’s sort of where Kahneman is. But not me. I think of nudge theory as a way of dealing with both cognitive biases and the human dislike of being dictated to — the same dislike of being dictated to that animates much of Trumpism — without adjusting the structure or scale of our governing institutions. I’m going to push for something considerably more radical — as radical as your New Aristocracy, though in something like the opposite direction. I’m going to make a case — a kinda new case — for subsidiarity.
A. Oh good grief.
The other day I received a really smart and interesting email from a reader named Raghav Krishnapriyan in response to my second post on “disciplinary originalism” in SCOTUS decisions. Here’s a key passage:
I was surprised, however, to see Justice Jackson’s dissent in Korematsu cited as an exemplar of disciplinary originalism, since as far as I can tell, there’s nothing distinctively originalist about it. And insofar as either Jackson or Black (the author of the majority opinion) can be thought of as an originalist avant la lettre, I would have thought it would be Black, the textualist, rather than Jackson, the quintessential pragmatist.
We’re speaking here, after all, of the same Justice Jackson whose second most famous opinion began with the following candid acknowledgement:
“That comprehensive and undefined presidential powers hold both practical advantages and grave dangers for the country will impress anyone who has served as legal adviser to a President in time of transition and public anxiety. While an interval of detached reflection may temper teachings of that experience, they probably are a more realistic influence on my views than the conventional materials of judicial decision which seem unduly to accentuate doctrine and legal fiction.” [Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 634 (1952) (Jackson, J., concurring).]
Over and over one finds in his opinions references to the potential consequences that would follow if a case were decided a certain way, with an implicit or explicit suggestion that these consequences played a role in his decisionmaking process. See, e.g., Terminiello v. City of Chicago, 337 U.S. 1, 37 (1949) (Jackson, J., dissenting) (“The choice is not between order and liberty. It is between liberty with order and anarchy without either. There is danger that, if the Court does not temper its doctrinaire logic with a little practical wisdom, it will convert the constitutional Bill of Rights into a suicide pact.”); McDonald v. United States, 335 U.S. 451, 460 (1948) (Jackson, J., concurring) (“I am the less reluctant to reach this conclusion because the method of enforcing the law exemplified by this search is one which not only violates legal rights of defendant but is certain to involve the police in grave troubles if continued.”); W. Virginia Bd. of Educ. v. Barnette, 319 U.S. 624, 641 (1943) (“Nevertheless, we apply the limitations of the Constitution with no fear that freedom to be intellectually and spiritually diverse or even contrary will disintegrate the social organization. To believe that patriotism will not flourish if patriotic ceremonies are voluntary and spontaneous instead of a compulsory routine is to make an unflattering estimate of the appeal of our institutions to free minds.”). It seems to me like these sorts of considerations are antithetical to originalism of any variety.
Again, a superbly well-informed and thoughtful reply. I will of course disagree with it, at least in part.
First, though, I want to say that I find it more useful in these discussions to commend arguments than to commend justices. It is very rare to find a SCOTUS justice, or indeed any judge, whose decisions are fully consistent with one another. Some justices worry about this a lot, and strive for a considerable degree of consistency. Others think that the achievement of consistency over time is nearly impossible and should not be the focus of obsessive attention. Justice Cardozo, for instance, in a famous book, commented,
It is well enough to say that we shall be consistent, but consistent with what? Shall it be consistency with the origins of the rule, the course and tendency of development? Shall it be consistency with logic or philosophy or the fundamental conceptions of jurisprudence as disclosed by analysis of our own and foreign systems? All these loyalties are possible. All have sometimes prevailed. How are we to choose between them?
He thought it was best to judge each case on its merits and not try to force one’s decisions into a pigeonhole, not to be afflicted by that “hobgoblin of little minds” that Emerson spoke of.
With all this in mind, my commendation was of a particular decision by Justice Jackson, not of Jackson’s jurisprudence more generally. I take Raghav’s point about the general tendency of Jackson’s legal reasoning.
Still, I think Youngstown is pretty useful in illustrating the points I have been trying to make in my two earlier posts on these matters, and in this one. Viz.:
1) Raghav writes of Hugo Black as a textualist, and he certainly is in Youngstown, where he refuses to allow Presidential powers not specified in the Constitution; but earlier, in the Korematsu decision from which Jackson so eloquently dissented … not so much. Thus illustrating my point about the rarity of judicial consistency, and also illustrating the intrinsic difficulties of methodological originalism, or methodological textualism, if you will.
2) By contrast, I love Jackson’s concurrence in Youngstown precisely because it acknowledges that there are multiple forces at work in any major judicial decision, multiple obligations for any justice. (This is also Cardozo’s point in the passage I quoted above.) He points out that one must look not just at the relevant judicial history, but also the contemporary legislative context: e.g., is the President in attempting to extend his powers acting in defiance of the express will of Congress? I read Jackson’s concurrence as a kind of correction of Black’s supposed textualism. Jackson is saying to Black, “Yes, you’ve reached the right decision in this case, but you are wrong to suggest that this is, or can be, a matter of simply reading the Constitutional text and saying, ‘Nope, nothing about seizing steel mills there.’ You may think that you made your decision on purely textual grounds, but you didn’t, because none of us ever does.”
3) Thus I think that Jackson’s concurrence in Youngstown operates under the same general jurisprudential logic as his dissent in Korematsu. He acknowledges that strict methodological textualism, or originalism, is impossible; but he refuses to accept that the only alternative to such an interpretative standard is Laurence Tribe’s model of reading the Constitution pragmatically, as an instrument for allowing us to get what we want — especially when “us” is the office of the President. So I think that in both Korematsu and in Youngstown Jackson is indeed manifesting the kind of skeptical restraint of expanding governmental power that I see as intrinsic to disciplinary (as opposed to methodological) originalism. This is a matter not of hermeneutical theory and the application thereof, but of temperament and disposition. If Jackson did not always act according to this disposition, and I grant Raghav’s point that he did not, he nevertheless managed to do so in some vitally important cases, and in them provides a model that I wish more justices would follow.
All that said, IANAL, so take this for what it’s worth. But at the very least, in this period of ever-increasing executive-branch power, it’s worth our while to meditate on both Korematsu and Youngstown. There’s much to be learned from both cases.
I’m late to this fairly silly little story, but something about it intrigues me:
“Don’t take this class if you believe the Bible is inspired or infallible.”
When I started taking classes a year ago as an entering doctoral student here at UC Berkeley, I knew I was entering a very liberal environment. I had heard that the campus was the flagship of liberal academia, and I was also familiar with other bastions of liberalism during my time in the Ivy League and at Oxbridge as an undergraduate and a master’s student, respectively. But despite UC Berkeley’s ultra-liberal reputation, I took it as a given that there was still significant latitude for free thought and expression in the classroom.
Thus, I was not expecting the unapologetically heavy-handed double standard I encountered from the professor, a well-respected biblical scholar. His initial cutting remark within five minutes of the start of class was soon followed by more: “This stuff isn’t taught in synagogues or churches because they don’t want to piss people off. … Anyone can take this class, as long as you play by the rules of the game. … If you disagree with the approach we use, that’s an F.”
What intrigues me is that I have a good deal of sympathy for the professor. Here’s why: A handful of times in my career at Wheaton College I had students who didn’t think that we should be reading literature at all. The protestors were always people who didn’t really grasp the distinction between a Bible college and a Christian liberal-arts college, and were deeply and genuinely grieved that in a school that formally upheld the authority of Holy Scripture students would be asked to read books written by atheists and pagans.
I think I was kind to these people, and would offer to meet with them in my office to talk about their concerns. But on two or three occasions a student wanted to use class time to conduct a debate about whether Christians were allowed to read non-Christian books — something that, beyond a fairly brief statement about why I think Christians are not just permitted to read such books but often should read such books, I was disinclined to do.
This was more of a pedagogical than a theological decision. There indeed could have been value, and not just for the protesting student, in going back to something like First Educational Principles and articulating a defense of a Christian liberal arts model, quoting Augustine (“All truth is God’s truth, wherever it is found”; “spoiling the Egyptians”) and all that. But in my judgment that class was, to quote a wise man, not the venue. The more time I spent making such arguments, the less time I could spend on the literature that I had been hired to teach. So, at the risk of frustrating people who had legitimate questions, I always made the call to cut that debate short.
So yeah, I have some sympathy for the professor at Cal. But only some. He was also — if he is being quoted accurately here — acting like a jerk, and here’s how he could have handled the situation better:
- First, he could have been more clear about what he was actually saying to his students. First he said, “Don’t take this class if you believe the Bible is inspired or infallible”; but then he said, “Anyone can take this class, as long as you play by the rules of the game.” Which is it? Similarly unclear: “If you disagree with the approach we use, that’s an F.” If you disagree with anything about the “approach”?
- Having gotten all that clear in his own mind, he should have said something like this: “The historical-critical method for studying the sacred texts of Israel is often controversial within faith communities; but this isn’t a faith community, it’s an academic one, and we have our own rules and methods. You are of course free to disagree that this is the best way to approach these texts; but in this class this is the general approach we’re going to employ, and that’s not up for debate.”
- And then, finally, he should have said, “If you have questions about this decision, please come by my office to talk. I will try to explain, briefly, why I take the approach I take, and if that’s not enough for you, I’ll point you towards some books and articles that will make the case in greater detail.”
As a professor, you don’t have to change the structure of your class to suit students who would prefer you to do things a different way; but you don’t have to treat them with contempt or derision either. That’s a good rule for teachers everywhere.
To understand why I think disciplinary originalism is valuable, let’s take a look at one of the most widely condemned of SCOTUS decisions, Korematsu v. United States. In Korematsu the court allowed the practice of evicting United States citizens, often native-born citizens, from their homes and moving them away from the West Coast simply because they were of Japanese descent. The vote was 6-3, and each of the justices who voted in favor of the executive order that mandated the evictions was appointed by President Roosevelt, the man who issued that order. (In a separate but closely related case, the Court ruled that such citizens could not be “detained,” thus depriving the internment camps for Japanese-Americans of legal sanction.)
The chief interest of Korematsu, for today’s reader of the history, is the dissent by Justice Robert Jackson, later to become the Chief Prosecutor at the Nuremberg Trials. In the first stage of his dissent — which you may see in full by going here and scrolling about three-fourths of the way down — Jackson points out that Fred Korematsu was a natural-born citizen of the United States whose loyalty to his country had never been questioned by anyone. He was merely living and working in the place of his birth, but was by the Executive Order obliged to turn himself in to military authorities — an obligation that he would not have faced had he been even “a German alien enemy, an Italian alien enemy, [or] a citizen of American-born ancestors, convicted of treason but out on parole.” Yet he was different from those others “only in that he was born of different racial stock.” Jackson continues:
Now, if any fundamental assumption underlies our system, it is that guilt is personal and not inheritable. Even if all of one’s antecedents had been convicted of treason, the Constitution forbids its penalties to be visited upon him, for it provides that ‘no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.’ Art. III, § 3, cl. 2. But here is an attempt to make an otherwise innocent act a crime merely because this prisoner is the son of parents as to whom he had no choice, and belongs to a race from which there is no way to resign.
This point would have been sufficient in itself to declare Roosevelt’s order unconstitutional, but Jackson discerned a larger and greater issue at stake:
Much is said of the danger to liberty from the Army program for deporting and detaining these citizens of Japanese extraction. But a judicial construction of the due process clause that will sustain this order is a far more subtle blow to liberty than the promulgation of the order itself. A military order, however unconstitutional, is not apt to last longer than the military emergency. Even during that period a succeeding commander may revoke it all. But once a judicial opinion rationalizes such an order to show that it conforms to the Constitution, or rather rationalizes the Constitution to show that the Constitution sanctions such an order, the Court for all time has validated the principle of racial discrimination in criminal procedure and of transplanting American citizens.
Jackson’s point here is exceptionally acute: this is not a matter of “rationalizing” — giving an implausible intellectual account of — the order, but rationalizing the Constitution. Which is a far more dangerous move.
The principle then lies about like a loaded weapon ready for the hand of any authority that can bring forward a plausible claim of an urgent need. Every repetition imbeds that principle more deeply in our law and thinking and expands it to new purposes. All who observe the work of courts are familiar with what Judge Cardozo described as ‘the tendency of a principle to expand itself to the limit of its logic.’ A military commander may overstep the bounds of constitutionality, and it is an incident. But if we review and approve, that passing incident becomes the doctrine of the Constitution. There it has a generative power of its own, and all that it creates will be in its own image. Nothing better illustrates this danger than does the Court’s opinion in this case.
People are often automatically dismissive of “slippery-slope” arguments, as though no slopes are ever slippery; but once a metaphor is dead it’s dead. Justice Cardozo’s phrasing may be more useful: “the tendency of a principle to expand itself to the limit of its logic.” This tendency is almost inevitable in SCOTUS decisions, because of the power of precedent: only rarely is a decision walked back; rather, a “passing incident” very easily and naturally “becomes the doctrine of the Constitution” when justices see different situations in which it can be applied. All the pressure is on one side, towards expansion rather than contraction of the principle.
Such expansion of a principle is all the more likely to happen when popular opinion, especially elite popular opinion, is also strongly on one side. FDR’s decision to move Japanese-Americans from their homes was quite popular (as were the internment camps), and eight of the Justices had the further pressure of owing their positions on the Court to Roosevelt. What they needed — but what only three of them had — was a jurisprudential principle substantial enough to serve as a counterweight to those pressures. All three of the dissenting justices had that principle, but it was most fully developed and articulated by Jackson.
Just a few months ago Justice Antonin Scalia was asked, by law students at Santa Clara University, which Supreme Court opinion he most admired. He named Jackson’s dissent in Korematsu.
There are two very different ways to think of Antonin Scalia’s preferred method of interpreting the Constitution, often called originalism—and by the way, that Wikipedia page is unusually accurate and useful, though perhaps skewed a bit towards critics of originalism.
One might think of originalism as a method, or one might think of it as a discipline. If you conceive of it in the former way, you will run into some serious problems; if you conceive of it in the latter way, it is profoundly salutary. In a recent critique of Scalia, Laurence Tribe uses the words “method” and “methods” seventeen times — he can’t conceive of originalism in any other way. But there is another way.
Originalism as a method is unmanageable for several reasons, all of which stem from the essential and unavoidable condition of Constitutional interpretation, which is to apply to legal situations today a Constitution that was written more than two centuries ago. For one thing, it often requires justices, even after thorough and detailed research, to guess (infer, intuit) what the Framers might have thought about what’s happening today. But how do you do that? In the strictest sense, the Framers imagined almost nothing that we are dealing with today, since our world is so different. So rigorous methodological originalism would make the Constitution irrelevant to the law today.
It is tempting to reason that, since strict (or what we might call methodological) originalism removes the Constitution from current legal disputes, we must therefore follow the living Constitution model. In that just-linked (and now-archaic, though still-relevant) article, Jack Balkin of Yale Law School writes, “We are all living constitutionalists now. But only some of us are willing to admit it.” But that’s true only if the “living Constitution” model is the only alternative to methodological originalism.
If the problem with methodological originalism is that it renders the Constitution effectively nugatory in current legal disputes … well, that’s the problem with the living Constitution model too. Because in practice what makes the Constitution “living” is that it says what we want it to say. Scholars like Balkin more-or-less explicitly endorse this stance: “There’s something deeply wrong with a theory of constitutional interpretation that treats some of the key civil rights decisions of the 20th century as mistakes that we are stuck with.” That is, those “key civil rights decisions” produce immensely valuable and just results — a point I absolutely agree with — and therefore must be good decisions.
As I commented in an earlier post on Scalia, Balkin’s essential jurisprudential principle might be summarized thus:
If a law produces, or seems likely to produce, an outcome that right-thinking people deem socially desirable, then that law is ipso facto constitutional; by contrast, if that law produces, or seems likely to produce, an outcome that right-thinking people deem socially undesirable, then that law is ipso facto unconstitutional.
But it’s hard to see how a “living Constitution” that is alive in this way is anything more than a re-animated corpse controlled by a console in the hands of SCOTUS. Balkin has tried to square this circle, but in a way that, it seems to me, makes virtually no concessions to the originalist view it claims to be taking seriously. And Tribe simply grasps the nettle: “I see [Scalia], with great respect, as a worthy adversary—but an adversary all the same—of the just and inclusive society that our Constitution and laws should be interpreted to advance rather than impede.” First you decide what you think a “just and inclusive” society is, and then you interpret the Constitution so that it endorses your views. In such a scheme the Constitution, and therefore our own national history, is rendered incapable of speaking back to us — of having its own voice rather than being a dim echo of our own.
I confess to much ambivalence on this score. In a very important sense it would have been far, far better for the key social and legal decisions of the Civil Rights era to have been made by the legislative rather than the judicial system. But our legislators, especially on the state level, were moving very slowly or not at all. And when I think about those who in those days counseled patience, I always hear the voice of Martin Luther King, Jr.:
We have waited for more than three hundred and forty years for our God-given and constitutional rights. The nations of Asia and Africa are moving with jetlike speed toward the goal of political independence, and we still creep at horse-and-buggy pace toward the gaining of a cup of coffee at a lunch counter. I guess it is easy for those who have never felt the stinging darts of segregation to say “wait.” But when you have seen vicious mobs lynch your mothers and fathers at will and drown your sisters and brothers at whim; when you have seen hate-filled policemen curse, kick, brutalize, and even kill your black brothers and sisters with impunity; when you see the vast majority of your twenty million Negro brothers smothering in an airtight cage of poverty in the midst of an affluent society; when you suddenly find your tongue twisted and your speech stammering as you seek to explain to your six-year-old daughter why she cannot go to the public amusement park that has just been advertised on television, and see tears welling up in her little eyes when she is told that Funtown is closed to colored children, and see the depressing clouds of inferiority begin to form in her little mental sky, and see her begin to distort her little personality by unconsciously developing a bitterness toward white people; when you have to concoct an answer for a five-year-old son asking in agonizing pathos, “Daddy, why do white people treat colored people so mean?”; when you take a cross-country drive and find it necessary to sleep night after night in the uncomfortable corners of your automobile because no motel will accept you; when you are humiliated day in and day out by nagging signs reading “white” and “colored”; when your first name becomes “nigger” and your middle name becomes “boy” (however old you are) and your last name becomes “John,” and when your wife and mother are never given the respected title “Mrs.”; when you are harried by day and haunted by night by the fact that you are a Negro, living constantly at tiptoe stance, never quite knowing what to expect next, and plagued with inner fears and outer resentments; when you are forever fighting a degenerating sense of “nobodyness” — then you will understand why we find it difficult to wait. There comes a time when the cup of endurance runs over and men are no longer willing to be plunged into an abyss of injustice where they experience the bleakness of corroding despair. I hope, sirs, you can understand our legitimate and unavoidable impatience.
A long passage, but one that can’t be reflected on too deeply or too often. The extended, throbbing sentence in the middle of that paragraph is as powerful an embodiment as I know of the pain of waiting, waiting, waiting, for a remediation of the grossest of injustices.
So I get what Balkin is saying when he notes that few of us, on the Left or on the Right, would want to undo “the key civil rights decisions” of the 20th century. But not all the decisions, not most of the decisions, made under the living-Constitution model have been as just or as commendable. Indeed, some of them have made a mockery of the Constitution and cannot possibly be defended in terms of legal reasoning, however desirable one might think the outcome.
So this is why disciplinary originalism matters. Disciplinary originalism understands that methodological originalism is unworkable because it makes the Constitution useless. But it also wants to allow the Constitution to speak to us, and to force us, when we are departing in some significant way from its principles, to go back to our legislators and change the laws — and amend the Constitution itself when necessary. Disciplinary originalism keeps us honest. It forces us to know what we’re doing, and not to console ourselves with the pretense that we are somehow in the Great Tradition of the Framers when we are in fact repudiating much of what they believed. It doesn’t tell us we can’t or shouldn’t dissent from the beliefs of the Framers; it just asks us to admit it openly when we do so.
Critics of Justice Scalia often accused him of inconsistency. And insofar as he was a methodological originalist he sometimes was inconsistent. But I think the heart of his jurisprudence was disciplinary originalism, and with his death the most powerful embodiment of that vital principle was lost. I do not think we shall look upon his like again. And that means that our Supreme Court will continue to make the kinds of decisions it has been making for decades, but will have no one on its bench to remind it of what it’s really doing. Antonin Scalia was the conscience of SCOTUS, and I don’t see how it’s going to get another one.