There is no better journalist in America than Andrew Ferguson, and his brilliant takedown of bad behavioral science provides yet more evidence for that claim. A passage on Stanley Milgram’s famous obedience-to-authority experiment especially caught my eye:
The results were an instant sensation. The New York Times headline told the story: “Sixty-five Percent in Test Blindly Obey Order to Inflict Pain.” Two out of three of his subjects, Milgram reported, had cranked the dial all the way up when the lab-coat guy insisted they do so. Milgram explained the moral, or lack thereof: The “chief finding” of his study, he wrote, was “the extreme willingness of adults to go to almost any lengths on the command of an authority.” Milgram, his admirers believed, had unmasked the Nazi within us all.
Did he? A formidable sample of more than 600 subjects took part in his original study, Milgram said. As the psychologist Gina Perry pointed out in a devastating account, Behind the Shock Machine, the number was misleading. The 65 percent figure came from a “baseline” experiment; the 600 were spread out across more than a dozen other experiments that were variations of the baseline. A large majority of the 600 did not increase the voltage to inflict severe pain. As for the participants in the baseline experiment who did inflict the worst shocks, they were 65 percent of a group of only 40 subjects. They were all male, most of them college students, who had been recruited through a newspaper advertisement and paid $4.50 to participate.
The famous 65 percent thus comprised 26 men. How we get from the 26 Yalies in a New Haven psych lab to the antisemitic psychosis of Nazi Germany has never been explained.
I’m interested in this because in my book on original sin I referred to Milgram’s experiments quite positively — and moreover, I never did any reading to find out whether they had been subjected to critique. I just assumed that they were universally accepted as valid. And why did I make that assumption? Because Milgram’s experiments confirmed the story I was telling about the return, in the twentieth century, of a widespread belief in human depravity.
Now, to be sure, the book by Gina Perry that Ferguson cites as authoritative on this matter has itself come under some criticism for one-sidedness; Milgram’s famous experiment may indeed hold up, at least in large part. But the point I want to make here is that I didn’t do anything to check it out — for me, the story Milgram told was too good to be false.
A. There’s nothing to be afraid of — but yes (since you’re wondering) my conviction that democracy is a failed experiment does stem, in part, from my reading of the neoreactionaries, especially Moldbug. But I’m not with him all the way — for instance, as you can tell from my earlier comments, I have a good deal more respect for the U. S. Constitution than does Moldbug, who has commented, “The basic nature of constitutional government is the formalization of power, and democracy is the formalization of mob violence.” Nah. But in many other respects his diagnosis of where we’ve gone awry is spot on.
B. Is it? I don’t think so. In fact, hearing that your thoughts have been shaped by Moldbug’s does more to discredit them than anything else you’ve said.
A. Why? Moldbug is a very smart guy — he’s just saying the kinds of things that most people are afraid to say.
B. Maybe. And sure, he’s smart. But he’s not especially knowledgeable about things he needs to be knowledgeable about in order to offer a compelling alternative to the existing political order. For instance, in one of his most-read posts he writes, “Thomas Aquinas derived Catholicism from pure reason. John Rawls derived progressivism from pure reason. At least one of them must have made a mistake. Maybe they both did” — which is absolutely nonsensical. He has no idea what he means by “Catholicism,” “progressivism,” “pure reason,” or “derived.” He has no idea what either Aquinas or Rawls would have made of those terms, or why they would have described their projects in wholly different ways. I distrust Moldbug because Moldbug clearly doesn’t understand — does not have even a minimally competent, first-year-undergrad comprehension of — many of the positions he rejects.
A. All right, so let’s grant, per argumentum, that Moldbug is not an expert in the history of political philosophy. But he doesn’t have to be in order to present a coherent and useful vision of a new direction in which we can go — a new direction I think you’ll agree we very much need.
B. A new direction, I’m not so sure; but a different direction, yes. Anyway, please remember that I’m not asking Moldbug to be an expert, but I do think he needs to have at least a basic understanding of the views he’s rejecting — precisely because he’s grounding the need for his ideas in the conviction that those other ideas are wrong. However, his acquaintance with those ideas is too superficial, and he’s too incurious about what Aquinas and Rawls really think, for me to take seriously his claim that he can offer a compelling alternative.
A. I don’t think that follows — you’re placing too much emphasis on the need to understand some pre-existing tradition of political thought. You’re trying to hold Moldbug accountable to the very system he’s repudiating: you’re rejecting the red pill because it’s not the blue one.
But in any case, let’s not belabor this question. I still have an argument I want to make.
B. Fair enough — as long as I get a chance to make an argument of my own before we’re too old to care.
A. Of course! But now I want to get back to this notion of — as you divined — aristocracy. The word means “rule by the excellent,” or the “best,” and the primary reason people dislike it is that they know that aristocracy never lives up to its name: it is never rule by the most excellent, but by the rich and powerful who in order to justify their rule designate themselves as excellent. That’s why it’s so absurd when people try to overcome resistance by replacing “aristocracy” with “meritocracy” — the words are synonyms, and “merit” can be faked and then justified as easily as can any other claim to excellence. By meritocracy people usually mean “rule by those who have been academic high achievers” as opposed to the popular use of aristocracy to mean “rule by those of high social status” — but given the enormously strong correlation between social status and academic performance, this is a distinction virtually without a difference.
B. So anyone, like you, who wants to make a case for aristocracy/meritocracy in preference to democracy has one big job at the outset: to show how it’s possible for a society to produce genuine aristoi — and put them in charge.
B. But even if you do that, you won’t have proved that such an aristocracy would be superior to democracy.
A. Sure. But one thing at a time. And don’t forget, if rule by the aristoi can be a fiction, rule by the demos can be too.
B. No doubt.
A. Okay, so back to work. I think the model we want to consider — though perhaps not to imitate slavishly — is imperial China’s examination system.
(to be continued…)
Damon Linker likes California’s assisted-suicide bill. After rejecting religiously based arguments against suicide, he writes,
The arguments raised by disability-rights activists are more powerful, since they’re based less on appeals to absolute (and unconvincing) moral strictures than on the law’s potential to lead to bad consequences and abuse. One of those consequences is a kind of soft eugenics in which the terminally ill are subtly pressured to do the “selfless” thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them. One could also imagine a future attempt to expand the law to include not just terminally ill and suffering patients, but also people with chronic and debilitating but not fatal or excruciating illnesses. Finally, there’s the possibility of the law being changed so that it permits not just the patient but also family members or friends to request the lethal dosage. That, too, could lead to the exertion of pressure on a patient to end his or her life.
These are legitimate concerns that should be taken seriously, especially in light of a recent disturbing New Yorker article about how Belgium allows euthanasia for people suffering from depression. But the California law is written to avoid being applied in anything like the ways feared by most disability activists. So yes, let’s beware future amendments to the law that could lead to abuse. But that’s no reason to oppose its current, limited, and responsible form. (One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.)
I just want to make two comments about this. First, having read the text of the proposed law, I can’t see anything in it that would warrant Linker’s claim that “the California law is written to avoid being applied in anything like the ways feared by most disability activists.” It seems to me that it would be very easy for an attending physician and members of the dying person’s family to practice “a kind of soft eugenics in which the terminally ill are subtly pressured to do the ‘selfless’ thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them.” In fact, I don’t see how a law could be written to prevent that kind of pressure from being brought to bear on the dying.
Second, and in a spirit of theoretical disputation, I note Linker’s claim that “One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.” Doesn’t one? Shouldn’t one? It seems to me that there are cases in which it would be sensible to look at possible future extensions of a proposed law while evaluating its current form — and this could be one of them.
The law opens the choice of physician-assisted suicide to persons with a “terminal disease,” and defines “terminal disease” as “an incurable and irreversible disease that has been medically confirmed and will, within reasonable medical judgment, result in death within six months.” Surely someone will say, “Why six months? Why not a year — or more — if ‘reasonable medical judgment’ concludes that death is overwhelmingly likely?” That is, there’s an arbitrariness in the choice of six months as the (pardon the term) deadline for this choice which makes it likely that there will soon be pressure to extend it.
Moreover, there’s a great deal of wiggle room in the phrase “reasonable medical judgment.” One doctor may deem a disease fatal that another finds eminently treatable; and even when fatality is for all intents and purposes certain, people often surprise their doctors. Some cancer patients have lived far beyond the utmost time predicted for them; others die much more quickly than expected. (My father was one of the latter.)
So it seems to me that the law in its current form is already ripe for abuse; and it seems extremely likely that strong arguments will be made for extending the time frame in which suicide may be assisted — in the name of the same compassion that causes Linker to endorse the current bill. So even on non-religious grounds the proposed law seems to me far more questionable than Linker allows.
I want to consider some stories I have read recently — juxtapose them to one another. Let’s begin by looking at this story:
Last year I told a gay black male who wrote a story about a gay black male that I didn’t care about race or gender, and the class gasped. Even though I explained that I cared more about what happened to the character and about the elegance of the prose, my comment could have been a signal to erect a guillotine on the campus lawn. Nonetheless, the student thanked me after class. He said, “No one looks at my stories. They just look at me.”
Microinvalidations are characterized by communications or environmental cues that exclude, negate, or nullify the psychological thoughts, feelings, or experiential reality of certain groups, such as people of color. Color blindness is one of the most frequently delivered microinvalidations toward people of color.
“People are just people; I don’t see color; we’re all just human.” Or “I don’t think of you as Chinese.” Or “We all bleed red when we’re cut.” Or “Character, not color, is what counts with me.”
And then this story:
Academics of color experience an enervating visibility, but it’s not simply that we’re part of a very small minority. We are also a desired minority, at least for appearance’s sake. University life demands that academics of color commodify themselves as symbols of diversity — in fact, as diversity itself, since diversity, in this context, is located entirely in the realm of the symbolic. There’s a wound in the rupture between the diversity manifested in the body of the professor of color and the realities affecting that person’s community or communities. I, for example, am a black professor in the era of mass incarceration of black people through the War on Drugs; I am a Somali American professor in the era of surveillance and drone strikes perpetuated through the War on Terror….
It’s not that we’re too few, nor is it that we suffer survivor guilt for having escaped the fate of so many in our communities. It’s that our visibility is consumed in a way that legitimizes the structures of exclusion.
Skin feeling: to be encountered as a surface.
And finally, Ralph Ellison from Invisible Man, where so much of this discourse begins:
I am invisible, understand, simply because people refuse to see me. Like the bodiless heads you see sometimes in circus sideshows, it is as though I have been surrounded by mirrors of hard, distorting glass. When they approach me they see only my surroundings, themselves or figments of their imagination, indeed, everything and anything except me.
It’s easy — especially for anyone who discounts racism and the effects of racism as major shapers of the American cultural experience — to throw up one’s hands and say “It’s impossible to win with these people! It’s white people’s fault if they’re visible, it’s white people’s fault if they’re invisible! Heads they win, tails we lose!” Indeed, it’s not just easy, it’s inevitable.
But you know, it has to be hard to be either invisible or hyper-visible; and white America really does oscillate between casual clueless racism and genuine heartfelt desire to achieve colorblindness. (Though probably there has been a general drift towards the latter, which could be taken advantage of rather than resented.)
I would love to have a clear answer to this conundrum, but I don’t — except to note that it is a conundrum, an insoluble puzzle, a rhetorical circle — it’s the Mister Bones’ Wild Ride of political rhetoric. So maybe this is a good point at which to remind ourselves that, in this context, both “visibility” and “invisibility” are largely metaphorical. And then look through and beneath them for the more complex reality that they fail to capture — even if they may have been at times in their history conceptually useful and powerful. I think many critics of American racism have attached themselves to a vocabulary that just drops them in a ride that never ends.
(The first installment of the dialogue is here.)
B. But you’ve totally shifted ground here! What you’re offering now is not a critique of self-government, or even representative democracy, but of a corrupt electioneering system which couldn’t serve plutocracy better if it were designed to do so — and really, it is designed to do so, come to think of it.
A. And every “informed voter” knows that that’s the case, and expresses much tut-tutting disapproval, and occasionally even raises his or her voice in outrage — but keeps re-electing the same corrupt and/or weak-willed losers, or their newest clones. I have complained about American voters being ignorant, but even when they’re not ignorant they are thoughtless. Every opportunity they have to address the corruption of the system — and they have that opportunity every two years — they squander. They listen to the empty promises of politicians who flatter them, and pay not the slightest attention to the needs of society as a whole or of those who come after them — that’s the selfishness, the third item of my indictment. They have repeatedly abused the privilege of voting, and they deserve to have it taken away from them.
B. Well, it’s a powerful indictment. According to your argument, then, this nearly universal abdication of democratic responsibility has led (one must assume) to the collapse of American society, widespread poverty, and internal and external powerlessness. Because clearly it wouldn’t be possible for a political system as corrupt and inefficient as the one you’ve described to produce even a mediocre social order — let alone an enormously wealthy and powerful society, a global hegemon such as the world has rarely if ever seen. So perhaps you’re living in the universe next door to mine….
A. No, I think we’re in the same universe, though I might want to argue whether a country’s achieving the status of “a global hegemon such as the world has rarely if ever seen” is, as you seem to think, a good sign. But let’s set all that aside and cut to the chase. As I read current events, and the history that has produced them, American power is chiefly the residual result of decisions made long ago by a much smaller electorate, a kind of aristocracy in all but name. Insofar as that aristocracy excluded women, people of color, and (at first) poor white men, it was unjustifiable; but another way to look at it is that the power went to the best-educated in society, the least vulnerable to the pressures of external forces. We are at work dismantling the brilliant edifice they constructed, though perhaps not fast enough for some; but it was so magnificently built, so delicately balanced — “a machine that would go of itself” — that it has proven exceptionally difficult to dismantle. But it will be dismantled, and just as we are continuing to benefit from the wisdom of our ancestors, our grandchildren will suffer from the stupidity of voters today.
B. You realize, I trust, that your historical argument could be challenged, and seriously challenged, at every single point.
A. Yeah. But we’re having a conversation, I’m not writing a treatise.
B. You also realize, I trust, that where you’re headed would constitute a more radical dismantling of the Constitution than anything else on the table?
A. No. I absolutely deny that. It would be a way to re-articulate and re-implement genuinely Constitutional principles in a new social order, one in which ignorance, thoughtlessness, and selfishness are no longer impediments to political power and influence.
B. I’m going to do you the honor of assuming that you are not going to argue for confining the franchise to white males who make more than $100,000 a year….
A. Much obliged.
B. But this is going to be an argument for a New Aristocracy, isn’t it?
A. It is.
B. I was afraid of that.
(to be continued…)
A. It’s time to accept a simple and yet profound fact: democracy is a failed experiment. People throughout the Western world — well, hold on: let’s just confine this discussion to America. Democracy in America is a failed experiment. Americans have demonstrated conclusively that they are too ignorant, thoughtless, and selfish to be trusted with self-governance.
B. Ignorant, thoughtless, and selfish! What a trifecta! Hyperbole much?
A. It’s not hyperbole. Let’s take my charges one at a time. Surely I don’t need to recite the dark litany of polls and studies that demonstrate how grossly misinformed Americans are about the basics of our political system, current laws and policies, the most elementary facts of world geography—
B. No, no, you don’t have to recite that litany — I have it by heart. But do you think that’s a new thing? Are you under the impression that our ancestors were learned and wise, spending their evenings discoursing on the subtleties of recent Supreme Court decisions?
A. I’m tempted to say yes. After all, they weren’t sitting around watching American Idol or hammering out wrathful comments on YouTube videos. They attended lectures and chautauquas, they participated in town halls and debating societies —
B. “They” did? You mean a handful of the wealthier and better-educated white men did, I think.
A. As I said, I’m tempted to say yes — and I really do believe the situation was more complicated, and better, than you have suggested. But for now I’ll waive the point. Let’s posit that Americans today are at least as knowledgeable as their ancestors were. Okay?
B. Well … okay. For now. I reserve the right to debate this point later.
A. Fair enough. So what I want to say is that ignorance today matters more than it did in the past, because the role of government in our lives is so much greater. A hundred and fifty years ago it was possible to live a full and happy life with minimal experience of government. About a hundred years before that it was possible for Samuel Johnson to write, “How small, of all that human hearts endure, / That part which laws or kings can cause or cure.” Such innocent times! Now “laws and kings” have insinuated themselves so deeply into all our lives that ignorance of their power and influence can exact horrifying costs.
Plus, we have so many more educational opportunities than our ancestors —
B. Hang on, hang on — this is a dialogue, remember?
A. Sorry. Please go on.
B. Thanks. I think you need to stop and reflect on the fact that there is so much more to be ignorant of now than there was 150 years ago — and the increased complexity of government is a function of the increased complexity of the world. The transportation and communications technologies that arose in the 20th century have created a “global village” the very existence of which creates a need for wide-ranging knowledge that our ancestors couldn’t have imagined — to blame today’s people for —
A. I’m not blaming anyone.
B. Well, you kinda are.
A. I’ll try not to, because it’s not necessary to my argument. People may not be at fault for being too ignorant for self-government — but they still are too ignorant for self-government.
B. But isn’t that why we have a representative democracy? People elect representatives who can devote their full time and energy to mastering the complexities that we aren’t able to master.
A. Try watching C-SPAN for a while and tell me if you think those are people capable of “mastering complexities.”
B. Well, I have watched a good bit of C-SPAN and I have seen some pretty wonky Congresspersons — I think your critique is a lot more applicable to the politicians who make a point of saying and doing things that will land them on CNN and in the big newspapers.
A. Okay, that’s a fair point. But I think there are two other points you’re neglecting. First, even the wonky members of Congress tend to be selectively wonky. They have their one little area of expertise — or what they flatter themselves is expertise — and in other matters they just take their direction from their party’s leadership. And second, look at what actually gets done in Congress: certainly not intelligent and reasonable laws crafted by deeply knowledgeable people to whom their colleagues defer! Rather, it’s pork-laden overstuffed monstrosities stitched together in order to please the whims of party leaders, big donors, and lobbyists for the hyper-wealthy corporations to which both major parties are equally indebted.
(to be continued…)
There’s a great deal of talk about “safe spaces” these days, but I put the phrase in quotes because rarely do these conversations refer to actual spaces. Instead, people seek social environments in which they’re protected against verbal assault, or confrontation, or mere discomfort. Place as such doesn’t enter into it.
In stories, though, the idea of the safe space is a powerful one — even if the safety often proves illusory. (“The calls are coming from inside the house.”) And when there is genuine safety it’s rarely complete or permanent. In The Lord of the Rings Tom Bombadil’s house and Rivendell and Lothlorien are places of absolute refuge for the beleaguered characters, but we are reminded that none of them could hold out forever against the evil of Sauron. Perfect rest can be found in them; but only for now. The contingently safe space is a curiously strong theme in the Harry Potter books: living in the Dursley house grants Harry protection from Voldemort — until he comes of age; 12 Grimmauld Place protects members of the Order of the Phoenix — as long as they manage to prevent anyone from seeing them enter or leave; Hogwarts itself is invulnerable to Voldemort and the Death Eaters — but only as long as Dumbledore is present and in charge.
There are of course genuinely safe spaces in literature, and perhaps many readers will have favorites. I certainly know what mine is: it’s Nero Wolfe’s brownstone on West 35th Street in Manhattan.
Of all fictional series, Rex Stout’s Nero Wolfe stories have the most ingenious and fertile conceit (with the possible exception of Patrick O’Brian’s Aubrey-Maturin books). It is twofold: that the enormously fat Wolfe never willingly leaves his house, preferring to solve crimes simply by application of brain power; and that the man who moves for Wolfe, who serves as a kind of mobile prosthetic body for him, Archie Goodwin, narrates all the stories. There’s much to commend about this double conceit’s power to generate good stories — and about Rex Stout’s ability to conjure a consistently delightful narrative voice for Archie — but I want to talk about the house.
If you climb the steps and knock on the door, it will probably be answered by Fritz, Nero Wolfe’s chef — simply because Fritz’s kitchen is on the first floor, along with the dining room and Wolfe’s enormous office. The rest of the house is described at the Nero Wolfe Wikipedia page (linked above):
Nero Wolfe has expensive tastes, living in a comfortable and luxurious New York City brownstone on West 35th Street. The brownstone has three floors plus a large basement with living quarters, a rooftop greenhouse also with living quarters, and a small elevator, used almost exclusively by Wolfe. Other unique features include a timer-activated window-opening device that regulates the temperature in Wolfe’s bedroom, an alarm system that sounds in Archie’s room if someone approaches Wolfe’s bedroom door or windows, and climate-controlled plant rooms on the top floor. Wolfe is a well-known amateur orchid grower and has 10,000 plants in the brownstone’s greenhouse. He employs three live-in staff to see to his needs.
A back door, rarely used and treated as something of a secret, leads to a small garden where Fritz grows herbs and which features a vaguely described way out onto 35th Street (which seems also to be a secret, and is probably invisible from the outside, like 12 Grimmauld Place).
The brownstone possesses an aura of self-sufficiency: I suppose Fritz has to shop for the food he cooks, but his larder seems magically full, and the meals served in that kitchen or in the adjoining dining room are in my imagining conjured more than made. (Fritz’s rooms are in the house’s basement, where he keeps an extensive collection of cookbooks.) The little world of the greenhouse, with its custodian Theodore Horstmann who lives among his orchids at the top of the house, is like a chunk of Faerie that one enters not by walking through a strange forest but by taking Wolfe’s little elevator.
Often in the books one of Wolfe’s clients finds himself or herself — usually herself — in danger and is brought by Archie to the house, whereupon the doors are locked and all creatures of evil intent are excluded. In one story a woman tries to stab Wolfe as he sits at his custom-made desk in his office; she dies instead. Wolfe is invulnerable there; I’m reminded again of Tom Bombadil, though in darker and more cynical form, utterly safe “within limits he has set for himself” and making others safe there too.
All of this is of course merely a dream of refuge dreamed by someone (me) who is one of the safest people in the world. As I write these words, refugees from the Middle East are pouring into Europe, and someone posted on Instagram images of notices that the city of Vienna has put up in all the transportation centers. The one in English (I saw Arabic ones too) begins with the word “WELCOME,” and goes on to explain the various services the city provides for refugees and to instruct visitors how to find help. Then, at the end, there is a single three-word paragraph:
You are safe.
“You are safe.” Could there be more powerful, more important, more consoling words? I have never needed to hear them in the way those thousands of refugees need to; and yet they answer to the deepest of all needs. For even water and food can wait for a while.
I have thought sometimes of finding myself in New York City, pursued by evil people who will do terrible things to me before they kill me. Somehow, against all odds, I make my way to the house on West 35th Street and rush up the steps and knock. Archie Goodwin opens the door a crack, then ushers me in. Up we go to the guest bedroom on the third floor, down the hall from Archie’s own room. The room is clean and quiet, and an orchid from Wolfe’s greenhouse stands in a vase on the bedside table. Once alone, I take off my clothes and turn out the light. In the morning Fritz will make a delicious breakfast, and there will be plenty of hot strong coffee. In the meantime, I sleep soundly and peacefully. Because here I am safe.
Damon Linker is right to say that the person now known as Kentucky Clerk should resign if she can’t fulfill the law that the terms of her job require her to fulfill.
Mollie Hemingway is right to say that the attacks on Kentucky Clerk are utterly malicious and utterly mendacious.
There are really two significant stories here: one concerns Christians who think that they ought to be able to dissent from government and get paid by it at the same time; the other concerns secular liberals whose one principle in relation to the repugnant cultural other is “Any stick to beat a dog.”
UPDATE: I tried to comment on Noah’s response to this post, but WordPress didn’t let me. Or I don’t think it let me. Anyway two things: first, did I really sound “outraged”? I didn’t feel outraged. Perhaps I need to work on tone management.
Second, about the question of “significance”: if Kim Davis is a unique figure, then Noah is right, the story isn’t significant. But she may not be a unique figure. There seem to be a number of conservative Christians in America with a complex (possibly contradictory) attitude towards this country: on the one hand, a default patriotism and law-and-order mentality, often rooted in the belief that America is a “Christian nation,” that makes them comfortable with holding government jobs; and on the other hand, a belief that like the Apostles they should “obey God rather than man” and therefore should always be ready to dissent from the powers that be. This leads to someone like Kim Davis thinking that it’s possible for her to swear to uphold the law — but to refrain from upholding the law when it’s one her conscience disagrees with. If a large number of Americans, even in just a few states, feel the same way, then that will have consequences for elections, for laws, for the social fabric. And such consequences would be significant. Enough people have spoken out in support of Kim Davis to make me think that it’s not a trivial story.
I am fond of thought experiments, though many people are not—or so I infer from the fact that every time I propose one the most common response I get is a refusal of its terms. So a number of people who have responded to my recent little exercise have said something like “But that’s not the situation we’re in”—Yes it is, in this thought experiment that I am totally making up—or “I would not vote for either party”—but in this thought experiment you have to choose one.
There’s some of this even in the response from my friend Noah Millman, as when he wonders whether there really are threats to religious liberty. In my thought experiment there damn well are, because I say there are! Against Noah, I say that the premises of my thought experiment are not and indeed cannot be “debatable premises,” because they are the ones I posit simply for the sake of the experiment: thus my insistence at the outset on the term “hypothetical.”
I can’t help being reminded of one of my favorite scenes from Wodehouse, in which the pathologically diffident Gussie Fink-Nottle discusses with Bertie Wooster whether he should follow Jeeves’s advice to build his self-assurance by wearing a Mephistopheles outfit to a costume party:
‘And you can’t get away from it that, fundamentally, Jeeves’s idea is sound. In a striking costume like Mephistopheles, I might quite easily pull off something pretty impressive. Colour does make a difference. Look at newts. During the courting season the male newt is brilliantly coloured. It helps him a lot.’
‘But you aren’t a male newt.’
‘I wish I were. Do you know how a male newt proposes, Bertie? He just stands in front of the female newt vibrating his tail and bending his body in a semi-circle. I could do that on my head. No, you wouldn’t find me grousing if I were a male newt.’
‘But if you were a male newt, Madeline Bassett wouldn’t look at you. Not with the eye of love, I mean.’
‘She would, if she were a female newt.’
‘But she isn’t a female newt.’
‘No, but suppose she was.’
‘Well, if she was, you wouldn’t be in love with her.’
‘Yes, I would, if I were a male newt.’
A slight throbbing about the temples told me that this discussion had reached saturation point.
I continue to believe that a thought experiment like the one I suggested is valuable in the same way that A/B testing is valuable. When someone asks you which of two shades of blue you prefer, you can, I suppose, say “Why just two? Why not fifty shades of blue?” or “Why not green, and red, and burnt umber, and all the other colors?” But maybe we would all learn something, even if something small, if you just picked one of the damned shades of blue. And then we can move on to other experiments after that, and gradually, incrementally, build up a more reliable understanding of our own values and preferences.
To those who would say that A/B testing, and thought experiments, are simple in comparison to real-life decisions, I reply: Precisely. That’s just the point of them. Politics is hard because it’s so outrageously complicated. It’s easy to get lost in all the overlapping questions and competing priorities. If you agree with a political party about seven of its official platform positions, but disagree about only one, but the one is something you care passionately about while the seven are, for you, relatively insignificant—how are you supposed to weigh those things? It’s impossible to say off the cuff. More thinking is required. It helps to break the situation down into its component parts. That’s what a thought experiment like the one I proposed is for.
More about the substance of the matter later; right now, I have teaching to do.
Imagine that there are two leading American political parties. Imagine further that they are in general agreement on all issues except two. (That’s what makes this a true hypothetical.)
The first point of disagreement concerns religious liberty. Party A is a strong supporter of religious liberty; Party B thinks that religious liberty needs to be circumscribed in order to secure maximal equality or justice for others.
The second point of disagreement concerns foreign policy. Party B is in these matters cautious and circumspect, disinclined to adventurism, not isolationist but not interventionist either. Party A, by contrast, never met a foreign conflict it didn’t want to intervene in, and thinks what’s good for military expenditures is good for America. The more of our young men (and perhaps women) Party A can put in harm’s way thousands of miles from home the better it feels about itself. Pax Americana, world without end, y’all.
You (in this thought experiment) are a Christian and a strong supporter of religious liberty; you are also strongly opposed to unnecessary military adventures and foreign intervention more generally.
How do you vote? And on what grounds do you make that decision?
I’ve been thinking about this a good bit lately. While I am, as I have often demonstrated right here on this site, a vocal supporter of religious freedom, I’m also rather uncertain about how my religious convictions should affect my political decisions. The problem arises if we distinguish between individual and collective Christian action.
On the individual level, I know what I am supposed to do: if someone slaps me on one cheek, I should offer them the other; if someone takes my shirt, I should offer him my coat; if someone curses me, I should bless him; I should always seek the well-being of others in preference to my own. (Of course, this is not to say that I actually do what I know I should do.)
If that logic holds in the collective sphere as well, then perhaps Christian churches should not focus too much attention on what is best for them, but on what is best for their neighbors. They might have good reason, in that case, to accept constraints on religious freedom if that meant preventing unnecessary violence, death, and destruction from being unleashed on others.
Now, some Christians might also argue that the Church exists for others, so that promoting religious freedom, even at the cost of lives lost overseas, is still the selfless thing to do. And that could be right, but I think we all ought to be very wary of arguments that provide such a neat dovetailing of our moral obligations and our self-interest.
I honestly don’t know what I think about this, and still less do I know how to apply the proper principles to our own more complex political scene. But I do think it’s right to conclude that there are at least some potential circumstances in which religious believers, in order to be faithful to their religious traditions, would need to refrain from direct political advocacy for those traditions.
On Twitter, Damon Linker politely took me to task for, in my response to a post of his, ignoring the “substance” of that post. I believe by that he meant his explanation of his own views of fetal life, as opposed to the critique of Ross Douthat that I objected to.
Well, that post wasn’t about Linker’s own position, but rather about his peculiar way of responding to Ross Douthat. But okay—since you asked!—here goes. Linker writes,
Even if my wife and I could know every time a fertilized egg fails to implant and then sloughs off when she menstruates, we still would never be moved to mourn the death of a being with intrinsic moral worth. The same holds for fertilized eggs that slough off because a sexually active woman is using an IUD — or, for that matter, because a woman is breastfeeding in the first several months after giving birth. All of these activities lead to the “death” of what really is, at that pre-implanted stage, a clump of cells that is destined not to develop into anything at all.
Nine months after successful implantation, things are very different. I would even say categorically — ontologically — different. How is this possible? I have no idea. All I know is that nearly all of us are convinced that a newborn baby is a person, a creature with intrinsic dignity, worth, and a right to life that the liberal state is duty bound and justly empowered to protect — and yet also convinced that although this same creature possessed the same genetic code from the moment of fertilization, it was somehow of relative moral insignificance in those first few hours and days of microscopic life.
I would very much like to know Linker’s evidence for the claim that “nearly all of us are convinced that … this … creature … was somehow of relative moral insignificance in those first few hours and days of microscopic life.” Nearly all? But let’s continue:
Between those moments (conception and birth) lies a developmental continuum that confounds any and every effort at strictly rational systematization. An abortion at six weeks is worse than one at four weeks. Eight weeks is worse than six. Twelve is worse than 10. And so forth, as we approach fetal viability — at which point, what was once a medical procedure with minimal moral import becomes a matter of murder.
First of all, and especially in light of my critique of Linker’s critique of Douthat, I want to say that this identification of fetal viability as the point at which a fetus becomes a person entitled to legal protection is a big step, and one that I’m sure earns Linker plenty of condemnation from the pro-abortion world. And the criticisms I am now going to offer should not be seen as ignoring that step or diminishing its significance. But do I have some concerns about Linker’s line of argument? I do.
The first is that, while Linker’s view is often described as a “gradualist” one, and while morally that may be true, in legal terms it’s not gradualist at all: it’s totally binary, all or nothing. In this account, before viability the taking of a fetal life is legally nugatory; after viability it’s murder. This is a big jump in any circumstances, but especially worrisome given the success of neonatal medicine in pushing viability earlier and earlier. So whether a woman has done something of no legal interest or something of the greatest possible legal significance can change within a year. This is to make legal judgment—and the status of a human creature under the law—dependent to a disturbing degree on medical technology.
Moreover, Linker’s judgment about the “moral worth” of pre-viability fetuses is pretty shaky as well. There’s nothing wrong with that as a matter of personal feeling, though (as I suggested earlier) I’m not convinced that his personal feelings about zygotes—his moral intuitions about them—are as universal as he claims that they are. And that’s a problem with his case, because he grounds his entire approach to the legal status of fetuses in those feelings and intuitions. If almost everyone does share those intuitions, then maybe that will work as a matter of practical jurisprudence; but ethically it’s pretty dubious. After all, it hasn’t been that long since widespread intuitions about the “moral worth” of black people led to catastrophic evil. (And the leftovers of those intuitions are still poisonous for black people in America today.)
I appreciate, and even value, the general point that underlies Linker’s argument: that sometimes our laws have to be based on fallible and not especially consistent moral intuitions; that ad hoc reasoning is sometimes the best that we have; that the attempt to impose absolute consistency on our laws and jurisprudence is almost necessarily quixotic and prone to the generation of unintended consequences, because, as the adage rightly goes, hard cases make bad law. But I think our track record as a species—and more particularly as Americans—suggests that rough-and-ready moral intuitions do very little to protect the weak, the powerless, the despised. We need stronger and (yes) more consistent legal and moral stuff to protect those who cannot protect themselves.
Damon Linker is unhappy—unhappy with the tone of a recent post by Ross Douthat. Linker says that people like Douthat—but he really just means Douthat, because he doesn’t refer to anyone else in his post—are “losing their cool. And their heads.”
Linker calls Douthat’s post “harsh and angry”—a description I won’t contest, though I’m tempted—but notes that it is “uncharacteristically” so. Maybe he could have spent a few minutes contemplating why a writer as consistently irenic as Douthat might have lost a bit of patience on this particular subject. Might it be that the title of the post Douthat was responding to describes his position as “glaring hypocrisy”? A touch on the provocative side, wouldn’t you say? (Yes, writers typically don’t choose titles, but they can protest inaccurate ones; I’ve done it myself more than a few times.)
Or might it be something that runs a little deeper? In that earlier post, Linker writes, “I have faith that Douthat’s honesty and intelligence will lead him to concede that he’s lost his debate with [William] Saletan”—the magnificent condescension of that line would have me banging on Linker’s door to challenge him to a duel—but Douthat demonstrates pretty thoroughly in his reply that he hasn’t lost that argument. And it’s interesting that in his lamentation over Douthat’s so-unfortunate tone, Linker never acknowledges any of the arguments Douthat makes or the studies he cites. It’s much easier to tut-tut over people “losing their cool.”
Linker seems to be troubled that Douthat doesn’t acknowledge how different his position is from that of people like “Katha Pollitt and Rebecca Watson [who consider] the termination of a pregnancy to be as morally insignificant as (in Douthat’s words) ‘snuffing out a rabbit.’” He is, as he keeps telling us, “deeply troubled by abortion.”
But the state of Linker’s feelings may not be the most germane thing here. The really key passage in Douthat’s “harsh and angry” post is one that Linker doesn’t quote:
It is not the pro-life movement that’s forced Planned Parenthood to unite actual family planning and mass feticide under one institutional umbrella. It is not the Catholic Church or the Quorum of the Twelve Apostles or the Southern Baptist Convention or the Republican Party that have bundled pap smears and pregnancy tests and HPV vaccines with the kind of grisly business being conducted on those videos. This is Planned Parenthood’s choice; it is liberalism’s choice; it is the respectable center-left of Dana Milbank and Ruth Marcus and Will Saletan that’s telling pro-life and pro-choice Americans alike that contraceptive access and fetal dismemberment are just a package deal, that if you want to fund an institution that makes contraception widely available then you just have to live with those “it’s another boy!” fetal corpses in said institution’s freezer, that’s just the price of women’s health care and contraceptive access, and who are you to complain about paying it, since after all the abortion arm of Planned Parenthood is actually pretty profitable and doesn’t need your tax dollars?
But instead of questioning the inevitability of this “package deal,” Linker prefers to (a) characterize opponents of it as exhibiting “glaring hypocrisy” and (b) express deeply-felt dismay if any of those opponents bristles at that characterization. To his credit, Linker is straightforward about his allegiances: “People like me—deeply troubled by abortion and yet supportive of women’s reproductive freedom (along with a good bit of the rest of the sexual revolution as well)—will never lend [the pro-life movement] our support. No matter how many barbaric videos its activist wing makes public.” Never.
If you tell people that you will never under any circumstances give them your support, then they may not thank you for instructing them in how to go about their business, no matter the state of your feelings. And if in the face of the horrors revealed by these recent videos of Planned Parenthood’s callous and mercenary attitude towards the organs of killed fetal humans your response is to attack Ross Douthat, then maybe, just maybe, you’re not as “deeply troubled by abortion” as you’d like to think you are.
As I—peculiar person that I am—see the world, few things could be more readily understandable than a person’s expressing gratitude that her mother didn’t choose to abort her. And that’s what the #unplannedparenthood hashtag on social media is all about: people telling their own stories of gratitude—gratitude to pregnant women who, in the face of fear and uncertainty, decided to take a chance on life; gratitude also, in many cases, to friends, family, churches, and community organizations who supported the women who took that risk. Who wouldn’t be grateful in such circumstances?
But for Olga Khazan, writing at The Atlantic, such expressions of gratitude are “bizarre,” “odd,” and “disastrously illogical.” I fear that I too must be disastrously illogical, because I fail to understand why Khazan then goes on to explain how “during the Great Depression, women who wanted to avoid having babies they couldn’t afford used ‘disinfectant douches’ that burned their genitals.” Is the point that people should not only be grateful for not being aborted but also grateful that their mothers weren’t faced with the prospect of singeing their genitals with corrosive chemicals? The relevance of this excursus escapes me.
At one point, groping to understand these alien minds, Khazan suggests that “the larger purpose seems to be to put many happy faces on the pro-life movement. All those people weren’t aborted! Isn’t that wonderful?” And she goes on to say,
Of course it is. But it also assumes that the only reason for an abortion would be that you’re mildly surprised by your pregnancy status, and uncertain what to do next.
But the #unplannedparenthood hashtag assumes no such thing. It is grossly insensitive and uncharitable of Khazan to assume that every woman who decided to keep an unplanned baby was only “mildly surprised” to be pregnant. And incurious of her too: her assumption won’t survive two minutes’ scrolling through search results for the hashtag, which show again and again the harrowing circumstances in which many, many, many women decided to bear unplanned children. To make such an immensely consequential decision in such circumstances was amazingly courageous—there is nothing “of course” about it.
And often, at the time, these were unwanted children as well. Khazan notes that “there is a big difference between an unplanned pregnancy and an unwanted one”—which is indeed true. But one of the chief points that emerges from the #unplannedparenthood stories is that a great many children who were unwanted at first became very much wanted, very much loved later—either by their birth parents or by those who adopted them. Khazan’s moral world is so impoverished that in it only first thoughts count; by contrast, the people who are grateful for #unplannedparenthood are also grateful for second thoughts.
Khazan tries to draw our attention to a world in which abortion is illegal, as though that’s likely to happen any minute now, but it’s not likely, and that’s not the world that the #unplannedparenthood stories come from. In every case that I have seen, these stories commend women who could have chosen abortion, but chose life instead, even when it was costly to them. In a famous phrase, Edmund Burke spoke of an “unbought grace of life,” but the people who celebrate #unplannedparenthood know that the grace of life they experienced was bought at a price—in many cases a very high price. Olga Khazan’s disdain for their expressions of thanks is contemptible.
The policing and disciplining of disagreement that I have been exploring in two previous posts—first and second—are the product of a massive cultural movement, in the process of development over centuries, that the philosopher Charles Taylor calls “code fetishism” or “nomolatry.”
In an absolutely vital essay called “The Perils of Moralism,” included in this collection, Taylor explains that “modern liberal society tends toward a kind of ‘code fetishism,’ or nomolatry. … Code fetishism means that the entire spiritual dimension of human life is captured in a moral code.” This idea is first fully articulated in Kant’s deontological account of ethics, but it had been in the making for hundreds of years before that. “I want to argue that it was a turn in Latin Christendom which sent us down this road. This was the drive to reform in its various stages and variants—not just the Protestant Reformation, but a series of moves on both sides of the confessional divide. The attempt was always to make people over as more perfect practicing Christians, through articulating codes and inculcating disciplines.”
Eventually “the Christian life became more and more identified with these codes and disciplines.” But once that had happened, the Gospel itself became dispensable: all we had to do was to extract the rules from it, and the “values” that produced them, and we were good to go. Thus arise figures who use the codes extracted from Christianity against Christianity: Voltaire, Hume, Gibbon.
And thus also arises an antinomian counter-movement: “Modern culture is marked by a series of revolts against this moralism, in both its Christian and non-Christian forms. … The code-centered notion of order and its attendant disciplines begin to generate negative reactions from the eighteenth century on. These form, for instance, the central themes of the Romantic period.”
Thus modernity, at least since Kant, is characterized by constant tensions and frequent eruptions of hostility between two great opponents, the antinomians and the code fetishists. Most of the fights that afflict social media today are versions of this conflict: just think of the recent skirmishes between the self-described free-speech advocates on Reddit and the opponents whom they refer to as SJWs (Social Justice Warriors).
I think the key lesson to be drawn from Taylor’s account is that code fetishism produces antinomianism: antinomians are people who get frustrated by the code fetishists’ relentless policing and disciplining of disagreement—which the fetishists practice because they are trying to build a more just society and believe that the codification and enforcement of rules is the only way to do it—and who conclude that a simple rejection of rules is the only way to resist. That is, both sides agree that morality is a matter of rules; but one side, seeing that rules require elaboration and enforcement, and that other people are the ones doing the elaborating and enforcing, prefers what it takes to be the only alternative: a rule-rejecting, morally minimal commitment to freedom.
(At least, that is how the antinomians would describe themselves. The fierceness with which some of them persecute and attempt to silence dissenters—practices recounted in disturbing detail in Sarah Jeong’s new book The Internet of Garbage—suggests that a good many professed antinomians are actually code fetishists of a particularly intense variety. Just for the purposes of this post I’m going to take the antinomians at their self-description.)
But what if this is a false dichotomy? What if the code fetishists and antinomians are both wrong, and wrong for the same reason: because they have unwittingly accepted the false idea that “the entire spiritual dimension of human life is captured in a moral code”? What if rule-following doesn’t produce justice, and the antinomians have an inadequate conception of freedom?
In an essay closely related to “The Perils of Moralism”—it even has some of the same sentences—Taylor suggests an alternative to this dichotomy. The essay is his brief but powerful foreword to The Rivers North of the Future: The Testament of Ivan Illich—a collection of interviews the writer and broadcaster David Cayley conducted with the great polymath in the late 1990s. This “testament” is enormously powerful and provocative itself, but for now I just want to highlight Taylor’s thoughts on Illich.
Taylor zeroes in on an obsession of Illich’s: Jesus’s parable of the Good Samaritan. For Illich, Taylor explains, “the Samaritan and the wounded man … are fitted together in a proportionality which comes from God, which is that of agape, and which became possible because God became flesh.”
The enfleshment of God extends outward, through such new links as the Samaritan makes with the Jew, into a network which we call the Church. But this is a network, not a categorical grouping; that is, it is a skein of relations which link particular, unique, enfleshed people to each other, rather than a grouping of people together on the grounds of their sharing some important property.
Illich believes that when we forget that what binds us is “a skein of relations,” we fall into a system of rules—we become code fetishists, for whom “the significance of the Good Samaritan story appears obvious: it is a stage on the road to a universal morality of rules.” But for Illich this is a “corruption of Christianity.” Our world looks very different if what matters is not the code we can abstract from a given situation but the situation itself—or, more specifically still, the utterly particular person who stands in front of us.
You can see the ubiquity of code fetishism—the can’t-see-around-it absolutism of nomolatry—in Sam Biddle’s reflections on how he helped to ruin Justine Sacco’s life. He says that he apologized to her, but then elsewhere in the post he effectively walks back the apology:
I’ve been asked many times if I would post Sacco’s tweet all over again, and I still don’t know how to answer. Would I post the tweet again? Sure. Would I post the tweet knowing it’s going to cause an incredibly disproportionate personal disaster for Justine Sacco? No. Would I post the tweet knowing it could happen? Now we’re in dicey territory, and I’m thinking of ghosts: If you had a face-to-face sit-down with all of the people you’ve posted about, how many of THOSE would you do again? We’re wading through swamps and thorns, here.
Biddle would only “post the tweet again”—or at all—because he thinks that in it Sacco had violated some significant norm; but he would only hesitate because, having confronted her humanity, he realizes that code enforcement has a tendency to create “incredibly disproportionate personal disaster.” More crucially, he’s horrified by the very thought of scanning his history of social-media acts, because he could discover that he has violated codes himself, and then what would he do? “Swamps and thorns” indeed.
Biddle’s problem is that he is stuck between sensing the limits of nomolatry and seeing no alternative to it except an antinomianism that strikes him as somehow irresponsible, perhaps even inhuman. He is morally disoriented by the confrontation with someone’s sheer personhood. He has the first inkling of the possibility that, as Taylor puts it in his summary of Illich’s thought,
even the best codes can become idolatrous traps that tempt us to complicity in violence. Illich reminds us not to become totally invested in the code — even the best code of a peace-loving, egalitarian variety — of liberalism. We should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.
In this light, I think we can see that our dominant social media have a strong tendency to reinforce the nomolatry-antinomianism dichotomy, and to obscure the need for “networks of living concern.” To search Twitter or Facebook for people using words you don’t like, or using important words in ways you don’t like; to scroll through a list of tweets or posts that employ a particular hashtag with an eye towards the absurd or offensive; to seek out particularly provocative tweets or posts in order to see how outrageous the replies are—these are the characteristic acts of the code fetishist. I pray you, avoid them.
This is a follow-up to my earlier post about disagreement.
Occasionally Americans debate the correctness of beliefs and practices — political, moral, social. But not very often. Most Americans, or so one would judge from social media anyway, are Bulverists: they already know who is right and who isn’t, so all they need to debate is why the people who get things wrong — so, so wrong — do so.
But wait: it turns out that there is actually a second form or stage of Bulverism, one that is becoming increasingly common. If the first stage of Bulverism is explanatory, this second stage is disciplinary: it is concerned to determine what penalties should be administered to those who are wrong. Disciplinary Bulverism is where all the action is today.
Consider the case of Brendan Eich, the former Mozilla CEO who was pressured to resign when it became widely known that he had contributed financially to the campaign for California’s Proposition 8. Now, Eich has made it clear that he doesn’t think he’s a martyr and would rather not have his name brought up so often in these contexts — a request that I am going to ignore, just this once, because, well before he said that, I had asked whether people supported Eich’s ouster. Almost everyone who replied said that they did, but that’s as unscientific as a sample gets; and I’ve been unable to get a sense of just how severely people think Eich should be punished. One person tweeted to me that “A homophobe like Eich deserves whatever he gets,” but didn’t reply when I asked whether permanent unemployment would be a just punishment, or violent assault.
So I don’t think many people have a clear sense of how severely people should be punished for holding the wrong social, moral, or political views; but there seems to be widespread support for some kind of punishment, and something more than mere shaming. For most of the people Jon Ronson writes about in his work on internet shaming, shaming is just one element of the discipline they were subjected to. Consider the recent case of the English scientist Sir Tim Hunt, who after one sexist remark — or, as Catherine Bennett called it in the Guardian, “his determination to rescue science from female biology” — not only was forced to resign from his position at University College London but was also pushed out of the European Research Council. One strike and you’re out. Forever.
But it doesn’t always work this way — though I think more and more often the internet outrage machine demands the nuclear option as the first and only valid response. I suspect that Brendan Eich would have had at least a chance of keeping his job if he had said something like this: “I deeply regret having supported Proposition 8 and apologize without reservation to all who were rightly offended by my insensitivity. My views on same-sex marriage have evolved since then, and I pledge to do everything in my power to make Mozilla a more fully inclusive environment.” But no statement less absolute would have allowed him to escape with merely a public shaming and his job intact. (Tim Hunt actually made such an apology without receiving any mercy, but that could have been because his positions were more-or-less voluntary and more-or-less honorary.)
“Punishment” is the narrower category here, “discipline” the broader one, because there are forms of discipline that are not, or at least do not claim to be, punitive. So, for example, when scholars argue that racism is a form of mental illness, or that homophobia is, they would not suggest that racists and homophobes be punished. And while internet mobs delight in administering punishment and are happy to call it by that name, people in positions of social authority prefer a gentler approach, either to generate public confidence in their discretion or to burnish their own self-images. As Yeats wrote, “The rhetorician would deceive his neighbors, / The sentimentalist himself.”
Now, I certainly believe that racism is very wrong, though I would call it a sin rather than an illness or an error. (I’m not sure what homophobia is, but I think hatred of homosexuals is a sin also.) But that’s not the issue under debate here. My subject is what is to be done about people who hold the wrong beliefs, whether or not we describe their condition as an illness. And from the point of view of Disciplinary Bulverism, wrong beliefs must be dealt with in some way, must be subjected to some form of discipline. And in that light, thinking of their error as a form of illness has certain advantages.
C.S. Lewis, the creator of the term “Bulverism,” also wrote an essay that’s very relevant to these considerations. It is called “The Humanitarian Theory of Punishment,” and here is a key excerpt:
According to the Humanitarian theory, to punish a man because he deserves it, and as much as he deserves, is mere revenge, and, therefore, barbarous and immoral. It is maintained that the only legitimate motives for punishing are the desire to deter others by example or to mend the criminal. When this theory is combined, as frequently happens, with the belief that all crime is more or less pathological, the idea of mending tails off into that of healing or curing and punishment becomes therapeutic. Thus it appears at first sight that we have passed from the harsh and self-righteous notion of giving the wicked their deserts to the charitable and enlightened one of tending the psychologically sick. What could be more amiable?
But this theory is not quite as amiable as it looks — at least if you’re the one being “cured.” Lewis continues: “On this remedial view of punishment, the offender should, of course, be detained until he was cured. And of course the official straighteners are the only people who can say when that is. The first result of the Humanitarian theory is, therefore, to substitute for a definite sentence … an indefinite sentence terminable only by the word of those experts.” This point leads Lewis to his peroration:
It may be said that by the continued use of the word Punishment and the use of the verb “inflict” I am misrepresenting the Humanitarians. They are not punishing, not inflicting, only healing. But do not let us be deceived by a name. To be taken without consent from my home and friends; to lose my liberty; to undergo all those assaults on my personality which modern psychotherapy knows how to deliver; to be remade after some pattern of “normality” hatched in a Viennese laboratory to which I never professed allegiance; to know that this process will never end until either my captors have succeeded or I have grown wise enough to cheat them with apparent success — who cares whether this is called Punishment or not? That it includes most of the elements for which any punishment is feared — shame, exile, bondage, and years eaten by the locust — is obvious.
And of course this kind of thing happens all the time in Western societies today: sensitivity training and its many near relations. Nothing new there. In fact, the disciplinary structures that have been firmly in place for decades will simply continue: but in the coming decades they will have different targets.
Because this is how a society built on disciplinary Bulverism works. The reparative or conversion therapy that was once widely used to change homosexuals can easily be adapted to address the problems of racists and homophobes — who can be easy to find, thanks to the social-media trails that most people leave online. Sometimes such reparation will merely be encouraged by friends and family; sometimes it will be made a condition of employment; sometimes it will be mandated by judges. Discipline will not always (perhaps not often) come directly from the State; it will typically be administered by what one of the more acute Marxist theorists called ideological state apparatuses — institutions (schools, hospitals, many private businesses) that the State trusts to enforce its preferences. And these preferences will not be argued for; as always in Bulverist thought, their essential truth will be assumed; and the way social media are used today will ensure that dissent is driven out of any given circle of discourse.
Althusser’s picture of how the state works closely resembles what Foucault — who was not a Marxist and whose political positions were highly ambiguous — called the “power/knowledge regime,” and what some current neoreactionaries call “the Cathedral.” It’s interesting to see people from all over the political map exploring the subtle ways that State power works. There’s a reason for this. People who support using the disciplinary powers of the State against their enemies always assume that people like them will be in power forever. And on this point they are always wrong.
In an excellent recent article, Mollie Hemingway wrote, “We are slowly forgetting how to dislike something without seeking its utter destruction.” I would only replace “slowly” with “quickly”—very quickly. This makes me think about disagreement—what it is, what it means, what it is for. So let’s explore.
Many years ago, the philosopher Michael Oakeshott wrote that “The view dies hard that Babel was the occasion of a curse being laid upon mankind from which it is the business of philosophers to deliver us, and a disposition remains to impose a single character upon significant human speech.” By “Babel” here Oakeshott does not mean the diversity of languages but the diversity of beliefs and positions; his statement is a kind of challenge to philosophical hubris, to the idea that arguments can be produced that will defeat the opposition once and for all.
Bernard Williams likewise appreciated the value of disagreement: “Disagreement does not necessarily have to be overcome. It may remain an important and constitutive feature of our relations to others, and also be seen as something that is merely to be expected in the light of the best explanations we have of how such disagreement arises.” The context here is, broadly speaking, ethics—how people should live—and Williams thinks that ethical questions are immensely complex, so that disagreement about them is “merely to be expected.” Indeed, any attempt to shut down disagreement on such matters will be an impoverishment of thought, and perhaps of life itself.
The ancient idea of the philosopher as gadfly arises from the awareness that a person can serve society not only by being correct but also, and in a distinct way, simply by being different—by challenging conventional wisdom and received beliefs. American legal culture has long assigned defense attorneys an analogous role: it is good for society, and for justice generally, that even seemingly indefensible clients or ideas be defended. And sometimes, of course, what seems indefensible proves to be justified after all. But perhaps that’s not a value held in high regard any more—at least, in relation to some issues.
To be sure, toleration, both legally and socially, has always had limits. Consider John Milton’s “Areopagitica,” perhaps the most stirring celebration of freedom of the press ever composed. Hear, my friends, these noble words: “And though all the winds of doctrine were let loose to play upon the earth, so Truth be in the field, we do injuriously, by licencing and prohibiting to misdoubt her strength. Let her and Falshood grapple; who ever knew Truth put to the worse, in a free and open encounter?” But then just a few lines later: “I mean not tolerated Popery, and open superstition, which as it extirpates all religions and civil supremacies, so itself should be extirpate.” Milton reassures us that when he advocates freedom of speech he certainly doesn’t mean to include Catholics, whose words should be forcibly snuffed out.
So no society tolerates every imaginable form of speech; there are always boundaries. What’s disorienting about American society today is how quickly the boundaries are shifting. Beliefs that were almost universal less than 20 years ago—and are held by around 40 percent of the American people now—are deemed utterly beyond the pale. It’s hard not to suspect that some of the people most devoted to policing those boundaries are pouncing prosecutorially on views that they themselves held not that long ago. (The convert’s zeal.) And social media provide the chief impetus for both changing one’s own views and policing those whose views are different. In this environment, it’s hard to see who will resist what Oakeshott calls the “disposition … to impose a single character upon significant human speech.”
Maybe the Oakeshott/Williams view of philosophy as an opening-up rather than a closing-down of options can assist. In this fascinating conversation on the value of political disagreement, Gary Gutting and Jerry Gaus end up doing what people always do in these conversations: they advocate open disagreement but then quickly pause to say that “toleration has limits.” But, being philosophers, they go on to ask how those limits should be determined. Gaus: “The critical question is not whether I judge a person to be radically misguided, or judge her way of life to be morally repugnant, but whether she is a danger to the life and liberty of others.”
But that doesn’t help us very much unless we know what “danger” is, and its sibling “harm,” and no concepts have undergone more radical alteration in the recent shifting of social opinion than these. Thomas Jefferson famously said, “it does me no injury for my neighbor to say there are twenty gods or no God. It neither picks my pocket nor breaks my leg”—but that was in a simpler time. In a culture devoted to a minutely particular screening of language for microaggressions, the injury inflicted by opinions becomes the most talked-about form of harm. There are no socially useful gadflies in Microaggression World—unless, of course, you think it’s okay for some ideas to be challenged but not your favorite ones. And no one would ever be so inconsistent, would they?
People who traffic in symbolic manipulation—and that’s most of us, these digital days—are typically inclined to overrate the importance of symbolic manipulation. It’s always tempting to think that to exercise control over symbols—like the Confederate battle flag, which, for the record, I have long despised—is to strike a blow for justice. Again, social media play a key role here: Jerry Gaus once wrote an article “On the Difficult Virtue of Minding One’s Own Business”, but given the hyperpublic character of the web services most of us rely on, and the difficulty of getting any of them to reliably provide intimacy gradients, everyone’s business now seems to be everyone else’s business. In such an environment, ABP—Always Be Policing—is the watchword. Survey and critique others, lest you make yourself subject to surveillance and critique. And use the proper Hashtags of Solidarity, or you might end up like that guy who was the first to stop applauding Stalin’s speech.
Minding your own business, on this commonly held account of things, is a vice, not a virtue, and those who handle disagreement peaceably are ipso facto deficient in their commitment to justice. To restore belief in the positive value of disagreement, here, would be a challenging task indeed. When Bernard Williams writes of disagreement as “an important and constitutive feature of our relations to others,” he is speaking a moral language that’s incomprehensible to those for whom free speech is so last century and for whom history is always a story of moral progress.
How might such people come to see, with Williams, the virtue of moral and epistemic humility? How might they be brought to see that it can be a positive good to belong to a society in which people with deep disagreements, even about sexuality and personal self-determination, can live in peace with one another and, just possibly, converse? I have absolutely no idea.
In his review of Matthew Crawford’s The World Beyond Your Head, Tim Wu explains Crawford’s critique of Immanuel Kant’s understanding of the concept of freedom, which Crawford believes to have had enormous (and enormously negative) social consequences. Then, out of absolutely nowhere, Wu writes,
I sometimes wonder if Crawford’s beef with Kant is personal: for all the dangers of driving a motorcycle, my guess is that he would prefer going down in flames to living like the philosopher, whose life appears to have been among the most boring in recorded history. What else can you say about a man who opined on freedom yet is widely believed to have never had sexual intercourse?
Wu helpfully adds to that last sentence two—not one, two—links to Google Books searches for “kant + never + had + sex.”
When people talk about the hypersexualization of our culture, they’re usually referring to, for example, the advertising industry’s attempts to get even children to present themselves as sexual, and sexually desirable, beings. And this is true enough, important enough, lamentable enough. But comments like the one Wu makes here are even more telling, in a way, because they reveal the ways an obsession with sexuality gets disseminated throughout our whole public discourse. What does Kant’s sexual experience, or lack thereof, have to do with Crawford’s argument? What does it have to do with Kant’s own arguments about freedom or his arguments about anything else whatsoever? Would philosophers need to rewrite their interpretations of Kant’s thought if researchers discovered a liaison with a saucy little chambermaid?
Attempting to parse Wu’s line of thought, I can only come up with this: “Matt Crawford likes to do things, and Kant is famous for not doing many things, and the main thing that he didn’t do is have sex.” And not having sex is, we may infer, the ne plus ultra of boringly not-doing-things. (I mean, just think of poor celibate Joan of Arc—her life had to have been totally boring. So, so boring.) Later in the review Wu comments that Crawford’s “rather manly and physical ideas of living tend to suggest that someone like Stephen Hawking, bound in his wheelchair, has led a meaningless life.” But hasn’t Wu just made a very similar suggestion about Kant, only based on a different “physical idea of living”?
I’m old enough to remember when the main thing that people talked about Kant not doing was travel—when people thought that it was Kant’s reluctance to leave Königsberg that was his chief oddity. And since elsewhere in the review Wu describes Crawford’s love of motorcycles and dislike of automobiles that insulate us from direct encounters with the world, a reference to Kant’s preference for just staying home might seem appropriate… but no. It’s sex. It’s always sex. See Tom Stoppard’s “Arcadia”:
Hannah: Sex and literature. Literature and sex. Your conversation, left to itself, doesn’t have many places to go. Like two marbles rolling around a pudding basin. One of them is always sex.
Bernard: Ah well, yes. Men all over.
Hannah: No doubt. Einstein—relativity and sex. Chippendale—sex and furniture. Galileo—“Did the earth move?” What the hell is it with you people?
Hannah’s point, of course, is that it’s not “men all over”—it’s Bernard. But who isn’t Bernard these days?
UPDATE: Tim Wu responded very graciously on Twitter to this post, and explained that he referred to Kant’s unsex-life, or sex-unlife, in order to suggest that Kant’s understanding of freedom may have been overly abstract. I’m going to risk ungraciousness in reply by saying that the association of sexual experience with personal freedom is a (perhaps the) founding axiom of our current sexual ideology, but one that’s pretty hard to sustain if we are honest about our lives. I wrote about this some years ago in relation to Anne Carson and Sappho.
And one more point. In “Sext,” the third poem of the great sequence “Horae Canonicae,” Auden speaks with reverence of those who have managed to take the “prodigious step” of ignoring the power of “the appetitive goddesses” to focus their attention on what fascinates them.
There should be monuments, there should be odes,
to the nameless heroes who took it first,
to the first flaker of flints
who forgot his dinner,
the first collector of sea-shells
to remain celibate.
Where should we be but for them?
Feral still, un-housetrained, still
wandering through forests without
a consonant to our names,
slaves of Dame Kind, lacking
all notion of a city.
Maybe those people — and maybe Kant was one of them — know something about freedom ungraspable by those enslaved to the appetitive goddesses.
These six axioms provide all you need to know to navigate the landscape of current debates about judicial decisions:
1) The heart wants what it wants.
2) The heart has a right to what it wants—as long as the harm principle isn’t violated.
3) A political or social outcome that is greatly desirable is also ipso facto constitutional.
4) A political or social outcome that is greatly undesirable is also ipso facto unconstitutional.
5) A judicial decision that produces a desirable outcome is (regardless of the legal reasoning involved) proof of the wisdom of the Founders in liberating the Supreme Court from the vagaries of partisan politics so that they can think freely and without bias. The system works!
6) A judicial decision that produces an undesirable outcome is (regardless of the legal reasoning involved) proof that the system is broken, because it allows five unelected old farts to determine the course of society.
From these six axioms virtually every opinion stated on social media about Supreme Court decisions can be clearly derived. You’re welcome.
Here’s a puzzling report from the New York Times:
A recent report from UBS Wealth Management found that people with more money are generally happy, which probably doesn’t come as much of a shock. “I would say that millionaires in general are very happy,” said Paula Polito, chief client strategy officer at UBS Wealth Management Americas. “I wouldn’t confuse happiness with contentment or satisfaction or achievement.”
Got it. Happy but not necessarily satisfied or content.
The UBS report found that satisfaction rose in line with wealth: 73 percent of those with $1 million to $2 million, 78 percent of those with $2 million to $5 million and 85 percent of those with over $5 million reported that they were “highly satisfied” with life.
Oh. So they are satisfied. Satisfied and happy? Satisfied and happy but not content?
What piqued my curiosity was how conflicted the report’s respondents seemed to be about the source of their wealth. They often have jobs that entail long hours, high pressure and working vacations.
Are those things satisfying? Happiness-conducive?
“Part of this pressure to keep going is less about greed and more about insecurity that might be self-imposed,” Ms. Polito said. “If you ask people, ‘If you knew you had five more years to live, would you act differently?’ they say they would. That’s a showstopper.”
Happy and satisfied but insecure?
Money buys happiness, the report said. But what good is that happiness if the millionaires who have it cannot enjoy the freedom the money gives them, the freedom that most people would love to have?
But if the inability to enjoy freedom doesn’t make you less happy or satisfied, is it a problem? If so, why?
My takeaway from reading this article: no one involved, from the investigators to the respondents to the reporter, has any idea what they mean by “happy” or “satisfied” or “content” or “free.”
Let’s try to think about these things, starting perhaps with W. H. Auden’s poem “The Unknown Citizen.” Everyone’s assignment: read this poem, think about it for a month, and then try again.
In the last few weeks, a soft fog of nostalgia has settled over much of England as the country commemorates the 25th anniversary of Italia 90. It was one of the few times that the national team did better than expected. The Three Lions had been inept, or profoundly unlucky, for several years, reaching their nadir by losing every match in the 1988 European Championship. One London paper shouted to Bobby Robson, the England manager, “IN THE NAME OF GOD, GO!”—a nice echo of Oliver Cromwell’s words to the Rump Parliament—and then, when the team was held to a draw by Saudi Arabia, “IN THE NAME OF ALLAH, GO!” But Robson didn’t go, and continued to lead the team through World Cup qualifying and into the tournament itself, where, to general astonishment, they made it to the semi-finals, losing to West Germany—of course—on penalties—also of course.
In the later stages of the tournament I was in London, visiting England for the first time in my life. At the time I knew nothing about soccer; I had never really paid attention to it. I do not believe I even understood that the World Cup was going on. But one evening my wife and I were walking near Covent Garden and trying to understand why the streets were so empty. We had not been in London long, but we understood that this was not normal; was not anything like normal. Then we started hearing the shouts.
From time to time, from every pub in earshot, groups of people would cry out: in fear, in anticipation, in misery—and three times in ecstasy. The last two marked the moments when England’s brilliant forward Gary Lineker converted two penalties to bring England back from a deficit against Cameroon, sending them into the semi-finals. But at the time I didn’t know that. I had to get back to our hotel and turn on the TV, and then read the next morning’s newspapers, to piece together the events of the evening. Gradually it dawned on me that this World Cup thing was a pretty big deal.
A few days later an American couple then living in London invited my wife and me to have dinner at their flat and then watch England play the Germans. I was half-disappointed—I had wanted to find out first-hand what it would be like to watch such an event in one of those raucous pubs—but it was impossible to say no. It was also the Fourth of July, which it was sensible to spend with my fellow Americans (though our pre-match dinner at a nearby Thai restaurant was not, perhaps, fully orthodox). But whether I would have enjoyed the pub experience more or not, in that little flat in Bloomsbury I came for the first time to understand something of soccer as a game, and of its role in English society.
I remember the somber dignity of the pre-match commentary—it sounded to me as though they were announcing the beginning of a war—and then, once the game started, the occasional shouts and curses from nearby flats and the streets below. But mainly I remember Paul Gascoigne, whom I had already noticed in the Cameroon match: his long pass to Lineker—I didn’t yet know to call it a “through ball”—that led to the third goal gave me my first awareness of the beautiful geometries of soccer. In the match against Germany I couldn’t stop watching him: he didn’t look like what I thought an athlete should look like, with his chunky frame and long spindly legs, and he ran a bit like the Tin Man, upright and jerkily. Yet he made things happen; he caused constant trouble for the opposition. And when, on receiving the yellow card that would have kept him out of the final, he broke down in tears, my eyes filled also.
Of course, I didn’t really know what was going on: I assumed that Gascoigne had been dismissed from the match, and couldn’t understand why he was so upset if he could keep playing. (He would later say, “When things are good and I can see they’re about to end I get scared, really scared. I couldn’t help but cry that night.”) But the emotional intensity of the players and the fans, especially as the match moved towards the penalty shootout, and the utter devastation on the faces of Stuart Pearce and Chris Waddle when they missed their penalties, simply radiated from the screen, overwhelmingly.
When we left the flat to walk back to our hotel, there were angry drunk people on the streets of London, but not too many of them. (We would learn the next day that the more violent ones had congregated in Trafalgar Square, and were thankful that our walk home hadn’t taken us in that direction.) Most of the people we saw looked dazed, spent, and yet somehow exhilarated. It was clear that something of great import had just happened to them. And it was clear that for the rest of my life I would be a soccer fan.