(To be continued…)
A. Well … Loki’s quite right, you know — at least in terms of what he says, as opposed to what he means. He means that we want to be ruled by him, a claim that I would firmly though politely (his being a god and all) reject. But do we want to be ruled? Of course we do. That’s why human societies so strenuously avoid direct democracy. Rule is tedious; it’s boring — almost no one actually wants to do it — we have a thousand other things we’d rather pursue, including, as a high priority, announcing to everyone who’ll listen how much better off the world would be if we ran it. For every person who votes there are at least four who want to tell you how they have all our political questions sorted out. Of course we want to be ruled. The only questions are who will rule us and how they will do so. And I’m making a proposal concerning those questions.
B. You’re confusing delegation with abdication. Those of us who delegate certain civic responsibilities by electing representatives aren’t abandoning self-rule! Note that, for one thing, we reserve the power to recall our representatives if we think they are abusing the trust we have placed in them — at the next election or, in desperate circumstances, earlier.
A. “If we think they are abusing the trust,” indeed. In a democracy hoi polloi are notoriously incompetent at figuring out whether they are being abused, having a strong tendency to re-elect their abusers while rejecting with alacrity people who are telling them sober and necessary truths. Didn’t Burke tell us this long ago?
When the leaders choose to make themselves bidders at an auction of popularity, their talents, in the construction of the state, will be of no service. They will become flatterers instead of legislators… If any of them should happen to propose a scheme of liberty, soberly limited, and defined with proper qualifications, he will be immediately outbid by his competitors, who will produce something more splendidly popular. Suspicions will be raised of his fidelity to his cause. Moderation will be stigmatized as the virtue of cowards; and compromise as the prudence of traitors; until, in hopes of preserving the credit which may enable him to temper, and moderate, on some occasions, the popular leader is obliged to become active in propagating doctrines, and establishing powers, that will afterwards defeat any sober purpose at which he ultimately might have aimed.
The best description imaginable of President Trump. Again, I think the people have proven both their incapacity to rule themselves and their fundamental disinclination to do so.
B. You would win me over with this citation of Burke if I didn’t know that Burke would have been horrified by the proposal you’re making.
A. Would he have? Is Burke the enemy of aristocracy?
B. Of your kind of aristocracy, I believe he is. He wrote in his famous letter to the Duke of Richmond, speaking of the hereditary aristocracy, “You, if you are what you ought to be, are in my eye the great oaks that shade a country, and perpetuate your benefits from generation to generation. The immediate power of a Duke of Richmond, or a Marquis of Rockingham, is not so much of moment; but if their conduct and example hand down their principles to their successors, then their houses become the public repositories and offices of record for the constitution.” This kind of long-term care for the good of a dear local place, or even a fatherland, is unlikely to be in the minds of your New Meritocrats. Indeed, I suspect that you’ll want to have such sentimentality bred out of them.
A. The most interesting and important phrase in that quotation is “if you are what you ought to be” — he knew perfectly well that the British aristocracy rarely were what they ought to be, and on occasion let them know his opinion of them with considerable asperity. He preferred that aristocracy to the rule of the demos, and with good reason. But I think that if we could bring Burke back now, I would at least try to convince him that there is a better model of aristocracy than the one he knew.
B. Yeah, well, good luck with that. But let’s not waste time debating counterfactuals or imagining alternative histories. In your imagined world, the demos will have no voice. You say that’s fine, because they don’t want one. But of course some of us will want one. And — let me guess here — you’re not planning to give us one. You’re going to offer no voice, but the possibility of exit. Right?
A. You are correct, sir.
Okay, Ron, you ask whether you’re allowed to sneeze. A tendentious way of presenting the issue. Of course you’re allowed to sneeze — it’s not as though anyone can stop you. If you’re in a closet with your family hiding from intruders bearing pistols and daggers, you’re allowed to sneeze, but I wouldn’t recommend it.
What’s that? Watching deer eat from a feeder isn’t like hiding from violent criminals? You’re right, Ron, it isn’t. Not very much, anyway. But again you’re missing the key point that you should be focusing on. We’re trying to establish a principle here, Ron, and the principle is that you can suppress a sneeze if you want to. But — and here we’re approaching the crux of the matter — you didn’t want to.
Go back and watch that video again, Ron. Look at those beautiful creatures: their delicate faces, their gentle demeanor, their polite interest in the contents of the feeder. And then the white snow in the background. There’s serenity there, a peaceful interlude in our lives that are so full of conflict; a chance for the deer to forage a bit — always more difficult for them in the winter — and for us to have a moment’s communion with the natural world.
And that’s when you decide to let one rip, Ron. Great.
You are what’s wrong with America, Ron, did you know that? You could have suppressed your impulse to sneeze, suppressed it for the greater good, for the good of the deer and your wife (if that’s your wife) and for all the good people of YouTube; but you chose not to do that. You didn’t even turn aside, or sneeze into your sleeve. You thought indulging your impulse was the most important thing in the world, and you got positively angry when someone suggested to you that it just might not be. A total lack of impulse control is what’s sending our country into what may be a permanent moral decline, and you’re the poster child for that vice.
Thanks, Ron. Thanks a lot.
One of the most regular running jokes in my family, for many years now, is that I don’t play Wii Boxing because I think it’s too violent. We make a joke of my tender conscience, but I really do wince when a little Mii’s head snaps back. I can’t play for more than a couple of minutes. I pause the game; I switch to golf, or tennis, or frisbee. My discomfort is genuine, and deeper than any reasonable standard would deem appropriate, and (to me, anyway) not funny at all. The roots of it sink deep into my life; follow those roots 40 years deep — give or take a few days — and eventually you’ll find yourself in front of a little black-and-white television set, in Birmingham, Alabama, on the first day of October 1975. Three days earlier I had turned seventeen.
Until then, I had been for most of my young life a very serious boxing fan. Boxing was common on network TV in those days, which was good, because network TV was all we had. Muhammad Ali was of course the dominant figure of the era, the one you couldn’t escape even if you wanted to, and a few years earlier, in my local library, I had picked up Sting Like A Bee: The Muhammad Ali Story, written by the admirable light-heavyweight Jose Torres in collaboration with Bert Randolph Sugar. I read it, read it again, went back to the library to renew it, and read it one more time.
What fascinated me wasn’t the biographical narrative, but Torres’ account of what life in the ring was really like. I have never forgotten his words in praise of body-punching:
I’ve hit fighters in their bodies with so much force that they couldn’t help but let out an involuntary groan like a wounded wolf. Usually the man who connects will jump at the hurt fighter with more punches. I never attacked after such a punch. I used to step back and let my rival savor every second of pain. I was not only a sadist but a technician; I knew how discouraging those punches were to the body. I became world’s champion by throwing one. A left hook to the liver.
You can see the left hook Torres is talking about at the beginning of this clip: Willie Pastrano is the victim, and it takes Willie about two seconds after the punch lands to feel its effect. When he does, he crumples. He gets back up, God bless him, and finishes the round, but that’s as far as he can go. The ref stops the fight, and Torres takes the title.1
It was Pastrano’s last fight. He retired, age twenty-nine.
I didn’t fight, myself, aside from a handful of schoolyard flailings; I was small for my age, already a lover of words, and Torres wrote vividly; I became a literary boxing fan long before I knew that that was a tradition. By the time Ali fought Joe Frazier for the third time I considered myself a connoisseur. I had never heard of A. J. Liebling and his “sweet science of bruising” but I would have loved it if I had known.
I don’t think I had watched the first Ali-Frazier fight live, though I had seen replays on Wide World of Sports. Until that fight it was commonly said of Ali that he wouldn’t be able to take a punch, but in the fifteenth round of that fight Frazier hit Ali with as perfect a left hook to the jaw as has ever been thrown … and Ali got right back up. No one has ever had a more devastating left hook than Frazier, and no matter how many times I watch that clip I still cannot understand how that punch didn’t knock Ali cold. In slow motion you can see Ali just beginning to turn his head away a millisecond before the punch lands, though it doesn’t seem likely that that small motion could have made a difference. But in any case, no one ever — ever — again said anything about Ali being unable to take a punch.
I didn’t see the second fight either, and all I remember from it is the controversy about Tony Perez, the referee, who let Ali repeatedly grab the back of Frazier’s head and pull it down in their clinches. But by the time the third fight rolled around I was fully alert to the drama of it. I understood the contrast in styles — after all, there has never been a more obvious one: Frazier moving relentlessly, maliciously forward, head low, throwing hook after hook after hook to head and body, with both hands; Ali upright and bouncing, circling always to his left, disdaining body punches and hooks in favor of rapid-fire straight lefts and rights.
I understood also that these men were not rivals but rather actual enemies, that they truly hated each other. Having lived all my life in Alabama, where the world was neatly and simply divided between white people and black people, in that order, I don’t think I then grasped the racial dimensions of that hatred. I knew that Ali called Frazier a “gorilla,” but I never imagined the significance of a light-skinned Negro man saying that to a dark-skinned one. I might have been awakened to that dynamic if I had known that a few days before the fight, at the Marcoses’ palace in Manila, Frazier had leaned over to Ali and quietly said, “I’m gonna whup your half-breed ass.” In turn, Ali would say to his corner just before the fight, “I’m gonna put a whuppin’ on this n*r’s head.” But I didn’t learn about any of that until later; I just knew that I had never anticipated anything in my short life as passionately as I anticipated what Ali had already called the Thrilla in Manila.
The classic account of what happened in that ring — and what happened before, and after — was written for Sports Illustrated by Mark Kram, and it remains the finest essay in sportswriting I have ever read. It captures with uncanny faithfulness the single fundamental fact about that fight, which is its ceaseless and horrifying brutality. By the third round Ali had pummeled Frazier so relentlessly that I was embarrassed for Joe, and I didn’t want to watch any more; I also knew that I would watch until the end, which I expected to come any moment. Then Frazier started to fight back.
As the advantage shifted back and forth between the two boxers, I watched in a state of ongoing incredulity. It was like seeing that Frazier punch that dropped Ali in their first fight, but a hundred times — a thousand. Ten thousand, it seemed. After a while I simply could not understand how either man remained standing, yet stand they did. And they punched — though “punch” is a pathetic word: the only adequate words are the ones that seem hyperbolic, like “bludgeon.”
It went on. For a time, for several rounds in the middle of the fight, Frazier got inside Ali’s guard and planted the top of his head under Ali’s chin and smashed Ali’s flanks and jaw again and again until I couldn’t imagine anything else happening, ever; but eventually, as the number of rounds (the number of years, I almost said) mounted, he grew exhausted and couldn’t get in there any more. And Ali, freed from that terrible pressure, found room to move; and then those long guns fired, repeatedly finding Frazier’s face and turning it gradually to pulp.
Frazier wouldn’t have quit, of course, under any circumstances less severe than death, but his trainer Eddie Futch couldn’t bear it any more and stopped the fight. The day after, Ali talked to Kram about what it had been like to be in that ring: “It was like death,” he said. He praised Frazier: “I’m gonna tell ya, that’s one helluva man, and God bless him” — but then, there was no reason for him to stint the praise. He had won; and Frazier had never had words to hurt Ali the way Ali’s contempt had slashed him. The really remarkable thing was Frazier’s response, uttered just hours after his long war with Ali drew to its terrible close. “Man, I hit him with punches that’d bring down the walls of a city. Lawdy, Lawdy, he’s a great champion.”
As for me, I sat there for a while, once it was over, in my little bedroom in Alabama, staring at my little black-and-white TV. I could have watched elsewhere in the house on a larger screen, and in color, but I would never have risked being distracted by my uncomprehending family. So I sat there alone and in silence. I didn’t know it, but boxing was over for me; I would never watch another bout with interest and attention, and my tolerance for boxing’s aggression would shrink and shrink until I found myself avoiding Wii Boxing. And I still remember that night, when sleep took long to come; and for days afterward, a haze hung over my mind.
1. Torres beat Pastrano thanks in part to the instruction in combination-punching that Cus d’Amato — later Mike Tyson’s trainer — gave him. Torres knew what it was like to be on the receiving end of such punches as well: four months after winning his crown he fought a non-title bout against a journeyman heavyweight named Tom McNeely, and though he won the fight he took such a beating to the body that some observers thought he was never again the same fighter.↩
B. Ah, the famous Imperial Examination! Ideal of meritocrats everywhere! But it’s not as though everyone in China had an equal shot at passing it — or even taking it. The rich who could afford tutors and bribes had a massive advantage over poor families whose sons had to rely on their wits and hard work. There was always some social group that was excluded from taking the exams — and of course women were never allowed to take them — and there was massive cheating —
A. Of course, of course. There is no possible system of politics or anything else that can’t be gamed, and in which the rich do not have advantages that the poor lack. To raise that as an objection to any scheme for social improvement is to allow the perfect — the impossible, the unrealizable perfect — to be the enemy of the good.
There will certainly be inequality at the beginning, but since money and discipline can only partially compensate for a lack of brains, and poverty can only partially impede the extravagantly intelligent, there would in such a system, over time, arise greater and greater equality both of opportunity and achievement. If you care about that kind of thing. I do, sort of, but not as much as I care about creating a political system in which the very best actually rule.
B. And it’s your view that China in the Imperial era actually achieved this genuine meritocracy?
A. Glad you asked. The answer is a firm No, in part because of the cheating and gaming we talked about a moment ago, but also because in the Imperial system the best were allowed to advise — but not to rule. The cult of the Emperor and the imperial family remained in place. China had created an enormously powerful system for funding and training the most gifted young men — and yes, it’s a shame that it was men only — ever devised, but restricted the ability of those men to set the course of Empire. So what I am arguing for is the next and obvious step: putting the aristoi — the genuine aristoi, not those of the dominant social class — in charge.
B. I wonder if you’d get “the genuine aristoi.” I recall that one Chinese philosopher, Ye Shi, commented that “A healthy society cannot come about when people study not for the purpose of gaining wisdom and knowledge but for the purpose of becoming government officials.”
A. I think Ye Shi may have been a little too concerned about people’s motives. If we can create examinations that accurately test for the skills that our rulers really need to have, and we select as our leaders the people who have those skills, who cares if their motives aren’t pure?
B. Hm. If you tell me that you can produce the best medical researchers, or particle physicists, by means of an examination, I might — might — take the notion seriously. But political rule? I don’t think so. Political leadership requires a whole host of skills and virtues — people skills, as we like to say, prudence, discernment, judgment of character — all traits that can’t possibly be tested for, but only developed through practice, experience. And some of those traits are virtues — so the character of the person in leadership actually matters. Politics isn’t a matter of A/B testing, of choosing the best option from a group of four!
A. Isn’t it? I’m not so sure. But I’ll grant that under democracy what you say may well be true — let’s say it is true. But democracy is what I’m trying to get rid of here, and one of the chief reasons I want to get rid of it is its tendency to generate just this kind of leader: someone who doesn’t know anything about anything but can somehow generate trust ex nihilo. I want — society needs — to ground our leadership choices in more objective terms of excellence, and relieving ourselves of the burden of democracy will give us a chance to do that. If instead of choosing leaders who can please hoi polloi we choose leaders with demonstrable expertise in the issues we face — poverty, poor health, inefficient energy usage, upheavals due to foreign conflicts, uncertainty because of irrational foreign governments —
B. Some of which are democracies. And even the ones that aren’t often have governments that stand because of their ability to “please hoi polloi.” Do you think your exam-crushing experts are going to have what it takes to deal with such retrograde social orders?
A. I think they’ll have a much better chance than the pols we send around the world today, many of whom have amateurish knowledge of the cultures within which they’re placed — and those are the good ones. I’d rather choose people with some of those social virtues you were lauding from within a pool of the demonstrably knowledgeable than from within a pool produced by our current patronage system.
B. You know America has a foreign service exam, right?
A. Sure. And many of the people who aced it are working and suffering under inept direction from higher-ups who have no business making decisions. We’re like imperial China in that respect.
B. So you want to put the people who ace the exam in charge? And then extend a similar model into the rest of the governmental system?
A. Right — though of course people will need to gain experience over time — I wouldn’t suggest putting a 22-year-old in immediate charge of an embassy because she had the highest test scores.
B. Based on what you’ve said so far, I’m not sure why not. But let’s drop that — I have a different question for you. You’re creating a system in which almost everyone will be deprived of self-government. Do you think people in general will accept such a deprivation?
A. It’ll be a hard sell at first, because most people like to think of themselves as not just worthy of self-determination but positively inclined towards it. But they’re not — not either: not worthy and not so inclined. As I argued from the outset, the demos has made an absolute mess of things, implementing (through their chosen leaders) a vastly long series of selfish and stupid decisions, which they have also tried with considerable desperation to avoid facing the consequences of. But I also think on some level they know this — they understand that they are not suited for self-governance. And when someone comes forward with the ability to explain this to them in non-threatening terms, and to show them that democracy is not inevitable and that there really may be a better way, then I think they’ll be glad to be relieved of the burden of self-rule.
B. So if people are going to be persuaded to relinquish a system in which they choose leaders solely on the basis of trust-inducing capacity, they’re going to need one or more people they trust to do that persuading.
A. Yes. Ironic, isn’t it. But the history of politics is full of ironies. Only Nixon could go to China, etc.
B. You remind me of Loki.
A. Loki?

B. Loki. In The Avengers. Telling people that they were made to be ruled.
(To be continued…)
There is no better journalist in America than Andrew Ferguson, and his brilliant takedown of bad behavioral science provides yet more evidence for that claim. A passage on Stanley Milgram’s famous obedience-to-authority experiment especially caught my eye:
The results were an instant sensation. The New York Times headline told the story: “Sixty-five Percent in Test Blindly Obey Order to Inflict Pain.” Two out of three of his subjects, Milgram reported, had cranked the dial all the way up when the lab-coat guy insisted they do so. Milgram explained the moral, or lack thereof: The “chief finding” of his study, he wrote, was “the extreme willingness of adults to go to almost any lengths on the command of an authority.” Milgram, his admirers believed, had unmasked the Nazi within us all.
Did he? A formidable sample of more than 600 subjects took part in his original study, Milgram said. As the psychologist Gina Perry pointed out in a devastating account, Beyond the Shock Machine, the number was misleading. The 65 percent figure came from a “baseline” experiment; the 600 were spread out across more than a dozen other experiments that were variations of the baseline. A large majority of the 600 did not increase the voltage to inflict severe pain. As for the participants in the baseline experiment who did inflict the worst shocks, they were 65 percent of a group of only 40 subjects. They were all male, most of them college students, who had been recruited through a newspaper advertisement and paid $4.50 to participate.
The famous 65 percent thus comprised 26 men. How we get from the 26 Yalies in a New Haven psych lab to the antisemitic psychosis of Nazi Germany has never been explained.
I’m interested in this because in my book on original sin I referred to Milgram’s experiments quite positively — and moreover, I never did any reading to find out whether they had been subjected to critique. I just assumed that they were universally accepted as valid. And why did I make that assumption? Because Milgram’s experiments confirmed the story I was telling about the return, in the twentieth century, of a widespread belief in human depravity.
Now, to be sure, the book by Gina Perry that Ferguson cites as authoritative on this matter has itself come under some criticism for one-sidedness; Milgram’s famous experiment may indeed hold up, at least in large part. But the point I want to make here is that I didn’t do anything to check it out — for me, the story Milgram told was too good to be false.
A. There’s nothing to be afraid of — but yes (since you’re wondering) my conviction that democracy is a failed experiment does stem, in part, from my reading of the neoreactionaries, especially Moldbug. But I’m not with him all the way — for instance, as you can tell from my earlier comments, I have a good deal more respect for the U. S. Constitution than does Moldbug, who has commented, “The basic nature of constitutional government is the formalization of power, and democracy is the formalization of mob violence.” Nah. But in many other respects his diagnosis of where we’ve gone awry is spot on.
B. Is it? I don’t think so. In fact, hearing that your thoughts have been shaped by Moldbug’s does more to discredit them than anything else you’ve said.
A. Why? Moldbug is a very smart guy — he’s just saying the kinds of things that most people are afraid to say.
B. Maybe. And sure, he’s smart. But he’s not especially knowledgeable about things he needs to be knowledgeable about in order to offer a compelling alternative to the existing political order. For instance, in one of his most-read posts he writes, “Thomas Aquinas derived Catholicism from pure reason. John Rawls derived progressivism from pure reason. At least one of them must have made a mistake. Maybe they both did” — which is absolutely nonsensical. He has no idea what he means by “Catholicism,” “progressivism,” “pure reason,” or “derived.” He has no idea what either Aquinas or Rawls would have made of those terms, or why they would have described their projects in wholly different ways. I distrust Moldbug because Moldbug clearly doesn’t understand — does not have even a minimally competent, first-year-undergrad comprehension of — many of the positions he rejects.
A. All right, so let’s grant, per argumentum, that Moldbug is not an expert in the history of political philosophy. But he doesn’t have to be in order to present a coherent and useful vision of a new direction in which we can go — a new direction I think you’ll agree we very much need.
B. A new direction, I’m not so sure; but a different direction, yes. Anyway, please remember that I’m not asking Moldbug to be an expert, but I do think he needs to have at least a basic understanding of the views he’s rejecting — precisely because he’s grounding the need for his ideas in the conviction that those other ideas are wrong. However, his acquaintance with those ideas is too superficial, and he’s too incurious about what Aquinas and Rawls really think, for me to take seriously his claim that he can offer a compelling alternative.
A. I don’t think that follows — you’re placing too much emphasis on the need to understand some pre-existing tradition of political thought. You’re trying to hold Moldbug accountable to the very system he’s repudiating: you’re rejecting the red pill because it’s not the blue one.
But in any case, let’s not belabor this question. I still have an argument I want to make.
B. Fair enough — as long as I get a chance to make an argument of my own before we’re too old to care.
A. Of course! But now I want to get back to this notion of — as you divined — aristocracy. The word means “rule by the excellent,” or the “best,” and the primary reason people dislike it is that they know that aristocracy never lives up to its name: it is never rule by the most excellent, but by the rich and powerful who in order to justify their rule designate themselves as excellent. That’s why it’s so absurd when people try to overcome resistance by replacing “aristocracy” with “meritocracy” — the words are synonyms, and “merit” can be faked and then justified as easily as can any other claim to excellence. By meritocracy people usually mean “rule by those who have been academic high achievers” as opposed to the popular use of aristocracy to mean “rule by those of high social status” — but given the enormously strong correlation between social status and academic performance, this is a distinction virtually without a difference.
B. So anyone, like you, who wants to make a case for aristocracy/meritocracy in preference to democracy has one big job at the outset: to show how it’s possible for a society to produce genuine aristoi — and put them in charge.
A. Exactly.

B. But even if you do that, you won’t have proved that such an aristocracy would be superior to democracy.
A. Sure. But one thing at a time. And don’t forget, if rule by the aristoi can be a fiction, rule by the demos can be too.
B. No doubt.
A. Okay, so back to work. I think the model we want to consider — though perhaps not to imitate slavishly — is imperial China’s examination system.
(to be continued…)
Damon Linker likes California’s assisted-suicide bill. After rejecting religiously-based arguments against suicide, he writes,
The arguments raised by disability-rights activists are more powerful, since they’re based less on appeals to absolute (and unconvincing) moral strictures than on the law’s potential to lead to bad consequences and abuse. One of those consequences is a kind of soft eugenics in which the terminally ill are subtly pressured to do the “selfless” thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them. One could also imagine a future attempt to expand the law to include not just terminally ill and suffering patients, but also people with chronic and debilitating but not fatal or excruciating illnesses. Finally, there’s the possibility of the law being changed so that it permits not just the patient but also family members or friends to request the lethal dosage. That, too, could lead to the exertion of pressure on a patient to end his or her life.
These are legitimate concerns that should be taken seriously, especially in light of a recent disturbing New Yorker article about how Belgium allows euthanasia for people suffering from depression. But the California law is written to avoid being applied in anything like the ways feared by most disability activists. So yes, let’s beware future amendments to the law that could lead to abuse. But that’s no reason to oppose its current, limited, and responsible form. (One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.)
I just want to make two comments about this. First, having read the text of the proposed law, I can’t see anything in it that would warrant Linker’s claim that “the California law is written to avoid being applied in anything like the ways feared by most disability activists.” It seems to me that it would be very easy for an attending physician and members of the dying person’s family to practice “a kind of soft eugenics in which the terminally ill are subtly pressured to do the ‘selfless’ thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them.” In fact, I don’t see how a law could be written to prevent that kind of pressure from being brought to bear on the dying.
Second, and in a spirit of theoretical disputation, I note Linker’s claim that “One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.” Doesn’t one? Shouldn’t one? It seems to me that there are cases in which it would be sensible to look at possible future extensions of a proposed law while evaluating its current form — and this could be one of them.
The law opens the choice of physician-assisted suicide to persons with a “terminal disease,” and defines “terminal disease” as “an incurable and irreversible disease that has been medically confirmed and will, within reasonable medical judgment, result in death within six months.” Surely someone will say, “Why six months? Why not a year — or more — if ‘reasonable medical judgment’ concludes that death is overwhelmingly likely?” That is, there’s an arbitrariness in the choice of six months as the (pardon the term) deadline for this choice which makes it likely that there will soon be pressure to extend it.
Moreover, there’s a great deal of wiggle room in the phrase “reasonable medical judgment.” One doctor may deem a disease fatal that another finds eminently treatable; and even when fatality is for all intents and purposes certain, people often surprise their doctors. Some cancer patients have lived far beyond the utmost time predicted for them; others die much more quickly than expected. (My father was one of the latter.)
So it seems to me that the law in its current form is already ripe for abuse; and it seems extremely likely that strong arguments will be made for extending the time frame in which suicide may be assisted — in the name of the same compassion that causes Linker to endorse the current bill. So even on non-religious grounds the proposed law seems to me far more questionable than Linker allows.
I want to consider some stories I have read recently, and juxtapose them to one another. Let’s begin by looking at this story:
Last year I told a gay black male who wrote a story about a gay black male that I didn’t care about race or gender, and the class gasped. Even though I explained that I cared more about what happened to the character and about the elegance of the prose, my comment could have been a signal to erect a guillotine on the campus lawn. Nonetheless, the student thanked me after class. He said, “No one looks at my stories. They just look at me.”
Microinvalidations are characterized by communications or environmental cues that exclude, negate, or nullify the psychological thoughts, feelings, or experiential reality of certain groups, such as people of color. Color blindness is one of the most frequently delivered microinvalidations toward people of color.
“People are just people; I don’t see color; we’re all just human.” Or “I don’t think of you as Chinese.” Or “We all bleed red when we’re cut.” Or “Character, not color, is what counts with me.”
And then this story:
Academics of color experience an enervating visibility, but it’s not simply that we’re part of a very small minority. We are also a desired minority, at least for appearance’s sake. University life demands that academics of color commodify themselves as symbols of diversity — in fact, as diversity itself, since diversity, in this context, is located entirely in the realm of the symbolic. There’s a wound in the rupture between the diversity manifested in the body of the professor of color and the realities affecting that person’s community or communities. I, for example, am a black professor in the era of mass incarceration of black people through the War on Drugs; I am a Somali American professor in the era of surveillance and drone strikes perpetuated through the War on Terror….
It’s not that we’re too few, nor is it that we suffer survivor guilt for having escaped the fate of so many in our communities. It’s that our visibility is consumed in a way that legitimizes the structures of exclusion.
Skin feeling: to be encountered as a surface.
And finally, Ralph Ellison from Invisible Man, where so much of this discourse begins:
I am invisible, understand, simply because people refuse to see me. Like the bodiless heads you see sometimes in circus sideshows, it is as though I have been surrounded by mirrors of hard, distorting glass. When they approach me they see only my surroundings, themselves or figments of their imagination, indeed, everything and anything except me.
It’s easy — especially for anyone who discounts racism and the effects of racism as major shapers of the American cultural experience — to throw up one’s hands and say “It’s impossible to win with these people! It’s white people’s fault if they’re visible, it’s white people’s fault if they’re invisible! Heads they win, tails we lose!” Indeed, it’s not just easy, it’s inevitable.
But you know, it has to be hard to be either invisible or hyper-visible; and white America really does oscillate between casual clueless racism and genuine heartfelt desire to achieve colorblindness. (Though probably there has been a general drift towards the latter, which could be taken advantage of rather than resented.)
I would love to have a clear answer to this conundrum, but I don’t — except to note that it is a conundrum, an insoluble puzzle, a rhetorical circle — it’s the Mister Bones’ Wild Ride of political rhetoric. So maybe this is a good point at which to remind ourselves that, in this context, both “visibility” and “invisibility” are largely metaphorical. And then look through and beneath them for the more complex reality that they fail to capture — even if they may have been at times in their history conceptually useful and powerful. I think many critics of American racism have attached themselves to a vocabulary that just drops them in a ride that never ends.
(The first installment of the dialogue is here.)
B. But you’ve totally shifted ground here! What you’re offering now is not a critique of self-government, or even representative democracy, but of a corrupt electioneering system which couldn’t serve plutocracy better if it were designed to do so – and really, it is designed to do so, come to think of it.
A. And every “informed voter” knows that that’s the case, and expresses much tut-tutting disapproval, and occasionally even raises his or her voice in outrage — but keeps re-electing the same corrupt and/or weak-willed losers, or their newest clones. I have complained about American voters being ignorant, but even when they’re not ignorant they are thoughtless. Every opportunity they have to address the corruption of the system — and they have that opportunity every two years — they squander. They listen to the empty promises of politicians who flatter them, and pay not the slightest attention to the needs of society as a whole or those who come after them — that’s the selfishness, the third item of my indictment. They have repeatedly abused the privilege of voting, and they deserve to have it taken away from them.
B. Well, it’s a powerful indictment. According to your argument, then, this nearly universal abdication of democratic responsibility has led (one must assume) to the collapse of American society, widespread poverty, and internal and external powerlessness. Because clearly it wouldn’t be possible for a political system as corrupt and inefficient as the one you’ve described to produce even a mediocre social order — let alone an enormously wealthy and powerful society, a global hegemon such as the world has rarely if ever seen. So perhaps you’re living in the universe next door to mine….
A. No, I think we’re in the same universe, though I might want to argue whether a country’s achieving the status of “a global hegemon such as the world has rarely if ever seen” is, as you seem to think, a good sign. But let’s set all that aside and cut to the chase. As I read current events, and the history that has produced them, American power is chiefly the residual result of decisions made long ago by a much smaller electorate, a kind of aristocracy in all but name. Insofar as that aristocracy excluded women, people of color, and (at first) poor white men, it was unjustifiable; but another way to look at it is that the power went to the best-educated in society, the least vulnerable to the pressures of external forces. We are at work dismantling the brilliant edifice they constructed, though perhaps not fast enough for some; but it was so magnificently built, so delicately balanced— “a machine that would go of itself” — that it has proven exceptionally difficult to dismantle. But it will be dismantled, and just as we are continuing to benefit from the wisdom of our ancestors, our grandchildren will suffer from the stupidity of voters today.
B. You realize, I trust, that your historical argument could be challenged, and seriously challenged, at every single point.
A. Yeah. But we’re having a conversation, I’m not writing a treatise.
B. You also realize, I trust, that where you’re headed would constitute a more radical dismantling of the Constitution than anything else on the table?
A. No. I absolutely deny that. It would be a way to re-articulate and re-implement genuinely Constitutional principles in a new social order, one in which ignorance, thoughtlessness, and selfishness are no longer impediments to political power and influence.
B. I’m going to do you the honor of assuming that you are not going to argue for confining the franchise to white males who make more than $100,000 a year….
A. Much obliged.
B. But this is going to be an argument for a New Aristocracy, isn’t it?
A. It is.
B. I was afraid of that.
(to be continued…)
A. It’s time to accept a simple and yet profound fact: democracy is a failed experiment. People throughout the Western world — well, hold on: let’s just confine this discussion to America. Democracy in America is a failed experiment. Americans have demonstrated conclusively that they are too ignorant, thoughtless, and selfish to be trusted with self-governance.
B. Ignorant, thoughtless, and selfish! What a trifecta! Hyperbole much?
A. It’s not hyperbole. Let’s take my charges one at a time. Surely I don’t need to recite the dark litany of polls and studies that demonstrate how grossly misinformed Americans are about the basics of our political system, current laws and policies, the most elementary facts of world geography—
B. No, no, you don’t have to recite that litany — I have it by heart. But do you think that’s a new thing? Are you under the impression that our ancestors were learned and wise, spending their evenings discoursing on the subtleties of recent Supreme Court decisions?
A. I’m tempted to say yes. After all, they weren’t sitting around watching American Idol or hammering out wrathful comments on YouTube videos. They attended lectures and chautauquas, they participated in town halls and debating societies —
B. “They” did? You mean a handful of the wealthier and better-educated white men did, I think.
A. As I said, I’m tempted to say yes — and I really do believe the situation was more complicated, and better, than you have suggested. But for now I’ll waive the point. Let’s posit that Americans today are at least as knowledgeable as their ancestors were. Okay?
B. Well … okay. For now. I reserve the right to debate this point later.
A. Fair enough. So what I want to say is that ignorance today matters more than it did in the past, because the role of government in our lives is so much greater. A hundred and fifty years ago it was possible to live a full and happy life with minimal experience of government. About a hundred years before that it was possible for Samuel Johnson to write, “How small, of all that human hearts endure, / That part which laws or kings can cause or cure.” Such innocent times! Now “laws and kings” have insinuated themselves so deeply into all our lives that ignorance of their power and influence can exact horrifying costs.
Plus, we have so many more educational opportunities than our ancestors —
B. Hang on, hang on — this is a dialogue, remember?
A. Sorry. Please go on.
B. Thanks. I think you need to stop and reflect on the fact that there is so much more to be ignorant of now than there was 150 years ago — and the increased complexity of government is a function of the increased complexity of the world. The transportation and communications technologies that arose in the 20th century have created a “global village” the very existence of which creates a need for wide-ranging knowledge that our ancestors couldn’t have imagined — to blame today’s people for —
A. I’m not blaming anyone.
B. Well, you kinda are.
A. I’ll try not to, because it’s not necessary to my argument. People may not be at fault for being too ignorant for self-government — but they still are too ignorant for self-government.
B. But isn’t that why we have a representative democracy? People elect representatives who can devote their full time and energy to mastering the complexities that we aren’t able to master.
A. Try watching C-SPAN for a while and tell me if you think those are people capable of “mastering complexities.”
B. Well, I have watched a good bit of C-SPAN and I have seen some pretty wonky Congresspersons — I think your critique is a lot more applicable to the politicians who make a point of saying and doing things that will land them on CNN and in the big newspapers.
A. Okay, that’s a fair point. But I think there are two other points you’re neglecting. First, even the wonky members of Congress tend to be selectively wonky. They have their one little area of expertise — or what they flatter themselves is expertise — and in other matters they just take their direction from their party’s leadership. And second, look at what actually gets done in Congress: certainly not intelligent and reasonable laws crafted by deeply knowledgeable people to whom their colleagues defer! Rather, it’s pork-laden overstuffed monstrosities stitched together in order to please the whims of party leaders, big donors, and lobbyists for the hyper-wealthy corporations to which both major parties are equally indebted.
(to be continued…)
There’s a great deal of talk about “safe spaces” these days, but I put the phrase in quotes because rarely do these conversations refer to actual spaces. Instead, people seek social environments in which they’re protected against verbal assault, or confrontation, or mere discomfort. Place as such doesn’t enter into it.
In stories, though, the idea of the safe space is a powerful one — even if the safety often proves illusory. (“The calls are coming from inside the house.”) And when there is genuine safety it’s rarely complete or permanent. In The Lord of the Rings Tom Bombadil’s house and Rivendell and Lothlorien are places of absolute refuge for the beleaguered characters, but we are reminded that none of them could hold out forever against the evil of Sauron. Perfect rest can be found in them; but only for now. The contingently safe space is a curiously strong theme in the Harry Potter books: living in the Dursley house grants Harry protection from Voldemort — until he comes of age; 12 Grimmauld Place protects members of the Order of the Phoenix — as long as they manage to prevent anyone from seeing them enter or leave; Hogwarts itself is invulnerable to Voldemort and the Death Eaters — but only as long as Dumbledore is present and in charge.
There are of course genuinely safe spaces in literature, and perhaps many readers will have favorites. I certainly know what mine is: it’s Nero Wolfe’s brownstone on West 35th Street in Manhattan.
Of all fictional series, Rex Stout’s Nero Wolfe stories have the most ingenious and fertile conceit (with the possible exception of Patrick O’Brian’s Aubrey-Maturin books). It is twofold: that the enormously fat Wolfe never willingly leaves his house, preferring to solve crimes simply by application of brain power; and that the man who moves for Wolfe, who serves as a kind of mobile prosthetic body for him, Archie Goodwin, narrates all the stories. There’s much to commend about this double conceit’s power to generate good stories — and about Rex Stout’s ability to conjure a consistently delightful narrative voice for Archie — but I want to talk about the house.
If you climb the steps and knock on the door, it will probably be answered by Fritz, Nero Wolfe’s chef — simply because Fritz’s kitchen is on the first floor, along with the dining room and Wolfe’s enormous office. The rest of the house is described at the Nero Wolfe Wikipedia page (linked above):
Nero Wolfe has expensive tastes, living in a comfortable and luxurious New York City brownstone on West 35th Street. The brownstone has three floors plus a large basement with living quarters, a rooftop greenhouse also with living quarters, and a small elevator, used almost exclusively by Wolfe. Other unique features include a timer-activated window-opening device that regulates the temperature in Wolfe’s bedroom, an alarm system that sounds in Archie’s room if someone approaches Wolfe’s bedroom door or windows, and climate-controlled plant rooms on the top floor. Wolfe is a well-known amateur orchid grower and has 10,000 plants in the brownstone’s greenhouse. He employs three live-in staff to see to his needs.
A back door, rarely used and treated as something of a secret, leads to a small garden where Fritz grows herbs and which features a vaguely described way out onto 35th Street (which seems also to be a secret, and is probably invisible from the outside, like 12 Grimmauld Place.)
The brownstone possesses an aura of self-sufficiency: I suppose Fritz has to shop for the food he cooks, but his larder seems magically full, and the meals served in that kitchen or in the adjoining dining room are in my imagining conjured more than made. (Fritz’s rooms are in the house’s basement, where he keeps an extensive collection of cookbooks.) The little world of the greenhouse, with its custodian Theodore Horstmann who lives among his orchids at the top of the house, is like a chunk of Faerie that one enters not by walking through a strange forest but by taking Wolfe’s little elevator.
Often in the books one of Wolfe’s clients finds himself or herself — usually herself — in danger and is brought by Archie to the house, whereupon the doors are locked and all creatures of evil intent are excluded. In one story a woman tries to stab Wolfe as he sits at his custom-made desk in his office; she dies instead. Wolfe is invulnerable there; I’m reminded again of Tom Bombadil, though in darker and more cynical form, utterly safe “within limits he has set for himself” and making others safe there too.
All of this is of course merely a dream of refuge dreamed by someone (me) who is one of the safest people in the world. As I write these words, refugees from the Middle East are pouring into Europe, and someone posted on Instagram images of notices that the city of Vienna has put up in all the transportation centers. The one in English (I saw Arabic ones too) begins with the word “WELCOME,” goes on to explain the various services the city provides for refugees and to instruct visitors how to find help, and then ends with a single three-word paragraph:
You are safe.
“You are safe.” Could there be more powerful, more important, more consoling words? I have never needed to hear them in the way those thousands of refugees need to; and yet they answer to the deepest of all needs. For even water and food can wait for a while.
I have thought sometimes of finding myself in New York City, pursued by evil people who will do terrible things to me before they kill me. Somehow, against all odds, I make my way to the house on West 35th Street and rush up the steps and knock. Archie Goodwin opens the door a crack, then ushers me in. Up we go to the guest bedroom on the third floor, down the hall from Archie’s own room. The room is clean and quiet, and an orchid from Wolfe’s greenhouse stands in a vase on the bedside table. Once alone, I take off my clothes and turn out the light. In the morning Fritz will make a delicious breakfast, and there will be plenty of hot strong coffee. In the meantime, I sleep soundly and peacefully. Because here I am safe.
Damon Linker is right to say that the person now known as Kentucky Clerk should resign if she can’t uphold the law as the terms of her job require her to do.
Mollie Hemingway is right to say that the attacks on Kentucky Clerk are utterly malicious and utterly mendacious.
There are really two significant stories here: one concerns Christians who think that they ought to be able to dissent from government and get paid by it at the same time; the other concerns secular liberals whose one principle in relation to the repugnant cultural other is “Any stick to beat a dog.”
UPDATE: I tried to comment on Noah’s response to this post, but WordPress didn’t let me. Or I don’t think it let me. Anyway two things: first, did I really sound “outraged”? I didn’t feel outraged. Perhaps I need to work on tone management.
Second, about the question of “significance:” if Kim Davis is a unique figure, then Noah is right, the story isn’t significant. But she may not be a unique figure. There seem to be a number of conservative Christians in America with a complex (possibly contradictory) attitude towards this country: on the one hand, a default patriotism and law-and-order mentality, often rooted in the belief that America is a “Christian nation,” that makes them comfortable with holding government jobs; and on the other hand, a belief that like the Apostles they should “obey God rather than man” and therefore should always be ready to dissent from the powers that be. This leads to someone like Kim Davis thinking that it’s possible for her to swear to uphold the law — but to refrain from upholding the law when it’s one her conscience disagrees with. If a large number of Americans, even in just a few states, feel the same way, then that will have consequences for elections, for laws, for the social fabric. And such consequences would be significant. Enough people have spoken out in support of Kim Davis to make me think that it’s not a trivial story.
I am fond of thought experiments, though many people are not—or so I infer from the fact that every time I propose one the most common response I get is a refusal of its terms. So a number of people who have responded to my recent little exercise have said something like “But that’s not the situation we’re in”—Yes it is, in this thought experiment that I am totally making up—or “I would not vote for either party”—but in this thought experiment you have to choose one.
There’s some of this even in the response from my friend Noah Millman, as when he wonders whether there really are threats to religious liberty. In my thought experiment there damn well are, because I say there are! Against Noah, I say that the premises of my thought experiment are not and indeed cannot be “debatable premises,” because they are the ones I posit simply for the sake of the experiment: thus my insistence at the outset on the term “hypothetical.”
I can’t help being reminded of one of my favorite scenes from Wodehouse, in which the pathologically diffident Gussie Fink-Nottle discusses with Bertie Wooster whether he should follow Jeeves’s advice to build his self-assurance by wearing a Mephistopheles outfit to a costume party:
‘And you can’t get away from it that, fundamentally, Jeeves’s idea is sound. In a striking costume like Mephistopheles, I might quite easily pull off something pretty impressive. Colour does make a difference. Look at newts. During the courting season the male newt is brilliantly coloured. It helps him a lot.’
‘But you aren’t a male newt.’
‘I wish I were. Do you know how a male newt proposes, Bertie? He just stands in front of the female newt vibrating his tail and bending his body in a semi-circle. I could do that on my head. No, you wouldn’t find me grousing if I were a male newt.’
‘But if you were a male newt, Madeline Bassett wouldn’t look at you. Not with the eye of love, I mean.’
‘She would, if she were a female newt.’
‘But she isn’t a female newt.’
‘No, but suppose she was.’
‘Well, if she was, you wouldn’t be in love with her.’
‘Yes, I would, if I were a male newt.’
A slight throbbing about the temples told me that this discussion had reached saturation point.
I continue to believe that a thought experiment like the one I suggested is valuable in the same way that A/B testing is valuable. When someone asks you which of two shades of blue you prefer, you can, I suppose, say “Why just two? Why not fifty shades of blue?” or “Why not green, and red, and burnt umber, and all the other colors?” But maybe we would all learn something, even if something small, if you just picked one of the damned shades of blue. And then we can move on to other experiments after that, and gradually, incrementally, build up a more reliable understanding of our own values and preferences.
To those who would say that A/B testing, and thought experiments, are simple in comparison to real-life decisions, I reply: Precisely. That’s just the point of them. Politics is hard because it’s so outrageously complicated. It’s easy to get lost in all the overlapping questions and competing priorities. If you agree with a political party about seven of its official platform positions, but disagree about only one, but the one is something you care passionately about while the seven are, for you, relatively insignificant—how are you supposed to weigh those things? It’s impossible to say off the cuff. More thinking is required. It helps to break the situation down into its component parts. That’s what a thought experiment like the one I proposed is for.
More about the substance of the matter later; right now, I have teaching to do.
Imagine that there are two leading American political parties. Imagine further that they are in general agreement on all issues except two. (That’s what makes this a true hypothetical.)
The first point of disagreement concerns religious liberty. Party A is a strong supporter of religious liberty; Party B thinks that religious liberty needs to be circumscribed in order to secure maximal equality or justice for others.
The second point of disagreement concerns foreign policy. Party B is in these matters cautious and circumspect, disinclined to adventurism, not isolationist but not interventionist either. Party A, by contrast, never met a foreign conflict it didn’t want to intervene in, and thinks what’s good for military expenditures is good for America. The more of our young men (and perhaps women) Party A can put in harm’s way thousands of miles from home the better it feels about itself. Pax Americana, world without end, y’all.
You (in this thought experiment) are a Christian and a strong supporter of religious liberty; you are also strongly opposed to unnecessary military adventures and foreign intervention more generally.
How do you vote? And on what grounds do you make that decision?
I’ve been thinking about this a good bit lately. While I am, as I have often demonstrated right here on this site, a vocal supporter of religious freedom, I’m also rather uncertain about how my religious convictions should affect my political decisions. The problem arises if we distinguish between individual and collective Christian action.
On the individual level, I know what I am supposed to do: if someone slaps me on one cheek, I should offer him the other; if someone takes my shirt, I should offer him my coat; if someone curses me, I should bless him; I should always seek the well-being of others in preference to my own. (Of course, this is not to say that I actually do what I know I should do.)
If that logic holds in the collective sphere as well, then perhaps Christian churches should not focus too much attention on what is best for them, but on what is best for their neighbors. They might have good reason, in that case, to accept constraints on religious freedom if that meant preventing unnecessary violence, death, and destruction from being unleashed on others.
Now, some Christians might also argue that the Church exists for others, so that promoting religious freedom, even at the cost of lives lost overseas, is still the selfless thing to do. And that could be right, but I think we all ought to be very wary of arguments that provide such a neat dovetailing of our moral obligations and our self-interest.
I honestly don’t know what I think about this, and still less do I know how to apply the proper principles to our own more complex political scene. But I do think it’s right to conclude that there are at least some potential circumstances in which religious believers, in order to be faithful to their religious traditions, would need to refrain from direct political advocacy for those traditions.
On Twitter, Damon Linker politely took me to task for, in my response to a post of his, ignoring the “substance” of that post. I believe by that he meant his explanation of his own views of fetal life, as opposed to the critique of Ross Douthat that I objected to.
Well, that post wasn’t about Linker’s own position, but rather about his peculiar way of responding to Ross Douthat. But okay—since you asked!—here goes. Linker writes,
Even if my wife and I could know every time a fertilized egg fails to implant and then sloughs off when she menstruates, we still would never be moved to mourn the death of a being with intrinsic moral worth. The same holds for fertilized eggs that slough off because a sexually active woman is using an IUD — or, for that matter, because a woman is breastfeeding in the first several months after giving birth. All of these activities lead to the “death” of what really is, at that pre-implanted stage, a clump of cells that is destined not to develop into anything at all.
Nine months after successful implantation, things are very different. I would even say categorically — ontologically — different. How is this possible? I have no idea. All I know is that nearly all of us are convinced that a newborn baby is a person, a creature with intrinsic dignity, worth, and a right to life that the liberal state is duty bound and justly empowered to protect — and yet also convinced that although this same creature possessed the same genetic code from the moment of fertilization, it was somehow of relative moral insignificance in those first few hours and days of microscopic life.
I would very much like to know Linker’s evidence for the claim that “nearly all of us are convinced that … this … creature … was somehow of relative moral insignificance in those first few hours and days of microscopic life.” Nearly all? But let’s continue:
Between those moments (conception and birth) lies a developmental continuum that confounds any and every effort at strictly rational systematization. An abortion at six weeks is worse than one at four weeks. Eight weeks is worse than six. Twelve is worse than 10. And so forth, as we approach fetal viability — at which point, what was once a medical procedure with minimal moral import becomes a matter of murder.
First of all, and especially in light of my critique of Linker’s critique of Douthat, I want to say that this identification of fetal viability as the point at which a fetus becomes a person entitled to legal protection is a big step, and one that I’m sure earns Linker plenty of condemnation from the pro-abortion world. And the criticisms I am now going to offer should not be seen as ignoring that step or diminishing its significance. But do I have some concerns about Linker’s line of argument? I do.
The first is that, while Linker’s view is often described as a “gradualist” one, and while morally that may be true, in legal terms it’s not gradualist at all: it’s totally binary, all or nothing. In this account, before viability the taking of a fetal life is legally nugatory; after viability it’s murder. This is a big jump in any circumstances, but especially worrisome given the success of prenatal medicine in pushing viability earlier and earlier. So whether a woman has done something of no legal interest or something of the greatest possible legal significance can change within a year. This is to make legal judgment—and the status of a human creature under the law—dependent to a disturbing degree on medical technology.
Moreover, Linker’s judgment about the “moral worth” of pre-viability fetuses is pretty shaky as well. There’s nothing wrong with that as a matter of personal feeling, though (as I suggested earlier) I’m not convinced that his personal feelings about zygotes—his moral intuitions about them—are as universal as he claims that they are. And that’s a problem with his case, because he grounds his entire approach to the legal status of fetuses in those feelings and intuitions. If almost everyone does share those intuitions, then maybe that will work as a matter of practical jurisprudence; but ethically it’s pretty dubious. After all, it hasn’t been that long since widespread intuitions about the “moral worth” of black people led to catastrophic evil. (And the leftovers of those intuitions are still poisonous for black people in America today.)
I appreciate, and even value, the general point that underlies Linker’s argument: that sometimes our laws have to be based on fallible and not especially consistent moral intuitions; that ad hoc reasoning is sometimes the best that we have; that the attempt to impose absolute consistency on our laws and jurisprudence is almost necessarily quixotic and prone to the generation of unintended consequences, because, as the adage rightly goes, hard cases make bad law. But I think our track record as a species—and more particularly as Americans—suggests that rough-and-ready moral intuitions do very little to protect the weak, the powerless, the despised. We need stronger and (yes) more consistent legal and moral stuff to protect those who cannot protect themselves.
Damon Linker is unhappy—unhappy with the tone of a recent post by Ross Douthat. Linker says that people like Douthat—but he really just means Douthat, because he doesn’t refer to anyone else in his post—are “losing their cool. And their heads.”
Linker calls Douthat’s post “harsh and angry”—a description I won’t contest, though I’m tempted—but notes that it is “uncharacteristically” so. Maybe he could have spent a few minutes contemplating why a writer as consistently irenic as Douthat might have lost a bit of patience on this particular subject. Might it be that the title of the post Douthat was responding to describes his position as “glaring hypocrisy”? A touch on the provocative side, wouldn’t you say? (Yes, writers typically don’t choose titles, but they can protest inaccurate ones; I’ve done it myself more than a few times.)
Or might it be something that runs a little deeper? In that earlier post, Linker writes, “I have faith that Douthat’s honesty and intelligence will lead him to concede that he’s lost his debate with [William] Saletan”—the magnificent condescension of that line would have me banging on Linker’s door to challenge him to a duel—but Douthat demonstrates pretty thoroughly in his reply that he hasn’t lost that argument. And it’s interesting that in his lamentation over Douthat’s so-unfortunate tone, Linker never acknowledges any of the arguments Douthat makes or the studies he cites. It’s much easier to tut-tut over people “losing their cool.”
Linker seems to be troubled that Douthat doesn’t acknowledge how different his position is from that of people like “Katha Pollitt and Rebecca Watson [who consider] the termination of a pregnancy to be as morally insignificant as (in Douthat’s words) ‘snuffing out a rabbit.’” He is, as he keeps telling us, “deeply troubled by abortion.”
But the state of Linker’s feelings may not be the most germane thing here. The really key passage in Douthat’s “harsh and angry” post is one that Linker doesn’t quote:
It is not the pro-life movement that’s forced Planned Parenthood to unite actual family planning and mass feticide under one institutional umbrella. It is not the Catholic Church or the Quorum of the Twelve Apostles or the Southern Baptist Convention or the Republican Party that have bundled pap smears and pregnancy tests and HPV vaccines with the kind of grisly business being conducted on those videos. This is Planned Parenthood’s choice; it is liberalism’s choice; it is the respectable center-left of Dana Milbank and Ruth Marcus and Will Saletan that’s telling pro-life and pro-choice Americans alike that contraceptive access and fetal dismemberment are just a package deal, that if you want to fund an institution that makes contraception widely available then you just have to live with those “it’s another boy!” fetal corpses in said institution’s freezer, that’s just the price of women’s health care and contraceptive access, and who are you to complain about paying it, since after all the abortion arm of Planned Parenthood is actually pretty profitable and doesn’t need your tax dollars?
But instead of questioning the inevitability of this “package deal,” Linker prefers to (a) characterize opponents of it as exhibiting “glaring hypocrisy” and (b) express deeply felt dismay if any of those opponents bristles at that characterization. To his credit, Linker is straightforward about his allegiances: “People like me—deeply troubled by abortion and yet supportive of women’s reproductive freedom (along with a good bit of the rest of the sexual revolution as well)—will never lend [the pro-life movement] our support. No matter how many barbaric videos its activist wing makes public.” Never.
If you tell people that you will never under any circumstances give them your support, then they may not thank you for instructing them in how to go about their business, no matter the state of your feelings. And if in the face of the horrors revealed by these recent videos of Planned Parenthood’s callous and mercenary attitude towards the organs of killed fetal humans your response is to attack Ross Douthat, then maybe, just maybe, you’re not as “deeply troubled by abortion” as you’d like to think you are.
As I—peculiar person that I am—see the world, few things could be more readily understandable than a person’s expressing gratitude that her mother didn’t choose to abort her. And that’s what the #unplannedparenthood hashtag on social media is all about: people telling their own stories of gratitude—gratitude to pregnant women who, in the face of fear and uncertainty, decided to take a chance on life; gratitude also, in many cases, to friends, family, churches, and community organizations who supported the women who took that risk. Who wouldn’t be grateful in such circumstances?
But for Olga Khazan, writing at The Atlantic, such expressions of gratitude are “bizarre,” “odd,” and “disastrously illogical.” I fear that I too must be disastrously illogical, because I fail to understand why Khazan then goes on to explain how “during the Great Depression, women who wanted to avoid having babies they couldn’t afford used ‘disinfectant douches’ that burned their genitals.” Is the point that people should not only be grateful for not being aborted but also grateful that their mothers weren’t faced with the prospect of singeing their genitals with corrosive chemicals? The relevance of this excursus escapes me.
At one point, groping to understand these alien minds, Khazan suggests that “the larger purpose seems to be to put many happy faces on the pro-life movement. All those people weren’t aborted! Isn’t that wonderful?” And she goes on to say,
Of course it is. But it also assumes that the only reason for an abortion would be that you’re mildly surprised by your pregnancy status, and uncertain what to do next.
But the #unplannedparenthood hashtag assumes no such thing. It is grossly insensitive and uncharitable of Khazan to assume that every woman who decided to keep an unplanned baby was only “mildly surprised” to be pregnant. And incurious of her too: her assumption won’t survive two minutes’ scrolling through search results for the hashtag, which show again and again the harrowing circumstances in which many, many, many women decided to bear unplanned children. To make such an immensely consequential decision in those circumstances took amazing courage—there is nothing “of course” about it.
And often, at the time, these were unwanted children as well. Khazan notes that “there is a big difference between an unplanned pregnancy and an unwanted one”—which is indeed true. But one of the chief points that emerges from the #unplannedparenthood stories is that a great many children who were unwanted at first became very much wanted, very much loved later—either by their birth parents or by those who adopted them. Khazan’s moral world is so impoverished that in it only first thoughts count; by contrast, the people who are grateful for #unplannedparenthood are also grateful for second thoughts.
Khazan tries to draw our attention to a world in which abortion is illegal, as though that’s likely to happen any minute now, but it’s not likely, and that’s not the world that the #unplannedparenthood stories come from. In every case that I have seen, these stories commend women who could have chosen abortion, but chose life instead, even when it was costly to them. In a famous phrase, Edmund Burke spoke of the “unbought grace of life,” but the people who celebrate #unplannedparenthood know that the grace of life they experienced was bought at a price—in many cases a very high price. Olga Khazan’s disdain for their expressions of thanks is contemptible.
The policing and disciplining of disagreement that I have been exploring in two previous posts—first and second—are the product of a massive cultural movement, in the process of development over centuries, that the philosopher Charles Taylor calls “code fetishism” or “nomolatry.”
In an absolutely vital essay called “The Perils of Moralism,” included in this collection, Taylor explains that “modern liberal society tends toward a kind of ‘code fetishism,’ or nomolatry. … Code fetishism means that the entire spiritual dimension of human life is captured in a moral code.” This idea is first fully articulated in Kant’s deontological account of ethics, but it had been in the making for hundreds of years before that. “I want to argue that it was a turn in Latin Christendom which sent us down this road. This was the drive to reform in its various stages and variants—not just the Protestant Reformation, but a series of moves on both sides of the confessional divide. The attempt was always to make people over as more perfect practicing Christians, through articulating codes and inculcating disciplines.”
Eventually “the Christian life became more and more identified with these codes and disciplines.” But once that had happened, the Gospel itself became dispensable: all we had to do was to extract the rules from it, and the “values” that produced them, and we were good to go. Thus arise figures who use the codes extracted from Christianity against Christianity: Voltaire, Hume, Gibbon.
And thus also arises an antinomian counter-movement: “Modern culture is marked by a series of revolts against this moralism, in both its Christian and non-Christian forms. … The code-centered notion of order and its attendant disciplines begin to generate negative reactions from the eighteenth century on. These form, for instance, the central themes of the Romantic period.”
Thus modernity, at least since Kant, is characterized by constant tensions and frequent eruptions of hostility between two great opponents, the antinomians and the code fetishists. Most of the fights that afflict social media today are versions of this conflict: just think of the recent skirmishes between the self-described free-speech advocates on Reddit and the opponents whom they refer to as SJWs (Social Justice Warriors).
I think the key lesson to be drawn from Taylor’s account is that code fetishism produces antinomianism: antinomians are people who get frustrated by the code fetishists’ relentless policing and disciplining of disagreement—which the fetishists pursue because they are trying to build a more just society and think that the codification and enforcement of rules is the only way to do it—and who come to believe that a simple rejection of rules is the only way to resist. That is, both sides agree that morality is a matter of rules; but one side, seeing that rules require elaboration and enforcement, and that other people are the ones elaborating and enforcing them, prefers what it takes to be the only alternative: a rule-rejecting, morally minimal commitment to freedom.
(At least, that is how the antinomians would describe themselves. The fierceness with which some of them persecute and attempt to silence dissenters—practices documented in disturbing detail in Sarah Jeong’s new book The Internet of Garbage—suggests that a good many professed antinomians are actually code fetishists of a particularly intense variety. Just for the purposes of this post I’m going to take the antinomians at their self-description.)
But what if this is a false dichotomy? What if the code fetishists and antinomians are both wrong, and wrong for the same reason: because they have unwittingly accepted the false idea that “the entire spiritual dimension of human life is captured in a moral code”? What if rule-following doesn’t produce justice, and the antinomians have an inadequate conception of freedom?
In an essay closely related to “The Perils of Moralism”—it even has some of the same sentences—Taylor suggests an alternative to this dichotomy. The essay is his brief but powerful foreword to The Rivers North of the Future: The Testament of Ivan Illich—a collection of interviews the writer and broadcaster David Cayley conducted with the great polymath in the late 1990s. This “testament” is enormously powerful and provocative itself, but for now I just want to highlight Taylor’s thoughts on Illich.
Taylor zeroes in on an obsession of Illich’s: Jesus’s parable of the Good Samaritan. For Illich, Taylor explains, “the Samaritan and the wounded man … are fitted together in a proportionality which comes from God, which is that of agape, and which became possible because God became flesh.”
The enfleshment of God extends outward, through such new links as the Samaritan makes with the Jew, into a network which we call the Church. But this is a network, not a categorical grouping; that is, it is a skein of relations which link particular, unique, enfleshed people to each other, rather than a grouping of people together on the grounds of their sharing some important property.
Illich believes that when we forget that what binds us is “a skein of relations,” we fall into a system of rules—we become code fetishists, for whom “the significance of the Good Samaritan story appears obvious: it is a stage on the road to a universal morality of rules.” But for Illich this is a “corruption of Christianity.” Our world looks very different if what matters is not the code we can abstract from a given situation but the situation itself—or, more specifically still, the utterly particular person who stands in front of us.
You can see the ubiquity of code fetishism—the can’t-see-around-it absolutism of nomolatry—in Sam Biddle’s reflections on how he helped to ruin Justine Sacco’s life. He says that he apologized to her, but then elsewhere in the post he effectively walks back the apology:
I’ve been asked many times if I would post Sacco’s tweet all over again, and I still don’t know how to answer. Would I post the tweet again? Sure. Would I post the tweet knowing it’s going to cause an incredibly disproportionate personal disaster for Justine Sacco? No. Would I post the tweet knowing it could happen? Now we’re in dicey territory, and I’m thinking of ghosts: If you had a face-to-face sit-down with all of the people you’ve posted about, how many of THOSE would you do again? We’re wading through swamps and thorns, here.
Biddle would only “post the tweet again”—or at all—because he thinks that in it Sacco had violated some significant norm; but he would only hesitate because, having confronted her humanity, he realizes that code enforcement has a tendency to create “incredibly disproportionate personal disaster.” More crucially, he’s horrified by the very thought of scanning his history of social-media acts, because he could discover that he has violated codes himself, and then what would he do? “Swamps and thorns” indeed.
Biddle’s problem is that he is stuck between sensing the limits of nomolatry and seeing no alternative to it except an antinomianism that strikes him as somehow irresponsible, perhaps even inhuman. He is morally disoriented by the confrontation with someone’s sheer personhood. He has the first inkling of the possibility that, as Taylor puts it in his summary of Illich’s thought,
even the best codes can become idolatrous traps that tempt us to complicity in violence. Illich reminds us not to become totally invested in the code — even the best code of a peace-loving, egalitarian variety — of liberalism. We should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.
In this light, I think we can see that our dominant social media have a strong tendency to reinforce the nomolatry-antinomianism dichotomy, and to obscure the need for “networks of living concern.” To search Twitter or Facebook for people using words you don’t like, or using important words in ways you don’t like; to scroll through a list of tweets or posts that employ a particular hashtag with an eye towards the absurd or offensive; to seek out particularly provocative tweets or posts in order to see how outrageous the replies are—these are the characteristic acts of the code fetishist. I pray you, avoid them.
This is a follow-up to my earlier post about disagreement.
Occasionally Americans debate the correctness of beliefs and practices — political, moral, social. But not very often. Most Americans, or so one would judge from social media anyway, are Bulverists: they already know who is right and who isn’t, so all they need to debate is why the people who get things wrong — so, so wrong — do so.
But wait: it turns out that there is actually a second form or stage of Bulverism, one that is becoming increasingly common. If the first stage of Bulverism is explanatory, this second stage is disciplinary: it is concerned to determine what penalties should be administered to those who are wrong. Disciplinary Bulverism is where all the action is today.
Consider the case of Brendan Eich, the former Mozilla CEO who was pressured to resign when it became widely known that he had contributed financially to the campaign for California’s Proposition 8. Now, Eich has made it clear that he doesn’t think he’s a martyr and would rather not have his name brought up so often in these contexts — a request that I am going to ignore, just this once, because well before he said that I asked whether people supported Eich’s ouster. Almost everyone who replied said that they did, but that’s as unscientific as a sample gets; and I’ve been unable to get a sense of just how severely Eich should be punished. One person tweeted to me that “A homophobe like Eich deserves whatever he gets,” but didn’t reply when I asked whether permanent unemployment would be a just punishment, or violent assault.
So I don’t think many people have a clear sense of how severely people should be punished for holding the wrong social, moral, or political views; but there seems to be widespread support for some kind of punishment, and something more than mere shaming. For most of the people Jon Ronson writes about in his work on internet shaming, shaming is just one element of the discipline they were subjected to. Consider the recent case of the English scientist Sir Tim Hunt, who after one sexist remark — or, as Catherine Bennett called it in the Guardian, “his determination to rescue science from female biology” — not only was forced to resign from his position at University College London but was also pushed out of the European Research Council. One strike and you’re out. Forever.
But it doesn’t always work this way — though I think more and more often the internet outrage machine demands the nuclear option as the first and only valid response. I suspect that Brendan Eich would have had at least a chance of keeping his job if he had said something like this: “I deeply regret having supported Proposition 8 and apologize without reservation to all who were rightly offended by my insensitivity. My views on same-sex marriage have evolved since then, and I pledge to do everything in my power to make Mozilla a more fully inclusive environment.” But no statement less absolute would have allowed him to escape with merely a public shaming and his job intact. (Tim Hunt actually made such an apology without receiving any mercy, but that could have been because his positions were more-or-less voluntary and more-or-less honorary.)
“Punishment” is the narrower category here, “discipline” the broader one, because there are forms of discipline that are not, or at least do not claim to be, punitive. So, for example, when scholars argue that racism is a form of mental illness, or that homophobia is, they would not suggest that racists and homophobes be punished. And while internet mobs delight in administering punishment and are happy to call it by that name, people in positions of social authority prefer a gentler approach, either to generate public confidence in their discretion or to burnish their own self-images. As Yeats wrote, “The rhetorician would deceive his neighbors, / The sentimentalist himself.”
Now, I certainly believe that racism is very wrong, though I would call it a sin rather than an illness or an error. (I’m not sure what homophobia is, but I think hatred of homosexuals is a sin also.) But that’s not the issue under debate here. My subject is what is to be done about people who hold the wrong beliefs, whether or not we describe their condition as an illness. And from the point of view of Disciplinary Bulverism, wrong beliefs must be dealt with in some way, must be subjected to some form of discipline. And in that light, thinking of their error as a form of illness has certain advantages.
C.S. Lewis, the creator of the term “Bulverism,” also wrote an essay that’s very relevant to these considerations. It is called “The Humanitarian Theory of Punishment,” and here is a key excerpt:
According to the Humanitarian theory, to punish a man because he deserves it, and as much as he deserves, is mere revenge, and, therefore, barbarous and immoral. It is maintained that the only legitimate motives for punishing are the desire to deter others by example or to mend the criminal. When this theory is combined, as frequently happens, with the belief that all crime is more or less pathological, the idea of mending tails off into that of healing or curing and punishment becomes therapeutic. Thus it appears at first sight that we have passed from the harsh and self-righteous notion of giving the wicked their deserts to the charitable and enlightened one of tending the psychologically sick. What could be more amiable?
But this theory is not quite as amiable as it looks — at least if you’re the one being “cured.” Lewis continues: “On this remedial view of punishment, the offender should, of course, be detained until he was cured. And of course the official straighteners are the only people who can say when that is. The first result of the Humanitarian theory is, therefore, to substitute for a definite sentence … an indefinite sentence terminable only by the word of those experts.” This point leads Lewis to his peroration:
It may be said that by the continued use of the word Punishment and the use of the verb “inflict” I am misrepresenting the Humanitarians. They are not punishing, not inflicting, only healing. But do not let us be deceived by a name. To be taken without consent from my home and friends; to lose my liberty; to undergo all those assaults on my personality which modern psychotherapy knows how to deliver; to be remade after some pattern of “normality” hatched in a Viennese laboratory to which I never professed allegiance; to know that this process will never end until either my captors have succeeded or I have grown wise enough to cheat them with apparent success — who cares whether this is called Punishment or not? That it includes most of the elements for which any punishment is feared — shame, exile, bondage, and years eaten by the locust — is obvious.
And of course this kind of thing happens all the time in Western societies today: sensitivity training and its many near relations. Nothing new there. In fact, the disciplinary structures that have been well-emplaced for decades will simply continue: but in the coming decades they will have different targets.
Because this is how a society built on disciplinary Bulverism works. The reparative or conversion therapy that was once widely used to change homosexuals can easily be adapted to address the problems of racists and homophobes — who can be easy to find, thanks to the social-media trails that most people leave online. Sometimes such reparation will merely be encouraged by friends and family; sometimes it will be made a condition of employment; sometimes it will be mandated by judges. Discipline will not always (perhaps not often) come directly from the State; it will typically be administered by what one of the more acute Marxist theorists called ideological state apparatuses — institutions (schools, hospitals, many private businesses) that the State trusts to enforce its preferences. And these preferences will not be argued for; as always in Bulverist thought, their essential truth will be assumed; and the way social media are used today will ensure that dissent is driven out of any given circle of discourse.
Althusser’s picture of how the state works closely resembles what Foucault — who was not a Marxist and whose political positions were highly ambiguous — called the “power/knowledge regime,” and what some current neoreactionaries call “the Cathedral.” It’s interesting to see people from all over the political map exploring the subtle ways that State power works. There’s a reason for this. People who support using the disciplinary powers of the State against their enemies always assume that people like them will be in power forever. And on this point they are always wrong.
In an excellent recent article, Mollie Hemingway wrote, “We are slowly forgetting how to dislike something without seeking its utter destruction.” I would only replace “slowly” with “quickly”—very quickly. This makes me think about disagreement—what it is, what it means, what it is for. So let’s explore.
Many years ago, the philosopher Michael Oakeshott wrote that “The view dies hard that Babel was the occasion of a curse being laid upon mankind from which it is the business of philosophers to deliver us, and a disposition remains to impose a single character upon significant human speech.” By “Babel” here Oakeshott does not mean the diversity of languages but the diversity of beliefs and positions; his statement is a kind of challenge to philosophical hubris, to the idea that arguments can be produced that will defeat the opposition once and for all.
Bernard Williams likewise appreciated the value of disagreement: “Disagreement does not necessarily have to be overcome. It may remain an important and constitutive feature of our relations to others, and also be seen as something that is merely to be expected in the light of the best explanations we have of how such disagreement arises.” The context here is, broadly speaking, ethics—how people should live—and Williams thinks that ethical questions are immensely complex, so that disagreement about them is “merely to be expected.” Indeed, any attempt to shut down disagreement on such matters will be an impoverishment of thought, and perhaps of life itself.
The ancient idea of the philosopher as gadfly arises from the awareness that a person can serve society not only by being correct but also, and in a distinct way, simply by being different—by challenging conventional wisdom and received beliefs. Similarly, in American legal culture we have long seen defense attorneys serving a similar role: it is good for society, and for justice considered generally, that even seemingly indefensible clients or ideas be defended. And sometimes, of course, what seems indefensible proves to be justified after all. But perhaps that’s not a value held in high regard any more—at least, in relation to some issues.
To be sure, toleration, both legally and socially, has always had limits. Consider John Milton’s “Areopagitica,” perhaps the most stirring celebration of freedom of the press ever composed. Hear, my friends, these noble words: “And though all the winds of doctrine were let loose to play upon the earth, so Truth be in the field, we do injuriously, by licencing and prohibiting to misdoubt her strength. Let her and Falshood grapple; who ever knew Truth put to the worse, in a free and open encounter?” But then just a few lines later: “I mean not tolerated Popery, and open superstition, which as it extirpates all religions and civil supremacies, so itself should be extirpate.” Milton reassures us that when he advocates freedom of speech he certainly doesn’t mean to include Catholics, whose words should be forcibly snuffed out.
So no society tolerates every imaginable form of speech; there are always boundaries. What’s disorienting about American society today is how quickly the boundaries are shifting. Beliefs that were almost universal less than 20 years ago—and are held by around 40 percent of the American people now—are deemed utterly beyond the pale. It’s hard not to suspect that some of the people most devoted to policing those boundaries are pouncing prosecutorially on views that they themselves held not that long ago. (The convert’s zeal.) And social media provide the chief impetus for both changing one’s own views and policing those whose views are different. In this environment, it’s hard to see who will resist what Oakeshott calls the “disposition … to impose a single character upon significant human speech.”
Maybe the Oakeshott/Williams view of philosophy as an opening-up rather than a closing-down of options can assist. In this fascinating conversation on the value of political disagreement, Gary Gutting and Jerry Gaus end up doing what people always do in these conversations: they advocate open disagreement but then quickly pause to say that “toleration has limits.” But, being philosophers, they go on to ask how those limits should be determined. Gaus: “The critical question is not whether I judge a person to be radically misguided, or judge her way of life to be morally repugnant, but whether she is a danger to the life and liberty of others.”
But that doesn’t help us very much unless we know what “danger” is, and its sibling “harm,” and no concepts have undergone more radical alteration in the recent shifting of social opinion than these. Thomas Jefferson famously said, “it does me no injury for my neighbor to say there are twenty gods or no God. It neither picks my pocket nor breaks my leg”—but that was in a simpler time. In a culture devoted to a minutely particular screening of language for microaggressions, the injury inflicted by opinions becomes the most talked-about form of harm. There are no socially useful gadflies in Microaggression World—unless, of course, you think it’s okay for some ideas to be challenged but not your favorite ones. And no one would ever be so inconsistent, would they?
People who traffic in symbolic manipulation—and that’s most of us, these digital days—are typically inclined to overrate the importance of symbolic manipulation. It’s always tempting to think that to exercise control over symbols—like the Confederate battle flag, which, for the record, I have long despised—is to strike a blow for justice. Again, social media play a key role here: Jerry Gaus once wrote an article “On the Difficult Virtue of Minding One’s Own Business”, but given the hyperpublic character of the web services most of us rely on, and the difficulty of getting any of them to reliably provide intimacy gradients, everyone’s business now seems to be everyone else’s business. In such an environment, ABP—Always Be Policing—is the watchword. Survey and critique others, lest you make yourself subject to surveillance and critique. And use the proper Hashtags of Solidarity, or you might end up like that guy who was the first to stop applauding Stalin’s speech.
Minding your own business, on this commonly held account of things, is a vice, not a virtue, and those who handle disagreement peaceably are ipso facto deficient in their commitment to justice. To restore belief in the positive value of disagreement, here, would be a challenging task indeed. When Bernard Williams writes of disagreement as “an important and constitutive feature of our relations to others,” he is speaking a moral language that’s incomprehensible to those for whom free speech is so last century and for whom history is always a story of moral progress.
How might such people come to see, with Williams, the virtue of moral and epistemic humility? How might they be brought to see that it can be a positive good to belong to a society in which people with deep disagreements, even about sexuality and personal self-determination, can live in peace with one another and, just possibly, converse? I have absolutely no idea.