When I came to teach at Wheaton College, in the fall of 1984, on a one-year contract, I had been a Christian for about five years, knew almost nothing about the evangelical subculture, and was a theological novice. My one virtue was that I had some inkling of how little I knew. So I sought out Roger Lundin.
That year, it should be remembered, was the apogee of the Age of Theory, or near enough, and I had just completed my training in at least some of its intricacies. But when I arrived on Wheaton’s campus I quickly learned that this was not perceived as an accomplishment, but rather as a cause for suspicion. Only Roger — who had himself been trained in a long-familiar makeshift blending of historical and New-Critical practices, but had been seeking, without much help or encouragement, to educate himself in the newer directions of our discipline — was a possible conversation partner. I read a few articles he had written and realized that, like me, he believed that within the Christian traditions there were resources for listening and then responding to the voices of critical theory; but, unlike me, he was already prepared by theological training to do this work. So I sought him out.
I would have sought him out anyway; his warmth and kindness drew me, as they drew so many others, towards him. We began talking our way through these issues, and through so much else. He introduced me to his longtime friend Mark Noll, whose office was then four floors up from ours, later a more convenient one floor down; Mark became my second theological teacher and instructor in the ways of the Christian mind. Roger and I prayed together and laughed a great deal. We created an organization called Slow White Men of America, of which we were the charter and only members. Once we spent a morning at Borders where I came across a book on a display table called Hollywood Priest: A Spiritual Struggle. I held it up before Roger solemnly, and he said, “Big Al” — he always called me Big Al, a somewhat comical thing given the disparity in our sizes — “I understand how people can be horrified at living in this country, but I do not understand how anyone could ever be bored.”
I am strolling down memory lane here, but this is appropriate. Anyone who knew Roger at all well noted how profoundly memorial, as well as historical, his imagination was. I may have had a very slightly better verbal memory than he did, which meant that I could sometimes remind him what someone had said in a given long-ago conversation — at which point he would tell me precisely when that conversation had happened. Were he reading this, he would cheerfully rattle off the year, month, and day on which we paid that visit to Borders, what the weather had been, what headlines had been featured in the Trib, and how Chicago’s professional sports teams had performed.
This was a kind of parlor trick, and an invariably impressive one, but for Roger the cultivation of memory, both personal and cultural, was an essential spiritual discipline, and one which Americans, in our haste always to fare forward, tend very much to neglect. We strain into the future, but, as Fitzgerald reminds us, “we beat on, boats against the current, borne back ceaselessly into the past.” It was Roger’s distinctive calling as a teacher and scholar to encourage us to embrace that backwards pull, to use it to help us understand where we have come from, and to do honor to those who went before us. As we read in Ecclesiasticus: “Leaders of the people by their counsels, and by their knowledge of learning meet for the people, wise and eloquent are their instructions: … All these were honoured in their generations, and were the glory of their times.” Roger, who died suddenly last week, has now joined this great company.
The last communication I received from Roger, several weeks ago, concluded with these words:
Thanks, deep thanks, for your kindness, which is rooted in a friendship that now reaches back 31 years!
My beloved friend Brett Foster never wanted to leave anything behind. My wife Teri took this picture of him on the streets of Durham, I believe, in the summer of 2011, when we were in England leading a study tour. You can see the bag hanging from his shoulder, and the scarf, and the top of the coffee cup, but not the enormous backpack that’s sitting just outside the frame. He was always encumbered in this way, because he never wanted to leave anything behind that he might need.
Now he has left us all behind.
Wheaton College’s Pierce Chapel was not made for funerals, especially Anglican ones, but it served, and hundreds of Brett’s friends arrived there on Saturday to commend his soul to God. November in Chicagoland is typically an unkind month, but the day was sunny and clear and warm enough that someone had opened windows in the chapel. Father Martin Johnson, the rector of All Souls’ Church, where Brett and I and our families were members for a decade, had brought the church’s butcher-block altar over and set it up in front of the stage. That altar had been made for the church by David Hooker, who became one of Brett’s dearest friends. Martin was grieving hard and perhaps working even harder — to make the day right for Brett, and for us.
After the tributes had been made — beautiful words, by some of the many who loved Brett — it was time for Communion. Brad Cathey had made the loaves and laid crosses upon them before baking, and Father Martin and Father Paul stood before the long lines and gave the bread of life, with the words of life.
But then some small chunks of bread fell to the ground at Martin’s feet. One piece was stepped on. I noticed that the poet Scott Cairns, who was sitting right in front of me, saw this, and was looking on in some discomfort. Then, when the last communicant had been served, Scott and I at the same moment hunched forward, knelt, picked up every piece, and ate — I quickly and in some embarrassment, he, back in his seat, slowly and with reverence. This is my body, broken for you.
It was time for the final commendations. Martin circled Brett’s coffin with the censer. The sun was low enough now that its light slanted through the open windows, and I could see by that light the smoke of the incense caressing the embroidered pall that covered the coffin. A quiet breeze wafted it away as Martin, his censing completed, stood directly before the coffin. Then he bowed and, with infinite gentleness, kissed it.
We moved then to the cemetery, to commend Brett’s body to the earth, in the hope of the resurrection. As Brett’s wife Anise, and their children Avery and Gus, and Brett’s mother Suzanne, and Anise’s mother Sharie, and all the rest of the family and the great circle of friends looked on, Martin spoke the ancient and beautiful words, and, according to custom, cast the first handfuls of earth onto the coffin.
Though I love those words, I did not, this time, hear them. I looked around and saw that Brett’s corner of the cemetery was surrounded mostly by pines, though one old oak reached out a limb over the grave. It still had its leaves; soon it will shed them over the broken earth there. I remembered the words that Scott Cairns had read earlier, in the time of tributes at the chapel, words that he had written years ago for his father but that could not be more right for this grievous occasion, full of its own strange hope.
And this is the consolation — that the world
doesn’t end, that the world one day
opens up into something better, and then we
one day open up into something far better.
Maybe like this: one morning you finally wake
to a light you recognize as the light you’ve wanted
every morning that has come before. And the air
itself has some light thing in it that you’ve always
hoped the air might have. And One is there
to welcome you whose face you’ve looked for during all
the best and worst times of your life. He takes you to himself
and holds you close until you fully wake.
And it seems you’ve only just awakened, but you turn
and there we are, the rest of us, arriving just behind you.
We’ll go the rest of the way together.
The books above are most of the ones I’ve assigned for my Great Texts of the Twentieth Century course (missing are Art Spiegelman’s Maus and the daily poems I’ll be reading to my students). It’s a pretty heterogeneous group, but taken together these books touch on a great many of the key issues of our time, and those of all times: not just racism, sexism, and colonialism, but also the rise of biological science as First Philosophy, the various ways cultures constitute identity, the furthest reaches of human barbarity, the transformation of culture by electronic media, and the miraculous power of the writings of a first-century Jew to illuminate and interpret modern consciousness. Something to offend everyone, you might say.
And this is the point, or at least one of the chief points. Here at Baylor, I want my students—most but not all of whom are Christians; some are simply unbelievers, some are uncertain and struggling—to encounter the texts, and through those texts the experiences, that served to undermine Christian faith and practice in the twentieth century. But I also want them to encounter Christians (Barth, Merton, Bonhoeffer, Eliot, Auden, Simone Weil in her unique way) for whom the twentieth century’s challenges provided an impetus to re-think and re-live Christianity in fresh ways.
I could, of course, work to protect them from this violent clash of powerful and contradictory ideas; I could—I am free to do this—build a syllabus that focuses on Christian writers and perhaps other religious believers and presents only those anti-religious writers whose work is cartoonish or in other ways simplistic. And perhaps if I did that some of my students would feel safer. But that, I am convinced, would be a false sense of safety, and would leave them underprepared for an adult world in which their ideas and beliefs will receive daily challenges. What kind of teacher would I be if I let that happen?
There is one point on which the crusade for the imposition of trigger warnings is absolutely right. It is not for nothing that reading has been feared throughout history. It is indeed a risky activity: reading possesses the power to capture the imagination, create emotional upheaval, and force people towards an existential crisis. Indeed, for many it is the excitement of embarking on a journey into the unknown that leads them to pick up a book in the first place.
Can one read Proust’s In Search of Lost Time or Tolstoy’s Anna Karenina ‘without experiencing a new infirmity or occasion in the very core of one’s sexual feelings?’ asked the literary critic George Steiner in Language and Silence: Essays 1958–1966. It is precisely because reading catches us unaware and offers an experience that is rarely under our full control that it has played, and continues to play, such an important role in humanity’s search for meaning. That is also why it is so often feared.
And reading should be feared; particular books and authors should be feared. But it is not always—indeed, it is not often—best to flee from what we fear. Better to master the fear, to approach what scares us, and to do so with care and preparation, in an environment where those around us wish us well.
My dear friend Brett Foster, whose death earlier this week hangs over me heavily, wrote a poem that I have been thinking about a great deal. It’s called “Back-to-School Rondeau”, and I think it beautifully describes the fears and excitements of genuine education, the ways the pursuit of knowledge takes up and involves the whole of our being. I’ll leave you with it.
It’s almost time to set aside the waning
distractions of first youth, the life contained
for years at home. What’s home? The place you grow
out of, everything receding slowly,
fading like a chalked sidewalk in the rain.
Leave childish things behind, said a certain
fellow. (Others afterward.) Don’t remain:
the friends gone late in summer let you know
it’s almost time.
Don’t leave behind new clothes, impromptu plans —
they’ll match surroundings well, remind again
of shining coming: new homes to let go
of, too; the best things said; mind’s overflow;
surprising callings; time for love, and pain.
It’s almost time.
Richard Rodriguez, in his great memoir Hunger of Memory, movingly recounts the day when a nun, one of his teachers at his parochial school in Sacramento, asks his Mexican parents to speak English at home in order to encourage Richard to improve his English, to be more confident speaking it in school. He was in first grade.
Not a request that would be made today, I suspect. His parents agreed, of course—a nun had asked them! And while young Richard missed very much the sounds of Spanish at home, his English did get better. He became more comfortable at school; indeed, eventually his public identity came to be closely associated with his academic success. And he became strangely grateful for that nun’s request. It set him on the road to manhood: “I became a man by becoming a public man.”
This experience (and others like it) led eventually to Rodriguez, as a graduate student, becoming notorious for his opposition to bilingual education programs. He may or may not have been right in that opposition—there should be, and there is, serious debate about when young people need to make that essential transition from the private comforts of home to the sometimes challenging but also rewarding demands of public life—but no one has ever articulated more precisely the essential principle at stake here:
While one suffers a diminished sense of private individuality by becoming assimilated into public society, such assimilation makes possible the achievement of public individuality.
I think Rodriguez’s point is essential for understanding the current kerfuffle at Yale University, where students and alumni are—track with me here—demanding the resignation of the master of a college and the associate master because the master is refusing to apologize for not having exercised dictatorial authority over other students’ Halloween costumes. To most outside observers this will seem pretty silly. But let’s ask where the kind of reaction the students and alumni are having comes from.
The key may be found in this op-ed in the Yale Herald, significantly titled “Hurt at Home.” Jencey Paz writes,
As a Silimander, I feel that my home is being threatened. Last week, Erika Christakis, the associate master of Silliman College, sent an email to the Silliman community that called an earlier entreaty for Yalies to be more sensitive about culturally appropriating Halloween costumes a threat to free speech. In the aftermath of the email, I saw my community divide. She did not just start a political discourse as she intended. She marginalized many students of color in what is supposed to be their home.
But Silliman College is not “supposed to be their home.” It is a residential college in a university, a place where people from all over the world, from a wide range of social backgrounds, and with a wide range of interests and abilities, come to live together temporarily, for about 30 weeks a year, before moving on to their careers. It is an essentially public space, though with controls on ingress and egress to prevent chaos and foster friendship and fellowship.
It is possible, of course, that Yale sells its residential college system to students as a kind of “home”; I don’t know. The official description seems to me to strike an appropriate note without over-promising: “The residential colleges allow students to experience the cohesiveness and intimacy of a small school while still enjoying the cultural and scholarly resources of a large university; the residential colleges do much to foster spirit, allegiance, and a sense of community at Yale.”
Now, to be sure, this “cohesiveness and intimacy” can for some students be very powerful—their college can even be a better and healthier environment for them than their actual home. The great theater critic Kenneth Tynan loved Magdalen College, Oxford (where C.S. Lewis was his tutor) so much that he wanted his ashes to be interred there. But it was not his home, and could not have been, because there were other people there who didn’t even know him, or who knew him but didn’t like him, or whose preferences were radically different from his, and who had no long-term bond with him to force them to come to some mutually agreeable terms beyond basic tolerance for three years or so.
Residential colleges have long been defended as transitional spaces between the world of home and a fully independent adult life, and it would be a great mistake to think of them as merely continuing the ethos of home. That would leave young people totally unprepared for that “adult life,” which I think we might, for the purposes of this discussion, define as that period of one’s existence during which there is no one to run to to demand control over other people’s Halloween costumes. When one only has, to return to Rodriguez’s terms, “private individuality,” it is quite natural, if not altogether admirable, to seek out an authority figure when someone’s holiday costume offends you. But by the time one gets to college one’s “public individuality” should be sufficiently developed that the wearing of costumes should be seen as an essentially trivial matter that students can deal with among themselves. If they can’t, then the university needs to acknowledge that they’re dealing with some serious cases of arrested development.
Let me wrap this up by simply repeating a passage from a post I wrote some months ago: In a fascinating article called “The Japanese Preschool’s Pedagogy of Peripheral Participation,” Akiko Hayashi and Joseph Tobin describe a twofold strategy commonly deployed in Japan to deal with preschoolers’ conflicts: machi no hoiku and mimamoru. The former means “caring by waiting”; the second means “standing guard.” When children come into conflict, the teacher makes sure the students know that she is present, that she is watching—she may even add, kamisama datte miterun, dayo (the gods too are watching)—but she does not intervene unless absolutely necessary. Even if the children start to fight she may not intervene; that will depend on whether a child is genuinely attempting to hurt another or the two are halfheartedly “play-fighting.”
The idea is to give children every possible opportunity to resolve their own conflicts—even past the point at which it might, to an American observer, seem that a conflict is irresolvable. This requires patient waiting; and of course one can wait too long—just as one can intervene too quickly. The mimamoru strategy is meant to reassure children that their authorities will not allow anything really bad to happen to them, though perhaps some unpleasant moments may arise. But those unpleasant moments must be tolerated, else how will the children learn to respond constructively and effectively to conflict—conflict which is, after all, inevitable in any social environment? And if children don’t begin to learn such responses in preschool, when will they learn them?
Imagine if at university they had developed no such abilities and were constantly dependent on authorities to ease every instance of social friction. What a mess that would be.
Damon Linker, with whom I seem to be debating a lot these days:
Professors in the humanities and social sciences engage in highly specialized research, attempting to push knowledge into new areas — and many view this effort as a project that involves and requires liberating individuals from the dead weight of received prejudices.
The result is that academics usually end up pursuing scholarly agendas that are the furthest thing from anything that could be described as “conservative.” The imperative to advance knowledge demands that research contributes something new. Meanwhile, the tendency to relegate all received truth claims to the category of prejudice leads to suspicion even of the established findings of the previous generation of scholars.
This would be a more convincing argument if “the established findings of the previous generation of scholars” were never oriented towards, and never tended to reinforce, political liberalism; and if liberals had a tendency to relegate their own preferred truth claims, the ones they have received from previous generations of liberals, to the category of prejudice. (I’m using “liberalism” in a loose and conventional sense here; one could of course ask whether liberalism is really liberal, and so on.)
Linker wrote in a recent column devoted to defining contemporary conservatism that “If you ask conservatives what this comprehensive moral outlook consists of, they’ll likely say one of several things: Devotion to individual freedom. Constitutionalism. A concern with limited government. Fear of tyranny.” In many different academic disciplines, there’s an enormous body of scholarly work that, in the view of the conservatives Linker describes, neglects or attacks these values. Wouldn’t conservatives then have an inclination to critique that scholarship and defend those values? Would they be piously reverent towards scholarship that endorses, and often enforces, liberalism simply because it’s “established”?
On the other side of the fence: as Linker acknowledges, an essentially if vaguely liberal outlook on the world simply is the default position of the majority of professors in the majority of academic disciplines, and has been for several professorial generations; so where among these liberals is the suspicion of the past that Linker identifies with liberalism? It turns out that such suspicion is highly circumscribed: it’s limited to some findings of previous generations of scholars, not extended to those scholars’ overall political outlook.
Linker wants to argue that conservatives have, because of their core values, selected themselves out of the academy. And no doubt that happens sometimes. But a great many more who would like to make a contribution are ruled out — quietly and behind the scenes — for exactly the same reason that genuinely radical leftists are ruled out: they would rock our gently swaying boat, here on our calm, calm lake.
A. Well … Loki’s quite right, you know — at least in terms of what he says, as opposed to what he means. He means that we want to be ruled by him, a claim that I would firmly though politely (his being a god and all) reject. But do we want to be ruled? Of course we do. That’s why human societies so strenuously avoid direct democracy. Rule is tedious; it’s boring — almost no one actually wants to do it — we have a thousand other things we’d rather pursue, including, as a high priority, announcing to everyone who’ll listen how much better off the world would be if we ran it. For every person who votes there are at least four who want to tell you how they have all our political questions sorted out. Of course we want to be ruled. The only questions are who will rule us and how they will do so. And I’m making a proposal concerning those questions.
B. You’re confusing delegation and abdication. Those of us who through electing representatives delegate certain civic responsibilities aren’t abandoning self-rule! Note that, for one thing, we reserve the power to recall our representatives if we think they are abusing the trust we have placed in them, at the next election or, in desperate circumstances, earlier.
A. “If we think they are abusing the trust,” indeed. In a democracy hoi polloi are notoriously incompetent at figuring out whether they are being abused, having a strong tendency to re-elect their abusers while rejecting with alacrity people who are telling them sober and necessary truths. Didn’t Burke tell us this long ago?
When the leaders choose to make themselves bidders at an auction of popularity, their talents, in the construction of the state, will be of no service. They will become flatterers instead of legislators… If any of them should happen to propose a scheme of liberty, soberly limited, and defined with proper qualifications, he will be immediately outbid by his competitors, who will produce something more splendidly popular. Suspicions will be raised of his fidelity to his cause. Moderation will be stigmatized as the virtue of cowards; and compromise as the prudence of traitors; until, in hopes of preserving the credit which may enable him to temper, and moderate, on some occasions, the popular leader is obliged to become active in propagating doctrines, and establishing powers, that will afterwards defeat any sober purpose at which he ultimately might have aimed.
The best description imaginable of President Trump. Again, I think the people have proven both their incapacity to rule themselves and their fundamental disinclination to do so.
B. You would win me over with this citation of Burke if I didn’t know that Burke would have been horrified by the proposal you’re making.
A. Would he have? Is Burke the enemy of aristocracy?
B. Of your kind of aristocracy, I believe he is. He wrote in his famous letter to the Duke of Richmond, speaking of the hereditary aristocracy, “You, if you are what you ought to be, are in my eye the great oaks that shade a country, and perpetuate your benefits from generation to generation. The immediate power of a Duke of Richmond, or a Marquis of Rockingham, is not so much of moment; but if their conduct and example hand down their principles to their successors, then their houses become the public repositories and offices of record for the constitution.” This kind of long-term care for the good of a dear local place, or even a fatherland, is unlikely to be in the minds of your New Meritocrats. Indeed, I suspect that you’ll want to have such sentimentality bred out of them.
A. The most interesting and important phrase in that quotation is “if you are what you ought to be” — he knew perfectly well that the British aristocracy rarely were what they ought to be, and on occasion let them know his opinion of them with considerable asperity. He preferred that aristocracy to the rule of the demos, and with good reason. But I think that if we could bring Burke back now, I would at least try to convince him that there is a better model of aristocracy than the one he knew.
B. Yeah, well, good luck with that. But let’s not waste time debating counterfactuals or imagining alternative histories. In your imagined world, the demos will have no voice. You say that’s fine, because they don’t want one. But of course some of us will want one. And — let me guess here — you’re not planning to give us one. You’re going to offer no voice, but the possibility of exit. Right?
A. You are correct, sir.
Okay, Ron, you ask whether you’re allowed to sneeze. A tendentious way of presenting the issue. Of course you’re allowed to sneeze — it’s not as though anyone can stop you. If you’re in a closet with your family hiding from intruders bearing pistols and daggers, you’re allowed to sneeze, but I wouldn’t recommend it.
What’s that? Watching deer eat from a feeder isn’t like hiding from violent criminals? You’re right, Ron, it isn’t. Not very much, anyway. But again you’re missing the key point that you should be focusing on. We’re trying to establish a principle here, Ron, and the principle is that you can suppress a sneeze if you want to. But — and here we’re approaching the crux of the matter — you didn’t want to.
Go back and watch that video again, Ron. Look at those beautiful creatures: their delicate faces, their gentle demeanor, their polite interest in the contents of the feeder. And then the white snow in the background. There’s serenity there, a peaceful interlude in our lives that are so full of conflict; a chance for the deer to forage a bit — always more difficult for them in the winter — and for us to have a moment’s communion with the natural world.
And that’s when you decide to let one rip, Ron. Great.
You are what’s wrong with America, Ron, did you know that? You could have suppressed your impulse to sneeze, suppressed it for the greater good, for the good of the deer and your wife (if that’s your wife) and for all the good people of YouTube; but you chose not to do that. You didn’t even turn aside, or sneeze into your sleeve. You thought indulging your impulse was the most important thing in the world, and you got positively angry when someone suggested to you that it just might not be. A total lack of impulse control is what’s sending our country into what may be a permanent moral decline, and you’re the poster child for that vice.
Thanks, Ron. Thanks a lot.
One of the most regular running jokes in my family, for many years now, is that I don’t play Wii Boxing because I think it’s too violent. We make a joke of my tender conscience, but I really do wince when a little Mii’s head snaps back. I can’t play for more than a couple of minutes. I pause the game; I switch to golf, or tennis, or frisbee. My discomfort is genuine, and deeper than any reasonable standard would deem appropriate, and (to me, anyway) not funny at all. The roots of it sink deep into my life; follow those roots 40 years deep — give or take a few days — and eventually you’ll find yourself in front of a little black-and-white television set, in Birmingham, Alabama, on the first day of October 1975. Three days earlier I had turned seventeen.
Until then, I had been for most of my young life a very serious boxing fan. Boxing was common on network TV in those days, which was good, because network TV was all we had. Muhammad Ali was of course the dominant figure of the era, the one you couldn’t escape even if you wanted to, and a few years earlier, in my local library, I had picked up Sting Like A Bee: The Muhammad Ali Story, written by the admirable light-heavyweight Jose Torres in collaboration with Bert Randolph Sugar. I read it, read it again, went back to the library to renew it, and read it one more time.
What fascinated me wasn’t the biographical narrative, but Torres’ account of what life in the ring was really like. I have never forgotten his words in praise of body-punching:
I’ve hit fighters in their bodies with so much force that they couldn’t help but let out an involuntary groan like a wounded wolf. Usually the man who connects will jump at the hurt fighter with more punches. I never attacked after such a punch. I used to step back and let my rival savor every second of pain. I was not only a sadist but a technician; I knew how discouraging those punches were to the body. I became world’s champion by throwing one. A left hook to the liver.
You can see the left hook Torres is talking about at the beginning of this clip: Willie Pastrano is the victim, and it takes Willie about two seconds after the punch lands to feel its effect. When he does, he crumples. He gets back up, God bless him, and finishes the round, but that’s as far as he can go. The ref stops the fight, and Torres takes the title.1
It was Pastrano’s last fight. He retired, age twenty-nine.
I didn’t fight, myself, aside from a handful of schoolyard flailings; I was small for my age, already a lover of words, and Torres wrote vividly; I became a literary boxing fan long before I knew that that was a tradition. By the time Ali fought Joe Frazier for the third time I considered myself a connoisseur. I had never heard of A. J. Liebling and his “sweet science of bruising” but I would have loved it if I had known.
I don’t think I had watched the first Ali-Frazier fight live, though I had seen replays on Wide World of Sports. Until that fight it was commonly said of Ali that he wouldn’t be able to take a punch, but in the fifteenth round of that fight Frazier hit Ali with as perfect a left hook to the jaw as has ever been thrown … and Ali got right back up. No one has ever had a more devastating left hook than Frazier, and no matter how many times I watch that clip I still cannot understand how that punch didn’t knock Ali cold. In slow motion you can see Ali just beginning to turn his head away a millisecond before the punch lands, though it doesn’t seem likely that that small motion could have made a difference. But in any case, no one ever — ever — again said anything about Ali being unable to take a punch.
I didn’t see the second fight either, and all I remember from it is the controversy about Tony Perez, the referee, who let Ali repeatedly grab the back of Frazier’s head and pull it down in their clinches. But by the time the third fight rolled around I was fully alert to the drama of it. I understood the contrast in styles — after all, there has never been a more obvious one: Frazier moving relentlessly, maliciously forward, head low, throwing hook after hook after hook to head and body, with both hands; Ali upright and bouncing, circling always to his left, disdaining body punches and hooks in favor of rapid-fire straight lefts and rights.
I understood also that these men were not rivals but rather actual enemies, that they truly hated each other. Having lived all my life in Alabama, where the world was neatly and simply divided between white people and black people, in that order, I don’t think I then grasped the racial dimensions of that hatred. I knew that Ali called Frazier a “gorilla,” but I never imagined the significance of a light-skinned Negro man saying that to a dark-skinned one. I might have been awakened to that dynamic if I had known that a few days before the fight, at the Marcoses’ palace in Manila, Frazier had leaned over to Ali and quietly said, “I’m gonna whup your half-breed ass.” In turn, Ali would say to his corner just before the fight, “I’m gonna put a whuppin’ on this n*r’s head.” But I didn’t learn about any of that until later; I just knew that I had never anticipated anything in my short life as passionately as I anticipated what Ali had already called the Thrilla in Manila.
The classic account of what happened in that ring — and what happened before, and after — was written for Sports Illustrated by Mark Kram, and it remains the finest essay in sportswriting I have ever read. It captures with uncanny faithfulness the single fundamental fact about that fight, which is its ceaseless and horrifying brutality. By the third round Ali had pummeled Frazier so relentlessly that I was embarrassed for Joe, and I didn’t want to watch any more; I also knew that I would watch until the end, which I expected to come any moment. Then Frazier started to fight back.
As the advantage shifted back and forth between the two boxers, I watched in a state of ongoing incredulity. It was like seeing that Frazier punch that dropped Ali in their first fight, but a hundred times — a thousand. Ten thousand, it seemed. After a while I simply could not understand how either man remained standing, yet stand they did. And they punched — though “punch” is a pathetic word: the only adequate words are the ones that seem hyperbolic, like “bludgeon.”
It went on. For a time, for several rounds in the middle of the fight, Frazier got inside Ali’s guard and planted the top of his head under Ali’s chin and smashed Ali’s flanks and jaw again and again until I couldn’t imagine anything else happening, ever; but eventually, as the number of rounds (the number of years, I almost said) mounted, he grew exhausted and couldn’t get in there any more. And Ali, freed from that terrible pressure, found room to move; and then those long guns fired, repeatedly finding Frazier’s face and turning it gradually to pulp.
Frazier wouldn’t have quit, of course, under any circumstances less severe than death, but his trainer Eddie Futch couldn’t bear it any more and stopped the fight. The day after, Ali talked to Kram about what it had been like to be in that ring: “It was like death,” he said. He praised Frazier: “I’m gonna tell ya, that’s one helluva man, and God bless him” — but then, there was no reason for him to stint the praise. He had won; and Frazier had never had words to hurt him the way his contempt had slashed Frazier. The really remarkable thing was Frazier’s response, uttered just hours after his long war with Ali drew to its terrible close. “Man, I hit him with punches that’d bring down the walls of a city. Lawdy, Lawdy, he’s a great champion.”
As for me, I sat there for a while, once it was over, in my little bedroom in Alabama, staring at my little black-and-white TV. I could have watched elsewhere in the house on a larger screen, and in color, but I would never have risked being distracted by my uncomprehending family. So I sat there alone and in silence. I didn’t know it, but boxing was over for me; I would never watch another bout with interest and attention, and my tolerance for boxing’s aggression would shrink and shrink until I found myself avoiding Wii Boxing. And I still remember that night, when sleep was long in coming; and for days afterward, a haze hung over my mind.
1. Torres beat Pastrano thanks in part to the instructions in combination-punching that Cus D’Amato — later Mike Tyson’s trainer — gave him. Torres knew what it was like to be on the receiving end of such punches as well: four months after winning his crown he took on a non-title bout with a journeyman heavyweight named Tom McNeely, and though he won the fight he took such a beating to the body that some observers thought he was never again the same fighter.↩
B. Ah, the famous Imperial Examination! Ideal of meritocrats everywhere! But it’s not as though everyone in China had an equal shot at passing it — or even taking it. The rich who could afford tutors and bribes had a massive advantage over poor families whose sons had to rely on their wits and hard work. There was always some social group that was excluded from taking the exams — and of course women were never allowed to take them — and there was massive cheating —
A. Of course, of course. There is no possible system of politics or anything else that can’t be gamed, and in which the rich do not have advantages that the poor lack. To raise that as an objection to any scheme for social improvement is to allow the perfect — the impossible, the unrealizable perfect — to be the enemy of the good.
There will certainly be inequality at the beginning, but since money and discipline can only partially compensate for a lack of brains, and poverty can only partially impede the extravagantly intelligent, there would in such a system, over time, arise greater and greater equality both of opportunity and achievement. If you care about that kind of thing. I do, sort of, but not as much as I care about creating a political system in which the very best actually rule.
B. And it’s your view that China in the Imperial era actually achieved this genuine meritocracy?
A. Glad you asked. The answer is a firm No, in part because of the cheating and gaming we talked about a moment ago, but also because in the Imperial system the best were allowed to advise — but not to rule. The cult of the Emperor and the imperial family remained in place. China had created the most powerful system ever devised for funding and training its most gifted young men — and yes, it’s a shame that it was men only — but it restricted the ability of those men to set the course of Empire. So what I am arguing for is the next and obvious step: putting the aristoi — the genuine aristoi, not those of the dominant social class — in charge.
B. I wonder if you’d get “the genuine aristoi.” I recall that one Chinese philosopher, Ye Shi, commented that “A healthy society cannot come about when people study not for the purpose of gaining wisdom and knowledge but for the purpose of becoming government officials.”
A. I think Ye Shi may have been a little too concerned about people’s motives. If we can create examinations that accurately test for the skills that our rulers really need to have, and we select as our leaders the people who have those skills, who cares if their motives aren’t pure?
B. Hm. If you tell me that you can produce the best medical researchers, or particle physicists, by means of an examination, I might — might — take the notion seriously. But political rule? I don’t think so. Political leadership requires a whole host of skills and virtues — people skills, as we like to say, prudence, discernment, judgment of character — all traits that can’t possibly be tested for, but only developed through practice, experience. And some of those traits are virtues — so the character of the person in leadership actually matters. Politics isn’t a matter of A/B testing, of choosing the best option from a group of four!
A. Isn’t it? I’m not so sure. But I’ll grant that under democracy what you say may well be true — let’s say it is true. But democracy is what I’m trying to get rid of here, and one of the chief reasons I want to get rid of it is its tendency to generate just this kind of leader: someone who doesn’t know anything about anything but can somehow generate trust ex nihilo. I want — society needs — to ground our leadership choices in more objective terms of excellence, and relieving ourselves of the burden of democracy will give us a chance to do that. If instead of choosing leaders who can please hoi polloi we choose leaders with demonstrable expertise in the issues we face — poverty, poor health, inefficient energy usage, upheavals due to foreign conflicts, uncertainty because of irrational foreign governments —
B. Some of which are democracies. And even the ones that aren’t often have governments that stand because of their ability to “please hoi polloi.” Do you think your exam-crushing experts are going to have what it takes to deal with such retrograde social orders?
A. I think they’ll have a much better chance than the pols we send around the world today, many of whom have amateurish knowledge of the cultures within which they’re placed — and those are the good ones. I’d rather choose people with some of those social virtues you were lauding from within a pool of the demonstrably knowledgeable than from within a pool produced by our current patronage system.
B. You know America has a foreign service exam, right?
A. Sure. And many of the people who aced it are working and suffering under inept direction from higher-ups who have no business making decisions. We’re like imperial China in that respect.
B. So you want to put the people who ace the exam in charge? And then extend a similar model into the rest of the governmental system?
A. Right — though of course people will need to gain experience over time — I wouldn’t suggest putting a 22-year-old in immediate charge of an embassy because she had the highest test scores.
B. Based on what you’ve said so far, I’m not sure why not. But let’s drop that — I have a different question for you. You’re creating a system in which almost everyone will be deprived of self-government. Do you think people in general will accept such a deprivation?
A. It’ll be a hard sell at first, because most people like to think of themselves as not just worthy of self-determination but positively inclined towards it. But they’re not — not either: not worthy and not so inclined. As I argued from the outset, the demos has made an absolute mess of things, implementing (through their chosen leaders) a vastly long series of selfish and stupid decisions, which they have also tried with considerable desperation to avoid facing the consequences of. But I also think on some level they know this — they understand that they are not suited for self-governance. And when someone comes forward with the ability to explain this to them in non-threatening terms, and to show them that democracy is not inevitable and that there really may be a better way, then I think they’ll be glad to be relieved of the burden of self-rule.
B. So if people are going to be persuaded to relinquish a system in which they choose leaders solely on the basis of trust-inducing capacity, they’re going to need one or more people they trust to do that persuading.
A. Yes. Ironic, isn’t it. But the history of politics is full of ironies. Only Nixon could go to China, etc.
B. You remind me of Loki.
A. Loki?
B. Loki. In The Avengers. Telling people that they were made to be ruled.
(To be continued…)
There is no better journalist in America than Andrew Ferguson, and his brilliant takedown of bad behavioral science provides yet more evidence for that claim. A passage on Stanley Milgram’s famous obedience-to-authority experiment especially caught my eye:
The results were an instant sensation. The New York Times headline told the story: “Sixty-five Percent in Test Blindly Obey Order to Inflict Pain.” Two out of three of his subjects, Milgram reported, had cranked the dial all the way up when the lab-coat guy insisted they do so. Milgram explained the moral, or lack thereof: The “chief finding” of his study, he wrote, was “the extreme willingness of adults to go to almost any lengths on the command of an authority.” Milgram, his admirers believed, had unmasked the Nazi within us all.
Did he? A formidable sample of more than 600 subjects took part in his original study, Milgram said. As the psychologist Gina Perry pointed out in a devastating account, Behind the Shock Machine, the number was misleading. The 65 percent figure came from a “baseline” experiment; the 600 were spread out across more than a dozen other experiments that were variations of the baseline. A large majority of the 600 did not increase the voltage to inflict severe pain. As for the participants in the baseline experiment who did inflict the worst shocks, they were 65 percent of a group of only 40 subjects. They were all male, most of them college students, who had been recruited through a newspaper advertisement and paid $4.50 to participate.
The famous 65 percent thus comprised 26 men. How we get from the 26 Yalies in a New Haven psych lab to the antisemitic psychosis of Nazi Germany has never been explained.
I’m interested in this because in my book on original sin I referred to Milgram’s experiments quite positively — and moreover, I never did any reading to find out whether they had been subjected to critique. I just assumed that they were universally accepted as valid. And why did I make that assumption? Because Milgram’s experiments confirmed the story I was telling about the return, in the twentieth century, of a widespread belief in human depravity.
Now, to be sure, the book by Gina Perry that Ferguson cites as authoritative on this matter has itself come under some criticism for one-sidedness; Milgram’s famous experiment may indeed hold up, at least in large part. But the point I want to make here is that I didn’t do anything to check it out — for me, the story Milgram told was too good to be false.
A. There’s nothing to be afraid of — but yes (since you’re wondering) my conviction that democracy is a failed experiment does stem, in part, from my reading of the neoreactionaries, especially Moldbug. But I’m not with him all the way — for instance, as you can tell from my earlier comments, I have a good deal more respect for the U. S. Constitution than does Moldbug, who has commented, “The basic nature of constitutional government is the formalization of power, and democracy is the formalization of mob violence.” Nah. But in many other respects his diagnosis of where we’ve gone awry is spot on.
B. Is it? I don’t think so. In fact, hearing that your thoughts have been shaped by Moldbug’s does more to discredit them than anything else you’ve said.
A. Why? Moldbug is a very smart guy — he’s just saying the kinds of things that most people are afraid to say.
B. Maybe. And sure, he’s smart. But he’s not especially knowledgeable about things he needs to be knowledgeable about in order to offer a compelling alternative to the existing political order. For instance, in one of his most-read posts he writes, “Thomas Aquinas derived Catholicism from pure reason. John Rawls derived progressivism from pure reason. At least one of them must have made a mistake. Maybe they both did” — which is absolutely nonsensical. He has no idea what he means by “Catholicism,” “progressivism,” “pure reason,” or “derived.” He has no idea what either Aquinas or Rawls would have made of those terms, or why they would have described their projects in wholly different ways. I distrust Moldbug because Moldbug clearly doesn’t understand — does not have even a minimally competent, first-year-undergrad comprehension of — many of the positions he rejects.
A. All right, so let’s grant, per argumentum, that Moldbug is not an expert in the history of political philosophy. But he doesn’t have to be in order to present a coherent and useful vision of a new direction in which we can go — a new direction I think you’ll agree we very much need.
B. A new direction, I’m not so sure; but a different direction, yes. Anyway, please remember that I’m not asking Moldbug to be an expert, but I do think he needs to have at least a basic understanding of the views he’s rejecting — precisely because he’s grounding the need for his ideas in the conviction that those other ideas are wrong. However, his acquaintance with those ideas is too superficial, and he’s too incurious about what Aquinas and Rawls really think, for me to take seriously his claim that he can offer a compelling alternative.
A. I don’t think that follows — you’re placing too much emphasis on the need to understand some pre-existing tradition of political thought. You’re trying to hold Moldbug accountable to the very system he’s repudiating: you’re rejecting the red pill because it’s not the blue one.
But in any case, let’s not belabor this question. I still have an argument I want to make.
B. Fair enough — as long as I get a chance to make an argument of my own before we’re too old to care.
A. Of course! But now I want to get back to this notion of — as you divined — aristocracy. The word means “rule by the excellent,” or the “best,” and the primary reason people dislike it is that they know that aristocracy never lives up to its name: it is never rule by the most excellent, but by the rich and powerful who in order to justify their rule designate themselves as excellent. That’s why it’s so absurd when people try to overcome resistance by replacing “aristocracy” with “meritocracy” — the words are synonyms, and “merit” can be faked and then justified as easily as can any other claim to excellence. By meritocracy people usually mean “rule by those who have been academic high achievers” as opposed to the popular use of aristocracy to mean “rule by those of high social status” — but given the enormously strong correlation between social status and academic performance, this is a distinction virtually without a difference.
B. So anyone, like you, who wants to make a case for aristocracy/meritocracy in preference to democracy has one big job at the outset: to show how it’s possible for a society to produce genuine aristoi — and put them in charge. But even if you do that, you won’t have proved that such an aristocracy would be superior to democracy.
A. Sure. But one thing at a time. And don’t forget, if rule by the aristoi can be a fiction, rule by the demos can be too.
B. No doubt.
A. Okay, so back to work. I think the model we want to consider — though perhaps not to imitate slavishly — is imperial China’s examination system.
(To be continued…)
Damon Linker likes California’s assisted-suicide bill. After rejecting religiously based arguments against suicide, he writes,
The arguments raised by disability-rights activists are more powerful, since they’re based less on appeals to absolute (and unconvincing) moral strictures than on the law’s potential to lead to bad consequences and abuse. One of those consequences is a kind of soft eugenics in which the terminally ill are subtly pressured to do the “selfless” thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them. One could also imagine a future attempt to expand the law to include not just terminally ill and suffering patients, but also people with chronic and debilitating but not fatal or excruciating illnesses. Finally, there’s the possibility of the law being changed so that it permits not just the patient but also family members or friends to request the lethal dosage. That, too, could lead to the exertion of pressure on a patient to end his or her life.
These are legitimate concerns that should be taken seriously, especially in light of a recent disturbing New Yorker article about how Belgium allows euthanasia for people suffering from depression. But the California law is written to avoid being applied in anything like the ways feared by most disability activists. So yes, let’s beware future amendments to the law that could lead to abuse. But that’s no reason to oppose its current, limited, and responsible form. (One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.)
I just want to make two comments about this. First, having read the text of the proposed law, I can’t see anything in it that would warrant Linker’s claim that “the California law is written to avoid being applied in anything like the ways feared by most disability activists.” It seems to me that it would be very easy for an attending physician and members of the dying person’s family to practice “a kind of soft eugenics in which the terminally ill are subtly pressured to do the ‘selfless’ thing of ending their lives to save their loved ones from the financial and emotional burdens of caring for them.” In fact, I don’t see how a law could be written to prevent that kind of pressure from being brought to bear on the dying.
Second, and in a spirit of theoretical disputation, I note Linker’s claim that “One doesn’t normally oppose a law based on the ways it might one day be changed, revised, or amended.” Doesn’t one? Shouldn’t one? It seems to me that there are cases in which it would be sensible to look at possible future extensions of a proposed law while evaluating its current form — and this could be one of them.
The law opens the choice of physician-assisted suicide to persons with a “terminal disease,” and defines “terminal disease” as “an incurable and irreversible disease that has been medically confirmed and will, within reasonable medical judgment, result in death within six months.” Surely someone will say, “Why six months? Why not a year — or more — if ‘reasonable medical judgment’ concludes that death is overwhelmingly likely?” That is, there’s an arbitrariness in the choice of six months as the (pardon the term) deadline for this choice which makes it likely that there will soon be pressure to extend it.
Moreover, there’s a great deal of wiggle room in the phrase “reasonable medical judgment.” One doctor may deem a disease fatal that another finds eminently treatable; and even when fatality is for all intents and purposes certain, people often surprise their doctors. Some cancer patients have lived far beyond the utmost time predicted for them; others die much more quickly than expected. (My father was one of the latter.)
So it seems to me that the law in its current form is already ripe for abuse; and it seems extremely likely that strong arguments will be made for extending the time frame in which suicide may be assisted — in the name of the same compassion that causes Linker to endorse the current bill. So even on non-religious grounds the proposed law seems to me far more questionable than Linker allows.
I want to consider some stories I have read recently, and juxtapose them with one another. Let’s begin by looking at this story:
Last year I told a gay black male who wrote a story about a gay black male that I didn’t care about race or gender, and the class gasped. Even though I explained that I cared more about what happened to the character and about the elegance of the prose, my comment could have been a signal to erect a guillotine on the campus lawn. Nonetheless, the student thanked me after class. He said, “No one looks at my stories. They just look at me.”
Microinvalidations are characterized by communications or environmental cues that exclude, negate, or nullify the psychological thoughts, feelings, or experiential reality of certain groups, such as people of color. Color blindness is one of the most frequently delivered microinvalidations toward people of color.
“People are just people; I don’t see color; we’re all just human.” Or “I don’t think of you as Chinese.” Or “We all bleed red when we’re cut.” Or “Character, not color, is what counts with me.”
And then this story:
Academics of color experience an enervating visibility, but it’s not simply that we’re part of a very small minority. We are also a desired minority, at least for appearance’s sake. University life demands that academics of color commodify themselves as symbols of diversity — in fact, as diversity itself, since diversity, in this context, is located entirely in the realm of the symbolic. There’s a wound in the rupture between the diversity manifested in the body of the professor of color and the realities affecting that person’s community or communities. I, for example, am a black professor in the era of mass incarceration of black people through the War on Drugs; I am a Somali American professor in the era of surveillance and drone strikes perpetuated through the War on Terror….
It’s not that we’re too few, nor is it that we suffer survivor guilt for having escaped the fate of so many in our communities. It’s that our visibility is consumed in a way that legitimizes the structures of exclusion.
Skin feeling: to be encountered as a surface.
And finally, Ralph Ellison from Invisible Man, where so much of this discourse begins:
I am invisible, understand, simply because people refuse to see me. Like the bodiless heads you see sometimes in circus sideshows, it is as though I have been surrounded by mirrors of hard, distorting glass. When they approach me they see only my surroundings, themselves or figments of their imagination, indeed, everything and anything except me.
It’s easy — especially for anyone who discounts racism and the effects of racism as major shapers of the American cultural experience — to throw up one’s hands and say “It’s impossible to win with these people! It’s white people’s fault if they’re visible, it’s white people’s fault if they’re invisible! Heads they win, tails we lose!” Indeed, it’s not just easy, it’s inevitable.
But you know, it has to be hard to be either invisible or hyper-visible; and white America really does oscillate between casual clueless racism and genuine heartfelt desire to achieve colorblindness. (Though probably there has been a general drift towards the latter, which could be taken advantage of rather than resented.)
I would love to have a clear answer to this conundrum, but I don’t — except to note that it is a conundrum, an insoluble puzzle, a rhetorical circle — it’s the Mister Bones’ Wild Ride of political rhetoric. So maybe this is a good point at which to remind ourselves that, in this context, both “visibility” and “invisibility” are largely metaphorical. And then look through and beneath them for the more complex reality that they fail to capture — even if they may have been at times in their history conceptually useful and powerful. I think many critics of American racism have attached themselves to a vocabulary that just drops them in a ride that never ends.
(The first installment of the dialogue is here.)
B. But you’ve totally shifted ground here! What you’re offering now is not a critique of self-government, or even representative democracy, but of a corrupt electioneering system which couldn’t serve plutocracy better if it were designed to do so — and really, it is designed to do so, come to think of it.
A. And every “informed voter” knows that that’s the case, and expresses much tut-tutting disapproval, and occasionally even raises his or her voice in outrage — but keeps re-electing the same corrupt and/or weak-willed losers, or their newest clones. I have complained about American voters being ignorant, but even when they’re not ignorant they are thoughtless. Every opportunity they have to address the corruption of the system — and they have that opportunity every two years — they squander. They listen to the empty promises of politicians who flatter them, and pay not the slightest attention to the needs of society as a whole or to those who come after them — that’s the selfishness, the third item of my indictment. They have repeatedly abused the privilege of voting, and they deserve to have it taken away from them.
B. Well, it’s a powerful indictment. According to your argument, then, this nearly universal abdication of democratic responsibility has led (one must assume) to the collapse of American society, widespread poverty, and internal and external powerlessness. Because clearly it wouldn’t be possible for a political system as corrupt and inefficient as the one you’ve described to produce even a mediocre social order — let alone an enormously wealthy and powerful society, a global hegemon such as the world has rarely if ever seen. So perhaps you’re living in the universe next door to mine….
A. No, I think we’re in the same universe, though I might want to argue whether a country’s achieving the status of “a global hegemon such as the world has rarely if ever seen” is, as you seem to think, a good sign. But let’s set all that aside and cut to the chase. As I read current events, and the history that has produced them, American power is chiefly the residual result of decisions made long ago by a much smaller electorate, a kind of aristocracy in all but name. Insofar as that aristocracy excluded women, people of color, and (at first) poor white men, it was unjustifiable; but another way to look at it is that the power went to the best-educated in society, the least vulnerable to the pressures of external forces. We are at work dismantling the brilliant edifice they constructed, though perhaps not fast enough for some; but it was so magnificently built, so delicately balanced — “a machine that would go of itself” — that it has proven exceptionally difficult to dismantle. But it will be dismantled, and just as we are continuing to benefit from the wisdom of our ancestors, our grandchildren will suffer from the stupidity of voters today.
B. You realize, I trust, that your historical argument could be challenged, and seriously challenged, at every single point.
A. Yeah. But we’re having a conversation, I’m not writing a treatise.
B. You also realize, I trust, that where you’re headed would constitute a more radical dismantling of the Constitution than anything else on the table?
A. No. I absolutely deny that. It would be a way to re-articulate and re-implement genuinely Constitutional principles in a new social order, one in which ignorance, thoughtlessness, and selfishness are no longer impediments to political power and influence.
B. I’m going to do you the honor of assuming that you are not going to argue for confining the franchise to white males who make more than $100,000 a year….
A. Much obliged.
B. But this is going to be an argument for a New Aristocracy, isn’t it?
A. It is.
B. I was afraid of that.
(to be continued…)
A. It’s time to accept a simple and yet profound fact: democracy is a failed experiment. People throughout the Western world — well, hold on: let’s just confine this discussion to America. Democracy in America is a failed experiment. Americans have demonstrated conclusively that they are too ignorant, thoughtless, and selfish to be trusted with self-governance.
B. Ignorant, thoughtless, and selfish! What a trifecta! Hyperbole much?
A. It’s not hyperbole. Let’s take my charges one at a time. Surely I don’t need to recite the dark litany of polls and studies that demonstrate how grossly misinformed Americans are about the basics of our political system, current laws and policies, the most elementary facts of world geography—
B. No, no, you don’t have to recite that litany — I have it by heart. But do you think that’s a new thing? Are you under the impression that our ancestors were learned and wise, spending their evenings discoursing on the subtleties of recent Supreme Court decisions?
A. I’m tempted to say yes. After all, they weren’t sitting around watching American Idol or hammering out wrathful comments on YouTube videos. They attended lectures and chautauquas, they participated in town halls and debating societies —
B. “They” did? You mean a handful of the wealthier and better-educated white men did, I think.
A. As I said, I’m tempted to say yes — and I really do believe the situation was more complicated, and better, than you have suggested. But for now I’ll waive the point. Let’s posit that Americans today are at least as knowledgeable as their ancestors were. Okay?
B. Well … okay. For now. I reserve the right to debate this point later.
A. Fair enough. So what I want to say is that ignorance today matters more than it did in the past, because the role of government in our lives is so much greater. A hundred and fifty years ago it was possible to live a full and happy life with minimal experience of government. About a hundred years before that it was possible for Samuel Johnson to write, “How small, of all that human hearts endure, / That part which laws or kings can cause or cure.” Such innocent times! Now “laws and kings” have insinuated themselves so deeply into all our lives that ignorance of their power and influence can exact horrifying costs.
Plus, we have so many more educational opportunities than our ancestors —
B. Hang on, hang on — this is a dialogue, remember?
A. Sorry. Please go on.
B. Thanks. I think you need to stop and reflect on the fact that there is so much more to be ignorant of now than there was 150 years ago — and the increased complexity of government is a function of the increased complexity of the world. The transportation and communications technologies that arose in the 20th century have created a “global village” the very existence of which creates a need for wide-ranging knowledge that our ancestors couldn’t have imagined — to blame today’s people for —
A. I’m not blaming anyone.
B. Well, you kinda are.
A. I’ll try not to, because it’s not necessary to my argument. People may not be at fault for being too ignorant for self-government — but they still are too ignorant for self-government.
B. But isn’t that why we have a representative democracy? People elect representatives who can devote their full time and energy to mastering the complexities that we aren’t able to master.
A. Try watching C-SPAN for a while and tell me if you think those are people capable of “mastering complexities.”
B. Well, I have watched a good bit of C-SPAN and I have seen some pretty wonky Congresspersons — I think your critique is a lot more applicable to the politicians who make a point of saying and doing things that will land them on CNN and in the big newspapers.
A. Okay, that’s a fair point. But I think there are two other points you’re neglecting. First, even the wonky members of Congress tend to be selectively wonky. They have their one little area of expertise — or what they flatter themselves is expertise — and in other matters they just take their direction from their party’s leadership. And second, look at what actually gets done in Congress: certainly not intelligent and reasonable laws crafted by deeply knowledgeable people to whom their colleagues defer! Rather, it’s pork-laden, overstuffed monstrosities stitched together in order to please the whims of party leaders, big donors, and lobbyists for the hyper-wealthy corporations to which both major parties are equally indebted.
(to be continued…)
There’s a great deal of talk about “safe spaces” these days, but I put the phrase in quotes because rarely do these conversations refer to actual spaces. Instead, people seek social environments in which they’re protected against verbal assault, or confrontation, or mere discomfort. Place as such doesn’t enter into it.
In stories, though, the idea of the safe space is a powerful one — even if the safety often proves illusory. (“The calls are coming from inside the house.”) And when there is genuine safety it’s rarely complete or permanent. In The Lord of the Rings Tom Bombadil’s house and Rivendell and Lothlorien are places of absolute refuge for the beleaguered characters, but we are reminded that none of them could hold out forever against the evil of Sauron. Perfect rest can be found in them; but only for now. The contingently safe space is a curiously strong theme in the Harry Potter books: living in the Dursley house grants Harry protection from Voldemort — until he comes of age; 12 Grimmauld Place protects members of the Order of the Phoenix — as long as they manage to prevent anyone from seeing them enter or leave; Hogwarts itself is invulnerable to Voldemort and the Death Eaters — but only as long as Dumbledore is present and in charge.
There are of course genuinely safe spaces in literature, and perhaps many readers will have favorites. I certainly know what mine is: it’s Nero Wolfe’s brownstone on West 35th Street in Manhattan.
Of all fictional series, Rex Stout’s Nero Wolfe stories have the most ingenious and fertile conceit (with the possible exception of Patrick O’Brian’s Aubrey-Maturin books). It is twofold: that the enormously fat Wolfe never willingly leaves his house, preferring to solve crimes simply by application of brain power; and that Archie Goodwin, the man who moves for Wolfe and serves as a kind of mobile prosthetic body for him, narrates all the stories. There’s much to commend about this double conceit’s power to generate good stories — and about Rex Stout’s ability to conjure a consistently delightful narrative voice for Archie — but I want to talk about the house.
If you climb the steps and knock on the door, it will probably be answered by Fritz, Nero Wolfe’s chef — simply because Fritz’s kitchen is on the first floor, along with the dining room and Wolfe’s enormous office. The rest of the house is described at the Nero Wolfe Wikipedia page (linked above):
Nero Wolfe has expensive tastes, living in a comfortable and luxurious New York City brownstone on West 35th Street. The brownstone has three floors plus a large basement with living quarters, a rooftop greenhouse also with living quarters, and a small elevator, used almost exclusively by Wolfe. Other unique features include a timer-activated window-opening device that regulates the temperature in Wolfe’s bedroom, an alarm system that sounds in Archie’s room if someone approaches Wolfe’s bedroom door or windows, and climate-controlled plant rooms on the top floor. Wolfe is a well-known amateur orchid grower and has 10,000 plants in the brownstone’s greenhouse. He employs three live-in staff to see to his needs.
A back door, rarely used and treated as something of a secret, leads to a small garden where Fritz grows herbs; the garden also features a vaguely described way out onto 35th Street (which seems likewise to be a secret, and is probably invisible from the outside, like 12 Grimmauld Place).
The brownstone possesses an aura of self-sufficiency: I suppose Fritz has to shop for the food he cooks, but his larder seems magically full, and the meals served in that kitchen or in the adjoining dining room are in my imagining conjured more than made. (Fritz’s rooms are in the house’s basement, where he keeps an extensive collection of cookbooks.) The little world of the greenhouse, with its custodian Theodore Horstmann, who lives among his orchids at the top of the house, is like a chunk of Faerie that one enters not by walking through a strange forest but by taking Wolfe’s little elevator.
Often in the books one of Wolfe’s clients finds himself or herself — usually herself — in danger and is brought by Archie to the house, whereupon the doors are locked and all creatures of evil intent are excluded. In one story a woman tries to stab Wolfe as he sits at his custom-made desk in his office; she dies instead. Wolfe is invulnerable there; I’m reminded again of Tom Bombadil, though in darker and more cynical form, utterly safe “within limits he has set for himself” and making others safe there too.
All of this is of course merely a dream of refuge dreamed by someone (me) who is one of the safest people in the world. As I write these words, refugees from the Middle East are pouring into Europe, and someone posted on Instagram images of notices that the city of Vienna has put up in all the transportation centers. The one in English (I saw Arabic ones too) begins with the word “WELCOME,” goes on to explain the various services the city provides for refugees, and instructs visitors in how to find help. Then, at the end, there is a single three-word paragraph:
You are safe.
“You are safe.” Could there be more powerful, more important, more consoling words? I have never needed to hear them in the way those thousands of refugees need to; and yet they answer to the deepest of all needs. For even water and food can wait for a while.
I have thought sometimes of finding myself in New York City, pursued by evil people who will do terrible things to me before they kill me. Somehow, against all odds, I make my way to the house on West 35th Street and rush up the steps and knock. Archie Goodwin opens the door a crack, then ushers me in. Up we go to the guest bedroom on the third floor, down the hall from Archie’s own room. The room is clean and quiet, and an orchid from Wolfe’s greenhouse stands in a vase on the bedside table. Once alone, I take off my clothes and turn out the light. In the morning Fritz will make a delicious breakfast, and there will be plenty of hot strong coffee. In the meantime, I sleep soundly and peacefully. Because here I am safe.
Damon Linker is right to say that the person now known as Kentucky Clerk should resign if she can’t fulfill the law that the terms of her job require her to fulfill.
Mollie Hemingway is right to say that the attacks on Kentucky Clerk are utterly malicious and utterly mendacious.
There are really two significant stories here: one concerns Christians who think that they ought to be able to dissent from government and get paid by it at the same time; the other concerns secular liberals whose one principle in relation to the repugnant cultural other is “Any stick to beat a dog.”
UPDATE: I tried to comment on Noah’s response to this post, but WordPress didn’t let me. Or I don’t think it let me. Anyway, two things: first, did I really sound “outraged”? I didn’t feel outraged. Perhaps I need to work on tone management.
Second, about the question of “significance”: if Kim Davis is a unique figure, then Noah is right, the story isn’t significant. But she may not be a unique figure. There seem to be a number of conservative Christians in America with a complex (possibly contradictory) attitude towards this country: on the one hand, a default patriotism and law-and-order mentality, often rooted in the belief that America is a “Christian nation,” that makes them comfortable with holding government jobs; and on the other hand, a belief that, like the Apostles, they should “obey God rather than man” and therefore should always be ready to dissent from the powers that be. This leads to someone like Kim Davis thinking that it’s possible for her to swear to uphold the law — but to refrain from upholding it when her conscience disagrees with a particular law. If a large number of Americans, even in just a few states, feel the same way, then that will have consequences for elections, for laws, for the social fabric. And such consequences would be significant. Enough people have spoken out in support of Kim Davis to make me think that it’s not a trivial story.
I am fond of thought experiments, though many people are not—or so I infer from the fact that every time I propose one the most common response I get is a refusal of its terms. So a number of people who have responded to my recent little exercise have said something like “But that’s not the situation we’re in”—Yes it is, in this thought experiment that I am totally making up—or “I would not vote for either party”—but in this thought experiment you have to choose one.
There’s some of this even in the response from my friend Noah Millman, as when he wonders whether there really are threats to religious liberty. In my thought experiment there damn well are, because I say there are! Against Noah, I say that the premises of my thought experiment are not and indeed cannot be “debatable premises,” because they are the ones I posit simply for the sake of the experiment: thus my insistence at the outset on the term “hypothetical.”
I can’t help being reminded of one of my favorite scenes from Wodehouse, in which the pathologically diffident Gussie Fink-Nottle discusses with Bertie Wooster whether he should follow Jeeves’s advice to build his self-assurance by wearing a Mephistopheles outfit to a costume party:
‘And you can’t get away from it that, fundamentally, Jeeves’s idea is sound. In a striking costume like Mephistopheles, I might quite easily pull off something pretty impressive. Colour does make a difference. Look at newts. During the courting season the male newt is brilliantly coloured. It helps him a lot.’
‘But you aren’t a male newt.’
‘I wish I were. Do you know how a male newt proposes, Bertie? He just stands in front of the female newt vibrating his tail and bending his body in a semi-circle. I could do that on my head. No, you wouldn’t find me grousing if I were a male newt.’
‘But if you were a male newt, Madeline Bassett wouldn’t look at you. Not with the eye of love, I mean.’
‘She would, if she were a female newt.’
‘But she isn’t a female newt.’
‘No, but suppose she was.’
‘Well, if she was, you wouldn’t be in love with her.’
‘Yes, I would, if I were a male newt.’
A slight throbbing about the temples told me that this discussion had reached saturation point.
I continue to believe that a thought experiment like the one I suggested is valuable in the same way that A/B testing is valuable. When someone asks you which of two shades of blue you prefer, you can, I suppose, say “Why just two? Why not fifty shades of blue?” or “Why not green, and red, and burnt umber, and all the other colors?” But maybe we would all learn something, even if something small, if you just picked one of the damned shades of blue. And then we can move on to other experiments after that, and gradually, incrementally, build up a more reliable understanding of our own values and preferences.
To those who would say that A/B testing, and thought experiments, are simple in comparison to real-life decisions, I reply: Precisely. That’s just the point of them. Politics is hard because it’s so outrageously complicated. It’s easy to get lost in all the overlapping questions and competing priorities. If you agree with a political party about seven of its official platform positions but disagree about only one, and the one is something you care passionately about while the seven are, for you, relatively insignificant — how are you supposed to weigh those things? It’s impossible to say off the cuff. More thinking is required. It helps to break the situation down into its component parts. That’s what a thought experiment like the one I proposed is for.
More about the substance of the matter later; right now, I have teaching to do.
Imagine that there are two leading American political parties. Imagine further that they are in general agreement on all issues except two. (That’s what makes this a true hypothetical.)
The first point of disagreement concerns religious liberty. Party A is a strong supporter of religious liberty; Party B thinks that religious liberty needs to be circumscribed in order to secure maximal equality or justice for others.
The second point of disagreement concerns foreign policy. Party B is in these matters cautious and circumspect, disinclined to adventurism, not isolationist but not interventionist either. Party A, by contrast, never met a foreign conflict it didn’t want to intervene in, and thinks what’s good for military expenditures is good for America. The more of our young men (and perhaps women) Party A can put in harm’s way thousands of miles from home the better it feels about itself. Pax Americana, world without end, y’all.
You (in this thought experiment) are a Christian and a strong supporter of religious liberty; you are also strongly opposed to unnecessary military adventures and foreign intervention more generally.
How do you vote? And on what grounds do you make that decision?
I’ve been thinking about this a good bit lately. While I am, as I have often demonstrated right here on this site, a vocal supporter of religious freedom, I’m also rather uncertain about how my religious convictions should affect my political decisions. The problem arises if we distinguish between individual and collective Christian action.
On the individual level, I know what I am supposed to do: if someone slaps me on one cheek, I should offer them the other; if someone takes my shirt, I should offer him my coat; if someone curses me, I should bless him; I should always seek the well-being of others in preference to my own. (Of course, this is not to say that I actually do what I know I should do.)
If that logic holds in the collective sphere as well, then perhaps Christian churches should focus less on what is best for them and more on what is best for their neighbors. They might have good reason, in that case, to accept constraints on religious freedom if doing so meant preventing unnecessary violence, death, and destruction from being unleashed on others.
Now, some Christians might also argue that the Church exists for others, so that promoting religious freedom, even at the cost of lives lost overseas, is still the selfless thing to do. And that could be right, but I think we all ought to be very wary of arguments that provide such a neat dovetailing of our moral obligations and our self-interest.
I honestly don’t know what I think about this, and still less do I know how to apply the proper principles to our own more complex political scene. But I do think it’s right to conclude that there are at least some potential circumstances in which religious believers, in order to be faithful to their religious traditions, would need to refrain from direct political advocacy for those traditions.
On Twitter, Damon Linker politely took me to task for, in my response to a post of his, ignoring the “substance” of that post. I believe by that he meant his explanation of his own views of fetal life, as opposed to the critique of Ross Douthat that I objected to.
Well, that post wasn’t about Linker’s own position, but rather about his peculiar way of responding to Ross Douthat. But okay—since you asked!—here goes. Linker writes,
Even if my wife and I could know every time a fertilized egg fails to implant and then sloughs off when she menstruates, we still would never be moved to mourn the death of a being with intrinsic moral worth. The same holds for fertilized eggs that slough off because a sexually active woman is using an IUD — or, for that matter, because a woman is breastfeeding in the first several months after giving birth. All of these activities lead to the “death” of what really is, at that pre-implanted stage, a clump of cells that is destined not to develop into anything at all.
Nine months after successful implantation, things are very different. I would even say categorically — ontologically — different. How is this possible? I have no idea. All I know is that nearly all of us are convinced that a newborn baby is a person, a creature with intrinsic dignity, worth, and a right to life that the liberal state is duty bound and justly empowered to protect — and yet also convinced that although this same creature possessed the same genetic code from the moment of fertilization, it was somehow of relative moral insignificance in those first few hours and days of microscopic life.
I would very much like to know Linker’s evidence for the claim that “nearly all of us are convinced that … this … creature … was somehow of relative moral insignificance in those first few hours and days of microscopic life.” Nearly all? But let’s continue:
Between those moments (conception and birth) lies a developmental continuum that confounds any and every effort at strictly rational systematization. An abortion at six weeks is worse than one at four weeks. Eight weeks is worse than six. Twelve is worse than 10. And so forth, as we approach fetal viability — at which point, what was once a medical procedure with minimal moral import becomes a matter of murder.
First of all, and especially in light of my critique of Linker’s critique of Douthat, I want to say that this identification of fetal viability as the point at which a fetus becomes a person entitled to legal protection is a big step, and one that I’m sure earns Linker plenty of condemnation from the pro-abortion world. And the criticisms I am now going to offer should not be seen as ignoring that step or diminishing its significance. But do I have some concerns about Linker’s line of argument? I do.
The first is that, while Linker’s view is often described as a “gradualist” one, and while morally that may be true, in legal terms it’s not gradualist at all: it’s totally binary, all or nothing. In this account, before viability the taking of a fetal life is legally nugatory; after viability it’s murder. This is a big jump in any circumstances, but it is especially worrisome given the success of neonatal medicine in pushing viability earlier and earlier. So whether a woman has done something of no legal interest or something of the greatest possible legal significance can change within a year. This is to make legal judgment — and the status of a human creature under the law — dependent to a disturbing degree on medical technology.
Moreover, Linker’s judgment about the “moral worth” of pre-viability fetuses is pretty shaky as well. There’s nothing wrong with that as a matter of personal feeling, though (as I suggested earlier) I’m not convinced that his personal feelings about zygotes—his moral intuitions about them—are as universal as he claims that they are. And that’s a problem with his case, because he grounds his entire approach to the legal status of fetuses in those feelings and intuitions. If almost everyone does share those intuitions, then maybe that will work as a matter of practical jurisprudence; but ethically it’s pretty dubious. After all, it hasn’t been that long since widespread intuitions about the “moral worth” of black people led to catastrophic evil. (And the leftovers of those intuitions are still poisonous for black people in America today.)
I appreciate, and even value, the general point that underlies Linker’s argument: that sometimes our laws have to be based on fallible and not especially consistent moral intuitions; that ad hoc reasoning is sometimes the best that we have; that the attempt to impose absolute consistency on our laws and jurisprudence is almost necessarily quixotic and prone to the generation of unintended consequences, because, as the adage rightly goes, hard cases make bad law. But I think our track record as a species—and more particularly as Americans—suggests that rough-and-ready moral intuitions do very little to protect the weak, the powerless, the despised. We need stronger and (yes) more consistent legal and moral stuff to protect those who cannot protect themselves.