Steve Sailer quotes Robert Wright on the “Trolley Problem” thought experiment:

The thought experiment—called the trolley problem—has over the past few years gotten enough attention to be approaching “needs no introduction” status. But it’s not quite there, so: An out-of-control trolley is headed for five people who will surely die unless you pull a lever that diverts it onto a track where it will instead kill one person. Would you—should you—pull the lever?

Now rewind the tape and suppose that you could avert the five deaths not by pulling a lever, but by pushing a very large man off a footbridge and onto the track, where his body would slow the train to a halt just in time to save everyone—except, of course, him. Would you do that? And, if you say yes the first time and no the second (as many people do), what’s your rationale? Isn’t it a one-for-five swap either way?

Greene’s inspiration was to do brain scans of people while they thought about the trolley problem. The results suggested that people who refused to save five lives by pushing an innocent bystander to his death were swayed by emotional parts of their brains, whereas people who chose the more utilitarian solution—keep as many people alive as possible—showed more activity in parts of the brain associated with logical thought.

To which Sailer responds:

Isn’t the Trolley Problem the dumbest question ever?
First, why use something as unlikely to work as a human body, when you should be looking around for something more likely to slow down a trolley? You know, rather than immediately push the fat man to his death, maybe it would make more sense to enlist his help in finding and shoving something more useful into position?
Second, assuming the problem comes with some rationalization for its Moloch-like desire for human flesh, why not jump yourself? I mean, I weigh 197 pounds. That’s pretty fat (although not as fat as I used to be!).

Presumably, the question comes with some explanation for why only somebody fatter than yourself will do the trick? But, how do you know? In what universe do runaway trolleys come with a safety label that reads: “Your 197 pounds is not heavy enough. Only an NFL offensive lineman-sized fat man will do.”

Third, have you ever tried to push a fat man to his death? I don’t want to get into a lot of irrelevant details here of who tried to do what to whom, but let me just note for the record: It’s not really as easy as philosophers assume.
Of all sizes of humans, large, fat men big enough to stop a trolley are the hardest to push around. The NFL searches them out precisely because the laws of physics mean they are hard to move. So, if you tried to shove a fat man onto the tracks, he’d probably say, “Hey, knock it off.” And if you persisted, you’d probably just wind up wrestling in the dirt while the trolley zooms past.

So maybe the aversion of most people who aren’t utilitarian philosophers to pushing fat men to their deaths makes a little sense?

I suspect Sailer thinks he’s mocking the problem, but he should take his answers more seriously. The psychologists who set up the experiment as a way of demonstrating that strict utilitarianism is more “logical” and less “emotional” should consider the possibility that our emotions represent an evolved way of processing information, one that still has utilitarian value alongside logical deduction.

When we intuit there’s something wrong with a choice, we say it “feels” wrong. That doesn’t necessarily mean that we’re experiencing a kind of moral revulsion – indeed, it usually doesn’t mean that at all. It usually means we intuit a problem – one that may be perfectly explainable by logic once we figure out what it is.

“Push the fat man in front of the trolley” feels like the wrong answer not because it’s wrong to ever affirmatively kill somebody to save others, but because the problem is, as Sailer points out, both artificial and ridiculous. That artificiality and absurdity is what divides the “push the fat man” scenario from the “throw the switch” scenario.

It’s reasonable to assume that pulling the lever will actually divert the train, and that if the train is diverted the five people will be saved – and if the effort fails, nothing worse will happen than would have happened otherwise, and no one would think it amiss for you to have tried to divert the train. By contrast, if you try to push the fat man, he may throw you in front of the train instead. If you succeed, he may not stop the train. And, as Sailer says, how on earth can you know that the fat man is the only, or even the optimal, way to stop the train?

All of that uncertainty should make us more reluctant to take action – all the more so when we’re talking about killing somebody. It’s entirely logical to refuse to act in such circumstances. But calculating the odds isn’t something you have time to do in situations like that, if it’s even possible. Instead, what you need is a good intuition of what might actually work, and how certain you can reasonably be that it will. “Push the fat man” doesn’t pass that test – and shouldn’t.
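The point about uncertainty can be restated in expected-value terms. Here is a toy sketch of that arithmetic – the function and every probability in it are made-up illustrations of the argument, not figures from the source:

```python
# Toy expected-value comparison of the two trolley scenarios.
# All probabilities below are hypothetical, chosen only to illustrate
# how uncertainty about success changes the consequentialist math.

def expected_deaths(p_success: float,
                    deaths_on_success: int,
                    deaths_on_failure: int) -> float:
    """Expected deaths for an action that succeeds with probability p_success."""
    return p_success * deaths_on_success + (1 - p_success) * deaths_on_failure

# Pulling the lever: the mechanism is reliable, and a failed attempt
# leaves things no worse than doing nothing (five deaths).
lever = expected_deaths(p_success=0.99, deaths_on_success=1, deaths_on_failure=5)

# Pushing the man: the shove may fail, and even a successful shove may
# not stop the trolley; a failure kills the five and possibly him too.
push = expected_deaths(p_success=0.25, deaths_on_success=1, deaths_on_failure=6)

do_nothing = 5.0

print(f"lever:      {lever:.2f} expected deaths")   # 1.04
print(f"push:       {push:.2f} expected deaths")    # 4.75
print(f"do nothing: {do_nothing:.2f} expected deaths")
```

On these (invented) numbers, pulling the lever is clearly better than inaction, while pushing the man is barely better – and only if your guess at its success probability happens to be right, which is exactly the knowledge the scenario pretends you have.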

I’m not arguing for a deontological approach to ethics as against a consequentialist one. I think it’s silly to say “you can’t ever kill the fat man because that’s using him as a means not an end.” I’m arguing that consequentialism has to take account of whether the ends are “ends in view” or are the more distant consequence of contingent probabilities.

And the artificiality of the scenario is part of what triggers, or should trigger, a red flag about resorting to consequentialism to justify the action. “Throw the switch or not” is a natural choice actually presented by real conditions – switches imply choices by definition. “Push the fat man or don’t” isn’t a natural choice presented by real conditions – it’s a scenario concocted for an experiment. By definition, those cannot be the only options in the universe. And our brains can tell.

It seems to me that what characterizes the people who choose the “logical” answer – push the fat man – is not that they gave a less-emotional response but that they gave a less-intuitive, less-gestalt-based response. They were willing to accept the conditions of the problem as given without question. That’s a response to authority – they are turning off the part of their brains that feels the situation as a real one, and sticking with the part of the brain that reasons from unquestionable givens to undeniable conclusions.

There’s a place for that kind of response – but I would argue that answering questions of great moral import is emphatically not that place. Indeed, from the French Revolution to the Iraq War, modernity is littered with the corpses of those whose deaths were logically necessary for some hypothesized outcome that could not actually have been known with remotely the necessary level of certainty. In that regard, I suspect an aversion to following logic problems to fatal conclusions is not merely a kind of moral appendix handed down from our Stone Age ancestors, but remains positively adaptive.