
Men or Machines?

State of the Union: Machines seem “objective,” but often reflect their designer’s subjective preferences.

The human-machine Maria in Fritz Lang's 1927 'Metropolis' (Source)

Michael Toscano's review of Matthew Crawford's books and thought on technology was published in our latest print issue and is now featured on our homepage. It is one of my favorite recent pieces at TAC.

Toscano describes the distinction that Crawford draws between tools that “extend human mentality outward” and those, like social media, that “foster a different kind of selfhood.”


The sense one has when scrolling is universality—of being lifted out of one’s immediate surroundings to wherever the spirit leads. Does a conversation bore you? Then scroll. Driving? Then scroll. This sense of seamlessness and easy mastery is reinforced by social media, which relocates “connections” into placeless electronic networks, where one cannot be seen and one’s flaws can be hidden. One’s online avatar is a fully controllable brand. 

Social media promises users greater connectivity with their peers. Paradoxically, though, it alienates users from one another by offering a highly addictive, curated experience designed to keep them scrolling:

Rather than unburden the user, these machines contract his mental and physical extension by addicting him to a device that apes mastery but provides none. This makes him more manipulable by choice architects who are secretly pushing the user down predetermined channels. Every click is nudged, tracked, analyzed, and either sold for profit or marked for divergences. As Crawford puts it, “the diversity of human possibility [is] being collapsed into a mental monoculture—one that can more easily be harvested by mechanized means.”

On its face, the technology seems neutral. Social media companies call their products “platforms” to evoke in users a sense that they can “build” their own identities on the apps without interference. But it is a patina of neutrality. Behind the curtain, people, whom Toscano calls “choice architects,” are making conscious value judgments about users' experiences.

We assume that machines, whether a speed camera on the roadway or a self-driving car, are inherently more “objective” decision-makers than the human beings they replace. But in each case, human judgment has not been eliminated, only bumped up a level. The builders of these machines make value judgments, which appear to end users as “objective” but are objective only inasmuch as they execute, without discrimination, the creator's subjective designs:


If you have ever passed a state trooper going fifteen over and he just lets you drive on, it’s because, in his judgment, you were driving safely, whatever your speed. Law enforcement by camera knows no such nuance. It is deferred to as “objective.” “Machines don’t make judgments,” Crawford says. “They do, however, offer an image of neutrality and necessity, behind which the operation of human judgment becomes harder to make out, and harder to hold to account.”

Take, as a low-stakes example, the debate over baseball's potential embrace of a robotic strike zone. The rulebook defines a strike as a pitch that crosses home plate below the top of a batter's chest and above the bottom of his kneecap when “the batter is prepared to swing at a pitched ball.” Deciding when a batter “is prepared to swing” requires human judgment. A robotic strike zone would not eliminate that judgment call; it would take it out of the hands of on-field umpires and delegate it to nerds in a skybox.

In other words, what appears to be an objective judgment from a machine often reflects the subjective choices of its designer. That raises questions about the responsibility of users and designers in contexts far more important than baseball, such as traffic fatalities:

Driverless cars, Crawford believes, will drive a rift between human beings and their actions that may very well disrupt our self-understanding as ethical beings. If you are being chauffeured about in an automated Google device and the A.I. makes a dumb move and a child is fatally struck, who is to blame? You? The engineers to whom the A.I. is a black box? Google? Do you have any confidence whatsoever that anyone will be held liable?

Proponents of these technologies will have to answer. And Toscano, in his piece, performs a service in asking the right questions.