
The Soul of the Machine

Matthew B. Crawford offers a vision of technology for free men.

Car repair shop
(Photo by David Ebener/picture alliance via Getty Images)

We are in the midst of a “Fourth Industrial Revolution,” says Klaus Schwab, impresario of the World Economic Forum. Of what does it consist?

Wind and solar will fully replace fossil fuels, and our carbon footprint will be brought to zero by such measures as forcing incandescent lightbulbs off the market and replacing beef with insects and synthetic meat. Robotic cars will take command of the roads. Artificial intelligence will supplant human beings and force millions of laborers into the breadline (or into universal basic income). We will be integrated into the “Internet of Bodies” where our every thought, action, and physical state is raw data for corporate and state actors alike. And, if that weren’t enough, the genetic code of human beings will be edited and reengineered.


The reader might not like the sound of all this, but too bad. It is inevitable.

But is it? The signals conflict. Wind and solar are, at best, straining to power our existing infrastructure. Their performance will not improve when tasked with juicing the technologies that Schwab et al. are seeking to promote. Despite the launch of flashy tools like ChatGPT, within the A.I. field it is conceded that the Promethean project to create consciousness is dead. The structural and engineering impediments to a broad launch of autonomous vehicles are so intrinsic that it is currently unimaginable for a major fleet to be fielded without a monumental effort of social engineering—something the late-modern American state simply lacks the capacity to do.

For more than a decade, several thinkers have been filing a minority report that, despite all superficial appearances, America is in a state of profound technological decline. In 2011, Peter Thiel gave the canonical statement on the right: “We wanted flying cars, instead we got 140 characters.” Thiel’s judgment was echoed on the left by late anthropologist David Graeber, doyen of Occupy Wall Street, who wrote in 2012: “A secret question hovers over us, a sense of disappointment, a broken promise we were given as children about what our adult world was supposed to be like… Where, in short, are the flying cars?” 

A group of thinkers broadly affiliated with the New Right (for lack of a better term) have added to Thiel’s assessment a growing body of trenchant analyses, looking at various explanations for the technological trap we have fallen into: the misapplication of research and development spending; misplaced faith in the market to generate innovation; the lack of a meaningful industrial policy; the misalignment of elites with more technologically fruitful projects; the financialization of the economy; misguided preoccupation with culture-warring.

There tends to be a conjoined concern among these thinkers (with whom I mostly agree) that in a world of great power contest and emerging multipolarity, the United States is falling dangerously behind China. The broad sentiment is that we must move heaven and earth to “catch up,” a notion that gives me pause. If our singular guide for moving forward is catching up with China, where will that lead us but to China? With a growing constellation of “smart” technologies breathing down our necks, Americans can be forgiven for despairing at the idea of aping a regime that relentlessly surveils its own people and ensnares them in the web of a social credit system.


Lacking a more humane dream—one that, I know personally, many on the New Right would welcome—all of the above points in one direction. Either the Fourth Industrial Revolution is inevitable, or we must put every effort into instituting it and beating China to the punch.

Where, instead, should we go? What are the tools that could enhance and extend the powers of the average citizen, rather than make him a tool of corporate, state, or machine power? More specifically, what are the qualities of tools that make for a certain kind of soul and strengthen a certain type of political regime, that of a republic—a regime of free men?

Matthew B. Crawford, philosopher, social critic, and mechanic, has been asking these questions for more than a decade. His work, now spanning three excellent books including Shop Class as Soulcraft (2009), illuminates the difference between tools that liberate and those that oppress. He identifies a totalitarian tendency latent in the machines that we associate with the Fourth Industrial Revolution, including driverless cars. Crawford’s books—part case study, part theory—deliberately eschew a systematic approach and make no attempt to present a complete “philosophy” of technological republicanism. Nevertheless, they provide critical insights, judgments, and principles for those of us who are committed to such a task.

Crawford’s first two books, Shop Class as Soulcraft and The World Beyond Your Head (2015), were concentric explorations of how mastery of hand tools fits a person to hard, bodily reality. Against the omnipresent forces herding us into the thin domain of the digital, where objects can be given any appearance, a good hammer teaches the opposite lesson. Material things have qualities of their own that cannot be overwritten. With patience, they can be re-shaped, but not transmuted. To get under the hood is to do something real, to be something real. “Grime-under-the-fingernails, bodily involvement with the machines we use entails a kind of agency,” Crawford wrote. 

Crawford flirted with stardom with the release of Shop Class, which hit the New York Times bestsellers list and earned a nice pot of prestigious awards. With the release of The World Beyond Your Head, Crawford’s star dimmed a bit, but not because of any diminished excellence. The booming reception of Shop Class often reduced it to an oversimplified, non-threatening exhortation to restore technical education to our secondary schools. The World Beyond Your Head offered a hard-to-dodge challenge to the emerging technological system itself. “In a culture saturated with technologies for appropriating our attention, our interior mental lives are laid bare as a resource to be harvested by others,” he wrote. However obvious now, back then it was an extreme minority position.

In opposition to these mentally extractive technologies, Crawford commends tools that extend human mentality outward, that focus attention rather than divert it. The World Beyond Your Head concludes with a luminous inquiry into a world-class organ-builder shop, Taylor and Boody, in Staunton, Virginia, a lovely little town 45 minutes west of my own town of Charlottesville. The common work of making massive pipe organs that emit gorgeous, sonorous notes—overseen by John Boody, the boss who makes the final judgments—coordinates the individual attentions of the group. Each member brings to bear a technical proficiency that comes from years of practice and total dedication, making adjustments that at times come down to “millionths of a millimeter.”

For Crawford, there is an ethical dimension to attaining this level of skill. To judge the quality of something beyond detection by a high-tech instrument is a power that comes only to those who take their natural talent and commit it entirely to their work. This is to allow the work to recast your mind and senses, to give them definition and make them receptive to the potential for excellence that is latent within one’s chosen materials.

Silicon Valley’s digital tools, by contrast, foster a different kind of selfhood. The old tools situated and defined one; the new tools relieve one of such specificity. The sense one has when scrolling is universality—of being lifted out of one’s immediate surroundings to wherever the spirit leads. Does a conversation bore you? Then scroll. Driving? Then scroll. This sense of seamlessness and easy mastery is reinforced by social media, which relocates “connections” into placeless electronic networks, where one cannot be seen and one’s flaws can be hidden. One’s online avatar is a fully controllable brand. 

Through apps that make cheap labor conveniently accessible, anyone with the funds can be unburdened of practically any menial activity. Shopping for groceries, cleaning, chauffeuring, nannying—the American underclass can be made available at the push of a button. These tools, then, are not mere products of disinterested “advancement” but technological expressions of our decadent dream of becoming unburdened, autonomous selves.

The reality, of course, is far different. Rather than unburden the user, these machines contract his mental and physical extension by addicting him to a device that apes mastery but provides none. This makes him more manipulable by choice architects who are secretly pushing the user down predetermined channels. Every click is nudged, tracked, analyzed, and either sold for profit or marked for divergences. As Crawford puts it, “the diversity of human possibility [is] being collapsed into a mental monoculture—one that can more easily be harvested by mechanized means.”

If Big Tech’s tools proffer a false freedom, true tools are media for genuine agency. Crawford puts forward the musician. “One can’t be a musician,” he says, “without learning to play a particular instrument, subjecting one’s fingers to the discipline of frets or keys.” It seems a paradox. A musician’s freedom and power to make beautiful music depends on her “obedience … to the mechanical realities of her instrument.” She becomes a true agent, her soul is matured and made excellent, by attending to what she loves and humbling herself to the craft. What her submission to her instrument gives us is music, the language of the spirit.

By contrast, under the guise of autonomy, Big Tech’s devices tangle up our attention, lash it to the screen, and turn it from the world, despoiling it of vitality and priming it for consumption. Big Tech wins our minds. We lose a world full of souls made excellent by the rough tutelage of reality. It’s a choice between souls that are formed and souls that are formless. What is ultimately at stake, then, is the kind of men that our machines beget. This is Crawford’s ultimate criterion of technological value. 

Crawford’s tools of choice are motorcycles, automobiles, and other roaring vehicles. These machines and how to care for them receive extended attention in his first two books. They emerge as the overarching interest in his third and, to date, best book, Why We Drive: Toward a Philosophy of the Open Road (2020). Like a piano, hockey stick, samurai sword, or wrench, he argues, automobiles can become prosthetic extensions of man’s natural powers. A good driver in a good car, says Crawford, knows the road. He feels the road.

But a good driver’s car is hard to come by these days. Beginning in the 1990s, the ideal of the luxury car, with a “goal of maximum insulation from the road by damping out all mechanical ‘transients,’” became standard for practically all models. Crawford is old enough to remember when the frame of the car was the main medium for communicating information to the driver, who could literally feel his wheels’ contact with the road. The new engineering ideal was to smooth out the shocks felt by the car, which kept mechanical data from working its way up to the driver. This trade-off created new problems. The driver may be more comfortable, but he is also more ignorant of his car and road conditions, so he has to be alerted to problems by other means, chiefly by sounding chimes and flashing symbols on the dashboard. “Car makers have attenuated the natural bonds between action and perception, interposing a layer of representations.”

Driving and cars became, in a word, computerized. As Crawford says, “Essentially, there is another hood under the hood.” This has profound implications for any aspiring do-it-yourself auto mechanic like Crawford. Today’s cars are designed to keep you out. Whereas before, the average husband was also the family mechanic, computerized automobiles seal him off from such mastery. He has been marginalized—along with his neighborhood mechanic, formerly called in for bigger issues—by computer technicians at the dealership who increasingly oversee even routine fixes. The automobile has been integrated into the knowledge economy and is now dependent on experts and corporate bureaucracies for operation. The corporate benefit of keeping owners in a protracted financial relationship is obvious.

This raises profound questions. Why are we being systematically denied total access to our own machines? What happens to our minds when our machines cease to be open to us and become instruments for sealing us off?

The first effect is the erosion of skill. Drawing on numerous studies, Crawford reports that, with such features as cruise control, GPS, automatic transmission, or Tesla Autopilot, “we have learned that when a driver is out of the loop of controlling their vehicle, they tend to become sleepy and less vigilant, and it takes them longer to respond to sudden events.” This dumbing down will only deepen as automation and ultimately A.I. take control of more tasks. A recent European Commission mandate requires that “all new vehicles sold for use on European roads must include lane-keeping and automated braking systems, as well as speed limiters and data recorders.”

It is a self-reinforcing cycle. “As the space for intelligent human action gets colonized by machines, our intelligence erodes, leading to demands for further automation,” Crawford writes. “Automation has a kind of totalizing logic to it. At each stage, remaining pockets of human judgment and discretion appear as bugs that need to be solved.”

This wider shift away from human judgment is making our lives unintelligible. Crawford warns that a technological veil is settling over the world, soon to cover everything. This creeping impenetrability is intrinsic in the operations of algorithms and A.I. Crawford joins computer scientist Michael Wooldridge in worrying that A.I. is inherently opaque. A common method of instructing a robot to identify something is to show it many images of that object. An engineer, by selecting the images, has significant influence over the process, but he cannot perfectly predetermine what lessons the A.I. draws from its instruction. “They are black boxes,” Wooldridge says. When they make poor choices, “they cannot explain or rationalize the decisions they make in a way that a person can.” 

This should register as a matter of extraordinary significance to the reader, because, as Crawford notes, more and more aspects of our lives are measured and controlled by these algorithms, “our movements through space, our patterns of consumption, our affiliations, our intellectual habits and political leanings, our linguistic patterns.” Other examples are even more intimate: “our sexual predilections and ready inferences about the state of our marriage, our moment-to-moment emotional state as revealed by our facial expression.”

With impersonal algorithms, we have no recourse to an authority that can answer for them. One recent study found that only 3 percent of calls to the Internal Revenue Service helpline were answered by a human being while the other 82 million calls were caught up in the net of automation. As algorithms grow in complexity, not even the tech companies that own them can ultimately provide an account of their actions. “This lack of accountability to citizens cannot be overcome by good-faith efforts to achieve ‘transparency,’ for reasons intrinsic to the technology,” Crawford summarizes.

This opacity is part of “a wider crisis of political legitimacy that has been brewing for some time in Western democracies.” If you have ever passed a state trooper going fifteen over and he just lets you drive on, it’s because, in his judgment, you were driving safely, whatever your speed. Law enforcement by camera knows no such nuance. It is deferred to as “objective.” “Machines don’t make judgments,” Crawford says. “They do, however, offer an image of neutrality and necessity, behind which the operation of human judgment becomes harder to make out, and harder to hold to account.”

As these abstract systems assume a greater share of public and social interactions, the upshot is alienation. This alienation is by no means equal. Elites are granted the privilege of human-to-human service, and the rest are rebuffed by the machine. Crawford expects the captains of Silicon Valley to stick with “dumb stuff”—meaning objects that do not operate algorithmically and surveil the user—“just as they currently send their children to special schools from which devices with screens are rigorously excluded.”

What is emerging is a caste system in which the lower strata of society will be governed by techno-bureaucratic algorithmic “knowledge,” and the higher authorities will move freely among a community of fellow humans exerting agency over their dumb machines. The super elite will be driven by their own hands, or at least by the hands of their servants. Automated cars are only for you and me.

To join humans on the road, A.I. will need to ape the practical reasoning of human beings and to equal it in power and functionality. This is proving to be exceedingly difficult to do.

As mentioned above, strong A.I. has been all but abandoned. But lowering the sights doesn’t necessarily bring A.I. any nearer to successfully replicating human action. Crawford recounts the story of a Google car that waited, paralyzed, at a four-way stop, because human drivers would roll through without coming to a complete rest. The machine had been programmed to wait until the other vehicles were completely stationary. “Of course,” Crawford writes, “what human drivers do is make eye contact in such a situation, or read other cues of social interaction.” Machines can operate only in an environment in which all other agents abide by the same set of rules and interpret them similarly. 

Time will ultimately tell if Crawford’s prediction is correct that a road occupied by both humans and robots is doomed to fail. Since the publication of the book in 2020, the technology has taken an important step forward. To oversimplify, the most cutting-edge research shows that A.I. can be trained via machine learning to identify the behaviors of other drivers, categorize them into certain types, and then use mathematical probability to forecast what kinds of moves such drivers tend to make. This A.I. does better on the freeway—in safely performing three lane changes in rapid succession—than A.I. trained under different premises.

This is a genuine technological advance and one that, if significantly developed, can potentially anticipate human action. That’s still a very long way off. Even so, this from Crawford still stands: “Autonomous cars face the same predictive problem as human drivers, except that they are subject to neither the benefits nor the hazards of being engaged in a socially bootstrapped, interpretive process of mutual prediction.”

Getting the technology right is unlikely to be the deciding factor anyway. When the world’s largest corporations, backed by both political parties and the coordinated force of the global elite, execute a technological transformation, it will happen, even if the design is bad or the pain among the people is great. See the catastrophic move toward renewables and the energy crisis in Europe. A populace at risk of freezing to death is apparently a price worth paying.

Driverless cars, Crawford believes, will drive a rift between human beings and their actions that may very well disrupt our self-understanding as ethical beings. If you are being chauffeured about in an automated Google device and the A.I. makes a dumb move and a child is fatally struck, who is to blame? You? The engineers to whom the A.I. is a black box? Google? Do you have any confidence whatsoever that anyone will be held liable?

With his first two books, Crawford gained a reputation for being cool under the hood. His analyses were deep, serious, at times salty, but always levelheaded. Which makes this sentence in the introduction of Why We Drive all the more alarming: “There is something new and voracious in the world that feeds on individual agency, and is basically imperial in its aspiration.” What is that thing? Crawford adopts Shoshana Zuboff’s name for it: “surveillance capitalism.”

If you learn only one thing about driverless cars, let it be that the largest fleet will be owned and operated by Google, which claims an absolute right to any intel it gathers on its users. Extraction of your information—which, as writer Jon Askonas has pointed out, provides a very small share of the company’s revenue—is nonetheless the keystone of its operations. Why should we expect its cars to behave differently? Crawford puts his finger on it: 

By colonizing your commute…with yet another tether to the all-consuming logic of surveillance and profit, those precious fifty-two minutes of your attention are now available to be auctioned off to the highest bidder. The patterns of your movements through the world will be made available to those who wish to know you more intimately—for the sake of developing a deep, proprietary science of steering your behavior. Self-driving cars must be understood as one more escalation in the war to claim and monetize every moment of life that might otherwise offer a bit of private head space.

Your driverless car will not really be a car, in other words. It will look, sound, and move like a car, but the computer under the hood—the device by which Google will watch you—will be the essence of the thing. Like Search and all of Google’s other products, it will offer you a service in order to gather information on you, shape your values, and nudge you toward becoming a certain kind of person.

Google’s surveillance and deplatforming will take on new, malevolent dimensions when the corporate behemoth charts your course through the world. Driverless cars are the first major foray into establishing the “smart city,” organized, governed, and policed by algorithm. As Crawford explains, “The idea is that our movements through the city, the infrastructure we depend on, the police protections, trash collection, parking, deliveries and all other services that make a city work, will be orchestrated by an ‘urban operating system.’”

In the smart city, Google “would presumably export the same set of guiding principles beyond the screen and install them in the physical world, where they would guide the activities of our bodies.” Crawford adds, “Unplugging would not be an option.” 

How will deplatforming and demonetizing be implemented in the smart city? Google has been disappearing content of a Christian and conservative character at an alarming pace and without warning. Will permission to go to church be conditioned on right-think, or suspended altogether during a pandemic? Will mobility be rationed to combat the climate crisis? Will there be rallies or festivals that one might be precluded from attending based on analysis of one’s data? Or no-go zones for people of a certain skin pigment or politically disadvantageous sexuality? It would be naïve not to wonder. 

Human beings want to drive. My son, two years old, recently pushed his sister’s toy stroller about the living room. “I’m driving, papa!” he boasted. He likes to be a “big boy,” which means doing what I do. “Wonderful!” I replied. “When you get older you can drive a big car, just like me.” If they let you, I added under my breath.

Prior to moving down to Charlottesville in 2019, with my wife and firstborn, I lived in New York City for twenty years. I loved my life on mass transit, bicycle, and foot, but even I can see that there is something fundamentally human being expressed in the pleasure one takes in hitting the road. Yes, our highway system is a product of social engineering and can lead to alienating conditions. But getting one’s license is the closest thing kids have today to a rite of passage (sadly). Commutes are horrible—but that’s not all cars are for.

Like all other good tools, automobiles extend and amplify our embodied capacity. Crawford gives it an anthropological title: Homo moto. Basically, man who delights in moving about, and whose cognitive, ethical, social, and political nature develops by doing so.  

At first, I admit, I rolled my eyes a little. What, one more, Homo this-and-that? Then I took my little ones to the beach and, after playing with them in the waves, I stood back, took it all in, and, well, there he was. Old folk strolled along the shore; swimmers dived under the waves and danced in the tide; boogie boarders rode the surge; further out, surfers, more skilled and adventurous, caught the curl; further still, maybe two hundred feet out now, a lone windsurfer cut across the blue; beyond him, a small yacht cruised along leisurely (I imagined beers, music, sunglasses, and wind in the hair); a four-wheeler vroomed along the sand dunes behind me; and a two-seater circled in the sky above, with a long banner whipping behind to advertise for a local amusement park (where kids and families, no doubt, were thrilling to some big drops and loop-de-loops). This, I said to myself, is Homo moto.

It seemed tasteless, suddenly, to speak of human beings and their magnificent driving machines as mere transportation. Cars are a site of action, not a mere means to move between here and there. The crafting of extraordinary tools for moving about the elements quickly, freely, and playfully is a spirited and dignified work. These tools enable man’s playing, Homo ludens—indeed, Crawford’s thinking is deeply inspired by Johan Huizinga—and his crafting of ingenious instruments, Homo faber. What Crawford is at pains to argue is that these three primary dimensions of human nature, man’s playing, making, and free movement, are expressed jointly in the driving and loving of cars, even if late-modern elites strongly deny it. This, according to Crawford, is why we drive. “To drive,” he says, “is to exercise one’s skill at being free, and I suspect that is why we love to drive.” This freedom is what we stand to lose.

A vigorous republicanism, Crawford says, is fueled by “a tribunal spirit that looks with active hostility on whatever erodes the status of man.” In this age in which unprecedented corporate and state powers are acting upon us through so-called smart technology, a ferocious judgment must be publicly applied, perhaps for the first time, to machines. If a technology strips men of skill, warps them with vice, puts them under surveillance, threatens their nature, or diminishes their citizenship, it is opposed to republicanism.

For Crawford, it does not matter if tyrannical technologies are tools of state or corporate power, especially since their hegemony over the American people is blended and shared. As he testified before the Senate Judiciary Subcommittee in June 2021: “From the perspective of ordinary citizens, the usual distinction between government and ‘the private sector’ [sounds] like a joke, given how the tech firms order our lives in far-reaching ways.”

It should be clear by now for all to see that “self-government broadly understood, both as an individual capacity for self-command and as a political dispensation,” will not survive without a willingness to contend with the techno-political powers that encircle us. Technological republicanism requires a fundamental shift in our disposition. We can no longer pretend that machines are “neutral” or that they benefit all Americans equally. Different technologies serve different economic, political, cultural, and class interests—obviously. Where those interests undermine the commonweal, they must be checked. Where they support self-government and the common good, they must be empowered.

Fortunately, the timing for such a struggle may be right, even if the hour is late. The myth that technological change naturally offers a universal benefit is losing its hold. With widespread concern over the terrible effects of social media on the wellbeing of children, the convergence, right before our eyes, of Big Tech and state power into a regime of digital punishment and comprehensive surveillance, and the unmitigated disaster of Covid—a mechanized genome (don’t forget)—it is dawning on people here in the States and across the globe that maybe, just maybe, some technologies don’t serve them.

Crawford is correct: Technological power is a “political issue.” When Big Tech targets our highways and roads for the dominance of its machines, they are undermining our “sovereignty” and making a claim about who rules. Crawford writes, “As creatures who are self-moving, freedom of movement would seem to be the most fundamental freedom there is.” American lawmakers should counter power with power and do all they are able to stop Big Tech’s hostile takeover of our roads.

But what, again, is the positive vision? At the risk of being romantic, the positive vision is nothing other than human beings. Technological republicanism seeks to “reclaim the real,” as Crawford puts it. That does not mean adopting a conservatism that attempts to freeze time or eradicate all traces of technology in our lives. Rather, it means designing machines that extend human mentality outward, complement human skill, and assert man’s rightful dominion.

In sum, machines are made for man, not man for the Machine. It is high time that the American people and their leaders remember this. We can thank Matthew B. Crawford for jogging our memory.