
Can Populists Be Allowed Access to A.I.?

One Silicon Valley capitalist wants to keep advanced technology out of the hands of his political opponents.


The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, by Mustafa Suleyman with Michael Bhaskar, Crown Publishing, 352 pages

Technology is not neutral. As Neil Postman repeatedly reminded us, “Every new technology benefits some and harms others.” Postman’s rejoinder was the plain truth, but, until recently, the notion of technological neutrality held the high ground of common sense.

That era is swiftly coming to an end. “Technology is a form of power,” admits Mustafa Suleyman, an artificial intelligence researcher and co-founder of DeepMind, in his new book, The Coming Wave. That means that technology is necessarily political. Technology is man’s assertion of power over nature, but it is also an assertion of power by men over men: something that has been furiously denied by those interested in preserving the belief that their machines are generally beneficial to mankind.

If technology is power, one must strive to keep it from falling into the hands of rival or enemy states, if possible. There are domestic duties, too. One is obliged to limit who gets their hands on it and for what purpose. How should these limits be decided? The Coming Wave turns out to be an argument for why Silicon Valley capitalists like Suleyman should be granted a monopoly over A.I. and genetic engineering technologies, and why regulators should keep those technologies from basically everyone else—especially “bad actors,” i.e., “terrorists” and “populists.”

Numbered among those from whom these powers should be kept are typical figures in countries such as Poland, China, Russia, Hungary, the Philippines, and Turkey, as well as “populist movements” like QAnon (by which he means Trump voters), the gilets jaunes in France, Bolsonaro in Brazil, and Brexiteers in the U.K. These bad characters are “turning away from the bright twentieth century promise that greater interconnectedness would accelerate the spread of wealth and democracy.” Though some of their grievances may be justified, Suleyman admits, in the end they act out of “social resentment.” One cannot in good conscience allow them to get their hands on this power. Suleyman calls this “containment.”

One of the technologies that must be contained, artificial intelligence, is one in which Suleyman has a massive stake. He wisely brackets out the question of the singularity: that is, a potential catastrophe brought about by the engineered generation of consciousness. Generative A.I. is improving at an exponential rate right now. Mastery of the human genome is accelerating, A.I. is increasingly being deployed in biogenetics, and the cost of utilizing these technologies is dropping precipitously. These fast-moving conditions are sufficient to totally upend our existing social order. Suleyman believes this revolution must be allowed to happen—provided it be done by the right sort of people.

Containment means constructing a regulatory regime that keeps costs unnaturally high, beyond the reach of the vast majority of people. One can sympathize with Suleyman’s fear of DIY genetic cooks whipping up something, or someone, in their basement. We should do all in our power to stop that. But the well-intentioned people whom Suleyman would trust have accidentally done catastrophic damage, too, as we saw when U.S.-supported gain-of-function researchers let a novel coronavirus slip from a lab in Wuhan.

By demanding the right to monopoly power over this new science, as Suleyman does, he necessarily calls forth his own opposition. There are those of us who don’t want him to have this power, either.

He has an answer for us: It’s inevitable. Not because history has some whiggish trajectory in which rationality and technology naturally unfold, but because the incentives are “unstoppable.” It is the “$100 trillion opportunity,” says Suleyman, and someone is going to claim it. If we don’t, China will.

One can appreciate Suleyman’s honesty, but it does not keep him from falling into contradiction. While he abandons some myths of technological inevitability and neutrality, he erects another, namely that of Homo technologicus. This creature shares several features with Homo faber, man the fabricator, the tool user, who builds the human environment. Homo faber makes everything from simple tools to revolutionary machines that turn over civilization. Homo technologicus only does the latter. He is purely revolutionary. 

While Homo faber is potentially every man, only the rare few can plausibly be Homo technologicus. Today, they gravitate to Silicon Valley. The implication, which Suleyman appears not to realize (or perhaps he does), is that everyone else languishing outside of this vaunted circle is less than fully human. We should keep this in mind as men like him seek monopoly power over machines that promise to transform our lives. They will be used for nothing less than the reconfiguration of the human person.

Having admitted that technology is power, Suleyman turns out to be uninterested in the question of whether this particular power—seizing control of the human genome, or overturning the social order via A.I.—is good. It is my assertion that, pace Suleyman, anyone who fails to consider such critical questions about technologies should be among the last to enjoy monopoly power over them, and among the very first to enjoy the strictures of containment.