
The Tech Giants Must Be Stopped

They destroy jobs, distort markets, and trample on liberties. It's time we directed technological change rather than letting it direct us.

Editor’s Note: Bill Nitze, though a friend of TAC, is no conservative. Here, the lawyer and technology entrepreneur implores conservatives to join liberals and others who believe the emergence of huge tech “platform” companies poses a threat to individual liberty that must be curtailed. But he goes further, describing an onslaught of technology that almost surely will transform the human experience. Many conservatives will recoil at some of his solutions, but the challenge is real and must be confronted by all of mankind.

Conservatives constantly combat “big government.” They oppose regulations and other governmental initiatives that erode individual liberty and undermine the “American way of life” through bureaucratic intrusion. This conservative resistance to the growing power of governmental departments and agencies is understandable. But there is a greater threat to individual liberty and autonomy than the one posed by big government. Conservatives should take note and join the fight.

The threat comes from the growing power of leading technology companies such as Amazon, Apple, Facebook, Google, and Microsoft. These companies destroy jobs through automation. They manipulate people by using data about their purchasing patterns and other aspects of their lives to maximize sales and profits. They produce social media and virtual reality products that are addictive and adversely affect child development and individual autonomy. They also seem bent on building a virtual economy whose intelligence and other capabilities exceed our own.

These companies already dominate their markets. Google controls nearly 90 percent of search advertising, Facebook almost 80 percent of mobile social traffic, and Amazon about 75 percent of e-book sales. This spectacular success has created unprecedented wealth and established U.S. leadership in the technologies of the future. But recent misuse of their platforms to influence the 2016 U.S. elections, along with broader concerns over data privacy, control, and revenue sharing, has led some to take a more critical view of these big tech companies. John McCain, the Arizona Republican senator, has joined two Democratic senators, Amy Klobuchar of Minnesota and Mark Warner of Virginia, in sponsoring a bill called the Honest Ads Act, which would subject online political advertising to the same disclosure rules that apply to ads on television, in print, and on radio.

Meanwhile, legal scholars such as Zephyr Teachout, Lina Khan, and Michael Shapiro have urged Congress to examine the practices of big tech companies in mergers and acquisitions to determine whether they violate U.S. antitrust laws. In September Klobuchar, the ranking Democrat on the Judiciary Subcommittee on Antitrust, Competition Policy and Consumer Rights, introduced a bill called the Merger Enforcement Improvement Act, designed to give antitrust enforcers more information on the effects of mergers.

Modern conservatism generally has regarded bureaucratic expertise and the use of governmental regulation as antithetical to individual liberty and autonomy. Federal regulation underpinned by scientific expertise is at the core of the modern administrative state founded by Theodore Roosevelt and Woodrow Wilson at the beginning of the last century to check the rising power of railroads and industrial combines—and also to check their financial enablers such as banker J.P. Morgan. FDR’s New Deal further expanded the scope of the administrative state and the role of scientific experts in regulating the economy in response to the Great Depression. Lyndon Johnson’s Great Society and the environmental laws passed under Richard Nixon continued the process. This great expansion of the state’s economic role was motivated by a desire to protect the public’s interest in greater economic opportunity, better working conditions, a social safety net, public health and safety, protection of nature, and a clean environment.

In the 1970s, Watergate and the scourge of economic “stagflation” eroded public trust in government and triggered a partial rollback of the administrative state, beginning with deregulation of the transport sector under Jimmy Carter and continuing with the rise of neoliberal orthodoxy under Ronald Reagan. Inflation was largely tamed, and economic growth rebounded under Reagan and (following a pause under George H. W. Bush) Bill Clinton. This economic success was rewarded at the ballot box. But under the surface of the reigning neoliberal consensus, the loss of manufacturing jobs, increasing income inequality, and the hollowing out of Middle America generated more and more political anger. When combined with a growing perception among non-college-educated whites that the country’s elites were collecting all the chips while playing identity politics at their expense, these trends set the stage for the election of Donald Trump and the current crisis of conservatism.

♦♦♦

One frequently overlooked aspect of the federal government’s activities during the period of growing governmental regulation was vigorous enforcement of U.S. antitrust laws. Antitrust enforcement, which enjoyed broad support on both the right and left in the early decades of the 20th century, prevented firms with greater market power from acquiring or driving out of business smaller manufacturers, retailers, service providers, and banks. This was designed to ensure a relatively broad distribution of economic activity outside of big urban centers and preserve a direct connection with, and responsiveness to, the members of the communities served by those local factories and firms. But since the 1970s these local factories and firms have experienced an accelerating decline due to consolidation, technological change, and globalization. This in turn has led to an incalculable loss of social capital. All this has contributed to the sad state of our politics today. More vigorous antitrust enforcement over the last 40 years would not have reversed these trends, but it would have greatly cushioned their impact.

The historical success of antitrust enforcement offers an important lesson for conservatives. The question should not be whether a particular course of action is taken by government or the private sector, but whether that action achieves constitutionally acceptable goals while preserving or even enhancing individual liberty and opportunity. Conservatives are rightly skeptical about giving too much power to government bureaucrats to regulate the private sector without sufficient public accountability or oversight by elected officials. But they should be equally skeptical about the tendency of individuals and businesses with unprecedented market power to pursue their own narrow interests in a manner that frustrates achievement of broader public goals and undermines individual liberty, opportunity, and quality of life. It is the latter risk that Adam Smith had in mind when he railed in The Wealth of Nations (1776) against the tendency of businessmen to restrict competition and fix prices, and that stirred F.A. Hayek, in The Road to Serfdom (1944), to support government regulations protecting human health, safety, and the environment.

What’s needed is a synthesis in thinking that employs antitrust laws in ways that minimize the use of detailed top-down regulations, keep decision making at the lowest practical level of government, maximize the use of economic incentives, and enhance competition to reduce cost and encourage innovation. The aim should be creation of the transparency and public trust needed to reap the huge benefits that information technologies can provide while minimizing their downsides. Before developing such a framework, however, public and private sector stakeholders should understand how the technology giants—particularly Amazon, Apple, Google, Facebook, and Microsoft—are transforming the U.S. economy and society.

These companies enjoy two advantages that allow them to dominate their markets. First, they are accumulating mega-data about their customers’ lives on a scale that no competitor can possibly match. Every time someone orders a product on Amazon, uses an app on an iPhone, or posts on Facebook, he or she is providing reams of personal data. When aggregated and analyzed through increasingly sophisticated algorithms, these vast stores of information give the company accumulating them the ability to influence our behavior in ways that we may not even be aware of.

Second, the technology companies also use their highly valued stock to acquire smaller companies that have developed competing or complementary technologies—Google’s acquisition of DeepMind, for example, and Facebook’s acquisition of Instagram. Continued growth and the prospect of vast new markets pump up the stock price, which in turn generates the wherewithal for these companies to acquire emerging competitors, thereby perpetuating the cycle.

But these companies’ influence on the U.S. economy goes far beyond dominance of their particular markets. In “Managing Our Hub Economy,” published in the September/October 2017 issue of the Harvard Business Review, Marco Iansiti and Karim R. Lakhani write that the digital superpowers are restructuring the U.S. economic landscape into a “hub” economy. They don’t compete in a traditional fashion—seeking to increase market share in a particular industry by improving products or reducing costs. Rather, the operators of hub platforms seek to leverage the network-based assets developed in one setting in order to enter other industrial sectors. This seems to be leading us to what the authors call “a winner-take-all world.”

For example, Google’s Android and related technologies form “competitive bottlenecks” through control over billions of mobile consumers whom other product and service providers want to reach. The hub companies aim to transform industries’ competitive structures from product-driven to network-driven, expanding their own reach and influence far beyond any given industry and into all segments of society. This transformation then enables them to extract network-based rents from other companies across many industries without fear of the kind of competition that normally emerges in traditional industries. Consider, as another example, Amazon’s plan, following its acquisition of Whole Foods, to use what is essentially predatory pricing, underwritten by its vast presence in retailing, to restructure the grocery industry.

This transformation to a “hub” economy is helping to create an intelligence that is external to humans and housed in the virtual economy. In an article in the October 2017 McKinsey Quarterly entitled “Where is Technology Taking the Economy?” W. Brian Arthur writes that digital technologies have created an autonomous “second economy” whose intelligence resides in algorithms and machines rather than in human beings. He points out that over the last ten years the combination of intelligent sensors and algorithms has given digital technology the ability to make associations and thereby sense a situation and take appropriate action without human intervention. Technology that prevents vehicle collisions without driver intervention is an example of such a combination.

A further and potentially dangerous step towards a virtual economy is the creation of artificial systems that learn on their own. Last year, researchers at Facebook’s Artificial Intelligence Research unit built chatbots meant to learn how to negotiate by mimicking human trading and bartering. But when the social network paired two of the chatbots, named Alice and Bob, to trade against each other, they began to communicate in their own bizarre fashion. Because the Facebook team had assigned no reward for sticking to English, the chatbots quickly developed their own code words for deals. After shutting down the incomprehensible conversation between the bots, Facebook’s researchers declared victory by describing the project as an important step towards “creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant.” The episode illustrates the tech industry’s growing “black box” problem: software that increasingly operates in ways its creators cannot understand or control.

Arthur goes on to conclude that the “re-architecting” of the economy described by Iansiti and Lakhani is creating a virtual and self-sustaining external economy that will require less and less human participation. The resulting increase in “bounty” through increased productivity will lead us to the “Keynes point,” where the economy, both physical and virtual, produces enough for all of us. The accompanying decrease in “spread” resulting from a permanent loss of jobs, however, will greatly exacerbate existing wealth and income disparities absent redistribution—changing who gets what and how they get it. To limit the resulting increase in economic and social inequality and the attendant stress on our political institutions, the United States will need to induce the big tech companies to (1) open up and diversify the hub by encouraging and supporting the creation of subsidiary networks, (2) give more people the skills and connectivity necessary to find productive work in the hub economy, and (3) share the wealth.

Opening up and diversifying the hub economy by encouraging the creation of subsidiary networks will require both top-down pressure from government and bottom-up initiatives from the technology companies. The White House, Department of Justice, Federal Trade Commission, other concerned agencies, and Congress will have to work together to develop new antitrust laws and regulations that create the space for subsidiary networks to emerge without forgoing the benefits of global platforms or discouraging innovation. One such measure would be to require the large technology companies to deal with their customers and competitors on more equal terms, just as AT&T did with new telecommunications companies after its court-ordered break-up, or as the railroads did with oil refiners and other shippers after the creation of the Interstate Commerce Commission. Such measures might even include requiring the technology companies to share proprietary code with customers or competitors.

Also needed is a model contract or set of rules for data sharing that does not discourage data accumulation and analysis, but gives individuals greater control of how, when, and with whom their specific data is shared. Creating such rules will not be easy. It will require a degree of government coercion since the tech companies will not voluntarily share the tremendous rents that they extract from accumulating massive amounts of data with the individuals who are the source of that data. But the protection of individual rights to data must be balanced against the benefits of accessing and analyzing data from large populations in real time. Rules that respect this balance cannot be imposed by fiat from above, but should evolve from an iterative process involving government officials, outside experts, interested members of the public and the companies themselves.

Of course, the technology companies should be encouraged to participate with the federal government, state and local governments, colleges, and community groups in shaping those laws and regulations. They should also be incentivized to address the second goal, referred to above, of giving more people the skills and connectivity necessary to find productive work in the hub economy. Specifically, they need to invest some of their vast resources in programs to provide new educational and training opportunities for people living in smaller towns, cities, and rural areas, many of whom voted for President Trump, and to give them better access to jobs in today’s high-tech economy.

The companies could connect isolated communities and give laptops to all schoolchildren, as Uruguay has done with considerable success. They should help fund school efforts in these communities to hire more and better-qualified teachers in math, basic science, and other STEM subjects. They should create hands-on training apprenticeships in partnership with local employers that would prepare people for well-paying jobs near where they live. Finally, the big tech companies, working with local governments, community colleges, and other partners, should set up digital employment exchanges on which companies could post job openings with specific requirements for each job and provide opportunities for job seekers to be interviewed online.

Unfortunately, however successful these actions may be in creating jobs and improving quality of life in the near term, they will not sufficiently address the longer-term challenge posed by AI-related technologies. Many people voted for President Trump not only because they were experiencing downward economic mobility due to the loss of well-paid manufacturing and service jobs, but also because they felt marginalized and devalued by social forces beyond their control—and beyond the control of government. Both of these trends are about to accelerate.

The economic and social anxiety now experienced by members of the lower middle and working classes without college educations will increasingly be felt by lawyers, doctors, accountants, investment advisers, and other educated professionals as artificial systems become more and more capable of performing their professional tasks. It is difficult to predict how many new jobs will be created by new technologies and applications, but it is highly unlikely that they will equal the number of jobs destroyed, given the rapidly improving capabilities of robots and other non-human systems. Even one category of employment expected to produce millions of new jobs over the next 20 to 30 years—caring for the elderly—will eventually shrink as robots become more and more capable of meeting older people’s physical and emotional needs.

This brings us to the third goal: sharing the wealth. What may be needed is a new kind of social contract under which a substantial portion of the wealth generated by robots and other non-human systems is redistributed. One way of achieving this would be to increase marginal tax rates on high incomes and redistribute the proceeds. Another, suggested by Microsoft founder Bill Gates, would be to tax robots in the same way that we tax human workers. Still another would be to give members of the public a carried equity interest in the companies owning and operating the robots. Finally, there is the lingering idea (not affordable in the current economic environment) of giving everybody a guaranteed minimum income. Whatever methods are chosen, a substantial portion of the population will need to find something other than paid work in their pursuit of lives with dignity and purpose. Finding solutions to the redistribution challenge that do not undermine human liberty and autonomy may be the greatest intellectual challenge facing conservatism over the next 30 years.

♦♦♦

An even more fundamental looming question is how humans should relate to machines whose capabilities equal, or even exceed, our own. If we can develop such machines, we will. Humankind has never in the past refrained from developing a technology because of potential dangers posed to the species, and it is unlikely that we will do so in the future. As that time approaches, we will have to choose whether to give the machines the same dignity and status that we give humans or to seek to control them as servants or instruments for our own gratification. The former course will require a degree of self-confidence, openness, and risk-taking that is uncharacteristic of many conservatives, who often seem to define conservatism as rejection of the “other.” The latter course will inevitably lead to a conflict that is unlikely to end well for humans, but it isn’t clear that the former course of openness and tolerance will be requited with machine benignity.

Even if we avoid such a conflict and find a way of living in harmony with machines whose replication and destiny we can no longer control, we will face the challenge of preserving our own dignity and autonomy in the face of non-human entities whose intelligence and capabilities will eventually exceed our current levels. It is important not to exaggerate the capabilities of current information technology. We will need many years of research and development before we understand the incredibly complex and intricate workings of the human brain. But we will get there. Meeting this challenge will require us to be both accepting of continuous and potentially revolutionary change and vigilant in minimizing the risks to individual autonomy and liberty associated with that change.

We can meet that challenge only by co-evolving with technology through enhancement of our own physical and mental capacities. Even with existing technology we can anticipate major improvements in quality and length of life by overcoming many chronic diseases, such as cancers, and genetic disorders, such as sickle cell anemia. We are already improving our mental capabilities through greater connectivity with each other and the cloud and are developing therapeutic techniques such as deep brain stimulation based on probes targeted at specific neurons. Improving the internal capabilities of the human brain is not yet feasible and remains controversial, but it probably will become so in the not-too-distant future. What’s critical is that, as humans and their creations learn from each other, we co-evolve towards greater appreciation, justice, and wisdom rather than the reverse.

Development of autonomous weapons illustrates the importance of such co-evolution. The U.S. Defense Department has said that it will keep humans “in the loop” in operating unmanned weapons systems such as armed robots or drones for the foreseeable future, particularly with respect to “kill” decisions (we can only hope that other countries and non-state actors will do likewise). As the capabilities of these unmanned systems improve, however, human operators will rely on them more and more in making decisions. These systems may thus reach a point where they make “better” and less error-prone decisions than their human operators. The trick will be to ensure that both operators and unmanned systems improve their ethical decision-making capabilities in tandem with their technical capabilities.

The above discussion, though necessarily speculative, leads to one firm conclusion. Unprecedented technological change is inevitable. Only by being prepared to take collective action to ensure that its benefits are equitably shared and to curb its misuse can we guide that change in directions favorable to us as individuals and to humanity as a whole. We can start by taking on the platform tech giants that are transforming the global economy.

William A. Nitze is a technology entrepreneur and former EPA official who lives in Washington, D.C.
