We seem to be in the midst of an awakening, although some are calling it a backlash. Over the last several months, a new awareness has spread of the ethical and political consequences of technology. This awakening of interest stems in part from the belated realization that Facebook was weaponized during the 2016 presidential campaign, with precisely targeted disinformation leading some to claim that the “informational underpinnings of democracy have eroded.” Seemingly trivial, even frivolous tools of communication now appear to be burdened with political and ethical consequences that can no longer be ignored.
But it isn’t just politics driving renewed interest in the ethics of technology. Warnings have also been sounded about the exploitation of user data, the bias of algorithms, the ethical costs of pervasive AI and emerging robotics, and the implicit values of autonomous machines. Still others worry about the recklessness of technology firms that seem oblivious or else indifferent to the ethical consequences of the technologies they create. Most recently, a spate of former executives, designers, and investors in social media companies have made startling revelations about what many already suspected: digital media is often designed to get its users addicted.
This awakening is a welcome development, though whether it will amount to a sustained reality or a momentary fit of frustration remains to be seen. But even when the ethical dimensions of technology are recognized, it can be hard to know exactly what to do in response.
We ordinarily assume that technology is fundamentally neutral and that all that is of ethical consequence is how those neutral tools are put to use by moral agents. There is a certain commonsense plausibility to this attitude, and it is not so much wrong as it is inadequate. It is inadequate because it does not account for how a tool’s design, a network’s architecture, or a device’s affordances, to give a few examples, create ethical predispositions. These technologies induce and tempt; they enter into the circuit of action, habit, virtue, vice, and character; they frame our perception of the world. Taken together, this means that independent of the particular uses to which they are put, technologies have a moral or ethical bent to them, and, insofar as they mediate relationships among individuals and their relationships to a shared world, a political bent as well. This bent is sometimes the result of conscious and nefarious design. Sometimes it is an unintended consequence of a design process that was merely focused on functionality and oblivious to ethical and political impacts. The bent is not always obviously good or bad, we should add, but it is there, exerting its often unperceived influence.
The awakening to technology’s ethical dimension is, therefore, a helpful first step. The second step involves determining what to do about it. As it turns out, this may prove even more challenging than arriving at the initial realization. Indeed, acknowledging technology’s ethical ramifications often only serves to deepen our appreciation of just how morally significant our technologies can be without leaving us any wiser about how we ought to relate to them. Some of our best critics of technology—Jacques Ellul, for example, or Neil Postman—have been wrongly accused of a thoroughgoing pessimism. Actually, they just reckoned honestly with the scope of the problem, which is a necessary starting place to find a way forward. In any case, it is undoubtedly true that wrangling with technology’s ethical consequences is a complicated business.
Consider one recent flashpoint. Last November, Cathy O’Neil, a data scientist and author of the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, published an op-ed in the New York Times calling on academics to “step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives.” Academics, O’Neil charged, were “asleep at the wheel.” Not surprisingly, numerous scholars in a wide variety of disciplines quickly took to social media to protest that they were, in fact, doing the very work that O’Neil was calling them to do. “We are awake, but not at the wheel,” went the common rejoinder.
Indeed, for well over a hundred years, a tradition of trenchant criticism has focused critical attention on the ethical and political consequences of modern technology. But this tradition, prescient though it now appears, hasn’t succeeded in changing the way we relate to technology as individuals or as a society. Those who are alert to technology’s political and ethical dimensions have not, in other words, been the people “at the wheel.” They have not been among the well-positioned executives, legislators, lobbyists, administrators, and culture industry personalities who tend to direct society’s normative currents.
This admission, however, is only part of the picture. It is also the case that technology as it is now structured is stubbornly resistant to ethical and political critique. One important reason for this is often overlooked: when there is a desire to give direction to a given technology, it is difficult to know where exactly to focus one’s efforts; in fact, it may sometimes be impossible to determine. Moreover, if we’re asking, “What can I do?,” then we’re likely not going to get very far at all. This is because so much of what we think of as technology is not a series of discrete tools but a system of integrated technologies that necessarily entangle us in complex social networks. Under these conditions, moral agency is, for better or for worse, distributed among the artifacts, the system within which the artifacts function, the other people to whom we are related within this system, and, finally, the individual himself.
Also at issue is the relationship between modern individualism and the popular understanding of technology as an ethically neutral tool. Modern individualism, particularly in its Enlightenment formulations, is characterized by an insistence on autonomy, rationality, and moral self-determination. This view of the self requires us to assume the neutrality of technology. If technology is not neutral, then we cannot aspire to be wholly autonomous and self-determining. If we do insist on the Enlightenment version of individualism, then we place the whole burden of coping with technology’s ethical ramifications on ourselves alone. This explains why talk about the ethics of technology tends to get such limited traction. We’re dealing with realities for which the individual is not an effective locus of resistance.
In The Techno-Human Condition, Braden Allenby and Daniel Sarewitz develop a three-level taxonomy of technology. Level I is technology considered as a straightforward means to accomplish a clear goal, as a matter of immediate effectiveness. Allenby and Sarewitz offer the airliner as one example. Want to get from New York to Seattle? The plane will do the trick admirably. Level II is technology considered as a matter of systemic complexity. If the airliner is a Level I technology, then the whole air transportation system is Level II. While the plane, taken by itself, is a safe, efficient, and reliable way of getting from here to there, the air transportation system is often endlessly frustrating and inefficient. Finally, Level III is technology considered as an Earth system, one of global consequence that is fundamentally inscrutable and unpredictable. The example here is the whole nexus of cultural developments connected to the advent of flight.
When we think about technology, Allenby and Sarewitz suggest, we’re almost always stuck in Level I thinking, at which we are allowed to imagine that personal choices are sufficient for the ethical task before us. We ordinarily fail to account for the Level II systems; much less do we think about Level III matters. But if our renewed interest in the ethics of technology is to bear any enduring fruit, then we need to learn to think along these lines. While there are many important choices that we can and must make regarding our personal use of technology, we also need to think in corporate and institutional terms, as members of moral and political communities. This may be the only way to effectively face the challenges posed by contemporary technology. Indeed, it may only be possible to maintain our individual commitments in the company of others and within communal structures that empower us and free us to realize those commitments.
At the very least it should be clear that we can no longer think about our moral and political lives without thinking about technology. But as smartphones draw us in and Facebook subsumes our social networks, this necessary thought can only be the beginning.