
The Internet’s Web of Lies

Free speech is an essential right. But what of our online public square, where the truth is regularly deluged by falsehoods?

In 2012, a reporter asked Reddit cofounder Alexis Ohanian what he imagined America’s Founding Fathers would have thought of his company. “A bastion of free speech on the World Wide Web?” Ohanian replied. “I would love to imagine that Common Sense would have been a self-post on Reddit, by Thomas Paine, or actually a redditor named T_Paine.” A couple of years ago, paeans of this kind were common in Silicon Valley. Open expression wasn’t merely a catchphrase there; it was the cornerstone of the industry. “Information wants to be free,” internet pioneer Stewart Brand declared at the world’s first Hackers Conference in 1984.

Recently, though, that commitment has begun to wobble. Brexit, Pizzagate, Gamergate, Russian election meddling, measles outbreaks across the First World, doxing, trolling, child porn, revenge porn, the live streaming of a mass murder in New Zealand, and an unremitting torrent of vitriol have shaken the tech giants’ faith in open expression. Facebook, Twitter, and YouTube now employ thousands of moderators around the world to censor content, and have, of late, shown an increased willingness to ban people from their platforms. In August 2018, the conspiracy theorist Alex Jones was banned from a bevy of platforms—Facebook, Apple, and YouTube among them—ostensibly for promoting the canard that the 2012 Sandy Hook Elementary School shooting was a hoax. Since then, others have been banned as well, including Milo Yiannopoulos, Laura Loomer, and Louis Farrakhan. Is Silicon Valley losing its stomach for open expression, or is this simply what happens when free speech absolutism gets mugged by modern, digital reality?

One could, of course, argue that little has changed. The First Amendment to the Constitution protects Americans from government censorship. But private companies like Twitter, Facebook, and YouTube have the right to post or prohibit what they please on their own platforms, just as scores of other companies have before them. No one tells Penguin Random House that it has to publish every novel that comes in over the transom. It’s a private company, not a public service. On the other hand, as Supreme Court Justice Anthony Kennedy pointed out in a 2017 ruling, the internet has become “the modern public square.” Facebook has 200 million monthly users in the United States alone, or about three-fifths of the U.S. population. In a single minute, the site receives 500,000 new comments, 293,000 new statuses, and 450,000 new photos. In the same amount of time, 400 hours of video are uploaded to YouTube, and 300,000 tweets are posted to Twitter. In a sense, these sites are more like public utilities than private corporations, providing essential services to millions of people. A political candidate would have a hard time staying competitive if Twitter or Facebook decided, for their own fickle or fiendish reasons, to ban her from their sites. “I’m confident that Reddit could sway elections,” Reddit CEO Steve Huffman admitted last year. He added, “We wouldn’t do it, of course.”

That’s not particularly comforting. Neither is a recent New York Times report detailing Facebook’s censorship practices around the world. Working from a massive, byzantine rulebook, moderators, many of whom are not fluent in the languages of the regions they are monitoring, have just seconds to determine whether a post (a picture, a video, a news story) is fit for the eyes of Facebook users. Not surprisingly, mistakes have been made: big ones. A fundraising drive for volcano victims in Indonesia was banned because a cofounder of the drive was on a list of offensive groups, while a paperwork error allowed a pro-genocide group in Myanmar to stay on the platform for months—months in which thousands of Rohingya civilians were butchered. At the same time, the social media giants have been willing to bend to the demands of governments whose markets they covet—blocking posts about freeing Kashmir in India or ones that criticize Atatürk in Turkey—in effect becoming bespoke censors for repressive regimes around the globe. Are these the people you’d trust to control what you see and hear online?

If not, then who, exactly, would you trust? Even if you did manage to find an impartial arbiter, would you want him/her/it to restrict what you’re able to read? Bad ideas, no less than good ones, play a vital role in human discourse, if only to be openly discredited. So argued John Stuart Mill, the 19th-century English philosopher, whose book On Liberty remains perhaps the best defense of free speech ever mounted. Censorship is wrong, he maintained, because “[i]f the opinion is right, they [the readers] are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.” Alex Jones’s assertion that the Sandy Hook massacre was a government hoax is preposterous. It takes an extraordinary amount of credulity to entertain such a notion for more than half a minute. But refusing to entertain it at all or to hear it bandied about by others encourages—nay, enforces—a closed-mindedness that is itself detrimental to human flourishing. After all, conspiracy theories, on occasion, do turn out to be true. Members of the Nixon administration, including the president himself, really did conspire to conceal acts of political espionage against their opponents, a plot so foolhardy that many senior reporters refused to take it seriously for months. The only way for human knowledge to advance is for people to be willing to question their most cherished assumptions.

Yet not all questions are created equal: honest ones attempt to find the truth, while dishonest ones, of the type that Alex Jones specializes in, attempt to tie it into knots. The trouble is that the human mind isn’t very good at distinguishing between the two. A study conducted by data scientists at the Massachusetts Institute of Technology that examined the life cycles of 126,000 Twitter “rumor cascades”—stories that took off before they could be verified as true or false—found that fake stories spread six times faster than real ones. “Falsehood,” the authors of the study explained, “diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.” Mill was not blind to such dangers. “The dictum that truth always triumphs over persecution,” he explained, “is one of those pleasant falsehoods which men repeat after one another till they pass into commonplaces, but which all experience refutes.” He simply believed that any filter designed to block bad ideas would invariably hinder the progress of plenty of good ones, too.

♦♦♦

Of course, what Mill failed to anticipate is that good and bad ideas alike might, one day, be generated by machines. Such, for better and worse, is the world we live in now. In August 2017, a woman named Angee Dixson joined Twitter. Attractive, dark-haired, and proudly pious, Dixson was an outspoken supporter of President Donald Trump, voicing her admiration for the president up to 90 times a day. When Trump was criticized for equivocating about the death of a counter-protester at a white supremacist rally in Charlottesville, Virginia, Dixson leapt to his defense: “Dems and Media Continue to IGNORE BLM [Black Lives Matter] and Antifa [anti-fascist] Violence in Charlottesville.” Her tweets, in their syntax and their sentiments, were indistinguishable from any number of other posts by Republican voters around the country. The catch, in Dixson’s case, was that she wasn’t a Republican voter. In fact, she wasn’t a human being at all. She was a bot, a software program that masquerades as a person online, sending out statements and answering replies just as an actual user would. The tipoff was her use of URL “shorteners”—essentially abbreviated web addresses. If the Russian programmers who designed her had been a bit cannier, though, it’s conceivable that no one would have caught on, and Angee Dixson would still be tweeting today. In a way, she still is. It’s estimated that roughly 15 percent of Twitter’s user base is fake.
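What such a tipoff looks like in practice is easy to sketch. Below is a minimal illustration, in Python, of the kind of heuristic that can flag accounts leaning heavily on shortened links. The domain list, sample tweets, and threshold are invented for the example; none of it is drawn from the actual Dixson investigation.

```python
import re

# Common link-shortening services (an assumed, non-exhaustive list).
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "ow.ly", "goo.gl", "is.gd"}

# Capture the host portion of any http(s) URL in a tweet's text.
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def shortener_ratio(tweets):
    """Fraction of link-bearing tweets whose links go through a shortener."""
    linked = shortened = 0
    for text in tweets:
        hosts = [h.lower().removeprefix("www.") for h in URL_PATTERN.findall(text)]
        if not hosts:
            continue
        linked += 1
        if any(h in SHORTENER_DOMAINS for h in hosts):
            shortened += 1
    return shortened / linked if linked else 0.0

# Invented sample tweets, echoing the article's example only in spirit.
sample = [
    "Dems and Media Continue to IGNORE BLM and Antifa Violence http://bit.ly/xyz123",
    "So proud of our president today! http://ow.ly/abc987",
    "Lovely long read in the paper this morning https://example.com/story",
]

if shortener_ratio(sample) > 0.5:  # threshold chosen arbitrarily for the sketch
    print("Heavy reliance on shortened links; account merits a closer look.")
```

A signal this crude is trivial for a cannier programmer to erase, which is precisely the article’s point: the detectable bots are only the careless ones.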

Do the old free speech norms apply when robots are doing the talking? John Stuart Mill championed a free market of ideas at least in part because he felt that, however appealing lies may be, humans instinctively (if often clumsily) seek the truth, the way moths seek the light of a candle. “The real advantage which truth has,” he explained, “[is] that when an opinion is true, it may be extinguished once, twice, or many times, but in the course of ages there will generally be found persons to rediscover it.” Software bots, though, don’t seek the truth. They seek to do whatever their programmers have designed them to do, whether it be promoting products or peddling propaganda. Cyberspace is already swamped with bot-generated disinformation, and it’s only going to get worse. “Today, it remains possible for a savvy internet user to distinguish ‘real’ people from automated botnets,” authors P.W. Singer and Emerson T. Brooking write in their book LikeWar: The Weaponization of Social Media. “Soon enough, even this uncertain state of affairs may be recalled fondly as the ‘good old days’—the last time it was possible to have some confidence that another social media user was a flesh-and-blood human being.”

That, to say the least, is concerning. Conflicts are already leaking out of the internet into the real world. In Chicago today, gang wars often begin online, many times between factions that don’t share contiguous territory. “What started as a provocation online winds up with someone getting drilled in real life,” writes journalist Ben Austen. Alarmingly, nation-states are doing the same thing. On Christmas Eve 2016, Pakistani defense minister Khawaja Asif read an internet report stating that Israel was threatening to attack his country if it intervened in the Syrian civil war. Asif responded with his own threat, taking to Twitter to declare Pakistan’s willingness to use nuclear weapons on Israel in the event of an attack. As it turned out, Asif had been duped. Israel hadn’t issued a warning to Pakistan. Fortunately, the Israeli government was quick to correct the record, defusing the situation before it exploded into violence. Imagine if it hadn’t, though, or if Twitter bots, posing as Israeli diplomatic officials, had responded with further hostile tweets. Pakistan and Israel each have a substantial nuclear arsenal. An exchange of warheads between the two nations would, in all likelihood, result in a “nuclear autumn”—an only slightly milder form of nuclear winter—that would kill tens of millions of people, both in the region and around the globe. It’s easy to defend free speech in the abstract, but when fake news disseminated through social media threatens to cause a nuclear holocaust, it gets a lot tougher.

The trouble with the internet-as-public-square analogy invoked by Justice Anthony Kennedy is that in an actual town square the identities of the citizenry are visible for all to see. Among the many things that a public square provides is a forum for the display of personal character. (It’s no coincidence that, for centuries, the public square was the location of choice for feting heroes, pillorying reprobates, and executing criminals.) Not so on the internet. Even the sites (Facebook, Instagram, Tinder) that do attempt to impose some accountability on their users are highly depersonalizing, largely because they deprive users of the social cues—a frown, a smile, a sigh—that help people read their interlocutors’ intentions. This is one reason it’s so difficult to tell the difference between irony and bigotry on the web. The confusion is so pervasive that a term has been coined to describe it, “Poe’s Law,” which states that it’s impossible to parody an extreme opinion online in such a way that someone won’t mistake it for the genuine article. And yet there are times when the anonymity of the internet is a godsend. Political dissidents, corporate whistleblowers, and sexual minorities around the world all depend on social media to get their stories out, while keeping their names and faces hidden. Consider the consequences if we were to force gay men in Nigeria, atheists in Pakistan, or critics of Mohammad bin Salman in Saudi Arabia to walk openly through their digital town squares.

These are issues that the news industry has been grappling with for the better part of two decades. Journalism and social media have commingled so much lately that many people have begun asking whether the social media giants need to be reclassified as publishers rather than platforms. It’s an appealing idea. For years, the sites have been having it both ways, helping themselves to all the benefits of the publishing world—social influence, editorial control, and enormous ad revenue—while enjoying the legal protections that shield platforms from prosecution. Their defense, that they’re mere conduits for conversation, like the phone company, is absurd, and not only because they employ so many censors. Just as important, their algorithms give them a level of editorial control that newspaper magnates of old could only have dreamed of. That’s what makes them so appealing to advertisers: they can personalize the information flow to each and every user. Facebook even ran an experiment in 2012 in which, to field-test the effectiveness of its business model, it successfully manipulated the emotions of nearly 700,000 users by secretly feeding one group positive news and the other negative news. It was, in a way, a fascinating psychological study, proving that, in the words of the study’s authors, “emotional states can be transferred to others via emotional contagion”—not exactly the type of thing you’d associate with a neutral platform.
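To see how ranking code can function as an editor, consider a deliberately simplified sketch, loosely patterned on the design of the 2012 experiment described above. Nothing here is Facebook’s actual system; the posts, the scores, and the scoring formula are all assumptions made up for illustration.

```python
# An illustrative sketch (NOT Facebook's real ranking system) of how one
# tunable weight lets the same pool of posts read cheerful to one user
# and gloomy to another. All data and formulas below are invented.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float  # predicted clicks/likes, on an assumed 0-1 scale
    sentiment: float   # -1.0 (negative) to +1.0 (positive), assumed given

def rank_feed(posts, valence_weight):
    """Order posts by engagement, nudged toward a target emotional tone.

    valence_weight > 0 favors positive posts; < 0 favors negative ones.
    """
    return sorted(posts,
                  key=lambda p: p.engagement + valence_weight * p.sentiment,
                  reverse=True)

pool = [
    Post("Local team wins championship!", 0.6, +0.9),
    Post("Factory closure puts 400 out of work", 0.7, -0.8),
    Post("New park opens downtown", 0.4, +0.5),
]

group_a = rank_feed(pool, valence_weight=+0.5)  # sees the cheerier feed first
group_b = rank_feed(pool, valence_weight=-0.5)  # sees the gloomier feed first
print(group_a[0].text, "|", group_b[0].text)
```

The unsettling part is how little machinery is required: a single coefficient, invisible to the user, is enough to tilt the emotional weather of a feed.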

♦♦♦

It’s not clear, however, that reclassifying social media companies as publishers is possible, at least in any meaningful, legal sense. Section 230 of the 1996 Communications Decency Act gives service providers a fairly free hand to delete or otherwise monitor content on their platforms without becoming publishers. Litigants who have attempted to argue otherwise have, at least so far, been rebuffed in court. Even if such a change were possible, it would only generate more questions: Which sites are publishers, and which are platforms? (Is Amazon a publisher? Is Google? What about dating sites like Tinder?) What counts as content moderation? (Does it refer only to the editing and removal of content, or does it also include the use of algorithms to rank-order what users see?) And, when it comes to the news, how do we determine who’s a journalist and who’s a source?

This last question may seem like a non sequitur, but in the modern media environment, in which people get so much of their news from Twitter and Facebook, it’s a hot debate—and getting hotter since Julian Assange was arrested in April. For more than a century, reporters in the United States have been given special protections to publish classified documents, so long as they haven’t broken the law to obtain them. Such protections do not extend to government employees or the public at large. This is why Chelsea Manning was jailed for seven years, while the reporters at The Guardian and The New York Times who broke the story she leaked continued to walk the streets freely. But if anyone with a Twitter account can now be a reporter, who gets the right to publish government documents? It would be nice, in theory, to give everyone the speech protections enjoyed by journalists at the Times. But do we want to allow anyone with a Wi-Fi connection—spies, disaffected government workers, Macedonian computer hackers—to publish classified U.S. documents online?

Rather than reclassifying the tech giants, a growing number of people have begun to suggest that they should be broken up, just as their industrial-age forebears (U.S. Steel, Standard Oil, AT&T) were in the last century. Massachusetts Senator Elizabeth Warren is probably the most prominent advocate for such a move, but the idea has been championed by a number of social media critics lately, including Columbia Law School professor Tim Wu, Atlantic staff writer Franklin Foer, and venture capitalist Roger McNamee, who was an early investor in Facebook. In The Curse of Bigness: Antitrust in the New Gilded Age, Wu ably argues that the tech giants have become too big for anyone’s good, squelching competition, raising advertising rates, and cutting the quality of their services. Foer and McNamee, meanwhile, focus on privacy—or, rather, the increasing lack of it online. Though the social media moguls have been vigorous exponents of free speech for years, they’ve taken much longer to come around to the benefits of personal privacy. The very concept is antithetical to their business model, which depends on the exposure of user information—buying habits, political preferences, medical histories, anything and everything advertisers always wanted to know about customers but were afraid to ask—which they happily suck up and sell to the highest bidders. What does this have to do with free expression? In World Without Mind: The Existential Threat of Big Tech, Foer contends that freedom of speech must be preceded by freedom from surveillance:

If we believe we’re being watched, we’re far less likely to let our minds roam toward opinions that require courage or might take us beyond the bounds of acceptable opinion. We begin to bend our opinions to please our observer. Without the private space to think freely, the mind deadens—and then so does the Republic.

Wu, Foer, and McNamee make a strong case. The companies’ size alone is a hazard, allowing them to wield enormous influence in Washington, while shaking down individual states for tax breaks that their competitors can’t hope to receive. For all that, it’s hard to see what breaking them up would do to improve online speech. Sure, prying apart Facebook, Instagram, and WhatsApp—all three of which are currently controlled by Mark Zuckerberg—would make the social media marketplace more robust. It would certainly make it more competitive. But how would it solve the problems of trolls, doxing, and fake news, or settle when to ban people and when to let them speak? In Zucked: Waking Up to the Facebook Catastrophe, McNamee offers a number of small-bore solutions. His suggestion of putting a five-second delay on Facebook’s live stream is a good one. The TV networks have used such a delay for decades to keep obscenities—not to mention horrors like the Christchurch massacre—from being broadcast live over the air. And his notion of “bubble-free” views on Facebook and Google is intriguing. The idea is to reduce filter bubbles by giving users a button to turn off the algorithms that microtarget them on the sites. But these are cosmetic fixes. How can people ever be sure they’re not being microtargeted? Offer them a “bubble-free” button on Facebook, and they may choose to stay in the bubble. Isn’t that what they do already when they switch the channel to MSNBC or Fox News?
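The mechanics of McNamee’s proposed delay are simple to sketch. What follows is a minimal, assumption-laden illustration of a broadcast-style delay buffer: frames wait out a fixed window during which a moderator can kill the feed before anything airs. The class name, the five-second constant, and the frame strings are invented for the example; McNamee proposes the idea, not this implementation.

```python
import time
from collections import deque

DELAY_SECONDS = 5.0  # the window McNamee proposes (assumed value)

class DelayedStream:
    """Hold incoming frames for a fixed delay before letting them air."""

    def __init__(self, delay=DELAY_SECONDS):
        self.delay = delay
        self.buffer = deque()  # (arrival_time, frame) pairs, oldest first
        self.killed = False    # set True to keep anything further from airing

    def ingest(self, frame):
        """Accept a frame from the broadcaster; it does not air yet."""
        if not self.killed:
            self.buffer.append((time.monotonic(), frame))

    def kill(self):
        """Moderator decision: discard everything still inside the window."""
        self.killed = True
        self.buffer.clear()

    def release_due_frames(self):
        """Air only frames that have waited out the full delay."""
        aired = []
        now = time.monotonic()
        while self.buffer and now - self.buffer[0][0] >= self.delay:
            aired.append(self.buffer.popleft()[1])
        return aired

stream = DelayedStream(delay=0.1)   # shortened window so the demo runs quickly
stream.ingest("frame-001")
print(stream.release_due_frames())  # [] -- the frame is still in the window
time.sleep(0.2)
print(stream.release_due_frames())  # ['frame-001'] -- delay elapsed, it airs
```

The design point is that nothing can air before the window expires; whether five seconds gives a human or a classifier enough time to act is, of course, the hard question.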

♦♦♦

In other words, perhaps the trouble isn’t with social media; perhaps it’s with us. Optimists will tell you that this is just what happens when new technologies come along. The worries that critics now voice about Facebook and Twitter were once expressed about videogames, television, movies, radio, and the printing press. At the beginning of the last century, it was yellow journalism that was supposed to be poisoning democracy—spreading lies, pushing nations into war, and allowing media moguls like Hearst and Pulitzer to dial up the animosities of their readers. Now, newspapers are beacons of truth, guardians of democracy, and defenders of the written word, providing the public with the kind of nuanced analysis they can’t get in 280 characters. 

If history teaches us anything, though, it’s that the medium does matter. How we communicate invariably affects what we say, as anyone who has ever tried to have a conversation via text message knows. It’s nearly impossible to imagine the Reformation occurring were it not for the invention of the printing press 70 years before. No one outside of Wittenberg would have ever heard of Martin Luther, not just because his theses wouldn’t have been disseminated but because, without print, there wouldn’t have been a readership for them in the first place. As Andrew Pettegree details in his superb history of printing, The Book in the Renaissance, the invention of movable type didn’t just spread the printed word; it also spread literacy along with it, creating new readers where none existed before. Readership shot up all across Europe, particularly among women who, in previous centuries, had rarely had access to the written word. Though he may have been unusually headstrong and iconoclastic, Luther, in speaking out against the Catholic Church, wasn’t acting alone. Rather, he was taking part in a larger conversation that began before his apostasy and continued after his death—a conversation enabled, in no small part, by the invention of movable type.

Internet enthusiasts, for obvious reasons, love this comparison. It suggests that a new, game-changing media technology was the driver of centuries of social progress: the printing press begat the Reformation, which begat the Scientific Revolution, which begat the Enlightenment, which begat American democracy…and on and on down the ages, each step leading to the next on the ladder of human improvement. What this chronicle elides are all the missteps that occurred along the way. The Reformation was not immediately followed by the Scientific Revolution but by the Counter-Reformation, complete with inquisitions, the Thirty Years’ War, and the forced conversion of tens of thousands of Amerindians. And though print may have democratized speech, it often did the opposite as well, helping to enforce the dictates of tyrants and kings. “Print did as much to perpetuate blatant errors as it did to spread enlightened truth,” the cultural anthropologist Renato Rosaldo writes. “Never had scholars found so many words, images, and diagrams at their fingertips. And never before had things been so confusing with, for instance, Dante’s world view achieving prominent visibility at the same time that Copernican views were making their way into print. Nonsense and truth seemed to move hand in hand with neither made uncomfortable by the presence of the other.” Sound familiar?

The difference this time is that nonsense flows in both directions. On social media, the users are the audience and the performers, the readers and the ones being read about. This is, in part, what makes sites like Facebook, Instagram, and Twitter so addictive. They provide us with the voyeuristic pleasure of watching others, along with the narcissistic pleasure of watching ourselves. Whether we can handle the kick of this digital speedball remains to be seen. One reason the gun massacre in Christchurch, New Zealand, was so disturbing was that it seems to have been a pure, unadulterated product of online speech. The suspected killer, Brenton Tarrant, was not only radicalized on sites like 4chan and 8chan, but actually appears to have performed the massacre for them, live streaming his assault on Facebook and leaving a meme-laced manifesto for his fellow white supremacists to read online. It’s a frightening example of speech both leading to violence and, in the case of Tarrant’s live stream, being itself a kind of performative violence. One suspects that even John Stuart Mill would have hesitated to defend that kind of expression. In one section of his manifesto, Tarrant posed a series of self-directed questions and answers. Where did he research and develop his beliefs? he asked himself. “The internet, of course,” he replied. “You will not find the truth anywhere else.”

Graham Daseler is a film editor, animator, and writer. His articles have appeared in The Times Literary Supplement, The Los Angeles Review of Books, 34th Parallel Magazine, and numerous film periodicals. 
