
Google v. Section 230

The tech giant and its peer companies are too big to feign neutrality.

Supreme Court Hears Cases This Week Involving Google And Twitter With Broad Speech Ramifications For The Internet
(Photo by Drew Angerer/Getty Images)

Left-wing media pundits and Big Tech-funded activists have been sounding the alarm for some time now about a Supreme Court case that could “ruin the Internet.” That case has now finally arrived, with the potential to “bring the tech giants to their knees.” Will Gonzalez v. Google end the Internet as we know it? Will Big Tech finally pay the piper?

Based on oral arguments that took place last week, it seems unlikely. 


But something important did surface from the back and forth between the arguing attorneys and the Supreme Court justices: Large technology platforms such as Google and its YouTube platform have the power not only to organize the world’s information, but to shape it. With their wide reach and complex algorithms, Big Tech platforms can influence the behavior of their millions of users, including voters.

The ultimate question, unlikely to be addressed by the Court in the Gonzalez case, is how far Big Tech monopolies can be allowed to extend their influence over us before they face legal consequences. 

Which brings us to the awful facts of the Gonzalez case. On November 13, 2015, Nohemi Gonzalez, a twenty-three-year-old U.S. citizen studying in Paris, France, was eating at a café when three ISIS terrorists, including Abdelhamid Abaaoud, sprayed bullets into the crowd, shooting and killing Gonzalez. The shooting was part of a broader series of attacks perpetrated by ISIS in Paris that day.

What does any of that have to do with Google? Apparently, ISIS regularly used YouTube for terror recruitment purposes. More than a year and a half before the attack, Abaaoud, one of the attackers who killed Gonzalez, had appeared in an ISIS YouTube video urging jihadi fighters to join ISIS.

The Gonzalez family sued Google under a federal statute that provides a civil remedy to victims of terrorism in suits against any defendant—in this case, Google—that aids and abets terrorism, or provides “material support” to terror groups. The family’s suit alleged that Google was aware that ISIS was using its YouTube platform, that Google’s algorithms had “recommended ISIS videos” like Abaaoud’s to users, and that the service allowed ISIS supporters to communicate about the content of the videos. Even though Google had the ability to effectively block them, the family asserts, the company failed to do so.


Enter Section 230 of the Communications Decency Act, a congressional provision that gives near-complete civil immunity to tech companies such as Google. According to Google’s lawyers, Section 230, which was enacted in 1996, should be read as a near-absolute shield against treating them like any other “publisher.” If The American Conservative printed defamation, for instance, it could be held liable. But not so for Google, as long as we are talking about user-generated content that Google allows on its platform.

Here is the tricky part. Should YouTube’s suggestions encouraging users to check out ISIS videos make Google responsible for the damages to the Gonzalez family? A very broad reading of Section 230 would suggest not. But what about the YouTube suggestions to the millions of users that helped spread ISIS’s recruiting scheme?

Previously, the Ninth Circuit Court of Appeals ruled in this case that such video suggestions remained within Section 230’s shield. After all, the two majority judges wrote, the complex Google algorithms that trigger YouTube video suggestions are simply “neutral tools” in the maze of the Big Tech company’s digital universe.

At the Supreme Court, that issue was woven throughout the arguments in some twenty different exchanges. Algorithms, some of the justices noted, just might be skewed in non-neutral ways to cause harm. Justice Barrett questioned Google’s attorney Lisa Blatt about algorithms that categorize information in discriminatory ways. Blatt hedged, but seemed to concede that such practices would be outside of Section 230 protection. 

Overall, the justices seemed united chiefly in their confusion about how to approach these issues. Much of that confusion stemmed from Section 230 itself, which was passed by Congress almost 30 years ago, an eternity in terms of today’s technological advancement. Back then, there was no Google, Twitter, YouTube, or Facebook, and the average American surfed the internet only about thirty minutes a month. The internet was unrecognizable.

Now, we are in a bold new world, and are not privy to the internal coding decisions reflected in Google and YouTube’s algorithms. But Justice Gorsuch had it right when he noted, “I'm not even sure any algorithm really is neutral.” Computer scholars agree; algorithms are written by humans, and all humans have biases.

Artificial intelligence and machine learning, the latest entries in the algorithm controversy, also mimic the social and political biases of their coders. Supposed AI wunderkind ChatGPT has already exhibited “left-leaning political biases,” refusing to report on the Hunter Biden laptop story in the “style” of the New York Post. The automated program responded to questions from the Post with this remarkable confession: “it is possible that some of the texts that I have been trained on may have a left-leaning bias.” More remarkable is how quickly the AI system admitted the bias of its programmers. It took an appearance before congressional committees before Mark Zuckerberg and Jack Dorsey were willing to make similar confessions about their platforms.

But even with all the controversy over algorithmic bias, the American economy remains inexorably tied to technology. When Google’s lawyer Lisa Blatt was questioned by Justice Alito regarding whether her tech clients would “collapse” if they had to face civil liability for permitting defamatory content on their platforms, she replied, “Every other website might … because they're not as big as Google.” 

Blatt’s response was a significant concession that the largest companies are more than capable of surviving exposure to civil liability. It follows, therefore, that the market-dominant giants, the “big” ones such as Google (and its YouTube), Meta (Facebook and Instagram), Amazon, Apple, and Microsoft, would not be disabled if Congress were to amend Section 230 in ways that would impose stricter forms of liability on them but not on smaller tech companies. 

American Principles Project has advocated just that as a legislative approach, focusing on the market-dominant giants who use monopoly power to exercise viewpoint-biased control over citizen speech, while protecting vibrant growth among start-ups and mid-level competitive hopefuls. 

It’s possible that the Supreme Court will follow Justice Gorsuch, who noted that Gonzalez v. Google could be remanded to the lower courts to reconsider its flawed neutral-algorithms assumption. Regardless, the issue still remains, as always, in the lap of Congress, which created Section 230 in the first place. Instead of waiting for the Supreme Court to legislate via judicial fiat, perhaps the People’s House should act.

