As the eminent philosopher Alasdair MacIntyre recently said in a lecture given at the University of Notre Dame, “The mind is mindless without the imagination.” And even though his subject was the grammar of morality, his pithy line prompts us to picture a “mindless mind” completely devoid of imagination. Something the exact opposite of ourselves: an anti-mind as alien in its amorality as it is dizzying in its irrationality. A sort of embodied void. And if with our human brains, constructed as they are to find rhythms and project patterns onto the random, and with our human souls, attuned to resonate with transcendent order, we’re unable to envision what an imagination-less non-mind would truly be, we can simply turn to YouTube.
Stories about the horrors of YouTube’s algorithm-generated children’s videos are beginning to make the rounds, and they’re just as frightening as everyone contends. On his blog earlier this month, TAC’s Rod Dreher described these videos as a “science fiction nightmare come true.” Writer and artist James Bridle wrote about them in a long piece on Medium, saying, “Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level.”
So what exactly are these videos? Their main content components are created by bots and/or by humans using algorithms dictated by popular and seemingly anodyne keywords like “Spiderman,” “educational,” “baby,” and “colors.” Many are harmless and nonsensical, but the ones that have drawn the most outrage show familiar children’s characters, ranging from superheroes to cartoon icons like Mickey Mouse, engaging in violent, macabre activities. There’s “Peppa Pig” being tortured at the dentist. There’s the Hulk being attacked by a zombie, and then Hulk-the-zombie biting “Frozen’s” Elsa on the neck, and so on.
If you need convincing with your own eyes, then watch this gem, called “Superheroes BURIED ALIVE Outdoor Playground Finger Family Song NurseryRhyme Education Learning Video.” A highlight: Hulk and Spiderman, sporting gargantuan Kewpie doll heads, watch from a distance while the Ghostface killer from the movie “Scream,” the Joker, Venom, and a maniacal hobo clown dance menacingly around comic and cartoon characters that they’ve buried up to their necks.
Furthermore, algorithms aren’t just used to create the videos; they’re also used for machine learning (read: artificial intelligence, or AI) to game the mathematical formulas that YouTube uses to filter content and push it in front of your kids. Even though YouTube’s sorting software is proprietary and opaque, it’s fairly obvious from the sheer volume of these disturbing videos that YouTube itself isn’t necessarily doing anything wrong. The platform is simply being flooded with content at a level that boggles the mind, much of it created to either skirt or manipulate YouTube’s filters.
Not surprisingly, these grotesqueries are garnering tens of millions of clicks and generating countless advertising dollars for their makers. Parents, trusting that YouTube’s content filter will keep their children from seeing inappropriate material, will turn on a YouTube channel and leave it on autoplay, with one video following another non-stop. It might start off innocently enough, but once the loop is engaged, darker videos begin to seep in, whether or not mom is in the room to see them.
After news of these videos began filtering through the press, YouTube promised to escalate its review policy in order to clamp down on the inappropriate content. But given the scale of the machine it has created, this seems like a daunting, if not unachievable, task.
It’s not entirely restricted to YouTube, either. As parents place ever greater trust in tech companies to filter, organize, and arrange content, those same companies have been relinquishing more and more human oversight to machine-learning software. As Will Oremus writes in Slate, a very important trade-off is made when companies choose ease over more direct control: “Automation brings huge advantages of scale, speed, and price: We now have virtually endless content and information at our fingertips, all organized for us according to (some computer program’s notion of) our personal needs, interests, and tastes. Google, Facebook, Spotify, Amazon, Netflix: All have taken tasks once done by humans (librarians, scrapbookers, DJs, retail clerks, video-store managers—and, let’s not forget, advertising salespeople)—and found ways to do them automatically, instantly, and at close to zero marginal cost. As a result, they’re taking over the world, and making enormous profits in the process.”
Oremus’s disturbing rendering of how an artificial intelligence can create and then promote online content suggests an easy solution: perhaps we could simply tweak the algorithms to secure more desirable results. But this is easier said than done. Platforms like YouTube are massive, and the purpose of a recommendation algorithm is to sift through unimaginable amounts of data using keywords and phrases in order to customize content. Putting a human finger on the scale to tip it in a certain direction might work if not for two things: the presence of bots tipping the scale in the other direction (i.e., in the direction of chaos) and the proprietary opacity of the algorithms themselves. As James Bridle writes, “A huge number of these videos are essentially created by bots and viewed by bots, and even commented on by bots. That is a whole strange world in and of itself.”
But the world gets stranger, and more uncertain, still. What should give us the most pause is the prospect that our children’s thoughts will come to mimic the incoherence of artificial intelligence. The “intelligence” of the algorithm doesn’t have a mind. It has no values and represents no ethos beyond the constant churning out of content. As algorithms and machine learning come to control more and more of our daily lives, from how movies are made to the medical treatment we receive, the role of the human mind in the running of society will become superfluous—a vestigial appendage. And filling the void that it once occupied will come more of the moral and intellectual incoherence that we see in these absurd YouTube videos.
This is our first intimate encounter with cybernetic intelligence of any significance. It’s our first real exposure to artificial creativity where and when it counts. This isn’t a randomly generated song or art created in a lab as a prototype of what artificial intelligence might someday be able to “achieve.” This is artificially created culture where it matters most, where the minds of our children are themselves being formed within the imagination-less void of an algorithmic feedback loop. And the effects are already discernible. As one concerned mother reports on Reddit, her child was actually beginning to imitate the often odd, even demented speech patterns in the videos. Our children and the algorithm are echoing mindlessness.
The violence in these videos is actually the least troubling thing about them. Stories intended specifically for children have, since time immemorial, often been lurid expressions of collective wisdom. Just go back and read Snow White in the original German if you need proof. But fairytales and folklore are in many ways the exact opposite of AI-generated videos. In the surreal violence of myth is the distillation of our collective wisdom as a human race, developed through eons of lived experience in the world. If anything, they’re the very embodiment of coherence. But AI is the inverse of this, and in many instances we degrade ourselves by engaging with it. As Jaron Lanier writes in You Are Not a Gadget:
The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it’s to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.
But the logic, both economic and moral, that animates the creation of these videos shows no sign of abating any time soon. We have a future to look forward to in which our creative consumption, indeed our entire lives, will be organized to keep us circling the nihilistic void of the algorithm, mimicking it as it mimics us.
For now this is happening with children’s videos on YouTube.
Artists and philosophers have seen this coming. Perhaps the most intriguing example is the early writing of Thomas Pynchon, where disgust at an ever more permeable boundary between the animate and inanimate is complicated by something akin to awe. In his first novel, V., he has a character named Benny Profane interact with an object called SHROUD (Synthetic Human, Radiation Output Determined), a cyborg composed of robotic elements, synthetic humanoid skin, and an actual human skeleton. In their conversation, they make reference to SHOCK (Synthetic Human Object, Casualty Kinematics), a synthetic human body made to assess crash trauma. SHROUD tells Benny that everyone will be like SHOCK and him someday:
After a while he got up and went over to SHROUD. “What do you mean, we’ll be like you and SHOCK someday? You mean dead?”
Am I dead? If I am then that’s what I mean.
“If you aren’t then what are you?”
Nearly what you are. None of you have very far to go.
“I don’t understand.”
So I see. But you’re not alone. That’s a comfort isn’t it?
It would take a book all of its own to unpack the complexity of Pynchon’s thoughts on cybernetics and the organic/inorganic split, but suffice it to say he sensed the impending dehumanization that inevitably springs from the blending of the two in a bid to control the bodies and minds of the masses with technology. Pynchon emphasized entropy in his work. His books resonate with the empty, maniacal laughter of the Joker figure in the “buried alive” video on YouTube: a simulacrum of delight terrifying to anyone whose mind hasn’t already been ground to mush by an algorithm. It isn’t so much the content of the videos that should disturb us, but their utter meaninglessness: an imagination without mind whose continued existence depends on the deterioration of our own.
Scott Beauchamp’s work has appeared in the Paris Review, Bookforum, and Public Discourse, among other places. His book Did You Kill Anyone? is forthcoming from Zero Books. He lives in Maine.