
The Internet As Monster

James Bridle loves the Internet, but sometimes, he wants to destroy it. Here's why

A filmmaker friend texted me a link to an essay today, and now I see that a reader has brought it up in a comment thread. I need to re-read it to know for sure what I think about it, but man, it’s some disturbing stuff. The writer is an artist (and, it appears, a cultural liberal) named James Bridle, and he titles the piece (which appears on Medium), “Something Is Wrong On The Internet”.

It starts like this:

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the so-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

OK, fair enough. But then it gets really weird:

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

When I got to this point in the essay — and it’s only the first few paragraphs — I thought maybe my filmmaker friend texted it to me because Bridle is a kook. But then I read on.

Bridle is not a kook.

It’s hard to summarize his point, but I’ll try. He talks about the extreme popularity of YouTube content made for small children. A lot of it is perfectly innocent, but some trolls get in and put material into the mix that’s very traumatizing for children. YouTube can’t really prevent this. What’s especially creepy, Bridle says, is that there are lots of algorithms at work producing these kinds of videos, with little or no human involvement. It ends up with really bizarre and disturbing stuff, all being streamed to kids.

Bridle:

I’m trying to understand why, as plainly and simply troubling as it is, this is not a simple matter of “won’t somebody think of the children” hand-wringing. Obviously this content is inappropriate, obviously there are bad actors out there, obviously some of these videos should be removed. Obviously too this raises questions of fair use, appropriation, free speech and so on. But reports which simply understand the problem through this lens fail to fully grasp the mechanisms being deployed, and thus are incapable of thinking its implications in totality, and responding accordingly.

The New York Times, headlining their article on a subset of this issue “On YouTube Kids, Startling Videos Slip Past Filters”, highlights the use of knock-off characters and nursery rhymes in disturbing content, and frames it as a problem of moderation and legislation. YouTube Kids, an official app which claims to be kid-safe but is quite obviously not, is the problem identified, because it wrongly engenders trust in users. An article in the British tabloid The Sun, “Kids left traumatised after sick YouTube clips showing Peppa Pig characters with knives and guns appear on app for children” takes the same line, with an added dose of right-wing technophobia and self-righteousness. But both stories take at face value YouTube’s assertions that these results are incredibly rare and quickly removed: assertions utterly refuted by the proliferation of the stories themselves, and the growing number of social media posts, largely by concerned parents, from which they arise.

But as with Toy Freaks, what is concerning to me about the Peppa videos is how the obvious parodies and even the shadier knock-offs interact with the legions of algorithmic content producers until it is completely impossible to know what is going on. (“The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.”)

Take a look at this video. It’s profoundly disturbing, especially if you think about little kids streaming this stuff into their consciousness unawares — and it’s an example of what Bridle is talking about:

[youtube https://www.youtube.com/watch?v=uXjJdv5fj5k]

Bridle concludes by expressing unease at the illiberal consequences of his conclusions (see how he opened the essay), but then:

This is a deeply dark time, in which the structures we have built to sustain ourselves are being used against us — all of us — in systematic and automated ways. It is hard to keep faith with the network when it produces horrors such as these. While it is tempting to dismiss the wilder examples as trolling, of which a significant number certainly are, that fails to account for the sheer volume of content weighted in a particularly grotesque direction. It presents many and complexly entangled dangers, including that, just as with the increasing focus on alleged Russian interference in social media, such events will be used as justification for increased control over the internet, increasing censorship, and so on. This is not what many of us want.

I’m going to stop here, saying only this:

What concerns me is not just the violence being done to children here, although that concerns me deeply. What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects. As I said at the beginning of this essay: this is being done by people and by things and by a combination of things and people. Responsibility for its outcomes is impossible to assign but the damage is very, very real indeed.

Read the whole thing. I’m telling you, read it. You may dismiss him as a nut, but I don’t think you will, not if you read the thing. The standard anti-Luddite response will not work here.

It’s like a science fiction nightmare come true. Don’t let your kids get started on the Internet, on YouTube. Don’t.

I’m still not sure what I think about this essay. Eager to hear y’all’s takes.

UPDATE: A reader comments:

Rod, I’m not sure if you’re fully understanding the issue here and how nightmarish it truly is. Trolls are not the primary problem; it’s the tidal wave (literally tens of thousands) of automatically generated videos like the one you linked, all optimized to be selected by YouTube’s algorithm to play next in an infant’s YouTube queue. The only goal in mind is to *get more views*.

Again, these videos are literally *generated by software* with no input from a human being on the “creator’s” side based on popularity of past videos (whose ratings and views are inflated by bots by the way) and then “filtered” by software on YouTube’s side, which has no ability to evaluate the artistic quality of a video as a whole. Whatever nihilistic nightmare videos these mindless algorithms are puking out are turning babies’ brains into ground beef. At least in Brave New World they had the objective of conditioning people for a greater purpose of social cohesion.

