
Why Facebook’s Experiment Threatens Further Manipulation


Facebook has taken a lot of heat in the past few days for toying with users’ emotions—albeit for scientific purposes. In January 2012, 700,000 Facebook users became the subjects of a scientific study in which data scientists tweaked the website’s news feed algorithm to skew content in either a positive or negative emotional direction. News feed posts were selected based on the number of positive or negative words each contained. Users were then observed to see whether these emotionally charged news feeds had any effect on their own moods. Sure enough: by the end of the week, subjects were more likely to post positive or negative words themselves, depending on the content they had been shown over the preceding seven days.
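For readers curious what “selected based on the number of positive or negative words” means in practice, here is a minimal, hypothetical sketch of that kind of word-count classification. The word lists, threshold logic, and function names are illustrative assumptions, not Facebook’s actual code or the study’s real lexicon.

```python
# Hypothetical sketch: label a post "positive" or "negative" by counting
# sentiment-bearing words, roughly the word-count approach described above.
# The word sets below are illustrative placeholders only.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}

def classify_post(text: str) -> str:
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# A feed-skewing experiment of the kind described could then simply drop some
# share of posts from one emotional category before rendering a user's feed.
print(classify_post("So excited for this wonderful weekend!"))  # -> "positive"
```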

The study has been greeted with widespread disapproval and alarm. Writers have questioned the data privacy issues at stake and discussed their fear that Facebook is treating them like “lab rats” in a grand social experiment. Wired called it a “blatant violation of trust.” As Laurie Penny wrote at the New Statesman,

Nobody has ever had this sort of power before. No dictator in their wildest dreams has been able to subtly manipulate the daily emotions of more than a billion humans so effectively. There are no precedents for what Facebook is doing here. Facebook itself is the precedent. What the company does now will influence how the corporate powers of the future understand and monetise human emotion.

Beyond these objections, one must also question why the study was allowed in the first place. Adam D.I. Kramer, a Facebook data scientist, said the team conducted the study because “We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out,” according to a post on his Facebook page. “At the same time,” he continued, “we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.”

What did Facebook intend to do (or what have they, perhaps, already done) as a result of this fear? Skew our news feeds in a more positive direction, to shield our fragile egos and comparison-prone selves? Do they intend to save us from our worst selves by giving us only the information they deem important, worthwhile, positive, happy?

This is not Facebook’s job, and it is not part of providing a “better service,” as Kramer said was the intent (“The goal of all of our research at Facebook is to learn how to provide a better service.”). Research toward a “better service” means shielding users from hackers, bugs, and glitches in the system. It means building a platform (not an information-sorting station) where users can interact with friends and family without having to fear subtle manipulation. Yet subtle manipulation is exactly what Facebook was doing, and has in fact done progressively over the past several years. Far from providing a simple social media platform, Facebook now shapes news feeds with a series of algorithms that sort and surface information according to users’ supposed preferences. We have some control over this, but not full control. Information gets parceled out according to the pages we visit most and the links most likely to fit our profiles—and this has something of a cyclical effect: the users and links that don’t show up in our feeds are likely to be forgotten.

If Facebook uses this study to sculpt users’ news feeds in the future, it could have an even more intrusive effect. I don’t want Facebook to act as a nanny, shielding me from information they deem negative or depressing. I’d rather use my own judgment in determining whether certain posts are worth reading, and curate my news feed accordingly.

Besides, the study didn’t reveal anything that common sense couldn’t tell us already. That joyful words uplift us and depressing or doleful attitudes make us sad is a truth as old as Solomon. We don’t need a scientific study to tell us that. The important difference is that in everyday, face-to-face interactions—or even in the curation of a Twitter feed—we’ve always had a choice: to read or not to read, to open ourselves to the positive or the negative, or to seek out different emotional influences. Facebook took that choice away from its users, and appears ready to keep sifting our information into the future. Facebook shouldn’t impose this hidden filter on us, even if it is doing so “for our own good.”

