
When the Problem Becomes the Solution

Medical researchers claim social media data offer a chance to improve health care, but what is the human cost?

Facebook co-founder and CEO Mark Zuckerberg testifies before the House Financial Services Committee on Capitol Hill, October 23, 2019, in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

The Old Man and the Sea is a staple of high school literature for good reason. For ninth-grade writers, it curbs the inclination to be verbose. The solitary setting teaches ambitious young minds the beauty of a quiet life. And the story of Ernest Hemingway's life, which ended in suicide, serves as a good lesson that some of the greatest writers have lived some of the most destructive lives. Never meet your heroes, kid.

Not everyone can be Ernest Hemingway. That is to say, alcohol makes artists of a few, but fools of most. (We may even wish to say Hemingway was an artist despite his drinking problem rather than because of it, but his literary success and his sprees were so intertwined that it is hard to imagine what kind of writer a sober Hemingway would have been. A less interesting one, I have to imagine.) For most would-be poets and world-changers, lounging in grunge cafes and getting sloshed on a Tuesday at noon, the lifestyle is the cause of, not the solution to, their lack of a creative muse.

Which brings us to social media, another drug that has neutered many minds far weaker than the big two-hearted writer. In the months since Frances Haugen exposed Facebook, now Meta, for knowingly and even intentionally addicting young girls and boys to its apps, much ink has been spilled on the future of social media. Few have offered true solutions. It seems we're more interested in making an example of one man, Mark Zuckerberg in this case, than in understanding the forces that produced Facebook, the incentives that encourage developers to make their apps addictive, and the solution, which I expect involves much more than time limits. That the dame of the Wall Street Journal, Peggy Noonan, recently suggested regulating social media for the health of our children shows just how mainstream the idea of curbing the addiction is becoming. This success brings new challenges: quick-fix proposals that underestimate the nature of the problem, for example, or, as Facebook scrambles to reassure the public of its worthiness, hackish studies meant to rebut anyone who would say the app is simply harmful.

One such study was recently featured in the Journal, showing how the massive data troves collected by social media can be used to improve medicine, particularly in the field of mental health.

Writes the Journal:

In the study, the authors trained algorithms to sort through roughly 50,000 Facebook posts from about 50 young adult and adolescent psychosis patients who suffered relapses and rehospitalizations. Patients in the study gave the researchers consent to see both their medical records and social-media posts. The algorithms detected distinct changes in the months before a relapse, including: changes in language patterns, especially the use of more words associated with anger, death or emotional withdrawal; changes in the number of posts between midnight and 5 a.m. and the frequency of tagging or friending.
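The study's actual models and word lists are not public, but the three signal types the Journal describes can be illustrated with a minimal sketch. Everything here is hypothetical: the `RISK_WORDS` lexicon, the function name, and the feature set are stand-ins for whatever the researchers actually used, shown only to make concrete what "changes in language patterns" and "posts between midnight and 5 a.m." might look like as inputs to an algorithm.

```python
from datetime import datetime

# Hypothetical lexicon of words associated with anger, death, or
# emotional withdrawal; the study's real word lists are not published.
RISK_WORDS = {"angry", "rage", "death", "dying", "alone", "numb"}

def relapse_signal_features(posts):
    """Compute rough versions of the signals the study describes from a
    list of (timestamp, text) posts: the rate of risk-associated words,
    the count of posts made between midnight and 5 a.m., and volume."""
    risk_hits = 0
    total_words = 0
    late_night = 0
    for ts, text in posts:
        words = text.lower().split()
        total_words += len(words)
        risk_hits += sum(1 for w in words if w.strip(".,!?") in RISK_WORDS)
        if 0 <= ts.hour < 5:  # midnight to 5 a.m. window
            late_night += 1
    return {
        "risk_word_rate": risk_hits / total_words if total_words else 0.0,
        "late_night_posts": late_night,
        "total_posts": len(posts),
    }

posts = [
    (datetime(2021, 5, 3, 2, 14), "I feel so alone and numb tonight"),
    (datetime(2021, 5, 3, 14, 0), "Nice walk in the park today"),
]
features = relapse_signal_features(posts)
```

A real system would track how these features change over months for a single patient rather than score posts in isolation, but the point stands: the raw material is nothing more exotic than timestamps and word counts.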

Psychologically, this is a genius strategy. Most people are uncomfortable when they learn just how much data companies like Facebook and Google collect, even on those who have no account or hardly use one. But while we perceive such knowledge as a threat, it's also amorphous: We don't know what Facebook could do with that data. Surely nothing good? The medical research solution, on the other hand, offers a positive use of the data that assuages the mind. If a man's scrolling habits enable his doctor to prevent his suicide, or if a woman's search history helps the cancer doctor offer her more comfort, we're suddenly much more amenable to the prospect. Who doesn't want to help improve medical technology? You can't argue against it.

But the prospect is still a bad one, and for several reasons, especially the one Facebook would least like you to remember: They caused the very problem they are now selling your data to solve. Skyrocketing depression and suicide rates in the last decade have been the effect of many causes, but none with as clear a relationship as the very apps that are now offering themselves as a mental health cure.

Remember, this all became acceptable discourse only after Ms. Haugen pointed out that Facebook knew its app was harmful to users' mental health, particularly that of teen girls. Users, in America and elsewhere, have known social media is a negative influence since almost the beginning; the fly in the ointment was that Facebook said Yeah, that's actually the point. Until then, the Big Tech conversation centered on censorship on public-square-esque platforms. Now that the conversation has widened to include our national wellbeing, particularly that of the next generation, social media is playing catch-up on messaging. We are fools to imagine this will take very long.

It’s not just Facebook. Earlier this year, Politico reported on the non-profit Crisis Text Line. Crisis Text Line is a suicide hotline, among the most prominent in the world, which responds to calls and texts from men and women at their most desperate moments. The idea is that someone on the other end of the line can help talk an anguished person off the ledge. In the process, the hotline has, unsurprisingly, collected a wealth of information about human beings—data which it then shared with its for-profit spinoff brand, Loris.ai, an automated customer service app that uses artificial intelligence to be abnormally personable. Loris says the data from the hotline was stripped of any personal information (how they define this is unclear) and enabled them to create a “more human” service. Still, the move raised enough ethical questions and eyebrows that Loris has also pledged to share some of its revenue with Crisis Text Line.

Is it moral to draw a person in and study him like a lab rat, if it means you can in turn offer him a good service, or even a better life?

But that's not even the real question, at least not the way social media has posed it. As with Hemingway, the success cases are the outliers: those whose lives are bettered, whose disease is cured, or who are saved from suicide may be one in 100 users, but more likely they will be one in 1 million. The other 999,999 are mined, digested by the algorithm, and spit back out with all the addiction, depression, isolation, and emasculation that has become all too familiar in the social media era.

About the Author

Carmel Richardson is the 2021-2022 editorial fellow at The American Conservative. She received her B.A. from Hillsdale College in political philosophy with a minor in journalism. She firmly believes that the backroads are better than the interstate, and though she currently resides in Northern Virginia, her home state will always be Tennessee.
