
The Controlling Power Of Big Data

'Small data points monitored continuously can be very predictive of behaviors'

I’m working this morning on the chapter of my new book in which I talk about how the power and ubiquity of data mining, and the technology that supports it, puts us in an unprecedented position regarding vulnerability to totalitarianism. Just saw this column in the NYT talking about how much health data is available to companies. Excerpts:

I initially got on the phone with Sobhani to get a sense of how our medical devices might be compromised. But the discussion quickly veered into different territory. She argued that our focus on medical data is too narrow: the information coming out of connected pacemakers isn’t nearly as vulnerable as the information coming from far less secure sources. There’s a bigger security risk here, she argued, saying “all our data is health data.”

What Sobhani is talking about is a relatively new field called digital phenotyping, a term coined at the Harvard T.H. Chan School of Public Health. It means taking information from our digital behaviors — on websites, via our phones — and using it to gain insight into potential health issues. “People don’t realize that small data points monitored continuously can be very predictive of behaviors and health and that’s why I’m worried,” Sobhani said. She noted research suggesting that early Parkinson’s motor issues might be detected more accurately from typing patterns on keyboards. And she mentioned one initial study tying language in social media posts and Facebook likes to accurate predictions of depressive episodes.

Social media companies and marketers use this kind of metadata as a way to predict our behaviors all the time; for example, using shopping behaviors, companies often try to gauge whether someone is pregnant. It only makes sense that as technology encroaches deeper into our lives and data analysis gets better, more brazen researchers and organizations would try to glean insights. In such a world, Sobhani argues, even some of our most trivial data — the way our eyes move in a video clip — could be thought of as health data. “As a researcher, I think about the Amazon Echo and if our group had continuous data from voice. We’d learn so much — and I don’t just mean from the emotional content but from the language analysis and things like pauses in your speech.”
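To see how little raw material this takes, consider what the keyboard research mentioned above actually works with: nothing but timestamps of key presses. Here is a minimal sketch in Python of the kind of feature extraction involved; the feature names, the 500 ms pause threshold, and the sample data are illustrative assumptions of mine, not the method of the study Sobhani cites.

```python
# Hypothetical illustration of "digital phenotyping" from keystrokes:
# the only raw data is when keys were pressed, yet simple timing
# features already describe a typist's motor rhythm. The 500 ms pause
# cutoff and the sample data below are made up for this sketch.
from statistics import mean, stdev

def keystroke_features(press_times_ms):
    """Summarize typing rhythm from a list of key-press timestamps (ms)."""
    intervals = [b - a for a, b in zip(press_times_ms, press_times_ms[1:])]
    long_pauses = sum(1 for gap in intervals if gap > 500)  # assumed cutoff
    return {
        "mean_interval_ms": round(mean(intervals), 1),
        "interval_variability_ms": round(stdev(intervals), 1),
        "long_pause_rate": round(long_pauses / len(intervals), 2),
    }

# One short typing burst: steady keystrokes, a hesitation, more keystrokes.
print(keystroke_features([0, 110, 240, 360, 980, 1100, 1230, 1900, 2010]))
```

Feed thousands of such feature vectors to an ordinary classifier and a keyboard becomes a crude, continuous motor-function monitor. That is the whole trick: small data points, monitored continuously.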

What if your health insurance company, as a purchaser of data, has reason to suspect that you are suffering from a particular condition before you do, and decides to cut you off? [UPDATE: A number of readers have pointed out that the ACA forbids this. Sorry for the mistake. — RD]

If this topic interests you, read Shoshana Zuboff’s The Age Of Surveillance Capitalism.

Very few of us understand what’s happening, because it is hard to comprehend. This quote — “People don’t realize that small data points monitored continuously can be very predictive of behaviors and health and that’s why I’m worried” — ought to be lit in flashing neon. Zuboff explains in detail how this works. Essentially, tech companies are constantly mining bits of data from our online activities — which include, for example, every time you pay for something with a debit or credit card, and everywhere you drive with location services turned on in your mobile phone. They have gotten very, very good at using this data to predict what you like and what you’re going to do. As Sobhani says, they are going to be able to detect potential health issues using this data and technology.

Last month in Poland, I spoke to a young, very successful tech entrepreneur, who told me that the technology is getting so sophisticated that the algorithms will “know” what we think before we do, and will figure out how to nudge us in particular ways, without our being aware that we are being directed to reach certain conclusions, and to carry out certain behaviors. We will be unfree, but have no awareness of that fact. This is not science fiction. This is coming, and is almost here.

I’m encountering some conflicting data myself in my interviews with people who grew up under communism. Most of them tell me that they knew that their societies were being run by liars and tyrants, and that knowledge allowed them to build internal resistance. But the first person I interviewed who grew up in the Soviet Union told me that nobody in her environment really understood the extent of the lies and manipulation. Why? She said that unless you lived in a major city (she did not), you had no way of getting outside the totalitarian information environment to compare what they told you with the way the world really was. In the major cities, you stood a chance of getting bits and pieces of information from the West, or at least hearing of dissidents. Where she lived, it was a total Soviet-controlled information environment. People didn’t know that life could be otherwise.

Obviously that kind of information environment is impossible to impose on us today. Would-be totalitarian controllers in our time and place would have to convince us that we are seeing the truth, and living in the truth, amid a radical diversity of information and information sources. In Soviet totalitarianism, information flowed from the top down — and this was possible because the state controlled the information environment. Now the information environment is vastly more complex and integrated, with information flowing to and fro from countless directions. A totalitarian controller would have to massage the message.

This may well be easier to do than you think. Hannah Arendt, in The Origins Of Totalitarianism, says that one precursor to totalitarianism is a people giving up on the idea of Truth, and the importance of Truth, and instead choosing to accept truth claims that confirm what they already believe. If a person, and a society, comes to believe that Truth is only an ideological construct, they prepare themselves for the controllers. If all digital reality is virtual, then it can be whatever we want it to be. And if the controllers know how to make us want what they want, without us knowing that we are being manipulated, we are in a world of trouble.

I repeat what the anti-communist dissident Kamila Bendova told me a couple of months ago in Prague: That there is no such thing as the innocent compilation of one’s personal data. “You can’t have lived through what we lived through and believe otherwise,” she said (I’m paraphrasing). “Sooner or later, that information is going to be used against you.”

UPDATE: From CNBC today:

Amazon said this week its facial recognition software can detect a person’s fear.

Rekognition is one of many Amazon Web Services (AWS) cloud services available for developers. It can be used for facial analysis or sentiment analysis, which identifies different expressions and predicts emotions from images of people’s faces. The service uses artificial intelligence to “learn” from the reams of data it processes.

The tech giant revealed updates to the controversial tool on Monday that include improving the accuracy and functionality of its face analysis features such as identifying gender, emotions and age range.

“With this release, we have further improved the accuracy of gender identification,” Amazon said in a blog post. “In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear.’”
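To see how mundane this capability looks from the developer’s side, here is a minimal sketch using boto3, the AWS SDK for Python. It assumes configured AWS credentials and a local photo named face.jpg; both are my assumptions, not details from the CNBC story.

```python
# Hypothetical sketch: asking Amazon Rekognition to analyze a face.
# Assumes AWS credentials are configured and face.jpg exists locally.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as image_file:
    response = client.detect_faces(
        Image={"Bytes": image_file.read()},
        Attributes=["ALL"],  # include emotions, age range, gender, etc.
    )

# Each detected face carries a confidence score per emotion label,
# including the newly added "FEAR".
for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(f'{emotion["Type"]}: {emotion["Confidence"]:.1f}%')
```

A dozen lines and a cloud account: that is the entire barrier to entry for reading emotions off a stranger’s face.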

From The Age Of Surveillance Capitalism:

How? Through cameras in your smartphone, tablet, and laptop. Smart TVs will soon be equipped with cameras and software capable of reading the reaction of viewers to ads. Again: this is not science fiction. This is happening. And it’s not being done by a totalitarian state; it’s being done by major corporations.

 
