
Life As A Social-Credit Untouchable

Tech industry reader gives details about what dissidents are going to face under soft totalitarianism

We’re getting close to the Tuesday launch of Live Not By Lies. After church this morning, I was talking to one of my fellow parishioners about the social credit system, and how last week, a reader in the tech industry had sent me a report from a trade publication that focuses on coming trends; the report was aimed at showing the industry how to prepare itself to profit from the coming social credit system in America. I blogged about it here, and also included some text from my book about how the Chinese totalitarian state controls its population through a pioneering social credit system.

In Live Not By Lies, I predict that the soft totalitarianism coming to the US will use an Americanized version of this method to compel conformity with its “social justice” ideology. As I explained to my churchgoing friend this morning, the state won’t have to send secret police after you; if you tell most Americans that you and/or your children won’t be able to gain access to the best schools, or you won’t be able to travel as freely, or go to some sporting events, etc., because you have a low social credit score, they will do as they are told. Over the weekend, I recorded an episode of a video podcast in which one of the hosts, a pastor whose church includes university faculty, said that the academics in his flock have been telling him that they can see the rudiments of this kind of thing already taking shape in their workplace. They are being asked questions that they’ve never before been asked by superiors, and they know that their answers are being noticed.

All of that is a preface to an extraordinary e-mail I received from a reader of this blog. I know his real name, but he has asked me not to use it. He reminded me that he had written me before to encourage me about the importance of the Live Not By Lies project, and now he has something else to add. Read this:

I hope some of this will be of insight or use to your readers, and maybe offer some advice on how to reduce the impact of the coming Social Credit system on their lives. To start, I am a software developer who has worked with machine learning and “AI” professionally. While it’s not the primary focus of my daily work, I tend to keep up with developments in the field and am skilled enough to use it to solve problems in production systems, in other words, things that are consumed by end users and actually need to work right. Problems I have used it for include recommending content to users, predicting how long a user will remain engaged on a site, and so on. Properly used, it is an extremely powerful tool that can improve people’s lives by serving them content they are more interested in and not wasting their time with stuff they’re not.

Maliciously used, well…at a minimum, it can be used to manipulate people into staying on a site longer by serving content that is designed to stimulate the brain’s pleasure centers. Facebook does this to keep people reading, serving items tailored to the experience they tend to prefer and the things they’ve liked. If these techniques increase a site’s user engagement by even a few percentage points, it pays off big in increased ad revenue. Other practical applications include quality control, homicidal Teslas, security systems, and so on.

Most people who don’t work within the field don’t understand what artificial intelligence can and cannot do. To start with, the term itself is misleading. Computers have not yet attained true “intelligence” in the human sense. We are probably a ways off from any system attaining consciousness in the classical sense, though it’s worth pointing out that systems can be designed to act rationally within a given set of information presented to them. DeepMind, Google’s AI “moonshot” company that intends to create a true artificial intelligence, has designed a system that can play old Atari games without any instructions. While I haven’t read into the details, I would imagine that this works by the system trying an action and seeing whether it results in a more positive state (losing fewer lives, achieving a higher score, etc.).
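
If you want a feel for that try-an-action, check-the-outcome loop, here is a minimal tabular Q-learning sketch. This is a generic textbook technique, not DeepMind’s actual method, and the `env` object and its interface are hypothetical stand-ins for a game:

```python
# Minimal tabular Q-learning sketch: try actions, and prefer the ones that
# historically led to a more positive state (higher score, fewer lives lost).
# `env` and its reset/step/actions interface are hypothetical stand-ins.
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # (state, action) -> estimated long-run reward
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:    # sometimes explore blindly
                action = random.choice(env.actions)
            else:                            # otherwise take the best known action
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.actions)
            # Nudge the estimate toward the observed reward plus discounted future value.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```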

On the other hand, computer games with “artificial intelligence” generally don’t use true AI techniques to simulate a player, as that is too computationally expensive and often gives less-than-desired results. A good example of this was a strategy game where the AI simply had its units run away to an isolated corner of the map, because that was the most “rational” decision it could make within the framework of the game. In many senses, though, a true thinking machine is on the same timeline as a flying car or fusion power.

That said, a social credit system does not actually need true artificial intelligence to function; it can be done with some very simple techniques. You take a set of data points and then run them through an algorithm to determine the likelihood that a person matches that data set. For example, if you’re trying to determine whether someone is an active member of a “bigot” denomination of Christianity, or holds its views, you would find some people who fit the profile, extract the data points that distinguish them from someone who does not, and then check unclassified people against those points.

So, if you know a “bigot” Christian shops at a religious bookstore, gives more than five percent of their income, frequents certain forums, etc., then you can develop a data profile of what this type of person looks like. In turn, if someone only meets one of those three criteria, such as visiting the forums but not engaging in the other two activities, the probability of them being a “bigot” Christian is much lower. Maybe they’re just an atheist trolling the forums. Likewise, if they visit a religious bookstore, they might just be browsing or buying a gift, and this would require adjusting the algorithm’s inputs to filter out casual shoppers.
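
To make that concrete, here is a minimal sketch of such a profile classifier using scikit-learn. The three binary features and the tiny training set are invented purely for illustration:

```python
# Minimal profile-classifier sketch. Invented binary features:
# [shops_at_religious_bookstore, gives_over_5_percent, frequents_forums]
from sklearn.linear_model import LogisticRegression

X_train = [
    [1, 1, 1],  # fits the full profile
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],  # forums only: could be an atheist trolling
    [1, 0, 0],  # bookstore only: could be buying a gift
    [0, 0, 0],
]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = matches the targeted profile

model = LogisticRegression().fit(X_train, y_train)

# Someone who only meets one criterion scores far lower than someone
# who meets all three, just as described above.
print(model.predict_proba([[0, 0, 1]])[0][1])  # low probability
print(model.predict_proba([[1, 1, 1]])[0][1])  # high probability
```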

The big challenge in doing this is not actually running the algorithms, but classifying and processing the data that the algorithms operate on. This is where data science and big data come in.

There are statistical analysis programs which can be run against a data set to begin filtering out irrelevant things and looking for specific patterns. Continuing with the above example, let’s say that both a “bigot” Christian and a hardcore woke social justice warrior buy bottled water and celery at the store. This data point doesn’t distinguish the two, so it gets tossed out. By refining the data through statistical techniques, it quickly becomes clear which points should be looked at to distinguish membership in a given data set. The SJW doesn’t attend church, or attends only a “woke” one, so they can be filtered on that point. This is why seemingly innocuous things like loyalty cards, social media posts, phone GPS, and so on are actually dangerous. They can tie your life together and build out a profile that classifies and analyzes you, making connections you never thought would be made. All it takes is building “training sets” and then throwing live data at them to get a usable outcome.
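
As a toy illustration of that filtering step, a chi-squared test can score how well each data point separates the two groups; the features and counts here are invented:

```python
# Sketch of statistically filtering out non-discriminative data points.
# Invented columns: [buys_bottled_water, buys_celery, attends_church]
import numpy as np
from sklearn.feature_selection import chi2

X = np.array([
    [1, 1, 1],  # profiled "bigot" Christians
    [1, 0, 1],
    [1, 1, 0],  # woke SJWs
    [1, 1, 0],
])
y = np.array([1, 1, 0, 0])

scores, pvals = chi2(X, y)
# Bottled water and celery show up in both groups and score near zero,
# so they get tossed out; church attendance separates the classes.
keep = [col for col, score in enumerate(scores) if score >= 1.0]
print(scores, keep)  # church attendance is the only column kept
```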

Ultimately, the power of all this is being able to do, on an automated basis, what would have been a ton of legwork for a secret or thought police. Even if, say, there is still a secret police or social media police force involved in making the ultimate decisions about how to handle a particular report, machine learning can help sort private grievances from real dissenters by looking at what distinguishes a legitimate report from a fake one. No longer does expertise have to be produced and retained via training and personal experience; it can now simply be saved to a database and pulled out, refined, and applied as needed. These routines can be packaged up, distributed, and run at will, or run by accessing cloud servers. Huge numbers of profiles can be run at any given time, too. While building the profiles is computationally expensive, running them is very quick.
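
A minimal sketch of that save-and-apply point, with an invented model and invented records: fitting the profile is the slow part, and the fitted “expertise” can be serialized, stored, and run against huge batches cheaply:

```python
# Sketch: distill "expertise" once, store it, and apply it at scale.
# The model and records are invented for illustration.
import pickle
from sklearn.linear_model import LogisticRegression

# The expensive step, done once:
model = LogisticRegression().fit([[1, 1, 1], [0, 0, 0]], [1, 0])

blob = pickle.dumps(model)       # save the profile to a database
scorer = pickle.loads(blob)      # pull it out later, anywhere

batch = [[0, 1, 1]] * 100_000    # 100,000 hypothetical citizen records
flags = scorer.predict(batch)    # scoring is fast compared with fitting
```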

The other thing that people in your comment section don’t grasp is that this is not a political issue in any sense of the word. Tech and the consolidation of power don’t really have a right or a left to them; this is about technocracy.

The reality of why this will be applied to people opposed to critical race theory is simply that opposing CRT means you are a skeptic, that you still think 2 + 2 = 4, and that you oppose what the elite are teaching you. People who think this is overblown, or won’t apply to them, whatever their politics, are naive. Anything which militates against accepting the common vision is the marker here, and the marker could be something else entirely down the road as trends change and what is acceptable shifts.

Driving a gasoline-powered car, having your recycling bin half full, or buying bottled water might all be things that impact your social credit score if it begins to be applied to environmental issues. Drink three beers instead of two at a restaurant? You’re going to be flagged as an alcoholic and watch your health care premiums shoot up, or perhaps lose coverage altogether if we have a single-payer system. The true evil in this is how it dehumanizes people, categorizes them, and removes their individuality by reducing them to a statistic. “I’m not a number, I’m a name” no longer applies. Mental illness is overrepresented in the tech world, and you run into all sorts of people who are narcissistic, sociopathic, and so on. How well and fast and elegantly something runs is the true yardstick, not whether it is ethical and moral.

Also, the notion that there will be laws against this sort of thing, that there will be a legal deus ex machina that will stop this soft totalitarianism, is just laughable. Things like GDPR [General Data Protection Regulation, an EU law — RD] are a step in the right direction, but data and web services are so interconnected today that trying to erase all your digital tracks is going to be very difficult. Besides, if you’ve been offline for several years, you’re trying to hide something, right? Tech used to be full of people with a very libertarian and free-thinking mindset. This was also when it was at its most innovative. These days, identity politics is pushing out libertarian politics, and the idea of curtailing speech and access for people who are “racist,” etc, is not just accepted but promoted. Even if law doesn’t come into it, technology has been biased against personal freedom and privacy for a long time.

If nothing else, underfitting a data curve (that is, building a model so simple that it applies too broadly) might result in people being unfairly barred from banking, jobs, public transit, and so on. Think of the poor guy in Fahrenheit 451 whom the mechanical hound killed in place of Montag after he escaped. You likely won’t be able to appeal, as this would reveal too much about the algorithms the system is using. Maybe you’ll get scored differently again after a period of time, but there is no guarantee of that, either. The system will always err on the side of underfitting, too. In the new Sodom, there are not fifty righteous men. Everyone is guilty of something.
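
A toy illustration of that underfitting problem, with invented people and an invented one-feature rule:

```python
# Sketch of an underfit rule: one feature, no nuance, so it sweeps up
# innocent people along with the target. All data is invented.
people = [
    {"name": "dissident",    "bookstore": 1, "gives_5pct": 1, "forums": 1},
    {"name": "gift_shopper", "bookstore": 1, "gives_5pct": 0, "forums": 0},
    {"name": "bystander",    "bookstore": 0, "gives_5pct": 0, "forums": 0},
]

def underfit_flag(person):
    # Far too broad: anyone who enters a religious bookstore gets flagged.
    return person["bookstore"] == 1

print([p["name"] for p in people if underfit_flag(p)])
# ['dissident', 'gift_shopper'] -- the innocent shopper is barred too
```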

Dealing with this is trickier than it might seem, but the system can also be spoofed somewhat. As you point out, China’s system relies on cameras being present everywhere, and it also lowers your score for associating with people who meet certain criteria. The first and most important thing to remember is that you are going to have to be cautious and be fully “underground.” Public pronouncements that run contrary to the acceptable narrative are going to be an automatic black mark on your score. Keep your mouth shut; don’t post memes on Facebook. Otherwise, you’re going to suddenly find that your bank account is locked for “supporting extremism,” and you’ll have a pink slip waiting for you at work the next day.

Now, getting down to practical matters: if you and someone with a low social credit score ride the same bus to work, that’s probably not an issue, but if you ride the same bus and then have lunch together, big red flag. Will sitting next to each other matter? Maybe, but you might also get on at the same stop, so this would be less of a red flag, particularly if you don’t talk to each other beforehand. Change where you sit on the bus, sit together some days and not others, and change your seating position relative to each other. At some point, it becomes increasingly difficult to develop rules from a pattern, and your behavior might be thrown out as an outlier. This is going to be harder to maintain if you need long-term interaction with someone, like at a prayer group, but it can still be used for some one-on-one time, important communication between groups, spreading information manually, etc.
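
To show how little sophistication that kind of flagging requires, here is a minimal co-location sketch over (person, place, time) sightings; the names, places, and threshold are all invented:

```python
# Sketch of co-location flagging: count how often two people appear at the
# same place at the same time. All data and the threshold are invented.
from collections import Counter
from itertools import combinations

sightings = [
    ("alice", "bus_12", "mon_0800"), ("bob", "bus_12", "mon_0800"),
    ("carol", "bus_12", "mon_0800"),
    ("alice", "cafe_3", "mon_1200"), ("bob", "cafe_3", "mon_1200"),
]

by_place_time = {}
for person, place, time in sightings:
    by_place_time.setdefault((place, time), []).append(person)

together = Counter()
for group in by_place_time.values():
    for pair in combinations(sorted(group), 2):
        together[pair] += 1

# Same bus AND lunch together crosses the (made-up) threshold of 2.
print([pair for pair, n in together.items() if n >= 2])  # [('alice', 'bob')]
```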

Speaking of prayer groups, the obvious answer is to congregate in places where there are not likely to be any cameras. As cameras become smaller, less visible, hooked into networks, and gain better low-light performance and resolution, it’s going to be increasingly difficult to know when you’re being watched and when you’re not. I’d expect parks to be monitored, and if not inside the park by drones or cameras, then at least at the entrances and exits. The same group of ten or so people going to the same place every week for the same amount of time, especially if one or two are known to be problematic? Big red flag. On the other hand, people meeting in random locations, in varying numbers, at varying times, might slide by the system and be lost in the “noise.”

Phones will be tracking people, of course, and phones being in the same location at the same time, big red flag. If you leave your phone at home, hope that you don’t have an internet-of-things device connected or a camera on your building, or you’ll be known as a person who leaves their phone behind. Big red flag. If you leave your phone in your dwelling but are seen to go exercise without it, maybe less of a red flag. Just don’t be gone for three hours on a “quick run.”

There is also the idea of too little data being available, a “black hole” if you will. If you don’t carry a phone, don’t use social media, don’t obviously associate with anyone, and so on, you’re likely going to be flagged, because you’re seen as trying to hide your life and activity. It’s worth noting that phones are basically ubiquitous in Chinese society, and people were trying to estimate the actual impact of Covid-19 in China based on how many fewer people were buying and using cell phones after December of last year. Why are phones ubiquitous? Because people need their positive behavior to be recorded.
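
As a toy sketch of that “black hole” flag, here is an invented activity log where the quiet person stands out statistically:

```python
# Sketch of a "black hole" detector: too little recorded activity is itself
# suspicious. Names and event counts are invented.
import statistics

weekly_events = {"alice": 140, "bob": 160, "carol": 150, "dave": 3}

mean = statistics.mean(weekly_events.values())
stdev = statistics.stdev(weekly_events.values())

# Anyone a full deviation below the norm is "trying to hide something."
dark = [name for name, n in weekly_events.items() if n < mean - stdev]
print(dark)  # ['dave']
```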

Ultimately, the idea is either to stay within the bounds of normal behavior, or to engage in behavior that doesn’t meet an expected pattern and will likely be trimmed as an outlier (assuming outliers aren’t investigated as well). If you need to meet, do so in a way that would be socially acceptable and plausible, like an art class or a pickup sports game in the park. Taking a break on a random park bench with random people at random times might work as well. Use written communication, or learn sign language, to avoid bugs (conversations can be analyzed for keywords as well; see the sketch below). The thing is, the more effective a social credit system becomes, the less human intervention and legwork there is likely going to be. No one’s going to bother looking at what you wrote in a notebook, because it would take too much effort to track someone down and actually examine it. They care more about where you go, when you go there, who’s there, and so on. The faith in technology is such that there is a strong bias against manual intervention.
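
That keyword analysis is as crude as it sounds. A toy sketch, with an invented watchlist:

```python
# Sketch of keyword flagging on a transcribed conversation.
# The watchlist terms are invented for illustration.
WATCHLIST = {"bible study", "prayer group", "tithe"}

def flag(transcript: str) -> bool:
    text = transcript.lower()
    return any(term in text for term in WATCHLIST)

print(flag("See you at Bible study on Thursday."))  # True
print(flag("See you at the gym on Thursday."))      # False
```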

No, none of this is going to help us stop the soft totalitarianism, and I have been repeating over and over that orthodox Christians and other soon-to-be-unpersoned groups need to really start understanding and preparing for life as “untouchables.” If you post the wrong thing, say the wrong thing, or hang out with the wrong people, your card is going to be cancelled, no other bank will pick you up, you’re likely not going to be able to get a job due to a low score, and so on. You might not even be able to pick up menial work. Under-the-table work will be gone once everything needs a card for a transaction. All cards are issued by banks, and most of them are woke. Think there will be a “rogue” bank that will issue you one? Good luck with that. If you think, okay, you can start your own small business growing and selling food, other goods, etc…you still need to be able to buy and sell supplies. A cashless economy requires having a card and an account, and you won’t be able to open an account due to “extremism.”

As you’ve been hammering home over and over again, now is the time to form communities. These can quite literally provide support and safe harbor for internal exiles, if they are careful. This isn’t just about maintaining the faith, but about maintaining those who will have nowhere else to go. Barter, self-sufficiency, low tech, these things are going to be massively important.

A good follow-up to Live Not By Lies would be a detailed how-to guide for living underground in the coming tech-based tyranny. It should be published while we can still do that sort of thing. Anyway, Live Not By Lies really has more to do with how and why to prepare yourself spiritually to resist. It’s a book aimed specifically at subversive Christians, but any person of faith, or any person who rejects the SJW ideology now taking over institutions, will benefit from it. A liberal friend of mine who disagrees with my thesis tells me that what I call “soft totalitarianism” is not that, but simply the frightening (to people like me) novelty of living in a society that doesn’t operate from Christian assumptions. Pointing to the social credit system is the first thing I would say to him in response. There won’t be any way for any dissenter to opt out of this, or to build a stable, prosperous life on society’s margins.

And by the way, I’m slated to be on Tucker Carlson Tonight on Monday evening to talk about the book, and why this matters. Tune in.

