
Gays Against Groomers Get Financially Deplatformed

Dissident group forbidden by PayPal and Venmo from using their services. This is how the social credit system will be used against us all.

Just like that: [screenshot of the PayPal and Venmo ban notices]


This is how soft totalitarianism works: no gulags, no jail time, just being excluded from the marketplace. We are rapidly approaching the point where one may not buy or sell without permission of the Regime.

This is also how soft totalitarianism works: the "Regime" is not the State alone, as in the earlier iteration of totalitarianism. It is rather the informal coalition of elites in government, media, finance, academia, and private industry (Yarvin's term "the Cathedral" is also good) who share the same illiberal left-wing convictions, and act in concert. It is Venmo's and PayPal's right to do what they're doing. But the effect is bad for democracy.

It's like with Amazon, when it decided not to sell Ryan T. Anderson's book critical of transgender ideology, and similarly themed books. It's Amazon's right -- but if Amazon, with its dominant share of the book market, decides that it will not sell a certain kind of book, then that kind of book will not be published.

It's entirely legal. Do you want a system in which a bookseller is forced to sell books he finds immoral? I don't. But in Amazon's case, making a fully legal decision has dramatic consequences for freedom of speech and debate.

I don't know how this should work, in terms of legislation to solve the problem of financial deplatforming. But this is an issue conservative, libertarian, and authentically liberal politicians should start talking about -- and, when workable policies and laws present themselves, then acting on them. If not, people who dissent from the Regime's ideology will find themselves more and more driven to the margins, and forced through non-violent means to comply.


Two years ago, just before the publication of Live Not By Lies, one of my readers e-mailed me (I posted it at the time; this is a reprint):

I hope some of this will be of insight or use to your readers and maybe offer some advice on how to reduce the impact of the coming Social Credit system on their lives. To start, I am a software developer who has worked with machine learning and “AI” professionally. While it’s not the primary focus of my daily work, I tend to keep up with developments in the field and am skilled enough to use it to solve problems in production systems — in other words, things that are consumed by end users and actually need to work right. Problems I have used it for are recommending content to users, predicting essentially how long a user will remain engaged on a site, and so on. Properly used, it is an extremely powerful tool that can improve people’s lives by serving them content they would be more interested in, and not wasting their time with stuff they’re not.

Maliciously used, well…at a minimum, it can be used to manipulate people into staying on a site longer by serving content that is designed to stimulate the brain’s pleasure centers. Facebook does this to keep people reading items tailored to the kind of experience they prefer and the things they’ve liked. If these things can be used to increase a site’s user engagement by even a few percentage points, it can pay off big in terms of increased ad revenue. Other practical applications include quality control, homicidal Teslas, security systems, and so on.

Unless you work within the field, you probably don’t understand what artificial intelligence can and cannot do. To start with, the definition is misleading. Computers have not yet attained true “intelligence” in the human sense. We are probably a ways off from any system attaining consciousness in the classical sense, though I suppose it’s worth pointing out that systems can be designed to act rationally within a given set of information presented to them. DeepMind, Google’s AI “moonshot” company that intends to create a true artificial intelligence, has designed a system that can play old Atari games without any instructions. While I haven’t read more into the details, I would imagine that this happens by the system trying an action and seeing if it results in a more positive state (losing fewer lives, achieving a higher score, etc).
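
To make that concrete, here’s a toy sketch of the try-an-action, watch-the-score idea. This is emphatically not how DeepMind’s system actually works (that involved deep neural networks); it’s the simplest possible version of learning from a reward signal, with a stand-in game whose payoffs I invented for illustration:

```python
import random

# Toy trial-and-error learner: try actions, keep a running estimate of how
# much each one improves the "score," and favor the historically best one.
# This is a simple bandit-style sketch, not DeepMind's actual method.
actions = ["left", "right", "fire"]
value = {a: 0.0 for a in actions}   # estimated payoff per action
counts = {a: 0 for a in actions}

def score_change(action):
    # Stand-in for the game: in reality this would be the observed change
    # in score or lives after the action. These payoffs are invented.
    payoff = {"left": 0.1, "right": 0.3, "fire": 1.0}
    return payoff[action] + random.gauss(0, 0.5)  # noisy reward

for step in range(1000):
    if random.random() < 0.1:                 # occasionally explore
        a = random.choice(actions)
    else:                                     # otherwise exploit the best
        a = max(actions, key=value.get)
    reward = score_change(a)
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]   # incremental average

print(max(actions, key=value.get))  # almost always "fire"
```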

On the other hand, computer games with artificial intelligence generally don’t use true AI techniques to simulate a player, as it’s too computationally expensive and often gives less than desired results. A good example of this was a strategy game where the AI simply had its units run away to an isolated corner of the map because this was the most “rational” decision it could make within the framework of the game. In many senses, though, a true thinking machine is on the same timeline as a flying car or fusion power.

That said, a social credit system does not actually need true artificial intelligence to function; it can actually be done with some very simple techniques. You take a set of data points and then run them against an algorithm to determine the likelihood that a person matches that data set. For example, if you’re trying to determine if someone is an active member of a “bigot” denomination of Christianity, or holds its views, you would find some people who fit the profile, then extract data points that distinguish them from someone who does not, and then check unclassified people against those points.

So, if you know a “bigot” Christian shops at a religious bookstore, gives more than five percent of their income, frequents certain forums, etc, then you can develop a data profile of what this type of person looks like. In turn, if someone only meets one of those three criteria, such as visiting forums but not engaging in the other two activities, the possibility of them being a “bigot” Christian is much lower. Maybe they’re just an atheist trolling on the forums. Likewise, if they visit a religious bookstore, they might just be browsing or buying a gift, and this would require adjusting the algorithm inputs to filter out casual shoppers.
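
Here’s a toy sketch of that matching step, with made-up feature names and weights (a real system would learn the weights from labeled examples, e.g. with logistic regression):

```python
import math

# Invented weights for the three data points described above. A real
# system would fit these numbers from labeled training data.
PROFILE = {
    "shops_at_religious_bookstore": 1.5,
    "donates_over_5_percent":       2.0,
    "frequents_certain_forums":     1.0,
}
BIAS = -2.5  # most people don't fit the profile

def match_probability(person: dict) -> float:
    """Crude logistic score: more matching data points, higher probability."""
    z = BIAS + sum(w for feat, w in PROFILE.items() if person.get(feat))
    return 1 / (1 + math.exp(-z))

# One criterion alone (forum visits) gives a low score...
print(round(match_probability({"frequents_certain_forums": True}), 2))  # 0.18
# ...all three together give a high one.
print(round(match_probability({
    "shops_at_religious_bookstore": True,
    "donates_over_5_percent": True,
    "frequents_certain_forums": True,
}), 2))                                                                  # 0.88
```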

The big challenge in doing this is not actually running the algorithms, but classifying and processing the data that the algorithms operate on. This is where data science and big data come in.

What happens is that there are statistical analysis programs which can be run against a data set to begin filtering out irrelevant things and looking for specific patterns. Continuing with the above example, let’s say that both a “bigot” Christian and a hardcore woke social justice warrior buy bottled water and celery at the store. This data point doesn’t distinguish the two, so it gets tossed out. By refining the data through statistical techniques, it quickly becomes clear which points should be looked at to distinguish membership in a given data set. The SJW doesn’t attend church, or attends only a “woke” one, so they can be filtered on that point. This is why seemingly innocuous things like loyalty cards, social media posts, phone GPS and so on are actually dangerous. They essentially tie your life together and build out a profile that can be used to classify and analyze you, with connections you never thought would be made. All it takes is building “training sets” and then throwing live data at them to have a usable outcome.
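
Here’s a toy version of that filtering step, with invented data. Real pipelines would use something like a chi-square test or mutual information; this just compares how often each data point shows up in each group and throws away the ones that don’t separate them:

```python
# Toy labeled data, entirely invented for illustration.
records = [
    ("christian", {"bottled_water", "celery", "attends_church", "religious_books"}),
    ("christian", {"bottled_water", "attends_church", "religious_books"}),
    ("sjw",       {"bottled_water", "celery", "protest_app"}),
    ("sjw",       {"bottled_water", "celery", "protest_app"}),
]

def rate(feature, label):
    """Fraction of a group's records containing the feature."""
    group = [feats for lab, feats in records if lab == label]
    return sum(feature in feats for feats in group) / len(group)

all_features = set().union(*(feats for _, feats in records))
# Keep only features whose rates differ sharply between the groups.
informative = {f for f in all_features
               if abs(rate(f, "christian") - rate(f, "sjw")) > 0.5}
print(sorted(informative))
# ['attends_church', 'protest_app', 'religious_books'] -- bottled water
# and celery don't separate the groups, so they get tossed out.
```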

Ultimately, the power of all this is to be able to do on an automated basis what would have been a ton of legwork for a secret or thought police. Even if, say, there is still a secret police or social media police force involved in making ultimate decisions about how to handle a particular report, machine learning can help sort out private grievances from real dissenters by looking at what distinguishes a legitimate report from a fake one. No longer does expertise have to be produced and retained via training and personal experience; it can now simply be saved to a database and pulled out, refined, and applied as needed. These routines can be packaged up, distributed, and run at will, or run by accessing cloud servers. Huge numbers of profiles can be run at any given time, too. While building the profiles is computationally expensive, running them is very quick.
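
That point about saving expertise to a database is easy to illustrate: once trained, a profile is just a handful of numbers that can be shipped anywhere and applied cheaply. A toy sketch, with invented weights:

```python
import json
import math

# Pretend these numbers came out of a training run like the one sketched
# earlier. The stored "expertise" is nothing but a small blob of data.
trained_profile = {
    "bias": -2.5,
    "weights": {"attends_church": 2.0, "religious_books": 1.5},
}
blob = json.dumps(trained_profile)  # ship via any database, queue, or API

def score(blob: str, person: dict) -> float:
    """Applying a stored profile is a few multiply-adds per person."""
    p = json.loads(blob)
    z = p["bias"] + sum(w for f, w in p["weights"].items() if person.get(f))
    return 1 / (1 + math.exp(-z))

# Scoring millions of people needs no retraining and parallelizes trivially.
people = [{"attends_church": True, "religious_books": True}, {}]
print([round(score(blob, p), 2) for p in people])  # [0.73, 0.08]
```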

The other thing that people in your comment section don’t grasp is that this is not a political issue in any sense of the word. Tech and the consolidation of power don’t really have a right or a left; this is about technocracy.

The reality of why this will be applied to people opposed to critical race theory is simply that opposing CRT means that you are a skeptic, that you still think 2 + 2 = 4, and oppose what the elite are teaching you. People who think this is overblown or won’t apply to them, whatever their politics, are naive. Anything which militates against accepting a common vision is the marker here, and it could be anything else down the road as trends change and what is acceptable shifts.

Driving a gasoline-powered car, having your recycling bin half full, or buying bottled water might all be things that impact your social credit score if it begins to be applied to environmental issues. Drink three beers instead of two at a restaurant? You’re going to be flagged as an alcoholic and watch your health care premiums shoot up, or perhaps lose coverage altogether if we have a single-payer system. The true evil in this is how it dehumanizes people, categorizes them, and removes their individuality by reducing them to a statistic. “I’m not a number, I’m a name” no longer applies. Mental illness is overrepresented in the tech world, and you run into all sorts of people who are narcissistic, sociopathic, and so on. How well and fast and elegantly something runs is the true yardstick, not whether it is ethical and moral.
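
If you want to see how trivially such flags roll up into a single number, here’s a toy scorer; every rule and weight is invented, not drawn from any real system:

```python
# Invented penalty weights, for illustration only.
PENALTIES = {
    "gasoline_car":         15,
    "recycling_under_half":  5,
    "bottled_water":         3,
    "beers_over_two":       10,
}

def social_credit(base: int, behaviors: set) -> int:
    """Subtract a penalty for every flagged behavior."""
    return base - sum(PENALTIES.get(b, 0) for b in behaviors)

print(social_credit(800, {"gasoline_car", "beers_over_two"}))  # 775
```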

Also, the notion that there will be laws against this sort of thing, that there will be a legal deus ex machina that will stop this soft totalitarianism, is just laughable. Things like GDPR [General Data Protection Regulation, an EU law — RD] are a step in the right direction, but data and web services are so interconnected today that trying to erase all your digital tracks is going to be very difficult. Besides, if you’ve been offline for several years, you’re trying to hide something, right? Tech used to be full of people with a very libertarian and free-thinking mindset. This was also when it was at its most innovative. These days, identity politics is pushing out libertarian politics, and the idea of curtailing speech and access for people who are “racist,” etc, is not just accepted but promoted. Even if law doesn’t come into it, technology has been biased against personal freedom and privacy for a long time.

If nothing else, underfitting a data curve — that is, applying a model too broadly — might result in people being unfairly barred from banking, jobs, public transit, and so on. Think of the poor guy in Fahrenheit 451 whom the mechanical hound killed in place of Montag after he escaped. You likely won’t be able to appeal, as this would reveal too much about the algorithms being used by the system. Maybe you’ll get scored differently again after a period of time, but there is no guarantee of that, either. The system will always err on the side of underfitting, too. In the new Sodom, there are not fifty righteous men. Everyone is guilty of something.
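
Here’s a toy sketch of why underfitting hurts bystanders: a rule too simple for the data sweeps in people who merely resemble the target. Names and features are invented:

```python
people = [
    {"name": "dissident",     "visits_forums": True,  "attends_church": True},
    {"name": "atheist_troll", "visits_forums": True,  "attends_church": False},
    {"name": "gift_shopper",  "visits_forums": False, "attends_church": False},
]

# Underfit rule: a single broad criterion that was never refined.
flagged_broad = [p["name"] for p in people if p["visits_forums"]]
print(flagged_broad)   # ['dissident', 'atheist_troll'] -- a false positive

# A better-fitted rule requires both signals, but the point above is that
# the system will prefer to err on the broad side.
flagged_narrow = [p["name"] for p in people
                  if p["visits_forums"] and p["attends_church"]]
print(flagged_narrow)  # ['dissident']
```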

Dealing with this is trickier than it might seem, but it can also be spoofed somewhat. As you point out, China’s system relies on cameras being present everywhere, and it also docks your score for associating with people who meet certain criteria. The first and most important thing to remember is that you are going to have to be cautious and go fully “underground.” Public pronouncements that run contrary to the acceptable narrative are going to be an automatic black mark on your score. Keep your mouth shut; don’t post memes on Facebook. Otherwise, you’re going to suddenly find that your bank account is locked for “supporting extremism,” and you’ll have a pink slip waiting for you at work the next day.

Now, getting down to practical matters: if you and someone with a low social credit score ride the same bus to work, that’s probably not an issue, but if you ride the same bus and then have lunch together, big red flag. Will sitting next to each other matter? Maybe, but you might also get on at the same stop, so this would be less of a red flag, particularly if you don’t talk to each other beforehand. Change where you sit on the bus, sit together some days and not others, and change your seating position relative to each other. At some point, it becomes increasingly difficult to develop rules from a pattern, and your behavior might be thrown out as an outlier. This is going to be harder to maintain if you need long-term interaction with someone, like at a prayer group, but it can still be used for some one-on-one time, important communication between groups, spreading information manually, etc.

Speaking of prayer groups, the obvious answer is to congregate in places where there are not likely to be any cameras. As cameras become smaller, less visible, hooked into networks, and better in low-light performance and resolution, it’s going to be increasingly difficult to know when you’re being watched and when you’re not. I’d expect parks to be monitored, and if not inside the park by drones or cameras, then at least the entrances and exits. Same group of ten or so people going to the same place every week for the same amount of time, especially if one or two are known to be problematic? Big red flag. On the other hand, people meeting in random locations, in varying sizes, at varying times might slide by the system and be lost in the “noise.”

Phones will be tracking people, of course, and phones being in the same location at the same time is a big red flag. If you leave your phone at home, hope that you don’t have an internet-of-things device connected, or a camera on your building, or you’ll be known as a person who leaves their phone behind. Big red flag. If you leave your phone in your dwelling but are seen going out to exercise without it, maybe less of a red flag. Just don’t be gone for three hours on a “quick run.”
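
The co-location analysis described here is genuinely simple to build. A toy sketch, with an invented ping format and an arbitrary threshold:

```python
from collections import Counter
from itertools import combinations

# Invented location pings: (phone_id, place, hour).
pings = [
    ("A", "bus_12", 8),  ("B", "bus_12", 8),  ("C", "bus_12", 8),
    ("A", "cafe_3", 12), ("B", "cafe_3", 12),
    ("A", "bus_12", 17), ("B", "bus_12", 17),
]

# Group phones seen at the same place and time, then count the pairings.
by_slot = {}
for phone, place, hour in pings:
    by_slot.setdefault((place, hour), []).append(phone)

together = Counter()
for phones in by_slot.values():
    for pair in combinations(sorted(phones), 2):
        together[pair] += 1

# One shared bus ride is noise; bus plus lunch plus the ride home is a flag.
print([pair for pair, n in together.items() if n >= 3])  # [('A', 'B')]
```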

There is also the idea of too little data being available, a “black hole” if you will. If you don’t carry a phone, don’t use social media, don’t visibly associate with anyone, and so on, you’re likely going to be flagged because you’re seen as trying to hide your life and activity. It’s worth noting that phones are basically ubiquitous in Chinese society, and people were trying to estimate the actual impact of Covid-19 in China based on how many fewer people were buying and using cell phones after December of last year. Why are phones ubiquitous? Because people need their positive behavior to be recorded.

Ultimately, the idea is to either stay within the bounds of normal behavior or engage in behavior that doesn’t meet an expected pattern and will likely be trimmed as an outlier (assuming outliers aren’t investigated as well). If you need to meet, do so in a way that would be socially acceptable and plausible, like an art class or a pickup sports game in the park. Taking a break on a random park bench with random people at random times might work as well. Use written communication or learn sign language to avoid bugs (conversations can be analyzed for keywords as well). The thing is, the more effective a social credit system becomes, the less human intervention and legwork there is likely to be. No one’s going to bother looking at what you wrote in a notebook, because it would take too much effort to track someone down and actually examine it. They care more about where you go, when you go there, who’s there, and so on. The faith in technology is such that there is a strong bias against manual intervention.
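
On the outlier-trimming point, the mechanism can be as crude as a standard-deviation cutoff. A toy sketch with invented numbers (say, minutes per park visit):

```python
import statistics

visits = [30, 32, 28, 31, 29, 33, 30, 180]  # one three-hour anomaly

mean = statistics.mean(visits)
stdev = statistics.stdev(visits)

# Keep only values within two standard deviations of the mean; the
# 180-minute visit is simply discarded as an outlier, never examined.
kept = [v for v in visits if abs(v - mean) <= 2 * stdev]
print(kept)  # [30, 32, 28, 31, 29, 33, 30]
```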

No, none of this is going to help us stop the soft totalitarianism, and I have been repeating over and over that orthodox Christians and other soon-to-be-unpersoned groups need to really start understanding and preparing for life as “untouchables.” If you post the wrong thing, say the wrong thing, or hang out with the wrong people, your card is going to be cancelled, no other bank will pick you up, you’re likely not going to be able to get a job due to a low score, and so on. You might not even be able to pick up menial work. Under-the-table work will be gone once everything needs a card for a transaction. All cards are issued by banks, and most of them are woke. Think there will be a “rogue” bank that will take you on? Good luck with that. If you think, okay, you can start your own small business growing and selling food, other goods, etc…you need to be able to buy and sell supplies. A cashless economy requires having a card and an account. You won’t be able to open an account due to “extremism.”

As you’ve been hammering home over and over again, now is the time to form communities. These can quite literally provide support and safe harbor for internal exiles, if they are careful. This isn’t just about maintaining the faith, but about maintaining those who will have nowhere else to go. Barter, self-sufficiency, low tech, these things are going to be massively important.

From Live Not By Lies:

Why should corporations and institutions not use the information they harvest to manufacture consent to some beliefs and ideologies and to manipulate the public into rejecting others?

In recent years, the most obvious interventions have come from social media companies deplatforming users for violating terms of service. Twitter and Facebook routinely boot users who violate their standards, such as by promoting violence, sharing pornography, and the like. YouTube, which has two billion active users, has demonetized users who made money from their channels but who crossed the line with content YouTube deemed offensive. To be fair to these platform managers, there really are vile people who want to use these networks to advocate for evil things.

But who decides what crosses the line? Facebook bans what it calls “expression that . . . has the potential to intimidate, exclude or silence others.” To call that a capacious definition is an understatement. Twitter boots users who “misgender” or “deadname” transgendered people. Calling Caitlyn Jenner “Bruce,” or using masculine pronouns when referring to the transgendered celebrity, is grounds for removal.

To be sure, being kicked off of social media isn’t like being sent to Siberia. But companies like PayPal have used the guidance of the far-left Southern Poverty Law Center to make it impossible for certain right-of-center individuals and organizations—including the mainstream religious-liberty law advocates Alliance Defending Freedom—to use its services. Though the bank issued a general denial when asked, JPMorgan Chase has been credibly accused of closing the accounts of an activist it associates with the alt-right. In 2018, Citigroup and Bank of America announced plans to stop doing some business with gun manufacturers.

It is not at all difficult to imagine that banks, retailers, and service providers that have access to the kind of consumer data extracted by surveillance capitalists would decide to punish individuals affiliated with political, religious, or cultural groups those firms deem to be antisocial. Silicon Valley is well known to be far to the left on social and cultural issues, a veritable mecca of the cult of social justice. Social justice warriors are known for the spiteful disdain they hold for classically liberal values like free speech, freedom of association, and religious liberty. These are the kinds of people who will be making decisions about access to digital life and to commerce. The rising generation of corporate leaders take pride in their progressive awareness and activism. Twenty-first century capitalism is not only all in for surveillance, it is also very woke.

Nor is it hard to foresee these powerful corporate interests using that data to manipulate individuals into thinking and acting in certain ways. Zuboff quotes an unnamed Silicon Valley bigwig saying, “Conditioning at scale is essential to the new science of massively engineered human behavior.” He believes that by close analysis of the behavior of app users, his company will eventually be able to “change how lots of people are making their day-to-day decisions.”

Maybe they will just try to steer users into buying certain products and not others. But what happens when the products are politicians or ideologies? And how will people know when they are being manipulated?

If a corporation with access to private data decides that progress requires suppressing dissenting opinions, it will be easy to identify the dissidents, even if they have said not one word publicly.

In fact, they may have their public voices muted. British writer Douglas Murray documented how Google quietly weights its search results to return more “diverse” findings. Though Google presents its search results as disinterested, Murray shows that “what is revealed is not a ‘fair’ view of things, but a view which severely skews history and presents it with a bias from the present.”

Result: for the search engine preferred by 90 percent of global internet users, “progress”—as defined by left-wing Westerners living in Silicon Valley—is presented as normative.

In another all-too-common example, the populist Vox party in Spain had its Twitter access temporarily suspended when, in January 2020, a politician in the Socialist Party accused the Vox party of “hate speech,” for opposing the Socialist-led government’s plan to force schoolchildren to study gender ideology, even if parents did not consent.

To be sure, Twitter, a San Francisco-based company with 330 million global users, especially among media and political elites, is not a publicly regulated utility; it is under no legal obligation to offer free speech to its users. But consider how it would affect everyday communications if social media and other online channels that most people have come to depend on—Twitter, Gmail, Facebook, and others—were to decide to cut off users whose religious or political views qualified them as bigots in the eyes of the digital commissars?

What is holding the government back from doing the same thing? It’s not from a lack of technological capacity. In 2013, Edward Snowden, the renegade National Security Agency analyst, revealed that the US federal government’s spying was vastly greater than previously known. In his 2019 memoir, Permanent Record, Snowden writes of learning that

the US government was developing the capacity of an eternal law-enforcement agency. At any time, the government could dig through the past communications of anyone it wanted to victimize in search of a crime (and everybody’s communications contain evidence of something). At any point, for all perpetuity, any new administration—any future rogue head of the NSA—could just show up to work and, as easily as flicking a switch, instantly track everybody with a phone or a computer, know who they were, where they were, what they were doing with whom, and what they had ever done in the past.

Snowden writes about a public speech that the Central Intelligence Agency’s chief technology officer, Gus Hunt, gave to a tech group in 2013 that caused barely a ripple. Only the Huffington Post covered it. In the speech, Hunt said, “It is really very nearly within our grasp to be able to compute on all human-generated information.” He added that after the CIA masters capturing that data, it intends to develop the capability of saving and analyzing it.

Understand what this means: your private digital life belongs to the State, and always will. For the time being, we have laws and practices that prevent the government from using that information against individuals, unless it suspects they are involved in terrorism, criminal activity, or espionage. But over and over dissidents told me that the law is not a reliable refuge: if the government is determined to take you out, it will manufacture a crime from the data it has captured, or otherwise deploy it to destroy your reputation.

I'm on my way to Canada now to give a couple of LNBL-themed speeches. I have more to talk about now. I do, every day.

Comments

Giuseppe Scalas
Those guys might want to move their business to other payment gateways. I recommend Switzerland or Israel. Or even the EU.
2 years ago
Zenos Alexandrovitch
If they are based in Texas, the new Fifth Circuit ruling reopens the ability of people to sue over corporations placing their nonexistent right of censorship over the enumerated free speech rights of users. You know, like older court precedent already decided but corporations want everyone to forget.
2 years ago