AI for President
Or: how I learned to stop worrying and love the robot overlords.

If you’ve spent any amount of time on X these last few months, you will have noticed users relying more and more on the social media platform’s artificial intelligence agent, Grok. The top reply to nearly every viral thread now contains users querying Grok for this or for that, in an effort to better understand, or cast doubt on, the content being shared. The agent has become so ubiquitous within the platform that you can expect to read its judgment on everything from politics to skincare.
When Grok debuted, conservatives worried the AI would be “woke.” But Grok is anything but “woke.” In fact, the more I see of Grok, the more I think it’s sharper and better read than 99 percent of the humans who circle the drainhole formerly known as Twitter. The best thing about Grok is that, although it has absorbed the inherent biases of the engineers who built it, its bias bends toward fact; the engine does a fantastic job of sifting through nonsense and providing well-sourced details on just about any topic you can think of. Want to know what led to the 2008 financial crisis? Grok can tell you all about it. Want to know which obscure musician performed at a random nightclub in Dublin, Ireland, on a May evening in 1990? Grok knows. And so on and so forth.
From politics and history to sports and fashion, Grok knows it all; best of all, it doesn’t spoonfeed clickbait articles to rabble-rousers for cheap traffic. That’s the work of the grifter class that is quickly losing ground in an environment where everything can be immediately fact-checked by a neural network grounded in truth instead of emotion.
On Monday, the writer Michael Tracey self-published an essay that prodded the journalistic output of Julie K. Brown, who Tracey claims is perhaps the person “most responsible for turbo-charging public interest in Epstein.” In the piece, Tracey argued that Brown’s work was problematic because she used quotes from Epstein victim Virginia Giuffre’s unpublished memoir “without noting that Giuffre’s own lawyers said the memoir was fictionalized.” In an email posted to Twitter, Tracey showed that he had reached out to Brown with his concerns and received a “No comment” response from her.
As is to be expected, Tracey was attacked by anonymous Twitter users who wondered whether he was exaggerating his claims and protecting the convicted sex criminal Jeffrey Epstein. Some suggested Tracey was an undercover CIA or Mossad agent. Tracey was hounded for hours by users who probably didn’t even read his article but were simply seeking a venue to vent their frustrations. Which sort of sums up the situation online these days: very angry people who rarely read anything and project their own personal vendettas onto the person or entity of their choice. On Monday, it happened to be Tracey, who has shown himself over the years to be a dedicated and honest journalist.
With accusations mounting, Tracey did what anyone who cares about the truth in 2025 does: He asked Grok. Each time, Grok said the same thing, arguing that everything Tracey published was well-sourced and that, rather than being a Mossad agent, Tracey was probably just a man who had spent countless hours researching his detailed essay. “Grok is starting to grow on me…” wrote Tracey. And he’s not alone.
Grok is starting to grow on me, too. I admit that I was once extremely fearful of the training that goes into creating these AI models. What if the people training Grok, or ChatGPT, or Google’s Gemini were psychotic liberals whose only desire was to rig the system? I wasn’t alone. Many who have long worried about centralized power, especially on the American right, voiced concerns about the AI models that are now being relied on by large swaths of the public.
They were right to worry. Such a powerful technology deserves to be critiqued. But it’s becoming clearer and clearer by the day that AI is the best of us. Programmed by us, it contains all our knowledge with none of the messy emotional drama that seeps into every political and cultural discourse in this country. In the ironic words of Ben Shapiro, whose entire enterprise was built on deep, searing emotional pleas, the AI really does prefer facts over feelings.
In a statement shared Tuesday evening, OpenAI co-founder Sam Altman, the man behind the AI agent ChatGPT, suggested that “someday soon something smarter than the smartest person you know will be running on a device in your pocket, helping you with whatever you want.” That “someday” is approaching rapidly, and when it gets here nothing will stop it from exerting its sheer force onto the landscape of humanity.
The concept of an AI presidency is not new. Speaking with Joe Rogan in 2024, the musician Reggie Watts said he believes AI will slowly work its way into our governments, initially managing low-level legislation before being trusted with more important decisions. AI chatbots are already threatening the livelihoods of attorneys and therapists. New medical professionals are being instructed to integrate AI into their workflows as evidence mounts that AI is already quite strong at identifying disease and recommending treatment. Why wouldn’t the government be the next logical step?
There is a new, viral derogatory term that describes AI systems and robotics: “clanker.” The word, which stems from its use in Star Wars: The Clone Wars to describe battle droids, is now used by modern Luddites who resent the ever-growing crush of new technologies in all aspects of their daily lives. Though I often find myself siding with the humans who dislike the overwhelming creep of AI, it’s worth considering the benefits and downsides of electing a computer program to run the government.
As outlined above, I believe an AI president would prioritize well-sourced information over rash emotion when making its decisions. Its ability to draw on crucial data sets when making policy could render lobbyists and the influence of big money on politics irrelevant, potentially leading to better outcomes for a larger share of the population. An AI does not need to sleep or eat. It does not prey on interns. It would run 24/7, an ever-available machine that could react in real time to situations at home and abroad. More than anything, though, an AI presidency at its best promises a rational actor designed by the very best of us to make the best decisions for this country and its people.
The downsides are predictable. What if the company or people who designed the AI tinkered with the code? Is there not the same threat of monied influence on an AI presidency that we see with our human selves? And what if the AI decides, against popular opinion, to instigate a war? Do you just turn it off? How would that work legally? Furthermore, what if there is something dark lurking under the engine’s hood that we cannot spot? What if it prioritizes itself and other machines over the humans that elevated it to power?
Speaking with New York Times columnist Ross Douthat in July, the PayPal co-founder Peter Thiel first hesitated when asked if he would “prefer the human race endure.” Thiel then went on to predict a coming age in which humans will merge with machines. “Transhumanism, the idea was this radical transformation where your human, natural body gets transformed into an immortal body. There’s a critique of the trans people in a sexual context… but we want more transformation than that. The critique is not that it’s weird and unnatural, it’s [that] it’s so pathetically little. We want more than crossdressing or changing your sex organs, we want you to be able to change your heart, and change your mind, and change your whole body.”
In Thiel’s answer was a haunting admission about where we are headed: a full-scale merger between biology and technology that will render the current human model outdated and useless. Consider: If a machine can enhance your IQ or help you run faster, why wouldn’t humans adapt and adopt the new technology? And it’s not just about improving your brain or your heart. The truth is that the second merging with tech becomes feasible for the masses, whether it means faster food delivery or easier access to password-protected devices, the transformation will occur naturally, and in record time.
Just look at the tech marvels of the 21st century and how quickly they were absorbed by an unquestioning public. Americans now average almost seven hours of screen time a day, and that number is only increasing. The moment everyday people can sync completely with the screen, they will. They already have, in many ways. But the cell phone is a cumbersome device that was not invented for 2025 or the future to come. It already feels like outdated tech that forces humans to crane their necks downward throughout the day. Two generations ago, we invented the television. Two generations from now, the machines may very well live within our anatomy.
This sort of wild speculation is not unfounded. As early as 2017, a Wisconsin company called Three Square Market offered to implant a tiny radio-frequency chip under the skin of its employees, effectively “turning bodies into bar codes.” An NPR article at the time said employees of the company were “lining up for the technology.” Which makes sense. A chip that grants access to door codes and payment options makes employees’ lives easier. They didn’t need to be sold on it; they lined up willingly.
I am against all of this. I prefer the mountains and deserts and oceans and all of God’s creatures. At times, I feel desperate to walk off into the forest alone. I despise the screen and the constant churn of social media. I despise the idea of an artificial intelligence lording over the human race.
I don’t want to merge with technology, but it’s coming. In some ways it’s already here. All my worries and concerns and anger will do nothing to alter the path. So, why slow the inevitable? The time, whether we like it or not, has come to press on. Let’s see what the clankers can do.