
Constitutional Intelligence

There’s nothing artificial about the need for checks and balances when it comes to A.I. regulation.


Madison’s Mechanism 

If Artificial Intelligence were an angel, there’d be no need for its governance. That’s a 21st-century updating of James Madison’s famous dictum from the 18th century: In Federalist 51 he wrote, “If men were angels, no government would be necessary.” Nothing man-made is angelic. A.I. is no exception.   


We don’t all need to be experts on A.I. to survive and flourish, but we need some experts, duly elected or otherwise rightly chosen, looking out for our interests. None of the experts will be angels, either, and so we’ll also have to keep an eye on them. It’s heartening to know that Americans were born with a great tool for guiding A.I.’s impact: The U.S. Constitution, which we celebrate on September 17, Constitution Day.  Over the last 234 years, the Constitution has proven itself equal to the challenge of managing non-angels, both human and man-made. 

It’s basic conservative wisdom that people, flawed as we are, need a framework to stay on the path of virtue and safety. Absent such a structure, even the good can break bad. Madison (or possibly Alexander Hamilton) knew this, and so he counseled in Federalist 55, “Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.” The point is that democracy runs amok if the system lacks republican structures: that is, separated powers, as called for in the Constitution, serving as checks and balances on each other.

Lacking such mediating mechanisms, democracy becomes ochlocracy—mob rule—and so people will be, in Madison’s vivid words, “destroying and devouring one another.” Ever the conservative realist, Madison added, “There is a degree of depravity in mankind which requires a certain degree of circumspection and distrust.” And yet, he continued, hope abides, as “there are other qualities in human nature which justify a certain portion of esteem and confidence.” That’s what the Constitution is about: Doing its best to eliminate the negative, accentuate the positive. 

In the summarizing words of Ashland University’s Christopher Burkett, writing for Constituting America:

The Founders’ study of history revealed that in some fundamental ways, human nature never changes. Human beings are capable of being reasonable and therefore self-governing, but one should not ignore the propensity of mankind to pursue and abuse power for self-interested purposes.  By framing a constitution upon a realistic understanding of unchanging human nature, they anticipated all sorts of new political developments: the forms of tyranny might change in the future, but the sources would not.


In their time, the Founders could see glimmers of new kinds of machine power; they anticipated that the forms of tyranny might change in the future. Mindful of confronting unknown future threats, Hamilton emphasized the new document’s adaptability: “Constitutions should consist only of general provisions: The reason is, that they must necessarily be permanent, and that they cannot calculate for the possible changes of things.”

Since then, things have, for sure, changed. In our time, the “computer-as-villain” is a staple of popular culture, from 2001 to Westworld to Terminator. Now, A.I. makes the risk all the more palpable: Elon Musk says, “A.I. is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production...it has the potential of civilization destruction.” Okay, so that’s the potential downside of A.I.; as Madison 2.0 might say, plenty of room there for depravity.  

Yet A.I. isn’t going anywhere: It’s too big to fail. Studies and projections about its impact are scattered all over the map, from 80 percent of jobs affected to important but over-hyped. Yet as a snapshot of possible future upsides, one industry projection holds that within six years, A.I. will add value to just two companies, Amazon and Walmart, to the tune of $580 billion. With such huge dollar figures in play, few companies, and few shareholders, are going to ignore A.I. Indeed, if we wish to look on the bright side, here’s the CEO of OpenAI, Sam Altman, cheerleading for his opus, ChatGPT: “What if we’re able to cure every disease? That would be a huge victory on its own. What if every person a hundred years from now is a hundred times richer?” Are people going to turn that down? Just say no? Some will, but most won’t, just as they haven’t turned down electricity, or television, or the smartphone.

The Globalists Are Coming

Still, there’s the question of how, exactly, to deal with this demiurge, this dynamo. To that end, OpenAI has offered $100,000 grants for the best ideas on A.I. governance. Microsoft has its own vision, relying, not surprisingly, on “public-private partnerships.” The software behemoth adds hopefully, “The key to success will be to develop concrete initiatives and bring governments, respected companies, and energetic NGOs together to advance them.” Speaking of NGOs, the anywhere-ish Center for the Advancement of Trustworthy A.I. commits itself to assuring “safe, trustworthy A.I. for all by advancing agile A.I. governance on a global scale.” Some “mA.I.vens” argue for a light touch, while others argue for socialization. And, of course, there’s the potential for a bipartisan framework. In the meantime, there are worthy conferences galore, and, for those looking to get read into the A.I. mandarinate, fellowships for A.I. policymaking.

Meanwhile, some private players have already stated their policy aims: SoftBank CEO Masayoshi Son says that with A.I., his firm will “rule the world.” (Insert ironic sci-fi plot twist here.)

Of course, governments have weighed in, too. In 2021, the European Union proposed legislation to make A.I. “safe, transparent, traceable, non-discriminatory,” and, of course, “environmentally friendly.” Yes, everything these days must be seen through a green prism. The current issue of Foreign Affairs makes that clear—even as it reminds us that we Americans might need a modern-day Paul Revere. Why? Because the globalists are coming. Co-authors Ian Bremmer and Mustafa Suleyman, foreseeing a planetary “technopolar” order, point to the Intergovernmental Panel on Climate Change (IPCC) as a model for managing the A.I. multipoles. Yes, that’s right: Let the A.I.-maven equivalents of Al Gore, John Kerry, Klaus Schwab, and Larry Fink run the show. So if you love the IPCC, you’ll love what we might call “IPCC A.I.” Warming up to their desired role model, Bremmer and Suleyman write, “To create a baseline of shared knowledge for climate negotiations, the United Nations established the Intergovernmental Panel on Climate Change and gave it a simple mandate: Provide policymakers with ‘regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.’” And so, the Foreign Affairs men conclude,

AI needs a similar body to regularly evaluate the state of AI, impartially assess its risks and potential impacts, forecast scenarios, and consider technical policy solutions to protect the global public interest. Like the IPCC, this body would have a global imprimatur and scientific (and geopolitical) independence. And its reports could inform multilateral and multistakeholder negotiations on AI, just as the IPCC’s reports inform UN climate negotiations. 

Is that enough Greco-Latin technocrat-ese for you? And is this what we want? An IPCC for A.I.? As for the original IPCC, it’s worth noting that 195 countries are members, including Cuba, Iran, North Korea, and Somalia. In other words, the IPCC is another U.N. General Assembly, which has never been a friend to the U.S. Of course, as with anything at the U.N. that actually makes anything happen, there’d be an executive committee of some kind. So, just as the IPCC is shaped by billionaires, NGOs, and deep states, so an IPCC A.I. would be shaped by...billionaires, NGOs, and deep states.

Yet as with any multilateral entity, one sees roadblocks at the top. For instance, there’s China. That country is undoubtedly the main rival to the U.S. on A.I. (as well as on everything else). So what would happen were the two nations to confront each other on a hypothetical “A.I. Security Council”? China’s record as an IPCC member does not offer hope; while Beijing talks a good game, its walk is much different. It is avidly building coal plants, and just in July, it used what The Financial Times headlined as “wrecking tactics” at climate talks. In other words, China is playing Gore, Kerry, Schwab, Fink, et al. for suckers. (That the Western climate-change elite seems to know this, and not mind, is a tale for another time.)  

Moreover, there’s the further riddle of the enigmatic nature of the communist regime. Taking note of the sudden disappearance of China’s foreign minister, airbrushed out like one of Stalin’s commissars, veteran Sinologist Michael Schuman writes, “If the world’s best China experts can’t figure out what happened to one of China’s most internationally recognizable officials, then imagine what else remains hidden behind the regime’s closed doors.” The lesson for the rest of us: If China can disappear its top diplomat, it can hide a rogue A.I. algorithm.  

Yet U.N. Secretary General António Guterres, sensing a legacy-making opportunity, is down with A.I. On July 18, he pointed with pride to the IPCC and declared of A.I.’s potential for turf-building, “I therefore welcome calls from some Member States for the creation of a new United Nations entity to support collective efforts to govern this extraordinary technology.”

So is this the A.I. prospect? If we don’t wish to leave the technology solely in the hands of its non-angelic bro creators and capitalistic would-be world-rulers, do we have to instead put it in the hands of neoliberal globalists? Who are also, of course, would-be world-rulers? Either way, all along the horseshoe, populists—from Bernie Sanders and Antifa to Donald Trump and MAGA—will be reaching for their revolvers.  

Happily, there’s a better way forward for America.

The Treaty Treatment 

Article II, Section 2, Clause 2 of the Constitution declares that the president “shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two thirds of the Senators present concur.” The Founders knew they couldn’t know where the nation would be headed in the future, but they were sure of two things: First, the executive branch was not to be trusted with exclusive foreign deal-making power; and second, any foreign deal must not change the basic law of the United States. That is, no leader could, in effect, amend the Constitution through the back door of a foreign treaty. As Hamilton averred to President George Washington in 1795, “A treaty cannot be made which alters the Constitution of the country or which infringes any express exceptions to the power of the Constitution.”  

Yet in recent decades, some American politicians have sought to undermine these bedrock principles of our national sovereignty. Oftentimes, the argument has been that technological innovation has rendered the Constitution obsolete. For instance, after World War Two, many among the technocratic, mostly liberal, elite wanted international control of the nuclear weapons that the U.S. had pioneered. Yes, many of our best and brightest thought that sharing everything with Stalin would make the world safer—or at least make the Soviet Union safer. Fortunately, realism prevailed; although, as we learned only too late, some among the elite were okay with sharing our nuclear knowhow with Stalin’s spies.

Despite these setbacks, our constitutional system survived. The first arms control deal of the Cold War, the Limited Nuclear Test Ban Treaty, was a formal treaty, just as the Constitution requires. In 1963, the Kennedy administration, having negotiated it, duly submitted it to the Senate, which then gave its consent to ratification.

Yet increasingly, presidents have resisted constitutional accountability. We might recall that the two biggest diplomatic deals of the Obama-Biden administration, the Iran nuclear agreement and the Paris Climate Accords, were both conspicuously not sent to the Senate as treaties. And now A.I. is getting the same non-treaty treatment. At a G-7 meeting in May, the Biden administration signed on to an A.I. agreement that Congress could read about in the newspaper. National Security Adviser Jake Sullivan posed the question: “How do we come together in an international format to effectively try to align approaches so that we’re dealing with this incredibly fast-moving technology with these incredibly far-reaching implications?”

Sullivan characterized the discussion as “a good start.” Good start?  Maybe. But maybe not. In any case, it’s not for the Bidenites alone to decide. The Constitution says the Senate gets the final say on treaties, and A.I. is important enough that any agreements concerning it—which could, for instance, emerge from a possible A.I. Summit later this year in the United Kingdom—should be classified as treaties, and treated as such. 

As an aside, it must be said that Congress is often happy not to be involved in dicey decisions. Why take a position, many members think to themselves, on some matter that will make somebody mad? It’s better, they figure, to have no fingerprints on anything problematic, leaving them free to position themselves, pro or con, as expedience might suggest. That’s a craven political stance, shirking constitutional duty, and yet the craven are often more mindful of their own ease and re-election. It’s hard to know how to overcome this cynical syndrome, other than the citizenry demanding better of the people they elect. And perhaps a critical mass of voters will be triggered, in a good way, by the thought of yet another IPCC-type operation making globaloney pronouncements and setting policy—with the U.S., of course, bearing the brunt and footing the bill.  

So it’s time for the American people to stand athwart history, yelling “Stop!”  Let’s hunker down on first principles, namely, the wisdom of the U.S. Constitution, and see that both presidents and lawmakers play by its rules. We can learn, always, from Madison and his allies, and yet we can also learn from interpreters since.  

For instance, there’s James Burnham, long a stalwart at National Review, and author, in 1959, of Congress and the American Tradition. Burnham held quite a number of intellectual positions in his long life—from youthful Trotskyist to managerial theorist to anticommunist—and yet in his later, more orthodox conservative phase, he championed Congressional power. He saw the first branch as the rightful bulwark against “Caesarism-tending political leadership” atop a “pervasive governmental bureaucracy...supported by a bureaucratized military and police apparatus.”

In response to a Caesar, Burnham believed, Americans needed a Cicero, and more than one, in the form of a legislature. “The people cannot be represented by or embodied in a single leader,” he wrote, “precisely because of the people’s diversity.” Yes, properly understood as a matter of conscience, as well as color, diversity is a friend to the free.  

Interestingly, Burnham devoted a whole chapter to Congress and treaties. “It seems clear,” he wrote, “that the Fathers expected the Senate to participate as a kind of ‘executive council’ in the negotiation of treaties and the active conduct of foreign relations.” Burnham was fully aware of arguments that the pace of technological change had outrun Congressional competence; if the Senate can’t keep up, progressives asserted, best to leave matters to the experts. (You know, like the experts who gave us Covid lockdowns.)  

Burnham rejected this expertise argument, and he further railed against the Senate’s tendency toward “obedient acquiescence.”  Indeed, Burnham insisted that the pace of change argued for more Congressional involvement, to keep tabs on the doings of the Executive Branch: “The speed-up and expansion of communications, transport, technology and so on have made almost everything—from atoms to air rules to radio frequencies—a matter of potential international concern, and thus a possible subject for treaty determination.”  Appealing to institutional prerogative, Burnham reminded senators, “The prime mark of autonomous power, of independence, is the ability to say No.”

It’s possible that an IPCC for A.I. is a good idea, or whatever else the Biden administration has cooking, A.I.-wise, with the G-7, or the U.N., or some other international forum. But if so, let it make its case in the correct, constitutional manner. Package it as a treaty, and let—and if need be, make—the Senate vote on it. And if the Senate doesn’t approve it, that’s okay, because our A.I. policy, like our A.I. capacity, is probably better off being driven from home, not depending on unverifiable promises from other countries. The tides of our A.I. should always be channeled through the canals of our Constitution. Such a process doesn’t guarantee a happy outcome, but at least it will support an American outcome—and that’s likely to be good enough.

The States Check-and-Balance the Federal Government

Of course, there are many other facets to the gem of our Constitution. One of these is federalism, or, more bluntly, states’ rights. Summed up in the Tenth Amendment, federalism gives the states the right, in most cases, to chart their own course. So if the states can be laboratories of democracy, they ought also to be laboratories of technology. And that might well mean an A.I. policy independent of whatever the feds come up with.

Yes, this is diversity in action, and no less than the Great Madison approved. In 1798, the principal author of the Constitution wrote of the states, “It is their duty to watch over and oppose every infraction of those principles which constitute the only basis of that Union.” He continued, “The states who are parties thereto, have the right, and are in duty bound, to interpose for arresting the progress of the evil, and for maintaining within their respective limits, the authorities, rights and liberties appertaining to them.” We can pause over that important word, “interpose.” As in, the states can interpose their sovereign powers between the federal government and their citizenry. That’s a concept that’s been in eclipse for a long time, although no good cause is ever truly lost (or, of course, ever truly won).

In any case, the states’ rights idea is making a comeback, as states, blue and red, carve out their own path on issues ranging from abortion to school choice to climate change. So why not A.I.? Should Texas, for example, think for a moment that the Biden administration—biased as it is toward Big Tech, California, and its own re-election—has the Lone Star State’s best interests at heart? Self-determination for a state could mean that it shuns A.I., although as argued here, that would be a mistake. Far better for each state, or at least each region, to think about developing its own version. Developing a Red A.I. would be a lot of work, but the 25 Red states possess an aggregate GDP of more than $10 trillion, so the money’s there if leaders wish to ensure their emancipation from Blue’s A.I.

Yes, lots of laboratories, of democracy and technology both.  But we should be optimistic that with our tried-and-true constitutional bulwark in place, we can subordinate A.I. to the useful and the good.