State of the Union

What the Election Means for the Republican Brand

Tuesday’s Republican tide wasn’t surprising, but there’s more to be said about it than just the obvious. The obvious is that this class of Senate seats was last up in 2008, a presidential year that was the high-water mark for Democratic turnout going back a generation. There weren’t going to be nearly as many Democrats heading to the polls this year, but what should have alarmed Democrats all the more is that 2008 rather than 2012 remains their high-water mark: Obama is the first president since World War II to be re-elected by a margin smaller than that of his original victory. That can hardly be interpreted as a vote of confidence in the Democratic brand, even if two years ago voters found the Republicans’ “dog food” even more distasteful.

Have the Republicans overcome their 2012 problem? They picked up Senate seats in red states (Arkansas, North Carolina) and historically red purple states (Colorado, Iowa). They held onto the governorships of the two most important large swing states—Florida and Ohio—but lost an incumbent governor in Pennsylvania, which the GOP has dreamed of retaking in presidential contests for more than a decade. Republican governor Scott Walker handily won his third election in Wisconsin.

These are impressive results that probably do not change the 2016 map. Obama, after all, won Florida, Ohio, and Wisconsin in 2012 with the same Republican governors in office, and in two years’ time voters in those states may be as tired of their governors as those nationwide are of the president today. Unfortunately for Democrats, voters are likely to be even more fatigued by their party’s presence in the White House after another two years of Obama, but in any case fatigue can work both ways.

The Republicans’ gains in purple America this year are what could be expected given the contrast of this electorate with the 2008 and 2012 presidential turnouts: these states are purple because they are battlegrounds, and if Democrats do not turn out as heavily in midterms as in a presidential year, they stand to lose. (They came close to losing Mark Warner’s Senate seat in Virginia, too, after sweeping the Old Dominion’s statewide elections last year: Virginia is a state on the tipping point, and while it seems to be tipping the Democrats’ way, even a small shudder from voters could tip it back for a time. In this light, Governor McAuliffe won’t necessarily be an asset for Hillary Clinton in 2016.)

Republicans won important victories in several deep blue states’ gubernatorial races: Illinois, Massachusetts, and the surprise of the night, Maryland. These states have all had a penchant for electing Republicans to statewide office while remaining firmly blue in presidential elections, however, and none of these wins heralds the return of moderate “Northeastern” Republicanism to the national stage. Nor, of course, does Scott Brown’s defeat in New Hampshire’s Senate contest.

So is all this just business as usual, an uptick for the opposition party in the dying days of a two-term presidency, with a reversion of many states to their historic—and sometimes quite idiosyncratic—patterns? If that’s the case, then Republicans did very well on Tuesday without changing in the slightest, and facing a less favorable electorate in the future, or with worse luck in selecting candidates, they will be right back to where they were in 2012: as the less popular of two troubled parties.

There’s a deep problem here. While movement conservatives have always chafed at the assumption that George W. Bush embodied their ideology, he most certainly did: as The Economist’s John Micklethwait and Adrian Wooldridge noted in The Right Nation, Bush was the first Republican president who had come of age with the conservative movement—Nixon, Reagan, and the elder Bush were products of an earlier environment. Conservatism was an open-ended question in their time, but for the second Bush it was one that had been answered all his life by self-identified conservative institutions: think tanks, magazines, books, and blocs of politicians. Whatever Bush’s personal and opportunistic deviations, his administration’s defining policies—tax cuts, wars, and expansion of executive power in the name of national security—hewed to the movement’s playbook. Movement conservatism’s organs of opinion and policy were happy with Bush overall and eager to silence his critics.

But with Bush’s downfall came a need to redefine the Republican Party’s ideology and brand. After the country as a whole repudiated Bush by turning to Democrats in 2006 and 2008, the GOP also repudiated him by turning in 2010 to the Tea Party and a new brand of liberty-minded Republicans exemplified by Sen. Rand Paul and Rep. Justin Amash. These “liberty movement” Republicans were few in number but represented a qualitative change in tone and policy emphasis for the GOP, particularly on national security and foreign policy. One could easily imagine Republicans of this sort as the wave of the future, if the GOP were to have any future at all: these were the kind of Republicans who might represent a viable conservatism in an increasingly diverse country where marijuana is legal and same-sex marriage commands majority support. Their anti-authoritarianism and commitment to cultural federalism suggested a way forward for the party. Win or lose in years to come, they were certainly not the same Bush brand that voters had rejected in 2006, 2008, and indeed 2010.

Yet now Bush is ancient history, and the un-Bush of 2008, Barack Obama, has begun to exhibit distinctly shrublike characteristics—as Bruce Bartlett has shown, Obama is something between a moderate Republican of the old Rockefeller variety and a direct continuation of George W. Bush. The powerful but ill-defined anti-Bush “brand” that shaped both parties between 2006 and 2012 has given way to a Democratic Party that now defends the Bush-like policies it once defined itself against and a Republican Party that in opposing Obama does so for reasons unrelated to his resemblance to his predecessor. Republicans today can once again employ their familiar decades-old ideological armament against a militarily inept, big-spending, socially liberal Democrat. These weapons have done the trick for decades—until the Bush disaster deprived them of their effectiveness—so who needs new ideas?

The party does have new faces. Joni Ernst is 44, Cory Gardner is 40, Tom Cotton is 37, and many of the GOP’s other new officeholders are also in their 30s and 40s. They are old enough to have been ideologically shaped by movement conservatism as it existed in the ’80s and ’90s—when neoconservatism and the religious right were ascendant—but not young enough to have had Bush’s debacles as a formative childhood experience. They are the Alex P. Keaton generation.

Can these fortyish idols of a party philosophically defined by Fox News—whose median viewer age is 68—win over millennial voters and the electorate of the future? They will if there’s no one organized enough to compete against them. The well-oiled machinery of movement conservatism remains fully in the hands of people who think the only trouble with George W. Bush was that he did not go far enough. Heritage and AEI have lately tried to present softer images on a number of domestic issues—prison reform, policies to help the working class—but they are as single-mindedly hawkish as ever when it comes to foreign policy and just as dedicated as the Bush administration to expanding executive power. Young Republicans like Tom Cotton represent the worst aspects of the movement’s ideology, and none of the new faces appears to represent the best.

On these great issues of war and peace, legislative government or executive prerogative, Republican realists and libertarians have a much weaker infrastructure to begin with, and for most libertarian institutions and their benefactors gutting regulation remains a higher priority than stopping any war. Democrats, meanwhile, are once more terrified of seeming too dovish, as Obama’s botched policies—interventionist but reluctantly so—teach his party anew that McGovernite and Carter-esque weakness is fatal. (This is true: peace in strength is what America’s voters want.) So it’s back to the Democrats’ answer to Bush: Clinton, and the female of the species may soon prove deadlier than the male.

Still, the public does have some say in all this, and it has shown no appetite for the decades-long wars that Tom Cotton’s Republican Party appears to portend. The market for realism and non-authoritarian politics remains. But can anyone organize the institutions and policy-making cadres to serve this demand? If not, there is little chance of a lone politician or small group of liberty-movement Republicans redirecting their party, much less their country, away from futile wars and executive consolidation: we will be back to the Bush and Clinton era, with Rand Paul as lonely a dissenter as ever his father was. At least, that is, until the Cottons and Clintons lose another, bigger war and plunge the country into something even worse than the Great Recession. Then we’ll get change without the hope.


American Machiavelli

illustration by Michael Hogue

America is badly governed. Congress has dismal approval ratings, sometimes as low as single digits. Presidential elections, settled by popular landslides in most postwar contests, now see margins of less than 5 percent separating winner from loser. Half or more of the country at any time disapproves of the president.

Politics is polarized. Yet activists left and right are frustrated that our politics also seems stuck in an unprincipled middle. Republicans and Democrats employ violent rhetoric against one another but are more similar than not in their behavior. Republican and Democratic presidents alike expand the welfare state; both parties endorse free trade; both are quick to use military force abroad. Even on divisive social issues, where popular passions are most irreconcilable, the conformity among the elite can be surprising. Only after Republicans like Ken Mehlman and Ted Olson had come out in support of same-sex marriage did the Clintons and Obama do so. Democrats are not necessarily as liberal, nor Republicans as conservative, as they seem.

Meanwhile, the troubles facing the country are grave. Wars, terrorism, and a sense of losing ground economically and strategically beset the national psyche. Politics seems inadequate to the crises.

These appear to be a variety of different and even paradoxical problems—how can our politics be both too extreme and too consensual? Yet one writer’s work pulls all this into focus. He was one of the key thinkers of the postwar conservative movement, though his thought is badly neglected on the right today. The man whose mind explains our politics today and suggests a diagnosis—if not a cure—for our condition is James Burnham. Once a Marxist, he became the American Machiavelli, master analyst of the oligarchic nature of power in his day and ours.

He was one of William F. Buckley Jr.’s first recruits for the masthead of National Review before the magazine’s launch in 1955. Burnham, born in 1905, had already had a distinguished career. He had worked with the CIA and its World War II-era precursor, the OSS. Before that, as a professor of philosophy at New York University, he had been a leading figure in the American Trotskyist movement, a co-founder of the socialist American Workers Party.

But he broke with Trotsky, and with socialism itself, in the 1940s, and he sought a new theory to explain what was happening in the world. In FDR’s era, as now, there was a paradox: America was a capitalist country, yet capitalism under the New Deal no longer resembled what it had been in the 19th century. And socialism in the Soviet Union looked nothing at all like the dictatorship of the proletariat: just “dictatorship” would be closer to the mark. (If not quite a bull’s-eye, in Burnham’s view.)

Real power in America did not rest with the great capitalists of old, just as real power in the USSR did not lie with the workers. Burnham analyzed this reality, as well as the fascist system of Nazi Germany, and devised a theory of what he called the “managerial revolution.” Economic control, and thus inevitably political control, in all these states lay in the hands of a new class of professional managers in business and government alike—engineers, technocrats, and planners rather than workers or owners.

The Managerial Revolution, the 1941 book in which Burnham laid out his theory, was a bestseller and critical success. It strongly influenced George Orwell, who adapted several of its ideas for his own even more famous work, 1984. Burnham described World War II as the first in a series of conflicts between managerial powers for control over three great industrial regions of the world—North America, Europe, and East Asia. The geographic scheme and condition of perpetual war are reflected in Orwell’s novel by the ceaseless struggles between Oceania (America with its Atlantic and Pacific outposts), Eurasia (Russian-dominated Europe), and Eastasia (the Orient). The Managerial Revolution itself appears in 1984 as Emmanuel Goldstein’s forbidden book The Theory and Practice of Oligarchical Collectivism.

Could freedom of any sort survive in the world of 1984 or the real world of the managerial revolution? Burnham provided an answer—one Orwell didn’t want to hear—in his next book, The Machiavellians: Defenders of Freedom. Liberty’s only chance under any economic or political system at all was to be found in a school of political realism beginning with the author of The Prince.

Machiavelli poses yet another paradox. The Florentine political theorist seems to recommend a ruthless and manipulative ethos to monarchs in The Prince—the book is a veritable handbook of tyranny. Yet his other great work, the Discourses on the First Decade of Titus Livy, is as deeply republican as The Prince appears to be despotic. Whose side was Machiavelli on?

Scholars still argue, but Burnham anticipated what is today a widely accepted view: Machiavelli was fundamentally a republican, a man of the people, yet one who took a clear-eyed, even scientific view of power. And by discussing the true, brutal nature of politics openly, Machiavelli provided any of his countrymen who cared to learn with a lesson in how freedom is won and lost. As Burnham writes:

If the political truths stated or approximated by Machiavelli were widely known by men, the success of tyranny and all the other bad forms of oppressive political rule would become much less likely. A deeper freedom would be possible in society than Machiavelli himself believed attainable. If men generally understood as much of the mechanism of rule and privilege as Machiavelli understood, they would no longer be deceived into accepting their rule and privilege, and they would know what steps to take to overcome them.

From his experience in government and reading of the classics Machiavelli distilled a number of lessons, which Burnham further refines. “Machiavelli insists,” he notes, that in a republic “no person and no magistrate may be permitted to be above the law; there must be legal means for any citizen to bring accusations against any other citizen or any official…” Freedom also requires a certain extent of territory, even if the means by which that territory is to be acquired are not as republican as one would wish: hence Machiavelli’s call for a prince to unify Italy. Machiavelli was a Florentine patriot, but he had seen his beloved city ruined by wars with other cities while mighty foreign kingdoms like France overawed them all. Cities like Florence and their citizens could be free only if Italy was.

Most importantly, within any polity “only out of the continuing clash of opposing groups can liberty flow,” writes Burnham:

the foundation of liberty is a balancing of forces, what Machiavelli calls a ‘mixed’ government. Since Machiavelli is neither a propagandist nor an apologist, since he is not the demagogue of any party or sect or group, he knows and says how hypocritical are the calls for a ‘unity’ that is a mask for the suppression of all opposition, how fatally lying or wrong are all beliefs that liberty is the peculiar attribute of any single individual or group—prince or democrat, nobles or people or ‘multitude.’

All well and good—but what has any of this to do with the perils of America in 1943, let alone those of seven decades later? The answer begins to emerge once later contributions to the Machiavellian tradition are taken into account. Burnham focuses on four late 19th- and early 20th-century thinkers: Italian social theorists Gaetano Mosca and Vilfredo Pareto; French syndicalist Georges Sorel; and German sociologist Robert Michels. Together, their work explains a great deal about 21st-century American oligarchy—and what can be done about it.

Mosca’s signal contribution was a categorical one: all societies are logically divided into two classes, rulers and ruled. This may seem like common sense, yet in fact Mosca’s taxonomy dispels two persistent myths, those of autocracy and democracy—of one-man rule and rule by everyone. For even the most absolute monarch depends on a class of advisers and magistrates to develop and enforce his policies, while in the most liberal modern democracy there is still a practical difference between the elected and appointed officials who make or execute laws and the ordinary citizen who does neither.

The rationale according to which a society justifies the division between rulers and ruled is what Mosca calls its “political formula.” In the U.S. today, representative democracy is the political formula. For early modern monarchies, it was a theory of divine right. Under communism, the political formula was the idea of the party as the vanguard of the proletariat—the class that according to Marx would inherit the earth. For the Nazis the formula was the identification of the party and its leader with the mystical essence of the Volk.

Just as Machiavelli does not entrust liberty to any one class—nobles, king, or people—Mosca does not believe freedom depends on any particular political formula. Such doctrines are myths, even if some historically correspond only to the most repressive regimes. The reality is that liberty comes from specific conditions, not abstract formulas—conditions that permit open competition among what Mosca calls “social forces.” Burnham explains: “By ‘social force’ Mosca means any human activity which has significant social and political influence,” including “war, religion, land, labor, money, education, technological skill,” all of which are represented by different factions and institutions in society.

The ruling class represents the strongest forces—but which ones are strongest changes over time. Practices that allow competition among social forces thus imply a ruling class of some permeability, as well as one tolerant of organized opposition and dissent.

A lesson here for America’s nation-building efforts in the Islamic world should be plain—democracy and a rule of law that exists only on paper count for nothing; actual social forces are everything. In Afghanistan, Iraq, Syria, and Libya we understand nothing about the forces of tribe, sect, and interest. As a result, trillions of dollars spent and thousands of American lives lost have not been nearly enough to create order, let alone freedom. We let our own political formula blind us to foreign realities.

Not all myths are politically debilitating, however. Burnham finds in the work of Georges Sorel—a revolutionary syndicalist who early in the 20th century became a fellow traveler of Charles Maurras’s Action Française and the nationalist right—a theory of myth as constitutive of political identity and a driver of political action. “A myth that serves to weld together a social group—nation, people, or class—must be capable of arousing their most profound sentiments,” says Burnham, “and must at the same time direct energy toward the solution of the real problems which the group faces in its actual environment.”

For Sorel, the archetypal myth of this sort was the anarcho-syndicalist idea of the general strike, in which all workers cease their labor and bring down society, resulting in spontaneous creation of a new and more just order. A Sorelian myth is not a utopian vision—the utopia is what comes after the mythical action—but it is also not a thing that occurs in time and space. It is an aspiration that in theory could be fulfilled but in practice never will be, yet in working toward this impossible goal much real progress—in terms of organization, reform, and empowering one’s group—is achieved.

Myths of this sort are plentiful in American politics. On the right, they include the idea of ending all abortion or returning to a pristine interpretation of the U.S. Constitution. On the left, they include the goals of eliminating all discrimination and bringing about universal human equality—as if more equality in some things might not lead to more inequality in others.

“A myth cannot be refuted,” however, “since it is, at bottom, identical with the convictions of a group, being the expression of these convictions in the language of movement; and it is, in consequence, unanalyzable into parts which could be placed on the plane of historical descriptions,” Sorel writes. Such myths “are not descriptions of things but expressions of determination to act.”

The key is the ability of myths to organize groups and mass movements. The effects of such mobilization, however, can be paradoxical. The election of a Tea Party senator like Ted Cruz, a Princeton and Harvard graduate whose wife is a Goldman Sachs executive, or a left-wing populist like Elizabeth Warren, a Harvard professor herself who is, if not a one-percenter, much closer to the top one percent than to the bottom 90, shows how the myths of the masses serve society’s winners.

Even organizations that come into being to rally the masses are themselves subject to the scientific laws of power, in particular the “iron law of oligarchy” described by Burnham’s third subject, the German-Italian sociologist Robert Michels. In works such as the book known in English as Political Parties, Michels shows that all organizations and movements have a leadership class whose interests and abilities are distinct from those of the membership. Democracy or equality—the idea that everyone participates on the same level—is antithetical to the very concept of organization, which necessarily involves different persons, different “organs,” serving different roles. And some roles are more powerful than others.

Not only do leaders corrupt organizations, Burnham observes, but the oligarchic nature of organization affects even the most selfless leader as well. “Individual saints, exempt in individual intention from the law of power, will nonetheless be always bound to it through the disciples, associates, and followers to whom they cannot, in organized social life, avoid being tied.” Many a grassroots true believer faults bad advisers for the mistakes of a Ronald Reagan or Ron Paul—but the problem is not bad advisers, it’s advisers, period. They are necessary, and they necessarily have their own motives and perspectives. Without them, however, there would be no organization. This is as true of grassroots groups, even purely volunteer ones, as of Beltway cliques.

Effective politics therefore means accepting the limits of human nature and organization and working within those limits, not expecting perfection. An organization as a whole must harmonize the interests of the leaders with those of the membership and direct them all toward political achievement.

The greatest of the modern Machiavellians considered by Burnham is the one he covers last: Vilfredo Pareto, whose accomplishments spanned the fields of economics and sociology. Pareto’s work on elitism sums up and extends the thinking of the others, though Mosca considered him a rival and copycat. Pareto examines not only social class but classes of social psychology: his magisterial Mind and Society reduces human motives to six fundamental classes of what Pareto terms “residues.” (They are residues in that they are what remains when everything less stable has been boiled away by analysis.)

Only the first two classes are important for Burnham’s investigation. Class I residues involve the “instinct for combinations”—manufacturing new ideas and tastes from the disassembling and reassembling of old ones; creating complex systems from simple materials; incorporating experiences of the world with ideas in novel ways. These are the instincts that drive the verbalist and theorist, the filmmaker, the philosopher, the magician. Class II residues, by contrast, involve “group persistences” and encourage the preservation of existing institutions and habits. These are the psychological forces of social inertia; they are also the forces of loyalty.

Burnham observes that Class I residues correspond to the character type Machiavelli describes as the fox, cunning and quick to use fraud to get his way. Class II residues correspond to Machiavelli’s lions, more comfortable with force than manipulation. A society needs both types. “If Class II residues prevail” in all strata of society, Burnham warns,

the nation develops no active culture, degenerates in a slough of brutality and stubborn prejudice, in the end is unable to overcome new forces in its environment, and meets disaster. Disaster, too, awaits the nation given over wholly to Class I residues, with no regard for the morrow, for discipline or tradition, with a blind confidence in clever tricks as the sufficient means for salvation.

After residues, the rock-bottom motives of men and women, come what Pareto calls “derivations.” These, writes Burnham, are “the verbal explanations, dogmas, doctrines, theories”—and ideologies—“with which man, with that passionate pretense of his that he is rational, clothes the non-logical bones of the residues.” Derivations may seem to be expressions of rational thinking, but they are not. “It is for this reason,” Burnham continues,

that the ‘logical’ refutation of theories used in politics never accomplishes anything so long as the residues remain intact. Scientists proved with the greatest ease that Nazi racial theories were altogether false, but that had no effect at all in getting Nazis to abandon those theories; and even if they had abandoned them, they would merely have substituted some new derivation to express the same residue.

Facts about voter fraud and the suppressive effects of voter ID requirements, for example, thus count for very little in today’s discussions of such laws—not because either side is consciously dishonest about its intentions but because such arguments are driven by emotional commitments that are not subject to proof or disproof. This is also why our cable news channels put little effort into persuading skeptics. The politics to which they cater is about group loyalty and its derivative mythologies. (To be sure, this costs Fox and MSNBC their credibility with people in whom Class I residues are stronger—not necessarily because such people are devoted to the truth but because they at least desire variety and complexity. Fox News is for lions, not foxes.)

No one is a slave of a single class of residues, however, and both within the individual mind and within society there are always competing currents. Elites in particular must cultivate a mixture of fox-like and lion-like qualities if they hope to retain power. An imbalance of these characteristics leads to social upheaval and what Pareto terms “the circulation of elites,” the fall of one ruling class and rise of another.

This happens especially when foxes, having outmaneuvered the lions in the struggle for power within society, are confronted by an external threat that cannot be overcome without violence. Foxes are inept in the use of force, apt to apply too much or too little, and always they prefer to secure their goals by deceit or diplomacy.

There is also internal danger from an imbalance of residues. Talented verbalists denied admittance to an elite whose ranks are closed will, instead of competing for power within the institutions of society, attempt to gain power by subverting those institutions—including through revolution, which they foment by sowing alienation and anger among the lions of the public.

Burnham feared that foxes were dangerously dominant in the America of his own time, which is why he followed The Machiavellians with a series of books arguing for a hard line in the Cold War: The Struggle for the World in 1947, The Coming Defeat of Communism in 1949, and Containment or Liberation? in 1953. His column in National Review, which he wrote from 1955 until ill health ended his career in 1978, was called first “The Third World War” and later, only a little mellowed, “The Protracted Conflict.”

He died in 1987, much honored by the conservative movement he had helped build. Yet he is poorly understood today, remembered only as a Cold Warrior rather than a brilliant social theorist of enduring urgency. Ironically, Burnham’s last original book, The Suicide of the West in 1964, may have contributed to misperceptions about his work. In it, Burnham describes liberalism as “the ideology of Western suicide”—meaning not that it was the cause of the West’s loss of ground and nerve but that it was a rationale expressing a more fundamental mood of surrender. Burnham was assumed to be saying that the managerial revolution, having put in place a new liberal ruling class since World War II, was leading America into weakness and withdrawal.

In fact, though Burnham hardly emphasized this for his free-market readership in National Review, liberalism was the ideology of Western capitalism’s suicide in the face of an assertively managerial Communist bloc. Burnham had, after all, argued in The Managerial Revolution that of the three great nations in the throes of the revolution—the U.S., Russia, and Nazi Germany—the U.S. was least far along the path and the most torn between its capitalist past and managerial future. Liberalism, even in its left-wing, statist iteration, is the characteristic ideology of capitalism, and it was the capitalist system as well as the West—Burnham identified both with the British Empire—that was committing suicide.

This might suggest Burnham’s social thought is even more antiquated than his Cold War strategizing. After all, the managerial Soviet Union is gone, and the capitalist U.S. has not only survived but thrived for decades in what is now a globalized free-market system. While the political theory of The Machiavellians doesn’t depend on The Managerial Revolution—it’s surprising, in fact, how little connected the two books are—his reputation must surely suffer for getting such a basic question wrong.

Only he didn’t get it wrong—for what is the political and economic system of China if not what Burnham described in The Managerial Revolution? Engineers, industrial planners, and managers have led China for decades, with unarguable results. Indeed, several East Asian economies, including those of American allies Taiwan, South Korea, and Japan, are managerialist. As Burnham predicted, these economies have been highly successful at controlling unemployment and raising standards of living.

The American ruling class, by contrast, has pursued a largely anti-managerial policy, ridding the country of much strategic manufacturing. Such industry—including shipbuilding and semiconductor fabrication—is now overwhelmingly based in Asia. The U.S. hybrid system, transitioning from capitalism to managerialism, outperformed the Soviet Union. Whether it can outperform the next wave of managerial revolution is very much uncertain.

For the Machiavellians, freedom is not a thing to be won by popular revolt against the ruling class—for any revolt can only replace one ruling class with another. Instead, freedom requires that factions among the elite—representatives of different social forces or rival elements of the same one—should openly compete for power and seek to draw into their ranks the most talented foxes and lions of the people, to gain advantages in skill and strength over their rivals. In such a system, the people still do not rule directly, but they can influence the outcome of elite contests at the margin. This leads evenly matched elites constantly to seek popular support by looking out for the welfare of the common citizen, for perfectly self-interested reasons.

What has happened in America since the end of the Cold War, however, is that competition for popular favor has been reduced to a propaganda exercise—employing myths, symbols, and other “derivations”—disconnected from policies of material interest to the ruling class. Thus monetary policy, foreign policy, and positions on trade and immigration vary little between Republican and Democratic presidents. This is a terrible situation—if you’re not part of the elite. If you are, all the gridlock and venom of our politics is simply irrelevant to the bottom line. For the non-elite, however, insecurity of all kinds continues to rise, as does a sense that the country is being sold out from under you.

America’s ruling class has bought itself time—for continuing capitalism in an age of worldwide managerial revolution—at the expense of America’s middle and working classes. Reform, alas, will not come from “throw the bums out” populism of either the Tea Party or Occupy Wall Street varieties. It can only come from two directions: the best of the people must grow conscious of how oligarchy operates and why populist leadership is a paradox, and new factions among the elite must be willing to open competition on more serious fronts—campaigning not only on myths and formulas but on the very substance of the managerial revolution.

Daniel McCarthy is editor of The American Conservative.


Was the American Revolution Secessionist?

Battle of Yorktown re-enactors. Visions of America / Shutterstock.com

A few people may be a little unclear about the argument of my last post on secession as a principle of liberty (or not, as I argue). Its inspiration was the fact that it seemed curious for Americans to long for Scottish secession when the Scots themselves had voted against it. Whatever was being expressed was not sympathy for the Scottish people, so what was it? The answer was a general case for secession as an inherently good thing, in radical libertarian theory, because it leads to smaller states, and maybe no states. I pointed out various problems with this notion, which seems to have greater emotional force than reasoning behind it.

Obviously there are cases in which secession or political breakup—we’ll get to the difference in a minute—can be good things. In the case of Czechoslovakia, division came peacefully and fulfilled a democratic-nationalistic wish on the part of the constituent peoples to govern their own affairs separately. The relationship between such democratic-nationalistic motives and the individual liberty that is dear to U.S. libertarians is complex, but it looks as if there are cases where the former doesn’t harm the latter and may even advance it. That does not mean the two things are always compatible, however, and nationalism—including of the Scottish variety—very often involves policies abhorrent to the advocates of free markets.

Ironically or not, criticizing a general enthusiasm for secessionism elicits a certain amount of patriotic ire from some individualists who play the American Revolution as their trump card. Is the argument that secessionism is generally good because the American Revolution was good, or is it that the American Revolution was good because secessionism generally is? There’s a difference here between constitutionalist libertarians, who take the former position, and radical libertarians, who take the latter. But in any case, both are wrong: questions of political union or breakup depend upon the particulars. The American Revolution didn’t derive its legitimacy from radical libertarian arguments about anti-statist secession, and the revolution doesn’t tell us anything about the merits of other breakaway movements.

The American Revolution itself is a very complicated thing and doesn’t in all senses belong in the category of secession. The Declaration of Independence, for example, describes the break from Britain in terms of a revolution in which an already separate people deposes its monarch and severs its ties with his government in another country. This is framed rather as if the relationship between the American colonies and Britain were akin to the relationship between Scotland and England before the Act of Union. Before that, Scotland and England had a single monarch but separate parliaments and governments—they were, constitutionally speaking, separate countries with a joint head of state. The American colonies had sound grounds for considering themselves a parallel example: they too had their own legislatures, even though they shared a king with the United Kingdom.

Note that had Scottish secession succeeded, it would only have undone the Act of Union while retaining the Queen as head of state—in other words, it would have put Scotland after secession in much the same constitutional position as the American colonies were in before the War of Independence, which is another reason to think “secession” is not the right word for what the colonists were doing. They were already outside of the Union of Great Britain and Ireland and by their own lights were simply affiliated with it through a shared king. This, by the way, is why the Declaration of Independence is directed against the Crown rather than Parliament, even though by 1776 the latter made most policy decisions.

What the colonists had been demanding before they turned revolutionary was also something rather like what the Scottish got out of the Act of Union: representation in the Parliament of the United Kingdom. The colonists were aggrieved because the king was allowing Parliament, in which Americans were not represented, to set policy for the colonies, over the objections of the colonists’ own legislatures. Self-government was very much the crux of the Americans’ concerns, and it applied not only to legislatures and governors (who were often royal appointments) but also to church governance: even American Anglicans were quite Protestant in character, and they feared that the precedent of the king allowing a Catholic bishop certain authority in Quebec—which Britain had taken in the French and Indian War—would ultimately translate into the king sending Church of England bishops to America to take control of American church governance.

But there was a foreign-policy angle to the revolution as well, one that should disquiet patriotic libertarians who think of America as a freedom-loving republic fighting to separate itself from an evil empire. For the American colonists, Britain was not imperial enough, at least where the Indians were concerned. Britain had provided for the colonists’ security against the French and Indians, and the taxes Britain wanted to impose to help pay for that war were one of the things to which the Americans objected. But it wasn’t just the money: Americans were being taxed for an ongoing foreign policy that failed to do what many colonists dearly wanted to do—namely, seize more western land.

This is why the Declaration’s litany of grievances against the king ends with the claim that he “has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages, whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.” The British did not want Americans taking it upon themselves to settle the west, and they were not much inclined to extend military protection to settlers whose disregard for the frontier provoked Indian attacks. The Declaration is naturally couched in defensive terms, but what many colonists wanted wasn’t defense but a foreign policy that would enable private land-grabs and colonization efforts. It’s no accident that the one really big piece of legislation the U.S. managed to pass under the Articles of Confederation was the Northwest Ordinance. Colonizing the west was an imperative of building the American nation.

Mercenary and strategic motives are hard to separate: the “vacuum” of Indian territory would have been filled by an organized state or states sooner or later—if not by the U.S. then by a European colonial power or by Americans like Aaron Burr (or Sam Houston) carving out republics of their own, or by the Indians themselves forming a lasting, state-like confederation. Any of these alternatives would have had security implications for the Americans whether they were independent or part of the British Empire. Needless to say, the Americans had more freedom to set their own policies for conquest and defense alike if they were out from under the thumb of the king and his Parliament. The independent U.S. itself tried to regulate expansion into Indian territory—one of many respects in which the newly established federal government picked up where the British had left off—and tensions between a restraining central government and eager-to-colonize states and individuals continued. But ultimately an American central government was going to be more sympathetic to expansion than London was.

The security logic of the American Revolution is hard to argue with, and Americans certainly considered what they were doing to be extending freedom—their own, at least. But a libertarian today who wants to take a universal view of things has to see all this as something murkier than a victory for self-governing good against imperial evil. Had America remained British, slavery might well have been abolished as soon and as peacefully as it was in the rest of the British Empire—the very fact that colonial slave-holders did not have formal representation in Parliament (the thing that Americans were clamoring for) was what allowed abolition to take place. Three wars might have been averted: the War of Independence; the unsuccessful U.S. war of conquest that followed in 1812; and the Civil War that arose from a Constitution that divided sovereignty and left the question of slavery open. On the other hand, America would have had some involvement in the Napoleonic Wars—assuming there had been a French Revolution and a Napoleon in the first place without the American Revolution.

Such counterfactuals are troubling, and they can hardly be waved away with talk about American freedoms, such as free speech, that are actually British in origin, if more strongly established here. What would have been lost is not individual liberty but a model of republicanism that is so tightly entwined with the American psyche that in a real sense we would not exist as a people without it. But for that reason—the fact that this deeply, originally Protestant commitment to self-government has defined who we are since long before the revolution—independence was perhaps inevitable and would always have set the world on an unpredictable course.


Secession Is Not a Principle of Liberty

Ron Paul has stirred a media buzz by praising Scotland’s secession effort—an effort the Scots themselves rejected. Dr. Paul’s views are shared by many libertarians and conservatives, as well as a few folks on the left. Americans tend to think of secession only in the context of our own Civil War, but most acts of breaking away from a larger political unit have nothing to do with chattel slavery. Unfortunately, they don’t necessarily have anything to do with individual liberty either.

“The growth of support for secession should cheer all supporters of freedom,” the former Texas congressman writes, “as devolving power to smaller units of government is one of the best ways to guarantee peace, property, liberty—and even cheap whiskey!” Alas, there’s reason to think otherwise, and not just because Diageo is a London-based multinational.

The specifically libertarian case for secessionism is manifold: in fact, it’s several cases for different things that may not add up to a coherent whole. First, there is the radical theory that secessionism in principle leads to free-market anarchism—that is, secessionist reduction of states to ever smaller units ends with reduction of the state to the individual. Second, there is the historical claim that smaller states tend to be freer and more prosperous. Third is the matter of self-determination, which is actually a democratic or nationalistic idea rather than a classically liberal one but historically has been admixed with liberalisms of various kinds. What it means is that “a people” has “a right” to exit a state along with its territory and create a new state.

A fourth consideration is that suppressing secession may require coercion. And finally there is the pragmatic idea that secession is the best way to dismantle the U.S. federal government, the summum malum for some libertarians. (As an addendum, one can mention the claim that the U.S. Constitution in particular tacitly approves secessionism, but that’s a separate argument from cheering for secession more generally.)

It should be obvious that the first and third claims negate one another, and in practice the third overrules the first: real-world secession never leads to individualist anarchism but only to the creation of two or more states where formerly there was one. The abstract claim that every minority within the newly formed states should then be allowed to secede doesn’t translate into anyone’s policy: instead, formerly united states that are now distinct security competitors tend to consider the residual minorities who belong to the other bloc to be internal security threats. These populations left behind by secessionism may or may not be disloyal, but they are readily used as pretexts for aggressive state actions: either for the stronger state to dismember or intimidate the weaker one in the name of protecting minorities or for either state to persecute minorities and build an internal security apparatus to suppress the (possibly imaginary) enemy within. Needless to say, none of this is particularly good for liberty.

The coercion point doesn’t stand without support from nationalistic or democratic claims. After all, “coercion” is a function of legitimacy—no libertarian thinks that using force on one’s own property against trespassers constitutes coercion. Yet radical individualists have no adequate theory of national self-determination. What gives the people in seceding territory X the right to shoot at people from integrated territory X+Y? “Coercion” is a question-begging argument: it says, on some unstated non-individualistic principle, that the South has the right to shoot at the Union but not vice versa.

Only the second argument for secession is not easily dismissed. It can be divided into two kinds of assertion: 1.) an abstract claim that smaller states are always better (freer, more prosperous, etc.) than larger states, and 2.) concrete historical claims that many in fact have been better.

I’ll offer a few summary remarks on this point. First, smaller states can indeed be freer and more prosperous, although there’s a hitch: circumstances in which this is true tend to be those in which small states are free riders on international security provided by large states. Hong Kong, Singapore, Monaco, San Marino, Belgium, and Switzerland are all cases in point. None of these micro-states are capable of defending themselves against large aggressors. Their security depends on great powers keeping the peace on a continental or oceanic scale. Hong Kong had first British, now Chinese protection–hardly an unmixed blessing, to be sure. Singapore had first British, now U.S. protection. Small states such as Monaco, San Marino, Belgium, and Switzerland have derived their security from a balance of power in Europe underwritten by Britain or the United States. None of them alone, or even in concert with one another, could prevail against a Revolutionary France or Nazi Germany (or indeed non-Nazi Germany).

A few radical libertarians seem to think that foreign conquest shouldn’t matter because it just trades one master, one state, for another. But of course, if that’s true, there’s no argument for secession since in practice it too merely trades one state for another. The question that has to be asked is a prudential one: is a particular state more or less free than the alternatives? There’s no abstract, dogmatic answer where secession is concerned.

To note that secession is not a “good idea” in principle is not to say there aren’t good examples of secessions in practice. Czechoslovakia peacefully separated. The decomposition of the Soviet Union was a positive development of world-historic proportions—though surprisingly large numbers of former Soviet citizens themselves disagree: Gallup found last December that “residents in seven out of 11 countries that were part of the union are more likely to believe its collapse harmed their countries than benefited them.”

The leaden lining to the silver cloud of Soviet secession comes in part from the security competition it has entailed: Russia and neighboring Soviet successor states have had difficult dealings with one another—including wars—and ethnic tensions are sometimes grave. Despite all that, the world is well rid of the Soviet Union. But even this sterling example of secession is not without its tarnish.

Closer to home, the case that most Americans think of when they hear the word “secession” runs entirely in the other direction from Soviet disintegration. Successful Southern secession would have entailed results even more illiberal than the outbreak of the Civil War, which is saying a lot. The Southern Confederacy would have maintained slavery and looked to extend it to new territory, including perhaps Western territory claimed by the United States. Instead of fighting a war with the powerful North, however, the Confederacy might have sought expansion to the south through Cuba and Latin America, as indeed some Confederates dreamed of doing. In the North, meanwhile, you would have had industrial “Hamiltonian” policies and a domestic political climate over time much closer to a European level of statism than has ever been possible with the South as part of the Union. A social-democratic North and a slave South, each ready for war with the other, and at least one looking to expand. What’s libertarian about this?

The case can be made that the threat of secession at least imposes a check on central government expansion—a Washington with the secessionist sword of Damocles hanging over its head would have to respect states’ rights. But this neglects the reasons why the Union was created in the first place: notably, in the competitive world of empires and nation-states, bigger is more secure—not always, but often enough. Keeping the British Empire at bay—fortified as it was in Canada and for many years on the Mississippi and even in U.S. territory—was best achieved with a federation more tightly knit than that provided for by the Articles of Confederation.

But hadn’t America beaten the British once before under the Articles? Yes—with the help of another predatory superpower, France. A country that has the choice of providing its own security or living at the pleasure of others tends to go for growth, unless, like Japan and Germany in the last century, it gets beaten down. And to say that a territory is too large for self-government begs an important question—how can there be self-government at all if a state is not large enough to be secure?

American independence from Great Britain was in the first place driven by concerns for civil and ecclesiastical self-government: the colonists gambled their security and won. The continental United States proved to be a defensible territory without need of larger imperial union with Britain or permanent alliance with France. America’s neighbors north and south were weak and underpopulated. (Mexico’s population boomed only in the 20th century.) Further wars with the British Empire after 1815 were precluded by a balance of power: the growing military, demographic, and economic power differential between Canada and the booming U.S. meant that Canada, far from being a British imperial threat to the U.S., became a hostage held by the U.S. to insist upon British good behavior. The British could not defend Canada; America did not want war with the Royal Navy. That was the balance. A weaker or fragmented U.S. would have been in less of a position to keep it.

Elsewhere in the world, union and secession are questions of ethnonationalism, but for the English-speaking peoples security has always been the crux. It accounts in large part for why Scotland and England united to begin with. Scotland for most of its history was too weak and poor to resist English power. An independent Scotland was a Scotland subject to English predation. But England’s own security was jeopardized by its weak neighbor, which was at times a near-failed state—quite capable of launching raids across the border, if not much more—and at others, even when it posed no direct military nuisance, was a strategic threat as a potential base for French influence.

Great Britain as an island is readily defensible. So the English solved a security problem, and the Scottish conceded the unchangeable reality that their neighbor to the south was more powerful and prosperous. Scotland could start wars with England; it could not win them. Better for both, then, to have no more. Prosperity may be unequally divided, but Scots would be no worse off as a minority within a union dominated by England than they were as a weaker people outside its borders and legal order. British security was perfected by the Act of Union, and it led to a century of British global preeminence—quite an attainment for a country much smaller than France or Spain.

Today, of course, Britain’s security is guaranteed not by British arms, which proved inadequate by themselves in two World Wars, but by an American alliance. Scotland could indeed act like Switzerland or San Marino in this secure, American-backed European order. But such an “independent” Scotland today would only be choosing a hoped-for union with Europe over union with the rest of the UK. Would America’s enthusiasts for Scottish secession want their own state to secede from the U.S. merely to join Mexico or Canada? Size still matters, and Scotland depends for its security and prosperity on someone else—the U.S. and UK or the U.S. and Europe—in any scenario.

Self-government is only possible within the context of security, and individual liberty arises from the rule of law that self-government makes possible. None of these things is synonymous with the others, but all are intimately related. Union and secession have to be considered, even at the theoretical level, in this setting.

The world is relatively peaceful today not because peace among states is natural but because the power differential between the top and almost everyone else is so great as to dissuade competition. Indeed, the world order is so top-heavy that the U.S. can engage in wars of choice, which have proved disastrous for almost everyone. A world consisting of more states more evenly matched, however, would almost certainly not be more peaceful. Libertarians and antistatist conservatives, of all people, should appreciate that all states are aggressive and seek to expand, if they can—the more of them, the more they fight, until big ones crush the smaller.

For America, as historically for Britain, secession and union are questions of security and power, which undergird prosperity, self-government, and individual freedom. For much of the rest of the world, poisoned by ethnic and sectarian hatreds, secession means nationalism and civil strife. In both cases, breaking up existing states to create new ones is a revolutionary and dangerous act, one more apt to imperil liberty than advance it.

Ron Paul and others who make the case for secession do us all a service, however: these are serious matters that deserve to be taken seriously, not taken for granted. Secession is a thing to be discussed—and finally, as in Scotland, rejected.

Does Liberalism Mean Empire?

Map of the World Showing the Extent of the British Empire in 1886. Walter Crane - http://maps.bpl.org/id/M8682/. Licensed under Public domain via Wikimedia Commons.

Two libertarians familiar to TAC readers—Robert Murphy and Sheldon Richman—have lately offered critiques of my “Why Liberalism Means Empire” essay. Libertarians consider themselves liberals, or at least heirs to classical liberalism, and have been among the most outspoken opponents of “empire” in contemporary American politics. So on the face of it, they have good reason to object to the connection I draw between their ideology and the global power they abjure. The trouble is, the objections don’t stand up, and liberalism remains deeply implicated in the security conditions of empire.

Murphy attacks my argument at its strongest point, the case of World War II. “Look at what the Soviets did to Eastern Europe after the Americans provided them with all sorts of aid and attacked Germany from the west,” he writes. “If we dislike that outcome—and McCarthy and I do both dislike it—then to have achieved a more balanced outcome, surely the US should not have jumped in on the side that ended up winning.”

Soviet domination of half of Europe after the war is certainly an undesirable outcome. Can we imagine a worse one—or three? Easily.

Without the U.S., the outcomes available to Europe as a whole in World War II, West as well as East, were a.) Nazi control, b.) Soviet control, or c.) divided Nazi-Soviet control. Would any liberal prefer one of these outcomes to what occurred with U.S. intervention, namely d.) divided U.S./Western-Soviet control?

U.S. intervention certainly did strengthen the USSR. But strength in international affairs is a relative thing. A weaker USSR still strong enough to prevail against Nazi Germany without American help would have been more than strong enough to subdue war-torn Western as well as Eastern Europe. Stalin had fifth columns at the ready among resistance forces in France and elsewhere. As it happened, the USSR was in no position to claim France, Italy, West Germany, and Greece after World War II because the U.S. and the allies America rallied stood in the way. Without that check, little would have prevented Stalin from doing to Western Europe what he did to the East.

Alternatively, had a weaker USSR fallen to Hitler, the Nazis would have had an even easier time consolidating control of the Continent. What force could resist them? Franco and Salazar were not about to do so, whatever their differences with Hitler. Finally, had a USSR not supported by the United States fought the Germans to a standstill, the result might have been the worst of all worlds, with all of Europe unfree and the internal resistance to each totalitarian bloc drawn toward the ideology of the other. A Cold War between the USSR and Nazi Germany would by any measure have been worse than the one we had between the USSR and the United States.

What of the hope that a draw might have brought anti-totalitarian revolution to Nazi Germany and the USSR? Murphy, Richman, and I all agree that war is bad for liberty and liberalism—but for that very reason, wars fought between totalitarian powers are unlikely to have a liberalizing effect on them. Quite the contrary: wars and crises tend to create conditions that allow totalitarianism to arise in the first place, and war is one environment in which the totalitarian ethos seems to thrive. The idea that the USSR and Nazi Germany, fighting one another, would each have collapsed, giving way to some tolerably liberal government, could only be believed by someone whose ideology insists that he believe it. It’s Lysenkoism applied to history.

Richman has better arguments, but they tend to be tangential to my points. He writes:

McCarthy has a rather liberal notion of liberalism—so liberal that it includes the illiberal corporate state, or what Albert Jay Nock called the “merchant-state,” that is, a powerful political-legal regime aimed first and foremost at fostering an economic system on behalf of masters, to use Adam Smith’s term. (The libertarian Thomas Hodgskin, not Marx, was the first to disparage “capitalists” for their use of the state to gain exploitative privileges.)

What Richman calls a loose, “rather liberal notion of liberalism” is intended to be a broadly accurate description of real-world, actually existing liberalism. A few years ago I commissioned Richman to write an essay on “free-market anti-capitalism,” a term that might sound like a contradiction to people who equate capitalism with free markets. What we have here is a parallel case: just as capitalism in practice is often antithetical to market freedom as understood in theory, so liberalism in practice—a state system involving capitalism, free trade, representative government, legal individualism, religious liberty, etc.—often falls short of the tenets of liberal theory. The problem is, practice comes first.

My essay claims that the security provided by the British Empire and later U.S. hegemony—or American Empire, if we want to be indelicate—has promoted liberal practice, and liberal practice, messy and imperfect though it might be, has promoted liberal theory. The claim here is not deterministic at the individual level: it’s plainly not the case that no one can come up with liberal ideas in an illiberal environment. Rather, a liberal environment is more conducive than an illiberal one to the extension and refinement of liberal thought among a populace.

This is why the largest concentration of classical liberals in 19th-century politics and the greatest volume of classical-liberal literature were to be found in Britain, and it’s why libertarianism today finds the most followers and is most strongly institutionalized—in think tanks, magazines, and a nascent political movement—in the United States. Liberalism is a luxury security affords, and hegemons have the security in the greatest abundance.

Security by itself is not enough, of course: a state that enjoyed tremendous international security, as Japan did for centuries, might or might not spontaneously develop broadly liberal ideas. Given the presence of liberal seeds, however, security seems to encourage their growth—this was true even in the Soviet-dominated Eastern Bloc during the Cold War and in the USSR itself.

The extended Soviet Empire was distinctly illiberal in ideology but enjoyed supreme security: there was never much prospect that NATO would simply invade Eastern Europe. (Just as NATO deterred the Soviets themselves from doing any invading of the West.) What liberal ideas survived Soviet repression or otherwise made their way through black-market channels into Soviet-controlled domains often met with a welcoming audience, and over decades, under conditions of peace, those liberal ideas grew stronger while the totalitarian ideology of the USSR grew weaker, including in Russia itself. Ironically, the Soviet Union’s greatest success—its conquest of Eastern Europe and guarantee of Russia’s security—contributed to its undoing. It created conditions in which liberalism could grow.

(And note, alas, what is happening in Eastern Europe and on Russia’s periphery now that security competition has returned: nationalism and even fascism are gaining ground.)

Anyone as an individual may be able to hold almost any idea at any time. But of the many ideas to which our minds give rise, only a few find substantial political expression. “McCarthy is wrong in thinking that power, that is, force, rules the world,” Richman writes. “There is something stronger: ideas.” In fact, only some ideas rule the world—the ones that succeed in acquiring power.

But power is a protean thing; it doesn’t just mean state power but any kind of hold over human beings. This highlights one of the paradoxes of liberalism: the ideology gains power, in the form of popular appeal, at the expense of the states that make it possible. This is a good thing to the extent that liberal attitudes check abusive government. It’s a bad thing to the extent that liberal attitudes deprive states and populations alike of the wherewithal to combat external threats when they do arise. Pacifism, as a cousin or acute manifestation of liberalism, is a case in point. It’s one of the ideological luxuries made possible by security, but were it adopted generally, there would soon be no security left, and pacifism would remain a choice for no one but martyrs.

What’s more, radical liberals may call for complete nonintervention, but most self-identified liberals, including a contingent of libertarians, favor humanitarian warfare and aggressive efforts to “liberalize” countries that are insufficiently liberal and democratic. This is another irony of liberalism: it was fostered by non-ideological empires—Britain obtained hers in a fit of absence of mind; America acquired hers with tremendous reluctance and a troubled conscience. But once non-ideological empire has promoted the growth of liberal ideology, that ideology takes on a more radical, demanding character: a liberal minority adopts the anarcho-pacifist position, calling for dismantling the empire today, while a larger number of liberals call for using the empire to promote liberal ideological ends. Reining in empire thus requires reining in the demands of liberalism—realism as an antidote to ideology.

Murphy and Richman both point to the ways in which war and empire have made the United States less liberal in practice. War’s illiberal effects are indeed a major part of my argument: war is the opposite of security, and conditions of war—i.e., the absence of security—are dreadful for liberty. The question is what minimizes conditions of war and maximizes conditions of security.

That’s not a question that can be answered in the abstract; it’s one that must be answered in the context of particular times. In the case of 19th-century Europe, a balance of power safeguarded by the British Empire as an “offshore balancer” seems to have done the trick. In the case of 20th-century Europe, a 45-year balance between the United States and a contained USSR kept the peace from the fall of Nazi Germany until the collapse of the Soviet Union. One thing I hope my essay will do is prompt libertarians to think more seriously about historical security conditions and what viable “libertarian” options there may have been in the foreign-policy crises of the past. If there were no viable libertarian options, that’s a problem for libertarianism.

It’s a practical problem being confronted by Rand Paul right now. What liberal or libertarian thinkers can he draw upon for practical foreign-policy advice? There are a few, but most radical libertarians are simply not interested in real-world foreign-policy choices. And once libertarians do engage with reality, they start to seem a lot less libertarian.

Richman compares the hazards of foreign policy to those of domestic economic planning. In the case of the economy, the libertarian alternative is the free market; no planning. In the case of foreign policy, is the libertarian alternative also no policy? How can a state in a world of states—all of which, as libertarians know, have a coercive character—have no foreign policy? It’s true that the less power foreign-policy planners have the less trouble they can get up to. This is something on which libertarians and realists who favor restraint can agree. But realists recognize that this tendency for too much power to lead to abuse must be weighed against the dangers of other states’ power. Libertarians seem to see no danger in that direction at all.

Any people that has ever been invaded might find that perverse—indeed, my libertarian friends are often confounded by how their fellow libertarians in Poland or Ukraine can be so hawkish. But the U.S. is in an exceptionally strong geostrategic position. Invasion is highly impractical, if not impossible. My essay, however, notes that world conditions can have a dangerous influence on the U.S. even without foreign boots on our soil. On the one hand, foreign ideologies exert a certain attraction for Americans; on the other, Americans have historically been rather paranoid about foreign ideological influence. Threats both real and imagined attend insecurity, and both kinds lead to illiberal policies.

Luckily, there are at present only a handful of geostrategic positions around the planet that offer secure bases for power projection and ideological dominance. North America is one of them. The second is the European continent. And the third is East Asia, which of the three is by far the least island-like and defensible.

Preventing a hostile power from dominating Europe and keeping a balance in East Asia is “empire” enough. Beyond that, prosperity and industrial strength, along with our nuclear arsenal, are the keys to our security. This is a historically realistic vision, one that solves the great problems of the past—what to do about Nazi Germany or the USSR—and the otherwise insoluble problems of the present, such as what to do about the Middle East: namely, minimize our exposure to crises that we cannot fix and that do not affect the top-tier distribution of power. Today what is most ethical and what is politically and strategically realistic coincide reasonably well: we should not seek to enlarge our commitments; we should preserve our naval power; we should use diplomacy and economics to advance our interests and contain disruptive powers.

This is not a strategy of hard-heartedness toward the oppressed peoples of the world. A secure and prosperous U.S. is in a position to be an ideological counterweight to any illiberal state or insurgency, and it can act when necessary only because it does not act when not necessary. Morale is as limited as men, money, and materiel, and wasting any of these—on a strategic level, we wasted them all in Iraq, as the present crisis demonstrates—is bad for our prosperity, our security, and everyone else’s as well.

Realism and restraint are the watchwords. If libertarians have a stronger strategic argument, I’m eager to hear it.

Help The American Conservative Win the Peace

Cover of TAC’s inaugural October 2002 issue

You don’t win a war unless you win the peace. This ought to be clear enough—the situation in Iraq illustrates it perfectly. The U.S. won the war, both in deposing Saddam Hussein and in withdrawing troops from a country nominally at peace in 2011. But that peace was lost from the start: the U.S. had not created institutions that could keep order and prevent the next war from happening.

It’s an old story, and it applies at home as well as abroad. Last year the antiwar movement thought it had won its own metaphorical war—by preventing Obama from launching a real one against Assad’s Syria—yet the subsequent peace was lost. No institution had arisen that could prevent another round of mass-media sensationalism from taking America to war again. No noninterventionist think tank or peace organization commanded the attention or imagination of policymakers, whose minds remained filled only with possibilities provided by interventionists. Neoconservatives and humanitarian interventionists are still almost the only players at the table in Washington, despite their decades of failure.

If peace is to be a reality rather than a dream, its institutions must be built—not in the Middle East but here at home, above all in the precincts where foreign policy is actually made. Policymakers must have more on their menu than the dishes prepared for them by Bill Kristol and Samantha Power.

The American Conservative, in a small but steady way, has been trying to win the peace since 2002, when we were founded to oppose the impending war against Iraq. A decade ago the situation seemed hopeless, even if the fight was as noble as ever. In the 2004 presidential contest, Democrats rebuffed even the mildly antiwar Howard Dean in favor of a hawkish John Kerry, who went down to defeat in November at the hands of the president who had launched the Iraq War.

Movement conservatives closed ranks to silence critics even before the war began. On its eve, National Review published an attack on the most outspoken figures of the antiwar right. And while its author, David Frum, claimed “Unpatriotic Conservatives” was not aimed against all antiwar conservatives, the only one he explicitly exempted from his charges, Heather Mac Donald, kept her dissent almost entirely to herself. As Alexandra Wolfe noted in the New York Observer on March 10, 2003:

Heather Mac Donald, a fellow at the right-wing think tank, the Manhattan Institute, is that somewhat rare breed, an anti-war Republican. Among the “pro-war fanatics” she dines with regularly, she said, “you’re confident in your opinion, but why bother when it’s a futile gesture anyway?” So she mostly keeps quiet.

“I have a friend who works at The Wall Street Journal on the editorial side,” said Ms. Mac Donald, “and he’s anti-war and he won’t even mention it, because there the unanimity is so strong.”

Such unanimity prevailed across the conservative movement, and there was a price to pay for breaching it. Donald Devine elicited outrage when he merely failed to stand and applaud for Bush at an American Conservative Union banquet. For actually criticizing the Republican president and his policies, foreign-policy scholar John Hulsman was fired from the Heritage Foundation. And economist Bruce Bartlett happened to lose his job at the National Center for Policy Analysis just as his book Impostor: How George W. Bush Bankrupted America and Betrayed the Reagan Legacy was about to be published. No wonder the administration’s right-leaning critics kept their mouths shut.

Before 2006, The American Conservative stood alone in D.C. as a conservative institution that refused to surrender its principles to the Bush administration and its wars. Traditional conservatives who had minds of their own that they refused to yield up to any party or president came close to being deprived of a presence in the nation’s capital and conversation. But we wouldn’t go away, no matter the financial hardships we faced.

And then a remarkable thing happened: the public lost its patience. It threw the president’s party out of the House Speaker’s chair and the Senate Majority Leader’s office. Conservatives long disgusted not only by Bush’s policies but by the right-wing omerta that protected him began to speak out loudly and forcefully, even if it cost them their standing in the movement.

Ron Paul soon rallied a new antiwar base on the right. His son was elected to the U.S. Senate in 2010. And today in the House of Representatives there is a liberty caucus with realist and noninterventionist leanings—Justin Amash, Thomas Massie, Mark Sanford, the stalwart Walter Jones. They reinforce Jimmy Duncan of Tennessee, the last of the Republicans who voted against the Iraq War over a decade ago.

What once looked like token resistance from a small institution like TAC helped keep alive a debate and a point of view that otherwise lacked expression. And once the political climate changed, that point of view became a force that could rapidly grow and gain further institutional footholds.

Consider the careers of some of the young people drawn to work at The American Conservative who have gone on to bring something of its sensibility to other outlets. Michael Brendan Dougherty is now at The Week. Jordan Bloom and James Antle are at the Daily Caller and Daily Caller News Foundation. An ocean away, former TAC literary editor Freddy Gray is managing editor of The Spectator in London. Former interns John Glaser and Matthew Feeney are at Cato. Lewis McCrary, now a Robert Novak Journalism Fellow of the Fund for American Studies, was before that managing editor of The National Interest.

The American Conservative has been happy to be an institutional force multiplier—a farm team for a new generation of conservative talent, and during the dark days of the last war fever both a shelter and a megaphone for traditional conservatives unwelcome in the pages of movement magazines and websites. What’s more, The American Conservative will be here even when a future Republican president demands “unanimity” for his wars and other follies. Keeping TAC alive and growing is vital institution-building work, and not just for TAC itself.

To build peace, you have to build institutions. Those institutions will have to be complex, durable, far-sighted, flexible—not ideological, brittle, and simplistic. They will have to confront the realities on the ground, hard political, economic, and strategic realities.

Based on the evidence of more than a decade, this is beyond the abilities of the United States in the Islamic world. So we might try building institutions of peace and civil balance here at home. Institutions that favor restraint. Institutions that encourage conservatism in its most basic sense, a defense of what we have and are in danger of losing: self-government, a strong middle class, national security not existentially jeopardized by even the bloodiest terrorists, and liberties hard won over many generations.

The American Conservative is intended as just such an institution—a small but indispensable one. The scope of debate and dialogue in our pages, online and in print, is part of our mission. This is not a conversation for traditionalist conservatives alone or for libertarians alone or even for the right alone. Our core sensibility is a capacious one, grounded in realism and a Burkean constitutional temperament.

The side of peace can’t repeat the errors of the zealots of war: a with-us-or-against-us mentality, oversimplified analyses, and an ideology to be forced upon the world without regard to the realities of human life and security.

Whatever you donate to The American Conservative helps not only TAC itself but a discourse that was almost silenced ten years ago. No single institution can win the peace alone, but The American Conservative has since 2002 been a foundation stone upon which much more can be built. Help us continue and grow—and win.

Who Supports Reality-Based Conservatism?

You do—you’re supporting it just by reading The American Conservative. Support it some more by making a donation, so a journal you enjoy lives and thrives.

Traditional conservatives have an obvious interest in seeing their ideas presented in the most articulate, passionate, and realistic fashion possible. Bold men and women of the left, meanwhile, enjoy a thoughtful challenge from the right that keeps them on their toes. And independent minds who identify with neither side appreciate principled pluralism in our national discourse, a pluralism made possible by reasonable voices that don’t scream from a box or reduce difficult questions to partisan cliches.

This is why you read The American Conservative. It sharpens, it clarifies, it informs and provokes—as a good magazine should, online or in print. (And in our case, both.) TAC heartens and encourages and invigorates, sometimes by ticking you off. That’s also what a good magazine does.

We are reader-supported, and most of our readers don’t have deep pockets. But there are a lot of you, and you’re a committed and generous group. If you can give, I think you will.

The military-industrial complex has big money—you can see it splashed on subway posters in the D.C. metro system and throughout the pages of all sorts of public-policy magazines. Corporate America has big money—and a devotion to free enterprise that extends only as far as crony capitalism’s bottom line. We’re opposed to war and Pentagon pork; we’re also opposed to an economy that’s increasingly hostile to the middle class, and indeed almost all Americans. So the big advertising bucks will not come our way any time soon.

We don’t need them; we have readers. Their support—your support—keeps the lights on and keeps The American Conservative’s mindshare growing.

You aren’t alone in this struggle: small foundations, especially, are there to help, groups that support such unfashionable but deeply human and conservative causes as beautiful architecture and livable cities. TAC is very much about small voices and small institutions fighting back against the monotonous reverberations of the big media and partisan outlets of big business.

You know why you read The American Conservative. Help us continue bringing it to you—and to wider audiences—by contributing to our fall fundraising campaign. I thank you for it.

How Obama Learned to Love the Bomb

Barack Obama has adopted Bill Clinton’s policy toward Iraq: bomb it until it gets better. Clinton—and before him, George H.W. Bush—bombed Saddam Hussein’s Iraq to safeguard the Kurdish north, degrade Saddam’s military capabilities, and perhaps weaken his regime to the point of collapse. Twenty years later, Obama is bombing the Islamic State in Iraq and al-Sham to “degrade and ultimately destroy” ISIS, while protecting the Kurdish north and what remains of the Iraqi state until recently ruled by Nouri al-Maliki.

We are well into the third decade of U.S. military operations against Iraq—dating back to 1991—but a free, stable, non-sectarian state has yet to emerge. Maybe a few more bombing sorties will do the trick.

George W. Bush got one thing right: he recognized that what his father and Bill Clinton had been doing in Iraq wasn’t working. Rather than continue indefinitely with airstrikes and sanctions that would never tame or remove Saddam, Bush II simply invaded the country and set up a new government. In the abstract, that was a solution: the problem was the regime, so change it. But change it into what?

Iraq is a patchwork of tribes and blocs of Sunni and Shi’ite Muslims. A dictator like Saddam could keep order, but since the end of the Cold War America has found dictators distasteful, so Iraq would have to be democratic—which means, no matter how intricate the electoral system might be, one faction would dominate the others. So Iraq plunged into years of sectarian violence, and when it was over, the Shi’ite Maliki was in charge. Sunnis were never entirely happy about this, and they only became less so over time. Once ISIS surged across the border from neighboring Syria—experiencing its own (almost) post-dictatorial disintegration—many Sunnis welcomed them, and the bloodshed resumed.

Obama doesn’t want to answer the question that Bush I and Bill Clinton also avoided: namely, what kind of government could Iraq possibly have after Saddam that would satisfy the United States? Bush II had an answer, and it proved to be the wrong one. Obama knows it’s a trick question. There is no realistic outcome in Iraq that will not involve violence and repression. Either the country must have another dictator (unacceptable), or it must be dominated by one sect or the other (in which case it risks becoming the Islamic State or another Iran—also unacceptable), or else we have to pretend that it’s about to turn into an Arab Switzerland (entirely acceptable, and also impossible).

ISIS is an exceptionally violent revolutionary group, but it’s only a symptom of the more fundamental disease: the lack of a government strong enough to keep order but not so sectarian or tyrannical in temper as to persecute anyone. If Obama is successful against ISIS—as George W. Bush was successful against Saddam—how long before a new evil congeals? ISIS itself is a successor to another terrorist group, al-Qaeda in Mesopotamia, that was beaten once before in Iraq. The rise of yet another terrorist or tyrannical force is a virtual given, a fact established by more than one cycle of history: as surely as al-Qaeda in Mesopotamia followed Saddam and ISIS followed it, something else waits to follow ISIS.

Unable to break this cycle, Obama resorts to bombing because our pundits demand that he “do something.” Leaving Iraq to its own devices, to suffer, burn, and ultimately rebuild, is too cruel, and ISIS with its spectacular propaganda videos makes a great cable news bite and social-media campaign. It’s evil, it’s scary, it’s on YouTube, so what are we going to do about it? Obama would be weak and callous if he did nothing. That he can’t actually do much that matters in the long run is unimportant—our humanitarian urges and Islamophobic fears will be satisfied as long as we get some kind of action right now. So we bomb.

There’s no political risk in bombing, as there is in putting “boots on the ground.” There won’t be too many body bags shipped home to Dover AFB to trouble voters. What’s more, bombing can be of whatever intensity political conditions demand: if John McCain is howling louder than usual on “Meet the Press,” just drop a few more bombs. That shows you’re a real leader.

This may sound grotesque—not the reality of what Obama is doing and the politics that lead him to do it, but my saying it out loud, when there are real human beings in Syria and Iraq for whom none of this is abstract. ISIS is a deadlier threat to their lives than American bombing is, and real men and women can make choices about violence and politics that need not fulfill anyone’s grim projections. There may be no ideal “moderate resistance” to Assad or to ISIS itself, but there are many degrees of better and worse, and they are matters of life and death to the people of the region.

All of which is true, and opponents of our 23-year policy toward Iraq, such as myself, should not be complacent about far-away people’s lives. If this is something that war critics must keep in mind, however, supporters must be equally serious about political realities—not immutable realities, but probabilities so strong as to require that our hopes and ambitions take account of them. Peace and tolerance depend on order, and under these circumstances order depends on a strong state. It would be foolish for Obama or anyone else to name in advance what kind of state, under whose control, will emerge victorious, but whenever the Iraqis and Syrians themselves give rise to a leader or faction capable of maintaining order, America must be prepared to accept the result and demand only the most basic concessions to our own values and security.

During the Cold War, it was often enough that a state or faction be anti-Communist for it to receive American approbation. Dictators and sectarians as well as democrats passed the test, sometimes to our regret. After 1989, with our own security unassailable, we raised our expectations of others: we could afford to moralize and cajole. This proved to be disastrous in many cases, as botched attempts at democratization and economic liberalization urged on by the U.S. led to unstable regimes and countervailing extremism. If the U.S. no longer wishes to apply as crude a test for regime acceptability as it did during the Cold War, it must nonetheless devise criteria more realistic than those that prevailed over the past 20 years.

Obama’s bombs and other measures may or may not lead to regime change in the territory now controlled by ISIS. He can’t control that, and he cannot even do much, given the way our media and politics work. But what he can do is begin the long process of clarifying America’s understanding of how much like or unlike us we really expect other regimes to be. If the U.S. can arrive at a non-utopian answer to that question, we can perhaps again have a strategy that matches means to ends—rather than one that falls back on air power as the ever-present means to impossible ends.

Why Liberalism Means Empire

Illustration by Michael Hogue

History ended on October 14, 1806. That was the day of the Battle of Jena, the turning point, as far as philosopher G.W.F. Hegel was concerned, in humanity’s struggle for freedom. Once Napoleon triumphed over the reactionary forces of Prussia, the ideals that post-revolutionary France represented—not just liberté, égalité, and fraternité, but the modern state and its legal order—would serve as the model for Europe and the world.

When Francis Fukuyama revisited this idea in “The End of History?”—with a question mark—in the pages of The National Interest a quarter century ago, he had to remind readers what Hegel had meant. Events would still happen, including big events like wars. What had ended was a sequence of political and cultural forms whose internal contradictions each gave rise to the next step in freedom’s development: from the ancient world to medieval Christendom to, finally, what one 20th-century interpreter of Hegel called “the universal homogeneous state.” Or as Fukuyama called it, “liberal democracy.”

By 1989 it was obvious that Hegel had been right: the long series of rear-guard actions attempted by Europe’s reactionary powers came to an end after World War I. Fascism and Soviet Communism thereafter proposed themselves as alternative endings to history—competing modernities—but neither could prevail against liberal democracy, whether on the battlefield or in the marketplace.

This was welcome news to America’s foreign-policy elite. Fukuyama had not set out to justify a “unipolar moment” or America’s world role as the “indispensable nation”—indeed, he thought boredom lay ahead for those unlucky enough to live beyond history’s end—yet his essay could not help but add to the triumphalism of the time. The Cold War was over; henceforth, the American way of life would be everyone’s way of life: inevitably, forever, from Moscow to Beijing to Baghdad.

“The victory of liberalism has occurred primarily in the realm of ideas or consciousness,” Fukuyama wrote, “and is as yet incomplete in the material world.” America’s mission would hence be to complete it, through international trade agreements, promotion of human rights, and of course war.

But what if Fukuyama was wrong and liberal democracy is not the end of history after all? What if, on the contrary, the American way of life is an accident of history—one made possible only by a special kind of global security environment?

What in fact has triumphed over the last 250 years—not since the Battle of Jena in 1806 but since the end of the Seven Years’ War in 1763—is not an idea but an institution: empire. Successive British and American empires created and upheld the world order in which liberalism could flourish. Fukuyama’s “liberal democracy” turns out to be a synonym for “the attitudes and institutions of a world in which Anglo-American power is dominant.”

Britain’s Liberal Empire

Victory against France in the Seven Years’ War confirmed not only British naval superiority—and thus the ability to project power more widely than any other nation in the late 18th or 19th centuries—but also the superior resilience of British financial and political institutions. Britain paid a steep price for the conflict, with the loss of 13 North American colonies that rebelled against the taxes king and parliament levied to pay for what colonists called the “French and Indian War.” But while King George III lost America, the king of France lost his head. To get his country’s finances in order after the Seven Years’ War and the American Revolution, Louis XVI fatefully summoned the Estates-General in 1789, and thus was the French Revolution begun.

Seventeen years later, Hegel was not wrong to see in Napoleon’s armies an unwitting force of progress. In 1806, the possibility was wide open for the 19th century to be the French century—or the German century, after defeat at Jena spurred Prussia to modernize and ultimately become the nucleus of a unified Germany. But France and Germany had the misfortune to be neighbors, and the reciprocal invasions they launched and suffered gave rise not only to political modernization, but also to nationalism and state repression.

By contrast, Britain’s territory remained inviolate—a necessary if not sufficient precondition for the flourishing of liberalism. The Nazi-era German political theorist Carl Schmitt observed the irony that this country which had given the world Thomas Hobbes’s Leviathan should itself have avoided the need for such a consolidated state:

The English Isle and its world-conquering seafaring needed no absolute monarchy, no standing land army, no state bureaucracy, no legal system … such as became characteristic of continental states. Drawing on the political instinct of a sea and commercial power, a power that possessed a strong fleet that it used to acquire a world empire, the English people withdrew from this kind of closed state and remained ‘open.’

Land empires of the sort that Napoleon tried to build proved uncongenial to liberalism—they elicited nationalistic reactions and political centralization. But a naval empire was a different matter, not only sparing the homeland the ravages of foreign reprisal and border clashes but also providing a ready framework for capitalism, the great engine of liberalization and democracy. Free trade, for example, a cornerstone of liberal economics, historically developed out of trade within the British Empire.

“England’s imperialism,” noted the Austrian economist Ludwig von Mises in his 1929 work Liberalism, “was primarily directed not so much toward the incorporation of new territories as toward the creation of an area of uniform commercial policy out of the various possessions subject to the King of England.”

Mises was an ardent classical liberal and no advocate of anyone’s subjugation. Yet faced with a choice between imperialism and liberalism together or national self-determination that might jeopardize international commerce, even Mises sided with empire. “The economy of Europe today is based to a great extent on the inclusion of Africa and large parts of Asia in the world economy as supplies of raw materials of all kinds,” he wrote:

Any stoppage in these trade relations would involve serious economic losses for Europe as well as for the colonies and would sharply depress the standard of living of great masses of people. … Ought the well-being of Europe and, at the same time, that of the colonies as well to be allowed to decline further in order to give the natives a chance to determine their own destinies, when this would lead, in any event, not to their freedom, but merely to a change of masters?

This is the consideration that must be decisive in judging questions of colonial policy. European officials, troops, and police must remain in these areas, as far as their presence is necessary in order to maintain the legal and political conditions required to insure the participation of the colonial territories in international trade.

As shocking as these words might seem coming from one of the free market’s greatest champions, the conditional quality of Mises’s prescription ought to be noted: if trade is possible without colonialism, then national self-determination can be permitted. Liberal imperialism is not directed toward gratuitous conquest but toward maintaining a global environment conducive to liberalism.

Liberalism and empire reinforced one another in manifold ways. Britain met the military necessities of the Napoleonic Wars with moves toward domestic liberalization—more civil rights for Catholics and Dissenting Protestants, who could hardly be asked to serve under arms while being required to swear religious oaths and denied the chance to participate in politics. The manpower needed to police the seas even after Napoleon’s defeat provided further incentives for reform, as did the growing wealth brought about by the trade that empire and peace made possible.

As British industrial magnates became wealthier, they demanded a greater role in politics; as their employees became more numerous, they too demanded representation and rights. The franchise expanded, religious liberty was extended, and liberal democracy as we know it steadily evolved within the context of empire.

Britain was not the only place where the domestic development of liberalism was made possible by the pax Britannica. For the newly independent United States as well, security was a precondition for liberalism. But during the first 30 years of the republic, that security was jeopardized by conflicts between Europe’s great powers—which were also the New World’s great powers. Britain and revolutionary France came to represent ideological poles for America’s domestic political factions, which dealt with one another in distinctly illiberal ways. Federalists passed laws like the Alien and Sedition Acts to suppress French agents and their American sympathizers, while mobs of Jeffersonian Republicans occasionally lynched pro-British Federalists and launched the War of 1812 against Britain’s remaining North American stronghold, Canada.

The partisan fury died down only with the “Era of Good Feelings” that followed the War of 1812—and, not coincidentally, Britain’s defeat of Napoleon in Europe. The resolution of Europe’s great-power conflict removed the source of much of America’s ideological unrest. Once there was only one superpower off America’s coasts, one that proved uninterested in reclaiming its long-lost colonies, domestic tranquility could ensue.

Britain vouchsafed the post-Napoleonic order in Europe by acting as an “offshore balancer,” checking the rise of any potential hegemon in the Old World. This inadvertently freed the United States to expand across its own continent. A young country whose development might easily have been stunted by war and insecurity was instead afforded the luxury to industrialize and solidify its institutions in peace.

Looking back from the perspective of 1951, George Kennan described this situation in American Diplomacy:

[Britain was] prepared to hover vigilantly about the fringes of the Continent, tending its equilibrium as one might tend a garden, yet always with due regard for the preservation of her own maritime supremacy and the protection of her overseas empire. In this complicated structure lay concealed not only the peace of Europe but also the security of the United States.

Whatever affected it was bound to affect us. And all through the latter part of the nineteenth century things were happening which were bound to affect it: primarily the gradual shift of power from Austria-Hungary to Germany. This was particularly important because Austria-Hungary had not had much chance of becoming a naval and commercial rival to England, whereas Germany definitely did have such a chance and was foolish enough to exploit it aggressively…

American foreign policy had never been peaceful merely for the sake of being peaceful. Security was the paramount concern, but with Britain keeping any possible global predator at bay, American statesmen could pursue their ends through means other than war. The Monroe Doctrine was of a piece with Britain’s strategy of offshore balancing: its author, Secretary of State John Quincy Adams, understood that only in peace—that is, in the absence of great-power competition in the New World—could 19th-century America develop politically and economically, and that such a peace was worth fighting to keep.

The liberalizing effects of the security environment fostered by the British Empire were felt far beyond the English-speaking world. Europe, too, enjoyed a peace during which it could industrialize and slowly democratize. But Europe could never enjoy quite as much security as the island-states of Great Britain and (in effect) the United States, and the bitter experiences of the Napoleonic Wars left even the continent’s liberals infected with nationalist resentments. Thus “free speech,” for example, did not mean disloyal speech in a security context like that of the Anglo-American peoples; matters were otherwise where the proverbial wolf was at the door, as it was for much of continental Europe.

In time, liberal sentiment grew so strong within imperial Britain that its exponents began to lose sight of the security context that made liberalism possible. Idealists and pacifists—the privileged children of empire—fancied that the peace was a product not of power but of good intentions; of love. Other liberals developed attitudes that foreshadowed today’s humanitarian interventionists: for them, power was more than just a means to a strategic balance in which freedom might flourish; it was an instrument by which despotic regimes could be directly overthrown and transformed into liberal or democratic governments.

Yet what sent the British Empire into eclipse in the second decade of the 20th century was neither a loss of nerve nor utopian overextension, but the brute fact that Britain did not have the wherewithal to contain a united Germany forever. Sooner or later, Germany would upset the continent’s balance and challenge the Royal Navy’s supremacy on the seas. Britain was not, therefore, acting irrationally when it entered into alliances against Germany ahead of World War I: the British way of life depended on the empire, which in turn depended on maritime hegemony.

A Britain ready to fight Germany might win and preserve its world order; or it might lose. But a Britain unwilling to fight could only lose. In the event, Britain won a Pyrrhic victory. Germany was defeated, but only with American help; the British Empire was no longer the keystone of the global system.

In the 19th century, the empire on which the sun never set could plausibly lay claim to represent “The End of History.” And if Francis Fukuyama were right—if ideas rather than institutions are the drivers of history—then the waning of British imperial power need not have meant a twilight for the ideals of liberalism and democracy as well. But in fact, the collapse of the security environment Britain had preserved for a century did indeed coincide with the downfall of liberal democracy—certainly on the European continent, where weak liberal regimes gave way to the likes of Il Duce and Der Führer, but also in Britain and America, where the intelligentsia increasingly looked to fascism, Bolshevism, and other profoundly illiberal creeds for inspiration.

A fair test of Fukuyama’s idea is whether liberal democracy endures in the absence of Anglo-American empire. In the interwar period, however, liberal democracy—divorced from British power, not yet remarried to American hegemony—looked well and truly moribund.

Why Pax Americana?

America might have been expected to fill the security gap left by the receding British Empire. But a people who had known almost nothing but international peace for a hundred years could scarcely imagine that it was anything other than the natural state of human affairs. Just how much America’s prospering liberal democracy owed to global conditions another nation’s empire had engineered was far from obvious.

The U.S. could certainly have stayed out of World War I. A fanciful scenario in which Germany won the war would not inevitably have led to trouble for the United States, which might have remained aloof from the Old World’s troubles as long as they did not wash up on American shores—and Wilhelmine Germany was hardly an exporter of revolution. If, as is most likely, the European powers had exhausted themselves, America would have been in a position to assert dominion over the oceans without firing a shot.

Alternatively, once the U.S. was in the war, the objective might have been to achieve a traditional balance of power, with the Kaiser preserved in Germany and Allied support for governments strong enough to suppress revolutionary movements. Such illiberal measures would in fact have done the most to preserve the international order that made liberalism possible.

In the event, however, U.S. involvement in Europe’s war was disastrous. The terms of the peace exacerbated the continent’s political instability, relying on a weak Weimar liberalism to withstand Bolshevism or Bolshevism’s fascist antithesis, and establishing an even feebler League of Nations to do with laws what Britain had once done with the Royal Navy.

Even as European liberalism was set up to fail without an imperial power to support it, America returned to her disengaged republican—and Republican—ways. Woodrow Wilson’s party was repudiated, and Presidents Harding, Coolidge, and Hoover kept out of Europe’s disintegrating affairs. Franklin Roosevelt only succeeded in leading the country into World War II after lying in his 1940 re-election campaign, violating the Neutrality Acts, and placing Japan in an economic stranglehold. It took Pearl Harbor to get Americans interested in the next war.

The old myths of natural peace and prosperity, which had taken root in America during a century of pax Britannica, died hard. In the decades between the wars, honorable men—not pro-Nazis but Americans who had seen their country grow to greatness without becoming entangled in European affairs—argued that events in Europe posed little danger to America and were frankly none of our business.

Their argument doesn’t hold up. Although the two great anti-liberal powers, Soviet Russia and Nazi Germany, eventually turned on one another, a scenario in which they completely canceled one another out is implausible, to say the least. More likely one would have overcome the other, and the alacrity with which Soviet power did in fact fill the vacuum left by the defeated Nazis in Eastern Europe after World War II suggests what would have happened to all of Europe had one totalitarian juggernaut triumphed.

Just as the world order made possible by the British Empire had a liberalizing effect on the United States, a Soviet or Nazi world order would have profoundly influenced American development in the opposite direction. In such a world, the U.S. would have faced both domestic and foreign pressures to assimilate to the Soviet or Nazi model, and resisting such pressure could itself have taken an illiberal turn. This is not so hypothetical: the Palmer Raids of the Wilson years and the McCarthyism of the 1950s show that America could indeed revert to its less-than-liberal 1790s sensibilities about free speech and disloyalty when faced with a foreign threat in a risky security environment.

But during the Red Scare and the McCarthy era, America was not facing a totalitarian power anywhere near as persuasive as it would have been had it conquered all of Europe. To think that American intellectuals would not have been as easily seduced by a victorious Nazi Germany or Soviet Russia as they were, in historical fact, by those same totalitarianisms when they had rather less territory under their sway seems rather naïve. Intellectuals worship power, and everybody worships success.

A Cold War between an embattled, increasingly illiberal and security-conscious America and a burgeoning USSR or Nazi Germany is not at all hard to conceive of—because in fact, we got just such a thing even with American involvement in World War II. Had Nazi Germany and Soviet Russia fought one another to a standstill in the 1940s, the results would have been much the same: a Cold War, only one whose poles were Moscow and Berlin rather than Washington and Moscow.

Had we stayed out of World War II, there is every reason to think that all of the illiberal measures taken by the U.S. in the Cold War that we actually did fight with the Soviets—in which the U.S. held the upper hand from the start—would have been taken in a much worse strategic, economic, and cultural climate. America might still have prevailed against an inhuman and unsustainable Soviet or Nazi system, but the America that emerged would hardly have been likely to be more liberal or democratic than the one we have today.

In the 19th century, the United States enjoyed the advantages of an international security environment propitious to liberalism and democracy without having to incur the costs of empire necessary to sustain those conditions. America could be liberal without having to be imperial—although the Indians, Mexicans, and Filipinos might well disagree. Beginning with World War II, however, if America wished to remain liberal and democratic, it would have to become imperial in many of the ways Britain had been—including playing a leading role in Europe and on the oceans. Indeed, America would have to do much of what the British Empire had done in the previous century on an even larger scale.

The efflorescence of liberal democracy in the latter half of the 20th century—the growth of international trade and support for democracy and human rights to the point where the total package appeared to be the “End of History”—was not a spontaneous, natural development. It was driven by U.S. prestige and power. Germany is now deeply committed to political liberalism, and Japan may in some respects be more consumerist than the U.S. itself. But these states were, of course, remade by the U.S. after World War II.

This is not to say there aren’t genuinely local traditions of liberalism or democracy to be found among America’s allies, nor that American arms can simply transform any other kind of regime into a liberal and democratic one: the apparent success of nation-building in Japan and Germany owed as much to the threat that the Soviet Union posed to those states as to anything America did. The Germans and Japanese had the most urgent incentive imaginable to make their newly liberal and democratic constitutions work—because aligning with the U.S. was the only insurance they could buy against being annexed by the Soviet empire instead.

There is a crucial difference between the Napoleonic, land-empire mentality that wants to revolutionize other states—a mentality taken to extremes by the Soviets and exhibited with considerable fervor by many neoconservatives and liberal hawks today—and the example set by Britain in the 19th century, which was a liberal but not revolutionary world power and encouraged liberalization mostly through indirect means: via trade, culture, and above all, by upholding a relatively un-Hobbesian global security environment.

Liberal anti-imperialists today, whether libertarian or progressive, make the same mistakes Britain’s pacifists and America’s interwar noninterventionists once did: they imagine that the overall ideological complexion of the world, as determined by the state most capable of projecting power, need not affect their values and habits at home. They believe that liberalism is possible without empire.

There is little historical evidence for this. When libertarians point to how economically liberal city-states like Hong Kong or Singapore are, they ignore the imperial strategic contexts in which those city-states are historically set. No city-state can resist the military force of a superpower; thus, the liberalism of a city-state tends to be entirely contingent on the liberalizing security conditions established by some great empire.

Yet liberal anti-imperialists are entirely correct about the price of the ideological wars that the other sort of liberal—the empire-loving kind—extols. These aggressive liberals, whether they call themselves humanitarians or neoconservatives, also misunderstand the world order that underwrites liberalism: they have Napoleonic ambitions to liberalize the planet through revolution, not merely to preserve conditions in which the happy accident of liberalism can survive and grow, if at all, by a slow process of assimilation.

Just as there are idealists who deny that power is the basis of the peaceful order upon which liberal democracy rests, there are other, more dangerous idealists who deny that power is a limited commodity that cannot simply be wished into existence by a feat of will. This is a view characteristic of neoconservatives such as Robert Kagan, who never evince any sense that the U.S. could overextend itself in regime-changing crusades.

Liberal democracy depends on empire, but there are strict limits to what empire can achieve. This point is best understood by the conservative critics of liberalism and empire alike. Figures such as George Kennan and Patrick Buchanan are relatively untroubled by the implications of noninterventionism for liberal values and practices because the America they wish to see is a more self-sufficient and nationally self-conscious one. They are consistent anti-imperialists and anti-liberals: opposed to open borders, free trade, consumerism, and mass democracy as well as to the global power projection that makes such things possible; they would like America to be more like Sparta than Athens.

But after 200 years, liberalism has soaked too deep into the fiber of America’s national character for a new path of national self-sufficiency to hold much popular appeal. Thus while the anti-liberal anti-imperialists are among our greatest critics, they are also among our most neglected. They preach what a liberal nation will not hear.

This leaves one final view to be examined, that of the conservative realist—who is a realist not only in understanding the role that power plays in shaping ideology and world conditions (including economics), but also in recognizing the bitter truth about liberalism and its imperial character. The conservative realist knows that America will not be anything other than broadly liberal and democratic for a long time to come, and liberal democracy requires a delicately balanced system of international security upheld by an empire or hegemon. This balance is apt to be upset not only by some rampaging foreign power—by a Napoleonic France or a Nazi Germany or Soviet Union—but also by our own revolution-loving, democracy-promoting liberals.

The conservative realist emphasizes four points in thinking about American hegemony today. First, judgment must be exercised to discern essential conflicts (like the Cold War and World War II) from absolutely inessential ones (like Iraq) and relatively ambiguous ones (like World War I). The individual cases matter; no ideological framework that renders predetermined answers about the use of force can suffice.

Second, if liberalism is ineradicably imperial—or hegemonic, if we’re being polite—it is also true that the only secure liberal order is one upheld by offshore balancing rather than crusading on land.

The third point, a corollary to the second, is that liberal democracy grows by evolution and osmosis; active attempts on the part of great empires to transform other regimes are usually counterproductive. Power upholds the strategic, economic, and cultural environment in which other states can pursue their own intimations of liberalism. Power cannot save souls or build heaven upon earth—it cannot “immanentize the eschaton,” as conservatives used to say, or expedite the “End of History.”

And fourth, because in fact liberal democracy is not the end of history, it can and will disappear in the long run. Thus its limited resources—moral, military, and economic—must not be wasted on utopian delusions. If liberal democracy is to continue as long as possible, its strategic posture must be realistic and conservative.

Liberal democracy is unnatural. It is a product of power and security, not innate human sociability. It is peculiar rather than universal, accidental rather than teleologically preordained. And Americans have been shaped by its framework throughout their history; they have internalized liberalism’s habits and rationales. Not surprisingly, they have also acquired the habits and rationales of empire—and now they must understand why.

Daniel McCarthy is the editor of The American Conservative


Hobby Lobby, Pluralism, and Privacy

Of the many foolish things said about the Hobby Lobby case, a contender for most foolish is the “Buy your own contraception!” snark on the right that runs parallel to the left-wing exaggerations about bosses dictating contraception choices to women. What the snark disguises is that the principle at stake is the same for both sides: if you’re compelled to purchase a service, you—whether “you” are a business or an individual—want to have some say in what it is you’re buying. Since it’s no longer a market transaction when government insists that it take place, the question of just what is being bought has to become a political and legal one. Women who say that they should get a basic service when they are forced to buy an insurance plan are in exactly the same position as a company that says it has religious objections to certain kinds of services. Neither claim is risible; both arise from the straightforward notion that you should get what you want, and not get what you don’t want, when you have to buy something.

If sex and religion are too polarizing to offer a clear illustration, consider whether “buy your own!” would make sense in a different context. If Washington commanded that everyone must purchase a car through his or her employer, but employers didn’t offer, say, headlights with the vehicle, would having to pay extra out of pocket seem reasonable, or would most people feel ripped off? Conversely, if employers were told that they had to offer, say, gold-plated hubcaps with all cars, wouldn’t they be entitled to object?

With auto parts, it might be possible to come up with a public consensus on what features were reasonable. That’s simply not possible, and not even desirable, where the questions at the heart of the Hobby Lobby case are concerned. Despite the best efforts of ideologues, sex and religion are still personal rather than political matters for most people. Americans do not want their relationships or beliefs supervised by the federal government, or by any government—not by a bureaucracy, not by a court, and not by the democratic process. Why should anyone get to vote on your faith or whether your insurance covers contraception? But the problem with the HHS mandate, and with Obamacare itself, is that it makes these very personal matters unavoidably public as well: matters for bureaucrats, judges, lawyers, politicians, Rush Limbaugh, and the people you do business with.

Apologists for the mandate can say that there’s already a public dimension to these things, which is true. There’s no wall of separation between the people as a political actor and citizens’ personal feelings about sex and religion, and there’s obviously not supposed to be a wall of separation between the people and their government. But there’s a difference between the indirect, tiered influence that private persons exercise on the public and that the public exercises on the state—and vice versa—and the kind of simplification that ideologues wish to see, in which individual, community, and state are all harmonized according to a single, unchanging set of values. The trouble with ideologues, left or right, isn’t just what they want, it’s how oblivious they are to their own excesses: they can’t imagine that anyone could have a reason not to want to subsidize someone else’s contraception or that any woman might feel cheated and demeaned by a company failing to provide insurance that covers birth control. You don’t actually have to agree with the metaphysical apparatus behind either side to see that something conscientious and intimate is being traduced by closing the gap between government, business, and private life.

Public policy is going to involve a clash of values one way or another, and even when one side “wins,” the political fighting doesn’t stop—the stakes are much lower, in practical terms, than ideologues can afford to admit. Popular governments aren’t meant to attain a steady equilibrium; opinions are always in motion, and thus so is politics. In pointing out the overreach that characterizes the simplifiers on both sides, the objective isn’t to arrive at a uniformly agreeable middle policy—some ideal formula for what’s personal and what’s political—but to maintain a certain space, however compromised, for life and feeling at a distance from politics. No house is completely private and invisible to the outside world, but that doesn’t mean we should let ideologues tell us that our walls might as well be transparent. Religion and sex ought to remain more personal than political, imperfect though the separation may be, and policies that more thoroughly mix these things are simply bad.


Ralph Nader, Tim Carney, and Me: at Cato Tomorrow

If you’re in the D.C. area, drop by the Cato Institute at noon Friday for a panel discussion of Ralph Nader’s new book Unstoppable: The Emerging Left-Right Alliance to Dismantle the Corporate State. Nader, AEI’s Tim Carney, and I will be taking part, with the Kauffman Foundation’s Brink Lindsey moderating.

The American Conservative excerpted the book—specifically, the chapter on the forgotten distributist conservatives of the 1930s—in our May-June issue. There’s much else in it that conservatives and libertarians will find fascinating, as Nader explores what figures as disparate as Frank Meyer, Peter Viereck, and Murray Rothbard have had to say about the conjunction and centralization of economic and political power—and what the alternatives might be.

(While I’m touting books, let me also mention that the distributist/agrarian classic Who Owns America?, which Nader discusses in the TAC excerpt, is on sale now from ISI.)

A few weeks ago, Cato’s Christopher Preble and I attended a conference Nader organized, and the two of us participated on a panel to discuss left-right approaches to trimming the defense budget. Here’s the video:

And here are all the panels from that event, including remarks by Grover Norquist.


Writers Who Change How You Read

Yale's Sterling Memorial Library. Arthur Connors / Shutterstock.com

In late 2008 I put myself through a crash course in the works of Willmoore Kendall, the “wild Yale don,” as Dwight Macdonald called him, who had been one of the founding senior editors of National Review. This was research for an essay that would appear in The Dilemmas of American Conservatism. I’d read some Kendall before—a desultory stroll through The Conservative Affirmation in America, at least—and hadn’t profited much from the experience. But the second, more attentive perusal was different. Kendall himself had told of how R.G. Collingwood had taught him at Oxford to read a book by asking what question the author was trying to answer. I didn’t find that approach too insightful, but I picked up something else from Kendall’s own methods—the habit of asking “What conditions would have to be true in order for this author’s arguments to make sense?”

That’s a more productive thing to ask of a serious work than simply, “Do this author’s arguments make sense?” The latter invites the reader to supply a misleading context: the author’s arguments may not match up with reality, but they must match up at least with his own view of reality, and that’s something worth figuring out and contrasting against whatever the reader thinks he already knows.

Stated so plainly this isn’t likely to strike anyone else as particularly insightful, just as Kendall’s report of how Collingwood reshaped his thinking didn’t do much for me. But that’s a lesson, too: it’s the act of thinking along with a text or teacher, and the new context created by that act, which makes a dead question come alive.

I thought of this when I recently came across Peter Witonski’s 1970 NR review of The Basic Symbols of the American Political Tradition, a book that began as a series of Kendall lectures and was finished after his death by George Carey. The review doesn’t do justice to the book—it elicited a sharp letter from Carey, who thought Witonski hadn’t even read what he purported to be reviewing—but Witonski does capture the effect Kendall can have, even decades after his death, perfectly:

What Kendall is all about is thinking—thinking about theoretical problems in politics. The device is that of the master professor, the man who by definition professes because he is wise, and is wise because he professes. The failure to convince, the difficult prose, are the essence of this device. In not convincing, Kendall makes you think the problem over again and again. I recognized this for the first time several years ago, when I met Kendall, for the first and last time, in a suburb of Paris, and spent many hours arguing with him.

The man, like the writer, was convincing and unconvincing. That night he spent a good deal of time propounding the general idea behind a book he had been engaged in writing, dealing with the American tradition. His argument was, of course, brilliant. But when I left him I was as unconvinced as ever. As I walked away from his flat I found myself thinking about what he had said. Suddenly I realized that I was thinking about such things as the Federalist Papers and the Declaration of Independence with a new freshness and vigor. I was rethinking them. I still did not agree with Kendall, but in his own perverse way he had taught me a great deal in a short period of time about subjects in which I had long since considered myself to be expert. Kendall was a master teacher.

Kendall and Collingwood are by no means alone in having this heuristic impact. But it’s a rare thing: there are many memorable books and teachers that impart facts or insights; there aren’t so many who change the way an interlocutor reads.


RT ≠ Endorsement

On air, Liz Wahl quits Russia’s English-language propaganda network.

She’s been getting a bit of snark from Twitter over her belated realization that maybe RT is a less than rigorously objective news source. Yet I’m more exasperated by RT’s viewers than by hosts who are, after all, only making a living, however dubiously, by reading from the Kremlin’s script.* In particular, how can certain libertarians or government-skeptical leftists think that as long as the spin is coming from a government other than America’s it must actually be the truth?

Unfortunately, the answer is all too plain: if you think that the U.S. federal government is the source of all evil in your life, your country, and the world, then it stands to reason—almost—that whatever contradicts Washington is on the side of truth. Moscow and Beijing therefore become beacons of light. The ideologues who fall prey to this don’t necessarily hate America—there’s a distinction between the country and its government, after all—and they don’t think of themselves as pro-authoritarian or, in the case of the Middle East, pro-dictator. But they do think, ultimately, that foreign authoritarians and dictators are really more liberal than the liberal-but-really-authoritarian United States. It’s a sour love affair: the U.S. fails to live up to liberal ideals, or even to come close, so regimes that have no intention of abiding by them must be no worse, or indeed a great deal better.


Three Paths for Putin

Evpatoria, Crimea. Vlad Galenko / Shutterstock.com

Yesterday I outlined what I still think is Russia’s preferred outcome in Crimea, one in which the strongly pro-Russian peninsula remains part of a Ukraine that is effectively subservient to Russia’s interests, no matter who is in charge in Kiev. That’s one path for Putin, and it hardly means avoiding military force—the key point is what result Russia’s aiming at.

There are two other scenarios, however, in which a Crimea more or less formally connected with Russia would make sense from Moscow’s perspective. The first is a variation on what’s already been suggested, only instead of using a Ukrainian Crimea as leverage over Ukraine as a whole, Putin uses the example of a Crimea severed from Ukraine to warn the Ukrainians that unless they play ball the Russian way, he will do to eastern Ukraine what he has already done to the Crimean south. A Ukraine without Crimea would have less love of Moscow, but that might be compensated, in Putin’s eyes, by greater fear.

The other possibility is that Putin is acting from weakness—that is, he’s calculated that there’s no plausible outcome in Ukraine as a whole that favors Russian interests, so he’s going to detach Crimea to salvage what he can. In this case, it doesn’t matter if removing Crimea from Ukraine makes Ukraine as a whole less cooperative with Russia because there is no chance for cooperation in any event.

And what if Russia just takes all of Ukraine? That’s basically the original scenario without the subtlety, and it comes with a great many headaches: beyond the effort necessary to subdue Ukraine and the penalties the West would impose, administering a territory as economically enfeebled and politically unstable as Ukraine isn’t an attractive prospect. An independent but subservient Ukraine looks to be what fits Russia’s interests best. The question is how Crimea fits into that—and if the best outcome, from Moscow’s perspective, is impossible, then a separated Crimea might be what Putin settles for.

(Putin also has to contend with the possibility that events will get away from him, of course—that the Crimeans may be more Catholic than the pope, so to speak, and be more eager to leave Ukraine than Putin himself would desire. And escalations of violence can throw this calculating style of politics completely out the window. But when thinking about Russia’s objectives, it’s worth keeping the big picture in mind.)

p.s. Here’s what the Russian foreign ministry is saying. Ignore the framing about far-right dangers in Ukraine and note the general political demand Russia is making:

We are surprised that several European politicians have already sprung to support the announcement of presidential elections in Ukraine this May, although the agreement of the 21 February envisages that these elections should take place only after the completion of the constitutional reform. It is clear that for this reform to succeed all the Ukrainian political forces and all regions of the country must become its part, but its results should be approved by a nationwide referendum. We are convinced that it is necessary to fully take into account concerns of deputies of eastern and southern regions of Ukraine, the Crimea and Sevastopol, which were expressed at the conference in Kharkov on the 22 February.


Why Russia Doesn’t Want Crimea

Some TV and Twitter commentators have begun to suggest an independent Crimea as a solution for the region’s troubles. That may or may not be what a majority of Crimeans would like to see—some prefer union with Russia; others are content to remain with Ukraine—but from Russia’s point of view an independent or Russian-annexed Crimea is hardly the most desirable thing. Russia’s primary interest in Crimea, basing rights, is already secure even with the peninsula as part of Ukraine. An independent Crimea gives Russia nothing that Russia doesn’t already have. And it would deprive Russia of an invaluable asset: a large bloc of ethnic Russians within the Ukrainian electorate.

This conflict is about Ukraine, not Crimea. Russia has far-reaching interests in its neighbor—everything from pipelines to a strategic and ideological buffer zone—that are complicated by the fall of Yanukovich and the coming to power of anti-Russian leaders in Kiev. The circumstances of Yanukovich’s fall (and practically speaking, he has fallen, even if he refuses to admit it) further loosen Russia’s grip. Thus the upheaval in Crimea is a bargaining chip, not an end in itself: it’s a way for Putin to make sure that Russian interests in Ukraine as a whole are accommodated as the country’s political future is worked out.

Keeping Ukraine intact serves Russian interests better than splitting the country into separate states, but obviously Russia wants Ukraine’s integrity to be preserved on Russia’s terms. So this is the space within which negotiations can be expected to take place. What settlement is possible that will give pro-Russian Ukrainians a strong hand, and perhaps a disproportionate one, within a united Ukraine, while satisfying a critical mass of the forces that toppled Yanukovich? Russia and the EU both have considerable economic stakes in Ukraine as a stable thoroughfare, so as difficult as the situation certainly is, there’s plenty of weight on the side of a grand bargain. And given how corrupt Ukrainian politics is on all sides, one suspects that money will talk louder even than nationalism—though that’s never an absolutely sure bet.


Leo Strauss and the Right’s Civil War

University of Chicago

I recently reviewed Paul Gottfried’s Leo Strauss and the Conservative Movement in America for the University Bookman. Paul responds to my review here. Note that in addition to Paul’s book being available as an affordable paperback, the Kindle edition is now going for just $12.49—if you’re interested in this topic, be sure to read it for yourself.

In the review I say that whether or not Strauss was in some sense a “conservative” is not the most interesting thing about him or the debate over his work. Gottfried may be correct that Strauss is better understood—if he needs to be situated in the context of late 20th century politics at all—as a Cold War liberal. The deficiency with that approach, however, is that it fails to account for why Strauss and his disciples are more often seen to associate with the conservative movement than with the leading figures and institutions of liberalism. Strauss and Straussians have been a presence in National Review since the 1960s. They have never had a similar representation in the New Republic, let alone The Nation.

Paul points to the importance of Strauss’s critique of relativism to explain the affinity that conservatives, especially conservative Catholics, have felt for him and his disciples. He also, however, calls attention to the Strauss circle’s apparent preference for Democratic presidential candidates in the 1950s and 1960s as evidence of a left-leaning disposition. In the Bookman, I challenge the idea that presidential voting counts for much—I cite the preference of Murray Rothbard and Peter Viereck, two other ambiguously conservative or right-leaning figures, for Adlai Stevenson over Dwight Eisenhower as an indicator of how voting is not always a sure sign of ideological alignment. I chose those figures because they happened to agree with Strauss (according to Stephen Smith’s account of Strauss’s voting) in the elections of the 1950s and because they, like Strauss, are not easy to pigeonhole. The point can be expanded, however: Russell Kirk, a conservative’s conservative, liked Eugene McCarthy as much as Barry Goldwater, and James Burnham—an important influence on Gottfried’s fellow paleoconservative Sam Francis—strongly preferred liberal Republican Nelson Rockefeller over Goldwater.

The “relativism” question is far more important than presidential voting, and taken together with personal and institutional associations it creates a much stronger case for placing Strauss among conservatives than among liberals like Louis Hartz or A.M. Schlesinger. National Review‘s William F. Buckley Jr. and Willmoore Kendall considered Strauss a comrade, as did Russell Kirk—though Kirk came to have a more negative view of Strauss’s disciples after the 1980s.

This is worth stating explicitly because less historically informed commentators than Gottfried—who touches on such associations just briefly—may think there’s some mystery as to how latter-day Straussians came to occupy a prominent place in the conservative movement. The simple answer is: they inherited it, both from Strauss himself and from Harry Jaffa, who is ideologically idiosyncratic but has been influential in right-wing Republican and NR circles since the early 1960s.


Can Conservatives Support the Iran Deal? In Britain, They Do

UK Foreign Secretary William Hague—who was leader of the Conservative Party from 1997 to 2001—made a statement in the House of Commons that suggests how realist Republicans in the U.S. might look at the Iran deal. He said in part:

Mr Speaker, reaching this interim agreement was a difficult and painstaking process, and there is a huge amount of work to be done to implement it. Implementation will begin following technical discussions with Iran and the IAEA and EU preparations to suspend the relevant sanctions, which we hope will all be concluded by the end of January. A Joint Commission of the E3+3 and Iran will be established to monitor the implementation of these first-step measures, and it will work with the IAEA to resolve outstanding issues.

But the fact that we have achieved for the first time in nearly a decade an agreement that halts and rolls back Iran’s nuclear program, should give us heart that this work can be done and that a comprehensive agreement can be attained.

On an issue of such complexity, and given the fact that to make any diplomatic agreement worthwhile to both sides it has to involve compromises, such an agreement is bound to have its critics and opponents.

But we are right to test to the full Iran’s readiness to act in good faith, to work with the rest of the international community and to enter into international agreements.

If they do not abide by their commitments they will bear a heavy responsibility, but if we did not take the opportunity to attempt such an agreement then we ourselves would be guilty of a grave error.


Can Liberals Understand Politics Anymore?

I’m baffled by liberals who don’t actually support Obamacare themselves—they want single-payer—but are furious at Republicans for voting against it. (Several comments here illustrate the phenomenon.) Let’s consider the logic. A great many Republicans actually do oppose Obamacare for bona fide reasons. Others are indifferent, and some might oppose it only on partisan grounds while liking the law in principle. Which of these groups of Republicans do liberals think would have had reason to vote for the law?

Those who secretly liked the law were going to see it passed anyway by a Democratic majority—they got the policy they wanted without having to risk blowback in a Republican primary. You’d have to be extremely idealistic to believe that taking a serious political risk to make no policy difference is a prudent move. Indifferent Republicans faced an even starker calculus: risk your career for a policy you don’t care about at all or that you think is as likely to turn out badly as to turn out well. Finally, Republicans who did oppose PPACA out of principle surely can’t be expected to have had a reason to vote for the law absent something to change the equation.

This was the ex ante logic, and it proved to be correct: any Republican who had voted for Obamacare would have faced the wrath of the Tea Party, and if he had survived past 2010, he would still have to survive a 2014 midterm in which even Democrats risk losing their seats over their association with the law and its disastrous implementation.

Liberals are so politically inert that I don’t waste much time criticizing them. The Democratic Party, to the extent that it’s liberal, wins nationally because the GOP is a basket case. The hard left, which knows that Democrats are about as neoliberal as the GOP, lives in a nonsense world in which puppet-wielding protesters shape policy, or would if only they built more and bigger puppets. I know some very well-meaning, otherwise intelligent antiwar leftists who are nonetheless the most politically infantile people you will ever meet. Politics is just magic to them. (Some of this comes of drawing the wrong lessons from Alinsky and Gramsci—wrong lessons the activist right is now busy committing to memory.)

What drives liberals’ political inanity is the same thing that accounts for why the Tea Party can’t govern: just as the grassroots right is against a lot of things but doesn’t feel much urgency about figuring out what it’s realistically for, liberals go hot with rage over Republican bad behavior and stop there, indulging in outrage rather than thinking about how to change the GOP’s incentives.

The Obamacare saga is the clearest example of the left’s failure to think politically. Lefties defend a law that they don’t even like, one that cost the Democratic Party enormously in 2010 and looks set to do so again in 2014. This would be like conservatives defending Medicare Part D if it had caused Republicans to lose the House in 2004. To their credit, most right-wingers, even the hawks, have the good sense not to attack anyone today for failing to support the Iraq War in 2003.

The only way to get a party or a politician to act against its interest or its principles is to change what those interests or principles are through powerful incentives. Offer X in return for Y, and if X is a higher priority for the other party than not-Y, you will get your way. Such negotiation isn’t always easy or even possible—the alignment of interests (including self-preservation) and principle involved in Republican opposition to Obamacare may have been insurmountable. At that point, a politician or party has a choice: press ahead with the policy knowing that you will bear 100 percent of the blame if things go wrong, or put your wager on something else and bide your time on this issue until the other side is more tractable. Obama, Reid, and Pelosi made their choice—fair enough—and they’re living with the consequences. The Republicans did the only thing that made sense in their position; and now they might reap a reward.

Liberals can comfort themselves in one respect, however: the Republicans have blown opportunities as good as this many times before. That’s why Harry Reid is still majority leader.


Harry Reid Nukes the Filibuster

The Senate majority leader and 51 other Democrats voted today to change the chamber’s rules—departing from over 200 years of tradition—to make a 51-vote majority sufficient to confirm most presidential nominees, though notably not those for the Supreme Court. It’s arguably the biggest change to the institutional character of the Senate since the ratification of the 17th Amendment. Consensus-seeking collegiality had already broken down, but now the formal incentive to seek it is gone. That incentive, it need hardly be said, hadn’t proved very effective lately.

In theory you might now get more intensely partisan nominees and greater polarization of the federal bench, as well as cabinets that are even less aware of the need to build more-than-majority support for policies that affect everyone. American government has generally not been a thing of bare majorities, which tend to be unstable, and the political problems of Obamacare—quite apart from its policy and technological ones—are a product of a short-lived Democratic majority trying to make a great change without securing even grudging support from the other side. This all-at-once approach cost the Democrats their House majority in the following election and set the stage for years of acrimony.

Other changes in the way the Congress operates, under Republicans as well as Democrats, have similarly chipped away at the consensual—or at least supermajoritarian—character of the federal government: without earmarks and the old committee system, the wheeling and dealing that could be used to build consensus across party lines has been made much more difficult. Even earlier, reforms to the appropriations process in 1974 paved the way for Congress to become an “incompetent bureaucracy,” as this Bruce Bartlett piece explains, unable to pass budgets in anything like a reliable manner.

Reid’s change to the filibuster rules, and the conditions that prompted it, are another chapter in a decades-long tale of institutional decay. The book still has many pages to go, but one can guess how it ends. In 1776 Americans were so passionate about the idea of representative government that they were willing to fight a revolution for it. Nowadays Congress is lucky if its approval ratings hover just above the single-digit range.


The GOP’s Contraception Conundrum

Libertarian legal scholar Randy Barnett spots the corner into which “judicial conservatives” have painted themselves on contraception, whose nationwide legality is vouchsafed by a Supreme Court decision notoriously at variance with the right’s judicial philosophy:

judicial conservatives… believe that the Court in Griswold was wrong to protect a right to use contraceptives. … And the smarter and better trained they are as judicial conservatives, the more they are trapped by the accusation that state legislatures could ban contraceptives if they want, which then leads to the next questions [which] is whether they think state legislatures ought to ban contraceptives.  How they answer this question can then get themselves in trouble with parts of their socially conservative base.

In short, this is a morass for those conservative Republicans who have embraced judicial conservatism, and who are smart enough and well schooled enough to understand where the logic of their position truly leads.

Barnett’s solution is to propose “a constitutional conservatism that seeks to enforce the whole Constitution, including the parts that judicial conservatives are at pains to explain away, like the Ninth Amendment and the Privileges or Immunities Clause of the Fourteenth.” In Barnett’s view, this would require a strong rationale for any restriction on individual liberty by any level of government. Presumably one could come up with reasons why abortion or various hard drugs should be banned but contraception should not. Barnett doesn’t tackle these questions in his post, and he may not be sympathetic to the antiabortion and pro-drug-war elements of the right, but in theory what he proposes need not preclude their goals. That’s especially important where abortion is concerned since Griswold set the stage for Roe.
