State of the Union

Pedophile Rings in Thatcher’s Britain—Myth or Fact?

Leon Brittan WTO / cc

Leon Brittan, who died last week, had a very distinguished career in British public life. Among other things, he served as Margaret Thatcher’s Home Secretary and later became a member of the European Commission. It is startling, then, to find that among the standard eulogies for the great and the good, some news headlines reporting his death feature such unexpected words as “abuse ring,” “pedophile,” and “child murder.” Brittan had the misfortune to play a starring role in a long-simmering sex scandal currently fascinating that country’s media.

For years now, rumors have been floating about a “Westminster Pedophile Ring” that supposedly operated in the 1970s and 1980s and which included senior politicians, civil servants, and military figures, mainly right-wing Conservatives. Recently allegations reached new heights when police said they were seriously considering claims that the group had murdered several young boys. As the Independent headlined, “Tory MP Killed Boy During Sex Attack.” In themselves, these horrific charges contain nothing flagrantly impossible. Yet we need to be very careful indeed about accepting a story that depends on thorny issues of evidence and credibility that will be deeply familiar to American observers of our own country’s sexual politics.

The whole dreadful affair has now developed a complete mythology, with two pivotal hero figures. One was flamboyant Member of Parliament Geoffrey Dickens, who in 1984 compiled a massive dossier about pedophilia in British public life, with details on some 40 allegedly tainted politicians. He gave this to Home Secretary Leon Brittan, whose department promptly misplaced or buried it, supposedly as part of a general establishment cover-up. Only in recent years has the affair returned to life. The other key figure is the pseudonymous “Nick,” supposedly one of the abused boys from that earlier era. Finding his chilling account of witnessing murders “credible and true,” British police have now reopened the investigation, in the process generating sensational headlines.

Parts of the story are plausible. We know that in that era—roughly, the decade following 1975—several British public figures were indeed involved in outrageous and exploitative sexual misbehavior, including some cases of child abuse and child pornography. One horrific example was Liberal MP Cyril Smith, a 300-pound blimp with a penchant for spanking teenaged boys. Although such cases of sexual malfeasance were well-known to police and media, they were thoroughly hushed up, a process made vastly easier by draconian British libel laws.

The “pedophile ring” rhetoric is, though, misleading. If we look at the known sexual scandals from the politics of this era, they tended not to be “pedophile,” in the sense of involving someone sexually focused on children at or below the age of puberty. The word is thus chosen to maximize seriousness, implying young child victims, compulsive serial offending, and incorrigibility. In fact, the recorded cases commonly involved homosexual men interested in male teenagers or young adults, usually male prostitutes. That does not for a second excuse the behavior, but it does put it in a different category from molesters preying on infants.

That distinction is significant in light of the claims made about Geoffrey Dickens, who is today presented as a near-prophetic champion of decency and child protection confronting a perverted ruling class. Dickens was in fact an outrageous demagogue, who never found a sensational issue or moral panic that he failed to leap on. His special bugbear was homosexuality, a broad category that, for him, included pedophilia as one of its subsets. We do not have a copy of the legendary dossier, but we can be quite sure that it included very few actual pedophiles and a great many homosexuals. Almost certainly, too, the impressive-sounding term “dossier” dignifies a generalized rant.

Charges of rings and conspiracies should also be treated circumspectly. The “elite pedophilia” charges circulated very widely in tabloid media of the 1980s, usually in the context of lunatic theories of Satanism and supposed “ritual child abuse,” sometimes linked to anti-Masonic hysteria. Then as now, these fevered rumors Named Names, including Cabinet members and members of the royal family, as well as prominent Jews, like Brittan himself. It’s not surprising, then, that law-enforcement officials at the time were profoundly (and rightly) skeptical of any new nuggets Dickens had to offer.

But let’s move to the present day, and especially to “Nick,” the main (and seemingly only) source of the murder charges. I personally have no idea of Nick’s identity, or of his veracity, and it is possible that every appalling word he is uttering is grounded in truth. But based on the extensive media reports of the affair, I do have concerns.

I read, for instance, the accounts of the homicidal orgies attributed to the elite ring, in which at least one boy was strangled. This gives me a mighty sense of déjà vu because I know identical stories of actual, confirmed incidents that happened in London at this exact time and which have been known in the public domain for decades. Those crimes, though, involved a quite genuine pedophile crime network that was as far from “elite” as it was possible to be, a group of underclass trash who hung around fairgrounds to find child victims. They indeed killed repeatedly, in exactly the ways now credited to our “elite” perverts, and the similarity between those stories and the current charges bothers me. If someone were inventing “pedophile ring” crimes, this is what they would come up with.

Recently, one of the leading detectives in the renewed investigation remarked that “I believe what Nick is saying to be credible and true.” Based on reports to date, police have never cited any actual corroboration of the charges, any detail from Nick that he could not have known unless he had been present at these crimes. Rather, we hear repeatedly of his “credibility,” a word that is thoroughly subjective: “I believe.”

When I say that X is “credible,” what I mean is that I find what he has to say believable, and that judgment depends as much on my willingness to accept his statement as on any quality in his character or demeanor. This is a familiar theme in contemporary American debates over sexual assault, as when Rolling Stone found a witness who recounted fraternity rape stories, declaring her “credible” because it fitted their ideological needs to do so. Editors and journalists simply wanted and needed to believe. Seeking corroboration was unnecessary, and the mere suggestion of doing so would have blamed and demeaned the victim.

In Britain, too, there are ample reasons why authorities would now find Nick “credible” in a way they would not have done a decade or so back. The main new factor is the appalling case of disc jockey Jimmy Savile, who used his celebrity status to carry out a career of rape and molestation lasting half a century. Since 2012, desperately anxious to avoid new attacks on their integrity and competence, law-enforcement agencies have sought out and prosecuted celebrity sexual crimes from bygone years, commonly relying on the uncorroborated testimony of reported victims and survivors.

Sometimes, this exhumation of past horrors has undoubtedly served the cause of justice, but questions remain. Should an individual really be tried and convicted on the unsupported, uncorroborated evidence of alleged victims who report crimes from 30 or 40 years ago? Surely we can now point to enough cases where such testimony proved wholly fictitious, even malicious, and where real injustice resulted. Witnesses fantasize, and witnesses lie.

Perhaps British politicians of the Thatcher era were indeed sexual monsters. But we should pause before accepting what, on its surface, looks like a deranged fantasy.

Philip Jenkins is the author of Images of Terror: What We Can and Can’t Know About Terrorism. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

‘False Flags,’ Charlie Hebdo, and Martin Luther King

FBI

Seeking to explain recent terror attacks in France, conspiracy theorists have resorted to very familiar culprits: the Jews did it, specifically the mystical supermen of Israel’s Mossad. Such a theory is stupid and scurrilous, as well as self-evidently incorrect on any number of grounds. That said, the Paris terror spree does raise significant questions about how we assign responsibility for terror attacks and what we can and can’t know by looking at the foot soldiers who carry out the deeds. Nor are debates over false claims and attributions wholly foreign to American history.

The most likely reconstruction of the Charlie Hebdo attack places primary blame on the Yemen-based al-Qaeda affiliate, Al-Qaeda in the Arabian Peninsula (AQAP). Al-Qaeda wanted to carry out a spectacular in order to distract attention from the enormous successes enjoyed recently by its upstart rival, ISIS, in Iraq and Syria. Only thus, thought al-Qaeda leaders, could the group recapture some of its old momentum and credibility. Accordingly, two of the militants involved made a point of yelling their support for AQAP in the streets they had turned into a battleground. Their accomplice, though, who stormed a kosher market, was so far from understanding the wider agenda that he publicly proclaimed his own fealty… to the ISIS Caliphate. Oops.

In itself, the gulf between generals and foot soldiers is not hard to grasp. Even in regular armies, ordinary privates rarely have much sense of the broad strategic goals motivating their campaigns, although at least they can be sure about which nation they are actually serving. Such certainty is a luxury in terrorist conflicts, where individual cells and columns might find themselves contracting for a bewildering variety of paymasters. This degree of disconnect can be potentially useful for anyone seeking to manipulate a cause. A group can recruit uninformed militants as muscle to undertake a particular attack, which can serve wider goals utterly beyond the comprehension of those rank-and-file thugs. This might mean discrediting some other rival cause or else achieving a desired goal without suffering any direct stigma for committing the deed. Such pseudonymous actions thus offer deniability.

That brings us back, perhaps, to one of the most notorious crimes of 20th-century American history.

In April 1968, Martin Luther King Jr. was assassinated in Memphis. Despite multiple claims through the years, we can confidently say that the assassin was a petty criminal and armed robber named James Earl Ray, who fled the country before being arrested in London. Ray’s motives have been much debated, but a congressional investigation in the 1970s assembled extensive (if confusing) evidence that cliques of Southern racists and white supremacists had conspired to kill King, using Ray as a low-level subcontractor.

That might be true, but it is not what Ray himself admitted. In 2001, British authorities released information about the arrest and detention of Ray, material that has been largely ignored in the United States. During his British stay, he offered none of the lengthy defenses and denials of the shooting that he would later maintain. Rather, he talked freely about the King murder, and he even suggested the culprits who might have arranged the killing. Instead of white supremacists, though, Ray’s main candidates for the principals in the conspiracy were the Black Muslims, the Nation of Islam followers of Elijah Muhammad.

Let me say immediately that the fact that Ray said this does not of itself constitute weighty evidence for the existence of any conspiracy, let alone its nature. Ray, who died in 1998, was anything but a reliable witness. He was at best a contractor in the killing, and even the Ray family’s legal representatives spoke scathingly of the general intelligence of Ray and his circle. And the fact that Ray made such remarks does not even mean that he necessarily believed them. Perhaps he was making mischief.

Odd as it may sound in retrospect, though, the Black Muslim theory is not ridiculous, and it is quite as plausible as the white supremacist angle. We need to think back to a time when the U.S. had been racked by escalating race riots for several summers. Even responsible observers were forecasting outright race war, with cities partitioned between armed black and white militias. Radical black separatists stood to gain from a sensational act that would polarize and divide the races still further. Enlisting a white man to kill Martin Luther King would be a lethally effective form of dissimulation, and it was also wholly deniable. Elijah Muhammad had a track record of involvement in violence, and he is widely held responsible for ordering the assassination of Malcolm X in 1965.

If the London evidence had been better publicized in the 1970s, it would presumably have been more thoroughly investigated, and that might possibly have pointed to interesting connections. Lacking such an investigation, however, we really can add little to what is already known about King’s death: no reasonable person would build a new conspiracy theory solely on the shifting sands of James Earl Ray’s often-changing testimony. I am certainly not claiming any grand breakthrough in the case.

But from the point of view of terror investigations, the Ray affair contradicts so many of our regular assumptions. When individuals X and Y launch an attack, the media will direct all their efforts to determining what made them do it, and how they became so fanatically devoted to their cause. The problem is that the people pulling the triggers do not necessarily know much about the wider causes for which they are fighting. And what they do know might be totally wrong.

Philip Jenkins is the author of Images of Terror: What We Can and Can’t Know About Terrorism. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

Saddam’s Strategy Against ISIS

Georgios Kollidas / Shutterstock.com

It does not take great powers of prophecy to discern the outcome of the latest U.S. intervention in Syria and Iraq. Soon, ground forces will become more directly involved. Fighting bravely and intelligently, those forces will win many victories, although at a high cost in battle casualties and terrorist outrages. Meanwhile, Islamic State forces only have to stay on the defensive until the patience of the U.S. public becomes exhausted, prompting another undignified American withdrawal in 2016 or 2020. Islamists will then regain power, just as the Taliban will almost certainly do in Afghanistan. Americans will be left scratching their heads seeking to explain another strategic failure.

Actually, American or other Western forces could win such wars very easily, obliterating their enemies to the point where they would never rise again. The problem is that they could do so only by adopting tactics that Americans would find utterly inconceivable and intolerable—in effect, the tactics of Saddam Hussein. Yet without these methods, the West is assuredly destined to lose each and every one of its future military encounters in the region. I emphatically do not advocate these brutal methods. Rather, I ask why, if the U.S. does not plan to fight to win, it becomes embroiled in these scenarios in the first place.

To illustrate the principles at work, think back to the attack on the U.S. compound in Benghazi in 2012. Ordinary Libyans were furious at the killing of an American diplomat they respected greatly, and they struck hard at the terror groups involved. With dauntless courage, they stormed the militia bases, evicting many well-armed Islamist fighters. Explaining his fanatical behavior under fire, one of the attackers was quoted as saying “What do I have to fear? I have five brothers!” As in most of the Muslim world, whether in the Middle East, North Africa, or South Asia, people operate from a powerful sense of family or clan loyalty, with an absolute faith that kinsmen will avenge your death or injury. That process of vendetta and escalating violence continues until the family ceases to exist. As a corollary, the guilt of one is the guilt of all. An individual cannot shame himself without harming his wider family.

Through the centuries, that basic fact of collective loyalty and shared responsibility has absolutely shaped the conduct of warfare in the region. It means, for instance, that governments disarmed rivals by taking members of their families as hostages for good behavior. Those hostages were treated decently and honorably, but their fate depended on the continued good conduct of their kinfolk. Governments kept order by deterrence, enforced by the ever-present threat of collective retaliation against the kin-group and the home community of any potential insurgents. As individuals scarcely matter except as components of the organic whole of family and community, nothing prevents avenging the misdeeds of one man on the body of one of his relatives or friends.

Everyone in the region understands the collective principle, which was powerfully in evidence during the Lebanese civil war of the 1980s. If a militia kidnapped one of your kinsmen or friends, you could only save his life if you very quickly grabbed a relative of one of the culprits, and thus began negotiations for a swap. If your kinsman was already dead, then further atrocities could only be pre-empted by swift retaliation against the kidnapper’s family. So you have five brothers? Well, we will track them all down, one by one.

Only slowly did local Beirut fighters realize that the Americans were actually naïve enough not to target the relatives of kidnappers, even when they knew perfectly well who the guilty men were. That insight—the knowledge that you could target those foreigners without risking your brothers or cousins—was what led to the hostage crisis of the Reagan years, which almost brought down the U.S. presidency. The Russians, by the way, enthusiastically played by local rules, retaliating savagely against the brothers and cousins of those who laid hands on one of their own. In consequence, the Russians suffered only one kidnap crisis, before establishing a successful balance of terror.

Once we understand that principle, even the seemingly intractable problem of deterring suicide attacks actually becomes simple. An individual—a Mohammed Atta in New York, a Mohammad Sidique Khan in London—might in his last moments dwell on nothing but the glories awaiting him in Paradise. Why should he hesitate to kill? Matters would be utterly different if he knew that his act would bring ruin to his family and neighbors, leading to the violent death of all his kinsmen and the extirpation of his bloodline.

A dictatorial regime like Saddam’s had not the slightest problem imposing such a group punishment, and extending it to every woman and child of that family. Western forces have always been far more principled, but even the colonial empires were quite prepared to inflict collective punishments on the towns or villages that produced notorious rebels. When Israeli soldiers today demolish the houses of terrorists’ relatives, they are treading in familiar British footsteps.

Today’s Islamic State pursues an extremist ideology in which there are literally no limits to cruel or outright evil behavior. The only enemy they have to fear is death, and they have been taught to welcome this. Short of introducing some mighty new deterrent factor, conventional military operations against them are wildly unlikely to succeed. Quite the contrary, endemic wars will generate ever more fanatics.

In theory, a recipe does exist for decisively ending the Islamists’ run of victories. By means of collective and family punishment, which explicitly targets individuals who have done no wrong, governments and armies must impose a deterrent regime so brutal that it outweighs even the massive temptations of martyrdom and an instant road to Paradise.

No U.S. government would ever introduce such a policy, and if it did, it would cease to be anything like a democratic society. The U.S. could only adopt such avowedly terrorist methods following a wrenching national debate about issues of individual and group responsibility, and the targeting of the innocent. Could any U.S. government avowedly take hostages? We would be looking at a fundamental transformation of national character, to something new and hideous. But what other solutions could or would be possible?

Given that U.S. administrations are not going to fight the Islamic State by the only effective means available—and thankfully, they aren’t—why are they engaging in this combat in the first place?

Why start a war when you don’t plan to win it?

Philip Jenkins is Distinguished Professor of History at Baylor University and serves as Co-Director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

The Case Against a Unified Kurdistan

Daniel Pipes has announced his conversion to the cause of an independent Kurdistan, to be built on the foundations that ethnic group has established in Northern Iraq. In the 1990s, he says, he doubted the idea on multiple grounds, not least that “it would embolden Kurds to agitate for independence in Syria, Turkey, and Iran, leading to destabilization and border conflicts.” Now, though, he greets the prospective new nation with a hearty “Hello, Kurdistan!”

As the U.S. becomes ever more deeply involved against ISIL, we are going to hear many such calls to support a free Kurdistan. By the standards of the region, the Kurds are undoubtedly the good guys, the closest thing we might have to an actively pro-Western state. The problem is that defining this nascent Kurdistan is a fiendishly difficult project, which at its worst threatens to spread massacre and ethnic cleansing to parts of the region that are presently relatively safe. Actually, we should listen closely to the wise words of the unreconstructed Pipes, version 1.0.

You can make an excellent case for supporting the independence of a Kurdistan in roughly its present location in Northern Iraq. But the Kurdish people are spread widely over the region, with communities in Syria, Iran, and Turkey, and the eight million Iraqi Kurds constitute only a quarter of the whole.

With commendable frankness, Pipes takes his ambitions to the limit. As he asks, “What if Iraqi Kurds joined forces across three borders—as they have done on occasion—and formed a single Kurdistan with a population of about thirty million and possibly a corridor to the Mediterranean Sea?” He presents a map of the new mega-Kurdistan, which is produced by “partially dismembering its four neighbors.” Yes, he says, this would dismay many, but the region “needs a salutary shake-up.”

This is not dismaying; it’s actively terrifying.

As Syria and Iraq are already in dissolution, little additional damage would be caused by tearing off extra fragments of their territory. In Iran, though, any attempt at Kurdish secession would of necessity generate a bloody civil war, but that prospect does not deter Pipes: secession “would helpfully diminish that arch-aggressive mini-empire.” Turning relatively stable Iran into a fragmented failed state would be music to the ears of U.S. and Israeli hawks, but it is a recipe for escalating carnage for decades to come.

But it is in Turkey that any Kurdish ambitions meet a massive reality check. The country has 15 million Kurds, around a fifth of the whole population, spread over the southeastern third of the country. Turkey’s revolutionary PKK, the Kurdish Workers Party, is an extremely active and dangerous movement, though its decades-long nationalist guerrilla struggle is currently on hiatus. While rightly stressing that the Kurdish state has rejected the terrorist tactics used by Turkish groups, Pipes specifically notes schemes by the Kurdish military to ally with the Turkish Kurds, and his imagined mega-state incorporates huge swathes of present-day Turkey.

A renewed secessionist movement in Turkey would be catastrophic. It would cause many thousands of deaths and cripple one of the region’s most successful societies. Beyond civil conflict and terrorism, expect a rash of outright wars between the new and emerging mini-states. Violence would likely spread into Turkish and Kurdish communities in Western Europe.

Why on earth does Pipes think such an outcome is worth risking? The only seeming benefit is to punish Turkey’s President Erdoğan, who has shown undemocratic ambitions. More to the point, though, he has become a harsh critic of Israel and of Western policies in the Middle East. As Pipes writes, “Kurds’ departing from Turkey would usefully impede the reckless ambitions of now-president Recep Tayyip Erdoğan.” Even if you assume the very worst of Erdoğan, he still falls very far short of the region’s dictators and demagogues, making Pipes’s proposed solutions wildly disproportionate, and, yes, reckless.

A salutary shake-up is one thing. Provoking a regional cataclysm is quite another.

Philip Jenkins is Distinguished Professor of History at Baylor University and serves as Co-Director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

The Paranoid Style in Liberal Politics

As a shrewd cultural critic, Alan Wolfe is always worth reading. Recently, though, he made an unfortunate diversion into the realm of necromancy, raising the shades of unwanted and unneeded dead theories. In a recent issue of the Chronicle of Higher Education, Wolfe discussed how far Richard Hofstadter’s theory of the Paranoid Style could be applied to contemporary US politics. It would be sad if Wolfe’s imprimatur inspired any revival of a fatally flawed, but long influential, theory.

Richard Hofstadter was a Columbia University historian, whose best-known books were Anti-Intellectualism in American Life (1963) and The Paranoid Style in American Politics (1965). The title essay in this latter book originally appeared in Harper’s at the time of the 1964 election. A classic JFK liberal, Hofstadter used his historical skills to analyze what he saw as the political menaces of his day. He described the beliefs and rhetoric of Barry Goldwater and what he termed the radical Right with about as much balance and intuitive sympathy as an al-Qaeda spokesman expounding US policy in the Middle East. Hofstadter located contemporary Right-wing views in a deep-rooted and ugly tradition of hatred, xenophobia, Nativism, and racism, traceable to colonial times. (He always spoke of the Right: conservatism might in theory be acceptable, but America, in his view, had no “true” conservatives.)

Hofstadter saw no point in trying to comprehend Rightism as a system of rational political beliefs. Rather, it was based on paranoid fantasies—delusions of persecution, visions of conspiracy, and messianic dreams of absolute victory in a future that would vindicate all present excesses. Only the word “paranoia,” he wrote, “adequately evokes the sense of heated exaggeration, suspiciousness, and conspiratorial fantasy.” All these views, ultimately, were grounded in irrational fears, in projections of the troubled self. Drawing on the faddish therapeutic creeds of the time, Hofstadter presented Rightism as a pathological disorder. “Paranoia,” in his usage, was not just a rhetorical label, but a certifiable personality disorder.

For Hofstadter, America’s political choice in 1964 could be summarized readily: we are liberal; you are mentally ill.


How a Shopping Mall Becomes a Killing Zone

This really is frightening.

Terrorist incidents tell us nothing new about human nature. We already knew that people are capable of horrendous violence, especially when they have come to regard some other subset of human beings as unworthy of full human status. It’s not surprising, then, to see the terrorists of Somalia’s loathsome al-Shabaab movement violating all laws of humanity by slaughtering innocent victims of all ages. People can become monsters, and they did in the Nairobi mall attack that began on September 21.

What really is alarming, though, is to see terrorists create a radical new tactic against which there is no obvious response or defense. There was nothing surprising, for instance, in the idea that terrorists might hijack airliners, but only in 2001 did we realize that hijackers might use them for suicide attacks, turning those aircraft into deadly missiles. Nairobi has just shown us another horrible innovation. It might be that we won’t realize how effective this could be against the U.S. until we face yet another day when we are counting the dead in their hundreds. We have to confront this issue immediately.

Think about it. How would one attack a shopping mall, whether in Nairobi or Minneapolis? Presumably a number of pickup trucks draw up in the parking lot, and 20 or so armed men and women get out, carrying their weapons and ammunition. Then they enter the mall and begin killing until they can do no more harm. They are strictly limited by the number of bullets and grenades they can carry. When police and military forces arrive, the terrorists might hold out for an hour or two before being eliminated.

That’s one way to do it, but it’s clearly not what happened in Nairobi, where firefights were still in progress several days after the initial assault. Even more amazing, terrorists were still putting up resistance against strong Kenyan forces, reputedly trained and assisted by British and Israeli special forces.

How on earth did the terrorists do it? Why, they rented a store.


Syria’s Christians Risk Eradication

U.S. policy towards Syria is bafflingly inconsistent. If U.S. leaders are so concerned about regimes slaughtering thousands of their own people, did they notice what just happened in Egypt? If they are so exercised about weapons of mass destruction, are they aware that Israel has two hundred nuclear warheads, with delivery systems? Will American warships in the region be making those other stops on their liberating mission?

Most puzzling of all, though, is why the United States seems so determined to eradicate Christianity in one of its oldest heartlands, at such an agonizingly sensitive historical moment.

Syria has always been a complex place religiously. Although the country has a substantial Sunni Muslim majority, it also has large minority communities—Christians, Alawites, and others—who together make up over a quarter of the population. Those communities have survived very successfully in Syria for centuries, but the present revolution is a threat to their continued existence.

Sadly, Westerners tend to assume that Arabs are, necessarily, Muslims, and moreover, that Muslims are a homogeneous bunch. Actually, 10 percent of Syrians are Alawites, members of a notionally Islamic sect that actually draws heavily from Christian and even Gnostic roots: they even celebrate Christmas. Locally, they were long known as Nusayris, “Little Christians.” Syria is also home to several hundred thousand Druze, who are even further removed from Sunni orthodoxy.

And then there are the Christians. If Christianity began in Galilee and Judea, it very soon made its cultural and intellectual home in Syria. St. Paul famously visited Damascus, and for centuries Antioch was one of the world’s greatest Christian centers. (The city today stands just over the Turkish border.) A sizable Christian population flourished under Islamic rule, and continued under the Ottomans. Muslim and Christian populations always interacted closely here. A shrine in Damascus’s Great Mosque claims to be the location of John the Baptist’s head.

Christian numbers fluctuated dramatically over time. A hundred years ago, “Syria,” broadly defined, was home to a large and diverse Christian population, including Catholics, Orthodox, and Maronites. In the 1920s, the French arbitrarily carved out the country’s most Christian sections and designated that region “Lebanon,” with its capital at Beirut.

In theory, that partition should have drawn a clear line between Christian Lebanon and non-Christian Syria. But Syria itself was changing in the aftermath of the catastrophic events of the First World War. The year 1915 marked the beginning of the horrendous genocide of perhaps 1.5 million Armenians, as well as hundreds of thousands of Assyrians, Maronites, and other Christian groups. After the war, Christians increasingly concentrated in Syria, where they benefited from French protection.

Arab Christians, though, were anything but imperial puppets. Determined to avoid a repetition of the horrors of 1915, Christians struggled to create a new political order in which they could play a full role. This meant advocating fervent Arab nationalism, a thoroughly secular order in which Christians and other minorities could avoid being overwhelmed by the juggernaut power of Sunni Islam. All Arab peoples, regardless of faith, would join in a shared passion for secular modernity and pan-Arab patriotism, in stark contrast to reactionary Islamism. The pioneering theorist of modern Arab nationalism was Damascus-born Orthodox Christian Constantine Zureiq. Another Orthodox son of Damascus was Michel Aflaq, co-founder of the Ba’ath (Renaissance) Party that played such a pivotal role in the modern history of both Iraq and Syria.

Since the 1960s, Syria has been a Ba’athist state, which in practice has meant the hegemony of the religious minorities who dominate the country’s military and intelligence apparatus. Hafez al-Assad (President from 1971 through 2000) was of course an Alawite, but by the 1990s, five of his seven closest advisers were Christian. His son Bashar is the current president, and America’s nemesis in the region.

Quite apart from their political influence, Christians have done very well indeed in modern Syria. Although they try to avoid drawing too much attention, it is no secret that Aleppo (for instance) has a highly active Christian population. Christian numbers have even grown significantly since the 1990s, as Iraqis fled the growing chaos in that country. Officially, Christians today make up around 10 percent of Syria’s people, but that is a serious underestimate, as it omits so many refugees, not to mention thinly disguised crypto-believers. A plausible Christian figure is at least 15 percent, or three million people.

To describe the Ba’athist state’s tolerance is not, of course, to justify its brutality, or its involvement in state-sanctioned crime and international terrorism. But for all that, it has sustained a genuine refuge for religious minorities, of a kind that has been snuffed out elsewhere in the region. Although many Syrian Christians favor democratic reforms, they know all too well that a successful revolution would almost certainly put in place a rigidly Islamist or Salafist regime that would abruptly end the era of tolerant diversity. Already, Christians have suffered terrible persecution in rebel-controlled areas, with countless reports of murder, rape, and extortion.

Under its new Sunni rulers, minorities would likely face a fate like that in neighboring Iraq, where the Christian share of population fell from 8 percent in the 1980s to perhaps 1 percent today. In Iraq, though, persecuted believers had a place to which they could escape, namely Syria. Where would Syrian refugees go?

A month ago, that question was moot, as the Assad government was gaining the upper hand over the rebels. At worst, it seemed, the regime could hold on to a rump state in Syria’s west, a refuge for Alawites, Christians, and others. And then came the alleged gas attack, and the overheated U.S. response.

So here is the nightmare. If the U.S., France, and some miscellaneous allies strike at the regime, they could conceivably so weaken it that it would collapse. Out of the ruins would emerge a radically anti-Western regime, which would kill or expel several million Christians and Alawites. This would be a political, religious, and humanitarian catastrophe unparalleled since the Armenian genocide almost exactly a century ago.

Around the world, scholars and intellectual leaders are debating how to commemorate the approaching centennial of that cataclysm in 2015. Through its utter lack of historical awareness, the United States government may be pushing towards not a commemoration of the genocide but a faithful re-enactment.

Even at this late moment, can they yet be brought to see reason?

Philip Jenkins is Distinguished Professor of History at Baylor University and serves as Co-Director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

After al-Qaeda

NEW PRESIDENT DECLARES VICTORY IN WAR ON TERROR—Patriot Act to be Repealed—Department of Homeland Security for Dissolution.

This will not be a headline in 2013, or anytime thereafter, because by its nature the War on Terror can have no end. If you are fighting a war, then you can envisage a victory in which the opposing force is destroyed. In the case of terrorism, particular movements might decline or vanish—and happily, al-Qaeda itself is on a downward trajectory—but terrorism as such is not going away.

Terrorism is a tactic, not a movement. As such, it can be deployed by states, movements, or small groups regardless of ideology. It is not synonymous with Islam, nor with Islamism. That runs contrary to the thinking of many supposed experts and media commentators, who see Islamic terrorism as the definitive form of the phenomenon. As Dennis Prager writes, “A very small percentage of Muslims are terrorists. But nearly every international terrorist is Muslim.” In this view, Islamist organizations are the standard by which all terror groups must be measured, the model imitated by rivals. If terror has a history, it will be found in the Islamic past—shall we start with the medieval Assassins? Or better, just list the index entry: “Terrorism: See Jihad”?

In reality, terrorism in its modern form has a long history in the West—over a century—but not until the 1980s did Islamists play any role, and virtually never as innovators or leaders. The history of terrorism is strikingly diverse, with perpetrators of every race, creed, and color. The modern phenomenon probably begins in the 1880s with Irish bomb attacks against England and with Russian leftists and European anarchists of the 1890s pursuing their cult of the bomb.

More recently, the decade or so after World War II was an era of notable creativity, as Zionist extremists pioneered many new strategies—truck bombs directed against hotels and embassies, attacks against buses and crowded public places. For a time, Zionist groups also led the way in international terrorism, with letter-bomb attacks on British soil, the bombing of the British embassy in Rome, and plots to assassinate foreign dignitaries such as German Chancellor Konrad Adenauer. The Algerian struggle of the 1950s popularized these innovations and spawned yet others.

But the golden age of terrorism occurred between 1968 and 1986. Arab and Middle Eastern causes drove a wave of global violence, making the “Arab terrorist” a stereotype as familiar then as it is today. Baby boomers recall the horrible regularity of waking up to hear of some new massacre of Western civilians, of kidnapping and hostage taking, and (with monotonous frequency) of attacks on airliners and transportation systems. They may remember the simultaneous hijacking and destruction of five airliners in Jordan in 1970—fortunately, without fatalities—or the massacre of Israeli athletes at the 1972 Munich Olympics.

Some attacks of this era stand out even today for their sadism and indiscriminate violence. In 1972, three Japanese tourists landed at Israel’s Lod Airport, where their nationality prevented them from attracting suspicion. They proved to be members of the Japanese Red Army, working in alliance with the Arab Popular Front for the Liberation of Palestine, the PFLP. Producing automatic weapons, they slaughtered everyone they could see in the terminal—26 civilians, mainly Christian Puerto Rican pilgrims. The following year, Palestinian guerrillas attacked Rome’s Fiumicino airport, throwing phosphorus grenades at an airliner and burning alive some 30 civilians. In 1974, Palestinian guerrillas killed 25 hostages in the Israeli town of Ma’alot. Horror was piled on horror.

The most notorious terrorist of the era was Palestinian mastermind Abu Nidal, as infamous in the 1970s and 1980s as Osama bin Laden has been in recent times. His career reached gruesome heights in the 1980s with a series of attacks that wrote the playbook for al-Qaeda. He specialized in simultaneous strikes against widely separated targets to keep security agencies off balance and win maximum publicity. Typical was the 1985 double-attack at the airports of Rome and Vienna in which 19 civilians were killed. Throughout the 1980s, the prospect of Abu Nidal obtaining a nuclear weapon alarmed intelligence services worldwide.

At this point, the identification of Islam with terrorism might appear to stand up well, with all these Arabs and Palestinians. Then as now, international terrorist actions tended to track back to the Middle East—but not to Islam. The militants of that era distanced themselves from any faith. Abu Nidal usually served Iraq’s secularist Ba’ath regime, which persecuted Islamists.

Like Abu Nidal himself, most Palestinian activists in those years were secular socialist nationalists, and Christians played a prominent role in the movement’s leadership. The most important Arab guerrilla leader of those years—a pioneer of modern international terrorism—was PFLP founder George Habash. He was an Eastern Orthodox Christian who eschewed religion once he became a strict Marxist-Leninist, dating his loss of faith to the expulsion of his family from their home by Israeli forces: “I was all the time imagining myself as a good Christian, serving the poor. When my land was occupied, I had no time to think about religion.” Abandoning his church certainly did not mean adopting Islam: his inspiration was not some medieval Islamic warrior but rather Che Guevara.

Habash’s story is emblematic. Also Orthodox was Wadie Haddad, who orchestrated the Dawson’s Field attacks and the 1976 airliner seizure that provoked Israel’s raid on Entebbe. Haddad, incidentally, recruited the once legendary Latin American playboy who earned notoriety as international terrorist Carlos “the Jackal.”

Equally non-Islamist were the PFLP’s several spinoffs, like the Maoist Democratic Front, DFLP, which murdered the hostages at Ma’alot. That faction’s leader, Nayif Hawatmeh, was born Catholic. Several Palestinian attacks in these years sought to put pressure on Israel to release its most prestigious captive, Melkite Catholic Archbishop Hilarion Capucci, jailed for running guns to the guerrillas. Only in the late 1980s, after the rise of Hamas, did an Islamist group take the lead in armed assaults on Israel.

Earlier Middle Eastern movements had no notion of suicide terrorism, which was, moreover, unknown to the Islamist militant tradition before about 1980. The movement that used suicide attacks most frequently and effectively, the Tamil Tigers, is in fact Sri Lankan and mainly Hindu-Marxist. In other cases too, hideous terrorist actions we have come to associate with Islamic extremism have clearly non-Islamic roots. Think for instance of those unspeakable al-Qaeda videos depicting the ritualized execution of hostages in Iraq and elsewhere. To quote Olivier Roy, one of the most respected European scholars of Islamist terrorism, these videos are “a one-to-one re-enactment of the execution of Aldo Moro by the Red Brigades [in Italy in 1978], with the organization’s banner and logo in the background, the hostage hand-cuffed and blind-folded, the mock trial with the reading of the sentence and the execution.”

Through the 1970s and 1980s, terrorism was kaleidoscopic in its political coloring. White Europeans, on the left and the right, made their own contributions. During the 1970s, Italian far rightists and neo-Nazis tried many times to carry out a mega-terror attack on that nation’s rail system. After several bloody attempts, they succeeded in killing 85 at Bologna’s central station in 1980. The United States, meanwhile, had its own domestic terrorist violence, as Puerto Rican separatists carried out deadly bomb attacks in New York and Chicago. And after so many years, Irish terror groups, Protestant and Catholic alike, still pursued their age-old traditions of violence directed against rival civilians.

By no means was international terrorism the preserve of Arabs, let alone Muslims. In 1976, an anti-Castro rightist group based in Florida blew up a Cuban airliner flying from Barbados to Jamaica, killing 76. Prior to 9/11, the dubious record for the worst terror attack in history was held by the Sikh group that destroyed an Air India 747 in 1985, killing 329 innocent people. So commonplace were international attacks, and so diverse, that when a bomb killed 11 people at New York’s La Guardia airport in 1975, the possible perpetrators were legion. (The current best guess points to Croatian opponents of Yugoslavia’s Marshal Tito.)

Where, amidst all this bloodshed, were Islamist terror groups? They added little to the story prior to the rise of Hezbollah during the Lebanese civil war, with the bombing of the U.S. embassy in Beirut in 1983 and the subsequent attack on the Marine barracks. Only from the early 1990s do we find fanatical Sunni networks spreading mayhem around the world, including the early actions of al-Qaeda.

This chronology raises interesting questions for understanding the roots of terrorism. If Islam is so central to the phenomenon, we need to explain why Muslim terrorists should have been such latecomers. Why were they not the prophets and pioneers of terrorism? Why, moreover, did they have to draw all their tactics from the fighters of other religions and of none—from Western anarchists and nihilists, from the Catholic IRA and Latin American urban guerrillas, from Communists and fascists, from Zionist Jews and Sri Lankan Hindus?

Apart from the crucial element of suicide bombing, al-Qaeda brought little to the international terrorist repertoire. The Madrid rail station attack of 2004 neatly replayed the fascist strike at Bologna, while even 9/11 borrowed many elements straight from Abu Nidal, including the simultaneous targeting of multiple airliners. In its methods and strategies, the modern terrorist tradition owes much to the Marxist tradition—to Lenin, Guevara, and Mao—and next to nothing to Muslims.

None of these points should come as a surprise to anyone who remembers the 1970s and 1980s. In its day, the Dawson’s Field affair of 1970 transfixed global media almost as much as the 9/11 enormity did a decade ago. So did the Munich Olympics attack of 1972, or the 1976 saga of the hostages at Entebbe. It’s remarkable to see how readily modern audiences credit suggestions about the novelty of international terrorism or its association with Islamist groups. Particularly startling is how thoroughly Americans have forgotten their own terrorist crisis of the mid-1970s. How can something as horrendous as the La Guardia massacre have vanished from public memory? And is it really possible that the once satanic name of Abu Nidal carries next to no significance for anyone below the age of 50? There is no better illustration of how present-day concerns have eclipsed the older realities.

Terrorism can be used by groups of any ideological shade. The scale or intensity of terrorist violence depends on the opportunities available to militants and the potential opposition they face from law-enforcement agencies. By these criteria, Western nations will continue to be subject to attacks, and those events will follow precedents that we have witnessed over the past 40 years.

However hard we try, we cannot make our society invulnerable. The more we think about the gaps in our defenses, the more astonishing it is that incidents have occurred so rarely. If you fortify aircraft, terrorists attack airports; if you fortify airports, they can bring down aircraft with missiles; if you secure all aircraft, they attack ships; if you defend all public transportation, they undertake massacres in malls and sports stadiums.

Armed groups need only a handful of shooters and bombers to create havoc. The Provisional IRA probably never had more than 500 soldiers at any time, while the Basque ETA peaked around 200—supported, of course, by a larger penumbra of sympathizers. Both maintained campaigns spanning 30 or 40 years. A group of just ten or 20 militants can keep a devastating effort going for a year, and until they are hunted down they can convince a powerful Western nation that it is suffering a national crisis.

No government can defend itself against terrorism solely by enhancing security. Ultimately, defense must always rely on effective intelligence, which means surveillance of militant groups and their sympathizers, infiltrating those groups, and winning over informants. The fact that attacks on U.S. soil have been so rare means that our intelligence agencies have been doing a pretty good job.

But there will always be vulnerabilities. However thoroughly agencies maintain surveillance on potential troublemakers, on occasion they will fail to mark those individuals who have made the transition from isolated blowhards to dedicated killers. By definition, they are most likely to err when confronting someone who does not fit the profile of the time—when, for instance, the suspect is a white Nazi rather than an Arab Muslim or vice versa. At some point a bomber or assassin—an Anders Breivik, a Timothy McVeigh, or a Mohamed Atta—will slip through, with catastrophic results.

We might call this the Apache Theory of terrorism. Of all the enemies the U.S. faced during its wars against Indian tribes in the 19th century, the Apaches were the most determined and resourceful. When nervous white residents of the Southwest asked, “How many Apaches are hiding in this room right now?” the answer was always, “As many as want to.” Will there be terrorism in the U.S. or Europe? If enough people want to perpetrate it, some will get through.

And who are the new Apaches who might someday surpass the Islamist menace? While prophecy would be foolhardy, we know enough about the history of terrorism to suggest some areas of danger.

One peril is that old causes now quiescent will again spring to life. In the United States, that could mean the ultra-right groups that have such a lengthy record of activism. Presently they are close to inactive, and the menagerie of largely harmless militia groups serves mainly to provide bogeymen for leftist speculation. But that could change overnight: Oklahoma City was the work of one cell.

European groups could also revive, especially if the continent descends into economic anarchy. Imagine poorer nations like Ireland and Greece driven to ruin by what they see as exploitation by Europe’s financial elite. Given the long experience of the Irish with direct militant action, do we think they will do nothing? Diehard IRA elements have for years threatened to renew their attacks on England, but their impact would be massively greater if they targeted European financial or political centers like Frankfurt or Strasbourg. Across the continent, economic collapse could reawaken ethnic hatreds we thought had perished with the Habsburg Empire.

Nor has Europe’s neo-fascist tradition vanished. Although the media treated Breivik as a loner, he stands in a long and bloody tradition, one especially strong in those southern European nations most vulnerable to financial collapse. In the 1970s and 1980s, both left- and right-wing militants in Italy made bizarre deals to obtain weapons from Middle Eastern sources, including Iran and Libya. Who is to say those connections are extinct?

Terrorism also continues to be a weapon of state power, a covert means for achieving goals that cannot be obtained through the open exercise of force. In different forms, state sponsorship has always been key to terror movements. Even in tsarist days, the Russians freely used terrorist proxies, and Mussolini’s secret service, the OVRA, honed this tactic to a fine art. While the Soviet KGB was legendary for arming and funding extremist groups, it was absolutely not unique.

Some countries have even used the tactic as barefaced extortion. Through the 1980s, you could tell when an Arab Gulf state had fallen behind on money it owed Saddam Hussein because the mysterious “Abu Nidal Organization” would leap into action with an assassination or airliner bombing. When Mideast countries engaged in actual war—as Iran and Iraq did through the 1980s—they used their overseas proxies to promote clandestine goals. In retrospect, many of the terror attacks on European soil in the mid-1980s seem intended to persuade Western nations to supply arms to one or the other of the combatants in the Iran-Iraq conflict.

For some 40 years now, Libya, Syria, and Iran have sponsored surrogate terrorist movements worldwide. Arguably, the weaker those regimes become, the more likely they will be to use those proxies to strike out at opponents, including the United States. Of course, attacks will not carry a brand identifying the country responsible; a strike would come under cover of some bogus front or Islamist cell. As long as unscrupulous states wish to exert pressure on others—to embarrass them, to force them to take steps they do not want, to make their position in some region untenable—we can expect to see terrorism used as a form of proxy war.

That means terrorism will be with us as long as the world knows ethnic hatred and social division—which is to say, until the end of humanity. The phenomenon cannot be ended entirely, but individual movements certainly can be defeated and suppressed. And we should not imagine “terrorism” as a monolithic enemy that demands we militarize our whole society to meet the challenge.

Above all, we should not forget the lessons of the past. However appalling it might be to study individual groups or incidents, in the long term the story of terrorism contains a surprisingly positive lesson. Terrorism can inflict dreadful harm on a society, even claiming thousands of lives. But in the overwhelming majority of instances, these movements are not only beaten but annihilated—so thoroughly, in fact, that later generations forget they even existed.

Philip Jenkins, Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University, is the author of Images of Terror and Jesus Wars.


Three Martyrs

Please join with me in commemorating a group of three British Muslim martyrs. Seriously.

Haroon Jahan, Abdul Nasir, and Shazad Ali died Tuesday night in Birmingham’s impoverished Winson Green area. After two days of rioting, looting, and casual arson, mainly by black gangs, the local community despaired of seeking help from a police force that was not making the slightest effort to intervene to defend them. As the small businessmen and shopkeepers of the area, the local South Asian community had most to lose. Organizing from the local mosque, they dispatched groups of young volunteers to patrol the area. A speeding car hit a group of these community defenders, killing three. (The driver has been charged with murder.) The victims were classic hard-working immigrants: one was a mechanic, another ran a car wash. In the words of one observer, “They lost their lives for other people, doing the job of the police. They weren’t standing outside a mosque, a temple, a synagogue or a church – they were standing outside shops where everybody goes. They were protecting the community as a whole.”

If you have been following media coverage of the British riots, you have seen a great many explanations of the violence, including such classic theories as urban deprivation, youth unemployment, and anger at police racism, and all have some substance. What has been fascinating this time round is to see how even the most mainstream liberal outlets – even the New York Times – have focused on the vicious hooliganism and criminality driving the mobs, how they are driven not by an inchoate rage against injustice but by strictly rational desires for high-class consumer goods. Some even remark on the growth of “feral” gangs of young people, black and white.


Myth of a Catholic Crisis

Is the Roman Catholic Church a cover for the world’s largest criminal sex ring? Over the past few months, a steady stream of news stories seems to have confirmed the bleakest possible vision of global conspiracy, the most extreme claims of anticlerical propaganda through the ages. Even moderate commentators are writing as if priests around the world have taken secret vows of conspiracy, perversion, and omerta. Worse, this deviance is allegedly built into the church’s structures of command and control. According to the darkest visions, clergy are almost encouraged to pursue careers of abuse and pedophilia, secure in the knowledge that their crimes will be sheltered by fellow molesters in the hierarchy, all the way to the Vatican itself, with Pope Benedict as the boss of all bosses. Suddenly, even the rants of Maureen Dowd and Katha Pollitt appear almost plausible.


If all this seems far-fetched, it is. Sexual abuse by clergy is a reality, and a real problem demands a response. But the problem is vastly different from that described so enthusiastically by the media, and most of the critical measures have already been taken.


Although the alleged crisis is now being portrayed in global terms, I will focus on the U.S. experience because this is by far the most intensely studied aspect. The American abuse scandal, now a quarter-century old, has produced rock-solid quantitative evidence that allows us to make general statements about abuse by clergy and to dispel myths.


Most tellingly, we can say one thing quite confidently, however strongly it goes against prevailing wisdom: there is no credible evidence that Roman Catholic clergy abuse young people at a rate different from that of clergy of any other denomination or from members of secular professions who deal with children. If anyone believes that such evidence exists, the burden is upon him to present it.


By far the best quantitative evidence derives from the survey carried out by John Jay College of Criminal Justice in New York in 2004, entitled “The Nature and Scope of the Problem of Sexual Abuse of Minors by Catholic Priests and Deacons in the United States.” Specifically, it examined all plausible complaints of sexual abuse by U.S. clergy between 1950 and 2002, a cohort of around 110,000 men. Although this study was sponsored by the U.S. Conference of Catholic Bishops, the researchers were independent, and the final report was widely praised.


By social science standards, this was an impressively thorough study, and the sample size was immense. Obviously, the John Jay researchers failed to detect many cases, including those that had not come to light by 2004 and other acts that would never be reported. But they worked hard to compensate for such omissions by using a strikingly low standard of proof for the allegations that were known: investigators counted every charge “not withdrawn or known to be false,” a threshold that excluded an allegation only in cases of effectively total exoneration. The list thus includes allegations that would not have surfaced except in the furor of 2002-03, following the dreadful scandals in the Boston Archdiocese.


A couple of points leap out about the allegations, particularly about the image of the “pedophile priest” pursuing his decades-long career of crime under the de facto protection of the Church. The John Jay study concluded that in this period, perhaps 4 percent of all U.S. priests had been plausibly accused of at least one act of sexual misconduct with a minor. But of the 4,392 accused priests, almost 56 percent faced only one misconduct allegation, and at least some of these would certainly vanish under detailed scrutiny.


Very few of the accused priests were pedophiles, in the sense of having abused a minor under the age of puberty, say 12 or 13 for a boy. In the U.S. at least, the great majority of cases of sexual misconduct by priests involve older boys, often aged between 15 and 17, or even older. This behavior is illegal, harmful, and sinful, but it is not pedophilia. The technical name for this kind of act is ephebophilia, but many would call it pederasty or even homosexuality. Drawing this distinction certainly does not excuse or minimize the behavior, but it is critically important for understanding the statistics. Pedophiles are compulsive offenders who are highly likely to repeat their acts, often claiming hundreds of victims. The fact that true pedophile priests formed such a minority of offenders meant that the overall number of victims was mercifully far smaller than it might have been.


Pedophile priests certainly did exist, but in tiny numbers. At the heart of the clergy abuse crisis was a core of highly persistent serial pedophiles who massively “over-produced” criminal behavior; some were the targets of hundreds of plausible complaints. Out of the roughly 110,000 priests active in the U.S. in this half-century, a cadre of just 149 individuals—one priest out of every 750—accounted for over a quarter of all the allegations of clergy abuse. These 149 super-predators also explain the surprisingly large number of very young victims that the study reported. The average age of victims for the whole era has been skewed sharply downward by the sizable number of children assaulted by these reprehensible serial pedophiles.


Nor was clerical misconduct a persistent or steady-state phenomenon, as we would expect if abusive behavior resulted inevitably from the agonies of the celibate lifestyle. In the U.S. at least, recorded malfeasance was quite rare until an explosion of criminal activity in one short period, namely between 1975 and 1980. These six years accounted for an astonishing 40 percent of all the alleged acts of clerical abuse in the 52-year period under examination. Just why these years were so horrific is open to debate, but there seems to have been a sharp decline in the moral and disciplinary controls that higher authorities exercised over priests. Clergy in the 1970s were also exposed to powerful social pressures encouraging sexual experimentation and to the sense that old injunctions against adultery or pederasty were destined to perish in a new age of ethical relativism; some priests succumbed to temptation. Of the priests ordained in 1970, a startling 10 percent would ultimately be the focus of abuse allegations. But the crisis was a byproduct of a specific historical era, not of some essential quality of clerical status or of the Church’s structures.


Let’s put all this in context. In any given year between 1950 and 2002, the Catholic Church in the United States averaged around 50,000 priests, serving 45 to 55 million members. Assuming all the charges reported by the Jay study were true, then each year an average of around 200 children were abused or molested by priests nationwide. Obviously, given what we know about the under-reporting of molestation, that figure must be a gross underestimate, and even if it were not, the problem would still be appalling: 200 instances of priestly victimization is 200 too many. But the documented evidence for clerical crime is far less extensive than is widely believed. Even in the overheated and litigious atmosphere following the Boston scandals, the Jay study reported no allegations against 24 priests out of every 25.
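
For readers who want to check that these figures hang together, here is a minimal back-of-the-envelope sketch in Python, using only the numbers already quoted above; nothing in it goes beyond the study’s own arithmetic.

    # Consistency check using only figures quoted in the text (all approximate).
    priests_total    = 110_000  # priests serving between 1950 and 2002
    priests_accused  = 4_392    # priests facing at least one plausible allegation
    serial_core      = 149      # highly persistent serial offenders
    victims_per_year = 200      # rough annual average of reported victims
    years            = 52       # 1950 through 2002

    print(f"Accused: {priests_accused / priests_total:.1%}")            # about 4 percent
    print(f"Serial core: 1 priest in {priests_total // serial_core}")   # about 1 in 738
    print(f"Never accused: {1 - priests_accused / priests_total:.0%}")  # 96 percent, or 24 of 25
    print(f"Implied victims: about {victims_per_year * years:,}")       # about 10,400

The ratios line up: about 4 percent of priests accused, roughly one in 750 in the serial core, 24 of every 25 never accused at all, and on the order of 10,000 reported victims across the whole half-century.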


To say that X percent of Catholic priests engaged in abuse or molestation might be troubling, but the figure is meaningless unless we can compare it with that for some other group. Suppose, for the sake of argument, that we could say confidently that priests abuse at a rate 10 or 100 times that of Presbyterian ministers or Jewish rabbis, or of the male population as a whole. Then we could begin to seek the roots of the Catholic problem, whether we located them in the fact of celibacy or in the secretive clerical subculture. Unfortunately, we have not the slightest point of comparison with any other group. As a result of the furious investigations of the past decades, and particularly the Jay study, the U.S. Catholic clergy are now the only major group on the planet that has ever been subjected to such a detailed examination of abuse complaints, using internal evidence that could not have come to light in any other way. Nothing vaguely comparable exists for other groups: not for Presbyterian pastors, not for Lutheran clergy, not, indeed, for journalists.


Actually, that is not entirely true. Before commenting on the priestly situation, any observer should read the writings of Professor Charol Shakeshaft of Virginia Commonwealth University, who for years has been studying sexual and physical abuse by America’s public-school teachers. The volume of misconduct she reports is staggering and far exceeds the rate of documented abuse by Catholic clergy. Hard as it may be to imagine, public schools sometimes deal with their problem faculty by quietly transferring them to other institutions without warning the new employers of the dangers they face. It sounds a lot like the worst charges against Catholic dioceses, doesn’t it? Thank heaven we don’t worry too much about the sexual dangers facing our children in the schools, or else we might have to think seriously about this issue.

So if Catholic priests are no worse than other professions in this regard—and maybe a lot better—why do we hear so much about them being abusers? Several reasons explain this focus, and none of them necessarily reflects any anti-Catholic bias in courts or media. By far the most important is the way in which cases come to light: through civil litigation. An individual accuses a particular priest of abuse, and quite possibly, the charge is perfectly true. Lawyers then use that case as a means of forcing a diocese to disclose ever more information about past charges against other priests, which might date back into the 1940s or ’50s and which can also lead into other jurisdictions. One case thus becomes the basis for a whole network of interlocking investigations, which proceed ad infinitum. The Catholic Church suffers acutely from its pack-rat character: it is a highly bureaucratic institution that prides itself on preserving its records and its institutional continuity.


In contrast, imagine a charge against a Baptist or Pentecostal minister, who has no such institutional framework and little institutional memory, whose church has no deep pockets, so that the case begins and ends with him. Not to pick on any particular denomination, but stories of abuse by clergy of all sorts surfaced regularly through the 1990s, until most groups became massively more proactive in preventing and detecting abuse threats. Partly the new vigilance reflected intensified consciousness of threats to children, but at least as significant were the demands of insurance companies: either you adopt stringent new policies to safeguard minors, or kiss your liability protection goodbye. That was an offer no church could reasonably refuse.


For Catholics, though, with their distinctive structural set-up, the new environment offered no protection from old allegations that continued to surface, often involving alleged acts from 40 or 50 years ago. Even today, Catholic churches are still trying desperately to defend their actions in the distant past, when social attitudes to child sexual abuse were radically different from what we today regard as normal. In those bygone years, molestation was trivialized in both expert and public opinion, and offenders were commonly treated with kid gloves. Only the Catholic Church, however, is held to account for the decisions it took in this very different world of so long ago. Only the Catholic Church is subjected to the unforgiving standards of 20/20 hindsight.


Catholics, like other denominations, have made massive progress in preventing abuse by clergy. In the U.S. at least, very few of the cases that have come to public attention in the past few years refer to acts alleged to have occurred since 1990. Yet litigation resulting from earlier eras means that “pedophile priests” remain in the news almost daily, and that fact shapes (and mis-shapes) popular stereotypes.

Europe is not the U.S., and it is difficult to generalize across countries within Europe. Legal systems differ, as do social assumptions and sexual attitudes. Theoretically, it is possible to imagine that in some particular nation, the Catholic clergy became so vicious and corrupted that they preyed systematically on the young and conspired to hide their misdeeds. But any awareness of the American situation, and the florid mythology it has produced, must make us very careful about giving credence to any such nightmare interpretation.
__________________________________________

Philip Jenkins is Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University. He is the author of Pedophiles and Priests: Anatomy of a Contemporary Crisis and, most recently, Jesus Wars.


Third World War

Nobody is sure how it started. Perhaps Christian activists sent text messages warning that Muslims were trying to poison them. Maybe Muslims tried to storm a church. Whatever the cause, the consequence this past January was mayhem for the Nigerian city of Jos. Muslim-Christian rioting killed up to 500 people before the government intervened with its customary heavy hand.


The most striking point about these battles was that nobody found them striking. In Jos, as in countless other regions across Africa and Asia, violence between Christians and Muslims can erupt at any time, with the potential to detonate riots, civil wars, and persecutions. While these events are poorly reported in the West, they matter profoundly. All the attention in the Global War on Terror focuses on regions in which the U.S. is engaged militarily, but another war is raging across whole continents, one that will ultimately shape the strategic future. Uncomfortably for American policymakers, it is a war of religions and beliefs—a battle not for hearts and minds but for souls.

This is not to argue for an irreconcilable Clash of Civilizations, still less a struggle between Christian good and Muslim evil. In any African country divided between the two faiths—and that includes most lands south of the Sahara—day-to-day interfaith relations are remarkably good. Many families are amicably divided between Christians and Muslims and take great care to avoid sources of conflict. Business or political meetings commonly begin with prayers, and it is no great matter whether a pastor or a mullah leads them.


Yet over the past century, the spread of new religious forms worldwide has created the potential for violence wherever a surging Christianity meets an unyielding Islam. Riots such as those in Jos are one result; terrorism is another. Generally, Muslims have been the aggressors in recent conflicts, but Christians have their own sectarian mobs and militias.

However blame is apportioned, the two faiths have been at daggers drawn, often literally, for decades. As Eliza Griswold discusses in her forthcoming book, The Tenth Parallel, you can trace the fault by following the latitude line of ten degrees North. (Jos, conveniently, stands almost exactly at ten degrees.) A tectonic fault of religious and cultural confrontation runs across West and Northeast Africa, and on through Southeast Asia, Indonesia, and the Philippines. A decade ago, Indonesia witnessed some of the worst fighting, as Muslim militias launched bloody assaults on that nation’s Christian minority, some 25 million strong. For decades, the overwhelmingly Christian Philippines has suffered constant insurgency from a ruthless armed movement concentrated in the Muslim south. Mob attacks and pogroms have raged in Malaysia. In Africa, the Sudan is probably the best-known theater of mass martyrdom, while Nigeria remains deeply polarized. And that is not to mention ongoing killings in countries like Uganda and Kenya.


Humanitarian concerns apart, there are plenty of reasons for the West to be deeply worried about these conflicts. Nigeria has almost 160 million people and by 2050 is expected to have 300 million, making it one of the world’s most populous nations. If it ever escapes from its present political horrors, it will be the obvious leader of sub-Saharan Africa. Nigeria also matters enormously in terms of natural resources. It is the third largest source of U.S. crude-oil imports, ahead of Saudi Arabia. Other up-and-coming oil suppliers in West and Central Africa are also among the religiously divided nations. Meanwhile Indonesia, with 240 million people, is already a population giant, and unlike Nigeria, it seems set for serious economic development in the coming decade.


If such massive countries ever became monolithically Muslim, that would be significant enough for the West, especially because these states wield such cultural influence over their neighbors. But if they fell into the hands of a radical form of Wahhabi or Salafist Islam, that would be an epochal catastrophe. Conversely, imagine a world in which Christians predominated in these influential Global South nations. That would decisively shift the world’s balance of forces in pro-Western directions.


The relationship between Christianity and Islam poses a challenge for at least half of the 20 nations expected to have the world’s largest populations by 2050. By present projections, three of these future mega-states—Nigeria, Ethiopia, and Tanzania—will be almost equally divided between the two faiths. In several others, like the Congo, the Philippines, Russia, and Uganda, predominantly Christian nations will have Muslim minorities of 10 percent or more. Mainly Muslim states will coexist with comparable Christian sub-populations in Indonesia, Egypt, and the Sudan. In all of these places, if relations between the faiths do not improve over the next 40 years, prospects for civil order are terrifying. The world’s roster of failed states would have several new members.


Why the hostility? What are Christians and Muslims fighting about in Nigeria and Malaysia, Uganda and the Philippines? Western readers will think back to Samuel Huntington’s Clash of Civilizations, a richly provocative idea. But the notion of a world divided among vast religious-cultural blocs assumes that these units remain fairly constant, so that tension occurs only along their periphery. Yet cultural blocs change dramatically within their borders as well, and we are presently living through a dizzying era of shifting boundaries.


Look at those rapidly growing countries and think of how these burgeoning Christian heartlands might have struck an observer in 1914. Why are Christians so numerous in Africa and Asia? Did the past century witness a global religious revolution? The answer, of course, is yes, however dimly Westerners may be aware of it. Muslims have certainly seen the trend. One factor driving Islamic militancy in many nations is the sense that Christianity is growing. Outside of the West, evangelism and conversion are two of the most sensitive issues in the modern world.


Christianity, which a century ago was overwhelmingly the religion of Europe and the Americas, has undertaken a historic advance into Africa and Asia. In 1900, Africa had just 10 million Christians, representing around 10 percent of the continental population. By 2000, that figure had swollen to over 360 million, or 46 percent of the population. Over the course of the 20th century, millions of Africans transferred their allegiance from traditional primal faiths to one of the two great world religions, Christianity or Islam—but they demonstrated an overwhelming preference for the former. Around 40 percent of Africa’s population became Christian, compared to just 10 percent who chose Islam. As Muslims had earlier far outnumbered Christians, the result was to transform a massive Muslim majority into a reasonably equal confessional balance. Africa today is about 47 percent Christian, 45 percent Muslim, and some 8 percent followers of primal religions.


To appreciate this transformation, consider Nigeria. In 1900, the lands that would become that nation were about 28 percent Muslim and 1 percent Christian. Confident in their numbers, Muslims did not need even to think about Christians as rivals. For Muslims, the pagan population represented an inferior state of being, peoples to be ruled and, often, enslaved. One day in the future, the heathens might join the modern religious world, but it would be the world of Islam. But then things went wrong. By 1970, Muslims had increased their share of the population to 45 percent. But that 1 percent Christian minority had expanded incredibly, also to 45 percent. A land that seemed firmly under Muslim hegemony was suddenly split down the middle.


The question now was just how much further Christian numbers could grow. If you extrapolate recent Christian growth into the near future, no Muslim majority seems safe, even in a place like Nigeria, where some polls in recent years have suggested an outright Christian majority. (More conservative estimates register around 46 percent.) Even nonpolitical Muslims worry: might their grandchildren be kaffirs? Worse, these newer Christians are not like the minority communities familiar in a Middle Eastern context, groups like the Egyptian Copts, who of necessity were politically quietist: the new African believers are dynamic and expansionist. The most successful follow energetic Pentecostal and evangelical forms of faith rather than the sober liturgical habits of older groupings.


The new believers draw on Western, and specifically American, forms of evangelism, marketing their faith through videos and DVDs. They organize crusades and mass meetings for prayer and healing that can draw 2 million believers together in a single venue. For nervous Muslims, the Christian threat was epitomized by the legendary “Jesus” video, originally a British film biography produced in 1979, but subsequently promoted around the world. As a weapon of mass instruction, it has few equals. Christians in Jos or Jakarta would approach Muslims and offer to show them a really interesting film about the prophet Jesus. Many accepted the invitation, and some then decided to follow the Christian way rather than the path of Islam.


Christianity also attracted independent-minded women. In traditional societies, conversion occurred when the head of a clan or family accepted a new religion and brought his kin with him. Now, when a patriarch accepted Islam, youngsters demurred, preferring to seek personal salvation in Christianity. And, a step once inconceivable, women even refused to accept arranged marriages to suitable Muslim men. Religious splits became family feuds, escalating the potential for malice and retaliation.


Few Asian countries have seen anything like the Christian growth that characterizes Africa, but here, too, religious change generates social tensions. In lands like Indonesia and Malaysia, Christianity has been associated above all with minority communities, especially the Chinese, whom majority Muslim groups hate and fear for being rich, clannish, and arrogant. Economic crises, such as the Asian financial crash of 1997-98, ignite ethnic conflicts that take on a religious coloring.


In different societies, then, booming Christianity came to be associated with a variety of perils: the breakup of traditional communities, individualism, women’s independence, and everything identified with “the West”—libertarianism, sexual explicitness, and cultural aggression. When the Pentecostal movement reached full force, all these trends began to look like a juggernaut that might overwhelm familiar cultures. From an Islamic viewpoint, these things might be troubling enough if they were happening on the traditional Muslim-Christian frontier—say, in the Mediterranean—but suddenly Christian expansion was accelerating in what should have been dependable Muslim territory.


This was the package of nightmares that faced Muslim communities from the 1970s onward, at exactly the time that a new countermovement, quite as radical in its own way, emerged from the Middle East. The key date was 1979, the year of the Iranian Revolution, but also of the radical seizure of the Grand Mosque in Mecca. The Saudi regime survived that assault but in a chastened mood. Anxious to prevent a repeat performance, the Saudis made their devil’s bargain with the Islamists: go and do what you like around the world, and we will bankroll you, but stay out of our own beloved kingdom. That was the point at which Gulf oil money began rolling around the Muslim world, funding mosques and madrassas following the hardest of Islamist lines. By the end of 1979, the Soviet Union had invaded Afghanistan, sparking a war that would become a vehicle for training jihadis worldwide.

The outcome was a new and highly militant form of Islam, impatient with old-style moderate forms of faith and fanatically opposed to Christian incursions into continents seen as Muslim realms. For these militants, the growth of Christianity was proof of the failure of the old Muslim regimes. In the words of radical theorist Sayyid Qutb, these regimes had shown themselves infidels at heart, and it was up to true Muslims to condemn them as such (takfir) and remove themselves spiritually (make hijra) to a new and purer activism. In 1989, a revolutionary Islamist regime took power in the Sudan. The same year, at Abuja in Nigeria, a conference on Islam in Africa outlined a program for successful Islamization. That event entered Christian folklore, and one does not have to travel far on the continent to hear claims of all manner of secret plans to destroy Christianity across Africa and create a caliphate. If Islamists denounce the Christians as tools of America, Christians everywhere see the hand of Riyadh.

In many countries, Islamist sects formed militias, some affiliated with the nascent al-Qaeda. In 1993, for instance, Indonesian extremists formed the terrorist organization Jemaah Islamiyah, which would be responsible for the 2002 bombings that killed 200 in Bali. One of the deadliest anti-Christian groups in West Africa has been the al-Qaeda-linked “Nigerian Taliban,” known to themselves as the muhajiroun—those who make hijra.


When we see interfaith battles in Africa or Asia, we are generally not witnessing activism by al-Qaeda militants directed from some secret terrorist mission control, but we do find movements driven by exactly the same grievances that motivate bin Laden’s associates—above all, the same central fear of Christian expansion. For many Muslims, whether political dissidents or actual Islamists, the world is evidently engaged in a culture war, a war of faiths, and groups like al-Qaeda are only one small and sensationalized portion of it. Christians likewise know the stakes. Educated African believers look back with trepidation at the great Christian churches that flourished in the northern regions of the continent 1,500 years ago, churches that would be snuffed out under Islamic rule. They are determined not to let that disaster be repeated.

This culture clash, so crucial to the fate of whole continents, has not impinged on the American consciousness. Stunningly, the crying need for interfaith peace in Africa and Asia featured not at all in Barack Obama’s much-touted speech in Cairo last June. Of course, American options are limited. The more that Western nations try to interfere directly in defense of Christians, the easier it is for Muslims to portray their enemies as imperialist agents. That is not a counsel of despair. American administrations can achieve something by pressuring allegedly friendly regimes like the Saudis to stop sponsoring anti-Christian propaganda across the Global South. But ultimately, resolving this conflict will depend on Africans and Asians themselves—if only Washington and Riyadh can refrain from pouring fuel on the hostilities.

__________________________________________

Philip Jenkins is Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University.


Terror Begins at Home

Since the New Deal, fears of terrorism and subversion have played a central role in U.S. political life. But the ways in which government and media conceive those menaces can change with astonishing speed. Such tectonic shifts usually occur because of the ideological bent of the administration in power. When a strongly liberal administration takes office, it brings with it a new rhetoric of terrorism, and new ways of understanding the phenomenon.


Based on the record of past Democratic administrations, in the near future terrorism will almost certainly be coming home. This does not necessarily mean more attacks on American soil. Rather, public perceptions of terrorism will shift away from external enemies like al-Qaeda and Hezbollah and focus on domestic movements on the Right. We will hear a great deal about threats from racist groups and right-wing paramilitaries, and such a perceived wave of terrorism will have real and pernicious effects on mainstream politics. If history is any guide, the more loudly an administration denounces enemies on the far Right, the easier it is to stigmatize its respectable and nonviolent critics.


It’s difficult to understand modern American political history without appreciating the florid conspiracy theories that so often drive liberals, and by no means only among the populist grassroots. Time and again, Democratic administrations have proved all too willing to exploit conspiracy fears and incite popular panics over terrorism and extremism. While we can mock the paranoia that drives the Left to imagine a Vast Right-Wing Conspiracy, such rhetoric can be devastatingly effective—as we may be about to rediscover.


Long before Sept. 11, 2001, America experienced repeated outbreaks of concern over terrorism. In terms of shaping liberal perceptions, the most important was that of the FDR years, when anti-government sentiment spawned a number of extremist organizations. Some were “shirt” groups, modeled on European fascists—America, too, had its Black Shirts and Silver Shirts—while the German-American Bund attracted Hitler devotees. Isolationism and anti-Semitism drew some urban Irish-Americans into the Christian Front, while the Klan experienced one of its sporadic revivals. Beyond doubt, far-Right extremism did exist, and these movements had their violent side, to the point of organizing paramilitary training. A few plotted real terrorist acts.


But the public response was utterly out of proportion to any danger these groups posed. From 1938 through 1941, the media regularly presented stories suggesting that the U.S. was about to be overwhelmed by ultra-Right fifth columnists, millions strong, intimately allied with the Axis powers. (Actual numbers of serious militants were in the low thousands at most.) Reportedly, the militant Right was armed to the teeth and plotting countless domestic terror attacks—bombings in New York and Washington, assassinations and pogroms, the wrecking of trains and munitions plants. Plotters were rumored to have high-placed allies in the military, raising the specter of a putsch. The ensuing panic was orchestrated by newspapers and radio and reinforced by films, newsreels, and comic books. Historians characterize these years as the Brown Scare.

If the more bizarre accusations sound like the common currency of the show trials in Stalin’s Russia in these very years, that is no coincidence. The main exposés of fascist conspiracy emanated from Communist Party journalists like Albert Kahn and John Spivak. (Spivak himself was an operative for the Soviet NKVD.) Charges circulated through Kahn’s newssheet The Hour before being picked up in the liberal press. The Red agenda was straightforward: the Brown Scare allowed the Left to discredit any opponent of radical New Deal policies. Scratch the surface of any enemy of the Left, they claimed, and you would find a fascist spy, a lyncher, a storm trooper.

Leftist scaremongering worked to the advantage of a Roosevelt White House anxious to promote U.S. intervention in the coming war. The administration supplied many of the leaks that supported the Brown Scare, through Roosevelt aides like Harold Ickes and also the FBI. In 1940, the FBI announced that it had broken what it touted as a looming coup d’état by the Christian Front, one that would have been accompanied by murders, bombings, and pogroms. Meanwhile, FBI mole Avedis Derounian undertook the research that would lead to his 1943 bestseller, Under Cover, published under the pseudonym John Roy Carlson. In both cases, however, the terrorist conspiracies were much less terrifying than they initially seemed. Try as it might, the government could never connect the Christian Front plot to more than a couple of dozen activists with no access to significant weaponry. Nor did Derounian’s revelations point to any serious conspiracy, and the government glaringly failed to convict national far-Right leaders on sedition charges.


However thin the underlying charges, the Brown Scare clearly helped to promote a New Deal agenda at home and interventionism overseas. For interventionists, the Terror Crisis suggested that fascist powers already were attempting to subvert America, forcing the nation to confront the foreign danger. Above all, the scare provided a powerful weapon for defaming anyone on the Right who opposed FDR’s drift to war. Targets included not only isolationist senators and congressmen but also the potent antiwar organization America First, which drew support from a broad and reputable cross-section of public opinion—conservative, liberal, and socialist, Catholic and Protestant. By 1941, though, the antiwar movement was battered by allegations of fascist and anti-Semitic ties. Under Cover portrayed America First as an aboveground front for the most extreme and lethal paramilitary fascist groups. As so often before and since, a burgeoning antiwar movement was crippled by charges that it was covertly allied with the nation’s enemies. So successful was this tarring that in popular memory, America Firsters stand alongside Nazis and Klansmen as traitors, subversives, and bigots. In terms of achieving its goals, the Brown Scare worked superbly.


Such scares have occurred twice since FDR’s day—in the 1960s and again in the 1990s. So similar are these later events that we can offer a kind of historical rule: whenever a liberal administration replaces a long-established conservative predecessor, that change will give rise to right-wing populist and paramilitary movements. And within a couple of years, those movements will provide the basis for grossly exaggerated panic over domestic terrorism.


After JFK’s election in 1960, the devoutly anti-Communist Minutemen took first place in liberals’ demonology. As in the 1930s, the far Right was supposed to be closely tied to out-of-control military officers. Remember fictional treatments of the time like “Dr. Strangelove” and “Seven Days in May”? Once more, too, the supposed threat from far-Right extremism surfaced in mainstream politics, especially during the 1964 elections. Most political observers know that Barry Goldwater was denounced for advocating “extremism in the defense of liberty.” Few know exactly what kind of extremism he was supposedly invoking. The ensuing controversy makes no sense except in the context of the John Birch Society, which was pushing the Republican Party to harder anti-Communist positions, and also the well-armed Minutemen. As in the 1930s, the extremists existed, and some hotheads contemplated violence. But once again, a yawning gulf separated the reality of the threat from the public perception.


The most recent right-wing terror crisis followed Bill Clinton’s election in 1992, when citizen militias attracted hundreds of thousands of sympathizers. Media warnings about armed extremism were already widespread by the time of the Oklahoma City bombing in 1995, a genuine far-Right atrocity that had nothing to do with the militias. Although neo-Nazi Timothy McVeigh scorned the political and religious values of the militias, they nevertheless bore the brunt of public outrage and media denunciation. Militia numbers swiftly collapsed, leaving only a tiny core, although one would hardly realize this from the press and television coverage of the years that followed.

Between 1995 and 2001, America suffered the Great Militia Panic, when exposés of ultra-Right violence became a media staple. For liberal press outlets, America was facing a clear and present danger from the militias, from Nazis and skinheads, and even from dissident elements within U.S. Special Forces. Liberals accused the anti-Clinton Right of providing extremists with ideological aid and comfort. An impressive outpouring of books—peaking in 1996—warned of an imminent terrorist disaster. Typical titles raised the shadow of America’s Militia Threat, Terrorists Among Us, or The Birth of Paramilitary Terrorism in the Heartland. One book warned of the Harvest of Rage: Why Oklahoma City is Only the Beginning. The news media was open to the most improbable charges of right-wing atrocities. In 1996, television news shows discovered a (wholly spurious) wave of arson attacks in which white extremists were allegedly wiping out the nation’s black churches.


As recently as a decade ago, “terrorism” in the American public consciousness meant, almost entirely, domestic right-wing activism. This was certainly the case in the fictional media, where filmmakers discovered to their cost that any treatment of Muslim or Middle Eastern misdeeds could provoke boycotts. How much easier, then, to choose notorious villains who lacked defense groups and antidefamation organizations. That generally meant white right-wingers. Militias, skinheads, and neo-Nazis became stock villains in the popular culture of the era. On television, countless police and detective shows dealt with ultra-Right villains, who were usually on the verge of releasing weapons of mass destruction against a decent, liberal America too naïve to realize the forces arrayed against it. The high-water mark of fictional far-Right villainy occurred in the 1999 film “Arlington Road,” in which a terrorism expert comes to suspect that his too-perfect neighbors are in fact the masterminds of a deadly fascist conspiracy. (He should have known: after all, they listen to country music.) As the film’s publicity warns, “Your paranoia is real!”


Ideas have consequences, even if those ideas are dreadfully, embarrassingly wrong. In terms of American national interests, by far the worst consequence of the Militia Panic was the massive underplaying of Islamic terrorism in U.S. public discourse and the disproportionate focus on the domestic far Right. Liberal columnists scoffed knowingly at terrorism experts who warned about foreign militants like al-Qaeda, when every informed observer knew that the real menace was internal. That attitude naturally had its impact on policymakers and on intelligence agencies, which recognized just how sensitive investigations of Middle Eastern-related terror plots might be. Those overcautious attitudes go far toward explaining the otherwise perplexing neglect of all the blaring alarm bells that the agencies should have heard in the lead-up to Sept. 11.


Belief in the extremist menace also had domestic political consequences. After Oklahoma City, attacks on the political Right helped re-elect President Clinton in 1996 with over 49 percent of the popular vote (up from 43 percent in 1992). When impeachment loomed two years later, it seemed only natural to rally the faithful by invoking—what else?—a “vast right-wing conspiracy.” Notably, one prominent Clinton adviser in these years was Harold Ickes, son and namesake of FDR’s Brown Scare hatchet man.


The prospects for a fourth round of panic in the Obama years seem excellent. Militias and rightist groups have never entirely vanished—even the Minuteman name survives, in the form of anti-immigration vigilantes—and they will probably enjoy a resurgence. No less probable is the over-interpretation they will receive from an administration deeply imbued with liberal conspiracy theories. The administration contains plenty of Clinton-era veterans who well recall the triumphant success of the earlier Militia Panic, and this time round, Obama’s ethnicity gives added credibility to charges of racist plotting.


Law-enforcement agencies, too, have everything to gain from a terrorism panic, whether it is rooted in the ideological Left or Right. Agencies usually have wish-lists of laws they would like to see passed to expand their powers, and periods of intense concern over terrorism offer a natural opportunity to get these measures onto statute books. Liberals complain bitterly about the Patriot Act of 2001, but Democratic administrations have also used fears of terrorism and subversion to expand official powers. Sweeping federal gun-control measures passed in 1938 and 1968, during the Brown Scare and the Minuteman era. In 1996, the Anti-Terrorism Act gave federal agencies all the powers they could reasonably have demanded up until then. The existence of such a potent body of laws gives police and prosecutors a strong vested interest in applying the terrorism label as widely as possible in order to secure all possible legal advantages. If public opinion permits, they will assuredly use anti-terrorism laws against unpopular right-wing sects.


Private organizations also provide an institutional foundation for a war on domestic terror. Plenty of liberal pressure groups are only too willing to offer their services in identifying far-Right activists and painting them in the most damaging and alarming colors. Some of the most successful through the years have been the Anti-Defamation League, the Feminist Majority Foundation, and the Southern Poverty Law Center (SPLC), with its affiliated Intelligence Project (formerly Klanwatch). While there is no reason to doubt the sincerity of their convictions, such groups would gain immensely from a new political emphasis on militias or rightist groups. If the government declares a domestic terror crisis, the media will automatically turn to the SPLC, for instance, giving that group added visibility and prestige. For the media, the SPLC and its ilk can be endlessly valuable. They supply convenient maps and lists of militias, broken down by state and region, as well as providing knowledgeable speakers to discuss militia history and ideology. This results in publicity for the group and its causes and encourages public support and donations. If a full-fledged right-wing terror network is not available, such pressure groups have every interest in hyping one into existence.

Paying proper attention to terrorist threats is laudable, whatever their source, and some right-wing extremists have through the years demonstrated their potential for violence: they need to be watched. Yet almost certainly, a renewed focus on the far Right will develop more out of an ideological slant than any reasonable perception of danger. Once again we will be dealing with a groundless social panic of the kind we have encountered so often in the past. Listening to official claims about terrorist dangers in the years to come, we need to exercise real critical skills—and never forget the lessons of history. 

__________________________________________

Philip Jenkins is a professor of history and religious studies at Pennsylvania State University and the author of The Lost History of Christianity: The Thousand Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died.


United States of Argentina

Anyone not alarmed by the state of the U.S. economy is not paying attention. As our Dear Leader begins his term, the theory of very big government has the support of an alarmingly broad political consensus. Despite the obvious dangers—devastating inflation and the ruin of the dollar—the United States seems pledged to a debt-funded spending spree of gargantuan proportions.

In opposing this trend, critics face the problem that the perils to which they point sound very theoretical and abstract. Perhaps Zimbabwe prints its currency in multi-trillion units, but that’s a singularly backward African dictatorship: the situation has nothing to do with us. Yet an example closer to home might be more instructive. Unlike Zimbabwe, this story involves a flourishing Western country with a large middle class that nevertheless managed to spend its way into banana-republic status by means very similar to those now being proposed in Washington.

The country in question is Argentina, and even mentioning the name might initially make any comparison seem tenuous. The United States is a superpower with a huge economy. Argentina is a political and economic joke, a global weakling legendary for endemic economic crises. Between them and us, surely, a great gulf is fixed. Yet Argentina did not always have its present meager status, nor did its poverty result from some inherent Latin American affinity for crisis and corruption. A century ago, Argentina was one of the world’s emerging powers, seemingly destined to outpace all but the greatest imperial states. Today it is … Argentina. A national decline on that scale did not just happen: it was the result of decades of struggle and systematic endeavor, led by the nation’s elite. As the nation’s greatest writer, Jorge Luis Borges, once remarked, only generations of statesmanship could have prevented Argentina from becoming a world power.

For Americans, the Argentine experience offers multiple warnings, not just about how dreadfully things can go wrong but how a nation can reach a point of no return. Not only did Argentina squander its many blessings, it created a situation from which the society could never recover. Argentines still suffer from the blunders and hubris of their grandparents without any serious likelihood that even their most strenuous efforts will make a difference. A nation can get into such a situation easily enough, but getting out is a different matter. A corrupted economy can’t be cured without being wiped out and started over.

It is hard, looking at the basket case Argentina has become, to imagine what an economic powerhouse the country was before World War II. From the 1880s, Argentina was, alongside the U.S. itself, a prime destination for European migrants. Buenos Aires was one of the world’s largest metropolitan areas, in a select club that included London, Paris, Berlin, and New York City. Argentina benefited mightily from foreign investment, which it used wisely to create a strong infrastructure and an excellent system of free mass education. It had the largest and most prosperous middle class in Latin America. When World War I began, Argentina was the world’s tenth wealthiest nation.

Right up to the 1940s, American and European economists struggled to explain the glaring contrast between booming Argentina and slothful Australia. As many studies pointed out, both countries had begun at a roughly similar point, as agricultural producers dependent on fickle world markets. Yet Australia remained stuck in colonial dependence while Argentina made the great leap forward to the status of an advanced nation, with an expanding industrial base and sophisticated commerce.

So what happened? Certainly the country was hit hard by the depression of the 1930s, but so were other advanced nations that ultimately recovered, and Argentina profited from intense wartime demand for primary products.

The country was killed by political decisions, and the primary culprit was Juan Perón. He dominated political life through the 1940s and ruled officially as president from 1946 to 1955, returning briefly in the 1970s. Although he did not begin the process, he completed the transformation of Argentine government so that the state became both an object of plunder and an instrument for plunder.

Perón operated from a fascist and corporatist mindset, one that became more aggressively populist under the influence of his second wife, Eva. They aimed their rhetoric against the nation’s rich, a designation swiftly expanded to cover most of the propertied middle classes, who became an enemy to be defeated and humiliated. To equalize the supposed struggle between the rich and the dispossessed, the Peróns exalted the liberating role of the state. The bureaucracy swelled alarmingly as nationalization brought key sectors of the economy under official control. Government bought loyalty through a massive program of social spending while fostering the growth of labor unions, which became intimate allies of the governing party. Argentina came to be the most unionized nation in Latin America. Perón also ended any pretense of judicial independence, purging and intimidating judges about whom he had any doubts and replacing them with minions.

The Peronist model—a New Deal on steroids—evolved into a thoroughgoing clientelism, in which party overlords and labor bosses ruled through a mixture of corruption and violence. Clientelism, in effect, means the annexation of state resources for the benefit of political parties and private networks. Right now, neither the word nor the concept is terribly familiar to Americans, but this is one Latin American export they may soon need to get used to.

As high taxes and economic mismanagement took their toll, the Peróns blamed the disasters on class enemies at home and imperialism abroad, but the regime could not survive the loss of the venerated Eva. After attempting briefly to swing back to the center, Juan Perón was overthrown and driven into Spanish exile. Later governments tried varying strategies to reclaim Argentina’s lost splendors, and some enjoyed success, but Perón’s curse endured. Even when his party was driven underground, its traditions remained: demagogic populism, a perception of the state as a device for enriching supporters and punishing foes, and a contempt for economic realities. Utopian mass movements inspired by Peronist ideas and charisma segued easily into the far-left upsurge of the 1960s, when Argentina gave birth to some of the world’s most dangerous terrorist and guerrilla movements. In 1976, the military intervened to stave off the imminent collapse of the state and launched the notorious Dirty War that killed thousands.

Since 1976, Argentine economic policies have lurched from catastrophe to catastrophe. The military junta borrowed enormously with no serious thought about consequences, and the structures of Argentine society made it impossible to tell how funds were being invested. Foreign debt exploded, the deficit boomed, and inflation approached 100 percent a year. Economic meltdown had disastrous political consequences. In 1982, like many other dictatorships through history, the Argentine junta tried to solve its domestic problems by turning to foreign military adventure. And like other such regimes, it found that its control over military affairs was about as weak as its command of the economy. Military defeat in the Falkland Islands destroyed the junta. By 1983, a civilian president was in power once more. But nothing could stop the nosedive. Inflation reached 672 percent by 1985 and 3,080 percent by 1989. The disaster provoked capital flight and the collapse of investor confidence, not to mention the annihilation of middle-class savings. In the words of one observer, José Ignacio García Hamilton, the nation became “an international beggar with the highest per capita debt in the world.”
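
To see what those rates mean in practice, here is a minimal sketch in Python of what one year of inflation at the quoted levels does to prices and savings; the rates come from the figures above, and the rest is simple compounding arithmetic.

    # What one year of inflation at the quoted rates does to prices and savings.
    for rate_pct in (100, 672, 3080):
        multiplier = 1 + rate_pct / 100    # price level after one year
        residue = 100 / multiplier         # real value left from 100 units of savings
        print(f"{rate_pct:>5}% a year: prices multiply {multiplier:.1f}x; "
              f"100 in savings keeps {residue:.1f} of its purchasing power")

At 3,080 percent, prices multiply roughly 32-fold in a single year, which is to say that cash savings lose about 97 percent of their purchasing power: the annihilation of middle-class savings in very concrete terms.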

Another civilian president, Carlos Menem, took office in 1989, and despite his Peronist loyalties he initially tried to restore sanity through a program of privatization and deregulation. But events soon proved that Menem was only following a familiar pattern whereby a new regime would speak the language of reform and moderation for a couple of years before facing a showdown with the underlying realities of Argentine society. Menem could not overcome the overwhelming inertia within the country: the juggernaut pressures toward the growth of the state, toward bureaucratization and regulation, and toward the destruction of private initiative and free enterprise. Between 1991 and 1999, Argentine public debt burgeoned from 34 percent of GDP to 52 percent, a burden that stifled private investment and left the productive sectors of the economy languishing.

Economic disaster led inevitably to a collapse of social confidence and the evaporation of loyalty to the state. The more heavily the country was taxed and regulated, the more Argentines took their transactions off the books, creating a black economy on par with that of the old Soviet Union. In terms of paying their taxes, Argentines are about as faithful as the Italians to whom most have blood ties. Tax evasion became a national sport, second only to soccer in the Argentine consciousness, and provided another stumbling block to fiscal integrity. The collapse of respect for authority also extended to the law: courts are presumed to operate according to bribes and political pressure.

Systematic corruption has had horrifying implications for national security. After all, once you establish the idea that the state is for sale, there is no reason not to offer its services to foreign buyers. One spectacular example of such outsourcing occurred in 1994, when Islamist terrorists blew up a Jewish community center in Buenos Aires, killing 85. The investigation of the massacre was thoroughly bungled, reportedly because the Iranian government paid Menem $10 million. It would be tedious to list the many other allegations of corruption and embezzlement surrounding Menem: what else is politics for, if not to enrich yourself and your clients?

In 2001-02, Argentine fortunes reached depths hitherto unplumbed. A debt-fueled crisis provoked a run on the currency, leading the government to freeze virtually all private bank accounts for 12 months. At the end of 2001, the country defaulted on its foreign debt of $142 billion, the largest such failure in history. With the economy in ruins, almost 60 percent of Argentines were living below the poverty line. Street violence became so intense that the president was forced to flee his palace by helicopter.

Since 2002, yet another new government has presided over an illusory economic boom before being manhandled by the ugly ghosts of Juan and Evita. Those specters were on hand to whisper their excellent advice to a new generation: if you face a crisis caused by excessive government spending, borrowing, and regulation, what else do you do except push even harder to spend, borrow, and regulate? Over the past two years, new taxes and price freezes have again crippled the economy, bringing power blackouts and forced cuts in production. Public debt stands at 56 percent of GDP, and inflation runs at 20 percent. Last October, the government seized $29 billion in private pension funds, hammering the final nail into the coffin of the old middle classes. Judging by credit default swap spreads on government debt, the smart money is now betting heavily on another official default before mid-year. The Argentine economy may not actually be dead yet, but it has plenty of ill-wishers trying hard to finish it off.
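
A gloss on that last inference, since the article does not spell it out: by a standard credit-pricing rule of thumb, a CDS spread approximately equals the annual default probability multiplied by one minus the expected recovery rate. The sketch below applies that rule to an illustrative crisis-level spread; the 4,000 basis points and 25 percent recovery are assumptions for the example, not figures from the article.

    # Rule-of-thumb conversion of a CDS spread into an implied annual default
    # probability: spread = default probability * (1 - recovery rate).
    def implied_default_prob(spread_bps: float, recovery: float = 0.25) -> float:
        return (spread_bps / 10_000) / (1 - recovery)

    # 4,000 basis points is an illustrative crisis-level spread (an assumption,
    # not a market quote from the article).
    print(f"{implied_default_prob(4_000):.0%}")  # roughly 53 percent per year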

We all know that deficits drive inflation, which can destroy a society. Less obvious is the political dimension of such a national suicide. Debts and deficits must be understood in the context of the populism that commonly entices governments to abandon economic restraint. No less political are the probable consequences of such a course: authoritarianism, public violence, and militarism.

The road to economic hell is paved with excellent intentions—a desire to save troubled industries, relieve poverty, and bolster communities that support the present government. But the higher the spending and the deeper the deficits, the worse the effects on productive enterprise and the heavier the penalty placed on thrift and initiative. As matters deteriorate, governments have a natural tendency to deflect blame onto some unpopular group, which comes to be labeled in terms of class, income, or race. With society so polarized, the party in power can dismiss any criticism as the selfish whining of the privileged and concentrate on the serious business of diverting state resources to its own followers.

Quite rapidly, “progressive” economic reforms subvert and then destroy savings and property, eliminating any effective opposition to the regime. Soon, too—if the Perón precedent is anything to go by—the regime organizes its long march through the organs of power, conquering the courts, the bureaucracy, the schools, and the media. Hyper-deficits bring hyperinflation, and only for the briefest moment can they coexist with any kind of democratic order.

Could it happen here? The U.S. certainly has very different political traditions from Argentina and more barriers to a populist-driven rape of the economy. On the other hand, events in some regions would make Juan Perón smile wistfully. California runs on particularly high taxes, uncontrollable deficits, and overregulation with a vastly swollen bureaucracy while the hegemonic power of organized labor prevents any reform. Thankfully, the state has no power to devalue its currency, still less to freeze bank accounts or seize pension funds, and businesses can still relocate elsewhere. But in its social values and progressive assumptions, California is close to the Democratic mainstream, which now intends to impose its ideas on the nation as a whole. And at over 60 percent of GDP, U.S. public debt is already higher than Argentina’s.

When honest money perishes, the society goes with it. We can’t say we weren’t warned.   
__________________________________________

Philip Jenkins is the author of The Lost History of Christianity: The Thousand-Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died.

United States of Argentina

Anyone not alarmed by the state of the U.S. economy is not paying attention. As our Dear Leader begins his term, the theory of very big government has the support of an alarmingly broad political consensus. Despite the obvious dangers—devastating inflation and the ruin of the dollar—the United States seems pledged to a debt-funded spending spree of gargantuan proportions.

In opposing this trend, critics face the problem that the perils to which they point sound very theoretical and abstract. Perhaps Zimbabwe prints its currency in multi-trillion units, but that’s a singularly backward African dictatorship: the situation has nothing to do with us. Yet an example closer to home might be more instructive. Unlike Zimbabwe, this story involves a flourishing Western country with a large middle class that nevertheless managed to spend its way into banana-republic status by means very similar to those now being proposed in Washington.

The country in question is Argentina, and even mentioning the name might initially make any comparison seem tenuous. The United States is a superpower with a huge economy. Argentina is a political and economic joke, a global weakling legendary for endemic economic crises. Between them and us, surely, a great gulf is fixed. Yet Argentina did not always have its present meager status, nor did its poverty result from some inherent Latin American affinity for crisis and corruption. A century ago, Argentina was one of the world’s emerging powers, seemingly destined to outpace all but the greatest imperial states. Today it is … Argentina. A national decline on that scale did not just happen: it was the result of decades of struggle and systematic endeavor, led by the nation’s elite. As the nation’s greatest writer, Jorge Luis Borges, once remarked, only generations of statesmanship could have prevented Argentina from becoming a world power.

For Americans, the Argentine experience offers multiple warnings, not just about how dreadfully things can go wrong but how a nation can reach a point of no return. Not only did Argentina squander its many blessings, it created a situation from which the society could never recover. Argentines still suffer from the blunders and hubris of their grandparents without any serious likelihood that even their most strenuous efforts will make a difference. A nation can get into such a situation easily enough, but getting out is a different matter. A corrupted economy can’t be cured without being wiped out and started over.

It is hard, looking at the basket case Argentina has become, to imagine what an economic powerhouse the country was before World War II. From the 1880s, Argentina was, alongside the U.S. itself, a prime destination for European migrants. Buenos Aires was one of the world’s largest metropolitan areas, in a select club that included London, Paris, Berlin, and New York City. Argentina benefited mightily from foreign investment, which it used wisely to create a strong infrastructure and an excellent system of free mass education. It had the largest and most prosperous middle class in Latin America. When World War I began, Argentina was the world’s tenth wealthiest nation.

Right up to the 1940s, American and European economists struggled to explain the glaring contrast between booming Argentina and slothful Australia. As many studies pointed out, both countries had begun at a roughly similar point, as agricultural producers dependent on fickle world markets. Yet Australia remained stuck in colonial status while Argentina made the great leap forward to the status of an advanced nation with an expanding industrial base and sophisticated commerce.

So what happened? Certainly the country was hit hard by the depression of the 1930s, but so were other advanced nations that ultimately recovered, and Argentina profited from intense wartime demand for primary products.

The country was killed by political decisions, and the primary culprit was Juan Perón. He dominated political life through the 1940s and ruled officially as president from 1946 to 1955, returning briefly in the 1970s. Although he did not begin the process, he completed the transformation of Argentine government so that the state became both an object of plunder and an instrument for plunder.

Perón’s politics grew out of a fascist and corporatist mindset, one that became more aggressively populist under the influence of his second wife, Eva. They aimed their rhetoric against the nation’s rich, a designation swiftly expanded to cover most of the propertied middle classes, who became an enemy to be defeated and humiliated. To equalize the supposed struggle between the rich and the dispossessed, the Peróns exalted the liberating role of the state. The bureaucracy swelled alarmingly as nationalization brought key sectors of the economy under official control. Government bought loyalty through a massive program of social spending while fostering the growth of labor unions, which became intimate allies of the governing party. Argentina came to be the most unionized nation in Latin America. Perón also ended any pretense of judicial independence, purging and intimidating judges about whom he had any doubts and replacing them with minions.

The Peronist model—a New Deal on steroids—evolved into an effective clientelism, in which party overlords and labor bosses ruled through a mixture of corruption and violence. Clientelism, in effect, means the annexation of state resources for the benefit of political parties and private networks. Neither the word nor the concept is yet familiar to most Americans, but this is one Latin American export that they may soon need to get used to.

As high taxes and economic mismanagement took their toll, the Peróns blamed the disasters on class enemies at home and imperialism abroad, but the regime could not survive the loss of the venerated Eva. After attempting briefly to swing back to the center, Juan Perón was overthrown and driven into Spanish exile. Later governments tried varying strategies to reclaim Argentina’s lost splendors and some enjoyed success, but Perón’s curse endured. Even when his party was driven underground, its traditions remained: demagogic populism, a perception of the state as a device for enriching supporters and punishing foes, and a contempt for economic realities. Utopian mass movements inspired by Peronist ideas and charisma segued easily into the far-left upsurge of the 1960s, when Argentina gave birth to some of the world’s most dangerous terrorist and guerrilla movements. By 1976, the military intervened to stave off the imminent collapse of the state and launched the notorious Dirty War that killed thousands.

Since 1976, Argentine economic policies have lurched from catastrophe to catastrophe. The military junta borrowed enormously with no serious thought about consequences, and the structures of Argentine society made it impossible to tell how funds were being invested. Foreign debt exploded, the deficit boomed, and inflation approached 100 percent a year. Economic meltdown had disastrous political consequences. By 1982, like many other dictatorships through history, the Argentine junta tried to solve its domestic problems by turning to foreign military adventures. And like other regimes, it found that its control over military affairs was about as weak as its command of the economy. Military defeat in the Falkland Islands destroyed the junta. By 1983, a civilian president was in power once more. But nothing could stop the nosedive. Inflation reached 672 percent by 1985 and 3,080 percent by 1989. The disaster provoked capital flight and the collapse of investor confidence, not to mention the annihilation of middle-class savings. In the words of one observer, José Ignacio García Hamilton, the nation became “an international beggar with the highest per capita debt in the world.”
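
Figures like those deserve a moment’s arithmetic. A quick conversion shows what such annual rates mean month to month (a minimal sketch, assuming a constant monthly rate, which real hyperinflations never deliver):

    # Convert an annual inflation figure into the steady monthly rate that
    # would compound to it: (1 + monthly) ** 12 = 1 + annual.
    # 672 and 3,080 percent are the 1985 and 1989 rates cited above.
    for annual_pct in (672, 3080):
        annual = annual_pct / 100
        monthly = (1 + annual) ** (1 / 12) - 1
        print(f"{annual_pct}% a year is roughly {monthly:.0%} a month "
              f"(prices multiply {1 + annual:.1f}x over the year)")

Run as written, the conversion shows that 3,080 percent a year works out to roughly a third added to prices every month, with prices multiplying more than thirtyfold in a single year.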

Another civilian president, Carlos Menem, took office in 1989, and despite his Peronist loyalties he initially tried to restore sanity through a program of privatization and deregulation. But events soon proved that Menem was only following a familiar pattern whereby a new regime would speak the language of reform and moderation for a couple of years before facing a showdown with the underlying realities of Argentine society. Menem could not overcome the overwhelming inertia within the country: the juggernaut pressures toward the growth of the state, toward bureaucratization and regulation, and toward the destruction of private initiative and free enterprise. Between 1991 and 1999, Argentine public debt burgeoned from 34 percent of GDP to 52 percent. These burdens stifled private investment so that productive sectors of the economy languished.

Economic disaster led inevitably to a collapse of social confidence and the evaporation of loyalty to the state. The more heavily the country was taxed and regulated, the more Argentines took their transactions off the books, creating a black economy on par with that of the old Soviet Union. In terms of paying their taxes, Argentines are about as faithful as the Italians to whom most have blood ties. Tax evasion became a national sport, second only to soccer in the Argentine consciousness, and provided another stumbling block to fiscal integrity. The collapse of respect for authority also extended to the law: courts are presumed to operate according to bribes and political pressure.

Systematic corruption has had horrifying implications for national security. After all, once you establish the idea that the state is for sale, there is no reason not to offer its services to foreign buyers. One spectacular example of such outsourcing occurred in 1994, when Islamist terrorists blew up a Jewish community center in Buenos Aires, killing 85. The investigation of the massacre was thoroughly bungled, reportedly because the Iranian government paid Menem $10 million. It is trivial to list the many other allegations of corruption and embezzlement surrounding Menem: what else is politics for, if not to enrich yourself and your clients?

In 2001-02, Argentine fortunes reached depths hitherto unplumbed. A debt-fueled crisis provoked a run on the currency, leading the government to freeze virtually all private bank accounts for 12 months. At the end of 2001, the country defaulted on its foreign debt of $142 billion, the largest such failure in history. With the economy in ruins, almost 60 percent of Argentines were living below the poverty line. Street violence became so intense that the president was forced to flee his palace by helicopter.

Since 2002, yet another new government has presided over an illusory economic boom before being manhandled by the ugly ghosts of Juan and Evita. Those specters were on hand to whisper their excellent advice to a new generation: if you face a crisis caused by excessive government spending, borrowing, and regulation, what else do you do except push even harder to spend, borrow, and regulate? Over the past two years, new taxes and price freezes have again crippled the economy, bringing power blackouts and forced cuts in production. Public debt stands at 56 percent of GDP, and inflation runs 20 percent. Last October, the government seized $29 billion in private pension funds, hammering the final nail in the coffin of the old middle classes. Judging by credit default swap spreads on government debt, the smart money is now betting heavily on another official default before mid-year. The Argentine economy may not actually be dead yet, but it has plenty of ill-wishers trying hard to finish it off.
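
For readers wondering how a credit default swap spread translates into betting odds, the market rule of thumb is simple (a minimal sketch; the spread and recovery numbers below are illustrative assumptions, not reported figures):

    import math

    # "Credit triangle" rule of thumb: annual default intensity is roughly
    # the CDS spread divided by (1 - recovery rate). Both inputs here are
    # hypothetical, chosen only to show the mechanics.
    spread = 0.40       # a spread of 4,000 basis points: 40% of notional per year
    recovery = 0.25     # assumed recovery on defaulted bonds

    hazard = spread / (1 - recovery)          # implied annual default intensity
    p_half_year = 1 - math.exp(-hazard / 2)   # chance of default within six months
    print(f"Implied chance of default before mid-year: {p_half_year:.0%}")

Under those assumed numbers, the market would be pricing in roughly a one-in-four chance of default within six months, which is the kind of arithmetic behind the phrase “betting heavily.”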

We all know that deficits drive inflation, which can destroy a society. Less obvious is the political dimension of such a national suicide. Debts and deficits must be understood in the context of the populism that commonly entices governments to abandon economic restraint. No less political are the probable consequences of such a course: authoritarianism, public violence, and militarism.

The road to economic hell is paved with excellent intentions—a desire to save troubled industries, relieve poverty, and bolster communities that support the present government. But the higher the spending and the deeper the deficits, the worse the effects on productive enterprise and the heavier the penalty placed on thrift. As matters deteriorate, governments have a natural tendency to divert blame onto some unpopular group, which comes to be labeled in terms of class, income, or race. With society so polarized, the party in power can dismiss any criticism as the selfish whining of the privileged and concentrate on the serious business of diverting state resources to its own followers.

Quite rapidly, “progressive” economic reforms subvert and then destroy savings and property, eliminating any effective opposition to the regime. Soon, too—if the Perón precedent is anything to go by—the regime organizes its long march through the organs of power, conquering the courts, the bureaucracy, the schools, and the media. Hyper-deficits bring hyperinflation, and only for the briefest moment can they coexist with any kind of democratic order.

Could it happen here? The U.S. certainly has very different political traditions from Argentina and more barriers to a populist-driven rape of the economy. On the other hand, events in some regions would make Juan Perón smile wistfully. California runs on particularly high taxes, uncontrollable deficits, and overregulation with a vastly swollen bureaucracy while the hegemonic power of organized labor prevents any reform. Thankfully, the state has no power to devalue its currency, still less to freeze bank accounts or seize pension funds, and businesses can still relocate elsewhere. But in its social values and progressive assumptions, California is close to the Democratic mainstream, which now intends to impose its ideas on the nation as a whole. And at over 60 percent of GDP, U.S. public debt is already higher than Argentina’s.

When honest money perishes, the society goes with it. We can’t say we weren’t warned.

__________________________________________

Philip Jenkins is the author of The Lost History of Christianity: The Thousand-Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died.

The Spirit of ’76

Historical analogies have been much in vogue since this election. Are we living at the end of 1932, preparing to face the glories and disasters of a revived New Deal? Or are we in a mirror-image 1980, the beginning of an era of liberal dominance, with a massive party realignment that might not even reach full fruition for another decade or so? These questions matter, not just because such debates give employment to academic historians. Deciding which year offers the closest parallel to the present forces conservatives to think how they will adjust to the new order. Just how radically have public attitudes shifted?

Actually, the year that offers the closest historical parallels to the present might be neither 1932 nor 1980 but 1976, and that analogy helps us understand the directions in which the country will be moving. Both in government and opposition, people might want to hold off on planning for the next New Deal, still less for a coming generation of liberal hegemony. In three or four years, the main political fact in this country could well be a ruinous crisis of Democratic liberalism.

Why 1976? That was the year Jimmy Carter defeated Gerald Ford for the presidency by a slim but convincing margin: Ford won 48 percent of the popular vote, a little more than John McCain’s 46 percent. Democrats did significantly better in the House in 1976 than they did last month. They held a two-to-one majority of seats, and they retained a supermajority of 61 in the Senate. Broadly, however, the 1976 results look similar to those of 2008.

The mood of the country in 1976 also parallels our present situation, with a pervasive sense of disgust at politics as usual and widespread fears of national decline. As if the end of the Vietnam War and the Watergate fiasco were not catastrophic enough, foreign-policy disasters in Africa and Asia suggested that the U.S. was losing its hegemony. The oil crisis pointed to a vast transfer of wealth and power to the Middle East, while many pundits predicted environmental catastrophe. The sharp economic downturn resulted in heavy unemployment and rising inflation. A concatenation of scandals tarnished once-trusted institutions: corporations, the military, intelligence agencies, police, and, of course, the politicians.

So disaffected was bicentennial America that it sought leaders unconnected to the establishment. In Jimmy Carter, voters found a candidate whose main qualifications were his lack of experience and connections within the Beltway or corporate worlds. Like Barack Obama, Carter claimed to rise above failed partisanship, while his New South background allowed him to symbolize racial healing. Carter, like Obama, sold himself mainly on the virtues of his character. He presented himself as a man of simple honesty, faith, and decency, and his lack of a track record allowed voters to see in him what they wanted, however far-fetched those hopes might be. If they hadn’t believed it, they wouldn’t have seen it with their own eyes. Above all, Carter promised change, a message that carried weight as long as its details remained nonspecific. The problem with messiahs from nowhere is that when they do exercise power, people discover to their horror what their leader’s actual views and talents are. The disillusion can be dreadful.

The rhetoric and psychology of the Democratic Party in 1976 also foreshadow the present day. And as they did in 1976, Democrats now show every sign of repeating the blunders that led to a generation-long discrediting of liberalism. As the phrase goes, they have learned nothing in the intervening years, and they have forgotten nothing. And they will soon face a barrage of issues that they have neither the will nor the competence to understand. Liberal triumph in 1976 led inexorably to evisceration in 1980. The same trajectory is likely to recur in the Obama years.

The key mistake Democrats made in 1976 was failing to realize what brought them to power. Democrats won because of public dissatisfaction with the previous regime, which had overseen the economic crisis, and also because of a wider fear that America would have to live with diminished expectations. But although they won on largely economic grounds, Democrats acted as if they had a sweeping mandate for cultural transformation—for social libertarianism, affirmative action and egalitarianism, dovish internationalism, and idealistic notions of human rights. These ideas dominated a radical Congress and were enthusiastically adopted by the cohort of Carter appointments to the judiciary. They all ignored a basic principle: just because people are unhappy where they are does not mean they are willing to go anywhere you try to lead them.

In 1976, liberals were wrong on multiple counts, and all the signs point to them repeating the same mistakes. Even if Obama plays Mr. Moderate, the congressional party contains more than enough take-no-prisoners far leftists to torpedo any chance of bipartisanship or restraint. Specifically, liberals believe that the public will support radical change in three highly sensitive areas, and in each area they will overreach to the point of self-destruction. In domestic affairs, they believe the culture wars are over and that revolutionary social changes like gay marriage can now advance unchecked. They think that popular concern over environmental problems will translate into a blank check for limitless government spending and the decisive transfer of U.S. sovereignty to international agencies. And liberals are now sure that all that foolishness with international dangers and crises is firmly behind us so that we no longer need the military or intelligence capabilities developed to respond to them. As the coming three or four years will show, they are dreadfully wrong on all counts.

In the 1970s, liberal hubris manifested itself especially in domestic politics. Democrats focused obsessively on race and class, to the exclusion of culture, morals, and religion. Reading the situation in those terms allowed liberals an easy framework for explaining opposition to their policies, which must be based on overt or disguised forms of racism (and that was before they had a President Obama). If every social problem boiled down to matters of economic and racial justice, then there could be no legitimate grounds for concerns that presented themselves as cultural or religious.

That severely blinkered view goes a very long way to explaining the collapse of liberalism in 1979-80. America in the 1970s was undergoing traumatic social and moral changes, which caused widespread unhappiness and fear. Many social conservatives were alarmed that governments were using children as tools in social experimentation, an issue made most explicit in school busing. Popular opposition focused on the defense of community and local autonomy but above all on child safety. Once again, though, liberals had no valid answer to these fears, as any questioning of public education must of necessity be a disguised form of vulgar prejudice. Their response was predictable: Damn the racists, full speed ahead.

Across the board, the critical pressure points in the social politics of the 1970s involved children and young people. For the ’60s generation, progress demanded removing restraints on the actions of consenting adults, whether this involved sexual experimentation, gay rights, drug use, or participation in weird and wonderful fringe religions. Who was to say that individuals should not be allowed to go to hell in their chosen way? That principle worked splendidly, unless and until people began to reflect on the effects on children. Yes, an adult could consent to engage in bizarre or self-destructive behavior, but that libertarian approach did not and could not extend to the young. Time and again, Americans have shown themselves liberal on social issues that are framed in terms of “live and let live.” They draw the line when the behavior in question appears to threaten youth. Hence the most successful conservative campaigns on domestic issues of the late 1970s focused strictly on child protection, and those movements coalesced into a general concern about defending and restoring American culture.

From 1977—the pivotal year of the social-conservative revival—liberals suffered reversal after reversal on issues of drug abuse, pornography, and gay rights. In every case, child protection provided the key to victory. Carter administration plans to decriminalize drugs foundered on the opposition of a burgeoning parents’ movement. Popular fears of threats to children defeated referenda on gay rights. Near-universal revulsion at the availability of child pornography provoked the first serious questioning of ever-expanding sexual frankness. Fears about threats against children merged easily with concerns about threats by children. The astonishing rise of violent youth crime, which reached its Himalayan peak between 1979 and 1981, was read as a symptom of a feral generation that had not been subject to appropriate family restraints or care. By the end of the 1970s, these various child-related themes drove a triumphant social-conservative coalition, which included those newly galvanized religious voters mobilized in the Moral Majority.

America today has changed enormously since 1978, but many of those older issues survive in latent form and should resurface shortly. Questions of youth protection will transform the gay-marriage debate, which for most media observers has been framed in terms of social justice and equality. Presumably by judicial fiat, the practice will extend to many more states in the coming years and quite conceivably to all 50 states. This in itself will not be a popular move: recall the recent California referendum, which was decided by the blacks and Latinos who turned out to support Obama but who favored traditional family models.

How will attitudes to gay marriage evolve when people contemplate the proper age of consent in such unions? Assuming the age is to be the same as in heterosexual marriages, then adolescents of 18 will marry freely, and in many states parental consent will grant that right to boys of 16 or so. Are Americans ready to see blushing teenage male brides? And if boys of that age can marry, demands to reduce the age of sexual consent for all youngsters will certainly follow.

The more strenuously liberals press for gay equality in matters involving youth, in marriage and adoption, the more they will generate a child-protection reaction, even among people who consider themselves socially liberal, and the more likely this reaction is to take religious forms. Following the recent California referendum, Mormons bore the brunt of liberal fury, and Catholics and other religious groups will face legal challenges for refusing to participate in gay adoptions and marriages. Other areas like abortion, contraception, and transgender surgery promise to generate many confrontations between religious believers and the current sexual revolution, and religious sensibilities can expect no sympathy from government, courts, or media. The resulting battles should re-energize a religious constituency that is currently disoriented and disillusioned. Anyone for Moral Majority II: The Sequel?

As in the 1970s, the problem of out-of-control youth could very soon be back on the political agenda. Although youth crime hasn’t been on the national radar since the crack boom of the early 1990s, demographic trends point to a rising storm that should break within two years or so. The crime surge of the 1970s was in large part the consequence of the baby boom reaching its most crime-prone years, as the huge cohort of those born around 1960 hit their late teens. Something very similar is about to happen again. The number of babies born in the U.S. in 1990 was only slightly smaller than the number born in 1960, and by 2010 we could be entering an alarming era of violent crime, manifested in soaring rates of homicide and robbery. Factor in the economic crisis, and American cities could look as frightening and dangerous as they did at the time of New York City’s 1977 blackout, with its rioting and looting.
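
The underlying numbers are easy to check (a sketch using approximate figures from published U.S. vital statistics; treat them as illustrative):

    # Rough comparison of U.S. birth cohorts. Figures are approximate values
    # from published vital statistics, included only for illustration.
    births_1960 = 4_257_850
    births_1990 = 4_158_212
    shortfall = 1 - births_1990 / births_1960
    print(f"The 1990 cohort is only about {shortfall:.1%} smaller than 1960's")

A gap of only about two percent between the two cohorts is what justifies the phrase “only slightly smaller.”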

Making the situation still worse, the massive expansion of union membership for which many Democrats clamor will add mightily to the plethora of urban problems. Imagine cities devastated by youth crime and gang wars, while emergency workers, hospitals, buses, and garbage services are regularly on strike. If you think Americans were alienated from government in 2008, come back in two years. Liberals will try to interpret the coming crisis in terms of race and class, a problem to be solved by unlimited social spending. Conservatives had better be ready to respond with ideas of individual and family responsibility and the defense of social order.

In other ways, too, liberals utterly misread public sentiment and will build their policy upon those delusions. Americans have shown themselves open to green rhetoric and feel that policies to protect the environment are generally a good thing. Few conservatives would criticize any move in the direction of energy independence, which would be a wonderful first step toward extracting the nation from Middle Eastern quagmires. But of course, that is not what we are going to get. We will instead be facing a determined and fanatical campaign to eliminate the vastly exaggerated menace of global warming, which will mean a wholesale assault on America’s energy supplies. This will translate into striking at coal- and oil-based energy while refusing to make progress toward reliance on nuclear resources, all the while seeking to curb carbon usage through onerous taxes and surcharges. Remember those Americans infuriated by strikes and intimidated by crime? They are also going to be freezing, living with rationed energy and brownouts. A grossly underpowered economy will find it all but impossible to reconstruct and revive when the coming depression ends.

As if all this isn’t bad enough, expect global-warming rhetoric to be used as a wedge to undermine national sovereignty. Under Obama, we face the virtual certainty of American accession to new treaties that go far beyond Kyoto in demanding radical cutbacks in carbon usage. The U.S. will presumably stand out as the only power attempting to enforce these standards, which would institutionalize the nation’s relative decline in the face of Chinese and Indian growth. The moral and political issue of sovereignty will thus be linked to the practical daily realities of the energy crisis at home.

And then there is national security. Democrats observe, quite rightly, that Americans are uncomfortable with images of Guantanamo and waterboarding, and they are profoundly unhappy with open-ended military commitments in Iraq and Afghanistan. But here, too, liberals will overreach when they interpret these moral qualms as a basis for winding up American military and intelligence capabilities.

However dreadful the Carter administration may have been, however widespread the domestic discontent, what actually finished off the Democrats and opened the door for Ronald Reagan was the Iran hostage crisis. And that was a direct and predictable consequence of overreach by the administration and Congress. Since 1976, congressional liberals had led a series of campaigns against the intelligence services, exposing supposed abuses and atrocities, and in the process discrediting the whole work of intelligence. By 1977, massive purges had removed many of the CIA’s best agents, while congressional restrictions made it all but impossible for the agency to pursue its work. In the Middle East and elsewhere, America was flying blind.

Underlying these bizarre actions was a theory of human rights that assumed the whole world could and should operate according to Western theories of democratic liberalism. Unfortunately, it didn’t. In Iran, the shah was an unsavory dictator with a heavy-handed secret police, but he exercised his powers to pursue a pro-American policy. Under the Carter regime, the U.S. ended its support of the shah, while ceasing to pay off the truly dangerous radical Islamists who would eventually replace him. American efforts at self-immolation succeeded in 1979, with the Islamic Revolution and the hostage crisis that destroyed the Carter administration.

Surely congressional liberals are not stupid enough to do anything like that again? Don’t believe it. By the end of 2009, expect a purge of U.S. intelligence agencies, as well as suffocating new constraints on intelligence-gathering capacities. These moves will probably be accompanied by a series of congressional hearings, which will provide maximum opportunities for showboating by politicos, while embarrassing the CIA. A blinded and disarmed Obama administration will then blunder anew into confrontations that will once again plumb the depths of national humiliation—if not in Iran, then in Taiwan, Ukraine, Venezuela, or Pakistan. If we’re very unlucky, airliners will again be crashing into our skyscrapers and cargo ships will be exploding in our ports. And as in the late 1970s, there will be plenty of discharged and disaffected former intelligence agents wandering the corridors of power, serving as endless sources of leaks and disinformation against the Obama regime. Expect the worst age of political scandal since, well, the 1970s.

All analogies limp, and no one is suggesting a straight replay of the Carter years, still less that some kind of new Reagan era is its inevitable sequel. But if liberals seem so determined to repeat the mistakes of that era, then we have at least a plausible sketch of the coming Obama administration—of its rise and ruin.  
__________________________________________

Philip Jenkins is the author, most recently, of The Lost History of Christianity.

Prophets on the Right—and Left