State of the Union

Fertility and the Fate of Nations

Kurdish woman fighter Kurdishstruggle/Flickr

When seeking to understand national security issues, demographics is commonly the missing dimension. The fertility of a particular population not only determines its overall numbers, but also contributes mightily to determining the balance of ethnic and linguistic groups, and a country’s chances of achieving any kind of lasting stability. Wise governments count their children. And without knowing something about demographic factors, we are going to be baffled by the behavior of some key players in the current Middle Eastern imbroglio.

A society’s population is shaped by both birth and death rates, but at present, I will focus on births, and especially on fertility. One key measure used by demographers is the total fertility rate (TFR): the total number of children that an average woman will bear during her lifetime. If that rate is around 2.1, then the population is stable, and that figure is known as the replacement rate. If it is significantly higher than replacement, say at 4.0 or 5.0, then we have a fast-expanding population, a lot of young people, and probably a lot of instability. A rate below replacement points to an aging and shrinking population, and also a crying need for immigrants and new blood. From the 1960s, European countries moved to sub-replacement rates, and that situation is now spreading rapidly—though far from uniformly—around the world.

Those rates also tell us a lot about religious behavior, and there is a close if poorly understood linkage between fertility and faith. Populations with very high fertility rates tend to be highly religious in very traditional ways, while “modern” and educated populations have far fewer children, and those societies are usually very secular. Over time, high fertility gives way to low, and religion declines accordingly. The poster child for that story is resolutely secular Denmark, with a TFR around 1.7, but a similar process has swept over most of modern Catholic Europe.

We can argue at length about whether the religious change follows fertility, or vice versa. Perhaps a decline in religious ideologies weakens commitment to family as a primary means of defining identity; or else declining numbers of children reduce the community ties that bind families to religious institutions. Either way, growing numbers of people define their interests against those of traditional religious values, spawning numerous conflicts over issues of morality, sexuality, and sexual identity. Ireland’s TFR, for instance, has halved since 1971, and the decline is much steeper if we just consider old-stock Irish families, rather than immigrants. The same years have witnessed repeated brushfire wars over such issues as contraception, divorce, and same-sex marriage.

But changes in fertility do not affect all parts of a nation equally or simultaneously, especially when different regions show very different patterns of wealth and economic development. Over time, the higher birth rates of poorer and more religious populations will gain in relative numbers, and over two or three generations, that pattern of differential growth can have far-reaching consequences. In Europe, for instance, even without the migrant boom of the past couple of years, the proportion of Muslims was certain to grow significantly.

Many Westerners still think of Global South countries in terms of classic Third World population profiles, with very high fertility and teeming masses of small children. That perception is correct for some areas, particularly in Africa, but it is radically wrong for others. In fact, many Asian and Latin American countries now look thoroughly “European” in their demographics.

India offers a startling example of this change, and its explosive political consequences. Half of that country’s component states now have sub-replacement fertility rates comparable to Denmark, or even lower. Meanwhile, some very populous states (like vast Uttar Pradesh or Bihar) retain the old Third World model. That stark schism is the essential basis for any understanding of modern Indian politics. As we might have predicted, the high fertility states are firmly and traditionally religious, and provide the base for reactionary and even fascist Hindu supremacist movements. Those currents are quite alien to the “European” low fertility states, located chiefly in the south, which tend to be secular-minded, progressive, and tolerant. Balancing those different regions would pose a nightmarish choice for any government, but the current Hindu nationalist BJP regime aligns decisively with the high fertility regions that provide its electoral bastions. The lesson is grim, but obvious: when you have to choose between two such distinct demographic regions, it is overwhelmingly tempting to turn to the one with all the voters, and all the young party militants. Invest in growth!

With some variations, that situation is closely echoed in Turkey, and understanding that parallel helps us explain the otherwise puzzling behavior of the nation’s current Islamist AKP government. Why, for Heaven’s sake, is Turkey not more concerned about the ISIS threat? Why, when its air force goes into action, does it strike at Kurdish forces, rather than ISIS? Is the government deranged?

Here is the essential demographic background: Overall, Turkey’s fertility rate is a little below replacement, but that simple fact obscures enormous regional variations. The country can be divided into four zones, stretching from west to east. The Western quarter is thoroughly European in demographic terms, with stunningly low sub-Danish fertility rates of around 1.5. The rates rise steadily as we turn east, until the upland east has very high rates resembling those of neighboring Iraq or Syria. “Europe” and the Third World thus jostle each other within one nation.

High-fertility eastern Turkey is of course much more religious than the secular west, and this is where we find the Qur’an Belt that so regularly supports Islamic and even fundamentalist causes. It simply makes electoral sense for the government to respond to the interests of that populous growing area, and to drift ever more steadily in Islamist directions.

But there is a complicating fact. Those fast-breeding eastern regions are also home to what the Turkish government euphemistically calls the “Mountain Turks,” but which everyone else on the planet calls “Kurds.” Turkey’s Kurdish minority, usually estimated at around 15-20 percent of the population, is expanding very rapidly—to the point that, within a generation or two, it will actually be a majority within the Turkish state. This nightmare prospect is front and center in the mind of Turkish president Recep Tayyip Erdogan, who a couple of years ago issued an apocalyptic warning of a national Kurdish majority no later than 2038. That date is a little implausibly soon, but the principle stands.

In the face of seemingly imminent demographic catastrophe, what can Turkey do? One solution is for the government to plead with citizens to start breeding again—even those western secularists—and to get the national fertility rate closer to 3 than 2. But since that outcome is highly unlikely, the government must resort to short term solutions, and to extol religious, Islamic identities over ethnicity. Ideally, a return to Islam might even provide an incentive for families to reassert traditional values, and to have more children. Alongside that policy, the government has an absolute need to suppress stirrings of Kurdish nationhood or separatism on Turkish soil.

From a demographic perspective, the Turkish government is going to find any manifestations of Kurdish identity terrifying, far more than even the hardest-edged Islamism. ISIS is an irritant; the Kurds pose an existential demographic threat.

And in large measure, that explains why Turkish jets are targeting the Kurdish PKK militias, rather than ISIS.

It’s the fertility, stupid.

Philip Jenkins is the author of The Many Faces of Christ: The Thousand Year Story of the Survival and Influence of the Lost Gospels. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

Britain’s Botched Child Abuse Scandal

To put it mildly, it was an awkward social situation. After murdering a small boy during a pedophile orgy, a senior Member of Parliament determined to castrate another victim with a knife. He was prevented from this act by a former British Prime Minister who was present at the event. That fellow-pervert suggested that this was going a little far, even for a group that had already murdered several other children. How far we have traveled from Downton Abbey.

If I seem to be treating such a horrific story sarcastically, I honestly do not know how else to respond to such a monstrous and fantastic allegation, or the fact that senior ranks of the British police have treated this phantasmagoric tale—and countless others of the same sort—as sober fact. Lives and reputations have been ruined. Fortunately, it now seems that the whole disastrous saga of falsehoods is about to collapse amidst purges and resignations, with growing warnings about police reliance on “narcissists and fantasists.” The main question presently is how much of the British legal and criminal justice system will go down with this horribly flawed investigation. Dare we hope that common sense will at last prevail?

Some months ago, I outlined the mythology of “elite pedophile rings” that had been circulating in Britain for some years. The allegedly homicidal MP in question was Harvey Proctor, the former Premier was Edward Heath, while other rumored perpetrators from the early 1980s included multiple figures at the highest ranks of politics, intelligence and the military. One of the accused was former Home Secretary Leon Brittan; others were the heads of MI5 and MI6. As the saying goes, extraordinary claims demand extraordinary evidence, and that was certainly applicable in this instance—but what was extraordinary was how weak, indeed virtually non-existent, the evidence offered proved to be.

The whole “Westminster Pedophile Ring” extravaganza depends on the unsupported testimony of one anonymous man, “Nick,” a 47-year-old administrator in the National Health Service, and a former nurse, who reports being present at various events as an abused child victim. His claims may theoretically be correct, but as yet, no corroboration has been found to support them. Even so, he became the star witness for a special police investigative unit, Operation Midland, and last year one leading officer described Nick’s testimony as “credible and true.” As even the Metropolitan Police has now admitted, the prejudicial word “true” should never have been uttered in this context—but how convenient never to have to take any of these cases to trial! If the police were able simply to declare the charges true then ideally, nothing would be left except to build the bonfire for the accused.

Over the months, Nick’s tales have faced increasing skepticism, and major new exposés should be forthcoming within the coming weeks, as long-postponed investigative documentaries finally appear on British television. Nick’s many critics point out that he had for years reported suffering extensive abuse as a child, but only very recently did he add the charges about elite offenders and MPs. So why on earth did the police call Nick “credible”? Later statements explained what this meant. Yes, police might sometimes encounter serial liars and hopeless fantasists, but in police eyes, “Nick” did not fit that bill, as he is a respectable, well-spoken, middle class guy not actually slavering at the mouth. Therefore, he could not be lying. Even if he is reporting events factually, it is surely very bad practice to judge someone’s reliability or truthfulness wholly on their demeanor and speech habits.

We have to be cautious about using the word “lying,” which implies deliberately and knowingly making false statements. It is simple, though, to cite examples where people have falsely reported child abuse, without actually lying. In the 1990s, many thousands of patients of so-called recovered memory treatment recounted atrocious sufferings at the hands of ritualized and Satanic abuse rings. Some of those patients presumably had been abused in some form, but we can say with total confidence that the vast majority of the episodes they were reporting never happened as objective realities. These people, too, generally presented as rational, well-balanced adults, often from decent professional backgrounds, and they genuinely believed what they were saying. But what they described still never happened.

Quite apart from recovered memory cases, other people do indeed tell false tales—although we might again cavil that they are not consciously lying. Of course serial liars exist, and some genuinely reach the point where they do not know the difference between truth and fiction. Britain’s Daily Telegraph recently exposed one alleged abuse victim, “who also claimed to have evidence of two murders, had been convicted of making hoax bomb calls, had falsely confessed to murder, and been accused by a judge of telling ‘whopping lies’.”

In the real world, such a career history would devastate the witness’s credibility, but in the mirror universe of child abuse investigation, it actually bolsters the claims. Let me explain that paradox. Central to that whole mindset is one simple statement that has achieved the status of a religious creed: victims never lie about child abuse. Children don’t lie, nor do adults reporting their childhood sufferings. If you doubt this fact—if you use a word like “alleged” victim—then you are an accomplice to that abuse. If you seek to challenge or discredit statements about abuse, then you are also striking at all future victims and survivors who would be discouraged from reporting their experiences. Doubt is of the devil.

That approach helps us understand the extreme tolerance granted to purported victims who on the face of it sound deeply unconvincing: the ones with lengthy records of psychiatric treatment and commitment; with multiple convictions for petty crime and fraud; with decades-long involvement with substance abuse, including the hardest and most destructive drugs; and the serial liars. What about the ones who utter not a word about the alleged crimes until 20 or 30 years after the event? Surely, these are not credible?

Oh ye of little faith. Listen to the experts, read the professional journals, and you shall know the truth. In fact, we are told, the degree of mental disorder and social malfunction in adult life is a direct and inevitable consequence of the childhood abuse, and the severity of that abuse is directly proportionate to the degree of adult dysfunction. In simple terms, the worse the acts of childhood molestation and rape, the more likely the lying and substance abuse. Of course they seem to be crazy people, drug addicts, and persistent liars, and that fact proves the truth of their claims. Got that? Equally, the worse the abuse, the longer the time period before they might feel able to expose it to the world.

Then, we add the last critical element, of anonymity. No abuse victim wishes to be exposed to scorn, so of course he or she is granted anonymity in making charges, a luxury not afforded to the alleged abuser. That protection is doubly necessary when the victim is exposing the horrors attributed to politicians and intelligence officers, who might seek revenge. In practice, then, we have the perfect situation for the generation of accusations: the more improbable or ludicrous the charges, the more likely they are to be believed, and all accusations can be lodged from behind a wall of secrecy.

In a powerful press conference, Harvey Proctor gave the police a simple choice. Charge him immediately with murder and let the case go before a jury—or else identify “Nick” publicly, and prosecute him for wasting police time. And let the police involved be moved to some other function where their skills can be used more profitably, such as traffic duty.

His challenge stands.


Germany’s Coming Demographic Revolution

They still haven’t got it.

European media and policymakers have correctly realized that the present refugee crisis is an enormous challenge to the assumptions that have guided the continent for decades, to the point of potentially breaking the European Union. But apparently they still are not prepared to confront the specifically religious revolution now under way.

This issue places me in a strange and unprecedented position. Over the past decade, I have written about the presence of Islam in Europe, arguing repeatedly that the threat of “Islamization” is overblown. Overall, I have argued, Europe’s Muslim population is presently around 4.5 percent of the whole, which by U.S. standards is in no sense a massive minority presence. It might rise to 10 or 15 percent later in the century, but the change will be gradual, allowing plenty of time for assimilation.

My moderate position on this has been heavily criticized by various right-wing outlets such as FrontPage Magazine, a publication with which I agree on basically nothing. On most issues, I find FrontPage’s tone hysterical and alarmist. Now, suddenly, I myself have to criticize that magazine for being insufficiently concerned about Islam. These are strange times.

Here is the problem. Germany recently declared that it would take 800,000 refugees this year. That is a very large figure, but as the government points out, that is only one percent of the population of 80 million. In FrontPage, Daniel Greenfield stresses that the issue is much graver than it appears, since the refugees are mainly young men, who will massively raise the Muslim presence among that section of Germany’s population. Other writers like Christopher Caldwell also raise alarms about the massive security threats posed by the present crisis. He warns that European politicians “are trying to pass off a migration crisis as a humanitarian crisis. It may be on the verge of turning into a military crisis.”

Both Greenfield and Caldwell are right, but they are still missing large parts of the story, which are available to anyone who has followed German media over the past two weeks. The first point made repeatedly by German officials and journalists is that no sane person really believes in that 800,000 figure for this present year. Even as Germany has introduced “temporary” border controls in the past few days, the estimates for the actual number of migrants expected continue to grow. Vice Chancellor Sigmar Gabriel now tells his party that, “There are many indications that in this year we will not see 800,000 refugees, as predicted, but a million.”

Also, such officials are explicitly saying that something like this influx will continue more or less indefinitely. Sigmar Gabriel has also said that “we could certainly deal with something in the order of a half a million for several years.” If the present experience is anything to go by, that is likely to mean something like a million a year for how long? Five years? Ten?

However obvious this may be to say, there is no logical end to this process, even if the Syrian crisis ended tomorrow. As it becomes known that Germany is so open to migrants, that fact offers an irresistible invitation to anyone living in a country roiled by violence or economic crisis, which basically means most lands from Libya to Pakistan. There is no terminal point at which the nations sending migrants would ever run out of candidates seeking refuge and asylum. And even that projection takes no account of the likely spread of open warfare and terrorism into Turkey and Egypt in the coming decade.

So let’s put those numbers in context. Germany’s population is about a quarter that of the United States, so multiply all those refugee figures by four. Imagine if a U.S. president declared that the country would commit itself to taking between two and four million new refugees and migrants, annually, over the coming years—and that over and above other forms of immigration. Even given the diversity of the U.S. population, that would represent an inconceivably large social transformation.

Germany is also describing an epochal religious revolution. That point might not be clear from reading the very extensive articles in mainstream German media that discuss every aspect of the strains posed by the crisis, but somehow never mention the words Islam or Muslim. That reticence is understandable, given that Germans, more than any people, do not want to appear nativist or racist. But despite the taboos, that religious element is critical.

Who are the immigrants? On the good side, a sizable number are Syrians who are fleeing the rise of radical Islamism in their country because they are themselves non-Muslims, or at least non-Sunnis. Before the present post-2011 meltdown, perhaps 40 percent of Syria’s people fell into the various non-Sunni categories, including a great many Christians. Their presence in Germany might actually strengthen Christian traditions.

But these non-Sunni migrants will be an ever-smaller component of a migrant wave that is increasingly and overwhelmingly Muslim. Many of the present migrants are not in fact Syrian, and they adopt that description to win the sympathy of host nations (a thoroughly understandable decision). Many are in fact Iraqis and Turks, Libyans or Afghans. All those groups, incidentally, come from countries with very young populations and extremely high fertility rates, so their numbers would likely grow rapidly in their new European homes.

Before 2015, Germany’s Muslim population was around 5 percent of the whole, potentially rising to 7 percent or so by 2030. If the present wave of migrants and refugees continues, that figure could well be 15 or 20 percent by the 2030s, and it would be rising fast. For the first time ever, we would seriously be looking at something like the Islamization of Europe that has been a nativist nightmare for a generation. And in the German context, that process would be squeezed into just a couple of decades. That is radically destabilizing.

Personally, I don’t believe that the presence of Islam in Europe need of itself be harmful or even negative, nor that it would necessarily lead to violence. But I am quite certain that numerical changes on this scale do portend a cultural and social revolution without precedent.

Shouldn’t the Germans, and other Europeans, at least be allowed to discuss this openly?


Blinded by the ‘Red Scare’


Spot the odd one out: a) Orcs; b) Communists; c) Griffins; d) Velociraptors; e) Gorgons.

The answer is d) Velociraptors. Although all five are terrifying monsters, velociraptors are the only ones that actually existed on Earth, many millions of years ago. The others are all wholly products of the imagination. Communists, Gorgons, and the rest are apparently mythical bugbears invented to terrify children. This lesson about the mythical nature of communism is brought home to you if you belong to any kind of professional organization in academe or education.

Regularly, one will read the obituary of some venerable hack who, circa 1950, faced terrible difficulties for his courageous stands on behalf of civil rights or labor unions. This was all part of the “Red Scare,” when Inquisitors sought out such brave freethinkers under the guise of pursuing those illusory “Communists.” Very rarely is it mentioned that, yes indeed, said hack was in fact a prominent and highly active member of the Communist Party, or a knowing sympathizer of several of its front organizations. And all this at a time when anyone with basic literacy skills knew exactly the nature of Stalinist rule in the Soviet Union and Eastern Europe.

But you see, it was all a “Scare” and all Americans under 40 have been brought up to see the phenomenon in those terms. “Witch Hunt” is another common term, and what sane person accepts the reality of witches?

You can get a good sense of this rhetorical trick if you watch the 2012 film “The Act of Killing,” one of the most acclaimed documentaries of recent decades. The film examines the Indonesian massacres of 1965-66 by focusing on some of the surviving perpetrators, a bunch of elderly gangsters who in their day led lethal death squads. According to the film, these purges killed a million victims, which is roughly double the figure that most historians accept, but let that pass. So triumphantly successful was “The Act of Killing” that in the past year it has spawned a sequel, The Look of Silence. Expect multiple prizes and awards to once again follow.

“The Act of Killing” is a multiply fascinating film, and essential viewing for anyone interested in official repression. Particularly fascinating are the close linkages portrayed between the venerable gangsters and ultra-right patriotic parties, with their paramilitary youth wing, and with media magnates. The film looks like a case-study of the Marxist theory of organized crime. And none of those depicted come off at all favorably, not gangsters, not magnates, not politicos. Let’s not argue: they are all very bad people.

But what about their victims? It’s a reasonable assumption that very few Westerners watching the film will have any great sense of Indonesian history or politics, and will thus accept the brief sketch offered in the introductory titles. In 1965, we are told, the Indonesian generals overthrew the nation’s government, before launching a deadly purge aimed against so-called Communists. “Anyone opposed to the military dictatorship could be accused of being a Communist: union members, landless farmers, intellectuals, and the ethnic Chinese.” So, we think, there was a coup, and the new regime unleashed its gangsters and paramilitaries against the innocent and idealistic, all as part of a mindless, paranoid, Red Scare. Any suggestion that actual Communists might have been targeted is scarcely considered, nor the possibility that such mythical creatures might have been genuinely dangerous. What decent person could fail to oppose a military dictatorship?

From multiple perspectives, that sketch is baloney. Let me explain.

Indonesia was a new nation born in the aftermath of the Second World War, and its politics were tumultuous from the beginning. The country had a very active and militant Communist Party, the PKI, which prior to 1965 was three million strong. That made it, in fact, the world’s largest nonruling Communist Party. It also had a potent tradition of ruthless revolutionary violence and putschism. Notoriously, a PKI rising at Madiun in 1948 resulted in the murder of tens of thousands of rivals. The organization was, in short, terrifying, and parallels to the Khmer Rouge are quite plausible.

In 1965, Indonesia’s ruler was Sukarno, a classic Third World dictator. In order to counter political rivals, he decided to lurch to the radical left and to seek the support of the PKI. Internationally, Sukarno aligned with Mao’s China, the most homicidal regime on the planet, which was then on the verge of launching its horrendous Cultural Revolution. Fearing a repetition of Madiun on a national scale, Indonesia’s armed forces intervened, overthrowing Sukarno and beginning a national purge of the PKI. Although “The Act of Killing” looks exclusively at the role of gangsters and paramilitaries, the reaction was in fact a national affair, with Islamic and even Catholic movements coming to the fore.

The repression killed around half a million people, the vast majority of whom were certainly PKI leaders and cadres. So yes indeed, they were Communists, and not just harmless labor organizers, landless farmers, or dissident intellectuals. Many were Party organizers and fighters, who were the mirror image of the gangsters we see in that documentary. If circumstances had been slightly different, they would have committed identical acts of repression and murder against the political right. Instead of random massacres, it is better to see the 1965 slaughter as an ideological civil war, which was fought with savage ferocity. Fanatics slew fanatics.

Beyond doubt, the repression was a brutal affair, which claimed far too many lives. Was it in any way justified? Legally, certainly not. But looking at history, I wonder how critics might feel if, in 1932, the German government had decided to massacre thousands of leaders and paramilitaries of the surging Nazi Party. Could such a pre-emptive mass repression ever have been tolerable to Western opinion? I do wonder.

The problem at hand, however, is that we have become so accustomed to the “Red Scare” mythology that it leads us to ignore the existence of truly dangerous extremists, and the lethal peril they pose.


The Disastrous Economics of Scottish Independence

Political commentators are still spellbound by the amazing success of the Scottish National Party (SNP) in the recent British election, when it took 56 of the country’s 59 Parliamentary seats. Suddenly, independence is back on the political agenda, and a second independence referendum is possible in the next couple of years. Whatever you may think about this prospect in the abstract, though, we can certainly agree on the simple fact that the current SNP leadership is utterly unqualified to lead such a venture. If independence were to be achieved in the next four or five years, the new state would soon come to envy more successful smaller European countries, such as Greece.

We can illustrate this from many points of view, but let us take one issue above all, namely the currency. The SNP has no idea what it is doing, or the risks it is running. Worse, nor does it seem to care.

During a debate in the referendum campaign last fall, then-SNP leader Alex Salmond was asked simply what currency an independent Scotland would have. That would be no problem, he said. We would carry on using the pound, together with the rest of the United Kingdom, and we would share control of a central bank. Absolutely not, said his unionist opponent. Every British political party has made it starkly clear that they would never accept such an outcome. By the way, that was last year, when the rest of Britain was feeling much less aggrieved than it is now by the SNP’s general demagoguery and hate campaigns.

So, given that the shared pound is not a starter, asked his critics, what was Salmond’s Plan B? Repeated questioning failed to shift Salmond at this point, demonstrating to all but his most unyielding supporters that there was no Plan B, and that the SNP had never even thought through the currency issue. That might have been the single moment at which the referendum campaign was lost. Salmond resigned as SNP leader after that debacle, but he has been very visible in recent days, repeating the familiar claims and boasts.

Fortunately, Scotland never had to confront the consequences of this insanity, but let us assume that, after the recent elections, it does become independent. What about the currency?

In the referendum debates, Salmond’s next option was a threat, something at which the SNP is expert. If the United Kingdom refused to share the pound, he said, then the new Scotland would refuse to pay its share of the national debt. The problem there is that an independent Scotland would begin its career as a nation in default, unable to raise credit even for its existing commitments, never mind covering the expense of the ever-expanding welfare state promised by Salmond’s party. The likely consequence would be social collapse and mass unemployment. Presumably English and European aid would prevent actual food riots.

There are indeed other options. Scotland could, if it wished, keep on using the pound without English consent, in the same way that countries like Panama use the U.S. dollar.

Sterlingization is possible, and England could not prevent it. But the only way to Sterlingize is to cut public spending enough to generate sufficient amounts of the master currency. Scotland, as I say, is a very generous welfare state, and has ambitions to become even more so: hence my earlier remarks about the prospects for social collapse and civil disorder. And why Scotland would want to give control of its economy to a foreign central bank is a matter of some mystery. Moreover, the fact that Scotland's currency would not be under the control of any central bank of its own automatically closes the door on possible membership in the European Union.

Ah yes, the European Union. All SNP rhetoric is founded on the principle of a close and continuing link with Europe, and EU membership. In the referendum campaign, though, European officials made it clear that any breakup of the existing UK would preserve the continuing membership of the London government, while the new Scotland would be forced to apply for membership afresh. That opens multiple cans of worms. No solution for Scotland, whether political or economic, is possible without full and early EU membership. But the application process can take years or decades, and is only possible if no existing state raises difficulties. It is very much in the interests of some countries–Spain and Belgium, notably–to avoid giving the slightest encouragement to potential secessionists on their own territory, and they would likely oppose or delay Scottish aspirations. For several years, at least, Scotland would be outside all European trading blocs.

Even if Scotland did jump those complex hurdles, the EU is quite clear that any new member is strictly required to accept the Euro as its currency. That solves the currency issue neatly, but it also means that Scotland would be forced to accept all the EU's stringent rules about public expenditure and debt. Hard though this may be for Scots to believe at present, the English would no longer be subsidizing their welfare systems with their present generosity. Again we see the fundamental dilemma that would suddenly become the central issue in Scottish politics: whether to accept huge social spending cuts, or national bankruptcy.

Scotland does have a serious alternative to all these nightmare scenarios. Accept independence outside Europe, and set up an autonomous currency under its own central bank–call it the Scottish Dollar. The currency would take years to establish, and when it did come fully into operation, it would be a very risky venture. Norway manages to stay outside the EU with its own currency, but it has immensely more oil reserves than the Scots could ever dream of. This policy would mean decades of penury for the new Scotland, but it would mean authentic independence. Why they would want that is a question of emotions and rhetoric, not economics.

Or maybe–just maybe–Scotland could enter a political and currency union with its large neighbor to the south, which is already a major European player, and which currently donates disproportionately to the Scottish economy! No, I’m sorry, that’s just too absurd to be contemplated.

Philip Jenkins is the author of The Many Faces of Christ: The Thousand Year Story of the Survival and Influence of the Lost Gospels (Basic Books, forthcoming Fall 2015). He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

The Myth of the Inevitable War


U.S. military forces planned to ultimately seize Canada’s rich mineral resources around Sudbury, Ontario, but in the interim, they would launch surprise attacks on the key ports of Halifax, Nova Scotia, and Vancouver. While many planners disliked the prospect of using poison gas on civilians, it was essential to knock out those strategic centers swiftly, before the British mounted their inevitable counter-offensive. The British amphibious invasion would most likely land between Ocean City, Md., and Rehoboth Beach, Del.

If the above sounds like a Harry Turtledove alternative history scenario, it isn't. What I have just described is War Plan Red, developed by the U.S. armed forces in the late 1920s, which remained on the books as a formal contingency plan through 1939. Such planning—and many other imaginary scenarios developed before it—reminds us how frequently the U.S. and other nations have identified particular foes whom they would supposedly be compelled, sooner or later, to defeat and destroy. The refrain has often gone: if war with such an enemy is utterly inevitable, why not start it now, and get it over with?

A little historical perspective, though, should make us quite humble about the whole notion of “inevitability,” not to mention the prospect of perpetual enemies.

Two centuries ago, in 1815, the British Empire was fighting two key enemies: France's Emperor Napoleon and the new United States of America. Although Britain and the U.S. formally concluded hostilities the previous year, the culminating Battle of New Orleans did not occur until the start of 1815. The decisive British victory at Waterloo followed in June.

Had you told any informed observer in 1815 that Britain would never again engage in formal hostilities against either of those nations—not even in two entire centuries—you would probably have been labeled insane. Rivalry with the French had been the absolutely consistent factor in British affairs since 1689, and was clearly not going to vanish overnight. As historian Jules Michelet sagely remarked later in the 19th century, in understanding world affairs, there is France and Britain, and that is all.

For the rest of the 19th century, the question of the next Anglo-French war would be when, not if. Tensions reached ugly heights in the late 1850s, immediately after the Anglo-French cooperation in the Crimean War. During the Indian Mutiny of 1857, patriotic French newspapers suggested immediately sending the fleet to help the rebels evict the British Raj. Britain suffered a full-scale invasion scare in 1859-60, when volunteer rifle companies sprang up across the country to resist the threat of French occupation. The two countries came within an ace of open warfare in 1898, over colonial rivalries on the Nile.

Somehow, though, the inevitable struggle never occurred. One can legitimately point out that the two armed forces were at war in the 1940s, when the British fought to evict the Vichy French from colonial territories like Syria and Madagascar. Technically, though, this fell short of actual declared warfare, and the resulting battles have slipped into the realm of polite oblivion.

But even if the French menace could somehow be contained, surely the next American war was truly only a matter of time.

In Herman Melville's books, especially White-Jacket (1850), we are repeatedly given the sense that conflict on the seas was imminent and inevitable. Naval and imperial rivalries made warfare certain, as did U.S.-Canadian border rivalries: frontier disputes in the Oregon territory stirred American war fever in the 1840s. Between 1840 and 1900, serious war scares were running at about one per decade.

At the start of the Civil War, heavy-handed U.S. actions against Confederate envoys on the high seas almost brought the British into the struggle. The U.S. owes an immense debt to Prince Albert, who drafted the diplomatic documents that kept the British out of war, and presumably saved the American Union. Time and again, from the 1860s onwards, border-crossing activities by Irish guerrillas threatened to sabotage the fragile U.S.-British modus vivendi. The two countries once more came close to war over Venezuela’s borders in the 1890s.

And that brings us to the 1920s, the era of War Plan Red. To U.S. commanders observing the likely military future at that time, by far the deadliest danger they could foresee was a British-Japanese alliance that would overwhelm the U.S. Navy. Canada, tragically, would be the battleground between the two great English-speaking nations, those inevitable foes destined to fight until only one survived. Who can withstand destiny?

Needless to say, none of those nightmare scenarios ever came to fruition. Somehow, the British evolved from being our eternal foes to becoming those nice folk across the pond who send us Masterpiece Theatre and Benedict Cumberbatch.

Other seemingly inevitable crises likewise failed to materialize.

People of the boomer generation might remember the hyperventilated coverage of Chinese events during the Cultural Revolution of the post-1966 decade. According to most reports, the country had seemingly created a generation of tens of millions of crazed fanatics pledged to world conquest. How could they be stopped, short of a nuclear pre-emptive strike (which the Russians were actually contemplating in 1969)?

The Cultural Revolution was indeed a ghastly tragedy for the Chinese people, but the feared external aggressions never occurred. By the late 1970s, the main organizers of that fanaticism were themselves discredited and imprisoned, and China was ready to rejoin the world community. Again, the “inevitable” proved to be an illusion. Somehow, too, the “unavoidable” global clash between the U.S. and the Soviet Union was, well, avoided.

We can debate the best means of preventing wars, and sometimes, a strong and vigilant military might indeed be the best way of keeping the peace. But if anyone ever says today that conflict with a particular nation or cause is “inevitable,” whether that contemporary foe is Iran, China or Russia, history offers plenty of reasons to doubt such claims. Somewhere down the road, in fact, those adversaries might become our best friends. Never say never.

Or to quote Lord Palmerston, speaking of England: We have no eternal allies, and we have no perpetual enemies.


The Long Hot Summer of 2015

In the late 1980s, I was teaching criminal justice, and my most popular course focused on Terrorism and Political Violence. In that course, one of my regular lectures concerned the impact of racial conflict and urban rioting on the U.S. presidential election in the then-inconceivably distant year of 2016. My discussion was firmly tongue-in-cheek, a theoretical exercise in postulating cyclical patterns in American violence. In light of recent events in Baltimore and elsewhere, though, and with spreading tensions between police and African-American communities, I wonder if the time has come to brush off my crystal ball.

My choice of 2016 was anything but random. Over the past century or so, racially based rioting in the U.S. has followed chronological patterns that are remarkably consistent, although the reasons underlying these cycles are anything but clear. Particularly serious events occur at intervals of about 48 years, with more minor and sporadic outbreaks at the mid-point of that long cycle. Do understand that I am not offering any mystical forms of numerological interpretation here. I merely remark that, for whatever reasons, events have followed this pattern.

Although the pattern can be traced back to the mid-1890s, we might begin our observation in 1919, the hideous year of racial rioting and lethal pogroms in Chicago, Omaha, Knoxville, and other centers. James Weldon Johnson memorably called it the Red Summer. Moving forward 24 years, we note serious but more localized outbreaks of racial conflict in 1943, with upheavals in Harlem and Detroit.

Another 24 years beyond that takes us to 1967, by far the worst year of the urban rioting of that decade. That was the legendary Long Hot Summer, when observers tabulated 159 riots across the nation. It was in 1967 that Newark burned, while the U.S. government sent the 82nd Airborne into Detroit.

Twenty-four years later, historically-inclined observers breathed a sigh of relief when 1991 passed without any grave outbreaks. The following spring, though, brought the Los Angeles riots, and many lesser copycat events around the country. I stress that the pattern suggests gaps of roughly 24 years, rather than following a precise chronology.

It is not difficult to trace long-term historical patterns and even mystical dates. All you need to do is cherry-pick particular events, while ignoring others that do not fit the scheme. In this case, though, a pattern seems to emerge without such special pleading, as is suggested by the rarity of riots between the various peak years. Between the mid-1960s wave of riots and the Los Angeles events, for instance, America suffered remarkably few racial disturbances, the only real exception being the Miami riot of 1980.

Assuming they are grounded in some reality, what might account for such cycles? The obvious linkage is demographic, in that 24 years is roughly the span of a generation. We might for instance suggest that racial tensions rise to the point where they provoke severe violence, but that violence has far-reaching consequences. The sheer scale of loss and destruction deters people from seeking any recurrence of the event. Meanwhile, governments act to prevent such repetitions. The 1943 riots profoundly affected the thinking of liberals, inspiring the civil rights drive of the following two decades. Over time, though, new generations arise, lacking direct memories of the earlier carnage, and thus prepared to risk open confrontations with authority.

And so we list the years of major national violence: the Red Summer of 1919, the Long Hot Summer of 1967… and 2015? That was the theme of my long-ago lecture, and why I was speculating about whether those imaginary events might indeed have an impact on the presidential election immediately following. Hence my choice of 2016.
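The arithmetic of the cycle is simple enough to check mechanically. As a toy illustration (mine, not the author's), the peak and midpoint years described above can be generated from the 1919 starting point:

```python
# Toy illustration of the riot-cycle arithmetic: major outbreaks roughly
# every 48 years from the Red Summer of 1919, with more minor outbreaks
# at the 24-year midpoints of each long cycle.
def cycle_years(start=1919, step=24, n=5):
    """Return n years spaced step years apart, beginning at start."""
    return [start + step * i for i in range(n)]

years = cycle_years()          # [1919, 1943, 1967, 1991, 2015]
majors = years[::2]            # 48-year peaks: 1919, 1967, 2015
minors = years[1::2]           # midpoints:     1943, 1991
print(majors, minors)
```

The pattern is, of course, only approximate: as noted above, the 1992 Los Angeles riots fell a year after the "predicted" 1991.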

If this speculation proves unfounded, I look forward to publishing a groveling apology this time next year. I would love to be wrong.


Putin’s Corrupted Orthodoxy

President of the Russian Federation  / cc

If you remember nothing else about Andrei Zvyagintsev’s film “Leviathan”, the whale will remain with you.

In a squalid coastal town in Russia’s frigid north, a man gazes over the skeleton of a beached whale, the bones stark in their white purity. Although clearly suggesting death, the skeleton’s beauty and majesty stands in sharp contrast to the ugly trivialities of the town’s human population, lost in their obsessions with power and greed, in their corruption and hypocrisy. In the context of the film, it is hard not to see that “leviathan” as a symbol of the gigantic aspirations of the old Soviet Union, that other dead monster. Although the film-maker does not for a moment suggest that the former Soviet Union represented any kind of lost glory, “Leviathan” does portray a modern Russian society stumbling through a contemporary world utterly devoid of standards, morality, or hope. Most startling for a Western audience, that society now camouflages its vulgar graspings not in the language of Marxism-Leninism, but of Christianity.

Although “Leviathan” has been widely reviewed in both Europe and the U.S., few commentators have picked up that central religious message. “Leviathan” stands among the greatest films ever made about the corruption of religion. (Warning: the film concludes with a major twist, which will be revealed here).

“Leviathan” is set in the small fishing port of Pribrezhny, which is recreated in horrifyingly convincing detail. The story focuses on Kolya, an auto mechanic who spends most of his life in a drunken haze. Tragically for him, he owns a property that is coveted by the local mayor, Vadim, who gets everything he wants, and who readily deploys thugs to enforce his will. Ultimately, Kolya is railroaded on false charges and loses his home. Most viewers take Vadim as a transparent stand-in for Vladimir Putin, who similarly rules through violence and extra-legal trickery. For both men, law is merely a tool for the powerful.

Beyond that obvious satire, the film places these everyday Russian evils in a cosmic context. "Leviathan" is immersed in Biblical symbolism, drawing both on the Book of Job and the story of Naboth's Vineyard, in which an evil king trumps up false charges to seize the belongings of a poor neighbor. To a Westerner, the name Leviathan recalls Thomas Hobbes's vision of the all-powerful state, but in this case we should rather turn directly to the Old Testament. The Biblical leviathan is mentioned on several occasions, sometimes as a seagoing animal, but on occasion as a fearsome monster of evil, slain by God himself in cosmic warfare. In this apocalyptic vision, the image becomes "the piercing serpent, even Leviathan that crooked serpent." Modern Russians live in the shadow of the slain leviathan.

Putin’s Russia is a deeply inhospitable environment for political satire, and the country’s media have largely ignored the international sensation that the film has created, including its prestigious awards in Europe and the U.S. Most controversial of all, though, has been the film’s treatment of the church, which is far more innovative and daring than the critique of Putin. So he’s corrupt and thuggish? Yes, we knew that.

The film’s other central character is the local Orthodox bishop. The most chilling scene is an intimate dialogue between Vadim and the bishop, a spiritual adviser who not only justifies the boss’s excesses but actually drives him to worse deeds. Is Vadim a good Christian, asks the cleric? Well, says Vadim, he tries. As a Christian ruler then, says the bishop, he must know that all power comes from God. Vadim has the absolute duty to exercise the power given to him, to solve all his issues and problems himself, and with all his might, lest the Enemy think he is weak. All is in God’s hands, it is all His will. We almost hear the voice of Dostoevsky’s Grand Inquisitor.

If the priest is not actually the driving force behind Vadim’s evils, he is at least an accomplice, and an enabler. In a devastating climax, we see exactly why Vadim was so desperate to steal Kolya’s property: he (and more specifically, the bishop) needed it to build a gaudy new Orthodox cathedral, as a shrine to Power. The film concludes with a splendid and utterly hypocritical sermon by the bishop, who thoroughly unites Russian nationalism with the interests of the Orthodox Church. His sermon calls for values of truth and justice, in a venue that exists solely because such values do not exist within Russia.

If Vadim is meant to be Putin, then Russian audiences waste little time before linking the priest to another prominent national figure, namely Kirill [Cyril], Patriarch of the country's Orthodox Church. He has also led his church into an intimate and, most would say, a profoundly unhealthy alliance with the post-Soviet regime.

After the Bolshevik Revolution, the Communist government savagely persecuted the Orthodox Church, killing many thousands of clergy and monastics, and closing the vast majority of churches and monasteries. When Communism fell, the church returned to visibility, and the last quarter-century has witnessed a startling and many-sided revival. Places of worship have been rebuilt, monasteries flourish again, and pilgrimage shrines have begun a new era of mass popularity. The post-Soviet religious restoration was supervised by the then-Patriarch Alexy II (1990-2008) and by his successor, Kirill.

In exchange for so many blessings, the church has of course given fervent support to the Putin government, lavishly praising it and providing ideological justifications for a strong government at home, and expansion beyond its borders. But such enthusiasm goes far beyond mere payback. Support for authoritarian regimes is deeply embedded in Orthodox political thought, and Russian Orthodoxy in particular has always been tinged with mystical and millenarian nationalism.

When Kirill presents Orthodox Russia as a bastion of true faith, besieged by the false values and immorality of a secularized West, his words are deeply appreciated by both the state and the church. The apocalyptic character of that conflict is made evident by the West’s embrace of homosexual rights, especially same-sex marriage. As so often in past centuries, Holy Russia confronts a Godless and decadent West. It is Putin, not Kirill, who has warned that “Many Euro-Atlantic countries have moved away from their roots, including Christian values. Policies are being pursued that place on the same level a multi-child family and a same-sex partnership, a faith in God and a belief in Satan.”

We should not see Kirill as a rogue cleric abandoning the interests of his church to seek political favors: he really believes every word. Whether Putin and his circle literally believe the religious rhetoric is not relevant: they act as if they do. The solidly Orthodox framing of Russian nationalism also ensures that powerful Rightist groups happily rally around Putin and his not-so-ex-KGB clique.

Over the past few years, the nature of Russia’s military-ecclesiastical complex has repeatedly become evident. Kirill extended the church’s blessings to the pro-Moscow regime in Belarus after a highly troubling election. In Ukraine, Kirill completely echoed Putin’s line that the Russian-sponsored separatist guerrillas were well-intentioned local citizens who justifiably feared oppression by the Kiev regime. Kirill even granted church honors to Cuba’s Castro brothers. All is in God’s hands, it is all His will.

So egregious is the portrayal of the priest in "Leviathan," and so blatantly based on real-life circumstances, that Orthodox activists have been the leading advocates for suppressing the film altogether.

The United States spends a great deal of time worrying about the state of Iran, which is dominated by theocratic cliques who relish apocalyptic dreams, and who hope someday to obtain a handful of nuclear weapons. We don't have to travel too far from Iran to find another state where ambitious theocrats shape the national ideology of a government that currently has some 1,500 active nuclear weapons at its disposal, not to mention another 8,000 or so in storage.

In Russia’s case, like Iran’s, we will not understand the state’s ideological motivations without appreciating that religious dimension.


Pedophile Rings in Thatcher’s Britain—Myth or Fact?

Leon Brittan WTO / cc

Leon Brittan, who died last week, had a very distinguished career in British public life. Among other things, he served as Margaret Thatcher’s Home Secretary and later became a member of the European Commission. It is startling, then, to find that among the standard eulogies for the great and the good, some news headlines reporting his death feature such unexpected words as “abuse ring,” “pedophile,” and “child murder.” Brittan had the misfortune to play a starring role in a long-simmering sex scandal currently fascinating that country’s media.

For years now, rumors have been floating about a "Westminster Pedophile Ring" that supposedly operated in the 1970s and 1980s and which included senior politicians, civil servants, and military figures, mainly right-wing Conservatives. Recently, allegations reached new heights when police said they were seriously considering claims that the group had murdered several young boys. As the Independent headlined, "Tory MP Killed Boy During Sex Attack." In themselves, these horrific charges contain nothing flagrantly impossible. Yet we need to be very careful indeed about accepting a story that depends on thorny issues of evidence and credibility that will be deeply familiar to American observers of our own country's sexual politics.

The whole dreadful affair has now developed a complete mythology, with two pivotal hero figures. One was flamboyant Member of Parliament Geoffrey Dickens, who in 1984 compiled a massive dossier about pedophilia in British public life, with details on some 40 allegedly tainted politicians. He gave this to Home Secretary Leon Brittan, whose department promptly misplaced or buried it, supposedly as part of a general establishment cover-up. Only in recent years has the affair returned to life. The other key figure is the pseudonymous “Nick,” supposedly one of the abused boys from that earlier era. Finding his chilling account of witnessing murders “credible and true,” British police have now reopened the investigation, in the process generating sensational headlines.

Parts of the story are plausible. We know that in that era—roughly, the decade following 1975—several British public figures were indeed involved in outrageous and exploitative sexual misbehavior, including some cases of child abuse and child pornography. One horrific example was Liberal MP Cyril Smith, a 300-pound blimp with a penchant for spanking teenaged boys. Although such cases of sexual malfeasance were well-known to police and media, they were thoroughly hushed up, a process made vastly easier by draconian British libel laws.

The “pedophile ring” rhetoric is, though, misleading. If we look at the known sexual scandals from the politics of this era, they tended not to be “pedophile,” in the sense of involving someone sexually focused on children at or below the age of puberty. The word is thus chosen to maximize seriousness, implying young child victims, compulsive serial offending, and incorrigibility. In fact, the recorded cases commonly involved homosexual men interested in male teenagers or young adults, usually male prostitutes. That does not for a second excuse the behavior, but it does put it in a different category from molesters preying on infants.

That distinction is significant in light of the claims made about Geoffrey Dickens, who is today presented as a near-prophetic champion of decency and child protection confronting a perverted ruling class. Dickens was in fact an outrageous demagogue, who never found a sensational issue or moral panic that he failed to leap on. His special bugbear was homosexuality, a broad category that, for him, included pedophilia as one of its subsets. If we actually had a copy of the legendary dossier, we could be quite sure that it included very few actual pedophiles and a great many homosexuals. Almost certainly, too, the impressive-sounding term "dossier" dignifies a generalized rant.

Charges of rings and conspiracies should also be treated circumspectly. The “elite pedophilia” charges circulated very widely in tabloid media of the 1980s, usually in the context of lunatic theories of Satanism and supposed “ritual child abuse,” sometimes linked to anti-Masonic hysteria. Then as now, these fevered rumors Named Names, including Cabinet members and members of the royal family, as well as prominent Jews, like Brittan himself. It’s not surprising, then, that law-enforcement officials at the time were profoundly (and rightly) skeptical of any new nuggets Dickens had to offer.

But let’s move to the present day, and especially to “Nick,” the main (and seemingly only) source of the murder charges. I personally have no idea of Nick’s identity, or of his veracity, and it is possible that every appalling word he is uttering is grounded in truth. But based on the extensive media reports of the affair, I do have concerns.

I read, for instance, the accounts of the homicidal orgies attributed to the elite ring, in which at least one boy was strangled. This gives me a mighty sense of déjà vu because I know identical stories of actual, confirmed incidents that happened in London at this exact time and which have been known in the public domain for decades. Those crimes, though, involved a quite genuine pedophile crime network that was as far from “elite” as it was possible to be, a group of underclass trash who hung around fairgrounds to find child victims. They indeed killed repeatedly, in exactly the ways now credited to our “elite” perverts, and the similarity between those stories and the current charges bothers me. If someone were inventing “pedophile ring” crimes, this is what they would come up with.

Recently, one of the leading detectives in the renewed investigation remarked that “I believe what Nick is saying to be credible and true.” Based on reports to date, police have never referred to any actual corroboration of the charges, any piece of evidence that Nick gave that he would not have known if he had not been present at these crimes. Rather, we hear repeatedly of his “credibility,” a word that is thoroughly subjective: “I believe.”

When I say that X is "credible," what I mean is that I find what he has to say believable, and that fact depends as much on my willingness to accept his statement as on any quality in his character or demeanor. This is a familiar theme in contemporary American debates over sexual assault, as when Rolling Stone found a witness who recounted fraternity rape stories, declaring her "credible" because it fitted their ideological needs to do so. Editors and journalists simply wanted and needed to believe. Seeking corroboration was unnecessary, and the mere suggestion of doing so would have blamed and demeaned the victim.

In Britain, too, there are ample reasons why authorities would now find Nick "credible" in the way they would not have done a decade or so back. The main new factor is the appalling case of disc jockey Jimmy Savile, who used his celebrity status to carry out a career of rape and molestation lasting half a century. Since 2012, desperately anxious to avoid new attacks on their integrity and competence, law-enforcement agencies have sought out and prosecuted celebrity sexual crimes from bygone years, commonly relying on the uncorroborated testimony of reported victims and survivors.

Sometimes, this exhumation of past horrors has undoubtedly served the cause of justice, but questions remain. Should an individual really be tried and convicted on the unsupported, uncorroborated evidence of alleged victims who report crimes from 30 or 40 years ago? Surely, we can now point to enough cases where such testimony has proved to be wholly fictitious, even malicious, and where real injustice resulted. Witnesses fantasize, and witnesses lie.

Perhaps British politicians of the Thatcher era were indeed sexual monsters. But we should pause before accepting what, on its surface, looks like a deranged fantasy.

Philip Jenkins is the author of Images of Terror: What We Can and Can’t Know About Terrorism. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

‘False Flags,’ Charlie Hebdo, and Martin Luther King


Seeking to explain recent terror attacks in France, conspiracy theorists have resorted to very familiar culprits: the Jews did it, specifically the mystical supermen of Israel’s Mossad. Such a theory is stupid and scurrilous, as well as self-evidently incorrect on many grounds. That said, the Paris terror spree does raise significant questions about how we assign responsibility for terror attacks and what we can and can’t know by looking at the foot soldiers who carry out the deeds. Nor are debates over false claims and attributions wholly foreign to American history.

The most likely reconstruction of the Charlie Hebdo attack places primary blame on the Yemen-based al-Qaeda affiliate, Al-Qaeda in the Arabian Peninsula (AQAP). Al-Qaeda wanted to carry out a spectacular in order to distract attention from the enormous successes enjoyed recently by its upstart rival, ISIS, in Iraq and Syria. Only thus, thought al-Qaeda leaders, could the group recapture some of its old momentum and credibility. Accordingly, two of the militants involved made a point of yelling their support for AQAP in the streets they had turned into a battleground. Their accomplice, though, who stormed a kosher market, was so far from understanding the wider agenda that he publicly proclaimed his own fealty… to the ISIS Caliphate. Oops.

In itself, the gulf between generals and foot soldiers is not hard to grasp. Even in regular armies, ordinary privates rarely have much sense of the broad strategic goals motivating their campaigns, although at least they can be sure about which nation they are actually serving. Such certainty is a luxury in terrorist conflicts, where individual cells and columns might find themselves contracting for a bewildering variety of paymasters. This degree of disconnect can be potentially useful for anyone seeking to manipulate a cause. A group can recruit uninformed militants as muscle to undertake a particular attack, which can serve wider goals utterly beyond the comprehension of those rank-and-file thugs. This might mean discrediting some other rival cause or else achieving a desired goal without suffering any direct stigma for committing the deed. Such pseudonymous actions thus offer deniability.

That brings us back, perhaps, to one of the most notorious crimes of 20th-century American history.

In April 1968, Martin Luther King Jr. was assassinated in Memphis. Despite multiple claims through the years, we can confidently say that the assassin was a petty criminal and armed robber named James Earl Ray, who fled the country before being arrested in London. Ray’s motives have been much debated, but a congressional investigation in the 1970s assembled extensive (if confusing) evidence that cliques of Southern racists and white supremacists had conspired to kill King, using Ray as a low-level subcontractor.

That might be true, but it is not what Ray himself admitted. In 2001, British authorities released information about the arrest and detention of Ray, material that has been largely ignored in the United States. During his British stay, he offered none of the lengthy defenses and denials of the shooting that he would later maintain. Rather, he talked freely about the King murder, and he even suggested the culprits who might have arranged the killing. Instead of white supremacists, though, Ray’s main candidates for the principals in the conspiracy were the Black Muslims, the Nation of Islam followers of Elijah Muhammad.

Let me say immediately that the fact that Ray said this does not of itself constitute weighty evidence for the existence of any conspiracy, let alone its nature. Ray, who died in 1998, was anything but a reliable witness. He was at best a contractor in the killing, and even the Ray family’s legal representatives spoke scathingly of the general intelligence of Ray and his circle. And the fact that Ray made such remarks does not even mean that he necessarily believed them. Perhaps he was making mischief.

Odd as it may sound in retrospect, though, the Black Muslim theory is not ridiculous, and it is quite as plausible as the white supremacist angle. We need to think back to a time when the U.S. had been racked by escalating race riots for several summers. Even responsible observers were forecasting outright race war, with cities partitioned between armed black and white militias. Radical black separatists stood to gain from a sensational act that would polarize and divide the races still further. Enlisting a white man to kill Martin Luther King would be a lethally effective form of dissimulation, and it was also wholly deniable. Elijah Muhammad had a track record of involvement in violence, and he is widely held responsible for ordering the assassination of Malcolm X in 1965.

If the London evidence had been better publicized in the 1970s, it would presumably have been more thoroughly investigated, and that might possibly have pointed to interesting connections. Lacking such an investigation, however, we really can add little to what is already known about King’s death: no reasonable person would build a new conspiracy theory solely on the shifting sands of James Earl Ray’s often-changing testimony. I am certainly not claiming any grand breakthrough in the case.

But from the point of view of terror investigations, the Ray affair contradicts so many of our regular assumptions. When individuals X and Y launch an attack, the media will direct all their efforts to determining what made them do it, and how they became so fanatically devoted to their cause. The problem is that the people pulling the triggers do not necessarily know much about the wider causes for which they are fighting. And what they do know might be totally wrong.


Saddam’s Strategy Against ISIS


It does not take great powers of prophecy to discern the outcome of the latest U.S. intervention in Syria and Iraq. Soon, ground forces will become more directly involved. Fighting bravely and intelligently, those forces will win many victories, although at a high cost in battle casualties and terrorist outrages. Meanwhile, Islamic State forces only have to stay on the defensive until the patience of the U.S. public becomes exhausted, prompting another undignified American withdrawal in 2016 or 2020. Islamists will then regain power, just as the Taliban will almost certainly do in Afghanistan. Americans will be left scratching their heads seeking to explain another strategic failure.

Actually, American or other Western forces could win such wars very easily, obliterating their enemies to the point where they would never rise again. The problem is that they could do so only by adopting tactics that Americans would find utterly inconceivable and intolerable—in effect, the tactics of Saddam Hussein. Yet without these methods, the West is assuredly destined to lose each and every one of its future military encounters in the region. I emphatically do not advocate these brutal methods. Rather, I ask: if the U.S. does not plan to fight to win, why does it become embroiled in these scenarios in the first place?

To illustrate the principles at work, think back to the attack on the U.S. compound in Benghazi in 2012. Ordinary Libyans were furious at the killing of an American diplomat they respected greatly, and they struck hard at the terror groups involved. With dauntless courage, they stormed the militia bases, evicting many well-armed Islamist fighters. Explaining his fanatical behavior under fire, one of the attackers was quoted as saying “What do I have to fear? I have five brothers!” As in most of the Muslim world, whether in the Middle East, North Africa, or South Asia, people operate from a powerful sense of family or clan loyalty, with an absolute faith that kinsmen will avenge your death or injury. That process of vendetta and escalating violence continues until the family ceases to exist. As a corollary, the guilt of one is the guilt of all. An individual cannot shame himself without harming his wider family.

Through the centuries, that basic fact of collective loyalty and shared responsibility has absolutely shaped the conduct of warfare in the region. It means, for instance, that governments disarmed rivals by taking members of their families as hostages for good behavior. Those hostages were treated decently and honorably, but their fate depended on the continued good conduct of their kinfolk. Governments kept order by deterrence, enforced by the ever-present threat of collective retaliation against the kin-group and the home community of any potential insurgents. As individuals scarcely matter except as components of the organic whole of family and community, nothing prevents avenging the misdeeds of one man on the body of one of his relatives or friends.

Everyone in the region understands the collective principle, which was powerfully in evidence during the Lebanese civil war of the 1980s. If a militia kidnapped one of your kinsmen or friends, you could only save his life if you very quickly grabbed a relative of one of the culprits, and thus began negotiations for a swap. If your kinsman was already dead, then further atrocities could only be pre-empted by swift retaliation against the kidnapper’s family. So you have five brothers? Well, we will track them all down, one by one.

Only slowly did local Beirut fighters realize that the Americans were actually naïve enough not to target the relatives of kidnappers, even when they knew perfectly well who the guilty men were. That insight—the knowledge that you could target those foreigners without risking your brothers or cousins—was what led to the hostage crisis of the Reagan years, which almost brought down the U.S. presidency. The Russians, by the way, enthusiastically played by local rules, retaliating savagely against the brothers and cousins of those who laid hands on one of their own. In consequence, the Russians suffered only one kidnap crisis, before establishing a successful balance of terror.

Once we understand that principle, even the seemingly intractable problem of deterring suicide attacks actually becomes simple. An individual—a Mohammed Atta in New York, a Mohammad Sidique Khan in London—might in his last moments dwell on nothing but the glories awaiting him in Paradise. Why should he hesitate to kill? Matters would be utterly different if he knew that his act would bring ruin to his family and neighbors: the violent death of all his kinsmen and the extirpation of his bloodline.

A dictatorial regime like Saddam’s had not the slightest problem imposing such a group punishment, and extending it to every woman and child of that family. Western forces have always been far more principled, but even the colonial empires were quite prepared to inflict collective punishments on the towns or villages that produced notorious rebels. When Israeli soldiers today demolish the houses of terrorists’ relatives, they are treading in familiar British footsteps.

Today’s Islamic State pursues an extremist ideology in which there are literally no limits to cruel or outright evil behavior. The only thing its fighters have to fear is death, and they have been taught to welcome it. Short of introducing some mighty new deterrent factor, conventional military operations against them are wildly unlikely to succeed. Quite the contrary: endemic wars will generate ever more fanatics.

In theory, a recipe does exist for decisively ending the Islamists’ run of victories. Through means of collective and family punishment, which explicitly targets individuals who have done no wrong, governments and armies must introduce a brutal deterrent regime that will even outweigh the massive temptations of martyrdom and an instant road to Paradise.

No U.S. government would ever introduce such a policy, and if it did, it would cease to be anything like a democratic society. The U.S. could only adopt such avowedly terrorist methods following a wrenching national debate about issues of individual and group responsibility, and the targeting of the innocent. Could any U.S. government avowedly take hostages? We would be looking at a fundamental transformation of national character, to something new and hideous. But what other solutions could or would be possible?

Given that U.S. administrations are not going to fight the Islamic State by the only effective means available—and thankfully, they aren’t—why are they engaging in this combat in the first place?

Why start a war when you don’t plan to win it?


The Case Against a Unified Kurdistan

Daniel Pipes has announced his conversion to the cause of an independent Kurdistan, to be built on the foundations that ethnic group has established in Northern Iraq. In the 1990s, he says, he doubted the idea on multiple grounds, not least that “it would embolden Kurds to agitate for independence in Syria, Turkey, and Iran, leading to destabilization and border conflicts.” Now, though, he greets the prospective new nation with a hearty “Hello, Kurdistan!”

As the U.S. becomes ever more deeply involved against ISIL, we are going to hear many such calls to support a free Kurdistan. By the standards of the region, the Kurds are undoubtedly the good guys, the closest thing we might have to an actively pro-Western state. The problem is that defining this nascent Kurdistan is a fiendishly difficult project, which at its worst threatens to spread massacre and ethnic cleansing to parts of the region that are presently relatively safe. Actually, we should listen closely to the wise words of the unreconstructed Pipes, version 1.0.

You can make an excellent case for supporting the independence of a Kurdistan in roughly its present location in Northern Iraq. But the Kurdish people are spread widely over the region, with communities in Syria, Iran, and Turkey, and the eight million Iraqi Kurds constitute only a quarter of the whole.

With commendable frankness, Pipes takes his ambitions to the limit. As he asks, “What if Iraqi Kurds joined forces across three borders—as they have done on occasion—and formed a single Kurdistan with a population of about thirty million and possibly a corridor to the Mediterranean Sea?” He presents a map of the new mega-Kurdistan, which is produced by “partially dismembering its four neighbors.” Yes, he says, this would dismay many, but the region “needs a salutary shake-up.”

This is not dismaying, it’s actively terrifying.

As Syria and Iraq are already in dissolution, little additional damage would be caused by tearing off extra fragments of their territory. In Iran, though, any attempt at Kurdish secession would of necessity generate a bloody civil war, but that prospect does not deter Pipes: secession “would helpfully diminish that arch-aggressive mini-empire.” Turning relatively stable Iran into a fragmented failed state would be music to the ears of U.S. and Israeli hawks, but it is a recipe for escalating carnage for decades to come.

But it is in Turkey that any Kurdish ambitions meet a massive reality check. The country has 15 million Kurds, around a fifth of the whole population, spread over the southeastern third of the country. Turkey’s revolutionary PKK, the Kurdish Workers Party, is an extremely active and dangerous movement, and its decades-long nationalist guerrilla struggle is currently on hiatus. While rightly stressing that the Kurdish state has rejected the terrorist tactics used by Turkish groups, Pipes specifically notes schemes by the Kurdish military to ally with the Turkish Kurds, and his imagined mega-state incorporates huge swathes of present Turkey.

A renewed secessionist movement in Turkey would be catastrophic. It would cause many thousands of deaths and cripple one of the region’s most successful societies. Beyond civil conflict and terrorism, expect a rash of outright wars between the new and emerging mini-states. Violence would likely spread into Turkish and Kurdish communities in Western Europe.

Why on earth does Pipes think such an outcome is worth risking? The only seeming benefit is to punish Turkey’s President Erdoğan, who has shown undemocratic ambitions. More to the point, though, he has become a harsh critic of Israel and of Western policies in the Middle East. As Pipes writes, “Kurds’ departing from Turkey would usefully impede the reckless ambitions of now-president Recep Tayyip Erdoğan.” Even if you assume the very worst of Erdoğan, he still falls very far short of the region’s dictators and demagogues, making Pipes’s proposed solutions wildly disproportionate, and, yes, reckless.

A salutary shake-up is one thing. Provoking a regional cataclysm is quite another.


The Paranoid Style in Liberal Politics

As a shrewd cultural critic, Alan Wolfe is always worth reading. Recently, though, he made an unfortunate diversion into the realm of necromancy, raising the shades of unwanted and unneeded dead theories. In a recent issue of the Chronicle of Higher Education, Wolfe discussed how far Richard Hofstadter’s theory of the Paranoid Style could be applied to contemporary U.S. politics. It would be sad if Wolfe’s imprimatur inspired any revival of a fatally flawed, but long influential, theory.

Richard Hofstadter was a Columbia University historian, whose best-known books were Anti-Intellectualism in American Life (1963) and The Paranoid Style in American Politics (1965). The title essay in the latter book originally appeared in Harper’s at the time of the 1964 election. A classic JFK liberal, Hofstadter used his historical skills to analyze what he saw as the political menaces of his day. He described the beliefs and rhetoric of Barry Goldwater and what he termed the radical Right with about as much balance and intuitive sympathy as an al-Qaeda spokesman expounding U.S. policy in the Middle East. Hofstadter located contemporary Right-wing views in a deep-rooted and ugly tradition of hatred, xenophobia, Nativism, and racism, traceable to colonial times. (He always spoke of the Right: conservatism might in theory be acceptable, but America, in his view, had no “true” conservatives.)

Hofstadter saw no point in trying to comprehend Rightism as a system of rational political beliefs. Rather, it was based on paranoid fantasies—delusions of persecution, visions of conspiracy, and messianic dreams of absolute victory in a future that would vindicate all present excesses. Only the word “paranoia” “adequately evokes the sense of heated exaggeration, suspiciousness, and conspiratorial fantasy.” All these views, ultimately, were grounded in irrational fears, of projections of the troubled self. Drawing on the faddish therapeutic creeds of the time, Hofstadter presented Rightism as a pathological disorder. “Paranoia,” in his usage, was not just a rhetorical label, but a certifiable personality disorder.

For Hofstadter, America’s political choice in 1964 could be summarized readily: we are liberal; you are mentally ill.


How a Shopping Mall Becomes a Killing Zone

This really is frightening.

Terrorist incidents tell us nothing new about human nature. We already knew that people are capable of horrendous violence, especially when they have come to regard some other subset of human beings as unworthy of full human status. It’s not surprising, then, to see the terrorists of Somalia’s loathsome al-Shabaab movement violating all laws of humanity by slaughtering innocent victims of all ages. People can become monsters, and they did in the Nairobi mall attack that began on September 21.

What really is alarming, though, is to see terrorists create a radical new tactic against which there is no obvious response or defense. There was nothing surprising, for instance, in the idea that terrorists might hijack airliners, but only in 2001 did we realize that hijackers might use them for suicide attacks, turning those aircraft into deadly missiles. Nairobi has just shown us another horrible innovation. It might be that we won’t realize how effective this could be against the U.S. until we face yet another day when we are counting the dead in their hundreds. We have to confront this issue immediately.

Think about it. How would one attack a shopping mall, whether in Nairobi or Minneapolis? Presumably a number of pickup trucks draw up in the parking lot, and 20 or so armed men and women get out, carrying their weapons and ammunition. Then they enter the mall and begin killing until they can do no more harm. They are strictly limited by the number of bullets and grenades they can carry. When police and military forces arrive, the terrorists might hold out for an hour or two before being eliminated.

That’s one way to do it, but it’s clearly not what happened in Nairobi, where firefights were still in progress several days after the initial assault. Even more amazing, terrorists were still putting up resistance against strong Kenyan forces, reputedly trained and assisted by British and Israeli special forces.

How on earth did the terrorists do it? Why, they rented a store.


Syria’s Christians Risk Eradication

U.S. policy towards Syria is bafflingly inconsistent. If U.S. leaders are so concerned about regimes slaughtering thousands of their own people, did they notice what just happened in Egypt? If they are so exercised about weapons of mass destruction, are they aware that Israel has two hundred nuclear warheads, with delivery systems? Will American warships in the region be making those other stops on their liberating mission?

Most puzzling of all, though, is why the United States seems so determined to eradicate Christianity in one of its oldest heartlands, at such an agonizingly sensitive historical moment.

Syria has always been a complex place religiously. Although the country has a substantial Sunni Muslim majority, it also has large minority communities—Christians, Alawites, and others—who together make up over a quarter of the population. Those communities have survived very successfully in Syria for centuries, but the present revolution is a threat to their continued existence.

Sadly, Westerners tend to assume that Arabs are, necessarily, Muslims, and moreover, that Muslims are a homogeneous bunch. Actually, 10 percent of Syrians are Alawites, members of a notionally Islamic sect that actually draws heavily from Christian and even Gnostic roots: they even celebrate Christmas. Locally, they were long known as Nusayris, “Little Christians.” Syria is also home to several hundred thousand Druze, who are even further removed from Sunni orthodoxy.

And then there are the Christians. If Christianity began in Galilee and Judea, it very soon made its cultural and intellectual home in Syria. St. Paul famously visited Damascus, and for centuries Antioch was one of the world’s greatest Christian centers. (The city today stands just over the Turkish border.) A sizable Christian population flourished under Islamic rule, and continued under the Ottomans. Muslim and Christian populations always interacted closely here. A shrine in Damascus’s Great Mosque claims to be the location of John the Baptist’s head.

Christian numbers fluctuated dramatically over time. A hundred years ago, “Syria,” broadly defined, was home to a large and diverse Christian population, including Catholics, Orthodox, and Maronites. In the 1920s, the French arbitrarily carved out the country’s most Christian sections and designated that region “Lebanon,” with its capital at Beirut.

In theory, that partition should have drawn a clear line between Christian Lebanon and non-Christian Syria. But Syria itself was changing in the aftermath of the catastrophic events of the First World War. The year 1915 marked the beginning of the horrendous genocide of perhaps 1.5 million Armenians, as well as hundreds of thousands of Assyrians, Maronites, and other Christian groups. After the war, Christians increasingly concentrated in Syria, where they benefited from French protection.

Arab Christians, though, were anything but imperial puppets. Determined to avoid a repetition of the horrors of 1915, Christians struggled to create a new political order in which they could play a full role. This meant advocating fervent Arab nationalism, a thoroughly secular order in which Christians and other minorities could avoid being overwhelmed by the juggernaut power of Sunni Islam. All Arab peoples, regardless of faith, would join in a shared passion for secular modernity and pan-Arab patriotism, in stark contrast to reactionary Islamism. The pioneering theorist of modern Arab nationalism was Damascus-born Orthodox Christian Constantine Zureiq. Another Orthodox son of Damascus was Michel Aflaq, co-founder of the Ba’ath (Renaissance) Party that played such a pivotal role in the modern history of both Iraq and Syria.

Since the 1960s, Syria has been a Ba’athist state, which in practice has meant the hegemony of the religious minorities who dominate the country’s military and intelligence apparatus. Hafez al-Assad (President from 1971 through 2000) was of course an Alawite, but by the 1990s, five of his seven closest advisers were Christian. His son Bashar is the current president, and America’s nemesis in the region.

Quite apart from their political influence, Christians have done very well indeed in modern Syria. Although they try to avoid drawing too much attention, it is no secret that Aleppo (for instance) has a highly active Christian population. Christian numbers have even grown significantly since the 1990s, as Iraqis fled the growing chaos in that country. Officially, Christians today make up around 10 percent of Syria’s people, but that is a serious underestimate, as it omits so many refugees, not to mention thinly disguised crypto-believers. A plausible Christian figure is at least 15 percent, or three million people.

To describe the Ba’athist state’s tolerance is not, of course, to justify its brutality, or its involvement in state-sanctioned crime and international terrorism. But for all that, it has sustained a genuine refuge for religious minorities, of a kind that has been snuffed out elsewhere in the region. Although many Syrian Christians favor democratic reforms, they know all too well that a successful revolution would almost certainly put in place a rigidly Islamist or Salafist regime that would abruptly end the era of tolerant diversity. Already, Christians have suffered terrible persecution in rebel-controlled areas, with countless reports of murder, rape, and extortion.

Under its new Sunni rulers, minorities would likely face a fate like that in neighboring Iraq, where the Christian share of population fell from 8 percent in the 1980s to perhaps 1 percent today. In Iraq, though, persecuted believers had a place to which they could escape, namely Syria. Where would Syrian refugees go?

A month ago, that question was moot, as the Assad government was gaining the upper hand over the rebels. At worst, it seemed, the regime could hold on to a rump state in Syria’s west, a refuge for Alawites, Christians, and others. And then came the alleged gas attack, and the overheated U.S. response.

So here is the nightmare. If the U.S., France, and some miscellaneous allies strike at the regime, they could conceivably so weaken it that it would collapse. Out of the ruins would emerge a radically anti-Western regime, which would kill or expel several million Christians and Alawites. This would be a political, religious, and humanitarian catastrophe unparalleled since the Armenian genocide almost exactly a century ago.

Around the world, scholars and intellectual leaders are debating how to commemorate the approaching centennial of that cataclysm in 2015. Through its utter lack of historical awareness, the United States government may be pushing towards not a commemoration of the genocide but a faithful re-enactment.

Even at this late moment, can they yet be brought to see reason?


After al-Qaeda

NEW PRESIDENT DECLARES VICTORY IN WAR ON TERROR—Patriot Act to be Repealed—Department of Homeland Security for Dissolution.

This will not be a headline in 2013, or anytime thereafter, because by its nature the War on Terror can have no end. If you are fighting a war, then you can envisage a victory in which the opposing force is destroyed. In the case of terrorism, particular movements might decline or vanish—and happily, al-Qaeda itself is on a downward trajectory—but terrorism as such is not going away.

Terrorism is a tactic, not a movement. As such, it can be deployed by states, movements, or small groups regardless of ideology. It is not synonymous with Islam, nor with Islamism. That runs contrary to the thinking of many supposed experts and media commentators, who see Islamic terrorism as the definitive form of the phenomenon. As Dennis Prager writes, “A very small percentage of Muslims are terrorists. But nearly every international terrorist is Muslim.” In this view, Islamist organizations are the standard by which all terror groups must be measured, the model imitated by rivals. If terror has a history, it will be found in the Islamic past—shall we start with the medieval Assassins? Or better, just list the index entry: “Terrorism: See Jihad”?

In reality, terrorism in its modern form has a long history in the West—over a century—but not until the 1980s did Islamists play any role, and virtually never as innovators or leaders. The history of terrorism is strikingly diverse, with perpetrators of every race, creed, and color. The modern phenomenon probably begins in the 1880s with Irish bomb attacks against England and with Russian leftists and European anarchists of the 1890s pursuing their cult of the bomb.

More recently, the decade or so after World War II was an era of notable creativity, as Zionist extremists pioneered many new strategies—truck bombs directed against hotels and embassies, attacks against buses and crowded public places. For a time, Zionist groups also led the way in international terrorism, with letter-bomb attacks on British soil, the bombing of the British embassy in Rome, and plots to assassinate foreign dignitaries such as German Chancellor Konrad Adenauer. The Algerian struggle of the 1950s popularized these innovations and spawned yet others.

But the golden age of terrorism occurred between 1968 and 1986. Then as now, Arab and Middle Eastern causes drove a wave of global violence, making the “Arab terrorist” as familiar a stereotype then as it is today. Baby boomers recall the horrible regularity of waking up to hear of some new massacre of Western civilians, of kidnapping and hostage taking, and (with monotonous frequency) of attacks on airliners and transportation systems. They may remember the simultaneous hijacking and destruction of five airliners in Jordan in 1970—fortunately, without fatalities—or the massacre of Israeli athletes at the 1972 Munich Olympics.

Some attacks of this era stand out even today for their sadism and indiscriminate violence. In 1972, three Japanese tourists landed at Israel’s Lod Airport, where their nationality prevented them from attracting suspicion. They proved to be members of the Japanese Red Army, working in alliance with the Arab Popular Front for the Liberation of Palestine, the PFLP. Producing automatic weapons, they slaughtered everyone they could see in the terminal—26 civilians, mainly Christian Puerto Rican pilgrims. The following year, Palestinian guerrillas attacked Rome’s Fiumicino airport, throwing phosphorus grenades at an airliner and burning alive some 30 civilians. In 1974, Palestinian guerrillas killed 25 hostages in the Israeli town of Ma’alot. Horror was piled on horror.

The most notorious terrorist of the era was Palestinian mastermind Abu Nidal, as infamous in the 1970s and 1980s as Osama bin Laden has been in recent times. His career reached gruesome heights in the 1980s with a series of attacks that wrote the playbook for al-Qaeda. He specialized in simultaneous strikes against widely separated targets to keep security agencies off balance and win maximum publicity. Typical was the 1985 double-attack at the airports of Rome and Vienna in which 19 civilians were killed. Throughout the 1980s, the prospect of Abu Nidal obtaining a nuclear weapon alarmed intelligence services worldwide.

At this point, the identification of Islam with terrorism might appear to stand up well, with all these Arabs and Palestinians. Then as now, international terrorist actions tended to track back to the Middle East—but not to Islam. The militants of that era distanced themselves from any faith. Abu Nidal usually served Iraq’s secularist Ba’ath regime, which persecuted Islamists.

Like Abu Nidal himself, most Palestinian activists in those years were secular socialist nationalists, and Christians played a prominent role in the movement's leadership. The most important Arab guerrilla leader of those years—a pioneer of modern international terrorism—was PFLP founder George Habash. An Eastern Orthodox Christian by upbringing, he discarded his faith when he became a strict Marxist-Leninist, after Israeli forces expelled his family from their homes: "I was all the time imagining myself as a good Christian, serving the poor. When my land was occupied, I had no time to think about religion." Abandoning his church certainly did not mean adopting Islam: his inspiration was not some medieval Islamic warrior but rather Che Guevara.

Habash’s story is emblematic. Also Orthodox was Wadie Haddad, who orchestrated the Dawson’s Field attacks and the 1976 airliner seizure that provoked Israel’s raid on Entebbe. Haddad, incidentally, recruited the once legendary Latin American playboy who earned notoriety as international terrorist Carlos “the Jackal.”

Equally non-Islamist were the PFLP’s several spinoffs, like the Maoist Democratic Front, DFLP, which murdered the hostages at Ma’alot. That faction’s leader, Nayif Hawatmeh, was born Catholic. Several Palestinian attacks in these years sought to put pressure on Israel to release its most prestigious captive, Melkite Catholic Archbishop Hilarion Capucci, jailed for running guns to the guerrillas. Only in the late 1980s, after the rise of Hamas, did an Islamist group take the lead in armed assaults on Israel.

Earlier Middle Eastern movements had no notion of suicide terrorism, which was, moreover, unknown to the Islamist militant tradition before about 1980. The movement that used suicide attacks most frequently and effectively, the Tamil Tigers, is in fact Sri Lankan and mainly Hindu-Marxist. In other cases too, hideous terrorist actions we have come to associate with Islamic extremism have clearly non-Islamic roots. Think for instance of those unspeakable al-Qaeda videos depicting the ritualized execution of hostages in Iraq and elsewhere. To quote Olivier Roy, one of the most respected European scholars of Islamist terrorism, these videos are “a one-to-one re-enactment of the execution of Aldo Moro by the Red Brigades [in Italy in 1978], with the organization’s banner and logo in the background, the hostage hand-cuffed and blind-folded, the mock trial with the reading of the sentence and the execution.”

Through the 1970s and 1980s, terrorism was kaleidoscopic in its political coloring. White Europeans, on the left and the right, made their own contributions. During the 1970s, Italian far rightists and neo-Nazis tried many times to carry out a mega-terror attack on that nation’s rail system. After several bloody attempts, they succeeded in killing 85 at Bologna’s central station in 1980. The United States, meanwhile, had its own domestic terrorist violence, as Puerto Rican separatists carried out deadly bomb attacks in New York and Chicago. And after so many years, Irish terror groups, Protestant and Catholic alike, still pursued their age-old traditions of violence directed against rival civilians.

By no means was international terrorism the preserve of Arabs, let alone Muslims. In 1976, an anti-Castro rightist group based in Florida blew up a Cuban airliner flying from Barbados to Jamaica, killing 73. Prior to 9/11, the dubious record for the worst terror attack in history was held by the Sikh group that destroyed an Air India 747 in 1985, killing 329 innocent people. So commonplace were international attacks, and so diverse, that when a bomb killed 11 people at New York's La Guardia airport in 1975, the possible perpetrators were legion. (The current best guess points to Croatian opponents of Yugoslavia's Marshal Tito.)

Where, amidst all this bloodshed, were Islamist terror groups? They added little to the story prior to the rise of Hezbollah during the Lebanese civil war, with the bombing of the U.S. embassy in Beirut in 1983 and the subsequent attack on the Marine barracks. Only from the early 1990s do we find fanatical Sunni networks spreading mayhem around the world, including the early actions of al-Qaeda.

This chronology raises interesting questions for understanding the roots of terrorism. If Islam is so central to the phenomenon, we need to explain why Muslim terrorists should have been such latecomers. Why were they not the prophets and pioneers of terrorism? Why, moreover, did they have to draw all their tactics from the fighters of other religions and of none—from Western anarchists and nihilists, from the Catholic IRA and Latin American urban guerrillas, from Communists and fascists, from Zionist Jews and Sri Lankan Hindus?

Apart from the crucial element of suicide bombing, al-Qaeda brought little to the international terrorist repertoire. The Madrid rail station attack of 2004 neatly replayed the fascist strike at Bologna, while even 9/11 borrowed many elements straight from Abu Nidal, including the simultaneous targeting of multiple airliners. In its methods and strategies, the modern terrorist tradition owes much to the Marxist tradition—to Lenin, Guevara, and Mao—and next to nothing to Muslims.

None of these points should come as a surprise to anyone who remembers the 1970s and 1980s. In its day, the Dawson’s Field affair of 1970 transfixed global media almost as much as the 9/11 enormity did a decade ago. So did the Munich Olympics attack of 1972, or the 1976 saga of the hostages at Entebbe. It’s remarkable to see how readily modern audiences credit suggestions about the novelty of international terrorism or its association with Islamist groups. Particularly startling is how thoroughly Americans have forgotten their own terrorist crisis of the mid-1970s. How can something as horrendous as the La Guardia massacre have vanished from public memory? And is it really possible that the once satanic name of Abu Nidal carries next to no significance for anyone below the age of 50? There is no better illustration of how present-day concerns have eclipsed the older realities.

Terrorism can be used by groups of any ideological shade. The scale or intensity of terrorist violence depends on the opportunities available to militants and the potential opposition they face from law-enforcement agencies. By these criteria, Western nations will continue to be subject to attacks, and those events will follow precedents that we have witnessed over the past 40 years.

However hard we try, we cannot make our society invulnerable. The more we think about the gaps in our defenses, the more astonishing it is that incidents have occurred so rarely. If you fortify aircraft, terrorists attack airports; if you fortify airports, they can bring down aircraft with missiles; if you secure all aircraft, they attack ships; if you defend all public transportation, they undertake massacres in malls and sports stadiums.

Armed groups need only a handful of shooters and bombers to create havoc. The Provisional IRA probably never had more than 500 soldiers at any time, while the Basque ETA peaked around 200—supported, of course, by a larger penumbra of sympathizers. Both maintained campaigns spanning 30 or 40 years. A group of just 10 or 20 militants can keep a devastating effort going for a year, and until they are hunted down they can convince a powerful Western nation that it is suffering a national crisis.

No government can defend itself against terrorism solely by enhancing security. Ultimately, defense must always rely on effective intelligence, which means surveillance of militant groups and their sympathizers, infiltrating those groups, and winning over informants. The fact that attacks on U.S. soil have been so rare means that our intelligence agencies have been doing a pretty good job.

But there will always be vulnerabilities. However thoroughly agencies maintain surveillance on potential troublemakers, on occasion they will fail to mark those individuals who have made the transition from isolated blowhards to dedicated killers. By definition, they are most likely to err when confronting someone who does not fit the profile of the time—when, for instance, the suspect is a white Nazi rather than an Arab Muslim or vice versa. At some point a bomber or assassin—an Anders Breivik, a Timothy McVeigh, or a Mohamed Atta—will slip through, with catastrophic results.

We might call this the Apache Theory of terrorism. Of all the enemies the U.S. faced during its wars against Indian tribes in the 19th century, the Apaches were the most determined and resourceful. When nervous white residents of the Southwest asked, “How many Apaches are hiding in this room right now?” the answer was always, “As many as want to.” Will there be terrorism in the U.S. or Europe? If enough people want to perpetrate it, some will get through.

And who are the new Apaches who might someday surpass the Islamist menace? While prophecy would be foolhardy, we know enough about the history of terrorism to suggest some areas of danger.

One peril is that old causes now quiescent will again spring to life. In the United States, that could mean the ultra-right groups that have such a lengthy record of activism. Presently they are close to inactive, and the menagerie of largely harmless militia groups serves mainly to provide bogeymen for leftist speculation. But that could change overnight: Oklahoma City was the work of one cell.

European groups could also revive, especially if the continent descends into economic anarchy. Imagine poorer nations like Ireland and Greece driven to ruin by what they see as exploitation by Europe’s financial elite. Given the long experience of the Irish with direct militant action, do we think they will do nothing? Diehard IRA elements have for years threatened to renew their attacks on England, but their impact would be massively greater if they targeted European financial or political centers like Frankfurt or Strasbourg. Across the continent, economic collapse could reawaken ethnic hatreds we thought had perished with the Habsburg Empire.

Nor has Europe’s neo-fascist tradition vanished. Although the media treated Breivik as a loner, he stands in a long and bloody tradition, one especially strong in those southern European nations most vulnerable to financial collapse. In the 1970s and 1980s, both left- and right-wing militants in Italy made bizarre deals to obtain weapons from Middle Eastern sources, including Iran and Libya. Who is to say those connections are extinct?

Terrorism also continues to be a weapon of state power, a covert means for achieving goals that cannot be obtained through the open exercise of force. In different forms, state sponsorship has always been key to terror movements. Even in tsarist days, the Russians freely used terrorist proxies, and Mussolini's secret service, the OVRA, honed this tactic to a fine art. While the Soviet KGB was legendary for arming and funding extremist groups, it was absolutely not unique.

Some countries have even used the tactic as barefaced extortion. Through the 1980s, you could tell when an Arab Gulf state had fallen behind on money it owed Saddam Hussein because the mysterious “Abu Nidal Organization” would leap into action with an assassination or airliner bombing. When Mideast countries engaged in actual war—as Iran and Iraq did through the 1980s—they used their overseas proxies to promote clandestine goals. In retrospect, many of the terror attacks on European soil in the mid-1980s seem intended to persuade Western nations to supply arms to one or the other of the combatants in the Iran-Iraq conflict.

For some 40 years now, Libya, Syria, and Iran have sponsored surrogate terrorist movements worldwide. Arguably, the weaker those regimes become, the more likely they will be to use those proxies to strike out at opponents, including the United States. Of course, attacks will not carry a brand identifying the country responsible; a strike would come under cover of some bogus front or Islamist cell. As long as unscrupulous states wish to exert pressure on others—to embarrass them, to force them to take steps they do not want, to make their position in some region untenable—we can expect to see terrorism used as a form of proxy war.

That means terrorism will be with us as long as the world knows ethnic hatred and social division—which is to say, until the end of humanity. The phenomenon cannot be ended entirely, but individual movements certainly can be defeated and suppressed. And we should not imagine “terrorism” as a monolithic enemy that demands we militarize our whole society to meet the challenge.

Above all, we should not forget the lessons of the past. However appalling it might be to study individual groups or incidents, in the long term the story of terrorism contains a surprisingly positive lesson. Terrorism can inflict dreadful harm on a society, even claiming thousands of lives. But in the overwhelming majority of instances, these movements are not only beaten but annihilated—so thoroughly, in fact, that later generations forget they even existed.

Philip Jenkins, Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University, is the author of Images of Terror and Jesus Wars.


Three Martyrs

Please join with me in commemorating a group of three British Muslim martyrs. Seriously.

Haroon Jahan, Abdul Nasir, and Shazad Ali died Tuesday night in Birmingham's impoverished Winson Green area. After two days of rioting, looting, and casual arson, mainly by black gangs, the local community despaired of seeking help from a police force that was not making the slightest effort to intervene to defend them. As the small businessmen and shopkeepers of the area, the local South Asian community had the most to lose. Organizing from the local mosque, they dispatched groups of young volunteers to patrol the area. A speeding car hit a group of these community defenders, killing three. (The driver has been charged with murder.) The victims were classic hard-working immigrants: one was a mechanic, another ran a car wash. In the words of one observer, "They lost their lives for other people, doing the job of the police. They weren't standing outside a mosque, a temple, a synagogue or a church – they were standing outside shops where everybody goes. They were protecting the community as a whole."

If you have been following media coverage of the British riots, you have seen a great many explanations of the violence, including such classic theories as urban deprivation, youth unemployment, and anger at police racism, and all have some substance. What has been fascinating this time round is to see how even the most mainstream liberal outlets – even the New York Times – have focused on the vicious hooliganism and criminality driving the mobs, which are driven not by an inchoate rage against injustice but by strictly rational desires for high-class consumer goods. Some even remark on the growth of "feral" gangs of young people, black and white.


Myth of a Catholic Crisis

Is the Roman Catholic Church a cover for the world’s largest criminal sex ring? Over the past few months, a steady stream of news stories seems to have confirmed the bleakest possible vision of global conspiracy, the most extreme claims of anticlerical propaganda through the ages. Even moderate commentators are writing as if priests around the world have taken secret vows of conspiracy, perversion, and omerta. Worse, this deviance is allegedly built into the church’s structures of command and control. According to the darkest visions, clergy are almost encouraged to pursue careers of abuse and pedophilia, secure in the knowledge that their crimes will be sheltered by fellow molesters in the hierarchy, all the way to the Vatican itself, with Pope Benedict as the boss of all bosses. Suddenly, even the rants of Maureen Dowd and Katha Pollitt appear almost plausible.

If all this seems far-fetched, it is. Sexual abuse by clergy is a reality, and a real problem demands a response. But the problem is vastly different from that described so enthusiastically by the media, and most of the critical measures have already been taken.

Although the alleged crisis is now being portrayed in global terms, I will focus on the U.S. experience because this is by far the most intensely studied aspect. The American abuse scandal, now a quarter-century old, has produced rock-solid quantitative evidence that allows us to make general statements about abuse by clergy and to dispel myths.

Most tellingly, we can say one thing quite confidently, however strongly it goes against prevailing wisdom: there is no credible evidence that Roman Catholic clergy abuse young people at a rate different from that of clergy of any other denomination or from members of secular professions who deal with children. If anyone believes that such evidence exists, the burden is upon him to present it.

By far the best quantitative evidence derives from the survey carried out by John Jay College of Criminal Justice in New York in 2004, entitled “The Nature and Scope of the Problem of Sexual Abuse of Minors by Catholic Priests and Deacons in the United States.” Specifically, it examined all plausible complaints of sexual abuse by U.S. clergy between 1950 and 2002, a cohort of around 110,000 men. Although this study was sponsored by the U.S. Conference of Catholic Bishops, the researchers were independent, and the final report was widely praised.

By social science standards, this was an impressively thorough study, and the sample size was immense. Obviously, the John Jay researchers failed to detect many cases, including those that had not come to light by 2004, and other acts that would never be reported. But they worked hard to compensate for such omissions by using a strikingly low standard of proof for the allegations that were known. Investigators counted all charges "not withdrawn or known to be false," a standard that excluded only those allegations definitively disproven. The list thus includes allegations that would not have surfaced except in the furor of 2002-03, following the dreadful scandals in the Boston Archdiocese.

A couple of points leap out about the allegations, particularly about the image of the “pedophile priest” pursuing his decades-long career of crime under the de facto protection of the Church. The John Jay study concluded that in this period, perhaps 4 percent of all U.S. priests had been plausibly accused of at least one act of sexual misconduct with a minor. But of the 4,392 accused priests, almost 56 percent faced only one misconduct allegation, and at least some of these would certainly vanish under detailed scrutiny.

Very few of the accused priests were pedophiles, in the sense of having abused a minor under the age of puberty, say 12 or 13 for a boy. In the U.S. at least, the great majority of cases of sexual misconduct by priests involve older boys, often aged between 15 and 17, or even older. This behavior is illegal, harmful, and sinful, but it is not pedophilia. The technical name for this kind of act is ephebophilia, but many would call it pederasty or even homosexuality. Drawing this distinction certainly does not excuse or minimize the behavior, but it is critically important for understanding the statistics. Pedophiles are compulsive offenders who are highly likely to repeat their acts, often claiming hundreds of victims. The fact that true pedophile priests formed such a minority of offenders meant that the overall number of victims was mercifully far smaller than it might have been.

Pedophile priests certainly did exist, but in tiny numbers. At the heart of the clergy abuse crisis was a core of highly persistent serial pedophiles, who massively "over-produced" criminal behavior, and some were the targets of hundreds of plausible complaints. Out of the roughly 110,000 priests active in the U.S. in this half-century, a cadre of just 149 individuals—one priest out of every 750—accounted for over a quarter of all the allegations of clergy abuse. These 149 super-predators also explain the surprisingly large number of very young victims that the study reported. The average age of victims for the whole era has been gravely distorted by the sizable number of children assaulted by these reprehensible serial pedophiles.

Nor was clerical misconduct a persistent or steady-state phenomenon, as we would expect if abusive behavior resulted inevitably from the agonies of the celibate lifestyle. In the U.S. at least, recorded malfeasance was quite rare until an explosion of criminal activity in one short period, namely between 1975 and 1980. These six years accounted for an astonishing 40 percent of all the alleged acts of clerical abuse for the 52-year period under examination. Just why these years were so horrific is open to debate, but there seems to have been a sharp decline in the moral and disciplinary controls that higher authorities exercised over priests. Clergy in the 1970s were also vulnerable to powerful social pressures encouraging sexual experimentation and to the sense that old injunctions against adultery or pederasty were destined to perish in the new age of ethical relativism; some priests succumbed to temptation. Of the priests ordained in the year 1970, a startling 10 percent would ultimately be the focus of abuse allegations. But the crisis was a byproduct of a specific historical era, not of some essential quality of the clerical status or of the Church's structures.

Let’s put all this in context. In any given year between 1950 and 2002, the Catholic Church in the United States averaged around 50,000 priests, serving 45 to 55 million members. Assuming all the charges reported by the Jay study were true, then each year, an average of around 200 children were abused or molested by priests nationwide. Obviously, given what we know about the under-reporting of molestation, that figure must be a gross underestimate, and even if it was not, the problem would still be appalling: 200 instances of priestly victimization is 200 too many. But the documented evidence for clerical crime is far less extensive than is widely believed. Even in the overheated and litigious atmosphere following the Boston scandals, the Jay study reported no allegations against 24 priests out of every 25.

To say that X percent of Catholic priests might have engaged in abuse or molestation might be troubling, but the figure is meaningless unless we can compare it with some other group. Suppose, for the sake of argument, that we could say confidently that priests abuse at a rate 10 or 100 times larger than Presbyterian ministers or Jewish rabbis or than the male population as a whole. Then we could begin to seek the roots of the Catholic problem, whether we located them in the fact of celibacy or in the secretive clerical subculture. Unfortunately, we have not the slightest point of comparison with any other group. As a result of the furious investigations of the past decades, and particularly the Jay study, the U.S. Catholic clergy are now the only major group on the planet that has ever been subjected to such a detailed examination of abuse complaints, using internal evidence that could not have come to light in any other way. Nothing vaguely comparable exists for other groups, for Presbyterian pastors or Lutheran clergy or, indeed, journalists.

Actually, that is not entirely true. Before commenting on the priestly situation, any observer should read the writings of Professor Charol Shakeshaft of Virginia Commonwealth University, who for years has been studying sexual and physical abuse by America's public-school teachers. The volume of misconduct she reports is staggering and far exceeds the rate of documented abuse by Catholic clergy. Hard as it is to imagine, public schools sometimes deal with their problem faculty by quietly transferring them to other institutions without warning the new employers of the dangers they face. It sounds a lot like the worst charges against Catholic dioceses, doesn't it? Thank heaven we don't worry too much about the sexual dangers facing our children in the schools, or else we might have to think seriously about this issue.

So if Catholic priests are no worse than other professions in this regard—and maybe a lot better—why do we hear so much about them being abusers? Several reasons explain this focus, none of which necessarily reflects any anti-Catholic bias in courts or media. By far the most important factor involves the way in which cases come to light, which is through civil litigation. An individual accuses a particular priest of abuse, and quite possibly, the charge is perfectly true. Lawyers then use that case as a means of forcing a diocese to disclose ever more information about past charges against other priests, which might date back into the 1940s or '50s and which can also lead into other jurisdictions. One case thus becomes the basis for a whole network of interlocking investigations, which proceed ad infinitum. The Catholic Church suffers acutely from its pack-rat character: it is a highly bureaucratic institution that prides itself on preserving the records of institutional continuity.

In contrast, imagine a charge against a Baptist or Pentecostal minister, who has no such institutional framework and little institutional memory, whose church has no deep pockets, so that the case begins and ends with him. Not to pick on any particular denomination, but stories of abuse by clergy of all sorts surfaced regularly through the 1990s, until most groups became massively more proactive in preventing and detecting abuse threats. Partly the new vigilance reflected intensified consciousness of threats to children, but at least as significant were the demands of insurance companies: either you adopt stringent new policies to safeguard minors, or kiss your liability protection goodbye. That was an offer no church could reasonably refuse.

For Catholics, though, with their distinctive structural set-up, the new environment offered no protection from old allegations that continued to surface, often involving alleged acts from 40 or 50 years ago. Even today, Catholic churches are still trying desperately to defend their actions in the distant past, when social attitudes to child sexual abuse were radically different from what we today regard as normal. In those bygone years, molestation was trivialized in both expert and public opinion, and offenders were commonly treated with kid gloves. Only the Catholic Church, however, is held to account for the decisions it took in this very different world of so long ago. Only the Catholic Church is subjected to the unforgiving standards of 20/20 hindsight.

Catholics, like other denominations, have made massive progress in preventing abuse by clergy. In the U.S. at least, very few of the cases that have come to public attention in the past few years refer to acts alleged to have occurred since 1990. Yet litigation resulting from earlier eras means that “pedophile priests” remain in the news almost daily, and that fact shapes (and mis-shapes) popular stereotypes.

Europe is not the U.S., and it is difficult to generalize across countries within Europe. Legal systems differ, as do social assumptions and sexual attitudes. Theoretically, it is possible to imagine that in some particular nation, the Catholic clergy became so vicious and corrupted that they preyed systematically on the young and conspired to hide their misdeeds. But any awareness of the American situation, and the florid mythology it has produced, must make us very careful about giving credence to any such nightmare interpretation.

Philip Jenkins is Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University. He is the author of Pedophiles and Priests: Anatomy of a Contemporary Crisis and, most recently, Jesus Wars.


Third World War

Nobody is sure how it started. Perhaps Christian activists sent text messages warning that Muslims were trying to poison them. Maybe Muslims tried to storm a church. Whatever the cause, the consequence this past January was mayhem for the Nigerian city of Jos. Muslim-Christian rioting killed up to 500 people before the government intervened with its customary heavy hand.

The most striking point about these battles was that nobody found them striking. In Jos, as in countless other regions across Africa and Asia, violence between Christians and Muslims can erupt at any time, with the potential to detonate riots, civil wars, and persecutions. While these events are poorly reported in the West, they matter profoundly. All the attention in the Global War on Terror focuses on regions in which the U.S. is engaged militarily, but another war is raging across whole continents, one that will ultimately shape the strategic future. Uncomfortably for American policymakers, it is a war of religions and beliefs—a battle not for hearts and minds but for souls.

This is not to argue for an irreconcilable Clash of Civilizations, still less a struggle between Christian good and Muslim evil. In any African country divided between the two faiths—and that includes most lands south of the Sahara—day-to-day interfaith relations are remarkably good. Many families are amicably divided between Christians and Muslims and take great care to avoid sources of conflict. Business or political meetings commonly begin with prayers, and it is no great matter whether a pastor or a mullah leads them.

Yet over the past century, the spread of new religious forms worldwide has created the potential for violence wherever a surging Christianity meets an unyielding Islam. Riots such as those in Jos are one result; terrorism is another. Generally, Muslims have been the aggressors in recent conflicts, but Christians have their own sectarian mobs and militias.

However blame is apportioned, the two faiths have been at daggers drawn, often literally, for decades. As Eliza Griswold discusses in her forthcoming book, The Tenth Parallel, you can trace the fault by following the latitude line of ten degrees North. (Jos, conveniently, stands almost exactly at ten degrees.) A tectonic plate of religious and cultural confrontation runs across West and Northwest Africa, through Southeast Asia, Indonesia, and the Philippines. A decade ago, Indonesia witnessed some of the worst fighting, as Muslim militias launched bloody assaults on that nation’s Christian minority, some 25 million strong. For decades, the overwhelmingly Christian Philippines has suffered constant insurgency from a ruthless armed movement concentrated in the Muslim south. Mob attacks and pogroms have raged in Malaysia. In Africa, the Sudan is probably the best-known theater of mass martyrdom, while Nigeria remains deeply polarized. And that is not to mention ongoing killings in countries like Uganda and Kenya.

Humanitarian concerns apart, there are plenty of reasons for the West to be deeply worried about these conflicts. Nigeria has almost 160 million people and by 2050 is expected to have 300 million, making it one of the world’s most populous nations. If it ever escapes from its present political horrors, it will be the obvious leader of sub-Saharan Africa. Nigeria also matters enormously in terms of natural resources. It is the third largest source of U.S. crude-oil imports, ahead of Saudi Arabia. Other up-and-coming oil suppliers in West and Central Africa are also among the religiously divided nations. Meanwhile Indonesia, with 240 million people, is already a population giant, and unlike Nigeria, it seems set for serious economic development in the coming decade.

If such massive countries ever became monolithically Muslim, that would be significant enough for the West, especially because these states wield such cultural influence over their neighbors. But if they fell into the hands of a radical form of Wahhabi or Salafist Islam, that would be an epochal catastrophe. Conversely, imagine a world in which Christians predominated in these influential Global South nations. That would decisively shift the world’s balance of forces in pro-Western directions.

The relationship between Christianity and Islam poses a challenge for at least half of the 20 nations expected to have the world’s largest populations by 2050. By present projections, three of these future mega-states—Nigeria, Ethiopia, and Tanzania—will be almost equally divided between the two faiths. In several others, like the Congo, the Philippines, Russia, and Uganda, predominantly Christian nations will have Muslim minorities of 10 percent or more. Mainly Muslim states will coexist with comparable Christian sub-populations in Indonesia, Egypt, and the Sudan. In all of these places, if relations between the faiths do not improve over the next 40 years, prospects for civil order are terrifying. The world’s roster of failed states would have several new members.

Why the hostility? What are Christians and Muslims fighting about in Nigeria and Malaysia, Uganda and the Philippines? Western readers will think back to Samuel Huntington’s Clash of Civilizations, a richly provocative idea. But the notion of a world divided among vast religious-cultural blocs assumes that these units remain fairly constant, so that tension occurs only along their periphery. Yet cultural blocs change dramatically within their borders as well, and we are presently living through a dizzying era of shifting boundaries.

Look at those rapidly growing countries and think of how these burgeoning Christian heartlands might have struck an observer in 1914. Why are Christians so numerous in Africa and Asia? Did the past century witness a global religious revolution? The answer, of course, is yes, however dimly Westerners may be aware of it. Muslims have certainly seen the trend. One factor driving Islamic militancy in many nations is the sense that Christianity is growing. Outside of the West, evangelism and conversion are two of the most sensitive issues in the modern world.

Christianity, which a century ago was overwhelmingly the religion of Europe and the Americas, has undertaken a historic advance into Africa and Asia. In 1900, Africa had just 10 million Christians, representing around 10 percent of the continental population. By 2000, that figure had swollen to over 360 million, or 46 percent of the population. Over the course of the 20th century, millions of Africans transferred their allegiance from traditional primal faiths to one of the two great world religions, Christianity or Islam—but they demonstrated an overwhelming preference for the former. Around 40 percent of Africa’s population became Christian, compared to just 10 percent who chose Islam. As Muslims had earlier far outnumbered Christians, the result was to transform a massive Muslim majority into a reasonably equal confessional balance. Africa today is about 47 percent Christian, 45 percent Muslim, and some 8 percent followers of primal religions.

To appreciate this transformation, consider Nigeria. In 1900, the lands that would become that nation were about 28 percent Muslim and 1 percent Christian. Confident in their numbers, Muslims did not need even to think about Christians as rivals. For Muslims, the pagan population represented an inferior state of being, peoples to be ruled and, often, enslaved. One day in the future, the heathens might join the modern religious world, but it would be the world of Islam. But then things went wrong. By 1970, Muslims had increased their share of the population to 45 percent. But that 1 percent Christian minority had expanded incredibly, also to 45 percent. A land that seemed firmly under Muslim hegemony was suddenly split down the middle.

The question now was just how much further Christian numbers could grow. If you extrapolate recent Christian growth into the near future, no Muslim majority seems safe, even in a place like Nigeria, where some polls in recent years have suggested an outright Christian majority. (More conservative estimates register around 46 percent.) Even nonpolitical Muslims worry: might their grandchildren be kaffirs? Worse, these newer Christians are not like the minority communities familiar in a Middle Eastern context, groups like the Egyptian Copts, who of necessity were politically quietist: the new African believers are dynamic and expansionist. The most successful follow energetic Pentecostal and evangelical forms of faith rather than the sober liturgical habits of older groupings.

The new believers draw on Western, and specifically American, forms of evangelism, marketing their faith through videos and DVDs. They organize crusades and mass meetings for prayer and healing that can draw 2 million believers together in a single venue. For nervous Muslims, the Christian threat was epitomized by the legendary “Jesus” video, originally a British film biography produced in 1979, but subsequently promoted around the world. As a weapon of mass instruction, it has few equals. Christians in Jos or Jakarta would approach Muslims and offer to show them a really interesting film about the prophet Jesus. Many accepted the invitation, and some then decided to follow the Christian way rather than the path of Islam.

Christianity also attracted independent-minded women. In traditional societies, conversion occurred when the head of a clan or family accepted a new religion and brought his kin with him. Now, when a patriarch accepted Islam, youngsters demurred, preferring to seek personal salvation in Christianity. And inconceivably, women even refused to accept arranged marriages to suitable Muslim men. Religious splits became family feuds, escalating the potential for malice and retaliation.

Few Asian countries have seen anything like the Christian growth that characterizes Africa, but here, too, religious change generates social tensions. In lands like Indonesia and Malaysia, Christianity has been associated above all with minority communities, especially the Chinese, whom majority Muslim groups hate and fear for being rich, clannish, and arrogant. Economic crises, such as the Asian financial crash of 1997-98, bring ethnic conflicts that take on a religious coloring.

In different societies, then, booming Christianity came to be associated with a variety of perils: the breakup of traditional communities, individualism, women’s independence, and everything associated with “the West”—libertarianism, sexual explicitness, and cultural aggression. When the Pentecostal movement reached full force, all these trends began to look like a juggernaut that might overwhelm familiar cultures. From an Islamic viewpoint, these things might be troubling enough if they were happening on the traditional Muslim-Christian frontier—say in the Mediterranean—but suddenly Christian expansion was accelerating in what should have been dependable Muslim territory.

This was the package of nightmares that faced Muslim communities from the 1970s onward, at exactly the time that a new countermovement, quite as radical in its own way, emerged from the Middle East. The key date was 1979, the year of the Iranian Revolution, but also of the radical coup against the Grand Mosque in Mecca. The Saudi regime survived that assault but in a chastened mood. Anxious to prevent a repeat performance, the Saudis made their devil’s bargain with the Islamists: go and do what you like around the world, and we will bankroll you, but stay out of our own beloved kingdom. That was the point at which Gulf oil money began rolling around the Muslim world, funding mosques and madrassas following the hardest of Islamist lines. By the end of 1979, the Soviet Union had invaded Afghanistan, sparking a war that would become a vehicle for training jihadis worldwide.

The outcome was a new and highly militant form of Islam, impatient with old-style moderate forms of faith and fanatically opposed to Christian incursions into continents seen as Muslim realms. For these militants, the growth of Christianity was proof of the failure of the old Muslim regimes. In the words of radical theorist Sayyid Qutb, these regimes had shown themselves infidels at heart, and it was up to true Muslims to condemn them as such (takfir) and remove themselves spiritually (make hijra) to a new and purer activism. In 1989, a revolutionary Islamist regime took power in the Sudan. The same year, at Abuja in Nigeria, a conference on Islam in Africa outlined a program for successful Islamization. That event entered Christian folklore, and one does not have to travel far on the continent to hear claims of all manner of secret plans to destroy Christianity across Africa and create a caliphate. If Islamists denounce the Christians as tools of America, Christians everywhere see the hand of Riyadh.

In many countries, Islamist sects formed militias, some affiliated with the nascent al-Qaeda. In 1993, for instance, Indonesian extremists formed the terrorist organization Jemaah Islamiyah, which would be responsible for the 2002 bombings that killed 200 in Bali. One of the deadliest anti-Christian groups in West Africa has been the al-Qaeda-linked “Nigerian Taliban,” known to themselves as the muhajiroun—those who make hijra.

When we see interfaith battles in Africa or Asia, we are generally not witnessing activism by al-Qaeda militants directed from some secret terrorist mission control, but we do find movements driven by exactly the same grievances that motivate bin Laden’s associates—above all, we see the same central fear of Christian expansion. For Muslims, whether political dissidents or actual Islamists, the world is evidently engaged in a culture war, a war of faiths, and groups like al-Qaeda are only one small and sensationalized portion of that. Christians likewise know the stakes. Educated African believers look back with trepidation at the great Christian churches that flourished in the northern regions of the continent 1,500 years ago, churches that would be snuffed out under Islamic rule. They are determined not to let that disaster be repeated.

This culture clash, so crucial to the fate of whole continents, has not impinged on the American consciousness. Stunningly, the crying need for interfaith peace in Africa and Asia featured not at all in Barack Obama’s much-touted speech in Cairo last June. Of course, American options are limited. The more that Western nations try to interfere directly in defense of Christians, the easier it is for Muslims to portray their enemies as imperialist agents. That is not a counsel of despair. American administrations can achieve something by pressuring allegedly friendly regimes like the Saudis to stop sponsoring anti-Christian propaganda across the Global South. But ultimately, resolving this conflict will depend on Africans and Asians themselves—if only Washington and Riyadh can refrain from pouring fuel on the hostilities.


Philip Jenkins is Edwin Erle Sparks Professor of History and Religious Studies at Pennsylvania State University.


Terror Begins at Home

Since the New Deal, fears of terrorism and subversion have played a central role in U.S. political life. But the ways in which government and media conceive those menaces can change with astonishing speed. Such tectonic shifts usually occur because of the ideological bent of the administration in power. When a strongly liberal administration takes office, it brings with it a new rhetoric of terrorism, and new ways of understanding the phenomenon.

Based on the record of past Democratic administrations, in the near future terrorism will almost certainly be coming home. This does not necessarily mean more attacks on American soil. Rather, public perceptions of terrorism will shift away from external enemies like al-Qaeda and Hezbollah and focus on domestic movements on the Right. We will hear a great deal about threats from racist groups and right-wing paramilitaries, and such a perceived wave of terrorism will have real and pernicious effects on mainstream politics. If history is any guide, the more loudly an administration denounces enemies on the far Right, the easier it is to stigmatize its respectable and nonviolent critics.

It’s difficult to understand modern American political history without appreciating the florid conspiracy theories that so often drive liberals, and by no means only among the populist grassroots. Time and again, Democratic administrations have proved all too willing to exploit conspiracy fears and incite popular panics over terrorism and extremism. While we can mock the paranoia that drives the Left to imagine a Vast Right-Wing Conspiracy, such rhetoric can be devastatingly effective—as we may be about to rediscover.

Long before Sept. 11, 2001, America experienced repeated outbreaks of concern over terrorism. In terms of shaping liberal perceptions, the most important was that of the FDR years, when anti-government sentiment spawned a number of extremist organizations. Some were “shirt” groups, modeled on European fascists—America, too, had its Black Shirts and Silver Shirts—while the German-American Bund attracted Hitler devotees. Isolationism and anti-Semitism drew some urban Irish-Americans into the Christian Front, while the Klan experienced one of its sporadic revivals. Beyond doubt, far-Right extremism did exist, and these movements had their violent side, to the point of organizing paramilitary training. A few plotted real terrorist acts.

But the public response was utterly out of proportion to any danger these groups posed. From 1938 through 1941, the media regularly presented stories suggesting that the U.S. was about to be overwhelmed by ultra-Right fifth columnists, millions strong, intimately allied with the Axis powers. (Actual numbers of serious militants were in the low thousands at most.) Reportedly, the militant Right was armed to the teeth and plotting countless domestic terror attacks—bombings in New York and Washington, assassinations and pogroms, the wrecking of trains and munitions plants. Plotters were rumored to have high-placed allies in the military, raising the specter of a putsch. The ensuing panic was orchestrated by newspapers and radio and reinforced by films, newsreels, and comic books. Historians characterize these years as the Brown Scare.

If the more bizarre accusations sound like the common currency of the show trials in Stalin’s Russia in these very years, that is no coincidence. The main exposés of fascist conspiracy emanated from Communist Party journalists like Albert Kahn and John Spivak. (Spivak himself was an operative for the Soviet NKVD.) Charges circulated through Kahn’s newssheet The Hour before being picked up in the liberal press. The Red agenda was straightforward: the Brown Scare allowed the Left to discredit any opponent of radical New Deal policies. Scratch the surface of any enemy of the Left, they claimed, and you would find a fascist spy, a lyncher, a storm trooper.

Leftist scaremongering worked to the advantage of a Roosevelt White House anxious to promote U.S. intervention in the coming war. The administration supplied many of the leaks that supported the Brown Scare, through Roosevelt aides like Harold Ickes and also the FBI. In 1940, the FBI announced that it had broken what it touted as a looming coup d’état by the Christian Front that would have been accompanied by murders, bombings, and pogroms. Meanwhile, FBI mole Avedis Derounian undertook the research that would lead to his 1943 bestseller, Under Cover, published under the pseudonym of John Roy Carlson. In both cases, however, the terrorist conspiracies were much less terrifying than they initially seemed. Try as it might, the government could never connect the Christian Front plot to more than a couple of dozen activists with no access to significant weaponry. Nor did Derounian’s revelations point to any serious conspiracy, and the government glaringly failed to convict national far-Right leaders on sedition charges.

However thin the underlying charges, the Brown Scare clearly helped to promote a New Deal agenda at home and interventionism overseas. For interventionists, the Terror Crisis suggested that fascist powers already were attempting to subvert America, forcing the nation to confront the foreign danger. Above all, the scare provided a powerful weapon for defaming anyone on the Right who opposed FDR’s drift to war. Targets included not only isolationist senators and congressmen but also the potent antiwar organization America First, which drew support from a broad and reputable cross-section of public opinion—conservative, liberal, and socialist, Catholic and Protestant. By 1941, though, the antiwar movement was battered by allegations of fascist and anti-Semitic ties. Under Cover portrayed America First as an aboveground front for the most extreme and lethal paramilitary fascist groups. As so often before and since, a burgeoning antiwar movement was crippled by charges that it was covertly allied with the nation’s enemies. So successful was this tarring that in popular memory, America Firsters stand alongside Nazis and Klansmen as traitors, subversives, and bigots. In terms of achieving its goals, the Brown Scare worked superbly.

Such scares have occurred twice since FDR’s day—in the 1960s and again in the 1990s. So similar are these later events that we can offer a kind of historical rule: whenever a liberal administration replaces a long-established conservative predecessor, that change will give rise to right-wing populist and paramilitary movements. And within a couple of years, those movements will provide the basis for grossly exaggerated panic over domestic terrorism.

After JFK’s election in 1960, the devoutly anti-Communist Minutemen took first place in liberals’ demonology. As in the 1930s, the far Right was supposed to be closely tied to out-of-control military officers. Remember fictional treatments of the time like “Dr. Strangelove” and “Seven Days in May”? Once more, too, the supposed threat from far-Right extremism surfaced in mainstream politics, especially during the 1964 elections. Most political observers know that Barry Goldwater was denounced for advocating “extremism in the defense of liberty.” Few know exactly what kind of extremism he was supposedly invoking. The ensuing controversy makes no sense except in the context of the John Birch Society, which was pushing the Republican Party to harder anti-Communist positions, and also the well-armed Minutemen. As in the 1930s, the extremists existed, and some hotheads contemplated violence. But once again, a yawning gulf separated the reality of the threat from the public perception.

The most recent right-wing terror crisis followed Bill Clinton’s election in 1992, when citizen militias attracted hundreds of thousands of sympathizers. Media warnings about armed extremism were already widespread by the time of the Oklahoma City bombing in 1995, a genuine far-Right atrocity that had nothing to do with the militias. Although neo-Nazi Timothy McVeigh scorned the political and religious values of the militias, they nevertheless bore the brunt of public outrage and media denunciation. Militia numbers swiftly collapsed, leaving only a tiny core, although one would hardly realize this from the press and television coverage of the years that followed.

Between 1995 and 2001, America suffered the Great Militia Panic, when exposés of ultra-Right violence became a media staple. For liberal press outlets, America was facing a clear and present danger from the militias, from Nazis and skinheads, and even from dissident elements within U.S. Special Forces. Liberals accused the anti-Clinton Right of providing extremists with ideological aid and comfort. An impressive outpouring of books—peaking in 1996—warned of an imminent terrorist disaster. Typical titles raised the shadow of America’s Militia Threat, Terrorists Among Us, or The Birth of Paramilitary Terrorism in the Heartland. One book warned of the Harvest of Rage: Why Oklahoma City is Only the Beginning. The news media was open to the most improbable charges of right-wing atrocities. In 1996, television news shows discovered a (wholly spurious) wave of arson attacks in which white extremists were allegedly wiping out the nation’s black churches.

As recently as a decade ago, “terrorism” in the American public consciousness meant, almost entirely, domestic right-wing activism. This was certainly the case in the fictional media, where filmmakers discovered to their cost that any treatment of Muslim or Middle Eastern misdeeds could provoke boycotts. How much easier, then, to choose notorious villains who lacked defense groups and antidefamation organizations. That generally meant white right-wingers. Militias, skinheads, and neo-Nazis became stock villains in the popular culture of the era. On television, countless police and detective shows dealt with ultra-Right villains, who were usually on the verge of releasing weapons of mass destruction against a decent, liberal America too naïve to realize the forces arrayed against it. The high-water mark of fictional far-Right villainy occurred in the 1999 film “Arlington Road,” in which a terrorism expert comes to suspect that his too-perfect neighbors are in fact the masterminds of a deadly fascist conspiracy. (He should have known: after all, they listen to country music.) As the film’s publicity warns, “Your paranoia is real!”

Ideas have consequences, even if those ideas are dreadfully, embarrassingly wrong. In terms of American national interests, by far the worst consequence of the Militia Panic was the massive underplaying of Islamic terrorism in U.S. public discourse and the disproportionate focus on the domestic far Right. Liberal columnists scoffed knowingly at terrorism experts who warned about foreign militants like al-Qaeda, when every informed observer knew that the real menace was internal. That attitude naturally had its impact on policymakers and on intelligence agencies, who recognized just how sensitive investigations of Middle Eastern-related terror plots might be. Those overcautious attitudes go far to explaining the otherwise perplexing neglect of all the blaring alarm bells that the agencies should have heard in the lead-up to Sept. 11.

Belief in the extremist menace also had domestic political consequences. After Oklahoma City, attacks on the political Right helped re-elect President Clinton in 1996 with over 49 percent of the popular vote (up from 43 percent in 1992). When impeachment loomed two years later, it seemed only natural to rally the faithful by invoking—what else?—a “vast right-wing conspiracy.” Notably, one prominent Clinton adviser in these years was Harold Ickes, son and namesake of FDR’s Brown Scare hatchet man.

The prospects for a fourth round of panic in the Obama years seem excellent. Militias and rightist groups have never entirely vanished—even the Minuteman name survives, in the form of anti-immigration vigilantes—and they will probably enjoy a resurgence. No less probable is the over-interpretation they will receive from an administration deeply imbued with liberal conspiracy theories. The administration contains plenty of Clinton-era veterans who well recall the triumphant success of the earlier Militia Panic, and this time round, Obama’s ethnicity gives added credibility to charges of racist plotting.

Law-enforcement agencies, too, have everything to gain from a terrorism panic, whether it is rooted in the ideological Left or Right. Agencies usually have wish-lists of laws they would like to see passed to expand their powers, and periods of intense concern over terrorism offer a natural opportunity to get these measures onto statute books. Liberals complain bitterly about the Patriot Act of 2001, but Democratic administrations have also used fears of terrorism and subversion to expand official powers. Sweeping federal gun-control measures passed in 1938 and 1968, during the Brown Scare and the Minuteman era. In 1996, the Anti-Terrorism Act gave federal agencies all the powers they could reasonably have demanded up until then. The existence of such a potent body of laws gives police and prosecutors a strong vested interest in applying the terrorism label as widely as possible in order to secure all possible legal advantages. If public opinion permits, they will assuredly use anti-terrorism laws against unpopular right-wing sects.

Private organizations also provide an institutional foundation for a war on domestic terror. Plenty of liberal pressure groups are only too willing to offer their services in identifying far-Right activists and painting them in the most damaging and alarming colors. Some of the most successful through the years have been the Anti-Defamation League, the Feminist Majority Foundation, and the Southern Poverty Law Center (SPLC), with its affiliated Intelligence Project (formerly Klanwatch). While there is no reason to doubt the sincerity of their convictions, such groups would gain immensely from a new political emphasis on militias or rightist groups. If the government declares a domestic terror crisis, the media will automatically turn to the SPLC, for instance, giving that group added visibility and prestige. For the media, the SPLC and its ilk can be endlessly valuable. They supply convenient maps and lists of militias, broken down by state and region, as well as providing knowledgeable speakers to discuss militia history and ideology. This results in publicity for the group and its causes and encourages public support and donations. If a full-fledged right-wing terror network is not available, such pressure groups have every interest in hyping one into existence.

Paying proper attention to terrorist threats is laudable, whatever their source, and some right-wing extremists have through the years demonstrated their potential for violence: they need to be watched. Yet almost certainly, a renewed focus on the far Right will develop more out of an ideological slant than any reasonable perception of danger. Once again we will be dealing with a groundless social panic of the kind we have encountered so often in the past. Listening to official claims about terrorist dangers in the years to come, we need to exercise real critical skills—and never forget the lessons of history. 


Philip Jenkins is a professor of history and religious studies at Pennsylvania State University and the author of The Lost History of Christianity: The Thousand Year Golden Age of the Church in the Middle East, Africa, and Asia—and How It Died.

