State of the Union

How Brexit is Turning into a Disaster for Ireland

A week ago, European parliamentary sources leaked the deal that British Prime Minister Theresa May was hoping to conclude with the European Union, and which would mean Northern Ireland remaining in “continued regulatory alignment” with the EU after Brexit. The obvious interpretation was that the UK was planning to cut Northern Ireland loose and effect a social, economic, and administrative merger of the whole island of Ireland. Ulster’s main Protestant grouping, the Democratic Unionist Party (DUP), duly screamed, and the proposed deal collapsed. After a week of further negotiations led by May, culminating in an early-morning press conference on Friday in Brussels, the UK-EU deal was revived. But its present form is essentially a decision not to decide. No new regulatory barriers will exist between Northern Ireland and the rest of the United Kingdom unless and until the (Protestant/Unionist-dominated) Northern Ireland Executive and Assembly “agree that distinct arrangements are appropriate for Northern Ireland.” But that is never going to happen. A large Orange Protestant can has been vigorously kicked down the road.

The island of Ireland consists of an independent republic of 26 counties and a six-county province—Northern Ireland, or Ulster—that is part of the United Kingdom. That division has through the years produced bitter conflicts, as the existence of the North remains offensive to its Catholic minority, who want a united Ireland. That both Great Britain and the Irish Republic were members of the European Union proved immensely beneficial in soothing tensions and effectively eliminating the internal border.

But what will happen when Great Britain leaves the EU while Ireland remains? There will have to be some frontier between northern and southern Ireland to check goods and/or migrants entering “Europe.” This can be done in two ways, each one desperately unpalatable to a major constituency. If there is to be a border, the obvious solution is to restore the long frontier across Ireland, with all its checkpoints and search controls, which the Republic finds utterly unacceptable (so also do most of Ulster’s Catholic minority).

Alternatively, Northern Ireland could be harmonized with the Republic and the rest of Europe, so that the de facto frontier would lie between the two islands, Britain and Ireland. Attractive as it sounds, it’s toxic for the province’s Protestant majority, as it would de facto unite the entire island under the control of Dublin and the Republic, and require some kind of border or checks between Northern Ireland and what we still notionally call “the rest of” the United Kingdom.

So there will and must be a border, either on land or in the Irish Sea, and choosing between the two is virtually impossible. Yet EU negotiators demanded that this quandary be resolved before Britain could even begin formal negotiations with Europe over its future trade relationship. (Those are also impossible but that’s a different story). Unlike other diplomatic issues, moreover, this particular nightmare had the potential to reignite actual violence and terrorism in Northern Ireland, a threat most people had believed solved after the Good Friday agreement of 1998.

One would think, then, that the British government would be hyper-cautious in proceeding in this area above all. Yet they’ve fallen far short of this standard, as was best demonstrated back in 2016 when the British Brexit minister David Davis referred to the border between Ulster and the Republic as an “internal” border, suggesting he did not know that the Republic is an independent state.

That is a bare narrative of events, one that admittedly ignores the real depth of the British policy disaster when it comes to Ireland and Brexit. Let’s examine the DUP, the Democratic Unionists, and pay particular attention to that latter word, which summarizes their whole raison d’être. They exist to preserve, protect, and defend the union of Northern Ireland and Great Britain, to which all other causes and beliefs are subsidiary. Yet Theresa May somehow hoped to slip Northern Ireland’s “continued regulatory alignment” with the EU past these Unionists as a fait accompli, without prior consultation, presumably on the assumption that they would happily go along with their near-severance from the UK.

But wait, there is more. From the 1920s, the dominant Protestant force in Northern Ireland was the Ulster Unionist Party (UU), which British observers used to think of as incomprehensibly extreme and implacable in its beliefs. Yet as the terror crisis grew worse in the 1970s, even the UU came to seem too moderate, and support shifted to a much harder-line and more explicitly religious movement inspired by the Reverend Ian Paisley. That grouping became the DUP, which is not merely unionist, but extremely, extremely unwilling to accept anything smacking of compromise. Today, the DUP is by far the most powerful party in Ulster, holding 10 seats in the UK parliament.

That same DUP is also critical to the survival of Theresa May’s government. Last June, May suffered a disastrous election disappointment, when she lost her parliamentary majority and was forced to rely on those suddenly precious 10 DUP votes. By the way, May’s own party is called the Conservative and Unionist Party, and many of her pro-Brexit allies take that latter element very seriously.

What did May think was going to happen? Could anyone with the slightest knowledge of Northern Ireland have failed to tell her that what she was planning would be condemned as treason, and that it would assuredly provoke the DUP into moving towards resolute opposition, bringing down her government? Did nobody tell her what the “U” stood for?

Around the world, governments fall, careers are destroyed, but nobody actually gets hurt. This is not the case in Northern Ireland, where political disaster could yet raise some truly frightening ghosts. Most foreign observers are very familiar with Catholic/nationalist groups like the Irish Republican Army and its offshoots, but few know much about the deadly Protestant/unionist tradition associated with names like the Ulster Volunteer Force and the Red Hand Commando. From the 1960s through the 1990s, the UVF and other Protestant terror groups fought viciously against Catholics and nationalists, claiming some 600 lives—fewer than the IRA, but still a dreadful record of violence. Some of these actions were uniquely hideous even in the context of a terror war. One legendary group, nicknamed the Shankill Butchers, murdered dozens of Catholic civilians, torturing and mutilating them, in some cases inflicting hundreds of stab wounds.

Like the IRA, the Protestant terror groups largely disarmed or went underground after the 1998 peace settlement. As with the IRA, though, diehards remain, and they await only the opportunity to kill and maim once more for the cause of union. If a British government erected its EU border in the Irish Sea, the militants would very likely resurface. The DUP knows this particularly well, having benefited from its own hardline history. It senses acutely that if it compromised today, other, more extreme movements would be waiting in the wings. And if the DUP replaced the UU, why could it not eventually be supplanted by the still more right-wing Popular Unionist Party or some equivalent successor?

That is the peril May and Davis were playing with—and seemingly without a clue as to the catastrophic dangers they faced. This is unforgivable.

On a whole different level of seriousness, David Davis has just revealed how much planning his government has actually undertaken on the likely economic impact of Brexit on various sectors of the British economy—automotive, aerospace, pharmaceuticals, and so on. He has now explicitly stated that, because economic models are largely unreliable, no impact assessments have been carried out, nor will they be. Nothing, nada, none. Nor has he announced an intention to resign.

Theresa May’s latest visit to Brussels is being celebrated as a breakthrough—but it still doesn’t address the fundamental issues that have the potential to make Brexit a disaster for Britain. And with the specter of the Troubles looming again in Ireland, her government is guilty of gross negligence.

Philip Jenkins teaches at Baylor University. He is the author of Crucible of Faith: The Ancient Revolution That Made Our Modern Religious World.

Britain’s Incompetent Conservatives Can’t Handle Brexit

The world recently commemorated the centennial of the Russian Revolution, which resulted from the flagrant incompetence of that country’s ruling class in confronting a moment of overwhelming national crisis. The barricades are not yet out in the streets of modern-day London, but a certain sense of déjà vu is appropriate. At the least, we are likely witnessing the slow-motion suicide of the Conservative Party, and, conceivably, of British conservatism more broadly defined.

In the British case, the crisis involves the nation’s referendum vote in June 2016 to withdraw from the European Union. Opinions may differ about the virtues of Brexit as an idea—I opposed it—but once it was decided, most everyone agreed that the process of extraction had to be implemented with great care and single-minded dedication. The actual response of the Conservative government has been deplorable to the point of unforgivable—inept, slipshod, insouciant, and ignorant of even the basic realities of law and process.

Admittedly, the process of withdrawal would be extraordinarily difficult even for a generation of godlike political sages. Withdrawal proceeds under Article 50, which provides a strict two-year timetable for negotiation and ratification. The British government invoked the article in March 2017. By March 29, 2019, then, Britain will no longer be a member of the EU. Before that point, the two sides must reach agreement on a series of thorny primary issues, including the so-called divorce bill, the sum that Britain must pay to settle outstanding liabilities. Estimates for this bill range from $60 billion to $100 billion. Scarcely less sensitive is the question of Ireland and the virtually inevitable border that must be created between the EU Republic of Ireland and the British province of Northern Ireland.

Only after those preliminaries have been agreed can negotiations move to determining the future trade relationship between Britain and the EU, an extremely complex and time-consuming dance of interests. When the EU and Canada recently discussed a similar deal, negotiations took seven years, and almost collapsed due to last-minute politicking. The Canadian agreement, incidentally, did not include financial services, which are a fundamental part of the British economy. So all that has to be accomplished before March 2019, but actually, the timetable is even tighter than that. As any deal must be ratified by the legislatures of all EU member states, a solid draft must realistically be presented by around October 2018.

If a deal of some kind is not reached, Britain would theoretically crash out of the EU in a way that would devastate its trading links and networks. The elaborate cross-border supply chains on which British manufacturing depends would collapse overnight. According to many observers, the resulting abandonment of existing treaties and agreements might leave British airlines unable to fly beyond UK borders, and the lack of nuclear fuels would cripple the nation’s power industry.

So how has the Conservative leadership responded to these grave challenges? After the referendum, Theresa May emerged as prime minister with a firm commitment to the principle “Brexit means Brexit.” Accordingly, she chose leading Brexit campaigners for key positions, including Boris Johnson as foreign secretary and David Davis as head of the new department in charge of exiting the European Union (DExEU) and chief negotiator of withdrawal. Liam Fox carries responsibility for international trade, which includes negotiating new agreements outside the old EU framework.

Painfully early, it became apparent that none of this Gang of Four had a clue what they were talking about in relation to Europe, nor did they understand the basic principles by which the EU worked. It is not so much that they approached the key issues wrongly—they did not even perceive them as issues. Throughout the referendum campaign, Brexiteers had trivialized the question of future relationships with the EU, suggesting that these would easily be decided in high-level summits within weeks rather than years. There was therefore not the slightest need to prepare detailed negotiating principles. As Johnson explicitly stated, the new British relationship with Europe would be exactly what it was at present, although omitting some of the features he found unpalatable, such as unrestricted EU immigration.

When Britain invoked Article 50, no government figure grasped the implications. Until quite recently, Johnson and Davis mocked the notion that Britain might have to pay a penny for the divorce bill. (The British government is now admitting a liability of some tens of billions, a sum that will definitely increase.) When the reborn Irish Question finally surfaced in their minds, Davis presented a vision of an invisible border made possible by as-yet undeveloped high technology, a prospect that has been commonly derided as magical thinking. Oh, and remember those vast new trading empires outside Europe, with all those lucrative deals being signed almost immediately? There’s no sign of any movement in that direction.

Originally, EU negotiators hoped that the preliminary questions (such as the divorce bill) would be largely settled by October 2017, permitting the start of trade negotiations. So utterly unprepared were the British even to formulate positions on key questions, never mind to discuss them seriously, that the trade stage will likely be put off until March 2018. It is inconceivable that the two sides will complete a comprehensive trade treaty by the following October.

So what happens then? A no-deal scenario—crashing out of the EU—is a strong and growing likelihood, but the government has made no effort to prepare for such an eventuality, for instance by hiring thousands of new customs officers or rebuilding port facilities.

One obvious temporary solution to the emerging crisis is to extend negotiations with the EU, with a transitional phase that would de facto prolong British EU membership by some years. That would inevitably mean continued British compliance with three essential requirements demanded by the Europeans: continued multi-billion-dollar annual payments, accepting the free movement of migrants, and observing the decisions of the European Court of Justice. The May government has expressed an openness to some kind of transitional arrangement—but has explicitly rejected the possibility of any of those three conditions extending beyond March 2019. If there is a single criticism of British actions that is most damning, it is the total failure to appreciate the fundamental significance of those EU prerequisites for any continued British involvement in the Union, even on a temporary basis.

The reality gap is yawning, and is apparently growing: Britain’s Conservative leaders honestly don’t know what they are doing. By 2019, Britain either faces economic cataclysm or else a series of humiliating climb-downs that will leave it in virtual dependency on the EU, bearing all the burdens of that association but lacking any say in decision-making. And yes, that does mean continued mass immigration. One way or another, there will be a political reckoning.

Until recently, Conservatives took comfort in the lack of any effective alternative to their rule, as the Labour Party, headed since 2015 by Jeremy Corbyn, had devolved into a radical sect. Its leaders form a gallery of unreconstructed far-left militants, Trotskyists, and overt IRA sympathizers, whose global ideal seemed to be revolutionary Venezuela. Corbyn’s Labour favors massive nationalization of industries and utilities, and a vast expansion of labor union power and influence. Anti-Semitic rhetoric has a widespread appeal among party activists. So flagrantly unelectable did the Labour Party appear that Conservatives largely treated their adversaries as a source of hilarity.

Last June, accordingly, Theresa May called a (wholly unnecessary) general election in the hope of securing Conservative power for decades to come. Not for the first time, her decision proved catastrophic. Labour votes surged, boosting the party’s parliamentary seats from 232 to 262. Despite all his poisonous past associations, Jeremy Corbyn emerged as the undisputed political hero of most under-35s, who loved his rhetoric of fighting inequality and assailing malefactors of great wealth. Currently, Conservatives run a nose ahead of Labour in the polls, but the prospect of a Corbyn government is all too real. If divisions over the EU force a schism in the Conservative Party, an election could come far sooner than anyone presently expects.

The prospect of a Corbyn government would be alarming at the best of times, but it would likely coincide with a post-Brexit economic crisis and the withdrawal of foreign capital and investment. Imagine such a Labour government being elected in 2019. Its first decisions would unquestionably be to raise taxes and penalize corporations, further accelerating decline and isolation, while a run on the pound would be a certainty. At that point, Britain would be facing Disaster Socialism, with all the emergency measures that the regime deemed necessary to salvage the economy: rigid economic planning, further nationalizations, and ending the convertibility of sterling. The 2020s would find a new Venezuela off the coast of Western Europe, but without the warm weather.

And the blame for such an outcome would be laid wholly at the feet of a failed Conservative Party.

Philip Jenkins teaches at Baylor University. He is the author of Crucible of Faith: The Ancient Revolution That Made Our Modern Religious World.

So You Want a Cultural Revolution?

Horrified by images of American students shouting down and physically attacking speakers on their campuses, some commentators have reasonably invoked memories of the Chinese Cultural Revolution. The problem with that analogy is that it is simply lost on most readers, including most of those younger than middle age.

So what exactly was this “Cultural Revolution” thing anyway? The U.S. media does a wonderful job of recalling atrocities that it can associate with the Right, while far worse horrors stemming from the Left vanish into oblivion. In reality, not only does the Cultural Revolution demand to be remembered and commemorated, it also offers precious lessons about the nature of violence, and the perils of mob rule.

In 2019, Communist China will celebrate its seventieth anniversary, and in that short time it has been responsible for no fewer than three of the worst acts of mass carnage in human history. These include the mass murders of perceived class enemies in the immediate aftermath of the revolution (several million dead), and the government-caused and -manipulated famine of the late 1950s, which probably killed some 40 million. Only when set beside these epochal precedents does the Cultural Revolution of 1966-76 seem like anything other than a unique cataclysm.

By the early 1960s, China’s Communist elite hoped for an era of stability and growth, modeled on the then-apparently booming Soviet Union (remember, this was the immediate aftermath of Sputnik). The main obstacle to this scenario was the seventy-year-old leader Mao Zedong, whose apocalyptic visions held out hopes of revolutionary transformations almost overnight, of a near-immediate move to perfect Communism. Mao himself loathed the post-Stalin regime in the Soviet Union, seeing it as a revisionist system little different from Western imperialism. In an ideal world, Mao would have been kicked upstairs to some symbolic role as national figurehead, but he proved a stubborn and resourceful foe. He outmaneuvered and defeated his “revisionist” Party rival Liu Shaoqi, who became a symbol of all that was reactionary, moderate, and imperialist. Brutally maltreated, Liu was hounded to death.

So far, the conflict was the bureaucratic backstabbing typical of Communist regimes, but Mao then escalated the affair to a totally different plane. From 1966 onwards, he deliberately incited and provoked mass movements to destroy the authority structures within China, within the Party itself, but also in all areas of government, education, and economic life. Mao held out a simple model, which perfectly prefigures modern campus theories of systematic oppression and “intersectionality.” Even in a Communist Chinese society, said Mao, there were privileged and underprivileged people, and those qualities were deeply rooted in ancestry and the legacies of history. Regardless of individual character or qualities, the child of a poor family was idealized as part of the masses that Communism was destined to liberate; the scion of a rich or middle class home was a class enemy.

The underprivileged – poor peasants, workers, and students – had an absolute right and duty to challenge and overthrow the powerful and the class enemies, not just as individuals, but in every aspect of the society and culture they ruled. In this struggle, there could be no restraint or limitation, no ethics or morality, beyond what served the good of the ultimate historical end, of perfect Communism. In a Great Proletarian Cultural Revolution, the oppressed need observe neither rules nor legality. Even to suggest such a constraint was bourgeois heresy.

What this all meant in practice is that over the following years, millions of uneducated and furious young thugs sought to destroy every form of authority structure or tradition in China. To understand the targets, it helps to think of the movement as a systematic inversion of Confucian values, which preached reverence to authority figures at all levels. In full-blown Maoism, in contrast, all those figures were to be crushed and extirpated. Bureaucrats and Party officials were humiliated, beaten, or killed, as was anyone associated (however implausibly) with The Past, or high culture, or foreign influence. Pianists and artists had their hands broken. Professors and teachers were special targets for vilification and violence, as the educational system altogether collapsed.

Anarchistic mobs replaced all authority with popular committees that inevitably became local juntas, each seeking to outdo the other in degrees of sadism. Some class enemies were beaten to death, others buried alive or mutilated. In parts of Guangxi province, the radicals pursued enemies beyond the grave, through a system of mass ritual cannibalism. Compared to such horrors, it seems almost trivial to record the mass destruction of books and manuscripts, artistic objects and cultural artifacts, historic sites and buildings. The radicals were seeking nothing less than the annihilation of Chinese culture. Within a few months of the coming of Revolution, local committees had degenerated into rival gangs and private armies, each claiming true ideological purity, and each at violent odds with the other. Such struggles tore apart cities and neighborhoods, villages and provincial towns.

Outside the military – and that is a crucial exception – the Chinese state ceased to function. The scale of the resulting anarchy is suggested by the controversy over the actual number of fatalities resulting from the crisis. Some say one million deaths over the full decade, some say ten million, with many estimates between those two extremes. Government was so absent that literally nobody in authority was available to count those few million missing bodies. China became a textbook example of the Hobbesian state of Nature – and a reasonable facsimile of Hell on Earth. Only gradually, during the early 1970s, were the Chinese armed forces able to intervene, sending the radicals off en masse into rural exile.

China’s agony ended only after the death of the monster Mao, in 1976, and the trial of his leading associates. From 1979, the country re-entered the civilized world under the leadership of Deng Xiaoping, who is today lionized as a great reformer. That portrayal is correct – but we should never forget that as an architect of the earlier Great Famine, Deng had almost as much blood on his hands as did Mao himself.

So extreme was the violence of the Cultural Revolution that we might reasonably ask whether any parallels exist with the contemporary U.S. However ghastly the suppression of free speech at Middlebury College and elsewhere, however unacceptable the rioting in Berkeley, nobody has as yet lost his life in the current wave of protests. But in so many ways, the analogies are there. As in the Cultural Revolution, American radicals are positing the existence of historically oppressed classes, races and social groups, who rebel against the unjust hegemony of others. In both cases, genetics is a critical means of identifying the two competing sides, the Children of Light and Children of Darkness. If you belong to a particular race, class or group, you hold privilege, whether you want to or not. Consistently, the radicals demonize their enemies, invoking every historical insult at their disposal, no matter how inapplicable: Berkeley’s would-be revolutionaries describe themselves as “Antifas,” Anti-Fascists, as if any of their targets vaguely fit any conceivable definition of “fascism.”

For the oppressed and underprivileged, or those who arrogate those titles to themselves, resistance is a moral imperative, and only the oppressed can decide what means are necessary and appropriate in the struggle for liberation. The enemy, the oppressors, the hegemons, have no rights whatever, and certainly no right of speech. There can be no dialogue between truth and error. Violence is necessary and justified, and always framed in terms of self-defense against acts of oppression, current or historic.

Presently, our own neo-Cultural Revolutionaries are limited in what they can achieve, because even the most inept campus police forces enforce some restraints. If you want to see what those radicals could do, were those limitations ever removed, then you need only look at China half a century ago. And if anyone ever tells you what a wonderful system Communism could be were it not for the bureaucracies that smothered the effervescent will of an insurgent people, then just point them to that same awful era of Chinese history.

If, meanwhile, you want to ensure that nothing like the Cultural Revolution could ever occur again, then look to values of universally applicable human rights, which extend to all people, all classes. And above all, support the impartial rule of law and legality. The Cultural Revolution may be the best argument ever formulated for the value of classical theories of liberalism.

Philip Jenkins teaches at Baylor University. He is the author of Crucible of Faith: The Ancient Revolution That Made Our Modern Religious World (forthcoming Fall 2017).

College Mob Rule: The Nuclear Option

Campus protests over the past couple of years have escalated in scale and seriousness, with appalling acts of violence and intimidation against visiting conservative lecturers. In Washington State, Evergreen State College has mounted its Day Without White People, with threats and intimidation directed against those who refused to absent themselves. In the New York Times, even the very liberal Frank Bruni now demands that “These Campus Inquisitions Must Stop.” Critics of such actions denounce the climate of intolerance, but university authorities effectively do nothing, rarely even imposing mild disciplinary procedures. The implication is that there really is nothing worthwhile that universities can do, even if they chose to act. The public, it seems, should keep their racist, sexist, transphobic mouths shut, while continuing to write the tuition checks.

If only there were some way to restore sanity and decency on campus! Actually, there are multiple opportunities to do so. Most obviously, public universities (like Evergreen State) depend on state legislatures for most or all of their funding, and could be hit hard by concerted action by elected officials.

Now, that recourse would not be available for private schools, like Yale or Middlebury College. Does that make them immune from external pressure or sanction? Well, actually no.

Unknown to most non-academics, there is in fact a deterrent of nuclear proportions that can be invoked in circumstances of extreme and egregious misbehavior. So fearsome is this weapon that, if deployed, it would assuredly induce academic authorities to clean up their act. If you are not an academic administrator, you know next to nothing about accreditation. If, however, you are, even invoking that unspeakable thirteen-letter word causes grown adults to blanch. American universities rely on public ignorance of that fact.

So what is this nightmare beast, accreditation? All academic institutions, public and private, are accredited by various private agencies, which vouch for the quality and effectiveness of schools and their programs. Accreditation can be granted to a whole institution, or to specific programs that it offers. Dozens of such agencies exist, and they differ greatly in how far they acknowledge each other’s authority. There is, though, a core of six regional bodies that really matter. These include the Northwest Commission on Colleges and Universities (NWCCU), and the Middle States Commission on Higher Education (MSCHE). The NWCCU, for instance, accredits 159 universities and colleges, including Evergreen State; Middlebury is accredited by the New England Association of Schools and Colleges (NEASC).

Each of these agencies promulgates elaborate and quite draconian policies, covering minutiae of administrative procedure, faculty qualifications, student support services, and student life. The language is expansive and designed to be legally enforceable. Institutions are reviewed regularly for compliance, and unadvertised visits and spot checks are a real possibility. Any institution or program found in violation of any part of these requirements is in deep trouble. Accreditation can be suspended, or schools can be reduced to probationary status during investigations, which are extremely long, expensive, and time-consuming. Literally, they can tie up a school and all its bureaucrats for years at a time. The ultimate nightmare is that a college can entirely lose its accredited status.

Why does this matter? Because if a school loses its accreditation, its degrees and qualifications no longer count for anything. It can no longer issue “degrees” that qualify for employment. Credits acquired from that school cannot be transferred. If we imagine the unthinkable—that Yale University lost its accreditation—then its degrees would count as much or as little as a diploma from the John Doe Academy of Astrology and Other Advanced Sciences. If an elite liberal arts college lost its accreditation, then its graduates would have spent a quarter million dollars (plus) for literally nothing. The college would be moribund, unless and until it regained accreditation.

No administrator would even risk such an appalling prospect. They would do anything to clean up their collective act, and to be seen to be doing so.

Now, as with most legal documents, accreditation standards do not literally and precisely address every eventuality that might arise. They do not, for instance, say that colleges shall prevent the mobbing and silencing of speakers, or shall prohibit flagrantly racist stunts by self-described social justice activists. Rather, they set out general standards, commonly framed in terms of safety, health and well-being, and academic freedom. The Southern Association of Colleges and Schools demands that “the institution takes reasonable steps to provide a healthy, safe, and secure environment for all members of the campus community.” The NEASC (New England) specifies that, “The institution protects and fosters academic freedom for all faculty regardless of rank or term of appointment.” A member institution must “provide a safe environment that fosters the intellectual and personal development of its students.” All the accrediting bodies use similar boilerplate language about health, safety, institutional environment, and academic freedom.

But that general language can be applied expansively. To use an analogy, the vast Title IX apparatus that has emerged on college campuses is founded upon a very short and general legal text concerning sexual discrimination. Why should not the language of accreditation standards be treated similarly?

There is no conceivable way in which the recent campus horror stories can be reconciled with those lofty standards. In what sense can a campus be described as “safe” if the sober and respectful expression of mildly dissenting views invites physical assault? Can such behavior be reconciled with any concept of academic freedom? If colleges cannot be accused of directly fomenting or carrying out such acts, they assuredly can be blamed for failing to discipline perpetrators. They can likewise be faulted for not employing and training police forces to defend the liberties of individual students and faculty—in short, to create an appropriate institutional environment. If schools permit mob rule, they can and must be sanctioned, and the accreditation agencies are the critical protagonists in any such process.

If those agencies will not exercise that function, then we really need to know why that is. Appropriate pressure needs to be placed on them, whether from the U.S. administration, from legislators, or the general public. Make them do their job. Evergreen State would be a wonderful place to start.

Do we want to restore civility on US campuses? Then explore the potential of accreditation. Let’s start the debate.

Philip Jenkins teaches at Baylor University. He is the author of Crucible of Faith: The Ancient Revolution That Made Our Modern Religious World (forthcoming Fall 2017).

We Need Civil Defense

A screenshot from “Duck and Cover.”

Mention the phrase “civil defense” to most younger Americans, and you will likely enter the realm of unintended hilarity. The image that comes to mind is usually the notorious “Duck and Cover” film, in which Bert the Turtle instructed children how to avoid death during a nuclear attack by hiding under their desks. Such official misinformation (or so it seemed) was comprehensively pilloried in the much-praised 1982 documentary Atomic Café. Civil defense during the era of the hydrogen bomb? What a lunatic delusion.

The problem is that throughout modern U.S. history, even during the “Duck and Cover” years, civil defense in various forms has been valuable and necessary—and its significance is today greater than it has been for decades. We must understand how the hostile critique originated and the myths on which it is based.

Civil defense is a complex and multi-layered concept. There is not a simple up-down choice in which a country either practices civil defense or decides to forgo it. Civil defense includes, for instance, preventing enemy attacks on the territory of the homeland and the civilian population, and reducing the harm of successful assaults. More broadly, civil-defense policies might involve organizing defense against spies and saboteurs, a necessary but risky strategy that historically has sometimes shaded into vigilante actions against dissenters.

Through much of the past century, leftist and liberal groups have often been hostile to civil-defense policies, partly owing to civil-liberties fears. More generally, the objection is that stressing the need to defend against foreign powers stirs paranoia against those nations and actively contributes to warmongering. To prepare for war, in this vision, is to promote it. Such objections became all the more acute after 1945.

Atomic Café was released at the height of liberal hostility to Ronald Reagan’s hardline policies against the Soviet Union, and it has to be seen as a weapon in the ongoing polemic. From a left-liberal perspective, the only authentic way to defend U.S. civilians in the modern world is to achieve peace abroad through negotiation and compromise.

The problem with this perspective is that on occasion the U.S. genuinely does find itself facing foreign foes, who really do target the homeland. In the early months of American involvement in the Second World War, the U.S. suffered terribly from inadequate civil-defense programs. That became evident in the early lack of blackout regulations in major East Coast cities. In consequence, German U-boats relied on those urban lights to silhouette large ships and oil tankers sailing along the coast, and took a crippling toll. (For that reason, German skippers loved to sail off Atlantic City and Miami.)

Civil defense became an active necessity with the growing Cold War confrontation from 1946 onward, when it was certain that any global war would involve direct attacks on U.S. territory. Looking at these years, it is essential to avoid the blinders of the post-1960s world, in which any superpower confrontation would be fought out by intercontinental ballistic missiles (ICBMs) and colossally destructive hydrogen bombs, which really would endanger the continuance of the human race. From 1970, the arrival of MIRV weapons (multiple independently targetable reentry vehicles) made it virtually impossible to stop such an attack once it was launched. We were definitively in the age of Mutual Assured Destruction.

But matters were very different in the pre-MAD years. Before (say) 1958, the most likely scenario was that the Soviets would indeed attack the U.S. homeland, initially by means of manned bombers and later missiles, but would be limited in their use of nuclear weapons. The Soviets tested their first fission bomb in 1949, with a hydrogen weapon following in 1953, but they still remained several years behind the U.S. in delivery systems. They could hit some areas but not others, and it would be a good while before they could comprehensively destroy U.S. targets. We get a sense of this strategic situation from the U.S. concentration of its military resources in the heartland, away from the imperiled coasts. The main American strategic air forces were sensibly concentrated far inland, at centers like Offutt (Omaha, Neb.) and Dyess (Abilene, Texas). That geographical imperative was what made the Soviet deployment of missiles in Cuba in 1962 an existential threat. Overnight, the entire U.S. homeland was menaced.

American civilians in the 1950s thus faced some risk of nuclear attack, but it very much depended on their location. If we imagine, for instance, a Soviet attack in 1952, during the Korean War, that would conceivably involve 20 or so Nagasaki-sized bombs annihilating centers like New York, Pittsburgh, and Los Angeles but leaving large parts of the country free from direct blast effect. In such a situation, civil defense made wonderful sense. An atomic bomb dropped on Manhattan (say) would be a catastrophe for the five boroughs, but there were plenty of wise precautions that civilians could take further afield, in Allentown or Buffalo. They could stockpile food and water, build bomb shelters and survival rooms, and prepare to respond to fallout. Properly done, civil defense at this stage might have saved millions of lives.

Over time, though, technological changes made such policies obsolete. The “Duck and Cover” film offered excellent advice when it was first screened publicly in 1952; by 1982, it had been overtaken by technology and was irrelevant or worse.

In those early pre-MIRV days, the U.S. also plausibly could have hoped to stop some or all of the bombers and missiles from getting through. U.S. commanders were after all well aware of how effectively German anti-aircraft defenses had almost crippled their own bomber offensives in 1943–44. It’s largely forgotten today, but American cities were surrounded by batteries of radar-guided anti-aircraft guns, and later by Nike missiles. By 1962, 240 Nike Ajax launch sites were operational within the U.S., and the Ajax was subsequently replaced by the nuclear-armed Hercules. Even today, newspapers around the country regularly publish “believe it or not” stories about the surviving remains of these disused Nike sites, with their bunkers and guardhouses looking like fortresses from Game of Thrones.

The effort to defend the cities culminated in the late 1960s with schemes for comprehensive anti-ballistic missile (ABM) systems. But what was next? Would there be a new arms race, as the Soviets developed their own anti-ABM weapons, and the Americans responded with anti-anti-ABMs, and so on? And if the ABM system was to be as effective as was claimed, might the Soviets decide on a preemptive nuclear strike before it was implemented? Resulting concerns led to the U.S.-Soviet ABM Treaty of 1972, which in practice left most cities open to missile attack.

Technological advances thus overwhelmed the civil-defense idea, at least in any traditional sense. No, digging backyard shelters would not be of much use today against a mass onslaught of Russian (or Chinese) ICBMs.

But today, those are by no means the only nuclear scenarios that the country faces, or even the most plausible. Apart from terrorist assault, by far the most likely nightmare presently would be an attack by North Korea, perhaps with one bomb or at most a handful. In a sense, we would be back to the strictly limited nuclear threat that prevailed in 1952, and once again, civil defense would be an essential consideration. Even if we suffered the cataclysm of losing San Francisco, the people of Los Angeles or Seattle could still take many steps to protect their life and health. A public-education program would be an excellent idea, and the sooner the better. And critics can make all the jokes they like about “Duck and Cover.”

Civil defense will always be seen as paranoid, until it becomes a matter of life and death.

Philip Jenkins teaches at Baylor University. He is the author of Crucible of Faith: The Ancient Revolution That Made Our Modern Religious World (forthcoming Fall 2017).

Egypt and the End of the Secular Middle East

President of Sudan Gaafar Nimeiry (left), President of Egypt Gamal Abdel Nasser, President of Libya Muammar Gaddafi at the Tripoli Airport in 1969. (Wikimedia Commons)

Last month, ISIS terrorists tried to attack the ancient monastery of St. Catherine’s, in Egypt’s Sinai desert. That fact might not sound too surprising, until we recall that St. Catherine’s has in its possession a decree of protection issued by the Prophet Muhammad himself and supposedly valid until the end of the world. If any place in the world should be immune from Islamist terrorism, it is this religious house. Following so closely on the hideous massacre of over 40 worshipers at their Palm Sunday services in Egyptian cities, this event indicates just how lethally perilous life has become for the country’s nine million or so Coptic Christians.

The attacks were followed last week by a previously scheduled visit from Pope Francis, who declared that “no act of violence can be perpetrated in the name of God, for it would profane his name.” Western observers, the Roman pontiff included, tend to attribute such atrocities to the evils of religious fanaticism and intolerance. But in fact, these attacks do have a rationale and a logic.

Most obviously, they demonstrate the Egyptian government’s weakness in the face of armed jihad, while helping to undermine the economy by deterring tourists. But there is a larger and more critical agenda. These terrorist spectaculars are intended to appeal to specific audiences within the Egyptian ruling establishment, and particularly in the armed forces and the intelligence apparatus. By giving the continuing crisis a distinctly religious coloring, the jihadis are trying to force Egyptian elites to choose between Islam (as they portray it) and its enemies. This strategy is dangerous because it plausibly could succeed in splitting Egyptian elites and causing significant defections. This in turn would accelerate a critical trend in the modern Middle East, namely the near-collapse of secular ideologies across the region and the consequent rise of hard-edged political Islamism. In the worst-case scenario, such a process runs the risk of bringing Egypt to conditions that we more commonly associate with Iraq, which would pose catastrophic dangers.

While issues of religious freedom and persecution are grave enough in their own right, Western policymakers urgently need to understand these larger contexts.

♦♦♦

In the second half of the last century, movements across the Arab world adopted similar ideologies in their struggle to modernize their countries and resist colonialism. Commonly, these were nationalist, socialist, and secular, while pan-Arabism had a powerful appeal. Such modernizing ideas found hospitable homes in the armed services and intelligence communities of key nations, and a series of coups d’état brought nationalist movements to power in Egypt (Nasserism), and also in Ba’athist Syria and Iraq. Nationalist regimes were not necessarily anti-religious, but they had absolutely no time for any hint of political Islamism, especially when it resisted modernization. Nasser bitterly persecuted the Muslim Brotherhood, and in 1966 Egypt executed the movement’s leading intellectual, Sayyid Qutb.

That secular element had a natural appeal for minorities of all kinds, who feared the overwhelming weight of political Islam. In Egypt, the substantial Coptic minority had done well under British rule, but recognized the need to prepare for new post-imperial political arrangements. Copts gravitated toward secular and nationalist movements, often with interfaith aspirations. They disliked Nasser’s pan-Arabism for the simple reason that they saw themselves not as Arabs but as pure Egyptians, with Pharaonic roots. Even so, Nasser was obviously preferable to the kind of intolerant Islamic regime established in Saudi Arabia.

In Egypt, then, as in Iraq and Syria, minorities were firm supporters of nationalist politics, which worked well as long as those regimes flourished. The problem was that when the states faltered or collapsed, minorities were tainted by their association with unpopular dictatorships. That was all the more dangerous when those minorities were the targets of long-standing ethnic or religious prejudices.

In all these countries, moreover, secular nationalist regimes always had to deal with mass conservative religious sentiment in the population at large. Ordinary Muslims were happy to follow General Nasser as a symbol of national pride, but when later regimes descended into economic collapse, cronyism, and kleptocracy, older allegiances revived. The disastrous failures of the nationalist states in confronting Israel proved fatal. Secularism, it seemed, was bankrupt. What else was left?

Radical Islamist movements reorganized, and in 1981 the guerrilla group al-Gama’a al-Islamiyya assassinated Nasser’s successor Anwar al-Sadat and some of his closest allies. Terrorist groups could be fought and suppressed, but much more dangerous was the success of Islamist political activism in gaining a real mass following. In the elections of 2011–2012, some two-thirds of votes cast in Egypt went either to the political wing of the Muslim Brotherhood or to the Salafist party, al-Nour. Based on this mass democratic support, the Brotherhood formed a government, which was in turn overthrown by the bloody military coup led by Gen. Abdel Fattah el-Sisi, in July 2013.

♦♦♦

In some ways, Egypt’s current Sisi government is trying to reproduce the old Nasser system, complete with personality cult, but this is now happening in a vastly different political context. Far from being relatively passive, activist Muslims are now highly militant and organized, while the coup destroyed any hopes of a peaceful democratic road to Islamist political power. That left only the armed road to revolution. Boosted by internet propaganda, ideas of popular armed jihad have become mainstream, resulting in surging guerrilla campaigns across the country, especially in the Sinai. But interfaith relations have also been transformed, with much more intense and widespread popular hatred of Christians. As Muslim activists struggled against the 2013 coup, they launched attacks on churches and monasteries on a scale that by some accounts had not been equaled since 1321. Although the Brotherhood sporadically tries to make friendly gestures to Christians, the underlying mass sentiment is toxic.

In the new post-2013 environment, there are multiple reasons why Christians are such a natural target for Islamist terror. (I claim no great prophetic powers for having predicted this present strategy back at that time.) Given the strength of the Egyptian military and its strong intelligence networks, it is natural for jihadis to choose soft targets—poorly defended places and institutions—where the goal is to kill the maximum number of civilians. Once upon a time, Western tourists would have been the obvious targets of choice, but such visitors are no longer much in evidence. By default, then, Coptic Christian churches and communities are attacked.

But Christians have many other virtues as terror targets, fitting as they do into the Islamist global mythology. According to the propaganda vision of ISIS and like-minded groups, such attacks show the guerrillas to be Islamic warriors heroically struggling for the faith against its idolatrous enemies, who are also intimately linked with a corrupt and tyrannical regime. Moreover, anti-Christian terror serves to divide Egypt along religious and sectarian lines while offering the added bonus of infuriating the West. If attacks became sufficiently common, we might expect to see Upper Egypt sliding into overt sectarian conflict as Christian and Muslim militias battled.

But another agenda is also at work, as church attacks place Egyptian security forces in a dreadful quandary: How much repression can they properly launch against bloodthirsty terrorism, without appearing to take the side of Christians against Muslims? Such a consideration was no real problem in the Nasser years, when he was a popular national hero, and political Islamism was such a marginal factor. Clearly, it seemed, Nasserism represented a glorious future, an independent Arab road between capitalist West and Communist East, and no sane or vaguely progressive person paid much attention to those stuffy old religious ideas. Today, though, it is difficult to avoid the impression that if the Middle East has any future, it lies in some form of Islam. In the 1960s, thoughtful and educated Arabs could seriously believe in the intoxicating mix of nationalism, socialism, secularism, and pan-Arabism, all of which are now utterly discredited as serious ideologies. In Sisi’s world, military and bureaucratic elites rule for the sake of holding power, with the wealth that goes with it, and any higher aspirations are viewed very skeptically.

Such a situation can endure for decades, but not infinitely, and growing religious tensions might well detonate real change. It is all too tempting to play with scenarios and political war-games, which lack much basis in observed reality. In the Egyptian case, though, the situation has changed so rapidly, and generally deteriorated, over the past few years that it is necessary to project present trends only a little into the future. Some hypotheticals are all too probable.

If Egypt’s armed forces were to become engaged in prolonged internal warfare like the horrors that overwhelmed Algeria in the 1990s, how long would they be able to maintain the loyalty of their (conscripted) ordinary soldiers and junior officers? In particular, how long could the government count on the army and police to defend those hated Christians against good Muslims? If the armed forces split, that would simply open the way to revolution. Perhaps this would be an elite Colonels’ Coup, such as Nasser himself led in 1952, or else we might imagine something like the general defection of the armed forces to Islamist revolution as occurred in Iran in 1979. Either way, we would be looking at a new military order, with a radical new sympathy for Islamist causes. The fate of Coptic Christians in such a new Egypt would be grim indeed.

♦♦♦

We don’t have to wander far from Egypt to find an ideological transition not unlike the one we have been imagining. Egypt was long dominated by a military/bureaucratic/intelligence elite pledged to secular nationalist views, and that regime was firmly rooted in what we might call a classic Deep State. Sometimes, such a governing system can be overthrown, but it is also possible for it to switch towards a religious orientation that at first blush looks like a total reversal of older beliefs. Iraq offers the best example of such a transformation. However bizarre this might appear, we can witness a direct transition from secular nationalism to the most extreme forms of religious extremism, including the Islamic State itself.

From the 1960s until the Allied invasion of 2003, Iraq was led by the Ba’ath Party, which was pledged to secularist Arab socialism, and which prided itself on including leaders of diverse religious backgrounds. As in Egypt, though, those secular ideals became tainted over time, as the elite became ever further removed from its subject peoples. In the 1990s, Saddam Hussein made an explicit decision to adopt more Islam-friendly policies, which began a rapprochement with conservative Salafi Sunni believers. Domestically, the state moved from strident secularism to a pervasive religious reorientation, a Faith Campaign. That in turn laid the foundation for a revolutionary shift after 2003, with the growing resistance to Allied occupation.

Within a few years, most of the surviving Ba’ath leadership had forged close ties with extreme Sunni Islamists and Salafists, and many studies have now demonstrated the clear Ba’athist roots of the ISIS movement that emerged after 2013. (That story has been widely reported, including by Liz Sly in the Washington Post). At every stage of the rise of ISIS, the most active leaders and organizers included former members of the Iraqi Ba’athist military and political elite, and above all of the intelligence agencies, the mukhabarat.

Just what occurred in Iraq is open to debate. Some observers think that Ba’athist secularists cynically adopted a religious mantle in order to regain power; others see genuine religious conversions. More probably, we should not waste too much time in applying Western labels and dichotomies to Arab societies, where religion is so intimately bound up with the ties of family and clan, and of personal honor. In retrospect, perhaps the vaunted secularization of those nationalist movements was never as deeply ingrained as it appeared. And just possibly, the ambitions of today’s Islamic State are as much based in ideologies of clan, family, and simple realpolitik as they are in Islam.

But the lessons for understanding Egypt are suggestive. Is it even vaguely conceivable that a secular- and nationalist-minded military/intelligence establishment could make a near-overnight transition from persecuting Islamists to joining and leading them? Just ask the Caliphate.

♦♦♦

These experiences offer many lessons for U.S. policymakers, most powerfully in how they select their regional priorities. In so many ways, Egypt matters enormously, as by far the most populous Arab state, and as a cultural powerhouse for the whole Arab world. If the present regime did fail, that could have cataclysmic consequences. At the same time, it is far from clear what kinds of intervention might promote Western goals in the region, or whether any possible U.S. actions might do more good than harm. At the least, American leaders should recognize just how fragile the Sisi regime might be, in contrast with its recent predecessors. Having closed off alternative routes to change, any transition is likely to come from within the military or the intelligence world, and the U.S. should be prepared for sudden and perhaps lurching transformations.

Other lessons, though, are more general and more widely applicable. One involves the role of religion in political life, and by no means only in the Middle East. We tend, perhaps, to imagine “religious politics” as something highly distinctive, segregated from “real” matters. Obviously it is not: sacred and secular exist on a single spectrum, and often sit far closer together than we might assume. Moreover, even countries that give the impression of having controlled or suppressed those religious impulses really have not, however successful their “modernization” efforts might appear. What is dormant is not dead.

U.S. policymakers in particular must be very cautious in selecting the regimes they seek to weaken or even to displace. As we saw all too well in Iran in 1979, and more recently in Libya and Iraq, removing even loathsome dictators can mean that they are replaced by far worse alternatives, commonly rooted in extremist forms of religion. And even when a state is seemingly destroyed, we honestly have little idea about what will grow from its ruins. The Iraq case suggests that even a movement that is supposedly crushed beyond hope of recovery can indeed return to life, and morph into still more dangerous forms. The less we understand of religious motivations in politics, the more likely we are to be shocked by such an afterlife.

Philip Jenkins is the author of The Many Faces of Christ: The Thousand Year Story of the Survival and Influence of the Lost Gospels. He is distinguished professor of history at Baylor University and serves as co-director for the Program on Historical Studies of Religion in the Institute for Studies of Religion.

Communism for Kids

The fence and guard tower at the Soviet forced labor camp Perm-36. Gerald Praschl / Wikimedia

Day by day, it becomes harder to tell parody from reality. If I told you that a major press was publishing a book called Communism for Kids, would you believe me? After due investigation, I can confirm that the title does exist, and that it is deadly serious. Even more remarkable, it comes from a very respectable publisher, namely MIT Press—yes, that MIT. (I cannot presently confirm suggestions of such possible future MIT titles as Sure, Johnny, You Should Take Candy From the Guy in the Van.) While we might like to attribute this project to temporary insanity, it does reflect some larger and really troubling currents in U.S. political discourse.

Communism for Kids is the work of Bini Adamczak, “a Berlin-based social theorist and artist” heavily involved in “queer theory.” When it originally appeared in German, the book was titled Kommunismus: Kleine Geschichte, wie Endlich Alles Anders Wird—roughly, “Communism: A Little Story, How Finally Everything Will Be Different”—without the explicit provocation of being aimed at children. In fact, the book is a simplified, user-friendly account of Marxist theory, illustrated with cartoons. At its heart is a series of case studies told in pseudo-fairy-tale language, in which people explore various economic arrangements before settling on utopian communism.

Somewhere along the line, MIT Press decided to market it “for kids,” inspiring some confusion in the process. Amazon lists it as a children’s book intended for grades 3–7, although also suggesting a much more realistic age range of “18 and up.” Conceivably, the press deliberately chose the new title as a marketing gimmick in order to drive controversy and thereby increase sales. Alternatively, on the basis of their experience in Cambridge, Mass., they decided that there actually were enough play groups that would be delighted to work through Adamczak’s scenarios.

Either way, the book is targeted at “youngsters” broadly defined, and it has attracted some amazingly laudatory blurbs. Celebrity academic theorist Fredric R. Jameson remarks that this “delightful little book may be helpful in showing youngsters there are other forms of life and living than the one we currently ‘enjoy.’” Oh, the lowering severity that the professor bestows on us when we dare “enjoy” anything in our present monstrous dystopia! Novelist Rachel Kushner thinks that Adamczak’s is precisely the book we need at a time when global capitalism has brought us “more inequality than has ever been experienced by humans on earth” (which is a precise inversion of actual historical reality).

Assume for the sake of argument that Communism for Kids is not in fact designed to propagandize third graders, but is rather intended for teens and young adults. Is that not enough of a scandal in its own right? Somewhere in the book, might it not be acknowledged that communism is the bloodiest ideological system in human history? Measured solely by the number of his victims, Mao Zedong alone leaves Hitler in the dust. Could the book not mention such monuments to communist utopia as Kolyma and Vorkuta, among the largest and cruelest concentration camps that have ever existed?

Should it not be said that a solid scholarly consensus now accepts that this record of violence and bloodshed was a logical and inevitable consequence of the communist model itself, rather than a tragic betrayal or deformation? Evil Joseph Stalin did not distort the achievements and goals of Noble Vladimir Lenin: rather, he fulfilled them precisely. Pursuing the “for kids” framework, should we not see some equally cheery volumes such as A Day at the Gulag, and even (for middle schoolers) Natasha Is Shot as a Class Enemy? How about Springtime for Stalin?

As an intellectual exercise, just imagine that a major U.S. press offered a youth-oriented book on some other comparably bloody or violent system, such as National Socialism or white supremacy. The book might contain vignettes showing how young people at first learned to accept racial mixing and miscegenation. Eventually, though, they would realize the deeper underlying evils of Semitic influence. Or to paraphrase the advertising copy for Communism for Kids, such a little book would discuss a different kind of National Socialism, “one that is true to its ideals and free from authoritarianism.” Not of course that the publisher would actually be advocating such a thing, God forbid; it would rather be encouraging debate about the options available to contemporary youth. What reasonable person would object to such free discussion?

Even to describe such a project betrays its intrinsic lunacy. The book would not be treated seriously at any stage; it would not be accepted, and if it were, the press’s personnel would resign en masse rather than be involved in its actual publication. The press itself would likely not survive the debacle, and nor should it.

How, then, is that hypothetical instance different from Communism for Kids? Put simply, a great many educated Americans believe that totalitarianism of the left is utterly different from that of the right, and that communism should not be placed in the same toxic category as Nazism or fascism. According to this delusion, Americans who turned to communism through the decades were stubborn idealists, in stark contrast to those monsters who succumbed to racist or fascist theories. For all its possible failings, communism was not evil of itself. To misquote Chesterton, communism was not an ideal that was tried and found wanting, but rather was found difficult and therefore left untried. One day, though, when the stars align, we will do it right!

Such a benevolent view of communism is appallingly false and betrays a near total ignorance of the history of the past century. When we treat communism with tolerance or levity, we are scoffing at literally tens of millions of murdered victims. This is a disgusting moral idiocy, for which we must blame our educational institutions, and our mass media.

For a publisher like MIT Press to reinforce that view, to trivialize the communist historical record, is unpardonable. Have they no decency?

Scandal-Free Obama

Beyond weakening the administration, the seemingly incessant wave of Trump scandals reinforces liberals’ narrative of the previous president. As The New Republic remarked after the resignation of Michael Flynn, “Obama went eight years without a major White House scandal. Trump lasted three weeks.” Or as Obama himself boasted in December, “we’re probably the first administration in modern history that hasn’t had a major scandal in the White House.” To the horror of conservatives, who can cite a litany of official misdeeds during the Obama years, the apparent integrity of that era will feature prominently as historians evaluate that presidency. (Spoiler: as those historians are overwhelmingly liberal, they will rate it very highly indeed.)

In a sense, though, both sides are correct. The Obama administration did a great many bad things, but it suffered very few scandals. That paradox raises critical issues about how we report and record political events and how we define a word as apparently simple as “scandal.”

Very little effort is needed to compile a daunting list of horror stories surrounding the Obama administration, including the Justice Department’s disastrous Fast and Furious weapons scheme, the IRS’s targeting of political opponents, and a stunningly lax attitude to security evidenced by Hillary Clinton’s email server and the hacking of millions of files from the Office of Personnel Management. Even on the available evidence, the IRS affair had most of the major elements of something like Watergate, and a detailed investigation might well have turned up a chain of evidence leading to the White House.

But there was no detailed investigation, and that is the central point. Without investigation, the amount of embarrassing material that emerged was limited, and most mainstream media outlets had no interest in publicizing the affair. Concern was strictly limited to partisan conservative outlets, so official malfeasance did not turn into a general public scandal.

Misdeeds themselves, however, are not the sole basis for official statistics or public concern. To understand this, look for instance at the recently publicized issue of sexual assaults on college campuses. The actual behaviors involved have been prevalent for many decades, and have probably declined in recent years as a consequence of changing gender attitudes. In public perception, though, assaults are running at epidemic levels. That change is a consequence of strict new laws, reinforced by new mechanisms for investigation and enforcement. A new legal and bureaucratic environment has caused a massive upsurge of reported criminality, which uninformed people might take as an escalation of the behavior itself.

Political scandal is rather like that. To acknowledge that an administration or a party suffers a scandal says nothing whatever about the actual degree of wrongdoing that has occurred. Rather, it is a matter of perception, which is based on several distinct components, including a body of evidence but also the reactions of the media and the public. As long ago as 1930, Walter Lippmann drew the essential distinction between the fact of political wrongdoing and its public manifestation. “It would be impossible,” he wrote, “for an historian to write a history of political corruption in America. What he could write is the history of the exposure of corruption.” And that exposure can be a complex and haphazard affair.

We can identify three key components. First, there must be investigation by law enforcement or intelligence agencies, which can be very difficult when the suspects are powerful or well-connected. Facing many obstacles to a free and wide-ranging investigation, the agencies involved will commonly leak information in the time-honored Washington way. The probability of such investigations and leaks depends on many variables, including the degree of harmony and common purpose within an administration. An administration riven by internal dissent or ideological feuding will be very leaky, and the amount of information available to media will accordingly be abundant.

Second, a great deal depends on the role of media in handling the allegations that do emerge. Some lurid tidbits will be avidly seized on and pursued, while others of equal plausibility will be largely ignored. That too depends on subjective factors, including the perceived popularity of the administration. If media outlets believe they are battering away at an already hated administration, they will do things they would not dare do against a popular leader.

Finally, media outlets can publish whatever evidence they wish, but this will not necessarily become the basis of a serious and damaging scandal unless it appeals to a mass audience, and probably one already restive and disenchanted with the political or economic status quo. Scandals thus reach storm force only when they focus or symbolize existing discontents.

The Watergate scandal developed as it did because it represented a perfect storm of these different elements. The political and military establishment and the intelligence agencies were deeply divided ideologically, both amongst themselves and against the Nixon White House. Leaks abounded from highly placed sources within the FBI and other agencies. Major media outlets loathed Nixon, and they published their stories at a time of unprecedented economic disaster: the OPEC oil squeeze, looming hyper-inflation, and even widespread fears of the imminent end of capitalism. The president duly fell.

But compare that disaster with other historical moments when administrations were committing misdeeds no less heinous than those of Richard Nixon, but largely escaped a like fate. Victor Lasky’s 1977 book It Didn’t Start With Watergate makes a convincing case for viewing Lyndon Johnson’s regime as the most flagrantly corrupt in U.S. history, at least since the 1870s. Not only was the LBJ White House heavily engaged in bugging and burgling opponents, but it was often using the same individuals who later earned notoriety as Nixon-era plumbers. In this instance, though, catastrophic scandals were averted. The intelligence apparatus had yet to develop the same internal schisms that it did under Nixon, the media remained unwilling to challenge the president directly, and the war-related spending boom ensured that economic conditions remained solid. Hence, Johnson completed his term, while Nixon did not.

Nor did it end with Watergate. Some enterprising political historian should write a history of one or more of America’s non-scandals, when public wrongdoing on a major scale was widely exposed but failed to lead to a Watergate-style explosion. A classic example would be the Whitewater affair that somewhat damaged Bill Clinton’s second term but never gained the traction needed to destroy his presidency. In that instance, as with the Iran-Contra affair of 1987, the key variable was the general public sense of prosperity and wellbeing, which had a great deal to do with oil prices standing at bargain-basement levels. Both Reagan and Clinton thus remained popular and escaped the stigma of economic crisis and collapse. In sharp contrast to 1974, a contented public had no desire to see a prolonged political circus directed at removing a president.

So we can take the story up to modern times. The Obama administration did many shameful and illegal things, but the law-enforcement bureaucracy remained united and largely under control: hence the remarkably few leaks. The media never lost their uncritical adulation for the president, and were reluctant to cause him any serious embarrassment. And despite troublingly high unemployment, most Americans had a general sense of improving conditions after 2009. The conditions to generate scandal did not exist, nor was there a mass audience receptive to such claims.

So yes, Obama really did run a scandal-free administration.

What you need for an apocalyptic scandal is a set of conditions roughly as follows: a deeply divided and restive cadre of bureaucrats and law-enforcement officials, a mass media at war with the administration, and a horrible economic crisis. Under Trump, the first two conditions assuredly exist already. If economic disaster is added to the mix, history suggests that something like a second Watergate meltdown is close to inevitable.

Don’t Forget the Epic Story of World War II

www.hacksawridge.movie

The World War II film Hacksaw Ridge is in contention for multiple Oscars, and I hope it wins a gaggle of them. It is a fine, well-made film, and a rare attempt in mainstream cinema to portray the heroism of a faithful Christian believer. Having said that, I have to lodge an objection. Without the slightest ill intent, the film contributes to a pervasive lack of understanding or appreciation of the U.S. role in that vastly significant conflict, the popular memory of which is utterly dominated by radical and leftist perspectives. For most people under forty, the war is recounted in terms of the country’s allegedly pervasive racism, bigotry, and sexism, in which the only heroes are those resisters who defied that hegemony. It has become Exhibit A in the contemporary retrojection of modern-day culture wars into the transmission of American history.

Hacksaw Ridge tells the story of Desmond Doss, a devout Seventh-day Adventist whose religious convictions forbade him to kill or even to carry a weapon. Serving as a conscientious objector, he became a combat medic, and found himself on the extraordinarily bloody battlefields of Okinawa. His feats of courage and self-sacrifice earned him the Congressional Medal of Honor, the first ever awarded to a conscientious objector. No one would have dared invent such a story, which clamored to be told. But here is the problem. If such a treatment were part of a broad range of accounts of the war, it would be a wonderful contribution, but it does not form part of any such continuum. While the main narrative of the war has faded into oblivion, major events like Okinawa are recalled only insofar as they can be told from a perspective that appeals to liberal opinion, and even to pacifists.

For many years, I taught a class on the Second World War at Penn State University, and I have an excellent sense of the materials that are available in terms of films, textbooks and documentaries. Overwhelmingly, when they approach the American role in the war, they do so by emphasizing marginal perspectives and racial politics, to the near exclusion of virtually every other event or controversy.

At that point, you might legitimately ask whether minority contributions don’t deserve proper emphasis, as of course they do. Waco, Texas, for instance, was the home of the magnificent Dorie Miller, an African-American cook on the USS West Virginia, who responded to the Japanese attack on Pearl Harbor by blasting at enemy aircraft with a machine gun. Miller was a superb American hero, as also was (for instance) Daniel Inouye, of the Japanese-American 442nd Regimental Combat Team, who suffered terrible wounds and was later awarded a Congressional Medal of Honor. The legendary Tuskegee Airmen produced a legion of distinguished (black) fliers, but we might particularly cite Roscoe Brown, the first US pilot to shoot down one of the Luftwaffe’s terrifying new jet fighters. All these individuals, and many like them, have been lauded repeatedly in recent books and documentaries on the war, for instance in Ken Burns’s 2007 PBS series The War. They absolutely deserve to be remembered and honored.

But they should not be the whole story, and in modern cultural memory, they virtually are. If you look for educational materials or museum presentations about America in World War II, I can guarantee you will find certain themes or events constantly placed front and center. By far the most prominent subject in the great majority of films, texts, and exhibitions is the Japanese-American internment. Depending on their approach, other productions will assuredly discuss women’s role on the home front, and “Rosie the Riveter.” Any actual tales of combat will concern the Tuskegee Airmen, or the Navajo code-talkers. Our students enter classes believing that the Tuskegee fliers were basically the whole of the Allied air offensive against Germany.

A like emphasis dominates feature films of the past couple of decades such as Red Tails (2012, on Tuskegee) and Windtalkers (2002, the code-talkers). Especially when dealing with the Pacific War, such combat-oriented accounts strive very hard to tell their tales with a presumed objectivity, to avoid any suggestion that the Japanese were any more motivated by pathological violence and racial hatred than the Americans. That approach was amply illustrated by Clint Eastwood’s sprawling duo of Flags of Our Fathers and Letters From Iwo Jima (2006). Western productions virtually never address the mass murders and widespread enslavement undertaken by the Japanese regime. Not surprisingly, the Japanese neo-militarist hard Right loved Eastwood’s Flags and Letters. (Fortunately, you are still allowed to hate Nazis, or we wouldn’t have the magnificent Saving Private Ryan.)

The consequences of all this are apparent. For many college-age Americans today, America’s war was largely a venture in hypocrisy, as a nation founded on segregation and illegal internments vaunted its bogus moral superiority. If awareness of Nazi deeds prevents staking a claim of total moral equivalence, then America’s record is viewed with a very jaundiced eye.

Even setting aside the moral issues, the degree of popular ignorance of the war is astounding. I have complained that the materials available for teaching military history are narrowly-focused and tendentious, but the opportunities even to take such courses have all but collapsed in recent years. Most major universities today will not hire specifically in military history, and do not replace retirements. Courses that are offered tend to be general social histories of the home front, which can be excellent in themselves, but they offer nothing of the larger context.

In terms of actual military enterprises, under-40s might at best know such names as Pearl Harbor, Omaha Beach (exclusively from Saving Private Ryan) and maybe Iwo Jima (from Flags / Letters). Maybe now, after Hacksaw Ridge, they will know something about Okinawa, but only as seen through the eyes of one pacifist. (So what were U.S. forces actually doing in Okinawa? Why did the battle happen? How did it end?)

Military buffs apart, younger Americans know nothing about the Battle of the Bulge, which claimed nineteen thousand American lives. They have never heard of Guadalcanal, or Midway, or the Battle of the Coral Sea, the series of battles that prevented the Pacific from becoming a Japanese lake and the main trade route of its slave empire. They know nothing about the land and sea battles that liberated the Philippines, although that could be politically sensitive, as it would demand coverage of the mass killings of tens of thousands of Filipino civilians by Japanese occupiers. That might even raise questions about the whole moral equivalence thing.

Younger Americans know nothing of the battle of Saipan, one of the truly amazing moments in U.S. military history. Within just days of the D-Day landings in France, other U.S. forces on the other side of the planet launched a near-comparably sized invasion of a crucial Japanese-held island, in what has been described as D-Day in the Pacific. In just a couple of days of air battles related to this campaign, U.S. forces in the Marianas destroyed six hundred Japanese aircraft, an astounding total. Japan never recovered.

Quite apart from any specific incident, most Americans have virtually no sense of the course of the war, or American goals, or the political context. Nor will they appreciate the stupendous feats of industrial organization that allowed U.S. forces to operate so successfully on a global scale, and which laid the foundations for all the nation’s post-war triumphs. There was so much more to the story than Rosie the Riveter.

Nor do they appreciate the critical role of the war in creating American identity and nationhood, in forging previously disparate immigrant communities into a new national whole. So the Civil War was the American Iliad? Then World War II was our Aeneid, an epic struggle against authentic evil, which at once created the nation and framed its destiny. It should not be commemorated as a study in victimhood and injustice.

What Was ‘America First’?

Charles Lindbergh. Photo: Library of Congress

In his inaugural address, Donald Trump used a slogan that he had already quoted approvingly in earlier speeches: “From this day forward,” he said, “it’s going to be only America first. America first.” Liberal writers profess to find the phrase terrifying, a confirmation not just of Trump’s dictatorial instincts, but also of his racial and religious prejudice. Sidney Blumenthal is one of many to recall that the slogan “was emblazoned on the banner of the movement for appeasement of Hitler.” In reality, the original America First movement of 1940–41 was far broader and more complex than this critique might suggest, and was actually much more respectable and even mainstream. It was a sincere anti-war movement that drew from all shades of the political spectrum. Its later stigmatization as a Nazi front group is tragic in its own right, but it also closes off legitimate paths of public debate that have nothing whatever to do with authoritarianism or bigotry.

The America First Committee (AFC) was formed in September 1940 and operated until the Pearl Harbor attack. In all that time, though, historical attention focuses on just one shameful moment, namely the speech that Charles Lindbergh gave in Des Moines, Iowa, on September 11, 1941, when he openly attacked Jews as a force driving the country toward war. The speech was appalling, but in truth Lindbergh’s anti-Semitism represented the view of only a small minority of AFC supporters. That event, above all, fatally tainted the memory of America First. In modern television documentaries, the movement is usually mentioned alongside authentic Nazi groups like the clownish German American Bund, whose members paraded in brown shirts and swastikas. The impression we are left with is that the AFC was a deeply unsavory pressure group that tried to undermine the political will of a nation united behind the Roosevelt administration in its determination to fight Hitler when the proper time arose. Any ideas associated with America First must, by definition, be regarded as anti-Semitic, pro-Nazi, and toxic.

The fundamental problem with that view is that on most critical issues, the AFC held positions that were close to a national consensus, and in these matters, it was FDR and the interventionists who were the minority. The AFC can be understood as the clearest institutional manifestation of the nation’s deep-rooted anti-war sentiment.

It is very difficult today to understand just how deeply and passionately pacifist the U.S. was through the 1930s, and how strongly that sentiment persisted almost to the outbreak of war in 1941. That had nothing to do with anti-Semitism, or with sympathy for Hitler. Rather, it arose from a widespread perception of the First World War as an unmitigated catastrophe. According to this consensus, U.S. involvement in that earlier war arose because of the machinations of over-mighty financiers and plutocrats—what we might call the 1 percent—who deployed vicious and false propaganda from London and Paris. Also complicit were the arms dealers, for whom the phrase “merchants of death” now became standard.

Throughout the New Deal years, Republicans and Democrats alike worked to expose these crimes, and to ensure that they could never be repeated. Between 1934 and 1936, the Senate committee headed by Gerald Nye provided regular media copy about the sinister origins of U.S. participation in the First World War, a theme taken up in bestselling books. Those critical ideas led to a detailed series of Neutrality Acts between 1935 and 1937, designed to prevent U.S. involvement in any new conflict. From 1935 through 1940, Congress repeatedly voted on the Ludlow Amendment, a proposed constitutional amendment that would have made it impossible to go to war without the consent of a national referendum except in cases of direct attack. As late as August 1941, when the House of Representatives approved a measure to extend the term of military draft service and ensure that the U.S. would retain a large fighting force, it did so by a single vote: 203–202.

Anti-war ideas saturated popular culture. Look, for instance, at the 1938 film You Can’t Take It With You, directed by the thoroughly mainstream Frank Capra. In one scene, the sympathetic anarchist played by Lionel Barrymore mocks the bureaucrat who is trying to make him pay taxes. What do you want the money for, he asks? To build battleships? They’re never going to be any good to anyone.

From 1939 to 1941, the Nazi-Soviet pact brought American communists wholeheartedly into the peace crusade, with results that are embarrassing in retrospect. Today, popular-culture historians cherish the memory of iconic folk singers like Pete Seeger, Woody Guthrie, and the Almanac Singers. But listen to the songs they were singing in 1940 and 1941, which mock the Roosevelts as warmongers who dutifully serve the interests of the British Empire: why should I die for Singapore? One tuneful Almanac singalong compares FDR’s appetite for war with his agricultural policy. The government destroys crops to keep prices high, says the song, so let’s plow under, plow under, every fourth American boy!

In such an atmosphere, even cynical jokes went viral. In 1936, Princeton students organized a prank organization called the Veterans of Future Wars. Since the government was so likely to draw them into idiotic wars, they argued, could they please have their bonuses now, instead of after the conflicts had ended? Veterans of Future Wars became a national campus sensation, with 60,000 members at its height. As in the 1960s, the anti-war cause was a campus youth movement, with its own potent folk songs.

From 1939, a series of ad hoc anti-war committees and campaign groups emerged across the country, out of which the AFC coalesced. America First had 800,000 paying members, and that at a time when membership meant far more than merely ticking a box on a website. Some AFC leaders were hard-right anti-New Dealers, but many others were former members of the Progressive movement, dedicated to social improvement and liberal reconstruction. Across the ideological board, women were very well represented. The churches were another key source of recruitment. Most American Roman Catholics, especially the Irish leadership, disliked the prospect of an alliance with imperial Britain, but many AFC supporters were liberal Protestants, organized through the mainline’s flagship magazine, Christian Century. AFC was cross-party, and cross-ideological.

Lindbergh’s horrible speech apart, virtually none of AFC’s campaigning or publicity materials so much as mentioned Jews. Instead, they raised a series of challenging questions about whether American interests were well-served by direct involvement in a war that might have consequences as grim and futile as those of its predecessor. What corporate interests desired such an insane conflict, and which media? Why were Americans so persistently vulnerable to manipulation and crude propaganda by foreign nations? Why would people not understand the administration’s blatant efforts to provoke an incident that would start a new war? (The AFC was actually dead right here. Throughout late 1941, FDR really did strive to provoke clashes between U.S. destroyers and German U-boats.)

Such questions are all the more relevant in light of more recent historical developments: questions about the nature of executive power in matters of foreign policy and the official use of propaganda and deceit to achieve a political end. You do not need to be a crypto-Nazi to oppose any war except one fought in the direct defense of national interests.

If Donald Trump’s administration wants to start a national debate about those aspects of the America First tradition, then good for them.

Sexual Blackmail’s Long History

Maybe Donald Trump is an aficionado of weird sexual practices, and maybe not: I really have no way of knowing. But I do know that stories of precisely this sort are a very common tactic used by intelligence agencies to discredit political figures at all levels. With very little effort, I can also tell you exactly how such tales emerge and how they fit into the modus operandi of particular agencies and governments, and there is no excuse whatever for other people not to know this.

The fact that U.S. agencies and media are being so uncritical of these latest Trump exposés and “dossiers” is alarming. The whole affair should alert us to the wave of bogus stories we can expect to be polluting our public life in the next few years.

Since so many roads in this affair seem to lead back to Russia, some background might be helpful. In 1992, the senior KGB archivist Vasili Mitrokhin defected to the British, carrying with him an amazing haul of secret documents and internal archives. Working with respected intelligence historian Christopher Andrew, Mitrokhin published a number of well-received books, including The Sword and the Shield (1999). The Mitrokhin Archive demonstrates the vast scale of Soviet disinformation—the invention and circulation of stories that were untrue or wildly exaggerated, commonly with a view to discrediting critics or opponents of the USSR. These functions were the work of the KGB’s Service A, which was superlative at planting stories around the world, commonly using out-of-the-way news media. Service A also did a magnificent job of forging documents to support their tall tales. (Even serious and skeptical historians long accepted the authenticity of a particularly brilliant forged letter notionally sent by Lee Harvey Oswald.) Such disinformation was a major part of the KGB’s functions during the 1970s and 1980s, which were the formative years of the up-and-coming young KGB officer, Vladimir Putin.

One instance in particular illustrates Service A’s methods and how triumphantly they could shape public opinion. The Soviets naturally wanted to destroy the reputation of FBI Director J. Edgar Hoover, a dedicated and lethally effective foe of Soviet espionage until his death in 1972. Even after his death, the vilification continued, and in the context of the time, the deadliest charge that could be levied against him was that of homosexuality. Accordingly, the Mitrokhin Archive shows how hard the Soviets worked to spread that rumor.

Now, Hoover’s real sexual tendencies are not clear. He was a lifelong bachelor who maintained a very close friendship with a male subordinate, and our modern expectations would be that he was a closeted homosexual. Having said that, during Hoover’s youth it was common for members of the same gender to share close non-sexual friendships. What we can say with total confidence is that Hoover was fanatically conscious of his privacy and security, to the point of paranoia, and there was no way that he would do anything public that could arouse scandal or permit blackmail.

That context must make us very suspicious indeed of claims that Hoover was not only homosexual, but also a transvestite, stories that have become deeply entrenched in popular culture. But the origins of the cross-dressing stories are very suspect. The only source commonly quoted is a 1991 affidavit by a woman who claimed that some decades earlier she had met a heavily dragged-up Hoover at a private event, where he allegedly wished to be addressed as “Mary.” The story is multiply improbable, most would say incredible, and has been rejected even by historians who normally have nothing good to say about Hoover. Yet the tale circulated widely, and wildly. In 1993, searching for a new head of the FBI, the ever-tasteful new president Bill Clinton joked that it would be hard to find someone to fill Hoover’s pumps.

We can’t prove that Service A invented or spread the specific cross-dressing stories about Hoover, but they fit exactly with other Soviet efforts through the 1970s and 1980s. And that legend has subsequently become a truth universally acknowledged.

The Russians were by no means the only ones to circulate disinformation; such tactics are common to most major intelligence agencies, including those of the U.S. But looking at the recent Trump revelations, we constantly encounter Russian sources for scabrous tales that are about as improbable as that of “Mary” Hoover.

The Trump dossier that is the source of current attention might be accurate, but there are real grounds for suspicion. The main source is a respected and credible British private agency, which like most organizations of its kind draws heavily on former members of that nation’s mainstream intelligence service. But this particular dossier was evidently not compiled by the standard means used by such agencies, where competing sources are evaluated and judged in an adversarial process. (Team A makes the best case for one interpretation, Team B argues the opposite, and you see which side makes the most credible argument.) Rather, the Trump material was gathered on a parti pris basis with the specific goal of collecting negative information on the then-candidate. This is in consequence a partisan report, which just did not exercise the kind of skepticism you would expect in a normal intelligence analysis. The stories (including the one involving the fetish) look not so much like kompromat (compromising material, which might be true) but rather seem to be Russian dezinformatsiya, disinformation.

I was stunned to read a recent story in the British Guardian suggesting that the CIA and FBI had “taken various factors into consideration before deciding [the dossier] had credibility. They include Trump’s public comments during the campaign, when he urged Russia to hack Hillary Clinton’s emails.” Is there really anyone who did not hear that “urging” as a classically outrageous Trumpian joke? Yet that is here cited as major confirmation of his sinister Russian connections.

Why would the Russians create such material, if it might discredit a figure who actually promised to be a close ally on the global stage? Perhaps it was a kind of insurance, to keep Trump in line if he became too hostile to future Putin actions. Far more likely, it was part of a general series of stories, scares, myths, and downright falsehoods that have circulated so freely over the past 18 months, with the collective goal of discrediting the U.S. democratic system and fomenting dissension, paranoia, and outright hatred in American public life. The Russians control our elections! The Russians watch everything you say and write, no matter how private you think it is! Russian hackers gave the Rust Belt states to Trump! Trump is a Putin puppet!

If the Russians were seeking to undermine the American political order, to discredit the U.S. presidency, and in short to destabilize the United States, then they have succeeded magnificently in their goal.

Over the next few years, Donald Trump’s many critics will be very open to accepting any and all wild stories about him and his circle. Some of those rumors might even be true. But a great many will be disinformation stories spread by the Russians and, who knows, by other international mischief makers. It would be wise to exercise some caution before falling for the next tall tale.

Checks and Balances

We have recently heard a great deal about the alleged evils of the Electoral College, and progressive opinion would currently like to abolish what it views as an obstacle to true multicultural democracy. Reasonable people can disagree about the virtues of the college itself, but the underlying arguments here are deeply troubling for what they suggest about the very widespread ignorance of the Constitution. This is a matter of rudimentary civics education.

The main indictment of the Electoral College is that it stands in the way of the expression of the voting majority. The main problem with this argument should be apparent: very few people actually believe in using simple electoral majorities to decide each and every political question.

If we did that, then all contentious issues would be resolved not by the decisions of courts or elected assemblies, but by referenda or plebiscites. And if the United States did work on that basis, many victories that progressives cherish—such as the end of formal racial segregation, the legalization of abortion, and the recognition of same-sex marriages—would have been achieved much later than they were or not achieved at all. Quite assuredly, the mass immigration that has occurred since the 1970s would not have taken place. The Founders knew plenty of examples of direct, unmediated democracy from the ancient world—plebiscites, referenda, votes on ostracism—and they absolutely rejected them.

The fact that particular institutions stand in the way of the popular will does not mean that they are anti-democratic. The United States does not work on the basis of crude majoritarianism, and it was never intended to do so. Any elected representative who fails to understand that is in the wrong profession.

Progressives further allege that the Electoral College’s undemocratic setup betrays the institution’s origins in the political needs of slave states: states’ power in the college is based on their total number of representatives and senators, and the Three-Fifths Compromise originally gave Southern states extra representatives in proportion to their (non-voting) slave populations. The college as an institution is thus tainted by a history of racial oppression. Only by ending it can the country march along the path to true racial equality.

But if you actually read the Founding Fathers, you see they are constantly trying to balance and reconcile two competing forces, namely the people’s role as the basis of government and the need to restrain and channel the shifting passions of that same people. That is why the U.S. government has the structure it does of House, Senate, and presidency, each with its different terms of office. The goal is to ensure that all three will not suddenly be elected together in a single moment of national fervor or insanity. While those institutions all represent the people, they do so with different degrees of directness and immediacy.

At every stage, the Founders wished to create a system that defended minorities from the majority, and moreover to protect smaller and less powerful regions and communities from their overweening neighbors. That protection did not just involve defending the institution of slavery, but extended to any number of other potential conflicts and rivalries of a regional nature: urban versus rural, merchants against manufacturers, farmers against merchants and manufacturers, skeptics versus pious true believers.

The critical need for intermediary institutions explains the role of states, which is apparently such an acute grievance for critics of the Electoral College. As the question is often asked today: why does that archaic state system stand in the way of the national popular will, as expressed through the ballot box? But let’s pursue the argument to its logical conclusion. If the Electoral College is really such an unjustified check on true democracy, its sins are as nothing beside those of the U.S. Senate, whose members hold power regardless of the relative population of their home states. California and Texas each have two senators, and so do Wyoming and Vermont. Is that not absurd? Back in the 1920s, the undemocratic quality of that arrangement was a major grievance for the leaders of populous industrial states, who could not understand why the half-million people of Arizona (say) had exactly the same Senate clout as the 10 million of Pennsylvania.

The Founders were also far-sighted enough to realize that those crude population figures would change dramatically over time. Probably in 20 years or so, Arizonans will outnumber Pennsylvanians. In the long run, things really do balance out.

So the next time you hear an argument that the Electoral College has served its purpose and should be abolished, do press the speaker on how far that argument should be taken. Why should we have a U.S. Senate, and should it be abolished forthwith? Why do we have more than one house of Congress anyway? Why don’t we make all decisions through referendum?

The Founders actually did know what they were doing, and Americans urgently need to be taught to understand that.

When ‘Paranoia’ Is Justified

The U.S. has a lengthy tradition of paranoia and conspiracy theory dating from colonial times onwards, and it would not be difficult to cite many examples of that sad trend—times when Americans directed their suspicions toward such groups as Catholics, Jews, and Mormons. Today, that paranoid tradition is usually deployed against conservatives, with Trumpism presented as the latest manifestation of the American Paranoid Style.

Running alongside that, though, is an equally potent tradition of false accusations of paranoia: of using mockery and psychobabble to minimize quite genuine threats that the country faces. Yes, Sen. Joe McCarthy made wild and even ludicrous claims about communist subversion, but communists had in fact infiltrated U.S. industry and politics, a potentially lethal menace at a time when the U.S. and USSR stood on the verge of apocalyptic warfare. We were dealing with far more than a “Red Scare” or vulgar “McCarthyism.” And through the 1990s, anyone suggesting that al-Qaeda might pose a serious threat to New York or Washington was accused of succumbing to conspiracy nightmares about James Bond-style supervillains.

Sometimes, alleged menaces are bogus; but sometimes the monsters are real. Sometimes, they really are out to get you.

Next year, American paranoia will be much in the news when we commemorate the centennial of U.S. entry into the First World War. At that time, xenophobic and nativist passions unquestionably did rage against people with roots in enemy nations, above all Germany. Vigilante mobs brutally attacked German individuals and properties, and a whole culture war raged against any manifestation of German language, literature, or music. Outrageous scare stories circulated about German plots, spies, and terrorists. When those stories are recounted in 2017, you can be certain that they will conclude with telling contemporary references to modern-day paranoia and xenophobia, especially directed against Muslims.

But how valid were those historic charges? Some years ago, I researched the extensive domestic security records of the state of Pennsylvania during the First World War era, at a time when this was one of the two or three leading industrial areas, critically significant to the war effort. Specifically, I looked at the internal records of the Pennsylvania State Police, an agency notorious in left-wing circles as a bastion of racism, reaction, and anti-labor violence. The agency was on the front lines of concerns about spies and terrorists, as the primary body to which citizens and local police would confide their suspicions about German plots.

What I found about those security efforts differed massively from the standard narrative. The first impression you get from those confidential documents is how extraordinarily sensible and restrained those police officers actually were. In the face of hundreds of complaints and denunciations, the agency’s normal response was to send in an undercover cop, who would investigate the alleged German traitor. On the vast majority of occasions, the officer would then submit a report explaining why Mr. Schmidt was in fact harmless. Yes, the officer might say, Schmidt has a big mouth, and cannot restrain himself from boasting about German victories over the Allies. He might on occasion have said something stupid about how the gutless Americans will never be able to stand up against the Kaiser’s veterans. On the whole, though, he is a windbag who should be left alone. Quite frequently, the reports might say, sympathetically, that Mr. Siegel is a harmless and decent individual whom the locals dislike because of his strong accent, and they really should stop persecuting him. Or that all the evidence against Mr. Müller was cooked up by hostile neighbors.

Overall, the internal security efforts in Pennsylvania at least impress by their sanity, decency, and restraint. That might be a tribute to the human qualities of the cops in question, although it is also true that enemy aliens were so abundant in Pennsylvania that nobody could feasibly have tried to jump on every misplaced word. But you look in vain for evidence of official paranoia. Ordinary people might have been “spy mad,” as a report noted, but the cops weren’t.

So innocuous were the general run of reports that I tended to believe those that did refer to sedition, which was no myth. A great many Germans or German sympathizers wound up in trouble because they had said or done things that were truly destructive in the context of a nation at war. They did publicly laud German armies, disparage U.S. forces, and spread slanders about American atrocities in Mexico as well as on the Western Front. Some even flew the German flag and denounced German neighbors who supported the war effort.

And then there were the spies and terrorists. When you read next year about the alleged paranoia of the time, do recall the genuine German conspiracies of the time, such as the Black Tom attack that occurred in Jersey City in July 1916, while the U.S. was still at peace. German saboteurs destroyed a U.S. munitions shipment destined for the Allies, in the process unleashing an explosion so large that it made people in Maryland think they were hearing an earthquake. Also recall that German secret agents really had formed working alliances with dissident groups on U.S. soil—Irish Republican militants in the big cities, Mexicans in the Southwest.

If the Germans ever did plan to strike again at the U.S. war effort, Pennsylvania would surely be their first target, and petty acts of arson and sabotage abounded. How sensible, then, were the state police to focus their efforts on German sympathizers who really did look and act like potential terrorists? In one typical case, a young German miner named Wagner, in Carbon County, was heard making pro-German remarks. This was alarming because he was located right in the heart of the strategic coal country. On further examination, Wagner speculated in detail about just how the Allies could be crushed militarily. He then boasted to an undercover officer that he knew how to convey information to Philadelphia, where radio transmissions could carry it to the Mexican border, and thence to German agents. The investigating officer concluded, in a judgment that was far from “hysterical,” that Wagner “is a dangerous man … for he is very loyal to Germany; would like to work for the Fatherland, and his people who are in the war.”

I mentioned the comparison between “paranoia” in the Great War and modern Islamophobia. Actually, that Islamic parallel is closer than we might think. Then as now, some people spread worthless slanders about foreigners and aliens, but also, then as now, some of the nightmare stories were actually true. Among the mass of harmless ordinary migrants devoted to working to improve themselves and their families, there really were, and are, people out to destroy America and Americans. Despite all the horror stories we hear about idiots in 1917 striking at the Kaiser by kicking a dachshund in the street, German spies and terrorists really existed, and they posed a lethal threat.

I mention this context now because you are not going to hear much about it in the coming year, when we will once again be lamenting the American Paranoid Style.

Will Faithless Electors Cause a Constitutional Crisis?

This past presidential election was among the most unsavory in U.S. history, and it might not even be over yet. Unless we are very careful, this election could yet come close to crippling constitutional government.

The problem is straightforward enough: namely, that large sections of the losing side stubbornly refuse to admit defeat. That is bad in itself. Around the world, one of the commonly accepted criteria in judging a democratic society is whether the losing party agrees to stand aside after losing an election. Initially, Hillary Clinton conceded defeat, and did so with grace and maturity, partly (it seems) under White House pressure. Still, an alarming number of her followers remained diehards, and the Clintons have now joined the demand to recount votes in Wisconsin and other swing states.

That recount demand is rooted in some murky statistics, and some assertions that can now be shown to be bogus. Initially, New York magazine quoted an expert study purporting to show that votes in critical states had been manipulated by computer hacking. According to those early claims, a technical comparison of regions within particular states showed that places using electronic voting turned out surprisingly high votes for Trump relative to areas that relied on paper ballots. Hence, said the advocates, there was prima facie evidence of malfeasance, and votes in swing states should be audited carefully before any final decisions can be made about the election’s outcome—however long that process might take.

Actually, these suggestions were absurd, and were promptly recognized as such by many liberal media outlets. The allegations were, for instance, utterly rejected by quantitative guru Nate Silver, who is anything but a Trump supporter. As he and others noted, the non-urban areas that were vastly more inclined to vote for Trump were also the ones most likely to use electronic voting. Big cities, in contrast, commonly used paper ballots. Obviously, then, we would expect votes cast by computer to lean heavily toward Trump in comparison to those marked on paper. That result certainly did not mean that Russian techies in secret fortresses in the Urals were hacking computers in Wisconsin and Michigan to delete votes cast for Hillary Clinton. Not long after the New York story, the main computer expert cited made clear that he himself did not accept the hacking explanation, although he still felt that an electoral autopsy was called for. That process is now underway, and with the support of the Clinton camp.

The fact that such mischievous allegations have even been made bespeaks liberal desperation at the defeat of their candidate, and a bottom-feeding attempt to seek any explanation for the catastrophe those liberals feel they suffered on November 8. Sadly, though, these unfounded allegations will remain alive in popular folklore for decades to come, with the simple takeaway: Republicans stole the 2016 election.

Those electronic issues pale in comparison with Democratic Party resistance in the Electoral College, where delegates are scheduled to meet on December 19. Normally, those electors would simply be expected to confirm the results of the November ballot, but liberals have demanded that they do the opposite, and actively nullify the result. Some electors have already stated that they will refuse to accept the majority votes cast in their Trump-leaning states. Notionally, these “Hamilton electors” will take this course not from any partisan motivation but rather to draw attention to the perceived injustices of the Electoral College system, which in their view should be replaced by a national popular vote. Online petitions urging other electors to join the defection have garnered millions of signatures.

Donald Trump’s lead in the college was so substantial—probably 306 to 232—that a handful of “faithless electors” should not affect the overall result, which could be overturned only if dozens of electors from states Trump carried defected in concert. That is extremely unlikely to happen, but the credentialed experts also dismissed as unthinkable many other things that have actually happened in this turbulent year.

For the sake of argument, imagine that enough electors go rogue to flip the election. Think through the likely consequences of such an outcome—in which Hillary Clinton is inaugurated in January, rather than Donald Trump. It is inconceivable that a Republican Congress would accept this result. It would offer zero cooperation in any legislative efforts, and it would presumably stonewall any and all approval of Clinton-nominated officials or judges. The only way to operate the government in those circumstances would be for the president to make extensive use of executive orders, and to fill official posts through an unprecedented volume of recess appointments. Theoretically, that method might even be used to fill Supreme Court vacancies. Constitutional government would have broken down, and we would be facing something like a Latin American presidential dictatorship. For several years, Washington’s political debate would be reduced to something like a Hobbesian war of all against all.

Does anyone really want to see a Clinton presidency at such a cost?

Nor is it easy to see how such a cycle could ever be broken once set in place, and particularly how the precedent set in the Electoral College could ever be overcome. Would not Republican electors seek revenge in 2020 or 2024? In that event, the November elections would become merely an opening gambit in an interminable legal process.

It is also ironic to see Hillary’s supporters demanding action in the Electoral College on the grounds of her convincing win in the popular vote. As they argue, how could any administration seriously claim a “mandate” with just the 46 percent or so of that vote earned by Donald Trump? Older election aficionados might cast their minds back to 1992, when an incoming Clinton administration decided to go full steam ahead on a number of quite radical policies, including a bold attempt to establish a national health-care system. The president then was Bill Clinton, who owed his presidency to gaining just 43 percent of the popular vote. Mandates are strange and flexible beasts.

Through the years, we have witnessed a number of elections so catastrophic that they seemingly threaten the existence of one or the other party. In the mid-1970s, few serious observers believed the Republican Party would survive the Watergate crisis, and similar pessimism reigned on the right following Obama’s victory in 2008. Yet despite such disasters, political currents soon changed, and Republicans won historic victories in 1980 and 2010. The despairing Democratic Party of the late 1980s likewise managed to resurrect itself sufficiently to hold power through much of the following decade. The lesson is straightforward: complain all you like about defeat, but console yourself with the prospect of future recovery and victory, probably in as little as two years’ time. To that extent, the American political system is remarkably forgiving of even egregious failure.

But that system also depends on elections securing clear and commonly agreed outcomes, in accord with principles very clearly described in the Constitution. If those decisions are not accepted, and are subject to constant sniping and subversion, then that Constitutional settlement will simply run aground.

If people don’t learn to lose, the Constitution fails.

As for me, I will breathe again on December 20.

White Christian Apocalypse?

I have been deliberately holding off on election-related comments on matters where I have little new to contribute. On one critical issue, though, contemporary debate and theorizing really is trespassing on my areas of expertise.

For some 15 years now, I have been writing about the idea of the U.S. becoming a majority-minority country, in which no single ethnic or racial group constitutes a majority. I discussed this, for instance, in my book The Next Christendom, back in 2002. That idea has recently become quite standard and orthodox, and is an increasingly familiar element of political rhetoric, especially among liberals and Democrats. But at least as the idea is appearing in the media and political discourse, it is being badly misunderstood, in two critical ways. For some, these misunderstandings arise from excessive optimism; for others the flaw lies in pessimism. These points may seem stunningly obvious, but as I say, they escape a lot of otherwise informed commentators. Consciously or otherwise, observers are letting themselves be deceived by the fluid nature of American ethnic classifications.

Firstly, and obviously, “minority” is not a uniform category.

After the recent election, I saw plenty of articles saying this was the last gasp of White America before whites lost their majority status, maybe sometime around 2040. Well, 2040 is a long way off, but let us look at the projections for what the U.S. population will look like in mid-century, say in 2050. The best estimate is that non-Latino whites will make up some 47 percent of that population, Latinos 29 percent, African-Americans 15 percent, and Asians 9 percent. Allow a couple of percentage points either way.

In that situation, “whites” will indeed be a minority. But the future U.S. will be a very diverse nation, with multiple communities whose interests might coincide on some issues but not others. The fact that whites will be a minority in 2050 does not, for instance, mean that African-Americans will have unlimited latitude to achieve their goals, or that blacks can count on the reliable support of Asians and Latinos. On some issues, yes; on others, no. To take one example, a distinctively African-American cause like reparations for slavery is presumably not going to appeal to the mass of Latino or Asian-American taxpayers any more than it presently does to old-stock whites.

I have actually talked with people who are convinced that by 2050, African-Americans will be a majority in this country. No, they won’t, not even close. In fact, the African-American share of the population will not even grow that substantially: it was around 12 percent in 1980 and is projected to reach 15 percent by 2050. Much of that growth reflects newer African migration, from communities that generally do not identify with African-American politics or traditions.

Also, what do we mean by “white”? Historically, the category of “whiteness” has been very flexible, gradually extending over various groups not originally included in that constituency. In the mid-19th century, the Irish were assuredly not white, but then they became so. And then the same fate eventually befell Poles and Italians, and then Jews. A great many U.S. Latinos today certainly think of themselves as white. Ask most Cubans, or Argentines, or Puerto Ricans, and a lot of Mexicans. Any discussion of “whiteness” at different points in U.S. history has to take account of those labels and definitions.

Nor are Latinos alone in this regard. In recent controversies over diversity in Silicon Valley, complaints about workplaces that are overwhelmingly “white” were actually focused on targets where a quarter or more are of Asian origin. Even firms with a great many workers from India, Taiwan, or Korea found themselves condemned for lacking true ethnic diversity. Does that not mean that Asians are in the process of achieving whiteness?

Meanwhile, intermarriage proceeds apace, with a great many matches involving non-Latino whites and either Latinos or people of Asian origin. (Such unions are much more common than black-white relationships.) Anyone who expects the offspring of such matches to mobilize and rise up against White Supremacy is going to be sorely disappointed.

The second point specifically concerns the book The End of White Christian America, by Robert P. Jones, a work I found rewarding and provocative. But the title has been much cited and misused (not Jones’s fault!). Typically doom-laden was the Washington Post’s headline, “White Christian America Is Dying,” and the takeaway for most liberals is: and good riddance.

Reading some post-election comments, it seemed as if commentators were expecting the “white Christian” population to evaporate, which it won’t do. Firstly, non-Latino whites will of course remain, and will still, at least through the 2050s, constitute by far the nation’s largest ethnic community. A 47 percent community still represents an enormous plurality. Actually, the scale of “white Christian” America will be far more substantial even than that figure might suggest, given the de facto inclusion of other groups—especially Latinos, and possibly Asians—under the ethnic umbrella. Intermarriage accelerates the expansion of whiteness.

Whites are not going away, and nor are Christians. One great effect of the 1965 Immigration Act was to expand vastly the range of ethnic groups in the U.S., and the new arrivals were overwhelmingly Christian in origin. That is obviously true of Mexicans, but also of Asian-Americans and Arab-Americans. Newer generations of African migrants tend to be fiercely Christian. The American Muslim population, by contrast, was and remains tiny as a proportion of the national total, and it will remain so.

So no, we are not looking at the end of white Christian America, nor at its passing from the scene. In 2050, this will be a much more diverse country, religiously and ethnically. But if you are waiting for the White Christian Apocalypse, you may have the wrong millennium.

Of Monsters and Black Lives

Lonnie David Franklin is a monster.

Franklin, the “Grim Sleeper,” is a convicted serial killer who, between 1985 and 2007, murdered 10 to 200 women in Southern California. (I will return to that extraordinarily wide range of estimates later.) The fact that he is an African-American is centrally relevant to my purpose here. He is an appalling object lesson in the lethal power of official racism, past and present. At the same time, though, his case should also serve as a serious caution in current debates about reforming American policing in an age of urban unrest.

On a personal note, one of my odder claims to fame is that in the early 1990s, I pioneered the academic study of serial murders committed by African-Americans. At that time, characters like Ted Bundy and Jeffrey Dahmer had become worldwide folk villains, but the burgeoning literature on serial killers made next to no reference to black offenders. Some commentators even suggested that such killers did not exist, making this a distinctively white pathology. Knowing what I did about the number of real black offenders, I disagreed strongly. I argued that African-Americans made up perhaps 15 percent of all U.S. serial killers, and subsequent research has supported that figure.

In stressing the abundance of black multiple killers, my goal was not of course to highlight the savagery and cruelty of any particular race, but rather to point to the neglect of minority victims. This actually gets to the issue of how serial murder happens, and why we need to consider much more than the evil or derangement of any given individual. Everything ultimately depends on the availability of victims and the priority that society places on them.

A vicious thug who likes to kill police officers, corporate CEOs, or Catholic priests is unlikely to claim more than one victim before the authorities start paying attention and reacting very forcefully. Hence, the man does not become a serial killer in the first place. If, though, a comparably disturbed offender chooses instead to target “low-value” or disposable individuals, such as street prostitutes, he can kill a great many victims without the police taking much notice.

That is all the more true if we also factor in a social ambience where life is assumed to be short and tenuous, for example in an era of gang wars and rampant drug abuse. If police find a corpse in such circumstances, it simply does not become a high investigative priority. Often, in fact, the dead are not even recognized as murder victims, but are simply dismissed as likely drug overdoses. Even when women survive such attacks and escape from their assailants, police generally pay little attention to any complaints they might make.

This is where race comes in so centrally. One of the golden rules of homicide is that generally, like kills like. People tend to kill within their own social setting, and commonly within their own class and race (and often, their own neighborhood). Throughout American history, some black men have committed savage, random violence against people of their own race, and to that degree, they are exactly the same as their white counterparts. Consistently, though, the fact that their victims are also black, and usually poor, means that police have paid little attention to those crimes, allowing individual offenders to count their kills in the dozens or hundreds. Even if they are arrested and convicted, media bias has meant that such offenders receive little public attention, leading police and government to underplay or ignore the problem of serial violence in minority communities. Racial bias thus contributed to the mass victimization of poor communities, and above all of poor women.

Exhibit A in this story would be Los Angeles in the 1980s and early 1990s, the era of crack wars and rampant gang struggles, when murder rates were skyrocketing. Police focused heavily on those crimes and pathologies, and largely neglected the mass slaughter then underway of poor minority women, whose deaths were basically noted in passing. California media in the 1980s identified a prolific serial killer called the “Southside Slayer,” who in retrospect might have been a composite of six or seven independent and unrelated offenders. At more or less the same time, Los Angeles was the hunting ground for several terrifying serial killers, men such as Louis Craine, Michael Hughes, Chester Turner, and Lonnie Franklin himself—all African-American. DNA evidence suggests that other yet unidentified killers were also active in the same years, and often the very same streets.

And that was just Los Angeles. The total number of victims involved here is unknown, and probably unknowable. Lonnie Franklin, as I mentioned, was officially implicated in around ten deaths, but a substantial collection of portrait photographs was found in his possession. If those photographs are in fact trophies of other, unrecorded victims, then his true toll might run into the hundreds—virtually all black and Latina women.

Similar stories could be told of other crisis-ridden inner-city areas across the nation. Other notorious names included Lorenzo Gilyard in Kansas City and Anthony Sowell in Cleveland. Such offenders are not rare, and what they have in common is their choice of marginalized victims: poor, minority, female, and commonly drug users or prostitutes.

The solution would be to reshape police priorities so that forces place a much higher premium on minority victims and are more sensitive to the possible presence of compulsive sexual criminals. There should be no “low value” victims. Put another way, the message would be that black lives matter, and especially black women’s lives. Through the years, community-activist groups have made strides in this cause, so that murder series are now more likely to be acknowledged, but much remains to be done.

And this is where we face a paradox. As black communities have protested against violence and discrimination by police, the resulting conflicts have strongly discouraged police from operating in minority areas and reduced proactive interventions. Although this is debated, much evidence now suggests that the immediate result has been an upsurge of crime and violence in those areas, through the “Ferguson Effect.” Police tend to ask why they should go into an area unnecessarily if what they do is going to end up on YouTube and the evening news. In fact, such an idea is by no means new. After the urban rioting of the mid-1960s, police massively reduced their footprint in most inner-city areas, and the consequence was a jaw-dropping escalation of violence and homicide between 1967 and 1971.

Today, we are only beginning to see the first inklings of the Ferguson Effect and the consequences of the reduced police presence. The less police intervene in troubled minority areas, the easier it will be for poor victims to die and disappear, and for men like Lonnie Franklin to hunt without check. In the worst-case scenario, these could be very good times for serial predators, not to mention rapists and domestic abusers.

Less policing means more crime, and more victims. If you reduce levels of policing sufficiently, you will create a perfect ecology for victimization.

Obviously, this is not a simple dichotomy: the choice is not between interventionist, brutal policing on the one hand and total neglect on the other. What we need, ideally, is effective, color-blind policing firmly rooted in particular communities, where all groups can rely on the good intentions and law-abiding character of their police forces. Trust is everything.

But that situation will not come overnight. In the interim, withdrawing or reducing the police presence runs the risk of endangering a great many ordinary people, whose lives absolutely must matter. We are talking about equal justice, and equal protection.

Terrorism With the Religion Taken Out

When bombs went off in New York City’s Chelsea neighborhood on Saturday night, state and city officials said some very silly things. But understanding those remarks is actually key to understanding U.S. policy toward terrorism in general.

The immediate response of Mayor Bill de Blasio was to reject possible links to terrorism as such. Gov. Andrew Cuomo, meanwhile, declared that “a bomb exploding in New York is obviously an act of terrorism, but it’s not linked to international terrorism—in other words we find no ISIS connection, etc.” Both men also rejected possible linkages to another bombing earlier in the day at Seaside Park, NJ, which had targeted a Marine charity run.

At the time both comments were uttered, nobody had any idea who had undertaken the attacks or what their motives were. Conceivably, the attacks could have been launched by militants working in the cause of international Islamism, or black power, or white supremacy, or none of the above: they might have been the work of a lone mad bomber furious at his maltreatment by the power company. De Blasio was thus right to leave open the question of attribution, but he was wrong to squelch the potential terrorist link. His comment was doubly foolish given that the New Jersey attack had happened the same day and involved similar methods, which in itself suggested that an organized campaign had begun. Since Cuomo had no idea who the attackers were, he could say nothing whatsoever about whether they had any connection to the Islamic State.

The New York Times, meanwhile, headlined that the explosion was “intentional,” which was helpful, as it meant that a New York City garbage can had not detonated spontaneously.

Why on earth would de Blasio and Cuomo make such fatuous comments, especially when their security advisors must have been telling them something radically different? (NYPD intelligence and counter-terrorist operations are superb.) Why, particularly, would they make remarks that are virtually certain to be disproved within days?

Both de Blasio and Cuomo made an instant decision to go to the heart of the matter as they saw it, which was not analyzing or discussing terrorism, but rather preventing hate crimes and pre-empting “Islamophobia.” In doing this, they were closely echoing the approach of Barack Obama, who has explicitly stated that the danger of terrorism is wildly overblown, as fewer Americans die from terrorism than from slipping in their bathtubs. (Thank heaven Obama was not president in December 1941, or he would presumably have been lecturing the American people about how small the casualty figures on the USS Arizona were when set beside road deaths.) In contrast, the real and pressing danger facing the nation is ethnic and religious hatred and bigotry, which is bad in itself, and which also threatens U.S. prestige and diplomatic clout in the wider world.

Combating that threat must take absolute supremacy. That means (among other things) systematically underplaying and under-reporting any and all violent incidents committed by Muslims, or even overtly claimed for Islamist causes. Where Islamist claims are explicitly made, then the waters must be muddied by suggesting other motives—presenting the assailant as a lone berserker, motivated perhaps by psychiatric illness or homophobia. We rarely hear this ubiquitous strategy identified or named, so I offer a name here: this is the U.S. policy of de-Islamization.

With a mixture of bemusement and despair, we watch the extremely limited coverage of the savage attack in St. Cloud, Minn., where a Somali man approached strangers in a mall, asked them if they were Muslim, and attacked any and all non-Muslims with a knife while shouting “Allahu akbar.” The Islamic State rapidly acknowledged the assailant as one of its soldiers. The FBI has labeled the attack “a potential act of terrorism.” (You think?) Rather than focusing on the attacker or his motivations, CNN’s coverage of the incident emphasized that “Community leaders fear anti-Muslim backlash, call for unity.” If you have the languages, you are much better off accessing French or German news sources for coverage of such American events.

What about Cuomo’s “international terrorism” point? This represents a throwback to what should be an extinct classification system for terrorist attacks.

In years gone by, some terror attacks were launched by U.S. citizens working in various causes, while others were the work of international forces. The latter might include an Iraqi militant assassinating dissidents in Michigan. But the label also had ethnic and religious overtones. In the 1980s and 1990s, domestic terrorism usually implied white supremacists or neo-Nazis, while “international” commonly denoted Islamic or Middle Eastern connections.

That distinction made sense when the U.S. had a small Muslim population, very few of whom were tied to international causes or organizations. That situation is now totally different, and most of the numerous Islamist terror attacks on U.S. soil of the past decade have been undertaken by U.S. residents or citizens. Orlando killer Omar Mateen was born in New York State, and Fort Hood terrorist Nidal Hassan was a Virginian serving in the U.S. Army. The man currently identified as a suspect in the Chelsea attacks is Ahmad Khan Rahami, a naturalized U.S. citizen.

All these events are thus domestic terror attacks, but they were committed in the name of global Islamist causes, specifically of the Islamic State. So why does the domestic/international dichotomy matter any more?

When Cuomo said the Chelsea attacks were not international in character, what he meant to imply was that they were neither Islamic nor Islamist in inspiration. His statement was simply deceptive, and was part of the larger campaign to de-Islamize the present terror campaign.

Whoever the next president may be, I am not too concerned about how “tough” they aspire to be toward terrorism in general. I just want them to acknowledge the deadly seriousness of the situation this country faces from domestic guerrilla campaigns, and, most importantly, the religious and political causes for which most of that violence is undertaken.

Let’s end de-Islamization.

Low-Tech Terror

If a mysterious alien ray swept every gun off the North American continent tomorrow, very ordinary and low-skilled militants could still perpetrate horrendous violence quite comparable to last month’s Orlando attack. In understanding and forestalling terrorist threats, access to weaponry is only one consideration out of a great many, and that fact is crucial to contemporary political debates.

But without guns, without so-called “assault weapons,” how could terrorists kill innocents in large numbers? One answer to that question comes from the Chinese city of Kunming, where in 2014 a group of eight Islamist militants from the Uighur people stormed the local rail station, killing 29 civilians. As private access to firearms is extremely difficult in China, the killers used long-bladed knives, to devastating effect. That tactic has been repeated, and some horrible Chinese “spectaculars” have attracted international attention. Last year, the same Uighur movement attacked a Xinjiang coal mine, reportedly killing 50 workers.

Such mass knife attacks occur quite frequently in China, and by no means always for political motives. Still, the fact that a tactic has proved so successful in one country attracts the attention of jihadist media, such as al-Qaeda’s magazine Inspire, which bring such methods to a global audience. IS especially recommends that followers around the world use whatever means are available to attack and kill unbelievers; if guns and explosives are not easily found, then knives are quite acceptable.

Knife attacks have one major drawback for terror groups, namely the large numbers of people needed to inflict mass casualties. Mobilizing a group of eight attackers is difficult without a high danger of penetration and exposure. Other forms of non-traditional violence, though, can easily be committed by solitary lone wolves, and for that reason they are warmly advocated by IS and al-Qaeda.

The main weapons in question are vehicles, and the U.S. was the scene of one pioneering experiment in this form of terror. In 2006, an Afghan immigrant named Omeed Aziz Popal used his SUV to attack civilians in the San Francisco Bay Area, killing one and injuring nineteen. His intentions were very clear: as one observer remarked, “He was trolling for people.” After hitting his victims, he returned to run over their prone bodies. Don’t worry if you have never heard of the crime: it was poorly reported, and in a way that made virtually no reference to the driver’s ethnicity or religious background. The same year, the Iranian Mohammed Reza Taheri-azar used his SUV to attack passers-by on the campus of the University of North Carolina at Chapel Hill, injuring nine. The driver cited 9/11 pilot Mohammed Atta as his special hero.

If such attacks have not recurred in the United States itself, they have happened repeatedly in other countries, with the clear implication that tactics and methods are being developed through trial and error, awaiting full-scale deployment. By far the commonest venue for these assaults has been Israel, presumably because militants there find it all but impossible to obtain guns or explosives. Vehicles, though, are much easier to come by, and Palestinian guerrillas have used cars as well as heavier machines such as tractors and bulldozers. Jerusalem alone has witnessed several such attacks since 2008, each with a number of fatalities. Uighurs (again) have used vehicles to ram crowds in Beijing.

The year 2014 marked a turning point in this saga, when IS propagandist Abu Muhammad al-Adnani urged an all-out campaign of lone wolf violence. Find an unbeliever, he said: “Smash his head with a rock, or slaughter him with a knife, or run him over with your car, or throw him down from a high place, or choke him, or poison him.” Multiple vehicle attacks occurred around that time. A man yelling “Allahu Akbar!” drove down eleven pedestrians in the city of Dijon, and the very next day, Nantes witnessed an almost identical attack by a separate militant. Also in 2014, a recent convert to Islam in Quebec used his car against two members of the Canadian military.

So far, the most striking thing about these lone wolf vehicular attacks is just how relatively small the casualty tolls have been, but that could change very easily. It is easy to imagine drivers choosing denser crowds, during busy shopping seasons or major sporting events. In that scenario, long lines of fans or shoppers or travelers represent a target-rich environment, and a determined driver unafraid of being killed could claim twenty or more fatalities.

Whatever else we might say about limiting access to firearms (even assault rifles), such a policy of itself would do nothing whatever to prevent these kinds of low-tech violence. The solution lies in efficient forms of intelligence gathering, monitoring and surveillance, combined with psychological profiling. The danger with such methods is that they will not pick up every potential assailant, while running a serious risk of producing lots of false positives, aggressive blowhards who in reality will never commit a crime. Just how to walk that particular tightrope, between effective prevention and respecting rights to free speech, is going to offer a major challenge to law enforcement agencies of all kinds.

And yet again, it would be very useful if our political leaders felt able to speak the name of the actual cause for which all those murderous guns and knives and cars are being deployed. Perhaps that is too much to hope.

Is Brexit National Suicide?

Over the past week, I have often been asked what I think about the British referendum vote to leave the European Union, or to seek “Brexit.” My standard response is that I would be happy to explain, provided the sight of me ranting and shrieking obscenities does not bother my listeners. In my view, Brexit is nothing short of national suicide. It is a cataclysm that Britain can and must avert, at all costs.

In saying this, I run contrary to the view of many conservative writers who seem delighted by the vote. Those observers make some legitimate points about the referendum and the national and anti-globalization values that it proclaimed. Yes, the vote did represent a powerful proclamation by a silent majority who felt utterly betrayed and neglected by global and corporate forces. The Leave vote, they rightly think, was a mighty voice of protest.

Actually leaving, though, is a radically different idea. At its simplest, it means that Britain would abandon its role as a dominant power in Europe, a continent that presently has its effective capital in London, and with English its lingua franca. It also means giving up countless opportunities for young people to work and travel across this vast area.

Alright, perhaps those losses are too tenuous and speculative, so let’s be very specific, hard-headed, and present-oriented. How would Britain survive outside the EU? If the country had three or four million people, it could revert to subsistence agriculture, but it doesn’t—it has 64 million. That means that the country absolutely has to trade to survive, whether in goods or services. Fantasies of global commercial empires apart, the vast majority of that trade will continue to be what it has been for decades, namely with other European nations. All discussions of Brexit have to begin with that irreducible fact.

So trade on what terms? Since the referendum vote, it has become starkly apparent that none of the Leave militants had given a moment of serious thought to this issue.

One attractive model is that of the associated nation, which enjoys access to the single market but is exempt from EU laws and regulations. Conservative politician Boris Johnson recently published an op-ed suggesting just such a model, drawing on the example of Norway, in what has been called a kind of EU-Lite. Beyond accessing the single market, he also specified that Britain would be able to maintain continent-wide mobility for its own people, while restricting immigration of foreigners into Britain. He also declared that future fiscal deficits could be solved by the limitless veins of fairy gold to be found under his house. Well, I am making up that last part, but it is perhaps the most plausible part of his scenario. European leaders made it immediately clear that no form of association would be contemplated under which Britain could exclude migrants. Mobility of labor must run both ways.

And that Norwegian example demands closer inspection. What it means in practice is that Norway pays a hefty price for its EU relationship and market access: it contributes very substantial sums to the EU budget and accepts the free movement of European labor. The only thing it lacks is any say whatever in EU policy-making.

Let me put this in U.S. terms. Imagine that Texas seceded from the union. The American President is amenable to the scheme, and explains how it would work in practice. Henceforward, he says, Texas would be completely independent! It would however continue to pay federal taxes, while having no control of immigration or border policy. Nor would it benefit in any form from federal aid, support or infrastructure projects. Oh, and Texas would no longer have any Congressional representation in Washington, to decide how its funds were spent. It seems like a bad idea to me, continues the president, but hey, it’s your decision. Enjoy your sovereignty!

As they begin to consider the effects of Brexit, the Leave leaders are facing an irreconcilable contradiction. On the one side, you have the more mainstream figures, like Boris Johnson, who will very soon be pleading for a Norway-style association model, with all the negatives I suggested earlier. Against them will be the populists, like Nigel Farage’s UKIP, who will accept nothing implying open immigration, no form of EU-Lite. Rejecting free movement, though, also means abandoning any hope of access to the single European market. If implemented, that would mean industrial and financial collapse.

But there is a good side to that outcome! As the British economy disintegrated, millions would be forced to leave the country to seek their livelihoods elsewhere, and among those would be many of the recent immigrants whom UKIP so loathes. Who would choose to remain in a beggared and impoverished junkyard? The immigration problem would thus solve itself, almost overnight.

Realistically, the most likely outcome for Britain is some kind of association status, which means many of the burdens of EU membership without the essential plus of being able to control the process from within, at governmental level. And the advantages of seeking that solution rather than the present model of full EU membership are… are… hold on, I’m sure I can finish this sentence somehow. No, in fact, there aren’t any advantages.

Full British membership in the EU as constituted presently—with all its manifold flaws—is infinitely superior to any possible alternative outcome.

But surely, one might object, the referendum can be neither reversed nor ignored? Actually, it could be, easily, if any politician had the guts to do so. Nor would a second referendum be needed to achieve that result. Contrary to the impression given by many media reports, the recent referendum was an advisory and nonbinding affair, with no necessary legal consequences whatever. In terms of its necessary impact on legislation, it had precisely the same force as a Cosmopolitan magazine survey on sexual predilections. In the context of constitutional laws and customs established over a millennium or so of British history, the referendum exists for a single purpose, namely to advise the deliberations of the Crown-in-Parliament. Parliament must vote on this issue, and if it decides to overturn the result, then so be it.

If, at that point, British parliamentarians still decided to validate the Brexit result, so be it, and may their country’s ruin be on their consciences. But the decision remains entirely theirs.

Who Threatens Democracy?

The United States is currently facing a truly dangerous and unsettling political movement that poses real challenges to traditional concepts of democracy. That phenomenon is, of course, anti-Trumpism. Speaking personally, nothing could induce me to vote for Mr. Trump, but the violent opposition to him is becoming alarming.

“Trump Could Threaten U.S. Rule of Law, Scholars Say,” reads a New York Times headline. Specifically, these experts warn of his “contempt for the First Amendment, the separation of powers and the rule of law.” And if they are experts, and if the Times has bothered to telephone them, then their views are of course impeccably objective. There are multiple issues here, not least that another Clinton presidency would assuredly involve precisely the same hazards, and presumably to the point of impeachable crimes, yet the Times is not seeking expert opinions on those matters.

But right now, let us consider the “rule of law” in the present election. In San Jose, anti-Trump protesters just physically attacked and chased supporters leaving a meeting, and events like that are becoming commonplace. They are assuredly going to escalate over the next few months. That prospect determines attitudes on both sides. Every left-wing activist group knows it is duty-bound to express its opposition to Trump, and supporters know that they are likely to be attacked if they attend meetings.

We can guarantee that certain things are going to happen within the next two months. One is that at least a handful of Trump supporters are not going to turn the other cheek. They know they cannot rely on police protection, and so some will turn up to meetings prepared to defend themselves, possibly with firearms. Sooner or later, someone is going to be wounded or killed. At that point, expect a media outpouring about the inherent violence of Trump, his supporters, and the political Right. These animals are vicious! When attacked, they defend themselves.

The other prediction we can make with fair certainty is that in mid-July, we are going to be facing a major political crisis. The Republican convention will be held in Cleveland July 18-21, and it will assuredly be held in a state of siege. The exact outcome of that event very much depends on police behavior, preparation, and organization. If protesters can be kept sufficiently far removed, then perhaps some semblance of order can be preserved. If not, it is possible that the convention itself might be forced to suspend its activities. Either way, it is highly likely that individual convention delegates and participants are going to be attacked and, conceivably, harmed.

Political protests on some scale are not new, and political conventions are a natural target. But in modern U.S. history, has there ever been a national election where the candidates of one party were simply unable to show their faces without being met by violence? Where mob action simply makes it impossible for one campaign to function? We are not here talking about the candidate of some neo-Nazi sect or Klan group, but the Republican Party itself.

Ultimately, this is all a matter of policing and the workings of the criminal-justice system. In recent years, American police forces have become very conscious of the need to avoid overreaction at protests and public gatherings, for fear of generating embarrassing film that shows up on YouTube. In a version of the notorious “Ferguson Effect,” they have become much gentler in their approaches than they were in earlier years. Witness, for instance, the decision to allow groups like Black Lives Matter to block roads without facing even the danger of arrest. The reasons for caution are understandable, but something has to change. If the police cannot maintain public order sufficiently to allow the functioning of something as critical as a national election, have we not ventured into a full-scale national crisis?

If national elections cannot be held in safety, has democracy not ceased to function?
