Bradley J. Birzer is the author of Russell Kirk: American Conservative and co-founder of The Imaginative Conservative website.
“The Cold War is done, but those bastards will find us another one.”
This cry might have come from any current reader of The American Conservative alive in the early 1990s—well, maybe without the bastard part. But still, an anguished expression from Russell Kirk or Pat Buchanan? Why not? After all, as TAC editor Bob Merry recently and wisely noted, so many so-called conservatives of the early 1990s “kicked Reagan to the curb” the moment they inherited the Republican Party. And it seems they kept kicking, mutating a military that came into existence solely to defeat the Soviets into a world peace-keeping force, a new Delian League. The bastards did find us another one.
And then: “They’re here to protect us, don’t you know. So get used to it. Get used to it.”
James Bovard or Virginia Postrel? Or some other grand libertarian of a quarter of a century ago? Why not?
Actually, the words are prog rock lyrics from Marillion’s album Brave (1994).
Taking its name from The Silmarillion, the 1977 book released by the Tolkien Estate, Marillion formed in 1978 and released its first album, A Script for a Jester’s Tear, in 1983. With their third album, Misplaced Childhood, they hit it big as the name “Kayleigh” became a repeated word on MTV and all across European radio. In his excellent book Citizens of Hope and Glory, Stephen Lambe explains: “‘Kayleigh’ penetrated the UK public consciousness like no other song in Progressive Rock with the exception of ‘Dust in the Wind,’ which had done pretty much the same thing for Kansas in the USA in 1977.”
Indeed, the name Kayleigh—in a variety of spellings—became an important and trending name for girls throughout the English-speaking world.
The band’s frontman at the time, “Fish,” was none other than the charismatic and Goliath-sized Scotsman Derek Dick. After one more album, Dick left, and Marillion hired Steve “h” Hogarth to take over lead vocals. To this day, the band’s decision to replace Fish rather than call it quits has deeply divided Marillion fans. Where Fish was clever and staccato, Hogarth is romantic and suave, bringing a fundamentally different sound.
With Hogarth, the band wrote and recorded 1989’s Seasons End and 1991’s Holidays in Eden. By corporate standards, the albums sold relatively well, each reaching number seven in the United Kingdom. Hoping to get the band to release another album that would sell as well as Misplaced Childhood, EMI’s delegate to the band told them: “Look what you guys need to do is to make a quick, sharp, surefire album, and, you know, get back to basics,” according to bassist Pete Trewavas. By “back to basics,” EMI’s man surely meant to produce another “Kayleigh” and make a huge profit.
Though now gone as an independent label, EMI, it should be remembered, was a major label in the 1960s through the 1990s, having made its fortune with artists such as The Beatles, Queen, Iron Maiden, Elvis Costello, R.E.M., The Smiths, Kate Bush, and Pink Floyd. It had, however, already revealed its true colors in 1988 when it allowed Mark Hollis to make arguably the finest album of the rock era, Spirit of Eden by Talk Talk. A commercial failure but a fan and critic favorite over the past 30 years, Spirit of Eden took 16 months to make, and EMI initially lost a considerable sum. The album has, however, transcended everything, including its label and its band. With its airy atmospherics, its silky flow, and its religious and sacramental lyrics, Spirit of Eden became known simply as the first “post-rock” album, breaking every genre and subgenre then in existence. It has since sold over 500,000 copies.
Marillion chose one of their sound engineers, Dave Meegan, to produce their third album with Hogarth. Having studied with famed producer and musician Trevor Horn (The Buggles, Yes, Seal), Meegan had also worked, perhaps most famously, with U2 on The Joshua Tree (1987) and Rattle and Hum (1988).
Meegan, it turned out, loved the gothic elements of sound, that is, the jagged nature of nature and of music, the uneven and the odd. Setting up their studio in a castle in the southwest of France, Meegan and the band recorded everything: from a fire crackling to random conversations to the air moving through hallways and turrets, all, it seemed, to pick up the random atmospherics of a passing ghost. “The theory was that if we fed all the mics onto tape then we’d pick up any passing ghosts as well. You can’t hear them but I can feel them here and there,” the atheist Hogarth remembered. “Brave is all about the spiritual aspect of life dominated by the non-spiritual, so we filled the songs with as many sounds and pictures as we could dream up—I sent our sound engineer out at dawn one morning to record silence for the beginning of the album!”
EMI’s desire to make a fast pound faltered almost immediately, as Meegan and Marillion began to echo Hollis’s work on Spirit of Eden. As it turned out, Meegan and the band took eight months to write the album and another seven to produce it. Certainly, this was no longer normal commercial territory. Even the members of Marillion began to express concern over the time it was taking. But producer Meegan responded: “The way I see it, we could make a masterpiece or we could just make a very average record and you’ve got to decide which one you’d like.” The band chose “masterpiece.”
Inspired—or frightened—by the real-life news story of a young English woman found wandering on the Severn Bridge, unsure of her own identity, history, or age, Hogarth centered the concept of Brave on alienation, a human being trapped in an overwhelmingly mechanistic and progressive world:
The babble of the family
And the dumb TV
Roar of the traffic and the thunder of jets
Chemicals in the water
Drugs in the food
The heat of the kitchen and the beat of the system
The attitude of authority
The laws and the rules
Hit me square in the face, first morning at school
The heroes and the zeroes
The first love of my life
When to kiss and to kick and to keep your head down
When they’re choosing the sides
I was never any good at it
I was terrified most of the time
I never got over it
I got used to it
Though many artists have depicted alienation, no one has done it quite as well as Hogarth and Marillion. From the moment the album begins to its final second 72 minutes later, Brave captivates its listeners.
As with Hollis’s Spirit of Eden, Brave never lived up to the hopes that EMI had invested in it. Marillion would record only one more album for the label, the phenomenal 1995 release Afraid of Sunlight. And despite the time and effort that went into Brave, it never caught on the way Misplaced Childhood had. In his diary entry dated March 29, 1994, Hogarth expressed his frustration:
The general reaction to Brave can only be described as euphoric. Why does media and radio despise us? I guess you either love or loathe Marillion…. Conversion is a long and quantum jump. Our music seems to be behind a locked door. It’s fun in the room if you find the key!
Following Afraid of Sunlight, Marillion decided to leave not just EMI but the entire corporate rock world behind. Amazingly, the band charted its own course, and even more amazingly, it succeeded. As Hogarth told Prog magazine in 2016, commemorating their 20th year of independence, labels are “not doing it because they love you. If they’re really honest, they’re not even doing it because they’re excited about the music.”
In true entrepreneurial form, the band made its 1999 album marillion.com available to subscribers on the website of the same name. It proved a huge success. A year later, the band innovated again, offering the very first crowdfunded album in the history of rock. Since then, the band has made one success after another. Though they’ve never sold in the numbers they did in the 1980s, they have, perhaps, the most loyal fan base in the rock world today. Those who love Marillion—such as Gianna Englert, a young political theorist at Southern Methodist University; famous Lutheran organist and music theorist Rick Krueger; and the venerable libertarian Tom Woods—follow the band through every tour and keep up with its every nuance.
Whatever Hogarth’s worries in 1994, he and Marillion have transcended their critics as well as themselves in almost every way that matters. When the album came out, one of its most ardent supporters was South African filmmaker Richard Stanley, who retold the story of Brave in an art film of the same title that year, using the album as a soundtrack. Though the ending of the film is, arguably, more dour than the end of the album, the film not only does justice to the album, it does justice to the art.
Twenty-four years later, the album remains as vibrant as ever. Just this year, audiophile and musician Steven Wilson remixed and re-released Marillion’s Brave. The deluxe, 60-page hardback edition of the re-release comes with the original mix of the album, the 2018 mix of the album, two DVDs of a 1994 concert, and a Blu-ray with the full Wilson mix in 24-bit, promo films, a bonus track, and a 70-minute documentary.
Music journalist Stephen Humphries (Christian Science Monitor, Prog, Boston Globe, American Way) explains the brilliance of Brave best:
Hogarth, an uncommonly emotionally honest singer, turns one person’s search for connection, meaning, and redemption into something that is universal to the listener. I often tell people that Marillion reaches emotional and musical planes that most other bands don’t know exist. Brave exemplifies those qualities. The beauty of Steven Wilson’s remix is how it creates even greater clarity and space in the instruments so that Steve Hogarth’s voice can seep through to the listener’s soul.
Looking back over the past three decades, it’s rather clear that the bastards did indeed find us another one, and they simply want us to get used to it. Yet in large ways and small ways, we normal folks keep fighting back—through truth, beauty, and goodness.
This piece benefitted immensely from the writings of Paul Stump, Phill Brown, Roie Avin, Stephen Lambe, Stephen Humphries, Rich Wilson, and Jerry Ewing.
Bradley J. Birzer is The American Conservative’s scholar-in-residence. He also holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
In almost every way, when we think about Thomas Jefferson, we think about America. When we’re proud of our heritage, we focus on his Declaration of Independence, his founding of the University of Virginia, his design of Monticello, his massive library, and his sending forth Lewis and Clark. In our uncertainty and embarrassment, we turn to his ownership of slaves, his alleged obsession with Sally Hemings, and his bitterness towards Alexander Hamilton.
In so many ways, Thomas Jefferson is America, and America is Thomas Jefferson.
A conservative can appreciate him for his classical education, a libertarian for his promulgation of natural rights, a liberal for his love of choice, and a progressive for his optimism. Jefferson’s “right to life” might well adorn an anti-abortion placard, while his “right to liberty” might equally stand on a pro-abortion one. A 19-foot-tall bronze version of the man might greet all who come to the imperial city of Washington, D.C., while a hastily drawn mimeograph of his face might appear at a protest in Tiananmen Square. Each is equally identifiable.
This month marks the 275th anniversary of his birth. The moment should give us all pause. Just who exactly was this extraordinary man—a man who seems to be the best of us, the worst of us, and, in some strange and mysterious way, also above us?
Whatever gifts Jefferson had, he was, to be sure, a man. He was born, he lived, and he died. He had a beloved wife, Martha, and when she passed all too young, his greatest friend and ally was his daughter, also named Martha. Certainly he loved books and wine, tangible pleasures, no matter the extraordinary talents of his mind. Standing six feet and one or two inches tall, he was described by one visitor later in his life as possessing a “face streaked and speckled with red, light gray eyes, white hair,” with a stance “bony, long and with broad shoulders, a true Virginian.” He was wearing “shoes of very thin soft leather with pointed toes and heels ascending in a peak behind, with very short quarters, gray worsted stockings, corduroy small clothes, blue waistcoat and coat, of stiff thick cloth made of wool of his own merinos and badly manufactured, the buttons of his coat and small clothes of horn, and an under waistcoat flannel bound with red velvet.” If nothing else, the description reminds us that Jefferson lived in another age, a more elegant and somehow voluptuous one, one that feels far more Hogwarts than it does Goldman Sachs.
If America has produced a more intelligent man—that is, at least as well rounded as Jefferson was in his intellect, perceptiveness, and creative drive—that person has yet to come forward. Few would claim Jefferson was a great president, but even in the White House he was unique. Washington might have had more fortitude and Lincoln more resolve, but Jefferson had already given us his everything by the time he entered office in 1801. He had, in the manner of a classical demi-god, articulated and perhaps bestowed upon us our founding mission, our purpose, and our greatest contribution to the world: the belief, however poorly practiced and implemented, that ALL men are created equal, each endowed by his creator with certain inalienable rights.
This contribution, though, raises vital questions not just about the man but by extension about America. Exactly what were Jefferson’s sources and influences? Was he merely a French radical living in the hinterlands of Western civilization? Certainly some have argued so. After all, when asked, he admitted in 1789 that he loved Isaac Newton, Francis Bacon, and John Locke above all others in the Western tradition as “the three greatest men that have ever lived, without any exception.” Taken at face value, this is an extraordinary claim by any standard, even one far less majestic than Jefferson’s. While each was an Englishman, each was also quite recent and modern in Jefferson’s day. And while Newton, Bacon, and Locke might each be highly intelligent in and of themselves, taken together they seem a bit radical and mischievous. Additionally, given Jefferson’s own life-long pursuit of the classics and liberal arts, one must ask, where is Greece and Rome in all of this, let alone medieval and Reformation England?
In his own extraordinary work on Jefferson, Thomas Jefferson: Apostle of Americanism, the French-born American man of letters and Princeton professor Gilbert Chinard claimed that simply because Jefferson admired someone—no matter to what degree—it didn’t mean that person’s ideas were reflected tangibly and measurably in his writings. Endowed with an intense intelligence, Jefferson could well separate what he knew to be true, what might be true, and what ought—but never would—be true.
Chinard argued forcefully that when it came to the Declaration as well as to the laws of Virginia, Jefferson understood what would and would not work in America. “No greater mistake could be made than to look for his sources in Locke, Montesquieu, or Rousseau,” Chinard argued, most certainly exaggerating to make a point. “The Jeffersonian democracy was born under the sign of Hengist and Horsa, not of the Goddess Reason.” As proof of this, Chinard—himself, it should be remembered, of French birth and stock—drew upon John Adams’ description of Jefferson’s proposed seal of the United States in 1776. “Mr. Jefferson proposed, the children of Israel in the wilderness led by a cloud by day, and a pillar by night—and on the other side, Hengist and Horsa, the Saxon chiefs, from whom we claim the honor of being descended, and whose political principles and form of government we have assumed.” Even if you’re an extremely intelligent reader—and, after all, you wouldn’t be here at The American Conservative if you weren’t—you might be scratching your head as you read this. Newton and Locke, certainly. You know them well. But Hengist and Horsa? Who on God’s green earth are these two? Unless you spend your time reading early medieval Celtic or Anglo-Saxon poetry—such as Beowulf—or modern British fantasy by C.S. Lewis and J.R.R. Tolkien, Hengist and Horsa probably mean little or nothing to you. The two Saxon chiefs reside more accurately in myth than they do in history, at least as professional historians understand the term.
For Jefferson, though, Hengist and Horsa represented the great republican tradition of the Germanic tribes sitting under the oak trees, deciding what was common law and what was not, speaking as representatives of their people in the Witan, and living as free men, bound to no emperor. To the American founding generation, Hengist and Horsa were as real as Cincinnatus, the Roman republican who threw down the sword, refused a permanent dictatorship of the city, and walked into the country to spend his life as a farmer. In the long scheme of things, the accuracy of the founders’ understanding of history matters little. They believed in Cincinnatus, Hengist, and Horsa, and they acted accordingly.
It isn’t hard to find the classical world that intrigued Jefferson’s mind. Probably no one has documented this as well as Carl Richard in his 1994 magnum opus, The Founders and the Classics. As late as 1810, Jefferson complained that any understanding of current events took precious time away from his reading of Tacitus and Homer. Roughly a decade later, he admitted, “I feel a much greater interest in knowing what has happened two or three thousand years ago than in what is now passing.” Though he loved Homer and Tacitus most, Virgil was not far behind. When Jefferson founded the University of Virginia in the late 1810s, he noted that all the science in the world meant little if a student failed to learn Greek and Latin. He wanted to exclude all professors and students who could not readily read the classics in their original language. Only this way could the nature of man, the temptations of power, and the attainment of the virtues truly be understood.
All of this came together for Jefferson in what he called, as historian Hans Eicholz beautifully put it, the “harmonizing sentiments of the day.” In an 1825 letter to Henry Lee explaining the purpose behind the Declaration, Jefferson wrote:
This was the object of the Declaration of Independence. Not to find out new principles, or new arguments, never before thought of, not merely to say things which had never been said before; but to place before mankind the common sense of the subject, in terms so plain and firm as to command their assent, and to justify ourselves in the independent stand we are compelled to take. Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion. All its authority rests then on the harmonizing sentiments of the day, whether expressed in conversation, in letters, printed essays, or in elementary books of public right, as Aristotle, Cicero, Locke, Sidney, etc.
If Jefferson really is the best mind that America has produced—he probably is—and if his greatest contribution to the world is his Declaration that all men are endowed with certain inalienable rights, we would be fools to ignore our classical and medieval lineage. Indeed, we would become nothing more than mischievous European radicals, bent on altering all things inherited from man and God, giving neither his proper due.
This article (and my views, such as they are) benefitted immensely from the thoughts and words of Dedra Birzer, Gilbert Chinard, Hans Eicholz, Winston Elliott, Kevin Gutzman, Christian Kopff, Don Lutz, Dumas Malone, Rob McDonald, and Carl Richard.
Bradley J. Birzer is The American Conservative’s scholar-in-residence. He also holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
Even among the odd, Russell Amos Kirk was unusual. Perhaps only in America could such an eccentric and anti-individualist individual have arisen. And arise he did.
One hundred years ago, Kirk entered the world. He was born to poverty-stricken but bookish Anglo-Saxon Celts on the wrong side of the tracks in Plymouth, Michigan; probably very few who looked at his young parents believed them capable of producing a genius. Kirk’s mother was a quiet saint, but his father was a ne’er-do-well who never quite got his life together and certainly never earned any respect from his only son. Like so many who settled in Michigan during the 19th century, the Kirks and their relatives had come with the first waves of immigration to the northern American colonies, slowly migrating across New England, upstate New York, and towards the Great Lakes. They had lost their Puritanism at some point, these old Yankees, and were to become supporters of Abraham Lincoln and the proud backbone of the Union army during the American Civil War. Northern and agrarian, they replaced their lost Knoxian faith not only with cherished books but also with séances, faith healings, levitations, and bizarre spiritual liturgies.
Like so many traditionalists of the 20th century, Russell Amos Kirk (“Jr.”) revered his grandfather while dismissing his father. The traditionalist sympathies and piety he prized were absent from the previous generation, which he deemed unworthy even of consideration; all real hope rested in his grandfather’s cohort. Russell especially admired his maternal grandfather, Frank Pierce, a well-read and rough Stoic character.
Unsure of what greater things to believe, young Kirk found certainty in his mother and in her father. Through their influence, he read everything he could find, from the collected works of James Fenimore Cooper to Thomas Jefferson to Karl Marx, all while still in his tweens. He never stopped reading, and possessed amazing recall courtesy of what was almost certainly a photographic memory. Of all the great things in the world, however, nothing bested a walk with his quietly certain grandfather. On those walks, Russell felt his mind sharpen, his soul enlarge, and his world come into focus.
Not realizing how abnormal it was for a 12-year-old to read and devour all the knowledge and wisdom around him, Kirk never felt the sting of poverty, so immersed was he in the life of the mind.
When moved, he began to write, and once he started to write, he never stopped. One of his most loyal students, Wes McDonald (RIP), noted that Kirk probably wrote more in his lifetime than even the most educated American reads in a lifetime. Having been privileged to read all of Kirk’s published books, articles, and reviews, as well as his private correspondence and papers, I can affirm McDonald’s suspicion. Everywhere Kirk traveled, he took with him his three-piece tweed suit, a swordstick, and a typewriter. The eminent scholar Paul Gottfried claimed that watching Kirk at the typewriter was akin to watching Beethoven compose. According to those who knew Kirk well, he could type while carrying on a conversation. His photographic memory allowed him to reference things he had read over the years without looking them up. Sometimes in a furious day, he might answer tens or even hundreds of letters, and often in a furious night, he could produce a full book chapter.
Almost as soon as Kirk entered Michigan State College as an undergraduate in 1936, Professor John Abbott Clark took him under his wing and introduced him not just to the profoundly important but already neglected works of Irving Babbitt and Paul Elmer More, but also to Socratic and Ciceronian humanism as a fundamental part of the Western tradition. During his college years, Kirk combined his love of romantic literature, the humanist ideals of Babbitt and More, and the stoic wisdom of his grandfather into what would be recognized by 1953 as modern conservatism. While earning an M.A. in history at Duke in 1940 and 1941, Kirk also discovered an intense love of Edmund Burke, whom he’d encountered through Babbitt and More in college but only indirectly. It was while writing his M.A. thesis on the rabid Southern republican John Randolph of Roanoke that Kirk first felt the influence of the greatest of the 18th-century Anglo-Irish statesmen. Though many scholars—from Daniel Boorstin to Leo Strauss to Peter Stanlis—were also re-discovering Burke (along with Alexis de Tocqueville) in the 1940s, it was Kirk’s 1953 work, The Conservative Mind, that would once again make Burke a household name in America and, to a lesser extent, in Great Britain.
By the time America entered World War II, a very young Kirk—rather enthusiastically Nockian and anarchistic—already despised Franklin Roosevelt for his mistreatment of ethnic and religious minorities at home and abroad and his militarization of the American economy. As much as Kirk hated Hitler, he did not see FDR as a viable alternative. Succumbing to the draft in the late summer of 1942, Russell Amos Kirk, B.A., M.A., endured in the military the only way he knew how: by spending all of his free time reading. Before shipping off to training at Camp Custer in Michigan (he would spend much of the war as a company clerk in the desert wastes of Utah), Kirk purchased every work of Plato and the Stoics that he could find. From his childhood to his death, he kept a copy of Aurelius’s Meditations close to him. As in the rest of his life, it would serve as his greatest comfort during the war. As he wrote in a personal letter, “everything in Christianity is Stoic”:
Really, the highest compliment I can pay to the Greeks is that they could understand and admire the Stoics and admit their own inferiority. Were the Stoics to ask the moderns the rhetorical questions they asked the Greeks, the moderns also would accept the questions as rhetorical—but would answer them in exactly the opposite manner.
In imitation of Aurelius, his own war diaries attempted to describe the world around him through the lens of the Logos adopted by the Greeks and Romans, the eternal order of the universe. “‘Nothing is good but virtue’—Zeno,” Kirk scrawled across the cover of his first diary.
That same Stoicism, however, also made Kirk profoundly aware of the majesty of nature, even in the desert wastes of the Great Basin. Though he had always been an excellent writer, something fundamentally changed in his view of the world in September of 1942. Kirk, though only 24 when he wrote these words, is worth quoting at length:
This is written in the dead of night (and why shouldn’t it be the dead of night? All else is dead here, and has been ever since the beginning of time). …I handle special orders, travel orders, daily bulletins, and the like—a great many stencils to type—and am a star contributor to the Sand Blast, our paper, a copy of which I’ll send you once we get the next issue out; I intend to do some brief literary criticism for it, once the post library opens. Officers are affable, hours required are briefer than those I had as a civilian, and the work is very light and sometimes infrequent. …I’ve grown to endure the country in true Stoic fashion, and take a certain pleasure in feeling that I’m a tough inhabitant of one of the most blasted spots on the continent. There’s enough leisure here, and that’s a lot; the winters are said to be dreadful, but I have found fears exceed realities here, as everywhere. Already we have very cold mornings and evenings, and as I write a great sand-laden wind very chilly, is howling around the shacks of Dugway. Coming here tends to make me lean toward the Stoic belief in a special providence—or, perhaps, more toward the belief of Schopenhauer that we are punished for our sins, in proportion to our sins, here on earth; for I’d been talking of Stoicism for two or three months before I burst into Dugway and there never was a better and sterner test of a philosophy, within my little realm of personal experience—to be hurled from the pleasures of the mind and the flesh, prosperity and friends and ease, to so utterly desolate a plain, closed in by mountains like a yard within a spiked fence, with everywhere the suggestion of death and futility and eternal emptiness. But, others, without any philosophy, live well enough here; and, as Marcus Aurelius observes, if some who think the pleasures of the world good still do not fear death, why should we?
Though disgusted by the ill treatment of Japanese Americans and the dropping of the atomic bombs, Kirk remained loyal to, if not uncritical of, the United States. The Army finally released Kirk in 1946, and after two years of teaching Western civilization at Michigan State, he accepted a position in the doctorate program at the University of St. Andrews in Scotland in 1948. It was after a messy breakup with a girlfriend in the fall of that year that a dispirited Kirk devoted himself to “an invigoration of conservative principles.” The five years of research and writing following that breakup would become The Conservative Mind, which was published in the spring of 1953.
Amidst today’s whirligig of populist conservatism, crass conservatism, and consumerist conservatism, we conservatives and libertarians have almost completely forgotten our roots. Those roots can be found in Kirk’s thought, an eccentric but effective and potent mixture of Stoicism, Burkeanism, anarchism, romanticism, and humanism. It is also important—critically so—to remember that Kirk’s vision of conservatism was never primarily a political one. Politics should play a role in the lives of Americans, but a role limited to its own sphere that stays out of rival areas of life. Family, business, education, and religion should each remain sovereign, devoid of politics and politicization. Kirk wanted a conservatism of imagination, of liberal education, and of human dignity. Vitally, he wanted a conservatism that regarded all persons—regardless of their accidents of birth—as individual manifestations of the eternal and universal Logos.
A hundred years after the birth of Russell Amos Kirk, those are ideas well worth remembering.
Bradley J. Birzer is the scholar-in-residence at TAC. He holds the Russell Amos Kirk chair in history at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
As I read about the political insanity this weekend and the ridiculous blame game for the looming government shutdown—will it be remembered as Trump’s fault or as Schumer’s fault?—I can’t help but think about what no one is talking about: how to solve our $21 trillion national debt. This number breaks down to a little over $170,000 per U.S. taxpayer.
It’s infuriating that the politicos attempt to distract us from this real issue (and, more often than not, succeed). There’s an Orwellian element to all of this, whether intentional or not. The most important issue is so overwhelming in what it demands of our faculties that we can scarcely bring ourselves to face it: Washington, D.C., and our federal government are, at this point, simply insolvent. Whether the cause is mainly social spending or military spending, we’re insolvent. At some point, everyone will see the federal government for what it is, and, at that point, the collapse will be not just swift but horrific. Yet there seems to be no reform coming. At least no serious reform.
Even the most pro-interventionist of the American founders, Alexander Hamilton, could never have imagined or desired the kind of federal government we have now. When he wrote of “energy” in government, he meant it as a means of restraint. To give “energy” to government meant, at least to Hamilton, giving the federal government the means to execute the powers expected of it by its Constitution. Rather brilliantly, he argued that a government charged with a duty but not empowered by the specific rules of that government to accomplish its duty would merely make up its own rules, thus taking government away from restraint and toward leviathan. Though many libertarians think of Hamilton as the touchstone for all future expansive government, they’re wrong. Even Alexander Hamilton desired ways to limit the expansion of government, and whether he wanted a strong executive or not, he envisioned a small, commercial republic as the proper outcome of the American revolution.
Over the previous three pieces in this series, “The Origins of the Rise of the Modern Nation State,” I’ve focused almost exclusively on the classical understanding of government. There is, I must confess, a method to my madness. One need only look at the actual classical words and symbols used by the founders to see how immensely indebted they were to the ancients. The U.S. Senate, for example, is modeled on the Maryland Senate, which is modeled on the Roman Senate. “Senate” comes from the Latin for “old wise men.” If only!
Or, even more blatantly, look at our Capitol building. While we might expect our founders to have designed it as something grand and spectacular, such as the Hanging Gardens, the Taj Mahal, or even the English Parliament, they chose an architectural style from the height of the Roman Republic. Which, of course, is also why a Washington with thousands of armed guards, black SUVs, road blocks, and rooftop surface-to-air missiles looks so ominous. Nothing is worse when regarding the symbols of authority than the militarization of republican architecture. The fasces of Congress quickly look like the fasces of Mussolini. Even if we don’t recognize it immediately, something in us reminds us of how readily Rome succumbed to the temptations of power as we drive around the D.C. of 2018.
The hold of the classical world on the founding mind, however, is much deeper than architecture or names. To enter college in one of the nine schools available in the American colonies in, say, 1750, one had to prove fluency in Greek and Latin. Forrest McDonald, the grand historian of the period, and his wife, Ellen, explained:
Just to enter college during the eighteenth century—which students normally did at the age of fourteen or fifteen—it was necessary, among other things, to be able to read and translate from the original Latin into English (I quote from the requirements at King’s College—now Columbia—which were typical) “the first three of Tully’s Select Orations and the first three books of Virgil’s Aeneid: and to translate the first ten chapters of the Gospel of John from Greek into Latin, as well as to be ‘expert in arithmetic’ and to have a ‘blameless moral character.’”
To be prepared for a college education, pupils began studying Greek and Latin around the age of six or seven. Indeed, one thing we in the world of schooling for democratic citizenship often forget is that all education in the 18th century was classical education (even the term “classical education” would be redundant to the 18th-century mind). One was supposed to learn reading, writing, and arithmetic at home. Schools taught only Greek, Latin, and classical literature. Even farm children, with only a year or two of schooling in their lives, spent their school days drilling Greek and Latin.
The truly enterprising student would also study Italian, if for no other reason than to read Dante in the original.
This is a world 300 years and 1 million miles apart from ours. It is no wonder, though, that George Washington (one of the few founders not liberally educated, interestingly enough) chose the mythic Republican Cincinnatus and the Republican rebel Cato the Younger as his exemplars or that the founders as a whole wanted a republic. This understanding of the classical world pervaded all of America, even the America that had not received much classical education, if any. Names such as George (from the Greek for “farmer”), Narcissa, and Romulus were not uncommon proper names. Towns and counties took the names Homer, Athens, Remus, etc. Though not every American had read Virgil’s Aeneid, every American knew something about Aeneas, Troy, and Dido. Tellingly, the McDonalds reminded us, when American officers and French officers spoke on the field of battle during the Revolutionary War, they spoke in Latin, the only common language they shared. The index to the Federalist Papers quickly reveals as much, with 56 references to the classical and medieval world of the West and no references to John Locke.
Among the Romans, the American founders most appreciated and idealized the stoic Cato the Elder, the martyr Cicero, the poet Virgil, the historian Livy, and the theorist Tacitus. While the founders knew and studied the Greeks, it was the Roman Republicans that inspired them and the Roman imperials that terrified them.
“The Revolutionary leaders were men of substance—propertied, educated. They read. And what they read made it easier for them to become rebels because they did not see rebels when they looked in the mirror,” historian Trevor Colbourn has written. “They saw transplanted Englishmen with the rights of expatriated men. They were determined to fight for inherited historic rights and liberties.”
When writing the Declaration of Independence, Thomas Jefferson explained that he drew on ancient sources:
This was the object of the Declaration of Independence. Not to find out new principles, or new arguments, never before thought of, not merely to say things which had never been said before; but to place before mankind the common sense of the subject, in terms so plain and firm as to command their assent, and to justify ourselves in the independent stand we are compelled to take. Neither aiming at originality of principle or sentiment, nor yet copied from any particular and previous writing, it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion. All its authority rests then on the harmonizing sentiments of the day, whether expressed in conversation, in letters, printed essays, or in elementary books of public right, as Aristotle, Cicero, Locke, Sidney, etc.
John Adams, the first American to argue for independence (as early as 1765), said much the same as Jefferson in 1774:
These are what are called revolution principles. They are the principles of Aristotle and Plato, of Livy and Cicero, of Sidney, Harrington, and Locke; the principles of nature and eternal reason.
Unlike the French or Russian revolutionaries, attempting to create, in the words of Shakespeare, a “brave new world,” the American patriots turned the world right-side up. They desired a republic rooted in right reason, first principles, and the Natural Law. God had written the republican principles of the American Revolution into nature herself. “We do not by declarations change the nature of things, or create new truths, but we give existence, or at least establish in the minds of the people truths and principles which they might never have thought of, or soon forgot. If a nation means its systems, religious or political, shall have duration, it ought to recognize the leading principles of them in the front page of every family book,” a leading Anti-Federalist wrote in the aftermath of the war for Independence.
For this reason, the modern American conservative has a duty to know not just the origins of the American republic but also its roots in the Roman republic. After all, if we’re not conserving these things, what is it worth to be a conservative?
When the founders of the United States created her, they wanted a republic, not an empire; a government, not a state; and a commonwealth, not a democracy.
Bradley J. Birzer is the president of the American Ideas Institute, which publishes TAC. He holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
In recent years, probably no matter has split nationalist and populist conservatives from libertarian and anti-statist conservatives more than that of immigration. Yet, very few conservatives are actually taking the time to debate or discuss this issue, so fundamental to understanding the very essence of who we are as an American people. Too many suppositions and assumptions have taken on the air of truth, and, if for no other reason, the topic itself demands good discussion and vigorous debate. In particular, the modern American conservative should praise Gerald Russello and The University Bookman for their ongoing symposium dealing with the whole swirling mess. We need much more of this. It’s too important to leave to emotion or passion alone.
As Christians around the world celebrated the arrival of the Three Kings—the Magi of the Orient—on Epiphany, the president of the United States called for $33 billion to shore up America’s borders, including $18 billion for the wall.
Would the Magi have been admitted in 2018? “Excuse me, Balthasar, but I need to see that your papers are in order. Oh, I’m sorry, but your gift of myrrh exceeds our 3.2 ounces of liquid allowed.”
Perhaps President Trump simply chose his timing poorly, but it would be impossible for the Christian to miss the irony.
As a professor of the western canon, the Great Ideas of the West, and the western tradition, I find it nearly impossible to claim that there is a long tradition of excluding those who “aren’t us.” Even the most cursory examination of the issue reveals that the best of western thinkers have considered political borders a form of selfish insanity and a violation of the dignity of the human person. Throughout much of the western tradition, the free movement of peoples has been seen not only as a natural right but also as a sacred one.
In the gloriously pagan Odyssey, Odysseus survives, again and again, because the highest commandment of Zeus is to welcome the stranger and protect him with all that one has. To this day, one finds remnants of this tradition throughout the Mediterranean as the stranger is greeted with olive oil, bread, and, depending on the predominant religion of the region, wine. As staple crops of the ancient world, these signified not just acceptance but actual joy at the arrival of the stranger. The god of the hearth stood as patron of the sojourner.
The Athenians, during the tumultuous fifth century before Christ, prided themselves on allowing into their communities not just the stranger but even their very enemies. After all, what did the Athenians have to hide? Why not expose the ignorant to truth? Let the oppressed see how a free people live.
During the vast, long expanse of the Middle Ages, the Germanic peoples not only thought of themselves as residents of their own little piece of Middle-earth (Midgard), but they also thought of themselves as citizens of what King Alfred the Great labeled Christendom, the Christiana res publica, as well as believing themselves sojourners en route to the City of God. What Christian could allow—in good conscience—accidents of birth such as gender or skin tone in this Vale of Tears to trump the possibility of eternal salvation in the next life? Neither Greek nor Jew, neither male nor female. . . .
Nothing in Christendom better represented the ideals of the free movement of peoples than did the Great Charter of 1215, forced upon King John at Runnymede. Though points 1 and 63 of the Magna Carta demanded freedom of the Church from political interference, points 41 and 42 reveal how fundamental the movement of peoples is to the sanctity of the common law.
- All merchants shall have safe and secure exit from England, and entry to England, with the right to tarry there and to move about as well by land as by water, for buying and selling by the ancient and right customs, quit from all evil tolls, except (in time of war) such merchants as are of the land at war with us. And if such are found in our land at the beginning of the war, they shall be detained, without injury to their bodies or goods, until information be received by us, or by our chief justiciar, how the merchants of our land found in the land at war with us are treated; and if our men are safe there, the others shall be safe in our land.
- It shall be lawful in future for anyone (excepting always those imprisoned or outlawed in accordance with the law of the kingdom, and natives of any country at war with us, and merchants, who shall be treated as if above provided) to leave our kingdom and to return, safe and secure by land and water, except for a short period in time of war, on grounds of public policy, reserving always the allegiance due to us.
If we accept the Magna Carta as one of the most important documents in the history of western civilization, we Americans cannot afford to ignore it, its intent, or its specifics. Common law demanded that a people—and the person—move freely, border or not. Even in time of war, the enemy must be treated with dignity.
Equally important, can we Americans afford to ignore that the pagans, such as Odysseus, as well as the Christians, such as King Alfred, stood alike for the free movement of peoples and the welcoming of the stranger? To this day, the Roman Catholic Church, following the Hebraic Decalogue, teaches: “The more prosperous nations are obliged, to the extent they are able, to welcome the foreigner in search of the security and the means of livelihood which he cannot find in his country of origin. Public authorities should see to it that the natural right is respected that places a guest under the protection of those who receive him.” To be sure, the immigrant must fulfill his or her duty as a citizen as well.
As an American conservative, I am not suggesting that we should surrender our own free will to the dictates of the past or even to any one religion, but I do think we would be foolish beyond measure to ignore the advice of our ancestors. And, for what it’s worth, the best of our ancestors believed in the free movement of peoples.
When it comes to the specifically American tradition of immigration and the free movements of peoples, the issue becomes more complicated.
Imagine for a moment that the great waves of immigration never came to America. In the colonial period, among those who freely chose to cross the Atlantic, you would have to dismiss the Anglicans to Virginia, the Puritans to New England, the Quakers to Pennsylvania, and the Scotch-Irish. Of the unfree peoples, you would have to take out all of those of African origin. In the 1840s, remove the Germans, the Scandinavians, and the Irish. In the 1880s through the 1910s, remove all Greeks, Poles, Jews, Italians. . . .
Yes, the native American Indian population would be justly celebrating, but, overall and from any relatively objective view, there would be no America.
Between 1801 and 1924—with the critical exception of the Chinese and the Japanese—no peoples were barred from entry into the United States. Congress forbade further Chinese immigration in 1882, and a gentlemen’s agreement ended Japanese immigration in 1907. Otherwise, until 1921 and 1924, any person of any continent, of any religion, of either gender, of any skin color, or any other accident of birth could enter the United States and take up residency the very day of arrival. Only those with known criminal records or those suffering from tuberculosis were turned away.
Unless you are a full-blooded American Indian (less than one percent of the present United States population), you, American reader, would not be here without some ancestor having immigrated—freely or by force—to the United States. And possibly from what one might crassly dismiss as a “sh-hole country.”
Thus, our ancestors not only expressed their support for the free movement of peoples in their writings and laws, but, when push came to shove, they also voted with their feet.
Since the tragedies of September 11, 2001, we Americans have surrendered not just our liberties but our very souls to the false notion and false comfort of governmentally provided security. Tellingly, we have even closed off what was once the freest and longest border in the history of the world, our border with our extremely kind and polite neighbor to the north, Canada.
Again, I am not suggesting we must be slaves to the past, nor am I suggesting that we should dismiss the legitimate security concerns of a sovereign people. But, as an American people, we came into being because of the free movement of peoples. We rebelled against the designs of the 18th-century British, and we mocked the 19th-century Europeans and their passports and border guards.
Now, we seem to have become them.
If we continue to build walls around our country, really, then, just who are we? Only in the last generation or so have so many American conservatives become convinced of the necessity of the vast array of restrictions on those who wish to become a part of the United States. Perhaps they are right, but, regardless, there is much to discuss.
Bradley J. Birzer is the president of the American Ideas Institute, which publishes TAC. He holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
“I see a beautiful city and a brilliant people rising from this abyss. I see the lives for which I lay down my life, peaceful, useful, prosperous and happy. I see that I hold a sanctuary in their hearts, and in the hearts of their descendants, generations hence. It is a far, far better thing that I do, than I have ever done; it is a far, far better rest that I go to than I have ever known.”
—Obituary for Bruce Wayne, taken from Dickens’s A Tale of Two Cities
In 2005, Time Warner released Batman Begins, the first high-budget film by Anglo-American filmmaker Christopher Nolan (who later did Dunkirk, Inception, Interstellar), known at the time only to a few cinema nuts for his low-budget but intensely artful and intellectual films (Memento). The Batman franchise—from novels to comic books to movies to toys—had been a hugely profitable property for Time Warner for years. Still, most Americans viewed Batman as a really neat comic book figure. “Be yourself. Unless you can be Batman. Then, be Batman.” When the character had appeared on screens, it was as a countercultural buffoon on television in the 1960s, then two decades later as a big-screen gothic and carnival-esque weirdo in the hands of Tim Burton and his followers. Only Bruce Timm’s excellent animated Batman, which aired afternoons during the early 1990s, did the character justice, but this version, given the medium, reached only a handful of diehard Batman fans.
And so with Nolan the question emerged: could this newcomer to big film projects transfer his cinematic intensity and intellectualism to Batman, thus transforming him from pop sensation to a cultural mainstay, giving the property gravitas and the studio profit?
The answer, it turns out, was yes. Two central elements of Nolan’s filmmaking characterized his particular Batman genre. First, he brought the characters into the realm of realism. They reside in the actual world, not a fantasy world, and events and developments can all be explained rationally. Second, Nolan fashioned his central character not from the pastel pages of a comic book but rather from America’s western legend, the frontier mythos that captured the national consciousness so powerfully in multiple movies of the 1940s and TV shows of the 1950s. This western legend, or myth, was larger than any single person, event, or even culture. And there were no antiheroes in that cultural fare, focused on the daunting challenge of extending the essence of Western civilization to those forbidding and often dangerous lands of the Rocky Mountains and beyond. It took real heroes to do that.
Thus does Nolan’s Batman trilogy stand today as a remarkable cultural achievement. Indeed, his third film in the trilogy, The Dark Knight Rises, is not only the best of the three but arguably one of the finest movies ever made, a true achievement of the cinematic arts, certainly worthy of an Alfred Hitchcock or a John Ford. It also may be the single most important defense of Western civilization ever to reach a Hollywood screen. That the severe cultural liberals of the West Coast didn’t rip it to shreds indicates they probably didn’t watch it—or perhaps didn’t understand it.
In crafting his Batman movies, Nolan pulled together his longtime core development team—his wife, Emma, and his brother, Jonathan—and he assembled a troupe of actors, including Christian Bale and Cillian Murphy, who would return in his later projects. Nolan, though a longtime Batman fan, had never been a collector or reader of comic books, and he concluded that he needed an expert in the original comic book Batman. He wisely turned to David Goyer, an Ann Arbor native and lifelong comic book fan and writer. Goyer not only had written for DC and Marvel (the two main comic book companies and friendly rivals) but also had written some extraordinary film scripts, such as the Gothic noir dystopia Dark City (1998), arguably one of the most imaginative science-fiction films ever made.
Partly because he was untried in this kind of filmmaking and partly because of his own artistic sensibilities, Nolan developed no initial plan for any sequels. He wanted every member of his team to see this film as a one-time opportunity, holding nothing back in its making. As he put it:
People ask if we’d always planned a trilogy. This is like being asked whether you had planned on growing up, getting married, having kids. The answer is complicated. When David and I first started cracking open Bruce’s [Bruce Wayne’s] story, we flirted with what might come after, then backed away, not wanting to look too deep into the future….I told David and Jonah to put everything they knew into each film as we made it. The entire cast and crew put all they had into the first film. Nothing held back. Nothing saved for next time.
As Nolan approached the story, he decided two critical things. The first was his insistence on realism. If something happened that could not be explained rationally, he excised even the idea of it. Everything from the Batmobile to the reaction of the police had to be utterly realistic. If the Batman headgear needed two ears, there needed to be an explanation for those two ears. If Batman jumped from building to building, there needed to be a reason why and an explanation as to how. The second was his embrace of the myths of the American West. Here Nolan was tapping into something already in the Batman mythos but not explicitly understood by the larger public. Like Natty Bumppo of James Fenimore Cooper’s “Leatherstocking Tales” and Mark Twain’s Huck Finn, Bruce Wayne/Batman stands as a crucial American symbol. If Bumppo and Finn personify the American frontier of the 19th century, so Wayne/Batman is the great mythological figure of 20th- and 21st-century urban America. Named after the Revolutionary War general Mad Anthony Wayne, and coming from one of the wealthiest of American families (builders and defenders of Gotham City, a Platonic shadow of New York, populated by 30 million people), Bruce Wayne considers it his aristocratic duty to protect the poor and oppressed from the wealthy and corrupt. He is an Arthurian but also deeply American figure.
In the Batman stories as developed in the comics over the last three decades, it has come to light that the Waynes have been keepers of the Holy Grail in the present day, descendants of Arthur from 2000 years ago. As with Arthur, Wayne surrounds his Batman persona with a number of knights (the Gotham Knights) who serve under such codenames as Nightwing, Robin, Oracle, and others. While, to our modern eyes, they seem much like a Marine platoon, they more properly resemble the Catholic military orders of the High Middle Ages. As with Arthur, Wayne must enter the Chapel Perilous, time and again, to keep the darkness of the Waste Land at bay.
Brilliantly, Nolan wrapped the eventual Dark Knight Trilogy into the significance of myth, and the significance of myth into the story. When attempting to explain to Alfred, his father figure, butler, and accomplice, what he hoped to do when returning to Gotham City, Wayne says: “People need dramatic examples to shake them out of apathy and I can’t do that as Bruce Wayne. As a man I’m flesh and blood. I can be ignored. I can be destroyed. But as a symbol, as a symbol I can be incorruptible, I can be everlasting.”
Several themes inform each movie. The first movie deals with justice and fear; the second with free will and anarchy; the third with hope and reformation.
In Nolan’s typical but eccentric way, the first movie jumps repeatedly in time, creating a whole out of non-linear storytelling. The essential tale is familiar, at least to Americans born after 1939, but Nolan adds his own tastes and vision.
The son of the wealthiest couple in the greatest city of the Western world, Gotham, Bruce Wayne, as a young boy, stands with his parents in “Crime Alley,” having left the opera. A killer shoots both parents and takes their money and jewels. Left an orphan, Bruce is raised by the family butler, Alfred Pennyworth. After dropping out of Princeton and ineffectively confronting the man he believes ordered the hit on his parents, Wayne departs Gotham City, traveling throughout the world for years, learning what it means to fight, to suffer, and to survive. The purpose, as he sees it, is to hone all his skills—physical as well as intellectual—and return to Gotham to protect the innocent.
In Batman Begins, Wayne finds himself in a high Tibetan temple belonging to an evil and inverted type of Knights Templar, the “League of Shadows,” an organization that demands an end to the corruption of public officials. It calls cities to account, destroying any that has become unrepentantly corrupt. They claim responsibility for having destroyed Rome, Constantinople, and London across the centuries. Now they say they will take out Gotham. The leader is a man named Ra’s al Ghul, an Arabic title meaning “Head of the Demon.” The League portrays itself as superior to all political organizations in promoting what it sees as justice, a harmony derived from a very Nietzschean desire for the “will to act.” The League of Shadows, Ra’s al Ghul explains, has been a check against human corruption for thousands of years. “We sacked Rome. Loaded trade ships with plague rats. Burned London to the ground. Every time a civilization reaches the pinnacle of its decadence, we return to restore the balance.”
Though trained by the League of Shadows to be the heir to Ra’s al Ghul, Wayne rejects its brutal philosophy, destroys its temple, and returns to Gotham, presuming incorrectly that he has destroyed the League.
Back in Gotham, he assumes the primal symbol of his own fears, a Bat, hoping to employ terror against evil. As Batman, he stands as a living gargoyle, adorning the cathedral of Western civilization while driving away rival evils. In his fight, he relies on four persons to sustain him: Alfred, to serve as Watson to his Holmes; Police Lieutenant James Gordon, the only honest cop in Gotham; Lucius Fox, a master engineer and entrepreneur; and Rachel Dawes, the one true love of his life, now an assistant district attorney. Rachel particularly complicates Wayne’s life, as she is unsure of his sanity and his intentions, especially in his assumption of the Bat persona.
The movie—operatic in a Wagnerian way from opening to final scene—concludes with Wayne barely defeating the revived Ra’s al Ghul and his League of Shadows. In the war against the League, Wayne Manor is destroyed, Rachel reveals that she cannot love a man who fights crime as a Bat, and a poison is loosed upon an area of Gotham known as “The Narrows,” a decayed part of the city that houses the poor and the insane. The consequences of this poison remain unknown as the movie ends, but Lieutenant Gordon, now fully in alliance with Batman, shows him the “calling card” of a new masked criminal, a grimy playing card of a joker.
The second movie, The Dark Knight (2008), begins with the Joker and his henchmen stealing from a mob bank. The heist, filmed as a tribute to such crime neo-noir classics as The French Connection (1971) and Heat (1995), goes off as planned, introducing the audience to the face of diabolic anarchy and insanity, the stunning Joker (played by Heath Ledger, who died of a drug overdose after filming).
Unlike the first movie, filmed almost entirely in shadow, with vertical lines and a Gothic noir aesthetic, The Dark Knight presents a much shinier and sunnier Gotham, its architectural lines straight, sleek, clean, and horizontal. This story, atypically for Nolan, is linear, driving relentlessly from the opening heist to the final tragic moments.
The story centers on the Joker’s attempt to destroy Gotham from within, through anarchy. As he puts it:
I’m a dog chasing cars … I wouldn’t know what to do with one if I caught it. I just do things. I’m just the wrench in the gears. I hate plans. Yours, theirs, everyone’s. Maroni has plans. Gordon has plans. Schemers trying to control their little worlds. I’m not a schemer, I show the schemers how pathetic their attempts to control things really are. So when I say that you and your girlfriend was nothing personal, you know I’m telling the truth.
He may denigrate plans, but the Joker is a master chessman, planning and scheming, always three or four moves ahead of his opponents. According to all law enforcement databases, the Joker should not exist—no fingerprints on record, clearly trained in some form of special ops, outfitted entirely in custom clothes.
Though The Dark Knight deals with anarchy and plans, the movie also probes the notions of free will and duality. If we choose A, are we doomed to follow B? If we follow B, have we destroyed all future options? The question manifests itself most particularly in the personal story of Harvey Dent, a young and courageous district attorney, ready to become the face of decency in Gotham, a White Knight, replacing Batman’s Dark Knight.
To prove that no real goodness resides in the world, the Joker plays upon Dent’s weaknesses, killing his girlfriend (Rachel Dawes, also Wayne’s one love) and driving him to madness and evil deeds. In the final scene, with Gotham not knowing that Dent had succumbed to the Joker’s dark spirit, Dent takes Commissioner Gordon’s family hostage. In the fight to protect Gordon’s children, Batman and Dent plunge over a building ledge. Dent is killed. Batman tells Gordon that he will take the blame, though he has done nothing wrong. The movie ends with Batman having saved hundreds of lives, defeating the Joker and Dent, but now becoming a hunted man, vilified as a murderer. “You’ll hunt me. You’ll condemn me. You’ll set the dogs on me.” But Gordon explains to his son that Batman is “a silent guardian, a watchful protector, a dark knight.”
Wayne and Gordon had hoped to shed the vigilante mantle of the Batman by placing all of their hopes on District Attorney Harvey Dent, the “White Knight” as opposed to Wayne’s “Dark Knight.” Harvey, though, hides an unrestrained abusive side, one that revels in torture. Knowing this, the Joker manipulates events to force Dent to reveal this horrific side. Rather than allow the symbol to die, Wayne and Gordon decide to hide the truth, allowing Dent to die a martyr as the White Knight and placing the blame for Dent’s abuses and killings on the Batman.
After The Dark Knight, Nolan insisted he had no intention of making a third movie. The death of his friend Heath Ledger rattled him, inhibiting any return to the world of Batman. But Batman wouldn’t leave him alone, he later explained, and he had to produce the third movie to find out how the story unfolds.
Nolan’s third Batman film was inspired by Charles Dickens’s A Tale of Two Cities, though much of the dialogue might have been written by the Anglo-Irish statesman Edmund Burke, considered by many the father of modern conservatism. The movie is in essence a retelling of the events of the French Revolution in Paris. In place of Robespierre is a mercenary, chemically-enhanced villain from either Eastern Europe or the Mideast, known only as Bane. This character is the creation of noted writer Chuck Dixon, a man both admired and reviled in the comic book world for his conservatism. Bane, working with several of Wayne’s competitors in business, has spent six months rebuilding the infrastructure of Gotham City, secretly lacing all of the concrete in streets, bridges, tunnels, and sewers with explosives. He considers himself the fulfillment of the infamous League of Shadows.
Coming out of retirement at 43, Batman investigates. But, when he encounters Bane in the sewers, the evildoer breaks his back. Bane takes the injured Wayne to a prison somewhere in the Middle East (filmed in an ancient city on the Pakistan-Indian border) and leaves him there to die. Languishing in this hell hole with his broken back, Wayne is reduced to watching TV, specifically, a single Gotham City channel. Bane wants Wayne to see the fall of Gotham as it implodes and collapses in on itself from the weight of its own corruption. In lines that could have been lifted straight out of Volume I of Alexander Solzhenitsyn’s The Gulag Archipelago, Bane explains to Wayne that while he is happy to have broken Wayne’s body, the prison is meant to destroy his soul.
Returning to Gotham, Bane detonates his explosives, destroying the city’s infrastructure as well as its access to and from the main island—the equivalent of Manhattan. In taking over the city, he warns the United States not to intervene or he will unleash a nuclear weapon with a six-mile blast radius, killing all on the island. By separating Gotham from the United States, Bane has created his own city-state. In the new, conquered city of Gotham, Bane frees all prisoners of Blackgate Prison (the Bastille) and declares the city to be under control of “the people.” The people, Bane says truthfully, have been deceived by the leadership of Gotham. Harvey Dent was not a White Knight but an insane, murderous criminal. Thus all of Gotham’s successes over the previous eight years, since Dent’s death, have been lies. Bane declares:
We take Gotham from the corrupt. The rich. The oppressors of generations who’ve kept you down with the myth of opportunity. And, we give it to you, the people. Gotham is yours. None shall interfere. Do as you please. . . . For an army will be raised. The powerful will be ripped from their decadent nests, and cast into the cold world the rest of us have known and endured. Courts will be convened. The spoils will be enjoyed. Blood will be shed.
In Soviet style, the criminal, the insane, and the poor ravage the homes, property, and persons of the wealthy, inverting the entire socio-economic structure of Gotham. The people—under the judgeship of Dr. Jonathan Crane, the “Scarecrow” and creator of the poisons in the first film—establish courts to sentence the wealthy for having preyed upon the poor. All such trials end in the execution of the guilty. This level of anti-communist passion has not been seen from Hollywood since Roland Joffé’s revealing if horrifying 1984 look into Cambodia under the Khmer Rouge, The Killing Fields.
Meanwhile, in his Middle Eastern prison, Bruce Wayne heals and regains his strength, spiritually as well as physically, climbing out of the pit (Plato’s Cave), liberating himself and his fellow prisoners. The fact that only one person had ever escaped from this prison heartens Wayne, for he calculates that if one person had escaped he could too.
Returning to Gotham City, Wayne as Batman takes control of the remaining police under Gordon’s command, raising a counter-revolutionary army. Leading hundreds of police into battle, he and his greatest ally, a somewhat reformed jewel thief named Selina Kyle, battle Bane and his revolutionaries. In hand-to-hand combat outside the Gotham City stock exchange, Kyle and Batman barely defeat Bane. Still, there remains the nuclear bomb. Taking his Bat—a hover aircraft based on the Harrier and the helicopter—Batman flies the bomb out of the city, over the Atlantic, and lets it detonate safely. Everyone assumes, however, that Batman sacrificed himself in saving Gotham.
At the funeral—attended only by four loved ones—a shell-shocked Gordon, who has only now come to realize the true identity of Batman, reads from A Tale of Two Cities. Looking at the grave of Wayne, next to those of Wayne’s mother and father, Alfred breaks down, believing that his entire life has been a failure. He had wanted to serve the Wayne family but had overseen its death.
The movie ends with Wayne Manor becoming a home for orphaned boys, St. Swithin’s, led by a Catholic priest. Lucius Fox begins to suspect that Wayne might have survived the flight over the Atlantic, Gordon refurbishes the long-disused Bat signal, and Alfred, sitting at an outdoor café in Florence, sees Bruce and Selina together, in love, at a neighboring table.
In Nolan’s expert hands, Batman becomes what he was always meant to be: an American Odysseus, an American Aeneas, an American Arthur, an American Beowulf, and an American Thomas More. Indeed, it would be hard to find another figure in popular and literary culture who embodies the traditional heroism of the West more fully than Bruce Wayne. He most closely resembles Aeneas, carrying on the culture of charity and sacrifice into the darkest and most savage parts of his world. Like St. Michael, he guards the weak, the poor, and the innocent. Like Socrates, he will die for Athens (Gotham) as it should be rather than as it is. Like Beowulf, he asks nothing for himself, merely the opportunity to wage the never-ending war against evil.
And in the third film, Western civilization survives, but only barely and only with incredible sacrifice at every level. “I see that I hold a sanctuary in their hearts, and in the hearts of their descendants, generations hence,” Dickens had written.
While some might still see in the Dark Knight Trilogy merely a children’s comic book superhero made glittery with a Hollywood budget, it would be impossible not to recognize Nolan’s genius in these films. Unlike, say, Peter Jackson, who dumbed down The Lord of the Rings, Christopher Nolan leavened Batman. Jackson diminished Tolkien, while Nolan enlarged Batman.
Since his creation in 1939 by two young Jewish artists in New York, Batman has served as a critical cultural marker for American and Western civilization. If we treat him like a clown, as did the 1960s TV series, we do not know who he is—or who we are. If we treat him like a Gothic carnival freak, as did Tim Burton, same thing. If we treat him as the great American hero and symbol of an urban age, as did Nolan, we have a chance at survival.
Bradley J. Birzer is the president of the American Ideas Institute, which publishes TAC. He holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
One of the best—but, sadly, least known—political scientists of the past century, Don Lutz, recognized exactly how important symbols can be to a free and ordered people. Communities across time share “symbols and myths that provide meaning in their existence as a people and link them to some transcendent order,” Lutz argued in the preface to a Liberty Fund collection of American colonial documents. In his argumentation, Lutz followed a number of critical thinkers, ranging from Eric Voegelin to Russell Kirk to Robert Nisbet. Unfortunately, a people, a person, a government, a bureaucracy, or a corporation can readily pervert such symbols, stripping them of their original meaning while allowing them to raise the consciousness of a society in ways directly contrary to what the symbols originally meant. Such is the power of symbols.
One of the most fascinating symbols of a republic in the western tradition, from the Romans through the Germanic Barbarians to the American founders to the founders of the Republican Party, is the mighty oak. As noted in the previous essay on the history of the rise of the modern nation state, all republics must exist—by their very nature—as reflections of nature herself. They are, at essence, organic, necessarily experiencing birth, middle age, and death. How easily one might transfer this to the oak, thinking of its own stages, from acorn to prevailing giant to corrupted and hollowed-out shell. Once a thing of nearly infinite possibilities, but, ultimately, food for termites.
Yet, as a symbol, the oak itself has remained alive and well for a free and ordered people not just over generations, but over millennia. How much healthier for those of us who crave ordered liberty to see our representation in a majestic thing of nature rather than in a person, too often transformed into a god or demigod in our fallen humanity.
To see the importance of the oak, we must turn back to the Romans at the end of the Republic, nostalgically clinging to and idealizing what was.
When her father unjustly declared neutrality in the matter of the Trojans, Venus intervened on behalf of her son, Aeneas, bestowing upon him divine weaponry.
But the goddess Venus,
lustrous among the cloudbanks, bearing her gifts,
approached and when she spotted her son alone,
off in a glade’s recess by the frigid stream,
she hailed him, suddenly there before him: “Look,
just forged to perfection by all my husband’s skill:
the gifts I promised! There’s no need now, my son,
to flinch from fighting swaggering Latin ranks
or challenging savage Turnus to a duel!”
With that, Venus reached to embrace her son
And set the brilliant armor down before him
under a nearby oak.
Aeneas takes delight in the goddess’ gifts and the honor of it all
as he runs his eyes across them piece by piece.
He cannot get enough of them, filled with wonder,
turning them over, now with his hands, now his arms,
the terrible crested helmet plumed and shooting fire,
the sword-blade honed to kill, the breastplate, solid bronze,
blood-red and immense, like a dark blue cloud enflamed
by the sun’s rays and gleaming through the heavens,
the burnished greaves of electrum, smelted gold,
the spear and the shield, the workmanship of the shield,
no words can tell its power . . .
There is the story of Italy,
Rome in all her triumphs. There the fire-god forged them,
well aware of the seers and schooled in the times to come.
When the greatest of Roman republicans, Marcus Tullius Cicero, offered the world the first treatise on the natural law, On the Laws, he began with the image of an oak, deeply rooted not just in the soil, but in the poetic imagination itself. “I recognize that grove and the oak tree of the people of Arpinum: I have read about them often in the Marius. If that oak tree survives, this is surely it; it’s certainly old enough,” Atticus begins. To which Quintus famously answers, “It survives, Atticus, and it will always survive: its roots are in the imagination. No farmer’s cultivation can preserve a tree as long as one sown in a poet’s verse.” Indeed, Quintus continues, this very oak might have been planted by the one god. Certainly, the name of the oak will remain, tied to the sacred spot, long after nature has ravaged it.
In his History of Early Rome, Livy informs us that a consecrated oak sheltered the praetorium, a seat of waiting and contemplation for foreign guests and ambassadors from the Senate. Likewise, Suetonius reminds us that Mars, especially, favored the oak as a tree symbolizing the divine authority.
The Mediterraneans, though, held no monopoly over a mythic understanding of the oak, as the Germanic tribes far to the north considered the tree the symbol of their god of justice, Thor. When the Anglo-Saxons and Scandinavians met to decide the fate of inherited and common law–which laws to pass on, which laws to end, and which laws to reform–they met as a Witan or AllThing under the oaks.
Christians, knowing the oak to be so utterly rooted in the pagan tradition, knew not whether to love or to hate the tree. According to St. Bede, when St. Augustine of Canterbury called a conference of church leaders in 603, he did so at an oak, knowing the Anglo-Saxon fondness for the tree. There, at what became known as Augustine’s oak or Augustine’s Ak, the evangelist called for unity in proclaiming the gospel. Two generations earlier, Bede records, St. Columba had done something similar, building a monastery among the Celts known as Dearmach, “Field of Oaks.” Even at the most famous of medieval monasteries, Lindisfarne, Finan built the church not out of traditional stone but, rather, according to the custom of the peoples of that region, “of hewn oak, thatched with reeds.”
When St. Boniface, a century later, encountered a group of Hessians still worshipping the oak of Thor, he—with nothing short of awesome bravado—attacked the tree with his axe. According to the hagiographic legends surrounding Boniface, the oak exploded into four parts moments before the blade touched its bark. So astounded were the pagans at his daring that St. Boniface seized the moment to begin proclaiming the gospel. Where the ruined oak stood, according to hagiographic myth, an evergreen grew in its place. As it was getting dark and Boniface continued to preach, his followers placed candles all around and upon the evergreen, thus creating the first Christmas tree.
St. Boniface, it turns out, tried this trick one too many times, the last in 754, when some Thor worshippers decided to stick with Thor, beheading the poor Catholic evangelist.
If Boniface undid the oak as a direct representation of a god, he could not undo its importance to justice, as it remained a symbol of the law and of a free people. When the grand Christian King Alfred the Great met with his men in the late 800s to judge the inheritance of the common laws of the Anglo-Saxon people, they, too, met under an oak. Critically, Alfred and his Witan judged the laws. They did not create them, believing such actions illegal. A ruling body can only judge what it has inherited, not create laws out of nothing. Such a power belongs to God alone, working through His people across time.
Perhaps, then, St. Boniface’s actions merely rendered under God what was God’s, and unto the community what was the community’s.
The symbol of the oak remained a powerful one in colonial America, especially as the various communities on the eastern seaboard continued their own observance of the traditional common laws, not least in their Declaration of Independence. Though the Liberty Poles and Liberty Trees of the 1760s through the 1780s were not exclusively oak, oaks made fine ones, and newly freed American communities regularly planted oaks to celebrate their independence from Britain. Pamphleteers, not surprisingly, used the symbol of the acorn and the oak as representative of America’s independence and hardihood.
When Congress rashly passed the democratic Kansas-Nebraska Act in 1854—a law that claimed that the enslavement of an entire people could be decided by mere majority vote—angry republican citizens of Michigan formed a third party, the Republican Party, in Jackson, Michigan, under, not surprisingly, a grove of oaks.
Whatever one in the early twenty-first century might think of Jupiter or Thor, the oak remains a mighty symbol of a free people, a people ready to remember and reclaim what is rightfully theirs by the grace of the Creator and the created order. The oak reminds us of strength in the face of nasty and bitter times, returning us to the nourishment of what makes us strong and free, the duty to govern ourselves in a fashion becoming to God and nature and, equally important, to the dignity of the human person. Unlike oppressive governments, which rely on cults of personality, the republic relies on the nature of nature and the nature (good and bad) of the human person.
As I write this second part of the series on the origins of the rise of the modern nation state, our own nation state looks—financially—nothing short of pathetic. At the end of 2017, the federal government’s official estimate for deficit spending is $666 billion. For all kinds of reasons, this is a really scary number, and not just because it causes one to think of the mark of St. John’s envisioned beast. Rrroawr! $666 billion is a number so terribly large that it is difficult for any of us—even those of us not suffering from innumeracy or apocalyptic dread—to comprehend. And, of course, this is just the recorded and admitted deficit spending for one year. That is, it accounts for those things the government admits to, on the books and on budget.
According to the U.S. Debt Clock, we’re at nearly $21 trillion in debt, and the number increases so quickly that seizures might very well result. As the number made my stomach turn, I thought, perhaps the site should come with a warning akin to those found on PS4 and Xbox games. That’s all we need, right? Another law and another regulation.
As Tom Woods and all sensible economists have recently claimed, the United States of America is simply insolvent. The only shocking thing is that no one in the mainstream media or financial institutions seems to care.
Whither the American republic? It is worth remembering that no one founds a republic believing the republic will last forever. To believe such a thing automatically negates one’s conservatism. Like all living things, a republic must experience a birth, a middle age, and a death. The question is never if a republic will die, but when. The stronger its soul, the healthier its body. Conversely, the less purpose a people have, the faster they will decline. A republic, American or not, is a res publica—a common good, a good thing, a public thing. Whether our government still resembles the republic of the American founders is yet another question, and one for another post.
It is also worth remembering that in the long history of western civilization, no political arrangement—with only the rarest exceptions—has lasted more than a few centuries. Political bodies come and go. The two longest-lived institutions in the West are not political but ethnic and religious. The oldest sustained cohesive people in the world are the Jews, and the oldest institution in the West is the Latin church. We can conservatively date the first at 4,000 years old and the second at roughly 2,000 years old. Not a single political body that existed during the time of the Pentecost still exists today. Indeed, even the very form of government that so predominates today—the roughly 200 nation states of the world—did not exist until the fifteenth century.
In the previous post, I mentioned what a libertarian skeptic God seems to be, as understood in the Books of Samuel and in Jesus’ handling of the coin of the Roman Empire. This skepticism about what would be called caesaro-papism arrived not just with the Jews, but also with the ancient Greeks and Romans as well.
The classical Greeks believed in community rule, that is, rule localized to each polis, its citizens deciding over and across time what rules, norms, and laws should prevail. At the height of ancient Greece, roughly 150 poleis existed, each with its own form of government. The Athenians were relatively democratic, the Spartans monarchical and militaristic, and the Corinthians free traders. What they held in common was a despising of the Oriental (Persian) belief in a godking. Equally, the Persian “godkings,” Darius and Xerxes, despised the Greeks and what they perceived as their anarchic and archaic liberty. When the Persians warred against the Greek poleis in the early fifth century, their war was far more about pride than logic. As the eminent twentieth-century historian Christopher Dawson argued, the Persian War was, at its essence, a spiritual struggle.
The Greek patriot Herodotus gloriously described one moment of the Persian invasion: the defense of the Gates of Fire (Thermopylae) by Leonidas and his 300 Spartans.
But Xerxes was not persuaded any the more. Four whole days he suffered to go by, expecting that the Greeks would run away. When, however, he found on the fifth that they were not gone, thinking that their firm stand was mere impudence and recklessness, he grew wroth, and sent against them the Medes and Cissians, with orders to take them alive and bring them into his presence. Then the Medes rushed forward and charged the Greeks, but fell in vast numbers: others now took the places of the slain, and would not be beaten off, though they suffered terrible losses. In this way it became clear to all, and especially to the king, that though he had plenty of combatants, he had but very few men. (Herodotus, The History, Book VII).
Real men, Herodotus implied rather strongly, fought because they chose to fight, not because they were forced to. Only “free societies” allow the flourishing of real manhood. However brave a Persian might be, no real man could fight for Xerxes. Such warriors were, simply put, slaves, playthings of a false godking. “It was as ‘free men,’ as members of a self-governing community, that the Greeks felt themselves to be different from other men,” Dawson argued.
It would not be absurd to argue that when the last Spartan died at Thermopylae, the Occident was born. Though the Greeks (under the hubris of the Athenians) ultimately squandered their inheritance, falling into empire, civil war, and ruin by the end of the fifth century, the successes of the first few decades of that century are not lessened. The Greek achievement against the Persians proved a glorious watershed in the history of liberty, in the history of dignity, and in the history of civilization.
A full three decades before the Spartans and Persians battled at the Gates of Fire, the farmers of Rome overthrew their Etruscan overlords, proclaiming, within a year of their rebellion, a republic. True to their own fears of godkings, the Romans insisted that their republic was not created—implying a man or group of men had the divine ability to declare such a thing out of nothing—but, rather, grew. Our republic, Cicero writes in his dialogue On the Republic, “in contrast, was not shaped by one man’s talent but by that of the many; and not in one person’s lifetime, but over many generations” (Cicero, On the Republic, Book II). Though far from perfect, the Roman republic grew, adapted, and evolved over centuries of time, lasting 400 years before succumbing to the dread and fate of outright empire.
Again, one must remember that no republicans believe their republic can last forever. A republic, by its very essence, must rely on its organic nature, a living thing that is born, flourishes, decays, and dies. It is, by nature, trapped in the cycles of life, bounded by the walls of time. While a cosmic republic might exist—as understood by Cicero’s “Cosmopolis” and Augustine’s “City of God”—it exists in eternity and, therefore, stands aloof from time.
For better or worse, the Roman Republic reflected not just nature, but the Edenic fall of nature as well. We can, the Roman republican Livy recorded, “trace the process of our moral decline, to watch, first, the sinking of the foundations of morality as the old teaching was allowed to lapse, then the rapidly increasing disintegration, then the final collapse of the whole edifice.” The virtues of the commonwealth—the duties of labor, fate, and piety—gave way to the avaricious desires for private wealth. When young, the Romans rejoiced in the little they had, knowing that their liberty from the Etruscans meant more than all the wealth of the material world. “Poverty, with us, went hand in hand with contentment.” As the republic evolved and wealth became the focus of the community, not sacrifice, so the soul decayed. “Of late years,” Livy continued, “wealth has made us greedy, and self-indulgence has brought us, through every form of sensual excess, to be, if I may so put it, in love with death both individual and collective.” (All Livy quotes from The History of Early Rome, Book I)
Not long before his own martyrdom at the hands of a would-be Caesar, Mark Antony, Cicero lamented:
Thus, before our own time, the customs of our ancestors produced excellent men, and eminent men preserved our ancient customs and the institutions of their forefathers. But though the republic, when it came to us, was like a beautiful painting, whose colours, however, were already fading with age, our own time not only has neglected to freshen it by renewing the original colours, but has not even taken the trouble to preserve its configuration and, so to speak its general outlines. For what is now left of the ‘ancient customs’ on which he said ‘the commonwealth of Rome’ was ‘founded firm’? They have been, as we see, so completely buried in oblivion that they are not only no longer practiced, but are already unknown. And what shall I say of the men? For the loss of our customs is due to our lack of men, and for this great evil we must not only give an account, but must even defend ourselves in every way possible, as if we were accused of capital crime. For it is through our own faults, not by any accident, that we retain only the form of the commonwealth, but have long since lost its substance. (Cicero, On the Republic, Book IV)
As we consider our own nation state with its immense debt and bloated empire, we might wonder if Cicero’s words, written during the rise of the first Caesar, might not equally apply to 2017.
Bradley J. Birzer is the president of the American Ideas Institute, which publishes TAC. He holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
There are few joys in this insane world greater than the pleasure of really artful music, whatever the genre, whatever the market. And of all of the rock bands in the world, the best might very well be England’s Big Big Train (BBT). Well known in the U.K. and Europe, they remain relatively unknown in North America, to our shame. To my mind, the only band that rivals BBT is Tennessee’s Glass Hammer.
As a band and a project, BBT has been around since the early 1990s. Like most living things, it has aged considerably (though quite gracefully) over the past two decades. Guided by its brilliant founding members Greg Spawton and Andy Poole, BBT is now made up of eight full-time members, including one from Sweden and one from the U.S.
Its most famous member is Dave Gregory, formerly the lead guitarist for XTC. Like every member of the band, Gregory is an extraordinary musician pursuing a high art. He is also, I’m happy to note, a true gentleman and, like everyone in the band, a perfectionist. From the beginning of its existence, BBT has honed its complex song structures, riveting melodies, and gorgeous historical, poetic, and mythic lyrics. Almost all of the band’s songs celebrate excellence, innovation, and struggle. Typical themes include World War I and II ace fighters, beekeepers, medieval saints, architects, and survivors of trauma. Lyrically, the band is levels above almost anything being written in popular culture today, and, in the rock-pop world, certainly well beyond Elvis, Madonna, and Lady Gaga.
BBT resides in a sub-genre of rock music known popularly as progressive rock, art rock, or more affectionately “prog.” Prog began as an attempt in the mid-1960s to present rock music as an art form rather than an emotional reaction. American lovers of prog generally date its advent to “Pet Sounds” by the Beach Boys, while the British usually turn to “Sgt. Pepper’s” as the beginning. One of the most important rules about prog is that there are generally no rules. Traditionally, progressive rock incorporates odd, African-American jazz-like tempos and time-signatures with classical European tonal and compositional structures.
For a time, between about 1970 and 1976, progressive rock—led by such bands as ELP, Genesis, and Yes—sold millions of records. But the genre faded in the late 1970s, outcompeted by the less complicated (many would say less talented) punk rock movement. Where progressive rock succeeded during this period, it was usually by incorporating elements of hard rock and metal, as Canada’s Rush did; by becoming part of the experimental New Wave and post-New Wave scene, as Peter Gabriel did; or, lasting just a bit longer, by embracing Cold War existentialism, as Pink Floyd did.
By 1980, though, all but the most diehard fans of straightforward prog—if such a thing could exist or ever did exist—considered the genre to be pretentious, overly complicated, and bloated. Even books that attempt to cover the genre sympathetically, such as Ed Macan’s Rocking the Classics and Dave Weigel’s The Show That Never Ends, are written in the past tense. When most commentators speak of prog, they do so in mocking tones, remembering Yes’s Rick Wakeman wearing gaudy wizard cloaks.
Beginning in the early 1990s, however, a whole new group of progressive rock artists emerged, especially as the internet began to decentralize the music market and connect various parts of the globe, one to another. Between 1994 and 2000, progressive rock once again gained a substantial following in Europe, the Middle East, India, and the Americas.
This movement, known as “third-wave prog,” has yet to subside. In America, Neal Morse, Glass Hammer, and Dream Theater dominate. In the U.K., Steven Wilson, BBT, and Marillion hold prominent positions. (To be sure, the heart of the third-wave resides in England, centered around editor and kingmaker Jerry Ewing and his geekish and stylish Prog magazine.)
For TAC readers, it’s worth noting that a whole host of serious American conservative and libertarian writers—ranging from Steve Hayward to Tom Woods to S.T. Karnick to Jason Sorens to Steve Horwitz to Sarah Skwire to Aeon Skoble to Carl Olson to Bruce Frohnen—share a deep and abiding affection for prog.
Additionally, progressive music has typically embraced intelligent topics, offering cultural and political criticisms that take anywhere from six to (*gasp*) 78 minutes. The average prog song is three to four times the length of the average pop song. Many prog songs remain strictly instrumental in their opening five to six minutes, with the vocalist finally entering long after the average pop song would have ended. And as in jazz, progressive musicians often play extended passages or solos for impressive amounts of time, emphasizing complexity as well as spontaneity.
This brings us back to that English wonder of wonders, BBT. Though Spawton and company had limited success in their first decade, it was not until 2009 that the band’s current form began to take shape. That year, Spawton recruited and introduced three key elements to their future success: the extraordinary American drummer Nick D’Virgilio (younger brother of Mike D’Virgilio of theamericanculture.org); the minstrel English vocalist, flautist, and composer David Longdon; and guitarist gentleman Dave Gregory. The album that BBT released that year, The Underfall Yard, is every bit as good and meaningful as Brubeck’s Time Out, Simon and Garfunkel’s Bookends, Davis’s Kind of Blue, and U2’s War. While any listener would expect the traditional rock instruments of guitar, bass, drums, and keyboards to be present, few would expect the incorporation of a full English brass band, woodwinds, and strings. The themes of The Underfall Yard revolve around the mysteries of Venus, abandoned industrial areas as archeological wonders, entrepreneurial visionaries, Dante-esque architects, and electrical storms off the coast of England.
None of this ever screams—or even whispers—pretense. Through the extraordinary talents of BBT, it all comes across as a perfect and necessary whole, as though the members of the band have found something that had always existed in the created order but had gone unseen since the fall of Eden. Once seen (or heard), it can never be unseen, nor should it be. By the time the listener reaches the end of The Underfall Yard, he cares immensely about that which has been lost and should never have been forgotten:
These are old places stood in the way,
Grass grown hills and stone.
Parting the land
With the mark of man,
The permanent way.
Using available light,
He could still see far.
Far have we traveled from the pop ejaculations of “hey, baby, baby.” Spawton and Longdon reach into the realms of Chesterton, Eliot, and Betjeman.
Indeed, Spawton’s lyrics, as quoted above, might have been written, at least in their intent if not in their wording, by the greatest of 18th-century thinkers, Edmund Burke. We see the past because of the sacrifice of our ancestors. We see the present as a means to honor, through piety, those who struggled for us, whether they knew us or not. And we see the future, however dimly, only by the available light of tradition, reason, and nature. Could this not be Burke’s mysterious incorporation of the human race: the dead, the living, and the yet to be born comprising one profound community, transcending the limitations of time and place?
Since 2009, BBT has grown vigorously, adding a full-time keyboardist, Danny Manners, another lead guitarist, Rikard Sjöblom, and a violinist, Rachel Hall. And since the release of The Underfall Yard, they’ve just kept getting better and better with one masterpiece after another: Far Skies Deep Time in 2010, English Electric Full Power in 2013, and Folklore in 2016, in addition to an EP and two live albums.
I believe 2017, however, has truly been the year of BBT. This year alone, the band has released two full albums, Grimspound and The Second Brightest Star, a free 34-minute song “London Song,” and on December 1 a Christmas EP.
There exists not a single false step in any of this. And, despite a mass of releases, quantity has never overwhelmed quality. BBT brims with ideas, especially as Spawton and Longdon play off each other, forming a sort of prog Lennon/McCartney for the 21st century. This is a band at the top of its game, bursting with joy over expressing itself and its art.
Let me note just two additional things.
First, as noted above, there’s a Stoic perfectionist streak in most progressive rock, but none more so than in BBT. When the band releases something, Spawton makes sure every single aspect of it is right, from the music to the lyrics to the packaging.
Second, and equally important, the band members do not see themselves as aloof geniuses, tapping into the esoteric music of the spheres. For BBT, the art is real, but so is the audience. They excel at creating and leavening community, not only among themselves, but for and among the rest of us as well.
As a part of the northern tradition of the Beowulf poet, King Alfred, the eddas, and the sagas, the band sings on one of their 2017 albums:
Here, with book in hand
Follow the hedgerow
To the meadowland
Here with science and art
And beauty and music
And friendship and love
You will find us
The best of what we are
Poets and painters
And writers and dreamers
If you thought rock was an artless and juvenile spewing of emotion, think again.
Pick up any album by BBT and be dazzled, not just by the artistry of the music, but by the invitation to be a part of the art and to become immersed fully into something that is unapologetically true, good, and beautiful.
Cynics need not apply.
Over the last several years, amidst the swirls of overt corruption, immigrant “hordes,” rising “national security” concerns, police militarization, bloated empire, and the so-called deepening of the “deep state,” conservatives and libertarians of all stripes have pondered the meaning of the modern state. Most recently, Paul Moreno has brilliantly considered the rise of The Bureaucratic Kings, Alex Salter has wisely questioned the relationship of anarchy (the Bohemian, Nockian variety) to conservatism, and, though I have yet to read what the always thoughtful Jason Kuznicki of Cato recommends, there is also James C. Scott’s Against the Grain: A Deep History of the Earliest States. Believe me, I am intrigued. Each of these authors and recommenders, of course, owes an immense debt to Robert Higgs’s pioneering magnum opus, Crisis and Leviathan (1987), and Higgs, in turn, had followed in the footsteps of such 20th-century greats as Christopher Dawson, Robert Nisbet, Friedrich Hayek, and Joseph Schumpeter.
Some conservatives will immediately balk at such analyses. Students of Leo Strauss want to remind us that politics, properly understood in the Aristotelian sense, is high, not sordid. Students of Russell Kirk want to remind us that order is the first concern of any society and that to look too deeply at the origins of a state is a form of pornographic leering and peeping. And Christians of every variety consider the 13th chapter of St. Paul’s letter to the Church in Rome as having closed the matter before it ever needs discussion. God, according to a literal reading of St. Paul’s letter, commanded us each to “submit to the supreme authorities. There is no authority but by act of God, and the existing authorities are instituted by him; consequently anyone who rebels against authority is resisting a divine institution.”
While modern Christians might claim this answers every question about the legitimacy of state action, they are not necessarily mainstream in the history of Christianity. The Prophet Samuel, feeling cast aside by the ill favor of his people, had a fierce argument with them after consulting with God about their demand to centralize the government under a monarch. God assured him that this would be foolish:
He will take your sons and make them serve in his chariots and with his cavalry, and will make them run before his chariot. Some he will appoint officers over units of a thousand and units of fifty. Others will plough his fields and reap his harvest; others again will make weapons of war and equipment for mounted troops. He will take your daughters for perfumers, cooks, and confectioners, and will seize the best of your cornfields, vineyards, and olive-yards, and give them to his lackeys. He will take a tenth of your grain and your vintage to give to his eunuchs and lackeys. Your slaves, both men and women, and the best of your cattle and your asses he will seize and put to his own use. He will take a tenth of your flocks, and you yourselves will become his slaves.
God seems to have been the first hard-core decentralist anti-statist, but Samuel’s people refused to listen, and God granted them, against His better judgement, a monarchy.
Jesus, holding a coin of his day, stamped with Ruler of Things Temporal on one side and Ruler of Things Spiritual on the other, told His followers that they must render unto Caesar what is Caesar’s and to God what is God’s. For better or worse, He did not elaborate, but it is rather clear that the body politic has no right to interfere with the body spiritual.
Even St. Paul, when he wrote the thirteenth chapter of Romans, wrote his chapter in the context of a much larger letter that dealt entirely with the nature of the human person as citizen. Not surprisingly, he wrote this letter to the Christians who lived at the very center of the empire. The letter itself is deeply complex, full of nuances, and, one would wish, resistant to proof-texting. In order, St. Paul addresses citizenship to and within the Natural Law, to Judaism, to and within the Gospel of Jesus, to and within Creation itself, a return to the topic of Judaism, to and with God’s will for each person within history, in the Body of Christ, and, finally, in chapter 13, to and within the secular authorities of the world. To suggest that one could readily take any one of these discussions and commands apart from the others is as wrong as it is absurd. While I would never proclaim to know exactly what St. Paul wants of us, I can state with certainty that no easy answer suffices. St. Paul was as individual in his personality as he was in his thought.
Three centuries after St. Paul wrote his letter to the Romans and after horrific massacres, huntings, and martyrdoms at the hands of the Roman Imperials, Christians found themselves, if not quite legal, no longer illegal after the Edict of Milan of 313. Not until 380, did the Roman government declare Christianity fully legal, and, twelve years later, in 392, it offered Christianity a monopoly. For eighteen years, though many Romans grumbled about the privilege given exclusively to Christianity, none openly challenged it until the barbarian hordes invaded the city of Rome on August 24, 410. Then, all hell broke loose, and the grumbling pagans became outraged pagans, demanding the recognition that the forsaking of the gods for the Christian God had resulted in the fall of the Eternal City.
In those years prior to the invasion, St. Ambrose of Milan had barred the Roman Emperor from receiving communion after the emperor had sanctioned the massacre of rebellious civilians. This Ambrosian doctrine established that while the powers spiritual did not possess force of arms, they did have the right to deny the sacraments of the Holy Church to those who wielded political and military power when they were in grave sin. Ambrose’s excommunication worked, and the emperor accepted and endured an extended penance before being received back into the arms of the church. Such power remains to this day, as seen most recently and most powerfully in the modern age in a Polish Pope’s shaming of an Evil Empire.
Ambrose’s close friend, St. Augustine, elaborated this Catholic distrust of state power most effectively and most persuasively in his magisterial The City of God (412-428). Though long, the relevant passage is worth quoting at length.
Justice being taken away, then, what are kingdoms but great robberies? For what are robberies themselves, but little kingdoms? The band itself is made up of men; it is ruled by the authority of a prince, it is knit together by the pact of the confederacy; the booty is divided by the law agreed on. If, by the admittance of abandoned men, this evil increases to such a degree that it holds places, fixes abodes, takes possession of cities, and subdues peoples, it assumes the more plainly the name of a kingdom, because the reality is now manifestly conferred on it, not by the removal of covetousness, but by the addition of impunity. Indeed, that was an apt and true reply which was given to Alexander the Great by a pirate who had been seized. For when that king had asked the man what he meant by keeping hostile possession of the sea, he answered with bold pride, “What thou meanest by seizing the whole earth; but because I do it with a petty ship, I am called a robber, whilst thou who dost it with a great fleet art styled emperor.” [St. Augustine, City of God, Book IV]
Whatever one might personally think of St. Augustine in the early 21st century, it matters little. Outside of Holy Scripture, nothing in the western middle ages mattered as much as his City of God. For all intents and purposes, it was the handbook for the next thousand years of the West. Even so, we moderns and post-moderns almost never turn to the medieval period to understand political theory. For the medieval greats, what mattered most was not what form government took, but how moral it was, how ethical it was, and how protective of the powers spiritual it was. As much as the Medievals studied Paul, they did so through the lens of Augustine. Paul’s Letter to the Romans, especially chapter 13, was anything but simple.
For those of us living in the last six hundred years of history, attuned as we are to the doings of the nation-states at home and abroad, the Medieval is as far from us as are Ray Bradbury’s imaginary civilizations on Mars.
Yet, as good and true conservatives, we in this present whirligig we call civilization must return to first principles and right reason. If we are to understand the modern state, we must understand its origins.
Part II, coming to an American Conservative website near you.
For all the wrong reasons (and none actually correct), Hillsdale College served as an important part of the debates in the Senate this weekend regarding tax reform. Taking it upon himself to become the crusader for everything “progressive,” Senator Jeff Merkley of Oregon proudly proclaimed on Twitter and Facebook: Hillsdale College wants “to have permission to discriminate in selecting students.” Of course, Senator Merkley did not mean that the college discriminated in its selection process, as any real university would, to seek and recruit the best and the brightest, but, rather, that the college discriminates to make sure the college stays racially white. Or, as he not so delicately put it, Hillsdale College “specializes in discrimination.”
I have no ability to judge whether the Senator spoke out of ignorance or maliciousness, but I can state this definitively: He knows absolutely nothing about Hillsdale College, and, frankly, if he possesses even an ounce of decency, he will formally apologize for his claims.
A group of abolitionist Free-will Baptists founded Hillsdale College in 1844, though they stipulated that the college could not be denominational. Instead, true to their abolitionist beliefs, the founders of the college forbade any discrimination based on the accidents of birth. In other words, Hillsdale—from day one of its existence, as defined by its charter—allowed a person of either sex and of any racial, ethnic, or religious background to study there. The college became, understandably, a hotbed for abolitionist sentiment, and it was the rare prominent abolitionist of the ante-bellum period who did not grace Hillsdale with a visit and a speech. Perhaps most prominently, Frederick Douglass spoke here. True to our heritage, President Larry Arnn dedicated a statue to the great anti-slavery orator just this past spring. That statue, along with statues of a Civil War soldier and Abraham Lincoln, greets the visitor to Hillsdale’s beautiful campus in southern Michigan.
As noted above, though, Hillsdale was not just color-blind from day one, it was also the first college or university in the United States to allow women the right to earn a liberal arts degree. Others allowed women to study for home economics, but, at Hillsdale, they were treated just as well as men, studying the Great Ideas, the Great Minds, and the Great Books of western civilization.
When Abraham Lincoln called for volunteers to suppress the Confederate rebellion in the spring of 1861, almost every single male at the college answered that call, making it unique among all northern colleges. Indeed, outside of the military academies, not a single institution of higher learning offered anywhere near the level of participation that Hillsdale offered. Hillsdale men (and, of course, women, though in non-combat positions) served the Union stunningly, especially in the 2nd, 4th, and 24th Michigan regiments. The 24th, the fifth of five regiments to make up the justly famous Iron Brigade, sacrificed themselves in one of the most horrific moments of the Civil War, the first day at the Battle of Gettysburg. Positioning themselves at a bottleneck on the western side of the little Lutheran Pennsylvania town, the 24th Michigan, outnumbered nearly 10 to 1, fought so fiercely that the Confederate invaders held back, despite their superiority in numbers. When Lee found out about the timidity of his own troops, he was furious. Had his troops broken the 24th Michigan, they could have readily taken the high ground of Little Round Top and surrounding areas. The Hillsdale men who gave their lives that day in what must have seemed a hopeless cause may very well have changed the course of American and western civilization. Today, the fourth floor of Delp Hall, which houses the history department, is dedicated to their sacrifice, a seminar room displaying paintings of that hot, humid afternoon in Pennsylvania as one Hillsdale man after another succumbed to enemy fire.
During the 1950s, at the height of the struggle for black civil rights, Hillsdale’s football team, led by the intrepid Muddy Waters, refused to play in the Tangerine Bowl because black players were not allowed on the field. Hillsdale’s team would have gone into the 1955 Bowl game with a 9-0 record.
Your author—yours truly—has had the privilege of teaching at this college for over eighteen years. To this very day, I am more than proud to note, Hillsdale remains 100% blind when it comes to the color, race, ethnicity, and religion of its students. Not only do we not ask a student to identify any race or ethnicity on his or her application form to the college, but we keep absolutely no data about such things. We believe in character, not skin color. We love intelligence, not appearance. We love the individual, not the group.
Though I can only speak for myself and not for the college (for I have no such authority to do so) as a whole, I can state that far from “specializing in discrimination,” we might be the single best institution in western civilization that adamantly refuses at every level to “specialize in discrimination.”
Though I do not have the privilege of knowing or even understanding Senator Merkley, I can state with certainty that while he makes a show of calling for “equality,” he really means a drab uniformity and collectivized tapioca. As Dr. Arnn, the single best college president in the world, has reminded us many times, we were anti-discrimination long before the Federal Government was. In fact, he notes, the Federal Government finally adopted OUR position on the issue of race and ethnicity, not the other way around. Hillsdale had to remind the United States over and over again of the Founding intent as expressed in the Declaration of Independence and the Northwest Ordinance of 1787.
Rather than speaking about that of which he knows nothing, perhaps Senator Merkley would consent to visit our campus. I would happily show him our statues that so beautifully reveal our devotion to liberal education as well as to the dignity and beauty of each human person, each a unique expression of a majestic Creator. I would happily introduce him to my extraordinary colleagues and to my ever-curious students. I would also take him to Oak Grove Cemetery, a sacred site on the northernmost part of town where over 300 Civil War veterans are interred, along with the first historian of Hillsdale College, Ransom Dunn. In 1854, Dunn became so disgusted with Washington politics, and especially with the Democratic Party under Stephen Douglas, that he helped form an independent movement that sought to prevent the extension of slavery into the American West. After much deliberation under a grove of oak trees in Jackson, Michigan, they finally decided on a name: the Republican Party.
As a historian at one of the finest institutions of higher learning in existence, I ask only that the Senate neither help nor hinder us. Hillsdale College does not take one single penny from the federal government, and our students take not one single penny in federal loans. Just please leave us alone, and we’ll be fine. Indeed, leave us alone, and we’ll continue to show the world how best to educate and how best to promote the dignity of every single human person, regardless of race, ethnicity, religion, or any other accident of birth.
(Warning: Some spoilers regarding Season 2)
If there is something better that has been made for the screen (large or small) than Stranger Things—at least since the final movie of Christopher Nolan’s Dark Knight Trilogy came out in 2012—I have certainly missed it. And yet, it’s not as if Stranger Things is second best in some weird contest of mediocrity. It is, from start to finish, extraordinary. It’s extraordinary in its imagination, in its plot, in its making the unreal real, in its embrace of nostalgia, but, most of all, in its full acceptance of the human condition—at once mysterious and full of awe, composed of beautiful individuals, each deeply flawed. While Stranger Things Season 2 is at once better and weaker than Season 1 in its constituent parts, it remains a thing of glory and beauty.
When thinking about the excellence of the show, one might very well wonder: just who are the Duffer Brothers, and where on God’s green earth did they come from? Twin brother creators, writers, and directors, they brought Stranger Things to life. Crazily, they did not even enter this whirligig of existence until after the events of Season 1 took place. They were in utero! Somehow, though, they absorbed the culture and deeper meaning of the decade, grasping the nuances of the early Reagan era—full of tax cuts, unmatched economic growth, acid rain, middle-class pride, the death of Spock, The Thing, Blade Runner, Sixteen Candles, Rush, Yes, Tears for Fears, Echo and the Bunnymen, the imminent if unseen collapse of the Soviet empire, entrepreneurial genius, California ascendancy, Commodore 64s and Macintosh 128s, John Paul II, Stephen King, Steve Jobs, Milton Friedman, and, of course, that greatest of all nerddom games, Advanced Dungeons and Dragons.
Just as important, the Duffer Brothers could only have emerged at the moment when the internet had almost fully decentralized the music and television industries. Try as they might, the Duffer Brothers found little success with the mainstream channels. In their unrelenting drive for success—never tempered by any desire to compromise their art—they turned to Netflix. Netflix, thank the good Lord, took a chance with the Duffers. The rest, of course, is history. A match made in eternity, but manifested in time.
Because I am exactly the same age as Nancy, the teenage female protagonist of Stranger Things, last year’s Season 1 re-immersed me in my life at Hutchinson High School in ways I never could have expected. Yes, I listened to the pensive early New Order, I played Dungeons and Dragons, I loved Reagan, Jobs, and Friedman, and I never stopped reading science fiction or Batman comic books. As with the younger male protagonists of Stranger Things, I had but a few very close best friends, and my mom let me ride my bike—free-range parenting in those days—from one side of Hutchinson, Kansas, to the other, from dawn to dusk. As long as I was home by dinner time, no inquiries about my day were made. Was I mischievous? Oh yes. Did I head to the library as often as I caused trouble? Equally, yes. I might very well have been the local king of trouble-making nerds, amazed, to this day, that I didn’t kill myself or cause more property damage than I actually did.
While Season 1 of Stranger Things brilliantly introduced us to the wildly imaginative and yet comfortably familiar bright and dark worlds of that decaying autumn of 1983, with the government’s unleashing of hell upon an unsuspecting Indiana town the day after Guy Fawkes Day, Season 2 begins on October 28, 1984, a little less than a year later. Critically, the second season unfolds in the days just before the 1984 presidential election, the election that solidified the Reagan Revolution, clearing out the political control of technology, industry, and community, and positioning the free world to destroy the tyrannical one. Reagan/Bush signs appear prominently throughout the first several episodes, offering surety and hope.
In Season 1, the heroes were outcast kids, confused teenagers, and broken adults. The enemies were societal conformity, peer pressure, and the U.S. government, especially the Department of Energy. For Season 2, the Duffer Brothers brought the heroes together, still separated by age, but much more aligned in purpose. The new government bureaucrats are not quite heroes, but—led by Paul Reiser—they are on the side of right. Reaganism has replaced Johnson-Nixon-Carter era corruption, if not the incompetence.
Though the Duffer Brothers might have taken the easy route with the new season, offering us all-new adventures in their X-Files-like Hoosier funhouse of horrors, the twin brothers wisely tackle the far more difficult issue of post-traumatic stress. The show is never merely about monsters; it’s about nightmares, all too real and all too cumbersome for the human condition. In the first season, the adults—and especially Police Chief Hopper, who had served in Vietnam, endured the taunts of the New Left, lost a child, and seen his marriage destroyed, and who now survives only by crusading against injustice and popping anti-anxiety medication—suffer from their pasts. In this season, we see Will, the boy abducted by the monster in the first season, experiencing not only depression but also possession, and the girl, Eleven (Jane), wondering if her new father is holding her back as oppressively as the government once did. Depressed and confused, neither can find happiness, though each is surrounded by love. These two must enter into the unknown on their own, only coming to realize, penultimately, just how vital they are, not just as individuals, but as friends.
Indeed, if there is one thing that ties together the best of what’s humane in the second season of Stranger Things, it is the necessity of friendship and community for overcoming adversity, no matter how demonic or depraved or bureaucratic. At one critical moment, Max—a new student who has arrived from California and who lives with an abusive older brother—asks Mike why he opposes her entrance into their intimate circle of friends, and he loses his temper.
Taken as a whole, Season 2 is every bit as great as Season 1. Yet, despite its many successes, its weaknesses make it slightly uneven and even a bit troubling. In the first season, we were treated to something done exceedingly well that is usually done very poorly in the various dramatic arts of the last hundred years. Very few artists can capably create good characters who actually strive to be good and still remain interesting. Most artists—especially in fiction, movies, and television—pander to easy, bad decisions, or they make their good characters one-dimensional and cheesy, usually armed not just with powerful weapons but with cringe-worthy one-liners.
Season 1 of Stranger Things introduced deeply flawed heroes, but those heroes struggled mightily against those flaws, always hoping to do what’s right, even when hindered by their own individual sins, flaws, and failings. As such, even the most troubled of the heroes earned our profound love by remaining, at some fundamental level, innocent. Of course, there was no better character for the viewer than Eleven, the little girl stolen by the U.S. government from her parents, tattooed in the manner of the Jews in the Holocaust, and used brutally as an experiment to further national interests. She killed when necessary, but she strove to find her humanity, despite never having had a good example in her life. We cheered and cheered for Eleven to succeed because she wanted to succeed, but only by doing the right thing and by wanting to love and be loved. Season 1 ended, correctly, with Eleven seemingly sacrificing herself for her friends, the first persons who had ever treated her with respect.
In Season 2, Eleven is understandably angry at the way she’s been raised, and she now wants, again understandably, to be with those she loves and to find out why she was abandoned by (stolen from) her parents. Her quest, though, goes all wrong, as she becomes involved with a bombastic, nasty gang of scuzzy misfits. Though she walks away from this gang, she has changed, becoming sleek and cool rather than innocent and loving. Frankly, I hated to see this change in her, and it made me less sympathetic to the second season. The same thing happens, though with far less screen time, to Mike and Nancy’s mother, who has gone from a powerfully concerned mom to a bored, sex-crazed kitten. It’s neither funny nor helpful to the story.
These, however, are minor points in the big scheme of things, and, whatever its faults, Season 2 is still the best thing on any screen at the moment. Those characters we loved in Season 1 are every bit as interesting in the second season, if not more so. Joyce is still the best mom in the world, Hopper is still the best cop in the world, the four boys are the best nerds in the world, and even Steve, so sleazy in Season 1, has become the “good guy,” a true leader in the best sense of the word.
And those of us who actually grew up in the early 1980s get to enjoy all the nostalgia, yet again, of the decade that so shaped us. The demogorgons, the mind flayers, the new wave music, the arcades, the free-range parenting, the best president of the twentieth century—all the things that so shaped our childhood are now manifest for all to see and enjoy.
Ave, the Duffer Brothers! Yes, Ave. Pure and simple, Ave.
In the gloriously barbaric world of Anglo-Saxon history, legend, and myth, The Moot was a meeting of creative minds and families. Its task? To ponder and analyze this issue or that, and, ultimately, to decide matters of the utmost importance to the larger Christiana res publica. It also vitally served as the best means to bring the news of one shire or duchy or free city to all the rest. In a time of polycentric authorities, absent our modern and post-modern lightning-fast communications, The Moot brought order and stability to the medieval Anglo-Saxons and Scandinavians, but it also gave its respective communities time to contemplate that which most needed contemplation.
One can, at least in the mythological tradition of the Burkean Whigs, ultimately trace The Moot to the Witan, to Parliament, and, across the Atlantic, to Congress.
Horrified by the inhumane ideologies rising powerfully in the 1930s, English Quaker J.H. Oldham called together a new moot to discuss ways in which tradition and faith could combat political heresy and brutality. Among those who joined his Moot were T.S. Eliot, Owen Barfield, Christopher Dawson, Karl Mannheim, Reinhold Niebuhr, and Michael Polanyi. As a functioning body, the Moot debated among its members, corresponded with men such as Dietrich Bonhoeffer, and published a wonderful irregular newsletter, simply called The Christian News-Letter, edited usually by Oldham and sometimes by Eliot. In one of the most important issues, dated July 24, 1940, Inkling Owen Barfield explained that National Socialism and Communism could be defeated only by challenging their ideas with better ones. Such ideas could never, appropriately or effectively, arise from any one person but only from a group of persons, each working in harmony toward the shared goal of a more decent humanity. Those opposed to ideology, however, could not and should not create a counter-ideology. Instead, those of good will and like mind should pursue the “sober effort to build up and maintain a common stock of thought rather than to startle with a series of sparkling individual contributions—like a commonwealth of the spirit, in which there is no copyright.”
As president (since August 1, 2017) of The American Conservative, I hope explicitly to situate us, our writers, and our readers within the western framework of a Republic of Letters, beginning with Heraclitus and Herodotus and ending only when God so decides. While, as my libertarian side knows all too well, property rights and individual genius matter profoundly, I agree with Barfield that we must strive for a “commonwealth of the spirit,” connecting us to all other like-minded persons and communities of the present, but also to all of the greatest of the past, and to those we can only imagine in the future. Cicero would have known this as a Cosmopolis, the city of the gods and of all humanity, held together by the common divine language, Reason, whereas St. Augustine would have called this the sojourning City of God, trapped within time, and, hence, in the City of Man. The humanists of the ancient through the medieval and early modern worlds, though, would have referred to this as a “republic of letters.”
As a magazine, The American Conservative—especially under the careful direction of the current and previous editors—has done this brilliantly. As such, I see this column, “The Moot,” as merely a more direct manifestation of the ambassadorial and clearing-house function of the conservative and libertarian movements. Through this column, I hope to announce and comment on any “thing” happening in the non-leftist world. That is, I hope to bring news of books published, conferences held, paintings painted, Kickstarter projects initiated, speeches delivered, albums released, organizations fighting the good fight, and courses being taught—thereby connecting one thing to another.
If you have news of interest to the conservative and libertarian worlds at large, please let me know at my [email protected]. Make sure you include the how, what, where, when, and why—when possible and appropriate. These things can be in the recent past or in the far future. Include a sentence or two explaining why your item matters. Also, please know that, unless otherwise indicated, I will assume any part of your notice sent to me is quotable. In the subject line, please note something akin to “News for The Moot.”
Finally, and critically, the “thing” does NOT need to be directly or explicitly conservative or libertarian, merely of interest. So, I’m happy to announce that Notre Dame scholar Patrick Deneen has a new book coming out, that one of the two greatest rock bands in the world today, Big Big Train, has a Christmas single coming out, and that portrait artist Anna Rose Bain just won a major award for her work. While Deneen’s book is explicitly conservative, Big Big Train and Bain are simply (and beautifully) creative. After all, we must seek whatever is excellent, whatever its origins or manifestations.
I very much look forward to hearing from you.
Yours, Brad Birzer
President, The American Conservative
If you look at what’s playing on your television, at what’s showing at the local cinema, at what video games your children are playing, or at what is selling in the young adult section of your neighborhood Barnes & Noble, you’ll see something that is at once deeply cultural and deeply countercultural at the exact same moment: Romanticism.
It’s difficult to know exactly where the movement started, though most historians and literary scholars would give the nod to Edmund Burke and his second great work, On the Sublime and the Beautiful. From Burke’s treatise, almost all modern Romantic thought arose. Burke’s presence is, at times, implicit and, at times, blatant in the works of such critical figures as Wordsworth and Coleridge, but it can be found throughout most of the Romantic poetry and art of the early 19th century. It’s not hard even to imagine Burke’s shadow lingering over Beethoven’s Sixth Symphony, the Pastoral. In his own writings on Western civilization, Christopher Dawson argued that the rise of Romanticism, whatever its excesses and failings, was as important to Western civilization as the re-discovery of Hellenic thought in the Renaissance. Whatever its original and essential intent, Romanticism successfully saved Christianity from the utilitarianism and rationalism of the 18th century, Dawson continued. In its recovery of medieval Christianity in the early 19th century, the Anglo-Welsh Roman Catholic scholar asserted, the Romantics actually discovered “a new kind of beauty.”
From its earliest origins, one can trace Romanticism’s history through the 19th century and into the early 20th century through figures as diverse as Friedrich Nietzsche, G.K. Chesterton, and Willa Cather. Perhaps most important for Western culture, however, was its manifestation in the vast mythology of J.R.R. Tolkien.
Not surprisingly, especially given its origins in the thought of Edmund Burke, Romanticism, properly understood, is deeply conservative in its praise of the ancestors, in its idealization of the past, and in its admiration of folk customs as a greater wisdom than any one generation or one person can know. Romanticism is also, properly understood, deeply sacramental. Like all good things in this world, it can be perverted in and to varying degrees. Its love of the past and of one’s ancestry can become unthinkingly reactionary, its love of place can become pantheistic, and its love of the folk can become nationalistic and even, at times, downright fascistic.
In the 20th century, as noted above, the greatest expression of a proper romanticism can be found in the works of Tolkien as well as in the works of the other Inklings, C.S. Lewis, Owen Barfield, and Lord David Cecil. In terms of sales and influence, however, Tolkien has far exceeded his closest friends. For almost anyone under the age of 70, Tolkien is a champion of great art and high imagination. For an older generation—in general—he still, unfortunately, represents decadent hippiedom, magic mushrooms, and psychedelic tuning out.
Fundamentalists of all ages fear Tolkien, too, worrying that his discussion of magic and elves and dwarves is somehow a bit too dark and unchristian, perhaps created with noble intent, but the devil’s work nonetheless. After all, it was the Tolkien craze that spawned (or at least radically increased the popularity of) such games as Dungeons & Dragons and such music as Led Zeppelin IV. Once the province of nerds and nerds only, Dungeons & Dragons has become powerfully mainstream, and, as various scholars have argued, one can trace a rather direct line from Tolkien to D&D to modern video games. And Led Zeppelin’s music is now as much a part of Western civilization as is Beethoven’s, though often relegated to the wallpaper sounds of Muzak in our elevators of commerce and industry.
Assuming, fair reader of TAC, that you are not worried about losing your soul when listening to “Stairway to Heaven” or that demons lurk when spending an evening with friends pretending to be an elf as you roll the dice in some distant imaginary land, you might very well be curious as to what is good in this huge Romantic bent in fiction over the past century. Tolkien and Lewis, to be sure. But what about that great question: “and, after Tolkien?” If you’re asking yourself this—for you or your kids or grandkids—you’re not alone. As someone who has had the grand privilege of spending much of his academic career studying Tolkien, fantasy, and science fiction, I often get asked, “OK, after Tolkien, now what?”
It should be noted that there is a lot of mediocre literature for sale at Barnes & Noble (and all other fine book retailers). Indeed, there exists far more of the mediocre than of either the diabolic or the good. For this piece, I’ll avoid the mediocre completely. Be hot or cold, but “lukewarm, get away from me!” As to the diabolic, there are three authors whom serious lovers of Romantic literature should avoid: Philip Pullman, Michael Moorcock, and Stephen Donaldson, each of whom has intentionally set out to undermine, subvert, and pervert the Christian elements of Tolkienian fantasy. They are, to put it mildly, not only anti-Christian and anti-Romantic, but painfully so. They’re, to be sure, quite talented, but they use their talents in ways that undermine the very gifts of truth, beauty, and goodness.
As I list what to read “After Tolkien,” I must offer two caveats. First, almost no one has reached the literary quality of Tolkien’s writings, whether in his clever children’s stories, such as The Hobbit, or in his high fantasy, such as The Lord of the Rings and The Silmarillion. And, second, no one has reached the imaginative quality of Tolkien’s writings, either. For better or worse, these two must be givens as we consider “After Tolkien.” And these two might remain givens for the next several centuries.
Of all 20th-century fabulists, Ray Bradbury comes closest to equaling Tolkien’s literary and imaginative powers. Unlike his English counterpart, however, Bradbury excelled in the direct, sharp, and well-defined story. There is no meandering in a Bradbury story, no extraordinary quest, no prolonged journey. Bradbury would latch fiercely onto one idea or one image and write a short story around that one thing. “Life is short, misery sure, mortality certain,” Bradbury noted in the early 1970s. “But on the way, in your work, why not carry these two inflated pig bladders labeled Zest and Gusto.” He was a master of these two pig bladders, and even his novels—such as The Martian Chronicles and Fahrenheit 451—are really just compilations of short stories. One of Bradbury’s best, as well as most neglected, novels is his story of good and evil as represented and manifested in two young boys, Something Wicked This Way Comes. It might very well be the best Christian book written by a non-Christian in the 20th century.
If she had taken twice or even three times as long to write her seven books of the Harry Potter series, British author J.K. Rowling might have achieved a form of literary immortality. As they are, the Harry Potter books are extremely clever and relentlessly entertaining, but they will probably not be read a century from now. Some newer, more clever series will have taken their place by then. Still, for what they are—despite the worries of fundamentalist Catholics—the Harry Potter books are both pro-Western civilization and pro-Catholic. Well versed in the Western canon, Rowling peppers her stories with a vast number of specifically Catholic symbols: the blood of the unicorn bringing everlasting death upon those unworthy to drink of it; the rebirth of the phoenix named, ironically enough, Fawkes (Guy Fawkes was a 17th-century Roman Catholic terrorist lampooned by British Protestants); the use of a variety of saints’ names such as Hedwig, Mungo, and Brutus; the communion of saints in the form of Harry’s family in his first direct battle with Voldemort in a graveyard. And this is just a short list. Perhaps most tellingly, the wizards became illegal and had to go into hiding as recusants in 1689, the very same year (in reality) that Catholics had to do the same in Britain.
For those of a certain age, Terry Brooks will always be remembered for writing his Shannara trilogy in the late 1970s and early 1980s. Brooks openly borrowed from Tolkien, and, at the time, many critics lambasted him for this even as thousands upon thousands of kids bought his books, eagerly hoping to find more adventures of romantic heroes. Since the early 1980s, the Shannara universe has grown brilliantly, and Brooks—rather open about his own Protestant Christianity—has grown equally brilliantly as a writer. As with much of fantasy (and some Protestantism), Brooks too often veers into Manicheanism, but, then again, so did Saint Augustine. His universe is based on a seemingly never-ending war between the Word and the Void. The Word seeks to leaven all life, while the Void seeks nothing but annihilation. Far from the winding sentences and paragraphs of the first few Shannara books, the more recent ones are pithy, honed, and direct. As with Tolkien, Brooks does an excellent job exploring the good of good while not basking in the evil of evil.
Though remembered more for his nonfiction, such as The Conservative Mind and The Roots of American Order, Russell Kirk produced some of the most powerful fiction of the last century. In terms of his short stories, one might very well imagine the power of a Bradbury with the morality of a Flannery O’Connor. These, too, deal with good and evil, though Kirk is at his best when describing noble sacrifices. His best book, however, is a dark but powerfully Christian fantasy called Lord of the Hollow Dark. In it, Kirk places all of the major figures from the plays and poems of T.S. Eliot at a Scottish castle dedicated to Satanism and the performance of a black mass. Not surprisingly, the dialogue is intellectually rigorous while the plot remains riveting. It is a rare achievement of high philosophy, fantasy, and theology and deserves a much wider audience than it has thus far received.
Strangely enough, Stephen King—who shared a literary agent with Kirk—took inspiration from him. While the two men held very different political views, they saw good and evil in much the same way, and each revered place (Michigan, Scotland, Maine) in a way usually reserved for the most traditionalist of traditionalists. The great difference between the two authors, of course, is that King wallows in the decadence and immorality of evil (though he is equally good at evoking the good and the heroic). Whereas Kirk might allude to a murder, for example, King gives us five or six pages of gruesome description of that murder. Five such pages can readily change the entire tone of a novel. King’s best novel—in terms of literary quality and imaginative power—is ’Salem’s Lot, his rewriting of Bram Stoker’s Dracula, set in small-town Maine.
While no one has equaled the literary achievements of Tolkien, a healthy romanticism remains alive and well in Western civilization. Long may it counter the dreary and dreadful world of the progressives.
If one were merely to glimpse the life, work, and reputation of Margaret Atwood, one could not be blamed—or then again, easily forgiven—for thinking she’s just another radicalized ideologue from the bygone days of the 1960s, one of the many cookie-cutter feminists who invaded academia in the subsequent decades.
When she published The Handmaid’s Tale in 1985, professors of women’s studies across North America embraced her with a sycophantic love bordering on the cultish. And many of the sources she employed in her famous six Cambridge University Empson Lectures in 2000—as a typical example of her academic work—reek of predictability: Isaiah Berlin, E.L. Doctorow, Peter Gay, John Irving, D.H. Lawrence, Claude Levi-Strauss, Alice Munro, Sylvia Plath.
Ugh. Utterly boring and disappointingly unoriginal.
But a closer look at her Cambridge lecture sources reveals a bit more. In addition to the unenlightened and unimaginative list of scholars above, there also lurk the works and ideas of L. Frank Baum, Lewis Carroll, Graham Greene, Stephen King, and, most wonderfully, Ray Bradbury and Ursula K. Le Guin.
Peter Gay? Again, ugh. But Peter Gay and Ray Bradbury? Far more interesting.
Then, browsing even the first several pages of the first Cambridge lecture, the reader is struck by a profound truth about Margaret Eleanor Atwood (b. 1939). Whatever dubious intellectual company she keeps, she is rather gloriously and absolutely her own person.
Physically quite striking as a woman in the latter half of her 70s, she likes to joke that while she might look like a “kindly granny,” she is anything but. Her neighbors even tease her that she looks best with a broom, sweeping the blustery October leaves. “Witch,” however, would not be the best word to describe her. These words work, however: brilliant, genius, quirky, funny, merciless, odd, gothic, rational, individual, personal, moving, witty, maddening, and eclectic. Whatever one might say or write about her, she is not and never was boring.
Thinking about her childhood, spent moving from place to place in the lesser-known reaches of Canada, she explains what she believes to be the source of her imagination:
Because none of my relatives were people I could actually see, my own grandmothers were no more and no less mythological than Little Red Riding Hood’s grandmother, and perhaps this had something to do with my eventual writing life—the inability to distinguish between the real and the imagined, or rather the attitude that what we consider real is also imagined: every life lived is also an inner life, a life created.
The dreadfully uptight and haughty Peter Gay does not readily emerge from such a passage, but the irrepressible Ray Bradbury leaps from it in full ecstasy.
Yet however interesting her imagination, Atwood never dismisses or downplays her more rigorous and intellectual side. Indeed, Atwood describes herself in interviews as an 18th-century rationalist who just happens to have all kinds of voices and persons and stories floating around and interacting with one another in her head. She falls more clearly, though, into the broad camp of the humanists (Christian and otherwise). As such, she expertly sculpts, caresses, and condemns in her art the horrors and the achievements of the human person.
“Why is it that when we grab for heaven—socialist or capitalist or even religious—we so often produce hell?” she plaintively asks. “I’m not sure, but so it is. Maybe it’s the lumpiness of human beings.” Lumpiness, indeed. Neither Thomas More nor Russell Kirk could have said it better.
To explore the humanist aspect of Atwood, it’s worth reconsidering her most famous work, The Handmaid’s Tale, a story that has been made into a major motion picture as well as a forthcoming television series and that is read throughout high schools and colleges in the English-speaking world as gospel. Its playful faux-Latin motto, Nolite te bastardes carborundorum—“don’t let the bastards grind you down”—inspires the novel’s heroine to resist her enslavement.
When it first came out in 1985, The Handmaid’s Tale was both praised and condemned for being anti-male as well as pro-abortion. Those who loved it and hated it viewed it as an updated, feminist 1984. Whether it was a poor reflection or a logical extension of Orwell’s classic, no one much cared. It was what it was.
Little has changed today. Nearly every public school in the United States offers it as a modern classic, sometimes replacing and sometimes supplementing Brave New World and Lord of the Flies. Now it’s so pervasive that it’s taught in an almost perfunctory way. When pressed, however, those who teach it and those who read it claim to do so for the very same reasons as those who first adopted the book in the mid-1980s.
The Handmaid’s Tale has become a significant artifact of North American postmodern culture. It’s hard to imagine, for example, the myriad of shelves dedicated to young-adult fiction at your local Barnes and Noble without the influence—however indirect or incorrect—of The Handmaid’s Tale. After all, in our age of intellectual stagnation, who better to destroy patriarchal oppression than a noble and brave teenage girl, a postmodern Joan of Arc?
But this is a most superficial reading of the novel. In reality, the story is as complicated as anything Huxley or Orwell wrote. Indeed, in many ways, The Handmaid’s Tale is the best dystopian novel written thus far, even better than its predecessors, in part because it builds so effectively on what came before it. As a grand work of art, it is deep. The story moves rapidly, but the symbolism and nuances take innumerable readings to discover. Without question, it is far too deep to be categorized in the simplistic terms of left or right.
I read it the first semester of my junior year in college. Not surprisingly, as this was 1988, I had to read it for a course on the history of women in America. While written by a Canadian, The Handmaid’s Tale served as an updated Scarlet Letter; we had begun the course with colonial women and the plight of those living in New England. Though I had devoured science fiction and dystopian fiction for years at that point—they were my favorite genres of literature—I suspected The Handmaid’s Tale to be some sick joke of a politicized feminist imposition on the sacred realm of intellect and art. I was still proudly wearing my anti-PC button on my buffalo-check-lined jean jacket in those days.
And yet what I found in the novel had nothing, at least at its most fundamental level, to do with imposing any ideology on the reader. As with all such dystopian fiction, it served as a new type of warning. Indeed, for those of us who grew up in middle-class Goldwater households in the 1970s and 1980s, The Handmaid’s Tale describes almost perfectly the two things we were rightly taught to fear: the fascist and communist tyrannies that had inflicted so much pain and suffering on the Western world, and the puritanical televangelists who were then emerging as cultural brokers for the New Right. While Pat Robertson might be more attractive than Stalin, each represented forms of control and unjust authority.
In The Handmaid’s Tale, Atwood imagines just what might happen should a culture on the verge of collapse embrace the very tyranny it had struggled against throughout much of the century. What if, after defeating the Nazis and the Communists, the United States succumbed to a new Cromwell, one who is shiny and glittering even in his despotism? Near the beginning of the novel, the heroine—who has been made a sort of demonic anti-nun through no fault of her own—describes her mistress:
It’s one of the things we fought for, said the Commander’s Wife, and suddenly she wasn’t looking at me, she was looking down at her knuckled, diamond-studded hands, and I knew where I’d seen her before. The first time was on television, when I was eight or nine. It was when my mother was sleeping in, on Sunday mornings, and I would get up early and go to the television set in my mother’s study and flip through the channels, looking for cartoons. Sometimes when I couldn’t find any I would watch the Growing Souls Gospel Hour, where they would tell Bible stories for children and sing hymns. One of the women was called Serena Joy. She was the lead soprano. She was ash blond, petite, with a snub nose and huge blue eyes which she’d turn upwards during hymns. She could smile and cry at the same time, one tear or two sliding gracefully down her cheek, as if on cue, as her voice lifted through its highest notes, tremulous, effortless. It was after that she went on to other things. The woman sitting in front of me was Serena Joy. Or had been, once. So it was worse than I thought.
It would be impossible for any reader of my age and background not to visualize with dread the Commander’s Wife as anyone other than the late evangelist Tammy Faye Bakker. And yet one cannot stop there. Though Atwood repeatedly read 1984 and Darkness at Noon as a high-school student, nearly memorizing each, she also pursued doctoral studies at Harvard under the famous scholar of the Puritans, Perry Miller. In the early 1980s, Atwood lived and studied in West Berlin, taking a side trip into the communist East. Utterly horrified by the crippling leviathan of communism, she found the inspiration for her own dystopian novel, set in a new Puritan New England.
None of this should suggest that feminism does not inform Atwood’s fiction. It most certainly does. But to limit her fiction to a feminist interpretation is to distort almost beyond recognition Atwood’s deep and creative individuality. When asked about her own views of feminism not long after the astounding success of The Handmaid’s Tale, Atwood answered with her characteristically eccentric caution against all oppressions, left, right, above, or below—rather clearly to the surprise of the interviewer:
But I’m an artist. That’s my affiliation, and in any monolithic regime I would be shot. They always do that to the artists. Why? Because the artists are messy. They don’t fit. They make squawking noises. They protest. They insist on some kind of standard of humanity which any such regime is going to violate. They will violate it saying that it’s better for the good of all, or the good of the many, or the better this or better that. And the artists will always protest and they’ll always get shot. Or go into exile.
The Handmaid’s Tale proved an effective examination of the genre of dystopia. So, too, have several of her other tales of a horrific and bizarre Moreau-esque future: in particular, the MaddAddam trilogy—a play on Genesis—consisting of Oryx and Crake (2003), The Year of the Flood (2009), and MaddAddam (2013).
By “flood,” Atwood is not referring to the biblical deluge but rather to the genetic manipulation of the human species into something less than human in both the immediate and far future. Though she does not generally refer to the thought or work of C.S. Lewis, except in derision of his female characters—“fond as he was of creating sweet-talking, good-looking evil queens”—her MaddAddam trilogy reflects Lewis’s The Abolition of Man and That Hideous Strength. “All long-term exercises of power, especially in breeding, must mean the power of earlier generations over later ones,” Lewis wrote in the third part of The Abolition of Man. “What we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.”
In the MaddAddam trilogy, one generation of corporations and their government allies play too deeply with the genetic code, thus ending man and beginning him again as something new and alien. Though the ghost of Lewis lurks over this trilogy, so do those of H.G. Wells, Aldous Huxley, and Arthur C. Clarke.
When Snowman—Jimmy, the protagonist of Oryx and Crake—first describes the children who find his humanity so bizarre, uncomfortable, and simultaneously intriguing, Atwood writes in a vein that would have made Huxley blush: “Still, they’re amazingly attractive, these children—each one naked, each one perfect, each one a different skin colour—chocolate, rose, tea, butter, cream, honey—but each with green eyes. Crake’s aesthetic.”
That Atwood so readily engages previous writers in the fantasy and science-fiction genres makes her only more interesting, not less. In everything she writes, as the above passage reveals so clearly, there is at once something deeply familiar and disturbingly alien. It is one of her greatest gifts as an artist.
As one of her many loveable quirks, Atwood insists on defining genres differently than do the PR flacks for her publishers. Though one might readily label much of what she writes as “utopian” or “dystopian,” Atwood believes all utopias and dystopias are of a whole, calling them “ustopias.” Additionally, she believes her fantastic literature is not “science fiction” but “speculative fiction.” She fights vehemently on this last point, noting that her fiction never involves things that simply could not happen or that simply have not yet been invented. Every aspect of her fiction, she claims, is possible, here and now.
As I mentioned earlier, photos of Atwood taken over the last several decades reveal what a beautiful woman she is. What is most striking, however, are her eyes. Her eyes radiate intelligence and mischievousness. Truly, they are a gateway to her soul. And very bright indeed must that soul be.
Art is messy, and artists are even messier. Somehow, though, this Canadian has managed to harness the messiness of her mind and her soul in her art. Blissfully, in Atwood’s imagination, there is no one way of doing all things, and no one way of thinking about all things. If we conservatives and libertarians cannot embrace the diverse and unique art of Margaret Atwood—whatever way she votes and to whatever charity she gives—we have lost our own ability to be ourselves and celebrate the good in life.
Bradley J. Birzer holds the Russell Amos Kirk Chair in History at Hillsdale College and is the author, most recently, of Russell Kirk: American Conservative.
Every once in a while, something utterly profound comes along, even in the wasteland that is TV culture. For those of us who came of age in the 1980s, there’s been no greater rush of nostalgia in recent years than that provided by Netflix’s delightful and ever-engaging eight-part series Stranger Things.
In almost every way, Stranger Things captures a brief slice of time perfectly, especially for those of us who attended junior high and high school during Reagan’s first term in the White House. The show takes place over just a few days, beginning November 6, 1983, in the mythical but all-too-real town of Hawkins, Indiana. It follows the heroic actions of four seventh-grade boys, a mysterious girl who arrives in town, some siblings and their friends and rivals, a divorced mom, and a broken sheriff. There’s also an ominous modern building, a Department of Energy complex, looming over the normally quiet Hoosier town. Surrounded by barbed wire and armed guards and adorned with neo-Stalinist architecture, the building stands out dramatically in the landscape of Hawkins, much like the dilapidated Bates house overlooks its accompanying motel in 1960’s Psycho.
Even the season of the show matters, as November 6 is a date situated in the twilight realm between Halloween and Thanksgiving. No longer colorful or attractive, the remaining leaves on the trees merely hang dead, shriveled and brown, awaiting execution from the inevitable first snowfall.
Stranger Things artfully meshes elements of late Cold War-era Midwest Americana, Reagan-driven affluence, libertarian paranoia, New Wave and progressive electronica music, mad science and Leviathan, Dungeons and Dragons, John Carpenter movies, John Hughes movies, David Lynch movies, Hitchcock movies, Stephen King novels, and The X-Files to create a complete and satisfying work of art. Phew. Some critics have railed against the show for stealing the work of others, and it would be impossible to deny the charge, yet this is also what makes the show so brilliant in so many ways. Exactly because it relies on so much nerd culture of the early 1980s, Stranger Things is as comfortable as it is unsettling.
The key to the entire epic, though, is its reliance on the essential nerd game of the early 1980s, Dungeons and Dragons. The five junior-high protagonists are best friends, and they begin and end the eight episodes while playing a DnD campaign. From the moment we first see the boys, they are held in rapt attention by their Dungeon Master—Mike, the author and referee of their game. As they begin, the most menacing of DnD monsters, the prince of demons, the Demogorgon, has arrived. As soon as it appears with its two menacing heads, the players fall into a panic, with one player, Will, sacrificing himself for the group, attacking the Demogorgon rather than protecting himself. His effort fails, though, as he’s rolled only a seven out of 20, not enough to destroy the newly emergent beast. A mother calls for supper, adding additional chaos to the enclosed world of fabulism, and the game must end, despite no satisfying conclusion for the boys.
As Will rides his bike home in the dark—something we always did in 1983—the real Demogorgon, having been unleashed by the machinations of the Department of Energy, emerges in a part of town the boys refer to as “Mirkwood” and takes Will captive, carrying him off to his underground lair.
Reality becomes Dungeons and Dragons, and Dungeons and Dragons becomes reality at the very beginning of Stranger Things. The series opens with an accident—or so it seems—at the Department of Energy complex. Whether the government has unleashed a being from another dimension, or whether the government complex was intentionally placed on top of a portal to another dimension, remains unclear throughout the story. Instead, the viewer knows only that two things emerge at the same time: an eleven-year-old girl named Eleven (presumably after the famous scene involving guitar amplifiers that “go to eleven” in Spinal Tap) and the faceless monster that does nothing but kidnap and devour. Whether these two beings represent the two heads of the Demogorgon or whether they are merely examples of good and evil also remains unclear throughout the series.
What is clear is that the little girl was a normal human girl abducted by the government and raised by the head bureaucrat—named “Poppa”—while being experimented upon repeatedly. Now endowed with incredible powers of telekinesis, she behaves as one would expect an abused child to behave. She is at once stunningly brilliant and seriously damaged.
Again, though, what allows all of this to work—from the government, to the monster, to the small town—is the bookending game of Dungeons and Dragons.
Created in the late 1960s and early 1970s by a number of wargamers, but especially by the Wisconsin genius Gary Gygax (1938–2008), DnD became the game of choice of all outsider, geek, and nerd kids (especially boys) in the late 1970s and early 1980s. The jocks wouldn’t play it, of course, and neither would the druggies. The former were too busy looking good, and the latter too busy being spaced out. Instead, DnD was the exclusive game of all of the “gifted” kids, those deemed hideously uncool by the majority of their peers, those who watched Battlestar Galactica and Doctor Who and who read J.R.R. Tolkien and Terry Brooks without apology and who wrote their seventh-grade theme papers on the effects of atomic warfare on Nagasaki, a space colony on Mars, and the dangers of acid rain. These were the same kids who went to Rush concerts (with the stoners!) if their parents would let them, and who thought Alien, The Thing, and Blade Runner the greatest movies in the history of cinema. They probably read Starlog and collected comics as well.
Though the story of Gygax is, ultimately, a rather depressing one, he did manage for a while to combine, successfully, the excitement of fantasy (especially the American pulp and horror works of Robert E. Howard, H.P. Lovecraft, August Derleth, and Fritz Leiber) with the intensity of war gaming, creating an all-too-brief gaming empire, TSR, out of Lake Geneva, Wis.
As mentioned above, this show beautifully blends much of the past to make a very artful present. Without giving away too many plot elements, the story’s themes deal with a natural and healthy fear of government, the omnipresent tapioca conformity of the American middle class, the need for heroism at all times (no matter the cost), the little things that make community work, and the brokenness of each individual person. While the story has elements of humor, it is a dark and unhappy story without a fulfilling resolution. It also earns its TV-14 rating at times, with some of the creepiest situations imaginable and also some not-necessarily-historically-inaccurate sexuality among Midwestern teenagers.
From the opening minute to the closing, nostalgia washed over me in waves as I watched all eight episodes. Now in the second half of my 40s, I find it hard, if not impossible, to look back to the early 1980s and not see the glorious days of friendship and innocence, Dungeons and Dragons, and an almost complete absence of cynicism.
While I visit now only when time and work permit, I once had the grand privilege of living in Hawkins, Indiana. Stranger Things has allowed me to remember the fine citizenship I once possessed in that gloriously broken and imaginative community.
Bradley J. Birzer is the Russell Amos Kirk Chair in American Studies at Hillsdale College and author of the biography Russell Kirk: American Conservative.
In late June I had the honor of giving a lecture on the civil-rights movement. When the sponsor initially asked me to speak on this topic, I readily agreed, stupidly presuming that he meant the era of Reconstruction, immediately following the Civil War. When I realized that he meant the civil-rights movement of the 1960s, I was startled just a bit, wondering if I should back out: I have very strong opinions about 1964 and 1965, but I have next to no expertise on the subject.
In graduate school, as well as in courses I’ve taught, I’ve extensively studied the colonial origins of slavery, the debates around the institution in 1787, and the innumerable laws and codes leading up to the Civil War. I’ve read and studied the history of black soldiers in the Civil War and in the Indian Wars (the Buffalo Soldiers), as well as the noble history of the Exodusters of the Great Plains. And, being a dabbler in literature, I’ve read Ralph Ellison, Zora Neale Hurston, Langston Hughes, and James Baldwin. One of my all-time favorite books is Malcolm X’s Autobiography, a book I firmly believe should be read by all Americans.
All of this adds up to me knowing lots of stuff, very little of it about the actual civil-rights legislation of the mid-1960s.
Wisely or not, I decided not to cancel the talk. Instead, I immersed myself as much as possible in the life of Martin Luther King Jr. and the immense frustrations experienced by the American black community leading up to the 1960s. What I found absolutely fascinated me. I came to realize that historians have made two assumptions about 1964 and 1965, neither of which, I think, is completely true. First, historians have generally attributed the decrease in racialism—a controversial point, to be sure—to the passage of civil-rights legislation; and second, consequently, historians have argued that the 1964 and 1965 legislation ushered in a new era. That is, they have seen the legislation as the beginning of something profoundly new and radical.
In response to the first claim, let me state what seems painfully obvious—we are far from a racially free, color-blind, or non-discriminatory society. While my belief about this is purely anecdotal, I think it’s a fair assessment.
When I came of age in the 1980s, I rarely if ever heard racial or bigoted epithets. Granted, I grew up in a devoutly Catholic family that stressed the equality of every single person—regardless of the accidents of birth—before the eyes of a loving and creative God. (I also grew up in a Goldwater family that cherished the excellence of the individual as individual.) I certainly can’t imagine anyone in my family or in my school—my grade-school principal was a fierce and brilliant Dominican nun, and I both loved and feared her—expressing beliefs defending some kind of racial inequality. Such things were simply not said. If thought, they remained unspoken.
Things were far from perfect when it came to race relations in the 1980s, but true discrimination and hatred seemed to me a thing of the past. Now, in 2016, I can’t say the same thing at all. Indeed, I believe that we Americans are far more racist and race-conscious than we were 25 or 30 years ago. Bigotries, often disguised as righteous anger, fly left and right in this insane whirligig of a world. We have once again—as a culture—embraced an “us and them” mentality. Black, white, cops, victims.
I’ll make a second argument regarding this first claim. Not only do I think the legislation of 1964 and 1965 did not produce racial equality, but I actually believe such legislation might very well have solidified the inequality of the time. I can’t help but think of Barry Goldwater, who voted against the 1964 act in the Senate while quietly funding lawsuits against white business owners who discriminated against blacks in the 1950s and 1960s. Goldwater understood that real change comes from action and the changes of heart, soul, and mind, not from the passage of legislation. For legislation to mean anything, it must follow the beliefs already accepted by its society. Otherwise, as Edmund Burke noted so wisely of the French Revolutionaries and their radical attempts to recreate the world, the law is supported only by its own terrors.
Finally, a third argument about racism then and now. If one believes in the superiority or inferiority of a person based on accidents of birth, one is simply not a conservative. A conservative, going back to Socrates, understands the individual dignity of every person, regardless of skin color or gender. Socrates might have spoken for the Athenians, but he also spoke for all of humanity when he stressed the need always to do good, never evil, and certainly never to do evil for the sake (as it seemed) of good. The true conservative, with St. Paul, believes that the divine image in which we’re made transcends Greek and Jew, male and female. The true conservative, with Martin Luther King Jr., recognizes that we must judge another by the content of his character, not the color of his skin. The true conservative, with Robert Nisbet, recognizes that racism is entirely a modern construct, the result of perversions of science.
Less personally, let me make the second argument—that the passage of the legislation in 1964 and 1965 seems much more the conclusion of an era than the beginning of one. The two dominant personalities in the black community—Malcolm X and Martin Luther King Jr.—each represent very serious parts of the American and Western traditions. Far from being unique and revolutionary, they each gained immensely from the past and its successes, as well as its failures.
The brilliant and jaw-dropping opening to The Autobiography of Malcolm X reveals much more than its mere words might at first indicate. As X describes the KKK raid against his pregnant mother in Omaha, Neb., he argues that he knew from the earliest moment of his awareness that his life would end in violence. Though Malcolm X rejected the name and the faith of his father, he embraced the republican tradition of violence so pronounced throughout American history. His response to racism differs very little from the response of the men of Lexington to the invasion of 6,600 British soldiers in 1774. Blacks, as exemplified by Malcolm X, embraced the republican notion of protection of hearth and home in the 20th century. They are little different from other Americans in this respect: they just came to their violence later.
In so many ways, X was not only a reflection of the Lexingtonian of 1775, but, even in his personal ethics, of the English Puritanism of the 1640s. In a brilliant and wonderful scene in Spike Lee’s biopic of X, Lee has two FBI agents surveilling X, noting that X seems a perfect saint compared to the rather worldly desires of Martin Luther King.
Though almost the same age as X, Martin Luther King Jr. embraced a very different tradition. As he noted in his “Letter From a Negro Brother”—remembered popularly as the “Letter From Birmingham Jail”—the movement for black equality in the United States worked because it had embraced and integrated the personalist and nonviolent movements of the Western tradition. In his brief letter, MLK draws upon great Western figures from Socrates to St. Augustine to St. Thomas Aquinas to T.S. Eliot. Indeed, King sounds almost like Russell Kirk in his letter. They draw upon the same sources, and they each embrace the witness of virtue against the irrationality of bigotry.
My point in writing all of this is far from profound, I suppose, and the events of last week—one of the most depressing news weeks in my adult life—made me really understand that I truly am a white guy from a small, idyllic Kansas town who has been sheltered from many of the horrors of bigotry and violence in our modern world. Still, if we want a world free of bigotries based on the accidents of birth, we must know our history. It’s not enough to claim that the two pieces of legislation passed in 1964 and 1965 solved the problem. At best, they tempered the problem, and, at worst, they stopped real and lasting societal and civic progress dead in its tracks.
The two heroes of the black movement in the 1950s and 1960s—X and MLK—were profoundly interesting men. By their own accounts, however, they had not created anything new, but rather embraced the best of the past. Though one stood for violence and the other for nonviolence, they each represented deep and abiding strains and tensions in the Western and republican traditions. Far from being the harbingers of brave new worlds, they each saw the hope for equality and liberty in the past.
While I do not believe violence is the way to end racialism, I do recognize that the tensions and eruptions of anger over the last several years have very real roots. If we are to keep violence from becoming the resolution of such problems, we must be honest about ourselves, our neighbors, our laws, and, perhaps most importantly, our history and our mores.
Bradley J. Birzer is the Russell Amos Kirk Chair in American Studies at Hillsdale College and author of the biography Russell Kirk: American Conservative.
Though sadly forgotten by almost everyone today—with the exception of a few sociologists and other academics, and a few conservatives and libertarians, here and there—Robert Nisbet once stood as a leading public intellectual, respected and admired in the media and throughout western universities. Even histories of conservatism and the right, such as George Nash’s magisterial The Conservative Intellectual Movement in America Since 1945, have generally ignored or underplayed Nisbet’s contributions to the post-war movement.
Yet one example suffices to show just how vital he was to the conservative movement of the 1950s and 1960s. In late 1953, after the publication of Russell Kirk’s The Conservative Mind and Nisbet’s The Quest for Community, an executive at General Motors, Jay Gordon Hall, contacted Kirk for the first time. Did he know of a wonderful book by a California scholar, Robert Nisbet? As it turned out, Kirk and Nisbet had already corresponded and developed a deep respect for one another, a respect that would last until the death of each.
But Hall’s contact with Kirk turned out to be fortuitous, as it was Hall who would over the next half decade introduce Kirk and William F. Buckley to a politician emerging nationally out of Arizona, Barry Goldwater. Or, as Goldwater called Hall, “Sir Jay.” Further, Kirk sent copies of The Quest for Community to T.S. Eliot (who wanted the book for his firm, Faber and Faber), Leonard Read of the Foundation for Economic Education, W.T. Couch of Collier’s Encyclopedia, and B.E. Hutchinson, board chairman of Chrysler.
In return, it was Nisbet who (mostly) secured a Guggenheim Fellowship for Kirk in 1954. His praise of Kirk is worth noting at length:
I do not have to tell you of the extremely high regard in which I hold your judgment on all philosophical and humanistic matters. For that reason, I take understandable pride in a book [Quest for Community] that receives the high measure of your praise. I have been delighted by several of the reviews my book has received but by none so much as your own. Yours is certainly the most authoritative view of the book that could be written by a living American and you have written it with all of your habitual insight and sparkling eloquence.
It was not just Kirk’s mind, however, that Nisbet admired so much. He felt that no writer in the English-speaking world possessed a greater love and understanding of language than did Kirk.
Never content with the state of the world and desirous of reforming it through his own writings, Nisbet wrote and edited over 20 critically-acclaimed books and dozens and dozens of articles, essays, and book reviews. He also wrote a number of broad pieces for outlets as diverse as the Wall Street Journal, Commentary, and Harper’s. He spent his professional career at the University of California campuses at Berkeley and Riverside and at the University of Arizona, and he retired from Columbia University. He held prestigious academic chairs and even gave the 1988 Jefferson Lecture, a lectureship sponsored by the National Endowment for the Humanities that has honored such luminaries as Robert Penn Warren, Forrest McDonald, Walker Percy, Tom Wolfe, and Wendell Berry. The lecture eventually became one of Nisbet’s most popular and penetrating books, The Present Age (now published by Liberty Fund).
While always careful in his scholarship, Nisbet could write with an acid-tipped pen, especially when dealing with controversies of the moment. His most interesting non-academic work was certainly the aptly-named Prejudices (1982), which offered thoughtful essays on everything from abortion to corruption to individualism to snobbism. Profoundly and traditionally conservative in the most appropriate sense of that word in an ideological age, Nisbet let his iconoclastic views come out most stridently in his public writings. In these many essays, for example, he feared the Soviet Union as much as he feared Christian televangelists. One, he believed, corroded us from the outside; the other, from within.
Though it is never exactly clear when, Nisbet came to embrace a rather Burkean view of the world, especially as he cast his eyes over Nazi Germany, the Soviet Union, and Communist China. “Insight into the nature of the totalitarian mind, complete with its passion for centralization and uniformity, for rationalist extirpation of tradition and prejudgment, and for an absolute moralism that would extend when necessary to terror was not so easily come by in the late 18th Century,” he gushed. “We owe Burke much for this first insight.” Further, Nisbet noted, “few minds of stature have ever given more brilliant witness to rights, liberties and equities in the affairs of government” than had Burke.
Liberally educated and a proponent throughout his life of the great western conversation from Socrates to the present, Nisbet advanced three significant ideas in his writings.
First, a specialist in the establishment and history of social institutions, Nisbet is best remembered as a sociologist. Far from the cut-and-dried sociology rampant for so long in the 1970s and 1980s, Nisbet looked at the nearly uncountable and unaccountable nuances in social norms and mores, universal in communities but uniquely manifested in each individual community. In his work, he anticipated almost every aspect of the current communitarian movement in sociology and religious studies and the neo-republicanism of present-day philosophers.
Second, a historian of ideas, Nisbet traced the nearly untraceable influence of one thinker or another on thinkers of later generations. In several of his works, as Kirk had done, Nisbet evaluated Socratic and Greek notions as remade and re-manifested in later western civilizations, taking into account the roles of societies as well as individuals, from Xenophanes to Cicero to Sir Thomas More to Herbert Spencer to Winston Churchill. In his own intellectual histories, Nisbet treated ideas of war, progress, ethics, and economics as central controversies, questions worth asking about what promotes and what hinders human flourishing.
Finally, it would be difficult to find any scholar in the 20th century who better analyzed and deconstructed the modern nation-state. In his many works, Nisbet employed the scholarship and ideas of many 19th-century (especially Alexis de Tocqueville) and early 20th-century thinkers (such as Christopher Dawson) to see the modern nation-state as something unique in history. Though it had antecedents in the god-kings of the ancient world, the modern nation-state does away with all pretense and exerts extraordinary control over its citizens in the workplace, the schoolroom, and the bedroom. Armed with media technology as well as the most advanced weaponry known in human history, the modern nation-state manifests itself in every aspect of a person’s life, utterly annihilating private spheres. Although Nisbet—a World War II veteran and sometime philosophical anarchist—despised this aspect of modern history, he also could examine it with a cold and scholarly eye.
Since his death, of course, the American state at home and abroad has metastasized. What Nisbet feared as the totalist state has become not only the reality, but perhaps the certainty. His voice was the voice of the prophet as well as the poet. Let us hope it was not the last such voice.
Bradley J. Birzer is the Russell Amos Kirk Chair in American Studies at Hillsdale College and author of the biography Russell Kirk: American Conservative.
“I happen to believe that you can’t study men; you can only get to know them.” So spoke C.S. Lewis’s William Hingest in That Hideous Strength (1945). The fascists murder this doomed curmudgeon only a few pages later.
Hingest, of course, is correct. We really cannot study men. We can only get to know them. This is as true of those closest to us as it is of ourselves. The farther a person stands from our own daily reach, the harder he becomes to understand. Equally important, even the most introspective and the wisest among us barely know themselves.
A thought experiment: try to recreate everything you’ve done since you started reading this article. Every thought, every distraction, every movement, every feeling. Have you wanted some coffee? Have you thought about closing this page? Have you scratched that itch on the side of your head? Have you wondered if you should call the kids today? Have you thought about what you’ll do for lunch? Now, take each of these things we can barely reconstruct in the shortest moments of our lives—the impulses, the questions, the longings, the satisfactions—and multiply that by the minutes of the day, the days of the year, and the years of our lives. Then, multiply this again by seven billion distinct persons walking this world in any 24-hour period. Where to start? The possibilities, the decisions, the desires, and the frustrations are unaccountable and uncountable. No graph, no data set, and no equation can incorporate all of the complexities and nuances of a single human person, let alone seven billion of them.
We know names and dates and facts, and we often create a narrative to connect these varied and various things, but we surprise ourselves as much as we surprise others in our daily moments. Some of us are just better at hiding this surprise behind a practiced veneer. This mystery of the human person is as it should be. Every single person is vastly complex, known, perhaps, only to his creator, and, as J.R.R. Tolkien once mused, possibly to his guardian angel.
Though the Left has made a mockery of diversity, real diversity is always stunning and often glorious. Indeed, one of the most beautiful things about life is our individual ability to create, to imagine, to tinker, to innovate, to improve, and to see across the bounds of time itself. In that first grand story ever written about the western tradition, The Histories, Herodotus notes that every person lives only about 26,250 days, “and any one of these days brings with it something completely unlike any other.” Account for the shortening and lengthening of lifespan depending on available technology and standards of living, and Herodotus’ statement is as apt in 2016 as it was in the fifth century BC.
In all the genres of literature and in all the schools of scholarship, the non-fiction writer who best understands the human condition is arguably the biographer. Pick up three separate biographies—say, by David McCullough, by Joseph Pearce, and by Robert Utley—and read them with delight. Even the most cursory examination of each biographer’s subject reveals just how infinitely complex, nuanced, and subtle the human person can be. At the moment we believe we understand man’s motivations, we find that he is capable of even higher highs and lower lows. Man can paint the Sistine Chapel one moment and mow down his fellows in concentration camps the next.
The art of a biographer is a high one. She has the duty of honoring a person’s life by taking the subject (for good or ill) seriously and by judging it according not only to the standards of the time but also to the standards of the ages. She must be faithful to every name, date, and fact of a person’s life without becoming a mere antiquarian, a slave to the information. The subject might have kept extraordinary diaries during his late teens—a young man full of anxiety, full of passion, and full of life—but left no records for the next twenty years. How does the biographer faithfully render judgment, knowing that few women or men escape their youth without some mischievousness? Or, perhaps the subject behaves charitably toward ninety-nine folks but treats just one with seething contempt. Do we dismiss the ninety-nine because of the evidence of the one? If a subject expresses one view at age 30 but another at age 60, do we merely overlook one, or privilege one, or mock one? Who casts the first stone?
Because of the sheer complexities of each person—subject as well as writer—the biographer must always and everywhere be poetic, connecting the things that are seen with those that are unseen. When we read a great biography, we instinctively know it is such. Why? Because we have met the subject as well as the biographer in the work, and they each make themselves known to us, at least to the extent they are capable, in all their excellences and failings and in the spaces they left blank out of humility, and those they connected because of imagination.
The biographer makes the fact the story, and, along the way, the story becomes the fact.
Who can resist a story that begins with a regicide? Or, at the very least, the slaying of a man who would be king?
In his latest writing, noted journalist (Wall Street Journal, National Review), novelist, and journalism professor John J. Miller movingly tells the true story of James J. Strang, a nineteenth-century Mormon cult leader who attempted to form his own independent kingdom in the heart of Lake Michigan. As Miller so ably describes him:
Strang was one of the most colorful men of his time — a political boss who called himself a king, a cult leader who proclaimed himself a prophet, and a con artist who persuaded hundreds of people to move to a remote island and obey his commands. He emerged during a turbulent period of sectarian passion and frontier settlement, twin forces that helped give birth to what may remain as the greatest display of Christian religious diversity ever seen in the United States. During a six-month period in his early thirties, he converted to the new faith of the Mormons, launched an audacious bid to become their leader, and lost a power struggle to Brigham Young.
Miller’s gripping tale is a short one, published by Amazon as a “Kindle Single,” costing only $2.99 and taking around two hours to read. These singles provide a much-needed format: something longer and deeper than an academic or journal article but not as long as a full-blown book. One of the great joys of the internet, of course, is that one is not limited by the availability and cost of paper, binding, and printing. Amazon has effectively taken advantage of this niche, and great writers such as Miller have seen and embraced the new form.
As much as nineteenth-century Americans hated Catholics and often treated them as third-class citizens, Americans still somewhat—if reluctantly—respected Catholic history, daring (especially the Jesuits), and sheer tenacity. For the Mormons, however, Americans held nothing but seething hatred. From the time that Joseph Smith proclaimed his new religion, Americans denounced him and his followers as con artists, a sham, and a terrible internal danger to the cohesion and spirit of the republic. From the early 1830s through the 1880s, Mormons became the quintessential scapegoats, the subjects of ridicule, government persecution, and mistrust. When Arthur Conan Doyle wrote his first Sherlock Holmes mystery, he set the background murder in Utah, with a vicious Brigham Young and his bloodthirsty and sycophantic Danite militia tyrannizing the settlers who only wanted a chance to make a life on western soil.
Born in 1813 in the so-called Burned-Over District of upstate New York, Strang was raised in a culture that had come to distrust all orthodox, institutionalized religion as something hateful—having witnessed first-hand the competing ministers of the Second Great Awakening who spent more time denouncing their Christian rivals than promoting the love of Christ. A bright young Jacksonian man, Strang decided early in life that he would love to be a Caesar or a Napoleon. History, science, and epic poetry captured the restless young man’s imagination.
This desire to conquer was also a part of American culture. Called filibustering in America, this impulse fed notions of manhood that often revolved around “fame,” the ability to create a new republic or nation. “Fame alone of all the productions of man’s folly may survive,” Strang wrote in 1834. More often than not, as in the case of Rhodesia or Honduras, filibusters fought in the name of republicanism while actually planting the seeds of empire. The founding of the Republic of Texas proved something of an exception; it was far more common to try to found a republic outside the territorial limits of what would become the continental United States. The Mormons—in every permutation—also proved an exception. While they did try to conquer abroad, they mostly desired to settle lands in what would become part of the union.
When an anti-Mormon mob assassinated Joseph Smith in Carthage, Illinois, a power vacuum quickly formed within the young religion. Brigham Young took control of the vast majority of Mormons, ultimately ending up in Utah, but Joseph Smith’s wife claimed a large share of the faithful, too, leading them to southwestern Missouri. Strang, however, considered the move toward Utah “a crackpot scheme, doomed to fail.” Instead, he and his followers moved north, eventually founding a Mormon settlement on Beaver Island in Lake Michigan. An angel, he claimed, commanded him to do so. Well, the angel at least commanded him to become a prophet and a leader, though the divine messenger left the details of this future somewhat vague. Strang distanced himself from Brigham Young by denouncing plural marriage (what the Mormons incorrectly call “polygamy”) as not only immoral but as unbecoming to a free people. The divine messenger also gave Strang his own set of golden tablets, which he would translate as “The Book of the Law of the Lord,” a set of rules by which civil society should live a godly life.
Almost immediately after creating a following, though, Strang began to cultivate his own sort of cult of personality. “Rather than earn the devotion of his followers,” Miller explains, “Strang demanded that his councilors take a comprehensive oath of fealty during a private ceremony, replete with the theatrics of secret handshakes and gestures.” Further, Strang, now styled as King, began the bizarre task of costuming himself as a monarch and creating a liturgy around his new court.
Miller’s description of Strang is darkly hilarious:
He held a wooden scepter and wore a bright red robe trimmed with white, perhaps looking a bit like Santa Claus. An entourage of men with various church titles surrounded him, like dukes, earls, and barons at a court. At the climax of the coronation, Adams placed a crown on Strang’s head. Witnesses described it as a metal circlet, but it was in fact cast of heavy paper and decorated with tinsel. According to one account, the royal costume was a hodgepodge of Masonic garb. It was supposed to make Strang appear as a Jewish king from the Old Testament. Strang was now the King of Beaver Island.
As the reader knows from the opening of Miller’s retelling, Strang’s end came unnaturally, and the entire piece brilliantly circles back to that opening shot to the cult leader’s head. Strang’s rule became openly creepy from his first moments as king. Tellingly, he reversed himself and embraced plural marriage, ultimately fathering fourteen children by five different women. He also established a series of draconian living codes, though he was quite lenient in matters of taxation.
Not surprisingly, those already living on or near Beaver Island saw the formation of this cult kingdom as a serious danger to the American republic. When they tried to challenge it through the law, Strang played this up as unjust persecution against a righteous man. He told himself and his people that all such persecution would only make him stronger. “Like Moses of old my name will be revered and men scarcely restrained from worshipping me as a God.”
Yet few remember Strang now. This is our loss. Not because Strang was god-like, but because he believed he was god-like. Miller beautifully tells the story and exposes the folly of such men. As I’ve noted, the story begins in regicide. But just who murders Strang and how, I’ll leave for the reader to discover. Miller is a truly fine writer—whether in an article, the full-blown mystery novel, the public policy exposé, or the Kindle single. From the first word of The Polygamist King to the last, Miller holds the reader in rapt attention. The plot grabs ahold of you and never lets you go. Amazingly enough, it’s all true—true crime at its best.