Why is it that today’s liberals have become the most ardent cheerleaders of arbitrary monarchy? Wasn’t liberalism born of the effort to limit arbitrary rule of a single, unelected ruler?
No, I’m not suggesting that the Left has suddenly decided that they regret the American Revolution. But, in nearly every leading liberal magazine, newspaper and blog, there is a growing excitement and hope that Pope Francis will change the Roman Catholic Church’s “policies” on birth control, the male celibate priesthood, homosexuality, gay marriage, divorce and (for some, at least, though far fewer) abortion. They have celebrated the appointment of Pope Francis as a sign that the Church is finally going to join the modern world, and fervently hope that he will simply declare that those teachings are no longer valid and embrace today’s accepted orthodoxies. They yearn for executive fiat.
It is striking to witness this palpable longing juxtaposed with the absence of any real concern on the Left about possible abrogations of the rule of law arising from President Obama’s decision to suspend the “employer mandate” until 2015, and the general support of the President’s assertion that, in the face of Congressional opposition, he has recourse to the “Pen and the Phone.” And, after a season of accusatory lamentation about Pope Benedict’s authoritarian treatment of the “Nuns on the Bus,” there has been deafening silence from the Left over the Obama administration’s decision to go to court to force the Little Sisters of the Poor to violate their conscience in accepting provision of contraception, abortifacients, and sterilization.
Liberalism was born, the story goes, as a reaction against arbitrary and unlimited rule by monarchs. Yet, today’s liberals seem to adore executive power when it’s used to effect their preferred ends, even hoping that one of the only remaining “monarchs”—the Pope—will single-handedly change the “rules” of the Church. They wish to exchange “fiat” in the sense of “let it be done” for “fiat” in the sense of “do as I say.”
(Of course, “conservatives” don’t escape from this general inclination—they tend also to be ardent supporters of expansive executive power when one of their own is in office, and it is generally conservative intellectuals who have been most interested in developing theories about active executive power.)
What happened to limited government, you might ask? I answer: exactly what liberalism promised. For, liberalism was never about “limited” government, but the pursuit and exercise of potentially limitless power toward seemingly “limited” ends of securing Rights.
The occasion of the 50th anniversary of the Beatles’ appearance on The Ed Sullivan Show has inspired some obligatory guffawing at those old squares who greeted the band with derision. One putdown stands out for the utter revulsion it conveys, and it came from none other than William F. Buckley, who wrote in the Boston Globe in September 1964:
The Beatles are not merely awful; I would consider it sacrilegious to say anything less than that they are god awful. They are so unbelievably horrible, so appallingly unmusical, so dogmatically insensitive to the magic of the art that they qualify as crowned heads of anti-music, even as the imposter popes went down in history as “anti-popes.”
At the risk of appearing willfully contrarian, I get where these critics were coming from, if only in a roundabout sort of way. I’m an enthusiast of early rock and all its British exponents, from both London and Liverpool; I appreciate and admire the Beatles just fine; etc. Yet at the end of the day I’m a Stones guy—and I can’t help but bristle when Beatlemaniacs diminish the Stones for their comparative lack of technical sophistication or proficiency. There is no end to my puzzlement at those who swear by the Beatles because of their proto-progressivity. Because here’s the thing: rock-and-roll really was retrogressive. Yes, even the Beatles.
Oh, I can just hear you out there. Look at George’s sweet jazz-guitar technique!
To which I can only respond, give me a freakin’ break.
George—a lovely player; in my opinion, the finest of the three Beatles guitarists—could never have hung with the likes of Barney Kessel, Herb Ellis, Les Paul, or Wes Montgomery, all of whose mastery of the guitar (in the 1950s!) far exceeded that of any rock-and-roller of the 1960s. This is to say nothing of Django or Charlie Christian.
For all the magic that the Beatles, with not a little help from the classically trained George Martin, created in the studio; for all their genius at crafting songs, there is not a chord or trope or motif of theirs that Cole Porter and George Gershwin would not have recognized. As Elijah Wald has noted, the Beatles did not so much push musical boundaries forward as they consolidated the earlier advances of other 20th century greats, from Louis Armstrong all the way to Tin Pan Alley. (I’m reminded of a bit of trivia I learned from Terry Teachout: the Beatles had mistakenly thought they were the first ones to end a tune on a 6th chord. Martin informed them that Glenn Miller already had.)
Again, don’t misunderstand: I’m a Beatles fan. I appreciate the unparalleled pop-cultural phenomenon that they were. But if I squint just a little, I find it easy to put myself in the shoes of someone who’d lived through hot jazz and hard bop, and who found the Beatles to be amateurish lightweights. In my own shoes, I would defend the Beatles without denying this fact. The amateurishness of rock music was a feature, not a bug. And it still is. If your passion for the Beatles stems from this outsize opinion of their technical competence, I regret to inform you, you’re doing it wrong.
I saved The Friends of Meager Fortune, the second novel I’ve read by Canadian Catholic author David Adams Richards, for the polar vortex. If anything can make Boston in January seem warm, it’s this relentlessly grim tale of the last days of man-and-horse lumbering, with horses crashing through the ice and bloodied hands freezing on the reins.
I’m conflicted about recommending the book. What is good in it is immensely powerful. The story of the doomed love of local failure/hero/failure again Owen Johnson and charity case/outcast Camellia Dupuis is suspenseful and deeply moving. Camellia is a luminous innocent who never becomes cloying. She’s gentle, in a profoundly ungentle world.
Even more moving, though, is the portrayal of the grim, death-shadowed men who work for Owen up on Good Friday Mountain, cutting down logs under shockingly dangerous and miserable conditions. The book would be worth reading just for the depictions of the horses, their pride and suffering, as they work themselves to death under the care of proud and suffering men. The economic suspense (will Johnson’s timber haul fail?) and the suspense of the work itself (who will survive the grim conditions on Good Friday?) are as tense as the romance, and the plot twists in these areas made me gasp several times.
And Richards acidly depicts the gossip and judgment of a small town, the way the gazes of our neighbors can destroy us.
Since the Middle Ages, the small town of Geel, Belgium, has had an eccentric but vital vocation: its inhabitants have created a safe home of sorts for the mentally ill. Inspired by St. Dymphna, patron saint of the mentally ill, Geel became a place of sanctuary for the mad: patients were lodged in the homes of local townspeople as “boarders,” and were expected to work alongside them and participate in family life. Mike Jay shares the town’s story at Aeon Magazine:
The family care system, as it’s known, is resolutely non-medical. When boarders meet their new families, they do so, as they always have, without a backstory or clinical diagnosis. If a word is needed to describe them, it’s often a positive one such as ‘special’, or at worst, ‘different’. This might in fact be more accurate than ‘mentally ill’, since the boarders have always included some who would today be diagnosed with learning difficulties or special needs. But the most common collective term is simply ‘boarders’, which defines them at the most pragmatic level by their social, not mental, condition. These are people who, whatever their diagnosis, have come here because they’re unable to cope on their own, and because they have no family or friends who can look after them.
Sadly, Geel’s patient population has been steadily declining—partly because of modern medicine and psychiatry, but also because of modernization’s effect on the familial, vocational life of Geel: “Few families are now able or willing to take on a boarder,” writes Jay. “Few now work the land or need help with manual labor; these days most are employed in the thriving business parks outside town … Modern aspirations—the increasing desire for mobility and privacy, timeshifted work schedules, and the freedom to travel—disrupt the patterns on which daily care depends.” Even as people remark upon Geel’s incredible familial, communitarian response to mental illness, the societal structures necessary for its existence are fading away.
The traditional community is often derided for its tribal instincts: for possessing a dangerous tendency toward discrimination and judgment. But Geel’s story exemplifies an idealized community: one in which care is dispensed freely and charitably within small, private associations. The needy find solace within a family structure, rather than within the solicitude of the state. As Jay notes, “The people of Geel don’t regard any of this as therapy: it’s simply ‘family care’.”
Geel also shows us community’s vital role in human mental health. Geel’s population is not huge, and its landscape is largely rural. This simplicity and closeness—to the land and to people—seems to have healing powers for the mentally ill. Even without medication, psychiatrists, and specialized care, “boarders” have flourished in Geel for hundreds of years. Perhaps what we really need, more than drugs or doctors, is human nourishment. “However we might categorise or diagnose their conditions, and whatever we believe their cause to be—whether genetics or childhood trauma or brain chemistry or modern society—the ‘mentally ill’ are in practice those who have fallen through the net, who have broken the ties that bind the rest of us in our social contract, who are no longer able to connect,” Jay writes. “If these ties can be remade so that the individual is reintegrated with the collective, doesn’t ‘family care’ amount to therapy? Even, perhaps, the closest we can approach to an actual cure?”
It’s a vital question to consider, especially as we confront urbanization and individualism within our culture. What happens if private associations begin to die away—if the familial and vocational structure of small communities erodes with the rise of more atomized lifestyles? Such lifestyles may bring larger paychecks and greater prominence, but if Jay is correct, they may also harm human flourishing.
“Apartheid is an affront to human rights and human dignity. Normal and friendly relations cannot exist between the United States and South Africa until it becomes a dead policy. Americans are of one mind and one heart on this issue.”
So said Ronald Reagan in his 1986 message to Congress vetoing the “sweeping and punitive sanctions” Congress was seeking to impose.
Reagan likened the sanctions to “declaring economic warfare on the people of South Africa.”
His Treasury Secretary James Baker said Sunday that Reagan likely regretted this veto. But having worked with the president on his veto message and address on South Africa, I never heard a word of regret.
Nor should there have been any.
For in declaring, “we must stay and build not cut and run” from South Africa, Reagan, whose first duty was the defense of his nation in the Cold War with the Soviet empire, saw not only the moral issue but the strategic imperative.
In 1986, there were 40,000 Cuban troops in Angola, where South Africa was a fighting ally and backer of anti-Communist Jonas Savimbi.
In Zimbabwe, Robert “Comrade Bob” Mugabe, having butchered thousands of the Ndebele followers of his rival Joshua Nkomo, was communizing his country. South-West Africa and Mozambique hung in the balance.
Reagan was determined to block Moscow’s drive to the Cape of Good Hope. And in that struggle State President P. W. Botha was an ally.
Second, as Reagan declared, the sanctions ban on sugar imports would imperil 23,000 black farmers, and cutting off Western purchases of natural resources would imperil the jobs of 500,000 black miners.
“The Prime Minister of Great Britain has denounced punitive sanctions as immoral and utterly repugnant,” said Reagan in July 1986. “Mrs. Thatcher is right.”
“Are we truly helping the black people of South Africa—the lifelong victims of apartheid,” said Reagan in his veto, “when we throw them out of work and leave them and their families jobless and hungry in those segregated townships? Or are we simply assuming a moral posture at the expense of the people in whose name we presume to act?”
One of the landmark studies of America, published 120 years ago, is Frederick Jackson Turner’s essay “The Significance of the Frontier in American History.” Turner’s essay was inspired by a line that appeared in the Census report of 1890:
Up to and including 1880 the country had a frontier of settlement, but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not therefore any longer have a place in the census reports.
Turner recognized that contained within this small passage of officialese was a momentous turning point in American history. The official announcement of the end of the existence of the American frontier marked “the closing of a great historic moment.” In Turner’s view, the existence of the frontier, and all that it entailed, constituted the deepest source of the American character—more than any other explanation, including even the Constitution.
Turner was a fervent Progressive during the years when Progressivism was gaining steam—indeed, he receives top billing in Richard Hofstadter’s study The Progressive Historians: Turner, Beard, Parrington. In Turner’s view, the role played by the frontier—and the type of values and attributes it fostered in Americans—was the root of the progressive thrust in American history, including, importantly in his view, the rise of the sense of American nationalism.
The wilderness has been interpenetrated by lines of civilization growing ever more numerous. It is like the steady growth of a complex nervous system for the originally simple, inert continent. If one would understand why we are to-day one nation, rather than a collection of isolated states, he must study this economic and social consolidation of the country.
What is striking in these and similar passages is how closely Turner’s analysis echoes the hopes and intentions of the Founders of whom the Progressives were often fervent critics. He particularly echoes the Hamiltonians who envisioned a “national system” that would draw the allegiances of people away from local and parochial identities through the soft but persistent pressure of a nationalizing economic and political order. Turner recognized that this thrust toward an increasing national identity would be achieved through the encouragement of the individualistic spirit of the American frontiersman. The John Wayne, Daniel Boone spirit of the self-standing, self-made, independent, free individual would, ironically, forge the conditions for a national identity and usher in the possibility and even necessity of the Progressive stage.
I picked up Irmgard Keun’s 1932 novel The Artificial Silk Girl at the Neue Galerie in New York, basically on a whim. It promised to be a dizzying tour of Weimar Berlin, last call before Hell and all that, from the perspective of a young, single woman whom the introduction compares to Madonna’s “Material Girl.”
Certainly our heroine, Doris, is materialistic in a certain sense. She pays her bills by dating men. Her closest relationship is with her stolen fur coat. (The letter she writes to the coat’s rightful owner is a terrific, tilt-a-whirl study in ambivalent amends.) But she isn’t hard-headed; her desires are a collage of sentiment and hunger. She maintains her girlish figure easily, since throughout most of the novel she can’t actually afford food. She writes her hopes and dreams in the notebook she’s covered with little paper doves:
I’m going to be a star, and then everything I do will be right–I’ll never have to be careful about what I do or say. I don’t have to calculate my words or my actions–I can just be drunk–nothing can happen to me anymore, no loss, no disdain, because I’m a star.
Kathie von Ankum’s translation is full of sharp, funny, cockeyed lines, usually describing men—”his usual politics is blonde,” for example. But Doris goes through some truly rough times, and the most memorable sections of the book are its most poignant. This book made me choke up over a dead goldfish: “Put him back in the water!”, this universal human desire to reverse the irreversible. There are parts of this book which sound like Walker Percy:
So they have courses teaching you foreign languages and ballroom dancing and etiquette and cooking. But there are no classes to learn how to be by yourself in a furnished room with chipped dishes, or how to be alone in general without any words of concern or familiar sounds.
I don’t really like him all that much, but I’m with him, because every human being is like a stove for my heart that is homesick but not always longing for my parents’ house, but for a real home–those are the thoughts I’m turning over in my mind. What am I doing wrong?
Perhaps I don’t deserve better.
The future does hang over this book, and thin acrid drifts of it waft through the novel here and there: Doris ruminates on being asked whether she’s a Jew; she gets caught up in the ecstasy of a political rally. Berlin is filled with the desperately poor, especially veterans. It’s a city of people who have slipped down many rungs of life’s ladder, and Doris begins to feel herself slipping too.
I ended this book loving poor Doris, and Keun seems to love her too. She strains to come up with some kind of demi-happy ending for her heroine, Doris who believes that “it is particularly those things you have stolen with your own hands that you love the most”; but she can’t quite reach happiness, and settles for chastening.
While much aspersion has been cast upon some of the leading villains who have engineered the latest imbroglio in Washington, D.C.—Ted Cruz, the Tea Party, the Republicans, among those most often named—it is at least instructive to stand back from the current moment and consider the curious status of representation itself in today’s political circumstance. For we have neither of the two proposed forms of representation that were debated at the creation of America, but instead a hybrid that, arguably, combines the worst of both without the virtues of either.
Mostly forgotten today is that a major source of contention during the original ratification debates between the Federalists and Anti-Federalists was the very nature of representation, and in particular, the role that would be played by elected officials along with their relationship to the citizenry. The debate especially touched on respective views of the organization of the House of Representatives, but more broadly implicated the very nature of representation itself. According to the Federalists—those who sought ratification and eventually carried the day—the Constitution aimed at the creation of fairly large districts with numerous constituents, the better to decrease the likelihood of passionate political expressions and participation by the electorate. Larger districts would, they hoped, make it more likely that only the most successful and visible people would be sufficiently identifiable by a larger electorate, ensuring the election of “fit characters” to office who, they also hoped, would be better able to discern the public good than if the entire body of the people had been gathered for that purpose.
The Anti-Federalists, by contrast, argued for relatively small and homogenous districts in which there would be frequent rotation in office and shorter terms (a year, at most), thereby ensuring that representatives would be drawn from the body of the citizens, and that there would be a close bond between constituents and their representatives. Rather than hoping for representatives who would be prominent, visible and “fit,” instead they hoped representatives would be drawn from the “middling” part of society, whom they believed would be less prone toward vices of the “great,” such as luxury and empire, and more likely instead to be people of “ordinary” virtue.
In short, the Federalists subscribed to a “filter” theory of representation, which they hoped would lead to political leaders who would be able to make decisions in the “public good” rather than be constrained by the narrow parochial interests of their constituents. They sought to encourage the formation of private-minded citizens who would pay relatively little attention to political matters, leaving it to competent “fit characters.” The Anti-Federalists advanced a “mirror” theory of representation, instead hoping that representatives would reflect the modest virtues of the yeomanry. They hoped to foster a high degree of deliberation and political discussion among the whole of the citizenry, favoring more local and deliberative forms of self-government.
The Federalists hoped that representatives, drawn from among the ambitious, would—whatever the differences of their regions and constituencies—all share an ambition for American greatness, and put aside differences in favor of crafting policies toward that end. The Anti-Federalists hoped for a numerous lower chamber of considerable contention, one likely to thwart the ambitions of the elite and instead keep the central government relatively ineffectual, while fostering strong local forms of political self-rule. The Federalists believed in a strong division of labor, in which “fit” elected officials would do the “work” of politics; the Anti-Federalists defended the role of “amateurs” in politics, believing that citizenship consisted in that ancient practice of “ruling and being ruled in turn.”
The Federalists—particularly Madison in the justly celebrated Federalist 10—argued that this form of representation, combined with a large geographic scale, would constitute the best means of combating the formation of “majority factions.” Their overarching fear was of a portion of the polity using the levers of government to effect its narrow ends.
The Anti-Federalists insisted that their version of representation would forestall the creation of a “consolidated” government, making frequent agreement at the federal level unlikely, while also fostering civic virtues and practices that would keep governance close to home. Their overarching fear was a powerful central government commandeered by the wealthy and powerful.
Today, we have combined parts of each theory and arrived at a highly unpalatable and even toxic mix.
Earlier this week I began a series of lectures in one of my classes on the thought of the Anti-Federalists. I began by echoing some of the conclusions of the great compiler and interpreter of the Anti-Federalist writings, Herbert Storing, whose summation of their thought is found in his compact introductory volume, What the Anti-Federalists Were For. I began with the first main conclusion of that book, that in the context of the debate over the Constitution, the Anti-Federalists were the original American conservatives. I then related a series of positions that were held by the Anti-Federalist opponents of the proposed Constitution. To wit:
They insisted on the importance of a small political scale, particularly because a large expanse of diverse citizens makes it difficult to arrive at a shared conception of the common good and an overly large scale makes direct participation in political rule entirely impracticable if not impossible. They believed that laws were and ought to be educative, and insisted upon the centrality of virtue in a citizenry. Among the virtues most prized was frugality, and they opposed an expansive, commercial economy that would draw various parts of the Union into overly close relations, thereby encouraging avarice; they particularly opposed trade with foreign nations, which they believed would lead the nation to compromise its independence for lucre. They were strongly in favor of “diversity,” particularly relatively bounded communities of relatively homogeneous people, whose views could then be represented (that is, whose views could be “re-presented”) at the national scale in very numerous (and presumably boisterous) assemblies. They believed that laws were only likely to be followed when more or less directly assented to by the citizenry, and feared that as the distance between legislators and the citizenry increased, laws would require increased force of arms to achieve compliance. For that reason, along with their fears of the attractions of international commerce and of imperial expansion, they strongly opposed the creation of a standing army and insisted instead upon state-based civilian militias. They demanded inclusion of a Bill of Rights, among which was the Second Amendment, the stress of which was not on individual rights of gun ownership, but collective rights of civilian self-defense born of fear of a standing army and the temptations to “outsource” civic virtue to paid mercenaries.
As I disclosed the positions of the Anti-Federalists, I could see puzzlement growing on the faces of a number of students, until one finally exclaimed—”this doesn’t sound like conservatism at all!” Conservatism, for these 18-to-22-year-olds, has always been associated with George W. Bush: a combination of cowboy, crony capitalism, and foreign adventurism in search of eradicating evil from the world. To hear the views of the Anti-Federalists described as “conservative” was the source of severe cognitive dissonance, a deep confusion about what, exactly, is meant by conservatism.
So I took a step back and discussed several ways by which we might understand what is meant by conservatism—first, as a set of dispositions, then as a response to the perceived threats emanating from a revolutionary (or even merely reformist) left, and then as a set of contested substantive positions. And, I suggested, only by connecting the first and third, and understanding the instability of the second, could one properly arrive at a conclusion such as that of Storing, who would describe the positions of the Anti-Federalists as “conservative.”
First, there is the conservative disposition, one articulated perhaps most brilliantly by Russell Kirk, who described conservatism above all not as a set of policy positions, but as a general view toward the world. That disposition especially finds expression in a “piety toward the wisdom of one’s ancestors,” a respect for the ancestral that only with great caution, hesitancy, and forbearance seeks to introduce or accept change into society. It is supremely wary of the only iron law of politics—the law of unintended consequences (e.g., a few conservatives predicted that the introduction of the direct primary in the early 1900s would lead to increasingly extreme ideological divides and the increased influence of money in politics. In the zeal for reform, no one listened). It also tends toward a pessimistic view of history, more concerned to prevent the introduction of corruption into a decent regime than driven to pursue change out of a belief in progress toward a better future.
Is cursive an outdated and unnecessary facet of American education? Once again, Common Core is causing an academic stir—this time surrounding its omission of cursive from required curricula. Instead, the computer keyboard is becoming schools’ writing method of choice, according to a Tuesday article in The Atlantic:
Opponents of script argue that needing to read and write in cursive is no longer relevant in an increasingly digital society. Some believe that cursive is essentially archaic, the importance of which is relegated only to checks, signatures, and the occasional love letter. They believe instructional time is better devoted to other classroom subjects that are included on standardized tests, and cursive is not necessary for academic achievement. After all, they say, we have computers and speech dictation machines.
The Washington Post heralded the imminent demise of longhand in 2006, after only 15 percent of 1.5 million SAT test-takers used cursive. The rest printed in block letters. While some experts were unconcerned by the trend, others warned that the demise of handwriting could have unexpected consequences:
…Academics who specialize in writing acquisition argue that it’s important cognitively, pointing to research that shows children without proficient handwriting skills produce simpler, shorter compositions, from the earliest grades. Scholars who study original documents say the demise of handwriting will diminish the power and accuracy of future historical research. And others simply lament the loss of handwritten communication for its beauty, individualism and intimacy.
There are numerous practical skills, like those mentioned above, associated with cursive. After the Los Angeles Times printed an article on the archaic nature of cursive, teachers responded with various defenses—arguing that it improved coordination, focus, even mathematical skills. Steve Jobs, the late Apple CEO, studied calligraphy at Reed College and found inspiration in its beauty. He told Stanford graduates in 2005, “I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture.” He added, “If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts.” Ironically, Microsoft mastermind Bill Gates is a significant financial sponsor of the Common Core curriculum.
Beyond all the cognitive, academic, intellectual, and aesthetic benefits of cursive, there is perhaps one more. Cursive, despite its loopy letters and structured theory, truly develops with the individual hand. Thus, every person’s handwriting will be unique and personal. In an age of computers, where professors mandate essays in Times New Roman 12 pt, and a swath of fonts are available via Dafont, cursive preserves artistic diversity. And it is comforting to know that we few cursive users still have a unique print in the world. The Atlantic article sums it up nicely: “In a very meaningful way, the debate between cursive and print, or keyboards and handwriting, is entirely up to us: what type of mark do we want to leave?”