Thanks to the website Open Culture, I came across George Orwell’s 1940 review of Hitler’s Mein Kampf. Not only does Orwell suss out precisely the nature of Hitler’s menace and the source of his popularity, he also provides a neat thumbnail description of European liberals and social democrats that could easily attach to today’s American Democrats:
Also [Hitler] has grasped the falsity of the hedonistic attitude to life. Nearly all western thought since the last war, certainly all “progressive” thought, has assumed tacitly that human beings desire nothing beyond ease, security and avoidance of pain. In such a view of life there is no room, for instance, for patriotism and the military virtues. The Socialist who finds his children playing with soldiers is usually upset, but he is never able to think of a substitute for the tin soldiers; tin pacifists somehow won’t do. Hitler, because in his own joyless mind he feels it with exceptional strength, knows that human beings don’t only want comfort, safety, short working-hours, hygiene, birth-control and, in general, common sense; they also, at least intermittently, want struggle and self sacrifice, not to mention drums, flags and loyalty-parades. However they may be as economic theories, Fascism and Nazism are psychologically far sounder than any hedonistic conception of life.
Dig that prescient reference to birth control!
There are a variety of reasons—see Santayana, Garry Wills, and our own Dan McCarthy—why liberalism leads to force and coercion. But it is simply not the case that progressivism, or modern liberalism, or whatever you want to call it, is akin to European fascism and Nazism—a virulent outgrowth of German romanticism that should not be confused with the rationalist-materialist hubris of Marx, Engels, and scientific socialism. Since I began blogging semi-regularly four years ago, the conceit that Nancy Pelosi should, well, check her sleeve for a swastika has been a constant irritant.
I’m glad to learn that the great Orwell would have been similarly irritated.
Today, the United States of America commemorates 238 years of independence from the British—but an approaching bicentennial, that of the Battle of Baltimore, serves as a reminder that the struggle to establish sovereignty was only beginning when the Declaration of Independence was written. During the summer of 1814, the United States was in the full throes of what historians have long considered a “second war of independence.” The turning point of that war would come at the Battle of Baltimore, known to most Americans today as the inspiration for the national anthem, “The Star-Spangled Banner.”
The invading British had just burned the White House and plundered the port of Alexandria. On August 27, the Baltimore Patriot printed a letter from President James Madison, who had recently fled from the burning capital:
“On an occasion which appeals so forcibly to the proud feelings and patriotic devotion of the American people, none will forget what they owe to themselves; what they owe to their country and the high destinies which await it; what to the glory acquired by their fathers, in establishing the independence which is now to be maintained by their sons, with the augmented strength and resources with which time and Heaven have blessed them.”
The letter was addressed to the whole nation, but its exhortation to preserve the young nation’s independence was keenly felt in the notoriously anti-British city of Baltimore. The city was then known for its privateers, who raided British merchant ships, as well as for a series of anti-British riots during which mobs burned the offices of anti-war newspapers. Baltimore was also of great strategic importance as a busy port that connected several East Coast cities by land and sea. The city’s residents were perhaps uniquely prepared for such an invasion thanks to their strong local tradition of civic self-defense. After the riots, local elites, recognizing the authorities’ failure to keep the crowds under control, had independently organized patrols of armed citizens to bolster the city’s defenses.
But the events of the summer of 1814 pushed the city to formalize these efforts. After news of the fall of Washington, Baltimoreans set up a Committee of Vigilance and Safety, composed of the existing Maryland militia along with civilians who served as volunteers on a rotating night patrol. Besides the night watch, the Committee was responsible for raising funds for the local war effort, generally by collecting donations at each prominent community center (usually a local tavern). A prescient Washingtonian wrote to Baltimore’s American and Commercial Daily Advertiser: “If the British visit Baltimore I have no doubt you will receive them in American style—we are disgraced.”
The local militia was particularly focused on unity in the face of the national military’s failure. The biggest blow to military success was the strategic blunder of the Secretary of War, John Armstrong, who was thoroughly convinced that the British would not attack Washington. Seeing the city as strategically insignificant, he failed to prepare for its defense, which forced a scrambled last-minute effort to draw military resources to the region. Armstrong resigned after the defeat at Washington, but the local militia was already prepared, for the most part, to count on itself. When his resignation letter made it to Baltimore, along with a full account of the disastrous battle, publishers refrained from comment and blame games. Invoking patriotism, the editor of the Baltimore Patriot proclaimed: “An enquiry must be made–when the nation is saved.” This restrained attitude continued through the city’s preparations for war, as the paper printed assurances of the city’s proper defense without giving particulars, for fear of over-informing the enemy.
Morale quickly increased as the British made their way back north. A Revolutionary War veteran wrote in to the Baltimore Patriot encouraging the militia: “Be ready to suffer every thing, to attempt every thing in submission to Providence.” The letter ran alongside an account of the Maryland militia’s bravery in repulsing a surprise British attack at the Battle of Caulk’s Field, just a few miles across the river from Baltimore. The militia’s ability to fend off the British without any aid from the national Army was a particularly hopeful sign for the Baltimoreans, who were preparing to engage an empire with a force of mainly armed citizens rather than professional soldiers.
The local militia took the brunt of the British force at the initial land engagement when the battle began on September 12 at North Point outside Baltimore. Though the Marylanders ended up retreating, they did major damage to the British forces, killing one of their commanders and scattering the remaining units into confusion. This allowed the national military, mainly the Army infantry under Major George Armistead, time to mount their famous defense of Fort McHenry against the bombardment of the Royal Navy.
“The Star-Spangled Banner” may describe only the more picturesque battle at sea, but the story of the song itself tells the full tale. Francis Scott Key, a Georgetown lawyer on a mercy mission to secure the release of an imprisoned American doctor, watched the Army defend the fort while detained on a British ship. He began scribbling away at his famous poem there, but he did not finish it until he arrived at the tavern where he would stay the night.
The tavern, called the Indian Queen and maintained by hotelier John Gadsby, had been the central gathering point for the appointment of patrols after the riots. It had hosted the militia as they trained over the course of the war. It had been a neighborhood headquarters for the Committee of Vigilance and Safety. It was the perfect manifestation of the local effort that saved Baltimore behind the scenes of Key’s star-spangled tale. The tavern is a reminder that in Key’s famous words, it was not just a country, but a city, that was “the home of the brave.”
The Freer Gallery named its show of wood-block prints by fin-de-siècle Japanese artist Kobayashi Kiyochika “Master of the Night” (on display through July 27th), but night is ancient and Kiyochika’s work is distinctly modern. His prints show a world in transition. Some of the street scenes might almost be Victorian London; even the rickshaw used to pull a geisha through the night turns out to be a recent import, an innovation. Many of the scenes show people in traditional kimono mixing with bowler-hatted men in Western suits. This was Tokyo, the new capital city, hurrying toward the twentieth century.
Kiyochika’s haunting, color-washed night scenes show him to be on a par with Edward Hopper as a poet of artificial light. The show opens with his 1881 “Sumida River by Night,” in which far-off windows glow red, and their light glimmers on the dark river. Gray dusk, slender black trees, and two silhouettes: a kimono-clad figure and a man in a mix of traditional and Western clothing. Kiyochika’s skies are swathed in slate-blue, gray, and mauve; or they’re jewel-toned, like “Teahouse at Imadobashi by Moonlight,” with its peacock or turquoise sky, glinting water, and drooping fronds. His moons are enormous, surrounded by halos of glowing red or white. He loves to portray light on water: light filtering through umbrellas and broken into shards on the pavement, light caught and gleaming in a river. “Rainy Moon at Gohanmatsu” is a title that captures the general mood of many of these pieces.
The perspective is typically somewhat remote. We’re observers, not participants. The people are usually silhouettes; if they do have faces, these faces are turned away from the viewer. The mood is one of longing, of watching: a hushed, secluded feeling in which people mostly fail to connect with one another or with the viewer. It’s a nostalgic feeling, sweetly melancholy, with that characteristic modern edge of alienation. The rain that falls so frequently in these pictures not only lets Kiyochika play with light; it also isolates his people as they run past huddled under their umbrellas.
“Distant View of Ichinohashi Bridge from Sumida River’s Embankment” is a stellar example of this mood. In silhouette, two runners pull a geisha in a rickshaw along the riverbank. Although their running legs and forward-leaning bodies might suggest urgency, the moon glows serenely and from the right side of the frame yearning, curving branches stretch overhead. These curving lines soften the angular lines of the runners and make the scene feel quieter, somehow muffled.
Kiyochika depicted steamboats and warships—more light on the water, this time from cannon fire—and 1879’s propulsive “View of Takanawa Ushimachi Under a Shrouded Moon” shows blood-red smoke coming from a locomotive. The train’s headlights make an artificial dawn. Kimonos under telegraph wires, gas lamps partly hidden by crooked pines. Soot-gray clouds drift in the background. It’s easy to make a firefly-filled night out on a pleasure boat look beautiful, and this show will give you some lovely images of that; but Kiyochika was also capable of 1881’s “Taro Inari Shrine at the Asakusa Rice Fields,” in which a lone figure wanders away (or approaches) under a high full moon. The trees are sparse and bare, the buildings are falling apart, and there is only this small anonymous person under the starless sky.
Leave it to the Nazis to make charity posters into advertisements for power-worship.
In the late 1930s the Nazi regime created a traveling exhibition that contrasted Führer-approved artworks with “degenerate” works produced by modernists, New Objectivists, and other riffraff. The exhibition was a bizarre departure from the book-burning and art-destroying we might expect from a totalitarian regime. Instead of preventing people from seeing the art at all, the Nazis encouraged them to view it—but sought to control the viewers’ responses by creating a context in which the displayed art would evoke revulsion or consternation. The regime-approved art was displayed with plenty of light and space, centered in the galleries or on the walls, while the “degenerate” art was crammed together and surrounded by graffiti-like reminders of the regime’s aesthetic judgments. The current show at New York’s Neue Galerie, “Degenerate Art: The Attack on Modern Art in Nazi Germany, 1937,” showing through June 30, doesn’t completely replicate this heavy-handed curatorial approach, but it gives enough hints (and striking photos of the Nazi shows) that viewers can get the point.
And what’s perhaps surprising is how much you really can learn about Nazism from this art show. There are pieces which would puzzle contemporary viewers who aren’t steeped in the arguments over abstract expressionism and ideology: What’s so threatening about a sleek Bauhaus armchair? What did Vasily Kandinsky’s interstellar circles ever do to Hitler? But the overall picture which emerges from the Neue Galerie’s show is of a regime which worshiped strength and hated weakness. Although the Nazis reviled artists for “mocking religion,” the religion most clearly displayed in their preferred artworks was not the cult of Jesus but of Mars.
Like the very evenhanded Jamelle Bouie here, I think Sen. Rand Paul’s heart was in the right place when he remarked on the irony of a black president presiding over a domestic security apparatus that, decades ago, had targeted civil rights leaders like Martin Luther King Jr. “You don’t have to support Rand Paul or his policy agenda to see that he was right to call out the president on the tension between his position and his actions,” Bouie writes.
Yet Paul’s invocation of race and civil liberties still gave me the heebie-jeebies.
Perhaps uncharitably, I see in it the same kind of ideological switcheroo that conservatives have, in the past, employed to distance themselves from other abuses involving race. Like segregation: over the years, the right has sought to evade guilt for this legacy by sowing confusion over party ID: Segregationists were Democrats! True, but only trivially so. (It was possible, back then, to be right-wing and belong to the party of Jefferson and Jackson.) The more sophisticated version of this defense says that segregation was economically wasteful and inefficient; it violated free-market principles. Also true, and also trivial.
A similar rhetorical trick was brought to bear on South Africa in the 1980s. The Jack Abramoff-fronted International Freedom Foundation held up the apartheid government as a bulwark against expansionist communism. After apartheid ended—presto!—it was apartheid itself that was socialist: a “pervasive system of government regulation, regimentation and control.”
This kind of sleight of hand ignores the lived reality of libertarian ideas in America. As historian David Hackett Fischer has written, the ordered liberty of 18th-century New England was altogether different from that of Virginia in the same period, with its conflation of liberty and the “hegemonic condition of dominion over others.”
The unfortunate fact is that, when it came to segregation, apartheid, or domestic spying before Obama, the oppositionist energy issued from the left.
Too often, my conservative friends sound like post-WWII Frenchmen: we all joined the resistance! During the years between the September 11 terrorist attacks and the inauguration of Barack Obama, for example, the line was that Sen. Frank Church and the left had eviscerated our intelligence-gathering capabilities. Now you can find a positive gloss on Church at Breitbart.com!
Rand Paul’s criticism of Obama from this flank amounts, in my opinion, to an inadvisable sort of concern-trolling.
By all means, slam the brakes on the NSA.
But save the convenient harrumphing about MLK.
Why is it that today’s liberals have become the most ardent cheerleaders of arbitrary monarchy? Wasn’t liberalism born of the effort to limit arbitrary rule of a single, unelected ruler?
No, I’m not suggesting that the Left has suddenly decided that they regret the American Revolution. But, in nearly every leading liberal magazine, newspaper and blog, there is a growing excitement and hope that Pope Francis will change the Roman Catholic Church’s “policies” on birth control, male celibate priesthood, homosexuality, gay marriage, divorce and (some, at least, though far fewer) abortion. They have celebrated the appointment of Pope Francis as a sign that the Church is finally going to join the modern world, and fervently hope that he will simply declare that those teachings are no longer valid and embrace today’s accepted orthodoxies. They yearn for executive fiat.
It is striking to witness this palpable longing in juxtaposition with the absence of any real concern on the Left about possible abrogations of the rule of law arising from President Obama’s decision to suspend the “employer mandate” until 2015, and with the general support of the President’s assertion that, in the face of Congressional opposition, he has recourse to the “Pen and the Phone.” And, after a season of accusatory lamentation about Pope Benedict’s authoritarian treatment of the “Nuns on the Bus,” there has been deafening silence from the Left over the Obama administration’s decision to go to court to force the Little Sisters of the Poor to violate their conscience by accepting provision of contraception, abortifacients, and sterilization.
Liberalism was born, the story goes, as a reaction against arbitrary and unlimited rule by monarchs. Yet today’s liberals seem to adore executive power when it’s used to effect their preferred ends, even hoping that one of the only remaining “monarchs”—the Pope—will single-handedly change the “rules” of the Church. They wish to exchange “fiat” in the sense of “let it be done” for “fiat” in the sense of “do as I say.”
(Of course, “conservatives” don’t escape from this general inclination—they tend also to be ardent supporters of expansive executive power when one of their own is in office, and it is generally conservative intellectuals who have been most interested in developing theories about active executive power.)
What happened to limited government, you might ask? I answer: exactly what liberalism promised. For liberalism was never about “limited” government, but about the pursuit and exercise of potentially limitless power toward the seemingly “limited” ends of securing Rights.
The occasion of the 50th anniversary of the Beatles’ appearance on The Ed Sullivan Show has inspired some obligatory guffawing at those old squares who greeted the band with derision. One putdown that fairly stands out for its sheer revulsion came from none other than William F. Buckley, who wrote in the Boston Globe in September 1964:
The Beatles are not merely awful; I would consider it sacrilegious to say anything less than that they are god awful. They are so unbelievably horrible, so appallingly unmusical, so dogmatically insensitive to the magic of the art that they qualify as crowned heads of anti-music, even as the imposter popes went down in history as “anti-popes.”
Without meaning to be willfully contrarian, I get where these critics were coming from, if only in a roundabout sort of way. I’m an enthusiast of early rock and all its British exponents, from both London and Liverpool; I appreciate and admire the Beatles just fine; etc. Yet at the end of the day I’m a Stones guy—and I can’t help but bristle when Beatlemaniacs diminish the Stones for their comparative lack of technical sophistication or proficiency. There is no end to my puzzlement at those who swear by the Beatles because of their proto-progressivity. Because here’s the thing: rock-and-roll really was retrogressive. Yes, even the Beatles.
Oh, I can just hear you out there. Look at George’s sweet jazz-guitar technique!
To which I can only respond, give me a freakin’ break.
George—a lovely player; in my opinion, the finest of the three Beatles guitarists—could never have hung with the likes of Barney Kessel, Herb Ellis, Les Paul, or Wes Montgomery, all of whose mastery of the guitar (in the 1950s!) far exceeded that of any rock-and-roller of the 1960s. This is to say nothing of Django or Charlie Christian.
For all the magic that the Beatles, with not a little help from the classically trained George Martin, created in the studio; for all their genius at crafting songs, there is not a chord or trope or motif of theirs that Cole Porter and George Gershwin would not have recognized. As Elijah Wald has noted, the Beatles did not so much push musical boundaries forward as they consolidated the earlier advances of other 20th-century greats, from Louis Armstrong all the way to Tin Pan Alley. (I’m reminded of a bit of trivia I learned from Terry Teachout: the Beatles had mistakenly thought they were the first ones to end a tune on a 6th chord. Martin informed them that Glenn Miller already had.)
Again, don’t misunderstand: I’m a Beatles fan. I appreciate the unparalleled pop-cultural phenomenon that they were. But if I squint just a little, I find it easy to put myself in the shoes of someone who’d lived through hot jazz and hard bop, and who found the Beatles to be amateurish lightweights. In my own shoes, I would defend the Beatles without denying this fact. The amateurishness of rock music was a feature, not a bug. And it still is. If your passion for the Beatles stems from this outsize opinion of their technical competence, I regret to inform you, you’re doing it wrong.
I saved The Friends of Meager Fortune, the second novel I’ve read by Canadian Catholic author David Adams Richards, for the polar vortex. If anything can make Boston in January seem warm, it’s this relentlessly grim tale of the last days of man-and-horse lumbering, with horses crashing through the ice and bloodied hands freezing on the reins.
I’m conflicted about recommending the book. What is good in it is immensely powerful. The story of the doomed love of local failure/hero/failure again Owen Johnson and charity case/outcast Camellia Dupuis is suspenseful and deeply moving. Camellia is a luminous innocent who never becomes cloying. She’s gentle, in a profoundly ungentle world.
Even more moving, though, is the portrayal of the grim, death-shadowed men who work for Owen up on Good Friday Mountain, cutting down logs under shockingly dangerous and miserable conditions. The book would be worth reading just for the depictions of the horses, their pride and suffering, as they work themselves to death under the care of proud and suffering men. The economic suspense (will Johnson’s timber haul fail?) and the suspense of the work itself (who will survive the grim conditions on Good Friday?) are as tense as the romance, and the plot twists in these areas made me gasp several times.
And Richards acidly depicts the gossip and judgment of a small town, the way the gazes of our neighbors can destroy us.
Since the Middle Ages, the small town of Geel, Belgium, has had an eccentric but vital vocation: its inhabitants have created a safe home of sorts for the mentally ill. Inspired by St. Dymphna, patron saint of the mentally ill, Geel became a place of sanctuary for the mad: patients were lodged in the homes of local townspeople as “boarders,” and were expected to work alongside their hosts and participate in family life. Mike Jay shares the town’s story at Aeon Magazine:
The family care system, as it’s known, is resolutely non-medical. When boarders meet their new families, they do so, as they always have, without a backstory or clinical diagnosis. If a word is needed to describe them, it’s often a positive one such as ‘special’, or at worst, ‘different’. This might in fact be more accurate than ‘mentally ill’, since the boarders have always included some who would today be diagnosed with learning difficulties or special needs. But the most common collective term is simply ‘boarders’, which defines them at the most pragmatic level by their social, not mental, condition. These are people who, whatever their diagnosis, have come here because they’re unable to cope on their own, and because they have no family or friends who can look after them.
Sadly, Geel’s patient population has been steadily declining—partly because of modern medicine and psychiatry, but also because of modernization’s effect on the familial, vocational life of Geel: “Few families are now able or willing to take on a boarder,” writes Jay. “Few now work the land or need help with manual labor; these days most are employed in the thriving business parks outside town … Modern aspirations—the increasing desire for mobility and privacy, timeshifted work schedules, and the freedom to travel—disrupt the patterns on which daily care depends.” Even as people remark upon Geel’s incredible familial, communitarian response to mental illness, the societal structures necessary for its existence are fading away.
The traditional community is often derided for its tribal instincts: for possessing a dangerous tendency toward discrimination and judgment. But Geel’s story exemplifies an idealized community: one in which care is dispensed freely and charitably within small, private associations. The needy find solace within a family structure, rather than within the solicitude of the state. As Jay notes, “The people of Geel don’t regard any of this as therapy: it’s simply ‘family care’.”
Geel also shows us community’s vital role in mental health. Geel’s population is not huge, and its landscape is largely rural. This simplicity and closeness—to the land and to people—seems to have healing powers for the mentally ill. Even without medication, psychiatrists, and specialized care, “boarders” have flourished in Geel for hundreds of years. Perhaps what we really need, more than drugs or doctors, is human nourishment. “However we might categorise or diagnose their conditions, and whatever we believe their cause to be—whether genetics or childhood trauma or brain chemistry or modern society—the ‘mentally ill’ are in practice those who have fallen through the net, who have broken the ties that bind the rest of us in our social contract, who are no longer able to connect,” Jay writes. “If these ties can be remade so that the individual is reintegrated with the collective, doesn’t ‘family care’ amount to therapy? Even, perhaps, the closest we can approach to an actual cure?”
It’s a vital question to consider, especially as we confront urbanization and individualism within our culture. What happens if private associations begin to die away—if the familial and vocational structure of small communities erodes with the rise of more atomized lifestyles? Such lifestyles may bring larger paychecks and prominence, but if Jay is correct, they may also harm human flourishing.
“Apartheid is an affront to human rights and human dignity. Normal and friendly relations cannot exist between the United States and South Africa until it becomes a dead policy. Americans are of one mind and one heart on this issue.”
So said Ronald Reagan in his 1986 message to Congress vetoing the “sweeping and punitive sanctions” Congress was seeking to impose.
Reagan equated the sanctions to “declaring economic warfare on the people of South Africa.”
His Treasury Secretary James Baker said Sunday that Reagan likely regretted this veto. But having worked with the president on his veto message and address on South Africa, I never heard a word of regret.
Nor should there have been any.
For in declaring, “we must stay and build, not cut and run” from South Africa, Reagan, whose first duty was the defense of his nation in the Cold War with the Soviet empire, saw not only the moral issue but the strategic imperative.
In 1986, there were 40,000 Cuban troops in Angola, where South Africa was a fighting ally and backer of anti-Communist Jonas Savimbi.
In Zimbabwe, Robert “Comrade Bob” Mugabe, having butchered thousands of the Ndebele of rival Joshua Nkomo, was communizing his country. South-West Africa and Mozambique hung in the balance.
Reagan was determined to block Moscow’s drive to the Cape of Good Hope. And in that struggle State President P. W. Botha was an ally.
Second, as Reagan declared, the sanctions ban on sugar imports would imperil 23,000 black farmers, and cutting off Western purchases of natural resources would imperil the jobs of 500,000 black miners.
“The Prime Minister of Great Britain has denounced punitive sanctions as immoral and utterly repugnant,” said Reagan in July of 1986. “Mrs. Thatcher is right.”
“Are we truly helping the black people of South Africa—the lifelong victims of apartheid,” said Reagan in his veto, “when we throw them out of work and leave them and their families jobless and hungry in those segregated townships? Or are we simply assuming a moral posture at the expense of the people in whose name we presume to act?”