Rod Dreher can post pictures of food all the time. I pretty much do it once a year. This is that once.
To recap for readers who aren’t obsessives, every year I throw a dinner party on Hanukkah that features a holiday-appropriate eight courses, each showcasing the holiday-appropriate ingredient of olive oil.
This year’s menu:
Antipasto: Latkes three ways, topped with -
- Triple-crème cheese and apple-cranberry compote
- Wild mushroom stroganoff
- Gorgonzola and brandied figs with toasted pine nuts
(I always start with latkes with three different toppings. Since this is my seventh year throwing this particular dinner party, I’ve now tried 21 different latke toppings. Some of them are real keepers; I suspect someone more enterprising than I would see the potential for a book, or at least an article. This year, the figs were a little over-brandied, but the other toppings were hits.)
Zuppe: Chestnut cream topped with fried leeks
(I don’t really have a rule for what constitutes showcasing olive oil as opposed to merely using it. I guess I’d say that I count a dish as showcasing if either (a) it’s fried; (b) it uses olive oil in a modestly non-traditional way; or (c) you can really taste the oil. Even with that leniency, I don’t always make it.)
Insalata: Puntarelle salad with anchovy dressing
(For example: this salad is not fried, uses olive oil in an entirely traditional way, and you can’t really taste the oil because the capers, anchovies and fresh garlic predominate. Well, maybe that’s not fair; you can taste the oil. Anyway, it’s a great salad.)
Primo: Deep-fried risotto “oranges” topped with butternut squash and roast garlic puree
(I’ve been meaning to make these for years. Finally got around to it. That’s my son in the tuxedo t-shirt serving, by the way.)
Intermezzo: Blistered shishito peppers with matcha salt
(Most of them aren’t spicy. But every now and then . . .)
Secondo: Whole snapper baked in a salt crust, served drizzled with olive oil, with a side of sautéed broccoli rabe
Dolce I: Deep-fried chocolate-filled wontons dusted with five-spice sugar, accompanied by rosemary olive oil ice cream and green tea
(The wontons puff up when fried. So after you take a bite of the wonton, you can spoon the ice cream into the opening and you’ve got an ice cream cone.)
Dolce II: Pistachio olive-oil cake filled with fig compote, iced with a cream cheese frosting
Recipes, as always, available upon request.
The conjunction of Hanukkah and Thanksgiving struck me, initially, the way it struck most people: as an opportunity to have latkes and turkey together, and to use cranberry-apple sauce two ways; and as a more fortuitous juxtaposition than Hanukkah and Christmas. Hanukkah, after all, is supposed to be a fairly minor holiday, and neither it nor Christmas particularly benefits from the competition. And Hanukkah is a holiday of thanksgiving: it commemorates the rededication of the Temple in Jerusalem after the victory by the Hasmoneans in their war against Antiochus IV, and expresses gratitude at the implicit divine favor shown to the victors in that they were able to complete the rededication even though there appeared to be insufficient pure oil to keep the flame burning. (And, in the background, there’s gratitude for the arrival of the winter rains even though the festival of Tabernacles could not be observed at the proper time a couple of months earlier, and with the appropriate sacrifices, due to the pollution of the Temple and the ongoing civil war.)
But the more I thought about it, other links between the two holidays began to assert themselves in interesting ways. Specifically, both holidays relate to civil wars – and to civil religion as a means of establishing national unity.
President Lincoln formalized America’s Thanksgiving in 1863, while our great civil war still raged at its bloodiest. Lincoln’s proclamation explicitly established the holiday as a national one, to be observed solemnly and reverently, “with one heart and one voice by the whole American People.” It also explicitly associated divine providence with the continued flourishing of the Union even under the stress of civil war: the growth of population, the spread of settlement, the abundance of crops, etc. Framed as the giving of thanks, it was also a political prophecy: the Union would prevail, and ultimately we’d all be celebrating Thanksgiving together.
Lincoln’s Thanksgiving wasn’t a secular holiday exactly, but it was an ecumenical and theologically vague one. It can be thought of as a template of “civil religion,” the association of the nation with a kind of religious aura untethered to any particular theology.
Hanukkah is far more particularist in its origins – but it’s also about the establishment (or reestablishment) of a civic religion. Hanukkah originated as a celebration of victory at the end of a civil war – and a successful rebellion against a foreign empire. The war began as a contest for power between a Hellenizing pro-Seleucid party and an anti-Hellenist, pro-Egyptian party among the Judeans. The Hellenizers invited in Antiochus IV to put down their enemies, and Antiochus conducted an atypically harsh campaign against the religious observances of the traditionalists as part of the war effort, including turning the Jerusalem Temple into a temple of Zeus. This latter can be readily understood as an effort to establish unity with the rest of the Seleucid domains, but it backfired and provoked more furious resistance by the anti-Hellenizing party, the Hasmoneans, led by Judah Maccabee.
Alongside dynastic and economic motivations for the Judean civil war, in other words, there was a battle over communal particularism – and, more specifically, whether the national symbol, the Temple of Jerusalem, would have a particularistic orientation or would follow the norms of the larger Hellenistic world.
Both holidays evolved substantially from their origins, however. As early as the writing of the Mishnah, Hanukkah was treated as problematic by the rabbis. There was a clear discomfort, in the wake of the catastrophically failed Bar Kochba revolt, with celebrating a holiday of national prowess and self-assertion. This is one reason why the “miracle of the oil” began to take center stage. But even that observance becomes ironic if you consider that the menorah is a recollection of the rededication of a Temple that, by this point, had been obliterated by the victorious Roman armies. By the time you get down to medieval and modern times, the symbol of the holiday – and of the divine “great miracle” that happened “there” – is the dreidl, a game of chance. Its observance is almost entirely private, and is far more common than other, theoretically more important holidays – and, though nominally a celebration of particularism, it’s the holiday that is most commonly shared across communal boundaries (and in multi-religious homes).
Thanksgiving, meanwhile, has largely ceased to be a civic holiday. Instead, it has been privatized into the paradigmatic family holiday, a day when far-flung relatives get together to roast a sacrificial bird, observe a ritual contest of strength and skill, and give thanks for their private plenitude. It may have more or less religious content depending on the observance of the home in question – but the primary civic ritual is the pardoning of the sacrificial bird, an act which symbolizes the god-like powers over life and death accruing to the Executive, powers which few civilians care to dwell on at any length.
There’s a lesson here about the limits of that executive power. Kings, High Priests and Presidents may have the power of life and death, as well as the power to create holidays to celebrate their victories. But the meanings of their inaugurations are beyond their control, and get re-written to conform to the actual contours of their celebrants’ lives – and to change as those lives change. When we excavate it, much of religion turns out to be civic in origin, and much civic ritual, forged in times of civic stress, thereby acquires (or is formally invested with) religious aura. But when those particular stresses pass, and the generations who were shaped by them are gathered unto their ancestors, the rituals, if they are to endure, inevitably get re-invested with new significance that would strike our forebears as strange indeed.
And we should give thanks for that, as well, because that process is how both the living and the dead get to live comfortably if confusedly together.
I went last night to see the new Michel Gondry documentary, “Is the Man Who Is Tall Happy?: An Animated Conversation With Noam Chomsky,” and I admit, it wasn’t quite what I was expecting.
Based on the title, I assumed that Gondry was going to explain – and illustrate – some of the seminal work that Chomsky did in linguistics, and, presumably, connect that work both with Chomsky’s political ideas (would you really do a movie about Chomsky and not touch on politics?) and with the process of filmmaking. And at the very start, the film gestures in that direction; Gondry announces that he decided to animate the film precisely so that it will be plain to the viewer that this is an artifact of the director, and not a transparent record of the words of the subject.
This ought to be the beginning, not the end, of a discussion, but the subject is dropped. Gondry never asks Chomsky about how film teaches us to see, or about the relationship between the language of film and the innate grammatical structures that Chomsky made it his life’s work to study. The closest we come is repeated discussion of the concept of “narrative continuity” – the way in which we perceive the external world as composed of entities with an associated history, rather than as objects with concrete properties. So, even small children understand that when Sylvester becomes a rock, he’s still Sylvester the donkey, while a cutting that is planted and grows to become a tree is a different organism from the tree from which it was cut, even though the two organisms are genetically identical and hence both have equal logical claim to continuity with the “original” tree.
Gondry seems to think that by showing us how his film is made, he has somehow exposed this mental process and thereby avoided manipulating us. For example, late in the film, when we finally get to the famous sentence that gives the film its title, he says that he wanted to talk about that sentence because “I could do a really good animation” – in other words, that it was driven not by the requirements of the argument he wanted to make but by the potentialities he saw in the medium to entertain the audience. (He draws a charming picture of a giant crouching inside a too-small house. So no, I don’t think the tall man is happy.) But I don’t see how this alleviates the problem that Gondry identifies, that our minds close up gaps in continuity in order to make sense of a narrative.
What it does instead is expose that Gondry hasn’t processed Chomsky’s core insights sufficiently well to communicate them to an audience. He and Chomsky have a long back-and-forth at one point where they plainly are not understanding one another. Gondry wants to make some kind of point about how children see cartoons of dogs before they see actual dogs, and yet they understand what dogs are; Chomsky is trying to make some kind of point about how the intuition that we have a mental picture of dogs based on common attributes is simply wrong. But we never learn what either man thinks of the other’s point, nor why either point is important – instead, we’re just given the record of mutual incomprehension.
Which I found terribly frustrating. Chomsky’s big point about the sentence in the title, if I understand it, is that very young children are able to understand how to transform a statement – “the man who is tall is happy” – into a question – “is the man who is tall happy?” – which, it would seem, requires them to have internalized a generative grammar that should be too complex for them to understand. (How do they know that the second “is” is the one to move, that it is structurally closer to the beginning of the sentence even though it is linearly further?)
But the “wrongly” structured version of the question – “is the man who tall is happy?” – is actually comprehensible, provided you read it with the correct stress (with emphasis on the word “tall”). If you read it correctly, you see that both instances of “is” have been moved – the “is” that used to be before “happy” has been moved to the front, while the “is” that used to be before “tall” is moved to where the first “is” used to be. We understand that “who would fardels bear” means the same thing as “who would bear fardels;” Shakespeare’s construction has a poetic, elevated tone because the verb is placed Germanically at the end of the sentence. And I’ve heard any number of small children make these kinds of grammatical “mistakes” – that is to say, construct sentences that show a clear understanding of what the different words in the sentence are doing grammatically while not yet getting the conventions of “proper” ordering thereof.
I suspect Chomsky would say something like “exactly” – but I really wanted to get into it with him and make sure I understood what he was getting at, what was the significance of his insight. And the movie just never got me there, preferring to let Chomsky reiterate his self-flattering belief that his own insights were like Galileo’s, the first steps away from linguistics as mere taxonomy and towards being an actual science.
Ultimately, the film is more a record of Gondry’s fascination with Chomsky than it is a particularly clear explication of his ideas. The fascination is understandable – Chomsky has an incredibly powerful mind, which he has kept focused for an entire lifetime on the subjects that interest him, primarily the origins of thought, which he sees as inseparable from the origins of language. Even his political ideas can be understood as an effort to force people to think, and to fight the manifold efforts by government, corporations and the organs of the media to convince us to let them do our thinking for us, and thereby reduce us to something less than human. For that reason, I would think Chomsky would prefer the company of minds worth pressing against, prefer questions that he has to work to answer. So I’m disappointed to discover that Chomsky admired this admiring but intellectually unchallenging film.
Read this account of the colossal bureaucratic stupidity that brought us the fiasco of healthcare.gov and then this account of the Iran deal just agreed to in Geneva. On the one hand, I’m inclined to say we now know where the President’s priorities really lie: he’s a foreign policy President, with ultimately little interest in domestic affairs. Or maybe the President has an appreciation for the negotiation process that he doesn’t have for the implementation and management phases. After all, he was able to drag the ACA over the line legislatively in the face of furious opposition, only to thoroughly discredit it through criminally-incompetent implementation. And of course, these are not mutually-exclusive explanations.
It’s too soon to say what Obama’s ultimate legacy will be in either area. Healthcare.gov could still be revamped, and prove both successful and popular in the long term; it could also continue to founder, leading to the failure of the exchanges and demands for a new round of healthcare reform that could take a very different direction (either single payer or catastrophic-plus-MSAs for all); or it could simply limp along, too important to the insurance industry to kill but never popular enough for future Democrats to want to crow about. Similarly, the Geneva agreement could presage a permanent solution to Iran’s nuclear program, and ultimately full normalization; or it could fall apart within six months; or it could merely be the prelude to another short-term deal, and then another, postponing the tough concessions both sides have to make to end the crisis while preventing that crisis from ever boiling over into open warfare. But whether for good or ill, these two areas are where the Obama Administration’s legacy will be made.
To me, there’s an obvious way for the GOP to respond to both developments: run against healthcare.gov as proof that Democrats can’t even build a website, and argue that the Iran deal vindicates a tough negotiating posture with adversaries, and now requires continued vigilance in implementation. But I suspect they will do neither, instead running against healthcare.gov as proof that government can’t even build a website (implicitly conceding that Republicans wouldn’t do any better), and arguing that the fact that we got a deal with Iran proves that we weren’t tough enough (implicitly conceding that their goal is continued conflict, possibly war, and not a solution to the nuclear standoff). In other words, I expect a depressingly ideological rather than pragmatic response to both the Administration’s failures and its successes.
And they may win anyway, particularly in an off-year like 2014.
I want to second Rod Dreher’s appreciation of Allie Brosh’s comments on depression. From my own experience – with myself and with others, on both the receiving end and the giving end – there’s this incredibly powerful impulse to try to solve mental and emotional problems. Which is funny when you think about it, because we don’t do that with, say, a broken leg or pneumonia.
When someone you love is sick, you might give them advice about how to feel better, if you had any useful advice from experience to impart. But that would likely not be the focus of your attentions. And you certainly wouldn’t try to guilt them into getting better. Mostly you’d offer sympathy. And food:
When my wife was diagnosed with breast cancer, we ate well. Mary Beth and I had both read the terrifying pathology report of a tumor the size of an olive. The surgical digging for lymph nodes was followed by months of radiation. We ate very well.
Friends drove Mary Beth to her radiation sessions and sometimes to her favorite ice cream shop on the half-hour drive back from the hospital. She always ordered a chocolate malt. Extra thick.
Our family feasted for months on the lovingly prepared dishes brought by friends from work and church and the neighborhood: chicken breasts encrusted with parmesan, covered safely in tin foil; pots of thick soup with hearty bread; bubbling pans of lasagna and macaroni and cheese. There were warm home-baked rolls in tea towel–covered baskets, ham with dark baked pineapple rings, scalloped potatoes, and warm pies overflowing with the syrups of cherries or apples.
Leftovers piled up in the refrigerator, and soon the freezer filled up too, this tsunami of food offerings an edible symbol of our community’s abundant generosity.
Although few said the word breast unless it belonged to a chicken, many friends were familiar with the word cancer and said it often, without flinching. They asked how we were doing, sent notes and cards, passed along things they’d read about treatments and medications, emailed links to good recovery websites and the titles of helpful books, called frequently, placed gentle if tentative hands on shoulders, spoke in low and warm tones, wondered if we had enough food. The phrase we heard most was: “If there’s anything I can do … ”
But if someone suffers from anxiety, or depression, or has a nervous breakdown or addiction?
Almost a decade later, our daughter, Maggie, was admitted to a psychiatric hospital and diagnosed with bipolar disorder, following years of secret alcohol and drug abuse.
No warm casseroles.
At 19, she was arrested for drug possession, faced a judge, and was placed on a probation program. Before her hearings, we ate soup and grilled cheese in a restaurant near the courthouse, mere booths away from the lawyers, police officers, and court clerks she might later see.
No scalloped potatoes in tinfoil pans.
Maggie was disciplined by her college for breaking the drug and alcohol rules. She began an outpatient recovery program. She took a medical leave from school. She was admitted to a psychiatric hospital, diagnosed, released. She began years of counseling, recovery meetings, and intensive outpatient rehabilitation. She lived in a recovery house, relapsed, then spent seven weeks in a drug and alcohol addiction treatment center.
No soup, no homemade loaves of bread. . . .
Friends talk about cancer and other physical maladies more easily than about psychological afflictions. Breasts might draw blushes, but brains are unmentionable. These questions are rarely heard: “How’s your depression these days?” “What improvements do you notice now that you have treatment for your ADD?” “Do you find your manic episodes are less intense now that you are on medication?” “What does depression feel like?” “Is the counseling helpful?” A much smaller circle of friends than those who’d fed us during cancer now asked guarded questions. No one ever showed up at our door with a meal.
I will say that I have seen people be extraordinarily supportive in precisely the circumstances described above – sometimes people with personal experience of a similar situation, but by no means always. But I still think Mr. Lake (author of the piece above) makes a good and important point.
I haven’t read Henry Nau’s book (I admit, I wasn’t aware of it), but gosh darn it I sure am annoyed to learn from Daniel Larison that he’s decided that the term to use for rebooting neoconservative foreign policy is “conservative internationalism.” Because I was really hoping that word would wind up meaning something else.
To start with, I don’t see why “internationalism” and “interventionism” should be understood as synonyms. From where I sit, “internationalism” should be opposed to “nationalism” – that is to say, it should represent a vision of enlightened self-interest that recognizes our inevitable interconnectedness with other states and, further, that this interconnectedness is generally desirable. An internationalist could be more aggressive or more pacific – there’s no necessary implication of a bias toward warfare. Canada has a very “internationalist” approach to its relations with other states, with a free-trade agreement with its largest neighbor, a very open immigration policy, an enthusiastic participation in international organizations and peacekeeping operations, and a longstanding commitment to collective security through NATO. But it would be bizarre to characterize Canadian foreign policy as aggressive or interventionist. (Incidentally, I think nationalism can also be more- or less-aggressive. A country can zealously defend its existing territory and prerogatives without seeking to expand territorially or to assert its dominance over neighboring states. Historically, though, nationalism has tended to be associated with expansionist and aggressive foreign policy.)
Similarly, it should be possible to be more “liberal” or more “conservative” in one’s internationalism. If I were to characterize the difference, I’d say that liberal internationalism is more universalist and comprehensive in its ambitions, more inclined toward supra-national institution-building and to moving international law away from a customary basis and toward explicit universal rules with an enforcement mechanism. A conservative internationalist, then, would be expected to be more wary of these trends – more comfortable with customary law than with explicit universal rules, more comfortable with states organizing for collective security than with supra-national institutions of universal pretension.
As I say, I haven’t read Nau’s book, but right from the subtitle I see an enormous problem: if James Polk was an “internationalist” of any sort, conservative or otherwise, then the term really is devoid of useful meaning. Polk pursued a policy of aggressive nationalist expansion on all fronts, threatening war with Britain to get a resolution of the Oregon boundary dispute, annexing Texas and waging aggressive war with Mexico to conquer what is now the American West, and attempting to purchase Cuba. There’s no obvious applicability of Polk’s foreign policy to an age when territorial expansion is no longer an option, but at a minimum it should be possible to agree that building a continental empire by force should be seen as the antithesis of any kind of internationalism, liberal or conservative.
It would be useful to have a term for a perspective that sought to maximize American interests within the context of understanding those interests as best-served by a relatively harmonious and stable international order, and that saw a proper role for America as the world’s leading power in trying to foster such an order. Useful because not every argument against militarism and hegemonism proceeds from non-interventionist premises, and because the imperatives of power make it very hard for me to see America ever adopting the foreign policy of, say, Switzerland. “Realism” is an inadequate term because it brings along the intellectual baggage of the academic theory of the same name; Walter Russell Mead’s term, “Hamiltonianism,” might do but it’s relatively obscure and tends to be reduced to the notion that foreign policy should promote American commercial interests, which is too narrow a definition for the purpose.
“Conservative internationalism” might have done well. It might have appealed to people who favor greater restraint, a greater emphasis on diplomacy, a greater respect for the sovereignty and legitimate interests of other states, and a greater interest in order and stability, than has been the case with American foreign policy since the end of the Cold War – but who don’t want to think of themselves as narrow-minded “isolationists” or hard-hearted “realists” or people who “blame America first.” So it’s a shame to lose the word to someone who appears to want it to serve the opposite purpose.
UPDATE: I see that in a subsequent post, Daniel Larison has already made the same point about Polk that I made above. I would like to second his view that it makes no sense to call Jefferson a “conservative internationalist” either. As for Truman, if I were to guess, I’d say Nau sees a discontinuity between the liberal internationalism of the UN and the “conservative internationalism” of NATO, and wants to draw attention specifically to anti-Communism as a template for what “conservative internationalism” should be. All that would be consistent with the idea that he’s using the term to re-boot neoconservatism.
Reagan is a tough one because his legacy remains highly contested. Reagan bombed Libya and withdrew from Lebanon. He built the MX missile and signed the INF treaty. He pursued a policy of confrontation with Brezhnev and a policy of détente with Gorbachev. His unilateral intervention in Grenada and support for the Salvadoran regime and for the Contra rebels in Nicaragua loomed large at the time, but pale in comparison to America’s earlier Cold War intervention in Indochina or its post-Cold War interventions in Panama, Kosovo, and Iraq. Similarly, his Administration facilitated the democratic transition in South Korea and the Philippines, but only after a history of supporting the right-wing dictators of those countries.
If you want to make a case for Reagan as the proper heir of Truman in a neo-conservative foreign policy tradition, you can do that, and if you want to you can call that tradition “conservative internationalism.” On the other hand, I think you can also make the case for Reagan as the proper heir of Eisenhower – as more a builder and husbander of national resources than a wanton spender thereof, and, though anything but an anti-interventionist, as genuinely committed to the “peace” part as well as the “strength” part of “peace through strength.”
The hawks have made a consistent habit of apocalypticism when speaking of Iran’s nuclear program: if we don’t act within such and such a time frame, we will pass the point of no return, and then catastrophe will ensue. There has never been much justification for this tone (and I’m not going to use this space to rehash why). But there is similarly little justification for advocates of a diplomatic solution (among whom I include myself) taking an apocalyptic tone towards setbacks in negotiations. Indeed, such a tone plays entirely into the hands of the hawks.
The most fundamental premise of the Iran hawks is that a nuclear (or even nuclear-capable) Iran is absolutely unacceptable, and that we are justified going to war to try to prevent such an eventuality even if there is substantial uncertainty that military action will succeed, because doing nothing guarantees a catastrophic outcome. A key premise of Iran doves must therefore be that this is not the case – that nuclearization would be a bad outcome, one worth paying a real price to avoid, but not catastrophic, and certainly not something that would justify all the dangers of military action.
Moreover, the dovish position holds that Iran seeks a nuclear capability for rational reasons (deterrence and the desire to bolster regional influence), and for emotional reasons that are entirely comprehensible (national pride, primarily). If that premise is true, and if Iran has no more grandiose ambitions, to say nothing of suicidal plans to plunge the world into a nuclear maelstrom, then logically there really is no “point of no return” beyond which diplomacy becomes impossible. Instead, there are better and worse opportunities to get a deal on more or less favorable terms to ourselves.
If the possibility of a nuclear Iran is not worth launching a war over (and it isn’t), then by the same token we need not be so desperate for a deal that every mistake or setback raises the prospect of total failure and “inevitable” armed conflict. Instead of panicking at the possibility that a particular round of talks might fail, advocates of diplomacy should stress the clear rational interest for both parties in a diplomatic solution, and therefore express confidence that, ultimately, a diplomatic solution will be forthcoming – and that the real question is how long it will take and what price will be paid by both sides.
People should make the arguments they believe are true, of course, but the resort to hyperbole is really a rhetorical strategy rather than an argument, and in this case it’s a strategy that has the opposite effect of that intended, aiding the hawks more than the doves.
Well, Lisa Kron and Jeanine Tesori have shattered the walls of that Venn diagram with a delightful and painful new musical, based on Alison Bechdel’s graphic memoir of the same title, Fun Home, now running at New York’s Public Theater.
By “graphic memoir” I don’t mean that the material is graphic – though it certainly is at times. I mean that the narrative is conveyed graphically. Ms. Bechdel has been writing and drawing the comic strip, “Dykes To Watch Out For,” a Doonesbury-esque soap opera with social commentary, for decades; when she set out to write the story of her father’s life and (most likely) self-inflicted death, she turned to the same form, and produced an absolutely searing portrait that could serve as a definition of “dysfunctional,” yet breaks radically with the tradition of such works in its utter absence of self-pity and its painful empathy for the tyrannical father whom she knows she resembles.
When I first heard someone had adapted Fun Home into a musical, I said: that’s impossible. First of all, the book is incredibly close to the consciousness of the Bechdel character – we spend a huge amount of time alone with her thoughts and behaviors. Second, it’s wildly non-linear, jumping around between childhood, her college years (the era of her father’s death), and a present-tense that provides the narrative voice. Third, as a comic it’s naturally cinematic, directing the eye, framing what it wants us to see. That’s not how the stage works. And finally, a musical?!? This painful, beautiful but impacted work is going to launch into song?
But Kron and Tesori have accomplished the impossible. They’ve created a work that is true to the original story and material without being beholden to a vision crafted for another medium. And they’ve made it sing.
How did they do it?
They kept the non-linearity of the book, but structured it more clearly as a classic memory play. The mature Alison (Beth Malone), now roughly the age her father was when he died, is struggling to write the book that serves as the basis for the play, and in that capacity she guides us back to her childhood, and then her college days, and then mixes periods as she needs to. She talks to us, but she also talks at the characters in her past – those memories are still alive to her, and so she is a dramatic character, not just an observer.
They – and director Sam Gold, and his design team, David Zinn (set and costumes), Ben Stanton (lighting) and Jim Findlay and Jeff Sugg (video) – managed to reference the graphic origins without trying to recreate them, instead taking full advantage of the possibilities of the stage. We start in the mature Alison’s loft, but pieces of furniture from her childhood home (her father obsessively restored their Victorian pile to museum-level perfection) litter the stage – the memories she lives with, or has conjured up for us. Other scenes intrude on the same space – her Oberlin College dorm room, for example, where Alison (played at this age by the winningly nerdy Alexandra Socha) has her first proper sexual experience with her first girlfriend (the way-too-perfect Roberta Colindrez). But we never forget that this is being staged for us by the mature Alison. Then, for a while, a black curtain isolates a narrow downstage playing area, hiding everything behind; this is when the young Alison visits New York with dad, a trip whose harrowing emotional quality fully justifies the immersion in blackness. And then, on a visit home with the girlfriend, right after Alison’s come out of the closet, the curtain rises to reveal her childhood home in all its glory, and we are finally and fully there. It’s a breathtaking moment – and entirely theatrical; you couldn’t ever get the same effect in a comic strip.
They changed the young Alison (Sydney Lucas, who will be, as they say, going places) in key ways, making her far more winning and outgoing than the Alison of the book, and dropping the obsessive-compulsive behaviors that occupy a great deal of the book’s narrative; in general, this Alison seems to have emerged far healthier than the Alison of the book. She’s angry and sad about her father, yearning for him and wishing she could put him behind her, but she doesn’t come off as deeply damaged. And they’ve played some creepy scenes for comedy, to magnificent effect. For example: the book and play are called Fun Home because that’s the ironic name the kids gave to the family business, a funeral home, which adjoined the house. And the first real number of the musical is a Jackson Five-style advertisement the kids (Alison has two brothers, played by Griffin Birney and Noah Hinsdale, both of whom do a fine job but neither of whom has been given a proper character – but that’s true of the book as well) have put together, which they start singing from inside the coffins, and which contains rhymes like “condolence book” and “aneurysm hook” (yes, I know that’s just rhyming “book” and “hook” – that’s not the point). Everyone in the audience fell in love with little Alison during that number – and it’s nothing the Alison of the book would have done.
And they let us see her father’s neediness. In the book, we necessarily see him entirely through Alison’s eyes. But on stage, the father (Michael Cerveris, giving an almost too-painfully real performance) is an independent presence. He doesn’t speak to us – only the mature Alison is allowed to do that – but he sings his own songs, and we see into his inner life. The inner life of a closeted gay man, whose life was shattered by the possibilities of liberation.
That’s what both the book and the play are ultimately about. Alison’s father grew up in an era when homosexuality was first stigmatized but culturally ignored, then medicalized, then finally ceased to be defined by the larger culture, as gay people decided for themselves who they were and what recognition they deserved. Bechdel does not flinch from showing the ugliness and danger of the closeted life her father led – he has repeated affairs with much younger men, including some who are underage; he turns to hustlers; he brings home disease – and Malone is scathing in her commentary on those choices. But it was a life whose order he understood, and meticulously, even ruthlessly, maintained.
After his daughter comes out, though, and his wife (played with tightly reined-in hatred by Judy Kuhn) decides enough is enough and leaves him, that order is shattered. And the suggestion of the play is that he just wasn’t psychically capable of picking up the pieces and building something new, and more open. So he stepped in front of a truck.
If you have a heart, it’ll break it. But you’ll be laughing as you cry, because for all its darkness and pain, this play is actually a lot of fun.
Fun Home plays at New York’s Public Theater through December 1st.
Just a few quick observations on last night’s major results.
First, Bill de Blasio is a pretty standard liberal Democrat who ran a very good campaign against a weak opponent in a very Democratic city. It’s hardly surprising that he won. What’s surprising is that New York has not trusted a Democrat to be mayor in twenty years. And Ed Koch was not a standard liberal Democrat; there’s more continuity than not from the Koch years through the Giuliani years to the Bloomberg years, certainly in terms of their respective electoral bases. And, given that history, it’s surprising that he won so overwhelmingly, taking nearly three-quarters of the vote overall and basically every meaningful demographic slice of the city (the New York Times has a handy map that lets you look at the actual vote tally in different districts, and you can filter by districts that fall in a certain income category or have a certain racial makeup – it’s fascinating to play around with).
I interpret the overwhelming margin as a version of a bandwagon effect. The city as a whole decided it was time for a change, and decided to give the agent of change its collective blessing. De Blasio has a mandate not just from those New Yorkers who never liked Bloomberg and what he stood for, but from many New Yorkers who voted for a third Bloomberg term and still like him. (De Blasio carried districts that voted for Bloomberg in 2009 by a 14.5% margin.)
Bloomberg, after all, is not leaving office under a cloud of scandal or being widely acknowledged a failure. The city is doing extremely well by most measures, and he’s leaving office reasonably popular. The huge margin for a candidate who ran explicitly as a candidate of change isn’t a repudiation of Bloombergism as such but an all-but-unanimous declaration that Bloombergism has achieved its proper objectives, and now it’s time to try a new tack to accomplish other objectives.
Second, Chris Christie is now officially the only Republican with broad popular appeal. No, that appeal is not deep – most people know absolutely nothing about him, and they may come to hate him once they get to know him. Yes, he won against an extraordinarily weak opponent – but if the Democrats thought they had a solid chance of beating him, they would have put up someone stronger. And yes, some of the juicy targets he’s aimed at in New Jersey are not nearly so juicy at the Federal level. None of that matters right now. Right now, the Electability Caucus in the Republican Party has a reasonable candidate. And his most plausible opponent for that title is surnamed “Bush.”
All that speaks to his political prospects, which I see as very good right now. It doesn’t say anything about the meaning of his political prospects – because I don’t think there is any meaning to them. I don’t think Chris Christie represents a particular wing or faction or disposition within the Republican Party – in the way that, say, Rand Paul or even Marco Rubio does. His agenda as governor has been substantially identical to the agenda of Republican governors across the country – and has borne some similarity to the agenda of the Democratic governor across the river, Andrew Cuomo. That fact reflects, more than anything, the particular balance of pressures on statehouses in the post-financial-crisis era, which has produced austerity to one degree or another all around the country; Republicans have done as well as they have electorally at the state level because an agenda of austerity pleases rather than angers their electoral base.
All of which is a roundabout way of saying that if Christie wins the Republican Presidential nomination in 2016, he will do so because he appears electable, but not because he mounts any kind of ideological challenge to the Republican center of gravity – on any substantial issue.
Finally: I know very little about the Virginia gubernatorial race, but I confidently predict that Terry McAuliffe will prove a really unpopular governor. I just can’t imagine any electorate warming to him. It’s a testament to how rapidly Virginia is changing and how unpopular Cuccinelli must have been that he could lose to McAuliffe. But if I were Hillary Clinton, I’d be mildly worried about being dragged down in Virginia in 2016 – where she should still be favored, all else being equal – because of my association with McAuliffe.
I was delighted this morning to join Rod Dreher in his ruminations on rock music, which he loves but has difficulty justifying in terms of his Christian commitments. The post is worth reading in full; I’m going to take as my own jumping-off point his conclusion:
I end this digression almost as conflicted and as confused, and as “dialectic and bizarre,” as I started. The one thing my theologian friend’s question helped me to learn, by sending me back to Auden, is that the answer, or answers, will likely emerge out of a reflection on the distinction between what is Real and what is True, and how the two relate dialectically in art, including the art of rock music.
I think that’s very much the right place to start. If I understand Auden, or Dreher’s take on Auden, correctly, then the difference between “Real” and “True” is the difference between phenomenology and ontology – between an experience and an objective reality independent of subjective experience. From Dreher’s perspective, that realm of “True” includes moral truths, and he’s asking a question, as old as Plato, about the effects of art that allows us to participate in an experience that is “untrue” in a moral sense. Which, to me, devolves back to a question about whether it’s a good idea to have experience as such – because, clearly, from a moral perspective it’s better to understand from the inside what it’s like to be, say, a Nazi, or a prostitute, or whatever, through an encounter with great art that really takes you inside that experience, than to have to actually become a Nazi, or a prostitute, or whatever.
But I wanted to make another observation. Dreher comments in passing that he can’t appreciate rap music in part for aesthetic reasons: he just doesn’t like it. And there’s no point in trying to argue somebody into or out of an aesthetic appreciation – you can learn to like something you don’t appreciate initially, but you can’t really be taught to like it in the way that you can be taught to understand it. But he also says that he can’t imagine what music, what aesthetic experience, would make him capable of appreciating the experience of the lyrics of a rap like “Big Pimpin’” – a rap he calls “animalistic.”
I just wanted to point out that Jay-Z himself had conflicted feelings about that bit of writing, and used the same terminology that Dreher did to describe it:
WSJ: A lot of musicians claim to never go back and listen to their old material, but you obviously took some joy in digging back into your archives for “Decoded.”
Jay-Z: I believe that it’s necessary. Especially for rap music, where the words are fast and for the most part there’s not a consistent melody that people can sing along to. So a lot gets lost in translation. Because rap music is poetry, I thought it was important to describe it as such.
WSJ: You’re famous for not writing your lyrics down as you compose them. What changes about them when you see them on the page like this?
Jay-Z: Some [lyrics] become really profound when you see them in writing. Not “Big Pimpin.” That’s the exception. It was like, I can’t believe I said that. And kept saying it. What kind of animal would say this sort of thing? Reading it is really harsh. [emphasis mine]
If I take Jay-Z at his word (and that’s my inclination in the absence of a reason not to), he’s saying that this particular lyric came from someplace he’s a little alarmed to discover in himself. He doesn’t read his words and say: wow, I got at something important and complex. He reads them and says: did I say that? How could I have? Must I this thing of darkness acknowledge mine?
Is that, perhaps, a stance from which the lyric can be experienced more effectively by someone like Dreher? Does framing the lyric that way, as something that came out of the artist almost involuntarily, and that disturbed him when he looked at what he created, make it something that can be appreciated from within a moral framework that Dreher would respect? Or does it do the opposite – suggest that the artist himself should have suppressed it?
That interview with Jay-Z went on:
WSJ: What would you change about hip-hop if you could?
Jay-Z: We have to find our way back to true emotion. This is going to sound so sappy, but love is the only thing that stands the test of time. “The Miseducation of Lauryn Hill” was all about love. Andre 3000, “The Love Below.” Even NWA, at its core, that was about love for a neighborhood.
We’re chasing a lot of sounds now, but I’m not hearing anyone’s real voice. The emotion of where you are in your life.
“The emotion of where you are in your life” – that isn’t always love, and may not always stand the test of time, but it’s something we, as a species, are not very good at living in. If great art enables us to do that, connecting us more deeply to ourselves by connecting us to somebody else who has connected deeply to him- or herself, then I’m for it. And if we can’t make moral sense of that experience, well, sometimes it’s hard to make moral sense of life. But we can’t escape that problem by not living.