Because I just don’t have much more to say on the whole religious freedom question and I’m tired of saying what I’ve already said.
On the subject of “are corporations people,” meanwhile, I feel like Jacob T. Levy makes a pretty interesting argument why, even if you believe corporations have a robust set of constitutional rights, that not only doesn’t imply supporting the result in Hobby Lobby but may well cut the other way:
The general doctrine of corporate personhood is right: corporations can enter into contracts, own property, and be held liable for wrongdoing or debts *as separate entities* from the various natural biological persons involved – and this is a necessary and valuable organizational innovation.
The particular doctrine of corporate persons as holders of constitutional rights is right: the corporation qua property owner has, for example, 4th Amendment rights against its property being unreasonably warrantlessly searched, and 5th Amendment rights against it being taken for public use without compensation, or against being deprived of it without due process of law. . . .
Hobby Lobby seems to me to stand for a very different proposition: “[P]rotecting the free-exercise rights of corporations like Hobby Lobby, Conestoga, and Mardel protects the religious liberty of the humans who own and control those companies.” . . .
The judgment today maintains that a closely-held corporation like Hobby Lobby is so close to the natural persons behind it that it’s not really a distinct corporate person at all; it’s just a costume that the Green family puts on and takes off as it suits them.
The decision has to pierce that veil because corporations qua corporations have no particular reason to hold religious views of any kind:
Notice that the right of a corporation to freedom of the press or to be secure in its property against searches or expropriation makes perfectly good sense in terms of the corporate person’s own interests, regardless of who its owners happen to be. Corporate religious liberty isn’t like that. The reason we have the emphasis here on “closely-held” corporations is because the corporate veil is being pierced in order to look directly at the natural persons behind it. . . .
[H]obby Lobby, a for-profit corporation like IBM, can’t be described as itself having a religious belief. Making sense of that idea requires making the corporate person disappear from the description and talking about the Green family, treating the “closely held” corporation as if it were a partnership or sole proprietorship that doesn’t have a corporate-style separateness from the natural persons. Try as I might, I can’t persuade myself that that’s right. Corporations are persons, or corporations are made out of people – the two thoughts lead to very different conclusions, and I think protecting the former requires rejecting this kind of easy recourse to the latter.
His view is basically congruent with Patrick Deneen’s view of the place of corporations in our collective life, but coming from the opposite end.
I’m not sure I agree with that view, because it presumes a radical dichotomy between for-profit entities, which can only have financial interests, and actual people, who can have a variety of interests and values. (Not-for-profit entities, by definition, can also have interests and values other than profit.) You can believe in the idea of corporate personhood without believing that private corporations must be profit-maximizing entities; they can instead have some characteristics of a community, albeit a hierarchical one rather than a democratically organized one. But it’s still an interesting counter to the line that Hobby Lobby was yet another decision in favor of corporate power.
(Ok: I’ll talk about the religious freedom stuff briefly. I think it’s appropriate for the government to guarantee access to contraceptive services – I think it’s a positive good. I’d like that to be achieved in a way that doesn’t make religious believers feel they are directly providing a service they consider profoundly abhorrent, because I believe in a robust conception of freedom of religion. I see a clear distinction between that and the cakes-for-gay-weddings business, because there is nothing abhorrent about providing a cake – the abhorred (by the baker) act is the wedding, and the baker is not providing that; allowing her to refuse service is pretty plainly discrimination against people whose behavior she disapproves of, and the only question is whether we think it’s invidious and whether the class discriminated against deserves any protections. Whether we should be more or less vigilant about policing discrimination in general is another matter. I think my hypothetical Scientology school network is a tougher nut because Scientologists believe mental health services do active harm, and I really do tend to think that the reason the Court wouldn’t recognize a right to deny mental health services in such a case is that it simply wouldn’t treat the moral logic of the Church of Scientology with the dignity that it accords the views of the Catholic Church. Which is pretty much what the Court said in Hobby Lobby when it disclaimed any possible application of this decision to minority religions that object to transfusions, etc. And yes, that troubles me.)
After taking in last year’s excellent Blithe Spirit at the Stratford Festival, I argued that Coward’s play anticipated Seinfeld in its characters’ utter self-involvement and the play’s fundamental misanthropy. Well, the director of this year’s Coward, Alisa Palmer, seems to have been of the same mind about her play: “Hay Fever reminds me of Seinfeld, a show whose creators, like Coward, pre-empted their own critics by declaring, cheekily, that their show was ‘about nothing.’”
I should be delighted that we agree, but, you know, I’ve got my critic hat on. And the thing is, generally nothing will come of nothing. Palmer’s production comes perilously close to proving the truth of Lear’s statement, though the play is redeemed by certain key performances.
Hay Fever’s plot (which Coward himself admitted was minimal to the point of near-nonexistence) revolves around the by-now well-worn scenario of throwing a bunch of squares and a bunch of cool, artistic types into close proximity. Judith Bliss (Lucy Peacock), retired stage actress, her children, Sorel (Ruby Joy) and Simon (Tyrone Savage), and her novelist husband, David (Kevin Bundy), have each, unbeknownst to the others, invited a romantic prospect down for the weekend. They are each appalled by having their respective plans upset by the others, and respond by treating each other’s guests with outrageous rudeness.
Over the course of the first evening, the opening pairings, which are obviously mismatched in terms of both age and temperament, get more plausibly recombined. These recombinations prompt Judith to bouts of extreme theatrical excess in which her family, knowing the routine, join her, to their guests’ distinct alarm. Not so much Blissed-out as simply exhausted, the invitees decamp collectively first thing the next morning, leaving their hosts to comment on their rudeness, and resume their normal, quarrelsome family life.
The play could be read as a satire on the artistic sensibility – or, alternatively, as a satire on the lack of sensibility of the squares – or both, something like the movie, “Impromptu.” But Coward isn’t engaged in social satire, because satire requires an affirmative set of values against which a society may be judged. Hay Fever has no such values – it’s blissfully relativistic. Instead of values, it has manners. But nobody agrees what those manners ought to be – and it is here where the play approaches the Seinfeldian.
It’s significant that what characters on all sides of Hay Fever are primarily concerned with is the rudeness of the other characters, a rudeness that can be manifested by too little attention or too much or simply the very wrong kind. Seinfeld’s plots frequently revolve around characters asserting or violating norms of behavior, and engaging in wild theatrics around the necessity of upholding said norms, notwithstanding that said norms are invariably completely spurious. The Bliss family inhabit a somewhat similar world, inasmuch as they, by virtue of their position as artists, have a kind of professional responsibility to be able to play all sides, emotionally, in a scene, and hence can’t take any of them seriously. Each member of the family plays this out a bit differently; the novelist likes to see things as they really are, and then pretend that they are otherwise, while the actress dives right in without bothering to discern whether there is a reality of any kind at question. But it amounts to much the same thing either way: they all believe that sincerity is the key thing in art; once you can fake that, you can fake anything.
That’s what I see in the play, at any rate. To realize that vision requires giving each visitor to the Bliss household a distinct integrity, a vision of how one ought to behave, that will be upended by the Blisses. The only character whom I saw manifesting such a vision was Sanjay Talwar’s professional diplomatist, Richard, and it’s not an accident, I think, that Talwar’s touching performance gets the most heartfelt laughs of the night. (It’s also not an accident, I think, that the diplomatist is the only character definitively to transgress against propriety, in making a pass at the married Judith Bliss.) Gareth Potter’s bluff boxer, Sandy, is perfectly plausible, as is Cynthia Dale’s predatory minx, Myra. But they don’t come into sufficiently sharp focus; we don’t see clearly how their respective senses of the way people ought to behave are disturbed by the ways in which the Blisses transgress. (Dale, in particular, seems more put out that she isn’t getting over than furious that David has already read – indeed, written, many times – the script she’s reading from as she tries to seduce him.)
Something’s off on the Bliss side of the fence as well, though Peacock’s antics never failed to bring a smile to my face, and Bundy got at the heart of the matter in his scene with Dale. I have a sneaking suspicion that Palmer harbors sentimental feelings for the family, and that she’s directed them to play big even when they aren’t formally “playing” a role so that we’ll get that these are lovable eccentrics, and fall for them as she has. But if she has sentimentalized them, it must be because they are artists, like she is. And the thing to remember about the Blisses is that they aren’t really great artists – they aren’t even really that good. They’re successful pros; that’s all. The play Judith loved doing so much sounds ghastly – her children tell her it’s ghastly, and even she knows it’s ghastly. But it was a great role because it gave her so much scenery to chew. David has no pretensions to greatness; he’s a hack novelist and he knows he’s a hack novelist. There is no Chopin here, no Delacroix.
Coward is honest enough to see the Blisses’ eccentric manners as tribal markers rather than signs of any higher calling. But if that’s all they are, then this can’t be a story about hapless squares running afoul of charmingly outrageous artists. Which is a good thing, actually, because that particular story just isn’t terribly funny, and no amount of “amping up” of the acting nor layering-on of slapstick will really make it so.
Hay Fever plays through October 11th at Stratford’s Avon Theatre.
Eugene Ionesco’s play, The Killer, is rarely produced, and I think I understand the reason. It’s over-long (particularly the third act, which feels interminable), highly abstract (the principal character is given to speechifying in airy generalities about his experience), and yet also distinctly dated (the intimations of incipient fascism in Ma Piper’s political campaign feel rooted in France in the 1950s, and have nothing like the universal resonance of the transformations in Rhinoceros).
But the core of the play is a meditation on original sin understood as perversity, a notion ideally suited to Ionesco’s theater of the absurd. And the Theater for a New Audience’s current production is an excellent opportunity to experience this notion played out on stage.
The Killer begins with Ionesco’s everyman character, Berenger (Michael Shannon), arriving in a new “Radiant City,” a thoroughly planned urban development in a part of his city that he’s never been to. The name and era suggest a Le Corbusier-esque modernist utopia, but the town as described sounds more like something out of “The Truman Show” – roses, manicured lawns, beautiful brickwork, and a crystal dome that keeps out all foul weather (the roses are watered by drip irrigation). (Wisely, designer Suttirat Larlarb shows us none of this – the Radiant City exists entirely in our mind’s eye.) Berenger, escorted by the Architect (Robert Stanton), the civil servant responsible for creating this utopia, comes to life in the space, connecting it with an experience in his childhood of a kind of euphoria in which he perceived the world as radiant, an experience that, although he never felt anything like it again, kept him going through the pointless tedium of mundane life. But here, perhaps, here he could have such an experience on a daily basis.
Berenger is so carried away, he falls in love, or something resembling it, with the architect’s pretty blonde assistant, Dennie (Stephanie Bunch), who arrives on the scene announcing she’s going to quit – against the Architect’s most strenuous advice. By the end of the scene, during which Dennie not only doesn’t return Berenger’s affections but barely takes note of his existence, Berenger is convinced they are engaged. He determines to buy a house in this perfect city in which they might live together.
And then, immediately after agreeing to purchase a house in the community, the whole vision comes crashing down. It turns out there’s a serial killer on the loose in this ideal environment, whose method is to lure people (always in threes: a man, a woman and a child) to a lagoon, and shove them in. People have grown so frightened that they generally don’t leave their houses, and are moving out of the neighborhood en masse, but still the killer never lacks for victims. His most recent victim: Dennie.
Berenger is appalled, heartbroken, distraught that his vision of happiness has been so thoroughly violated. He demands that something be done – but the Architect breezily avers that all possible steps have already been taken, to no avail. After a depressing dinner with the Architect at a pub by the bus station, he trudges home, back to the dreary rainy city that he left – but determined to bring the killer to justice, somehow. End of Act I.
It’s a marvelous beginning to the play, anchored by two splendid performances. Stanton’s Architect is a picture of punctilious perfectionism, quietly proud of his creation but never smug, smiling blandly at Berenger even as we can tell that he desperately wants to get back to the office. (The Architect’s dialogue with Berenger is repeatedly interrupted by calls from the office, which the Architect answers by pulling a ’50s-era corded phone out of his pocket. Whether this is Ionesco’s prescient original direction, or director Darko Tresnjak’s brainstorm, it brilliantly revitalizes what has become a cliche in our cellular age.) And I applaud Tresnjak for the decision to cast Shannon against type as Berenger. Looking at Shannon’s craggy, pitted face, we feel how he has been oppressed by ordinary life, and there’s something so incongruous about seeing Shannon skip about the stage in glee and prostrate himself before his beloved Dennie – he makes Berenger come alive as a specific character, and therefore makes him more universal than he would be if played by a more obvious naif.
You can see, I’m sure, why I describe this play as a meditation on original sin, the Radiant City alluding obviously to the Garden of Eden, the killer as the serpent in the grass (who seduces his victims rather than merely surprising them). Ionesco would seem to be satirizing our efforts to get back to that garden, mocking, along with our Promethean presumption, our specifically male presumption to see every woman as a potential Eve. (Berenger’s overtures to Dennie come off as especially creepy in the age of Elliot Rodger, and connect him, surreptitiously, to the killer, who, Berenger suspects in Act III, has his own “issues” with having been rejected by women.)
But that kind of satire is secondary to Ionesco’s primary aim. From the moment we learn about the killer’s existence, Ionesco unsettles us by making him seem, well, silly. How does the killer lure his victims to their deaths? He tries begging, and selling them trifles, but this never works. What works is offering to show them a picture of “the Colonel.” This, the Architect informs us, nobody is able to resist. Why? We have no idea, and the play has no interest in telling us. The point is its absurdity.
We finally get to see this Colonel’s picture in Act II, when Berenger comes home to his dingy flat (under the management of the Concierge, played by the always delightful Kristine Nielsen, who doubles as the proto-fascist politician Ma Piper in Act III), and finds his perennially unwell friend, Edward (a deliciously Renfieldian Paul Sparks), waiting for him. Edward, to Berenger’s surprise, seems to know all about the killer. This knowledge becomes less surprising when we learn that Edward’s briefcase, which he is loath to let leave his hand, contains all the materials connected to the killer – a map of the Radiant City marked where the killer has struck, a diary detailing his attacks, the trinkets the killer tries to sell, and dozens of photos of that Colonel. Edward expresses mystification as to how all of this material came to be in his possession – and Berenger, surprisingly, never suspects him. Instead he enlists his aid to bring these materials to the police so they can finally catch the killer.
Act III, the weakest act by far, consists of Berenger’s continually frustrated attempts to reach the police, and to recover the briefcase (which Edward sneakily avoids taking with him when they leave Berenger’s flat), and then of his solitary confrontation with the killer. Berenger attempts, at length, to convince the killer that his career of crime is absurd, using a variety of psychological and philosophical frameworks, even venturing into Christian theology, to no avail. The killer does nothing but laugh in response. This, along with the satire of Ma Piper, is the most thuddingly obvious part of the play, the point being that the primal urge to kill not only cannot be reasoned out of us, but cannot even be comprehended.
But to me, the heart of the matter is that photo of the Colonel, which points to a different absurdity, a different perversity. It’s not only that our urge to kill is irrational and perverse. Our victimhood, our susceptibility to temptation, is even more ludicrous. What does us in is not the promise of worldly wealth and fame, not sexual or delirious experience. What none of us can resist is a completely unexceptional photo of a mustachioed officer.
The Killer plays at the Polonsky Shakespeare Center through June 29th.
I’ve been enjoying the back-and-forth between Ross Douthat and Matt Yglesias over the true strength of the Democratic coalition. I thought Yglesias was getting the better of the argument in the first round, but in his most recent contribution I think Douthat lands a blow on himself, whether he realizes it or not. Here’s the last paragraph of Douthat’s post from this morning:
I would add, as a coda, I’m not at all persuaded by Yglesias’s initial premise either — the idea that Clinton’s polling advantages are the result of ideological unity, rather than a case of her brand covering over disagreements that would matter more if she weren’t running. Give me an Andrew Cuomo-versus-Elizabeth Warren tilt for the nomination, for instance, and I’d wager that all manner of intra-Democratic divisions would suddenly matter much more than they seem to today. For that matter, give me a candidate exactly like Hillary who doesn’t have her mystique and history, and it’s easy to imagine the issues she’d get challenged on. (Let’s just say there’s a reason Robert Kagan likes her.) I agree with Yglesias that the Democrats are relatively unified, especially by their party’s historical standards … but it’s her aura that’s sealing that unity, rather than the other way around.
I think Douthat is entirely right on that last point. Indeed, I’d go further: without her mystique, she’d be a long-shot for the nomination. Consider her resume. First Lady is a ceremonial post. Her one substantial task in that post, organizing health care reform, was a political failure of the first order. Next, she was Senator from New York, where she compiled a respectable record of constituent service, but did not distinguish herself either as a legislator or in intra-party policy debates. Next, she was a failed Presidential candidate. And finally, she was Secretary of State, where, again, you really have to squint to see a substantial record of accomplishment. And the State Department hasn’t been a stepping-stone to the White House for quite some time. She’s an organized and disciplined politician, but she’s rarely noted for her political charm or acumen.
The one thing that distinguishes her from your typical Democrat is that she is substantially more hawkish, having taken the hawkish side in essentially every political debate from Bosnia and Kosovo through Afghanistan and Iraq and into the Obama-era debates over Libya, Syria and Ukraine. If she weren’t Hillary Clinton, that fact would not only make her a long shot; it would probably be disqualifying.
But I think that cuts against his ultimate point, that it is only Hillary Clinton’s mystique that is holding the coalition together. On the contrary, it’s the high degree of policy consensus, combined with a steadily-strengthening conviction of the perfidy of the opposing party, that holds the Democratic coalition together. The mystique is what holds the Clinton campaign together, not the party.
If Hillary Clinton had died in 2013 after sustaining that concussion, there would be a real tussle within the Democratic Party over who is best positioned to lead the party to another Presidential victory, and potentially a real debate over how far to the left to lean on economic issues. But I have a hard time picturing the kind of debate over fundamental direction that characterized the 1984, 1988 or 1992 Democratic races, or the 1976 or 1980 GOP contests. In Hillary Clinton’s absence, the 1988 Republicans would seem to me to be a likely model for what we’d expect from Democrats leading up to 2016.
Instead, it’s overwhelmingly likely that Clinton will be the nominee. And she probably has as good a shot at winning the Presidency as any other Democrat, or better. But what then? The biggest risk to the future of the Democratic coalition is events, and the decisions future Democratic Presidents make in response to them. Given her hawkish inclinations, it may be that, far from being what holds the Democratic coalition together, Hillary Clinton’s decisions in office, if she becomes President, could be a significant risk factor for the future of that coalition. And an entirely avoidable one, were it not for that mystique.
My point was not that we are obliged to “fix” Iraq, or that the Iraqis have an infinite claim against us. You can’t be obliged to do the impossible, and obviously claims can’t be infinite. But claims can be very large without being infinite, and we shouldn’t pretend they don’t exist.
Nor was my point that there is “no difference” between action and inaction – those are Sullivan’s words, not mine. Obviously, there is an enormous difference between killing somebody and not preventing their death – and a really very big difference between the two if you don’t have any plausible means of prevention ready to hand. What I said is that inaction, for a hegemonic power assumed to be engaged essentially everywhere, is a kind of action. A policy of indifference is also a policy. That’s the difference between the United States and Sweden, and that difference is a consequence of the differences in our relative power.
Nor did I argue for military intervention, which I think would be counterproductive. As I said in the piece, leaving a residual force would have given us little leverage to drive a political settlement in Iraq, and in the absence of a political settlement violence was likely to resume, and escalate, as it has. I agree with Tom Ricks: we should not be surprised at how Iraq has deteriorated, and recent events not only don’t prove we should have left a residual force, but arguably prove the opposite – that leaving a residual force would have put us in an even worse situation.
I made the following analogy:
ISIS may be likened to the Khmer Rouge, who might never have come to power in Cambodia had we not bombed that country as part of our failed effort to defeat North Vietnam. Then, of course, it was our old enemy, Vietnam, that kicked out the Khmer Rouge from Cambodia. Similarly, if ISIS is prevented from overrunning Iraq, it will probably be because of intervention by Iran.
That doesn’t sound to me like a call for America to engage in air strikes or re-insert troops. Because it isn’t.
I will note in passing that I strongly opposed any intervention in Syria, among other reasons because I strongly suspected we’d wind up, unintentionally or not, supporting precisely the kinds of groups that have coalesced into ISIS.
Now, having said that, here’s how Sullivan ends his post:
Leave it alone. And do what we can to protect ourselves. That doesn’t guarantee anything. But intervention guarantees far worse.
That’s the attitude I’m arguing against. No, we can’t “fix” Iraq, and renewed military intervention would be counterproductive. But we do owe the Iraqis more than a determination simply to “protect ourselves.” We owe it to them to do what we can to ameliorate the situation.
So what can we actually do?
The single most helpful thing we can do, it seems to me, is to work to prevent this from becoming a regional war. That means working with Turkey, Iran and Saudi Arabia to persuade them that all of their interests will be harmed by such a war, and that instead their interests lie in laboring to produce a political settlement in Iraq. We have less influence than we once did in Turkey, and we have very little influence in Iran. One would hope that after all this time we have some influence in Saudi Arabia, though there’s only intermittent evidence of this. Nonetheless, we should use what we have, and try to bring these powers, potentially enemies of each other, into something resembling concert. This is basically what Leon Hadar advocates, and, as he notes, we won’t be able to force any of these powers to do what we wish – all we can do is try to influence them through diplomatic engagement, both with carrots and sticks.
But here’s the thing: there will be costs associated with both those carrots and those sticks. We have other goals with all of these countries; we can’t get everything we want. Whether we should pay those costs or not is partly a function of how much we feel we owe Iraq.
Daniel Larison also responded to my post, and also seems to think I favor renewed military intervention, which I do not. In fact, I agree with much of what he writes, particularly this:
The question is not whether the U.S. has done a great deal to create the current situation in Iraq–obviously it has–but what the U.S. can constructively do to remedy the country’s many woes. A government may be responsible for something and nonetheless be completely unqualified to repair the damage it has done. While there is a certain justice to the idea that the people responsible for breaking something are obliged to fix it, that takes for granted that they have the first clue how to rebuild what they’ve destroyed.
But I have a bone to pick with this:
If we took this definition of “indirect responsibility” seriously and applied it consistently, there is almost no event in the world for which the U.S. would not be somehow “indirectly responsible.” That way lie madness, endless conflict, and exhaustion.
I understand what he means, but I think he’s confusing what I intended as description for prescription. My definition of “indirect responsibility” is simply to say that once you have positioned yourself as a global hegemon, declared yourself “indispensable” and arrogated to yourself rights that are not granted to any other state, of course you are indirectly responsible for just about everything that happens, to a greater or lesser degree depending on the situation. The madness lies not in my description of reality but in the reality itself.
We should seek to change that reality. Perhaps I am overly pessimistic, but I assume that this will be a difficult and lengthy labor, with many setbacks along the way. I am hard-pressed to name another hegemonic power that acceded peacefully to a more multi-polar reality. Most empires decay before crumbling with catastrophic speed. Moreover, a policy of – let’s call it “diplomatically-engaged restraint” – may well produce some results that look practically indistinguishable from that crumbling. Indeed, that’s exactly what John McCain sees in Iraq right now. We should be aware of that fact – and that was the point of my parting line about “minding our own business” not necessarily leading to any kind of solution to the world’s conflicts.
In the meantime, we emphatically do not need to intervene everywhere to solve every problem. But that’s not the same thing as saying that we don’t need to have a policy – or that our policy can plausibly be tailored to a narrow vision of the national interest. We are simply too powerful, and too enmeshed in too many commitments.
I’ve been struggling to find something useful to say about the horrible situation unfolding in Iraq, and Daniel Larison’s most recent post has finally enabled me to crystallize my small contribution.
I agree with him that maintaining a presence in Iraq would not have given us very much ability to shape events. The best evidence of that is the situation in Afghanistan, where our presence – augmented for several years – has done little to change the character of the Afghan government, or to prevent the reemergence of the Taliban once we began to reduce our commitment once again.
But we are responsible for the situation in Iraq. We are directly responsible in that we broke the existing arrangement of power and installed ourselves as the occupier. We are also indirectly responsible inasmuch as our overweening hegemonic influence in the region means that inaction is also a kind of action. So, because the Syrian civil war has not been resolved, but has expanded and become more violent and extreme, and because that civil war and Iraq’s are, with the rise of ISIS, effectively merging, to the extent that we may be “blamed” for not resolving that civil war, we may also be “blamed” indirectly for the deterioration in Iraq.
None of which means we should do something stupid and counter-productive, but it provides a genuine moral explanation for why we might feel obliged to do something.
ISIS may be likened to the Khmer Rouge, who might never have come to power in Cambodia had we not bombed that country as part of our failed effort to defeat North Vietnam. Then, of course, it was our old enemy, Vietnam, that kicked out the Khmer Rouge from Cambodia. Similarly, if ISIS is prevented from overrunning Iraq, it will probably be because of intervention by Iran.
People who think the world will swiftly get more peaceful if we mind our own business may well be just as wrong as the people who think that by sticking our nose into other people’s business we can force the world to be peaceful.
You know how some politicians you just can’t stand, even though they aren’t materially worse than their fellows? John Edwards was always one of those for me; back in 2004, my line about him was that he made me want to scratch his face.
Eric Cantor was another one, in my book, someone I just could not stand to listen to or see. He seemed to me to represent the worst aspects of both establishment and insurgent Republicanism.
And now he’s gone.
Scott Galupo may be right that Brat is going to be "another useless crank," but we can always hope that he will be a useful crank, the kind who demands a wildly against-the-consensus look at this or that particular issue, as opposed to someone willing to destroy the institution if he doesn't get his way. The House of Representatives is pretty big; there's room for those who make the sausage and room for those who want to change the recipe – even radically. We've just had enough of folks whose idea of changing the recipe is adding E. coli.
If Brat becomes a table-pounder on immigration, or NSA spying, or corporate welfare – he may make a useful contribution to shaping the debate, even if I don’t always agree with the direction. If he refuses to vote for any budget that doesn’t repeal Obamacare – not so much.
We’ll find out which kind of crank Brat is soon enough. Or maybe not – there is a general election, after all; we might instead find out what kind of crank Jack Trammell is.
I’m very sympathetic to Gracy Olmstead and Conor Friedersdorf in their arguments against getting rid of cash. I think the utopian arguments for a cashless society are kind of silly, and the privacy implications of getting rid of cash are arguably not trivial.
However: I also think the dystopian implications are kind of silly, and that by and large the privacy ship has sailed. Right now, all of our most important financial transactions are electronic; only criminal enterprises do any significant business in cash. And the examples that Friedersdorf and Olmstead use to prove the continued utility of cash frequently prove the opposite.
Already, you have to be a serious obsessive to avoid leaving an electronic footprint to be read by Big Data – that’s the lesson of Janet Vertesi’s story. ATMs and credit cards already make it trivially easy to spend more than you have in your wallet at any given moment. As for the utility of cash for the poor, 90% of African-American adults own a cell phone, as do 84% of those who earn less than $30k/yr; by contrast, over 20% of African-American households are unbanked, and an even higher percentage of low-earners. The existing financial infrastructure is abandoning the poor, making reliance on cash more and more expensive for them, even as mobile technology is penetrating more fully.
Getting rid of cash is not going to eliminate crime or the black market, and it’s not going to abolish recessions. Nor is the development of an effective electronic wallet going to suddenly put poor people on a level playing field with the wealthy in terms of the daily costs they face executing simple transactions. Rather, this is another one of those situations where we’ll have to keep moving just to stay in place.
These are reasons to hope that electronic alternatives to cash continue to develop, and make cash more and more obsolete. But that’s no reason to abolish cash entirely. The main reason to abolish cash entirely is a technical one, related to the economics of demographic decline.
In a world where we expect robust economic growth, interest rates will be positive, and the yield curve will be upward-sloping – that is to say, a lender who lends for 10 years gets a higher rate than a lender who lends for 1 year. If expectations for economic growth drop to low levels, or even to negative levels, then long-term rates drop as well.
Now, when demographic growth is positive, both real and nominal growth will generally be positive. But when demographic growth is negative, the only way real growth will be positive is if productivity growth is high enough to compensate for the “burden” of negative demographic growth. And this is rarely likely to be the case for countries on the technological frontier.
So for those countries experiencing demographic decline, the rational expectation is for real growth to be zero or negative over the long term. Which should translate into low or negative real interest rates. And, in the absence of high inflation expectations, low or negative nominal interest rates.
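The chain from growth expectations to nominal rates can be sketched with the standard (approximate) Fisher relation: nominal rate ≈ real rate + expected inflation. All numbers below are hypothetical, chosen only to illustrate the argument:

```python
# Approximate Fisher relation: nominal rate = real rate + expected inflation.
# Hypothetical numbers, for illustration only.

def nominal_rate(real_rate: float, expected_inflation: float) -> float:
    """Approximate nominal rate implied by a real rate and inflation expectations."""
    return real_rate + expected_inflation

# Growing economy: positive real rate, moderate inflation expectations.
print(f"{nominal_rate(0.02, 0.02):+.1%}")  # +4.0%

# Demographic decline: negative expected real growth, low inflation expectations.
print(f"{nominal_rate(-0.01, 0.005):+.1%}")  # -0.5%
```

The second case is the scenario in the text: once expected real growth goes negative and inflation expectations stay low, the implied nominal rate goes negative too.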
Which is where we run into a problem. Because, when long interest rates are low or negative, there’s no incentive to invest in longer-dated assets. It makes more sense to hoard cash. When that’s a cyclical event, we call it a recession.
What is cash? Cash can be thought of as a bearer bond with no maturity date and an interest rate of zero. In what we think of as a “normal” world of positive nominal interest rates, cash is a poor investment; almost anything else has a higher expected yield. But in a world of low or negative nominal long rates, cash will be a persistently attractive investment vehicle. Which is to say: investment in actual productive assets will be persistently unattractive. Which is a real problem for capitalism.
What’s the solution? One solution would be higher inflation expectations. This would drive nominal rates up, and make cash less attractive. But there are several problems with this solution. First of all, once inflation gets going, it may be hard to keep it from accelerating. This is emphatically not our problem today – we’re continually teetering on the edge of deflation – but if we’re imagining a world where we’re relying on inflation to keep us out of persistent recession, we’re also imagining a world in which we have to worry about an inflationary spiral.
But perhaps a bigger problem is: how do you engineer higher inflation expectations? The main mechanism central banks have is to purchase financial assets – which they have been doing, in very large volumes, without appreciably causing a rise in inflation. Now, maybe the problem is one of psychology – maybe they just need to say that they will keep buying assets of all kinds until inflation starts to tick up. But there’s a chicken-and-egg problem: expectations of institutional behavior don’t change easily. What fundamentally changes them is proof that they have changed. When Paul Volcker allowed unemployment to go through the roof to kill inflation, that proved that the Fed was genuinely serious about fighting inflation – and expectations changed, permanently. Expectations that the Fed is serious about creating inflation won’t become manifest until we actually get inflation, and the Fed declines to combat it.
Meanwhile, the zero-bound set by cash creates problems for the Fed in engineering actual inflation. The Fed can drive very short-term rates down by making short-term loans to financial intermediaries – injecting cash into the system. This should spur the expectation of higher economic activity, which in turn should spur higher long-term rates – an upward-sloping yield curve. Which is what you want to get out of a recession. But short-term rates can’t go below zero because they are competing with cash. So the Fed has had to make longer and longer-dated loans – this is “quantitative easing.” But QE is a paradoxical strategy, whereby the Fed has to buy long-dated assets in order to drive their price down. Is it surprising that it has had only middling success?
Get rid of cash, and the problem goes away. Nominal rates can now go below zero, because there is no competing instrument that carries a zero nominal interest rate. And if nominal short-rates can go below zero, then you can engineer an upward-sloping curve even if long rates are low or zero – reflecting low long-term growth expectations.
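The mechanics above can be sketched as a zero floor applied to a policy curve. The tenors and rates are hypothetical, chosen only to show the shape:

```python
# With cash in circulation, no nominal rate can fall much below 0%, because
# holders would switch into cash. Without cash, the floor disappears and an
# upward-sloping curve is possible even when long rates are at zero.

def apply_cash_floor(curve: dict[str, float]) -> dict[str, float]:
    """Clamp every rate in the curve to the zero lower bound imposed by cash."""
    return {tenor: max(rate, 0.0) for tenor, rate in curve.items()}

desired = {"3m": -0.02, "2y": -0.01, "10y": 0.00}  # hypothetical target curve
with_cash = apply_cash_floor(desired)

def slope(curve: dict[str, float]) -> float:
    return curve["10y"] - curve["3m"]

print("without cash:", desired, f"slope {slope(desired):+.2%}")      # upward-sloping
print("with cash:   ", with_cash, f"slope {slope(with_cash):+.2%}")  # flat at zero
```

With the floor removed, the 3-month-to-10-year slope is positive even though the 10-year rate is zero; with cash, the whole curve is pinned flat at the bound.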
I want to stress: I’m not saying that central banks would be able to wish away recessions if cash were eliminated. I’m saying that central banks have a hard time fighting their way out of situations where long rates are close to zero. In a world of positive demographic growth, very low long-term rates are a sign of something seriously wrong. In a world of negative demographic growth, very low long-term rates will be normal. If we don’t want high unemployment to also be normal, we need better tools for fighting unemployment when long rates are very low. Which may mean that we need to get rid of cash.
Again, it’s not that we need to get rid of cash to usher in utopia. We may need to get rid of cash just to get back to adequate management of monetary policy. We may need to run faster just to stay in place.
The United States will not be the first country to go cashless; we’ll probably be among the last, along with the UK. Singapore will probably go first, and Japan will probably be the first large country to make the transition, for a variety of reasons – the advanced progress of demographic decline, lack of comfort with immigration as a stop-gap solution, comfort with technology, less concern about privacy, less attachment to the artifact of physical currency. So we’ll have time to observe how the transition goes elsewhere before contemplating it ourselves.
And we may not follow the same route. We may keep cash for a long time for sentimental reasons. But we are already living in a world in which cash means something very different than it did a generation ago. Reliance on cash is already not a practical way to protect privacy or to reduce our reliance on banks. So if we want a more decentralized future, we’ll likely have to think about achieving that within the context of cashlessness.
I want to strongly endorse everything Daniel Larison says in his post on the Democratic Party’s resilience, both this:
The Democratic Party has long been “a sprawling, ramshackle and heterogeneous arrangement,” but that hasn’t stopped it from winning the popular vote in five of the last six presidential elections. It cobbles together majorities by being “sprawling” and “heterogenous,” and doesn’t depend on a particular nominee to do this. The extremely narrow margin of Bush’s re-election in 2004 points to this. Democrats have a coalition of competing, sometimes opposing interest groups and constituencies, but then they usually don’t pretend to be anything other than that. One of the stranger conceits that many Republicans have about their party is that it is a so-called “real party”: it supposedly represents some coherent set of beliefs that makes it substantially different from being an “incoherent amalgam” of interest groups. Perhaps because Democrats don’t try to paper over the contradictions and tensions in their coalition as much, they are able to appeal to a wider variety of voters than their opponents.
I don’t see any likely challenger capable of depriving Clinton of the nomination, but the Democrats would almost certainly benefit from having one or more candidates make the attempt. Like Romney in the last Republican contest, Clinton appears dreadfully inevitable with built-in advantages in name recognition, fundraising, and support from party leaders, and unlike Romney her party’s voters genuinely seem to like her. Even so, if she faced no meaningful competition and no serious criticism while coasting to the nomination, it would very likely depress turnout in the general election and it would encourage complacency and a sense of entitlement in a Clinton crowd that is very susceptible to both.
I would add only one thing: not only would the Democrats benefit, but the country would benefit from a serious, if almost certainly futile, challenge to Clinton – specifically, on foreign policy issues.
Hillary Clinton is going to run as an extremely hawkish Democrat, because that’s who she actually is. This is not what the country needs, and probably not what the country wants, but it may well be what the country is going to get.
If Clinton runs essentially unopposed in the Democratic primary, and faces a mainstream Republican in the fall, voters will likely have a choice between two hawks. A candidate like Marco Rubio might actually enable Clinton to portray herself as a moderate centrist by comparison; a Jeb Bush will try to avoid foreign policy altogether for fear of reminding the country of his brother’s disastrous record. Rand Paul would be able to run to Clinton’s “left” on foreign policy, but (a) he’s unlikely to win the nomination, and (b) the Clinton-Paul contrast on economic policy would be so dramatic that it might devour everything else.
There’s good reason, therefore, for voters who favor a more restrained foreign policy to hope that Clinton faces at least token opposition in the primaries focused primarily on that issue. Then there would at least be one forum where the topic would be raised, and raised seriously, for Clinton to address. In the best-case scenario, such opposition would get more press attention than it deserved, which would force Clinton to make some kind of gesture to placate the doves in her coalition.
More generally, this forms part of my argument why voters concerned about a particular issue should always work both sides of the aisle, and not commit themselves to ideological identification with a particular party. Hawks certainly don’t – and doves shouldn’t either, not if they want to advance their cause.
One aspect of Ta-Nehisi Coates’s big article that I feel I gave short shrift to was his argument about how white supremacy made capital accumulation difficult for African Americans, and the consequences of this fact for the present-day distribution of capital. I want to come back to this point, prompted as well by a piece by David Frum that was also responding to Coates’s article.
Frum’s piece has a kitchen-sink quality to it – I get the sense that he had a strong negative reaction to the idea of reparations for slavery, and proceeded to marshal all the arguments he could think of against it. Inevitably, some arguments are better than others.
In the course of that stream of arguments, he makes a couple of claims that I think are important to push back on. To wit:
A reparations plan is likely to prove even more distorting [than affirmative action in the public sector].
If paid to individuals as an income stream, reparations would dis-incentivize work.
If paid to individuals as a lump sum, reparations would expose one of America’s least financially sophisticated populations to predatory practices that would make subprime lending seem socially responsible by contrast.
These statements make much stronger claims than I suspect Frum really wants to make.
Let’s take a look at the first statement: unearned income disincentivizes work. Why would that be the case? The usual argument against traditional welfare is that, because payments were predicated on not having an earned income, it effectively created a very high marginal tax rate on wages. That would indeed be expected to disincentivize work – profoundly. But an annuity income without strings attached would have no effect on the return to work. So why would it create a disincentive? It’s worth pointing out that Daniel Patrick Moynihan, Milton Friedman and Friedrich Hayek all advocated one version or another of a guaranteed minimum income. These are not individuals usually associated with blithe disregard for disincentives to work.
You would see a disincentive if the population in question has a target income, and, once that target income is achieved, prefers leisure to work. There’s a variety of evidence for this effect. For example, retired people who receive income from pensions and/or Social Security often do work, both for the additional income and because of the inherent satisfactions of labor, but they often prefer to work shorter hours, and more irregularly, than people of prime working age. Wealthy people of any age capable of maintaining a desired lifestyle without work may choose not to, or to work more irregularly, or in an occupation that is less-reliably remunerative. A high enough income from capital may make “mere” wage work look unappealing relative to either leisure or speculative ventures, because of the sheer number of hours of work it would take to have a meaningful lifestyle impact.
This line of argument leads logically to high taxes on unearned income – so that our most “productive” citizens don’t drop out of the labor force to live off interest and dividends. But for this disincentive to operate at the bottom of the income scale, the individuals in question would have to have very low target incomes, and limited interest in capital accumulation. Is that Frum’s contention?
Frum’s objection to a lump-sum distribution is also problematic. As Fredrik deBoer points out, a major obstacle to entrepreneurship is lack of startup capital. If one problem Frum identifies with affirmative action in the public sector is that it drew African Americans away from entrepreneurship, then it makes no sense to also criticize reparations for putting recipients in a position to take much more market risk than they would otherwise be able to do. On the contrary, you’d expect the argument that reparations would be preferable to affirmative action precisely because it makes true financial independence more possible.
Unless, of course, you believe that African Americans are much more likely to fail at entrepreneurship than Americans in general, whether because of an information disadvantage or a poor skills match or what-have-you.
I don’t intend to minimize Frum’s concerns. In certain ways, I share them – as I alluded to in my own response to Coates about not expecting reparations to close the socioeconomic gap between black and white in America. It may be that multi-generational poverty does lower your horizons in ways that make it difficult to plan to accumulate capital. It may be that there are distinct barriers to successful entrepreneurship in the African American community other than a relative lack of startup capital. There is certainly plenty of evidence that large windfalls are frequently squandered – the long list of athletes, actors, musicians and other highly gifted individuals who were successfully exploited to the point of being left bankrupt attests to the risks. But these concerns should be part of the discussion – not reasons to end it.
Coates’s argument for reparations can be read primarily from a moral perspective, as a backward-looking effort to achieve justice, which is how I read it initially. But it can also be read from a consequentialist perspective, as an argument for a variety of distributism. Read in that frame, the lengthy case that white supremacy has profoundly obstructed capital accumulation among African Americans is there largely to provide a moral justification for something one might want to do anyway because it would make for a more just and harmonious society: legislate a broader distribution of capital.
I wouldn’t want that argument held hostage to the other questions that bedevil reparations. Inasmuch as the reparations debate punctures some of the sanctimony around the existing distribution of property, well and good – but the question of the practicality of distributism is one that I think deserves its own attention. There are important arguments against distributism that have nothing to do with the sanctity of property rights – paternalistic arguments for providing economic assistance in a more structured and supervised fashion, economic arguments about the efficiency of concentrating capital, Harry Lime’s argument about cuckoo clocks – but these arguments already form part of the conventional wisdom of our time. Distributism runs counter to that wisdom, and has rarely received a full airing. It deserves one.