The summer after I graduated college in 1992, I spent several weeks wandering around Europe east of the Elbe, from Budapest to Prague (I guess Prague is actually south of the Elbe, but whatever), and from there around Poland for three weeks before heading to Riga, St. Petersburg, Helsinki and Stockholm. While in Poland, I had the opportunity to visit a number of memorial sites related to the Nazi atrocities, including the Auschwitz-Birkenau complex and the site of the Majdanek concentration camp, near Lublin. When I visited, it was soon enough after the fall of Communism that nothing much had changed about the way history was presented at the two camps. I don’t know how things have changed since, but I got a view of how the post-war Polish Communist government interpreted the crimes committed on Polish soil.
Auschwitz-Birkenau was the largest of the six extermination camps set up as part of Hitler’s campaign of mass murder against Europe’s Jews, and the overwhelming majority of the victims there were Jewish (estimates are usually around 90%). But the memorial went out of its way to present the camp as the site of equal-opportunity murder, with pavilions for each of the various nationalities or ethnic groups represented among the victims. One for the Jews, one for the Belgians.
Majdanek, by contrast, was not originally conceived as an extermination camp, but rather as a slave labor camp (of which there were many across the Third Reich), and only later came to be used as a site for murder (though, needless to say, slave labor was itself frequently fatal). Majdanek’s prisoners included many Poles and Soviet POWs along with Jews, and though the Holocaust of the Jews was certainly part of the story (there was a plaque specifically commemorating the 18,000 Jews murdered on a single day in 1943), the site was presented as, effectively, the truly “Polish” death camp – that is to say, a place where Poles were gathered for extermination. And though most of those killed at the camp were probably Jews, those Jews were largely Polish (Auschwitz received Jewish deportees from all over Europe), and Jews may well have been a minority among the prisoners as a whole.
I found this attempt to nationalize the memorial at Majdanek a lot easier to take on the whole than the Auschwitz memorial’s attempt at universalism. Not much more accurate, but much easier to take. It felt less motivated by an effort to deny the obvious, and more by the entirely understandable human desire not to have one’s own suffering triaged out of consideration.
By contrast, I find the “Polish death camps” kerfuffle – Poles around the world (well, the ones who are addicted to the 24/7 news machine) bristling at the insensitivity of President Obama for referring to “Polish death camps” rather than “Nazi death camps in Poland” – a bit perplexing.
That’s not to say I don’t understand where they are coming from. I absolutely do. If someone broke into my apartment to murder my neighbor, I wouldn’t be thrilled at having that crime known as “The Millman Murder.” And both Polish suffering and the Polish contribution to the Allied fight against the Nazis have been sorely neglected by Americans in particular. And the Communist post-war government of Poland actively vilified patriotic Polish nationalists in an effort to bolster its own legitimacy. So for a whole host of reasons, I understand why Poles bristle at the phrase “Polish death camps.” The camps weren’t conceived by Poles. They weren’t run by Poles. And they weren’t intended primarily to kill ethnic Poles.
On the other hand, I’ve also heard a bit of griping from Jewish (and non-Jewish) observers of that Polish pique, saying, in effect: wait a minute, it’s not that simple. Polish resistance was substantial and real, and Poles lead the lists of individual saviors of individual Jews, but Polish collaboration in the Holocaust was also not incidental. And I understand where they are coming from as well – my maternal grandparents left Poland after the Kielce pogrom of 1946.
But I don’t like this reaction either, because I don’t think reciprocal demands for greater sensitivity get anybody anywhere. In particular, I don’t think throwing around words like “false and unjust phrases” encourages the pursuit of knowledge. Which is the only cure for “ignorance.”
The Nazis chose to commit many of their most heinous crimes on Polish soil, largely because the largest community of targeted victims was the community of Polish Jews, but also because Poland was targeted for a more severe “renovation” than most of Europe – the Nazis aimed to destroy Polish society utterly and turn the Polish people into a slave caste, which isn’t what they planned for Denmark or Norway. The pressure of the Nazi yoke was much heavier in Poland than in most of the occupied countries, which meant that Polish resistance was more intense and extensive than in most countries, but also that Polish collaboration was more intense, and involved a far closer approach to the ultimate horror.
And Polish anti-Semitism before the war was real, widespread, and had already prompted anti-Semitic legislation before the Nazi invasion. The Nazi occupiers exploited this feeling, as they did in other European countries, to better pursue their war of extermination against the Jewish people. Acknowledging that doesn’t make the Polish nation a co-bearer of war guilt. And acknowledging that doesn’t require asserting that anti-Semitism is wholly irrational, as opposed to being, in many cases, a rational but repugnant response to real problems (the difficulty in establishing strong institutions in a young nation with huge ethnic minorities, and the intense competition for resources of all kinds that characterized the inter-war period).
I am a firm believer in the study of history, which emphatically includes the history of the Holocaust. But I am a dissenter from the false religion of Holocaust worship. And this kind of extreme sensitivity to language, and to the drawing of sharp black lines, is, I think, a part of that religion. If the Holocaust was not merely a crime of historic proportions, but a confrontation with evil unmasked, then anyone who did not see that evil for what it was, and resist it with all his or her power, was either a fool, or a coward, or a villain. But most people are none of these – and are not heroes either. We honor the heroes of that period – like the Pole President Obama was honoring when he made his faux pas – as of all periods, for doing something extraordinary. That honor implies, if it means anything, that most of us didn’t, and wouldn’t, measure up. And if we had not measured up back then, we would, in retrospect, have been implicated. That’s just an unhappy truth about what the Nazis unleashed upon the world.
President Obama should, of course, apologize, and fire the idiot speechwriter who made this mistake, because diplomacy is diplomacy. But we who are not subject to diplomatic constraints should be free to say: the Nazi death camps are part of Polish history, and history, always and everywhere, resists our neat assignment of comfortable categories.
And now, I’m going to stop imitating Leon Wieseltier, and write about something else.
Matt Yglesias today:
Normally you face a tradeoff. Taxes impose costs on the present-day population that might impair wealth creation over the long-term, but to avoid taxes by borrowing you need to pay interest to creditors. But the real interest rate we’re being asked for is low. Less than zero. So what’s the tradeoff? Why not sell as many negative-yield ten-year bonds as the market will buy (sell enough bonds and presumably interest rates will rise) and let that auction revenue “crowd out” taxes as a way of financing government activities? Now in an ideal world we’d use that money to finance valuable public sector investments, but that all gets very politically controversial and you can see why it’s impossible to agree on this given our dysfunctional politics. But what’s the constituency for taxes in a negative interest rate environment?
Okay, suppose we completely eliminated taxes. Just borrowed 100% of what we needed to finance government’s operations. In fact, suppose we just had the Fed print all the money we need to do this, and don’t even bother with borrowing from the market – that way we don’t have to worry about rising interest costs either. That reductio ad absurdum is pretty much what Yglesias is advocating, right?
Don’t be silly, he’d say. I’m not really calling for the elimination of all taxes, or the monetization of all debt. But clearly sometimes you should run a deficit; clearly sometimes you should expand the money supply (clearly, that is, if you aren’t afflicted with an Austrian conviction that money has some intrinsic value rather than always having merely a conventional value as a medium of exchange). And I’m just making the point that right now, with the deficits we are currently running, the market is telling us that we can borrow for free – that is to say, at a negative real interest rate. We probably couldn’t borrow 100% of our budget for free, but shouldn’t we keep borrowing until we find the point at which the market says, “stop; you’re borrowing too much”? Why stop before that?
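To pin down what “borrowing for free” means in that paraphrase, here’s a minimal back-of-the-envelope sketch. The -0.5% real yield is an illustrative assumption – roughly the territory ten-year inflation-protected Treasury yields were in at the time – not a market quote:

```python
# Illustrative sketch: what borrowing at a negative real yield costs.
# The -0.5% rate is an assumption for the example, not market data.
principal = 100.0
real_yield = -0.005   # real (inflation-adjusted) annual yield
years = 10

# Inflation-adjusted value of what the borrower ultimately repays
real_repayment = principal * (1 + real_yield) ** years
print(f"Borrow $100 today, repay ~${real_repayment:.2f} in today's dollars")
# -> Borrow $100 today, repay ~$95.11 in today's dollars
```

In real terms, the lender pays for the privilege of lending – which is the sense in which Yglesias can say the market is offering the government free money.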
These sorts of arguments are all of a piece. We should continue to cut taxes and increase spending, increasing the deficit so as to increase demand, until we get self-sustaining growth that doesn’t require government to prod it along. The Fed needs to “drop money from helicopters” and thereby signal its irresponsibility, because that perception of irresponsibility will cause investors to flee cash for other assets and/or higher current consumption, which will raise nominal growth. The common assumption behind all of these arguments is that real growth expectations will not be affected, or will even be positively affected, by this kind of activity.
I would like to see some evidence for this assumption. I can think of lots of good reasons to think that the sign goes the other way – that real growth would be hurt by the perception that the American political system had given up on trying to control its budget or was committed to debt monetization as a growth strategy. And if real growth rates drop, then you won’t get as big a nominal growth bang for your borrowed buck as you might expect – indeed, depending on how much real growth is hurt, you can’t be sure that nominal growth would increase even if inflation picked up. In fact, if real growth took a big enough hit, and inflation picked up sharply, you could see a rise in nominal rates without negative real rates budging much. Which would mean, by Yglesias’s argument, that we should borrow even more even as borrowing got progressively more expensive.
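The arithmetic behind that warning is simple enough to spell out: nominal growth compounds real growth and inflation, so a big enough hit to real growth expectations can swamp any pickup in inflation. Every rate below is invented purely to illustrate the shape of the problem:

```python
# Nominal growth compounds real growth and inflation.
# Every rate here is invented for illustration only.

def nominal_growth(real: float, inflation: float) -> float:
    return (1 + real) * (1 + inflation) - 1

# Baseline: modest real growth, low inflation.
baseline = nominal_growth(0.025, 0.02)   # ~4.55% nominal

# Perceived-irresponsibility scenario: inflation jumps,
# but real growth expectations turn negative.
scenario = nominal_growth(-0.01, 0.05)   # ~3.95% nominal

print(f"baseline: {baseline:.2%}, scenario: {scenario:.2%}")
```

On those made-up numbers, nominal growth ends up lower despite the higher inflation – exactly the possibility the paragraph above describes.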
Within the normal range of plausible policy stances, I wouldn’t expect that to happen. I would expect nominal growth expectations to increase, but I would expect real growth expectations to move as well, with a direction and magnitude dictated by the market’s perception of how well-structured the stimulus is – that is to say, the degree to which it is perceived to be promoting self-sustaining growth. But for a wildly irresponsible policy course, I’d expect the effect on real growth expectations to be sharply negative.
I could present other arguments. I could say that tax cuts are difficult to reverse (particularly in our dysfunctional political system) and therefore should be understood as more likely to increase the structural deficit than to smooth out the business cycle. I could point out that the maturity structure of Treasury debt is still pretty short-term, and that while the Treasury has been extending it since 2009, it’s been a delicate process, and we still issue relatively little beyond ten-year paper. We could discover that turbo-charging debt issuance would send interest rates higher very, very quickly. And then what do we do? Say, “sorry – just kidding”? But these are really just specific examples to illustrate my general question: how does Yglesias know that real growth rates won’t be negatively affected by a strategy that appears, to the ordinary observer, to constitute defiant fiscal irresponsibility?
I’m not an Austrian. I don’t think the government has no role to play in promoting growth. I don’t confuse an individual’s budgeting, where you might save money as a store of value in order to spend it later, with the role of money in an economy as a whole, where if every country saved more than it could productively invest you’d just have a depression. I happen to think we need more stimulus – but stimulus of real growth, coupled with structural reforms to reduce the inflationary impact of that stimulus and ensure that more of the nominal growth we get is real. And I am flat-out baffled trying to figure out who folks like Yglesias think they’re convincing with these sorts of reductio arguments, which amount to saying: all government really needs to do to dig us out of our economic hole is stop behaving responsibly.
I am really getting tired of repeating myself. Maybe I should just do a bloggingheads with Yglesias so I can finally hear his answer?
Felix Salmon has a good rundown of all the various things that went wrong with the Facebook IPO. I don’t really care about most of this – who cares if Facebook mispriced the deal? Who cares if they pissed off investors? This is stuff that matters only to people who are actively playing the game.
What matters to the rest of us is the insider-trading angle.
From what I’ve read, that angle is: Facebook gave late, updated revenue estimates to Morgan Stanley; Morgan Stanley passed these on to select investors as part of their analysis, but not to the general public; therefore, disfavored investors (retail or small institutional) who didn’t get the call were stuck with shares that the smart money knew to avoid. All of this is alleged, not proved – I’m not even sure lawsuits have been filed – but that’s the gist of it.
The two extreme views about insider trading look roughly like this. On one end are the Chicago types who argue that insider trading is good, because what makes capital markets most efficient is the swiftest dissemination of information. Regardless of where the information comes from, you want it out as quickly as possible, so prices can incorporate it. That means placing no obstacles in the way of disseminating that information – which means no laws against insider trading.
The opposite view points out, quite correctly, that efficiency in information distribution is not the only thing that makes markets work well. You also want depth. Depth requires a certain level of trust, and that, in turn, requires believing that you are not constantly being screwed by people with inside information. Plus there’s the small matter of basic fairness – it really does feel like a species of fraud and corruption to let insiders profit from their position. So this extreme argues that the solution is to mandate the wide dissemination of any information that is disseminated at all. If you know something that most people don’t, you can’t trade on it – unless you first tell everybody.
The problem with this second viewpoint is that, taken to its logical extreme, it forbids proprietary research of any kind. After all, the product of that research either is useful (in terms of identifying profitable market-trading opportunities) or it isn’t. And if it is, then it’s information. Which you have. And other people don’t. The whole controversy over high-frequency trading revolves around this kind of information – information gleaned from analyzing trading patterns which fast computers can take advantage of before ordinary market participants can reach for their computer. In the real world, most of these patterns are patterns of behavior exhibited by other traders, like large mutual funds. In other words, high-frequency traders are trading ahead of retail order flow, and profiting at those retail investors’ expense. If there’s no difference in effect between trading on a tip from the mutual fund and trading based on having watched that fund’s behavior for months, then why is one evil insider trading and the other legitimate research?
Research is a Wall Street product. Back in the days of the dot-com bust, the charge was that this product was worthless – designed to sell shares, not provide a true picture of the health of companies or the likely prospects of making money by investing. The charge in the Facebook case is that the research is valuable – and therefore should have been made generally available, not only to select clients. The logical end-point of this kind of reasoning is that the Wall Street houses shouldn’t provide research – they should just offer product and let the clients decide what they want to buy, without “selling” it on the merits. Except this is exactly what the major Wall Street houses did with the mortgage-backed CDOs and other structured products that destroyed the world economy. And they have been criticized for that as well.
All of this rumination is not intended to serve as a defense of Wall Street’s practices. It’s intended to argue that trying to ensure that information disseminated by Wall Street is both accurate and generally available is a fool’s errand. Accurate information is valuable, and therefore expensive. You can police the margins – and those margins may well have been crossed in this case – but the problem is intrinsic to the fact that information asymmetries arise naturally all the time, and information asymmetries are the main way people make money.
Ultimately, the way to make Wall Street work better for everybody is probably just to tax it more heavily.
I heartily recommend Rod Dreher’s thoughts elsewhere on the site about the quandaries of end-of-life care. I think I end up roughly where he is – that you need to know when to forego extraordinary interventions and let nature take its course, but that you also don’t want to put doctors in the position of prescribing poison (even if you don’t have a religious objection to suicide, the conflict with the doctor’s central mission is obvious, as is the conflict of interest at the insurance-company level).
On a personal level, I have an absolute horror of dealing with these questions. I am fortunate to have one grandmother still living, at 96, but at that advanced age we all know the end could come any time. My grandmother lives with my mother now – I’m going to visit her this evening, as it happens – and she is acutely aware of herself as a burden. I can tell her over and over that she isn’t, that we love having her around for as long as she’s granted, that, in fact, she’s lucky not to be seriously demented, not to be in serious pain, but it doesn’t make much of an impression. She’s hanging on, but she doesn’t know why – and she gets very depressed. My other grandmother also lived into her 90s. By the end, she was virtually unable to move, and in constant pain; she lived like that for months. It’s not the way anybody wants to go.
But I have to ask: as a policy matter, isn’t any serious look at end-of-life care an argument for the hated “death panels”? Not for euthanasia or anything like that, but for some kind of cost-benefit analysis applied to what end-of-life care is covered.
Nobody is going to want to hear, about her mother – or about herself – that her coronary bypass surgery isn’t covered. But nobody wants to hear she’s going to die either. Resources are not infinite. They have to be allocated somehow. If everybody’s insurance covers extraordinary interventions near the end of life, then insurance companies have to price insurance on the assumption that these interventions will, frequently, be made. And we pay for that now. Bringing the government into the picture changes things not at all: either way, we, collectively, pay for a right to care that we may wind up regretting getting.
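To see why “we pay for that now,” a toy expected-value calculation helps – every number below is invented for illustration, not an estimate of actual costs:

```python
# Toy sketch: how covered end-of-life interventions feed into premiums.
# All figures are invented for illustration.
pool_size = 1_000_000        # insured lives in the risk pool
intervention_cost = 150_000  # dollars per extraordinary intervention
annual_incidence = 0.004     # fraction of the pool receiving one per year

expected_annual_cost = pool_size * annual_incidence * intervention_cost
premium_loading = expected_annual_cost / pool_size

print(f"Expected cost to the pool: ${expected_annual_cost:,.0f}/year")
print(f"Loaded onto each premium:  ${premium_loading:,.0f}/year")
# -> $600,000,000 to the pool; $600 per insured per year
```

The particular numbers don’t matter; the point is that the expected cost of whatever is covered gets spread across everyone’s premiums, whether the payer is an insurer or the government.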
I don’t know the ethical answer to this question. But the numbers are going to dictate coming to some kind of answer. Either we cover everything for everybody – and we all pay for it, and forego other goods; or we cover less than everything – and some people die wishing they could afford to prolong their life a bit more.
Ross Douthat is right that talk of suicide and euthanasia is “less a real solution than another manifestation of the very problem” of our ability to prolong life beyond reason. But, as so often, a voluntary solution – we should all have more “courage” – doesn’t pay the bills. Shouldn’t we be forced to confront the costs of our choices, somehow? And how could we do that in an ethical manner?
Looking back over my post from yesterday questioning whether human rights require a theistic “grounding” (and thank you to Rod Dreher and to Michael Brendan Dougherty for their comments, and to Daniel Larison for weighing in on his blog), what I notice is a strenuous quality to my argument – a quality that, in my opinion, I share with many of the people I’m arguing with. That’s a quality that I’ve learned to interrogate: I find I often learn more by asking “why do you care so much about this?” than by simply pursuing the argument on its own terms. So I want to put myself back in the picture, by way of justifying (or at least explaining) the strenuousness of my argument. Apologies if what follows feels like “me! me! me!” – feel free not to read it if you don’t like that sort of thing.
I’m very familiar with the line of argument that the “D” brigade – Douthat, Dreher, Dougherty – are taking, because it’s a line of argument I bought and promulgated not that long ago. And I’ve come to view it with suspicion because I feel it failed me in my own life.
I went through a fairly religious phase in my late-20s to mid-30s, and one of the primary motivations for that phase was, precisely, to have a kind of “grounding” for living rightly. That grounding was both philosophical – how can I know what living rightly means? Religion will provide me with a guide – and social – how can I sustain a rightly-lived life? A religious community will provide me with support.
The problem is: I am not an island. And the necessity of following this guide to life resulted in persistent and unresolvable conflicts within my family. These weren’t so much conflicts over specific religious issues – those can always be negotiated. Fundamentally, they were conflicts over precisely the fact that I now had a “ground” for my reasons, an authority I could appeal to that, from an outside perspective, looks arbitrary. Tyrannical, even.
Now, if I had been a more mature person, I would have recognized that this was going on, and I would have also recognized that my own religious tradition places a very high value on shalom bayit – peace in the home – and that what I needed to do wasn’t to order my household but to order myself, and set a positive example of what the life I wanted to lead meant. If that attracted others in my life to join me, well and good; if not, then I would have to navigate the inevitable compromises with good cheer. That’s what I should have done if I had been more mature.
But I wasn’t more mature. Indeed, that desire for religion as a “ground” strikes me as prima facie evidence of my emotional immaturity. I was turning to religion as a short-cut to maturity, a substitute for the hard work of knowing myself.
There is no such short-cut. I feel that very strongly now. That doesn’t mean that religion is valueless – quite the opposite. But when somebody says what sounds like “it’s only religion that keeps us from behaving like savages” I think: that person is afraid he will behave like a savage. And if he is turning to religion to save him from himself, he will find no salvation. He must first know himself, because only by that route can he allow himself to be saved (and I use “allow himself” deliberately; even if you don’t believe in God, and believe that all these dynamics are happening inside an individual person, there is such a thing as getting out of your own way). Religion may or may not help in that process; that, I think, varies between individuals. But there’s no short-cut. My text on this topic, as I think I’ve mentioned before, is Tolstoy’s short novel, Father Sergius.
I think the arguments I made in my original post and in the comments are strong ones, but that’s not why the topic matters to me. It matters to me because it’s personal. That’s usually the way it is with things that matter.
Arguing that people “need” religion strikes me as an enormous waste of time. It will not convince anyone who really believes otherwise, and the people it does convince will have been convinced out of fear. And fear is a cancer; it is no stable ground for faith. The only thing – literally the only thing – to do if you care about your faith – including the faith that you don’t need God to be good, if that’s what you believe – is to live it, for its own sake. If you do that, you don’t need to do anything else. If you don’t do that, nothing else you do matters.
Just a quick note on what’s going on at Millman’s Shakesblog (since it’s not easy to find the blog on the site, I’m going to periodically post about what I’ve been posting about, redundant as that seems).
- The latest “Double-Feature Feature,” on Alfred Hitchcock’s “Rope” and Richard Linklater’s “Bernie.”
- A review of the recent Acting Company production of Shakespeare’s Julius Caesar, in which the Roman dictator bears a striking resemblance to Barack Obama (and Brutus bears a striking resemblance to John McWhorter).
- An interpretation of Harold Pinter’s The Caretaker, apropos of the excellent production of the play now on view at The Brooklyn Academy of Music.
- A review of Donald Margulies’s Time Stands Still, recently performed at the Steppenwolf in Chicago.
- And a meditation on camp, apropos of recent productions of Jean Genet’s The Maids, at Red Bull Theatre, and Shakespeare’s A Midsummer Night’s Dream, at Classic Stage.
Do come by and visit. And come back later in the week as well – I still have to write up my thoughts on Gatz, on the Mike Nichols-directed production of Death of a Salesman currently on Broadway, and on the Goodman Theatre in Chicago’s current production of Eugene O’Neill’s The Iceman Cometh.
Scott Galupo makes the very good point that the Religious Freedom Restoration Act may well invalidate the HHS mandate. (I’m less convinced that either of the Scalia cases has any real bearing on the mandate in the ACA, but I’ve said my piece on that subject for now.)
I want to take the opportunity, though, to remind everyone that religious freedom is impossible.
Winnifred Sullivan’s book argues, in a nutshell, that religious freedom, for individuals, means freedom from religious authority as well as freedom from governmental restriction on religious practice. So you can’t ask a Catholic prelate whether this or that practice that the law would prohibit (say, putting statues of angels on graves, which is the main example in her book) is actually a formal part of Catholic religious practice, because the prelate has no standing, in a secular court, to rule on the question. If the grieving family feels it’s essential that Dad be guarded by a statue of an angel, then that is their religious practice by definition, and if you want true freedom of religion you have to protect it. But this way, needless to say, lies chaos. Hence the impossibility of religious freedom.
I encourage people to read the book; a one-paragraph summary doesn’t do justice to the argument.
What I’ve argued in the past is that, regardless of where Constitutional doctrine winds up, we should strive to maximize (within reason) the zone of autonomy for religious institutions, because we should view that autonomy as a positive good, not as an absolute “right.” Hegemonic liberalism should be humble enough to accept that its ways of knowing are not the only ones, and that there is value, therefore, in having robust voices that claim other modes of knowledge – religious voices being preeminent examples.
Which is why I’ve argued simultaneously that the Constitutional objections to the HHS mandate don’t convince me, but that the mandate was a mistake – not a political mistake (it may or may not have been that as well) but a substantive policy mistake. Not because Catholics can’t freely practice their religion if the HHS mandate exists (they clearly can – indeed, it’s really easy to construct workarounds that don’t directly implicate the employer in providing the coverage, in which case I don’t see what the religious objection might be), but because we actively do want the Catholic Church out there living, in its institutions, a worldview with which the majority of the country disagrees, precisely because it has a long and profound history and the majority of the country disagrees with it. This is the kind of situation where “diversity is strength” has some actual meaning in the political ecology.
[T]he core of my argument [is] that much of contemporary secular liberalism depends on assertions that are potent and widely persuasive only because most Westerners are still deeply influenced by Christian premises about the nature and destiny of man. Sanchez, in his conclusion, suggests that this argument has an “odd circularity” to it:
The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own? If Christian doctrine is true, then external considerations are irrelevant to the truth of whatever normative beliefs it supports. If it is false, and our moral beliefs are unsustainable without this false premise, then we should be glad to be rid of false and unjustifiable beliefs. If we think it would be awful to discard those beliefs, then that awfulness is sufficient reason to hang onto them without any religious scaffolding.
But the whole point is that I don’t think that many humanists actually do have strong reasons for their hopes regarding human dignity and human rights. I think that they have prejudices and assumptions and biases, handed down as an inheritance from two millennia of Christian culture, which retain a certain amount of force even though given purely materialistic premises about mankind and the universe they don’t actually make much sense at all.
I don’t think that’s any kind of answer. Okay, so humanists don’t have strong reasons for their faith in human rights. Do Christians have strong reasons for believing in Christianity? Strong in the terms Douthat is talking about here? If you already think that Christianity “makes sense” – that is to say, is persuasive on its own terms – then you don’t need to have a conversation about whether believing in it is pragmatically necessary for society; you already believe it. If you don’t already think Christianity makes sense, then why is it pragmatically necessary to believe in Christianity in order to believe in human rights and human dignity? Why can’t you just believe in those things directly? That’s Sanchez’s question, and Douthat’s answer – that humanists don’t have strong reasons for their beliefs – is a non sequitur. If there are no good humanistic reasons for believing in human rights, then there are no good humanistic reasons for believing in Christianity in order to believe in human rights either. And therefore there are no good humanistic reasons for believing in Christianity. In which case Sanchez is right.
If these beliefs – belief in human rights, and belief that God redeemed the world from sin by incarnating Himself as a human being and allowing Himself to be crucified – both require leaps of faith, then what is the ground for deeming one more persuasive than the other? Presumably, the ground is something other than reason – it’s aesthetic, or psychological, or something. Among other things, the latter belief, being a myth, tells a story. But the point isn’t that without Christian premises you can’t believe in human rights – because those premises are just as ungrounded as direct belief in human rights. It’s that believing in random premises is less convincing to people than believing in myths, in stories, because that’s how human psychology works.
Add one more layer, in which you, the philosopher, admit that, yes, Christianity is just a myth, that nihilism is “true” but that society requires believing something other than this awful truth, and you’ve got the Straussian defense of traditional religion. I can see Douthat doesn’t want to go here, but what other destination can he have making the kind of argument he’s making?
But more to the point: when did Aquinas or Augustine talk about human rights? I seem to recall that rights, as we understand them today, were an invention of the Enlightenment. Notwithstanding Douthat’s argument that Locke’s views “depended on certain theological premises,” what he was arguing against in the Second Treatise was the patriarchal model of government that traditional Christians would have recognized as normative and that the Catholic Church endorsed well into the 20th century. If he was making a Christian argument, so was Filmer, which only proves that the argument was playing out within a Christian civilization – which we already knew as a matter of historical fact. Looking from the outside, it looks very much to me like Christianity has appropriated these concepts – promulgated as often by materialists and deists as they were by theists – and reestablished them on Christian foundations. Which, for all I know, may make them more secure – on some level, I agree with the Straussian defense of traditional religion. But getting the intellectual genealogy right is kind of important.
Around the Muslim world today, there is a great deal of debate about whether Islam is compatible with democracy and human rights, and if so how that compatibility should be construed. A Christian doctrine that says, “in the long term, you can’t believe in democracy and human rights unless you accept Christianity” is, effectively, arguing that Islam is not compatible with these ideas – that they are the nose of the Christian camel under the tent. A Christian doctrine that says, in an Eisenhower-esque vein, “in the long term, you can’t believe in democracy and human rights unless you believe in religion, and I don’t care what it is” winds up, effectively, endorsing at least some other religions as at least “sort of true.” Which I think any orthodox Christian would find highly problematic. By contrast, saying, “the ideas of democracy and human rights emerged from the Christian world, but they are not necessarily dependent on Christian premises, and are pragmatically useful outside of that context” leaves open the possibility that they could be re-founded on other religious principles. Which would seem to me to be a good pragmatic reason for making such an argument, in addition to its being historically more correct than the idea of posthumously baptizing ancient and medieval Christians as Lockean liberals.
It is unclear what effect the outcome of Sunday’s talks will have for negotiations due on Wednesday in Baghdad between Iran and representatives from six world powers on the broader issue of Tehran’s uranium enrichment, which the UN security council has demanded be suspended.
Western officials said an IAEA deal could improve the atmosphere in Baghdad, or conversely, damage prospects for those negotiations if Iran presents progress on an IAEA inspections framework as its sole concession. . . .
“If Amano’s presence in Tehran can produce something, it will play into this week’s talks in Baghdad,” a senior European diplomat said. “If Iran can indicate it is ready to respond to international concerns over its nuclear programme, that will be positive. But there will be no reward for simply turning up and the key issue for building confidence is still uranium enriched to 20% … If we are going to continue talking in good faith, there has to be something put forward by Iran.”
I remain skeptical that Iran is ready to, basically, capitulate on the enrichment question. My suspicion all along has been that Iran wants to achieve “nuclear capability,” which is to say: the ability to assemble nuclear weapons even if they don’t build an arsenal. Which would require uranium enriched to at least 20% in order to conduct a nuclear test.
But who knows? Where does this goal rank in their priority list relative to their other goals? Could a deal that enabled them to enrich to 3.5% (which is all that would be necessary for nuclear power) and that forestalled more serious economic sanctions be spun by the regime as a diplomatic victory? Maybe the really important goal isn’t “nuclear capability” but “nuclear status” – which might be satisfied by the kind of agreement the six powers are trying to get to. I doubt it, but I don’t know enough to really have a firm opinion.
Does anyone disagree with me, though, that the Romney campaign will harshly criticize any agreement with Iran, no matter what the agreement says? Amano has taken a much tougher line than his predecessor has, but I assume Romney will ignore this and attack any agreement with the IAEA as “appeasement.” Similarly, if the six powers make progress in Baghdad, coming to some kind of preliminary agreement about the outlines of a nuclear deal, Romney will criticize that in similar terms. That would certainly be consistent with his approach to all other diplomatic initiatives of the Obama Administration, and, indeed, with his general contempt for diplomacy and international organizations.
My question is: does it matter? Will the Romney campaign’s inevitable criticism have any impact on the prospect of diplomacy’s success? My instinct is to say: no, it will have no meaningful impact, but I’m not sure. If a deal requires sending nuclear fuel to Iran in exchange for an agreement to open-ended IAEA inspections and a commitment to end enrichment beyond 3.5%, it’s easy to see how that would be demagogued as “agreeing to let Iran go nuclear.” Could the Obama Administration still sign on? Or would they have to push for an end to all enrichment, full stop, and thereby scuttle a deal?
Agreements almost always require both sides being able to claim victory in some fashion. If you start out from a position that the only acceptable agreement is one in which the other side capitulates completely, then you’re really saying that you don’t want an agreement. The Romney campaign isn’t going to say that no agreement is better than an agreement that might be on the table; they’ll assert (without evidence) that they could get a more one-sided agreement. But could they actually prevent a deal from happening?
Over to you, Daniel.
I’ve said very little about the George Zimmerman/Trayvon Martin situation, for a variety of reasons. First, I don’t have much interest in crime stories. Second, I don’t bring any particular expertise to the table with respect to ferreting out the actual facts – and those are what matter if the questions we care about are “what happened?” and “is George Zimmerman guilty?” I guess I agreed with Steve Sailer from the beginning that this was a “depressing” local police-blotter matter that got elevated to national attention – but unlike him, I was more inclined to ignore it than to comment obsessively on it. I think the only aspect of the whole fracas I’ve commented on is my friend John Derbyshire’s infamous column on the subject, which I addressed mainly because I know him personally.
But my goat was got by Pat Buchanan’s most recent post elsewhere on the site, asking the question: what if George Zimmerman walks?
Buchanan transparently believes that Zimmerman is not guilty. He has no reason to be so sure of that, any more than those who are convinced of Zimmerman’s guilt have reason for their certainty. It’s clear that Zimmerman and Martin got into an altercation. We don’t know for sure who started it. We don’t know for sure why. It’s entirely possible that the confrontation was initiated by Zimmerman, which would be fully consistent with the physical evidence Buchanan talks about, and that Martin was the one acting (as he thought) in self-defense. We can’t interrogate Martin, because he’s dead.
Nonetheless, his question is not an idle one. The jury’s job is to determine whether the defendant is guilty beyond a reasonable doubt. Based solely on the information that has been brought to light so far, it doesn’t strike me as improbable that a jury could conclude that guilt cannot be determined to that standard of certainty. What happens if they can’t?
The reason we have a criminal justice system is precisely to remove the felt need for private justice – for revenge, personal and collective. Where individuals or distinct groups become convinced that the justice system does not provide adequate recourse, the desire for private vengeance increases. In some cases, that desire boils over into violent action. Such action is unjustified, but that doesn’t mean it is incomprehensible, or that it won’t happen.
That doesn’t mean it will, though, either. It behooves the authorities not to presume too much – or too little. The right answer to perceptions of unfairness is conspicuous fairness, not retribution. If the standard of “justice” is conviction or acquittal, we’ve already lost; there is no chance for peace. A fair trial is the answer, regardless of the verdict – and the government, at the highest levels, should say so, well in advance of a verdict. That communications campaign is as important as any preparation that local police departments might make.
And here’s the thing. To be able to conduct that communications campaign effectively, the government has to sound credible. Which means understanding why so many people were upset that Zimmerman wasn’t taken into custody in the first place.
In that effort, Buchanan’s attempt – and he isn’t alone – to turn Zimmerman into a folk hero is completely counterproductive. Assuming the goal is to increase confidence in the integrity of the trial, acquitting Zimmerman in the media is just as bad as convicting him. And if that isn’t the goal, then Buchanan has no business criticizing people who fanned the flames.