Daniel Larison links to a Matt Feeney piece asking whether, in the event Rand Paul runs for President, Americans will even notice that he has distinctly outside-the-Washington-consensus views on the subject of foreign policy. It’s an interesting question, but the real question is which Americans we’re talking about – that is to say, are we talking about the primaries, or the general election?
In a primary contest, Rand Paul would likely have an uphill struggle to win establishment support – which is to say, an uphill shot at the nomination in general. In addition, he may have to fight to lock down the support of organized Christian conservative groups, assuming they have a champion in the race (such as Mike Huckabee). Of course, he also has advantages in his corner. To win the nomination, he’ll first of all need the luck not to face an establishment unified behind another candidate. Assuming he’s lucky that way (which is quite plausible), he’ll then need to thread the needle of simultaneously distinguishing himself from the pack and reassuring the establishment that he is an acceptable nominee.
Foreign policy could be useful for the first. There is no way, no matter what he says, that Rand Paul is going to get the support of the neoconservative faction. That’s doubly or trebly true because Hillary Clinton is overwhelmingly likely to be the Democratic nominee; it is hard to imagine a more comfortable Democrat than Clinton for a “hard Wilsonian” hawk. The core establishment concern about Paul, apart from electability (which is a big concern), is whether he’s serious about the whole “end the Fed” stuff – whether he will take reckless, ideologically-driven stances on core economic matters that panic the market. To reassure the establishment, Paul needs to signal that he’s not going to do anything crazy on that score.
He should be able to get away with doing that because his primary opponents will probably be unable to attack him as crazy on economic or budgetary matters because too much of the Tea Party base agrees with Paul. Instead, they are likely to attack him, as they have attacked his father in the past, on his foreign policy views. Paul’s response will undoubtedly be a mixture of defense of principle and pragmatic violation of it – but the contrast in foreign policy will be made by others. He can’t avoid it, and so he might as well embrace it and make it a selling point. If he succeeds, that will be one part of the measure of his success.
But in the general election, the situation will be completely different. Hillary Clinton has almost no incentive to bring up foreign policy, except to contrast her considerable experience with Paul’s greenness. She won’t run on the need to confront evil in Syria, or Ukraine, or wherever; she’ll run on competence, not ideology. Her overwhelming incentive is going to be to focus on Paul’s economic and budgetary views, and his leadership role in some of the most ignominious moments of the Congressional GOP’s budgetary hostage-taking. (That, and play identity politics.) That is the contrast she is going to draw, and Paul is going to have to own it.
And if he tries to draw other entirely viable contrasts – on civil liberties, or on foreign policy; Paul v. Clinton would possibly be the biggest “choice not an echo” contest since Goldwater v. Johnson – he runs into the problem that in all these areas he’s running against the GOP brand. Paul’s complaints about violations of civil liberties or abuse of Presidential prerogative are mostly complaints about ways that the Obama Administration entrenched or extended Bush-era precedents. His complaints about foreign wars are mostly complaints about wars started during the last Republican Administration. I think the message, “vote Republican – now under new management that believes the exact opposite of what the old management believed,” is a very, very hard one to get across.
Then there are events. If negotiations with Iran prove successful, Clinton will take credit and Paul will have little to say – and much less incentive to bring up foreign policy at all. If they fail, and we wind up at war, Paul will be in the unenviable position of either saying that he would have handled negotiations better than the Obama Administration (a weak argument), or that even after negotiations fail we should not go to war (a stronger argument but an extremely risky one), or supporting war (thereby rendering null any attempt to draw a serious foreign policy contrast). If they fail, and we don’t wind up at war, Paul will already have run a heck of a gauntlet in the GOP primaries on the subject, with every other candidate blasting the Administration for its pusillanimity and Paul, by default, at least semi-defending it. That history would not help him draw a contrast with Clinton on foreign policy in the general election.
If Paul wins the nomination (a long shot, but not impossible), the general election will turn on economic issues. Paul will run against the Obama Administration record. Clinton will run against Paul’s, and his party’s, extremely unpopular economic views. And the result will probably turn, more than anything, on how well or badly the recovery is doing in 2016.
That having been said, after the election foreign policy will start to matter. If Clinton wins, Paul’s ideological opponents on foreign policy will certainly try to spin his loss as a decisive referendum on anti-interventionism. And if Paul wins, then he’ll have the opportunity to actually implement his foreign policy views, and we’ll finally learn just how different from the consensus they really are.
The latest Administration tweak to the landmark healthcare law – an additional year’s delay to the employer mandate on businesses with between 50 and 100 full-time-equivalent employees – has predictably excited Republican opponents of the law. Coming on the heels of the CBO report that the law would reduce overall hours worked by the equivalent of 2 million full-time positions, the charge that the law is a “job-killer” has got some wind in its sails, fairly or no. But what these events really show is how weak the ambient employment environment remains, and the cost of same for policymaking.
The purpose of the employer mandate in the first place, apart from raising revenue necessary to make the bill deficit-neutral, was to counteract the incentive for employers not to provide health insurance to new hires, or to drop health insurance entirely where the benefit they provided previously was no longer compliant with the minimum coverage standards. The goal, in other words, was to reduce the disruption associated with the introduction of the ACA by reducing the number of employees who would be shunted from employer coverage onto the individual market via the exchanges.
In a tight labor market, this regulation would be more likely to work as planned, but would also be relatively less-necessary since prospective employees would be in a good position to shop for the employer offering the best overall package. In a persistently slack labor market with weak overall demand, such as we live in now, the regulation instead creates an incentive for employers to reduce costs by reducing full-time and full-time-equivalent staff. Precisely because employers are in a stronger position to bargain with employees, they are in a stronger position to bargain with the government to reduce the regulatory burden of the law. Which is what has happened.
What if the employer mandate were scrapped permanently? This would shift the burden of providing health insurance for a certain class of employees off of the back of business and onto the back of individuals (subject to the individual mandate) and the taxpayers (who provide the subsidies that lower-income individuals receive). Since those subsidies phase out with income, they create a high effective marginal tax rate on income, which in turn creates a disincentive to increase income (by working more hours) within a certain income range. This is one of the effects identified by the CBO. (The other effect works in precisely the opposite direction, positing that some employees have a target income; anything that makes it easier to achieve that income with fewer hours of work serves to reduce employee hours.) In other words, there’s going to be some negative effect on employment regardless of where the burden falls (though the effect would undoubtedly be more diffuse in the absence of an employer mandate, offset by the fact that reductions in employer coverage would be larger).
What the episode illustrates is how difficult it is for the government to do what those concerned about inequality want it to do: add its thumb to the side of labor in the battle with capital for relative share of the economic pie.
This new focus is apparent on both sides of the ideological divide – albeit one may question the degree to which reform conservatives’ interest is driven by the need to compete with a similar focus on the left. On the left (and in quirky right-wing places like this magazine), there’s increased interest in substantially increasing the minimum wage. The reform conservative counter is to advocate wage subsidies, which would cost more taxpayer money directly but would impose no regulatory burden on employers, and hence no direct disincentive to employment. Both, though, are efforts fundamentally to increase the effective return to employment – rather than to increase employment directly.
Jim Antle does a fabulous job of pointing out how the contours of the debate about work limn a cultural change since the mid-1990s. Then, the heart of the debate was about welfare reform, which is to say, how to move people to work. Now the heart of the debate is over how to make work pay adequately, both to combat inequality and, frankly, to make room for other obligations and pursuits.
Antle focuses on the cultural implications of the shift, but I would argue that a key reason for the shift is economic. Whereas in the 1990s, employment growth was robust but a segment of the population was left out of the boom, focusing on removing disincentives to employment made sense, along with finding ways to ease the transition (it’s worth recalling that when welfare reform was pursued most seriously, as, for example, in Tommy Thompson’s Wisconsin, real money was spent on helping welfare recipients transition to work). The goal was to leverage a strengthening labor market and ensure that a rising tide really did lift all boats. Now, there is no strong labor market to leverage, and policymakers are trying to adapt to and address the costs of that reality.
But slack labor markets themselves pose real risks to both approaches to raising effective wages. In tight labor markets, an increase in the minimum wage would be less necessary, but it would also be more effective in producing an incentive to invest in training and equipment to achieve higher labor productivity, which redounds to the benefit of all. In a slack labor market with weak demand, a higher minimum wage creates incentives to evade the minimum – to evade an increase in costs that would make the business less competitive – whether by reducing hours per employee, or cutting back on marginal lines of business, or offshoring, or creating categories of employee who are not subject to minimum wage rules (interns, freelancers, piece-rate workers, tip- or commission-based employees, off-the-books employees, etc.). Once average wages are up, yes, employers would respond to an expected increase in demand by increasing investment. But if they don’t forecast that increase ab initio, then the bootstrapping desired by the advocates of a higher minimum wage may never get going.
A wage subsidy approach doesn’t burden employers. Instead it burdens taxpayers – and employees. Why employees? Because the subsidies phase out, they effectively become a high marginal tax rate on wage income at the low end of the scale. Employers would be cognizant of that fact, and it would affect the wages they would be willing to offer. Again, the effect would be different in slack markets versus in tight ones. In tight labor markets, employers couldn’t afford to put a ceiling on wages because of the fear of losing out in a competition for available employees. A wage subsidy would therefore largely be captured by employees themselves. In slack labor markets with weak demand, however, the subsidy would make it possible for employers to reduce effective wages, and allow the subsidy to pick up the slack. An employee who was willing to work for $10/hour before a subsidy went into effect would, if jobs are scarce, certainly still be willing to work for $10/hour afterwards, even if now $9 came from the employer and $1 from the government. Thus there’s the possibility that wage subsidies would be largely captured by employers rather than employees. The subsidy might not increase effective wages, and might create an effective ceiling on income because of the high marginal tax rates.
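The phase-out mechanics described above are easiest to see with toy numbers. Here is a minimal sketch; the subsidy schedule, dollar amounts, and phase-out rate are all hypothetical, chosen purely to make the arithmetic visible, not drawn from any actual proposal:

```python
# Toy illustration of how a phasing-out subsidy acts like a marginal tax.
# All numbers are hypothetical, chosen only to make the mechanics visible.

def subsidy(income):
    """A stylized subsidy: $4,000 at zero income, phased out at
    40 cents per extra dollar earned, gone entirely at $10,000."""
    return max(0.0, 4000.0 - 0.40 * income)

def effective_marginal_rate(income, delta=1000.0):
    """Share of an extra `delta` of earnings lost to the subsidy phase-out."""
    gain = (income + delta + subsidy(income + delta)) - (income + subsidy(income))
    return 1.0 - gain / delta

# Within the phase-out range, earning $1,000 more nets only $600:
print(effective_marginal_rate(5000.0))   # 0.4 – a 40% implicit marginal rate
# Above the phase-out range, the implicit rate disappears:
print(effective_marginal_rate(20000.0))  # 0.0
```

The point the sketch makes is the one in the text: the implicit tax exists only inside the phase-out band, which is exactly where an employer in a slack labor market can let the subsidy substitute for wages.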
None of this is intended to be an argument against the ACA, which I largely favored at the time and still do. It’s an argument that the perverse incentives created by redistribution schemes are more intractable in a weak labor market such as we have today than they are in a strong one. Those redistribution schemes may be worthwhile for other reasons nonetheless – for example, because they improve health outcomes overall, or alleviate specific social ills. All of which means we should focus more on how to strengthen that labor market (whether by improving the long-term outlook for real growth or the short-term outlook for nominal growth; the approaches are not mutually-exclusive) than on trying to perfectly optimize redistribution schemes undertaken for other purposes.
Will Wilkinson worries about the death of the old-school blog that was part of the “gift economy”:
There’s nothing wrong with blogging for money, but the terms of social exchange are queered a little by the cash nexus. A personal blog, a blog that is really your own, and not a channel of The Daily Beast or Forbes or The Washington Post or what have you, is an iterated game with the purity of non-commercial social intercourse. The difference between hanging out and getting paid to hang out. Anyway, in old-school blogging, you put things out there, broadcast bits of your mind. You just give it away and in return maybe you get some attention, which is nice, and some gratitude, which is even nicer. The real return, though, is in the conclusions people draw about you based on what you have said, about what what you have said says about you, about what it means relative to what you used to say. People form expectations about you. They start to imagine a character of you, start to write a little story about you. Some of this is validating, some is irritating, and some is downright hateful. In any case it all contributes to self-definition, helps the blogger locate and comprehend himself as a node in the social world. We all lost something when the first-gen blogs and bloggers got bought up. Or, at any rate, those bloggers lost something. I’m proud of us all, but there’s also something ruinous about our success, such as it is. We left the garden behind. A guy’s got to eat. I mostly stopped blogging for myself because I thought I couldn’t afford to give it away. But I miss the personal gift economy of the original blogosphere, I miss the self it helped me make, and I want at least a little of it back.
I completely understand what he’s getting at – but I want to complicate the picture a little bit.
I started blogging in 2002, hanging out my own shingle on blogspot. I did it primarily as a belated response to the trauma of 9-11: I had been emailing news items to a variety of friends and family with an obsessiveness that nearly deserved a DSM number, and one of them finally told me I should stop emailing him and start a blog if I felt compelled to tell everyone what I thought. So, against my wife’s explicit instructions, I did.
And I loved it, right from the get-go. The thrill of instant response to what I said was a perfect fit for my latent writerly ambitions for recognition and my Wall Streeter’s inherent attention deficits. I would write, I would press “publish,” and someone out there would respond.
But that response wasn’t merely gratifying or instructive; it shaped what I wrote, shaped the persona (a better word than “self”) that I was developing on-line. My style, my subject matter, my politics, my sense of who I was and was meant to be evolved in part based on what got positive reinforcement and what didn’t, even though I wasn’t being paid anything at all. A gift economy is still an economy, and there’s nothing particularly pure about non-commercial social discourse. “No man but a blockhead ever wrote except for money” – so said Sam Johnson, but in fact the truer statement is that no man but a blockhead ever tried to earn money by writing. When it comes to money, Willy Sutton had a much better understanding. So all of us writers, whatever our medium, write out of some other compulsion than to earn a living. And to the extent that that compulsion has something to do with having readers, we have to watch the progress of our addiction, how it is changing us.
Some of us do have to earn a living, of course, and that can, indeed, shape the way we write. But that’s not something unique to blogging. It applies to screenwriters and print journalists and lyricists. I have no doubt that it applies to poets, who surely want to write poetry that will be understood and appreciated by those few, disturbed individuals who make their lives reading contemporary poetry. After all, if they don’t get published, how likely are they to get that teaching gig that actually pays the bills? Anyway, there’s a reason Franz Kafka and Wallace Stevens didn’t quit their day jobs.
The struggle for anybody who actually cares about the quality of what they do is to keep an eye on something other than the immediate reception of the piece, whatever the work is, keep an eye on the object itself. Or, rather, to develop the confidence that you actually know what makes the object itself beautiful and true. The confidence to know that you are Orson Welles and not Ed Wood, to pick two artists who emphatically did it their own way.
The same is true, on a microscopic scale, for blogging. If all you’re doing is hanging out, you’re probably not writing anything very worth reading. If all you’re doing is chasing click bait, or following the news cycle, you’re probably not writing anything very worth reading. And that’s the fundamental question: do you want to write anything worth reading?
I’m quite sure Will Wilkinson does. Why else would he be pursuing an MFA in writing? Surely not because he wants to teach.
The heart of the argument sounds like a simple one:
- If the real return to capital is higher than the real growth rate, then the share of national income that accrues to capital will increase over time.
- Since capital is concentrated, this also means a steadily increasing concentration of wealth.
- For most of human history, real growth was low, and therefore there was a steadily increasing concentration of wealth.
- The period from World War I through the 1970s was an exception to this trend, as capital experienced a series of extraordinary shocks (World Wars I and II, the Great Depression, the end of colonialism and the expropriation of capital that followed, etc.) that resulted in massive redistribution of wealth.
- The developed world is now in the process of returning to the historical norm of low real growth, with the result that capital’s share of income continues to rise, which, in turn, will result in an ever-widening gap between the wealthy and the rest of the population.
- This is not a market failure; indeed, the more efficient the market is, the more rapid this concentration will proceed.
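The logic of the first two bullets can be put as a back-of-the-envelope derivation. This is a deliberately stylized sketch, assuming capital income is fully reinvested (a simplification, not the book’s full model):

```latex
% Capital and national income grow at r and g respectively
% (assuming all capital income is reinvested):
K_{t+1} = (1+r)\,K_t, \qquad Y_{t+1} = (1+g)\,Y_t
% Capital's share of income is \alpha_t = r K_t / Y_t, so
\frac{\alpha_{t+1}}{\alpha_t} = \frac{1+r}{1+g} > 1 \quad \text{whenever } r > g
% i.e., capital's share of national income ratchets upward every period.
```

Relaxing the full-reinvestment assumption slows the ratchet but, so long as enough of the return is saved, does not stop it.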
I’m somewhat puzzled why this process is attributed to “capitalism” since, if you think about a pre-capitalist economy, it matches that description pretty well. When capital is overwhelmingly tied up in agricultural land, and the land is largely held by a small group of people willing to deploy violence to maintain their title, then returns to capital will certainly be higher than the negligible economic growth rate. The serfs will remain as poor as ever while the landowner retains virtually 100% of the surplus of their labor (though a percentage will accrue to artisans, soldiers, and others hired by the landowner to provide a value-added service which gives them some negotiating leverage). And, in feudal times, that’s pretty much what happened, with possible evolutionary consequences as the landed classes had a higher birth rate than the un-landed.
Or take a look at the description of the political economy of the Roman Republic in the early books of Livy. It’s an endless cycle in which the Senators, owning most of the land, reduce the plebes to debt peonage, then the plebes begin to revolt, there’s some debt reform, and the cycle begins again – until the Romans got the brilliant idea of conquering the rest of Italy and letting the whole population, plebes and Senators, live off the surplus generated by the conquered peoples.
Is there any reason to think we’re headed back to that kind of economy of scarcity? Here are some preliminary thoughts I have:
- The components of real growth are growth in productivity (output per worker) and growth in population. Growth in population in the developed world is low to negative. Productivity growth, meanwhile, has taken a different form than it did during the industrial revolution. A big component of productivity growth these days involves outsourcing functions to lower-wage countries. This makes the remaining employees in the developed country more productive, but it’s not actually comparable to the application of capital so that one laborer can do more in a given hour.
- However, that same process is part of what is driving a dramatic increase in productivity in countries like China. Is inequality actually even increasing on a global scale? I’m doubtful. China and India are very large, and are growing more wealthy at a rapid rate. Those societies are becoming more unequal – but because they started off so poor, I would expect the Gini coefficient of the world as a whole to be going down. As China’s wealth burgeons even as its population growth stalls and begins to decline, and as India follows suit several decades behind, it will be interesting to see whether this dynamic in the developed world changes. Predicting a reversion to the pre-industrial mean seems like a pretty bold move when so many large variables are still in flux.
- The other big wild card is Africa, where population growth continues to be very high and where productivity growth has only just begun to take off. By the end of the century, according to the U.N.’s medium population projection, nearly 40% of the world’s population will be African. The productivity growth of the African population is the main unknown variable that will determine the global Gini coefficient at century’s end.
- We’re still actually in the early stages of the information revolution, and so it’s too soon to say whether the kinds of broad-based, huge increases in labor productivity we associate with the industrial revolution will be replicated with the information revolution. I don’t think anyone knows the answer to that question.
- Notwithstanding what happens to “true” productivity growth in developed societies, low population growth has an effect on the value of assets that depreciate rapidly. When the population is growing, it makes sense to spend money now on physical plant to serve growing demand, and on physical infrastructure to house and move people, even if you expect that plant to deteriorate quickly or that infrastructure to need to be replaced. When the population is stagnant or shrinking, it doesn’t make sense to invest in rapidly-depreciating assets. Instead, it makes more sense to invest in assets that will retain their value or even appreciate. Cathedrals rather than tract houses, say.
- Piketty proposes a global tax on wealth to restrain the growth of inequality. One objection to such a tax is that the incentives for a given state to cheat are simply too large – any state that had a lower tax on wealth than the cartel would attract enormous inflows of capital. But there’s another objection: Piketty implicitly assumes that such a tax would be used to restrain the growth of inequality within the developed world. But why would the developing world go along with such a scheme? It’s one thing if Switzerland or the UAE cheats. It’s quite another thing if India does. North-South dynamics should completely overwhelm the internal dynamics within the developed world, as well as the problems of coordinating between developed countries.
- On the other hand, if you wanted to impose a wealth tax within the developed world, the way to do it would be to eliminate physical cash and impose a negative overnight interest rate on savings. This is something we’re going to have to be able to do anyway before too long in societies with a negative population growth rate, because those societies will periodically experience a negative rate of economic growth. But it will also be necessary to prevent capital from capturing an outsized share of national income during such periodic recessions. I suspect that such a system would be much more effective at achieving Piketty’s objectives than a tax, because of the limited liquidity of “cheater” currencies.
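For reference, the Gini coefficient invoked in the bullets above can be computed directly from a list of incomes. A minimal sketch, using toy numbers rather than real data:

```python
def gini(incomes):
    """Gini coefficient of a list of incomes, via the standard
    sorted-rank formula: 0 = perfect equality, values approaching
    1 = all income concentrated in one person."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Weight each income by its rank (1 = poorest, n = richest).
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # 0.0  – everyone equal
print(gini([0, 0, 0, 100]))  # 0.75 – one person holds everything
```

The point in the text is about composition: a world Gini computed over everyone can fall even while the within-country Gini of China or India rises, because hundreds of millions of formerly very poor people are moving toward the global middle.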
Rest assured, I expect to return to this topic again.
Damon Linker does a fine job tearing into the absurdity of Jamie Dimon’s (of JP Morgan Chase) and Henrique De Castro’s (of Yahoo) stratospheric compensation in the wake of lackluster to poor performance of the public corporations in their respective charge. We’ve heard that news before, but so long as it doesn’t change, it’s still news.
But in passing, he makes a point that I think is worth another look. Apropos of why the 1% continues to get richer at a faster rate than other segments of the population, he says:
Part of it is undoubtedly a result of the greater opportunities for wealth generation enjoyed by rich people everywhere. Turning $1 million into $10 million is usually easier than acquiring the $1 million in the first place — and, all things being equal, turning $10 million into $100 million is even easier.
Why, though, should that be the rule? Why should returns to wealth “accelerate” in this fashion? It’s not a law of nature by any means.
With any enterprise, greater scale brings greater efficiencies in some areas, and worse efficiencies in others. Greater scale gives you greater bargaining leverage in contract negotiations, standardization can reduce the overhead associated with all sorts of decision making, etc. But, on the other hand, greater scale means that moving information up the chain of command gets more difficult and expensive, that vested interests grow up within the organization which conflict with one another and are not aligned with the interests of the organization as a whole, and that standardization can substantially reduce flexibility. At the very largest scales, it becomes impossible to grow because the market becomes saturated.
What all of the above should mean is that larger fortunes/businesses are more readily preserved, either growing or decaying slowly, while smaller ones have a greater chance of both growing rapidly and evaporating completely. It should not, in general, be the case that larger fortunes or businesses grow more rapidly more easily than smaller ones.
Moreover, right now we are purported to be in the middle of a long global savings glut (though that concept is, of course, disputed), where there is too much savings chasing too few legitimate investment opportunities. This is one explanation for low long-term rates and persistently recurrent asset bubbles. But if returns to capital are low in general, how does it become easier to grow a large fortune than a small one? Shouldn’t low returns to capital mean that big pools of money stagnate? Isn’t this precisely what Bill Gross was complaining about?
In an overall low-return environment, returns should scale inversely with the size of the investment opportunity. There should be no liquid asset classes that present attractive returns, and any large opportunities that are less-liquid but could attract large pools of capital should also show degraded returns because of the high degree of competition to invest. Whereas small, below-the-radar opportunities should still exist simply because they are too small to be worth the time of a large pool of capital to investigate. An environment of high real interest rates should be the one where capital holds the whip hand.
If it is indeed easier to make $100 million out of $10 million than to make $10 million out of $1 million, that suggests a process of cartelization in the investment world. In other words, the distinction between self-dealing in contracts for wages versus greater “opportunities” for the very wealthy to achieve high returns to capital may be specious. Both situations may involve a substantial element of self-dealing.
Alternatively or additionally, it’s yet more evidence that real interest rates are actually quite high.
Marriage patterns weren’t random in 1960 either, and the past popularity of “Cinderella marriages” is more myth than reality. In fact . . . assortative mating has actually increased only modestly since 1960. . . .
[R]ising income inequality isn’t really due to a rise in assortative mating per se. It’s mostly due to the simple fact that more women work outside the home today. After all, who a man marries doesn’t affect his household income much if his wife doesn’t have an outside job. But when women with college degrees all start working, it causes a big increase in upper class household incomes regardless of whether assortative mating has increased.
All true! But there have been some important changes from 1960 to 2005, per the data he presents. Assuming I’m reading the statistics right, there’s been a real change in the preferences of highly-educated women: the percentage of women with more than a college degree who married a high-school dropout dropped by over 70%, the percentage of women with a college degree who married a high-school dropout dropped by over 40%, and the percentage of women with more than a college degree who married a man with no more than a high school degree dropped by over 35%.
But the same is not true of the preferences of highly-educated men, which, in all three cases, are essentially unchanged from 1960 to 2005. Roughly the same percentage of highly-educated men in 2005 were willing to marry women with little education as was the case in 1960.
It’s not that men have remained more receptive to marrying below them in terms of education. Rather, women’s preferences have come to match men’s preferences over the 45 years in question. In 2005, the percentage of highly-educated women willing to marry men with a low education was essentially identical to the percentage of highly-educated men willing to marry women with a low education. In 1960, women were much more willing than men to “marry down” educationally-speaking.
That’s interesting, isn’t it – particularly given the fact that the percentage of women who complete college has soared since 1960. In 1960, men made up more than 60% of college degree-holders, versus under 40% women. By 2005, those percentages were nearly reversed.
Of course, it probably matters a whole lot more that the overall percentage of the population with a college degree has grown very substantially over the period in question, and that the overall percentage of the population that is married has dropped substantially over the period in question.
Regardless, I suspect it’s time to cue Hanna Rosin.
Rod Dreher muses about the decline of religious culture and its implications for “culturally” religious art:
From the outside, my guess is that culturally Catholic writers are more likely to be reacting against something. Their imaginations were formed by the culture and rituals of Catholicism, even if they’ve rejected the religion. I am skeptical, though, about whether there is anything identifiably or meaningfully Catholic about any culturally Catholic writer whose imagination was formed after the postconciliar dissolution of that strong and distinct American Catholic culture. I could be wrong about that; there is certainly something distinctly Jewish about culturally (but not religiously) Jewish writers. Then again, Jews are a minority in America, whereas Catholics are members of the largest church in the country — though an increasingly assimilated one.
A few thoughts.
First of all, what about Catholic writers from ethnic minority groups? People like Oscar Hijuelos, or Richard Rodriguez - these are Catholic writers, right? And they are certainly culturally distinct. And Catholicism is part and parcel of that cultural distinction – but not the whole of it.
Of course, Hijuelos and Rodriguez are interested in religious and spiritual questions in a fundamental way – they may just be names to add to the list of properly Catholic writers. But take someone like Junot Diaz. The culture he comes from is emphatically a Catholic culture. And I wouldn’t say that he’s reacting against that as a central concern – not in the way that, say, James Joyce was thoroughly formed by but reacted strongly against Irish Catholicism. But neither is his work consciously coming from anything like an explicitly Catholic perspective (indeed, I suspect he would consciously reject such a perspective, though obviously I don’t know that).
Mentioning Diaz brings me to a second point: many people have absorbed some of their worldview through other writers without themselves being strongly committed religiously-speaking (or even, necessarily, knowledgeable). In Diaz’s case, Tolkien clearly means a huge amount to him, and not just as a matter of nostalgia for his childhood. Dreher would, I’m sure, agree that Tolkien was a deeply Catholic writer. How do you “score” that kind of influence on a writer? What would you say about an avowedly non-religious writer who was plainly influenced by Dostoevsky?
To pick another example, film noir is a distinctly Catholic-inflected genre of film. (Most people would describe the world of noir as godless, but I would argue that the god that is absent from the world of noir is the Roman Catholic god, which is why I say it’s a Catholic-inflected genre.) So how do you “score” a modern director or screenwriter, who may or may not have any authentic religious concerns of his own, who creates a film that is, if anything, hyper-conscious of the pulp-Catholic substrate of the genre?
Now, about Jewish writers. Judaism is simply less theology-centric than Catholicism, and as a consequence you can be a religiously observant Jew who writes books about religiously observant Jews and your fiction may still be Jewish primarily in the sociological sense. Take, as an example, Kaaterskill Falls, by Allegra Goodman. This is a very good novel to read if you want to get a feel for the dynamics of an ultra-Orthodox Jewish community. It’s also a good novel qua novel. But it isn’t god-haunted in the way that, say, Graham Greene’s or Flannery O’Connor’s work is. Or, for that matter, Isaac Bashevis Singer’s – even though Singer was less observant than Goodman is. I’d say similar things about Nathan Englander: that he’s interested in Jews and Judaism, but if he’s haunted by anything, it isn’t by God. History, maybe.
An interesting phenomenon to end on is Jewish writers who, in search of spiritual inspiration, wander into foreign fields precisely because that’s the best way to find their way home. Tony Kushner’s play, Angels in America, for example, is fascinated by Mormonism precisely because of its link to archaic Judaism, which (though politically very problematic) appears more spiritually nourishing than the Judaism that actually exists in the contemporary world.
And, per my comment about noir and Catholicism above, I’d encourage a more enterprising soul than myself to do a study of noir conventions and Christology in Michael Chabon’s Yiddish Policemen’s Union. This is a book that is Jewish from (as they say) the soles of its feet to the top of its head. But it’s also the story of a detective, wandering in a hopelessly fallen world, who discovers that the messiah just may have come, but his own people killed him (or, anyway, drove him to suicide) because he would not bring the kingdom the way they expected. That story sound familiar to anyone else?
If I had to pick a recent novel that best expresses contemporary American Jewish spirituality, it would be Nicole Krauss’s book, The History of Love, a fable about the possibility that nothing is ever irrevocably lost, and that, even if by an extraordinarily circuitous route, what is bashert ultimately always comes to pass. Most likely by means of a book. I didn’t love it, because I don’t experience the universe as behaving that way, and I get irritated by people who do (perhaps precisely because I can’t). But I can’t tell you how many of my friends loved it.
Posting has been light (actually nonexistent) for the past week because I’ve been in Utah for the Sundance Film Festival, primarily to see the premiere of the film, “Infinitely Polar Bear,” on which I was an executive producer. So this is mostly a combination apology post and self-promotion post. But I also thought I’d say a word or two about the whole experience.
As a budding filmmaker myself, my main take-home from Sundance was: “yikes!” I saw about a dozen films, and the weakest of them was thoroughly intimidating if I imagined trying to make it myself. If I put on my critic hat, it was pretty easy for me to say, “I would have shot that scene differently;” “that character’s pretty severely underwritten;” “isn’t the film kind of missing its third act?” etc. But if I put on my teeny-tiny filmmaker’s hat, I was just amazed by the amount of talent out there. And most of these films won’t even get much (if any) distribution!
All of which means that I came away strongly agreeing with most of Tim Wu‘s criticism of Manohla Dargis’s piece about how there are too many indie films being made, and too many released into the same small market. There is way, way too much talent out there to worry that too many films are being made, and I can’t imagine a better way to develop that talent than to put it to work making films. But Wu doesn’t adequately address an important component of Dargis’s argument, which is that the best small films get lost in the shuffle because everything with small commercial expectations is distributed in roughly the same way.
I’m not convinced that’s 100% true, but to the extent that it is it suggests that independent filmmakers could use better tools for reaching beyond the circle of aficionados to the larger universe of potential fans. As films get cheaper to make, and as the theatrical audience continues to shrink, independent film looks more and more like independent music. That has implications for distribution strategies.
In any event, highlights from the festival for me included:
- “Frank,” a cult comedy about a low-on-talent but high-on-hopes keyboardist who, on a fluke, is asked to join a bizarre noise-band fronted by a fellow who wears a giant papier-mâché head at all times (played by Michael Fassbender, which is a pretty funny joke in and of itself, casting Fassbender to play a role where you never see his face);
- “Blind,” a Norwegian film about a woman who has recently gone blind, and which does a fascinating job of making the interior life of said woman cinematic; much of what we see is her visualization of what she deduces – or imagines – or outright fantasizes – might be going on around her;
- “A Girl Walks Home Alone At Night,” a delightfully atmospheric Jim Jarmusch- or David Lynch-esque vampire flick set in an imaginary Iranian ghost town of Bad City (it’s an American film, shot in California, but the cast is all of Iranian extraction and the dialogue is all in Farsi);
- “Web Junkie,” an Israeli documentary about internet addiction in China, filmed in a rehab center where teenagers are sent by their parents (generally the kids have to be tricked or kidnapped) for a tough-love cold-turkey cure – just amazing for the level of access the filmmakers got to the kids, their parents, and the facility generally.
I don’t know that any of these films will get distribution. Frankly, I don’t know that they should! But they all stuck with me, and they’ll no doubt show up on the internet at some point. So now you know to look for them.
There’s a poetic rightness to the fact that “Inside Llewyn Davis,” one of the best films of the year, was not nominated for Best Picture by the Academy of Motion Picture Arts and Sciences. The latest from the Coen Brothers, “Inside Llewyn Davis” does just about everything it can to alienate voters, starting with the fact that it’s about a raging misanthrope. Like “Her” but in the opposite emotional key, this is another story where form and subject are perfectly mated, and where the story wouldn’t work at all if they were not.
The Coen Brothers have always been interested in losers. But never before have they gotten us so close to the heart of one of those losers, and a loser who knows that he deserves to win, and knows he just isn’t going to, and is consumed by the bitterness of that condition. Like “A Serious Man,” this feels like a very personal film for them, but whereas “A Serious Man” wrestled with origins – specifically their Jewish identity – “Inside Llewyn Davis” wrestles with destiny, and the possibility of not having one.
Played with wonderful naturalism by relative newcomer Oscar Isaac, Llewyn Davis is a folk singer in New York in 1961, right before folk is about to explode out of its niche with the emergence of Bob Dylan. But Llewyn isn’t going anywhere. He can’t afford even a rathole apartment downtown, and crashes on the couches of the vanishingly few New Yorkers who don’t hate his guts. One of them is his more successful friend’s wife (Carey Mulligan, giving a nicely subtle performance – watch her eyes while she sings), who informs him she’s knocked up, possibly by him. Another is an uptown academic couple who are faultlessly generous with him, and whose generosity he rewards by lashing out, cursing, saying he feels like a trained poodle.
He’s got more than his share of rotten luck – beaten up by inexplicably malevolent cowboys, robbed of even his minimal royalties by his rotten manager, trapped for hours on the way to Chicago with an outlandishly insulting old jazz man who won’t stop poking him with his canes (the only out-and-out Coen grotesque in the film, played by John Goodman). But he also makes his own bad luck, telling his sister (Jeanine Serralles) to throw out his old stuff (including his old mariner’s license, which he turns out to need), refusing royalties on a ridiculous novelty song that his friend (the one he cuckolded, played with delightfully deadpan squareness by Justin Timberlake) wrote so that he can get the cash quicker (only to see the song do well), and, when he finally gets a chance to audition for a manager who could really take him places (F. Murray Abraham), picking an obscure and depressing song guaranteed to turn him away. And his response to every piece of bad mazel he suffers is the same, whether he’s obviously implicated or not: a sour conviction that it figures, that the universe has it in for him one way or another.
With one exception. In what is certainly a screenwriting joke (given the ubiquity of Blake Snyder’s book) this deeply unattractive character does one noble thing. He saves a cat. Or tries to. The cat belongs to that uptown couple, and he accidentally lets it out of the apartment, then locks himself out retrieving it. And so he’s stuck with it, loses it again, finds it again – for much of the movie he’s saddled with the burden of saving this cat, and it’s the one burden he isn’t eager to put down, the one tie he is unwilling to sever. (The cat is ultimately saved without any help from him. It figures.)
The most painful moment of the film is when Llewyn plays a song for his aged and demented father, now residing in a nursing home. The father doesn’t speak – either because he can’t or because he’s long since decided it isn’t worth it. He shows no interest in Davis’s presence – either because he’s forgotten who he is, or because he’d long since given up hope that his wayward son would ever visit. (Or anybody else. His old union buddies all remember him fondly, but none of them know what happened to him, or have troubled to find out.) And then Llewyn plays the song, an old sailor’s song, and you can see his father’s heart breaking. And you can see Llewyn’s bitterness reaching even greater depths than before – because he’s decided to give up playing, and ship out, embrace the destiny of becoming his father. And he’s looking at what his father has become. But fate won’t even let him choose his misery; without his license, he can’t ship out, and back to the Gaslight he goes, to play the same songs he always does, ones that were never new, and never get old, because they’re folk songs.
The arc of the movie doesn’t bend toward anywhere; it literally ends where it began, and there’s no sense that Llewyn has changed either because of the experiences he’s had. And from where he sits, neither he nor the world will ever change; both are too stubborn. (He doesn’t hear the most blatant and obvious sign that the times are indeed a-changin’ – but we do.) It almost isn’t a story; rather, it’s a picture of what it feels like to be trapped in that state of miserable stasis, and to be convinced – with some evidence – that you’ve got more talent than all the nicer guys who are getting ahead of you.
“Inside Llewyn Davis” is the Coen Brothers’ portrait of the artist as a young failure. No wonder the Academy voters didn’t like it.
(As an aside: “Inside Llewyn Davis” is supposedly inspired by the life of Dave Van Ronk. Llewyn’s album cover is clearly modeled on Van Ronk’s album, “Inside Dave Van Ronk” from the period. Now, I don’t know what Van Ronk was like in his youth, other than from his own charmingly self-deprecating reminiscences, but I was privileged to see him perform in the mid-‘90s, and he was an absolutely delightful fellow eager to share with the audience, quite the opposite of Llewyn Davis’s comprehensive contempt. If you want to get a feel for Van Ronk, the man and his music, I recommend his late live album, “And the Tin Pan Bended and the Story Ended,” which is about half storytelling and half music and all wonderful.)
Since Ross Douthat was so kind as to notice my complaint that he’s not arguing with worthy atheists, it behooves me to notice that kindness – and to praise his latest offering as exactly what I was looking for.
I’m very glad that he’s clarified that he’s not making a “necessary foundations” argument. If I understand his argument now, it is that the new atheists’ worldview lacks “coherence” – whereas other world views, including some other varieties of atheism, would not lack that coherence so drastically.
I suspect that’s true. But what I would say in response is that virtually nobody has a “coherent” worldview. I’m pretty sure I don’t. And it’s only a certain sort of personality that feels a psychic need for a worldview characterized by coherence. I might even go further and say that some religions are more prone to seek that particular grail than others. I’d certainly rank Catholicism far higher on the “seeks coherence” scale than, say, Judaism, or the LDS Church, to say nothing of faith traditions like Hinduism that don’t even have a clear mechanism for defining the boundaries of inclusion and exclusion, and that hence by definition cannot provide that kind of coherence.
What I think is really bothersome about the “new atheists” is their style of argument, characterized (as Douthat aptly puts it) by overconfidence and “crowing self-righteousness.” But this is precisely why they are not worth arguing with on this subject. Why would a serious atheistic philosopher, as opposed to a rabble-rouser, waste his time arguing with Pat Robertson? And if he wouldn’t, then why should a serious theist bother arguing with an atheist who obviously has no interest in understanding either religion or the history of philosophical argument about ethics, but simply holds his own prejudices to be self-evident truths?
(By the way, I forget which new atheist it was – I think it was Christopher Hitchens – who averred that he preferred arguing with religious fundamentalists because they and he meant the same thing by religion, whereas those with a more subtle or complex theological approach struck him as merely being shifty. This is, I think, the right answer to Damon Linker’s question. The new atheists aren’t arguing with straw men. They’re arguing with real proponents of a contrary view who are quite as simple-minded as they are, and whose views are vastly more popular than those of serious theologians.)
I myself don’t like crowing self-righteousness of any variety, religious or atheistic. But I’m also skeptical of coherent world-views. Which leads me to Douthat’s delineation of the two types of people who might embrace religion in the absence of definitive conviction.
In the first case, the skeptic finds himself in possession of a deep-seated moral absolutism on certain questions that seems to only make sense in a divinely-ordered cosmos, yet also is intellectually unconvinced of the case that this divine ordering actually exists. Or, alternatively, she finds herself in need of a Higher Power or Purpose in her own life — like many people entering Alcoholics Anonymous, say — without necessarily being suddenly convinced that this Power is really out there. Such a person might then decide to live as if a religious tradition is correct — to practice without assent, to speak the words without full belief. And this, I think, is perfectly defensible, because it basically represents a form of exploration, a way of testing (perhaps the only way of testing, depending on your ideas about religion) a proposition that you find doubtful but appealing, an attempt to gain knowledge that might help smooth out the contradictions in your understanding of the world. If someone in the midst of that kind of skepticism-infused experimentation were forced to suddenly elaborate a complete world-picture, they might end up sounding as incoherent as I think the new atheists often sound. But with this crucial difference: They would be aware of the tensions, aware of the difficulties, and accepting them as a hopefully-temporary part of a personal process, rather than claiming to have arrived at a permanent intellectual solution and proselytizing ardently on its behalf.
In general terms, I have no problem with these approaches to religious belief, which feel very close to William James’s “will to believe.” I’m going to have to get a little personal though to delineate an important caveat to that general assent.
I spent a chunk of my late-20s and early-30s getting progressively more religious, precisely for the kinds of reasons that Douthat articulates above. On the one hand, I wanted a firmer grounding for what I already believed; on the other hand, I wanted something to strengthen my own moral resolve. Another way of putting it is that I wanted a guide to how to live, because I wasn’t sure I knew how to do it, and I was frightened. This might have had something to do with marrying relatively young, something to do with pursuing a career that in retrospect felt somewhat alien to me – whatever the collection of reasons, I felt the need for something solid to stand on, even though I was fully aware, intellectually, of the variety of arguments that could be made against religious belief in general, and traditional Judaism in particular.
The thing is, in my experience that kind of approach feels anything but exploratory. Precisely because the need is felt so strongly, it becomes terribly important to believe that there is a coherence, that things do fit together. The cracks in the foundation, when you see them, are quite alarming. And you look anxiously for ways to patch them. That might be described as being “aware of the tensions” in one’s worldview, but I don’t think it is anything like “accepting them as a hopefully-temporary part of a personal process.”
I don’t want to generalize too much from my own personal experience, but I did come out of it with a greater appreciation for the importance of honesty, and of trying to understand oneself so that one can be honest with oneself. Can you mouth the words without full assent? Sure – but what you’re testing is not whether you can make yourself believe the words by repetition – that’s a crazy way to come to believe something, when you think about it – but what it feels like to mouth those words, over and over, weekly, or daily, or multiple times a day as the case may be. Which is a really important thing to know, after all, since the experience of a religion is primarily that – an experience, a way of living and behaving, not a set of propositions assented to.
Which brings me around to his second type, the “noble lie” type:
The “churchgoing skeptic,” as described above, is someone who embraces religion experimentally in the hopes of harmonizing his own contradictory instincts and beliefs. The “religion as a noble lie” attitude that Millman is critiquing, on the other hand, is all about other people: This kind of “pro-religion” skeptic is pretty confident that he knows the score, and he’s just worried about what might happen if everybody else knows it. So instead of conducting experiments to test his own beliefs and ideas, he’s demanding — or suggesting, quietly, to people in positions of influence — that the religious portion of society be encouraged to stop experimenting with theirs.
I don’t want to say that there’s always a bright line between “skeptical churchgoing” and the “noble lie” school of thought — one can attend mass exclusively pour encourager les autres, and (at a more subconscious level) one can want other people to remain religious so that one always has the option of making one’s own experiment in faith. (Garry Wills, in his post-Vatican II book, “Bare Ruined Choirs,” has some interesting things to say about philo-Catholics of this sort, who liked the air of timeless certainty around the pre-conciliar church precisely because they felt like it kept real religion going in case they ever wanted to dabble in it.)
But taken on its own, the “noble lie” attitude offers a form of support that actual believers should reject. Given non-religious premises, there are various defenses of this perspective that one can make, and the more Machiavellian ones — in which religion really is the opiate of the masses, and that’s a good thing, because popular piety preserves the skeptic’s own social position, intellectual freedom, etc. — have a certain grim consistency that’s lacking in the naively anti-religious sincerity of the new atheists. But believers should still prefer the thundering anathemas of Coyne and Co. to the subtleties of some of religion’s atheistic defenders: Better a sincere enemy, in the end, than a conscious liar who calls himself a friend.
I agree completely with Douthat’s conclusion here – needless to say, since he’s agreeing with what I said in my previous post. And yet, just because I’m difficult, I can’t help but introduce two complicating thoughts.
First of all, I think it’s entirely reasonable for an atheist to say that he is glad religions exist because (a) religion is natural to most of humanity, for whatever reason, so trying to suppress it would be pointlessly destructive; and/or (b) precisely because they start from premises that he rejects (maybe can’t even understand), religious traditions may come to conclusions that would otherwise never be found, and that are worth investigating for other reasons despite their origins. I’ve made this argument before, and I believe it: that a hegemonic liberalism should affirm the value of subcultures that are not founded on liberal premises precisely because without them there would be nobody capable of resisting when that hegemonic liberalism takes its own premises to destructive conclusions, which, all of us being only human, we will undoubtedly do. I don’t think either of these are “noble lie” arguments; they’re arguments from humility, not arrogance. And they don’t require anyone to lie.
Second, I don’t think there’s anything inherently wrong with practicing faith for the sake of other people, rather than as an expression of one’s own convictions, provided that it’s done in the spirit of a gift rather than of condescension. I think that a man whose faith has largely been consumed by doubt, and who no longer faithfully follows the dictates of his religion, may still attend services – may still lead them, even, if he is told that his voice helps others to pray.
I hope so, anyway, since I’ve been that man often enough in recent years.