I see another Noah beat me to the punch on “The Lego Movie.” But, you know, you can always add another brick to the critical wall. So:
A lot of the commentary on the movie has asserted that the message (or a big part of the message) of the movie is: don’t limit yourself by following the instructions. Let your imagination run free! But this strikes me as a seriously weak reading.
Emmet does not have a soaring imagination yearning to fly free. The only thing he has ever invented is a double-decker couch, which everyone agrees is a terrible, terrible idea. There’s actually a scene where the other characters go inside his mind, so we know just how sunshiny-spotless it is. The only thing his head is good for is to serve as an impromptu axle for a wheel.
What he is good at is following instructions – and making sure that other people follow them.
There’s a crucial scene where the evil forces of Lord Business are attacking Cloud Cuckoo Land, and the various master builders – Batman, Wyldstyle, Unikitty, etc. – have to work together to build a submarine so they can plunge safely into the ocean below. But they don’t exactly know how to do that. They each have a “thing” – Batman uses only black pieces, natch – and so they each work independently on a portion of the sub. Vitruvius even tells Emmet, in the middle of the crisis, that he can contribute to building the sub by, you know, doing whatever comes to mind. (This is when Emmet builds his double-decker couch.)
Lo and behold, only a few minutes after submerging, the sub breaks apart into its constituent pieces. It falls apart because it lacks a unified design to hold it together. As Emmet points out later, the master builders’ problem is that they don’t know how to work as a team. Which is the one thing Emmet does know how to do.
Emmet does become a master builder after his encounter with the “man upstairs” (an instantaneous transformation that doesn’t actually make sense in story terms), but even after this transformation he isn’t able to defeat Lord Business. Lord Business can only be defeated by convincing him to switch sides: to stop micromanaging and to work with the master builders as their leader and coordinator rather than insisting that all ideas come from his office.
Yes, Kristula-Green is right that on the “meta” level of the humans who are playing with the Legos, this narrative resolution parallels the reconciliation of father and son. But within the frame of the Lego universe, this resolution represents the apotheosis of corporate management culture truisms. There’s no revolution to overthrow Lord Business; instead, Lord Business retains his position of authority but learns how to properly manage a corporate environment that encourages individual creativity and channels it effectively towards corporate ends. The captive master builders are freed from their prison cells and the rebellious ones outside are re-integrated into the new corporate culture.
This is how companies like Apple and Google present themselves. It’s also how the Lego corporation presents itself. And it’s equally valid as a metaphor for the process of making a movie like, say, “The Lego Movie,” which required effective coordination of the efforts of numerous creative individuals and the interests of multiple corporate franchise holders, and could never have been accomplished if those creative individuals had been shackled and forced to conform to a single person’s vision – but also could never have been accomplished if there were no coordinating vision at all.
And that’s not the message for the grownups. The sentimental business with the father and son – that’s the message for the grownups; that’s the ad. The corporate culture message is for the kids. Take a trip through kids’ entertainment these days, much of it produced by the Disney corporation, and you’ll see quite a bit of very similar messaging. And the dystopian visions of so much YA literature are the dark mirrors of the same corporate scenario, where children are forced to compete with each other in brutal games arranged by heartless adults. Rod Dreher has been writing a bunch about “narrative collapse” lately, but the narrative hasn’t collapsed. We still tell stories about how we’re supposed to live our lives. It just isn’t the narrative he’s looking for.
“The Lego Movie” isn’t distinctive for selling kids on the promise of a “cool” workplace culture where you can exercise your creative impulses if you learn to work well with others.
It’s distinctive for making that world actually seem fun. As fun as, you know, playing with Legos.
Daniel Larison misunderstands my point about competence and a hypothetical Clinton-Paul contest:
This [my claim that Clinton will run on competence not ideology in foreign policy] jumped out at me because Clinton doesn’t have any particular claim to foreign policy competence. Her tenure at State during Obama’s first term was very busy in terms of traveling around the world, but one would be hard-pressed to identify any successful major policies that Clinton could take credit for. Obama centralized foreign policy decisions in the White House to a great degree while she was the Secretary of State, and many of the major policies that Clinton is known to have supported don’t help her to claim competence in this area. As an advocate for arming the Syrian opposition, pushing for regime change in Libya, and backing escalation in Afghanistan, Clinton routinely took the more hawkish side in every internal administration debate, and that put her on what proved to be the wrong side of some of the most important decisions of the first term. For that matter, the main reason that Clinton is ever credited with foreign policy competence is that she reliably takes the conventional and “consensus” position on every major issue. In other words, her claim to competence is that she sticks to a predictably hawkish line. She would have to emphasize ideology, since that is what her foreign policy reputation is based on in the first place.
My point was not that Clinton actually has a record of competence in foreign policy; I don’t think she does. I agree, in fact, with pretty much all of Larison’s criticisms of her foreign policy record. I just don’t think Clinton is going to run on a platform of “She’ll keep us at war.” Rather, she will claim that she has the experience to know how to negotiate effectively and get results without war, and the clout to build a broad coalition of international support when the use of force is necessary. Meanwhile, she’ll portray Paul as a naive ideologue who doesn’t understand how the world works. Her actual foreign policy preferences are quite close to Senator McCain’s, but she won’t make jokes about bombing Iran, and won’t present herself as the heir to “bear any burden, pay any price.”
Clinton does not need to run on foreign policy ideology, because nobody will be asking her to – except, assuming he’s her opponent, Rand Paul. Why would she give him a debate he wants to have?
Maybe I’m wrong about that, and Clinton relishes a chance to make the case for her brand of hard Wilsonian foreign policy. But I doubt she thinks that’s the way to win. And she will be focused on that objective.
Daniel Larison links to a Matt Feeney piece asking whether, in the event Rand Paul runs for President, Americans will even notice that he has distinctly outside-the-Washington-consensus views on the subject of foreign policy. It’s an interesting question, but the real question is which Americans we’re talking about – that is to say, are we talking about the primaries, or the general election?
In a primary contest, Rand Paul would likely face an uphill struggle to win establishment support – which is to say, an uphill shot at the nomination in general. In addition, he may have to fight to lock down the support of organized Christian conservative groups, assuming they have a champion in the race (such as Mike Huckabee). He has advantages in his corner as well, but to win the nomination he’ll first of all need the luck not to face an establishment unified behind another candidate. Assuming he’s lucky that way (which is quite plausible), he’ll then need to thread the needle of simultaneously distinguishing himself from the pack and reassuring the establishment that he is an acceptable nominee.
Foreign policy could be useful for the first task. There is no way, no matter what he says, that Rand Paul is going to get the support of the neoconservative faction. That’s doubly or trebly true because Hillary Clinton is overwhelmingly likely to be the Democratic nominee; it is hard to imagine a Democrat more congenial to a “hard Wilsonian” hawk than Clinton. The core establishment concern about Paul, apart from electability (which is a big concern), is whether he’s serious about the whole “end the Fed” business – whether he will take reckless, ideologically-driven stances on core economic matters that panic the market. To reassure the establishment, Paul needs to signal that he’s not going to do anything crazy on that score.
He should be able to get away with that, because his primary opponents will probably be unable to attack him as crazy on economic or budgetary matters – too much of the Tea Party base agrees with Paul. Instead, they are likely to attack him, as they attacked his father in the past, on his foreign policy views. Paul’s response will undoubtedly be a mixture of defending his principles and pragmatically violating them – but the contrast on foreign policy will be drawn by others. He can’t avoid it, and so he might as well embrace it and make it a selling point. If he wins the nomination, that embrace will be part of how he did it.
But in the general election, the situation will be completely different. Hillary Clinton has almost no incentive to bring up foreign policy, except to contrast her considerable experience with Paul’s greenness. She won’t run on the need to confront evil in Syria, or Ukraine, or wherever; she’ll run on competence, not ideology. Her overwhelming incentive is going to be to focus on Paul’s economic and budgetary views, and his leadership role in some of the most ignominious moments of the Congressional GOP’s budgetary hostage-taking. (That, and play identity politics.) That is the contrast she is going to draw, and Paul is going to have to own it.
And if he tries to draw other entirely viable contrasts – on civil liberties, or on foreign policy; Paul v. Clinton would possibly be the biggest “choice not an echo” contest since Goldwater v. Johnson – he runs into the problem that in all these areas he’s running against the GOP brand. Paul’s complaints about violations of civil liberties or abuse of Presidential prerogative are mostly complaints about ways that the Obama Administration entrenched or extended Bush-era precedents. His complaints about foreign wars are mostly complaints about wars started during the last Republican Administration. I think the message, “vote Republican – now under new management that believes the exact opposite of what the old management believed,” is a very, very hard one to get across.
Then there are events. If negotiations with Iran prove successful, Clinton will take credit and Paul will have little to say – and much less incentive to bring up foreign policy at all. If they fail, and we wind up at war, Paul will be in the unenviable position of either saying that he would have handled negotiations better than the Obama Administration (a weak argument), or that even after negotiations fail we should not go to war (a stronger argument but an extremely risky one), or supporting war (thereby rendering null any attempt to draw a serious foreign policy contrast). If they fail, and we don’t wind up at war, Paul will already have run a heck of a gauntlet in the GOP primaries on the subject, with every other candidate blasting the Administration for its pusillanimity and Paul, by default, at least semi-defending it. That history would not help him draw a contrast with Clinton on foreign policy in the general election.
If Paul wins the nomination (a long shot, but not impossible), the general election will turn on economic issues. Paul will run against the Obama Administration record. Clinton will run against Paul’s, and his party’s, extremely unpopular economic views. And the result will probably turn, more than anything, on how well or badly the recovery is doing in 2016.
That having been said, after the election foreign policy will start to matter. If Clinton wins, Paul’s ideological opponents on foreign policy will certainly try to spin his loss as a decisive referendum on anti-interventionism. And if Paul wins, then he’ll have the opportunity to actually implement his foreign policy views, and we’ll finally learn just how different from the consensus they really are.
The latest Administration tweak to the landmark healthcare law – an additional year’s delay of the employer mandate for businesses with between 50 and 100 full-time-equivalent employees – has predictably excited Republican opponents of the law. Coming on the heels of the CBO report that the law would reduce overall hours worked by the equivalent of 2 million full-time positions, the charge that the law is a “job-killer” has some wind in its sails, fairly or not. But what these events really show is how weak the ambient employment environment remains, and what that weakness costs us in policymaking.
The purpose of the employer mandate in the first place, apart from raising the revenue necessary to make the bill deficit-neutral, was to counteract employers’ incentive not to provide health insurance to new hires, or to drop health insurance entirely where the benefit they previously provided was no longer compliant with the minimum coverage standards. The goal, in other words, was to reduce the disruption associated with the introduction of the ACA – to reduce the number of employees who would be shunted from employer coverage onto the individual market via the exchanges.
In a tight labor market, this regulation would be more likely to work as planned, but it would also be relatively less necessary, since prospective employees would be in a good position to shop for the employer offering the best overall package. In a persistently slack labor market with weak overall demand, such as we live in now, the regulation instead creates an incentive for employers to reduce costs by reducing full-time and full-time-equivalent staff. And precisely because employers are in a stronger position to bargain with employees, they are in a stronger position to bargain with the government to reduce the regulatory burden of the law. Which is what has happened.
Suppose the employer mandate were scrapped permanently. This would shift the burden of providing health insurance for a certain class of employees off the back of business and onto the backs of individuals (subject to the individual mandate) and the taxpayers (who provide the subsidies that lower-income individuals receive). Since those subsidies phase out with income, they create a high effective marginal tax rate on income, which in turn creates a disincentive to increase income (by working more hours) within a certain income range. This is one of the effects identified by the CBO. (The other effect runs through income rather than incentives: some employees have a target income, and anything that makes it easier to achieve that income with fewer hours of work reduces hours worked.) In other words, there’s going to be some negative effect on employment regardless of where the burden falls (though the effect would undoubtedly be more diffuse in the absence of an employer mandate, offset by the fact that reductions in employer coverage would be larger).
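To make that phase-out arithmetic concrete, here’s a toy sketch in Python. The thresholds and rates are invented for illustration – they are not the ACA’s actual subsidy schedule – but they show how a subsidy that phases out with income acts as a steep effective marginal tax inside the phase-out range:

```python
# Hypothetical subsidy schedule (illustrative numbers only, not the ACA's):
# full subsidy below $20k of income, phased out linearly up to $40k.

def subsidy(income, max_subsidy=5000.0, phase_start=20000.0, phase_end=40000.0):
    """Return the subsidy a worker at this income would receive."""
    if income <= phase_start:
        return max_subsidy
    if income >= phase_end:
        return 0.0
    # Linear phase-out between the two thresholds.
    return max_subsidy * (phase_end - income) / (phase_end - phase_start)

def effective_marginal_rate(income, statutory_rate=0.15, step=1000.0):
    """Fraction of an extra `step` dollars lost to tax plus subsidy clawback."""
    extra_tax = statutory_rate * step
    lost_subsidy = subsidy(income) - subsidy(income + step)
    return (extra_tax + lost_subsidy) / step

# Below the phase-out range, the worker faces only the statutory rate (~15%);
# inside it, the clawback adds another 25 points (~40% all told).
print(effective_marginal_rate(15000))
print(effective_marginal_rate(25000))
```

A worker inside the phase-out range keeps only about 60 cents of each extra dollar earned, which is exactly the disincentive to add hours that the CBO flagged.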
What the episode illustrates is how difficult it is for the government to do what those concerned about inequality want it to do: put its thumb on the scale for labor in its battle with capital over relative shares of the economic pie.
This new focus on the returns to work is apparent on both sides of the ideological divide – though one may question the degree to which reform conservatives’ interest is driven by the need to compete with a similar focus on the left. On the left (and in quirky right-wing places like this magazine), there’s increased interest in substantially raising the minimum wage. The reform conservative counter is to advocate wage subsidies, which would cost more taxpayer money directly but would impose no regulatory burden on employers, and hence no direct disincentive to employment. Both, though, are fundamentally efforts to increase the effective return to employment, rather than to increase employment directly.
Jim Antle does a fabulous job of showing how the contours of the debate about work limn a cultural change since the mid-1990s. Then, the heart of the debate was welfare reform, which is to say, how to move people into work. Now the heart of the debate is how to make work pay adequately, both to combat inequality and, frankly, to make room for other obligations and pursuits.
Antle focuses on the cultural implications of the shift, but I would argue that a key reason for the shift is economic. Whereas in the 1990s, when employment growth was robust but a segment of the population was left out of the boom, focusing on removing disincentives to employment made sense, along with finding ways to ease the transition (it’s worth recalling that where welfare reform was pursued most seriously, as, for example, in Tommy Thompson’s Wisconsin, real money was spent on helping welfare recipients transition to work). The goal was to leverage a strengthening labor market and ensure that a rising tide really did lift all boats. Now, there is no strong labor market to leverage, and policymakers are trying to adapt to and address the costs of that reality.
But slack labor markets pose real risks to both approaches to raising effective wages. In tight labor markets, an increase in the minimum wage would be less necessary, but it would also be more effective in creating an incentive to invest in training and equipment to achieve higher labor productivity, which redounds to the benefit of all. In a slack labor market with weak demand, a higher minimum wage instead creates incentives to evade the minimum – to avoid a cost increase that would make the business less competitive – whether by reducing hours per employee, cutting back on marginal lines of business, offshoring, or creating categories of employee not subject to minimum wage rules (interns, freelancers, piece-rate workers, tip- or commission-based employees, off-the-books employees, etc.). Once average wages are up, yes, employers would respond to an expected increase in demand by increasing investment. But if they don’t forecast that increase ab initio, then the bootstrapping desired by advocates of a higher minimum wage may never get going.
A wage subsidy approach doesn’t burden employers. Instead it burdens taxpayers – and employees. Why employees? Because the subsidies phase out, they effectively become a high marginal tax rate on wage income at the low end of the scale. Employers would be cognizant of that fact, and it would affect the wages they would be willing to offer. Again, the effect would be different in slack markets versus in tight ones. In tight labor markets, employers couldn’t afford to put a ceiling on wages because of the fear of losing out in a competition for available employees. A wage subsidy would therefore largely be captured by employees themselves. In slack labor markets with weak demand, however, the subsidy would make it possible for employers to reduce effective wages, and allow the subsidy to pick up the slack. An employee who was willing to work for $10/hour before a subsidy went into effect would, if jobs are scarce, certainly still be willing to work for $10/hour afterwards, even if now $9 came from the employer and $1 from the government. Thus there’s the possibility that wage subsidies would be largely captured by employers rather than employees. The subsidy might not increase effective wages, and might create an effective ceiling on income because of the high marginal tax rates.
None of this is intended as an argument against the ACA, which I largely favored at the time and still do. It’s an argument that the perverse incentives created by redistribution schemes are more intractable in a weak labor market such as we have today than in a strong one. Those redistribution schemes may be worthwhile for other reasons nonetheless – for example, because they improve health outcomes overall, or alleviate specific social ills. All of which means we should focus more on how to strengthen that labor market (whether by improving the long-term outlook for real growth or the short-term outlook for nominal growth; the approaches are not mutually exclusive) than on trying to perfectly optimize redistribution schemes undertaken for other purposes.
Will Wilkinson worries about the death of the old-school blog that was part of the “gift economy”:
There’s nothing wrong with blogging for money, but the terms of social exchange are queered a little by the cash nexus. A personal blog, a blog that is really your own, and not a channel of the The Daily Beast or Forbes or The Washington Post or what have you, is an iterated game with the purity of non-commercial social intercourse. The difference between hanging out and getting paid to hang out. Anyway, in old-school blogging, you put things out there, broadcast bits of your mind. You just give it away and in return maybe you get some attention, which is nice, and some gratitude, which is even nicer. The real return, though, is in the conclusions people draw about you based on what you have said, about what what you have said says about you, about what it means relative to what you used to say. People form expectations about you. They start to imagine a character of you, start to write a little story about you. Some of this is validating, some is irritating, and some is downright hateful. In any case it all contributes to self-definition, helps the blogger locate and comprehend himself as a node in the social world. We all lost something when the first-gen blogs and bloggers got bought up. Or, at any rate, those bloggers lost something. I’m proud of us all, but there’s also something ruinous about our success, such as it is. We left the garden behind. A guy’s got to eat. I mostly stopped blogging for myself because I thought I couldn’t afford to give it away. But I miss the personal gift economy of the original blogosphere, I miss the self it helped me make, and I want at least a little of it back.
I completely understand what he’s getting at – but I want to complicate the picture a little bit.
I started blogging in 2002, hanging out my own shingle on blogspot. I did it primarily as a belated response to the trauma of 9-11: I had been emailing news items to a variety of friends and family with an obsessiveness that nearly deserved a DSM number, and one of them finally told me I should stop emailing him and start a blog if I felt compelled to tell everyone what I thought. So, against my wife’s explicit instructions, I did.
And I loved it, right from the get-go. The thrill of instant response to what I said was a perfect fit for my latent writerly ambitions for recognition and my Wall Streeter’s inherent attention deficits. I would write, I would press “publish,” and someone out there would respond.
But that response wasn’t merely gratifying or instructive; it shaped what I wrote, shaped the persona (a better word than “self”) that I was developing on-line. My style, my subject matter, my politics, my sense of who I was and was meant to be evolved in part based on what got positive reinforcement and what didn’t, even though I wasn’t being paid anything at all. A gift economy is still an economy, and there’s nothing particularly pure about non-commercial social discourse. “No man but a blockhead ever wrote except for money” – so said Sam Johnson, but in fact the truer statement is that no man but a blockhead ever tried to earn money by writing. When it comes to money, Willie Sutton had a much better understanding. So all of us writers, whatever our medium, write out of some other compulsion than to earn a living. And to the extent that that compulsion has something to do with having readers, we have to watch the progress of our addiction, and how it is changing us.
Some of us do have to earn a living, of course, and that can, indeed, shape the way we write. But that’s not something unique to blogging. It applies to screenwriters and print journalists and lyricists. I have no doubt that it applies to poets, who surely want to write poetry that will be understood and appreciated by those few disturbed individuals who devote their lives to reading contemporary poetry. After all, if they don’t get published, how likely are they to get that teaching gig that actually pays the bills? Anyway, there’s a reason Franz Kafka and Wallace Stevens didn’t quit their day jobs.
The struggle for anybody who actually cares about the quality of what they do is to keep an eye on something other than the immediate reception of the piece – to keep an eye on the object itself, whatever the work is. Or, rather, to develop the confidence that you actually know what makes the object itself beautiful and true. The confidence to know that you are Orson Welles and not Ed Wood, to pick two artists who emphatically did it their own way.
The same is true, on a microscopic scale, for blogging. If all you’re doing is hanging out, you’re probably not writing anything very worth reading. If all you’re doing is chasing click bait, or following the news cycle, you’re probably not writing anything very worth reading. And that’s the fundamental question: do you want to write anything worth reading?
I’m quite sure Will Wilkinson does. Why else would he be pursuing an MFA in writing? Surely not because he wants to teach.
The heart of Piketty’s argument sounds like a simple one:
- If the real return to capital is higher than the real growth rate, then the share of national income that accrues to capital will increase over time.
- Since capital is concentrated, this also means a steadily increasing concentration of wealth.
- For most of human history, real growth was low relative to the return to capital, and therefore there was a steadily increasing concentration of wealth.
- The period from World War I through the 1970s was an exception to this trend, as capital experienced a series of extraordinary shocks (World Wars I and II, the Great Depression, the end of colonialism and the expropriation of capital that followed, etc.) that resulted in massive redistribution of wealth.
- The developed world is now in the process of returning to the historical norm of low real growth, with the result that capital’s share of income continues to rise, which, in turn, will result in an ever-widening gap between the wealthy and the rest of the population.
- This is not a market failure; indeed, the more efficient the market is, the more rapidly this concentration will proceed.
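The mechanics of the first two bullets can be checked with a few lines of Python. The parameters below are hypothetical round numbers of my own, not Piketty’s estimates; the point is only that when r exceeds g and part of the return is reinvested, capital’s share of income ratchets upward:

```python
# Toy model: r = return on capital, g = real growth rate of national income,
# save = fraction of capital income reinvested, wealth = initial ratio of
# capital to national income. Capital's share of income is r * wealth.
# All parameter values are illustrative assumptions, not estimates.

def capital_share_path(r=0.05, g=0.015, save=0.5, wealth=3.0, years=100):
    """Return capital's income share year by year."""
    shares = []
    for _ in range(years):
        shares.append(r * wealth)
        # Capital compounds via reinvested returns while income grows at g,
        # so the capital/income ratio grows at roughly (save * r - g) per year.
        wealth = wealth * (1 + save * r) / (1 + g)
    return shares

path = capital_share_path()
print(round(path[0], 3), "->", round(path[-1], 3))  # the share rises steadily
```

With these numbers the share more than doubles over a century; shrink r toward g (or the reinvestment rate toward zero) and the drift disappears, which is why the argument turns entirely on the gap between r and g.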
I’m somewhat puzzled why this process is attributed to “capitalism” since, if you think about a pre-capitalist economy, it matches that description pretty well. When capital is overwhelmingly tied up in agricultural land, and the land is largely held by a small group of people willing to deploy violence to maintain their title, then returns to capital will certainly be higher than the negligible economic growth rate. The serfs will remain as poor as ever while the landowner retains virtually 100% of the surplus of their labor (though a percentage will accrue to artisans, soldiers, and others hired by the landowner to provide a value-added service which gives them some negotiating leverage). And, in feudal times, that’s pretty much what happened, with possible evolutionary consequences as the landed classes had a higher birth rate than the un-landed.
Or take a look at the description of the political economy of the Roman Republic in the early books of Livy. It’s an endless cycle in which the Senators, owning most of the land, reduce the plebes to debt peonage, then the plebes begin to revolt, there’s some debt reform, and the cycle begins again – until the Romans got the brilliant idea of conquering the rest of Italy and letting the whole population, plebes and Senators, live off the surplus generated by the conquered peoples.
Is there any reason to think we’re headed back to that kind of economy of scarcity? Here are some preliminary thoughts I have:
- The components of real growth are growth in productivity (output per worker) and growth in population. Growth in population in the developed world is low to negative. Productivity growth, meanwhile, has taken a different form than it did during the industrial revolution. A big component of productivity growth these days involves outsourcing functions to lower-wage countries. This makes the remaining employees in the developed country more productive, but it’s not actually comparable to the application of capital so that one laborer can do more in a given hour.
- However, that same process is part of what is driving a dramatic increase in productivity in countries like China. Is inequality actually even increasing on a global scale? I’m doubtful. China and India are very large, and are growing more wealthy at a rapid rate. Those societies are becoming more unequal – but because they started off so poor, I would expect the Gini coefficient of the world as a whole to be going down. As China’s wealth burgeons even as its population growth stalls and begins to decline, and as India follows suit several decades behind, it will be interesting to see whether this dynamic in the developed world changes. Predicting a reversion to the pre-industrial mean seems like a pretty bold move when so many large variables are still in flux.
- The other big wild card is Africa, where population growth continues to be very high and where productivity growth has only just begun to take off. By the end of the century, according to the U.N.’s medium population projection, nearly 40% of the world’s population will be African. The productivity growth of the African population is the main unknown variable that will determine the global Gini coefficient at century’s end.
- We’re still actually in the early stages of the information revolution, and so it’s too soon to say whether the kinds of broad-based, huge increases in labor productivity we associate with the industrial revolution will be replicated with the information revolution. I don’t think anyone knows the answer to that question.
- Notwithstanding what happens to “true” productivity growth in developed societies, low population growth has an effect on the value of assets that depreciate rapidly. When the population is growing, it makes sense to spend money now on physical plant to serve growing demand, and on physical infrastructure to house and move people, even if you expect that plant to deteriorate quickly or that infrastructure to need to be replaced. When the population is stagnant or shrinking, it doesn’t make sense to invest in rapidly-depreciating assets. Instead, it makes more sense to invest in assets that will retain their value or even appreciate. Cathedrals rather than tract houses, say.
- Piketty proposes a global tax on wealth to restrain the growth of inequality. One objection to such a tax is that the incentives for a given state to cheat are simply too large – any state that had a lower tax on wealth than the cartel would attract enormous inflows of capital. But there’s another objection: Piketty implicitly assumes that such a tax would be used to restrain the growth of inequality within the developed world. But why would the developing world go along with such a scheme? It’s one thing if Switzerland or the UAE cheats. It’s quite another thing if India does. North-South dynamics should completely overwhelm the internal dynamics within the developed world, as well as the problems of coordinating between developed countries.
- On the other hand, if you wanted to impose a wealth tax within the developed world, the way to do it would be to eliminate physical cash and impose a negative overnight interest rate on savings. This is something we’re going to have to be able to do anyway before too long in societies with a negative population growth rate, because those societies will periodically experience a negative rate of economic growth. But it will also be necessary to prevent capital from capturing an outsized share of national income during such periodic recessions. I suspect that such a system would be much more effective at achieving Piketty’s objectives than a tax, because of the limited liquidity of “cheater” currencies.
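To make the mechanism concrete, here's a minimal sketch of how a negative overnight rate on savings functions as a de facto wealth tax. The -2% annualized rate is a hypothetical figure chosen purely for illustration, not one proposed in the text:

```python
# Sketch: a negative overnight rate, compounded daily over a year,
# behaves like an annual levy on liquid savings.
# The -2% rate below is a hypothetical number for illustration only.

def effective_annual_levy(overnight_rate_annualized: float, days: int = 365) -> float:
    """Fraction of a deposit surrendered over a year of daily settlements."""
    daily = overnight_rate_annualized / days
    balance = 1.0
    for _ in range(days):
        balance *= 1 + daily  # each day the balance shrinks slightly
    return 1 - balance

levy = effective_annual_levy(-0.02)
# A -2% annualized overnight rate costs a depositor just under 2% of
# holdings per year -- functionally a ~2% wealth tax on cash savings.
```

Unlike a statutory wealth tax, the levy here applies automatically to anything held in the banking system, which is why eliminating physical cash is a precondition: otherwise savers simply withdraw paper currency to escape the negative rate.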
Rest assured, I expect to return to this topic again.
Damon Linker does a fine job tearing into the absurdity of Jamie Dimon’s (of JP Morgan Chase) and Henrique De Castro’s (of Yahoo) stratospheric compensation in the wake of lackluster to poor performance of the public corporations in their respective charge. We’ve heard that news before, but so long as it doesn’t change, it’s still news.
But in passing, he makes a point that I think is worth another look. Apropos of why the 1% continues to get richer at a faster rate than other segments of the population, he says:
Part of it is undoubtedly a result of the greater opportunities for wealth generation enjoyed by rich people everywhere. Turning $1 million into $10 million is usually easier than acquiring the $1 million in the first place — and, all things being equal, turning $10 million into $100 million is even easier.
Why, though, should that be the rule? Why should returns to wealth “accelerate” in this fashion? It’s not a law of nature by any means.
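One way to see why it isn't a law of nature: compound growth at a fixed rate is scale-invariant. A quick sketch (the 7% annual return is a hypothetical rate, chosen only for illustration) shows that a tenfold increase takes the same number of years whatever the starting sum, so "accelerating" returns require the rate itself to rise with the size of the fortune:

```python
# Sketch: at a constant rate of return, the time to multiply a fortune
# tenfold is independent of its starting size. The 7% rate is hypothetical.

import math

def years_to_multiply(rate: float, factor: float) -> float:
    """Years for a sum to grow by `factor` at a constant annual rate."""
    return math.log(factor) / math.log(1 + rate)

small = years_to_multiply(0.07, 10)   # $1M -> $10M
large = years_to_multiply(0.07, 10)   # $10M -> $100M
# Both take about 34 years; the starting balance drops out entirely.
```

So if the second tenfold multiplication really is easier than the first, something other than passive compounding must be at work.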
With any enterprise, greater scale brings greater efficiencies in some areas, and worse efficiencies in others. Greater scale gives you greater bargaining leverage in contract negotiations; standardization can reduce the overhead associated with all sorts of decision making; etc. But, on the other hand, greater scale makes the process of moving information up the chain of command more difficult and expensive; it fosters vested interests within an organization that conflict with one another and are not aligned with the interests of the organization as a whole; and standardization can substantially reduce flexibility. At the very largest scales, it becomes impossible to grow because the market becomes saturated.
What all of the above should mean is that larger fortunes/businesses are more readily preserved, either growing or decaying slowly, while smaller ones have a greater chance of both growing rapidly and evaporating completely. It should not, in general, be the case that larger fortunes or businesses grow more easily or more rapidly than smaller ones.
Moreover, right now we are purported to be in the middle of a long global savings glut (though that concept is, of course, disputed), where there is too much savings chasing too few legitimate investment opportunities. This is one explanation for low long-term rates and persistently recurrent asset bubbles. But if returns to capital are low in general, how does it become easier to grow a large fortune than a small one? Shouldn’t low returns to capital mean that big pools of money stagnate? Isn’t this precisely what Bill Gross was complaining about?
In an overall low-return environment, returns should scale inversely with the size of the investment opportunity. There should be no liquid asset classes that present attractive returns, and any large opportunities that are less-liquid but could attract large pools of capital should also show degraded returns because of the high degree of competition to invest. Whereas small, below-the-radar opportunities should still exist simply because they are too small to be worth the time of a large pool of capital to investigate. An environment of high real interest rates should be the one where capital holds the whip hand.
If it is indeed easier to make $100 million out of $10 million than to make $10 million out of $1 million, that suggests a process of cartelization in the investment world. In other words, the distinction between self-dealing in contracts for wages versus greater “opportunities” for the very wealthy to achieve high returns to capital may be specious. Both situations may involve a substantial element of self-dealing.
Alternatively or additionally, it’s yet more evidence that real interest rates are actually quite high.
Marriage patterns weren’t random in 1960 either, and the past popularity of “Cinderella marriages” is more myth than reality. In fact . . . assortative mating has actually increased only modestly since 1960. . . .
[R]ising income inequality isn’t really due to a rise in assortative mating per se. It’s mostly due to the simple fact that more women work outside the home today. After all, who a man marries doesn’t affect his household income much if his wife doesn’t have an outside job. But when women with college degrees all start working, it causes a big increase in upper class household incomes regardless of whether assortative mating has increased.
All true! But there have been some important changes from 1960 to 2005, per the data he presents. Assuming I’m reading the statistics right, there’s been a real change in the preferences of highly-educated women:
- the percentage of women with more than a college degree who married a high-school dropout dropped by over 70%;
- the percentage of women with a college degree who married a high-school dropout dropped by over 40%;
- the percentage of women with more than a college degree who married a man with no more than a high school degree dropped by over 35%.
But the same is not true of the preferences of highly-educated men, which, in all three cases, are essentially unchanged from 1960 to 2005. Roughly the same percentage of highly-educated men in 2005 were willing to marry women with little education as was the case in 1960.
It’s not that men have remained more receptive to marrying below them in terms of education. Rather, women’s preferences have come to match men’s preferences over the 45 years in question. In 2005, the percentage of highly-educated women willing to marry men with a low education was essentially identical to the percentage of highly-educated men willing to marry women with a low education. In 1960, women were much more willing than men to “marry down” educationally-speaking.
That’s interesting, isn’t it – particularly given the fact that the percentage of women who complete college has soared since 1960. In 1960, men made up more than 60% of college degree-holders, versus under 40% women. By 2005, those percentages were nearly reversed.
Of course, it probably matters a whole lot more that the overall percentage of the population with a college degree has grown very substantially over the period in question, and that the overall percentage of the population that is married has dropped substantially over the period in question.
Regardless, I suspect it’s time to cue Hanna Rosin.
Rod Dreher muses about the decline of religious culture and its implications for “culturally” religious art:
From the outside, my guess is that culturally Catholic writers are more likely to be reacting against something. Their imaginations were formed by the culture and rituals of Catholicism, even if they’ve rejected the religion. I am skeptical, though, about whether there is anything identifiably or meaningfully Catholic about any culturally Catholic writer whose imagination was formed after the postconciliar dissolution of that strong and distinct American Catholic culture. I could be wrong about that; there is certainly something distinctly Jewish about culturally (but not religiously) Jewish writers. Then again, Jews are a minority in America, whereas Catholics are members of the largest church in the country — though an increasingly assimilated one.
A few thoughts.
First of all, what about Catholic writers from ethnic minority groups? People like Oscar Hijuelos, or Richard Rodriguez – these are Catholic writers, right? And they are certainly culturally distinct. And Catholicism is part and parcel of that cultural distinction – but not the whole of it.
Of course, Hijuelos and Rodriguez are interested in religious and spiritual questions in a fundamental way – they may just be names to add to the list of properly Catholic writers. But take someone like Junot Diaz. The culture he comes from is emphatically a Catholic culture. And I wouldn’t say that he’s reacting against that as a central concern – not in the way that, say, James Joyce was thoroughly formed by but reacted strongly against Irish Catholicism. But neither is his work consciously coming from anything like an explicitly Catholic perspective (indeed, I suspect he would consciously reject such a perspective, though obviously I don’t know that).
Mentioning Diaz brings me to a second point: many people have absorbed some of their worldview through other writers without themselves being strongly committed religiously-speaking (or even, necessarily, knowledgeable). In Diaz’s case, Tolkien clearly means a huge amount to him, and not just as a matter of nostalgia for his childhood. Dreher would, I’m sure, agree that Tolkien was a deeply Catholic writer. How do you “score” that kind of influence on a writer? What would you say about an avowedly non-religious writer who was plainly influenced by Dostoevsky?
To pick another example, film noir is a distinctly Catholic-inflected genre of film. (Most people would describe the world of noir as godless, but I would argue that the god that is absent from the world of noir is the Roman Catholic god, which is why I say it’s a Catholic-inflected genre.) So how do you “score” a modern director or screenwriter, who may or may not have any authentic religious concerns of his own, who creates a film that is, if anything, hyper-conscious of the pulp-Catholic substrate of the genre?
Now, about Jewish writers. Judaism is simply less theology-centric than Catholicism, and as a consequence you can be a religiously observant Jew who writes books about religiously observant Jews and your fiction may still be Jewish primarily in the sociological sense. Take, as an example, Kaaterskill Falls, by Allegra Goodman. This is a very good novel to read if you want to get a feel for the dynamics of an ultra-Orthodox Jewish community. It’s also a good novel qua novel. But it isn’t god-haunted in the way that, say, Graham Greene’s or Flannery O’Connor’s work is. Or, for that matter, Isaac Bashevis Singer’s – even though Singer was less observant than Goodman is. I’d say similar things about Nathan Englander: that he’s interested in Jews and Judaism, but if he’s haunted by anything, it isn’t by God. History, maybe.
An interesting phenomenon to end on is Jewish writers who, in search of spiritual inspiration, wander into foreign fields precisely because that’s the best way to find their way home. Tony Kushner’s play, Angels in America, for example, is fascinated by Mormonism because of its link to archaic Judaism, which (though politically very problematic) appears more spiritually nourishing than the Judaism that actually exists in the contemporary world.
And, per my comment about noir and Catholicism above, I’d encourage a more enterprising soul than myself to do a study of noir conventions and Christology in Michael Chabon’s Yiddish Policemen’s Union. This is a book that is Jewish from (as they say) the soles of its feet to the top of its head. But it’s also the story of a detective, wandering in a hopelessly fallen world, who discovers that the messiah just may have come, but his own people killed him (or, anyway, drove him to suicide) because he would not bring the kingdom the way they expected. That story sound familiar to anyone else?
If I had to pick a recent novel that best expresses contemporary American Jewish spirituality, it would be Nicole Krauss’s book, The History of Love, a fable about the possibility that nothing is ever irrevocably lost, and that, even if by an extraordinarily circuitous route, what is bashert ultimately always comes to pass. Most likely by means of a book. I didn’t love it, because I don’t experience the universe as behaving that way, and I get irritated by people who do (perhaps precisely because I can’t). But I can’t tell you how many of my friends loved it.
Posting has been light (actually nonexistent) for the past week because I’ve been in Utah for the Sundance Film Festival, primarily to see the premiere of the film, “Infinitely Polar Bear,” on which I was an executive producer. So this is mostly a combination apology post and self-promotion post. But I also thought I’d say a word or two about the whole experience.
As a budding filmmaker myself, my main take-home from Sundance was: “yikes!” I saw about a dozen films, and the weakest of them was thoroughly intimidating if I imagined trying to make it myself. If I put on my critic hat, it was pretty easy for me to say, “I would have shot that scene differently;” “that character’s pretty severely underwritten;” “isn’t the film kind of missing its third act?” etc. But if I put on my teeny-tiny filmmaker’s hat, I was just amazed by the amount of talent out there. And most of these films won’t even get much (if any) distribution!
All of which means that I came away strongly agreeing with most of Tim Wu’s criticism of Manohla Dargis’s piece about how there are too many indie films being made, and too many released into the same small market. There is way, way too much talent out there to worry that too many films are being made, and I can’t imagine a better way to develop that talent than to put it to work making films. But Wu doesn’t adequately address an important component of Dargis’s argument, which is that the best small films get lost in the shuffle because everything with small commercial expectations is distributed in roughly the same way.
I’m not convinced that’s 100% true, but to the extent that it is it suggests that independent filmmakers could use better tools for reaching beyond the circle of aficionados to the larger universe of potential fans. As films get cheaper to make, and as the theatrical audience continues to shrink, independent film looks more and more like independent music. That has implications for distribution strategies.
In any event, highlights from the festival for me included:
- “Frank,” a cult comedy about a low-on-talent but high-on-hopes keyboardist who, on a fluke, is asked to join a bizarre noise-band fronted by a fellow who wears a giant papier-mâché head at all times (played by Michael Fassbender, which is a pretty funny joke in and of itself, casting Fassbender to play a role where you never see his face);
- “Blind,” a Norwegian film about a woman who has recently gone blind, and which does a fascinating job of making the interior life of said woman cinematic; much of what we see is her visualization of what she deduces – or imagines, or outright fantasizes – might be going on around her;
- “A Girl Walks Home Alone At Night,” a delightfully atmospheric Jim Jarmusch- or David Lynch-esque vampire flick set in the imaginary Iranian ghost town of Bad City (it’s an American film, shot in California, but the cast is all of Iranian extraction and the dialogue is all in Farsi);
- “Web Junkie,” an Israeli documentary about internet addiction in China, filmed in a rehab center where teenagers are sent by their parents (generally the kids have to be tricked or kidnapped) for a tough-love cold-turkey cure – just amazing for the level of access the filmmakers got to the kids, their parents, and the facility generally.
I don’t know that any of these films will get distribution. Frankly, I don’t know that they should! But they all stuck with me, and they’ll no doubt show up on the internet at some point. So now you know to look for them.