Raise your hand if you’re a conservative who has cited Edmund Burke without actually having read him closely.
Really—you’re all scholars of the Irish-born MP and oft-celebrated “father of modern conservatism”?
Okay, what did Burke mean by the phrase “the little platoon”?
Yuval Levin explains in his wonderful new book The Great Debate: Edmund Burke, Thomas Paine, and the Birth of Right and Left:
The division of citizens into distinct groups and classes, Burke writes, “composes a strong barrier against the excesses of despotism,” by establishing habits and obligations of restraint in ruler and ruled alike grounded in the relations of groups or classes in society. To remove these traditional restraints, which hold in check both the individual and the state, would mean empowering only the state to restrain the individual, and in turn restraining the state with only principles and rules, or parchment barriers. Neither, Burke thought, could be stronger or more effective than the restraints of habit and custom that grow out of group identity and loyalty. Burke’s famous reference to the little platoon—“To be attached to the subdivision, to love the little platoon we belong to in society, is the first principle (the germ as it were) of public affections”—is often cited as an example of a case for local government or allegiance to place, but in its context in the Reflections, the passage is very clearly a reference to social class.
Still feeling Burkean? Ready to go the pipe-and-slippers, Brideshead cultist route and declare yourself a loyal subject of the queen?
Levin reminds us that the context in which Burke wrote those words was a long-running intellectual dispute with a European-born radical, a man who was cheering on the secular revolution in France—and, oh, by the way, also one of the forefathers of our own revolution, favored by none other than Ronald Reagan himself—the Common Sense and The Crisis pamphleteer Thomas Paine.
That the rivalry between Burke and Paine cuts both ways through our hearts—this is precisely the kind of dialectic, if you will, that Levin hopes to provoke in the reader.
Make no mistake, though: Levin is a Burkean. In fact, he is the most eloquent exponent of Burkean conservatism, properly understood, since George Will in his 1983 Statecraft as Soulcraft.
While scholarly and measured in tone, The Great Debate is a readable intellectual history that fairly crackles with contemporary relevance.
Indeed, The Great Debate is the must-read book of the year for conservatives—especially those conservatives who are profoundly and genuinely baffled by the declining popularity of the GOP as a national party. How can America, these conservatives ask, the land of the rugged individual, the conquerors of the frontier, choose statism and collectivism over freedom and liberty?!
Levin’s book provides the answer: You’re looking at the Democratic Party all wrong. It’s just as individualist as you are—maybe more so.
And that is the problem!
If you are literate today, it does not mean you can write — not even close to it in many cases. But if you were literate in 1863, even if you could not spell, you often could write descriptively and meaningfully. In the century and a half since, we have evolved from word to image creatures, devaluing the power of the written word and turning ourselves into a species of short gazers, focused on the emotions of the moment rather than the contemplative thoughts about consequences and meaning of our actions. Many everyday writers in the mid-19th century were far more contemplative, far more likely to contextualize the long-term meaning of their actions. They meticulously observed and carefully described because, although photography was the hot new medium during the Civil War, words remained the dominant way of communicating thought, memory, aspiration, hope.
Raasch’s theory is not a new one. Back in the 1980s, when the Internet was still in its primordial days and television was king, Neil Postman wrote Amusing Ourselves to Death. His book cautioned against the developing “Age of Show Business,” fed by television’s sensory, visual medium.
Postman identified three prominent “ages” in the history of information: first, ancient oral cultures preserved information through spoken records and stories. When the printing press and writing became more prominent, oral cultures dissolved into the “Age of Exposition”: a time when written records were perceived as holding the greatest truth. Then, as photography and videography developed, media began to change again—for the worse. Postman believed we would lose more than writing ability in the wake of the entertainment era: he warned of depleted mental and emotional capacities. He believed we would become as obsessed with pleasure as the humans in Aldous Huxley’s Brave New World. Postman’s descriptions of distracted, sensationalistic consumers relate well to Raasch’s “species of short gazers.”
We can only imagine what Postman would have said about the Internet; perhaps Raasch gives us a taste when he describes it as “the Great Din”: “Today, throwing barbs and brickbats into the Great Din of the Internet has become as second nature as breathing … The Great Din requires no forethought, no real calculation of purpose or result, no contemplative brake, no need to seek angles or views beyond those that reaffirm or reassure what we think right now.”
Is this truly the future of media? Will we lose any true, deep, thoughtful communication in its havoc of pixels and pictures?
One interesting counter-opinion comes from former Daily Beast editor Tina Brown. Having recently left the world of journalism for event production, Brown has told reporters that she no longer reads magazines herself—in fact, she thinks “the whole writing fad is so twentieth century” (in the words of New York Magazine). But rather than warning of impending havoc and din, Brown calls people back to oral communication: “I think you can have more satisfaction from live conversations,” she said, adding that we are “going back to oral culture where the written word will be less relevant.”
If we experience the “death of writing,” as Raasch puts it, could we come full circle and return to the age of oral communication? Will grandfathers sit down with their grandchildren and tell them stories, like our ancestors so long ago? One can only hope; but if such an experience were truly to flower from “The Great Din,” it would be rather surprising.
Our modern world is obsessed with specialization. Yet this specialization is often unhealthy, both culturally and personally. Aeon Magazine contributor Robert Twigger suggests we need a new area of study—“polymathics”—to counter this monopathic obsession:
Polymathics might focus on rapid methods of learning that allow you to master multiple fields. It might also work to develop transferable learning methods. A large part of it would naturally be concerned with creativity — crossing unrelated things to invent something new. But polymathics would not just be another name for innovation. It would, I believe, help build better judgment in all areas.
The study methods Twigger prescribes have actually existed for quite some time, although in a slightly altered form, as the classical liberal arts. First developed in ancient Greece, the liberal arts encompassed those skills necessary for civic and personal freedom. The Greeks even emphasized the importance of physical athleticism, as Twigger suggests in his article. The “aretē” they sought fostered excellence of mind, soul, and body.
In modern academia, the liberal arts usually include a core curriculum that enables students to “master multiple fields.” The liberal arts classically emphasized a progression from rudimentary learning (the “grammar”) to practical application in communication and circumstance (similar to Twigger’s idea of developing “transferable learning methods,” “crossing unrelated things to invent something new,” and ultimately building “better judgment”).
Twigger’s emphasis on learning many fields is good. Specialization, while useful in many job settings, can obscure the interconnected and complementary nature of learning. Twigger reminds us that the humanities do advance educational growth in multiple subjects:
An intriguing study funded by the Dana foundation and summarised by Dr. Michael Gazzaniga of the University of California, Santa Barbara, suggests that studying the performing arts—dance, music and acting—actually improves one’s ability to learn anything else. Collating several studies, the researchers found that performing arts generated much higher levels of motivation than other subjects.
But note that Twigger is arguing from a claim of practicality: “polymathics will make you smarter, more creative, more humorous,” etc. This is a common attitude toward the humanities: if learning isn’t practical, it shouldn’t be practiced. Twigger points out the quantifiable practicalities of the humanities in his article, but there is no mention of the purpose the Greeks sought: their vision of aretē and freedom.
In a Wednesday New Yorker article, Lee Siegel argues that recent studies on literature—the ones claiming literature makes you more empathetic—soil the beauty of reading literature for its own sake. “Fiction’s lack of practical usefulness is what gives it its special freedom,” Siegel writes. “When Auden wrote that ‘poetry makes nothing happen,’ he wasn’t complaining; he was exulting. Fiction might make people more empathetic—though I’m willing to bet that the people who respond most intensely to fiction possess a higher degree of empathy to begin with. But what it does best is to do nothing particular or specialized or easily formulable at all.”
Perhaps the same is true for Twigger’s “polymathics,” the humanities, and the liberal arts. Despite their quantifiable benefits, one shouldn’t denigrate the “special freedom” of learning for its own sake.
No one would deny that American philanthropy is grounded in good motives. But as The New Atlantis contributor William Schambra points out in a detailed article, philanthropy can become as poisoned as any other human venture:
America’s first general-purpose philanthropic foundations — Russell Sage (founded 1907), Carnegie (1911), and Rockefeller (1913) — backed eugenics precisely because they considered themselves to be progressive. After all, eugenics had begun to point the way to a bold, hopeful human future through the application of the rapidly advancing natural sciences and the newly forming social sciences to human problems. By investing in the progress and application of these fields, foundations boasted that they could delve down to the very roots of social problems, rather than merely treating their symptoms … According to the perspective of philanthropic eugenics, the old practice of charity — that is, simply alleviating human suffering — was not only inefficient and unenlightened; it was downright harmful and immoral. It tended to interfere with the salutary operations of the biological laws of nature, which would weed out the unfit, if only charity, reflecting the antiquated notion of the God-given dignity of each individual, wouldn’t make such a fuss about attending to the “least of these.” Birth-control activist Margaret Sanger, a Rockefeller grantee, included a chapter called “The Cruelty of Charity” in her 1922 book The Pivot of Civilization, arguing that America’s charitable institutions are the “surest signs that our civilization has bred, is breeding and is perpetuating constantly increasing numbers of defectives, delinquents and dependents.” Organizations that treat symptoms permit and even encourage social ills instead of curing them.
Schambra traces the history of “philanthropic” eugenics through the years, along with its roots and causes. “Philanthropy’s involvement in eugenics should forever remind us that, for all our excellent intentions and formidable powers, we are unable to eradicate our flaws once and for all by some grand, scientific intervention,” he writes.
The article highlights some inherent flaws in philanthropy that conservatives, and isolationists specifically, are very sensitive to: namely, the extent to which our “compassion” is motivated by a desire to control, fix, and regulate things not our business. It’s the “nanny state” paradigm, manifested in individuals like Mayor Bloomberg. It’s typically an accusation leveled at “compassionate conservatives.”
One commenter on a recent human trafficking article said he feared “a strong sense of self righteous, neo colonialist domination inherent in this kind of ‘cause’.”
However, in our fear of becoming meddlesome welfare statists, conservatives run the danger of becoming heartless. Paul Krugman accused Republicans of such sentiments in a Thursday column—he writes, “Republican hostility toward the poor and unfortunate has now reached such a fever pitch that the party doesn’t really stand for anything else — and only willfully blind observers can fail to see that reality.” Patrick Deneen ably defined the problem with our attitudes in a recent TAC blog post:
The motivation of charity is deeply suspect by both the Right and the Left. The Right—the heirs of the early modern liberal tradition—regards the only legitimate motivation to be self-interest and the profit motive. They favor a profit-based health-care system (one explored to devastating effect in this recent article on health care in the New Yorker), and a utilitarian university (the “polytechnic utiliversity” ably explored by Reinhard Huetter in the most recent issue of First Things). The Left—while seemingly friends of charity and “social justice”—are deeply suspicious of motivations based on personal choice and religious belief. They desire rather the simulacrum of charity in the form of enforced standardization, homogeneity, and equality, based on the motivation of abstract and depersonalized national devotions and personal fear of government punishment.
I do not think most conservatives (unless they really are Randian to the core) want to forsake true “compassionate conservatism”—just its current manifestation in political circles. How, then, does one exercise philanthropic sentiment properly?
The key, according to Schambra, is personal caritas (love). He writes,
loving personal concern is at the heart of charity traditionally understood. It can only be practiced immediately and concretely, within the small, face-to-face communities that Tocqueville understood to be essential to American self-government. There, the seemingly minor and parochial concerns of everyday citizens are taken seriously and treated with respect, rather than being dismissed as insufficiently self-conscious emanations of deeper problems that only the philanthropic experts can grasp.
One could say this is the localism movement’s compassionate conservatism. It is based in present needs, rather than remote philanthropic endeavors. It seeks to love one’s neighbor, and not to “fix” him.
Does this mean true conservatives shouldn’t get involved in global crises? It might depend on the person and situation. The plight of Coptic Christians in Egypt is of immense and immediate concern to me, because I consider them my brothers and sisters in Christ. Do I believe all secular Americans should intervene on their behalf? No—it is not their responsibility as it is mine. But Schambra is right: perhaps we should first focus where we are planted, and then slowly, thoughtfully spread from there.
It is sad that the Republican Party, a political group filled with religious folk, often shows callousness toward the impoverished—either by refusing to help them, or by offering only conditional charity. The Christian faith is filled with instructions to unconditionally love and help the unfortunate. Consider this passage from Isaiah 58, in which God rebukes the nation of Israel for being “religious” by fasting, but neglecting the poor:
Is this not the fast that I have chosen: to loose the bonds of wickedness, to undo the heavy burdens, to let the oppressed go free, and that you break every yoke? Is it not to share your bread with the hungry, and that you bring to your house the poor who are cast out; when you see the naked, that you cover him, and not hide yourself from your own flesh? Then your light shall break forth like the morning, your healing shall spring forth speedily, and your righteousness shall go before you; the glory of the Lord shall be your rear guard. Then you shall call, and the Lord will answer; you shall cry, and He will say, ‘Here I am.’ … If you extend your soul to the hungry and satisfy the afflicted soul, then your light shall dawn in the darkness, and your darkness shall be as the noonday.
Notice that it doesn’t say, “Only cover the naked or feed the hungry if they aren’t being lazy.” God doesn’t say, “Undo the heavy burdens—unless they really need to work harder.” Neither does he say, “Make sure they show a change of heart or get converted before you help them.” This isn’t a gospel of cutting food stamps. Perhaps it is a gospel in which food stamps never should have been necessary in the first place. If conservatives—especially Christian conservatives—want to say “government should mind its own business,” then perhaps it is time they start minding theirs.
A friend once described conservatives as people who agreed about one important thing—that at some point in the past, something went terribly wrong. After that, conservatives splinter into untold numbers of camps, since they disagree ferociously about the date of the catastrophe.
Most conservatives today agree that America has taken a terrible turn—that something went wrong at some point in the past. Most believe that America was well-founded by the Framers of the Constitution, but that something bad happened that corrupted the sound basis of the Founding. A few—generally unpopular—believe that Lincoln is to blame, that he introduced the beginnings of the centralized State and the imperial Presidency. Many point to the catastrophe of the 1960s as the main source of current woes (a striking number of these constitute the neoconservative faction). But, at least in the circles in which I travel, an increasing number have settled on the Progressive era at the turn of the 20th century as the source of today’s troubles, and see President Obama as the direct inheritor of this philosophical and political movement born in the late 19th and early 20th centuries.
The dominant narrative about the rise of Progressivism, both in the halls of academe and its distillation in the popular media expressed by figures such as Glenn Beck, is that Progressivism was a virus that was incubated in a foreign (particularly German) laboratory and was transported to America by intellectual elites, often educated at German universities and influenced by thinkers such as Kant and Hegel (such intellectuals include the likes of Herbert Croly, Woodrow Wilson, and John Dewey). These Progressives despised the classical liberal philosophy of the Founding, and sought either an explicit rejection of the Constitution or an effective change by re-defining it as a “living” document.
This is a plausible case, and in fact major progressive figures often turned to German and other foreign sources in developing their intellectual critique of the classical liberal philosophy of the Founding. Thus, by attributing the rise of Progressivism to a foreign contagion, one can comfortably maintain that the Founding was good and true and was corrupted by a fifth column.
However, what this argument overlooks is that the greatest analysis of American democracy—Tocqueville’s Democracy in America, published in two volumes in 1835 and 1840, a full half-century before the flowering of Progressivism—perceived the seeds of Progressivism’s major tenets already embedded in the basic features and attributes of liberal democracy as established at the Founding. Of particular note, while the major figures of Progressivism would directly attack classical liberalism, Tocqueville discerned that Progressivism arose not in spite of the classical liberal tradition, but because of its main emphasis upon, and cultivation of, individualism.
Individualism is a distinctive phenomenon arising in liberal democracy, notes Tocqueville. The idea of the individual is at least as old as Christianity, but individualism is a new experience of self that arises with the passing of the experience of embeddedness in a familial, social, religious, generational, and cultural setting that is largely fixed and unchanging—the basic features of an aristocratic society. The rise of liberal democracy, by contrast, is premised upon a view of the individual deriving from the social contract tradition, which conceives of human beings in their natural state as defined, above all, by the total absence of such constitutive bonds, inherited roles and given identities (a philosophy that Bertrand de Jouvenel said was developed by “childless men who forgot their childhood”). Instead, as Tocqueville describes, in a democracy, “the chain” that once bound a peasant all the way to a king is “shattered,” throwing each individual in their freedom and equality finally “into the solitude of their own hearts.”
Yesterday morning, Micah Mattix asked whether it is “impossible for the humanities to thrive in a secular society,” one dominated by philosophical materialism. In one sense, yes; yet there is nevertheless hope.
Insofar as our secular society has become infatuated with materialism, denying the possibility of anything (except subatomic particles) which we cannot calculate or mathematically describe, the humanities will cease to matter—we will only care about the knowledge and corollary power to be sapped from the world by scientific and mathematical dominance. At this point, humans themselves will cease to matter, as our humanity is degraded into a chemical and molecular equation, much like the code and circuit boards of a computer.
The Canadian philosopher Charles De Koninck describes the tension between the sciences and the humanities in this way:
The problems of philosophy, when distinguished from those which Bertrand Russell calls scientific, will remain forever in debate. Should the day ever come when Leibniz has his way: when, to settle their problems, philosophers will merely have ‘to take their pens in their hands, to sit down to their desks and to say to each other (with a friend as witness, if they liked), “Let us calculate” ‘, there will be no more problems, for there will be no one to raise them. Meantime, the calculators have their use, while philosophers are forever in need of being debunked, a thing no one knew better than Socrates. Nevertheless, as Aristotle suggested, no one can deny philosophy without at least implying a philosophy of his own, and his own may prove to be a very foolish one.
What did we know of man before we found out that he is a throng of electric charges? And that he is composed of multitudinous cells? And that the circulation of his blood is an exquisite piece of chemistry and mechanics? Is it possible that, having learned all this, we may remain far more ignorant of him than Sophocles, or Shakespeare? Or the people who believe they know what these writers meant?
Biology and chemistry can offer insights into the chemical reactions of anger—why the face flushes red and the heart beats faster—but science cannot offer insight into “the anger of Achilles” as Homer does, nor into the words of rage Shakespeare places in Hotspur’s mouth: “an if the devil come and roar for them,/I will not send them: I will after straight/And tell him so; for I will ease my heart,/Albeit I make a hazard of my head.”
The academy sets aside the humanities for the sciences because of a pre-imposed philosophical approach. To embrace materialism, as Dr. Ronald McArthur has noted in a speech at Thomas Aquinas College, is to embrace “nihilism. That means nobody knows anything. It doesn’t matter whether you affirm something or deny something….Education then turns to the practical…There is hardly any education that is ordered, institutionally, to anything that is sound intellectually.”
In a post for Prospect, Christopher Fear asks why academic political theory is so remote from political practice. He concludes that it’s because political theorists devote themselves to eternal riddles that he dubs “Wonderland questions” rather than today’s problems. Consider justice, perhaps the original topic of political theorizing:
One of the central questions of academic political philosophy, the supposedly universal question “What is justice?” is a Wonderland question. That is why only academics answer it. Its counterpart outside the rabbit-hole is something like “Which of the injustices among us can we no longer tolerate, and what shall we now do to rectify them?” A political thinker must decide whether to take the supposedly academic question, and have his answers ignored by politicians, or to answer the practically pressing question and win an extramural audience.
Fear is right about the choice that political theorists face between philosophical abstraction and making an impact on public affairs. But he doesn’t understand why they usually pick the former. The reason is simple. Academic political theorists ask academic questions because…they’re academics.
In other words, political theorists are members of a closed guild in which professional success depends on analytic ingenuity, methodological refinement, and payment of one’s intellectual debts through copious footnoting. They devote their attention to questions that reward these qualities. Winning an extramural audience for political argument requires different talents, including a lively writing style, an ear for the public discourse, and the ability to make concrete policy suggestions. But few professors have ever won tenure on the basis of those accomplishments.
Another reason academic political theorists avoid the kind of engagement Fear counsels is that they have little experience of practical politics. Most have spent their lives in and around universities, where they’ve learned much about writing a syllabus, giving a lecture, or editing a manuscript—but virtually nothing about governing or convincing lay readers. How does expertise on theories of distributive justice, say, prepare one to make useful suggestions about improving the healthcare system? Better to stick with matters that can be contemplated from the comfort of one’s desk.
In this respect, political theorists are at a considerable disadvantage compared to professors of law or economics. Even when their main work is academic, lawyers and economists have regular chances to practice in the fields they study. Within political science, many scholars of international relations pass through a smoothly revolving door that connects the university with the policy community. Political theorists have few such opportunities.
Fear points out that it wasn’t always this way. Before the 20th century, many great political theorists enjoyed extensive political influence. But Fear forgets the main difference between figures like Machiavelli, Locke, Montesquieu, Madison, Burke, Hume, Mill, or Marx and their modern epigones. The former were not professors. Although all were devoted to philosophical truth as they understood it, they were also men of affairs with long experience of practical politics.
The “brilliant and surreal tragedy of academic political theory,” then, is not that political theorists have been diverted into the wrong questions. It’s that political theory is an uncomfortable fit with the university. Academic political theorists gravitate toward the kind of questions that career scholars are in a position to answer.
Earlier this week I began a series of lectures in one of my classes on the thought of the Anti-Federalists. I began by echoing some of the conclusions of the great compiler and interpreter of the Anti-Federalist writings, Herbert Storing, whose summation of their thought is found in his compact introductory volume, What the Anti-Federalists Were For. I began with the first main conclusion of that book, that in the context of the debate over the Constitution, the Anti-Federalists were the original American conservatives. I then related a series of positions that were held by the Anti-Federalist opponents of the proposed Constitution. To wit:
They insisted on the importance of a small political scale, particularly because a large expanse of diverse citizens makes it difficult to arrive at a shared conception of the common good and an overly large scale makes direct participation in political rule entirely impracticable if not impossible. They believed that laws were and ought to be educative, and insisted upon the centrality of virtue in a citizenry. Among the virtues most prized was frugality, and they opposed an expansive, commercial economy that would draw various parts of the Union into overly close relations, thereby encouraging avarice, and particularly opposed trade with foreign nations, which they believed would lead the nation to compromise its independence for lucre. They were strongly in favor of “diversity,” particularly relatively bounded communities of relatively homogeneous people, whose views could then be represented (that is, whose views could be “re-presented”) at the national scale in very numerous (and presumably boisterous) assemblies. They believed that laws were only likely to be followed when more or less directly assented to by the citizenry, and feared that as distance between legislators and the citizenry increased, laws would require increased force of arms to achieve compliance. For that reason, along with their fears of the attractions of international commerce and of imperial expansion, they strongly opposed the creation of a standing army and insisted instead upon state-based civilian militias. They demanded inclusion of a Bill of Rights, among which was the Second Amendment, the stress of which was not on individual rights of gun ownership, but collective rights of civilian self-defense born of fear of a standing army and the temptations to “outsource” civic virtue to paid mercenaries.
As I disclosed the positions of the Anti-Federalists, I could see puzzlement growing on the faces of a number of students, until one finally exclaimed—”this doesn’t sound like conservatism at all!” Conservatism, for these 18-to-22-year-olds, has always been associated with George W. Bush: a combination of cowboy, crony capitalism, and foreign adventurism in search of eradicating evil from the world. To hear the views of the Anti-Federalists described as “conservative” was the source of severe cognitive dissonance, a deep confusion about what, exactly, is meant by conservatism.
So I took a step back and discussed several ways by which we might understand what is meant by conservatism—first, as a set of dispositions, then as a response to the perceived threats emanating from a revolutionary (or even merely reformist) left, and then as a set of contested substantive positions. And, I suggested, only by connecting the first and third, and understanding the instability of the second, could one properly arrive at a conclusion such as that of Storing, who would describe the positions of the Anti-Federalists as “conservative.”
First, there is the conservative disposition, one articulated perhaps most brilliantly by Russell Kirk, who described conservatism above all not as a set of policy positions, but as a general view toward the world. That disposition especially finds expression in a “piety toward the wisdom of one’s ancestors,” a respect for the ancestral that only with great caution, hesitancy, and forbearance seeks to introduce or accept change into society. It is supremely wary of the only iron law of politics—the law of unintended consequences (e.g., a few conservatives predicted that the introduction of the direct primary in the early 1900s would lead to increasingly extreme ideological divides and the increased influence of money in politics. In the zeal for reform, no one listened). It also tends toward a pessimistic view of history, more concerned to prevent the introduction of corruption into a decent regime than driven to pursue change out of a belief in progress toward a better future.
New York University freshman Elif Koc writes in the Atlantic that high school taught her to be a good student, but not “a good learner.” After attempts at thoroughness hurt her grades, she began to ask, “How can I do as little as possible and still get an A?” At the end of her article, she expresses the hope that “college is where I can become a good learner.”
Sam Swift’s research, recently published in the journal PLOS ONE, indicates that Koc may be disappointed, particularly if she hopes to attend a top graduate program. Swift found, both in the lab and in real life, that students with the highest grades at grade-inflating institutions had the highest rate of acceptance into MBA programs. In the lab experiment, a test admissions committee compared MBA candidates from institutions with and without grade inflation; the committee was also given data on how candidates ranked against their classmates. The lab results were then compared to an analysis of real-world MBA admissions data. From his findings, Swift draws out this important observation: “It’s really hard for people to look away from that glaring high number or that glaring low number of raw performance.”
Koc hoped to be afforded the opportunity to actually learn in a college environment, an opportunity she felt the college admissions push denied her. But Koc has the problem reversed: high schools have not radically misunderstood today’s learning culture. Rather, they take their cues from the norm in higher education, where high grades in college lead to post-college success. Academically Adrift, a study by sociologists Richard Arum and Josipa Roksa, assesses data compiled between 2005 and 2007 through the Collegiate Learning Assessment, or C.L.A. These researchers found that “American higher education is characterized by limited or no learning for a large proportion of students.” Instead, the university model is oriented to monetary gain. In a culture enthralled with quantity, it is not surprising that this quantification, rather than the substance such quantification was intended to express, would take precedence. It has trickled down from our universities into secondary and elementary educational systems.
Koc’s experience in high school is a common phenomenon: students often seek good grades at the lowest possible cost. After four years of training, it will be all too tempting for her to continue in this mentality. How will she ever get into the right graduate school or be recruited by a good company otherwise? But perhaps Koc will represent an exception to Arum and Roksa’s findings. Perhaps one night, she will stay up until four a.m. discussing Plato’s Dialogues, Einstein, or the Civil War with her friend, not because she has an assignment on it, but because a question has caught her mind, and will not release her until she has answered it. Perhaps some of these questions will begin to consume her for days and weeks on end, until she finally arrives at a solution, or at least a senior thesis topic. By focusing on the questions and material, she will no longer be simply a good student: she will be a good learner as well.
Because most of the traditional pathways to adulthood—marriage, economic independence, stable job—seem out of reach or prove to be reversible, working-class young adults have developed a new definition of maturity. This new pathway relies heavily on therapeutic culture: You become an adult by overcoming the trauma of your past, whether that involved abusive parents, drug addiction, mental illness, or less flamboyant hardships. Young adults who take on this new definition focus on protecting the fragile self, and they reject solidarity and close, committed relationships in favor of individualistic, judgmental competition.
This is the basic thesis of Jennifer M. Silva’s insightful, frustrating new book, Coming Up Short: Working-Class Adulthood in an Age of Uncertainty. “At its core,” Silva writes, “this emerging working-class adult self is characterized by low expectations of work, wariness toward romantic commitment, widespread distrust of social institutions, profound isolation from others, and an overriding focus on their emotions and psychic health.”
Silva interviewed 100 white and black working-class adults, defining “working-class” as native-born Americans, with native-born parents who didn’t have college degrees—Silva herself fits this definition. A lot of what she found rang very true to me from my own conversations with young adults in low-wage jobs, although I suspect her findings don’t generalize quite as much as she thinks: The worldview she describes has definitely influenced some of the women I counsel at the pregnancy center, for example, but they’re also shaped by communal ties, religious history and their own religious fervor, and a culture of childbearing.
Silva finds relatively little of the “redemption through procreation” language which men expressed in the recent Doing the Best I Can: Fatherhood in the Inner City, and none of the catastrophic romanticism which drove premarital relationships for the “red-state” young adults in 2011’s Premarital Sex in America. So she’s only capturing part of the picture of working-class or poor young adulthood.
And there were many times when I felt the heavy hand of the medium upon the planchette, as Silva seemed to overinterpret her interviewees’ words to drag them closer to her preexisting theories. She often quotes an interview and then summarizes it in terms which don’t really capture what I would mean if I said what her interviewee said, or what people I’ve heard say those things meant when they said them. She talks a lot about “social institutions” and how there should be more of them or how they’ve failed, but the only ones she actually names in a positive context are unions, and she relies almost exclusively on government intervention for signs of hope.
Like I said, it’s a frustrating book. But its primary insights are important and true. There is a working-class therapeutic culture, that culture did arise in large part because of the disruption of older institutions which provided stability and a source of identity, and that culture is intensely prone to self-blame and judgmental “crabs in a barrel” sniping at other working-class people who aren’t sufficiently self-helping.