A friend once described conservatives as people who agreed about one important thing—that at some point in the past, something went terribly wrong. After that, conservatives splinter into untold numbers of camps, since they disagree ferociously about the date of the catastrophe.
Most conservatives today agree that America has taken a terrible turn—that something went wrong at some point in the past. Most believe that America was well-founded by the Framers of the Constitution, but that something bad happened that corrupted the sound basis of the Founding. A few—generally unpopular—believe that Lincoln is to blame, that he introduced the beginnings of the centralized State and the imperial Presidency. Many point to the catastrophe of the 1960s as the main source of current woes (a striking number of these constitute the neoconservative faction). But, at least in the circles in which I travel, an increasing number have settled on the Progressive era at the turn of the 20th century as the source of today’s troubles, and see President Obama as the direct inheritor of this philosophical and political movement that was born in the late 19th and early 20th centuries.
The dominant narrative about the rise of Progressivism, both in the halls of academe and in its popular distillation by figures such as Glenn Beck, is that Progressivism was a virus incubated in a foreign (particularly German) laboratory and transported to America by intellectual elites, often educated at German universities and influenced by thinkers such as Kant and Hegel (such intellectuals include the likes of Herbert Croly, Woodrow Wilson, and John Dewey). These Progressives despised the classical liberal philosophy of the Founding, and sought either an explicit rejection of the Constitution or an effective one, by redefining it as a “living” document.
This is a plausible case, and in fact major progressive figures often turned to German and other foreign sources in developing their intellectual critique of the classical liberal philosophy of the Founding. Thus, by attributing the rise of Progressivism to a foreign contagion, it can be comfortably maintained that the Founding was good and true and was corrupted by a fifth column.
However, what this argument overlooks is that the greatest analysis of American democracy—Democracy in America, published in two volumes in 1835 and 1840, a full half-century before the flowering of Progressivism—already perceived the seeds of Progressivism’s major tenets embedded in the basic features and attributes of liberal democracy as established at the Founding. Of particular note, while the major figures of Progressivism would directly attack classical liberalism, Tocqueville discerned that Progressivism arose not in spite of the classical liberal tradition, but because of its main emphasis upon, and cultivation of, individualism.
Individualism is a distinctive phenomenon arising in liberal democracy, notes Tocqueville. The idea of the individual is at least as old as Christianity, but individualism is a new experience of self that arises with the passing of the experience of embeddedness in a familial, social, religious, generational, and cultural setting that is largely fixed and unchanging—the basic features of an aristocratic society. The rise of liberal democracy, by contrast, is premised upon a view of the individual deriving from the social contract tradition, which conceives of human beings in their natural state as defined, above all, by the total absence of such constitutive bonds, inherited roles and given identities (a philosophy that Bertrand de Jouvenel said was developed by “childless men who forgot their childhood”). Instead, as Tocqueville describes, in a democracy, “the chain” that once bound a peasant all the way to a king is “shattered,” throwing each individual in their freedom and equality finally “into the solitude of their own hearts.”
Yesterday morning, Micah Mattix asked whether it is “impossible for the humanities to thrive in a secular society,” one dominated by a philosophical materialism. In one sense, yes; yet there is nevertheless hope.
Insofar as our secular society has become infatuated with materialism, denying the possibility of anything (except subatomic particles) which we cannot calculate or mathematically describe, the humanities will cease to matter—we will only care about the knowledge and corollary power to be sapped from the world by scientific and mathematical dominance. At this point, humans themselves will cease to matter, as our humanity is degraded into a chemical and molecular equation, much like the code and circuit boards of a computer.
The Canadian philosopher Charles DeKoninck describes the tension between the sciences and humanities in this way:
The problems of philosophy, when distinguished from those which Bertrand Russell calls scientific, will remain forever in debate. Should the day ever come when Leibniz has his way: when, to settle their problems, philosophers will merely have ‘to take their pens in their hands, to sit down to their desks and to say to each other (with a friend as witness, if they liked), “Let us calculate” ‘, there will be no more problems, for there will be no one to raise them. Meantime, the calculators have their use, while philosophers are forever in need of being debunked, a thing no one knew better than Socrates. Nevertheless, as Aristotle suggested, no one can deny philosophy without at least implying a philosophy of his own, and his own may prove to be a very foolish one.
What did we know of man before we found out that he is a throng of electric charges? And that he is composed of multitudinous cells? And that the circulation of his blood is an exquisite piece of chemistry and mechanics? Is it possible that, having learned all this, we may remain far more ignorant of him than Sophocles, or Shakespeare? Or the people who believe they know what these writers meant?
Biology and chemistry can offer insights into the chemical reactions of anger—why the face flushes red and the heart beats faster—but science cannot offer the insight into “the anger of Achilles” that Homer does, nor the words of rage Shakespeare places in Hotspur’s mouth: “an if the devil come and roar for them,/I will not send them: I will after straight/And tell him so; for I will ease my heart,/Albeit I make a hazard of my head.”
The academy sets aside the humanities in favor of the sciences because of a pre-imposed philosophical approach. To embrace materialism, as Dr. Ronald McArthur noted in a speech at Thomas Aquinas College, is to embrace “nihilism. That means nobody knows anything. It doesn’t matter whether you affirm something or deny something….Education then turns to the practical…There is hardly any education that is ordered, institutionally, to anything that is sound intellectually.”
In a post for Prospect, Christopher Fear asks why academic political theory is so remote from political practice. He concludes that it’s because political theorists devote themselves to eternal riddles that he dubs “Wonderland questions” rather than today’s problems. Consider justice, perhaps the original topic of political theorizing:
One of the central questions of academic political philosophy, the supposedly universal question “What is justice?” is a Wonderland question. That is why only academics answer it. Its counterpart outside the rabbit-hole is something like “Which of the injustices among us can we no longer tolerate, and what shall we now do to rectify them?” A political thinker must decide whether to take the supposedly academic question, and have his answers ignored by politicians, or to answer the practically pressing question and win an extramural audience.
Fear is right about the choice that political theorists face between philosophical abstraction and making an impact on public affairs. But he doesn’t understand why they usually pick the former. The reason is simple. Academic political theorists ask academic questions because…they’re academics.
In other words, political theorists are members of a closed guild in which professional success depends on analytic ingenuity, methodological refinement, and payment of one’s intellectual debts through copious footnoting. They devote their attention to questions that reward these qualities. Winning an extramural audience for political argument requires different talents, including a lively writing style, an ear for the public discourse, and the ability to make concrete policy suggestions. But few professors have ever won tenure on the basis of those accomplishments.
Another reason academic political theorists avoid the kind of engagement Fear counsels is that they have little experience of practical politics. Most have spent their lives in and around universities, where they’ve learned much about writing a syllabus, giving a lecture, or editing a manuscript—but virtually nothing about governing or convincing lay readers. How does expertise on theories of distributive justice, say, prepare one to make useful suggestions about improving the healthcare system? Better to stick with matters that can be contemplated from the comfort of one’s desk.
In this respect, political theorists are at a considerable disadvantage compared to professors of law or economics. Even when their main work is academic, lawyers and economists have regular chances to practice in the fields they study. Within political science, many scholars of international relations pass through a smoothly revolving door that connects the university with the policy community. Political theorists have few such opportunities.
Fear points out that it wasn’t always this way. Before the 20th century, many great political theorists enjoyed extensive political influence. But Fear forgets the main difference between figures like Machiavelli, Locke, Montesquieu, Madison, Burke, Hume, Mill, or Marx and their modern epigones. The former were not professors. Although all were devoted to philosophical truth as they understood it, they were also men of affairs with long experience of practical politics.
The “brilliant and surreal tragedy of academic political theory,” then, is not that political theorists have been diverted into the wrong questions. It’s that political theory is an uncomfortable fit with the university. Academic political theorists gravitate toward the kind of questions that career scholars are in a position to answer.
Earlier this week I began a series of lectures in one of my classes on the thought of the Anti-Federalists. I began by echoing some of the conclusions of the great compiler and interpreter of the Anti-Federalist writings, Herbert Storing, whose summation of their thought is found in his compact introductory volume, What the Anti-Federalists Were For. I began with the first main conclusion of that book, that in the context of the debate over the Constitution, the Anti-Federalists were the original American conservatives. I then related a series of positions that were held by the Anti-Federalist opponents of the proposed Constitution. To wit:
They insisted on the importance of a small political scale, particularly because a large expanse of diverse citizens makes it difficult to arrive at a shared conception of the common good and an overly large scale makes direct participation in political rule entirely impracticable if not impossible. They believed that laws were and ought to be educative, and insisted upon the centrality of virtue in a citizenry. Among the virtues most prized was frugality, and they opposed an expansive, commercial economy that would draw various parts of the Union into overly close relations, thereby encouraging avarice, and particularly opposed trade with foreign nations, which they believed would lead the nation to compromise its independence for lucre. They were strongly in favor of “diversity,” particularly relatively bounded communities of relatively homogeneous people, whose views could then be represented (that is, whose views could be “re-presented”) at the national scale in very numerous (and presumably boisterous) assemblies. They believed that laws were only likely to be followed when more or less directly assented to by the citizenry, and feared that as the distance between legislators and the citizenry increased, laws would require increased force of arms to achieve compliance. For that reason, along with their fears of the attractions of international commerce and of imperial expansion, they strongly opposed the creation of a standing army and insisted instead upon state-based civilian militias. They demanded inclusion of a Bill of Rights, among whose provisions was the Second Amendment, whose stress was not on individual rights of gun ownership but on collective rights of civilian self-defense, born of fear of a standing army and of the temptation to “outsource” civic virtue to paid mercenaries.
As I disclosed the positions of the Anti-Federalists, I could see puzzlement growing on the faces of a number of students, until one finally exclaimed—”this doesn’t sound like conservatism at all!” Conservatism, for these 18-to-22-year-olds, has always been associated with George W. Bush: a combination of cowboy, crony capitalism, and foreign adventurism in search of eradicating evil from the world. To hear the views of the Anti-Federalists described as “conservative” was the source of severe cognitive dissonance, a deep confusion about what, exactly, is meant by conservatism.
So I took a step back and discussed several ways by which we might understand what is meant by conservatism—first, as a set of dispositions, then as a response to the perceived threats emanating from a revolutionary (or even merely reformist) left, and then as a set of contested substantive positions. And, I suggested, only by connecting the first and third, and understanding the instability of the second, could one properly arrive at a conclusion such as that of Storing, who would describe the positions of the Anti-Federalists as “conservative.”
First, there is the conservative disposition, one articulated perhaps most brilliantly by Russell Kirk, who described conservatism above all not as a set of policy positions, but as a general view toward the world. That disposition especially finds expression in a “piety toward the wisdom of one’s ancestors,” a respect for the ancestral that only with great caution, hesitancy, and forbearance seeks to introduce or accept change into society. It is supremely wary of the only iron law of politics—the law of unintended consequences (e.g., a few conservatives predicted that the introduction of the direct primary in the early 1900s would lead to increasingly extreme ideological divides and the increased influence of money in politics. In the zeal for reform, no one listened). It also tends toward a pessimistic view of history, more concerned to prevent the introduction of corruption in a decent regime than driven to pursue change out of a belief in progress toward a better future.
New York University freshman Elif Koc writes in the Atlantic that high school taught her to be a good student, but not “a good learner.” After attempts at thoroughness hurt her grades, she began to ask, “How can I do as little as possible and still get an A?” At the end of her article, she expresses the hope that “college is where I can become a good learner.”
Sam Swift’s research, recently published in the journal PLOS ONE, indicates that Koc may be disappointed, particularly if she hopes to attend a top graduate program. Swift found, both in the lab and in real life, that students with the highest grades at grade-inflating institutions had the highest rate of acceptance into MBA programs. Swift demonstrated this result by comparing the acceptance rates of candidates from institutions with and without grade inflation; the test admissions committee in the lab study was also given data on how candidates ranked against their classmates. Finally, the lab results were compared with an analysis of real-world MBA admissions data. From his findings, Swift draws out this important observation: “It’s really hard for people to look away from that glaring high number or that glaring low number of raw performance.”
Koc hoped to be afforded the opportunity to actually learn in a college environment, an opportunity she felt the college admissions push denied her. But Koc has the problem reversed: high schools have not radically misunderstood today’s learning culture. Rather, they take their cues from the norm in higher education, where high grades in college lead to post-college success. Academically Adrift, a study by sociologists Richard Arum and Josipa Roksa, assesses data compiled between 2005 and 2007 through the Collegiate Learning Assessment, or C.L.A. These researchers found that “American higher education is characterized by limited or no learning for a large proportion of students.” Instead, the university model is oriented toward monetary gain. In a culture enthralled with quantity, it is not surprising that this quantification, rather than the substance such quantification was intended to express, would take precedence. This fixation has trickled down from our universities into secondary and elementary educational systems.
Koc’s experience in high school is a common phenomenon: students often seek good grades at the lowest possible cost. After four years of training, it will be all too tempting for her to continue in this mentality. How will she ever get into the right graduate school or be recruited by a good company otherwise? But perhaps Koc will represent an exception to Arum and Roksa’s findings. Perhaps one night, she will stay up until four a.m. discussing Plato’s Dialogues, Einstein, or the Civil War with her friend, not because she has an assignment, but because a question has caught hold of her mind and will not release her until she has answered it. Perhaps some of these questions will begin to consume her for days and weeks on end, until she finally arrives at a solution, or at least a senior thesis topic. By focusing on the questions and material, she will no longer be simply a good student: she will be a good learner as well.
Because most of the traditional pathways to adulthood—marriage, economic independence, stable job—seem out of reach or prove to be reversible, working-class young adults have developed a new definition of maturity. This new pathway relies heavily on therapeutic culture: You become an adult by overcoming the trauma of your past, whether that involved abusive parents, drug addiction, mental illness, or less flamboyant hardships. Young adults who take on this new definition focus on protecting the fragile self, and they reject solidarity and close, committed relationships in favor of individualistic, judgmental competition.
This is the basic thesis of Jennifer M. Silva’s insightful, frustrating new book, Coming Up Short: Working-Class Adulthood in an Age of Uncertainty. “At its core,” Silva writes, “this emerging working-class adult self is characterized by low expectations of work, wariness toward romantic commitment, widespread distrust of social institutions, profound isolation from others, and an overriding focus on their emotions and psychic health.”
Silva interviewed 100 white and black working-class adults, defining “working-class” as native-born Americans with native-born parents who didn’t have college degrees—Silva herself fits this definition. A lot of what she found rang very true to me from my own conversations with young adults in low-wage jobs, although I suspect her findings don’t generalize quite as much as she thinks: The worldview she describes has definitely influenced some of the women I counsel at the pregnancy center, for example, but they’re also shaped by communal ties, religious history and their own religious fervor, and a culture of childbearing.
Silva finds relatively little of the “redemption through procreation” language which men expressed in the recent Doing the Best I Can: Fatherhood in the Inner City, and none of the catastrophic romanticism which drove premarital relationships for the “red-state” young adults in 2011’s Premarital Sex in America. So she’s only capturing part of the picture of working-class or poor young adulthood.
And there were many times when I felt the heavy hand of the medium upon the planchette, as Silva seemed to overinterpret her interviewees’ words to drag them closer to her preexisting theories. She often quotes an interview and then summarizes it in terms which don’t really capture what I would mean if I said what her interviewee said, or what people I’ve heard say those things meant when they said them. She talks a lot about “social institutions” and how there should be more of them or how they’ve failed, but the only ones she actually names in a positive context are unions, and she relies almost exclusively on government intervention for signs of hope.
Like I said, it’s a frustrating book. But its primary insights are important and true. There is a working-class therapeutic culture, that culture did arise in large part because of the disruption of older institutions which provided stability and a source of identity, and that culture is intensely prone to self-blame and judgmental “crabs in a barrel” sniping at other working-class people who aren’t sufficiently self-helping.
A plethora of recent studies have delved into the effect social media has on our brains. As part of social science’s widespread fascination with neurology, these reports raise various concerns about the effects new technological platforms have on our social consciousness.
The University of Michigan reported last month that Facebook, while cultivating an addictive fixation in its users, also fosters depression. Why? “Based on the responses from the participants,” TIME reports, “the scientists speculated that the Facebook users were comparing themselves with their peers, and many were feeling inferior as a result. Users also reported frustration and a ‘lack of attention’ from having fewer comments, likes and feedback compared with their Facebook friends.”
Social scientists are also musing over the growing “selfie” trend – and trying to determine its ethical implications. Is our pouty-picture-taking symptomatic of typical youthful immaturity, or is it indicative of some deeper cultural phenomenon?
These are just a few minor grace notes in a growing public dialogue on “narcissism”: whether we are more narcissistic than in ages past, how social media affects our self-absorption, and whether we need to change. Dr. Jean Twenge, a professor of psychology at San Diego State University, believes that younger generations are “increasingly entitled, self-obsessed, and unprepared for the realities of adult life,” according to a recent New York Times article. Slate author Katy Waldman felt inclined to agree with her:
Look at the rise of plastic surgery, our painstaking attention to Facebook and Twitter profiles, our selfies, our relentless focus on self-improvement, and “soccer teams that give every kid a trophy.” … Could it be that she is onto something? How else to explain that Twenge’s thesis just feels right?
Some believe this “epidemic” of narcissism is a product of our digital era. But according to another study, this one conducted by the University of California, Los Angeles, the language used in books shows the trend developing over the past 200 years. “The currently discussed rise in individualism is not something recent but has been going on for centuries as we moved from a predominantly rural, low-tech society to a predominantly urban, high-tech society,” said psychology professor Patricia Greenfield, who conducted the study. Words like “unique,” “individual,” “self,” “feel,” “choose,” and “get” increased significantly in frequency over time, while words like “authority,” “belong,” and “pray” became rarer.
The “selfie” is a commonly cited example of the narcissistic tendency in social media. One psychologist told TIME that the self-captured image could be beneficial, since it allows “young adults and teens to express their mood states and share important experiences.” But not everyone sees the “selfie” as an accurate depiction of the self. Brett McCracken, in a Mere Orthodoxy blog post, argued that the self-projections we present on social media are deceptive both to others and to ourselves:
Social media’s “what are you doing now?” invitation to pose, pontificate and consume conspicuously only amplifies the narcissistic presentism of the generation depicted in The Bling Ring. It makes it easier than ever to tell the world exactly what you want them to know about you. Through a carefully cropped and color-corrected selfie, depicting whatever glamorized “now” we think paints us in the best light, we can construct a public persona as we see fit.
Are Instagram selfies and Twitter posts truly making us more narcissistic? Or are we merely viewing a greater publication of old human tendencies?
I would argue the latter. While the digital era could be making us more narcissistic, it seems more likely that it is doing exactly what it was created to do: showcasing the self in all its glory. As long as humans have roamed the planet, they have had selfish tendencies. But prior to the rise of social media, they did not have a global platform for this grandstanding. High school, with all its intense and sticky drama, was documented profusely in letters and diaries, but it rarely appeared in public discourse. Now, via Facebook, young adults have a communal outlet for their emotionally turbulent lives. Facebook is, in many ways, the new diary.
In addition, arguing that young adults are more narcissistic than ever seems too simplistic. Many young people are immature and tend to be self-interested. It is only with experience and age that we learn the planet does not revolve around us. Once again, this generation has a wider global platform for self-pontification than ever before. This may skew the narcissism balance in their favor. Not to mention that the young are less adept at disguising their selfishness. With age, we learn how to hide our self-absorption behind facades of one sort or another. Time makes some humble; it makes others clever.
It is true that individualism and life disassociated from family and community are more widespread today than in past civilizations. It could be that this individualism has fostered our egotism. The person who lives with family or roommates must learn patience, self-control, and sacrifice. The individual living alone, however, need not accommodate other people’s wants and desires. Personal wants reign supreme. This obviously cultivates a different set of values and perceptions in the individual. But one must caution against the tendency to stereotype and predict behavior based on such perceived “trends.”
The human mind is not one massive mound of ever-growing egotism. It is a complex, varied, wondrous thing — scarred with the pains of experience, spotted with sin and excess, ever shifting with the cadence of human life. While we should be wary of social media’s influence on our mental and spiritual growth, we needn’t discount the online experience because of a few paltry “selfies.” One of the beauties of social media is its ability to connect and share, and to teach us more about each other.
Ali Eteraz, author of Children of Dust, wrote on Medium last Tuesday that words are losing their potency and power. He believes the mighty Instagram, with its frozen pixelated memories, is our future. Though it is perhaps waged unconsciously, he said, “Instagram and its cousins represent an undeclared war on writing. On words.”
Eteraz believes that in the beginning, the Internet encouraged words. Text statuses reigned supreme in the first days of Facebook. But then, a change slowly began to develop:
First, by progressively smaller bursts of text (websites became blogs, became status updates, became 144 character tweets), and then through the enthronement of the image. Whether it is moving pictures (Youtube, Vimeo, Liveleak), or photo-sharing sites like Instagram, Pinterest, and Snapchat, it goes without saying that we are well on our way to communicating with each other by way of pictures.
There is one important and distinctive difference, however, between the sorts of “words” projected on the Internet and those words utilized by storytellers throughout time. In the print era, newspapers and books told stories of the other: of the powerful in Washington or on Wall Street, of a neighbor’s child who won a spelling bee, of the golfer Bobby Jones or General Robert E. Lee. Aside from the private introspection of the diary or the public thoughtfulness of the memoir, stories of the self were relatively limited.
But in the world of Internet and social media, another narrative has begun to reign supreme: namely, the self-narration. Facebook and Twitter statuses create constant self-publication. This public diary has wooed us away from the storybooks; after all, between General Lee and myself, whom will my ego find more fascinating?
With the advent of Instagram, self-describing words became self-describing pictures. Instead of striving to help the user know and understand “who I am and how I feel” via statuses, the Instagram image asks you merely to see who I am, and to know “me” on those terms. Eteraz does not view this as an alarming trend, except perhaps selfishly, he surmises, as a writer who wants to save his income. For most of society, he supposes, such a trend is normal:
After all, we are descendants of cavemen that told their stories upon stone walls by way of images. And we are descended of societies where the primary language was the hieroglyph, which is nothing more than words represented in imagistic forms. From this perspective we shouldn’t show much concern if our societies transition away from words and move to communicating by way of the image.
But here again, Eteraz does not seem to notice that the very nature of what is communicated has changed. Hieroglyphics and caveman images did not contain self-musing diary entries. Many contained histories and chronicles of kingdoms and clans, as well as ceremonial and religious messages. The “hieroglyphics” of Instagram rarely contain any of these things. As the image grows omnipotent online, the stories we tell are changing. News has increasingly migrated from text-focused sites such as the New York Times to the image-based commentary of Buzzfeed. Rather than reading “long-winded” titles, we watch long-winded Netflix shows. Eteraz sees this trend toward the image as inevitable:
…Most realists among the wordsmiths already know that short of some massive cataclysm that lays to waste the electronic grid that makes the delivery of images so easy, we are pretty much done for. Whenever I walk past the cinemas and the cafes with flat screen TV’s and look at our children tapping away at pictures on iPads and think about how no one cares about reading the lengths of Proust, or Yukio Mishima, or Qurratulain Hyder, the thoroughness of the wordsmith’s dispossession comes to mind…
But, he surmises, writers’ laments on the subject are merely selfish, as we cannot “live with the thought that [we] are also irrelevant. If there are no words there are no smiths.” This negates all the inherent and transcendent merits of words themselves. He does recognize, in a single sentence, that some writers believe “words have something to contribute to this world, something important.” But he does not develop this thought: What do words contribute to this world? Why do they matter?
There is neither time nor space to fully explore the importance of words in this post. But perhaps this short list will identify some reasons for words’ preeminence throughout time as the highest form of communication:
Hundreds of colleges and universities assign a “common reading” book to their students every summer – often for incoming or honors students, sometimes for the entire student body. They are usually meant to represent the school’s values and philosophies, and to foster dialogue amongst students for the remainder of the semester. But according to the National Association of Scholars’ latest report, “Beach Books 2012-2013: What Do Colleges and Universities Want Students to Read Outside of Class?” the reading program often falls short of excellence.
The report found that instead of old books, most chosen works corresponded to current events and cultural issues: environmentalism, bioethics, social justice, information technology, urban poverty, animal rights, the war in Iraq, etc. Across the 309 colleges surveyed, only four of the assigned books qualified as classics, and only nine were published before 1990. Some would not see this as a problem; however, the report’s authors believe there is a wealth of scholarship purposefully excluded from most reading programs. “Colleges … choose easy, trendy books because they hope that this will induce students to read the book, or at least think favorably of it,” said report author Ashley Thorne in an email interview. “Instead, they could be challenging students to go beyond their comfort zones and raise their sights to books they will find difficult but rewarding. The problem is not that books with mass appeal are bad, but that books for common reading could be much better.”
Interestingly, the most common reading chosen for the 2012-2013 year was The Immortal Life of Henrietta Lacks. The book, published in 2010, centers on Lacks and the controversy over her laboratory-cultured cervical cancer cells. The NAS report saw the book as a compilation of the institutions’ favorite genres:
If colleges were primarily moved by the goal of teaching students something about science, we would expect that other books about science would also have registered strongly in the list of choices. That, however, did not happen. Accordingly it is probably best to conclude that The Immortal Life owes its popularity not to being a book about science but to being a book about science whose subjects—the Lacks family—happen to be black and poor and furnished with a victimhood narrative. In short, The Immortal Life of Henrietta Lacks is popular because it puts a racial grievance spin on science.
NAS President Peter Wood believes the book is an indication of the “soft manipulation” that often goes on at the modern university. Instead of espousing classical literature, schools often turn studies into an opportunity for acculturation. The widespread ideology amongst America’s colleges strives to re-create students according to its own understanding of race, gender, and class theory.
Anthony Esolen, a literature professor at Providence College, sees such a classifying tendency amongst his fellow professors as well, and condemns it strongly:
It does violence to the man to reduce him to such categories. It is an act of contempt for his humanity. It reduces him, not so that we may get to know him, but so that we can manipulate facts about him while not getting to know him at all. It is a study in subhumanity. That is exactly what schoolteachers, professors, and critics do to John’s art when they cram it into the pigeonholes of race, class, and gender. It is an act of violence.
Thorne called for colleges to use common reading programs as an opportunity to grow students’ knowledge of “our intellectual heritage” by including more works of classic literature. NAS published its list of 50 recommended common reading books – replete with both famous and lesser-known works, with authors ranging from Plato to Tom Wolfe.
It seems doubtful that universities already succumbing to “soft manipulation” will stop when they offer Plato’s Republic on their reading list. Rather, any inherent sexism or elitism in the work will immediately be seized upon in discussion. This is the reductionism that Esolen refers to in his article, and it cannot be stopped by a mere exchange of titles. No book, classic or no, can offer a purely impartial alternative. However, by choosing a challenging and brilliant book, there is hope that even one reader will discover a morsel of truth that would otherwise have gone undiscovered.
On Monday, David M. Shribman accused Americans of ruining August through their cacophony of work, school, and frantic scheduling. He remembers when August was “an idyll of idleness, a time of pure ease” – but nowadays it has given way to work and school obligations:
“Not so long ago—well within the memory of half the American population—August was the vacation month. It was a time, much anticipated and much appreciated, of leisure, languor, lassitude and lingering at the beach well into suppertime… What we’ve done to August has made it the cruelest month: infuriating work and inescapable school obligations amid intoxicating weather.”
The New York Times wrote a similar story in August 2006, called “The Rise of Shrinking-Vacation Syndrome.” Mike Pina, a spokesman for AAA, told author Timothy Egan that “The idea of somebody going away for two weeks is really becoming a thing of the past. It’s kind of sad, really, that people can’t seem to leave their jobs anymore.” Egan blamed “the heightened pace of American life, aided by ever-chattering electronic pocket companions,” for crippling people’s ability to escape or just be “slothful.”
Ever since Alexis de Tocqueville’s famous study of American life in Democracy in America, our superior work ethic has been well documented. This work ethic has become tantamount to our national identity – and, some might argue, our obsession. Social critic Morris Berman has written Why America Failed and Spinning Straw Into Gold, two books aimed at the nation’s work-fixated culture. He told the Atlantic in an interview last week that America is “essentially about hustling, and that goes back more than 400 years.” Americans, for the most part, lack true community or neighborly connection. They see careers, professional ambition, and prestige as means to “the good life.” In this mindset, life and pleasures become results-obsessed.
Berman does think these trends have escalated in the recent past: he writes that as the U.S. began to “speed up” from about 1965 on, “a kind of industrial, corporate, consumer ‘frenzy’ took over, which meant there was no time for anything except getting and spending.” This fits with Shribman’s description of the new August: a month that is now results-obsessed in its educational and vocational pursuits. Classes for high schoolers can begin as early as August 5 (whereas Shribman documents a time when they began after Labor Day). College students return to campus in mid-August, and summer travel has dropped by 30 percent. Americans, it would appear, are eager to achieve – not relax.
This frenzied environment may not stem entirely from technological advances (though gadgets certainly help) or even historical precedent. The country’s current economic situation fosters a sense of vulnerability and job insecurity, and this increases our desire to put in extra hours at the office. In turn, that economic anxiety may push students toward college and a degree with greater alacrity.