John Green’s work isn’t too difficult to recognize: it usually features two ironically funny, slightly introverted, hipster teens. It also usually features deep—and rather dark—musings on the nature of being, and the inevitability of death. Sometimes, these musings are on the nihilistic side, featuring a gloomy solipsism that pulls oddly at the fabric of the teen novel. This was especially true of his astoundingly popular book The Fault in Our Stars: a romance about two teens, each fighting cancer, each obsessed with an eccentric novel that eventually leads them to Europe and back. The main character in TFiOS is a teenage girl, Hazel Grace, who knows death is an inescapable part of her life, and learns how to cope with it.
Paper Towns is a bit different: it follows protagonist Quentin, who has had a lifelong crush on his neighbor, the mysterious and lovely Margo. She’s known for fantastical exploits and adventures—the sort indicative of a spontaneous and fun-loving nature. Though the two were friends as children, they’ve slowly grown apart. But Quentin never stops admiring Margo from a distance. One night, she lets Quentin into her world: she invites him on a whirlwind excursion, fraught with mischief and pranks. The next day, however, she disappears. Quentin is devastated, until he discovers clues that Margo left behind for him—slips of paper, circled words on a record album, highlighted words in a Walt Whitman poem. He becomes obsessed with one purpose: to find Margo, and declare his love to her.
This novel doesn’t touch on death in the same way The Fault in Our Stars did. Rather, it’s more about our ability to know and understand the “other”: the people in our lives that we think we know and love, but so often misunderstand. The book is replete with quotes from Whitman’s Leaves of Grass: a work in which Whitman slowly begins to identify with others to the point of becoming them. As Quentin reads Whitman and searches for Margo, his interactions with his best friends—Ben and Radar—begin to reveal ways in which he doesn’t truly understand or know them. Radar tells him, “You know your problem, Quentin? You keep expecting people not to be themselves.” Meanwhile, Quentin begins to see new parts of Margo, the girl he’s always admired from a distance, and he begins to wonder: has he ever really known her? Who is Margo, really?
The problem with the book, and the film, is that even the “real” Margo seems fake. Our progression toward the authentic Margo seems almost illusory: she remains ever a dreamy, impossible composite of female accomplishment and eccentricity. Green is trying to make a point about how we mystify, and even deify, “the other”—but since Margo never becomes really human, the book falls flat in making this point.
Quentin’s friends are real: they have flaws, quirks, weaknesses. Margo, however, is an almost-perfect mystery: her “weaknesses” are feelings of not fitting in, worries about acting fake or inauthentic. Yet she’s still the girl who breaks into SeaWorld in the wee hours of the morning, the girl who writes in a jumble of capitals and small-caps because “the rules of capitalization are so unfair to words in the middle.” She has a mammoth, diverse vinyl album collection, one that she hasn’t told anyone about. She reads Walt Whitman and Sylvia Plath. She remains ever a manic pixie dream girl: a character that Nathan Rabin once wrote “exists solely in the fevered imaginations of sensitive writer-directors to teach broodingly soulful young men to embrace life and its infinite mysteries and adventures.” The manic pixie dream girl is “the most deliciously delirious young woman, always up to her false eyelashes in madcap romps,” as Neda Ulaby wrote for NPR. Doree Shafrir wrote for The Daily Beast that indie films are particularly obsessed with this female character, who is “detached, mysterious, impulsive,” loves indie bands, and is “always just out of reach, making herself scarce at crucial moments.” That’s Margo, alright.
Interestingly, Vulture’s Matt Patches accused John Green of creating the first manic pixie dream boy with his TFiOS character Augustus Waters, Hazel Grace’s boyfriend. As Patches put it: “He’s a bad boy, he’s a sweetheart, he’s a dumb jock, he’s a nerd, he’s a philosopher, he’s a poet, he’s a victim, he’s a survivor, he’s everything everyone wants in their lives, and he’s a fallacious notion of what we can actually have in our lives.”
Of course, the great point (*spoiler*) of all Green’s stories is that we don’t often get what we want—whether it’s death, distance, or misunderstanding, something always separates us from happily-ever-after. But that doesn’t make his characters any more real, any less aspirational and crush-worthy. Regardless of whether Hazel ends up with Augustus, or Quentin ends up with Margo, there’s no question that she’s the perfect girl, and he’s the perfect guy. It’s just fate, really, that gets in the way.
The film “Paper Towns” is, in this sense, a little more redemptive than the book. Green ends his novel on a deeply philosophical note (as he is wont to do), with both Quentin and Margo pondering metaphors that properly describe existence, and our ability to understand each other. It’s touching, it’s interesting—it’s even a bit profound in places. But it still casts each character back onto himself or herself, in lonely individualism. It leaves us as islands, beckoning to each other in the dark, never able to fully understand or reach each other.
The film, however, brings the plot back to Green’s strength, the camaraderie and delight that make his work worth reading: the friendships. In all Green’s books, the friendships are diverse, funny, heart-touching. The side characters are often more real, relatable, and endearing than the protagonists. And this is perhaps especially true of Paper Towns, on both page and screen: Ben (played by Austin Abrams in the film) and Radar (Justice Smith) are fantastic friends. The film, in its conclusion, reminds us why such friends—the ones we’ve known and loved for ages, who annoy us to death but remain loyal—make life truly miraculous, and help us bridge the gaps between the islands.
Green tried to tell the story of a manic pixie dream girl who wasn’t. But instead, he tells a story of disillusionment, misunderstanding, and the friendships that help us overcome both. Near the end of the book, Quentin says, “Imagining isn’t perfect. You can’t get all the way inside someone else.” But as Radar has pointed out earlier, that’s not the point: we are supposed to love what we don’t understand, appreciate what we are not, and recognize the mystery and beauty of those around us—their warts and all. That’s what friendship is.
“The point of having a child is to be rent asunder, torn in two.”
In light of the Planned Parenthood videos released over the past two weeks, reading those words yesterday stirred a little irony and sadness in me. True: parenthood is about the disruption of the self. But how often, in today’s world, have we reversed those roles—rejecting parenthood in favor of rending an unborn child asunder?
“Here is the truth about abortion: It kills an unborn baby,” Damon Linker wrote for The Week last Wednesday. “We all know this—and with continuing advances in ultrasound technology, it’s something we know with greater and greater certainty all the time.”
Each video by the Center for Medical Progress has exposed ways in which Planned Parenthood harvests the organs of unborn babies for research. Some protest and say that this is an entirely legal process, and one that does not benefit the company monetarily in any way. But Matthew Lee Anderson responded well at Mere Orthodoxy when he wrote,
Planned Parenthood may not make a single dime off of participating in such a system. But they are still in a “market” where the other people and institutions who do benefit from receiving the ‘fetal tissue’ doubtlessly reciprocally support Planned Parenthood in other ways, if only through donation and political support. The practice of treating infant bodies as products in a transaction should itself shock us, regardless of who profits from it.
One problem with the debate surrounding these videos—and the pro-choice / pro-life debate as a whole—is that everything becomes muddied with language that is euphemistically (or dysphemistically) skewed in order to fit the purposes of the speaker. As Anderson writes, “Abortion requires not only the dismemberment of the human body in fact, but in our speech as well. … We cannot allow ourselves to see the baby as a whole, integrated, living organism.”
The pro-choice movement’s language can also be skewed in its discussion of motherhood. Even as Planned Parenthood and pro-choice feminists have painted themselves as supportive of women’s empowerment and right to choose, they have often denigrated the honor and joys of motherhood. They see womanhood as fulfilling only within certain bounds. And while their motherhood script is employed with good intentions, out of a desire to help support women and their life choices, it may practically achieve the opposite.
This week, a friend sent me this Salon Q&A with cultural commentator Camille Paglia. In it, Paglia says feminists abdicated the realm of motherhood for the workplace:
My explanation is that second-wave feminism dispensed with motherhood. The ideal woman was the career woman–and I do support that. To me, the mission of feminism is to remove all barriers to women’s advancement in the social and political realm–to give women equal opportunities with men. However, what I kept saying in “Sexual Personae” is that equality in the workplace is not going to solve the problems between men and women which are occurring in the private, emotional realm…
It is exactly this sort of linguistic erasure that has given rise to the sort of motherhood story told by Sarah Manguso in Harper’s Magazine: she speaks of the vehemence with which she opposed the idea of motherhood—the way in which she equated it with a loss of all purpose, personhood, and joy. “Before I had my son I was convinced that motherhood would ruin my writing and cause a profound loss of self that would never be compensated,” she says. But then, her perspective began to change:
Women who deride motherhood as merely an animal condition have accepted the patriarchal belief that motherhood is trivial. It’s true that motherhood can seem trivial to women who have been insulated from the demands of others; they are given few reasons to value motherhood and many reasons to value individual fulfillment. They are taught, as I was, to value self-realization as the essential component of success, the index of one’s contribution to the world, the test of our basic humanity. Service to the world was understood as a heroic act achieved by a powerful ego. Until I’d burrowed out from under those beliefs, being a writer seemed a worthier goal than being a mother.
This “burrowing out” led Manguso into motherhood, a vocation which she says has made her life full and happy, despite her gravest fears: “My old self is indeed gone, but I perceive the world more carefully and more lovingly than before because I am more aware of the effects of love and of time on an individual person.”
The modern feminist movement often equates motherhood with loss of self-fulfillment and freedom, cultural subjugation, a gross abandonment of the economic sphere. Yes, motherhood can be okay—but only if you don’t have a child too young, and are still participating in the workforce. Motherhood may be alright—if it happens entirely according to your preconceived plan, and you have a partner who willingly and equally shares all burdens of childcare and provision.
The purpose of these cautions is to invest women with a sense of control over their lives and child planning. But their unintended consequence is often more damaging: any woman whose childbearing story does not fit this script suddenly sees her life, or her child, as anti-woman, anti-freedom. The 19-year-old pregnant woman sees no possibility of having her child and being self-fulfilled. The young businesswoman who planned on having kids seven years down the road, surprised with an unexpected pregnancy, suddenly fears that she’s thrown away all possibility of vocational success and attainment.
Yet these stories are often untrue and misleading. They distract us from the reality that life is always uncontrollable. And they distract us from the possibility that—as Manguso points out—while motherhood may indeed lead us out of our present self, in that leading out, we may find a new and better self awaiting us.
The Planned Parenthood script is very selective in how it talks about the unborn, because the more you begin to see that fetus as a baby, the harder it becomes to terminate—the harder it becomes to see its heart, head, and hands as mere lumps of tissue, mere organs to be harvested.
But the Planned Parenthood script is also very selective in how it talks about motherhood, because the more you see childbearing as fulfilling, life-giving, even empowering, the less likely you will be to see that pregnancy as unwanted, that child as a burden.
It’s important not to ignore the difficulties of motherhood. As W. James Antle III wrote for The Week, “asking a woman to carry an unwanted pregnancy to term is not a small ask.” Elizabeth Stoker Bruenig wrote compellingly for TAC some time ago that the pro-life movement should be on the front lines, helping mothers without resources or support to get the help they need. “There are women out there having abortions despite being ideologically committed to not having them, or who at least wouldn’t have them, were circumstances different,” she wrote.
And we can help those women: first, by changing the way we talk about motherhood—by helping women see that it can actually be empowering and enjoyable. But second, we must assist mothers who are overwhelmed and frightened by the prospect of having a child. Their fear is understandable: but a new rhetoric that is pro-life toward both child and mother—respecting the personhood of both—can hopefully bring both healing and hope. Because whether a woman chooses abortion, or whether she chooses motherhood, a life will be rent asunder. The challenge is in helping her decide which life will be torn.
The New York Times tells a sad tale of suicide and depression at the University of Pennsylvania, part of a growing trend among universities throughout the nation:
Ms. Holleran was the third of six Penn students to commit suicide in a 13-month stretch, and the school is far from the only one to experience a so-called suicide cluster. This school year, Tulane lost four students and Appalachian State at least three — the disappearance in September of a freshman, Anna M. Smith, led to an 11-day search before she was found in the North Carolina woods, hanging from a tree. Cornell faced six suicides in the 2009-10 academic year. In 2003-4, five New York University students leapt to their deaths.
Nationally, the suicide rate among 15- to 24-year-olds has increased modestly but steadily since 2007: from 9.6 deaths per 100,000 to 11.1, in 2013 (the latest year available from the Centers for Disease Control and Prevention). But a survey of college counseling centers has found that more than half their clients have severe psychological problems, an increase of 13 percent in just two years. Anxiety and depression, in that order, are now the most common mental health diagnoses among college students, according to the Center for Collegiate Mental Health at Penn State.
Soon after Ms. Holleran’s death, Penn formed a task force to examine mental health on campus. Its final report … recognized a potentially life-threatening aspect of campus culture: Penn Face. An apothegm long used by students to describe the practice of acting happy and self-assured even when sad or stressed, Penn Face is so widely employed that it has shown up in skits performed during freshman orientation.
While the appellation is unique to Penn, the behavior is not. In 2003, Duke jolted academe with a report describing how its female students felt pressure to be “effortlessly perfect”: smart, accomplished, fit, beautiful and popular, all without visible effort. At Stanford, it’s called the Duck Syndrome. A duck appears to glide calmly across the water, while beneath the surface it frantically, relentlessly paddles.
… Citing a “perception that one has to be perfect in every academic, cocurricular and social endeavor,” the task force report described how students feel enormous pressure that “can manifest as demoralization, alienation or conditions like anxiety or depression.”
What is it about college that encourages this incessant pressure to achieve, and these resulting moments of crisis?
It seems our performance-based education culture must play a sizable role in this: quantified measures of skill turn learning into a competition. As Charles Tsai recently wrote on Medium,
Books and books are being written about how schools operate like factories and treat students as clones of one another, training them to be compliant workers rather than people who think for themselves. Even the elite schools, the ones that actually produce the future business executives and presidents, do this. They make students jump through arbitrary hoops and use the hoops to rank and sort them.
This encourages students to measure themselves against other high-performing students, and fosters a culture of anxiety in which we’re all striving for some impossible-to-attain first-place prize.
This performance emphasis even shapes the style of learning and the character developed, since it values quantitative achievements over qualitative goods: it lauds athletic prowess, the 4.0 GPA, the stunning internship portfolio—but passes over more subjective, immaterial skills such as critical thinking, mental development, and thoughtful class participation. It does not recognize a healthy social life, a deepening interest in the culinary arts, or the business aplomb of a budding entrepreneur.
What is taught can even become secondary to the grades one receives, and the resulting benefits realized by 1) the student in his or her career pursuits, and 2) the school in financial support and academic acclaim.
It’s also important to consider the crisis of choice that many students face when they enter college. They’re often apart from any familial or communal support system, entrusted with total authority and autonomy, and given little to no system of rules or guidelines for navigating looming dilemmas—save for the popular yet confusing “follow your heart.”
It shouldn’t be surprising that students put into this situation often suffer a crisis of identity, or at least one of direction, as they embark upon this new world. What their parents or community prized may no longer be enough: what they thought great may suddenly appear mediocre. Some may abandon the rules of their past for a new, nearly anarchical, exploration of self and its wants. Others may throw themselves into the rigid rules of the new performance-based system they are faced with. But both paths are often damaging.
The Times article is right to point out the role often played by demanding parents, as Julie Lythcott-Haims noted in Slate a couple of weeks ago:
In 2013 the news was filled with worrisome statistics about the mental health crisis on college campuses, particularly the number of students medicated for depression. Charlie Gofen, the retired chairman of the board at the Latin School of Chicago, a private school serving about 1,100 students, emailed the statistics off to a colleague at another school and asked, “Do you think parents at your school would rather their kid be depressed at Yale or happy at University of Arizona?” The colleague quickly replied, “My guess is 75 percent of the parents would rather see their kids depressed at Yale. They figure that the kid can straighten the emotional stuff out in his/her 20’s, but no one can go back and get the Yale undergrad degree.”
We should be seriously concerned by a culture in which academic accolades and prestige outweigh our concerns for inner emotional wellbeing.
The Times article also looks at our tendency to strive toward perfection, and all the problems that stem from such behavior. While this is true, the problem with an anti-perfectionism backlash is that it can easily lead students to the aforementioned anarchical path, in which they run helter-skelter from goal to goal, hobby to hobby, striving for fulfillment. We want to find that middle ground where we can encourage students to be themselves, without throwing all rule books or life goals out the window.
Giving students a better college experience seems to require refocusing education on lasting, permanent goods—beyond one’s GPA and extracurricular performance. It should also involve the encouragement and establishment of strong community supports—on and off campus—to invest in a student as he or she grows. It is easy to become distracted by academic accolades. But what should students walk away with when they leave college? Ideally, a toolbox of critical thinking and experiential skills to help them navigate the world, along with a community of friends and mentors to assist them on their paths. Though these things can’t be quantitatively measured, they will help students overcome their perfectionism and depression, and hopefully help them emerge into the light of confidence and community.
She was an assistant U.S. attorney for the Eastern District of Pennsylvania during the Vietnam era. It was the early 1970s: she believed in the war at the time. But one day she rode a train from Haverford, Pennsylvania, along with wounded soldiers from Valley Forge hospital. Many of them had lost arms, legs, eyes. Seeing those young men on the train shocked and changed her: she became familiar with the cost of war—and was convinced it cost too much.
The young woman on that train was Faith Ryan Whittlesey. She could never have guessed the role she would soon play in fighting the Cold War and advocating a prudent foreign policy—rising from her law career and work in Pennsylvania politics to serve the Reagan administration, both in the White House and overseas as an ambassador to Switzerland. At home and abroad, in public and private life, she would quell conflicts and champion a reality-based conservatism, bridge seemingly impassable diplomatic chasms, and take staunch if unpopular stances for what she believed to be right. Her gentility and iron resolve would soon launch the woman on that train into the national and international spotlight.
Born in 1939, Whittlesey was brought up by an Irish Catholic father and Methodist mother in northern New York. Her parents were both Republican—her mother was a social worker skeptical of the government’s “ability to do anything positive in society,” Whittlesey says. After high school, Whittlesey received a full-tuition scholarship to Wells College in upstate New York, where she majored in history. Afterward, she attended the University of Pennsylvania Law School, where she met her husband-to-be, Roger Whittlesey. They married in 1963 and went on to have three children.
Three years later Roger became president of the Young Republicans of Center City Philadelphia and in 1968 executive editor for Richard Nixon’s presidential campaign in Pennsylvania. Meanwhile, Faith worked as a law clerk to federal Judge Francis Van Dusen, then as special assistant attorney general. Historian Thomas Carty, author of a Faith Whittlesey biography entitled Backwards in High Heels, emphasizes that she was working in what was “overwhelmingly a man’s world”: more than 95 percent of U.S. judges and lawyers were men.
In 1972 Roger was asked to run to represent the 166th district of Pennsylvania in the state House of Representatives. He declined, but Faith decided to take his place. When the district’s Republican establishment refused to endorse her, “I cried and cried,” she says. But Roger told her, “This is the best thing that could have happened. Now we owe them nothing, and we will beat the pants off them!”
He was right: starting as an outsider, Faith forged her career independent of party pressure. She won the Republican primary against six men and came to prize the fact that she wasn’t beholden to the establishment. This independence helped distance Faith from the Vietnam- and Watergate-era scandals: “As a woman and mother, she brought a special quality that allowed her to mobilize housewives and young people who might otherwise eschew politics as dirty and corrupt,” Carty says.
But in March 1974, tragedy struck: after a long struggle with depression, Roger Whittlesey committed suicide. Faith was suddenly a single mother and sole provider for her family. Despite her grief, she says, “I was totally consumed by the task in front of me. I had very little free time, running a large house by myself.” She entrenched herself in Pennsylvania politics, building a reputation for her maverick stances and honesty.
In 1976, she clashed with Pennsylvania’s Republican Party leaders over the presidential nomination. President Gerald Ford was the establishment candidate, but Faith Whittlesey favored the outsider: California Governor Ronald Reagan. She valued his commitment to limited government and his resolve to fight communism. Though Reagan did not win that year, Whittlesey supported him again when he decided to run in 1979. She became co-chairman of Pennsylvania’s Reagan-for-President Committee and traveled all over the state delivering speeches.
“I became fluent in explaining Reagan’s foreign policy during that period,” Whittlesey says. “I had to go out into the far reaches of Pennsylvania to explain it to ordinary people.” In November, when he won the presidency, Reagan also won Pennsylvania.
In 1981, Reagan offered Whittlesey the position of ambassador to Switzerland. She had extensive knowledge of European culture, having visited several times as a student, and was an eloquent, passionate advocate for Reagan’s policies. Nevertheless, her appointment was “controversial,” according to Boston University Vice President Doug Sears, who was a member of Whittlesey’s embassy staff in Bern. In the Foreign Service, he relates, career appointees often disparage political appointees.
Career diplomats don’t mind if political appointees “just do the social scene, if they don’t do anything.” But “what made [Faith] special was that she was serious … about doing her job—and not the way the career people told her to do it, but the way she believed to be right.”
Whittlesey invited many distinguished guests to the embassy—leading Swiss bankers and businessmen, ambassadors from other countries, college professors, authors, and others—in hopes of better explaining Reagan’s policies to the Swiss. Sears says he was always a little terrified of these embassy dinners: they were strictly diplomatic in nature and each staff member was instructed to foster discussion with Swiss guests.
“Your job was to find someone, engage in conversation, and make sure that there were opportunities to advance the U.S. position on something correctly,” he says. “She’d look daggers at you if you were off in the corner with another American.”
Ambassador Whittlesey traveled to Swiss cultural and political events at every opportunity and accepted interviews with local and national media outlets—sharing Reagan’s foreign-policy vision wherever she went. During her travels, she met a student at the University of Zurich named Patricia Schramm. Schramm calls Whittlesey “a tough, iron lady. She was a very outspoken, determined ambassador.” Schramm remembers vividly a time when Whittlesey spoke at the University of Zurich about the Iran-Contra affair. The event quickly turned chaotic: anti-American protesters tried to steal the American flag out of the room and a violent tussle ensued. “There was a lot of anti-Americanism in Europe at that time,” Schramm observes. “But she was out defending the U.S. position.”
Whittlesey was often “stirring up attention in the press,” according to Sears. “There were a lot of big issues on the world stage that made Ronald Reagan controversial. But if you know of one Switzerland ambassador who is still believed and looked up to, it’s Faith.”
This is because, despite her staunch “iron lady” beliefs, Whittlesey was also a gracious diplomat. At the very beginning of her term as Swiss ambassador, she faced a tense U.S.-Swiss disagreement over U.S. stock markets and Swiss banks. “U.S. authorities pressured the Swiss to alter longstanding national laws to protect client privacy in the financial industry,” Carty explains. When Whittlesey discovered that the U.S. Treasury Department was preparing to file lawsuits against Swiss banks, she put pressure on U.S. officials to come to Switzerland and hold discussions with the bankers.
“Whittlesey personally coached both sides to pursue a mutually acceptable solution that would maintain the United States’ reputation as a trusted trade partner to Switzerland,” says Carty. The two countries came to a solution, and the crisis passed.
This desire to both understand and explain characterized Whittlesey’s work. “She projects a conservatism characterized by humility and respect for differences,” says Carty. “Rather than attempt to impose U.S. values as universal, she applied the same limited government principles to foreign policy as she did to domestic policy.”
After she had spent just over a year in Bern, Reagan’s Chief of Staff James Baker called Whittlesey and offered her the position of director of the Office of Public Liaison back in Washington. She accepted, eager to share Reagan’s vision with an American audience.
“Reagan’s foreign policy was strongly opposed—not only by Democrats, who ferociously opposed it, but by the establishment wing of his own party,” Whittlesey says. “He was called a warmonger and a cowboy.” Whittlesey began a careful and thorough outreach program from within the White House, seeking to explain Reagan’s foreign and domestic programs to a variety of interest groups, as well as to the media.
“We engaged the public, explained Ronald Reagan’s policies,” she says. “But we were just explaining: not trying to influence public opinion, but to let people know that his policies were based on facts, so they could make up their own minds once they heard the facts rationally presented.”
It wasn’t the first time she had defied political expectations: when Whittlesey campaigned for the Pennsylvania legislature in 1972, she was pregnant with her third child. As the November election neared, her pregnancy “became an object of contention,” says Carty. “Some Republican leaders cautioned Faith that tradition-bound Catholic and other voters might not vote for a pregnant candidate, and one Pennsylvania Republican leader predicted to her that she would lose the seat because of her pregnancy.”
But Faith won, and in ensuing years she cultivated a strong following of women volunteers throughout the county. Feminist groups opposed or ignored her—as Whittlesey told Carty, “They were really not ‘for all women,’ just for certain, mainly Democratic, bigger-government women.” But “ordinary women, mostly housewives and professional women, came out in droves to help me.”
Despite her disagreements with liberal feminists, Whittlesey never stopped fighting for women’s place in the political world. “The smoke-filled rooms are filled with men. I was certainly not invited in. I fought my way in,” she is quoted as saying in Carty’s Backwards in High Heels. (The title itself is a reference to how much harder Ginger Rogers had to work than her costar Fred Astaire: she had to do everything he did, only backwards and in high heels.)
Whittlesey was certainly an outsider in the White House: from March 1983 to March 1985, she was the only woman on Reagan’s 18-member senior staff. Upon her arrival in Washington, she found to her surprise that Baker and Deputy Chief of Staff Michael Deaver were planning to assign her specifically to “women’s issues.” She found the proposal demeaning. “None of these men wanted to deal with the women members of the establishment Republican Party,” she told Carty. “They wanted me to do that, which was, I believe, a completely sexist approach. In my entire political career I had repeatedly declined to be pigeonholed as someone dealing with women’s issues.”
Whittlesey “had disrupted her family life for the opportunity to serve the Reagan administration by promoting the president’s full range of administration policies,” Carty writes. She wasn’t about to be relegated to the role of “token woman on the senior staff.”
Many in the White House were worried about the “gender gap” in Reagan’s support and wanted Whittlesey to deemphasize his pro-life stance in order to garner more female voters. But she refused to ignore or gloss over Reagan’s stand. She believed a strong conservative agenda, well explained, could appeal to voters regardless of sex. Syndicated columnist Sandy Grady wrote at the time, “Women who want a strong White House voice on feminist issues, abortion, and lower Pentagon spending may not be thrilled by Mrs. Whittlesey’s views. She’s probably as far right as Reagan himself.”
In April 1985, Whittlesey returned to Switzerland for a second term as ambassador. She served there for three years, then resigned in 1988 and moved back to New York City to be closer to her family. That fall, a delegation from the American Swiss Foundation asked her to assume leadership of their association. The organization was created in 1945 to foster greater knowledge of Switzerland’s unique political traditions among an American audience and to build strong private connections between the two countries. Whittlesey accepted the position.
The most prominent of her projects at the foundation has been the Young Leaders Conference: a travel program that brings together young American and Swiss professionals. Those who attend the conference travel around Switzerland participating in a weeklong series of events, learning more about Swiss culture and government.
“Through this program, Faith has brought together over 1,000 leaders,” says Schramm, who now serves as president of the foundation. “Tocqueville talked about the importance of private associations, and her work exemplifies that. She’s built a very active foundation into a very important and powerful private association.”
“The Young Leaders Program is her baby,” Sears says. “It builds a class of people who know Switzerland better and generates a personal network. To the extent that we continue to have a good relationship with Switzerland, it will be because of those personal networks and private relationships.”
Whittlesey’s work in private diplomacy is not limited to Switzerland, however: she has also made six trips to China since 1979 and has worked extensively to better the country’s relationship with the United States.
She never thought China posed the threat to the U.S. that the Soviet Union did. “It was becoming clear to me and others,” she told Carty, “that the Chinese would be increasingly important to the U.S.—and in world events—and that communication and better understanding were desirable, despite our continued strong disapproval of their oppressive governing system.”
On a trip to China in 2005, she was alarmed at the hostility she sensed amongst the Chinese toward the U.S. because of the Iraq War and George W. Bush’s interventionist policies. “They seemed to think everyone in the U.S. was a warmonger,” she says. Whittlesey tried to explain that she—along with many other Americans—did not agree with the administration’s bellicosity and was in fact desirous of a more cautious foreign policy.
The next year, she was invited to bring a larger delegation to China and seized the opportunity to introduce the country’s officials to a broader array of conservatives. Her delegation included Andrew Bacevich of Boston University and Georgetown University Professor Joshua Mitchell.
Whittlesey was a “constant diplomat” on the trip, Mitchell recalls. She presented her delegation as those who had reservations about U.S. policies in the Middle East. The team was diverse, and its participants disagreed on various political issues. But uniformity was not the point—indeed, Mitchell says, “she knew there were differences among us. She knew the dance we’d be doing together with different positions. She just let it happen, she didn’t dictate.”
“I brought groups over there to have discussions,” Whittlesey says. “It was private public diplomacy. But I believe we have to work on a peaceful resolution of our differences with China. I know some in the Republican Party believe a conflict with China is inevitable, but I believe it would be disastrous. We have to prepare for the worst-case scenario but be constantly working on a best-case scenario.”
Whittlesey often surprises those who view Reagan’s foreign policy as interventionist or neoconservative. She adamantly argues that Reagan would not have supported the wars in Iraq and Afghanistan.
She expressed this view one day at the Heritage Foundation’s showing of “Reagan,” a documentary on the president released in 2011. There’s a section at the end of the movie, Whittlesey points out, which “purports to show why Reagan would’ve supported the Iraq War.” After the viewing, Whittlesey stood up. She said, “I had the privilege of working with Reagan for eight years, and I think differently. I don’t think Reagan would’ve supported these wars.”
While Reagan had a clear understanding of the threat posed by the Soviet Union, Whittlesey explains, “He decided the benefits of launching a land war would not justify the cost, and he did not respond militarily. He was cautious and prudent: he believed in giving freedom fighters in other nations material means to fight tyranny but not American boys and girls.”
Sears says he was surprised when he first heard Whittlesey express uncertainty about the wars in Iraq and Afghanistan. “In the fullness of time, we’ve seen her skepticism validated,” he says. “She’s always watching, she understands human nature and motivation. And she cherishes life. She’s known some hard times, her life has been marked by some tragedies, and you can feel it when you’re with her. The idea that we’re sending young men and women into conflict without a clear understanding of why really gets to her.”
Whittlesey believes the current plight of religious minorities and refugees in the Middle East is a direct result of U.S. interventions in Libya, Syria, Iraq, and Afghanistan. “Every time we go in and change a government in the Middle East, it seems to get worse,” she notes. While we should be grateful for our unique political system, she says, we mustn’t forget that it is inextricably tied to our historical background. “To send our precious young men and women to impose this system on other countries that don’t have our historical background is folly, and a waste of their lives and limbs.”
Schramm observes that America’s relationship with much of Europe has been severely damaged by foreign interventionism. “Faith’s example of diplomacy is to go to that country, listen to the leaders and what they think, take them seriously, and let their opinions flow into the public discourse in the U.S.,” she says. “This would prevent us from running roughshod over foreign relations.”
This sort of diplomacy deeply influenced Whittlesey’s standing amongst the Swiss: despite the unpopular stances she often had to take during her years as ambassador, Carty writes that she cultivated a strong friendship between the two countries, “based on shared understanding and mutual respect.” Sears says Whittlesey is “revered” in Switzerland: “I can’t think of any American ambassador who is respected as she is.”
Yet despite the esteem she has garnered overseas, Whittlesey’s name is still relatively little known in Washington. “She leads discreetly in the background,” notes Schramm. “In a way, perhaps, she is a bit of a loner. People don’t really understand her politics: how can you be a former Reagan Republican who is against interventionism abroad?”
But in this sense, as Mitchell notes, Whittlesey is simply a traditional conservative, who believes that limits and tradition still matter. “She is trying to reestablish what Republicanism should mean to the broader public,” he says. “She wants a refined view in simple formulations. She will fight if necessary, but she starts with diplomacy. Decorous conservatism—this is Faith.”
Gracy Olmstead is an associate editor of The American Conservative.
What’s the point of handwriting? In Hazlitt Magazine, Navneet Alang argues that it gives us “practical and symbolic resistance to the pre-programmed nature of the modern web” by helping us assert our own voice and style, outside of a technological box:
Handwriting is profoundly bodily. Like an exaggerated, intensified version of the sweeps and swipes we use on a tablet, writing by pen can make muscles ache. Write while crying and one’s hand becomes shaky, write with excitement and watch the swirls and loops of one’s arcs become wild—an inky neurochemical expression that type just can’t replicate or capture. … To write by hand is to always foreground an inevitable uniqueness, visually marking out an identity in opposition to, say, this font you’re reading right now.
Handwriting is inherently physical, and an expression of the individual, Alang writes. But it is also, I would argue, inherently social and familial. Take, for instance, my own handwriting: it’s far from perfect, still strewn with the inconsistencies that Alang rightfully notes are the nemesis of the aspiring writer. But when I look at my own writing, I see a variety of interesting, personal histories reflected therein:
I see the graceful, pirouetting “f,” “l,” and “s” of my mother. She was a ballerina, and I’ve always felt her script lyrically matches her dancing past. It has a sort of artless yet playful poise to it.
But in my script, I can also see the strong capitals, the precise “c” and “t” of my father’s accountant hand: his firm and meticulous script brought glorious form and concision to every post-it note and schedule. It is, interestingly enough, very similar to his banker mother’s handwriting.
My “y” and lower-case “g” are my farmer grandfather’s: he adds a graceful, arcing loop to their tails. It’s such a poetic flourish, one that I always thought slightly ironic (yet beautiful), coming from the practical, jocular man I called grandpa.
As you can see, I’m something of a handwriting thief: stealing my favorite artistic forms from the people I love most. But there’s something fun and even comforting to look at a piece of paper, strewn with journalism notes, and to see an entire family history therein.
Handwriting also tells a history, if you trace it through time: from a child’s first shaky alphabet, to an adult’s final shaky words. Here again, we see life coming full circle. We see a person’s progression, maturation, decay. My other grandfather—a pharmacist—used to write me four or five-page letters in a meticulous cursive. With age, he’s switched to cards, and careful-yet-faint capitals, as his hand trembles too much to write in his former script. Regardless, his distinctive style is there, shouting to me from every addressed envelope I receive from him in the mail, calling to mind a whole history of correspondence and affection.
People talk about the resurrection of handwriting via social media like Instagram or Pinterest—but the problem is that what we’re seeing here isn’t necessarily handwriting, but rather a (worthy and laudable) resurrection of typography. Here, it seems, we need to differentiate between three important terms:
First, there’s handwriting: this is the subject at its most basic. It is very simply “writing done by hand.” Nothing specific here: it could be cursive, print, italic, what have you. It’s just the simple (and importantly physical) work of writing with your own hand, as opposed to using a keyboard.
Second, there’s penmanship: this is a more stylistic and specific term, referring to the way in which we write by hand. It carries with it a certain feeling of quality and finesse. A person with good penmanship can easily write a letter in polished cursive; a person with good handwriting may only be doing so in a simple printed script.
Finally, we have typography: a word that is important to include, because the world of graphic design and printed media has had such a tremendous impact on the way in which we write. Many of the artists you see on Instagram and Pinterest are not sharing examples of handwriting or of penmanship: they’re creating extremely stylized, beautiful letters that are exemplars of their craft—not an enunciation of their particular style or personality (in other words, of their penmanship).
Typography is increasingly popular today. But the simple, quotidian task of writing things down via “handwriting” is growing more and more rare. And thus penmanship—the daily practice of writing by hand, of cultivating a personal style and method of writing—is perhaps close to extinct.
Alang points to interesting scientific studies suggesting that handwriting may develop important cognitive skills and promote memory retention. But his piece also points to qualitative goods involved in the work of handwriting that we may lose if we abandon it altogether to technological devices. Even as computers produce a prolific, perhaps endless array of stylized fonts for us to choose from, we may lose the humanness involved in the writing and development of letters. And this would be a sad fate—not just because of the physicality of handwriting that Alang considers, but because penmanship is, in my mind, a deeply familial and personal thing.
Were I to abandon all handwriting, and with it all practice of penmanship, the traces of lineage I see in my script would probably begin to die. The letters from my grandmother, in her spidery hand, may even become indecipherable as my eye grows unaccustomed to a human’s hand—even as I, as a print designer and editor, may grow ever more shrewd in recognizing the differences between fonts like Times New Roman and Garamond.
There are certain human traditions worth preserving, because they are inherently good—regardless of their utilitarian value. They bind us to the past, and give meaningful beauty to our present. Whether I am penning a letter and mimicking my mother’s loopy, graceful “f,” or whether I replicate my grandmother’s angular capitals as I fill out the crossword puzzles she loved, I am participating in a graceful linguistic and artistic tradition that threads through the past, and into the future. It’s not a cumbersome or annoying discipline: it’s a dance.
Following Hillary Clinton’s first major address on the economy this week, Jeff Spross wrote Wednesday for The Week that Clinton’s “focus on freeing up women to enter the labor force,” while laudable, is “also too narrow and uni-directional”:
She needs an approach that is both broader and more radical: giving all Americans, men and women alike, more control over when they participate and don’t participate in the labor force. … We should be working towards a society where men and women can negotiate with one another, with equal resources to pick their own version of the good life. And so that when they have to compromise that ideal, they can do so on their own terms.
Whether women are working less or working more is the wrong question to ask. Rather, ask if they’re doing what they want to do, or what society and the economy demand they do.
It’s interesting that we live in a time when society (or certain political groups within society) is single-mindedly focused on driving women into the workforce, seeing full-time work as a superior lifestyle for mothers. Contrast this with a piece also published yesterday, in which Wednesday Martin considers the “captivity of motherhood” experienced by women in the 1960s:
Johnson was groping toward feminism’s second wave before it came to be, feeling for a toehold. She listed, in her catalogue not of grievances so much as unsentimental facts about the lives of herself and her conspecifics, the following: isolation, worries about illness and money, and sexual boredom. She referred to the whole schmear as “the housewife’s syndrome, the vicious circle, the feeling of emptiness in the gap between what she thought marriage was going to be like and what it really is like.”
… A participant-observer in the bizarre social reality she describes, Johnson conveys without rancor the existentially isolated, restless feeling that my mother’s generation grappled with, and that Friedan wrote about when she quoted one of her subjects: “I’m a server of food and a putter-on of pants and a bed-maker, somebody who can be called on when you want something. But who am I?”
And Johnson might have added, “Why am I the only one?” Johnson starts off with the demographically inflected observation that women are cut off from their extended families and friends by the idealized nuclear family, with its ostensibly perfect evenings of popcorn and TV in PJs, but moves toward darker realities, including the impact of this existence on female consciousness.
This latter paragraph illustrates the geographical and urban components of Johnson’s problem: compare the isolation described above with this account from Benjamin Schwarz of motherhood amongst working-class families in London up until the late 1950s:
…Working-class life was defined by an idiosyncratic approach to what is “always one of the great and indispensable functions of any society,” as Willmott and Young put it: the task of caring for children. That approach emerged from the distinctive relationship of the working-class mother—the sainted, mythic figure known as “Mum”—to her married daughters, a relationship that had developed from a universal truth: “Child-rearing,” Willmott and Young wrote in a deceptively obvious observation, “is arduous, it is puzzling, it is monotonous…” But a daughter’s “work can be less arduous because it is shared; her life less lonely because she has someone to talk to; the behavior of her children less perplexing because she has someone whose experience she can draw on.”
For a married working-class daughter whose husband was at work, her Mum was “the person with whom she can share the mysteries as well as the tribulations, the burdens as well as the satisfactions, of childbirth and motherhood.”
The result was a flourishing matriarchy, in which a woman’s authority and stature grew with age and in which—thanks to the proximity of employment for the menfolk and the geographically compressed layout of working-class districts—the households of Mum and of her married daughters’ families were, and in fact had to be, nearly always at most a few blocks away and often on the same street. (In a 1956 study of a Liverpool neighborhood, the sociologist Charles Verker revealed a not-atypical arrangement in which the households of one extended family—mother, daughter, father-in-law, sister-in-law, uncle, three cousins—occupied eight of the 22 houses of a short street.)
Children were raised as much in Mum’s house as in their own; married daughters would see their sisters and their sisters’ children at Mum’s, usually daily; their husbands would regularly have their supper after work at Mum’s; family “popping in” for a cup of tea and a chat was the norm—Mum and her daughters saw each other on as many as a dozen separate visits each day. The sons-in-law who gathered at Mum’s were usually drinking friends, and the friends and workmates of each became part of the others’ vast network of acquaintances.
In this world, Willmott and Young explained, in which daughters turned to Mum “in the great personal crises and on the small domestic occasions,” daughters and mothers would “share so much and give such help to each other because, in their women’s world, they have the same functions of caring for home and bringing up children.”
Separate clusters of extended families formed the working-class neighborhood. Its residents knew each other with an intimacy of detail and often through several points of connection—the people a young mother encountered while doing her marketing were her or her parents’ childhood playmates and the friends and relatives of her husband or her brothers- and sisters-in-law. Characterized by these informal, intimate, multilayered social networks, and by what Willmott and Young called “a sociable squash of people and houses, workshops and lorries”—in which the local pub was at the end of the block, the store for daily provisions around the corner, and a destination five minutes’ walk away was considered in a different neighborhood entirely—this was a largely static, tremendously local, intensely parochial realm.
This strikes a remarkable contrast to the suburban housewives described by Martin, “cut off from their extended families and friends by the idealized nuclear family.”
And the problem persists today—Martin says that even the “privileged mommies” she’s met are most often “cut off from extended families,” and thus “depend at least in part on nannies to help them with the heavy lifting of motherhood.” The disintegration of the home economy and its corresponding networks of support has deepened steadily over time, and it has had real consequences for mothers.
When we talk about the plight of the 1960s housewife, we don’t talk enough about the role the rise of the suburbs—and increasing familial atomization—may have played in her isolation. Cut off from the rapport and community infrastructure Schwarz describes in his article, she was increasingly alone. Housework, child care, cooking, and cleaning responsibilities fell squarely on her shoulders—without any of the corresponding social or familial supports that may have alleviated those burdens.
Yet as our society industrialized and urbanized, women came to see their loneliness and isolation as a problem of economic inequality. That framing may not be entirely false, but other social, cultural, and geographic dilemmas were also involved. And without addressing those, the problem has continued to simmer beneath the surface.
Why? Because not all women have found contentment within the workforce, within a 9-to-5 job. Martin suggests that much of this discontent comes from a still-rigid set of options available to women: a lack of institutional flexibility, coupled with a lack of benefits, that makes it difficult for a woman to “have it all” (job plus kids). Spross, too, despite his welcome acknowledgment that women should be able to choose their venue of work, be it in or outside the traditional workforce, continues to look to economic or career-related policies to solve the problem. This reinforces the idea that a stay-at-home mom’s loneliness or search for meaning can be fixed with economic band-aids. But that rests on a narrow and fractured understanding of the human person, whose nature is decidedly social, relational, intellectual, and emotional, as well as vocational.
Last week, I talked to Joel Salatin, sustainable farmer and author, about the interns who come to his farm. He noted that several of them are coming from white-collar desk jobs: one current intern worked 10 years as an engineer for a Fortune 500 company. He got tired of computers and cubicles, Salatin said. “Our trajectory of institutional schooling, to student loans and college, to a white collar job keeps people from discovering other passions,” he said. “Lots of our interns were burned out by white collar careers. They tell me, ‘I always wanted to do this, but I was told I couldn’t.'”
While increased flexibility and benefits for women are of course beneficial and important, they overlook the other problem we face: that of a culture whose fabric has become increasingly rigid and atomized, in which women are separated from the familial, communal, and institutional supports that could help give them a sense of meaning and purpose. It’s a culture that gives smart, talented women one trajectory for their gifts, and tells them other pathways are a “waste,” or a sure-fire road to loneliness and dejection.
But if there’s one thing I’ve learned from the stay-at-home moms, homeschoolers, and retired ladies whom I met growing up, it’s that everything is a matter of perspective, location, and relationship. What are you doing with your time? Are you living in a place that increases or decreases your ability to connect in a meaningful way? Are you reaching out to others, building an infrastructure of community and camaraderie around your family?
These questions may not be sufficient to assuage all the problems or inner conflicts that a mother faces—but they present a more holistic solution to her problem than the purely career-centric focus that modern society most often presents her with.
Some community colleges are dropping the word “community” from their name—and with it, Anna Clark argues at Next City, they are dropping an idea and distinction that is very important:
In all, more than 80 schools have cut “community” in the past 30 years, with at least 40 doing so in the last decade. … By assuming a name that has the ring of a traditional four-year school, community colleges are playing into the stereotype that they are less valuable than their counterparts. They give credence to the second-class stigma.
… [Community colleges] deserve to be taken on their own terms. Traditional education metrics, like the time it takes to graduate, aren’t neatly comparable. The Department of Education’s measure of graduation rates is designed for students who enroll in the fall as first-time degree-seeking undergraduates who attend school full time. None of that describes the majority of community college students. Most attend part-time, begin in “off” semesters, transfer to other schools and, in many cases, they are not seeking a degree or certificate when they sign up for classes. All of this distorts the statistics on community colleges, confusing “graduation” as a synonym for “educational completion.”
Because they serve a different population — including working adults, like my parents — their singular value should be recognized. And when it comes down to it, despite the snide nicknames, people get this. A June Gallup poll revealed that Americans are about as likely to describe the education at a community college as “excellent” or “good” as they are to positively rate four-year schools. With the national student loan debt at a preposterous $1.2 trillion, that fact deserves a spotlight.
Clark makes some good points here, and helps draw out some of the distinctions that make community colleges a “community” venture: they’re about serving a wider, more diverse swath of the local population. Less about academic prestige and imposing admissions standards, they emphasize vocational training and accessibility. They provide a starting point for high schoolers eager to get some prerequisites out of the way, and flexible schedules for working adults trying to obtain a degree. I once profiled a community college back in Idaho that provided classes to students with intellectual disabilities. As one mother told me, “The more the student is part of the college life and community, the more the impact on their skill acquisition and their connection to the community at large.”
In turning away from their “community” role and nomenclature, these colleges may in fact be making a dangerous turn away from the local, and from the vocation they serve within that sphere: Clark notes that when Jackson (Community) College “announced its new name in 2013, officials boasted about how it will make them more marketable to international students. This is a surprising turn for a campus founded in 1928 to serve the people of rural mid-Michigan.”
As Zachary Michael Jack noted last month at Front Porch Republic, academia has grown increasingly rootless over the past several decades, even as the “local” becomes more popular in other venues of society: “America’s colleges and universities, particularly those in suburban and rural areas, continue to militate against students’ love of home. … Community colleges and small liberal arts campuses like mine that draw 75 to 80 percent of their students locally are widely and wrongly disparaged as ‘commuter schools’ by elitists, and yet they have gained market share precisely because they have helped students build bridges between “dual citizenships” at home and at school.”
Clark’s article should prompt us to consider the role that community colleges can, and should, play within their local community. Though four-year universities are important, they are often migratory institutions, drawing young people away from their towns and cities of origin—and often saddling them with mountains of student debt in the process. If community colleges fulfill an important niche in America’s academic society, as Clark argues, we must consider whether this name change constitutes a deeper change in their method, role, and philosophy—and whether that change could have deleterious consequences.
Ready for your library to go all-digital? That day may be coming sooner than you think—as the Washington Post reports:
Around the country, libraries are slashing their print collections in favor of e-books, prompting battles between library systems and print purists, including not only the pre-pixel generation but digital natives who represent a sizable portion of the 1.5 billion library visits a year and prefer print for serious reading.
Some of the clashes have been heated. In New York, protesters outside the city’s main branch have shouted: “Save the stacks! Save the stacks!” In Northern Virginia, the Fairfax County library system chief recently mused that the Friends of the Library were no longer friends — a feud fueled by outrage over a print collection that has shrunk by more than 300,000 books since 2009. The drop in the District is even more dramatic: Nearly 1 million books have vanished since 2009.
… “We’re caught between two worlds,” said Darrell Batson, director of the Frederick County Public Libraries system in Maryland, where the print collection has fallen 20 percent since 2009. “But libraries have to evolve or die. We’re probably the classic example of Darwinism.”
Batson’s words here reveal an unspoken—and flawed—assumption that undergirds many of these library transformations, and leads to a lot of misconceptions in the print vs. online debate over books. Namely, he’s assuming that technology always equals evolution, that embracing new technological fads is an essential part of progress.
But is this true? The new isn’t necessarily more durable or preferable—indeed, the new is often flimsy, unpredictable, and prone to unintended consequences. It has not yet withstood the test of time. In regard to book technology specifically, a recent study has shown that e-reader users absorb less than those who read on paper. One professor reports that over 92 percent of students she surveyed “said they concentrate best when reading a hard copy.”
Technology, while an incredibly useful tool, is not necessarily better just because it is new. And the mobs of protesting bibliophiles who want their codexes back should be at least somewhat indicative of this. They’re not all reactionaries—indeed, as the Post points out, “One survey … found that just 5 percent of millennials read only e-books. Twenty-one percent of the millennials said they read more hard copy than e-books, and 34 percent reported that they only read print.” Even the youngsters still like print. They like e-readers, too, but their enjoyment of one does not exclude their use of the other. They’re “hybrid readers,” enjoying both mediums in different venues, at different times, according to need and the occasion.
It is also worth mentioning that this technological transition comes at considerable expense to the library: e-books are more costly, says the Post, in part because “publishers fear large databases of free e-books will hurt their business.” Yet libraries’ spending on e-books has grown from 1 percent to 7 percent of their budgets, while print-book budgets have fallen from 67 percent in 2008 to 59 percent in 2015.
It isn’t obligatory—nor is it commendable, necessarily—for libraries to shun all new technological fads and tools. But it is important that they don’t shirk or shun the treasures of the past as they embrace new and promising gadgets.
Books have withstood the test of time for a reason: they’re excellent communicators, storytellers, and informers by their very medium. We love them exactly as they are. And some of us think that you don’t have to throw them out in the name of progress—indeed, such a hasty move may represent a regression, rather than an advancement. Rather, the trick is in preserving the best of the old, while finding ways to incorporate the best of the new.
Your parents’ income may play a large role in the major you select: in “Rich Kids Study English,” Joe Pinsker considers the elite bias toward studying the arts, history, and other less practical majors:
Kids from lower-income families tend toward “useful” majors, such as computer science, math, and physics. Those whose parents make more money flock to history, English, and performing arts. …With average earnings for different types of degrees as well-publicized as they are—the difference in lifetime earnings among majors can be more than $3 million, one widely covered study found—it’s not hard to imagine a student deciding his or her academic path based on its expected payout. And it’s especially not hard to imagine poorer kids making this calculation out of necessity, while richer kids forgo that means-to-an-end thinking.
Pinsker’s article draws us back to the discussion of the liberal arts and their relevance (or lack thereof) that has proliferated over the past few years. As B.D. McClay pointed out at The Hedgehog Review, many defenders of the liberal arts get caught up in trying to prove their vocational “usefulness.” Yet such arguments often, necessarily, fall through: “Pure mathematics has no place in a scheme of education that is about utility,” McClay notes. “Neither do the observational sciences, which are—despite being of great importance in the history of science—politely shown the door in pop accounts of the discipline. The fine arts, which have always depended on patronage for survival, will never be able to justify themselves on the grounds of utility.”
The liberal arts were taught in classical antiquity to freemen, and focused on non-vocational subjects seen as having a larger intellectual, philosophical, or political value. Yet such an education was still seen as having a practical value, considering the civic role that ancient Greeks and Romans played in the operations of their government.
What Pinsker’s research indicates is that only the rich think they can afford to learn something that isn’t useful to modern life’s larger goal (namely, procuring a secure and profitable career). While this makes sense from a practical point of view, it also indicates a changing attitude toward larger perceptions of civic duty and involvement, and what such duties entail. In today’s society, we are still called upon to vote, to engage in important local and national political decisions. Is it still necessary that we have a well-rounded and thorough understanding of non-vocational subjects such as philosophy, mathematics, economics, and history in order to make wise decisions in these matters?
Writer and philosopher Susan Neiman thinks so—in a recent Q&A with Salon, she talks about our increasingly infantile age, and points to its political-intellectual roots:
The social structures within which we live are constructed so as to keep us childish. The state has an interest in preventing us from thinking independently, and it cultivates and exploits our worst tendencies in order to do so, for grownup citizens are more trouble than they’re worth. The state’s desire for control and our own desire for comfort combine to create societies with fewer conflicts, but they are not societies of adults.
… It’s remarkable that though we are constantly told to exercise our bodies regularly, we hear very little about the importance of exercising our minds after we’ve finished our formal education. Reading Rousseau and Kant is one way to do so. … As Rousseau and Kant teach us, society has an interest in our not reaching maturity. By encouraging our most infantile characteristics, and diverting us from the truly important adult questions, it distracts us from the social problems that need to be solved.
While I would refrain from making a blanket “It’s the state’s fault” statement when considering our society’s love of youth, Neiman is right that our intellectual immaturities often prevent us from participating in civic matters with the necessary discretion, thoughtfulness, and prudence. The question of why we exercise our bodies and not our minds only points to this immaturity: we prize a fit body because it is more youthful and likely to produce longevity. But in our free time, we feed our minds on Netflix and Facebook surfing. The liberal arts are focused on developing healthy patterns of thought, robust intellectual habits and virtues. They teach us how to think, how to engage with important subjects, how to read and speak with a careful intelligence. By focusing only on career and monetary compensation, not on intellectual sagacity, our education system reinforces materialistic habits—and thus bolsters an idolatry of youth and comfort that (inadvertently or no) can encourage the dominance of the nanny state.
Of course, such problems aren’t solved by someone deciding to become an English major. But more thorough study of the liberal arts, in one’s private time as well as in one’s academic studies, may help reverse the trends of immaturity that we see scattered throughout our culture. Such studies may not lead us to greater financial benefit—but they will cultivate long-lasting political and cultural goods.
When I think of the July Fourths of my childhood, one stands out in particular. We had spent the weekend at a large family reunion in Cascade, Idaho, where my cousins and I biked along the lake, played chess, and created a makeshift parade with flags duct-taped to our hats. During mealtimes, the mothers, aunts, grandmothers, and great-aunts brought out pasta salad, baked beans, casseroles, and all those other lovely northwestern comfort foods. We would load up our paper plates, sit around picnic tables, and listen to my great-grandfather, his sister, and brothers—all nearing or in their 90s—swap stories about the old days.
The night before the Fourth, my sister, cousin, and I travelled home with my grandparents. We spent the next morning helping them unpack supplies from their camper, and then Grandma fed us in a splendid fashion. That night, we watched fireworks from the fields next to their house: fields that led up to a steep cliff, overlooking the valley beyond. I was cold, so Grandma gave me her red jacket. Like any splendid grandmother, she wore seasonal clothing with aplomb. This unbelievably soft jacket had a bejeweled flag pinned to the left shoulder.
I’ve often wondered why that Independence Day stands out so strongly in my memory. Perhaps it’s because it embodies many of the things I love and feel devotion for: strong family ties, rich heritage and history, distinctive local food cultures, bright aesthetic displays of devotion.
In recent years, my Fourth of July has looked quite different. There were some college summers spent at Nationals baseball games in D.C., grabbing food in Georgetown and watching fireworks on the Potomac. Post-marriage, I joined a new clan, and began to acclimate to their Independence Day culture: there were fewer casseroles and more salads, but the close family dynamic remained.
What I’ve missed most on the East Coast, perhaps, is that rich tapestry of inter-generational history and context that gives permanence and meaning to one’s celebration. Washington’s Fourth of July revelers are a less rooted bunch—many of them interns and recent college graduates, who scan the homepages of the Washington Post and Washingtonian for the best hole-in-the-wall restaurants and rooftop bars from which to watch fireworks and drink booze. They have no family get-togethers to attend. Even my husband’s close-knit family members are immigrants to this area, originally from the Midwest.
Yet there is an aura to Washington’s Fourth that is distinctive and interesting: it involves a sense of intellectual camaraderie and depth that is very different from the Independence Days of my past. These days, I am better versed in Tocqueville and the Federalist Papers. I’ve visited monuments and written extensively on American politics; I’ve met congressmen and attended countless policy panels. These all give me an interesting tapestry of intellectual and political distinctiveness through which to view July 4th. They even give me a slight sense of kinship with the various interns, politicians, and think tankers scattered throughout Washington.
But I’m also learning that a little bit of homesickness is almost a necessary piece of Independence Day, as we grow older. It reminds us where we’re from: of the local ties that bind us, the fond remembrances that truly encapsulate our American experience. Without that sense of loss and homesickness, we wouldn’t understand fully what it means to love a place and its people.
So this year, as I gather to eat with new family and watch fireworks with old college friends, I’ll also be thinking of Grandma and her soft red jacket—of watching fireworks burst bright colors across the shadows of Idaho fields, and the sweet tapestry of family and history that brought me to this new place.
How ought we to read? In the New York Review of Books, Tim Parks considers a Vladimir Nabokov quote which promotes the intense reading of a few over the broad perusal of hundreds—and he wonders, is Nabokov right?
“When we read a book for the first time,” Nabokov complains, “the very process of laboriously moving our eyes from left to right, line after line, page after page, this complicated physical work upon the book, the very process of learning in terms of space and time what the book is about, this stands between us and artistic appreciation.” Only on a third or fourth reading, he claims, do we start behaving toward a book as we would toward a painting, holding it all in the mind at once.
… The ideal here, it seems, is total knowledge of the book, total and simultaneous awareness of all its contents, total recall. Knowledge, wisdom even, lies in depth, not extension. … Since a reader could only achieve such mastery with an extremely limited number of books, it will be essential to establish that very few works are worth this kind of attention. We are pushed, that is, toward an elitist vision of literature in which aesthetic appreciation requires exhaustive knowledge only of the best.
… So, is this an ideal attitude to literature? Is Nabokov right that there is only rereading? Does the whole posture, both Nabokov’s and that of critical orthodoxy, bear any relation to the reality of our reading habits, particularly in a contemporary environment that offers us more and more books and less and less time to read them?
Meanwhile, Ken Kalfus writes a relatable—and even amusing, though in an almost tragic way—piece in the New Yorker about the way we shop for books now:
Bookstores have become places of regret and shame. We once enjoyed shopping in them or simply looking in their windows, back in the days when they were ordinary retail establishments. They were like stores that sold shoes or hats, but with more appealing merchandise. Now they’ve taken on moral significance. Buying a book and choosing the place to do so involve delicate and complicated considerations. You may fail to do the right thing.
… My remorse enfeebles me. I recognize that I’m no longer thinking about the essence of the reading experience or the book I want to buy, which in the depths of my moral rumination has been turned into simply another form of consumption, and not even that, but rather the aspiration to consume.
For the bibliophile, these dilemmas are deeply understandable. We’ve all confronted that large and looming bookshelf, considering furtively—or even fearfully—what we should buy… or whether we should buy anything at all, since we’ve probably got three or even fifteen books on our shelf that are as yet unread. We think of the treasure troves left to be discovered, the talented authors who we could support through our sales, the tiny indie bookstore feebly making it by, day by day. And all of a sudden, choosing a book becomes a monumental, even moral, task.
Are other pastimes saddled with this moral weight? Few of us spend such time and mental energy perusing Netflix or movie theatre listings. True, we may be overwhelmed by choices; but in the end, we’re merely seeking some evening entertainment. And unless one is a true film enthusiast, these cinematic choices don’t leave us in any sort of moral quandary or panic.
So why is reading different? Perhaps because, for many of us, it’s more than entertainment: it’s part of a larger search for truth, goodness, and beauty. It’s a way we delve deeper into our souls, and the souls of others. It often leaves us shaken and transformed. As Nabokov points out, the longer we spend immersed in a particular work, the more we begin to know and love it—and the more it begins to change us. Reading leaves a more indelible mark on the human mind than most other forms of entertainment. Thus, choosing a book is often like choosing a particular course for one’s future: mapping out the free hours of the coming days, yes, but even more, mapping out a new mental and spiritual journey for the self to embark upon.
Through our reading, we come to know and love writers. We often find ourselves compelled to keep buying and reading their material, seeking to know them better—and seeking to support their work.
Through our reading, we come to know and love bookstores. They leave us with deep sensory impressions, fond memories of serendipitous discovery, a lingering thirst for joys yet undiscovered. We frequent our favorites with religious devotion—memorizing the layout of their shelves, seeking their most comfortable chairs.
For the bibliophile, reading is a monumental part of living. And thus, the choosing of every book must be considered deliberately and thoughtfully. Which brings us back to the two questions that Parks and Kalfus consider: first, should we seek quality or quantity in our reading? And second, what moral claims should lie heaviest on our hearts in the choosing of a book?
Back in 2013, I wrote about George Vanderbilt’s enormous library and voracious reading habits: he reportedly read 3,159 books in his lifetime (approximately 80 books a year). I used to think that such a feat would be achievable for me—3,000 didn’t even sound like that many. But the older I get, the busier life becomes… and getting through 40 or 50 books in a year seems like a monumental accomplishment. I’ve realized that Vanderbilt’s record will never be mine—and that it doesn’t have to be. Instead, it’s become clear that I ought to savor each book: re-reading my favorites, while still seeking those essential reads that stretch brain and attention span in healthy ways. Speed reading is useful for the accumulation of necessary knowledge, while slow reading is essential for the appreciation of written beauty. Perhaps our best reading choices lie at the junction of quality and quantity: we can quickly peruse tedious or secondary works, then slowly absorb the masterpieces worth relishing.
And what of the purchasing of books? William Giraldi touched on this struggle in his excellent piece on personal libraries earlier this year. “Agonizingly aware of the human lifespan,” he wrote, “The collector’s intention is not to read them all, but, as E.M. Forster shares in his essay ‘My Library,’ simply to sit with them, ‘aware that they, with their accumulated wisdom and charm, are waiting to be used’—although, as Forster knows, books don’t have to be used in order to be useful.”
To the non-bookworm, such sentiments probably sound ridiculous and expensive. But to the bibliophile, such a statement bears witness to the dilemma we all face, the tightrope we walk on every trip to the bookstore or the bookshelf: what to read, re-read, and why, are the questions that must be weighed in the balance. Thankfully, a good book is worth all the work.
In a world moving towards vat-grown meats and Soylent, Bryce Oates hopes for a return to the sustainable, diverse, and local:
I suppose we all have our favored notion of what’s to come, what’s preferable, how we should move along the path. Mine is more people on the land farming a mix of crops and livestock, minding the recycling and biological renovation of nutrients while producing healthy food for people, and leaving room for the wildlife with whom we share the planet. That’s already a mouthful, I know, but there also needs to be something said for economic fairness, decent pay, and incomes sufficient to support these food producers and conservationists.
Creating such a future may be difficult, Oates writes, as we would need to build “a policy framework and developing market opportunities and infrastructure. You know, truly sexy things like food processing shops and developing trucking routes.”
Meanwhile, Gene Logsdon wonders whether we ought to forsake capitalistic farming methods altogether, and turn farming into a not-for-profit enterprise:
Not-for-profit farming would be based on a different economic model for farmland. “Profit” would come from the satisfaction and enjoyment and recreational value of possessing or owning land, not squeezing it to death for money profit. Then the land and the farmer’s life on it would not be subject to money manipulation and would not need the highest yields or the biggest machinery to survive. It would just need more not-for-profit food producers.
The major goal for successful farming would not be to reap the highest amount of money from the land but to reap the most pleasure and satisfaction that a farm can provide. For example, the not-for-profit farmer would be content to derive as much enjoyment out of fishing, ice skating, boating, swimming, and bird-watching on his pond that others derive from taking vacation trips to far off lands. Rather than seeing the farm primarily as a place to make money, the non-profit farmer would see it as a refuge from strife. They would then have to make only enough money to pay taxes and cover living costs, the latter being minimal since the farm, correctly managed, can provide many of those costs without cash outlay. The financial reward would come from the rise in the value of the land both as property and as increasingly fertile soil.
Why does Logsdon see this as a more beneficial method than the current one? “When highest possible profit rules farming, the possession of the land inevitably flows into the hands of the richer people and more and more poor people are dispossessed—forced off or lured off the land,” he writes.
Both of these posts seem to raise the question, “Has capitalism broken farming?” As I’ve written in the past, I think it’s more likely that crony capitalism has broken farming, giving us the bloated industrialized system we have today. But these writers aren’t wrong to call for a return to a simpler, more diversified, craftsman-esque style of agriculture. Our current industrialized mode of farming has resulted in a swath of deleterious consequences.
Oates identifies the greatest challenge here: the need for a new infrastructure, one that gives local-food-craving consumers access to the goods they desire. That infrastructure is building, but slowly—impeded by miles of red tape.
The idea of not-for-profit farming is intriguing, but I wonder how well it would sell to farmers themselves—many of whom want to build a sustainable livelihood they can pass on to their sons and daughters. (Although perhaps it’s not a bad idea to open up the possibility of not-for-profit farming for those who are interested in such a lifestyle.) Many of today’s farmers are making “only enough money to pay taxes and cover living costs” as it is: rewarding them for their toil and hard work is a good use of our time and resources. We want to make (sustainable, local) farming a lucrative practice, so that smart and talented people will be drawn to the enterprise. That’s not as likely to happen if we transform farming into a not-for-profit enterprise.
Back in 2013, I wrote about a raisin farmer fighting the federal government’s “Raisin Administration Committee”: a Truman-era program that has the power to allot a portion of every year’s raisin crop into a government-controlled reserve, where it’s kept off the market. When farmer Marvin Horne decided to take on the committee, the battle went all the way to the Supreme Court—where last Monday, the court ruled in Horne’s favor:
The Hornes went to court to challenge the whole raisin reserve program, contending that it amounted to an unconstitutional taking of their property without just compensation. … The government argued that it wasn’t taking the Hornes’ property because they were free to sell their grapes for other purposes, like wine, instead of raisins.
The Supreme Court rejected those arguments, however, by an 8-to-1 vote, with Justice Sonia Sotomayor the sole dissenter. Writing for the majority, Chief Justice John Roberts said the government’s “let them eat wine” argument is “probably not much more comforting to the raisin growers than similar retorts have been to others throughout history.”
… Raisin producer Horne was elated by the court’s decision. “It’s just an affirmation in our Constitution and the American way of life,” he said.
Modern Farmer wonders whether the court’s decision will dismantle the raisin industry Marketing Order. But George Mason University law professor Ilya Somin notes, “… All it means is that the government will have to pay compensation for the fair market value of any of the raisins that are taken this way. I’m not sure if going forward the Department of Agriculture has authorization from Congress to pay compensation in these cases and if they don’t have that authorization, then in order to keep this program functioning, they might need Congress to pass a new law, which … doesn’t seem tremendously likely.”
In other agricultural news, Washington’s corn lobbyists have declared war on sugar lobbyists:
The Corn Refiners, representing companies that produce high-fructose corn syrup, just hired 10 outside lobbyists for an aggressive, unorthodox attack on the federal sugar program just a year after a new farm bill was signed into law. Their first target is the agriculture appropriations bill, now moving through a House committee.
… The sugar program, which has existed in various forms since the 1930s, uses an elaborate system of import quotas, price floors and taxpayer-backed loans to prop up domestic growers, which number fewer than 4,500. … “While every other farm support program has received multiple rounds of reforms, big sugar has not been touched,” said John Bode, CEO of the Corn Refiners group.
… Bode, a former assistant secretary of agriculture during the Ronald Reagan and George H.W. Bush presidencies, was outspoken in an unusual way for a Washington agribusiness insider. “This is pure crony capitalism,” he said. “Sugar is a mere footnote in American agriculture production, but they make more political contributions than the rest of agriculture combined. That’s why they have defeated all attempts at reform since 1980.”
Of course, the irony here is that America’s gigantic corn industry benefits from a mammoth amount of subsidies, as well—to the extent that it’s called “King Corn” in a recent documentary about the industry’s crony practices. Michael Pollan dedicated an entire section of his book The Omnivore’s Dilemma to describing corn’s infiltration of the food industry. As James Davis, from the Freedom Partners Chamber of Commerce, told the Post: “We’re not real interested in climbing in bed with the corn lobby to accuse the sugar industry of being prostitutes. We oppose all forms of corporate welfare.”
When it comes to reforming K-12 education in America, entrepreneurs hold the key to success—or at least, this was the principal claim touted by panelists at an American Enterprise Institute panel on innovation and entrepreneurship Wednesday.
Despite the variety represented amongst the panelists, most expressed a keen desire for greater school choice and a diminishing of bureaucratic red tape. Meanwhile, there were specific things that seemed to make the panelists—as well as the parents, teachers, and entrepreneurs they work with—frustrated:
- The nationalization of educational standards (via Common Core), and a corresponding lessening of choice on the local and state level (this complaint also applies to the Common Core tests that many parents are increasingly choosing to opt out of).
- A broken educational system, insulated by bureaucracy and federal regulations, that seems to prevent any real reform or change from getting off the ground.
- A lack of alternative schooling options for families with limited monetary resources. As Michael McShane, an AEI research fellow in education policy studies, puts it in the institute’s just-released education agenda for 2016, “School choice is about equalizing opportunity. … Wealthier families can choose where their children attend school, but poor families cannot. By allowing for the creation of open enrollment charter schools or giving families vouchers or tax credit scholarships, school choice gives low-income families this same benefit.”
While there were a lot of buzzwords floating around during Wednesday’s panels (“disruption,” “innovation,” etc.), a few interesting and thought-provoking ideas also rose to the surface—ideas that may be able to fight some of the above frustrations that Americans are experiencing.
Panelist Matt Candler started 4.0 Schools in 2010. The organization helps entrepreneurs create new educational tools for teachers, students, and parents. They also have created what Candler calls “Tiny Schools.”
Charter school startups require a massive amount of work: they must churn through charter applications, rent or renovate a large property, hire adequate staff, recruit in local neighborhoods, fundraise, and procure insurance, books, and furniture—etc., etc. While such development may be lower risk than conventional district-led school improvement plans, innovators still rarely have opportunity to test their models and curricula before the students show up, Candler says.
In contrast, a Tiny School enables innovators to test their ideas and models at a very small scale, in a very personal environment, says Candler. Families and students can build strong relationships with educators, and provide extensive feedback—long before the Tiny School ever develops into a full-scale charter school.
The 1881 Institute, NOLA Micro Schools, Rooted School, and Noble Minds Institute were built through Candler’s Tiny Schools Project. As they grow, they’re each looking into different options for expansion: one is partnering with a homeschool collective and a private university to build a summer program. Another is using space in a local private school, while another is contracting with a local public charter school for a year.
Candler argues that by limiting scale and thinking small, schools can focus on building quality, accountability, and support systems. They don’t have to worry about infrastructure issues and “huge bureaucracies.” Meanwhile, students and families get personalized input and care from the school.
In a lot of ways, Candler’s program is reminiscent of the Tiny House movement: it focuses on minimizing costs in order to maximize quality. It works to cater to the needs of the homeowner/student, while also minimizing any detrimental impact on the larger community or ecosystem surrounding it.
Many of the speakers at AEI’s panels emphasized the frustrations they (and many parents) feel with our rigid yet woefully broken schooling system. We must pay our taxes: yet those tax dollars go toward an educational system that is inflexible, systemically flawed, and ailing. We must send our children to schools, of one sort or another: yet the schools we’re sending them to are often malfunctioning institutions that don’t seem to help our students as much as harm them.
The sort of entrepreneurship that these speakers seemed to be pushing for is the sort that emphasizes parental choice, providing multiple schooling options at price points that are actually feasible for a diverse body of providers.
Yet even here, there’s a degree of rigidity: as Village Capital’s Ross Baird argues, the K-12 model we’re currently working with was built for a bygone era. It worked in an industrial society, in which a bachelor’s degree was in fact a guarantor of social mobility and economic success—but in modern America, higher education is fraught with problems and the “knowledge economy” is quickly taking off. In this society, our school system often seems to be lagging behind.
What seems to be the “future,” then, would be an expansion of school choice and flexibility that enables parents to pick and choose a smorgasbord of educational opportunities, giving them the ability to orchestrate an educational program that suits their students’ needs and talents. So, for instance, a parent could choose to homeschool their child 50 percent of the time, supplementing with Khan Academy, MOOCs, or other online curricula, and then finish out with classes or extra-curriculars with a local charter school or co-op.
One of the panelists, in a private discussion between panels on Wednesday, compared this idea to iTunes and Spotify: we’re currently working with a rigid system (iTunes), in which users’ choices are limited to buying one full music album or another. You can’t just pick a song from the album—you have to buy the whole package. It’s all or nothing.
The future of education, he suggests, is more like Spotify: you customize and create your own playlist from a myriad of song choices. You build a user experience that fits your personal style, background, social sphere.
As a former homeschooler, the idea of building a smaller, more local, and accountable system is highly palatable and exciting—as is the idea of greater flexibility, of being able to opt in or out of educational methods at one’s own discretion. It’s exactly what my parents fought for: the ability to customize my education in such a way as to make it as rigorous, high-quality, and enjoyable as possible. They melded at-home classes, homeschool co-op literature and rhetoric classes, college language courses, private music lessons, community college orchestra, and intramural sports.
But there are also, of course, problems that can arise from such a diversified model. First, we must consider that such disorganized and unquantified participation could hamper our ability to assess long-term student growth and progress nationwide—as well as impede us from comparing our students to others in the international sphere. This is, in a sense, the opposite of Common Core, which was built around the goal of increasing our competitiveness in the global sphere.
There are also benefits to a more structured, traditional educational system that we may lose if we allow such flexibility to exist. Students could miss out on important lessons or classes they need in order to get jobs or build a portfolio. Both classical forms of education and vocational systems emphasize certain skillsets that they see as essential to building a well-rounded or well-skilled human being: the former often focusing on the development of abstract qualitative skills, the latter on the development of concrete quantitative skills.
But increased flexibility need not constitute a rejection of such systems or their schools of thought—rather, it could hopefully open up more opportunities for parents and students to tap into those systems. Most parents who don’t care much about their children’s education will continue to enroll their children in one rigid program or another: programs that make the decisions for them. And that’s completely fine.
But parents who decide to implement a more flexible and varied approach necessarily take on greater responsibility and involvement. They will be called upon to make thoughtful and principled decisions. While some may err on the side of the lackadaisical, letting flexibility devolve into anarchy, most will be able to use a greater diversity of choice to open up more opportunities for their children. Thus it seems that overall, greater flexibility would enable parents from all income backgrounds to have greater access to high-quality education options.
One final thought: an increase in flexibility and small-scale educational enterprise is very reminiscent of some changes we’re seeing in the economic sphere, as workers increasingly shift from the traditional 9-to-5, cubicle-centric career world to a work-from-home, flexible-hours approach. And just as there are going to be problems and drawbacks in the changes we see there, we should expect problems to arise as our educational system changes. The problems may in fact be similar.
But despite the drawbacks, alternative methods like homeschooling, Tiny Schools, or outside-school options like Khan Academy or Duolingo can help alleviate several of the educational problems we’re facing. The sorts of reform and innovation that AEI’s panelists suggested Wednesday could, in fact, help build a more nuanced, thoughtful, and high-quality system of education here in the U.S.
Our modern food movement isn’t working, says Pacific Standard writer James McWilliams: even though “muckrakers have been exposing every hint of corruption in corporate agriculture” and “reformers have been busy creating programs to combat industrial agriculture with localized, ‘real food’ alternatives,” factory farms are bigger and busier than ever—in fact, they’re “proliferating like superweeds in a field of Monsanto corn.”
- The total number of livestock on the largest factory farms rose by 20 percent between 2002 and 2012.
- The number of dairy cows on factory farms doubled, and the average-sized dairy factory farm increased by half, between 1997 and 2012.
- The number of hogs on factory farms increased by more than one-third, and the average farm size swelled nearly 70 percent from 1997 to 2012.
- The number of broiler chickens on factory farms rose nearly 80 percent from 1997 to 2012, to more than 1 billion.
- The number of egg-laying hens on factory farms increased by nearly one quarter from 1997 to 2012, to 269 million.
It seems that, as McWilliams puts it, the sustainable food movement has “hit the brick wall of economic reality.” Despite the efforts of food reformers like Michael Pollan or Joel Salatin, factory-farmed meats and dairy are still just plain cheaper. “To most people, even ethically concerned food people, blueberries are just blueberries,” writes McWilliams. “Food is just food.”
But of course the deeper problem here is that food is not just food—it’s a piece of a larger structure of economy, ecosystem, and community. And the blossoming prosperity of factory farms is not, in fact, a normal or organic outgrowth of free-market demand: it is an artificial construct, a bloated system sustained by government subsidies, crop insurance, and regulatory supports. This should be made clear by the fact that, even as the locavore/farm-to-fork movement has swelled considerably over the past seven to ten years, these factory farms are still doing incredibly well.
The federal government bolsters large farms and turns a blind eye to their environmental detriments (detailed at length in the Food and Water Watch report), while disincentivizing—and even crippling—smaller farms. As the report puts it, Big Ag corporations foster “an intensely consolidated landscape where a few giant agribusinesses exert tremendous pressure on livestock producers to become larger and more intensive.”
Heavily-subsidized corn becomes cheap feed for malnourished, maltreated cattle, as “misguided farm policy [has] encouraged over-production of commodity crops such as corn and soybeans, which artificially depressed the price of livestock feed and created an indirect subsidy to factory farm operations.”
Factory farmers don’t have to worry about their manure lagoons, so they cram as many beef cattle as possible onto their land: “lax environmental rules and lackluster enforcement allowed factory farms to grow to extraordinary sizes without having to properly manage the overwhelming amount of manure they create.”
McWilliams looks at the current situation, and suggests taking extreme measures: “Begin with animal domestication. It’s got to go. Given the centrality of animal products to industrial agriculture (and many other industries), to attack the raising and slaughtering of animals would be a far more effective way to change our food system than localizing meat production or attempting to alter the manner of domestication.”
But when one considers the strong (even fierce) consumer preference for meat, as well as the entire systems of industry and agriculture that rotate around it, it becomes clear that this plan of action would never sell. It could also have harsh consequences for the farmer and land, as author and farmer Shannon Hayes explains in this paper defending small-scale livestock farming.
McWilliams’s problem is that he is looking at only one tier of a much more complicated, layered problem. Fixing America’s food system cannot be done at the consumer level alone: with our current system of artificial prices, bloated benefits, and thinly veiled cronyism, there’s little the consumer can do long-term to fix the problem. Consumers may demand free-range eggs—but factory farms will respond by relegating a few thousand of their conventionally farmed chickens to a “free range” area, in order to cater to that niche in the market (as Daniel Sumner explains in this EconTalk on the political economy of agriculture). Their overall practices will not change, and any money used to buy those “free-range” eggs will just flow back into the pockets of the industrial farmer.
Being well-informed and shopping locally, via CSAs or farmer’s markets, can help. But it can also be very expensive, and thus turns the sustainable “food movement” into an upper or upper-middle class issue, one to which lower-income citizens have little to no access.
On the other hand, by tackling the cronyism and regulatory system that undergirds our agriculture, we can shift the economic battle to the political sphere and push for change beyond the grocery shelves, looking to the core policy issues that push back any “change” we’re able to achieve.
This could involve fighting for local food freedom, as folks in Wyoming are currently doing—taking some of the pricing power out of the hands of large farmers and giving it back to small, local operations, and countering the difficulty McWilliams addresses when he notes that factory-farmed food is always cheaper.
Change may also involve establishing environmental measures to crack down on the extreme pollution and maltreatment of land currently allowed on factory farms, as Food and Water Watch’s report argues.
But it is important to note that our current situation—undesired as it is by a growing number of consumers, costly as it is long-term for land, animal, and person—cannot be sustained without artificial incentives and consumer ignorance. By fixing the first politically, we may in fact find it easier to fix the second organically, bit by bit. This may help fix the problem that McWilliams is addressing—while still allowing consumers to have their bacon and eat it, too.
Stephanie Cohen argues at Acculturated that we should ditch summer reading lists, especially for kids:
Today, every June, newspapers, magazines and websites, along with librarians and teachers, post their must-read summer book lists for students—100 must reads, books for introverts, Bill Gates’ summer reading list, a light summer reading list, a counter-cultural book list, or a banned books list—the variations are endless.
From Milwaukee to Miami, reading clubs and contests featuring prizes are kicking off with names like Librarypalooza. Parents head to the library and the bookstore to find a few of the listed items for their sons and daughters, but often their kids don’t care for the books that were picked.
In contrast to these long-winded lists, Cohen recommends a more flexible and enjoyable reading plan:
Tropical license means throwing the bookshelves wide open and letting children dive into piles and piles of books, some of which they may love, hate, not finish, or never forget; others will make them burst into spontaneous laughter or tears, or encourage them to become deep sea divers or zoologists … parents need to give their kids the “license” to explore.
I would agree—and in fact, would argue that adults should largely follow the same rule. It seems we can easily force ourselves into reading books that we’re rather unhappy with, or bored by. Many of us read not out of joy, but out of a sense of compulsion—because certain books are “good” for us.
But summer should be an opportunity to branch out, to find new things, and to exercise some “tropical license,” as Cohen puts it. We ought to “dive into piles and piles of books,” which we can either treasure or laugh at or forget, and we ought to enjoy the process thoroughly.
This has been my goal for the summer, and it has led me to some interesting reads. Here are the most recent five:
Mr. Penumbra’s 24-Hour Bookstore, by Robin Sloan
I loved the beginning of this book, and then found myself incredibly disappointed by the end—and have a sense that most other bibliophiles who read it will feel similarly. Nonetheless, the characters are intriguing and quirky. The contrast offered throughout between physical books and the blossoming world of technology is also interesting to consider.
Angle of Repose, by Wallace Stegner
This classic was a book to savor—to read slowly, and thoughtfully. Stegner’s exploration of pioneer life in the American West is poignant and perceptive. But it’s more than this: it’s also a deep and considerate look at the pains and pleasures of marriage, the differences that can divide us or draw us closer together. Highly recommend.
The Luminaries, by Eleanor Catton
I was worried at the start by this book’s length (820+ pages) and rather slow beginning, but my hesitancies were quickly relieved as Catton’s witty narrative unfolded. Written as a parody of the 19th-century novel, the book weaves its fascinating cast of characters into an astrologically metaphorical plot. It has the elements of a historical novel, a whodunit, and a romance. I finished it in a matter of days.
Fierce Convictions, by Karen Swallow Prior
This biography of Hannah More is well worth reading: the English poet, pamphleteer, essayist, and novelist was an essential member of the 18th- and 19th-century abolitionist movement. She was friends with Edmund Burke, Dr. Samuel Johnson, William Wilberforce, and countless other key thinkers of her day—but brought her own wit, vivacity, and piety to the reform movements of the time. Prior captures much of her zeal, though I still felt by the end that the depths of More’s character remained a bit obscure and fuzzy amidst all the facts and chronology. All the same, it’s time more people read about More’s work and life.
The Secret History, by Donna Tartt
This one I’m still reading. Tartt’s novel The Goldfinch won a Pulitzer Prize last year. It was an interesting read, moody and introspective, with many elements of a Dickensian novel. But the protagonist, Theodore Decker, was rather difficult to like: an opaque, blasé, apathetic young man. This book has a similar feeling of Dickensian hyperbole, the same dazed and rather half-hearted protagonist… but at least thus far, the plot seems more intriguing. My guess is that Tartt’s protagonists have this two-dimensional character and lackadaisical outlook on purpose: that she’s making a point, perhaps, about the millennial generation, or about the world we live in. Perhaps I’ll have a better idea by the time I finish this book.
Reading in the summer is about finding new favorites, and letting yourself read whatever piques your curiosity—not limiting yourself to the “classic” (unless you want to), but rather exploring new genres, intriguing bestsellers, curious subjects or authors.
It’s too easy to fall into scholastic lists, and to lose the joys of exploration and adventure that are integral parts of reading. We can, instead, use our summers to let the creative and moral imagination bloom forth again: to set aside work and serious reads, and to delve into works that truly delight or excite us.
Millennials are forging their own path when it comes to church attendance and religion.
As a popular May Pew poll pointed out, many of these young people are veering sharply away from organized religion and towards the “nones” category, an opaque motley of agnostics, atheists, and the “free range faithful,” as Elizabeth Drescher calls them in her article for America magazine. They are, as she puts it, “ambling all about the religious landscape to partake of its diverse offerings without seeking a single set of answers (or questions) or intending to settle in one spiritual place.”
However, while they seem to be seeking more flexibility, Drescher’s article also notes that most “nones” have some specific goals in mind: they want something “relational and experiential, oriented toward being present to the spiritual based in the self, the other and the world instead of in structures of belief, belonging and behaving associated with traditional religions.”
Those who remain inside the church are seeking a similarly specific and personal religious experience—even if they’re looking for it in more conventional venues. Take this list of church qualifications, put together by a millennial named George. When Southeastern Seminary professor Chuck Lawless asked him what he would like to see in a local church, he responded with these personal guidelines:
- Bold preaching and sound doctrine that is not watered down. The church should speak truth without sugarcoating the gospel.
- Genuine opportunities to get involved, where he can make a real difference in the world.
- A real community of believers, people with whom he can hang out, but who also push and challenge him.
- A strong commitment to evangelism, especially locally. He would like opportunities to connect with and influence the local community for God.
- Worship services that are “unrehearsed, naturally flowing and Spirit-led,” but that also have an “authenticity that validates the message and structure that follows the Lord’s leading.” He would also like to see a “strong, team-focused worship leader” and variety in worship.
- Hospitality that welcomes others. He would like a church that welcomes strangers and does not “cocoon itself” around the familiar.
- Humility in leadership and flexibility in terms of where and when the church gathers. The “where” is not as important as that the congregation “truly be the church” and “truly know God.”
There is a danger in creating lists such as these: we come to the church with the attitude of a consumer, looking to see how it fulfills us, rather than approaching it humbly, with an awareness of our own insufficiencies. That said, George seems to have a refreshing appreciation for strong biblical doctrine, delivered forthrightly. This would seem to indicate that, despite some broader cultural trends, there are at least some millennials who want the Gospel—not a politically correct, modernized version of it.
Additionally, we can applaud George’s desire for tangible community connection within the church, reinforced by a strong local vision. There has been, amongst some churches, an abandonment of local outreach—whether for the glamor of globalized missions, or for the ease of intra-church networking. His list indicates a desire for more thoughtful and thorough ministry in the church’s own backyard.
But consider some of the other words used in George’s list to describe his desired church: “genuine,” “real,” “unrehearsed,” “natural,” “authenticity,” “flexibility.”
These words are reminiscent of the “free range faithful,” who are also engaged in an earnest search for something raw, organic, “authentic.” They’re just searching for it in a different venue than George is. The urge for a more natural religious experience is an old one, touted by various romanticists for ages, from Thoreau at Walden Pond in the 1850s to the unbound hippies of the 1960s.
In seemingly stark contrast to these free-range faithful are today’s high-church hipsters: people searching for authenticity, often seeking it in the old or obscure, scorning the modern trappings of their society. By going high church, these hipsters have rejected the flexible and unrehearsed vibe that George is looking for. Rather than demanding less structure and tradition, they’re finding their comfort in more.
But in their embrace of the old liturgical service, they often are tapping into an aesthetic preference—like Thoreau and George—more than they are embracing a doctrinal and theological way of life. Take Jimmy Fallon, who talked to Terry Gross of NPR about attending mass growing up: “I just, I loved the church,” he said. “I loved the idea of it. I loved the smell of the incense. I loved the feeling you get when you left church.” Fallon stopped going to church when the service became less traditional.
Of course, there are legitimate problems with attempts to “modernize” old and carefully preserved traditions of worship, and Fallon is right to point these out. But at least some of the millennials who are going high church seem to be doing so because they see in it that vintage, nonconformist vibe they are after. Like smoking pipes and playing old records, it gives them a sense of authenticity, of separation from the vulgarities of modern culture. And in this sense, they are still members of the free range faithful: seeking something “genuine” and “real” perhaps, but not necessarily looking beyond the beautiful traditions to understand their core. Their definition of real and genuine involves more structure and form than George’s, but even liturgy can become another consumer’s choice in today’s church.
The search for “authenticity” is difficult to define or to complete, because we ourselves are a mess of contradictions and charades. How can we properly scrutinize our own motivations—and the motivations of others—to determine what is real, and what is fake? Both in personal and communal living, there are layers of facade we must break through, subconscious lies that must be confronted. Most attempts to liberate ourselves from societal or religious masks will lead to greater confusion and disguise: the hipster who smokes his pipe or listens to old records out of a desire to separate himself from society is acting out a part he has scripted for himself.
We can treat church the same way: like something that’s supposed to give us a sense of nonconformist identity, something that will go against the grain amidst a mainstream culture gone mediocre. We can create our lists, and check them twice: noting with derision any perceived melodramatic elements or rote routines in a service or its congregants. We can leave with a shrug of our shoulders, explaining that the church just wasn’t “authentic” or “genuine” enough, explaining that things seemed too rigid and rehearsed.
But simply abandoning the church for its flaws is an improper response—just as flawed as arriving with a long list of qualifications and demands.
Leaving the church should not be done merely out of an aesthetic frustration with perceived artificiality. Pretension is truly indicative of the sin that cakes and coats our lives, covering us in layers of unreality, insulating us from each other. Whether we seek genuine religious experience within or without Christianity, we will find that artificiality constantly gets in our way—because truth is difficult to seek, and often easily disguised.
But this is what church is about: the slow stripping away of such unrealities, the slow sanctification of the body as we come—again, slowly—to see each other, and ourselves, with truth and grace. It’s a painful process, one that is often stalemated or short-circuited by our own flaws and shortcomings. No church is perfect, because no person in the church is perfect. Therefore, we will have to go through the process of becoming-real, no matter what church or denomination we join, no matter what pastor or priest we follow.
The truth is, the longer you are part of a church, the more you will begin to notice its dust and dimness, its fake smiles and half hugs. Many have memorized rituals that have no heart or purpose behind them. You will begin to see the church’s flaws, and they may frustrate or even disgust you. But if you seek church (or religious experience) somewhere else—reveling in all its polished “authenticity” and golden sheen—it will not take long before there, too, you will see the fatal flaws, the pretensions. And they may again drive you away, urging you into a “free range” faith that is ever seeking the authentic.
But you can choose to stay and to love this flawed and marred church, still so far from perfection. You can choose to walk amongst the faltering limbs of this body, this ailing bride, because you know that you too are a flawed limb. You know that you, too, have caked makeup over your raw sores, and have attempted to look “normal,” even perhaps “authentic.” You know that you’ve whitewashed your tombs.
Church is not about our perfection or authenticity. There are layers of sin and blindness that we have yet to uncover. But church is about Christ, as Rod Dreher reminded his readers last week. It’s about the Gospel. And that truth reaches out to us in our states of inauthenticity, giving us a chance to rise above the facades.
In response to housing difficulties and frustrations, James Poulos suggests that Americans should just move abroad:
… In the old days lots of our struggling citizens hit the open road, bound for parts of America that weren’t, let’s just say, Americanized yet. Yet today, we don’t have to perpetrate a genocide in order to emulate them. We just have to dare to start over in a foreign country.
The case for doing so ought to be pretty strong. Turned off by an exile in suburban Siberia? Pushed to the brink by gentrification? Unwilling to believe that a twenty- or thirtysomething adult must resign to an overpriced micro-apartment to access a life or a job of adventure?
These are all extremely powerful reasons to take the American Way somewhere outside America’s borders. … Libertarian-leaning cosmopolitans already sing the praises of Latin America. (For the not-so-libertarian, soon Cuba will be in play.) Commercial centers in the Persian Gulf and Southeast Asia offer selective but distinct opportunities. Some religious Americans will find deep nourishment in Europe; others, of a different disposition, in sub-Saharan Africa. … Americans ready to adopt local ways — not just adapt to them — are very likely more welcome, and more desirable, than we are apt to imagine.
But will such an emigration ever actually happen? Poulos says no—”Americans just feel too tied down emotionally to ever move abroad. … There’s also our national problem with risk. We’d rather pile up tiny, stupid risks, guaranteeing that we’re trapped in lives we dislike, than gamble our futures on one big risk.”
Yet Poulos does not mention, oddly enough, the ties to neighborhood and community that often compel people to stay, despite challenges and personal hardships. This loyalty to place is illustrated well in a Los Angeles Review of Books piece, titled “The Fight for Frogtown.” In it, author Molly Strauss considers the dilemmas of staying put in the midst of a gentrifying neighborhood:
Elysian Valley — also called “Frogtown” — is sandwiched between the Los Angeles River to the east and the I-5 freeway to the west. Traditionally working class with a large immigrant population and, historically, a mix of residential and industrial uses, the 0.79-square-mile area has seen properties flip since the Army Corps of Engineers announced a year ago a $1-billion plan to revitalize the Los Angeles River. … With the city repositioning its river as an amenity rather than a flood control channel, the neighborhoods along its banks are receiving unprecedented attention. Property values are rising.
Factors like these leave certain Frogtown residents on edge. They’re worried about displacement. They’re worried about commercial activity coming in that won’t serve the community. … They’re worried about density and height — developers building as big as they can legally go and dwarfing the one-story houses; about naked bike rides and eminent domain. Some of these fears are well founded (that naked bike ride really did happen — “a 200-nudist cycle parade down Blake Avenue,” said De La Torre). Others, less so. Regardless, there’s a sense in the community of losing control — that outsiders serving private interests will transform this neighborhood of about 8,900 into a place residents will no longer be able to recognize, or afford.
As local resident David De La Torre tells Strauss, Frogtown’s community isn’t afraid “of the people that are coming in. Elysian Valley is the most embracing of neighborhoods. The dynamics of its demographic makeup are proof of that. Dissatisfaction comes from the failure to recognize the characteristics of the neighborhood.”
What De La Torre seems to put his finger on is the deeper change of ethos that drives established communities out when gentrification moves in: it isn’t just that land prices go up. It’s also that the community fabric changes: marijuana dispensaries and nudie bars don’t appeal to the middle-income, blue-collar, elderly, conservative, churchgoing neighborhoods that De La Torre describes. Their fight isn’t just for housing—it’s also for their way of life.
Speaking of urban ways of life, Citylab has just published its Highrise Report. Some interesting discussions therein include this look at the UK’s tower blocks, the “Anglo-American backlash against the modernist tower,” and how high-rises may in fact become trendy in the future:
In the 1960s, Britain was building more public housing than any other Western European country, and even gave then-Communist Eastern European states a run for their money. Such a radical transformation of the urban landscape with new forms was especially likely to create a backlash, not least because England (though not Scotland) traditionally focused on house building, making the concept of the apartment complex itself contentious.
… “The obvious comparison is with 19th century tenements,” says Glendinning. “In the 1970s, they suddenly went from being the most reviled thing everyone wanted demolished to something that was generally seen as a kind of heritage. There was a transitional period in the ‘70s when a lot of people were still saying how ghastly they were, while others were advocating for their preservation. I think that [when it comes to modernist high rises], we’re in that transitional period now.”
The article brought to mind Benjamin Schwarz’s excellent cover story on Britain’s urban redevelopment, published earlier this year. If you haven’t read it yet, you should. He considers ways in which this sort of housing actually worked to undermine the British working-class family and its way of life. It presents an interesting counter-narrative to Glendinning’s description of the tower block.
It is a way of life—and its destruction—that each of these pieces considers. Poulos looks at the narrowing housing options that Americans face, and suggests a sort of pioneer movement into new territory. While this isn’t necessarily a bad idea, it could also spur an abandonment of place that could leave things worse than they were before. In contrast, the Futura de Frogtown project seeks to present solutions to a current housing problem: to prevent destructive urban planning from taking place, and thereby enable a community to stay put.
The frightening alternative—one that could be realized either by Poulos’s suggestion, or by the urban planning described in both Strauss’s piece and Schwarz’s—is that of an undermined and fragmented community, one that slowly begins to collapse as its familial and communal fabric frays and dissipates.
Damon Linker writes for The Week about America’s struggle with obesity:
… A lot of us go about much of our lives in a state that could be described as a low-grade anxiety attack — and stuffing our stomachs with vast quantities of unhealthy food to soothe it, even if the resulting weight gain and worries about health problems ultimately contribute to making the anxiety worse.
What’s going on here?
We find one helpful suggestion in, of all places, the pages of Martin Heidegger’s philosophical masterwork Being and Time — at least part of which is concerned with exploring the multitude of ways that people flee from their mortality.
All of us know, intellectually, that we will die. But Heidegger suggests that we only come to grasp it existentially in the mood of anxiety. In anxiety, the average everyday pursuits that normally occupy and absorb us recede and appear drained of meaning. … It can be a chilling experience — and so we flee from it, throwing ourselves more fully and more deeply into the world, finding comfort and solace in its seeming (but deceptive) solidity. Addiction and obsession are particularly intense forms of this fleeing into the world, Heidegger proposes, since they turn one particular entity within the world into the nearly exclusive focus of our existence.
But if that’s the case, then an obsession with food — the consumption of which assimilates worldly entities into our very selves, causing a visceral feeling of fullness, which compensates for the haunting perception of existential emptiness that accompanies anxiety — may be among the most potent ways to ward off an existential crisis.
If Linker is right, food obsession—be it in excess or in defect—is often tied to a deeper angst, one we’re trying to assuage at every opportunity. Many of our food fears or indulgences are tied to a sort of worship: we make food an item of first importance, and it becomes impossible to resist. Either we are tethered to health with unbreakable bonds of fear—fear of mortality, perhaps, fear of a lack of control, fear of imperfection or public shaming—or we are tied to unhealthy foods by cravings for love or safety or comfort.
Our attitudes toward food often also seem inextricably tied to our community: we eat according to the habits and customs of those we share space and time with, and often according to the collective fears or pressures they impress on us.
For instance: I grew up in a lower-to-middle income area in midwestern Idaho. The county beside mine was one of the poorest in America. The area was full of large (and thus frugal) families, along with many immigrants. It was the sort of place where you stopped by Taco Bell or Wendy’s after church, where grabbing a Carl’s Jr. burger after a baseball game or school activity was simply a matter of course. Many moms I knew were religious coupon-clippers, stocking up on breakfast cereals and canned foods in order to stretch their groceries as far as possible. My mother knew how to incorporate fruits, veggies, and whole grains into every meal—but overall, the region was not very health-focused. If there was any sort of pressure exerted on the community, it was to be frugal, to avoid excess spending. The organic movement was seen as wasteful, and those who shirked fast-food restaurants were seen as snobbish.
When I went to college, the communal eating landscape changed. I became acquainted with the salad crowd: salad for lunch and dinner, most often, piled high with vegetables and just lightly tossed with vinegar. There was a greater emphasis on the various dietary components of a dish—carbs, calories, sugar. I encountered East Coast eating, which is most often more health-oriented and restrictive than eating in midwestern or rural areas. And indeed, there are a variety of good and admirable principles embraced here, principles I’ve adopted. But I also saw the extremes to which this healthy eating could be taken: I watched women turn rejecting glances toward the dessert table every night, saw them painstakingly order egg-white omelets with no cheese and only x amount of veggies at the breakfast counter, heard rumors of sugar and fat that made peanut butter an unhealthy indulgence. Even whole grains seemed increasingly off-limits: carbs were viewed with a fearful or tentative eye. I began to feel dismay and frustration. What could we eat, then, besides a smattering of spinach and balsamic vinaigrette? Would peanut butter—the dearest of all snack foods—be forever tainted in my memory?
This is when I joined another community: one very much aligned with from-scratch cooking aficionados like Michael Pollan, but also very flexible, and geared toward moderation. My sister-in-law calls it the 80-20 crowd—people who try to eat healthy at least 80 percent of the time, but don’t mind a little splurging the other 20 percent. We still eat lots of fruits, veggies, oatmeal, et cetera—but there’s also the occasional delectable doughnut, juicy burger, slice of grease-oozing pizza. And they’re enjoyed to the fullest. One could say that this is an attempt to eat according to Aristotle’s conception of virtue.
Just as we can be tethered to certain food habits by our fears, as Linker points out, we are tethered to our community and often deeply influenced by its habits and mores. Those who live in D.C., filled as it is with fit people and SweetGreen stores, are more likely to feel public pressure to live and eat similarly. Those who live in an area where people sneer at the obsessively healthy are more likely to stop by KFC for dinner. But the two go together: we face pressure from within and without to live—and eat—a certain way. It seems incredibly complicated, and indeed it is.
I often wonder if these modern complications and pressures surrounding food are at least somewhat tied to the fading of religious food codes in America. Throughout history, people have faced pressure to eat a certain way, within certain limits: but these eating tenets used to be tied to one’s religion (and in many countries, still are). Thus, they weren’t really about the self, and they weren’t just about one’s community, either: they were reminders of our spiritual state, of our inner wellbeing. They were often reminders of the importance of nature, and of the care and stewardship we should show towards animals and plants alike. They were meant to foster self-control and an uplifted eye: an attitude that forsook the self and its craven appetites for something deeper, more lasting. They were meant to turn us toward the divine, and thus to assuage our anxieties and encourage our faith. Patterns of moderation, self-control, and self-sacrifice weren’t just about getting a lean, toned body, or being accepted by one’s peers—though care for the body and involvement in community were also seen as important. But in subjugating the self and its appetites, the goal went beyond the temporal and stretched toward the eternal. Choosing to fast, or to partake in a period of restrictive eating for a season like Lent, was in fact about looking beyond oneself.
This, too, is a community I would like to join. The community it ties us to is not one of shaming or judgment, but rather one of accountability and support—one that reminds us to look beyond ourselves, and to consider our own cravings secondary to the larger earthly and spiritual landscape that surrounds us.
Linker is right to see the obesity epidemic in America as a spiritual crisis. But I would argue that it’s also, by extension, a community crisis: the result of an atmosphere in which secular voices have adopted religious language and forsaken both moderation and reflection. To transcend this crisis requires us to find the right voices: for the secular, this may involve adopting a moderate and considerate community, like the 80-20 crowd. For the religious, it may involve reconsidering the dietary codes that have been forsaken with time—not in order to become shackled to legalism, but rather, to be released from it.
Americans are moving away from home ownership—for a variety of understandable reasons, it would seem. Mechele Dickerson reports for The Conversation that many millennials, facing economic uncertainty and a bevy of student loans, find the idea of home ownership rather distasteful at the present. “Americans of all ages are renting rather than buying, mostly because wages have been stagnant for all workers except the highest earners for about three decades, and because wages have not kept pace with home prices,” she writes. “In addition, potential first-time home buyers and those with blemished credit are being shut out because stricter lending standards make it harder for them to qualify for a mortgage loan.”
The rental rate is almost at a 30-year high, and the homeownership rate is at a 20-year low. “Until renters become more optimistic about their economic future, they will not be convinced to buy homes. And until they buy homes, there will be little reason to celebrate homeownership.”
Meanwhile, The Washingtonian reports that D.C. (and other cities) are moving away from car ownership, as apps like Uber and Lyft, as well as delivery services like Amazon and Peapod, continue to shape our streets and navigation preferences. The magazine predicts that Washington in particular will become a denser, less car-prominent city in the years to come—and that many other urban centers will follow suit. “Small things, we’ve learned, can alter neighborhood dynamics in a big way,” notes author Benjamin Freed. “And with every resident—car-owning or not—who comes to rely on a newfangled transportation service, or stops expecting there to be parking outside her home, or leads a life in which a bike lane or new streetcar is essential for getting to work, the politics change a bit.”
Speaking of politics and home, Jake Meador has an interesting blog post over at Mere Orthodoxy, in which he considers the importance of home-as-retreat and the “Benedict Option” (see Rod Dreher’s blog for extensive coverage of the idea). Meador looks to L’Abri, Francis and Edith Schaeffer’s retreat for inquiring young students, and asks the question: “Could this be the best way for the Benedict Option to be realized?” L’Abri and its descendants are not just focused on retreat, however, but also on begetting: “The begetting is the key. The Benedict Option cannot simply be a refuge or haven from the forces that exist outside of it. It must also be an incubator, a place that remakes the world. If the Benedict Option is not an incubator as well as a retreat it will fail.” Meador goes on to advocate very strongly for the home as center of one’s cultural and familial existence:
Creating a home takes time and requires sacrifices of us. These demands force structures upon our lives that constrain our autonomy but through which we arrive at true freedom. This means that the differences of the faith must touch our material lives in tangible ways. We cannot go on having both parents work full-time jobs outside the home, thereby reducing the work of home-making to the coordination of consumption patterns and reducing the home itself to a kind of high-dollar storage shed. We cannot go on entrusting the formation of our children to government-run schools that reinforce rampant individualism and undermine more humane values. We cannot go on living life at a pace that makes silence and contemplation and the sharing of unhurried time impossible. These are the routines, habits, and customs that will eventually devour Christian community.
It’s interesting to consider how—and whether—Meador’s vision of home is changed at all by our new sharing economy, and the declining rates of home ownership. Do these things affect the way we interact with our neighbors and treat our houses? Will they propel us toward or away from hospitality? In some ways, current transportation trends could be beneficial: they don’t seem to have a negative effect on the way people inhabit their homes, and could in fact encourage people to spend more time in their neighborhoods, building local ties. The renting trend doesn’t necessarily seem detrimental, except for the fact that—in the absence of ownership—there’s a temptation to act more like consumers and less like stewards. We may invest less in our houses and neighborhoods, because we feel no sense of duty or responsibility toward them. But this doesn’t have to be true.
Finally, on a fun note, the Los Angeles Times has published a summer booklist for readers, compiling potential reads by genre preference.