C.S. Lewis wrote in The Screwtape Letters that there are two types of gluttony: one of much, and one of little. Modern versions of gluttony amongst sophisticated people, he argued, are often characterized by close attention to diet and an insistence on less of one thing or another—even when it inconveniences other people. Such a person is wholly enslaved to sensuality, but doesn’t realize it because “the quantities involved are small.” (Or local/artisan/organic?) Yet “what do quantities matter, provided we can use a human belly and palate to produce querulousness, impatience, uncharitableness, and self-concern?”
What would induce a person to join the ranks of ISIS? More than you might think, Ross Douthat argues over at the New York Times: in a world that offers us temporal rather than eternal promises, the Islamic State is reminding young people that they have souls. As the New York Review of Books piece that Douthat quotes points out,
France’s Center for the Prevention of Sectarian Drift Related to Islam (CPDSI) estimates that 90 percent of French citizens who have radical Islamist beliefs have French grandparents and 80 percent come from non-religious families. In fact, most Europeans who are drawn into jihad are “born again” into radical religion by their social peers. … ISIS is a thrilling cause and call to action that promises glory and esteem in the eyes of friends, and through friends, eternal respect and remembrance in the wider world that many of them will never live to enjoy …
This is something the West has a hard time understanding, says Douthat. In a society focused largely on the material and the rational, we don’t understand why young people would join a band of extremists notorious for beheadings. Yet in underestimating the force of ISIS’s message, we undermine our own ability to fight them:
The deep reality here (a reality not unlike the one that’s playing itself out on certain college campuses right now) is that many human beings, especially perhaps young human beings, still crave a transcendent purpose, even in a society that tells them they don’t really need one to live a comfortable, fulfilling life. And more than that, many people experience both a kind of liberation and a kind of joy in submission to these purposes, even — as is the case with ISIS — when that submission involves accepting forms of violence and cruelty that rightly shock the conscience of the world.
… “Nothing costs enough here,” Huxley’s Savage complains about the brave new world. If ISIS costs, a certain meaning-starved cohort in our world thinks, maybe that just means it’s real. That cohort is still mercifully small, and unless radical Islam acquires a lot more intellectual cachet it’s likely to remain so. But if the West’s official alternative to ISIS is the full Belgium (basically good food + bureaucracy + euthanasia), if Western society seems like it’s closed most of the paths that human beings have traditionally followed to find transcendence, if Western culture loses the ability to even imagine the joy that comes with full commitment, and not just the remissive joy of sloughing commitments off — well, then we’re going to be supplying at least some recruits to groups like ISIS for a very long time to come.
“Nothing costs enough here.” It’s true of Western society in many ways—especially in the realm of the spiritual and philosophical. This is something Rod Dreher has pointed out in his columns about “moralistic therapeutic deism”: while ISIS has given people a story of transcendence, Western churches have settled for “rationalism and do-goodery.” We’ve cheapened our Gospel by cutting out the supernatural and the difficult—by making it primarily about this life, and about pleasing people, rather than refocusing on the eternal and on God.
While the fears and doubts expressed by many American Christians over the Syrian refugee crisis are understandable, I think they are often symptomatic of this refocusing on the temporal and rational, rather than the eternal and spiritual. On Friday, I argued for The Week that Christians should be encouraging the U.S. government to admit refugees. This argument could have focused on presenting a rational, data-driven discussion of the costs and benefits: whether refugees will pose a risk to national security, whether we have the means to both screen and house them properly, etc. And there are some excellent resources on this subject, giving intelligent arguments for why the risks are much lower than most Americans think.
But instead, I tried to focus primarily on biblical and ethical arguments for Christians to consider—primarily because of the argument Dreher and Douthat are making. We religious people in the West are far too quick to secularize our conversations, focusing on the material and not the spiritual. We focus on the societal, political, and personal implications: on the worries of this life. And in so doing, we sell our religion cheap. We cut the heart out of it, and only strengthen the Islamic State’s cause. We show that we are not as devout as they—that we offer no equal (or superior) path of devotion to follow. We offer only the comforts of this world, and in the process, cut off the lost and alone from both the temporal and supernatural comforts they are craving.
In his Lenten message in February, Pope Francis warned listeners to be wary of “globalized indifference.” He said,
The love of God breaks through that fatal withdrawal into ourselves which is indifference. The Church offers us this love of God by her teaching and especially by her witness. But we can only bear witness to what we ourselves have experienced. Christians are those who let God clothe them with goodness and mercy, with Christ, so as to become, like Christ, servants of God and others. This is clearly seen in the liturgy of Holy Thursday, with its rite of the washing of feet. Peter did not want Jesus to wash his feet, but he came to realize that Jesus does not wish to be just an example of how we should wash one another’s feet. Only those who have first allowed Jesus to wash their own feet can then offer this service to others.
Some may think that a deep focus on the transcendent would deaden our hearts and deafen our ears to the sufferings of this world. But as Pope Francis pointed out, it’s the exact opposite: a person transformed by the supernatural is uniquely able to serve those who live in this world. As C.S. Lewis once said, throughout history “the Christians who did most for the present world were precisely those who thought most of the next.”
This is the joy that can fight “the joy of ISIS”—it’s one that offers healing, comfort, and peace, rather than a gospel of stealing, killing, and destroying. But in order for the searching to find it, someone must preach it.
We’re all obsessed with productivity, Melissa Gregg writes for The Atlantic. A bevy of mobile and computer apps beckon to us with their promise that—through a magical concoction of distraction-defying settings—they can help us work smarter and save time. Why do we crave productivity so? She considers:
With names like “Self Control,” “Omnifocus,” “Rescue Time,” even “Freedom,” productivity solutions offer liberation from as much as consolation for everyday demands. In providing mastery over incidental matters (what time management manuals have long referred to as “trivia”), human failings can be overcome. … Productivity apps facilitate the pleasure of time management, which is ultimately the pleasure of control. Their various platforms offer strategies for closure and containment, from shutting down email and non-essential communication to identifying peak performance periods and ideal moments for efficiency.
Why do we desire this “control,” “closure,” and “containment”? Gregg thinks such longings are brought on by the nature of the modern working world:
The consumer appetite for productivity techniques reflects an environment in which work has spilled over from the office to the train, airplane, hotel room, even bed. Productivity tools offer to protect workers from the creep of jobs that lack clear beginnings and ends, whether in hours clocked or outputs produced.
I can’t help but wonder if the cause is deeper, though. The desire for control—to overcome “human failings”—speaks more to a discontent with our human condition than to a mere working environment. Even away from the desk and computer, humans seem increasingly governed by the pull of the clock—by an ever-present awareness of time and its passing. This obsession, I would argue, is guided by an undergirding awareness of finitude, and its repercussions. We grasp at control, because we are mortal. We fret over our failings, because we know they are our downfall. We obsess over time saved, because we know time is all too short.
I think of this when I pull up the GPS on my phone, and find myself obsessively checking the “ETA” throughout my drive. If I’m using Google Maps, it’ll quickly alert me when a faster route becomes available. I speed even when I’m not in a hurry, just to see the minutes disappear on my ETA—to see that I’ve saved precious time. The very idea that I’ve “saved time” can give a sensation of pleasure and satisfaction.
Meanwhile, commercials offer quick-and-easy alternatives to any and every cooking, housecleaning, or maintenance job. Dinners get hurriedly prepared in microwaves or crockpots, coffee in Keurigs or even instant packets. Because efficiency—time saved—beckons to us like a siren from every corner.
There’s nothing wrong with such desires. But when efficiency becomes an obsession, our lives become a constant, headlong rush. Our obsession with saving time results in no time—at least no time for the sorts of slowness that lead to bursts of intellectual creativity, physical health, and spiritual contemplation.
Over at Humane Pursuits, Emily Carde argues that our mad rushing about hampers our ability to see and appreciate beauty:
If there is one lesson you should pull from Homer’s Odyssey, it should be that the journey is just as important as the arrival (at least that’s what was emphasized in my freshman Western literature class). I took many insights from Homer’s story, but that particular lesson has strongly influenced my life.
I love adventuring, exploring, creating, and experiencing new things. But I have found that when you pursue any of these only for their destination or end goal, you miss half the beauty and joy. … There is a beauty in the journey itself that is often overlooked because we are so anxious about the destination.
Recent studies have shown that when we give ourselves time to be bored—time to daydream and let our minds wander—we become more creative. Our brains are able to ponder new ideas and questions. They can slip into a thoughtful laziness that “productivity” won’t allow. Productivity, in essence, can be a real danger to creativity.
Now, the good news is that productivity can set time free for the empty, unhindered, even boring moments in which creativity comes alive. But we have to seek out such time consciously and purposefully—and be willing to set aside all the to-do lists and frantic rushing in order to enjoy it.
It’s also true that free, “unproductive” time allows us to properly care for our bodies. One of the things busy people seem all too eager to sacrifice is sleep. Sleep is good for us—in fact, it’s necessary if we’re to be fully healthy. If you’re a runner and want to run faster, you have to get a good amount of sleep. If you’re struggling with sickness and want to be well, one of the first and best remedies is sleep. If you want to become more fit, making sure you sleep a healthy amount is vital. Yet we so often treat things like sleep—along with making healthy meals from scratch, or stretching tight muscles—as guilty pleasures, as things that truly busy and important people can’t indulge in too much. This attitude slowly eats away at our bodies, leaving us with less and less time to care for them.
I often wonder, too, if our obsession with productivity—with “filled time,” in essence—stems from a fear of free time. If we’re ever stuck in a moment of silence, we usually turn on the radio or television, grab our phones, or log onto our computers. We have to fill the empty space. But is this all about being “productive”?
Louis C.K. doesn’t think so—in 2013, he told Conan O’Brien on late-night television that he thinks we avoid such moments because we’re afraid:
… Underneath everything in your life there is that thing, that empty, forever empty. That knowledge that it’s all for nothing and you’re alone. It’s down there. And sometimes when things clear away and you’re not watching and you’re in your car and you start going, Ooh, here it comes that I’m alone, like it starts to visit on you just like this sadness. Life is tremendously sad. …
That’s why we text and drive. Pretty much 100 percent of people driving are texting. And they’re killing and murdering each other with their cars. But people are willing to risk taking their life and ruining another because they don’t want to be alone for a second. … I was alone in my car and a Bruce Springsteen song came on … and I heard it and it gave me a kind of fall, back-to-school depression feeling and it made me feel really sad and so I went, “Okay, I’m getting really sad,” so I had to get the phone and write “Hi” to, like, fifty people. … Anyway, I started to get that sad feeling and reached for the phone and then I said, “You know what: Don’t. Just be sad. Just stand in the way of it and let it hit you like a truck.” So I pulled over and I just cried like a bitch. I cried so much and it was beautiful. … Sadness is poetic. … You are lucky to live sad moments.
Free time, or alone time, can be dangerous—because it leads to moments of thought. And it can lead us into thoughts we would rather suppress. Once again, that simmering fear of finitude is present. In moments of quiet, it can rear its ugly head. But it is in considering the deep questions that we are able to understand what it means to be human, and to participate in this flawed and frightening—and beautiful and blessed—human existence.
Productivity. Efficiency. Things I so often yearn for, so often spend every day obsessing over. Yet when I look back over the past year, my favorite memories are the empty moments—moments of unscheduled, unexpected quietness that led to real connection, revelation, or wholeness. The time my brother and I sat watching the sun set over the Oregon coast, talking about life and dreams and philosophy. The time my husband and I walked along a lake in West Virginia, and he told me to sit and be still, to stop making checklists and worrying about the next thing. The times of journaling, writing down thoughts, reading books in coffee shops and bookstores. Long walks with Hobbes the Irish Setter: watching him play in creeks and puddles, enjoying the sounds of nature.
I have a lot to learn about letting go. But if there’s one thing this year has taught me, it’s that productivity can buy you time—but it can’t bring you happiness.
Oxford Dictionaries’ word of the year isn’t really a word at all. It’s an emoji. TIME magazine reports:
Caspar Grathwohl, the president of Oxford Dictionaries, explained that their choice reflects the walls-down world that we live in. “Emoji are becoming an increasingly rich form of communication, one that transcends linguistic borders,” he said in a statement. And their choice for the word of the year, he added, embodies the “playfulness and intimacy” that characterizes emoji-using culture.
It’s true that emojis have begun to dominate our national—and global—conversations over the past year. They’ve inspired a book. Some have tried to communicate entirely through their medium. I even found an “Emojisaurus.”
For the less practiced (or obsessed?) emoji user, the little symbols are a fun way to explain ourselves further when space and time are both short, and when we cannot see the face of the person communicating with us. Emojis help to further explain our mood, to add a touch of humor to a text or post, to bring a glimmer of creativity or personality to our online communication.
On the other hand, emojis can also quickly dominate our conversations, making it even more difficult to communicate. They may become a stand-in for details, feelings, or expressions that we could (or even perhaps should) use words to describe. They may become a crutch we use to display emotive feeling. And for those who do not use them, their absence may result in misunderstandings and hurt feelings: in her book Reclaiming Conversation, Sherry Turkle notes that her daughter thought her text messages were terse or angry, because they lacked the punctuation and emoticons she’d come to expect from her peers:
Why does my daughter think I am angry with her when I text? She explains: ‘Mom, your texts are always, like, “Great.” And I know it’s not great. What’s happening? What are you really thinking?’ There is no convincing her. When I texted her ‘Great,’ it was because that really was what I meant. If she were with me in person, that is what I would have said. But “Great” as a text message is cold. At the very least, it needs a lot of exclamation points. … I add emojis to my iPhone. Emojis are little pictures of cats, hearts, buildings, lightning bolts, many hundreds of little things, and I feel ridiculous when I use them. I use them anyway. I ask my daughter if they are helping. She makes it clear that she knows I am trying.
The anecdote is humorous, in part because probably all of us know someone who’s struggled to communicate their real emotions or mood via text. But in a larger sense, what does this move to the emoji exemplify? What does it communicate about our current state of communication?
My generation in particular seems to love emojis—along with other forms of image-based communication, such as the gif or meme. Such images often convey the color and wit we want to bring to our conversations. They encapsulate the feeling of a sitcom one-liner or YouTube video punch line—for the millennials, who grew up watching television and viral videos online, they’re a lingua franca of pop culture references and humor we can call upon.
But what happens when such image-based communication becomes a replacement for dialogue—when an emoji becomes the “word of the year” in 2015, even though it isn’t really a word at all?
This seems to be a further triumph of the image over the word—the typographic being subsumed into the visual. This isn’t a new trend, but it has definitely picked up steam in the internet and smartphone age. One of the greatest concerns Neil Postman had with this trend (as he explains in Amusing Ourselves to Death) was that it influenced not just the way we communicated, but also the content of our communication. He believed the television age was also, because of its medium, the age of entertainment. And it seems that the internet age hasn’t changed its focus all that much: the subject of all our emojis, memes, and gifs is usually humor and light-heartedness. They keep us in the realm of the happy or silly—turning the corner in a conversation that begins to grow “too serious,” offering a note of flippant humor in the midst of a debate.
There’s nothing wrong with humor or light-heartedness; but it is wrong if our conversations continue to stay there, ever at the simmering point, never allowed to boil. It’s when we convey deep emotions or strongly-held views through dialogue that we learn more about each other, and about ourselves. We need words in order to express the entirety of our character, souls, thoughts. Emojis are fun—but they can’t do that. Not in the same way, with the same depth.
I write all this as someone who enjoys using emoji. It’s fun and light-hearted. I send my running friend pictures of cheetahs and lightning bolts and flexed muscles before her marathons. But if emojis take over our conversations, we could lose something priceless: the ability to go deeper, past the (literally) cartoonish, and into the realm of the real—where earnest and meaningful conversations reside.
Since Friday, the media has been absorbed with reporting on the terrorist attacks that wreaked havoc on Paris. Twitter was abuzz with live updates on all the latest news, while Facebook focused on updating its users on the attacks, providing a “safety check” feature to those in Paris, and prompting users to support the French via special red, white, and blue profile picture filters, or with hashtags such as #Parisjetaime, #PrayforParis, or #Jesuisparis.
But these signs of solidarity, while well-meant, will most likely all disappear within a few more days. I don’t want to sound cynical or unkind. But even the profile picture filter is—as Facebook calls it—”temporary,” one that users can only put up for a short period of time. The feelings, while largely sincere, are also temporal. This should lead us to consider how social media prompts us to respond to global catastrophes and tragedies—and whether the emotions it generates are truly sincere.
Take Rurik Bradbury’s story: after the Twitter user posted a sarcastic (and fallacious) tweet on Friday about the Eiffel Tower having its lights off “for the first time since 1889,” nearly 30,000 people—including news organizations—retweeted his message. People finally started to call him out on the tweet, criticizing either the error or his sarcasm. But the point of his tweet was to demonstrate “why the rapid sharing of anything vaguely inspiration-shaped after a tragedy was so unsettling,” says the Washington Post‘s David Weigel. Bradbury wrote in an email to Weigel, “The part that feels the most useless to me is people’s vicarious participation in the event, which on the ground is a horrible tragedy, but in cyberspace is flattened to a meme like any other.” He continued,
Millions of people with no connection to Paris or the victims mindlessly throw in their two cents: performative signaling purely for their own selfish benefit, spreading information that is often false and which they have not vetted at all, simply for the sake of making noise … Instead of silence or helpfulness, social media pukes out stupidity, virtue-signaling and vicarious “enjoyment” (in a psychoanalytic sense) of a terrible tragedy by people thousands of miles away, for whom the event is just a meme they will participate in for a couple of days, then let fade into their timeline.
The Atlantic’s Megan Garber disagrees with Bradbury: she argues that social media’s response to the Paris terrorist attacks “is an act of mass compassion,” or more specifically, “compassion that has been converted, via the Internet’s alchemy, into political messaging. It is empathy, quantified.” She admits that this will change in the days to come,
just as all the “je suis Charlie” avatars reverted soon enough to human faces, just as all the marriage-equality rainbow filters dissipated, inevitably. The attention will also, as it were, flag. But, for now, all these expressions of solidarity with France are notable. Together, they treat the Internet not just as a commercial platform or a public square, but as an engine for empathy.
In her book Reclaiming Conversation, author Sherry Turkle considers the ways in which the media—and social media—have altered our reactions to news in the public square. She writes,
… The media supports a view of the world as a series of emergencies that we can take on, one by one. Events that have a long social and political history are presented as special, unusual, “unthinkable” events: massive oil spills, gun violence against elementary school children and their teachers, extreme weather—for the most part, all are represented as catastrophes. You know you are thinking in terms of catastrophe if your attention is riveted on the short term. In catastrophe culture, everyone feels part of a state of emergency but our agitation is channeled to donating money and affiliating with a website.
… Faced with a situation that you experience as an emergency, you want to use social media to huddle with your friends. A twenty-three year old who was in middle school during 9/11 says, “Most of the emergencies that are broadcast on the media, you can’t do anything about. There’s no action you know how to take that would improve the actual circumstances.” This does much to explain how the fretful self navigates the media stream of bad news: We learn about something, get anxious, and connect online.
There seem to be two especially popular ways to “connect online” in the wake of a catastrophe or disaster of the sort we’re seeing in Paris: the first is to show solidarity, through a #Kony2012 or #BringBackOurGirls hashtag, for instance. As I’ve written before for Acculturated, online social media campaigns generally make us feel good about ourselves, without forcing us beyond our spheres of comfort. Despite the collective voice our hashtag battles or profile pictures can amass, there is little practical worth in a tweet or a filter. This isn’t to condemn people’s efforts to show solidarity in this way—it is just to make sure that we don’t mistake such sympathy for real empathy, which, when genuine, runs deeper and lasts longer. “Empathy quantified” is not, I would argue, actual empathy.
The second way people “connect online” in response to catastrophe or disaster is to, unfortunately (yet inevitably), connect over controversy.
The controversy over Paris is starting to build now: one example is the recent protest over the fact that Beirut’s terrorist attacks—suffered on the same day—have largely been ignored in social media and the press. What many of us did not realize was that Paris was the second area to be targeted by ISIS militants on Friday: as David A. Graham reports for The Atlantic,
Hours before the carnage in Paris on Friday, a double suicide bombing ripped through a working-class shopping district in Beirut. ISIS claimed responsibility for the explosions, which caused 43 deaths and hundreds of casualties in the worst bombing to strike the city in a quarter century. Then came ISIS’s attacks in France, which quickly subsumed much of the attention that might have been directed toward Lebanon.
… Viral articles on Facebook are demanding to know why the Beirut attacks have been overlooked. Lebanese have lamented the discrepancy. Many people are asking why Facebook didn’t allow people in Lebanon to check in as “safe” on the social network, as the company did for those in Paris.
There are some defensible reasons as to why the one attack got more attention than the other: as Graham points out, “There were three times more deaths in Paris than in Beirut.” Additionally, Graham notes that familiarity and proximity may have played a role: many Americans know people in Paris or have visited the country themselves. We have a political history with France that dates back to our founding. Many feel some sort of connection to Paris via pop culture, as it’s been depicted in countless films, TV shows, and songs.
It could also be that the refugee crisis in Europe—a subject of debate and controversy for some time now—also caused people to pay greater attention to the situation in France. In recent weeks, some on the right have suggested that Europe’s massive wave of refugees could have security consequences. Regardless of whether they are right, that discussion has been percolating in the media long enough to lend this situation an air of political controversy for some, of political justification for others. And of course, in the responses we’ve seen in the media, many have been quick to use Paris as an opportunity to debate or discuss the refugee situation further.
None of this gives us reason to have ignored Beirut so blatantly—its situation in the Middle East is both unique and important, as Kevin A. Lees writes for The National Interest. But it does perhaps show our tendency to focus on the glamorous: the city of Paris, with all its cultural landmarks and sentimental ties, over a little-known, little-visited region of the Middle East with which few in America feel a natural solidarity.
But “solidarity” is a tricky feeling to muster. For countless people in Paris, a loved one is dead: a friend, family member, spouse, or child. Someone who went out innocently to enjoy their evening, and never came back.
The same happened in Beirut. As the New York Times notes, “Ali Awad, 14, was chopping vegetables when the first bomb struck. Adel Tormous, who would die tackling the second bomber, was sitting at a nearby coffee stand. Khodr Alaa Deen, a registered nurse, was on his way to work his night shift…”
Lives have been lost, mercilessly and needlessly. If we are to be honest, while we can try our best to empathize—to put ourselves in their shoes, or in the shoes of their loved ones—both emotional and physical distance will keep us apart.
So while there’s nothing wrong with the new (albeit temporary) profile pictures, with the memes, with the shows of “solidarity”—let us not mistake them for real, lasting empathy. And let’s not lose ourselves in the controversy and outrage porn that will likely continue to churn through the media in coming weeks. Hopefully the people of Beirut and Paris will experience true, immediate compassion in the days to come—not just the short, quick outbursts that social media is likely to foster.
It always took work to get me out of the house growing up. There was no place I’d rather be than curled up beside our fireplace with a book and cup of tea in the winter—or outside on the swing (with that same book) on a summer’s evening. Home was my place of rest, my sanctuary.
And our home really felt like a magical place: Mom has always had a knack for creating cozy, beautiful spaces. It was surrounded by rosebushes, with old books on the shelves, brightly lit rooms, hot oatmeal and coffee for breakfast on cold winter mornings. A friend recently told me, “Your house always had a peace about it that I envied.” It’s a peace that I believe had a spiritual dimension to it, as well as a physical one—which makes it only harder to replicate.
Yet there came a point when I realized that my “home” was no longer the home I grew up in. Though I’ve been in Virginia for over six and a half years now, the transformation has been quite recent. There was a time when I would have still called my parents’ 1930s brick house in Idaho “home.” Not on purpose—it would just slip off my tongue. “I’m visiting home,” or “we’re going home for the holidays.”
My husband and I have now lived in Alexandria for two and a half years. In that time, we’ve found our favorite coffee shops, met neighbors, walked our dog along numerous trails and streets, frequented the local farmers’ market. We’ve planted a garden, painted the kitchen, decorated a little nursery that’s soon to be filled with a new life.
And as I sit in this quiet room—sunshine spilling across wood floors, breeze wafting through open windows—I know. This has become home, a place of rest and joyful living. So it makes me wonder: when does a place become a “home”? How does that transformation happen? A few things seem to have brought about this change:
First, a home is a place where you build a rhythm. It’s a place in which you spend enough consistent time to build a presence, a pattern. The more you commute out of your place of residence, the less chance it has of becoming a home. You have to invest yourself—time and body—in it. You have to understand its seasons.
I always think of the passage in The Return of the King where Sam is reminiscing to Frodo about the Shire. He says, “It’ll be spring soon. And the orchards will be in blossom. And the birds will be nesting in the hazel thicket. And they’ll be sowing the summer barley in the lower fields… and eating the first of the strawberries with cream. Do you remember the taste of strawberries?” Sam knows the Shire, its patterns and rhythms, to the point where he can predict them without being present. There is always a piece of him there, embedded from years of service and love.
There have to be whole days where you soak in a place’s presence, bury yourself in its projects, take in its colors and character. It has to be a place whose ethos you come to know and love through regular living and investing. A place where you scrub the floors, fill the kitchen with smells of cooking and baking, spend hours poring over books or work projects. The more time spent in that space, the more you develop a sense of its character, its feel. You discover what it needs in order to more fully develop its potential, in order to reach its form (if it’s all right to use a Platonic idea to describe the decorating and building of a home).
Second, you have to seek the good of your home—and the good of the land surrounding it. This often means giving up something: time, energy, most often money. It means investing in your local community, discovering its cares and concerns. It means getting to know your neighbors, at least a little, and seeking to offer them community (and to accept that community in return). It means taking care of the leaky roof, shabby yard, dying plants. It means weekends of raking leaves and planting trees, repairing roofs and replacing windows. It means town hall meetings, sometimes, or researching the local elections before you go to the ballot box. It means cultivating a sense of pride and warmth toward your neighborhood, your town—along with a healthy understanding of its weaknesses and flaws.
It is, in a sense, the art of “husbandry”: an art described by Wendell Berry in one of his essays as “the name of all the practices that sustain life by connecting us conservingly to our places and our world; it is the art of keeping tied all the strands in the living network that sustains us.” It’s not just about productivity, fame, or profit—husbandry is about building a web of life within a place, cultivating it and helping it flourish.
Which connects to the third point: a home must be full of life and living things. A sterile house—wiped clean of earth and plant, animal and person—seems to be lacking a feeling of “home.” There’s something warm about the house that’s got dust bunnies in the corners and a cat or dog curled up in a spot of sunshine on the floor. Or the house that’s spotted all over with ivies and ferns, pots of herbs and grasses.
Yet even better is the house that nearly rattles with the sounds of laughter and joy—with the chaos and clamor of people, old and young. With the sound of people breaking bread together around the dinner table, reacting passionately to Sunday football games, punctuated with the sounds of children laughing (or screaming), the pitter-pat of their eager feet. It needn’t be a perfect house, or an immaculately kept house. The house that’s lived in, and full of life, reverberates with love and joy in a unique way.
As Shauna Niequist puts it in her book Bread and Wine, “What people are craving isn’t perfection. People aren’t longing to be impressed; they’re longing to feel like they’re home. If you create a space full of love and character and creativity and soul, they’ll take off their shoes and curl up with gratitude and rest, no matter how small, no matter how undone, no matter how odd.”
Finally, a home must be filled with stories and memories. This goes back to the idea that time spent in a place is vital to building a sense of home. One of the reasons Idaho was “home” for so long was because it held my most deeply-cherished memories: my grandmother reading aloud to my sister and me until we fell asleep. My great-grandfather telling stories about his childhood—driving a four-horse team, digging ditches—and about his wife’s beautiful singing voice and immaculate hospitality. Christmas evenings spent reading Grimm’s Fairy Tales by the crackling fire. Nighttime stargazing with my sister and father. Playing “Narnia” with my brothers, hiding in the big wardrobe in one of the spare rooms. Working on math homework in my dad’s office, or helping my mom can peaches late in the summer. Simple things that built a fabric of memories and belonging.
In Marilynne Robinson’s Home, protagonist Glory considers an important memory from her childhood—a ritual that characterized her early days:
How to announce the return of comfort and well-being except by cooking something fragrant. That is what her mother always did. After every calamity of any significance she would fill the atmosphere of the house with the smell of cinnamon rolls or brownies, with chicken and dumplings, and it would mean, This house has a soul that loves us all, no matter what. It would mean peace if they had fought and amnesty if they had been in trouble. It had meant, You can come down to dinner now, and no one will say a thing to bother you, unless you have forgotten to wash your hands. And her father would offer the grace, inevitable with minor variations, thanking the Lord for all the wonderful faces he saw around his table.
These are the memories that turn a house into a home.
The memories are still being built here. It’s a new home, at least to us, and we have countless stories to uncover. But arriving home from work to the smell of my husband’s freshly baked bread or sweet rolls, to a happy puppy who covers my face in kisses, to flickering candles and a thick book full of mystery—these are the little things that have slowly built character and belonging here. Harvesting jalapeños and rosemary from the garden, hanging new pictures, growing a regular rhythm of guests and visitors: as we fill this place with new stories and new lives, I know it will continue to sink its roots deep into our hearts. It’s possible to have secondary homes, and Idaho will always be that for me—with its own sense of peace and joy, its own set of rhythms. But this place, too, has become home, and it’s where I feel my new stories and memories brewing.
During Tuesday’s GOP debate on Fox, Senator Marco Rubio made two controversial statements that fueled a social media fire for much of the night. First, he argued that “Welders make more money than philosophers”—a claim that was quickly shot down by fact checkers. He quickly followed up that statement by saying, “We need more welders and less philosophers.”
Now, first, it’s worth noting that—unless your definition of “philosopher” is very broad—we probably don’t have nearly as many philosophers as we do welders here in the U.S. But that’s really beside the point. Within Rubio’s statement are a series of assumptions about education and vocation that we should consider carefully.
The first claim Rubio seemed to be making was that technical work such as welding—perhaps the sort of work one might call “blue collar”—is more important and valuable than the ideological, intellectual work of a philosopher.
This could’ve been a statement made primarily to appeal to populist GOP voters. It fits with the “us vs. them” rhetoric that has come to dominate our politics—the idea that elites in Washington are out to get you, that the snobby media care little for the lives of everyday Americans (echoed in Cruz’s later statement that journalists would care more about illegal immigration if their jobs were the ones in jeopardy).
But honestly, I don’t think we ought to question the important role philosophy plays—or at least, ought to play—in our political and cultural lives. Those who study ideas and try to transmit them to the larger public are often (though not always) performing an important societal role. Not all of them will be famous or world-changing. But as New York Times columnist Nicholas Kristof put it,
Rubio: “We need more welders and less philosophers.” Ouch. The world would be worse off if Plato had been a construction worker.
— Nicholas Kristof (@NickKristof) November 11, 2015
So yes, welders are important. And we need them. But philosophy matters too—and all too often, ideas (and their consequences) get tossed by the wayside in American political debate. We need voters who are also deep thinkers, who are asking the questions Rand Paul repeatedly asked Tuesday evening: What does it mean to be a conservative? What are we trying to conserve?
Perhaps what Rubio really meant was that fewer college kids should pursue “impractical” majors like philosophy or the humanities, and that we should instead encourage them to get vocational training. His claim seemed to indicate that profit and practicality should be the primary motivators behind the job or major one chooses.
But welders do not, in fact, make as much as philosophers, as has already been pointed out—and Rubio’s statement rests on an assumption about education that is both fallacious and potentially harmful.
While there is nothing wrong with pursuing a technical degree, there is nothing wrong with pursuing a classical liberal arts education, either. In fact, as TAC contributor and George Washington University professor Samuel Goldman pointed out on Twitter on Tuesday:
But seriously, majors in traditional liberal arts subjects have no problem getting or keeping decent jobs. — SamuelGoldman (@SWGoldman) November 11, 2015
Granted, we have problems with student debt and degree completion that indicate it wouldn’t be a bad thing if more people pursued vocational training. Too many students graduate from four-year universities with mountains of student loans weighing them down. But vocational training isn’t the only answer, or the best one, to this problem. A classical liberal arts education trains the mind and soul of a person. It gives students a breadth and depth of knowledge that extends beyond the realm of career, and helps make them better thinkers, communicators, and responders. These are skills a person can bring into any job setting.
Take, for instance, Carly Fiorina: regardless of what you think of her as a political candidate or businesswoman, it’s worth noting that she got a degree in philosophy and medieval history from Stanford University… then went on to become CEO of Hewlett-Packard. It’s not exactly welding—but it’s a high-paying job in a technical field, which seemed to be what Rubio was getting at.
Or we could talk about Matthew Crawford, author of Shop Class as Soulcraft and The World Beyond Your Head—he has a Ph.D. in political philosophy, but after experiencing the doldrums of desk work, decided to become a motorcycle mechanic. Yet does this mean he spurns academia or the education he received? It does not appear so; as he puts it in Shop Class as Soulcraft, “When the point of education becomes the production of credentials rather than the cultivation of knowledge, it forfeits the motive recognized by Aristotle: ‘All human beings by nature desire to know.’”
Ultimately, the best sort of education is one that does not just make you a better worker, but rather the sort that makes you a better, fuller human being. The sort that informs your soul as well as your mind, your understanding of ethics and prudence as well as your understanding of a given skill set. The classes that correspond with a philosophy major or degree are the very sorts that help cultivate phronesis—prudence—an understanding of the world and its workings that helps us make the best decisions in our homes, our workplaces, or at the ballot box.
To be a “philosopher” is to be a lover of wisdom, if we are to take the Greek literally. And that is not something to be despised. It’s a virtue even a welder can put to good use.
This week, I’ve read a defense of bad coffee, and a defense of white bread—alongside a story about artisan wheat and another about artisan apples. The latter stories tell me all that I’m missing by indulging in big Costco loaves or grocery bags of granny smiths; the former tell me that it’s okay to love those mainstreamed, instant-pleasure sorts of things you pick up at big box stores or diners: “the big, red jars of Folgers, the yellow Chock-full-o-Nuts, the sky blue cans of Maxwell House,” or the bread “engineered and designed to look like a streamlined wonder, like an edible piece of modern art.”
Funny—New York Times reporter Ferris Jabr says of that same bread, “America has grown wheat tailored to an industrial system designed to produce nutrient-poor flour and insipid, spongy breads soaked in preservatives. For the sake of profit and expediency, we forfeited pleasure and health.” He speaks of the glory of wheat grown in bygone days:
From the 18th century to the early 19th century, wheat was grown mainly near the coasts. During this time, immigrants and American emissaries introduced numerous varieties — Mediterranean, Purple Straw, Java, China, Pacific Bluestem — which breeders tinkered with, adapting them to various soils. All that preindustrial wheat was a living library of flavors: vanilla, honeysuckle, black pepper. Agricultural journals of the time noted the idiosyncrasies of wheat kernels — whether they were red and bearded, velvety or “plump, round, of a coffeelike form” — and distinguished wheats that produced “excellent” and “well-flavored” bread from those that yielded “inferior” loaves.
Sounds like a wine connoisseur—not someone tasting a kernel of wheat. Yet the same attention to details of taste, color, and texture is displayed in this New York Times article about artisan apples: “the best-flavored heirloom apples,” writes author David Karp, “offer an added dimension of intensity and complexity akin to that of fine wines.” He writes that “many consumers are fed up with mass-marketed fruit chosen mainly for looks and shelf life. Their current quest is to restore the flavor and eating quality, despite the compromises required by large-scale production.”
But one wonders whether eventually, our quest for artisan apples or wheats may reach the point Keith Pandolfi describes in his bad-coffee defense—whether we’ll reach peak artisan, and develop a sort of nostalgia for the old cheap versions we used to get:
Maybe it all started a few months ago when I found myself paying $18 for a pound of what turned out to be so-so coffee beans from a new roaster in my neighborhood. It was one of those moments when I could actually imagine my cranky diner-coffee-swilling Irish grandfather rising from the grave and saying, “You know what, kid? You’re an idiot.”
It’s more than just money, though. I’m as tired of waiting 15 minutes for my morning caffeine fix as I am waiting the same amount of time for my whiskey, cardamom, and pimento bitters cocktail at my local bar. I am tired of pour-overs and French presses, Chemexes and Aeropresses. “How would you like that brewed?” is a question I never want to hear again.
… Cheap coffee is one of America’s most unsung comfort foods. It’s as warming and familiar as a homemade lasagna or a 6-hour stew. It tastes of midnight diners and Tom Waits songs; ice cream and cigarettes with a dash of Swiss Miss. It makes me remember the best cup of coffee I ever had. Even though there was never just one best cup: there were hundreds.
Alongside Pandolfi, I have to admit that I really don’t mind buying a granny smith or a fuji (though red delicious are too mealy to be enjoyable). They’re both tart, crunchy, and taste great with either a block of aged cheddar or a couple tablespoons of peanut butter. That’s all I need. Not to say I won’t go to an apple orchard and go crazy over all the gorgeous heirloom varieties, bringing home several to try. But I don’t feel dogmatic about it.
And that’s what Pandolfi—and many others I talk to—seem to be responding to: a growing sense of dogma or religious devotion surrounding our food industry. They sense that one’s food choices can prompt a bevy of applause or disdain: some may cheer loudly as you go buy that Egg McMuffin, but others will shoot you a look of disgust as they munch on their artisan croissants—or perhaps sip their local, freshly-pressed green juices. And the disdain often goes both ways: you’re just as likely to get eye rolls if you put kombucha and kale in your shopping cart.
There’s an increasing “fixation on the virtue of food” amongst Americans, Claudia McNeilly writes for Broadly, one that can have very dangerous consequences. I’ve touched on it in blog posts about food in the past—but McNeilly explores one of its furthest, and most dangerous, manifestations: orthorexia. A sort of eating disorder only recently identified by the medical community, it is characterized by an obsession with “virtuous” (aka healthy) food, and a refusal to eat anything outside those specific boundaries.
I know about this from firsthand experience. There was definitely a time in college when I struggled with obsession over the healthy. It’s easy, when you begin to develop legalistic attitudes toward food, to gradually shrink your idea of what’s permissible. To slowly go from occasionally avoiding meats or dairy products, perhaps enjoying Meatless Monday and saving sugar for weekends, to refusing to eat anything fried, sugary, or fatty… to then thinking that all meat, dairy, gluten, and egg products are bad for you… to then eating a plate of raw fruits and vegetables without salad dressing, because you’re afraid of the calories.
When food becomes an end in and of itself, it becomes destructive.
Not everyone obsesses over healthy food in an orthorexic fashion—but there are other food obsessions that can dominate our lives. We can become fixated on the artisan, the local, the organic, the fair trade. We can refuse to drink or eat anything from McDonald’s or Dunkin Donuts. We can choose to only buy groceries from Whole Foods, or refuse to eat our grandmother’s casserole because it’s got Campbell’s cream of mushroom soup in it.
But at this point, food ceases being food, and becomes religion. Our world has developed an ethics of dietary consumption that worships the self, the idealized body. Whether our diet of choice consists of juicing, raw food, paleo dieting, vegan eating—or eating at McDonald’s or Pizza Hut every night—modern styles of eating tend to resemble religious codes more than they do a set of gastronomic guidelines.
I was talking to my younger brother, a barista at a local indie coffee shop, about the bad coffee story. He summed up his views on the subject thus: “It’s good to be capable of having a bad cup, or good cup. You never know who might be brewing your coffee. The key is to appreciate the art, and be content with the crap.”
Seemed like wise words to me. Hopefully, with time, we will learn that all food is to be appreciated, in moderation—whether it be heirloom apples or Folgers coffee.
Age segregation isn’t new—throughout time, young adults have separated themselves into their own class(es), building their own cliques and culture. But Mark Bauerlein writes for First Things that this sort of segregation is actually gaining momentum: “Social media, in particular, have accelerated youth contact, serving mainly as a vehicle of peer pressure and adolescent culture”—
This is why family physician and psychologist Leonard Sax in his latest book, The Collapse of Parenting, emphasizes its opposite, parental authority. “In many families,” he writes, “what kids think and what kids like and what kids want now matters as much, or more, than what their parents think and like and want.” In disputes over whether kids can do this or that, go here or there, get this new gadget or not, the kids prevail. Parents face two forces that persuade them to comply, or simply wear them down, one, the insistent demands of kids (reinforced by youth media and consumerism), and two, the opinions of other children. All the weight falls on the younger side: “In American culture today, same-age peers matter more than parents.”
This is particularly distressing, Sax says, because the culture youths learn from one another is one of disrespect and indignity. Sax finds it in popular TV shows such as Dog with a Blog, Jessie, and Liv and Maddie, which display adults as clueless and incompetent, as well as hip-hop music and pornography (which boys view avidly).
Our culture has perpetuated this “us vs. them” relationship between young adults and their parents / older adults for quite some time: a brief perusal of recent Disney films confirms this as a solid, almost constant, theme throughout their work in the past several decades. The kid-as-hero—conquering (or gently lecturing) the ignorant and backward adult—is the sugary, self-commending propaganda millennials and the generations under them have been consuming for decades.
Social media only caters to this tendency: we build our cultural and social circles around an ever-changing web of trends and fashions. Adults who refuse to keep up—or perhaps worse, try to join in and be “cool”—put themselves at an enormous disadvantage. They solidify their standing as regressive and ignorant.
But honestly, we can’t just say that social media has deepened segregation between people of different ages—in many cases, it’s exacerbated segregation across economic, social, political, and ethnic spectrums, as well. Consider scrolling down your news feed vs. walking down the street:
How many people do you see who look similar to you?
How many vote similarly to you, and share news stories or opinions you’d generally agree with?
How many are the same age as you?
How many have the same religious preferences and beliefs you do?
My guess is that your news feed is more homogeneous than the average town or city street—by a long shot. Even growing up in small-town Idaho, I’d see more diversity than I see on my news feed today. At the local coffee shop, I’d run into old farmers, young Mormon families, Latino immigrants, kids from the local community college, homeschool moms, probably a few goth teenagers. We can’t easily skirt people of a different lifestyle, religion, or economic class when they’re walking past us in real time.
But fostering diversity online is a difficult proposition. It often requires purposeful curation. More than this, it requires maturity to realize that those different from you—those you may want to assume are ignorant or stupid—are actually deserving of respect. They won’t be forced upon you—so you have to seek them out, build them into your online life and experience.
It seems that young people’s tendency to dismiss adults is only a concentrated form of a larger progressive segregation that we’ve adopted wholeheartedly for some time—as Michael Dirda put it in an excellent consideration of reading and the humanities, we modern folks are constantly prone to “excessive privileging of the present”:
So powerful are the online forces of conformity and political correctness that it sometimes seems that knowledge of the past is being judged as irrelevant and every former age dismissed as unenlightened. For centuries, antiquity might have been over-reverenced; now earlier eras are condescendingly patronized, smugly disdained as racist, imperialist, classist, sexist, and generally reprehensible. Such presentism is intellectually impoverishing, as well as generally bad for one’s character, and should be resisted. The timeworn adage remains at least partly true: We are but pygmies standing on the shoulders of giants.
This is perhaps one of the most dangerous forms of segregation. If we separate ourselves from the lessons and wisdom of the past, we put ourselves on a perilous and unknown path. You may have some difficulty convincing your teenager of this—but it’s a lesson that all of us must learn in life. And it’s a lesson that we hope social media will not endanger: in the best of scenarios, social media would instead become a platform through which knowledge, relationships, and respect can grow, across generations and across economic, political, cultural, and ethnic spectrums. It’s something we can work on within the world of social media, without staging a mass exodus—but it will require conscientious effort.
Seven or eight years ago, I was at a salon getting a haircut. I watched a mom walk in with her toddler on her hip, a blond little girl who looked about two or three. The girl was holding a handheld DVD player directly in front of her face. Her mother placed the little girl on a chair in the front of the room, said hello to her stylist, and settled in for her haircut.
When the stylist said something about the little girl’s quiet demeanor, the mom replied, “Oh, we can’t live without that DVD player. We bring it with us everywhere. She starts screaming in the car if we don’t have it.”
I couldn’t help recalling this experience when I read the New York Times‘s piece published yesterday on children’s use of mobile devices. The small survey, while not nationally representative, adds to a growing body of data showing parents’ reliance on technology to raise, placate, or distract their kids—for longer periods of time, and with fewer boundaries or supervision:
One-third of the parents of 3- and 4-year-olds said their children liked to use more than one device at the same time, noted Dr. Hilda Kabali, a pediatrician and the lead author of the survey.
Seventy percent of the parents reported allowing their children, ages 6 months to 4 years old, to play with mobile devices while the parents did housework, and 65 percent said they had done so to placate a child in public.
A quarter of the parents said they left children with devices at bedtime, although bright screens disrupt sleep. “They are putting their child to sleep in an environment that keeps them from going to sleep,” Dr. Rich said.
According to the parents, nearly half of the children younger than 1 used a mobile device daily to play games, watch videos or use apps. Most 2-year-olds used a tablet or smartphone daily.
There are a variety of concerning components to this piece—the first being the multi-device usage mentioned in the first paragraph. This piece of information comes at a time when more and more people are acknowledging that multi-tasking is bad for our brains. To foster multi-tasking and multiple screen usage in children whose brains are still developing seems like a very harmful idea.
Second, this story shows that parents are increasingly using technology as a sort of pacifier: instead of fostering a relationship with their children and using disciplinary tactics, they employ technology as a child-soothing crutch. While this can be a great short-term problem solver, there are a variety of long-term problems that present themselves if it becomes a habit: it decreases parents’ ability to communicate with their children and develop a loving but firm relationship with them. It may actually incentivize kids to rebel, since they know they’ll get more screen time if they do. And if you don’t have a mobile device around, your kids may make life miserable and exhausting—because they’ve built an expectation around that mobile device.
As one psychologist said in a July New York Times story, “Children have to know that life is fine off the screen. It’s interesting and good to be curious about other people, to learn how to listen. It teaches them social and emotional intelligence, which is critical for success in life.”
Finally, there’s the bedtime aspect reported in the story. Using a device at night can disturb children’s ability to build a regular sleep schedule (vitally important to cognitive and physical development) simply because it’s distracting—and research has shown that exposure to artificial light from electronics can suppress melatonin and disrupt our sleep patterns.
And all of this doesn’t get into what too much screen time does to our eyes: eye irritation, dryness, fatigue, and blurred vision due to technology use are on the rise, according to a CBS News story. For kids, the problem can be even more serious: as one teacher with a degree in early child development told me, too much time spent staring at a screen at a fixed distance can keep children’s eyes from learning how to change focus. The eyes are muscles that need a diverse and varied workout, as the CBS story puts it—especially while those muscles are still maturing and growing. Our obsessive screen time does not lend itself to this.
Too much technology is bad for adults—just as an excess of junk food is. It’s a question of moderation. If we think of technology as a treat—a sugary dessert, perhaps—then the amount we give to kids should be measured and cautious, for their own health and wellbeing. A kid raised on Twinkies and doughnuts will miss out on key nutrients for their development and health. A child raised on iPhones and tablets will face similar challenges. So the question is, how do we foster a healthy, balanced technology diet in our children’s lives? And is there a point at which children are just too young to be using smartphones and tablets?
What is Halloween really about? The Atlantic‘s Megan Garber notes that, while it “used to revolve around, and revel in, familiarized fear—as the night before All Souls’ Day, it was historically celebrated as a time of communion between the living and the dead,” the holiday has increasingly changed as it’s become secularized (and commercialized):
[Halloween is] primarily about fun. Spectacle long ago replaced scariness as the core driver of Halloween festivities among both kids and adults, aided by the Costume Industrial Complex and the assorted quirks of the “one night stand” and the “Freudian slip” and the “cereal killer,” and also of the “9 Clever Costumes for Punny Procrastinators” and the “21 Insanely Clever Costume Ideas for You and Your Friends” and the “29 Halloween Costumes That Will Make You Nostalgic” and the “67 Awesome Halloween Costume Ideas.”
…Costumes, by their nature, are about thwarting social norms (the word stems from the Latin consuetudinem, or “custom, habit, usage”); when they collide with the current culture’s emphasis on creativity, however, they transform Halloween from a holiday into an excuse. For experimentation, for crossing lines, for dabbling with otherness. Our forebears may have spent All Hallows’ Eve fighting against goblins and ghosts; we, for our part, battle our Ids.
This reminded me of the consideration of identity Wesley Morris wrote for the New York Times a few weeks ago: he describes a world obsessed with image, and with the flexibility and translatability of that image in a modern era—a people who, detached from spirituality and tradition, are able to build their own definitions of the self and the good life. Could it be that Halloween, too, has become less about “the other” and more about our selves and how we want to define them—be it as a “sexy librarian” or a Superman?
Mark Tapson writes for Acculturated that people have traditionally liked Halloween—and the accompanying horror stories, haunted mazes, and creepy houses that come with it—because they like to be scared: because they like to consider the fact that there might still be something “else” out there, something besides our selves:
We are drawn to tales of ghosts and vampires and other creepy mysteries that point beyond human nature because we crave a direct experience of the supernatural, especially in a world in which the ascendance of atheism and the hostility of some prominent scientists to the supernatural has diminished that experience. … On one level, movies like The Sixth Sense and The Others are just Hollywood entertainment, but they affect us at least in part because on a deeper level they are also a chilling reminder of the ineffable veil that separates the living from the dead, the tangible from the intangible, the known from the unknown. We don’t know for certain what lies beyond that veil, but deep down we know it is there and the thought of breaking through that otherworldly plane and glimpsing “the sublime” simultaneously terrifies and thrills us.
As Rod Dreher has written in the past, many people—secular and religious—continue to have metaphysical, mystical experiences. The author of The Exorcist, according to a story recently published in The Washingtonian, believes his story was “an argument for God. I intended it to be an apostolic work, to help people in their faith. Because I thoroughly believed in the authenticity and validity of that particular event.”
Evidence of the supernatural is not easy to explain, yet it cannot easily be dismissed from human life. It’s something that all of us, regardless of our religion, will be forced to confront—and attempt to explain—at some point in our lives. Traditionally, Halloween forced us to consider such things: the things that frighten us, the things that seem inexplicable. Halloween asked us the question, “Are you all that exists? Could there be something else out there—and are you prepared to confront it?” Halloween forced us to consider the evil that paints our world: it reminded us of the darker sides of human nature, of the poisonous sins and errors that plague our world.
So what do we lose when Halloween becomes simply “fun”? What we lose whenever we take a holiday with spiritual roots and secularize it: we turn it into a time of buying things and gorging on food, reveling in the silly and taking lots of pictures—but it becomes a time without mystery or meaning.
This isn’t to equate Halloween with other religious holidays—by no means. It has pretty pagan roots, and I think a lot of Christians are right to be cautious of it: as Tapson notes, fear should be a reminder of what we can’t control or explain, and it’s far too easy to trivialize the spiritual—to a dangerous degree—during times such as Halloween. Of course, as a Christian, I also believe we should not be too frightened, because we don’t believe that evil has the upper hand in this world. We believe that light conquers darkness.
Yet this year, I’m also considering the ways in which the traditional Halloween has its meanings and purposes. I do think that Tapson is right: there’s something important about “our fearful fascination with the forbidden unknown, our yearning to be embraced by the sublime.” Halloween reminds us of the darkness that surrounds us, as well as of the fact that we are not alone in this world. In a month, Thanksgiving will arrive, reminding us of the people and blessings that comfort us, that bring us through trials and difficulties. And then—most importantly—Christmas will remind us that God is with us, that light can still shine through that darkness. It’s important not to forget the value of fear—especially when that fear can point us to the source of all comfort.
Friendship is on the decline, David Roberts writes for Vox—and he thinks that a sense of place and community (or lack thereof) has a great deal to do with it:
For the vast majority of Homo sapiens’ history, we lived in small, nomadic bands. The tribe, not the nuclear family, was the primary unit. We lived among others of various ages, to which we were tied by generations of kinship and alliance, throughout our lives. Those are the circumstances in which our biological and neural equipment evolved.
It’s only been comparatively recently (about 10,000 years ago) that we developed agriculture and started living in semi-permanent communities, more recently still that we were thrown into cities, crammed up against people we barely know, and more recently still that we bounced out of cities and into suburbs.
So everything about how we live now is “unnatural,” at least in terms of our biology. Of course, that doesn’t mean it’s bad — it’s generally a bad idea to draw normative conclusions from evolutionary history — but it should remind us that socially constructed living patterns have shallower roots than we might think from our parochial perspective.
Point being, each of us living in our own separate nuclear-family castles, with our own little faux-estate lawns, getting in a car to go anywhere, never seeing friends unless we make an effort to schedule it — there’s nothing fated or inevitable about it.
It’s worth noting that traditional agriculture, though perhaps not the most primitive form of tribalism, actually used to cultivate a tribal atmosphere—as can still be seen in the community-centric lives of the Amish, who rely on each other during harvest times and barn raisings, and build their social lives around each other. As Ronald Jager writes in The Fate of Family Farming, on the traditional farm, “Every farmer … knew his neighbors, and worked with them, and in harvest time exchanged work with them, each lending a hand to the other. Community, farm, ecosystem, crops, woodlot, animals, family, gardens, work, neighbors, worship, leisure—together they promised land and, when effective, shaped a coherent system, a total community of life.”
Much of the reason that we don’t see or interact with each other, Roberts argues, is because we’ve lost the art of the “spontaneous encounter.” This is the “key ingredient for the formation of friendships,” he writes: when we are “forced into regular contact with the same people,” it becomes “the natural soil out of which friendship grows.” Many of us encounter this in college—but as we grow older, the opportunity for such spontaneity fades:
… When we marry and start a family, we are pushed, by custom, policy, and expectation, to move into our own houses. And when we have kids, we find ourselves tied to those houses. Many if not most neighborhoods these days are not safe for unsupervised kid frolicking. In lower-income areas there are no sidewalks; in higher-income areas there are wide streets abutted by large garages. In both cases, the neighborhoods are made for cars, not kids. … Seeing friends, even friends within “striking distance,” requires planning. “We should really get together!” We say it, but we know it means calls and emails, finding an evening free of work, possibly babysitters. We know it would be fun, but it’s so much easier just to settle in for a little TV.
Wendell Berry has also written about this: he argues that the television has had a huge impact on the life of the neighborhood, perhaps just as much as (or more than?) the automobile. Before the television, many people would spend their summer evenings on the front porch, or invite people over for dinner. Their diversion, their amusement, was found in community—not in the entrancing distractions of a flickering TV screen. Now, we see similar patterns developing around the computer and smartphone, as well.
For some, the church still largely functions as a place for spontaneous and consistent meeting. However, people increasingly commute to church, often staying only for a minimal amount of time, without conversing or communing with other members. Many churches have tried to combat this trend by encouraging the development of “small groups” or book studies that encourage smaller circles of community to develop, but it remains to be seen whether this will be successful long-term.
Additionally, many of us commute for work away from our neighborhoods, thus creating a divorced lifestyle in which we rarely meet with the people who live next door. Not only does separating our spheres of work and social life make it more difficult for us to meet spontaneously—it divides our areas of interest and concern. Amish neighbors are worried about the same seasons, confront the same deadlines, help each other through the same dilemmas. This sort of united social and vocational interest is increasingly rare in American life.
Roberts points out that shared public spaces can function as a place for spontaneous meeting. It’s why we need safe sidewalks, parks, and places like the farmer’s market. It’s also why shared housing spaces like apartments can often (though definitely not always) cultivate a sort of tribal community: in a New York Times story published last Friday, Jennifer Miller tells the story of New York families who choose to stay in cramped spaces for the good of their children, and for the communities they’ve built. A father explained to her why they turned down cheaper, larger housing:
“Almost 100 percent of the reason was our daughter, Grace, and the relationships she’s built on our floor,” said Mr. Goepfert, 34, who is pursuing a degree at Columbia’s School of General Studies. The Goepferts live in a one-bedroom rental on West 119th Street in Harlem. Their floor has six children between the ages of 4 months and 7 years and the family next door has a baby on the way. Especially in the winter, the children hang out in the 12-by-15-foot landing outside the elevator. They ride their scooters down the hallway and kick around a ball. While their children are playing, the families keep their doors unlocked.
“As the kids have come together, so have the parents,” said Mr. Goepfert, whose daughter is 2. “We have dinners together, share toys, hand-me-downs, get help with last-minute babysitting. That kind of community isn’t all that common in New York, and it fosters an environment that’s so valuable that we turned down a significant amount of money.”
“These parents are willing to make significant sacrifices in space and living expenses to preserve the uber-local community their families have formed with others in nearby blocks or even inside the same building,” Miller writes.
Could such dedication provide an answer to Roberts’s dilemma? It seems that oftentimes, we give up community out of a desire for greater space, greater financial stability, a better job situation. But, as Roberts puts it, “we are meant to have tribes, to be among people who know us and care about us.” As our jobs increasingly carry us away from the neighborhood, and our reliance upon the car grows steadily, it may be that a simple dedication to staying put can help community grow. Because while our rootlessness often results in greater space, it can also compromise or neglect our deeper need for place: for a sense of home, and community.
How often have you stared out of a taxi or bus window, witnessing some moment—a meeting, perhaps, or a person shouting into their phone—and wondered what the deeper story might be? Who are they? Why are they upset or happy? To the curious, people-watching can be one of life’s greatest enjoyments. In Paula Hawkins’s The Girl on the Train, it becomes one of life’s greatest dangers.
The book follows protagonist Rachel Watson, a divorced alcoholic, as she commutes to London. Every day, the train passes her ex-husband’s house: one he now shares with his new wife, Anna. Every day, Watson watches the house pass, and aches for her old life, for all that she’s lost.
But there’s another house that Watson also watches: a home down the street from her ex-husband’s residence, owned by a young couple she’s nicknamed “Jess” and “Jason.” They often sit on their terrace in the mornings, and so Rachel’s gotten used to watching them—to the point that she feels as if she knows them. They’re a stand-in for all the domestic bliss she dreams of. “They are,” she thinks, “a perfect, golden couple.” That’s what she thinks, at least, until “Jess” (actually named Megan) disappears—and suddenly, her “golden couple” fantasy falls apart.
Watson starts out as an incredibly passive protagonist. She rides the train aimlessly and watches houses pass: deep in her pain, and usually deep in alcohol. She is a rather pitiful character, falling into drinking binges time and time again. As the novel progresses, she slowly becomes more active and purposeful—but she also falls back into those old habits with a painful consistency.
Huffington Post writer Claire Fallon compared the novel’s themes and feel to Gillian Flynn’s Gone Girl, and I can definitely see the resemblance: both books focus on protagonists who are deeply flawed, broken, often unlikeable. But that also makes them incredibly human, and it makes The Girl on the Train authentic in a way few thrillers are. We have no perfect damsel in distress, with serene good intentions and stunning beauty. Instead, the book’s stars are a trio of women with problems ranging from alcoholism to depression, from utter arrogance to complete emotional vulnerability. Watson’s unrequited desire for a baby and perfect family life leads her into a downward spiral of alcoholism and guilt; Megan’s tragic past and incessant panic attacks steer her toward sexual unfaithfulness; Anna’s vehement drive to preserve her domestic status and “perfect life” feeds her habits of cruelty and unkindness.
The men in Hawkins’s book mostly reflect the same sort of personal nuance, though there are a couple who seem to display a sort of one-dimensional badness. Hawkins is writing about misogyny and callousness, the sorts of relationships that break us down rather than building us up. It seems that in doing so, she’s tempted into creating villains that are occasionally cartoonish (though their impulses and words still display moments of ringing truth).
The Girl on the Train is refreshing and thrilling for many of the same reasons Gone Girl was: it contemplates the worst parts of ourselves, and shows the brokenness and sin that result from our self-centeredness. Unlike Gone Girl, however, Hawkins more fully contemplates the ways our love often gets tangled up in our brokenness, and how our emotions often lead us down paths we may regret later. The characters are less hyperbolic and unbelievable than Gone Girl’s (though that’s half the appeal of Flynn’s novel, so I’m not faulting her for that). Additionally, The Girl on the Train is less over-the-top in its plot—and purposefully so. In a Q&A with CBC News, Hawkins said, “The kind of crimes that happen in very mundane, ordinary, domestic settings, the kind of crime that could happen to all of us, the things that could be happening behind your neighbours’ doors — those are the things that I find intriguing and compelling rather than spies and serial killers. I’m interested in the domestic, everyday, ordinary and quite sad violence that goes on around us.”
In light of this, the book’s ending does seem a bit over-dramatic—it seems almost incongruous with the slow, gentle build of the rest of the book. But it doesn’t spoil the rest of what is an enjoyable, well-constructed novel.
It’s that combination of subtlety and surprise that seems to have kept The Girl on the Train atop the bestseller list since the beginning of the year—and will now pave the way for a film adaptation next year (starring Emily Blunt). It’s a book about women, and in many ways for women—it’s a challenge, perhaps, to those who allow themselves to get trapped in their brokenness. But I also think it’s a book that anyone who has struggled with addiction or heartbreak can find relatable.
As a former president of a church Women’s Auxiliary, I can say that one of the biggest problems I had was people’s unwillingness to commit to bringing things and to follow through by showing up with them. People today always seem to be waiting for a better event to come along, or they won’t come unless they decide that morning that they’re in the mood. It’s really quite unbelievable.
This isn’t the first time I’ve heard this complaint: when it comes to regular church attendance, commitment to voluntary associations, and participation in local elections, our overall involvement levels are down across the nation. Yet it’s these local, communal sorts of events that often build a strong and cohesive community.
In an interesting article for Aeon Magazine, Polina Aronson points out that when it comes to matters of love, Westerners increasingly build their lives around a “Regime of Choice”:
… In most middle-class, Westernised cultures (including contemporary Russia), the Regime of Choice is asserting itself over all other forms of romance. The reasons for this appear to lie in the ethical principles of neo-liberal, democratic societies, which regard freedom as the ultimate good. However, there is strong evidence that we need to re-consider our convictions, in order to see how they might, in fact, be hurting us in invisible ways.
To understand the triumph of choice in the romantic realm, we need to see it in the context of the Enlightenment’s broader appeal to the individual. In economics, the consumer has taken charge of the manufacturer. In faith, the believer has taken charge of the Church. And in romance, the object of love has gradually become less important than its subject. … The most important requirement for choice is not the availability of multiple options. It is the existence of a savvy, sovereign chooser who is well aware of his needs and who acts on the basis of self-interest.
… But perhaps the greatest problem with the Regime of Choice stems from its misconception of maturity as absolute self-sufficiency. Attachment is infantilised. The desire for recognition is rendered as ‘neediness’. Intimacy must never challenge ‘personal boundaries’. While incessantly scolded to take responsibility for our own selves, we are strongly discouraged from taking any for our loved ones: after all, our interference in their lives, in the form of unsolicited advice or suggestions for change, might prevent their growth and self-discovery. Caught between too many optimisation scenarios and failure options, we are faced with the worst affliction of the Regime of Choice: self-absorption without self-sacrifice.
Aronson encourages her readers to embrace the unpredictability and vulnerability that, while counter to the “Regime of Choice,” are also (in her opinion) necessary for truly loving relationships to develop. It requires an abdication of control that can be difficult, but “to become truly adult,” she writes, “we need to embrace the unpredictability that loving someone other than ourselves entails.”
The unpredictability, yes—and also the commitment. As Aronson’s story points out, modern Westerners live like consumers: associating different events or obligations with a social/personal price tag. We weigh the costs associated with each, and then pick according to our preference. Which will give us the most personal benefit and satisfaction? We often wait as long as possible to make a decision before jumping in. This is life according to the Regime of Choice—and it slowly kills community.
Sometimes I think it’s really more of a youthful tendency: there were countless times in college when friends would bail or events would get cancelled because a blitz of “never mind, found something else (aka more exciting/entertaining) to do” messages would come in at the last minute. Yet as we grow older, this tendency seems to fade: as people get married, procure full-time jobs, and begin balancing chores and bills, their desire for regular community and companionship seems to increase. If we’re going to see each other, we have to put it on the calendar. The mayhem of adult life seems to foster a deeper desire for commitment. It’s the only way we can build stability, and keep friendships alive.
Yet a defense of the reluctant committers seems necessary, too: depending on how busy we are, and how introverted we are, it’s sometimes difficult to commit to something up front—because we may end the week with no energy left, without having seen family members, without getting needed sleep. We may be overwhelmed by social commitments, by our own limits, and need some solace and quiet before diving into a new work week. These things happen.
But the question remains: when should such considerations trump the need to commit? When should we drop people in favor of comfort? When should we push through our exhaustion and go to dinner with a group of friends (or to a church potluck, say) anyway?
I’m not really sure. But one thing I do know: the more you establish a routine of gathering, the more relaxing and enjoyable it becomes. The more it seeps into the fabric of your weekend and becomes a place of solace, a comforting repast in the midst of life’s stresses. Sometimes all it takes is a few weeks, perhaps months, of committing—despite our deepest desires to break away.
Read this blogpost on a woman’s Sunday dinners with family. In it, you see that what could become a stressful or frustrating ritual instead becomes a time of comfort, camaraderie, and enjoyment—because 1) it’s a regular and expected part of the weekend, 2) people willingly and regularly commit to it, and 3) everyone chips in to make it enjoyable. This is the sort of tradition that, once started, seems to offer solace—not stress—to the participant.
Wesley Morris calls 2015 “the year we [Americans] obsessed over identity”:
Gender roles are merging. Races are being shed. In the last six years or so, but especially in 2015, we’ve been made to see how trans and bi and poly-ambi-omni- we are.
… What started this flux? For more than a decade, we’ve lived with personal technologies — video games and social-media platforms — that have helped us create alternate or auxiliary personae. We’ve also spent a dozen years in the daily grip of makeover shows, in which a team of experts transforms your personal style, your home, your body, your spouse. There are TV competitions for the best fashion design, body painting, drag queen. Some forms of cosmetic alteration have become perfectly normal, and there are shows for that, too. Our reinventions feel gleeful and liberating — and tied to an essentially American optimism. After centuries of women living alongside men, and of the races living adjacent to one another, even if only notionally, our rigidly enforced gender and racial lines are finally breaking down. There’s a sense of fluidity and permissiveness and a smashing of binaries.
Of course, one could argue that Americans have been grappling with this huge identity crisis for many decades—that questions surrounding gender identity have been building since the sexual revolution, for instance, and the societal and cultural change that happened around that time. (Though even this seems like drawing an unnecessary line in the sand.) But Morris is right that things seem to have picked up pace this year: our conversations surrounding the self and what it constitutes, what role (if any) our bodies play in it, and how we can change or redefine ourselves have become increasingly prevalent, and often clamorous.
Jake Meador responds to Morris over at Mere Orthodoxy, applauding many of his points, but also noting that something is lacking in his analysis:
Morris ignores one of the most important questions concerning identity: the role that place plays in shaping and defining a person’s life and work. Throughout the essay he explores how various individuals struggle to find a comfortable or secure identity, typically as they press up against the pressures of social expectation and custom, two things which we today simply assume are negotiable boundaries that are actually meant to be transgressed. Yet what Morris never wrestles with is the role that physical places and individual people play in the shaping of an identity. It is relatively easy to dispense with limitations when it is an abstraction like “social acceptability.” It is quite another when it is the fact of a known land, known people, and a known way of life that knits the two together.
I like Meador’s inclusion of this, because it also acknowledges the diversity that still characterizes this issue in the U.S. Our questions of identity often take place on a national stage, played out in dramatic news and social media venues. But our actual ability to build an identity relies much less on these platforms, and much more on the places we call home, and the people that populate them. While the individual attributes that constitute the self are of course important indicators of identity, one’s family, home, neighborhood, private associations, friend group, and state are also important. And all of these things often foster our own intuitive sense of self in important ways.
Of course, angst and a feeling of not belonging can still plague one’s interactions with one’s place. I would argue that as we put greater and greater emphasis on self-analysis, these feelings will only be exacerbated. This is not to say that we should simply adopt herd mentalities and go with the social flow of our place—but an inability to set aside one’s differences to associate with one’s community often leads to societal fractures and estrangement. And oftentimes, national media emphasize this “if you don’t agree with me, you’re against me” sort of attitude—to the detriment of real communities and relationships.
We live in a world where self-analysis is abundant and applauded. Facts increasingly bow to the power of feelings, history to the power of hubris, place to the power of personality. The separation of self from place, and the increasing atomization of society, have played a large role in this—with great help from social media and modern communication technology. If place and people help craft our identity, then decreasing physical interaction with both would seem to have a negative effect on our ability to build a confident, anchored sense of self. Meanwhile, constantly obsessing over one’s identity and how one is perceived can only spiral into a self-fixation that is bad for society—and bad for ourselves. Grounding ourselves in community and in the support systems we have can help relieve the identity crises that leave us ever lost, searching for ourselves, unable to find answers.
What happened to the church potluck? Richard Beck fears it is an increasingly rare and neglected tradition:
… Potlucks are happening less frequently and, when they do happen, they aren’t done very well. A symptom of a potluck gone bad at our church is when the potluck has to be supplemented by Little Caesar’s pizza.
So I have to ask, is the Golden Age of the Church Pot Luck over? It seems so.
With our friends we floated two hypotheses about the decline of the potluck. The first was church size. It seems that churches are either very big or very small, making it harder to achieve the sweet spot for a congregation-wide potluck.
Our other hypothesis was about a loss of generational skill. The consensus was that our mothers and grandmothers really knew how to do a potluck. And the main thing was that they brought to the potluck a ton of food, enough for their family and many, many more. And that, we all know, is the secret to having a good potluck. You have to have a critical mass of people bringing more food than they or their families will eat. A lot more food. And our mothers and grandmothers had go-to pot luck dishes to help produce this abundance.
Though Beck has a good point about church size, I think it applies more to the large church than to the very small—having attended a tiny church that held potlucks every other week. The very littleness of our church guaranteed that we would be meeting up regularly: events were easy to coordinate, and easy to host. At a large church, if any such gathering happens, it’s usually catered by an outside organization—which seems to remove some of the personality, initiative, and enthusiasm. Can you compare a picnic with Chick-fil-A sandwiches to a potluck with Mrs. Smith’s grandmother’s chili and Mrs. Johnson’s famous cornbread?
This ties in to Beck’s point about skill: if we aren’t used to providing food for large groups of people, we won’t have the tools in our tool belt with which to do so. And it seems that large communal events are an increasingly rare thing—with the exception of Christmas and Thanksgiving, how often do we host a large group of people (let alone make all the food ourselves)?
For those of us who grew up in a potluck-y church, we saw our mothers and grandmothers do this very thing. I was raised in an area where casseroles, “salads” (often of the potato and jello varieties), and pies were ever-present at social functions. And these women knew how to cook in bulk—I don’t think I even knew an 8×8 pan existed until middle school. The 9×13 reigned supreme. Our church had one Sunday potluck a month dedicated to soups: everyone brought their favorite family recipes, with appropriate accompaniments (homemade rolls, cornbread, cheddar or sour cream, et cetera). The regularity of these events, their steady rhythm of comfort food and care, made it so each of us knew what was expected, and had a support team (as well as a swath of amazing recipes) when our turns came to cook.
But another modern potluck problem that Beck doesn’t mention—one I think worth considering—is that a change in diets may have affected people’s comfort levels when bringing food to an event. Perhaps I’m wrong, but it seems that people are pickier these days. We have such an abundance of special diets—gluten intolerance, lactose intolerance, paleo diets, vegan diets, et cetera—that it becomes genuinely challenging (and intimidating) to bring food to a function. I understand that many of these new diets and eating styles are related to medical problems—but many are not. At the church potluck, you can try to please everyone, yet still walk home with an untouched crockpot or casserole dish. And there’s nothing more heartbreaking than having poured your heart and soul into a dish for an event, only to drive home with full hands.
You see, the hospitality of a potluck requires gracious bakers and cooks—absolutely—but it also requires gracious eaters. It requires those with picky taste buds and dietary preferences to be as magnanimous as they can be. It requires those who may look down on meat-eaters to exercise some charity when they see a platter of bacon-wrapped jalapeño bites (and vice versa: the church potluck isn’t time to tease or make fun of your vegan friend). Meanwhile, it requires us to be aware of those with actual medical restrictions, to strive to learn how to cook for them—while also requiring them to be aware and gracious of our ignorance: to be willing to provide their own food, and to slowly educate other cooks on what they can or cannot eat.
The church potluck is an incredible opportunity for us to break bread together: to enjoy each other’s company, and learn valuable lessons (and recipes) from each other. But it also requires a lot of us: as Beck puts it, it’s “excess and abundance that makes the hospitality of a potluck possible, allowing the spontaneous invitation to the visitor who comes empty-handed to be an experience of gift and grace.” It can be difficult to buy the groceries and dedicate your Saturday night to putting together an abundance of food for a potluck. But it’s grace—an excess and abundance of it—that enables us both to feed each other, and to be fed.
This is the question asked—and to some extent, answered—by George Handley in his Los Angeles Review of Books review of Mark Stoll’s new book Inherit the Holy Mountain. Stoll argues that environmentalism has very religious roots—and that these roots are, in fact, essential to the ecological movement:
The takeaway from Stoll’s indispensable and game-changing study is quite simple but profoundly important: “a religious perspective gives the history and development of environmentalism a trajectory, unity, and power.”
… [F]or Puritans the world was animated by God and served as a vital witness of God, second only to the Bible in its importance. For Calvin, God is in the world, and therefore earthly life is not what separates us from Eden, but rather the conduit back to God and the means of finding ourselves in paradise, here and now. Second, because the world is dynamic and the creation ongoing, we need a method of reading it that stimulates empirical understanding and metaphysical hope in the same breath, akin to what Stoll calls the “Calvinist desire to truthfully render creation” in Dutch landscape painting.
Yet this tradition, Stoll and Handley argue, has largely been lost in modern Christian circles:
A disturbing implication of Stoll’s study is that the communitarian ethos is largely absent from the more vocal elements in American Christianity today, further deepening the divide between secular progressives and Christian conservatives.
It does seem that Christians—perhaps especially American Christians—have lost a deep understanding of Christian stewardship. The Bible is laced with meditations on the beauty and wonder of creation, the care with which God has crafted plant and animal. (Psalm 145:15-16: “The eyes of all look to you, and you give them their food in due season. You open your hand; you satisfy the desire of every living thing.” Matthew 6 speaks of God providing food to the sparrows, and “clothing” the lilies and grasses of the field.) In Genesis, Adam and Eve were commanded to be lifetime gardeners—to dedicate themselves to cultivating and caring for the ground.
Yet today, few Christians speak out on issues of environmental sustainability and stewardship. Their attitudes toward agriculture, the environment, and animal life are often laced with apathy—or even, occasionally, disdain. Why has this happened?
First, I think that many Christians employ an either/or fallacy when discussing this issue. Are you saying I should ignore the orphans and genocide victims and the persecuted? they ask. Shouldn’t my resources and efforts go to helping humans, not trees? It’s difficult for all of us to consider how we should best steward our resources in a way that reflects the many tragedies of our world—in which genocides and war, poverty and famine wreak unspeakable havoc on human life, while abuse and maltreatment, drought and natural disaster often have a similar woeful effect on the created world. How do we consider how best to give, love, and show compassion in a world so riddled with crises?
Perhaps one example worth considering here is William Wilberforce, the British politician best known for fighting the slave trade. He was also incredibly active in fighting animal abuse—he helped found the world’s first animal welfare organization, the Society for the Prevention of Cruelty to Animals. Wilberforce didn’t restrict himself to one cause—he saw the virtue of compassion as something that should be applied to all layers of life.
But in addition, it seems many Christians have lost the communitarian ethos discussed in Stoll’s book—an ethos that enables us to transcend the self and consider the cares of this larger world we inhabit. Wilberforce was, importantly, part of a group called the “Clapham Sect“: a group of Christians who pooled their efforts, and galvanized each other, to effect important societal change. Yet even something as simple as knowing where your food comes from can help cultivate a more conscientious attitude: many Americans are seeking to educate themselves on the way land, animal, and human are treated in the production of various goods—and to make their purchases accordingly. But such people are still often a minority. Most aren’t thinking about such practices or methods when they peruse clothing racks, fill up on gas, or shop for groceries. They’re thinking about the price tag. We lack a personal connection to the various spheres of commerce and conservation that might help us cultivate a more compassionate, sustainably-minded ethos. (This is what the locavore movement, and localism as a whole, seeks to provide.)
Finally, I think many Christians have abdicated the realm of environmental concern because such concern is usually considered “lefty” or progressive—whereas conservatism is more traditionally associated with pro-business, pro-capitalist creeds (ones that can often fall into excess). Political rhetoric and defensiveness can foster in conservatives a dangerous tendency to ignore deforestation, land erosion, oil spills, even the maltreatment of animals.
Yet the blame does not entirely rest in our camp, Handley argues. He believes “Practitioners in science, religion, and the arts have too often preferred to preach to their own choirs, and have shunned the responsibility of speaking across the many differences that separate us.” He continues,
… Too little has been accomplished for the environment, and I suspect it is because too many have preferred to debate who is right rather than find a way to do what is good. We simply have not worked hard enough to identify common values. This is why we end up with debates where opposing sides deserve, or almost seem to depend on, each other. The rhetoric in such fights might look like courage, but it is really disguised cowardice: it speaks forcefully but without the slightest interest in genuine dialogue.
It’s important that we depoliticize the issues of sustainability and environmental care as much as possible in our dialogue—because the issues they encircle and describe are fundamentally important for present and future life. Of course, there are still likely to be disagreements amongst us on how to steward and care for our planet—those disagreements will always exist. But the more we’re able to take each other seriously, listen attentively, and strive to understand, the less we’ll fall prey to stereotypes, hyperbole, and bombast. And maybe, just maybe, we will find a way to work together in stewarding the world we’ve been given.
Kevin Williamson’s hometown is dead—and he doesn’t really care. He writes for National Review,
The town where my parents grew up and where my grandparents lived no longer exists. Phillips, Texas, is a ghost town. Before that it was a company town, a more or less wholly owned subsidiary of the Phillips Petroleum Company. … Phillips, Inc., in the end decided it had no need for Phillips, Texas, and the town was scrubbed right off the map. … It was the right thing to do. Some towns are better off dead.
… My own experience in Appalachia and the South Bronx suggests that the best thing that people trapped in poverty in these undercapitalized and dysfunctional communities could do is — move. Get the hell out of Dodge, or Eastern Kentucky, or the Bronx. Cheap moralizing of the sort that Theroux engages in, or the cheap sentimentalism that informs the Trump-Buchanan-Sanders view of globalization — “globalization” being another way of saying “human cooperation” — helps exactly no one. We spend a great deal of money trying to help poor people in backwards communities go to college; we’d probably get better results if we spent 20 percent of that helping them go to Midland, Texas, or Williamsport, Pa., or San Jose, Calif. …
First, I think it’s important to look at the truths in what Williamson writes here. Nostalgia shouldn’t keep us from facing the inevitable: if a place is falling apart economically, stuck at a dead end, it is sometimes all right to move on. It has to be—we want our children to be able to make a living. Additionally, we shouldn’t hold our children back from pursuing their vocation or dreams. Using negative measures like guilt-tripping to hold them in place will only make them more discontent. While we can hope that they eventually find their way home (if home is still there), sometimes we have to let them go first. Some of the biggest advocates for place, such as Wendell Berry, had to leave their homeland before they were able to return and cultivate an ethos and dream around the idea of “staying put.”
But it is important to note that many people leave their hometowns for reasons other than economic collapse: to pursue a promising new job, a more culturally exciting area, and so on. As Jim Russell noted in a Pacific Standard piece yesterday, a lot of people migrate out of their home areas not because those areas are economic ghost towns, but because they’re actually growing more prosperous. The towns that are slowly improving often build in their inhabitants a desire for more: “An improving economy (a place gets better) exacerbates out-migration,” he writes. “More accurately, an education makes potential migrants more aware of pull factors. … Prosperity makes one more likely to leave home.”
I’ve talked to a lot of farmers lately—bright, interesting, educated people who were actively discouraged from “staying put” by their teachers, mentors, and peers. Why? Because they were “too smart,” “too talented” for a rural or agrarian existence. They were told that blue-collar work in the middle of nowhere was below them, that it would be a waste of their potential. They were encouraged to move somewhere urban, somewhere “important.”
Yet this mobility can take a long-term toll on family and community life—while staying “close to home” can offer a safety net, a support group, and a community. Small-town living is less glamorous, but it does offer a good deal of security, comfort, and connection.
Abandoning place, however, doesn’t have to involve a migration from your hometown—it can happen anytime you begin to relate to your place as a passive consumer or bystander, standing outside the actual fabric of community life, without responsibility for its long-term good or wellbeing. When you care little for the growth or cultivation of your town, you are more likely to begin living in an economic and social world apart from it—a world that slowly but surely eats away at the cultural fabric necessary to keep that town alive.
Diana Butler Bass considers this in a thoughtful article she just wrote for The Atlantic. She describes it as the difference between living above place, versus living in it:
A recent book, The New Parish, asserts that one of the primary shortcomings of contemporary society is “living above place,” which is the tendency to develop structures that keep cause-and-effect relationships far apart in space and time where we cannot have firsthand experience of them. For example, you have probably experienced buying groceries without any idea where the food originated or who was involved in the production and delivery process. Living above place describes the process where this type of separation happens so frequently that we become disoriented to reality.
The result is “a cocooned way of life,” with people “unaware of how their lives really affect each other and the world.” The farmers’ market is about living in place, not above it.
… One of the most successful is St. Stephen’s Market at St. Stephen’s Episcopal Church in Richmond, Virginia. Gary Jones, the senior minister at St. Stephen’s, refers to the market as the “Saturday congregation.” Every week, vendors bring their wares to the market, where the church also invites musicians to play, provides an onsite café, hosts food trucks, and offers activities for children. St. Stephen’s opened the market as an act of hospitality. Kate Ruby, the market manager says, “Everyone is welcome, the church, the neighborhood at large. Everyone is welcome at the market. Bring your dog. Bring your kids.” The purpose of the market is “connection,” a spiritual principle that the organizers trust makes the world better through care for the environment, building community, and teaching people how place and food are related.
… Farmers’ markets are only one indication of a larger trend: the desire for meaningful local community and to locate in a place appears to be an ever-growing reality for vast numbers of people. A recent study of American adults showed that three-quarters of the population believe the nation’s economic future is dependent on the health and quality of local communities. Both the youngest and oldest American workers would rather choose a desirable place to live than a company to work for.
These poll results don’t seem to match up with Williamson’s article—he believes that we should move where the economic opportunities are, without letting small-town sentiments bind us, and that this will bring us the most happiness in the long run.
But if Bass is right, you need a lot more than a steady job to cultivate human flourishing and happiness. There’s a cultural and social fabric, a deep and rich local community, that Americans are also searching for—and that they’re increasingly willing to work for, perhaps even stay for. This is the other half of the story that we can’t neglect.
Buying organic produce often gives people a feeling of responsibility and care: it helps many feel they’re making a difference in the world, and making healthful purchases for their families.
But is this correct?
In a story yesterday for Modern Farmer, Brian Barth considers the hidden practices that may lurk behind your USDA-certified organic products:
By all appearances, Kathy Evans would seem the ideal organic farmer. The fourth-generation proprietor of Evans Knob Farm, in Bruceton Mills, West Virginia, she has never used chemical pesticides or growth hormones. Her poultry—45 laying hens, 250 broiler chickens, 50 turkeys, and 22 ducks—is free-range; her Romney and Hampshire sheep, grass-fed. Evans also shears, cards, spins, and dyes fiber produced by those sheep, as well as that from her alpaca and llama. (She reserves a few cows and one goat “just for the family.”) The resulting mountain of manure enriches vegetable plots where the 53-year-old grows everything from potatoes and peppers to squash and salad greens. Yet not a single cage-free egg or Toma Verde tomatillo that emerges from Evans’s 130 acres sports a USDA Organic label.
This wasn’t always the case. In 2003, when the United States Department of Agriculture (USDA) first bestowed organic certification on Evans’s operation, which supplies farmers markets and a CSA, the annual processing fees totaled $200. By the time she opted out, in 2013, they had risen 350 percent, to $900. Though diversification is an important aspect of sustainable agriculture, the government “rewards” the practice with a separate form for each crop and animal, burying Evans Knob Farm and others like it in an avalanche of paperwork. For Evans, the inspections began to take on a “big brother” quality, but the final straw came when she learned that eggs can be, in her words, “considered organic if the chickens are fed organic grain and are cage-free. But they can be confined to a building, and the USDA doesn’t say how long they have to be outside.” (See sidebar, below.) The more Evans learned, the more dissatisfied she became. “It felt like the Department of Agriculture was creating loopholes for big agribusiness.”
Back in 2006, Michael Pollan warned in his book The Omnivore’s Dilemma that terms such as “cage-free” and “organic” could be more deceiving than you may think. He profiled several Whole Foods farmers whose marketing promised happy chickens and small, sustainable farms. Sadly, the reality was often starkly different from the branding. “Free range” chickens had but a fraction more space than their conventionally housed counterparts, and often lived similarly putrid and unsavory existences. The idyllic little organic farm was often, in reality, a large factory farm—one that used synthetic substances approved under USDA organic guidelines.
Last year, I wrote a story about the locavore movement and its adherents. It was interesting to hear various farmers’ opinions on the “organic” subject. While some thought it important to get organic certification, for its accountability and importance to consumers, others were less certain that it mattered:
Hope Hall, owner of Sunflower Farm Creamery in Cumberland, Maine, says organic farming has undergone a “sad twist” as its popularity has grown. Many organic enterprises have adopted the careless commercialism they once fought. And [Joel] Salatin believes the federal government has “hijacked” the organic movement. Accountability has weakened as a result. The mystique of organic certification remains intact, but standards have eroded.
Procuring organic certification is often expensive, too. The Glaesers already pay $600 a month for GMO-free chicken feed. If they went further and “went organic,” the cost difference would trickle down into the price of the eggs. “That’s a jump a lot of people don’t want to have to make,” Glaeser says. And no one has complained about the eggs’ lack of organic certification. “It seems to me that a lot of people are not wanting the organic stamp as much as they want things to be done the right way.”
I think Glaeser is right: a lot of shoppers aren’t as concerned about an organic stamp on their tomatoes as they are about knowing whether farmers are raising their produce or animals in a sustainable, humane fashion. Certifications have become a stand-in for relationships between farmers and consumers. Where consumers are able to know their farmer and interact with them, those labels increasingly become irrelevant—or at least, less important.
None of this necessarily means you shouldn’t buy USDA organic—but it does mean we should consider what our vision for being responsible, wise consumers includes. This vision will likely vary from person to person: for some, getting grass-fed and humanely raised meats will be the most important thing. For others, knowing their produce is free of harmful pesticides and herbicides will top the list. Most of us have to balance our wish-list items with their accompanying price tags, which can often be steep.
But being an informed and wise consumer is worth it in the long run. Supermarket branding can be intentionally and subtly deceiving—“cage free” or “free range” eggs are just one example of the ways in which, apart from a “USDA organic” label, we can be tricked into thinking a product is something it isn’t. This is why the locavore movement appeals to so many—you don’t have to take the label’s word for it: if you want to know how your chickens are being raised, go visit the farm.
When we read, we take journeys—into a new world, back in time. We re-meet old selves, uncover new places and horizons. Books are often as much about our pasts as about the stories of the books themselves. They’re also about the relationships they remind us of, the people we loan them to, the readers who came before us.
There are a lot of old children’s books on my shelves: some old family heirlooms, some bought in used bookstores. Each tells a story. There’s a late 19th-century illustrated paperback of The Gingerbread Man: the cover is sewn together with thread, the edges are tattered, a child’s signature is scrawled across the first page. The pictures bear riveting poppy reds and mustard yellows. On the bookshelf beside it is a 1960s copy of Now We Are Six, a collection of children’s verse by A.A. Milne that used to belong to my cousins. Beside that sits a pop-up version of The Little Prince: newer, but already laced with memories. I read it aloud to my little brother and fiancé (now husband) one Christmas Eve as we drove home in a snowstorm, navigating perilous roads. The book kept us awake, aware, and cheerful.
It’s amazing how the old hardback novels on the shelf blend so beautifully together: their covers were often moss green, navy, or cinnamon brown, the letters gilded in rich metallics. The older typography was often simple and scholarly: traditional serif fonts with delicate forms. The Victorian-era books have greater title flourishes, more feminine scripts. But if you stack them side by side on a shelf, they all blend in lovely harmony. There’s a stately grace to them.
Books today have a different character: rather than complementing each other, they often seem to be at war with each other, a clashing and clamoring of colors, fonts, and styles. There’s often great creativity and artistry to their covers, but they can also seem as riotous and mentally assaulting as a string of TV commercials. Their diversity—one of the beauties of the print book—can also be their greatest aesthetic turn-off.
Yet e-books inhabit an entirely separate world: they all have covers, certainly, but the reader rarely glimpses them, since the book automatically saves its place and opens to the page where you left off. The fonts are particular to the tablet and its owner, not the book: you pick and customize them according to your taste. Even the font size changes according to your preferences. E-books aren’t things you buy “used”—each is a new digital edition, particular to you, stripped of history. All of these things make the reading experience easier—but do they make it memorable, endearing?
Craig Mod’s recent piece in Aeon Magazine about the future of reading brings all these questions to mind. He writes about his embrace of digital reading, and then his abandonment of it—an abandonment spurred on largely by aesthetics and by the limits of the digital versus the physical (as backwards as that may at first sound):
As a consumer of digital books I feel delighted, but as a reader, I feel crestfallen. All of the consumption parts of the Kindle experience are pitch-perfect: a boundless catalogue, instant distribution, reasonable prices (perhaps once too reasonable, now less so with recently updated contracts). … But after a book has made its way through the plumbing and onto the devices, the once-fresh experience now feels neglected.
… [When] opening a Kindle book … there is no procession, and often no cover. You are sometimes thrown into the first chapter, sometimes into the middle of the front matter. … Often, you have to swipe or tap back a dozen pages to be sure you haven’t missed anything. … Titles that fall off the first-page listing on a Kindle cease to exist. Compare that with standing in front of a physical bookshelf: the eye takes in hundreds of spines or covers at once, all equally at arm’s length. I’ve found that it’s much more effortless to dip back into my physical library – for inspiration or reference – than my digital library. The books are there. They’re obvious. They welcome me back.
The pile of unread books we have on our bedside tables is often referred to as a graveyard of good intentions. The list of unread books on our Kindles is more of a black hole of fleeting intentions.
He thinks innovation could solve many of these problems, however:
Kindles could remind us of past purchases – books either bought but left unread, or books we read passionately and should reread. And, in doing so, trump the unnetworked isolation of physical books. Thanks to our in-app reading statistics, Kindle knows when we can’t put a book down, when we plunge ourselves into an author’s world far too late into the night, on a weeknight, when the next day is most definitely not a holiday. Kindle knows when we are hypnotised, possessed, gluttonous; knows when we consume an entire feast of words in a single sitting. Knows that others haven’t been so ravenous with a particular story, but we were, and so Kindle can intuit our special relationship with the text. It certainly knows enough to meaningfully resurface books of that ilk. It could be as simple as an email. Kindle could help foster that act of returning, of rereading. It could bring a book back from the periphery of our working library into the core, ‘into the bloodstream’, as Susan Sontag put it. And yet it doesn’t.
… The potential power of digital is that it can take the ponderous and isolated nature of physical things and make them light and movable. Physical things are difficult to copy at scale, while digital things in open environments can replicate effortlessly. Physical is largely immutable, digital can be malleable. Physical is isolated, digital is networked. This is where digital rights management (DRM) – a closed, proprietary layer of many digital reading stacks – hurts books most and undermines almost all that latent value proposition in digital. It artificially imposes the heaviness and isolation of physical books on their digital counterparts, which should be loose, networked objects. DRM constraints over our rights as readers make it feel like we’re renting our digital books, not owning them.
To overcome the digital book’s drawbacks in the way Mod describes, however, would require an intervention into the life of the reader akin to a nanny’s meddling. To have a Kindle or iPad reminding me of past purchases, prodding me to reread old books, or collecting data on my preferences would not be a welcome feature. It would be an intrusion. The relationship between reader and book is not one that should be managed by digital algorithms. To describe the unmonitored relationship between reader and physical book as “unnetworked isolation” is to miss the whole point: the closest relationships, even between a reader and a book, often involve intimacy and privacy, a sense of aloneness in space with the other, an undistracted focus.
Nor does the mutability of e-books necessarily seem a boon. Their mobility is certainly useful—but it is a limited usefulness for the reader. They’re about efficiency and ease: making your bag lighter as you travel and commute, perhaps, or eliminating the need for a night light when you want to read a couple of chapters before bed.
But in this way, e-books remind me a lot of smoothies: they’re an efficient way to get sustenance. They often taste good and are enjoyable. They’re especially nice when you’re on a tight schedule and need to eat on the run. But to live on smoothies is to turn away from the multitudinous pleasures of the edible world: to neglect the delights of French or Thai cuisine, wine-soaked dishes, freshly-baked bread, steaming stews. To live on smoothies is possible, and definitely efficient. But it’s a rather sorry way to eat.
To neglect the world of physical books is to miss out on a constant adventure and journey: to forgo the beauties that come from the relationship between reader and book, as well as the fascinating tales of readers past—the marks they left on their copies, the journeys recorded in old and tattered pages. It’s to miss out on the beauties of shifting design, the fascinating progress of the written word through time and space, trends and innovations. It’s to take your relationship with a beloved work and tie it forever to the all-knowing algorithms of the internet—while also abandoning any potential for sharing the book with your friends or family.
For example: I recently loaned my copy of The World Beyond Your Head to a friend. Because I’d reviewed it for TAC, the pages were dog-eared and underlined, with little notes in the margins. My friend actually loved this (thankfully). When he asked whether he could also borrow my copy of Shop Class as Soulcraft, I realized with dismay that I had purchased that book on my iPad, and couldn’t loan it to him. What’s more, I had read it with greater swiftness and less care, without those dog-eared pages and slowly thought-through comments. What did I lose by buying that book online? More than a deep connection to the book itself, I lost the opportunity to be generous to a friend.
None of this means digital books can’t be enjoyed. But it means that they are, as Mod writes, limited by their medium, and limited in a way physical books are not. It means that as much as we should enjoy the former, we shouldn’t forsake the latter. Unless you really do prefer drinking literary smoothies all the time.
… It’s much harder to be “here and now”, here and now, what with all the bells and whistles, the bright lights and ring-tones of modern technology—“technology stains the moment with its disconcerting transmission of elsewhere.” “[T]he digital realm” doesn’t “foster”, even “allow” “calm, linear, reflective thinking”. … We all know the state of distraction that Birkerts illuminates here, the listless march of the eye dictated by “our devices”, those screens that ceaselessly draw us away from the faces and pages right in front of us, and from the depths that those faces and pages contain; march us away from the more arduous adventures involving the long arcs of attentiveness required by the novels of Tolstoy and Joyce; call us away from what will come only if we allow ourselves to linger over a line without any thought of clicking on to the next link.
Nunokawa and Birkerts are both considering the importance of presence: of fully embedding oneself in a given moment, without allowing the mind to stray or wander to other things. If we consider our daily habits, many of us will find that the internet has affected our ability to do just this (though for many of us, awareness and focus may have always been struggles). We’ve become so used to interruptions, eye-catching distractions, the ability to just click through to another thing when we grow bored. Cultivating awareness requires a level of intentionality and focus that seems hard to muster in our present circumstances.
But the detriments of distraction aren’t confined to the realm of intellectual thought and personal focus—whether you can concentrate on the article you’re reading, or whether you’ve fallen into unhealthy multitasking habits. Love itself, along with community and friendship, requires focus: the ability to see the “other,” and to be completely centered in our appreciation of them.
Later in his review, Nunokawa suggests that the internet has given us greater connectivity and opportunity for “real” friendships. But I don’t think this is entirely true, because deep and sustained friendship requires both intentionality and exclusivity: a limiting of our time, interests, and contacts in order to invest ourselves fully in the people who matter. If we are living with increasingly short attention spans, will we be able to be truly attentive when it matters most?
Additionally, social media (and the internet in general) is heavily “me”-centered. It’s through posting our own thoughts, feelings, pictures, and updates that we interact with the outside world. It’s not a place where we associate as equals, but rather where we associate with our friends as a plural crowd—a giant audience, to whom we direct our thoughts and passions. As Nunokawa puts it,
The friends that I am trying to reach … gather together and converge within a single someone else, a person addressed by a single pronoun—the second person plural. Thus grammatically (but not merely so) I am usually concentrating on just one person when I write, a pronominal Prince who represents the princely multitudes I just mentioned, as well as people I may know now or sometime in the future.
The internet is a field in which we can experience a sense of heightened importance and renown, depending on the popularity of our published thoughts and life updates. But when you write something on Facebook, Twitter, or Instagram, how often is it written to the second person singular—to a focused soul?
I would argue that one of the ways (and perhaps one of the most dangerous ways) the internet and social media encourage distractibility and short attention spans is the way in which they encourage us to look at the world as a large, abstract mass—rather than as a set of unique persons, places, or things. We get used to thinking in collective rhetoric, in stereotypical statements, in soundbite solutions. All of this runs counter to true charity and love, which require a focused and gracious gaze: one that sees flaws, but seeks to overlook evil and overwhelm instead with good. The sort of gaze and aim that ignores inflammatory rhetoric and “outrage porn,” and instead turns to the true, the good, and the beautiful.
Compare Nunokawa’s definition of online “friends” and audience with these words on love and friendship from Thérèse of Lisieux:
I have noticed (and this is very natural) that the most saintly Sisters are the most loved. We seek their company; we render them services without their asking; finally, these souls so capable of bearing the lack of respect and consideration of others see themselves surrounded with everyone’s affection…
This seems a fitting description of the online world, in which we are always seeking the attention of the most beautiful, popular, prestigious, or funny—in which we rate someone’s importance by their algorithmic, numerical success. (The Facebook friend who has 15 followers and gets 3 likes on their statuses is not as important as the person with 1,500 friends and 200 likes.) Thérèse continues,
On the other hand, imperfect souls are not sought out. No doubt we remain within the limits of religious politeness in their regard, but we generally avoid them, fearing lest we say something which isn’t too amiable. When I speak of imperfect souls, I don’t want to speak of spiritual imperfections since most holy souls will be perfect in heaven; but I want to speak of a lack of judgment, good manners, touchiness in certain characters; all these things which don’t make life agreeable. I know very well that these moral infirmities are chronic, that there is no hope of a cure, but I also know that my Mother would not cease to take care of me, to try to console me, if I remained sick all my life. This is the conclusion I draw from this: I must seek out in recreation, on free days, the company of Sisters who are the least agreeable to me in order to carry out with regard to these wounded souls the office of the Good Samaritan. A word, an amiable smile, often suffice to make a sad soul bloom…I want to be friendly with everybody (and especially with the least amiable Sisters) to give joy to Jesus.
Notice the difference here between being friends with everybody and being friendly with everybody. The former requires only the proper platform (the internet), along with enough wit and sparkle to draw accolades. The latter requires something entirely different: a quiet, gentle spirit, an openness to the ignored, a willingness to be humble, a gift for focus and mindfulness.
It’s true that we can be kind online: by dropping a note on a friend’s profile telling them how much they mean to us, by liking their pictures, or by leaving affirmative comments on their status updates. But all of this is done publicly, and so our egos often creep in and invade the kind sentiments we may be trying to express.
In contrast, the sort of quiet and humble friendship that Thérèse advocates for is one shorn of public accolades, but full of quiet grandeur—and lasting meaningfulness. It’s one that will be harder to cultivate online, but that may have more permanent rewards.