Why assume you need to make compromises to achieve connubial bliss?
In an article for The Atlantic, Olga Khazan profiles several polyamorous couples and wonders whether more families should consider open (non-monogamous) marriages. Khazan argues that polyamory’s great advantage is that practitioners better divide up and delegate the duties and pleasures of a relationship, mixing and matching for the best of all possible marriages. She writes:
Even many devout monogamists admit that it can be hard for one partner to supply the full smorgasbord of the other’s sexual and emotional needs. When critics decry polys as escapists who have simply “gotten bored” in traditional relationships, polys counter that the more people they can draw close to them, the more self-actualized they can be.
There’s an enormous assumption tucked into that first sentence. Monogamy isn’t premised on the idea that one person can ever be everything to a partner. When a marriage fails to fulfill “the full smorgasbord” it’s not a sign that anything’s wrong. An expectation that a partner (or full set of them) is meant to be a perfect complement is destructive to romantic and platonic relationships.
Unfortunately, Khazan’s premises aren’t confined to a negligible niche (polyamorous or otherwise). A survey of 18-to-34-year-olds in four cities (Austin, Omaha, Nashville, and Phoenix), commissioned by USA Network, found that 10 percent of respondents endorsed multiple partners within a marriage, “each of whom fulfills a need in your life.”
What does this mean in practice? One of the women profiled in the Atlantic story explains that she and her husband looked to add partners to their marriage because the spouses couldn’t fulfill all of each other’s needs. Her husband was interested in kinky sex, so he found a woman to practice BDSM with, but the wife’s new boyfriend was picked for a more prosaic need: he goes to the theatre with her and sees shows her husband wouldn’t enjoy.
The reporter asks what she calls “the logical, mono-normative question” of why the wife didn’t simply leave her husband for her theatre-boyfriend, but the more relevant question is why she didn’t just book season tickets for herself and a friend. Kinky sex is, well, sexual, but going out to the theatre isn’t an activity reserved for lovers.
It’s natural for friends to fill the gaps in a marital relationship, indulging interests that aren’t shared with the spouse, providing emotional support, and simply varying our lens on the world. After all, C.S. Lewis’s observation in The Four Loves that “Lovers are normally face to face, absorbed in each other. Friends, side by side, absorbed in some common interest,” wasn’t meant as an aspirational image for spouses.
Spouses shouldn’t wind up completely sated by a relationship, able to retreat from the rest of the world. Married people, just like singles, have some needs that are best met by a friend or by a neighbor or by family. Our mutual, unsated needs draw us together in service to each other.
In her essay on the paleo diet in the latest New Yorker, Elizabeth Kolbert aptly describes our modern attitude toward food: “For almost as long as people have been eating, they’ve been imposing rules about what can and can’t be consumed,” she writes. But many of these historic restrictions were religious in nature, designated by one’s spiritual beliefs. They had to do with the cleanliness or set-apartness of a religious group. They often consisted of periodic fasts that were meant to draw the community’s attention away from momentary, physical concerns and to fix it on the divine, or on each other.
In today’s world, however, the ethics of gastronomic consumption all come back to the self, and the idealized “healthy body.” Whether it consists of juicing, raw food, paleo dieting, or vegan eating, modern restrictions tend to be more diverse, eccentric, and extreme. And unlike the prohibitions of days past, these diets tend to fixate on the human body, rather than striving to look past it. Unfortunately, such dietary pursuits tend to push us toward extremes—either of excess or defect. And such attitudes, in any area of life, are more prone to vice than virtue. Our society needs to develop a more virtuous attitude toward foods: but how do we cultivate it?
Virtue necessitates moderation—it is an intentional discipline of choosing the mean between excess and defect. Thus, eating at McDonald’s every day is a vice of excess; going on extreme juicing cleanses is, I would argue, a vice of defect.
It really doesn’t matter what diet you look at: many suffer from problems of excess and defect. Eating purely raw foods is a healthy option, perhaps—except for the fact that, as Michael Pollan points out, cooking enables our bodies to absorb essential nutrients in food. We have to eat much more of those raw foods in order to get the same amount of nutrients. “It’s very hard to have culture, it’s very hard to have science, it’s very hard to have all the things we count as important parts of civilization if you’re spending half of all your waking hours chewing,” he says. So if you want to gnaw on carrots all day, go ahead—but it might be simpler to toss them in some olive oil, salt, and pepper, and stick them in the oven.
Many diets completely exclude foods that, while not the epitome of healthiness, are still good and delightful. Americans have begun to look with shock and concern upon foods like croissants and freshly baked breads—because of the carbs, the terrible evil carbs. We forget that a carbohydrate is merely a source of energy, a modern category meant to quantify and measure our eating habits. It’s not a religious rule to be broken or followed. We forget that one croissant can be delightful—and that five is probably too much. Instead, we put foods into “good” or “bad” boxes. We eat sweet potatoes, but ban any other variety. We buy buckwheat and spelt flours, but avoid all-purpose flour with a passion.
There’s another, equally dangerous extreme we can fall into: we can look with derision and scorn on those who try to improve their diets, while eating our burgers and drinking our Cokes with relish (and perhaps some devilish glee). We can eat two slices of pie, and Instagram a picture with the hashtag #sorrynotsorry. Of course, enjoying food in all its forms is a good thing (at least this pie addict thinks so). We should have a healthy appreciation for the glories of dessert and burgers. But this, too, requires moderation. Even eating dessert requires a form of contemplative virtue.
What happens when our eating becomes a list of do’s and don’ts—a pattern of pure excess, or pure defect? We live in a world in which shows like Biggest Loser draw the attention and accolades of millions—a show purely fixated on losing, on shrinking numbers. But then, when the numbers shrink too low, society rears its ugly head in anger—and we insist the numbers go up again. It’s all a number game, a frightening fluctuation between too much and too little.
We don’t have to do paleo diets or juice cleanses. We can eat fresh fruits and vegetables, and cook from scratch whenever possible. We can (and indeed, should) eat croissants and pie, but perhaps not for every meal. We can sip and savor wine, one glass at a time. We can exercise consistently, drink water often, chew instead of inhaling. We can acknowledge that sustenance is just that—sustenance.
As C.S. Lewis put it in The Screwtape Letters, there are two types of gluttony: one of much, and one of little. Modern versions of gluttony amongst sophisticated people, he wrote, are often characterized by close attention to diet and an insistence on less of one thing or another—even when it puts other people at an inconvenience. (And he wrote this before gluten-free and paleo diets even existed!) Such a person, writes Screwtape, is wholly enslaved to sensuality, but doesn’t realize it because “the quantities involved are small.” Yet “what do quantities matter, provided we can use a human belly and palate to produce querulousness, impatience, uncharitableness, and self-concern?”
Food has become the anxious obsession of our society, and it’s time we put food back in its place. This is what virtue is about: a modest pursuit of the mean, a sweet enjoyment of life that is considerate and moderate. No matter the diet regimen you embrace (or reject), your life will not fall into health until it falls into balance. Hopefully, with time, we will learn that all things were created good, and should be enjoyed—in moderation.
From the teenage romance between an amputee and an oxygen-tank user in the box-office success The Fault in Our Stars to the conjoined sisters at the circus in the Kennedy Center’s Side Show, representations of disability and difference have been prominent of late. But as Christopher Shinn noted yesterday at The Atlantic, the recent plethora of disabled characters also has another thing in common: they are played by able-bodied actors. Once again, Shinn said, “Pop culture’s more interested in disability as a metaphor than in disability as something that happens to real people.”
Disability is often used as a metaphor for exclusion and subsequent triumph, themes easier to swallow when an actor twitches sensitively across the stage for two hours only to walk back calmly for the curtain call. So it goes exactly in the production of “The Curious Incident of the Dog in the Night-Time” at London’s National Theatre, currently showing in cinemas worldwide before it heads to Broadway in the fall.
Based on a popular 2003 novel by Mark Haddon, “Curious Incident” is a family drama packaged as a mystery. It is seen from the perspective of a teenager named Christopher with an autistic spectrum disorder that some reviewers have compared to Asperger’s syndrome. The production uses technical elements, from cool blue lighting to projected numerical graphics to dizzying synthesized sound effects, in order to communicate the experience of sensory overload that accompanies neurological conditions like Christopher’s.
Because this manner of presentation merely informs the audience’s experience of a rather simple plot—the titular incident is a quickly resolved mystery, and most of the second act is a train ride—the play, like the book, seems to run counter to the frequent use of disability as plot obstacle and metaphor for triumph. In fact, Christopher remarks that a metaphor “is when you describe something by using a word for something that it isn’t. … I think it should be called a lie because a pig is not like a day and people do not have skeletons in their cupboards.”
But in the program note for the stage adaptation of “Curious Incident,” Haddon backtracked. Jane Shilling wrote in her review for The Telegraph, “His 15-year-old protagonist, Christopher, exhibits a constellation of quirks that are recognisably on the autistic spectrum, but his behavioural problems are also a metaphor for the solitariness of the human condition. ‘Curious is not really about Christopher,’ Haddon concludes. ‘It’s about us.’”
In navigating the ethical implications of work like Haddon’s, blogger Mary Maxfield suggested that the problem is not using disability as a metaphor, but using disability as a metaphor for the wrong thing. Christopher, a beloved son integrated into his family and school structures, does not fit Shilling’s metaphor for solitariness. Likewise, Haddon’s editorial “us,” unambiguously separated from people with physical and neurological differences, makes the value of certain lived experiences dependent on their contribution to a grander “human experience.”
As Shinn asserts, the inclusion of disabled actors and artists can bring lived experience rather than distant research to the table and facilitate the kind of responsible art Maxfield imagines. But a willingness to tell stories that are about disabled people for their own sake, rather than about disability per se, would be an even more welcome change.
As people begin to develop a renewed interest in where their food comes from, many young people and urbanites are seeking out agricultural lifestyles, giving up desk jobs for tractors and field work. But it’s difficult to kickstart a profitable farm, especially as a primary career.
A new initiative in Virginia is striving to help these new farmers—even while encouraging them not to quit their day job. Created through a partnership between Virginia Tech, the Virginia Cooperative Extension’s Loudoun Office, and the Loudoun Department of Economic Development, the new program targets Loudoun County residents who are launching second careers in agriculture. Program coordinator Jim Hilleary explained to the Washington Post:
‘Across the nation, there’s this recognition that there is a new type of farmer emerging, and that is generally a second-career farmer,’ he said. ‘Virginia Tech realized that, and they drafted a curriculum for beginning farmers. And what we’ve done here locally is to take part of that statewide curriculum, localize it and apply it to the residents here in Loudoun County.’
These second-career farmers, says the Post, now “account for the majority of new agricultural business owners in the county.” This model will probably continue to increase in popularity: even while a lot of mid-sized farms are suffering, there is a “growing army,” as the New York Times put it, of small local farms, springing up in response to the sprouting market for organic and locavore foods. But many of these aspiring agriculturists don’t know what they’re getting themselves into—and this is where Hilleary’s program steps in:
Rather than delve into the technical elements of farming, the worksheet urges aspiring farmers to think more broadly about what they hope to accomplish and to thoroughly consider what a new agricultural venture will demand of spouses, children and other family members.
“That’s where I’d say that this is distinct from other introduction-to-farming programs,” Hilleary said. “It doesn’t teach you how to be a swine producer; it doesn’t teach you how to raise cattle. . . . Rather, it helps you develop a mind-set for the challenges that are to come. And if people say ‘This is not for us,’ then that’s a success, because we just saved them a lot of time and money.”
Modern farming, bombarded by federal regulations and certification requirements, can be an expensive endeavor—even if you only own a small farm. Aspiring farmers need a program like Hilleary’s to help them grapple with the real costs involved in their chosen vocation.
The article reminded me of a piece I read last year about the newest generation of farmers, and how they’re faring: Narratively published a feature about married couple Dan and Kate Marsiglio, who left their teaching jobs in 2005 to start an organic farm. Though they’ve made great improvements over the years, they’ve also found farming to be more difficult than hoped:
In mainstream food magazines and agricultural journals alike, tales of city kids and hedge fund managers trading suits and ties for overalls have many forecasting a future of yeomanry in America. To be sure, new farmers remain hopeful that moment will come. But they’re also the first to report that in beginning farming, the honeymoon period is brief. It is almost a matter of course that regardless of how mentally and physically prepared a new farmer is for long, sweaty days of toil and winters of debt, farming will deliver more stress and heartache than expected.
Eight years after they launched their farm, the Marsiglios are still barely breaking even, and all thought of retirement remains in the murky unknown. Meanwhile, the gritty everyday work of farming grows more wearing with every year.
It will be interesting to see how these new second-career farmers cope with the difficulties of the modern industry—and how they’re received by more established producers in their area. Hilleary mentioned the “raised eyebrows” that these young farmers can get from veteran family farmers, even while “newcomers might have misconceptions about established, conventional farmers.” Hilleary hopes the initiative will bind both groups together: “We want to help them understand that they are tied together by common goals, and they shouldn’t allow themselves to be in categories like old versus new or organic versus conventional,” he said.
The way Americans farm seems to be evolving at present—current growth represents a more decentralized mode of agriculture that seems popular and promising. It may be years or even decades before such endeavors turn into full-time work. But through initiatives like Hilleary’s, perhaps we will build a band of farmers who can confront these challenges head-on.
The world’s fast-growing elderly population faces more age-related disease, higher health costs, and fewer children to care for them than ever, while the resulting caregiver shortage puts them at an increased risk of abuse and neglect. Some medical professionals, like geriatrics professor Louise Aronson, are proposing robots as a solution to both assist overwhelmed human caregivers and replace those guilty of mistreatment, as “most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.”
Aronson’s robotic geriatrics are no fantasy but an existing solution in places like Japan, which has the world’s grayest population and the economic resources available for $100K, yard-tall robots to be feasible. Yet Japan’s relationship with robots shows that making robot caregivers cheaper might not make them any more successful. Japan’s elderly have rejected the robots, asking instead for humans. The only robots with modest success among Japanese elderly have imitated pets, providing limited social engagement rather than medical care and companionship—tasks still preferably assigned to human caregivers.
As Japan shows, the robot caregiver solution does not fail on economic or technological grounds, where boundaries are largely surmountable with time. Rather, turning an intimate job like geriatrics into an automated service sector reflects a misunderstanding of the profession at hand, which requires both emotional and ethical investment in patients.
Caitrin Nicol Keiper, countering David Levy’s Love and Sex with Robots, explained that such encouragement of human-robot intimacy stems from a misunderstanding of the human as mere biochemical machine. The caregiver shortage does not merely stem from a lack of medical aides to perform mechanical tasks, but also from an absence of loving companions who ensure the experience of disability and old age is not a solitary one. These robots, after all, are often explicitly designed to counter the negative health effects of loneliness.
But that loneliness has been cemented in a medical and legal culture that is guided above all else by the principle of individual bodily autonomy. Advance directives and living wills allow patients to lay out their medical decisions ahead of time, discouraging the real-time participation of family members or other caregivers in the medical lives of the elderly. As Leon Kass, then chairman of the President’s Council on Bioethics, reflected in a 2005 report on geriatrics, “Living wills make autonomy and self-determination the primary values at a time of life when one is no longer autonomous or self-determining, and when what one needs is loyal and loving care.”
This cultural reluctance to participate communally in the care of the elderly often expresses itself as avoiding the “burdening” of loved ones. But as Gilbert Meilaender asked in 1991, “Is this not in large measure what it means to belong to a family: to burden each other and to find, almost miraculously, that others are willing, even happy, to carry such burdens?” He continued, “I have tried, subject to my limits and weaknesses, to teach that lesson to my children. Perhaps I will teach it best when I am a burden to them in my dying.”
As Meilaender and Kass suggest, the central problem is not medical incompetence, or even moral indifference, but a break in generational relationships. Neither the elderly nor their medical professionals want them to be dependent on robots rather than people, but, especially among the childless or otherwise socially disconnected, the aged may have little choice. As such, the inhumanity of Aronson’s geriatrics may not be a particularly medical problem, but a social problem. As long as we culturally insist on autonomy, we will technologically insist on automation.
I wanted to get off Facebook—to deactivate my account entirely. It seemed like such a waste of time, a distraction from real-life interactions and relationships. If Facebook no longer pulled at my attention, I thought, perhaps I would be a better friend, and invest in those people who are truly closest to me. I could invest time in the place I live, rather than in a virtual world full of acquaintances and people I barely know.
So I decided to take a break from Facebook to see whether my social relationships would improve or change at all. Before logging off, I let friends know that I’d be away, and gave them my email. It wasn’t really a full “unplugging” experiment, since I use the computer so much for work. But it meant that, in the evenings, I spent much less time online. I occasionally worked on writing projects or wrote emails—but not much else. I wrote long email letters or made phone calls to my closest friends and family members. I continued to use Instagram, but tried to send direct-message pictures to my family, rather than simply using the “public” feature. I marked friends’ birthdays on my personal calendar before deactivating my Facebook account, and tried to email or call them on their birthdays, rather than leaving the prosaic “Happy birthday!” wall post.
Leaving Facebook showed me how much I do, in fact, rely on it to fill moments of pause. When I sat in the car, waited for the metro, or stood in line, social media was the first thing I turned to. Without Facebook, my fingers itched. What else could I browse—Instagram? Twitter? Anything to feel connected. Anything to pass the time. I realized how frenzied and information-obsessed my brain can become, and made an effort to cultivate quiet, and to appreciate the moments of stillness.
However, despite these advantages, my month away from Facebook wasn’t a time of great awakening, social revitalization, or spiritual growth. Though it did serve a few good purposes, there were also strong disadvantages to leaving Facebook—primarily, the sense of disconnection from family and friends. The world didn’t pause its social media usage when I did: friends would ask me why I hadn’t responded to messages or event invites, whether I had seen this picture or that link. I realized how much I relied on Facebook to get updates from more distant family members or old friends in my home state; though Instagram provided some information, I hadn’t thought about the fact that relationships, engagements, weddings, and graduations are primarily announced (and commented upon) via Facebook.
I began to evaluate my experiment. The thing I craved most about non-Facebook interactions was their closeness, their intimacy and depth. I was tired of the self-aggrandizing statuses, the public displays of affection between couples (or even friends) that would have been more meaningful, at least in my eyes, if shared privately.
But Facebook also does one thing very well—better, perhaps, than any other social media tool: it enables us to form and cultivate little platoons. And this, I realized, was what I had missed in the last month. Though I was able to invest in individual friendships, my lack of Facebook presence made it harder to host events, or to check up on the groups of people who meant so much in my life. I realized that not everyone checks email with the same rapidity I do—but everyone checks their Facebook notifications. I tried to coordinate a dinner with friends via email a few days ago—and only received one reply over the course of the next 48 hours. Then I created a Facebook event, and invited all the same people. They RSVP’d within 10-15 minutes.
Twitter has revolutionized the way constituents interact with their representatives in Congress. Will Wikipedia be the next interactive legislative platform?
If developer and Library of Congress employee Ed Summers’ ideas take off, maybe so. This week, Summers created a bot called @congressedits that tweets out anonymous Wikipedia edits from congressional IP addresses. The account has mainly uncovered the innocuous and the banal, from noting the availability of Choco Tacos in the Rayburn building to correcting grammar in the article for Step Up 3D. However, the account also enables the public to see when staffers vandalize or rewrite politicians’ biographical information, whether updating word choice (Justin Amash is an “attorney,” not a “corporate lawyer”) or casually defaming likely opposition (activist Kesha Rogers is a “Trotskyist”).
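The bot’s core idea—watching for anonymous Wikipedia edits and matching the editing IP against congressional address ranges—can be sketched in a few lines. This is a minimal illustration, not Summers’ actual implementation: the networks listed are examples, and the edit records mimic the shape of entries from MediaWiki’s recentchanges API, where anonymous edits report the editor’s IP address as the username.

```python
import ipaddress

# Example congressional networks; the real bot maintained its own list.
CONGRESS_NETWORKS = [
    ipaddress.ip_network("143.231.0.0/16"),  # illustrative House range
    ipaddress.ip_network("156.33.0.0/16"),   # illustrative Senate range
]

def is_congressional(ip_string):
    """Return True if the address falls inside any tracked network."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in CONGRESS_NETWORKS)

def filter_edits(recent_changes):
    """Keep anonymous edits whose 'user' field (an IP string) is congressional."""
    return [change for change in recent_changes
            if is_congressional(change["user"])]

# Sample records shaped like MediaWiki recentchanges entries.
edits = [
    {"user": "143.231.12.7", "title": "Step Up 3D"},
    {"user": "93.184.216.34", "title": "Example"},
]
print(filter_edits(edits))
```

A production bot would poll the live recentchanges feed and tweet each match, but the filtering step above is the whole trick: anonymity on Wikipedia exposes exactly the information needed to attribute an edit to an institution.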
Rogue political Wikipedia edits have been controversial before. In 2006, staffers for politicians from Rep. Marty Meehan to Sen. Joe Biden were publicly called out for removing criticism from their bosses’ pages. Wikipedia’s usual crowd of vigilant editors reversed the few problematic edits they found after investigating other congressional activity on the site, but left most edits intact as intended “in good faith.”
But Summers’ project is not a series of overt agendas connected to individual staffers. Its real-time, eerily specific feed of edits streams activity from the entire congressional workforce in what Megan Garber has called a project of “ambient accountability.” Like the earlier controversies, Wikipedia can yet again serve as a proxy for political fights happening elsewhere, but it can also serve as a window into everyday life on the Hill at its most bizarre and inconsequential.
There is a significant online audience for Capitol Hill quirkiness. BuzzFeed’s Benny Johnson more or less makes a living off it, while members of Congress have social media interns delving into the ever more surreal with legislative doge memes. The @congressedits project could appeal both to easily amused political junkies and to accountability advocates who see it as an opportunity to expand access to the people they say should be the government’s most visible and engaged group.
A mother lets her daughter play in the park unaccompanied. A mother leaves her son in the car for a few moments, while she runs into a store to buy headphones. Many parents would consider these actions to be unwise—but are they criminal? According to several recent news stories, yes.
In the first case, Debra Harrell, a resident of North Augusta, South Carolina, allowed her daughter to play at the park while she worked at a local McDonald’s. She gave her daughter a cell phone. Lenore Skenazy noted in Reason that the park is “so popular that at any given time there are about 40 kids frolicking … there were swings, a ‘splash pad,’ and shade.” But on her second day at the park, an adult asked the girl where her mother was. When the little girl said she was working, the adult called the cops, who declared the girl “abandoned,” and arrested Harrell.
The second story was shared by Kim Brooks in Salon back in June: her four-year-old son insisted on accompanying her to the grocery store for a quick errand, but then refused to go inside the store. After noting that it was a “mild, overcast, 50-degree day,” and that there were several cars nearby, Brooks agreed and quickly ran into the store. Unbeknownst to her, an adult nearby saw her leave her son, recorded the whole incident on his phone, watched Brooks return and drive away, and then called the police. The police issued a warrant for her arrest.
These are only a few recent stories in which parents have faced arrest after leaving their children unsupervised. As Radley Balko notes at the Washington Post, these incidents seem to signal the “increasing criminalization of just about everything and the use of the criminal justice system to address problems that were once (and better) handled by families, friends, communities and other institutions.”
This latter point hearkens back to Robert Nisbet’s excellent book The Quest for Community: Nisbet predicted that, in a society without strong private associations, the State would take their place—assuming the role of the church, the schoolroom, and the family, asserting a “primacy of claim” upon our children. “It is hard to overlook the fact,” he wrote, “that the State and politics have become suffused by qualities formerly inherent only in the family or the church.” In this world, the term “nanny state” takes on a very literal meaning.
On May 31, a bicyclist found a young girl, stabbed 19 times with a five-inch blade, after she crawled out of the Wisconsin woods and dragged herself toward the nearest road.
The perpetrators, Morgan Geyser and Anissa Weier, both 12 and classmates of the victim, are being charged as adults with attempted murder. The stabbing was an attempt to pay tribute to Slenderman, a faceless, betentacled, and besuited character from Internet lore. The two girls were caught along the road after they had committed the crime, apparently walking to an imaginary rendezvous point with Slenderman.
They first discovered Slenderman on the Creepypasta Wiki, which is where most of the current fan fiction resides. They reportedly planned the attack for months, finally luring the victim into the woods with a game of hide-and-seek.
The Slenderman myth is one of the first pieces of popular lore truly born of the Internet, beginning online and accruing momentum and backstory as people photoshopped and blogged Slenderman into existence. The rapid spread of his legend surprised even Eric Knudsen, Slenderman’s creator. He said in an interview that he didn’t expect it to move beyond the Somethingawful forum where he posted the first Slenderman image:
It was amazing to see people create their own little part of Slender Man in order to perpetuate his existance [sic]. … I found it interesting to watch as sort of an accelerated version of an urban legend.
When he created Slenderman, he said that he wanted something “whose motivations can barely be comprehended,” and that caused “general unease and terror in a general population.” Here he pinpoints the power of Slenderman: the omnipotence of the unknown. The Internet has, after all, given us the ability to know every imaginable aspect of our world, but not to belong to it.
Vice chalks the violence up to poorly managed hormones and small-town boredom. Mytheos Holt at R Street asks whether their violence could have been prevented by addressing mental illness openly. Farhad Manjoo at the New York Times makes Slenderman’s faceless horror emblematic of the “selfie” age—an attempt to use fear to push against compulsive, narcissistic self-documentation.
Collin Barnes, Assistant Professor of Psychology at Hillsdale College, mentioned in an e-mail that the need to find meaning and community, to craft an identity, could have driven the crime: “Killing in the name of Slenderman and investing oneself in religious rituals are not entirely different and may reflect latent fears we have about being utterly alone in the universe.”
In the mythos, Slenderman’s victims are always alone, and radically estranged from help or support. There is no intelligible pattern or motive to the victimization. In contrast to the bogeymen of “organic” folklore, he has no distinct vendetta against transgressors of social or moral norms.
The two girls were not driven to violence by their encounter with Slenderman. He was emblematic of faceless, nameless dread: of complete alienation. As Kathleen Hale pointed out at Vice, girls of their age are experiencing radical emotional isolation, and possible mental health issues and public school social dynamics only exacerbate the problem. In a way, the killing was a gesture of solidarity, an attempt to connect with someone or something when faced with being “utterly alone.” Slenderman is the demon of a suburban age.
As more Americans than ever tuned in to watch the World Cup over the past few weeks, the American media’s quadrennial habit of analyzing soccer’s place in the country raged on. Cranky right-wingers, embodied by Ann Coulter’s now-infamous ramble, put forth common criticisms of soccer: it has an insufficient gender gap, allows scoreless ties, prohibits using hands, is foreign and liberal, prioritizes team effort over individual prowess, and constitutes all-around “moral decay.” In the face of such resistance, soccer fans like Daniel Drezner proposed simply changing the rules of the game to assuage his fellow Americans’ sense of fairness, rather than asking Americans to adapt to the game’s delightful capriciousness like the rest of the world. Meanwhile, Peter Beinart and other commentators on the left celebrated the “soccer coalition” of youth, immigrants, and liberals—the same one that elected President Obama, he recalled—proving that Americanness is not contingent upon the white working-class culture idealized by Coulter. In short, Americans loudly participated in a soccer nation’s rite of passage by reading domestic politics into the sport every chance they could get.
Though the debate largely focused on whether soccer could possibly have a place in accepted American identity, this process of political theorizing and contention mirrors the way soccer has been absorbed into other cultures throughout the sport’s history. Americans who chafe at the sport’s European origins join the long tradition of our southern neighbors who idealized the “creolization” of soccer while forming national identity after the Latin American revolutions of the 19th century. In Argentina, soccer was the manifestation of the “melting pot” where Italian and Spanish immigrants took over British cultural imports, a process crafted in the pages of the magazine El Gráfico. In Brazil, soccer was a place to reconcile racial tensions by highlighting diversity as a source of American ingenuity and creativity, superior to formulaic and homogenous European play. The contemporary American media’s ongoing narratives of soccer are similar not just in their obsessive nature, but in the diverse subcultures they are trying to weld together.
Soccer has always come with class connotations that plague burgeoning sports cultures. The prevailing image of soccer, both in the U.S. now and in Latin America a century ago, is of white urban and suburban elites who use the sport to moralize. Soccer was formalized in British public schools in the 19th century in order to promote Victorian morality and “muscular Christianity”—as well as to simply keep boys busy—but it largely came to the Americas as the pastime of the “gentleman-athletes” among British immigrants to South America. The “amateur era” of early 20th century soccer parallels the American “soccer mom” values that encourage teamwork and cooperation in children before moving on to more individualist sports as adults, and it is just as widespread and pejoratively viewed as its predecessor. As American pundits critique this intrusion of foreign collectivist values, they are echoing, among others, 1920s and 1930s Argentines calling for “our own style” (“la nuestra”) to counter and replace British beliefs.