From the teenage romance between an amputee and an oxygen-tank user in the box-office success The Fault in Our Stars to the conjoined sisters at the circus in the Kennedy Center’s Side Show, representations of disability and difference have been prominent of late. But as Christopher Shinn noted yesterday at The Atlantic, the recent plethora of disabled characters has something else in common: they are played by able-bodied actors. Once again, Shinn said, “Pop culture’s more interested in disability as a metaphor than in disability as something that happens to real people.”
Disability is often used as a metaphor for exclusion and subsequent triumph, themes easier to swallow when an actor twitches sensitively across the stage for two hours only to walk back calmly for the curtain call. So it goes in the production of “The Curious Incident of the Dog in the Night-Time” at London’s National Theatre, currently showing in cinemas worldwide before it heads to Broadway in the fall.
Based on a popular 2003 novel by Mark Haddon, “Curious Incident” is a family drama packaged as a mystery. It is seen from the perspective of a teenager named Christopher with an autistic spectrum disorder that some reviewers have compared to Asperger’s syndrome. The production uses technical elements, from cool blue lighting to projected numerical graphics to dizzying synthesized sound effects, in order to communicate the experience of sensory overload that accompanies neurological conditions like Christopher’s.
Because this manner of presentation merely informs the audience’s experience of a rather simple plot—the titular incident is a quickly resolved mystery, and most of the second act is a train ride—the play, like the book, seems to run counter to the frequent use of disability as plot obstacle and metaphor for triumph. In fact, Christopher remarks that a metaphor “is when you describe something by using a word for something that it isn’t. … I think it should be called a lie because a pig is not like a day and people do not have skeletons in their cupboards.”
But in the program note for the stage adaptation of “Curious Incident,” Haddon backtracked. Jane Shilling wrote in her review for The Telegraph, “His 15-year-old protagonist, Christopher, exhibits a constellation of quirks that are recognisably on the autistic spectrum, but his behavioural problems are also a metaphor for the solitariness of the human condition. ‘Curious is not really about Christopher,’ Haddon concludes. ‘It’s about us.’”
In navigating the ethical implications of work like Haddon’s, blogger Mary Maxfield suggested that the problem is not using disability as a metaphor, but using disability as a metaphor for the wrong thing. Christopher, a beloved son integrated into his family and school structures, does not fit Shilling’s metaphor for solitariness. Likewise, Haddon’s editorial “us,” unambiguously separated from people with physical and neurological differences, makes the value of certain lived experiences dependent on their contribution to a grander “human experience.”
As Shinn asserts, the inclusion of disabled actors and artists can bring lived experience rather than distant research to the table and facilitate the kind of responsible art Maxfield imagines. But a willingness to tell stories that are about disabled people for their own sake, rather than about disability per se, would be an even more welcome change.
As people begin to develop a renewed interest in where their food comes from, many young people and urbanites are seeking out agricultural lifestyles, giving up desk jobs for tractors and field work. But it’s difficult to kickstart a profitable farm, especially as a primary career.
A new initiative in Virginia is striving to help these new farmers—even while encouraging them not to quit their day job. Created through a partnership between Virginia Tech, the Virginia Cooperative Extension’s Loudoun Office, and the Loudoun Department of Economic Development, the new program targets Loudoun County residents who are launching second careers in agriculture. Program coordinator Jim Hilleary explained to the Washington Post:
‘Across the nation, there’s this recognition that there is a new type of farmer emerging, and that is generally a second-career farmer,’ he said. ‘Virginia Tech realized that, and they drafted a curriculum for beginning farmers. And what we’ve done here locally is to take part of that statewide curriculum, localize it and apply it to the residents here in Loudoun County.’
These second-career farmers, says the Post, now “account for the majority of new agricultural business owners in the county.” This model will probably continue to increase in popularity: even while a lot of mid-sized farms are suffering, there is a “growing army,” as the New York Times put it, of small local farms, springing up in response to the sprouting market for organic and locavore foods. But many of these aspiring agriculturists don’t know what they’re getting themselves into—and this is where Hilleary’s program steps in:
Rather than delve into the technical elements of farming, the worksheet urges aspiring farmers to think more broadly about what they hope to accomplish and to thoroughly consider what a new agricultural venture will demand of spouses, children and other family members.
“That’s where I’d say that this is distinct from other introduction-to-farming programs,” Hilleary said. “It doesn’t teach you how to be a swine producer; it doesn’t teach you how to raise cattle. . . . Rather, it helps you develop a mind-set for the challenges that are to come. And if people say ‘This is not for us,’ then that’s a success, because we just saved them a lot of time and money.”
Modern farming, bombarded by federal regulations and certification requirements, can be an expensive endeavor—even if you only own a small farm. Aspiring farmers need a program like Hilleary’s to help them grapple with the real costs involved in their chosen vocation.
The article reminded me of a piece I read last year about the newest generation of farmers, and how they’re faring: Narratively published a feature about married couple Dan and Kate Marsiglio, who left their teaching jobs in 2005 to start an organic farm. Though they’ve made great improvements over the years, they’ve also found farming to be more difficult than hoped:
In mainstream food magazines and agricultural journals alike, tales of city kids and hedge fund managers trading suits and ties for overalls have many forecasting a future of yeomanry in America. To be sure, new farmers remain hopeful that moment will come. But they’re also the first to report that in beginning farming, the honeymoon period is brief. It is almost a matter of course that regardless of how mentally and physically prepared a new farmer is for long, sweaty days of toil and winters of debt, farming will deliver more stress and heartache than expected.
Eight years after they launched their farm, the Marsiglios are still barely breaking even, and all thought of retirement remains in the murky unknown. Meanwhile, the gritty everyday work of farming grows more wearing with every year.
It will be interesting to see how these new second-career farmers cope with the difficulties of the modern industry—and how they’re received by more established producers in their area. Hilleary mentioned the “raised eyebrows” that these young farmers can get from veteran family farmers, even while “newcomers might have misconceptions about established, conventional farmers.” Hilleary hopes the initiative will bind both groups together: “We want to help them understand that they are tied together by common goals, and they shouldn’t allow themselves to be in categories like old versus new or organic versus conventional,” he said.
The way Americans farm is evolving: current growth represents a more decentralized mode of agriculture that looks popular and promising. It may be years or even decades before such endeavors turn into full-time work. But through initiatives like Hilleary’s, perhaps we will build a band of farmers who can confront these challenges head-on.
The world’s fast-growing elderly population faces more age-related disease, higher health costs, and fewer children to care for them than ever, while the resulting caregiver shortage puts them at an increased risk of abuse and neglect. Some medical professionals, like geriatrics professor Louise Aronson, are proposing robots as a solution to both assist overwhelmed human caregivers and replace those guilty of mistreatment, as “most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.”
Aronson’s robotic geriatrics are no fantasy but an existing solution in places like Japan, which has the world’s grayest population and the economic resources to make $100,000, yard-tall robots feasible. Yet Japan’s relationship with robots shows that making robot caregivers cheaper might not make them any more successful. Japan’s elderly have rejected the robots, asking instead for humans. The only robots with modest success among Japanese elderly have imitated pets, providing limited social engagement rather than medical care and companionship—tasks still preferably assigned to human caregivers.
As Japan shows, the robot caregiver solution does not fail on economic or technological grounds, where boundaries are largely surmountable with time. Rather, turning an intimate job like geriatrics into an automated service reflects a misunderstanding of the profession at hand, which requires both emotional and ethical investment in patients.
Caitrin Nicol Keiper, countering David Levy’s Love and Sex with Robots, explained that such encouragement of human-robot intimacy stems from a misunderstanding of the human as mere biochemical machine. The caregiver shortage does not merely stem from a lack of medical aides to perform mechanical tasks, but also an absence of loving companions who ensure the experience of disability and old age is not a solitary one. These robots, after all, are often explicitly designed to counter the negative health effects of loneliness.
But that loneliness has been cemented in a medical and legal culture that is guided above all else by the principle of individual bodily autonomy. Advance directives and living wills allow patients to lay out their medical decisions ahead of time, discouraging the real-time participation of family members or other caregivers in the medical lives of the elderly. As Leon Kass, then chairman of the President’s Council on Bioethics, reflected in a 2005 report on geriatrics, “Living wills make autonomy and self-determination the primary values at a time of life when one is no longer autonomous or self-determining, and when what one needs is loyal and loving care.”
This cultural reluctance to participate communally in the care of the elderly often expresses itself as avoiding the “burdening” of loved ones. But as Gilbert Meilaender asked in 1991, “Is this not in large measure what it means to belong to a family: to burden each other and to find, almost miraculously, that others are willing, even happy, to carry such burdens?” He continued, “I have tried, subject to my limits and weaknesses, to teach that lesson to my children. Perhaps I will teach it best when I am a burden to them in my dying.”
As Meilaender and Kass suggest, the central problem is not medical incompetence, or even moral indifference, but a break in generational relationships. Neither the elderly nor their medical professionals want them to be dependent on robots rather than people, but, especially among the childless or otherwise socially disconnected, the aged may have little choice. As such, the inhumanity of Aronson’s geriatrics may not be a particularly medical problem, but a social problem. As long as we culturally insist on autonomy, we will technologically insist on automation.
I wanted to get off Facebook—to deactivate my account entirely. It seemed like such a waste of time, a distraction from real-life interactions and relationships. If Facebook no longer pulled at my attention, I thought, perhaps I would be a better friend, and invest in those people who are truly closest to me. I could invest time in the place I live, rather than in a virtual world full of acquaintances and people I barely know.
So I decided to take a break from Facebook to see whether my social relationships would improve or change at all. Before logging off, I let friends know that I’d be away, and gave them my email. It wasn’t really a full “unplugging” experiment, since I use the computer so much for work. But it meant that, in the evenings, I spent much less time online. I occasionally worked on writing projects or wrote emails—but not much else. I wrote long email letters or made phone calls to my closest friends and family members. I continued to use Instagram, but tried to send direct-message pictures to my family, rather than simply using the “public” feature. I marked friends’ birthdays on my personal calendar before deactivating my Facebook account, and tried to email or call them on their birthdays, rather than leaving the prosaic “Happy birthday!” wall post.
Leaving Facebook showed me how much I do, in fact, rely on it to fill moments of pause. When I sat in the car, waited for the metro, or stood in line, social media was the first thing I turned to. Without Facebook, my fingers itched. What else could I browse—Instagram? Twitter? Anything to feel connected. Anything to pass the time. I realized how frenzied and information-obsessed my brain can become, and made an effort to cultivate quiet, and to appreciate the moments of stillness.
However, despite these advantages, my month away from Facebook wasn’t a time of great awakening, social revitalization, or spiritual growth. Though it did serve a few good purposes, there were also strong disadvantages to leaving Facebook—primarily, the sense of disconnection from family and friends. The world didn’t pause its social media usage when I did: friends would ask me why I hadn’t responded to messages or event invites, whether I had seen this picture or that link. I realized how much I relied on Facebook to get updates from more distant family members or old friends in my home state; though Instagram provided some information, I hadn’t thought about the fact that relationships, engagements, weddings, and graduations are primarily announced (and commented upon) via Facebook.
I began to evaluate my experiment. The thing I craved most about non-Facebook interactions was their closeness, their intimacy and depth. I was tired of the self-aggrandizing statuses, the public displays of affection between couples (or even friends) that would have been more meaningful, at least in my eyes, if shared privately.
But Facebook also does one thing very well—better, perhaps, than any other social media tool: it enables us to form and cultivate little platoons. And this, I realized, was what I had missed in the last month. Though I was able to invest in individual friendships, my lack of Facebook presence made it harder to host events, or to check up on the groups of people who meant so much in my life. I realized that not everyone checks email with the same rapidity I do—but everyone checks their Facebook notifications. I tried to coordinate a dinner with friends via email a few days ago—and only received one reply over the course of the next 48 hours. Then I created a Facebook event, and invited all the same people. They RSVP’d within 10-15 minutes.
Twitter has revolutionized the way constituents interact with their representatives in Congress. Will Wikipedia be the next interactive legislative platform?
If developer and Library of Congress employee Ed Summers’ ideas take off, maybe so. This week, Summers created a bot called @congressedits that tweets out anonymous Wikipedia edits from congressional IP addresses. The account has mainly uncovered the innocuous and the banal, from noting the availability of Choco Tacos in the Rayburn building to correcting grammar in the article for Step Up 3D. However, the account also enables the public to see when staffers vandalize or rewrite politicians’ biographical information, whether updating word choice (Justin Amash is an “attorney,” not a “corporate lawyer”) or casually defaming likely opposition (activist Kesha Rogers is a “Trotskyist”).
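At bottom, the bot’s trick is straightforward IP matching: Wikipedia attributes anonymous edits to the editor’s IP address, so a bot only needs to check each anonymous edit against the network ranges assigned to Congress. A minimal sketch of that check in Python, with the caveat that the CIDR blocks below are illustrative assumptions, not @congressedits’ actual watch list:

```python
import ipaddress

# Hypothetical IP ranges for the House and Senate. The real bot
# maintains its own list; these blocks are assumptions for the sketch.
CONGRESS_NETWORKS = [
    ipaddress.ip_network("143.231.0.0/16"),  # illustrative House range
    ipaddress.ip_network("156.33.0.0/16"),   # illustrative Senate range
]

def is_congressional_edit(editor_ip):
    """Return True if an anonymous edit's IP falls in a watched range.

    Wikipedia records the editor's IP address for anonymous edits,
    which is what makes this kind of matching possible at all.
    """
    ip = ipaddress.ip_address(editor_ip)
    return any(ip in network for network in CONGRESS_NETWORKS)
```

A check like this would then be wired to Wikipedia’s public feed of recent changes, with any matching edits posted to Twitter.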
Rogue political Wikipedia edits have been controversial before. In 2006, staffers for politicians from Rep. Marty Meehan to Sen. Joe Biden were publicly called out for removing criticism from their bosses’ pages. Wikipedia’s usual crowd of vigilant editors reversed the few problematic edits they found after investigating other congressional activity on the site, but left most edits intact as intended “in good faith.”
But Summers’ project is not a series of overt agendas connected to individual staffers. Its real-time, eerily specific feed of edits streams activity from the entire congressional workforce in what Megan Garber has called a project of “ambient accountability.” As in the earlier controversies, Wikipedia can again serve as a proxy for political fights happening elsewhere, but it can also serve as a window into everyday life on the Hill at its most bizarre and inconsequential.
There is a significant online audience for Capitol Hill quirkiness. Buzzfeed’s Benny Johnson more or less makes a living off it, while members of Congress have social media interns delving into the ever more surreal with legislative doge memes. The @congressedits project could appeal to both easily amused political junkies and to accountability advocates who see it as an opportunity to expand access to the people that they say should be the government’s most visible and engaged group.
A mother lets her daughter play in the park unaccompanied. A mother leaves her son in the car for a few moments, while she runs into a store to buy headphones. Many parents would consider these actions to be unwise—but are they criminal? According to recent stories in the news, yes.
In the first case, Debra Harrell, a resident of North Augusta, South Carolina, allowed her daughter to play at the park while she worked at a local McDonald’s. She gave her daughter a cell phone. Lenore Skenazy noted in Reason that the park is “so popular that at any given time there are about 40 kids frolicking … there were swings, a ‘splash pad,’ and shade.” But on her second day at the park, an adult asked the girl where her mother was. When the little girl said she was working, the adult called the cops, who declared the girl “abandoned,” and arrested Harrell.
The second story was shared by Kim Brooks in Salon back in June: her four-year-old son insisted on accompanying her to the grocery store for a quick errand, but then refused to go inside the store. After noting that it was a “mild, overcast, 50-degree day,” and that there were several cars nearby, Brooks agreed and quickly ran into the store. Unbeknownst to her, an adult nearby saw her leave her son, recorded the whole incident on his phone, watched Brooks return and drive away, and then called the police. The police issued a warrant for her arrest.
These are only a few recent stories in which parents have faced arrest after leaving their children unsupervised. As Radley Balko notes at the Washington Post, these incidents seem to signal the “increasing criminalization of just about everything and the use of the criminal justice system to address problems that were once (and better) handled by families, friends, communities and other institutions.”
This latter point hearkens back to Robert Nisbet’s excellent book The Quest for Community: Nisbet predicted that, in a society without strong private associations, the State would take their place—assuming the role of the church, the schoolroom, and the family, asserting a “primacy of claim” upon our children. “It is hard to overlook the fact,” he wrote, “that the State and politics have become suffused by qualities formerly inherent only in the family or the church.” In this world, the term “nanny state” takes on a very literal meaning.
On May 31, a bicyclist found a young girl, stabbed 19 times with a five-inch blade, after she crawled out of the Wisconsin woods and dragged herself toward the nearest road.
The perpetrators, Morgan Geyser and Anissa Weier, both 12 and classmates of the victim, are being charged as adults with attempted murder. The stabbing was an attempt to pay tribute to Slenderman, a faceless, betentacled, and besuited character from Internet lore. The two girls were caught along the road after they had committed the crime, apparently walking to an imaginary rendezvous point with Slenderman.
They first discovered Slenderman on the Creepypasta Wiki, which is where most of the current fan fiction resides. They reportedly planned the attack for months, finally luring the victim into the woods with a game of hide-and-seek.
The Slenderman myth is one of the first pieces of popular lore truly born of the Internet, beginning online and accruing momentum and backstory as people photoshopped and blogged Slenderman into existence. The rapid spread of his legend surprised even Eric Knudsen, Slenderman’s creator. He said in an interview that he didn’t expect it to move beyond the Something Awful forum where he posted the first Slenderman image:
It was amazing to see people create their own little part of Slender Man in order to perpetuate his existance [sic]. … I found it interesting to watch as sort of an accelerated version of an urban legend.
When he created Slenderman, he said that he wanted something “whose motivations can barely be comprehended,” and that caused “general unease and terror in a general population.” Here he pinpoints the power of Slenderman: the omnipotence of the unknown. The Internet has, after all, given us the ability to know every imaginable aspect of our world, but not to belong to it.
Vice chalks the violence up to poorly managed hormones and small-town boredom. Mytheos Holt at R Street asks whether their violence could have been prevented by addressing mental illness openly. Farhad Manjoo at the New York Times makes Slenderman’s faceless horror emblematic of the “selfie” age—an attempt to use fear to push against compulsive, narcissistic self-documentation.
Collin Barnes, Assistant Professor of Psychology at Hillsdale College, mentioned in an e-mail that the need to find meaning and community, to craft an identity, could have driven the crime: “Killing in the name of Slenderman and investing oneself in religious rituals are not entirely different and may reflect latent fears we have about being utterly alone in the universe.”
In the mythos, Slenderman’s victims are always alone, and radically estranged from help or support. There is no intelligible pattern or motive to the victimization. In contrast to the bogeymen of “organic” folklore, he has no distinct vendetta against transgressors of social or moral norms.
The two girls were not driven to violence by their encounter with Slenderman. He was emblematic of faceless, nameless dread: of complete alienation. As Kathleen Hale pointed out at Vice, girls of their age are experiencing radical emotional isolation, and possible mental health issues and public school social dynamics only exacerbate the problem. In a way, the killing was a gesture of solidarity, an attempt to connect with someone or something when faced with being “utterly alone.” Slenderman is the demon of a suburban age.
As more Americans than ever tuned in to watch the World Cup over the past few weeks, the American media’s quadrennial habit of analyzing soccer’s place in the country raged on. Cranky right-wingers, embodied by Ann Coulter’s now-infamous ramble, put forth common criticisms of soccer: it has an insufficient gender gap, allows scoreless ties, prohibits using hands, is foreign and liberal, prioritizes team effort over individual prowess, and constitutes all-around “moral decay.” In the face of such resistance, soccer fans like Daniel Drezner proposed simply changing the rules of the game to assuage his fellow Americans’ sense of fairness, rather than asking Americans to adapt to the game’s delightful capriciousness like the rest of the world. Meanwhile, Peter Beinart and other commentators on the left celebrated the “soccer coalition” of youth, immigrants, and liberals—the same one that elected President Obama, he recalled—proving that Americanness is not contingent upon the white working-class culture idealized by Coulter. In short, Americans loudly participated in a soccer nation’s rite of passage by reading domestic politics into the sport every chance they could get.
Though the debate largely focused on whether soccer could possibly have a place in accepted American identity, this process of political theorizing and contention mirrors the way soccer has been absorbed into other cultures throughout the sport’s history. Americans who chafe at the sport’s European origins join the long tradition of our southern neighbors who idealized the “creolization” of soccer while forming national identity after the Latin American revolutions of the 19th century. In Argentina, soccer was the manifestation of the “melting pot” where Italian and Spanish immigrants took over British cultural imports, a process crafted in the pages of the magazine El Gráfico. In Brazil, soccer was a place to reconcile racial tensions by highlighting diversity as a source of American ingenuity and creativity, superior to formulaic and homogenous European play. The contemporary American media’s ongoing narratives of soccer are similar not just in their obsessive nature, but in the diverse subcultures they are trying to weld together.
Soccer has always come with class connotations that plague burgeoning sports cultures. The prevailing image of soccer, both in the U.S. now and in Latin America a century ago, is of white urban and suburban elites who use the sport to moralize. Soccer was formalized in British public schools in the 19th century in order to promote Victorian morality and “muscular Christianity”—as well as to simply keep boys busy—but it largely came to the Americas as the pastime of the “gentleman-athletes” among British immigrants to South America. The “amateur era” of early 20th century soccer parallels the American “soccer mom” values that encourage teamwork and cooperation in children before moving on to more individualist sports as adults, and it is just as widespread and pejoratively viewed as its predecessor. As American pundits critique this intrusion of foreign collectivist values, they are echoing, among others, 1920s and 1930s Argentines calling for “our own style” (“la nuestra”) to counter and replace British beliefs.
The romantic comedy film is either dying or dead, according to writers at The Atlantic and The Daily Beast. After watching “They Came Together,” a romantic comedy that parodies the genre, the Beast’s Andrew Romano argued that the romcom’s heyday has come to an end, due to shifts in audience targeting and gender preferences, as well as money problems and failed branding.
The Atlantic’s Megan Garber thinks that romcom plots no longer address the “way we live now,” in the age of online dating and delayed marriages. Christopher Orr made a similar argument last year: he said romcom plots are too outdated for today’s society—we no longer have taboos against premarital sex, nor do we have societal class divisions. However, Noah Millman wrote a rebuttal to Orr’s argument, reminding us that the romantic movies of 1940 weren’t popular or good “because there were arranged marriages (there were none) and it isn’t because women couldn’t get a divorce (all the female protagonists of the movies I cited are or get divorced) or couldn’t have sex … they work because they go internal, into character, to find both the conflict and its resolution, and they work because they don’t isolate the world of romantic love from the rest of the social universe.”
The troubles of the modern romcom may have monetary or societal threads, but the genre also has a problem with simplification and homogeneity that we can’t ignore. Most romantic comedies follow either a star-crossed lovers plot or a “You’ve Got Mail” storyline—the man and woman hate each other, or would never marry each other, but then slowly find out they’re perfect for each other (examples: “When Harry Met Sally,” “How to Lose a Guy in 10 Days,” “Sweet Home Alabama,” “The Switch,” “27 Dresses,” et cetera).
It’s true that both these types are rooted in classics—the star-crossed lovers are classic “Romeo and Juliet,” while the we-hate-each-other-no-wait-we-love-each-other plot is usually some reincarnation of Pride and Prejudice. But both these classics had greater complexity and depth than most of their modern manifestations. Both told stories of class and family, prejudice and tradition, virtue and vice. Their supporting characters were just as important as their leads—we couldn’t have Pride and Prejudice without Mr. Collins or Mrs. Bennet. Modern films don’t usually give us this rich, colorful tapestry.
As NPR’s Linda Holmes wrote in response to Orr last year, “The best [films] often have other elements, elements of real sadness, like the terrific and underappreciated Hugh Grant-Julia Roberts vehicle Notting Hill, for instance, which touches on not artificial obstacles, but on the way people in difficult circumstances sometimes hurt each other’s feelings and let each other down, not to mention supporting characters struggling with disability and fertility issues.” In contrast, says Holmes, “The [films] that take nothing seriously except dating … rarely work, and they’ve rarely ever worked, because love in life is usually mixed up with all kinds of other nasty stuff.” Millman agrees:
The romantic comedies that suck are the ones that adhere to a formula that none of the great romantic comedies of yore followed. They try to make both protagonists as “relatable” as possible by making them into everymen and everywomen – thereby depriving them of any interest. They focus overwhelmingly on the romance, treating the rest of the universe as so much “business” for low comedy, rather than exploring other themes that might reflect productively on the romance at the center. And they gin up artificial external obstacles instead of persuasive, character-driven internal ones.
Yet these are the films that we keep getting, with increasing regularity. They all tell familiar stories, with familiar conflicts—the plots may change somewhat, but they never surprise us. And romcoms aren’t the only films that suffer from this problem: modern cinema is teeming with stereotypical superhero stories, underdog sports stories, exploding/smashing action films, and their like. We can usually guess exactly how the plot will unfold in the first few minutes of the film.
People increasingly want different, surprising stories—and we’re starting to see some that are new, interesting, and complex. Many explore themes of friendship, rather than romance. Disney created an international sensation when they released “Frozen”—and perhaps one of its greatest surprises was that it was mainly about sisterhood, rather than the usual romance. “The Grand Budapest Hotel,” “Saving Mr. Banks,” “The Monuments Men,” “Gravity”: all were primarily stories of friendship, trust, camaraderie, sacrifice. In the realm of television, many people love BBC’s new “Sherlock” series, and the friendship between Benedict Cumberbatch’s Sherlock and Martin Freeman’s Watson.
We may be tired of films that tell the same old story—but that doesn’t mean we should get rid of the romcom, or the dystopian film, or the action movie. We just need to reconsider the stories we tell, the plots we create, and bring innovation and complexity to these genres once more. We need stories that allow tragedy in their endings, stories with real protagonists and real villains, stories that reflect the complexity and confusion of life. If we get romcom movies that reflect these things, then perhaps the romcom will be revitalized. But for now, the genre feels much like a broken record. It isn’t that we’ve run out of stories to tell; we’ve just told the same story too many times.
“Call me Ishmael,” the opening line to one of America’s greatest works of literature, looks very different when rendered in emoji characters.
The Library of Congress accepted data engineer Fred Benenson’s pictorial rewrite of Moby Dick, titled “Emoji Dick,” after Michael Neubert advocated its addition:
“[The book] takes a known classic of literature and converts it to a construct of our modern way of communicating, making possible an investigation of the question, ‘is it still a literary classic when written in a kind of smart phone based pidgin language?’”
Pictorial communication is becoming increasingly widespread as Emoji, “the more elaborate cousins of emoticons,” get deployed incessantly across social media. There is a forthcoming communication app called Emoji.li that uses exclusively emoji characters to communicate. One Tumblr account offers emotional analysis based on emoji use. There is even an art and design show dedicated to the pictorial system.
Hannah Rosenfield has examined the linguistic possibilities (and impossibilities) of emoji. Emoji has yet to develop syntax or grammar: changes in the placement of emoji within a “sentence” fail to convey any significant change in meaning. It shares many characteristics with pidgin languages, which often arise when two groups without significant linguistic common ground must communicate. “Pidgins typically have a limited vocabulary and lack nuance, a developed syntax, and the ability to convey register,” Rosenfield writes.
But just because pictorial systems are not viable as means of communication in themselves doesn’t mean they can’t enrich language—and this goes beyond emoji to other forms of visual media. For example, teachers are using graphic novels to aid reading comprehension in schools, and relying more on digital and visual media to engage young children with written texts.
The key, though, is that these are used to supplement—not replace—traditional language. Camilla Nelson writes that “good transmedia narratives do not merely repeat across media platforms. Rather, each text offers a way to supplement, analyse and evaluate the rest—a bit like pieces of a puzzle that need to be put together through the use of imagination and problem solving.” Indeed, “Emoji Dick” is primarily an exercise in translation: Benenson accompanied the strings of emoji “sentences” with the original text, in order to provide context for the reader and intelligibility to the characters.
Picture languages give us an opportunity to emphasize or complement the language in which we think and speak, whether in education, to bridge a language gap, or in casual conversation. “Emoji, for all its detractors, is about embellishment and added context,” writes Rhodri Marsden for The Independent. “[I]t’s about in-jokes, playfulness, of emphasising praise or cushioning the impact of criticism, of provoking thought and exercising the imagination.”
The idea that pictorial systems could be used to engage language—to streamline it, or to give it further nuance—has been around since before the 1500s. The problem is that visual representation is just that: representation. It refers to something concrete; it points, as it were, to something else. Advanced languages derive meaning from context, from the relationships between the words themselves as well as the associations they evoke. Emoji and other visual media are highly contingent, useless without at least some explanation.
Useful? Perhaps, but by no means representative of the eclipse of the written word.