As providence would have it, the cast recording of the acclaimed new Broadway musical Hamilton dropped the same week as the Pope’s visit. “Immigrants: we get the job done!” This line—spoken by the Marquis de Lafayette and Alexander Hamilton, before a high-five and a revolutionary victory—played through my earphones on my ride in to the papal parade on Washington’s National Mall, and came to mind throughout the morning as I joined fellow pilgrims to welcome another beloved son of immigrants to our city.
The crowd along the parade route was jovial and kaleidoscopic. A few men and women religious bookended the route—kids asked for selfies when they saw the habits, and the friars and sisters happily obliged—but the overwhelming majority of the pilgrims were families. They brought wide-eyed infants, smiling grandmothers with canes, and an abundance of parade paraphernalia. Mostly, they brought an exuberance so persistent it kept them standing and cheering for four hours in hopes of seeing the Pope speed by for about thirty seconds.
The Pope would later bless and kiss those wide-eyed infants—let the children come to me, we were all caught thinking, watching what Ross Douthat has called Francis’ “living Christian iconography.” The crowd would make space when the elderly and disabled at the front of the line needed a place to rest, putting their coats down for them to sit. Somehow, even more flags and signs appeared in the crowd after they passed security, many purchased on the spot for whatever change they had.
A man in a bright yellow “I <3 Pope Francis” t-shirt wore the Peruvian flag as a cape, and put his toddler on his shoulders. The little boy waved a Vatican flag and squealed “¡Papa!” at every white media truck that went by. While his family chattered in Spanish, the teenagers in front of me livetweeted in Vietnamese and a young black woman next to them prayed the Rosary in English. We erupted into occasional olés together, we prayed together, but mostly we jabbered together, exchanging stories of pilgrimage and commute. Everyone went silent momentarily when the jumbotron outside the Washington Monument began broadcasting the White House papal welcome, as if out of liturgical habit. The entire scene was aggressively Catholic.
Even more so, and in a way more touchingly so, it was fiercely American. In Washington, D.C., long one of the proudest and most prominent black-majority cities on earth, a black Catholic choir welcomed the Pope to the White House with the country’s beloved contribution to Christian music: gospel. The crowd clapped when the Pope introduced himself as the son of immigrants—“somos también inmigrantes, Papa,” the Peruvian father shouted, “we are immigrants too.” An enormous group of people who otherwise seemed to have little in common started tearing up when the Pope said “God bless America.”
After the parade, the Pope opened his address to Congress by thanking them for inviting him—another “son of this great continent”—to speak “in the land of the free and the home of the brave.” He invoked the spiritual strengths of four prominent American Christians: the freedom-fighting spirit of Abraham Lincoln, the inclusive dream of Martin Luther King, Jr., the passionate social-justice activism of Dorothy Day, and the contemplative dialoguing of Thomas Merton. He thanked the legislature for this opportunity to present “the richness of your cultural heritage, of the spirit of the American people.”
Later that day, he celebrated a Mass at the country’s patronal church, the Basilica of the National Shrine of the Immaculate Conception. The Mass featured the linguistic and liturgical diversity of American Catholicism, from the multilingual petitionary prayers to the inclusion of 18th-century Baroque Mexican music alongside contemporary compositions and traditional Latin hymns.
In Catholicism, this kind of diversity in unity is often described using the biblical image of the Church as the body of Christ: we are many parts, but one body. Throughout the Pope’s visit, the phrase that came to my mind was e pluribus unum.
The Pope’s visit has been accompanied by a media frenzy in search of the “Francis Effect.” The Pope inspires good feelings of inclusiveness and nonpartisan identity—so what? Will he change doctrine? Will he revitalize the Catholic left? Will he get more people to go to church? The discussion strikes me as well-intentioned, but misplaced. The “Francis Effect” as a projection for the future was never going to be reasonably predictable. The “Francis Effect” as an atmosphere of belonging and openness is already present and visible. And for American Catholics, if not for the anxious media, that might just suffice.
In his remarks to Pope Francis at Independence Hall, Archbishop Charles Chaput of Philadelphia name-dropped Alexander Hamilton in praising the contributions of immigrants to the nation’s civic fabric. It may have been a coincidence that Hamilton was mentioned just as so many people are discovering his story for the first time through the popular musical, but it’s wildly appropriate. Hamilton tells the story of the unlikeliest Founding Father using hip-hop and rap, a cast almost entirely composed of people of color, and a lens on the American Revolution that portrays the war as a gritty struggle rather than a fated victory for liberty. It tells an old story in new voices, and in so doing it enables people who have often felt excluded from the narrative of American history to feel it is their birthright. It shows Americans of color that the traditions of their nation belong to them, and what it looks like when they say so.
Pope Francis is doing something similar for American Catholics. He is telling an old story (his politics, however vaguely controversial their stylistic expression, are unsurprisingly traditional and his preaching is about two thousand years old), but throughout his visit he made a point of using new voices—American voices. He canonized an American saint, recalled American heroes, enjoyed American music and American liturgy, and exalted American ingenuity and uniqueness. His visit showed American Catholics what it looks like to be rooted in both of their traditions, and to make that rootedness visible. Francis does for Catholicism what Lin-Manuel Miranda, the creator of Hamilton, does for American history: he invites a new audience to take part.
That doesn’t mean the Pope’s visit will produce a surge in American Mass-attendance numbers any more than Hamilton will cause history Ph.D. applications to spike. What it does mean is that for a few days, American Catholics got to look at themselves represented in their fullness and their beauty, and they got to celebrate that. For many of us—who spend so much time writing about the next religious-liberty fight or fitting in at our secular schools or feeling out of place in a country that doesn’t always feel like it counts us among its own—that will be enough.
Catherine Addington is an editorial fellow at The American Conservative.
While bemoaning hard-liners’ ideologically motivated opposition to the recent U.S. shift in policy on Cuba, Daniel Larison recently pointed out one of the most underrated features of the change: “Normalization with Cuba also removes one of the irritants in our relationships with the rest of Latin America, which can only make our dealings with the rest of our hemisphere more constructive.”
As one senior Obama administration official told the New York Times, the U.S. policy in Cuba was a primary obstacle in diplomatic negotiations in the region. “In the last Summit of the Americas, instead of talking about things we wanted to focus on — exports, counternarcotics — we spent a lot of time talking about U.S.-Cuba policy. A key factor with any bilateral meeting is, ‘When are you going to change your Cuba policy?’”
Why does Cuba matter so much to Latin America? Certainly, just as Cuba is an ideological touchstone in the United States, where it has represented one of the last vestiges of full-throated Cold War-era Communism, the island nation has a powerful symbolic presence in Latin America as well. Politically, the U.S. policy against Cuba has played as just another episode in the long history of American interventionism in its “sphere of influence,” particularly on the upstart new left.
President Obama has been known to joke about the outdated nature of the Cuba dispute. When asked in 2012 about the prospect of allowing Cuba’s reintegration into the Organization of American States, he said, “Sometimes those controversies date back to before I was born. And sometimes I feel as if … we’re caught in a time warp … going back to the 1950s, gunboat diplomacy, and Yankees, and the Cold War and this and that.” For the U.S., the policy has often been shrugged off as admittedly outdated but ultimately in line with American values surrounding human rights and democracy.
But for Latin Americans, the Cuba embargo evokes a visceral living memory of the United States’ destructive interventionism in the region. Decades of U.S. military intervention followed the Cuban Revolution of 1959, aimed at preventing Communist regimes from taking hold elsewhere. In the process, the U.S. helped overthrow democratically elected governments and install military dictatorships in Guatemala (1954), Brazil (1964), and Chile (1973); supported military repression in El Salvador (1980) and rebel groups in Nicaragua (1981-1987); invaded the Dominican Republic (1965), Grenada (1983), and Panama (1989); and operated the U.S. Army School of the Americas (1946-present), which trained many Latin American military leaders who went on to become human rights violators in their home countries.
In short, though the focus has since shifted from fighting Communism to fighting drugs—and, to some extent, to fighting terrorism—the idea that the U.S. policy toward Cuba was instituted, and has been maintained, because of an American commitment to democracy in the region is not seen as credible in Latin American eyes. The pattern is now so established that U.S. involvement is suspected in every disturbance, as was the case with the 2002 coup attempt in Venezuela and the 2009 coup in Honduras.
The embargo is even more specifically entangled in the U.S. pattern of economic, not just military, intervention to the south—though the two are often not all that separate. (For instance, the first major U.S.-backed coup in the region, that of Guatemala in 1954, was largely motivated by the impact of labor reforms on the profits of the United Fruit Company.) The populist governments of the new left rose to power across the region in reaction against the “Washington Consensus” neoliberal policies of the 1990s, which they characterize as an imposition by a U.S.-controlled International Monetary Fund on Latin America. Though the United States cannot be reasonably blamed for every economic crisis in Latin American history, the country’s domineering past has given it a lasting reputation for manipulation.
Though many Latin Americans would share most Americans’ opinions of Raúl and Fidel Castro’s leadership, they also bring these histories of military and economic intervention to bear in interpreting the Cuba dispute. As such, U.S. policy there is rarely seen as either concerned with or effective at advancing human rights, but rather as part of the country’s longstanding pattern of wielding the “big stick” to quash resistance, no matter the effect on its poorer and weaker neighbors. The American punishment of Cuba has only contributed to the island’s image as a heroic nation standing up against an imperialist behemoth, which has ultimately distracted from the human rights violations committed under the Castros’ leadership.
Many regional diplomatic opportunities will present themselves post-normalization. For instance, one of Cuba’s major regional influences has been its support for the Revolutionary Armed Forces of Colombia (FARC). FARC, a guerrilla movement that has been in armed conflict with the Colombian government for decades and is designated as a terrorist group by the U.S. government, is currently in peace talks with Colombia—the top U.S. ally in the region. The current ceasefire, particularly if transformed into an armistice, could be spurred on if the U.S. had influence on both sides of the conflict. For Latin American leaders previously disillusioned by Washington’s isolation from the region, normalization with Cuba is a major sign that the U.S. is willing to step up as a reasonable leader.
Restoring ties with Cuba will not be a panacea for all of the United States’ diplomatic problems with Latin America. Even at the most recent Summit of the Americas, held this past April in Panama City, conversation was derailed by a new political distraction: the executive order in which President Obama referred to Venezuela as “an unusual and extraordinary threat to the national security and foreign policy of the United States.” As Cuba’s economic patron and the Caribbean’s main source of oil, Venezuela is hugely influential in the region despite its recent political struggles and economic devastation. But perhaps just as crucially, it is the standard bearer for the leftist Bolivarian movement—so named for the revolutionary leader Simón Bolívar, who has become a symbol for Latin American and Caribbean solidarity—and the executive order was seen to be right out of the paternalist playbook Latin American countries thought the U.S. was using Cuba normalization to leave behind. The dramatic speeches at that Summit (“The Yankees do not change!” exclaimed Nicaraguan president Daniel Ortega) reflect how much unnecessary havoc such ideological missteps can wreak, and how many new obstacles they can create for hemispheric diplomacy.
It matters that the United States gets this diplomatic transition right, not least because the leftist bloc led by ever-poorer Venezuela (and often symbolized by Cuba) is ailing, and its allies are in the market for new friends. Cuba and the Caribbean as a whole are increasingly unable to rely on Venezuelan oil, and they are looking to diversify their economies by engaging with U.S. businesses—even under the embargo, the U.S. has become Cuba’s fifth-largest trading partner. A successful thaw will prove valuable: Cuba will be a relatively untapped market if the blockade is removed, and the U.S. needs to increase its influence in the Caribbean given the region’s growing problems with drug and human trafficking to the U.S.
But the thaw will also go a long way toward making the United States a partner that the rest of the Americas can trust again. Earlier this summer, Chas Freeman urged the United States “to rediscover noncoercive instruments of statecraft that can persuade others that they can benefit by working with us rather than against us.” The Cuban thaw is a major opportunity to do just that, and on a larger scale than it may first appear.
Catherine Addington is an editorial fellow at The American Conservative.
What happened to Sandra Bland? This is a question many Americans (particularly, and rightfully, black American women) are asking, following the death of a young civil rights activist in a Waller County, Texas, jail cell two weeks ago. Was any of it—her arrest after a traffic stop by state trooper Brian Encinia, her three-day detention, the neglect that resulted in her alleged suicide by hanging—legal?
Bland’s death remains under investigation, but the dashboard camera footage of her interaction with Encinia shows the escalation of a warning for the failure to signal into the forceful detention of an epileptic woman. Surprisingly, much of what occurred between Encinia and Bland appears to have been legal, if imprudent. Encinia’s tactics could be called “brutality-adjacent policing,” in which the standard for behavior is the bare legal minimum rather than actively good policing.
The phrase comes from Leah Libresco’s reflection on California’s “affirmative consent” law, which Libresco called an attempt to minimize “rape-adjacent sex”—the “gray area” into which many rape cases fall, in which one partner believes he or she is behaving appropriately, while the other partner experiences the interaction as rape. Libresco continues, “Rape-adjacent sex gives cover to serial predators, who are believed to be the main driver of sexual assaults on campus, since the kind of sex they’re trying to have doesn’t look very different from the sex everyone else is already having.”
Brutality-adjacent policing works in much the same way: the officer believes he or she is behaving appropriately, while the civilian experiences the interaction as brutality. This way of policing gives cover to bad actors, because the kind of policing they are exercising doesn’t look very different from the policing everyone else, “good cops” and “bad cops” alike, is exercising.
For instance, the major legal quandary in Encinia’s arrest of Bland arises from the point at which he asked her to put out her cigarette. When she refused, he ordered her to exit her vehicle. The former order was in fact a request, which Bland was within her rights to decline, but the latter was a command that Encinia had the legal authority to enforce. Encinia was not required to justify his command, and the impression that he gave the command in service of his ego rather than his safety is legally irrelevant—though socially damaging. When a police officer chooses to force compliance (giving a command) rather than encourage cooperation (writing warnings, ignoring frustration, explaining good safety habits), he or she not only wastes police resources on inconsequential issues, but also breaks down trust between police and civilians in a very tangible way. It is legal policing, but it is not good policing. (It is worth noting that most relevant authorities, including the local district attorney and mayor, seem to agree. The State Department of Public Safety is currently conducting an inquiry, having stated that the trooper violated unspecified protocol.)
The obvious, but faulty, assumption is that the civilian side of the encounter could easily be improved. After all, it is fairly clear that Sandra Bland was arrested for an “unwritten crime … contempt of cop,” a phenomenon more or less avoidable by deference to the police. It is true that Bland could have responded more “respectfully” (that is, quietly) to Encinia’s actions. In fact, Orin Kerr explains, the way the law is structured would have pressured her to do so: it is often impossible for a citizen to know when an officer is giving a lawful order, either because of unfamiliar laws or uncontrollable circumstances such as unknown evidence. “Faced with this, a citizen’s cautious strategy might be just to do everything the officer says regardless of whether the officer’s command is lawful.”
In reality, Charles M. Blow is right to say that the parameters of “respectable behavior” vary from person to person, and, crucially, those parameters are informed by race, gender, and circumstance. Everything from Encinia’s mood that day to his socialized expectations of a black female civilian is beyond prediction, and those factors proved just as crucial for Bland’s fate. Her safest option, legally and otherwise, would have been total deference to the police, but this is unreasonable to demand in a democratic society—not to mention impractical, since police, not civilians, are the ones trained to anticipate and resolve conflict.
That training is where concrete progress can be made. Former police officer and legal scholar Seth Stoughton has called the tragic police-civilian encounters propelling the #BlackLivesMatter movement “symptoms of a systemic problem: a police culture that trains and encourages officers to adopt a ‘warrior mindset.’” The “warrior” mindset trains officers to prioritize personal survival in each encounter, as if in response to a uniformly hostile environment. The highly publicized murders of police officers, such as recent killings in California and New York, illustrate the very real nature of the danger faced by police officers in their day-to-day work.
But when danger and defense are the only factors informing a police officer’s default attitude toward civilians, it is easy to lose sight of the power dynamic at play. As Lonnae O’Neal has reported, police officers are, by the numbers, significantly more of a threat to civilians than civilians are to them: according to the FBI, 51 officers were killed in the line of duty in 2014, while according to a Washington Post database of police shootings, police have shot and killed 544 people this year thus far. (“Of that number, 76 people were unarmed or had a toy weapon and at least 34 of them were driving a vehicle,” O’Neal notes.)
Without minimizing the occasional necessity of a “warrior” mindset in the face of life-or-death situations, Stoughton proposes an alternate model for everyday use and emphasis in modern policing: that of the community “guardian.” The “guardian” takes a long-term view of how to achieve the goal of community protection, prioritizing “service over crime-fighting.” Training officers to be “guardians,” Stoughton suggests, could involve encouraging more non-enforcement encounters in order to build relationships in the community; training in de-escalating conflict and tactical restraint; and incorporating analysis of police-civilian encounters gone wrong in early training.
In short, better policing requires the recognition that Brian Encinia’s actions would have been devastating even if Sandra Bland were alive. It requires hearing the #BlackLivesMatter movement’s call not just to protect black lives from police brutality, but black living from brutality-adjacent policing. It requires raising the necessary but insufficient standard of legality to a stronger standard of guardianship, and shifting the focus from officer safety to community protection.
Catherine Addington is an editorial fellow for The American Conservative.
“What does it mean when you never see yourself in the reading you are provided at school? Does it mean you don’t exist, you don’t count, you are not important?”
James Blasingame, a professor at Arizona State University, wonders. He works at the intersection of two genres constantly on the defensive: Native American literature, forever overshadowed by the Little House on the Prairies of the world, and young-adult literature, trashed regularly as a non-entity invented to market Twilight. He shrugs off the young-adult literature naysayers. “Scholars, teachers, librarians, people who actually work with young people every day and know what reading can mean in their lives do not question its value.”
But with regard to Native American literature, he worries that students in Arizona are being shortchanged. Native students, who make up a sizeable share of the state’s population, rarely encounter characters like themselves in the books they read at school. This lack of representation, Blasingame says, “robs the young reader of the power of literature to do what books and reading are so good at: to provide readers with a means for making sense of the world and their place in it.”
To help meet that need for representation, Blasingame teamed up with fellow ASU professor and renowned poet Simon Ortiz (Acoma Pueblo) to design and implement a Native American literature curriculum at Westwood High School in Mesa, Arizona. The pair worked with Timothy San Pedro (a resident of the Flathead Indian Reservation), then an ASU doctoral student and now a professor at Ohio State University, and Westwood literature teacher Andrea Box, as well as other scholars, teachers, and tribal members to craft the course. Besides the curriculum’s cornerstone―The Absolutely True Diary of a Part-Time Indian by Sherman Alexie (Spokane and Coeur d’Alene)―Box chooses from a recommended reading list of approximately 70 works by Native American young-adult authors like Cynthia Leitich Smith (Muscogee), N. Scott Momaday (Kiowa), Joseph Bruchac (Abenaki), and Joy Harjo (Muscogee). Box now teaches the course every term as part of the school’s regular literature offerings.
Blasingame outlines a concern common among educators in an increasingly diverse America. Teachers must dig out from under distantly crafted standards to re-engage with cultural traditions long neglected in American life. They seek to understand their students’ contexts not as a barrier to be overcome on the way to assimilation, as in the past, but as a vital asset in expanding what American literature―more literally, the American story―really is.
“I think most Native American literature is unreadable by the vast majority of Native Americans,” Sherman Alexie said in a 2001 interview with the Iowa Review. “If it’s not accessible to Indians, then how can it be Native American literature?”
Educator Debbie Reese (Nambe Pueblo) has similarly remarked at her blog, American Indians in Children’s Literature, that appropriate literature is in low supply. An overwhelming majority of the books published for young people about Native Americans every year are historical fiction taking place on tribal lands, for instance, despite the fact that there are about five million Native Americans living today and 61 percent of them reside in cities. In the American imagination, the Native population is confined not just to physical reservations but to the historical reservation of the past.
Non-Native authors have also confined Native Americans to a more nebulous cultural reservation, allocating them only two typical representations: the noble savage and the romantic mystic. The menacing, violent Native Americans of Little House on the Prairie―the phrase “the only good Indian is a dead Indian” appears in the children’s classic three times―became by the 1980s the grunting, naïve “chief” of The Indian in the Cupboard. The suspiciously environmentalist Native Americans of Dear America books, “retold” myths, and Disney movies have spawned a commercialized “native” spirituality that offers little in the way of relevance to Native Americans today. Taken together, the noble savage and romantic mystic tropes, alongside a sports mascot or two, are often the only images of Native Americans that young people, Native and non-Native alike, get.
Kenan Metzger and Wendy Kelleher, researchers in curriculum development and teacher training, explained the significance of relatable contemporary characters for students in a 2008 article for The ALAN Review:
Over-generalized, arrested forms of representation created by sports mascots, Thanksgiving and Columbus myths, non-Indian literature, and Hollywood movies perpetuate the perception that American Indian/Alaska Native/Hawaiian Native people still look and act the same as they may have hundreds or even thousands of years ago. Imagine an Indian child, watching Dances with Wolves or reading James Fenimore Cooper’s The Last of the Mohicans looking in a mirror. His mother tells him he is an Indian, but he sees neither feathers, nor war paint, nor other accoutrements associated with the Indians he sees in these visual and literary media. In his confusion, he may ask himself, ‘Are you a real Indian?’
Metzger and Kelleher represent the burgeoning academic community seeking to promote literature by and for Native Americans, not just about them. Most popular books about Native American subjects―from history to beliefs to ethnography―are inaccurate accounts by white authors. Sherman Alexie has characterized these books as “colonial literature,” akin to any outsiders’ view of their conquests.
Alexie’s description is politically charged―but so is the literary market of non-Native writers who presently dominate the Native American narrative. In his book God Is Red, Vine Deloria Jr. wrote about this displacement of contemporary Native voices by voices out of the past in the context of the 1960s pan-Indian civil rights movement:
it seemed as if every book on modern Indians was promptly buried by a book on the ‘real’ Indians of yesteryear. The public overwhelmingly turned to Bury My Heart at Wounded Knee and The Memoirs of Chief Red Fox to avoid the accusations made by modern Indians in The Tortured Americans and Custer Died for Your Sins. The Red Fox book alone sold more copies than the two modern books. … Each takeover of government property only served to spur further sales of Brown’s review [Bury My Heart at Wounded Knee] of the wars of the 1860s. While the Indian reading public was in tune with The New Indians, The Tortured Americans, The Unjust Society … and other books written by contemporary Indians on modern problems, the reading non-Indian public began frantically searching for additional books on the Indians of the last century.
Until the 1960s, literary representations of Native Americans were found mainly in “as told to” autobiographies, a form in which non-Native writers convey personal narratives supposedly transcribed from real-life encounters. The genre is exemplified by John G. Neihardt’s 1932 Black Elk Speaks, in which an amateur white historian unsurprisingly had problems accurately relating the life of an Oglala Lakota medicine man.
“The truth is, the ‘as-told-to’ lives (even that of the primogenitor Nick Black Elk) are the margins of Indian history, not the center of it,” scholar Elizabeth Cook-Lynn (Crow Creek Lakota) wrote in Anti-Indianism in Modern America. “The reason for that is they are based in sociology, not the literature of the people.”
Some are based in even less. One of the most popular of these white-written “Native autobiographies,” The Education of Little Tree, was in fact a literary hoax perpetrated by Asa Carter, a Klansman and a speechwriter for George Wallace. Masquerading as a Cherokee memoirist named Forrest Carter, he told of the lessons he learned as a young boy with his (fictional) Indian grandparents in the Appalachian Mountains, all generally related to harmonious living and personal independence. Though Native scholars pointed out the book’s stereotypes, invention of Cherokee words and customs, and overall romanticized picture of Native life, only the author’s objectionable personal history pulled it off Oprah’s shelf of recommended reading and pushed it from the New York Times’s nonfiction list onto its fiction counterpart. The masking of white supremacist beliefs with romanticized Indian narrative had never been so literal, and the non-Native literary world’s indifference to accuracy and relevance had never been so blatant.
The book also exhibited a subtler problem. The Education of Little Tree is about a supposedly “Native” approach to life, a sort of idealized indigeneity that serves as a vehicle to pass down a vague libertarianism to a new generation. It does not try to be a book about Native Americans, but about Native Americanness, or at least the way Carter perceived it.
Metzger, a professor at the University of Missouri–Kansas City, says that avoiding stereotypes is just the beginning of a culturally relevant curriculum. “As young-adult literature develops and as it represents more diversity―ethnic diversity, but also gender identity, disability, and other things students are dealing with―we hope to see, as educators and researchers, that we have books that are about young people’s experience wherein characters just happen to be diverse,” he explains. “The book isn’t about Native Americanness, it’s about a teenager or young person who happens to be Native American. That’s a more helpful kind of representation in literature for young people because they can identify with the characters.”
S.D. Nelson (Standing Rock Sioux), a children’s author with 28 years of teaching experience in public schools, emphasizes contemporariness above all. “Here in the 21st century we still perpetuate this romantic revision of Native Americans with feathers in their hair, riding painted horses … and that’s all fine and well at traditional ceremonies, and we want to keep those alive, but along with that we need to recognize that time continues to move forward. As an author I am speaking to young people today. One of the important things I hope to pass on to young readers, young Native American readers, is a sense of hope and a sense of their importance in the world and in America―today.”
That’s a heavy sense of purpose to attach to a genre. But the Native American novel has always had intense implications: the first Native American novelist was John Rollin Ridge, a Cherokee leader in the mid-1800s who called his philosophy of assimilation for survival simply “civilization.” Ridge saw the Native adoption of the English-language novel as a way to preserve Native storytelling in a world dominated by European narrative.
While today the Native American novel is no longer necessarily an assimilation technique, it is not purely entertainment either. As children’s author Christopher Myers has put it, stories are not just mirrors of life as young people live it―they are maps for the road ahead of them.
Sherman Alexie has embraced this role as “cartographer” in defending his books’ occasionally violent material. As he wrote in the Wall Street Journal in 2011:
When some cultural critics fret about the ‘ever-more-appalling’ YA books, they aren’t trying to protect African-American teens forced to walk through metal detectors on their way into school. Or Mexican-American teens enduring the culturally schizophrenic life of being American citizens and the children of illegal immigrants. Or Native American teens growing up on Third World reservations. … I write books for teenagers because I vividly remember what it felt like to be a teen facing everyday and epic dangers. I don’t write to protect them. It’s far too late for that. I write to give them weapons—in the form of words and ideas—that will help them fight their monsters. I write in blood because I remember what it felt like to bleed.
The contemporary Native American young-adult novel is not just about Native storytelling traditions or European colonial art forms, then. It is about finding a way to speak to the contemporary reality of American minority youths. Perhaps, as James Blasingame suggests, the key is simply to hand over the microphone. The most important feature of Andrea Box’s Arizona curriculum, Blasingame says, is “a frame of mind that human history has been recorded most often from only one perspective, and literature is also written from one perspective, often an inaccurate and biased one. If we would have the true story of a nation of people, we must hear their story from them.”
Catherine Addington is a TAC editorial assistant.
“Whatever happened to Michael Brown in the moments before he died has become secondary to what the response to his death has revealed,” Jelani Cobb wrote in The New Yorker. Since a police officer shot and killed the unarmed black teenager in Ferguson, Missouri, on August 9, the shooting—and the vigils, looting, volunteer cleanup, peaceful protests, and overwhelmingly disproportionate police response—has become a national microcosm of urban racial injustice and what is being called the “militarization” of police forces.
Deadspin’s Greg Howard summarized the tensions at play:
If officers are soldiers, it follows that the neighborhoods they patrol are battlefields. And if they’re working battlefields, it follows that the population is the enemy. And because of correlations, rooted in historical injustice, between crime and income and income and race, the enemy population will consist largely of people of color, and especially of black men. Throughout the country, police officers are capturing, imprisoning, and killing black males at a ridiculous clip, waging a very literal war on people like Michael Brown.
That war is enabled by the military-grade weaponry available to police since the 1990s under the Department of Defense’s “section 1033” program, which is administered by the Defense Logistics Agency. As John Payne explained earlier this year, in Rise of the Warrior Cop journalist Radley Balko makes the case that the Founders would have seen that kind of militarized police as an unconstitutional standing army. Balko wrote, “Just before the American Revolution, it wasn’t the stationing of British troops in the colonies that irked patriots in Boston and Virginia; it was England’s decision to use the troops for everyday law enforcement.”
Indeed, to many, the scenes of tear gas seemed more like images from Iraq and Afghanistan than suburban St. Louis (even though tear gas is illegal in warfare, if legal domestically). Jamelle Bouie, writing for Slate, was among them:
This would be one thing if Ferguson were in a war zone, or if protesters were violent—although, it’s hard to imagine a situation in which American police would need a mine-resistant vehicle. But an episode of looting aside, Ferguson police aren’t dealing with any particular danger. Nonetheless, they’re treating demonstrators—and Ferguson residents writ large—as a population to occupy, not citizens to protect.
Veterans spoke out against “militarized” police action in Ferguson on Twitter. Jason Fritz observed, “As someone who studies policing in conflict, what’s going on in Ferguson isn’t just immoral and probably unconstitutional, it’s ineffective.”
Adam Weinstein put it more bluntly at Gawker. “The U.S. armed forces exercise more discipline and compassion than these cops.” He cites the first page of the Army’s field manual on civil disturbances, which emphasizes proportional, nuanced responses. “Inciting a crowd to violence or a greater intensity of violence by using severe enforcement tactics must be avoided.” The manual also notes that “highly emotional social and economic issues” inform such disturbances, and that “it takes a small (seemingly minor) incident” to set off violence “if community relations with authorities are strained.”
Unlike members of the military, who are trained in nonviolent options for conflict resolution, police officers often lack such training. Bonnie Kristian expounded on this failure and the reasons behind systematic police brutality earlier this summer, noting also that cops are rarely held accountable for abuse. “Only one out of every three accused cops are convicted nationwide, while the conviction rate for civilians is literally double that.”
The entrenched racial injustice behind Michael Brown’s death will be difficult to root out, as it has been over centuries of American history. But the decades of policy that allowed for police abuse of Brown, and his town’s peaceful protesters, could be reversed—and if the public outcry over Ferguson is anything to judge by, Americans will be keeping a closer eye on the police in the coming years.
As the Islamic State forces northern Iraq’s religious minorities—Christians, Shia Muslims, and Yazidis—to flee, convert, or die, the United States has begun dropping humanitarian aid as well as bombs in an effort to stave off genocide, despite many Americans’ trepidation at getting involved in Iraq again. But many Iraqi-Americans, especially members of the Chaldean Catholic community, have long been protesting and praying for some kind of action.
Chaldean Catholics have a long history in the United States, but their numbers have been growing in past decades as they have fled from aggressors in Saddam Hussein’s Iraq and the Islamic State alike. The American Spectator’s Lucy Schouten recalled their exodus:
Most [U.S. refugees] joined the Chaldean Christian community in Michigan, which began in the 1870s. They had helped build the automobile industry, saving factory wages to bring family members to the land of opportunity. The Detroit community of Chaldeans now numbers 200,000 and has associations for every profession from pharmaceutics to CPAs.
The Iraqi Christians were an enterprising group and established smaller communities in San Diego, Chicago, Arizona, and Las Vegas, while maintaining ties to faith, family, and their home country community.
That community continued to grow and flourish even after the war ended, although, as Schouten put it, “most Americans would not now call Detroit a land of opportunity.”
Now, the community has come together to support family and friends across the ocean. The federal building in downtown Detroit has seen several rallies over the past two weeks. An August 1 procession saw a thousand Iraqi-Americans pray for peace while carrying a large cross around Mother of God Chaldean Church in Southfield. The Detroit Chaldean community has raised tens of thousands of dollars for humanitarian aid in Iraq through parish collections and a new online diocesan initiative, HelpIraq.org.
Detroit Chaldeans have partnered with their smaller, but just as active, brethren in California to raise awareness. San Diego’s “Little Baghdad” neighborhood in El Cajón is home to the second largest Iraqi-American community, including vibrant activists from protest rappers to visiting Iraq-based nuns. Many members of the community have family and friends suffering back in Iraq, and local doctor John Kasawa has noted an uptick in anxiety and depression in the neighborhood as the violence takes a toll “on the collective conscious.”
Little Baghdad’s most visible leader is local entrepreneur and Ending Genocide in Iraq spokesman Mark Arabo, who had been working with Congress and the administration on anti-genocide action and humanitarian aid for months before news of the airstrikes came last week. He now plans to go to the United Nations, where he hopes to convince leaders to give asylum to the nearly half-million newly displaced Iraqi Christians. Meanwhile, some are already preparing for new arrivals in San Diego.
Arabo has described the decision the U.S. faces in Iraq as “an honorable predicament.” In considering the extent of military intervention, the U.S. is “specially positioned to be viewed as a failure for foreign inaction, and ‘imperialist’ for our willingness to act,” he said. “I tend to view our foreign role as a nation of great power, blessed with a moral obligation to enact change on a global scale. This, I must stress, is a blessing.”
Not all members of the Chaldean community agree. “We do not want to see American [sic] involved in a third war in Iraq, Gulf War 3.0. We don’t want that,” Bishop Bawai Soro of the San Diego Chaldean diocese told local news. “At the same time, we want ISIS to be stopped.”
The Internet is no longer in English, even if the coding on its back end still largely is. That’s what MIT’s Ethan Zuckerman has concluded as online language diversity has increased over the past decade, from Facebook posts in Afrikaans to tweets in Zulu. But the typographical design world that brings online text to life has lagged behind, producing endless variations on the Latin script used in English (like the documentary-inspiring Helvetica and the font you’re reading right now, Georgia) but far fewer for other languages.
The result is an increasingly bilingual, but visually clunky, Internet.
Google is looking to streamline that with its Noto project (so named for its goal, “no tofu,” a reference to the tiny squares that pop up for unsupported scripts). A new, free font family that “aims to support all the world’s languages” for use in web pages and URLs, Noto already supports over 100 scripts (and the 600 written languages they facilitate) from Cherokee to cuneiform. Some of the project’s efforts have been applauded, such as its rejection of Han unification, which detrimentally conflates chunks of Chinese, Japanese, and Korean scripts.
Noto’s inclusion of endangered languages like Inuktitut (an indigenous Canadian language with under 40,000 speakers) and Tlingit (an Alaska Native language with about 1,000 speakers) has also won praise. But since Noto has thus far failed to tackle far more widely used languages, some are questioning Google’s priorities. For instance, Noto cannot yet be used to type in Oriya, an Indian language with over 30 million speakers, or the nastaliq script used by Urdu speakers.
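The “tofu” problem itself is simple to model. In this toy sketch (an illustration only, not Google’s actual rendering pipeline), a font’s coverage is just a set of Unicode codepoints; any character outside that set renders as the familiar empty box. The `latin_only` coverage range and the sample Oriya string are hypothetical examples.

```python
# "Tofu": the empty box (U+25A1) a renderer shows for unsupported characters.
TOFU = "\u25a1"

def render(text, coverage):
    """Replace characters the font cannot display with tofu boxes."""
    return "".join(ch if ord(ch) in coverage else TOFU for ch in text)

def tofu_ratio(text, coverage):
    """Fraction of characters in `text` that the font cannot display."""
    return render(text, coverage).count(TOFU) / max(len(text), 1)

# A hypothetical Latin-only font: ASCII plus the Latin supplements.
latin_only = set(range(0x20, 0x250))

oriya_sample = "\u0b13\u0b21\u0b3f\u0b06"  # Oriya-script text

print(render("Hello", latin_only))        # fully covered, renders as-is
print(render(oriya_sample, latin_only))   # every character becomes tofu
```

A project like Noto succeeds when `tofu_ratio` is zero for every script a reader might encounter, which is why coverage gaps in widely used languages draw criticism.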
Ali Eteraz, a Pakistani-American writer campaigning for the online inclusion of nastaliq, has summarized concerns with Noto by saying, “Language is the building block of people’s identities all around the world, and Google is basically saying that, ‘We got this.’ …Whether that strikes you as hubris or whether it’s noble depends on whether they pull it off.”
When it comes to hubris, Google can learn from its own past exploits: Kevin Roose has recounted the company’s struggle to design a suitable universal font for its Android products. The main challenge, Roose notes, is that “unlike most innovations in computing, typeface design doesn’t succeed by grabbing your eye.” Writing all the world’s languages in one style is challenging enough, but doing it in a way that looks good across the Internet—no matter what size screen, or with what resolution, it is accessed—compounds the design challenge.
Noto won’t turn the web’s words uniform overnight. But it is a sign of a permanently multilingual Internet, and the challenges of creating a truly global product.
Argentina has defaulted eight times in its 200-year history, the latest coming on Thursday after a bizarre legal saga that left Argentine sovereign debt in the hands of a Manhattan federal district judge.
Judge Thomas Griesa ruled that Argentina could not make its next payment on restructured debt from its 2001 default—money that is already sitting in the New York bank in charge of mediating the payments—until it included another set of bondholders in that exchange. That second set of bondholders, representing only seven percent of Argentina’s creditors, consists of hedge funds represented by Elliott Management’s NML Capital. The funds bought Argentine bonds as the country’s economy spiraled downwards, and they rejected the restructuring, holding out for the bonds’ full original value.
The Supreme Court refused to review Griesa’s decision, while also permitting bondholders to issue subpoenas to locate Argentine assets abroad. Negotiations failed, Argentina refused to pay, and the country defaulted on its debt last Thursday at midnight. Argentina’s standing in international debt markets, not to mention its domestic economy, is so bad that very little has actually happened as a consequence.
Since its 2001 default, Argentina has been experiencing inflation, recession, and exclusion from international capital markets. None of that has changed, though the deterioration is slightly accelerating. Argentines, many of whom lost their savings 13 years ago, have long turned to the U.S. dollar as the under-the-table currency of choice, as Argentina’s own peso is worth less and less every year. Last week’s default is practically a laughing matter in the Argentine papers, perennially full of bad economic news. The ever-opportunistic administration of President Cristina Fernández de Kirchner has railed against American injustice rather than making any attempt to minimize the harm.
The case’s international ramifications are even less dramatic, despite concerns over the future of creditors’ rights in debt markets. Peter Eavis and Alexandra Stevenson suggested that “the Argentine dispute will make it much harder for indebted countries to cut their obligations to manageable levels,” since investors now have a greater incentive to demand better deals from countries in crisis. But Hung Tran suggests such worries may be overblown due to the very limited and particular nature of this dispute. In fact, the likeliest outcome is mainly an international study session. After seeing such a small economic problem threaten to cause such a large one in Argentina, countries will likely look to clean up and clarify pari passu clauses, the legal mandate for “equal treatment” in debt repayments that caused the Argentine problem in the first place.
To that end, Nobel laureate Joseph Stiglitz called for a global system of debt restructuring. Calling the hedge funds “vultures”—as the Argentine press has—Stiglitz said that the investors had no interests in the country other than to profit from its demise, and that should have consequences.
If you use social media or have a smartphone, chances are you’ve encountered facial recognition technology. FRT allows computers to recognize pixel patterns that suggest human faces, letting everything from selfie-taking cameras to mugshot-filled databases determine when they are looking at one. Even though it is fairly commonplace, some would rather avoid it, leading to one journalist’s experiment with clownish black-and-white makeup on the streets of D.C.
Robinson Meyer, an associate editor at The Atlantic, tried a camouflage technique called computer-vision dazzle, or “CV dazzle,” which uses face paint and hairstyling to stymie FRT. The makeup deceives FRT by obscuring the eyes, symmetry, and the nose bridge, among other features that characterize the face. “Here was a technology that confounded computers with light and color,” Meyer reflected. But as he learned, CV dazzle is far from a guarantor of privacy. “The very thing that makes you invisible to computers makes you glaringly obvious to other humans.”
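The intuition behind CV dazzle is easy to demonstrate. The deliberately simplified sketch below is not any real detector’s algorithm—production systems use far richer features—but it shows why asymmetric face paint can throw off a detector that leans on the rough left-right symmetry of human faces. The pixel grids and the symmetry heuristic are invented for illustration.

```python
def symmetry_score(image):
    """Score in [0, 1]: how closely each row of a grayscale image
    (a list of rows of values in 0-255) mirrors itself left-to-right."""
    diff = total = 0
    for row in image:
        for a, b in zip(row, reversed(row)):
            diff += abs(a - b)   # mismatch between mirrored pixels
            total += 255         # maximum possible mismatch per pair
    return 1 - diff / total

# A symmetric pixel pattern standing in for an unpainted face...
plain_face = [[30, 200, 90, 200, 30]] * 4
# ...and the same pattern with asymmetric "dazzle" blocks painted on.
dazzled = [[30, 200, 90, 10, 255]] * 4

print(symmetry_score(plain_face))  # perfectly symmetric: 1.0
print(symmetry_score(dazzled))     # dazzle paint drags the score down
```

A symmetry-based detector with a high threshold would accept the first pattern and reject the second—the same effect, in miniature, that the makeup has on real systems.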
Nancy Szokan alarmingly theorized that Meyer’s camouflage experiment is “something a terrorist might want to do”: escaping government surveillance. But in reality, Meyer’s experiment mainly resulted in evading Facebook auto-tagging, a seemingly tame privacy threat. FRT is routinely employed in the private sector beyond social media, from catching cheating gamblers to providing security at large sporting events like the Super Bowl. Now, its capacity to estimate age has stirred interest among insurance companies, while advertisers are exploring its real-time commercial applications.
But when it comes to FRT falling into the wrong hands, concerns are generally directed at the authorities rather than at private actors. Though FRT has existed in its most basic form since the 1960s, it has blossomed under the biometrics industry fueled by the wars in Afghanistan and Iraq, where the need to identify local populations induced the military development of portable biometrics systems. The government has enthusiastically inserted FRT into more routine use with increasing success: it shows up alongside other biometrics at airports and is now being introduced into police detective use. As Sameer Padania noted at Witness.org, “Law enforcement and security services particularly like FRT, as it does not require consent or knowledge of the subject being processed – unlike finger-printing, iris-scanning or similar biometric technologies, this can be done at a distance.”
It’s not the technology that is a major concern, Padania went on.
What’s new is this: this technology, which used to be accessible only to a few agencies, is now being used voluntarily, and unwittingly by millions of us through our use of social media. Our willingness to tag people in photos, and rapid advances in computer vision and object recognition have accelerated the use of FRT. We share so many images now that Facebook has, as this chart shows, the largest photo collection in history.
This voluntary engagement with FRT, and its intersection with cloud computing, is where change is beginning to occur. Jared Keller explained that the public’s increasing tech savviness opens the doors to “criminal, fraudulent, or extralegal ends” that are “as alarming as the potential for government abuse.” When private citizens organized a Google group to combine FRT with public records in hopes of identifying London rioters, they illustrated a new model of digital vigilantism.
The question, Keller says, is not how to escape FRT, whether donning masks or makeup. The question is how to live with it. “No matter what you choose to do or not to do, your life exists in the cloud. …Your digital life is becoming inseparable from your analog one. You may be able to change your name or scrub your social networking profiles to throw off the trail of digital footprints you’ve inadvertently scattered across the Internet, but you can’t change your face. And the cloud never forgets a face.”
CV dazzle may not become haute couture overnight. But surrendering selfies to Uncle Sam might, without anyone noticing.
After a popular online campaign to legalize cellphone unlocking, which allows a consumer to change the settings on a phone in order to use it on a different wireless network, the president is about to sign the Unlocking Consumer Choice and Wireless Competition Act into law. It will legalize unlocking until the Librarian of Congress, who administers the Digital Millennium Copyright Act, reviews exemptions again next year.
The law is a significant victory for copyright reform activists like Derek Khanna, whose 2012 memo for the Republican Study Committee on how current copyright law stifles the free market set the tone for reform (after it got him fired). Khanna has called the ban on cellphone unlocking a denial of “a fundamental tenet of property rights; which is the ability to modify your own property.”
I spoke to Khanna to learn more about where the copyright reform movement will go from here.
A Democratic president standing up for consumer choice certainly represents a sort of conversational victory, but the law itself is something of a temporary fix. Are you happy with how the bill turned out?
Yes. It’s a short-term bill—this needed to be addressed urgently—but at the same time, Congress is considering other long-term fixes. To that end, there are ongoing copyright hearings in the House Judiciary Committee.
The tech field is fast-paced, while American government is purposefully slow-moving by design and by politics. How can Congress ensure the laws are keeping pace with the technologies they regulate?
The particular problem is that Washington hears only one narrow perspective on these issues. A lot of what I call “the forces of the status quo” have lobbyists that make their voices heard. Entrepreneurs and smaller business owners aren’t really being represented, so in Washington they almost don’t know what their regulations are preventing.
As you noted in your cover story for TAC earlier this summer, Republicans only took action on this legislation after the White House’s endorsement, which in turn followed a public outpouring of support. Will it always take that kind of massive push to get congressional Republicans to move forward on regulatory reform?
I hope not. I hope Republicans take the initiative. Our whole campaign here was based on the free market, which Republicans run on across the country. But they’re one step behind on technology, which is a shame, because that’s where the modern economy is.
But they’re starting to turn around on this. Congressmen like Thomas Massie and Jason Chaffetz are real leaders on this issue. The Young Guns Network, which represents Kevin McCarthy, Paul Ryan, and Eric Cantor, included a section on regulatory reform in their “Room to Grow” report. It goes out of its way to say we need wholesale copyright reform and makes a very enthusiastic plea for IP reform. It even directly cites my RSC memo. So these things take a long time, but there are real successes.
Tech policy is a straightforward way to win over the youth vote, but Republicans don’t seem to have noticed. Do you think that disconnect is purely generational? Can young conservatives just hope the party grows out of it?
I don’t know if it’s generational, but I know that it’s changing.
According to the College Republicans National Committee, in 2012, “young people simply felt the GOP had nothing to offer.” Kristen Soltis Anderson concluded, “There is a brand. …And it’s that we’re not in the 21st century.” That’s pretty stark. But the thing that polls best among young people is talking about innovation and technology. This isn’t just good policy, it’s good politics.
Those congressional offices never knew what hit them with SOPA/PIPA. For some people that was a seminal experience, the first time they had ever engaged in the political process and were able to make a change. And now with unlocking we have the first time an online campaign was able to actually introduce legislation. There is a whole generation of people who see these policies as really stifling innovation.
What’s next for copyright reformers?
There is a lot of work to be done in copyright reform still. How long should copyright terms be? The founders set it at 14 years and today it can be over 120. That’s kind of ridiculous in a world where every text, every tweet, every Facebook post is copyrighted longer than anyone who writes them will ever live.
The phone unlocking bill is great. But other issues are very closely related and if Congress doesn’t act soon, we’re going to see the ‘Internet of things’ collapse. A great example is that the next Keurig coffee machine is expected to have a digital chip technology built in such that you can’t use any other coffee pod. It would be a felony to use any other coffee pod with it! The technology would be used to stifle competition in the coffee market. This is just the tip of the iceberg because the benefits for existing businesses are overwhelming.
Any final thoughts?
There has been a sea change in policymaking on copyright on the right since 2012; it’s almost impossible to find any conservatives, other than lobbyists for industry, opposed to substantial reform. The conservative position is that we need to restore our founding principles on copyright.
From the teenage romance between an amputee and an oxygen-tank user in the box-office success The Fault in Our Stars to the conjoined sisters at the circus in the Kennedy Center’s Side Show, representations of disability and difference have been prominent of late. But as Christopher Shinn noted yesterday at The Atlantic, the recent plethora of disabled characters also has another thing in common: they are played by able-bodied actors. Once again, Shinn said, “Pop culture’s more interested in disability as a metaphor than in disability as something that happens to real people.”
Disability is often used as a metaphor for exclusion and subsequent triumph, themes easier to swallow when an actor twitches sensitively across the stage for two hours only to walk back calmly for the curtain call. So it goes exactly in the production of “The Curious Incident of the Dog in the Night-Time” at London’s National Theatre, currently showing in cinemas worldwide before it heads to Broadway in the fall.
Based on a popular 2003 novel by Mark Haddon, “Curious Incident” is a family drama packaged as a mystery. It is seen from the perspective of a teenager named Christopher with an autistic spectrum disorder that some reviewers have compared to Asperger’s syndrome. The production uses technical elements, from cool blue lighting to projected numerical graphics to dizzying synthesized sound effects, in order to communicate the experience of sensory overload that accompanies neurological conditions like Christopher’s.
Because this manner of presentation merely informs the audience’s experience of a rather simple plot—the titular incident is a quickly resolved mystery, and most of the second act is a train ride—the play, like the book, seems to run counter to the frequent use of disability as plot obstacle and metaphor for triumph. In fact, Christopher remarks that a metaphor “is when you describe something by using a word for something that it isn’t. … I think it should be called a lie because a pig is not like a day and people do not have skeletons in their cupboards.”
But in the program note for the stage adaptation of “Curious Incident,” Haddon backtracked. Jane Shilling wrote in her review for The Telegraph, “His 15-year-old protagonist, Christopher, exhibits a constellation of quirks that are recognisably on the autistic spectrum, but his behavioural problems are also a metaphor for the solitariness of the human condition. ‘Curious is not really about Christopher,’ Haddon concludes. ‘It’s about us.’”
In navigating the ethical implications of work like Haddon’s, blogger Mary Maxfield suggested that the problem is not using disability as a metaphor, but using disability as a metaphor for the wrong thing. Christopher, a beloved son integrated into his family and school structures, does not fit Shilling’s metaphor for solitariness. Likewise, Haddon’s editorial “us,” unambiguously separated from people with physical and neurological differences, would have the value of certain lived experiences dependent on their contribution to a grander “human experience.”
As Shinn asserts, the inclusion of disabled actors and artists can bring lived experience rather than distant research to the table and facilitate the kind of responsible art Maxfield imagines. But a willingness to tell stories that are about disabled people for their own sake, rather than about disability per se, would be an even more welcome change.
The world’s fast-growing elderly population faces more age-related disease, higher health costs, and fewer children to care for them than ever, while the resulting caregiver shortage puts them at an increased risk of abuse and neglect. Some medical professionals, like geriatrics professor Louise Aronson, are proposing robots as a solution to both assist overwhelmed human caregivers and replace those guilty of mistreatment, as “most of us do not live in an ideal world, and a reliable robot may be better than an unreliable or abusive person, or than no one at all.”
Aronson’s robotic geriatrics are no fantasy but an existing solution in places like Japan, which has the world’s grayest population and the economic resources to make $100,000, yard-tall robots feasible. Yet Japan’s relationship with robots shows that making robot caregivers cheaper might not make them any more successful. Japan’s elderly have rejected the robots, asking instead for humans. The only robots with modest success among Japanese elderly have imitated pets, providing limited social engagement rather than medical care and companionship—tasks still preferably assigned to human caregivers.
As Japan shows, the robot caregiver solution does not fail on economic or technological grounds, where obstacles are largely surmountable with time. Rather, turning an intimate profession like geriatrics into an automated service reflects a misunderstanding of the work at hand, which requires both emotional and ethical investment in patients.
Caitrin Nicol Keiper, countering David Levy’s Love and Sex with Robots, explained that such encouragement of human-robot intimacy stems from a misunderstanding of the human as mere biochemical machine. The caregiver shortage does not merely stem from a lack of medical aides to perform mechanical tasks, but also an absence of loving companions who ensure the experience of disability and old age is not a solitary one. These robots, after all, are often explicitly designed to counter the negative health effects of loneliness.
But that loneliness has been cemented in a medical and legal culture that is guided above all else by the principle of individual bodily autonomy. Advance directives and living wills allow patients to lay out their medical decisions ahead of time, discouraging the real-time participation of family members or other caregivers in the medical lives of the elderly. As Leon Kass, then chairman of the President’s Council on Bioethics, reflected in a 2005 report on geriatrics, “Living wills make autonomy and self-determination the primary values at a time of life when one is no longer autonomous or self-determining, and when what one needs is loyal and loving care.”
This cultural reluctance to participate communally in the care of the elderly often expresses itself as avoiding the “burdening” of loved ones. But as Gilbert Meilaender asked in 1991, “Is this not in large measure what it means to belong to a family: to burden each other and to find, almost miraculously, that others are willing, even happy, to carry such burdens?” He continued, “I have tried, subject to my limits and weaknesses, to teach that lesson to my children. Perhaps I will teach it best when I am a burden to them in my dying.”
As Meilaender and Kass suggest, the central problem is not medical incompetence, or even moral indifference, but a break in generational relationships. Neither the elderly nor their medical professionals want them to be dependent on robots rather than people, but, especially among the childless or otherwise socially disconnected, the aged may have little choice. As such, the inhumanity of Aronson’s geriatrics may not be a particularly medical problem, but a social problem. As long as we culturally insist on autonomy, we will technologically insist on automation.
Twitter has revolutionized the way constituents interact with their representatives in Congress. Will Wikipedia be the next interactive legislative platform?
If developer and Library of Congress employee Ed Summers’ ideas take off, maybe so. This week, Summers created a bot called @congressedits that tweets out anonymous Wikipedia edits from congressional IP addresses. The account has mainly uncovered the innocuous and the banal, from noting the availability of Choco Tacos in the Rayburn building to correcting grammar in the article for Step Up 3D. However, the account also enables the public to see when staffers vandalize or rewrite politicians’ biographical information, whether updating word choice (Justin Amash is an “attorney,” not a “corporate lawyer”) or casually defaming likely opposition (activist Kesha Rogers is a “Trotskyist”).
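The mechanics behind a bot like @congressedits are straightforward to sketch: watch Wikipedia’s public feed of recent changes, and flag any anonymous edit whose IP address (which MediaWiki exposes in place of a username for anonymous edits) falls inside a congressional network range. A minimal illustration in Python follows; the CIDR blocks and the edit-record fields are simplified stand-ins for the real feed, not Summers’ actual implementation:

```python
import ipaddress

# Illustrative CIDR blocks standing in for the congressional
# network ranges the real bot monitors.
CONGRESS_RANGES = [
    ipaddress.ip_network("143.231.0.0/16"),  # example: House range
    ipaddress.ip_network("156.33.0.0/16"),   # example: Senate range
]

def is_congressional(ip_string):
    """Return True if an IP address falls in a monitored range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in CONGRESS_RANGES)

def filter_edits(edits):
    """Keep only anonymous edits originating from a monitored range.
    For anonymous edits, the 'user' field holds the editor's IP."""
    return [e for e in edits
            if e.get("anonymous") and is_congressional(e["user"])]

# A simplified sample of edit records from the recent-changes feed.
sample = [
    {"title": "Step Up 3D", "user": "143.231.249.138", "anonymous": True},
    {"title": "Some Article", "user": "203.0.113.7", "anonymous": True},
    {"title": "Logged-in edit", "user": "SomeEditor", "anonymous": False},
]
flagged = filter_edits(sample)
```

Each flagged edit would then be tweeted out with a link to the diff; the filtering itself is the whole trick, which is why the project was so easy to replicate for other governments’ IP ranges.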
Rogue political Wikipedia edits have been controversial before. In 2006, staffers for politicians from Rep. Marty Meehan to Sen. Joe Biden were publicly called out for removing criticism from their bosses’ pages. Wikipedia’s usual crowd of vigilant editors reversed the few problematic edits they found after investigating other congressional activity on the site, but left most edits intact, judging them to have been made “in good faith.”
But Summers’ project is not a series of overt agendas connected to individual staffers. Its real-time, eerily specific feed of edits streams activity from the entire congressional workforce in what Megan Garber has called a project of “ambient accountability.” Like the earlier controversies, Wikipedia can yet again serve as a proxy for political fights happening elsewhere, but it can also serve as a window into everyday life on the Hill at its most bizarre and inconsequential.
There is a significant online audience for Capitol Hill quirkiness. Buzzfeed’s Benny Johnson more or less makes a living off it, while members of Congress have social media interns delving into the ever more surreal with legislative doge memes. The @congressedits project could appeal to both easily amused political junkies and to accountability advocates who see it as an opportunity to expand access to the people that they say should be the government’s most visible and engaged group.
As more Americans than ever tuned in to watch the World Cup over the past few weeks, the American media’s quadrennial habit of analyzing soccer’s place in the country raged on. Cranky right-wingers, embodied by Ann Coulter’s now-infamous ramble, put forth common criticisms of soccer: it has an insufficient gender gap, allows scoreless ties, prohibits using hands, is foreign and liberal, prioritizes team effort over individual prowess, and constitutes all-around “moral decay.” In the face of such resistance, soccer fans like Daniel Drezner proposed simply changing the rules of the game to assuage their fellow Americans’ sense of fairness, rather than asking Americans to adapt to the game’s delightful capriciousness like the rest of the world. Meanwhile, Peter Beinart and other commentators on the left celebrated the “soccer coalition” of youth, immigrants, and liberals—the same one that elected President Obama, he recalled—proving that Americanness is not contingent upon the white working-class culture idealized by Coulter. In short, Americans loudly participated in a soccer nation’s rite of passage by reading domestic politics into the sport every chance they could get.
Though the debate largely focused on whether soccer could possibly have a place in accepted American identity, this process of political theorizing and contention mirrors the way soccer has been absorbed into other cultures throughout the sport’s history. Americans who chafe at the sport’s European origins join the long tradition of our southern neighbors who idealized the “creolization” of soccer while forming national identity after the Latin American revolutions of the 19th century. In Argentina, soccer was the manifestation of the “melting pot” where Italian and Spanish immigrants took over British cultural imports, a process crafted in the pages of the magazine El Gráfico. In Brazil, soccer was a place to reconcile racial tensions by highlighting diversity as a source of American ingenuity and creativity, superior to formulaic and homogenous European play. The contemporary American media’s ongoing narratives of soccer are similar not just in their obsessive nature, but in the diverse subcultures they are trying to weld together.
Soccer has always come with class connotations that plague burgeoning sports cultures. The prevailing image of soccer, both in the U.S. now and in Latin America a century ago, is of white urban and suburban elites who use the sport to moralize. Soccer was formalized in British public schools in the 19th century in order to promote Victorian morality and “muscular Christianity”—as well as to simply keep boys busy—but it largely came to the Americas as the pastime of the “gentleman-athletes” among British immigrants to South America. The “amateur era” of early 20th-century soccer parallels the American “soccer mom” values that encourage teamwork and cooperation in children before moving on to more individualist sports as adults, and that culture is just as widespread, and just as pejoratively viewed, as its predecessor was. As American pundits critique this intrusion of foreign collectivist values, they are echoing, among others, 1920s and 1930s Argentines calling for “our own style” (“la nuestra”) to counter and replace British beliefs.
This week, seven college students and voting-rights advocates are challenging a North Carolina voting law, alleging age-based discrimination. They argue that the law, which does not permit state university IDs or out-of-state driver’s licenses as acceptable voter ID and ends a DMV pre-registration program for teenagers, violates the 26th Amendment, which enfranchised citizens 18 and over. Separately, efforts to shut down voting sites at universities are adding to complaints that the Republican-dominated state and local governments are deliberately blocking the youth vote, which turned out overwhelmingly for President Obama twice in North Carolina and nationwide.
The irony is, Republicans may be moving to depress the youth vote just when it could be starting to turn in their favor. While the millennials who comprise young voters now look to be strongly Democratic in the short term, David Leonhardt argues that today’s teenagers may grow up conservative:
In the simplest terms, the Democrats control the White House (and, for now, the Senate) at a time when the country is struggling. Economic growth has been disappointing for almost 15 years now. Most Americans think this country is on the wrong track. Our foreign policy often seems messy and complex, at best.
To Americans in their 20s and early 30s — the so-called millennials — many of these problems have their roots in George W. Bush’s presidency. But think about people who were born in 1998, the youngest eligible voters in the next presidential election. They are too young to remember much about the Bush years or the excitement surrounding the first Obama presidential campaign. They instead are coming of age with a Democratic president who often seems unable to fix the world’s problems.
As Leonhardt argues, college students and young voters in general are not inherently liberal groups. In the 1980s, Republicans dominated the youth vote: Ronald Reagan and George H.W. Bush won first-time voters, under-29 voters, and voters with some college education by large margins. Those then-young voters remain a consistently Republican constituency, lining up with Leonhardt’s argument that politics are more generational than anything. Young voters are entering the electorate while making their political allegiances in reaction to ongoing policies, forming beliefs that they will carry throughout their lives.
Legislating away unfriendly voters is rarely a productive path to long-term future success for a party seeking democratic legitimacy, and voting blocs generally aren’t courted by efforts to impede their franchise or deny their voting rights. With their gaze fixed firmly backward at their past two presidential setbacks, North Carolina Republicans and their counterparts nationwide are at risk of scoring a series of own goals.
This generation in particular could be a political opportunity ripe for Republicans’ taking. The teenagers who voted in the last election, and those entering the electorate now, are voting increasingly Republican in reaction to the current administration’s failures. A Democratic president who leans interventionist and, despite the rhetoric, has been ineffective on student debt makes for even more fertile ground for conservative alternatives. Rather than trying to inhibit the youth vote, Republicans should craft policy solutions that could serve to swing young voters to their side and take advantage of their momentum.
The ongoing Central American child migrant crisis gained the national spotlight last week when the president asked Congress for emergency funds to stem the influx. Many of the children, like other immigrants, are looking for work and education, or are trying to reunite with family. But as Ross Douthat has pointed out, the numbers are spiking in large part because the children are following smuggler-spread rumors of amnesty, possibly inspired by the mixed signals of the DREAM Act. Since smugglers make more profit trafficking children than more logistically challenging adults, the administration’s recent efforts to counter the misinformation have not gone far.
The language surrounding the crisis on the U.S. side of the border can be almost as confused, however. As the crisis made headlines, one false dichotomy dominated the rest: “Please don’t call this an immigration reform issue. This is a humanitarian crisis,” Rep. Kay Granger of Texas recently said. Refugee advocate Jennifer Podkul was quick to echo the juxtaposition. “This is not a migration issue. This is a humanitarian crisis and a foreign policy issue.”
The rush to call this anything but an immigration story is usually intended to highlight the root causes of poverty and violence in Central America. Rhetorically, it creates urgency and helps encourage a distinction between short-term solutions for children suffering at the border and long-term solutions to reform the system.
In reality, though, those are not competing frameworks. The child migration situation is both a humanitarian crisis and a migration issue, and it cannot be resolved without taking both aspects into consideration. A prime example of the importance of both priorities can be found in the motivating factor in this child migration influx that most defies easy categorization: the proliferation of gang violence in Central America.
Central American child migrants widely cite gang violence as a motivation for leaving their countries, and the gangs they flee are fundamentally tied up in the migration issue. The most prominent Central American gangs, the 18th Street Gang (“Calle 18”) and Mara Salvatrucha (“MS-13”), began among Latino youth in Los Angeles in the 1960s and the 1980s respectively, but both expanded from the United States to Central America after mass deportations following the 1996 Illegal Immigration Reform and Immigrant Responsibility Act. This migration policy decision fomented cross-border crime networks that now have an estimated 70,000-100,000 members across several countries.
The gang violence plaguing these children does not just illustrate the long-term consequences of immigration policy, but also the reason for considering this in international refugee terms. As many as 48 percent of Central American child migrants are fleeing violence in their communities, including the violence gangs perpetrate in their recruitment of adolescents. Central American minors specifically seeking international protection as refugees from persecution in the form of gang violence have won asylum in the U.S. in the past. The gangs’ sheer scope, as transnational criminal organizations and sometimes paramilitaries, has led some advocates to describe the child migrants as akin to defecting child soldiers.
Today, the United States of America commemorates 238 years of independence from the British—but an approaching bicentennial serves as a reminder that the struggle to establish sovereignty was only beginning when the Declaration of Independence was written. During the summer of 1814, the United States was in the full throes of what historians have long considered a “second war of independence.” The turning point of that war would come at the Battle of Baltimore, known to most Americans today as the inspiration for the national anthem, “The Star-Spangled Banner.”
The invading British had just burned the White House and plundered the port of Alexandria. On August 27, the Baltimore Patriot printed a letter from President James Madison, who had recently fled from the burning capital:
“On an occasion which appeals so forcibly to the proud feelings and patriotic devotion of the American people, none will forget what they owe to themselves; what they owe to their country and the high destinies which await it; what to the glory acquired by their fathers, in establishing the independence which is now to be maintained by their sons, with the augmented strength and resources with which time and Heaven have blessed them.”
The letter was addressed to the whole nation, but its exhortation to preserve the young nation’s independence was keenly felt in the notoriously anti-British city of Baltimore. The city was then known for its privateers, who plundered the cargo of British merchant ships, as well as for a series of anti-British riots during which mobs burned the offices of anti-war newspapers. Baltimore was also of great strategic importance as a busy port that connected several East Coast cities by land and sea. The city’s residents were perhaps uniquely prepared for such an invasion due to their strong local tradition of civic self-defense. After the riots, local elites had recognized the authorities’ failure to keep the crowds under control and independently organized patrols of armed citizens to assist in expanding the city’s defensive resources.
But the events of the summer of 1814 pushed the city to formalize its efforts. After news of the fall of Washington, Baltimoreans set up a Committee of Vigilance and Safety composed of the existing Maryland militia as well as civilians who served as volunteers on a rotating night patrol. Besides the night watch, the Committee was also responsible for raising funds for the local war effort, generally by collecting donations at prominent community gathering places (usually local taverns). A prescient Washingtonian wrote to Baltimore’s American and Commercial Daily Advertiser: “If the British visit Baltimore I have no doubt you will receive them in American style—we are disgraced.”
The local militia was particularly focused on unity in the face of the national military’s failure. The biggest blow to military success was the strategic blunder made by the Secretary of War, John Armstrong, who was thoroughly convinced that the British would not attack Washington. He saw the city as strategically insignificant, so he failed to prepare for its defense, which ended up drawing military resources to the region in a scrambled last-minute effort. Armstrong resigned after the defeat at Washington, but the local militia were already prepared to count on themselves for the most part. When his resignation letter made it to Baltimore, along with a full account of the disastrous battle, publishers refrained from comment and blame games. Invoking patriotism, the editor of the Baltimore Patriot proclaimed: “An enquiry must be made–when the nation is saved.” This restrained attitude continued through the city’s preparations for war, as the staff printed assurances of the city’s proper defense without giving particulars for fear of over-informing the enemy.
Chinese dissident and human rights activist Liu Xiaobo has a habit of making headlines from prison. The political reformer began his fourth prison term, this time an eleven-year sentence for “subversion,” in 2009, only to receive the Nobel Peace Prize in 2010, and now a surprising congressional move has pulled him into the most local of politics. Last week, the House Appropriations Committee approved an amendment to next year’s budget that would rename the address of the Chinese embassy in northwest D.C. to “1 Liu Xiaobo Plaza” in his honor.
David Keyes of the nonprofit Advancing Human Rights explained the position of the move’s bipartisan advocates when the proposal was initially made. As he tells it, the idea is to remind other countries that their domestic policy decisions have an international cost: “Every time the representatives of tyranny walk outside of their offices, they should be confronted with the faces and names of those whose freedom they deny. Dissidents languishing in prison must know that they are not forgotten.”
Washington street names have been political arenas before. Similar motivations led Congress to rename the address of the Soviet embassy “1 Andrei Sakharov Plaza” after a Soviet dissident and human rights activist in 1984.
Criticism from China on this latest move was to be expected: a spokeswoman from their Ministry of Foreign Affairs called the proposal a “complete farce,” while online commenters proposed renaming the address of the U.S. embassy in Beijing after Edward Snowden. But Americans are faulting the move as well. Richard Bush of the Brookings Institution, for instance, complained that the renaming’s “symbolic shaming” would not accomplish much. “Of course, what the regime did to Liu Xiaobo violated every reasonable moral standard, and this action will make some in the West feel good. But it will not speed his release by even one day.”
Yet no one claims the move is anything other than symbolic. The proposal’s sponsor, outgoing Virginia Rep. Frank Wolf, defended the move in moral language: “Renaming the street would send a clear and powerful message that the United States remains vigilant and resolute in its commitment to safeguard human rights around the globe.” The question is not whether the U.S. can force China to release Liu Xiaobo by renaming a street. Secretary of State John Kerry has already made the U.S. position on Liu’s case perfectly clear in the past. Rather, Wolf’s message-sending may be aimed in another direction entirely.
Human rights advocacy has taken a back seat as an American foreign policy priority in dealing with China. Taken in that context, the street sign proposal may be sending a message to Americans, rather than the Chinese. Naming the street of the Chinese embassy after a jailed dissident may be a small effort to suggest to Americans that human rights should be a bigger national priority. It is that agenda that should be debated, not the overdramatized foreign policy implications of a street sign.
When it comes to homelessness, many communities’ first instinct is to regulate the problem away. Making certain aspects of life on the street illegal, the approach goes, will force the homeless into city programs—or into other cities. This regulatory approach, sometimes referred to as the municipal criminalization of homelessness, includes the seizures of homeless Americans’ private property through police sweeps, laws against panhandling, and restrictions (or even bans) on sharing food with the homeless in public. These measures end up wasting money through the overincarceration of the homeless for nonviolent crimes: according to the National Coalition for the Homeless, it costs up to three times as much to keep someone in jail for one night as it does to keep someone in a shelter.
But the approach, which only deals with the visibility of homelessness and not its root causes, is also fundamentally flawed in that it tends to manifest as merely a short-term bandage for a much more complex issue. The misguided strategy is exemplified by Honolulu mayor Kirk Caldwell’s “war on homelessness,” which has quickly devolved into a “war on the homeless” by seizing the property of the homeless, banning tents in public spaces, and drafting bills to authorize the police to harass anyone sleeping in public spaces. Though it is intended to improve the local economy by boosting tourism—and a booming local economy would be beneficial to the homeless population in the long run, to be sure—this regulatory approach provides no alternatives other than exodus for the homeless population. As Leah Libresco put it, “Hawaii, more than other states, shouldn’t just try to hide their homeless, since, as an island state, they can’t pull the trick other cities have used and hand out one-way bus tickets to shunt their homeless to another city.”
That same regulating impulse on the local level is also driving up housing costs in cities across the country, likely contributing to homelessness. Scott Beyer recently illustrated how housing policy intersects with homelessness in D.C., where the public health crisis at the decrepit General Hospital shelter is contrasted with housing prices that are rising along with regulations slowing development. “Collectively, writes Cato Institute economist Randal O’Toole, these ‘planning penalties’ add $135,000 to the costs per unit in D.C. Such expenses are paid upfront by businesses, but ultimately get passed onto consumers, making the idea of owning — or even renting — housing impossible for many residents,” Beyer says. He notes that the situation is not specific to D.C. but has spread to politically similar cities like New York, San Francisco, Portland, and Seattle.
Even affordable housing requirements, meant as a regulated solution to those inflated housing costs, are handled in the same wasteful way. Josh Barro recently detailed the issue of inclusionary zoning, an attempt to increase affordable housing in New York City by offering Manhattan developers the ability to build more luxury apartments if some are allocated to lower rent levels. But while perhaps politically necessary, the strategy underperforms. According to Barro, “Inclusionary zoning generates fewer affordable housing units than a cash equivalent because luxury apartments make for an expensive form of affordable housing.”
Breathy think pieces have lauded the latest viral app Yo as “merging the physical and digital worlds” with its focus on “context-based communication.” They naively praised its simplicity: “unlike most other messaging apps, Yo doesn’t collect any personal information from users.” They shouted into the void, “Yo…I am here. Is anybody out there?”
Yo’s sole function is to allow the user to send the word “yo” to contacts. (The app calls this “zero-character communication,” previously manifested by buzzers, pagers, doorbells, and even Facebook “pokes.”) The app was born when Moshe Hogeg, CEO of the Tel Aviv company Mobli, asked his colleague, engineer Or Arbel, to design it for personal communication. The pair released their app quietly on April Fools’ Day—and had to fight to get it on the App Store because Apple initially judged Yo too insubstantial to count as a complete app.
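What “zero-character communication” means in practice is that a message carries no payload at all: every bit of information is metadata, namely who sent it and when. A toy in-memory sketch makes the point; the class and method names here are invented for illustration and have nothing to do with Yo’s actual backend:

```python
from collections import defaultdict
from dataclasses import dataclass
import time

@dataclass
class Yo:
    sender: str
    timestamp: float  # the only "content" is who sent it, and when

class YoService:
    """Toy model of a zero-character messaging service."""

    def __init__(self):
        # Each user's inbox is just a list of (sender, timestamp) records.
        self.inboxes = defaultdict(list)

    def send_yo(self, sender, recipient):
        # There is no message body to validate or store:
        # a Yo is nothing but metadata.
        self.inboxes[recipient].append(Yo(sender, time.time()))

    def inbox(self, user):
        return self.inboxes[user]

svc = YoService()
svc.send_yo("moshe", "or")   # "Yo, I'm thinking of you"
svc.send_yo("or", "moshe")   # "Yo" right back
```

Because there is no body to store or validate, the service’s entire state is a log of sender/timestamp pairs, which is also why the only personal data the real app ever held was users’ phone numbers.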
The app’s creators and investors describe its usefulness in outright comical terms: “Yos are used as verifications (‘Yo, I made it home from school’), acts of thoughtfulness (‘Yo, I’m thinking of you’) and as alerts (‘Yo, I need your help’). Hogeg’s wife, for example, will Yo him daily to let him know she loves him.” Meanwhile, critics of the app like American tech blogger Robert Scoble point out that Yo’s success was largely facilitated by an excitable media. News outlets latched onto the absurdity of the concept and the app creators’ claim that investors have committed to putting $1.2 million into Yo, even though the startup currently has no money in the bank whatsoever.
The brilliant PR campaign for such a ridiculous business could normally be chalked up to amusement, and left at that. But as UpStart’s Alex Dalenberg put it, “When there’s actual money at stake, things start to get less funny.” Last week the fragile app, which was built in just eight hours, was hacked by college students. The hackers had full access to the only personal data solicited by Yo: users’ phone numbers. While Yo’s leak of personal data is not all that dangerous, the poorly crafted app exemplifies how quickly and passively tech consumers will open themselves up to vulnerabilities. Another app, Snapchat, had a similar hack and leak of personal data last year, while archetypally nefarious flashlight apps have been known to track users’ locations. In that sense, the Yo phenomenon is startling in how easily a less innocuous team could have done damage by exploiting consumers’ good humor and boredom.
Yo’s blatant willingness to tap into that boredom is what has won it such appreciation. Kia Kokalitcheva called Yo “a sign that at the end of the day, we want to feel connected to other humans, and sending someone a nudge and getting an acknowledgement in return actually helps, even just a little bit.” Boiling down genuine attempts to reach out into silly notifications of “yo” tries to circumvent the awkward self-awareness that comes with digital communication. If texting is the wrong medium for “I love you,” or Snapchat is an artificial way to say “I’m thinking of you,” then why not reduce the social din to its most absurd manifestation and send out a “yo”?