The contretemps over the release of Sgt. Bowe Bergdahl has been occasion for a raft of commentary taking President Obama’s lack of competence as the defining feature of the affair. And while there is certainly ample cause to call into question the merits of the deal with the Taliban, the wisdom of Mr. Obama’s highly misleading press conference with the Sergeant’s parents, and the subsequent reappearance of the wondrous Susan Rice on the Sunday morning talk shows, to my mind the most troubling aspect of the Bergdahl affair has to do with how someone so obviously troubled made his way into the ranks in the first place.
Like the deeply troubled Pfc. Chelsea (née Bradley) Manning before him, Bergdahl should never have been accepted into the ranks. He was admitted into the Army largely because an incompetent President, going against the wishes of the country, decided to double down on an ill-conceived and grossly mismanaged war. The story of how Bergdahl, who was discharged from the Coast Guard for psychological reasons in 2006, found his way back into what we are endlessly told is the greatest military in the history of the world, is profoundly discouraging. The Washington Post reports that by 2008, the year Bergdahl enlisted, the Army was issuing waivers to those with criminal backgrounds, health issues, and “other problems” at the rate of one for every five recruits. This perhaps points to a larger problem, reaching beyond the armed services.
The post-9/11 national security state, which consists of at least 17 federal intelligence agencies and organizations, requires hundreds of thousands of individuals to staff it. In light of the cases of Messrs. Manning, Snowden, and Bergdahl, it has become increasingly clear that the government has created a significant problem for itself. This was bound to happen given the sheer numbers involved. Consider the following from the groundbreaking 2010 report by the Post’s Dana Priest and William Arkin:
- Some 1,271 government organizations and 1,931 private companies work on programs related to counterterrorism, homeland security, and intelligence in about 10,000 locations across the United States.
- An estimated 854,000 people, nearly one-and-a-half times as many people as live in Washington, D.C., hold top-secret security clearances.
Four years on, the number of security clearances issued has continued apace. According to a report released by the Office of the Director of National Intelligence this past April, from 2012 to 2013 the number of people deemed “eligible” for access to classified information increased by nearly a quarter of a million. Roughly 5.15 million people currently hold security clearances, of whom around a million are outside contractors, about half of whom hold a top-secret clearance.
The conversation that needs to happen should focus on whether the national security structure, as it stands, is actually supportable. The Bergdahl affair ought to serve as a warning that as we keep expanding the military and enlarging the intelligence apparatus, the law of diminishing returns will set in (and probably already has). Yet no one in Washington ever thinks to say: enough. It’s past time for Congress to reconsider the efficacy, to say nothing of the desirability, of the post-9/11 national security leviathan.
Focus is a difficult thing to muster these days. As David Brooks wrote in a recent New York Times column, we are all “losing the attention war. … Many of us lead lives of distraction, unable to focus on what we know we should focus on.” This affects many of our daily activities, especially things that require special mental concentration. It creates a minute-to-minute challenge for the reader, who will always feel the pull of the next Twitter story, the latest email, the newest Facebook post, etc. Whether reading news articles or books, we feel the distractions itch at our brain.
Yet despite our society’s general lack of focus, we aren’t necessarily abandoning long books—The Goldfinch, one of the most popular novels on the market right now, is 771 pages long. But this lack of focus does mean that modern books increasingly cater to the short-term attention span. Tim Parks explains at the New York Review of Books:
Never has the reader been more willing than today to commit to an alternative world over a long period of time. But with no disrespect to Knausgaard, the texture of these books seems radically different from the serious fiction of the nineteenth and early-twentieth centuries. There is a battering ram quality to the contemporary novel, an insistence and repetition that perhaps permits the reader to hang in despite the frequent interruptions to which most ordinary readers leave themselves open. Intriguingly, an author like Philip Roth, who has spoken out about people no longer having the “concentration, focus, solitude or silence” required “for serious reading,” has himself, at least in his longer novels, been accused of adopting a coercive, almost bludgeoning style.
The author suggests that, in this world of limited attention spans, the serious writer may have to parcel out his or her writing into shorter sections or volumes in order for anything like the eloquent, expansive reading of prior eras to remain. An idea worth considering: could this trend toward short reads give rise to a deeper appreciation of poetry? It’s thought-provoking and often mind-taxing, true—but it’s short. Or at least, some of it is. It would be interesting to see whether poetry makes a comeback.
Another possibility worth considering: what if we brought back the short story, or syndicated novel? The New Yorker publishes short fictional works, but few other large journalistic publications do. Yet these fictional stories used to occupy a considerable portion of the news cycle. What if modern novelists, like Charles Dickens in days past, published their novels in publications like the New York Times or TIME magazine—one chapter at a time? I think the public would love it—and it could also help print publications build an audience that they seem to be steadily losing.
Handwriting is largely viewed as an outdated skill: typing offers teachers and students alike greater ease and efficiency. The new Common Core standards, adopted in most states, include the teaching of legible handwriting only in kindergarten and first grade.
But according to new studies by psychologists, this recent dismissal of handwriting could have unintended consequences: the underrated skill is actually a boon to brain development and memory retention. New York Times reporter Maria Konnikova explained these studies in a Monday article:
When the children composed text by hand, they not only consistently produced more words more quickly than they did on a keyboard, but expressed more ideas. And brain imaging in the oldest subjects suggested that the connection between writing and idea generation went even further. When these children were asked to come up with ideas for a composition, the ones with better handwriting exhibited greater neural activation in areas associated with working memory — and increased overall activation in the reading and writing networks.
… Two psychologists, Pam A. Mueller of Princeton and Daniel M. Oppenheimer of the University of California, Los Angeles, have reported that in both laboratory settings and real-world classrooms, students learn better when they take notes by hand than when they type on a keyboard. Contrary to earlier studies attributing the difference to the distracting effects of computers, the new research suggests that writing by hand allows the student to process a lecture’s contents and reframe it — a process of reflection and manipulation that can lead to better understanding and memory encoding.
Current educational trends tend to emphasize vocational and pragmatic elements of education. Which subjects will help students get the most lucrative jobs? Which will make them the most competitive on a global stage? Which skills guarantee the greatest college-readiness?
Yet in the midst of our quantification, we’ve lost qualitative ground. In the age of numbers, we cannot justify teaching handwriting merely because it is beautiful, fun, and a building block for deeper communication and understanding of language. Instead, we dispose of it—at least until the studies come out, in all their number-crunching glory, to tell us that handwriting is actually worth something. Then, in a rather ironic twist, we discover that these qualitative skills actually hold some quantitative value, after all.
This discovery reflects our larger discussion of the humanities and their role in the modern sphere: we wonder what such studies are worth, when the modern job market seems to demand experiential, pragmatic skill sets. We vest importance in what you can do, not how you can think.
Yet the new data on handwriting seems to have some people, formerly dismissive of handwriting’s importance, conceding ground. One Yale psychologist admitted that “Maybe it [writing by hand] helps you think better.”
Some people have always believed handwriting to be beautiful and important. They know writing by hand has helped them connect meaningfully with information, in addition to helping them communicate clearly with others. But perhaps there is a level of sentiment in such value-based affection. Thankfully, we now have the data to prove that, aside from its qualitative benefits, handwriting serves important quantitative purposes, as well. Hopefully some teachers (and students) will see these truths, and take note.
Last week Michael Strain wrote a compelling summary of the reasons technology’s effects on employment are now feared at levels perhaps unseen since the first Luddites of Britain’s Industrial Revolution. As he wrote,
There is no question that technology is already having a major impact on the labor market. Over the last several decades, employment in Western economies grew in both low- and high-skill occupations, but fell in middle-skill occupations. That’s because middle-skill, middle-class occupations are those that can be most easily replaced by technology. (Think of a 1970s-era bank that employed a president, a bank teller, and a custodian. Today, it’s the bank teller who’s gone, replaced by an ATM.)
Many professionals perusing this site will read that paragraph and cluck to themselves with mild concern at the plight of the former bank teller, but reassure themselves that their job is high skill, and that they couldn’t possibly be replaced by a machine. Middle-skill jobs are for the undereducated, after all. Yet in the latest issue of City Journal, John O. McGinnis makes the case for seeing even such an august profession as the law as being composed of a great many essentially middle-skill jobs, almost all of which are increasingly coming under the competency of computers. As he puts it,
Law is, in effect, an information technology—a code that regulates social life. And as the machinery of information technology grows exponentially in power, the legal profession faces a great disruption not unlike that already experienced by journalism, which has seen employment drop by about a third.
McGinnis surveys all the legal work computers are already starting to take up, and finds “Discovering information, finding precedents, drafting documents and briefs, and predicting the outcomes of lawsuits—these tasks encompass the bulk of legal practice.” Just as a welder may have once thought his work to be too exacting to be done by brute robotics, so professionals think of their work as being too humane for a computer to understand. This mindset makes the classic mistake of thinking that computers would have to replicate human phenomenology in order to compete with human activities. Instead, all a computer has to do is emulate a task with greater speed and in greater volume. An initial lawyer is needed to guide the programmer, and a final lawyer will for the foreseeable future be needed to tidy up the finished product. But all the hordes of junior associates laboring in discovery and research can be neatly replaced by search engines powered by the artificial intelligence of IBM’s Watson, the computer with the language skills to vanquish Jeopardy’s human champions.
Just as rapid reporting journalism is starting to be done by algorithm, so too will many traditional high-skill workers find their skills start to degrade in the eyes of the marketplace when automation comes to them.
Kianna Karnes was a 41-year-old mother of four who was diagnosed with kidney cancer in 2002. Her doctors prescribed interleukin-2, the only medication approved by the Food and Drug Administration (FDA) at the time to treat the disease, but it proved insufficient to stop the cancer’s spread. Kianna’s family petitioned the FDA to allow her to try an investigational new drug (IND) still stuck in the agency’s approval process. After all, she had nothing to lose. Despite gaining powerful allies, including Congressman Dan Burton (R-Ind.) and the Wall Street Journal editorial board, the family’s efforts came too late. The FDA approved Kianna’s IND request the very same day she died.
Kianna is just one of countless patients whose lives have been put in jeopardy by the FDA’s dangerously inefficient drug approval process. Now some states are taking it upon themselves to make the necessary reforms. Last week, Colorado became the first state to sign a so-called “Right to Try” bill into law, empowering patients to petition pharmaceutical companies to provide them with INDs if they’ve exhausted all other options. While the push for Right to Try reform is compelling, its fate remains uncertain given the FDA’s history of reluctance to allow patients access to INDs.
While Kianna’s case was critical in drawing attention to the FDA’s bureaucratic approval process, it is by no means the first time the problem has gained national attention. In the 1980s, Milton Friedman wrote and spoke about the FDA’s perverse incentives in his book and television series, Free to Choose. Friedman explained that it is much riskier for the FDA to approve a drug than disapprove it, since the agency can be publicly blamed if a bad drug goes to market, but nobody would know if a good drug is rejected. As a result, “we all know of people who have benefited from modern drugs,” yet “we don’t hear much about … the beneficial drugs that the FDA has prohibited.”
Granted, the approval process has marginally improved since the era of Free to Choose. In the late 1980s, the FDA introduced Expanded Access Programs (EAPs) in response to the AIDS crisis, allowing patients access to potentially life-saving INDs. However, the federal agency’s bloated bureaucracy has unsurprisingly crept into this compassionate use program. As Christina Corieri of the Goldwater Institute explains, “[F]rom 1987 until 2002, the FDA approved only 44 treatment IND applications for conditions ranging from AIDS to chronic pain – an average of less than three per year.”
While the FDA also grants individual IND applications for specific drugs, the approval process to do so is also backed up. Patients must appear before an institutional review board (IRB) at a regional medical facility. However, many IRBs only meet once a month and can be located hundreds of miles from a patient’s home. Consequently, countless lives may have been lost navigating the FDA’s bureaucracy.
Right to Try may soon allow America’s sickest to cut through the red tape. Originally drafted as model legislation by the libertarian Goldwater Institute, the bill has received bipartisan support, having been approved as a ballot measure in Arizona and signed into law in Colorado; it is currently awaiting Gov. Bobby Jindal’s signature in Louisiana. Specifically, the model legislation would allow pharmaceutical companies to provide terminally ill patients with INDs that have passed at least Phase I of the FDA’s approval process, and prevents the state from bringing legal action against them.
Yesterday an Obama administration-convened task force released what the New York Times called “perhaps the most elaborate survey of decay conducted in any large America[n] city,” detailing the pervasiveness of perceived blight in the Motor City. The Detroit Blight Removal Task Force surveyed 377,603 properties, recommending 40,077 for demolition and 38,429 for further review. Task force leader Dan Gilbert set the stakes somewhat colorfully, saying, “Blight sucks the soul out of anyone who gets near it.” In order to fully follow the task force’s clear-cutting recommendations, Detroit would need to spend at least $850 million, almost twice the $450 million the city has already planned to spend on blight.
The Times story, and its accompanying infographics, follow a traditional script in discussing Detroit: staggering back before the enormity of the city’s failure, peering in at the ruin porn lining the city’s streets. Yet even as the city has gone bankrupt, has been placed in the hands of an appointed manager, and now faces the prospect of spending enormous sums it doesn’t have just to tear down tens of thousands of its properties, there are local kernels of hope blossoming out of the void.
In a recent discussion on the EconTalk podcast, Charles Marohn of Strong Towns pushed back against the idea of Detroit as pure desolation:
If you go right now, today, to the core of Detroit, it’s actually one of the most exciting places in the world. And largely because of the absence of government. There’s nobody there telling people: You can’t open this business, or, You have to get a permit to do that or inspections to do this. There are very few barriers for young people to start a business and get things going.
Likewise, the famed New Urbanist architect and urban planner Andrés Duany wrote earlier this year that “Detroit is going to be the next ‘Brooklyn.’ Perhaps not all of Detroit. But certainly a portion of the city has the potential to become as rich and thriving as New York’s trendiest borough.”
How could Detroit, poster child for post-industrial urban decay and dysfunctional governance, possibly be characterized as “one of the most exciting places in the world,” or seen as holding—even in part—the potential to rival “New York’s trendiest borough”? Precisely because the city’s governance has collapsed in on itself, and the area is so incredibly cheap. As Duany recounts…
Suburban sprawl often comes under criticism for a variety of aesthetic, environmental, and social reasons, but one criticism rules them all, Aaron Renn writes. Sprawl isn’t financially sustainable:
new suburbs look attractive for a number of transitory reasons: everything is new, state of the art, and exactly in line with current market tastes; no legacy costs; no legacy institutions, deals, political dynasties, etc; few low income residents and thus low social service costs; deferred infrastructure development; the efficiency of large lot development; and scale economics in public service provision in a growth environment.
Eventually, though, your shiny new suburb fills up and growth comes to a halt; then, often at about the same time, it gets old. This sends all of those positive factors into reverse, triggering a cycle of decline that will ultimately cause major problems in vast tracts of suburban America that aren’t either a) wealthy communities or b) in markets that have tight restrictions on new building (which preserves these communities at the expense of rendering them unaffordable).
Renn recounts the experience of his current home of Indianapolis, where the city chased after its fleeing tax base by annexing the surrounding suburbs and forming a truly sprawling metropolitan government. As the shine wore off, however, the suburbs declined, and sprawl’s short-sighted design began to take its toll. As Renn wrote,
“The bottom line is that the type of development that’s been ongoing in Indy and most American communities can’t ever generate enough tax revenue to pay to provide the infrastructure, amenities, and services necessary to support it.” Even the old city was composed of widely spaced single-family houses without so much as curbs, much less sidewalks. Suburbs are built because the land is cheap, as is generic development. There is simply no tax base to fund the infrastructure developments that could revitalize the dragging sprawl.
Contrast Indianapolis’s infrastructure dilemma with the rapidly developing neighborhood of NOMA in Washington, D.C. Located north of Union Station, NOMA has seen an explosion of multi-story business and residential development over the past few years, with rooftop views of the Capitol obstructed only by the sheer number of cranes at work. The neighborhood has conspicuously lacked “open space,” or parks, in the eyes of its residents and developers, but that will soon change as the city has allocated $50 million to build NOMA some parks. Why does NOMA get park space when Indy can’t afford sidewalks? It has the tax base to pay for it. Although for accounting reasons the money is coming out of general expenditures, the NOMA neighborhood now contributes $49 million more per year to the city in tax revenue than it did in 2006. By building dense communities with businesses and residences intermixed, urban development can go where decaying sprawl fears to tread.
For generations of young Americans, a driver’s license stood as the ticket to freedom, freeing teenagers from the watchful parental eyes that accompanied being dropped off at bowling alleys, bookstores, and boyfriends’ houses. Living in a sprawling, vast country, particularly one whose postwar planning consensus had been dedicated to subsidizing suburban sprawl, a car was often the only feasible way to connect their geographically disparate destinations. Recent years, however, have seen millennials deserting the car in startling numbers. While the drop is certainly driven in part by the economic pinch of soaring gas prices and the increasing burden of graduated license regimes, an accompanying trend has given young would-be drivers a transportation off-ramp that preserves their mobility: the revival of transit.
One of the latest and clearest examples of transit’s poaching of the young comes from just north of the border, where the percentage of Vancouver metro region residents aged 20-24 who even possess a driver’s license has dropped from 70 percent in 2004 to 55 percent last year. As Kenneth Chan describes the data,
The greatest declines were seen in the municipalities that are the most urbanized and served by a substantial level of public transit. …
Burnaby and New Westminster’s proportion declined from 68 per cent to 50 per cent, likely due in part to the increased accessibility to transit following the construction of the Millennium Line.
Richmond also saw a similar drop of nearly 20 per cent from 2003. Metro Vancouver’s data shows that the biggest year-to-year drop for both Vancouver and Richmond was in 2009 when the Canada Line opened for service.
While driver’s license data likely wouldn’t reflect changes in older cohorts that had already procured licenses (indeed, those were mostly flat, even increasing among the over-65), Vancouver’s aggressive push to increase the accessibility of transit in its region has clearly started to capture the rising generations.
The trend is well-documented below the 49th parallel, as well. Last year Brad Plumer over at the Washington Post noted that the average yearly miles driven by 16- to 34-year-olds fell 23 percent between 2001 and 2009. Over that same period, public transportation use per capita rose 40 percent, and bicycling rose 24 percent.
While these data sets do certainly overlap with significant economic pressures that could be depressing the results, transit has started to shake off any perceptions of it as the poor person’s transportation of last resort. As Amy Crawford describes at Atlantic Cities, transit has become so popular in many places that the announcement of new transit extensions will drive up nearby real estate prices, and neighborhoods with newly-installed transit saw people with incomes over $100,000 disproportionately flock to them. And while the youth migration to cities may have once been seen as a luxury of unattached twenty-somethings who would once again return to the suburbs when it came time to settle down and nest, the New York Times recently reported that even stalwart suburbs like those of New York’s Westchester County were starting to get anxious at the failure of younger adults to boomerang back out to the ‘burbs.
While the exact numbers will continue to shake out, the trend seems reasonably clear: once famously sprawl-friendly Americans are flocking to dense communities, and are willing to ride the rails to get there.
It’s Saturday—the day of waiting, the day of quiet. The day when disciples quaked behind closed doors, and darkness covered the lands, and the Son of God lay in a tomb. The day of aching, grieving, seething pain.
That 24-hour cycle of numbness and fear throbbed through Jesus’ disciples, through the people who were “looking for the kingdom of God,” like Joseph of Arimathea. It was after Jesus was dead that Joseph and Nicodemus finally exposed their allegiances—they took Jesus’s body, wrapped it in a linen shroud, wrapped it in 75 pounds’ worth of spices. They wrapped his body in their own allegiance and love, telling the world whom they followed.
And the women followed and saw—the women who had cared for Jesus, ministered to his traveling troupe—they followed him from road, to cross, to tomb. They didn’t fear the blood or turn away. They didn’t run and hide. They followed and watched, then went to prepare their spices and ointments for His body. But first, on the Sabbath, “they rested according to the commandment.”
How do you rest when your hopes and dreams are lying in a grave?
We live in a culture of pain. So often, our response to the world’s pain and death is either cynicism or despair. Author Leslie Jamison writes in her essay, “Grand Unified Theory of Female Pain,” that we live in a post-wounded culture (she limits this to women, but I think it could apply to much of our world):
The post-wounded posture is claustrophobic: jadedness, aching gone implicit, sarcasm quick on the heels of anything that might look like self-pity … Their hurt has a new native language spoken in several dialects: sarcastic, jaded, opaque; cool and clever.
This is a world that has screamed with the pain of genocide, holocaust, terror and war. It’s a world in which 55 million babies have been aborted since Roe v. Wade in 1973. It’s a world of shunning and racism, hate and abuse, violence and fear. We grow accustomed to the stories—we look back on anniversaries and shrug our shoulders: What could we have done differently? Perhaps nothing. We sit in the silence and nurse our aching wounds. We begin to believe the lie: we were made for this bleak, hostile, hurting world. We were made for death and destruction.
When then-Justice John Paul Stevens handed down his now-infamous ruling in the eminent domain case Kelo v. New London, he insisted that the seizure of a Connecticut neighborhood “was not intended to serve the interests of Pfizer, Inc., or any other private entity, but rather to revitalize the local economy by creating temporary and permanent jobs, encouraging spin-off economic activities and maximizing public access to the waterfront.” The public use taking he approved was a public enterprise designed to serve the city and its people, not the state seizing and transferring private property for the benefit of other, wealthier, more powerful private hands.
Should Justice Stevens make it up from his Florida retirement to the town of New London, he could behold the revitalized local economy, with its temporary and permanent jobs, encouraged economic activities and maximized public access to the waterfront. He’ll have to squint, though. There’s nothing there.
As Charlotte Allen found on her tour of the area a few months ago, the Fort Trumbull area of New London is now “a vast, empty field—90 acres—that was entirely uninhabited and looked as though it had always been that way.” You see, the private entities whose interests were not the core justification of New London’s taking pulled out of the project: the developers failed to find funding; Pfizer engaged in a merger that allowed it to close its New London facilities, not expand them, and to get out just before the tax incentives the city gave them ran out and they would have had to pay full fare for their property.
New London’s latest mayor has another plan in the works for Fort Trumbull, as the city’s coffers remain empty thanks to a missing tax base, this time “a national first—a green, integrated mid-rise community. There would be green tech, LEED-certified buildings, solar power. It would be a green, self-sustaining neighborhood.” Even that remains in the wispy aspiration phase at the moment, however. The only actual occupants of the Fort Trumbull development area since the seizure, and the clear-cutting, have been piles of garbage and waste left there in the aftermath of Hurricane Irene. Oh, and there have been reports of feral cats.