Science fiction has always been one of the leading ways for our culture to process and project the changes that science and technology will introduce into our lives, going back to Francis Bacon’s New Atlantis, perhaps the first work of sci-fi from one of modern science’s founding fathers. Fox took up the Baconian baton this week when it launched its new show “Almost Human,” about a broken-down cop returning to the field alongside his administratively imposed, but likewise retrieved from the scrap heap, android partner.
In her initial recap, Christine Rosen, a senior editor at The New Atlantis (the magazine) and fellow at the New America Foundation, notes that “Fear and loathing of robots is a trope of long standing in science fiction,” but that “fear and loathing might be giving way to grudging acceptance.” Robots, after all, are creeping ever further into every facet of our everyday lives, whether deployed in place of citizen soldiers to deliver lethal force or offering comfort to the lonely and aged amid Japan’s great greying. Machines have replaced men for heavy labor in factories and are being adapted to supplement the remaining human workforce elsewhere. Even cars, the symbol of American liberation from constricting locality, are seen to have an automated future, in which it might very well be unethical or illegal for an unreliable and precious member of our species to wield the wheel unaided.
Rosen notes previous cop-machine pairings in popular culture, like KITT and Michael in Knight Rider:
KITT the computerized talking car was a reliable and sympathetic friend to Michael (played by David Hasselhoff). But although the Hoff was always thankful for KITT, he rarely ended an episode of Knight Rider without securing a new girlfriend. The ineffable pleasure that relationships with other human beings bring was something earlier shows took for granted.
Rosen sees “Almost Human” as suggesting, in line with techno-utopian tendencies currently lurking in some prominent corners of Silicon Valley, that robots can replace or substitute for human intimacy, emotional or otherwise. After all, Dorian, the outdated android partner, is programmed with an empathetic circuitry that seems to exceed the wetware of his grizzled human companion, John Kennex. And the second episode centered on breakthroughs in “sex-bot” technology that replaces a significant chunk of the demand for human prostitutes by offering attractive and attentive androids for rent. To Rosen, “Almost Human” “is normalizing the notion that we can create technologies that can teach us how to be better human beings.”
Judging from the first two episodes that have aired so far, however, “Almost Human” may be susceptible to a more generous reading.
In our post-Snowden world, we can no longer deny what was always implicit in digital communication: someone, somewhere can find and read your data, given enough determination or the right government clearance. Americans who value their privacy and their Fourth Amendment rights, and who believe the government has no business accessing our every digital stroke, must ask: “Who can you trust with sensitive data these days?”
Since 1976, public key, or asymmetric-key, encryption has been the default method of private and secure digital communication. Public key encryption employs two keys: a public encryption key and a private decryption key. The technology remains secure because the keys are so large that brute-force attack is impractical: for example, Lavabit, the secure email service run by Ladar Levison and used by Edward Snowden, used “Elliptical Curve Cryptography (ECC) with 512 bits of security to encrypt messages. The private, or decryption, key is then encrypted with a user’s password using the Advanced Encryption Standard (AES) and 256 bits of security,” which is the level of security the NSA has approved for government work. Yet, as we have seen, even those most committed to protecting privacy (like Levison) can be thwarted by more conventional means. This occurred when the US government, after being refused access to “information about each communication sent or received by the account, including the date and time of the communication, the method of communication, and the source and destination of the communication,” demanded that Lavabit hand over its private SSL keys. More troubling still, if the September 5th leak from Edward Snowden is correct, “the HTTPS and SSL encryption used by most email and banking services offers little to no protection against NSA surveillance.”
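The two-key idea can be illustrated with a toy RSA sketch in Python. This is purely illustrative: the numbers are deliberately tiny, the variable names are my own, and no real system would ever use hand-rolled arithmetic like this in place of a vetted cryptographic library.

```python
# Toy RSA: a public encryption key (e, n) and a private decryption key d.
# Real moduli are hundreds of digits long; these primes are for illustration.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public encryption exponent
d = pow(e, -1, phi)            # private decryption exponent (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(ciphertext, d, n) == message  # only the holder of d can decrypt
```

The security of the scheme rests on the fact that recovering the private exponent from the public key requires factoring the modulus, which is infeasible at real key sizes on classical hardware.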
A further potential threat to public key encryption, beyond compromised keys and eavesdroppers, is the quantum computer. Quantum computers exploit atomic-scale properties to solve certain problems at speeds currently out of practical reach: not because each operation is faster, but because quantum algorithms can take mathematical shortcuts rather than sequentially checking every possibility, as today’s computers must. At Computerworld, Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo in Ontario, explains:
Breaking a symmetric code…is a matter of searching all possible key combinations for the one that works. With a 128-bit key, there are 2¹²⁸ possible combinations. But thanks to a quantum computer’s ability to probe large numbers, only the square root of the number of combinations needs to be examined — in this case, 2⁶⁴.
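Mosca’s square-root arithmetic is easy to check with ordinary integers; a quick back-of-the-envelope sketch (no quantum hardware involved, just the numbers):

```python
import math

classical_search = 2 ** 128                     # keys a brute force must try
quantum_search = math.isqrt(classical_search)   # Grover-style quadratic speedup
assert quantum_search == 2 ** 64

# 2**64 is still enormous, which is why doubling the key length (as AES-256
# does) restores a 2**128-step margin even against a quantum search.
assert math.isqrt(2 ** 256) == 2 ** 128
```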
As many tech journals have highlighted lately, the very technology that now poses a threat to our security could also pose a solution. Quantum cryptography, a term that sounds more like science fiction than a science journal, could be the next move for information privacy. Quantum Key Distribution (QKD) works much like public key distribution, with added protection provided by particle physics. As David Holmes describes it:
The first part is the same: Data is encrypted using an algorithm. But then the data itself is encoded on a light particle known as a photon. Because photons are smaller than atoms, they behave in some pretty crazy ways. For example, you can “entangle” two photons so their properties correlate with one another. A change to one photon (which can occur as easily as by someone observing it) will cause a change in the other photon, even if the two are a universe apart.
After entanglement occurs, the sender transmits the first photon through a fiber cable to the receiver. If anyone has measured or even observed the photon in transit, it will have altered one of the properties of photon no. 1, like its spin or its polarization. And as a result, entangled photon no. 2, with its correlated properties, would change as well, alerting the individuals that the message had been observed by a third party between point A and point B.
Quantum cryptography relies on the observable traits of quantum physics, the aspect Einstein referred to as “spooky action at a distance,” to offer added protection: the knowledge of when data has been compromised by an attempted third-party observation.
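The tamper-evidence Holmes describes can be caricatured in a few lines of Python. This is a toy model, not physics: it simply assumes that any measurement of a photon in transit disturbs its encoded bit with probability one half, and the `transmit` function and its parameters are invented for the sketch.

```python
import random

def transmit(bits, eavesdropper=False):
    """Toy channel: an eavesdropper's measurement disturbs each photon,
    flipping its encoded bit with probability 0.5."""
    if eavesdropper:
        return [b if random.random() < 0.5 else 1 - b for b in bits]
    return list(bits)

random.seed(0)
key = [random.randint(0, 1) for _ in range(1000)]

# Untouched channel: sender and receiver agree on every sampled bit.
assert transmit(key) == key

# Spied-on channel: comparing a sample of bits reveals a high error rate,
# so the parties discard the key and know they were being watched.
errors = sum(a != b for a, b in zip(key, transmit(key, eavesdropper=True)))
assert errors > 300   # roughly half the bits are disturbed
```

The point of the model is the asymmetry: a clean channel produces zero disagreements, while any observation leaves a statistical fingerprint the communicating parties can detect before they ever use the key.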
The problem with this technology is practical application. The technology is still in its infancy, as physicists and tech experts work to solve problems of distance, transmission, and integration with cloud-based technologies. But as Snowden and Glenn Greenwald release more leaks, growing demand for digital security will pressure scientists to solve these problems. Quantum mechanics, with all its strange, fascinating, and downright unbelievable properties, could provide us with myriad technological advancements. For those who think about quantum physics only when the Nobel Prize in physics is announced, hold on: the study of quantum physics is about to get a lot more practical.
A plethora of recent studies has delved into the effects social media has on our brains. Part of a widespread fascination with neurology in social science, these reports raise various concerns about the effects new technological platforms have on our social consciousness.
The University of Michigan reported last month that Facebook, while cultivating an addictive fixation in its users, also fosters depression. Why? “Based on the responses from the participants,” TIME reports, “the scientists speculated that the Facebook users were comparing themselves with their peers, and many were feeling inferior as a result. Users also reported frustration and a ‘lack of attention’ from having fewer comments, likes and feedback compared with their Facebook friends.”
Social scientists are also musing over the growing “selfie” trend – and trying to determine its ethical implications. Is our pouty-picture-taking symptomatic of typical youthful immaturity, or is it indicative of some deeper cultural phenomenon?
These are just a few minor grace notes in a growing public dialogue on “narcissism”: whether we are more narcissistic than in ages past, how social media affects our self-absorption, and whether we need to change. Dr. Jean Twenge, a professor of psychology at San Diego State University, believes that younger generations are “increasingly entitled, self-obsessed, and unprepared for the realities of adult life,” according to a recent New York Times article. Slate author Katy Waldman felt inclined to agree with her:
Look at the rise of plastic surgery, our painstaking attention to Facebook and Twitter profiles, our selfies, our relentless focus on self-improvement, and “soccer teams that give every kid a trophy …” Could it be that she is onto something? How else to explain that Twenge’s thesis just feels right?
Some believe this “epidemic” of narcissism is a product of our digital era. But according to another study, this one conducted by the University of California, Los Angeles, the language in books shows the trend developing over the past 200 years. “The currently discussed rise in individualism is not something recent but has been going on for centuries as we moved from a predominantly rural, low-tech society to a predominantly urban, high-tech society,” said psychology professor Patricia Greenfield, who conducted the study. Words like “unique,” “individual,” “self,” “feel,” “choose,” and “get” increased significantly over time, while words like “authority,” “belong,” and “pray” have become rarer.
The “selfie” is a commonly cited example of the narcissistic tendency in social media. One psychologist told TIME that the self-captured image could be beneficial, since it allows “young adults and teens to express their mood states and share important experiences.” But not everyone sees the “selfie” as an accurate depiction of the self. Brett McCracken, in a Mere Orthodoxy blog post, argued that the self-projections we present on social media deceive both others and ourselves:
Social media’s “what are you doing now?” invitation to pose, pontificate and consume conspicuously only amplifies the narcissistic presentism of the generation depicted in The Bling Ring. It makes it easier than ever to tell the world exactly what you want them to know about you. Through a carefully cropped and color-corrected selfie, depicting whatever glamorized “now” we think paints us in the best light, we can construct a public persona as we see fit.
Are Instagram selfies and Twitter posts truly making us more narcissistic? Or are we merely viewing a greater publication of old human tendencies?
I would argue the latter. While the digital era could be making us more narcissistic, it seems more likely that it is doing exactly what it was created to do: showcasing the self in all its glory. As long as humans have roamed the planet, they have had selfish tendencies. But prior to the rise of social media, they did not have a global platform for this grandstanding. High school, with all its intense and sticky drama, was documented profusely in letters and diaries, but it rarely appeared in public discourse. Now, via Facebook, young adults have a communal outlet for their emotionally turbulent lives. Facebook is, in many ways, the new diary.
In addition, arguing that young adults are more narcissistic than ever seems too simplistic. Many young people are immature and tend to be self-interested; it is only with experience and age that we learn the planet does not revolve around us. Once again, this generation has a wider global platform for self-pontification than any before it, which may make it look more narcissistic than it is. Not to mention that the young are less adept at disguising their selfishness. With age, we learn how to hide our self-absorption behind facades of one sort or another. Time makes some humble; it makes others clever.
It is true that individualism and life disassociated from family and community is more widespread today than in past civilizations. It could be that this individualism has fostered our egotism. The person who lives with family or roommates must learn patience, self-control, and sacrifice. The individual, however, need not reconcile with other people’s wants and desires. Personal wants reign supreme. This obviously cultivates a different set of values and perceptions in the individual. But one must caution against the tendency to stereotype and predict behavior based on such perceived “trends.”
The human mind is not one massive mound of ever-growing egotism. It is a complex, varied, wondrous thing — scarred with the pains of human experience, spotted with sin and excess, ever shifting with the cadence of human life. While we should be wary of social media’s influence on our mental and spiritual growth, we needn’t discount the online experience because of a few paltry “selfies.” One of the beauties of social media is its ability to connect and share – to teach us more about each other.
Gwynn Guilford reports over at Quartz that the latest trend to hit South Korea is surgery for a smile:
Cosmetic tweaks like Botox have long minimized furrowed brows and frown lines. But a new technique called “Smile Lipt” carves a permanent smile into an otherwise angry face. The procedure, whose name combines “lip” with “lift”—get it?—turns up the corners of the mouth using a technique that’s a milder version of what Scottish hoodlums might call the “Glasgow grin.”
While it has long been said that it takes fewer muscles (or at least less effort) to smile than to frown, it appears that many South Koreans just want to cut to the chase. The procedure is “increasingly popular among men and women in their 20s and 30s—especially flight attendants, consultants and others in industries aiming to offer service with a smile.”
The most concerning thing about this trend, however, comes at the end of Guilford’s piece: “as with the popularity of other cosmetic procedures in South Korea, which have made it hard for the natural of face to compete for jobs, permanent smiles may too become the norm.”
She links to a New York Times report from a couple of years back that documented how a plastic surgery culture has boomed in South Korea.
In traditional Korea, tampering with the body bestowed by one’s parents was a violation of Confucian precepts that also discouraged cremation and, later, organ and blood donations. But in recent decades, cosmetic surgery has become a weapon in Koreans’ efforts to impress others, “like buying an expensive handbag,” said Whang Sang-min, a psychologist at Yonsei University.
According to one makeup artist preparing to go under the knife, “‘You must endure pain to be beautiful,’ she said, adding that an eye job is so routine these days ‘it’s not even considered surgery.’” One market survey indicated that “one of every five women in Seoul between the ages of 19 and 49 said they had undergone plastic surgery.” The surgeons themselves report that “their main patients are young women entering the marriage and job markets. ‘As it gets harder to find jobs, they’ve come to believe they must look good to survive,’ said Choi Set-byol, a sociologist at Ewha Woman’s University.”
Such a boom has led to a situation where, as one of the Times’s sources put it, “Koreans agree on what constitutes a pretty face. The consensus, now, is a smaller, more sharply defined youthful face — a more or less Westernized look. That makes 90 percent of Koreans potential patients because they’re not born with that kind of face.” The trend toward facial uniformity has been so widespread that, the paper reports, “the film director Im Kwon-taek says it has become all but impossible to find an actress who still has a traditional Korean face.”
If one stays agnostic on the medical ethics of cosmetic surgery, the procedures become a matter of free choice and open markets, where patients and doctors have the freedom to arrange an operation, as long as there is no coercion.
But the South Korean example is a very effective demonstration of the ultimate limits of a libertarian paradigm revolving around atomistic free actors. Individuals “with photos of starlets whose face they want to copy,” aggregate to create new norms, because they are part of a social order. And social order inescapably comes with the soft coercive power of conformity.
A girl satisfied with her traditional Korean face, as the 90 percent of Koreans not born with the favored look should be free to be, is disadvantaged in a competitive culture simply for maintaining her natural body. As we gain ever greater powers over our own bodies, the implications reach beyond Seoul’s matchmaking markets to raise a cautionary note about our biotechnological future.
To what extent can science predict the future? Both TIME Magazine and Aeon Magazine looked at predictive scientific studies this week. Their stories described research that is perhaps more ethically questionable than most: one study calculated the “fate” of first-graders, the other examined neurological tendencies in Romanian orphans.
TIME considered an Education Week study that purports to show whether a first-grader will become a high school dropout. The study analyzes predictive factors like behavioral problems, frequent school absences, and below-average reading skills, weaknesses that tend to escalate by third grade. If reading development does not happen by then, students may hit the “fourth-grade slump” and fall into a vicious cycle of academic straggling. While the study’s author cautioned that his formula only identifies “signs of students who drop out—it doesn’t mean they are dropouts,” it seems this scientific labeling system could create problems for students who need encouragement in order to succeed. How would you like to know, at seven years old, that you will likely become a high school dropout?
Meanwhile, Aeon described a scientific study hoping to improve Romania’s terrible orphanages. Despite international outcry, Romanian officials “staunchly believed that the behavioral problems of institutionalized children were innate” rather than a result of organizational deficiencies. The report’s scientists sought to prove them wrong. They enrolled 136 institutionalized children, placed half in foster care, and tracked their physical, psychological, and neurological development. “They knew from the outset that the project would be ethically precarious,” author Virginia Hughes writes. “Could there be a more vulnerable study population, after all, than orphans with physical and psychological disabilities living in an economically feeble and politically unstable country?”
Nevertheless, the scientists proceeded cautiously and published their findings in Science in 2007. They were able to demonstrate that children placed in foster care “showed significant gains in IQ, motor skills, and psychological development” – but unfortunately, the study did little to change Romania’s orphan situation: an international adoption moratorium was made permanent in 2005. Domestic adoption exists, but with “onerous regulations.” Children in orphanages are often “undeniably miserable.” Nelson, the report’s lead scientist, “has become desensitized, holding on to the idea that scientific data will eventually pave the road for better social policy.”
Both reports strive to help vulnerable or troubled children succeed in life. However, one must ask: What good can this data actually do in improving subjects’ circumstances? The Romanian project hoped to pave the way to a national foster care system. But Romania’s most destructive policies are still in place. Telling a first-grader’s parents that their child is likely to fail could motivate action. But what of the parents who see this as a scientifically-determined outcome? What of students who believe they are marked out for failure?
Consequentialist studies, though useful, cannot ultimately create change. Life is made up of human choices that are unpredictable and unprecedented. Patterns of destruction can be broken – even in Romania. In a 2005 article, the Guardian shared one Romanian orphan success story:
“She tells of a mother of four children, all by different husbands, who had abandoned the first three one after another. Then the mother arrived at the day-care centre with her fourth. ‘She didn’t want to touch the baby. She wouldn’t kiss her. We had to teach her how. Then she learned to kiss the baby. And now they’re still together.’”
Whether dealing with abandoned orphans or rebellious seven-year-olds, the key is personal investment – not data. Sometimes the vulnerable need to be taught, nurtured, and encouraged. Their “fate” is determined by those who invest and intervene despite – not because of – the numbers.
There’s a brawl going down on the internet over the validity of evolutionary psychology. On defense for evolutionary psychology: biologist Jerry Coyne and Steven Pinker, possibly the most eminent evolutionary psychologist. On the warpath: PZ Myers, a developmental biologist, who argues that “most of the claims of evolutionary psychology are fallacious.”
Though Myers’ main line of attack centers on data and methods, the long and contentious political debate over Darwinian social science gets dragged into the fray.
While that argument has raged for decades, this century’s round opened with Steven Pinker’s classic The Blank Slate: The Modern Denial of Human Nature, which famously knocked down a naive “blank slate” theory of human nature: namely, that human behavior and preferences are entirely shaped by culture and thus endlessly malleable. Though the view is less widely held by actual scientists, Pinker demonstrated how influential Blank Slate thinking has been in humanities departments, popular culture, and political philosophy.
Locke’s tabula rasa undermined the dogma and authority of aristocratic social systems, since it meant that no man inherently possessed any more wisdom or virtue than others–only what experience imparted. And indeed, the modern versions of the Blank Slate were bolstered by an appropriate wariness of ugly Darwin-justified racism and sexism.
But the Blank Slate is also a great foundation on which to build catastrophic social engineering schemes (Mao Zedong said “It is on a blank page that the most beautiful poems are written”), as well as a wall behind which to hide PC shibboleths. Some racist strands of political thought have latched onto evolutionary theory, but certain strands of conservatism have welcomed the insights of evolutionary psychology because they reinforce the conservative intuition that human beings are not as malleable as many on the Left want them to be.
More recently, Peter Lawler’s New Atlantis essay, “Moderately Socially Conservative Darwinians,” argues that evolutionary psychology “reinforces the conservative lesson that we are not merely autonomous individuals but also social and relational beings.”
And so, unsurprisingly, politics gets dragged into the latest spat as well. Coyne accuses skeptics of evolutionary psychology of being motivated by ideology and politics:
Like the opponents of sociobiology thirty years ago, these skeptics object to the discipline because they see it as both motivated by and justifying conservative political views like the marginalization of women [!!]
Myers (who is an anti-theist and certainly no conservative!) brushes this aside:
I detest evolutionary psychology, not because I dislike the answers it gives, but on purely methodological and empirical grounds: it is a grandiose exercise in leaping to conclusions on inadequate evidence, it is built on premises that simply don’t work, and it’s a field that seems to do a very poor job of training and policing its practitioners, so that it primarily serves as a dump for bad research that then supplies tabloids with a feast of garbage science that discredits the rest of us.
Even as Myers, Pinker and Coyne march into battle over methodology and assumptions about neuroplasticity and epigenetics, the specter of old political battles will hang over them. Scientific disputes inevitably bleed into political disputes, and vice versa, often with scant regard to logic. That doesn’t mean that we should shout down any scientists who attempt to overturn our political assumptions, assumptions to which nature is wholly indifferent.
So it’s perhaps useful here that Coyne, Pinker, and Myers are all secularists and atheists, showing that the disputes over evolutionary psychology are not a mere proxy war for other politics, but a genuine controversy over how the scientific community can account for our human nature.
Researchers at the University of Massachusetts medical school have reportedly made a breakthrough in learning how to silence the extra chromosome that causes Down syndrome. Down syndrome is also known as trisomy 21, as it is caused by inheriting an extra copy of chromosome 21 beyond the normal pair received from the mother and father. The researchers borrowed a natural “off switch,” the gene that shuts down one of the two X chromosomes in women’s cells; when inserted into the extra chromosome 21, it caused the chromosome to be coated and silenced, bringing the cells’ chromosomal activity down to near-normal levels.
The UMass researchers performed their experiments using induced pluripotent stem (iPS) cells derived from a Down syndrome patient. iPS cells are adult somatic cells “turned back” to become similar to the potential-rich stem cells researchers previously could only obtain from destroyed embryos. That discovery won Shinya Yamanaka the 2012 Nobel Prize in medicine, and was hailed by scientists, ethicists, and pro-life activists alike as a way forward for science that did not raise any of the ethical concerns of human embryonic stem cells.
Dr. Jeanne Lawrence, one of the UMass researchers, describes the importance of her discovery as aiding controlled research of Down syndrome cellular function in the short-term, and making chromosomal therapy at least conceivable in the long-term.
The New Atlantis published one of the best essays on Down syndrome in 2008: Caitrin Nicol’s “At Home with Down Syndrome,” in which she notes that despite the muscular and cognitive impairments that come with the syndrome, “individuals with Down syndrome generally have outstanding social skills and in a supportive setting can be fairly high functioning.”
She continues, “adoption agencies report a high demand for children with Down syndrome. However, the abortion rate for fetuses diagnosed with Down syndrome tops ninety percent,” as “obstetricians are not well trained in explaining the diagnosis and have little if any clinical experience with individuals with a developmental disability.” With little access to parents of Down syndrome children, and faced with a barrage of forbidding statistics about the prospects for their child’s quality of life and the success of their marriage, “the majority of expectant parents fall into a vortex in which abortion is offered as the sensible way out.”
In their efforts to get the word out that an extra chromosome is not a tragedy, parents and advocates describe the incredible warmth and social gifts that their child with Down syndrome brought into their lives. Indeed, they can sometimes risk idealizing distortions, as when “one astonished woman was informed by her mother-in-law that her daughter is the Bodhisattva.” Nicol notes, “putting people on a pedestal, however well intended, makes them seem not quite human. But, as Avery’s grandmother notes, the special talents of people with Down syndrome may lie in what is most human — ‘they seem to bring out the good in people,’ she says.” Another parent described how “we were so scared of what life with Riley would be like, and now the scariest thing I can imagine is what my life would be like without him.”
‘Wouldn’t it be wonderful,’ another proud big brother asks in Gifts, ‘if every family had a kid with Down syndrome?’
That question, of course, does not express the wish that more children would struggle with disabilities, but rather that more families might find within themselves the means to understand, and to transmit to future generations, the profound truth that every life is filled with meaning, and every child is a source of joy. The deepest consequences of that discovery, it seems, have to do not with the recognition or acceptance we might offer to those who are disabled, but with the strength, compassion, happiness, and wisdom we might gain by the discovery itself, and by our acting on it. The ruling emotion that unites all the various stories told in these books is gratitude, and the reader cannot help but be left grateful as well, for the strengths on display in these stories of children with Down syndrome and of their families are the strengths we today can least do without.
Let us hope that this latest discovery continues in the tradition of Down syndrome research that should encourage prospective parents that their children will be well cared for, and that helps people with Down syndrome to live haler, healthier lives. Let us hope, too, that we can cherish our children in all their giftedness, no matter what particular hardships they may face.
Undecided voters lay inside a sleek fMRI machine in late 2007. The magnetic coil pulsed, scanning the blood flow in their brains. Images of Hillary Clinton, Mitt Romney, John Edwards, and other primary contestants flashed before their eyes.
The UCLA neuroscientists and Washington political operatives who ran the study presented their findings in a New York Times article, “This Is Your Brain on Politics.” The “voter impressions on which this election may turn” were displayed in a colorful slideshow of (statistical combinations of) brains: here the medial orbital prefrontal cortex is orange, indicating an “emotional connection” with Democrats; here the amygdala flashes, betraying anxiety about Mitt Romney.
At an American Enterprise Institute panel event last Monday, psychiatrist Sally Satel told the audience that this story alerted her to just how vulgarized neuroscience was becoming in popular culture.
“It really was a bit of a fiasco,” she told NYT columnist David Brooks, who was moderating a conversation with Satel and psychologist Scott Lilienfeld. “The fact that something lights up doesn’t mean you hate Hillary Clinton, or you’re going to vote for someone else. It almost read like a parody, the way they had boiled it down to an almost stick figure kind of narrative.”
It’s this “stick figure narrative,” formed out of neuroscientific research and inserted into law, politics, and culture, that Satel and co-author Lilienfeld seek to dismantle in their authoritative, accessible new book Brainwashed: The Seductive Appeal of Mindless Neuroscience.
The wild exaggeration of neuroscience—both of specific findings, and of the field’s primacy in understanding human nature more generally—has drawn the ire of savvy bloggers and tome-writing intellectuals for years. The exposure of Jonah Lehrer, neuroscience’s most prominent popularizer, as a plagiarist and a fabricator also occasioned a critical look at the popsci genre he championed. But Satel and Lilienfeld’s book may represent the high-water mark of anti-pop neuroscience writing so far: it has been widely reviewed, and Brooks plugged Brainwashed in his weekly New York Times column. (Brooks freely admits he himself has succumbed to neuromania: “I wrote a book a couple years ago of mindless neuroscience, and it did really well!” he quipped at the AEI event.)
Anti-pop neuroscience, as opposed to anti-neuroscience, is the key distinction. Satel and Lilienfeld want to clean up the riffraff because neuroscience done right is a sophisticated and promising field of inquiry. They are not here to argue, as Brooks did in his recent column, that the mind is not the brain, or even that we have free will outside of the causal chain of our neural firings.
Rather, the uses and abuses of neuroscience are more illustrative as a story of our tendency to get ahead of ourselves. Our perennial thirst for elegant mechanisms and overarching narratives, noble in its own right, can lead us to take lazy shortcuts and place our hope in the Next Big Explanation, whether phrenology, Freud, or Freakonomics. Culture, history, and politics are complicated, confusing, and mostly boring. With the recent successes of neuroscience, it’s easy to wish that the chatter of narratives, prejudices, habits, and emotion could be replaced with the clinical pings of the fMRI machine.
But for now, at least, it would seem that neuroscience has a long way to go before it supersedes our other ways of knowing: “This Is Your Brain on Politics” identified two 2007 candidates who had failed to fire up the neurons of swing voters, indicating impending trouble for their campaigns. They were John McCain and Barack Obama.
One of the most pervasive pathologies of the wi-fi-enabled, smartphone-strapping, “globally competitive marketplace” world we currently live in is the constant, rushed sense that there is simply no time. Where previous millennia made calendars, and the Industrial Age brought us the clock-regulated day and the hourly wage, we now rush ourselves from pillar to post on schedules planned minute-by-minute. Doug Rushkoff calls our current condition “Present Shock,” an obsession with micro-time that drains our attention and energy away from the genuine human goods of life, especially since the last thing on anyone’s mind is getting the full eight hours of sleep we all know the doctors recommend.
It would seem a good thing, then, that the multi-billion dollar coffee industry has ensured a mocha on every street corner, and fair-trade beans in every pot. Even more helpful, energy drinks like Red Bull, Monster, and Five-Hour Energy concentrate the caffeine further, so that we can be back at, or even above, our normal functioning despite our perpetual sleep deprivation. For those in school at one level or another, prescription stimulants like Adderall and Ritalin make the rounds to enable all-night study sessions. In such a competitive culture, it might even be seen as irresponsible to throw those extra hours away when they can be chemically substituted so effectively.
Science is starting to bring a rude awakening to the sleep-sparing, however. As Maria Konnikova notes over at the New Yorker, “While caffeine has numerous benefits, it appears that the drug may undermine creativity more than it stimulates it.” Caffeine boosts energy without a doubt, and blocks a brain chemical that would normally inhibit other brain activity, “aiding short-term memory, problem solving, decision making, and concentration.” All good so far, but Konnikova quickly notes that
much of what we associate with creativity—whether writing a sonnet or a mathematical proof—has to do with the ability to link ideas, entities, and concepts in novel ways. This ability depends in part on the very thing that caffeine seeks to prevent: a wandering, unfocussed mind. [sic]
Caffeine blocks “the parts of our brain that are more active when we’re at rest” and which “become activated right before we solve problems of insight. Caffeine prevents our focus from becoming too diffuse; it instead hones our attention in a hyper-vigilant fashion.” This extends far beyond the creativity associated with the painter or the poet. Many of our greatest scientific or mathematical discoveries took place not in the lab, or at the chalkboard, but in moments of mental leisure, the proverbial eureka in the shower. And the highest performers across all fields may want to take special note: some research has found even prescription stimulants to taper off in their enhancing effects towards the high end of the bell curve; indeed, they may even start retarding the cognitive powers of the most naturally gifted.
For those who, like Jacques Barzun, swear by the enhancing qualities of caffeine, fear not, as Konnikova notes a hopeful solution: “Some research has found that attributes like increased alertness and focus can be replicated by the placebo effect,” as research substituting decaf for regular coffee has found the benefits of caffeine to only show up among those who believe they have consumed regular coffee, regardless of the true content of their drink.
And while many may feel that their need for brilliant insights is far less pressing than their need for those extra few hours of activity, Jane Brody of the New York Times has an unwelcome recounting of all the health maladies that insufficient sleep is linked to, including diabetes, obesity, depression and substance abuse, and ADHD; she even notes that “The cognitive decline that so often accompanies aging may in part result from chronically poor sleep.”
Those who convince themselves that a five-hour sleep paired with a five-mile run is the key to a healthy and productive life may be fooling themselves, then, and may ultimately be sacrificing countless subtle improvements in their quality of work and life for the sake of a small quantity of time.
Last year I published Race, IQ, and Wealth, presenting the overwhelming evidence that group IQs were far more malleable and shaped by social influences than is widely acknowledged in many quarters. The result was a lengthy and ferocious Internet debate, including an overwhelmingly negative and even hostile response to my suggestions, mostly by bloggers who had long specialized in that forbidden topic.
As the dozen or so rounds of the debate played out, some of my critics, including the most scholarly, began to acknowledge that my arguments actually had quite a bit of merit, and these “second thoughts” continued after the controversy had died down.
For example, late last year an erstwhile blogger-critic informed me that he had discovered the precise details of the huge but hotly-disputed 1972 IQ study in Ireland that I had repeatedly cited, and the methodology seemed exceptionally well-designed and sound. Therefore, I think it can no longer be seriously disputed that just forty years ago the population of Ireland did indeed have a mean IQ of only 87.
The recent defenestration of the unfortunate Dr. Jason Richwine has brought these issues once again to the fore, and apparently sparked renewed interest. During the previous debate, one of my earliest and strongest quantitative critics had been someone styling himself “The Occidentalist” and running a blog of a similar name. But a few days ago, he published an extremely detailed 5,000-word article entitled “The Argument Ron Should Have Made” in which he now grudgingly acknowledges that many of my central arguments seem to have been correct after all. This is a welcome change from his original response last year, which had characterized me as “egregiously dishonest” and my views as “laughable commentary.” Read More…