Human cloning is real.
Yesterday, the prominent scientific journal Cell published a paper by scientists at Oregon Health & Science University announcing that they had successfully derived stem cell lines from cloned human embryos. Some context is necessary, however, to start to grasp the implications of what has taken place.
First, a brief primer on the science. Cloning is more commonly referred to in scientific circles as “somatic cell nuclear transfer” (SCNT), in which scientists take the DNA from an adult (somatic) cell and transfer it into an unfertilized egg that has had its own DNA removed. Normally, to begin developing into an embryo, a fetus, and ultimately an adult human being, the egg has to be fertilized by a sperm, which kicks off the series of coordinated steps that constitute human development.
Instead of having the genetic material from two parents combine into a unique new life, however, cloning takes the full genetic information from an adult and places it into the emptied egg. These researchers immersed the egg in a caffeine solution and delivered regular electrical shocks, among other techniques, forcing it to enter development, dividing and growing until it reached the “blastocyst” stage, where a protective outer layer called the trophoblast surrounds the inner cell mass (ICM) that will constitute the myriad parts of a human body, and being.
At this point, following the standard procedure for obtaining embryonic stem cells, they dissolved that protective outer layer to harvest the inner cells rich in total potential, and grew them into an “immortal” line of stem cells. To prove their success, some of those cells were grown into tumors under the skin of immune-suppressed mice, while others were programmed into heart muscle cells, which were made to contract and were filmed doing so.
The controversies and debates about cloning specifically are legion, and will be given new intensity with this announcement, but some points can be made at the outset. For those who believe that human life is worthy of protection from its inception, the creation of a human life for the express purpose of destroying it, turning what could have been a child into reproducible tissue for manipulation and research, is abhorrent.
Cloning compounds these considerations by transforming the nature of human life itself. Because we are sexual beings, every child is the product of a union, possessing a unique inheritance all their own (identical twins notwithstanding) that will generate and govern their own story going forward. Cloning, however, gives that child the inheritance of a life already once lived. Many of our reproductive technologies already run perilously close to making the creation of life into manufacture, and cloning would drastically advance that by beginning to recycle the very material of life, to some degree inevitably making a newborn into a do-over. The demand is already there: to recover a lost child, to regenerate a dead genius, to live on forever genetically intact. We should not provide the supply.
These scientists protest that they have no interest in reproductive cloning, though, and indeed claim that a forthcoming paper will prove that their technique cannot be used to bring a child to term even if they wanted to do so. Even here, their justifications are weak. Embryonic stem cells, far from the universal supply kits of personalized medicine promised at the DNC a decade back, have an inherent limit: the human eggs needed to create embryos are hard to come by. They require women to undergo highly invasive and sometimes risky procedures not to give birth, but to give scientists material to work with. The women who provided eggs for this study were paid thousands of dollars, raising concerns over the exploitation of the poor, the commodification of the human body, and the commercialization of women’s reproductive powers. Furthermore, the research shows that while these scientists were very efficient, retrievals that obtained more than 16 eggs at once produced eggs drastically less capable of being used for cloning.
Moreover, human embryonic stem cell research has fallen off dramatically since the discovery in 2006 of a technique for turning adult cells back into the “pluripotent” state that makes embryonic cells so desirable, at much lower cost and without the ethical concerns of destroying embryos. Those cells, induced pluripotent stem (iPS) cells, won Shinya Yamanaka a Nobel Prize last year, and have revolutionized the stem cell research world. That breakthrough, it must be recognized, took place under a Japanese regulatory regime that made experimenting on human embryos almost impossible.
Taken together, these considerations make a strong case for banning human cloning of any sort, and for keeping scientific research within the bounds of what is morally acceptable. Science wields awesome powers for achieving the ends we set before it, and we should not do it so little credit as to assume that medical progress must be ethically transgressive.
James Pinkerton has been making a series of arguments here advocating research into cures as the best way of controlling healthcare costs, especially since cures have far and away the greatest non-financial benefits to society, compared to rationing or restrictions on care. In a fortuitous bit of timing, just as the conclusion of his latest two-part series was posted yesterday, the New England Journal of Medicine published a report exposing just how great the cost of our current state of affairs is becoming.
In one of the most thorough investigations of dementia on record, researchers led by Dr. Michael Hurd of the RAND Corporation used “the Health and Retirement Study (HRS), a nationally representative longitudinal study of older adults” to greatly improve on previous attempts to quantify dementia’s monetary costs. By controlling for the additional illnesses dementia patients are subject to due to accidents of age and demographics, the researchers were able to determine that dementia alone incurs additional marketplace costs of approximately $28,501 per patient per year. Almost half of this, about $14,000, was for nursing home care.
Dementia’s costs are not borne purely in the marketplace, however, as the disease is much more likely to involve informal home care by family members or unrelated volunteers. When the researchers added a valuation for that informally provided home care, the cost of dementia soared to approximately either $41,689 or $56,290, depending on whether the care was valued by forgone wages or by the cost of equivalent marketplace care.
These are the total financial costs for a theoretical average individual dementia patient, controlling for many sources of variation. Aggregated, the costs are vast, and they are only set to explode. The researchers found that the total marketplace cost of dementia in 2010 was $109 billion, and either $159 billion or $215 billion including informal care. That’s the good news. Assuming that the cost of care would remain constant, a conservative assumption, they projected that by 2040, less than thirty years away, dementia costs would rise to $259 billion in marketplace care, and either $379 billion or $511 billion in total. Even adjusting for the ability of population growth to moderate some of the effect of this explosion, per-capita costs of dementia would increase nearly 80 percent.
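To see roughly how population growth moderates those headline numbers, consider the simple arithmetic behind a per-capita figure like this, with the population growth factor assumed purely for illustration (the study’s own demographic projections are not reproduced here): per-capita growth is aggregate cost growth divided by population growth. Marketplace costs rising from $109 billion to $259 billion is a factor of about 2.4; if the relevant population grows by roughly a third over the same thirty years, per-capita costs rise by a factor of about 2.4 ÷ 1.3 ≈ 1.8, or roughly 80 percent.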
As I wrote in the aftermath of Pope Benedict’s age-driven resignation,
Older wisdoms need to be brought to bear to teach us once again about mortality, and living life well to the end. New strategies need to be created to help us care for ourselves and each other.
The tremendous diversity we find in all things human is all the more present here. We will each age distinctively, in keeping with the discrete combinations of genetic inheritance and lived experience that make us who we are. Some will find their body betray them as their wits are still sharp, some will have their body persist past their best days. Many will stay strong to the very end. My own grandfather may have lost some of the pep off his driver on the golf course, but he still undershoots his age and beats his son-in-law and grandson every time out. But the lives we set up for ourselves in the post-war era of independence now, independence forever, may not survive unchanged under the strain of the challenges we face. We may need to reach back and revive older traditions of multi-generational households. We may have the wealth to continue building a service sector to support us in our later days. However the years to come turn out, we can be sure that our society will soon look very different than it does today.
Perhaps the brain-mapping plans outlined in the President’s recently revealed $100 million BRAIN Initiative will lead to cures somewhere down the line, but it is wise to be cautious about the ability of scientists to truly understand and minister to our inner selves. Even if large medical advances come out in the decades ahead, as one presumes will happen, the lesson of the past fifty years should teach us that better health is not always cheaper, and new technologies are expensive to bring to market.
What seems certain, though, is that dementia and the costs of aging will be an ever-increasing part of our public and personal concerns.
The furniture industry in North Carolina is doing better than the textile industry, but that’s a little like saying the African elephant is doing better than the mastodon. Between the advent of mass Chinese imports and bookcases of particleboard you “assemble” at home, the market for domestically produced quality furniture has taken a beating, and the communities that market once supported have not been spared the blows. A New York Times report over the weekend pulled back the curtain a bit on the rougher side of life in the Tarheel State.
The Times story is ostensibly about the Occupational Safety and Health Administration’s failures to prioritize health concerns in its regulation and enforcement, but the thorough reporting and balanced storytelling make the story human, with real, conflicting pressures. It centers on Royale Comfort Seating, a foam cushion supplier to many of the top furniture brands, and on Sheri Farley, a single mother who, “For about five years…stood alongside about a dozen other workers, spray gun in hand, gluing together foam cushions for chairs and couches sold under brand names…” The glue traditionally used for this purpose in the 1980s was eliminated because of effects related to ozone depletion, and the glue after that earned the sobriquet “methyl ethyl bad stuff” by killing more than 30 workers a year. The glue Sheri Farley and her coworkers were immersed in, nPB, is a powerful neurotoxin that raised health concerns as soon as it was introduced to the market. It appears to have been seen as the least bad option.
Business Insider reports that “Steve Deace, an influential conservative Iowa talk show host” has been making profound declarations that, should the Supreme Court strike down anti-gay marriage laws, “It’s going to raise the issue to Orange Threat Level, it’ll be DEFCON 6…” In the first instance Mr. Deace is likely referring to the now-defunct color-coded threat warning system instituted in the aftermath of the attacks of 9/11, where orange was second only to the red warning of an imminent attack. On the second count, Deace completely mangles the DEFCON warning system, and so gives us an opportunity to get our nuclear war metaphors straight once and for all.
DEFCON is short for DEFense CONdition, and according to the Encyclopedia of the Cold War, “The DEFCON system is divided into five different alert levels with detailed, if ambiguous, descriptions and expected actions by military forces at each threat level.” Mr. Deace’s first error is that DEFCON scales from 1 to 5, not 6. To be charitable, though, he likely knew this and was making an exaggerated claim for effect. What, then, is DEFCON 5? Again from the Encyclopedia of the Cold War: “DEFCON 5: Normal peacetime readiness. The lowest alert level in the DEFCON system…” DEFCON 5 is as low as alerts go, and is the traditional status for most military forces. Anytime someone threatens to go DEFCON 5 on you or a loved one, then, readily take them up on their offer, as amity should shortly be restored.
Alert is raised from there with progressively lower numbers. DEFCON 4, the lowest alert we have seen since 9/11, represents “Normal, increased intelligence and strengthened security measures,” but readiness really ramps up at 3. At DEFCON 3, “Increase in force readiness above normal readiness…base security [is] tightened and intelligence-gathering intensified [and] other changes to the configuration of military forces are made,” including the arming of nuclear warheads.
The highest we have ever been is DEFCON 2, “Further increase in force readiness but less than maximum readiness.” Parts of the military were raised to DEFCON 2 during the Cuban Missile Crisis, when “nuclear weapons … were loaded and could be used at the discretion of surprisingly low-level officers,” according to the aptly titled book DEFCON 2: Standing on the Brink of Nuclear War During the Cuban Missile Crisis.
DEFCON 1, “Maximum force readiness,” has never been reached as far as we know. That is a blessing, as “At this DEFCON level US military forces are ready to be deployed, including preparation for full-scale thermonuclear war.”
Where does this leave Steve Deace and his claim of DEFCON 6 for exaggerated effect? Going from 1 as full-scale thermonuclear war, and 5 as normal peacetime, 6 would seem to be peace on earth, goodwill to man, with the possible escalation to rainbows spread across the land. Let us hope that such is the power of the Supreme Court.
James Poulos makes an important argument in Vice, that political elites have become so obsessed with economic growth that “Not only does our anxiety push us to become the kind of people least capable of launching our own personal growth plans, it encourages us to ignore the growing number of urgent issues that have nothing to do with the size of our per-capita GDP.”
To be sure, economic growth is important, and provides many benefits, including but not limited to maintaining domestic tranquility, securing our social order, improving the material living standards of almost all Americans (even if at divergent rates) and relieving us of many anxieties and burdens. Poulos doesn’t dispute the benefits of economic growth, but rather points out how an obsession with it can undermine and spoil the very things we are or should be seeking. You should read the full piece here.
There is a complementary obsession to the one he points out, though, one which interacts with the growth obsession to undermine many of the most important discussions and considerations we should be undertaking. It is the competitiveness obsession. In fact, if there is any word that runs close to “growth” as a dominant policy topic, it is its cousin “competitiveness.” Our nation needs to be more competitive with China and India on a global scale of economies and war. Our companies must be more competitive to create jobs. Our workers must be more competitive, to keep those jobs. Our schools must be more competitive in math and science, and our children must be more competitive, because haven’t you seen how long the Chinese lock their kids up in schools? When not competing with the Chinese, our kids need to compete with their peers to (for the most egregious) get into the most competitive kindergartens and grade schools that will feed them into the most competitive colleges, and graduate programs, so that they can get the most competitive internships and entry level positions at the most competitive firms. Along the way hopefully they will meet an equally competitive spouse, so that their children can start off with a competitive genome. Because you never want to let your child “fall behind,” least of all in the womb. How would they ever catch up from that? They would be doomed, or at the least screwed.
To which Poulos has a welcome reminder: “None of us are doomed. Nobody is screwed. Sure, your life might take some weird, sometimes even painful turns. You, like millions upon millions of us since the dawn of man, might experience heartache, disappointment, and tragedy.”
But the most important things in life, the most vital and electric things that enrich the human experience into something greater than anything the most competitive of Darwin’s finches could hope for, are not to be found in competition. Love and companionship, in spite of and through hardship, will not be acquired through a sperm bank sorted by SAT score. The terror and the tenderness of parental love will not be enhanced by private schools charging Ivy-League tuition. The ecstasy and the groundedness of religious devotion cannot be obtained at the expense of another. In fact, each of these can only be obtained by looking beyond the personal immediacy of competition to recognizing ourselves as in relationship, embedded in community.
It is just this community that the competitiveness cult threatens, by driving us to instrumentalize ourselves and our lives. This is perhaps made most plain in the absurdity of the conventional college admissions process, where the early education in civil society and self-governance that clubs of interest and service once provided high schoolers has been transformed by their status as mandatory resumé fodder. The building of relationships can turn into “networking,” particularly in DC, where business cards are often accompanied by sheepish-to-shameful grins testifying to the mixed motives of “staying in touch.” There are reasons for this, and people feel a genuine anxiety about keeping their heads above water, in many ways driven by the increasing instability of our modern times. That does not diminish, but rather enhances, the value of Poulos’ imperative: “Let go of our sense of dependence on plans.” What we can compete for comprises only a small portion of our human goods.
A tribute to the retiring Mariano Rivera? A wink to the notion that he haunts John McCain’s nightmares? Either way, Rand Paul entered the CPAC stage yesterday to the musical stylings of Metallica’s “Enter Sandman,” and full-throated roars of approval from the conservative crowd. First elected in 2010, the junior senator from Kentucky has been on something of a political tear of late, and he built off of the momentum from his social media fueled filibuster to offer a textbook demonstration of his skill at covering libertarian ideas in conservative partisan trappings.
Paul began his speech by declaring he had “a message for the President, a message that is loud and clear, a message that doesn’t mince words.” Listen to three speeches at CPAC, and you’ll hear that four times, followed by a standard GOP platitude. The red-meat crowd sensed their priming and loaded up to applaud an attack on Obamacare, the virtues of America the Beautiful, or some similar staple. Paul gave them: “no one person gets to decide the law, no one person gets to decide your guilt or innocence.” From the first, his speech demonstrated the rhetorical talent Paul the Younger brings to his increasingly large national profile.
In his further attack on President Obama, Paul demanded to know “will you or won’t you defend the Constitution?” The crowd ate it up, so he slipped into Eisenhower, usually more in vogue in the pages of The American Conservative than the ACU, asking “How far can you go without destroying from within what you are trying to defend from without?” The Senator followed with Montesquieu and the separation of powers but concluded the point by saying “Our Bill of Rights is what defines us and makes us exceptional.” Feeding off the guaranteed applause for American exceptionalism, Paul defended himself against John McCain and Lindsey Graham by draping himself in the cause of the wounded warriors, “the 6,000 parents whose kids died as American soldiers in Iraq and Afghanistan.” Paul was defending the Bill of Rights so that the soldiers might not have died in vain, thus co-opting the rhetoric that extended the wars in Iraq and Afghanistan. With a deft step, John McCain was rhetorically wrong-footed.
Last night’s epic talking filibuster by Senator Rand Paul of Kentucky, joined by senators Republican and Democrat (well, one Democrat), brought the GOP caucus, however briefly, around to a civil libertarian position scarcely imaginable for those of us who remember the George W. Bush years. Lindsey Graham and John McCain are still fighting the good fight, however.
Where did drones come from? Did they vault from the pages of science fiction straight into our hypothetical cafe experience? The answer is stranger than you think.
P.W. Singer (not to be confused with Peter Singer) gives a terrific history of robot warfare at the beginning of his book Wired for War, which is partially condensed in an essay at The New Atlantis. Drones began, as many great inventions have, with Nikola Tesla, who “first mastered wireless communication in 1893” and went on to pilot a remote-controlled motorboat outside Madison Square Garden. He was laughed out of the military when he proposed his boat and remote-controlled torpedoes as weapons of war.
World War I brought technology to warfare in a particularly unwieldy fashion: the strategy was 19th-century, the weapons were 20th-century. Prototypes floated around, like the supply-running “electric dogs” and the Kettering “Bug,” a gyroscope- and barometer-guided plane that crashed into things. The Germans guarded their coastlines with FL-7s, boats loaded up with explosives and controlled by wire until Tesla’s wireless radio control entered the fields of war in 1916.
For the most part, World War I was a time of odd experiments in unmanned warfare, with few projects getting enough traction or funding to be implemented. World War II, on the other hand, was a veritable technological bonanza.
The Germans created the first cruise missile, ballistic missile, and jet fighter, as well as the first drone. The German drone, the FX-1400 or “Fritz,” was, as Singer describes it, “a 2,300 pound bomb with four small wings, tail controls, and a rocket motor” that the Germans would drop from planes and guide by radio to its target.
The Americans were behind the curve, but experimented with “Operation Aphrodite” in 1944. Aphrodite was a plan to send bomber planes loaded with over a ton of Torpex–an explosive 50% stronger than TNT–and have the pilot bail once the explosives were armed so that a nearby control plane could use video cameras mounted on the drone to guide it into heavily protected Nazi targets. On August 12, 1944, when a Navy equivalent was sent out in pursuit of an experimental Nazi supercannon, the Torpex exploded before the plane even crossed the Channel, killing the crew and scuttling the program for fear of the pilot’s powerful father.
The pilot of that doomed drone was Joseph Kennedy, Jr., JFK’s older brother. Joe Kennedy the father was a millionaire, movie mogul, and most importantly, a powerful political figure and former ambassador to Great Britain who was set on electing his eldest son Joe Junior president. After the younger Joe was killed, John F. Kennedy took up the family burden.
After World War II, drones took a backseat in the American military. The Firefly drone flew 3,435 missions over Vietnam and Southeast Asia, but 16 percent of them crashed, with no field tests or data collection. The Army launched the Aquila program in 1979, but burdened it with so many demands that the original battlefield spy became a typical bureaucratic boondoggle, with the original order of $560 million for nearly 800 drones turning into more than $1 billion for prototypes. Most of the experimental ground drones of this period were operated by fiber optic wire, ignoring Tesla once again and proving vulnerable to scissors.
Desert Storm made military air tech sexy, post-Cold War cuts made it a rising budgetary priority, and John Warner added a legislative mandate to a defense authorization bill in early 2001 that one-third of all attack aircraft should be unmanned by 2010. The drone explosion since the war on terror brought us to the present day, with unmanned aircraft flying missions across the world and becoming a key part of President Obama’s anti-terrorism strategy.
A final interesting note: the Global Hawk Drone pictured above is manufactured by Northrop Grumman, which bought out one of the most prominent drone manufacturers of World War II after the war. This manufacturer had a factory near Hollywood where an Army photographer doing a feature on women in industry discovered a shapely young lady working on the drones, and sent the photos he took of her to a modeling agency. The woman would soon become better known as Marilyn Monroe.
The Pope has resigned. He has not died, or been taken up like Elijah, but resigned because “both strength of mind and body are necessary” for his work and he is running out of one, the other, or both.
The history books will likely see this as an emblematic moment of an old dilemma with profoundly new reach and seriousness: we are starting to outlive our competency, outlive our health. What makes the Pope such a particularly compelling example is how his office aspires to permanence and timelessness, for he gives us a very public display of our nature as finite, limited creatures endowed with that transcendent, troublesome ability to participate in the infinite. For millions beyond the Catholic Church itself, he is the man most associated with the quest to stretch ourselves beyond our sins and earthly limitations, towards a higher calling. Ross Douthat wrote in the immediate aftermath of the announcement that:
“There is great symbolic significance in the fact that popes die rather than resign: It’s a reminder that the pontiff is supposed to be a spiritual father more than a chief executive (presidents leave office, but your parents are your parents till they die), a sign of absolute papal surrender to the divine will (after all, if God wants a new pope, He’ll get one), and an illustration of the theological point that the church is still supposed to be the church even when its human leadership isn’t at fighting trim, whether physically or intellectually or (for that matter) morally.”
And yet Pope Benedict XVI resigned. Instead of waiting to be called heavenward, he exercised his own judgment (profoundly informed, doubtless, by long and searching prayer) that he was no longer fit for the office to which he had been called. What does it mean to outlive a post meant for a lifetime?
The 2013 Academy Awards are now on the books, and the only question left is: which of these movies are actually worth paying money to see? TAC has answers:
Argo
Winner of Best Picture, Best Adapted Screenplay:
- Noah Millman feels Argo backfired as a campaign ad, and misfired as a film.
- Scott McConnell, on the other hand, explains what Argo got right on Iran.
Lincoln
Winner of Best Actor and Production Design:
- Millman was “carried along by the universally excellent performances,” and dives into the tension, dramatic and historical, of the film.
Amour
Winner of Best Foreign Language Film:
- Eve Tushnet found Amour a “totally compelling, emotionally devastating movie,” and felt it was misunderstood as pro-euthanasia shilling because it was “a portrait, not an allegory.”
Life of Pi
Winner of Best Director, Cinematography, Original Score, Visual Effects:
- Rod Dreher found Life of Pi to be “mostly wonderful, and certainly one of the most visually stunning films any of us had ever seen.” He discusses the “provocative religious message in the movie” in a spoiler-filled consideration (you have been warned) and finds “as a Christian I don’t share the film’s pantheistic worldview, but I found it philosophically engaging all the same.”
- (Update): Noah Millman adds to the list with a consideration of Life of Pi’s multiple stories, and what stories mean to us in art and faith.
Zero Dark Thirty
Winner of Best Sound Editing:
- Noah Millman gives Zero Dark Thirty a deep, thorough consideration and concludes “Don’t go to this movie to learn whether torture was necessary or not to get bin Laden. Go to this movie to understand why we – not just the Bush Administration or the CIA, but much of America – embraced torture.”
The Sessions
- Noah Millman finds The Sessions “a sweet little film … a heartwarming story, and one would have to be a churl not to cheer Mark on, particularly since he is so self-deprecatingly charming throughout,” but “I suppose I’ll have to be a churl,” giving a thoroughgoing consideration of all the film’s many merits along with its shortcomings.
Beasts of the Southern Wild
- Eve Tushnet “can tell you it is worth it. The things you’ve probably heard already are true: This is a lush, heart-wrenching fable about a little girl and her daddy, in a rural Louisiana enclave barely clinging to the high side of environmental apocalypse,” then gives the movie a thorough consideration.
The Master
- Millman “still can’t make up my mind what to think about Paul Thomas Anderson’s new film, which I saw last night. Not that I’m not sure it was a great film – I know what I think of it. I’m just not sure what I think about it,” ultimately concluding “Something has finally been mastered, but whether it is Dodd or Quell, or the language of self-mastery itself, I couldn’t say.”
Skyfall
Winner of Best Original Song and Sound Editing:
- We didn’t review “Skyfall” per se, but Stephen Tippins, Jr. gave James Bond himself a thorough review in a recent issue, finding the double-oh agent to be more than a glamorous womanizer, rather “defending the West against itself.”
Django Unchained
Winner of Best Supporting Actor and Original Screenplay:
- Rod Dreher didn’t review Django, exactly, but rather explains why neither he nor The Atlantic’s Ta-Nehisi Coates plans to see it at all, despite Rod expecting he would enjoy it.
Wednesday night the New America Foundation hosted a debate on the resolution “Your Smartphone has Hijacked Your Life.” It was a rich discussion and very well moderated, ranging widely over a number of the social and personal costs and benefits we derive from these miraculous devices, and well worth watching. Say, on the Ustream app on your phone.
Throughout the discussion, though, one particular thread of an idea seemed to reemerge from time to time in various forms, and understandably so, as it is perhaps one of the distinguishing features of life in the smartphone era.
Smartphones are always, and they are everywhere.