Across the country, NFL players may be wondering if they’ve been wearing pink in vain. In 2009, the United States Preventive Services Task Force changed its advice on mammograms, recommending them only to women over 50, since, below that age, the screening did more harm than good. Now, an exhaustive, twenty-five-year study may prompt the Task Force to tell all women to cancel their appointments.
In a randomized controlled study (the gold standard of medical research), women who received regular mammograms were no less likely to die of breast cancer than those who went without. Although they received no additional protection, women who were screened paid an additional price. One in five of the cancers or abnormalities identified by mammography was ultimately harmless, but the women went through biopsies in order to be sure. Even worse, one out of every 424 women who were screened received treatment for a nonexistent cancer, enduring needless and debilitating radiation, chemotherapy, or surgery.
It’s tempting to be skeptical whenever a medical recommendation is reversed. If the last thing they told us was wrong, why should we trust them again? However, health care has changed since the advent of mammography. The old studies on the benefits of mammography weren’t necessarily wrong, just out of date. As awareness of breast cancer has increased, self-screenings have begun to do the work of mammography. And as cancer drugs have improved, catching the disease at its earliest stages is no longer critical to survival.
But for a patient, who just hears conflicting recommendations, and not a discussion of research methods or the history of medicine, it’s hard not to come away with a sense of unease. It’s hard for doctors, knowing they won’t get to convey the nuances of the change, to flip-flop publicly for the sake of their patients. Not to mention that we’ve been hearing every October that mammograms are one of our most potent weapons in the war on breast cancer. After so much relentless cheerleading, it’s hard not to feel a twinge of guilt or fear when we think of the friend of a friend whose cancer was caught by a mammogram.
While we remain torn between the old recommendation and the new, it’s tempting to stick with the more interventionist option. Doctors and patients would like the comfort of knowing they did something, even if what they did wasn’t very good. In fact, breast cancer treatment, as well as its diagnosis, has fallen prey to this rationale.
Women used to receive the Halsted procedure, a particularly radical form of mastectomy in which the doctor would remove the breast, the underlying chest muscle, and sometimes even muscle in the neck and arm. William Stewart Halsted performed the operation in good faith, thinking it was better to do everything he could for his patients, seeking to make the extremity of his surgery match the intensity of his intentions.
The same diseased thinking spreads through our body politic. War hawks accuse non-interventionists of being indifferent to humanitarian causes or the national interest, when, in fact, the most ballistically satisfying solution may do more to calm our nerves than to quiet regional tensions. Politicians, like medical regulatory boards, become fearful of admitting that the facts on the ground have changed since they made a decision, even when, say, the employer mandate no longer makes much sense in a slack labor market.
In medicine and in politics, we’re always best prepared to fight the previous war, not the present one. Our clinical trial data and our policy maneuvers are always premised on information that’s a little out of date. But medicine has been more adept at recognizing the limitations of its data, resampling, and redeploying its resources.
Public policy is rarely put to any test more rigorous than that of public opinion, and even the few empirical checks on rhetoric are often methodologically flawed, as in the case of the non-randomized pilot programs of the Obamacare Innovation Center.
Despite the headlines, the Canadian study didn’t expose an embarrassing error; it exemplified a robust culture of error-checking, one that’s still missing from American elections.