In our present Age of Scientism, which has been with us for over two centuries now, there is no higher authority in our culture than Science. That doesn’t mean that science is always unquestioned, but it does mean that its authority is without peer. That authority stands on its claim to be objective, disinterested. But this is a shaky claim. In the 19th century, various partisan camps in Britain seized on Darwin’s findings to claim the mantle of science — sorry, Science — for what they wanted to do in the first place. For example, imperialists said Science now proves that it is natural and just for the strong to dominate the weak. Et cetera.
The point is, it is widely understood that there are groups on both the left and the right who refuse to accept scientific data and conclusions when they offend a core belief (e.g., anti-vaxxers, Biblical literalists), but the harm from credulousness about science is less widely appreciated.
The New York Times addresses this issue in a story about the bogus study purporting to prove that pro-gay canvassers can change a significant number of minds about same-sex marriage through personal conversation. The study, which relied on falsified data, was published in the prestigious journal Science, and co-authored by a widely respected authority in the field. From the piece:
The case has shaken not only the community of political scientists but also public trust in the way the scientific establishment vets new findings. It raises broad questions about the rigor of rules that guide a leading academic’s oversight of a graduate student’s research and of the peer review conducted of that research by Science.
New, previously unreported details have emerged that suggest serious lapses in the supervision of Mr. LaCour’s work. For example, Dr. Green said he had never asked Mr. LaCour to detail who was funding their research, and Mr. LaCour’s lawyer has told Science that Mr. LaCour did not pay participants in the study the fees he had claimed.
Dr. Green, who never saw the raw data on which the study was based, said he had repeatedly asked Mr. LaCour to post the data in a protected databank at the University of Michigan, where they could be examined later if needed. But Mr. LaCour did not.
“It’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set,” Dr. Green said. “This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.”
But Dr. Ivan Oransky, a co-founder of “Retraction Watch,” which first published news of the allegations and Dr. Green’s retraction request, said, “At the end of the day he decided to trust LaCour, which was, in his own words, a mistake.”
The details that have emerged about the flaws in the research have prompted heated debate among scientists and policy makers about how to reform the current system of review and publication. This is far from the first such case.
The scientific community’s system for vetting new findings, built on trust, is poorly equipped to detect deliberate misrepresentations. Faculty advisers monitor students’ work, but there are no standard guidelines governing the working relationship between senior and junior co-authors.
The reviewers at journals may raise questions about a study’s methodology or data analysis, but rarely have access to the raw data itself, experts said. They do not have time; they are juggling the demands of their own work, and reviewing is typically unpaid.
In cases like this one — with the authors on opposite sides of the country — that trust allowed Mr. LaCour to work with little supervision.
“It is simply unacceptable for science to continue with people publishing on data they do not share with others,” said Uri Simonsohn, an associate professor at the Wharton School of the University of Pennsylvania. “Journals, funding agencies and universities must begin requiring that data be publicly available.”
Left unaddressed (naturally) by this story is the role of confirmation bias in these cases. Did the people in charge of supervising and vetting this study for publication cut corners because it arrived at a conclusion that supported what they wanted to believe?
You might remember my telling you a few weeks back about a young friend, a scientist in training, who chose to leave her position at a prestigious research institution to work in a science-related field. She said that she couldn’t stand the atmosphere of backstabbing careerism that pervaded the community, nor could she abide the data-fudging that she said was common, to reach conclusions that would make grant-getting more likely, or bring more prestige.
A piece in Vox talks about how science can go off the rails. Outright fraud is rare, but that’s not the only way scientists can produce inaccurate or otherwise skewed results. The media play a role in the process too:
In his seminal paper “Why Most Published Research Findings Are False,” Stanford professor John Ioannidis developed a mathematical model to show how broken the research process is. Researchers run badly designed and biased experiments, too often focusing on sensational and unlikely theories instead of ones that are likely to be plausible. That ultimately distorts the evidence base — and what we think we know to be true in fields like health care and medicine.
All of these problems can be further exacerbated through the dissemination of research, when university press offices, journals, or research groups push out their findings for public consumption through press releases and news stories.
But don’t think that the scientists have been hoodwinked by their institution’s PR machine:
Worse, the scientists were usually present during the spinning process, the researchers wrote: “Most press releases issued by universities are drafted in dialogue between scientists and press officers and are not released without the approval of scientists and thus most of the responsibility for exaggeration must lie with the scientific authors.”
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.

Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations.

Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices. And individual scientists, including their most senior leaders, do little to alter a research culture that occasionally veers close to misconduct.
Whenever skeptics question scientific results in a particular instance, there are always scolds who damn them for being obstinate irrationalists. And sometimes that is true. But placing all one’s faith in the integrity of scientists is a mistake too.