Last May, when This American Life acknowledged that it had run a 23-minute-long segment premised on a fraudulent scientific study, America's most respected radio journalists did something strange: They declined to apologize for the error. "Our original story was based on what was known at the time," host Ira Glass explained in a blog post. "Obviously the facts have changed."
It was a funny admission. Journalists typically don't say that "facts change"; it is a journalist's job to define and publicize facts. When a reporter gets hoodwinked by a source, she does not imply that something in the fabric of reality has shifted. She explains that she was tricked.
With science coverage, though, the situation seems to be different—which is why Glass' remark, while unusually blunt, wasn't actually wrong.
This American Life had been deceived by a political science researcher at the University of California–Los Angeles, Michael LaCour. His paper, based on falsified data, had slipped past peer review and landed in the pages of Science, the country's most prestigious scientific journal. This American Life declined to comment for this article, explaining that they might return to the incident in a future episode. But it's not hard to read the implicit what-could-we-do? shrug in Glass' statement. Science had spoken. Science had changed its mind. "Obviously the facts have changed."
The LaCour study, which focused on canvassers' ability to change voters' minds, was an especially subtle piece of fraud. It was hard to catch. LaCour had produced a result that was unusual, dramatic, optimistic, and, as Glass noted during the episode, different from 900 other similar papers that LaCour's colleagues had reviewed. No journalists—as far as I can tell—went looking for aberrations; in the end, a couple of graduate students caught him after they tried to replicate his methods.
As various commentators have observed, there's probably no field of journalism that's less skeptical, less critical, less given to investigative work, and less independent of its sources than science reporting. At even the most respected publications, science journalists tend to position themselves as translators, churning the technical language of scientific papers into summaries that are accessible to the public. The assumption is that the source text they're translating—the original scientific research—comes to them as unimpeachable fact.
There is little about the scientific method that supports these broadly accepted journalistic assumptions. Science is messy, and scientists are imperfect. And as scientific communities deal with rising retraction rates, a reproducibility crisis, continued influence from industry liaisons, new pressures on graduate students and postdoctoral fellows, a shaky peer review system, and a changing publication landscape, the need for watchdogs is stronger than ever.
[. . .]
In the United States, science reporting took its shape shortly after World War I, when the newspaper magnate Edward Scripps founded Science Service, a non-profit wire service that would deliver science coverage to American newspapers.
[. . .]
Today, most science coverage still follows a press release model. Articles focus on individual studies, as they come out. Reporters rarely return to research years down the line. Articles provide little context or criticism, and they usually frame the story within a larger narrative of human progress.
Deepening the resemblance to PR, scientific journals often provide embargoed versions of papers to select journalists ahead of publication. Among other things, that means that the journals control which outlets cover their research first.
Today, science journalists' motivations "align very nicely with what the scientists themselves want, which is publicity for their work," says Charles Seife, a veteran science reporter and a professor in New York University's Science, Health, and Environmental Reporting Program. "This alignment creates this almost-collusion, that might even be unethical in other branches of journalism." In short, more than other fields, science journalists see themselves as working in partnership with their sources.
[. . .]
The publicity-journalism culture has vulnerabilities. In 2014, John Bohannon, a writer for Science magazine, and Gunter Frank, a German clinician, set out to demonstrate the low standards of science reporting by running a flashy study with awful methods, and then waiting to see who would cover it. Their study purported to show that chocolate aided weight loss. The methods failed to meet even the most basic standards of good nutrition research—they had a sample size of 15—but Bohannon and Frank found a for-profit journal willing to publish their findings. Then they sent out a press release. A number of publications, including Shape magazine and Europe's largest daily newspaper, Bild, published the results. A year after Bohannon revealed that the study was a hoax, only one of those publications had retracted its coverage, he says.
[. . .]
Covering science isn't the same as covering, say, politics, of course. Politicians are competing for a limited pool of resources. They're playing zero-sum games, and they have strong incentives to conceal and deceive. Scientists, at least in theory, have incentives that are aligned with those of journalists, and of the public. We all want to learn more about the universe. It's a beautiful goal.
[. . .]
But approaching science as an exercise in purity, divorced from other incentives, Seife says, "ignores the fact that science doesn't work perfectly, and people are humans. Science has politics. Science has money. Science has scandals. As with every other human endeavor where people gain power, prestige, or status through what they do, there's going to be cheating, and there are going be distortions, and there are going to be failures."
[. . .]
Here's the uncomfortable side of this story: A substantial portion—maybe the majority—of published scientific assertions are false.
In rare cases, that's because of fraud or a serious error. The number of scientific papers retracted each year has increased by a factor of 10 since 2001.
But even accepted research methods, performed correctly, can yield false results. By nature, science is difficult, messy, and slow. The ways that researchers frame questions, process data, and choose which findings to publish can all favor results that are statistical aberrations—not reflections of physical reality. "There is increasing concern that most current published research findings are false," wrote Stanford Medical School professor John Ioannidis in a widely cited 2005 paper.
Social psychology is in the midst of a reproducibility crisis. Many landmark experiments don't hold up under replication. In general, the peer review process is supposed to control for rigor, but it's an imperfect tool. "Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public perception of peer review as a stamp of authentication is far from the truth," wrote a former Nature editor in that journal in 2006.
At the same time, the conditions in which research takes place can incentivize scientists to cheat, to do sloppy research, or to exaggerate the significance of results. Simply put, science does have politics. There's intense competition for funding, for faculty jobs, and for less tangible kinds of prestige.
Meanwhile, corporations spend small fortunes trying to influence academic researchers. In recent years, there has been a profusion of for-profit journals that either skip peer review entirely or use a sham peer-review process, making it harder to establish the reliability of sources.
That's an uncomfortable image of the scientific process—uncomfortable because it's so out of step with popular presentations of scientific authority. Science magazines and sections rarely cover these issues.
Science reporters don't usually look at research funding, nor do they critically evaluate the quality of the studies that they cover. Often, they lack the time or technical knowledge to dig into stories. In other cases, they may just be worried about challenging expert authority.
All communities require watchdogs, though. And while they are rare, promising models of investigative science journalism do exist.
[. . .]
After an investigation that cast doubt on the effectiveness of the antiviral drug Tamiflu, the BMJ appointed its first investigations editor, Deborah Cohen, who had trained as a doctor before moving to journalism. In her role at the BMJ, she dug into the research-backed claims of sports drink makers, and she investigated the safety of a popular diabetes drug. To demonstrate how flimsy the British government's regulation of new surgical implants had become, Cohen created a fake company, with a fake hip implant, and got it approved for medical use in the European Union.
[. . .]
At Retraction Watch, started in 2010, Ivan Oransky and Adam Marcus track retracted papers across disciplines. Oransky describes "this ecosystem that ... likes to paint the shiny, happy, new, novel, amazing breakthrough narrative onto everything. And retractions don't fit into that narrative, because they're about when things go wrong. And nobody likes admitting it."
[. . .]
Take the use of anonymous sources. In order to check the quality of new studies, diligent science reporters will call up other people in the field and ask for their opinions on the research. In small, close-knit scientific communities, though, people have strong incentives to speak positively about their colleagues' work. After all, the person you criticize in the New York Times today may be peer-reviewing a submission of yours tomorrow.
[. . .]
And then there's the money. Science journalism usually focuses on the end result, and almost never refers to finances. But funding decisions affect everything from study design to the shape of entire research programs. Often, troubling data is sitting out in the open. Charles Seife, the NYU professor, has uncovered malfeasance by cross-referencing lists of federal grant recipients with lists of doctors who receive money from drug companies.
[. . .]
Does an institution's strength come from a sense of omniscience? Or does it come from acknowledging its faults, and showing that it can address them, even as it produces useful results?
"I think that science is robust enough of a worldview and a method for truth-finding that you can beat it up as much as you want. It's going to come out just fine," Bohannon says. "You don't have to tiptoe around that thing."