Why Can’t Scientists Reproduce Each Other’s Results?

Illustration: Paul Blow

By Félix Reskala

SAN FRANCISCO—There is a problem in science: Scientists often can’t reproduce the results of their colleagues. More than 50% of researchers feel this poses a major problem for the field, according to a survey last year in Nature.

To address what this issue means for science journalists, prominent editors and scientists convened a session titled “Conflicting Data: Dealing with the Reproducibility Issue” on 28 October at the World Conference of Science Journalists 2017.

Reproducibility experiments test the validity of a published finding. If a scientist publishes a breakthrough and her colleagues want to check whether those results hold up, they must repeat the experiment. Often, however, the results of the repeated experiment don’t match those described in the original paper.

When results don’t match, scientists can’t tell whether the original finding is reliable or an anomaly. Without that confidence, they cannot build further experiments on those results, and the progress of science slows.

What’s going on?

Why can’t so many scientists reproduce their colleagues’ results? One possible explanation is fraud: Scientists may modify or even falsify research data, according to the panelists.

However, senior health journalist Ivan Oransky, a co-founder of Retraction Watch, reported that only a small share of the reproducibility crisis stems from fraud in academic research. Overall, he said, fraud is not the main driver of the crisis.

A second explanation came from National Public Radio science correspondent Richard Harris, author of Rigor Mortis, a new book on reproducibility. He argued that the crisis is rooted in the underlying culture of research, which rewards flashy papers rather than good science.

Panelist Mina Bissell, senior advisor to the laboratory director on biology at Lawrence Berkeley National Laboratory in California, offered a third explanation. She noted that irreproducible results often stem from differences in experimental methods between studies, differences that are usually small and hard to detect.

What to do about it

For this reason, Bissell has collaborated with other scientists who could not reproduce her results, and she recommends that other scientists do the same. These collaborations aim to identify why results differ between studies that attempt to use the same methods. Although such collaborative reproductions are difficult and time-consuming, they usually lead to improved methods and a better understanding of the data.

The speakers praised this collaborative approach but also noted its limits: It would be unrealistic to carry out such investigations for every irreproducible result, let alone for the entire academic literature.

The panelists proposed several other remedies. For example, scientists could nominate a paper with notable results, and volunteer teams would attempt to reproduce its experiments. The speakers also noted that reproducibility may already be improving because journals now require more detail about specific techniques, making it easier for other scientists to follow the methods precisely.

Other recommendations to increase reproducibility include better courses on research techniques for Ph.D. students and postdoctoral scholars; pre-registration and review of methods by each journal’s experts; and publication of the entire data set, not just the part relevant to the reported investigation.

But how can journalists deal with studies that have reproducibility issues?

According to Oransky, science journalists should keep in mind that results from a single study might not hold up over time or under closer statistical scrutiny. The speakers also recommended PubPeer, a site where researchers publicly discuss possible errors in the methods of published papers.

Although the reproducibility crisis is a major problem in research, it has helped reproducibility studies gain more respect within the scientific community, said Monya Baker, a San Francisco–based editor and writer for Nature.

The attention also calls out weak statistics in published studies, Baker said. For these reasons, she noted, the reproducibility crisis is “making science better.”

Félix Reskala is a Ph.D. student in psychology at the National Autonomous University of Mexico in Mexico City. He is studying academic dishonesty as part of his dissertation and has a science communication channel on YouTube. Follow him on Twitter: @habiaspensado