We have all come across scientific studies that, when reported in the media, elicit strong reactions. Some may inadvertently encourage questionable practices like increasing personal wine consumption or completely avoiding genetically modified organisms.
Consumer behaviour is closely tied to scientific reporting. If a scientific paper concludes, for example, that a product is detrimental to health, demand for that product is likely to drop almost immediately. This suggests that consumers place a high degree of trust in researchers. Sadly, there are many instances in which that trust may not be warranted.
In 1962, Jacob Cohen, a psychologist at New York University, analyzed 70 research papers for their statistical power, the probability that an experiment will detect a real effect when one exists. He found that the average power was only about 20 per cent, even though all of the papers had reported significant results. He consequently urged researchers to improve their methods in order to uphold the integrity of research.
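To make that number concrete, here is a minimal sketch in Python, using the statsmodels library to compute the power of a standard two-sample t-test. The sample size and effect size below are illustrative assumptions, not figures from Cohen's survey:

    from statsmodels.stats.power import TTestIndPower

    # Power of a two-sample t-test: the probability of detecting a real effect.
    # Illustrative assumptions: 20 subjects per group, a small effect
    # (Cohen's d = 0.3) and the conventional 5 per cent significance level.
    analysis = TTestIndPower()
    power = analysis.power(effect_size=0.3, nobs1=20, alpha=0.05, ratio=1.0)
    print(f"power = {power:.2f}")  # roughly 0.15

A study like this misses a genuine effect roughly 85 per cent of the time, yet any significant result it does stumble upon remains publishable, which is precisely the dynamic Cohen worried about.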
This September, Paul Smaldino of the University of California, Merced, and Richard McElreath of the Max Planck Institute for Evolutionary Anthropology published a similar study in Royal Society Open Science, and their results were grim. Based on a sample of 44 papers published between 1960 and 2011, statistical power had risen by a mere 4 per cent since Cohen’s paper, suggesting that researchers have not paid adequate attention to his warning.
To determine why, Smaldino and McElreath built a computational model in which an arbitrary number of laboratories compete for the prestige and funding that come with publication. Each laboratory has a ‘power’ to identify relationships between variables; increasing this power also increases the rate of false positives unless the lab puts in more ‘effort’ to be accurate, and that extra effort takes time, reducing the lab’s productivity. The result of the simulation was that bad science was ‘naturally selected’ in the Darwinian sense, meaning the trend need not involve any conscious strategizing or cheating on the part of researchers.
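The published model is more elaborate, but a toy version conveys the selection dynamic. The Python sketch below is an illustrative reconstruction under assumed parameter values and functional forms, not the authors’ own equations: each lab picks an effort level, lower effort buys more experiments per cycle at a higher false-positive rate, only positive results are published, and the most-published labs are copied into the next generation.

    import random

    BASE_RATE = 0.1   # assumed fraction of tested hypotheses that are true
    POWER = 0.8       # assumed chance a true effect is detected
    N_LABS = 100
    GENERATIONS = 200

    def false_positive_rate(effort):
        # Assumed trade-off: more effort means fewer false positives.
        return 0.05 + 0.45 * (1.0 - effort)

    def publications(effort):
        # Assumed trade-off: more effort means fewer experiments per cycle.
        n_experiments = max(1, int(10 * (1.0 - 0.8 * effort)))
        positives = 0
        for _ in range(n_experiments):
            if random.random() < BASE_RATE:    # the hypothesis is true
                positives += random.random() < POWER
            else:                              # the hypothesis is false
                positives += random.random() < false_positive_rate(effort)
        return positives  # only positive results get published

    # Each lab is represented by its effort level in [0, 1].
    labs = [random.random() for _ in range(N_LABS)]

    for _ in range(GENERATIONS):
        scored = sorted(((publications(e), e) for e in labs), reverse=True)
        # Selection: the most-published half of the labs survives and
        # 'reproduces', passing on its effort level with a small mutation.
        survivors = [e for _, e in scored[: N_LABS // 2]]
        offspring = [min(1.0, max(0.0, e + random.gauss(0.0, 0.02)))
                     for e in survivors]
        labs = survivors + offspring

    print(f"mean effort after selection: {sum(labs) / len(labs):.2f}")

Run this and the mean effort collapses toward zero within a few dozen generations: no lab is cheating, yet the population drifts toward sloppy, prolific science because that is what the pay-off structure rewards.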
They quote a maxim attributed to the economist Charles Goodhart: “When a measure becomes a target, it ceases to be a good measure.” In their model, labs that churned out novel results more quickly got published more often and collected the pay-offs, but they also recorded more false positives. Labs that did more thorough research could not publish as much, so their results, although more accurate, brought in less funding.
Smaldino and McElreath then modified the model to include replication: an attempt to repeat a published experiment under the same conditions to check its accuracy. A lab’s reputation was boosted when its result was successfully replicated and penalized when replication failed. Though this seems like a sensible control, poor practices still prevailed, just at a lower rate, and this remained true even when the penalty was raised to one hundred times the original pay-off.
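Replication can be bolted onto the same toy model. The sketch below, again an illustrative reconstruction with REP_RATE and REP_PENALTY as made-up parameters, replaces the raw publication count with a pay-off in which each published positive has a small chance of being replicated; a replication attempt succeeds with the same probability that produced the original finding, so false positives fail more often and cost the lab dearly:

    import random

    BASE_RATE, POWER = 0.1, 0.8   # as in the sketch above
    REP_RATE = 0.1                # assumed chance a published result is replicated
    REP_PENALTY = 100             # failed replication costs 100 times one pay-off

    def false_positive_rate(effort):
        return 0.05 + 0.45 * (1.0 - effort)

    def payoff(effort):
        total = 0
        n_experiments = max(1, int(10 * (1.0 - 0.8 * effort)))
        for _ in range(n_experiments):
            is_true = random.random() < BASE_RATE
            p_positive = POWER if is_true else false_positive_rate(effort)
            if random.random() < p_positive:      # positive result: published
                total += 1
                if random.random() < REP_RATE:    # someone tries to replicate
                    # The attempt finds a positive with the same probability
                    # that produced the original result.
                    if random.random() < p_positive:
                        total += 1                # reputation boost
                    else:
                        total -= REP_PENALTY      # heavy penalty for failure
        return total

Whether sloppy labs still win in this toy version depends on the exact numbers chosen; the point of the published result is that in the authors’ richer model, even a hundred-fold penalty only slowed, and did not stop, the spread of poor methods.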
The authors state, “Some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures.” This leaves modern science with the daunting task of shifting rewards away from high-output researchers and toward high-quality results.
This may seem very grim, but the tide is turning in support of good scientific practices. The host of HBO’s Last Week Tonight, John Oliver, recently brought this issue to light in a segment that has garnered over 8 million views on YouTube.
The rise of PubPeer, a message board that allows anyone to post anonymous comments on scientific studies, provides further hope that bad practices can be curbed. In the past few months, at least three high-profile scientists have had papers retracted by journals after anonymous commenters on the site pointed out discrepancies.
It is clear that there is a strong desire to increase accountability in scientific publishing. For consumers and for those working in science, this should be encouraging.