Aug 06 2015

Registering Studies Reduces Positive Outcomes

The science of science itself is critically important. Improvements in our understanding of the world and in our technological ability to affect it are arguably the strongest factors determining many aspects of our quality of life. We invest billions of dollars in scientific research to improve medical practice, feed the world, reduce our impact on the environment, make better use of resources, and generally do more with less.

It seems obvious that it is in our best interest for that scientific research to be as efficient and effective as possible. Bad scientific research wastes resources, wastes time, and may produce spurious results that are then used to waste further resources.

This is why I have paid a lot of attention to studies which look at the process of science itself, from the lab to the pages of scientific journals. To summarize the identified problems: most studies that are published are small and preliminary (meaning they are not highly rigorous), and this leads to many false positives in the literature. This is exacerbated by the current pressure to publish in academia.

There is also researcher bias: researchers want positive outcomes. It is easy to exploit so-called "researcher degrees of freedom" in order to manufacture positive results even out of dead-negative data. Researchers can also engage in citation bias to distort the apparent consensus of the published literature.

Traditional journals want to maximize their impact factor, which means they are motivated to publish new and exciting results, which are the ones most likely to be false. Insufficient space is given to replications, which are critically important in science for knowing what is really real. We are also now faced with a large number of open-access journals with frightfully low standards, some with predatory practices, flooding the literature with low-grade science.

All of this biases published science in the same direction, that of false positive studies. In most cases the science eventually works itself out, but this arguably takes a lot longer than it has to, and scientists pursue many false leads that could have been avoided with better research up front.

Attention is being paid to this problem, although not enough, in my opinion. One specific intervention aimed at reducing false positive studies is pre-registration of clinical trials (at clinicaltrials.gov, for example). The idea here is that scientists have to register a scientific study on people before they start gathering data. This means they cannot simply hide the study in a file drawer if they don’t like the results. Further, they have to declare their methods ahead of time, including what outcomes they are going to measure.

Pre-registering scientific studies, therefore, has the effect of reducing researcher degrees of freedom. They cannot simply decide after they collect the data which outcomes to follow or which comparisons to make, in order to tease out a positive result. Does this practice actually work? The answer seems to be yes, according to a new study published in PLOS One: Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.

The researchers looked at 30 large National Heart, Lung, and Blood Institute (NHLBI)-funded trials published between 1970 and 2000. Of those studies, 17, or 57%, showed a significant positive result. They then compared that to 25 similar studies published between 2000 and 2012. Of those, only 2, or 8%, were positive. That is a significant drop, from 57% to 8% positive studies.
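To get a rough sense of how unlikely a drop that large would be by chance alone, here is a minimal sketch (my own illustration, not the paper's analysis, and the choice of Fisher's exact test is mine) comparing the two proportions:

```python
# A rough check (not taken from the paper) of whether the drop from 17/30
# to 2/25 positive trials could plausibly be a chance fluctuation.
from scipy.stats import fisher_exact

table = [[17, 30 - 17],   # 1970-2000 trials: positive, null
         [2, 25 - 2]]     # 2000-2012 trials: positive, null

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.1f}, p ~ {p_value:.4f}")
```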

They also found that there was no difference in the design of the studies, whether they were placebo-controlled, for example. There was also no effect from industry funding.

What was different was that, starting in 2000, these trials had to be pre-registered at clinicaltrials.gov. Pre-registration strongly correlated with a negative outcome. In addition to pre-registration, there was also the adoption of transparent reporting standards.

These results are simultaneously very encouraging and a bit frightening. This is, itself, only one study, and although it is fairly straightforward and the results are clear, it still needs to be replicated with other databases. Taken at face value, however, it means that at least half of all published clinical trials are false positives, while only about 10% are true positives, and 40% are negative (both true and false negative). Also keep in mind that these were large studies, not small preliminary trials.
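As a rough illustration of that reading (my own back-of-envelope arithmetic, not a calculation from the paper), if the roughly 8% positive rate under pre-registration approximates the true positive rate, then the gap between the two eras suggests how many of the older positive results were likely spurious:

```python
# Back-of-envelope arithmetic (assumption: the post-2000, pre-registered
# positive rate of ~8% approximates the true positive rate).
pre_registration_positive  = 17 / 30   # ~57% positive, 1970-2000
post_registration_positive = 2 / 25    # 8% positive, 2000-2012

implied_false_positive_share = pre_registration_positive - post_registration_positive
print(f"~{implied_false_positive_share:.0%} of the older trials may have been false positives")
```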

This study seems to confirm what all the other studies I reviewed above appear to be saying, that loose scientific methods are leading to a massive false positive bias in the medical literature. The encouraging part, however, is that this one simple fix seems to work remarkably well.

Conclusion

This study should be a wake-up call, but it is not getting as much play in the media as I would hope or like. I do not go so far as to say that science is broken. In the end it does work; it just takes a lot longer to get there than it should, because we waste incredible resources and time chasing false positive outcomes.

The infrastructure of doing and reporting science has significant and effective built-in quality control, but it is currently not sufficient. The research is showing glaring holes and biases in the system. In some cases we know how to fix them.

At this point there is sufficient evidence to warrant requiring that all human research be registered prior to collecting data, with methods and the outcomes to be measured declared in advance. We need high standards of scientific rigor with full transparency in reporting. These measures are already working.

We further need an overhaul of the system by which we publish scientific studies. There is too much of a bias in traditional journals toward exciting results that are unlikely to be replicated, and too little toward boring replications that are actually the workhorses of scientific progress. We also need to rein in the new open-access journals, weed out the predators, and institute better quality control.

With online publishing it is actually easier to accomplish these goals than before. Journals can no longer argue they don’t have “space” or that it is too expensive.

The scientific community, in my opinion, needs to pay more attention to these issues.
