May 05 2015

Open Science

There is a movement to open up access to scientific information, and with the relatively new resources provided by the internet and social media, we may be heading rapidly in that direction. However, I don’t think this will be an easy transition, and we should consider the possible unintended consequences.

A 2012 commentary by Nosek and Bar-Anan outlined six changes that would open up science:

We call for six changes: (1) full embrace of digital communication, (2) open access to all published research, (3) disentangling publication from evaluation, (4) breaking the “one article, one journal” model with a grading system for evaluation and diversified dissemination outlets, (5) publishing peer review, and, (6) allowing open, continuous peer review. We address conceptual and practical barriers to change, and provide examples showing how the suggested practices are being used already.

The Center for Open Science outlines a similar mission:

1- Increase prevalence of scientific values – openness, reproducibility – in scientific practice
2- Develop and maintain infrastructure for documentation, archiving, sharing, and registering research materials
3- Join infrastructures to support the entire scientific workflow in a common framework
4- Foster an interdisciplinary community of open source developers, scientists, and organizations
5- Adjust incentives to make “getting it right” more competitive with “getting it published”
6- Make all academic research discoverable and accessible

I agree that it is time to reconsider the entire infrastructure of how scientific research is documented and reported. While scientists and scientific institutions have been using internet and social media resources, their adoption has been ad hoc, without a coherent, widely accepted plan. That may not be a bad thing: letting systems develop organically allows for experimentation, so we can see what shakes out. The experiment may have gone on long enough, however, that it is time to step back, see what we have learned, and think about how to craft an optimal infrastructure.

I definitely think we should not be stuck in a system built around printed paper journals. If I had my own wish list of where I think we should be, it would include:

1- Open access to every full scientific article in a searchable database

2- Greater transparency regarding the raw data and methods used to create scientific papers, perhaps with methods registered prior to collecting data

3- Registration of all human trials

4- Shifting incentives so that the best quality papers are identified and published, including negative studies and exact replications, without biases that distort the true nature of the scientific data

5- Shifting incentives so that fewer but larger and higher quality studies are published

In our “scientific utopia,” scientific research would be completely transparent, high quality, free from distorting biases, and easy to search and access. The recommendations for how to get there, however, may have some unintended consequences. For me the big one is the watering down of peer review as a barrier controlling access to scientific data.

This is already happening, in fact. ArXiv.org is a preprint server where physicists can post their articles without peer review, so scientists can share their data with each other quickly and efficiently. Sometimes, however, these articles are picked up by the lay media and presented as “published” articles, without necessarily pointing out that they have not been peer-reviewed.

The two main negative consequences to consider are overwhelming scientists with information and overwhelming the public with information. A recent study shows this is already happening: the authors found that the number and lifetime of citations to scientific research are decreasing, suggesting that there are simply too many published studies for scientists to keep up with them all.

Lowering barriers to publication may also create an incentive for academics to publish more low-quality studies, which may be counterproductive to the advancement of science. We already have a situation in which the flood of preliminary studies is mostly wrong. More such studies may not be a good thing.

The media is also having an increasingly hard time sorting out which scientific studies are significant and which aren’t. I actually think that many reporters, especially those not specifically trained as science journalists, don’t care. Giving them a larger body of speculative studies to mine for sexy headlines would exacerbate the problem of misinforming the public about the status of scientific questions, further confusing the public’s understanding of science and perhaps reducing trust in science as the promised sexy headlines never materialize or contradict each other.

Right now, at least, there is the barrier of peer review (imperfect and incomplete as it is). Without that barrier we will have the Wild West, with all its good and bad aspects.

Some may think this is all a good thing: let chaos reign, as it will only drive creativity and collaboration, and the cream will naturally rise to the top. I think there is some truth to this, but only some. We also need a system to maintain standards, or at least a formal evaluation process to sort the wheat from the chaff in a transparent way. We need to combine the best of both worlds.

For example, perhaps we can divide current peer review into two tiers. The lower tier would be a basic filter that keeps out obvious nonsense and fraud. Authors would need to be registered, and publications would be reviewed for basic format and at least a minimal bar of quality and appropriateness. This would be a good place for small or preliminary studies, and could function similarly to arXiv.org. Studies could get out quickly, and the community could provide rolling post-publication peer review. Perhaps we could even incorporate some social media techniques, such as having other scientists vote the publication up or down and leave moderated comments discussing the study. Such a system would have all the advantages of open science, but would not be confused with formal peer review.

The second tier would be the highest quality peer review, even higher than the current norm. Raw data would be reviewed and verified, statistics double-checked, the reviews themselves would be open and reviewed, and studies would be chosen more for quality than for how provocative they are. Journals considered peer-reviewed at this level would have to demonstrate that their editorial process meets this high standard.

The media could be encouraged to report only on studies that pass this super peer review, or at least to note prominently when a study has or has not passed such review.

This is just one idea. I think we have an opportunity now to consider how to leverage the new information technology to reinvent the world of scientific research, publication, and reporting.

The bottom line is that we need to balance open, rapid, and searchable access with quality control, while maintaining transparency and avoiding distorting biases. Further, we need to help scientists, professionals, the media, and the public make sense of the flood of scientific data. This is metascience, or the science of doing science. It deserves our attention, because it has the potential to optimize the utilization of scientific resources and the pace of scientific advance.
