Oct 07 2013

A Problem with Open Access Journals

In a way the internet is a grand bargain, although one that simply emerged without a conscious decision on the part of anyone. It greatly increases communication, lowers the bar for content creation and distribution, and allows open access to vast and deep databases of information. On the other hand, the traditional barriers of quality control are reduced or even eliminated, leading to a “wild west” of information. As a result it is already a cliche to characterize our current times as the “age of misinformation.”

As a skeptic active on social media, I feel the cut of both edges quite deeply. With podcasts, blogs, YouTube videos, and other content, I can create a network of content creation and distribution that can compete with any big media outlet. I can use these outlets to correct misinformation, analyze claims, engage in debates, and debunk fraud and myths.

On the other hand, the fraud, myths, and misinformation are multiplying at frightening rates on the very same platforms. It is difficult to gauge the net effect – perhaps that’s a topic for another post.

For this post I will discuss one of the most disturbing trends emerging from the internet phenomenon – the proliferation of poor quality science journals, specifically open access journals. The extent of this problem was recently highlighted by a “sting” operation published by Science magazine.

According to the Directory of Open Access Journals (DOAJ):

We define open access journals as journals that use a funding model that does not charge readers or their institutions for access. From the BOAI definition of “open access”, we support the rights of users to “read, download, copy, distribute, print, search, or link to the full texts of these articles” as mandatory for a journal to be included in the directory.

This is great, and open access has many supporters, including me. But every new “funding model” has the potential to create perverse incentives. Under the traditional model of print publishing, money was made through advertising and subscription fees. Subscriptions are driven by quality and impact factor, creating an incentive for rigorous peer review and overall quality.

Open access journals frequently make their money by charging the author a publication fee. This creates an incentive to publish a lot of papers, of any quality. In fact, if you could create a shell of a journal, with little staff, and publish many papers online at little cost, that could generate a nice revenue stream. Why not create hundreds of such journals, covering every niche scientific and academic area?

This, of course, is what has happened. We are still in the middle of the explosion of open access journals. At their worst they have been dubbed “predatory” journals for charging hidden fees, exploiting naive academics, and essentially being scams.

John Bohannon decided to run a sting operation to test the peer-review quality of open access journals. I encourage you to read his entire report, but here’s the summary.

He identified 304 open access journals that publish in English. He created a fake scientific paper with blatant fatal flaws that rendered the research uninterpretable and the paper unpublishable. He actually created 304 versions of this paper by simply inserting different variables into the same text, but keeping the science and the data the same. He then submitted a version of the paper to all 304 journals under different fake names from different fake universities (using African names to make it seem plausible that they were obscure).
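Bohannon’s exact tooling is not described here, but the variant-generation step is easy to picture. Here is a minimal, hypothetical Python sketch of the idea – the template text and the placeholder lists below are invented for illustration (in the actual sting the swapped variables were a molecule, a lichen species, and a cancer cell line):

    from itertools import product

    # Hypothetical placeholder values, not taken from the actual sting.
    molecules = ["compound A", "compound B"]
    organisms = ["lichen X", "lichen Y"]
    cell_lines = ["cell line 1", "cell line 2"]

    TEMPLATE = (
        "We report that {molecule}, isolated from {organism}, "
        "inhibits the growth of {cell_line} in vitro."
    )

    def generate_variants(template):
        """Yield one paper body per combination of variables.

        The science and the data stay identical across variants; only
        the nouns change, so every version shares the same fatal flaws.
        """
        for molecule, organism, cell_line in product(molecules, organisms, cell_lines):
            yield template.format(
                molecule=molecule, organism=organism, cell_line=cell_line
            )

    for paper in generate_variants(TEMPLATE):
        print(paper)

With longer lists of each variable, the same template yields hundreds of superficially distinct submissions.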

The result? – over half of the papers were accepted for publication. I think it’s fair to say that any journal that accepted such a paper for publication is fatally flawed and should be considered a bogus journal.

This, of course, is a huge problem. Such journals allow for the flooding of the peer-reviewed literature with poor quality papers that should never be published. This is happening at a time when academia itself is being infiltrated with “alternative” proponents and post-modernist concepts that are anathema to objective standards.

Combine this with the erosion of quality control in science journalism, also thanks to the internet. Much of what passes as science reporting is simply cutting and pasting press releases from journals, including poor-quality open access journals hoping for a little free advertising.

At least this creates plenty of work to keep skeptics busy.

What this means for everyone is that you should be highly wary of any published study, especially if it comes from an obscure journal. The problem highlighted by this sting is not unique to open-access journals. There are plenty of “throw-away” print journals as well. And even high impact print journals may be seduced into publishing a sexy article with dubious research. Michael Eisen reminds us about the arsenic DNA paper that Science itself published a few years ago.

You should definitely look closely at the journal in which a paper is published. But also, do not accept the findings of any single paper. Reliable scientific results only emerge following replication and the building of consensus.

Perhaps the Science paper will serve as a sort-of Flexner report for open access journals. In 1910 the Flexner report exposed highly variable quality among US medical schools, resulting in more than half of them shutting down, and much tighter quality control on those that remained open. The Flexner report is often credited with bringing US medical education into the scientific era.

In order to tame the wild west, we need clearing houses that provide careful review and their stamp of approval for quality control. The DOAJ tries to do this, stating:

For a journal to be included it should exercise quality control on submitted papers through an editor, editorial board and/or a peer-review system.

Clearly such review needs to be more robust. The integrity of the published literature is a vital resource of human civilization. As we learn to deal with the consequences of open access, intended and unintended, we need to develop new institutions of quality control and science-based standards.

 



13 Responses to “A Problem with Open Access Journals”

  1. rossbalch on 07 Oct 2013 at 8:43 am

    You have probably already come across this site? http://scholarlyoa.com/ It’s a pretty good tool to consult if you have suspicions that a journal is less than legit, not a definitive list but a good start.

  2. zplaf on 07 Oct 2013 at 9:00 am

    Too bad they didn’t include big journals in their study, just to be sure.

  3. edamame on 07 Oct 2013 at 9:26 am

    A self-serving hit piece from Science. To suggest this is a problem specifically influencing open access journals, when they didn’t do the comparison to standard closed access journals, is simply irresponsible. It would be like saying men are better than women at X, but we only measured the performance of men. The results clearly point to a problem, but we don’t know if this is a problem specific to OA journals.

  4. Billzbub on 07 Oct 2013 at 12:15 pm

    But, men ARE better than women at X.

    where X = getting themselves in trouble with women.

    On a more serious note, I wonder if the development of this Wild West age of information will force more people to develop critical thinking filters. Pretty much everyone knows that there’s a lot of bad information out there on the internet, and I’m hoping this drives more people to learn good ways to sort the wheat from the chaff.

  5. David Colquhoun on 07 Oct 2013 at 1:47 pm

    It is indeed a great pity that the spoof paper was not submitted to Nature, Science etc etc. The results might have been very interesting.

    In my own field (single ion channel biophysics) peer review still works quite well, but in the broader scheme of things it is seriously broken. How else could it be that PubMed lists an alarming number of quackery journals as “peer reviewed”?

    For me, the only solution is to put Elsevier and NPG out of business, set up arXiv-like servers (Cold Spring Harbor Labs are going to do this) and post-publication peer review (which is starting to work really well on PeerJ).

    The real culprit is the publish-or-perish pressure to publish regardless of whether you have anything to say, and the associated JIF obsession. That puts the blame squarely on senior academics and HR people, but it has opened the doors to crooks. And that is endangering good science in a way that can no longer be brushed under the carpet.

  6. Enzo on 07 Oct 2013 at 3:04 pm

    Don’t forget the added workload that these rag journals put on scientists. It’s becoming increasingly time-consuming to peer review articles, because now there is a questionable reference behind every questionable conclusion, which has to be read more critically. Grant reviewing is likely to start suffering from this problem as well because it’s now possible to find “support” for pretty much anything. And vetting scientists is getting complex too, because you have to sort through the 30 publications, 28 of which are in awful journals. And when trying to get up to speed on a topic unfamiliar to you? BOOM – saturation with studies that you are not sure how to evaluate.

    Just uugg. It’s getting frustrating. Ok, rant over.

    David,

    Couldn’t agree more. The publish or perish mentality has to be addressed. It’s gotten to the point where even good scientists have to make that uncomfortable call to publish now before contradictory evidence comes up that unravels their story. The number of “rushed” manuscripts that lack serious descriptive power is crippling the reliability of the literature.

  7. Bronze Dog on 07 Oct 2013 at 3:31 pm

    It would be much more interesting and informative if they compared against other types of journals, but it’s enough to serve as a word of caution before accepting an open access publication at face value.

    As for internet freedom, it certainly means we need to be vigilant and counter falsehood with well-sourced truths and rational analysis. It’d be nice to shore up the gates and prevent cranks from gaining false prestige from undeserved publications, but there’ll always be someone out to make money or push an ideology by producing their own journals with sufficiently low standards. Accrediting organizations could sort the honest journals from the dishonest, but naturally, they’d become a target for demonizing propaganda from woo gurus. Having skeptics critically review bad publications is a good idea, but I don’t think we can handle the entire volume of nonsense that gets out there. Thankfully, we can also take advantage of easy publishing when we want to criticize something, even if it’s just a blog post pointing out the flaws of a published study.

    I’m stuck between optimism and pessimism.

  8. edamame on 07 Oct 2013 at 4:03 pm

    Enzo, that is why it is so useful to use the good old fashioned citation index to help sort through things. Good papers tend to be cited by others, bad papers fall by the wayside. I have found it invaluable as I write grants and need to get up to speed quickly on a topic. I just search by topic, sort by number of citations, and voila! I have the major publications in the field.
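
    In code terms that filter is just a sort on citation counts. A toy Python sketch with made-up records (real counts would come from a citation index query, not from a hard-coded list like this):

        # Made-up records standing in for a citation index query result.
        papers = [
            {"title": "Well-replicated result", "citations": 412},
            {"title": "Obscure journal filler", "citations": 2},
            {"title": "Solid methods paper", "citations": 158},
        ]

        # Highest-cited first: a rough proxy for which papers the
        # field has actually engaged with.
        for paper in sorted(papers, key=lambda p: p["citations"], reverse=True):
            print(f"{paper['citations']:>5}  {paper['title']}")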

    Without such filters, it would truly be impossible, short of interacting with humans or using collated/annotated bibliographies of the best stuff in the field. Intelligence is needed to fill in where search engines will fail us.

  9. jfrost on 07 Oct 2013 at 6:48 pm

    While it’s regrettable that Bohannon didn’t provide a control group, I don’t think it entirely disqualifies his findings. The peer review process is a laborious one, requiring attentive publishing staff–production editors, editorial assistants, managers, etc. The ‘necessity-of-proper-funding’ argument isn’t one concocted by Sage, EBSCO, ProQuest, etc. to simply maintain profits. As much as I wish scientific and technical knowledge could be made freely available, there is a real issue of quality control that comes with open access, not to mention indexing and reference tasks so that the information is organized and retrievable.

  10. Bruce on 08 Oct 2013 at 3:38 am

    I think setting something up where papers are stored and “scored” post publication could be a very useful tool. Then journals could be “scored” on the papers they published.

    The logistics, legalities and politics of running such a centralised database of papers are absolutely mind-boggling. It sounds like an amazing project though, and would give a layperson much more info on whether the actual science behind something holds water.
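
    To picture the data model: each paper accumulates post-publication review scores, and a journal’s score is derived from the papers it published. A toy Python sketch, with every name and number invented:

        from dataclasses import dataclass, field
        from statistics import mean

        @dataclass
        class Paper:
            title: str
            journal: str
            scores: list = field(default_factory=list)  # post-publication review scores

        def journal_scores(papers):
            """Score each journal by pooling the scores of its papers."""
            pooled = {}
            for p in papers:
                if p.scores:
                    pooled.setdefault(p.journal, []).extend(p.scores)
            return {journal: mean(scores) for journal, scores in pooled.items()}

        # Invented example data.
        papers = [
            Paper("Flawed spoof paper", "Predatory Journal of Oncology", [1.0, 2.0]),
            Paper("Careful replication", "Reputable OA Journal", [4.5, 5.0, 4.0]),
        ]
        print(journal_scores(papers))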

  11. edamame on 08 Oct 2013 at 11:53 am

    Bruce: arXiv does this.

    It could be done without storing the actual papers, but just their bibliographic information, with ratings. Though this is already sort of done with citations as ratings. That’s why the citation index is so useful (it is online but not free, unfortunately).

  12. pseudonymoniae on 09 Oct 2013 at 2:36 am

    I would just note that Bohannon doesn’t attempt to imply that more traditional journals are any better than the cohort of open access journals that he targeted. In fact, he clearly suggests that the same operation might work quite well on a number of the former. A few open-access journals utilizing appropriate systems of peer-review also come off quite well (e.g. PLoS One).

    Also, it would have been nice if the targeted population of journals had been more comprehensive, but I don’t think this would qualify as a “control group”, as there doesn’t appear to be a specific hypothesis impugning open-access journals which he intended to test.

  13. Bruce on 09 Oct 2013 at 7:49 am

    edamame,

    Thanks, I had not read down to your comment before I posted, so I only noticed it afterwards. I don’t think something like this can be free for all parties, unfortunately, because of those reasons you and others have mentioned.
