Oct 07 2013

A Problem with Open Access Journals




13 Responses to “A Problem with Open Access Journals”

  1. rossbalch on 07 Oct 2013 at 8:43 am

    You have probably already come across this site? http://scholarlyoa.com/ It’s a pretty good tool to consult if you have suspicions that a journal is less than legit, not a definitive list but a good start.

  2. zplaf on 07 Oct 2013 at 9:00 am

    Too bad they didn’t include big journals in their study, just to be sure.

  3. edamame on 07 Oct 2013 at 9:26 am

    A self-serving hit piece from Science. To suggest this is a problem specifically influencing open access journals, when they didn’t do the comparison to standard closed access journals, is simply irresponsible. It would be like saying men are better than women at X, but we only measured the performance of men. The results clearly point to a problem, but we don’t know if this is a problem specific to OA journals.

  4. Billzbub on 07 Oct 2013 at 12:15 pm

    But, men ARE better than women at X.

    where X = getting themselves in trouble with women.

    On a more serious note, I wonder if the development of this Wild West age of information will force more people to develop critical thinking filters. Pretty much everyone knows that there’s a lot of bad information out there on the internet, and I’m hoping this drives more people to learn good ways to sort the wheat from the chaff.

  5. David Colquhoun on 07 Oct 2013 at 1:47 pm

    It is indeed a great pity that the spoof paper was not submitted to Nature, Science etc etc. The results might have been very interesting.

    In my own field (single ion channel biophysics) peer review still works quite well, but in the broader scheme of things it is seriously broken. How else could it be that PubMed lists an alarming number of quackery journals as “peer reviewed”?

    For me, the only solution is to put Elsevier and NPG out of business, set up ArXiv like servers (Cold Spring Harbor Labs are going to do this) and post-publication peer review (which is starting to work really well on PeerJ).

    The real culprit is the publish-or-perish pressure to publish regardless of whether you have anything to say, and the associated JIF obsession. That puts the blame squarely on senior academics and HR people, but it has opened the doors to crooks. And that is endangering good science in a way that can no longer be brushed under the carpet.

  6. Enzo on 07 Oct 2013 at 3:04 pm

    Don’t forget the added workload these rag journals put on scientists. It’s becoming increasingly time-consuming to peer review articles, because now there is a questionable reference behind every questionable conclusion, each of which has to be read more critically. Grant reviewing is likely to start suffering from this problem as well, because it’s now possible to find “support” for pretty much anything. And vetting scientists is getting complicated too, because you have to sort through their 30 publications, 28 of which are in awful journals. And when trying to get up to speed on a topic unfamiliar to you? BOOM: saturation with studies that you are not sure how to evaluate.

    Just ugh. It’s getting frustrating. OK, rant over.


    Couldn’t agree more. The publish or perish mentality has to be addressed. It’s gotten to the point where even good scientists have to make that uncomfortable call to publish now before contradictory evidence comes up that unravels their story. The number of “rushed” manuscripts that lack serious descriptive power is crippling the reliability of the literature.

  7. Bronze Dog on 07 Oct 2013 at 3:31 pm

    It would be much more interesting and informative if they compared against other types of journals, but it’s enough to serve as a word of caution before accepting an open access publication at face value.

    As for internet freedom, it certainly means we need to be vigilant and counter falsehood with well-sourced truths and rational analysis. It’d be nice to shore up the gates and prevent cranks from gaining false prestige from undeserved publications, but there’ll always be someone out to make money or push an ideology by producing their own journals with sufficiently low standards. Accrediting organizations could sort the honest journals from the dishonest, but naturally, they’d become a target for demonizing propaganda from woo gurus. Having skeptics critically review bad publications is a good idea, but I don’t think we can handle the entire volume of nonsense that gets out there. Thankfully, we can also take advantage of easy publishing when we want to criticize something, even if it’s just a blog post pointing out the flaws of a published study.

    I’m stuck between optimism and pessimism.

  8. edamame on 07 Oct 2013 at 4:03 pm

    Enzo, that is why it is so useful to use the good old fashioned citation index to help sort through things. Good papers tend to be cited by others, bad papers fall by the wayside. I have found it invaluable as I write grants and need to get up to speed quickly on a topic. I just search by topic, sort by number of citations, and voila! I have the major publications in the field.

    Without such filters, it would truly be impossible, short of interacting with humans or using collated/annotated bibliographies of the best work in the field. Intelligence is needed to fill in where search engines fail us.
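    [The sort-by-citations triage edamame describes amounts to a one-line filter. A minimal sketch; the paper records and citation counts below are entirely hypothetical, standing in for whatever the citation index returns for a topic search:]

    ```python
    # Hypothetical search results for a topic query: title plus citation count.
    papers = [
        {"title": "Paper A", "citations": 12},
        {"title": "Paper B", "citations": 340},
        {"title": "Paper C", "citations": 3},
    ]

    # Sort descending by citation count so the major publications surface first.
    ranked = sorted(papers, key=lambda p: p["citations"], reverse=True)

    for p in ranked:
        print(p["citations"], p["title"])
    ```

    [The same descending sort is what the index's own "sort by times cited" option performs; the sketch just makes the ranking rule explicit.]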

  9. jfrost on 07 Oct 2013 at 6:48 pm

    While it’s regrettable that Bohannon didn’t provide a control group, I don’t think it entirely disqualifies his findings. The peer review process is a laborious one, requiring attentive publishing staff: production editors, editorial assistants, managers, etc. The ‘necessity-of-proper-funding’ argument isn’t one concocted by Sage, EBSCO, ProQuest, etc. simply to maintain profits. As much as I wish scientific and technical knowledge could be made freely available, there is a real issue of quality control that comes with open access, not to mention the indexing and reference tasks needed to keep the information organized and retrievable.

  10. Bruce on 08 Oct 2013 at 3:38 am

    I think setting something up where papers are stored and “scored” post publication could be a very useful tool. Then journals could be “scored” on the papers they published.

    The logistics, legalities and politics of running such a centralised database of papers are absolutely mind-boggling, though. Still, it sounds like an amazing project, and it would give a layperson much more info on whether the actual science behind something holds water.
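    [The scoring scheme Bruce describes, i.e. journals scored on the papers they published, is a simple aggregation once per-paper scores exist. A minimal sketch with entirely invented scores; the journal names and the choice of the mean as the aggregate are illustrative assumptions, not part of any existing system:]

    ```python
    from collections import defaultdict
    from statistics import mean

    # Hypothetical post-publication ratings: (journal, paper score out of 10).
    ratings = [
        ("Journal A", 8.5),
        ("Journal A", 7.0),
        ("Journal B", 2.0),
        ("Journal B", 3.5),
    ]

    # Group paper scores by the journal that published them.
    by_journal = defaultdict(list)
    for journal, score in ratings:
        by_journal[journal].append(score)

    # Score each journal as the mean of its papers' scores.
    journal_scores = {j: mean(scores) for j, scores in by_journal.items()}
    ```

    [A real system would need to weight for sample size and guard against gaming, which is where the logistics and politics Bruce mentions come in.]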

  11. edamame on 08 Oct 2013 at 11:53 am

    Bruce: arXiv does this.

    It could be done without storing the actual papers, but just their bibliographic information, with ratings. Though this is already sort of done with citations as ratings. That’s why the citation index is so useful (it is online but not free, unfortunately).

  12. pseudonymoniae on 09 Oct 2013 at 2:36 am

    I would just note that Bohannon doesn’t attempt to imply that more traditional journals are any better than the cohort of open access journals that he targeted. In fact, he clearly suggests that the same operation might work quite well on a number of the former. A few open-access journals utilizing appropriate systems of peer-review also come off quite well (e.g. PLoS One).

    Also, it would have been nice if the targeted population of journals had been more comprehensive, but I don’t think this would qualify as a “control group”, as there doesn’t appear to be a specific hypothesis impugning open-access journals which he intended to test.

  13. Bruce on 09 Oct 2013 at 7:49 am

    Thanks, I did not see your comment before I posted mine, so I only noticed it afterwards. I don’t think something like this can be free for all parties, unfortunately, because of the reasons you and others have mentioned.
