Sep 02 2025
Detecting Online Predatory Journals
The World Wide Web has proven to be a transformative communication technology (we are using it right now). At the same time, it has had some rather negative unforeseen consequences. Significantly lowering the threshold for establishing a communications outlet has democratized content creation and given users unprecedented access to information from around the world. But it has also lowered the threshold for unscrupulous agents, allowing a flood of misinformation, disinformation, low-quality information, spam, and all sorts of cons.
One area where this has been perhaps especially destructive is scientific publishing, a classic example of the trade-off between editorial quality and open access. It is also an area where the need for quality control is easy to see. Science is a collective endeavor in which all research builds on prior research. Scientists cite each other's work, include the work of others in systematic reviews, and use the collective research to make many important decisions – about funding, their own research, investment in technology, and regulations.
When this collective body of scientific research becomes contaminated with fraudulent or low-quality research, it gums up the whole system. It creates massive inefficiency and adversely affects decision-making. You certainly wouldn't want your doctor basing treatment recommendations on fraudulent or poor-quality research. This is why there is a system in place to evaluate research quality – from funding organizations to universities, journal editors, peer reviewers, and the scientific community at large. But this process can have its own biases and might inhibit legitimate but controversial research. A journal editor might deem research to be of low quality partly because its conclusions conflict with their own findings or scientific views.