How big is science’s fake-paper problem?



Software will help publishers to detect fake articles produced by paper mills. Credit: Getty

The scientific literature is polluted with fake manuscripts churned out by paper mills — businesses that sell bogus work and authorships to researchers who need journal publications for their CVs. But just how large is this paper-mill problem?

An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. Around 70,000 of these were published last year alone (see ‘The paper-mill problem’). The analysis estimates that 1.5–2% of all scientific papers published in 2022 closely resemble paper-mill works. Among biology and medicine papers, the rate rises to 3%.

The paper-mill problem: Chart showing percentage of articles with close similarity to paper-mill products from 2000 to 2022.

Source: Adam Day, unpublished estimates

Without individual investigations, it is impossible to know whether all of these papers are in fact products of paper mills. But the proportion — a few per cent — is a reasonable conservative estimate, says Adam Day, director of scholarly data-services company Clear Skies in London, who conducted the analysis using machine-learning software he developed called the Papermill Alarm. In September, a cross-publisher initiative called the STM Integrity Hub, which aims to help publishers combat fraudulent science, licensed a version of Day’s software for its set of tools to detect potentially fabricated manuscripts.

Paper-mill studies are produced in large batches at speed, and they often follow specific templates, with the occasional word or image swapped. Day set his software to analyse the titles and abstracts of more than 48 million papers published since 2000, as listed in OpenAlex, a giant open index of research papers that launched last year, and to flag manuscripts with text that very closely matched known paper-mill works. These include both retracted articles and suspected paper-mill products spotted by research-integrity sleuths such as Elisabeth Bik, in California, and David Bimler (also known by the pseudonym Smut Clyde), in New Zealand.
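Day has not disclosed how the Papermill Alarm works internally, but the general idea of flagging text that closely matches known paper-mill output can be illustrated with a toy similarity screen. The Python sketch below is a hypothetical stand-in only: the corpus, the TF-IDF approach and the 0.5 threshold are assumptions made for illustration, not details of Day's software.

```python
# Hypothetical sketch: flag papers whose title and abstract closely match
# known paper-mill text. This is NOT the Papermill Alarm, just a minimal
# illustration using TF-IDF cosine similarity (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assumed toy corpus of known paper-mill titles/abstracts (placeholder text).
KNOWN_MILL_TEXTS = [
    "mir-123 promotes proliferation and invasion of cancer cells by targeting gene-x",
    "long noncoding rna abc1 regulates migration via the wnt signalling pathway",
]

THRESHOLD = 0.5  # assumed cut-off; a real system would calibrate this
                 # against test sets of known-genuine and known-fake papers

def similarity_scores(papers):
    """Best cosine similarity of each paper against the known-mill corpus."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
    matrix = vectorizer.fit_transform(KNOWN_MILL_TEXTS + papers)
    mill_vecs = matrix[: len(KNOWN_MILL_TEXTS)]
    paper_vecs = matrix[len(KNOWN_MILL_TEXTS):]
    return cosine_similarity(paper_vecs, mill_vecs).max(axis=1)

new_papers = [
    "mir-456 promotes proliferation and invasion of cancer cells by targeting gene-y",
    "a survey of soil microbiomes in alpine meadows",
]
for paper, score in zip(new_papers, similarity_scores(new_papers)):
    status = "FLAG" if score >= THRESHOLD else "ok"
    print(f"{status:4} {score:.2f}  {paper}")
```

The first paper follows the template of a known mill text with only the gene names swapped, so it scores high; the unrelated paper scores near zero. In practice, any such threshold would have to be calibrated against labelled test sets, which is exactly the validation step Day describes below.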

Bimler says that Day’s “stylistic-similarity approach is the best we have at the moment” for estimating the prevalence of paper-mill studies, but he and others caution that it might inadvertently catch genuine papers that paper mills have copied, or cases in which authors have fitted real data into a template-style article. Day, however, says that he tried to keep false positives “close to zero” by validating the findings against test sets of papers that were known to be genuine or fake. “There had to be a big signal for a paper to be flagged,” he says.

Day also examined a smaller subset of 2.85 million works published in 2022 for which a subject area was recorded in the OpenAlex database. Around 2.2% of these resembled paper-mill studies, but the rate varied depending on the subject (see ‘Subject breakdown’).

Subject breakdown: Charts showing scientific disciplines with the highest proportion of paper-mill articles.

Source: Adam Day, unpublished estimates

According to Bik, Day’s estimate, “although staggeringly high, is not impossible”. But she says that it’s not possible to evaluate Day’s work without seeing full details of his methods and examples — a concern echoed by cancer researcher and integrity sleuth Jennifer Byrne, at the University of Sydney in Australia. “Sadly, I find these estimates to be plausible,” Byrne adds.

Day, who regularly blogs about his work, says he aims to release more information at a later date, but adds that his desire to prevent competitors reverse-engineering his software, or fraudsters working around it, limits what he shares publicly. Sensitive information is shared privately with fraud investigators, he says.

Overall, he sees his estimate as a lower bound, because it will miss paper mills that avoid known templates. The analysis indicates that paper mills aren’t spread evenly across journals, but instead cluster at particular titles. Day says that he won’t reveal publicly which publishers seem to be most badly affected, because he thinks it could be harmful to do so.

A June 2022 report by the Committee on Publication Ethics, based in Eastleigh, UK, said that for most journals, 2% of submitted papers are likely to have come from paper mills, and the figure could be higher than 40% for some. The report was based on private data submitted by six publishers, and it didn’t say how the estimates were made or what proportion of paper-mill manuscripts went on to be published.

Spotting paper mills

In the past few years, publishers have stepped up their efforts to combat paper mills, says Joris Van Rossum, director of research integrity at STM, who led development of the STM Integrity Hub. The hub focuses on tools (including Day’s software) that help publishers to detect fraudulent manuscripts at submission, and publishers now have multiple ways to screen for them. Bik, Byrne and others have pointed out many red flags, and the STM Integrity Hub says that it now tracks more than 70 signals.

Text that follows a common template is only one sign. Others include suspicious email addresses that don’t correspond to any of a paper’s authors; email addresses from hospitals in China (because the issue is known to be so prevalent there); identical charts that claim to represent different experiments; tell-tale turns of phrase that indicate efforts to avoid plagiarism detection; citations of other paper-mill studies; and duplicate submissions across journals. Day and those involved in the STM Integrity Hub will not reveal all of the signals that they use, to avoid alerting fraudsters.
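Neither Day nor the hub will say exactly how these signals are combined, but a rule-based screen of this general kind is straightforward to sketch. The Python below is a hypothetical illustration that checks a few of the publicly described red flags; the Submission fields, the signal list and the email heuristic are invented for the example and are not the hub's actual rules.

```python
# Hypothetical sketch: check a submission against a few of the publicly
# described paper-mill red flags. Illustrative assumptions throughout;
# not the STM Integrity Hub's actual signals or logic.
from dataclasses import dataclass

@dataclass
class Submission:
    author_names: list
    contact_email: str
    cites_known_mill_papers: bool = False
    duplicate_submission: bool = False

def email_matches_no_author(sub: Submission) -> bool:
    """True if the contact address contains no author surname, one of
    the red flags described in the article (crude heuristic for demo)."""
    local = sub.contact_email.split("@")[0].lower()
    return not any(
        name.split()[-1].lower() in local for name in sub.author_names
    )

SIGNALS = [
    ("email matches no author", email_matches_no_author),
    ("cites known paper-mill studies", lambda s: s.cites_known_mill_papers),
    ("duplicate submission across journals", lambda s: s.duplicate_submission),
]

def screen(sub: Submission) -> list:
    """Return the labels of all signals that fire for this submission."""
    return [label for label, check in SIGNALS if check(sub)]

sub = Submission(
    author_names=["Wei Zhang", "Li Chen"],
    contact_email="fastpapers123@example.com",
    cites_known_mill_papers=True,
)
for hit in screen(sub):
    print("signal:", hit)
```

A real pipeline would weight such signals rather than simply listing them, and would draw on dozens more that the hub keeps private.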

In May, Bernhard Sabel, a neuropsychologist at Otto-von-Guericke University in Magdeburg, Germany, posted a preprint suggesting that any paper with an author who was affiliated with a hospital but gave a non-academic email address should be flagged as a possible paper-mill publication. Sabel estimated that 20–30% of papers in medicine and neuroscience in 2020 were possible paper-mill products, but cut that figure to 11% in a revised preprint in October. He also acknowledged that his method would flag false positives, a shortcoming that many researchers had criticized.
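Sabel's rule is simple enough to state as a single predicate. The sketch below is a hypothetical rendering of it; the list of academic domains is an illustrative assumption, not taken from the preprint, and the example shows why critics expected false positives.

```python
# Hypothetical rendering of Sabel's preprint rule: flag a paper if an
# author is hospital-affiliated AND uses a non-academic email address.
# The domain list is an illustrative assumption, not from the preprint.
ACADEMIC_DOMAINS = (".edu", ".ac.uk", ".edu.cn", ".ac.cn")

def sabel_flag(affiliation: str, email: str) -> bool:
    hospital = "hospital" in affiliation.lower()
    non_academic = not email.lower().endswith(ACADEMIC_DOMAINS)
    return hospital and non_academic

# A legitimate clinician using a personal address is also flagged:
# the false-positive problem that critics raised.
print(sabel_flag("First Affiliated Hospital of X University", "dr.wang@gmail.com"))  # True
```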

Whatever the scale of the problem, it seems clear that it has overwhelmed publishers’ systems. The world’s largest database of retractions, compiled by the website Retraction Watch, records fewer than 3,000 retractions related to paper-mill activity, out of a total of 44,000. That is an undercount, says the site’s co-founder Ivan Oransky, because database maintainers are still entering thousands of retractions, and some publishers avoid the term ‘paper mill’ in retraction notices.

Those retraction numbers are “only a small fraction of the lowest estimates we have for the scale of the problem at this stage”, says Day. “Paper-mill producers must feel pretty safe.”

