The perennial pressure to publish or perish is as intense as ever for faculty trying to advance their careers in an exceedingly tight academic job market. On top of their teaching loads, faculty are expected to publish (and peer review) research findings, often receiving little to no compensation beyond the prestige and recognition of publishing in top journals.
Some researchers have argued that such an environment incentivizes scholars to submit questionable work to journals, many of which have well-documented peer-review backlogs and inadequate resources to detect faulty information and academic misconduct. In 2024, more than 4,600 academic papers were retracted or otherwise flagged for review, according to the Retraction Watch database; during a six-week span last fall, one scientific journal published by Springer Nature retracted more than 200 articles.
But the $19 billion academic publishing industry is increasingly turning to artificial intelligence to speed up production and, advocates say, enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI-powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.
"These AI tools can help us improve research integrity, quality, accurate citation, our ability to find new insights and connect the dots between new ideas, and ultimately push the human enterprise forward," Josh Jarrett, senior vice president of AI growth at Wiley, told Inside Higher Ed earlier this month. "AI tools can also be used to generate content and potentially increase research integrity risk. That's why we've invested so much in using these tools to stay ahead of that curve, looking for patterns and identifying things a single reviewer may not catch."
However, most scholars aren't yet using AI for such purposes. A recent survey by Wiley found that while the majority of researchers believe AI skills will be critical within two years, more than 60 percent said a lack of guidelines and training keeps them from using it in their work.
In response, Wiley released new guidelines last week on "responsible and effective" uses of AI, aimed at deploying the technology to make the publishing process more efficient "while preserving the author's authentic voice and expertise, maintaining reliable, trusted, and accurate content, safeguarding intellectual property and privacy, and meeting ethics and integrity best practices," according to a news release.
Last week, Elsevier also launched ScienceDirect AI, which extracts key findings from millions of peer-reviewed articles and books on ScienceDirect and generates "precise summaries" to alleviate researchers' challenges of "information overload, a shortage of time and the need for more effective ways to enhance existing knowledge," according to a news release.
Both of those announcements followed Springer Nature's January launch of an in-house AI-powered program designed to help editors and peer reviewers by automating editorial quality checks and alerting editors to potentially unsuitable manuscripts.
"As the volume of research increases, we are excited to see how we can best use AI to support our authors, editors and peer reviewers, simplifying their ways of working whilst upholding quality," Harsh Jegadeesan, Springer Nature's chief publishing officer, said in a news release. "By carefully introducing new ways of checking papers to enhance research integrity and support editorial decision-making, we can help speed up everyday tasks for researchers, freeing them up to concentrate on what matters to them: conducting research."
"Obvious Financial Benefit"
Academic publishing experts believe there are both advantages and downsides to involving AI in the notoriously slow peer-review process, which is plagued by a deficit of qualified reviewers willing and able to offer their unpaid labor to highly profitable publishers.
If use of AI assistants becomes the norm for peer reviewers, "the volume problem would be immediately gone from the industry" while creating an "obvious financial benefit" for the publishing industry, said Sven Fund, managing director of the peer-review-expert network Reviewer Credits.
But the implications AI has for research quality are more nuanced, especially as scientific research has become a target for conservative politicians, and AI models could be, and may already be, used to target terms or research that lawmakers don't like.
"There are parts of peer review where a machine is definitely better than a human brain," Fund said, pointing to low-intensity tasks such as translations, checking references and offering authors more thorough feedback. "My concern would be that researchers writing and researching on whatever they want is getting limited by people reviewing material with the help of technical agents ... That can become an element of censorship."
Aashi Chaturvedi, program officer for ethics and integrity at the American Society for Microbiology, said one of her biggest concerns about introducing AI into peer review and other aspects of the publishing process is whether human oversight will be maintained.
"Just as a machine might produce a perfectly uniform pie that lacks the soul of a handmade creation, AI reviews can appear wholesome but fail to capture the depth and novelty of the research," she wrote in a recent article for ASM, which has developed its own generative AI guidelines for the numerous scientific journals it publishes. "In the end, while automation can enhance efficiency, it cannot replicate the artistry and intuition that come from years of dedicated practice."
But that doesn't mean AI has no place in peer review, said Chaturvedi, who noted in a recent interview that she "felt extra pressure to make sure that everything the author was reporting sounds doable" during her 17 years working as an academic peer reviewer in the pre-AI era. As the pace and complexity of scientific discovery keep accelerating, she said, AI can help alleviate some of the burden on both reviewers and the publishers "handling a large volume of submissions."
Chaturvedi cautioned, however, that introducing such technology across the academic publishing process should be transparent and come only after "rigorous" testing.
"The large language models are only as good as the information you give them," she said. "We are at a pivotal moment where AI can greatly enhance workflows, but you need careful and strategic planning ... That's the only way to get more successful and sustainable outcomes."
Not Equipped to Ensure Quality?
Ivan Oransky, a medical researcher and co-founder of Retraction Watch, said, "Anything that can be done to filter out the junk that's currently polluting the scientific literature is a good thing," adding that "whether AI can do that effectively is a reasonable question."
But beyond that, the publishing industry's embrace of AI in the name of improving research quality and clearing up peer-review backlogs belies a bigger problem predating the rise of powerful generative AI models.
"The fact that publishers are now trumpeting the fact that they both are and need to be, according to them, using AI to fight paper mills and other bad actors is a bit of an admission they hadn't been willing to make until recently: Their systems are not actually equipped to ensure quality," Oransky said.
"This is just more evidence that people are trying to shove far too much through the peer-review system," he added. "That wouldn't be a problem except for the fact that everybody's either directly or implicitly encouraging terrible publish-or-perish incentives."