The paper, published Aug. 9 in the journal Physica Scripta, was an attempt to uncover new solutions to a complicated math equation, but included the phrase “Regenerate response” on the third page — something one eagle-eyed reader recognized as the label of a button in ChatGPT, according to a report from Nature.
The authors of the paper have since acknowledged they used ChatGPT to help write the manuscript, something that wasn’t caught during two months of peer review after the paper was submitted in May. The revelation led the U.K.-based publisher to retract the paper because the authors did not disclose their use of the AI app when they submitted it.
“This is a breach of our ethical policies,” Kim Eggleton, who is in charge of peer review and research integrity at IOP Publishing, said in a statement, according to Nature.
The apparent copy-and-paste error was discovered by computer scientist and integrity investigator Guillaume Cabanac, who since 2015 has made it a personal mission to uncover papers that are not transparent about their use of AI.
“He gets frustrated about fake papers,” said Cyril Labbé, a fellow computer scientist who works with Cabanac to uncover the papers, according to a report from Futurism.
Cabanac was also behind the recent discovery of a similar situation with a paper published in Resources Policy, which he found included “nonsensical equations,” according to Futurism.
While the peer review process for publishing papers is supposed to be rigorous, the volume of research being published leads to some things falling through the cracks. David Bimler, a researcher who also hunts for fake papers, said many reviewers do not have the time to spot sometimes subtle hints that AI was used in a paper.
“The whole science ecosystem is publish or perish,” Bimler said, according to Futurism. “The number of gatekeepers can’t keep up.”
Physica Scripta did not immediately respond to a Fox News request for comment.