My paper was probably reviewed by AI – and that’s a serious problem
Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong
June 24, 2025
As an environmental scientist with over 15 years of experience and more than 150
peer-reviewed publications, I am familiar with the ups and downs of academic
publishing. But there was something distinctly odd about the rejection decision
that I received from a prominent international journal last month.
After an initial major revision decision, we had carefully addressed each of the
reviewers’ concerns and submitted a thoroughly revised manuscript. The
first-round comments were reasonable, and we responded in detail to further
improve the clarity and scientific rigour of the work. Yet our paper was
ultimately rejected, primarily because of one reviewer’s unexpectedly
negative second-round report.
What troubled me was not just the tone, but the nature of the critique. The reviewer
introduced entirely new concerns that had not been previously raised. Moreover,
the comments were formulaic, vague, often irrelevant and occasionally
inaccurate, with little engagement in the actual content of our manuscript.
Remarks such as “more needed” and “needs to be validated” lacked technical
rationale or data-based feedback.
Our study is in the field of environmental chemistry, focused on the field
application of a novel environmental analysis method. However, the reviewer
criticised it for failing to provide a “comprehensive ecological assessment”
and for “not examining the effects on animal behaviours such as feeding or
mating” – as if it were a behavioural ecology paper. The reviewer also claimed
that “repeatability of chemical analysis isn’t fully explained” even though
this was addressed in multiple sections.
Moreover, the review even contradicted itself. It began by acknowledging that “the
authors replied to the questions raised”, but then concluded, without coherent
reasoning, that “I cannot recommend this work.”
At that moment, I began to suspect that the review had been written, at least in
part, by an AI tool such as ChatGPT. As an associate editor for an
environmental science journal myself, I am seeing an increasing number of
reviews that appear to be written by AI – though this is rarely disclosed
upfront. They often sound superficially articulate, but they lack depth,
context and a sense of professional accountability.
Specifically, in my experience, AI-generated reviews often suffer from five key weaknesses.
They rely on vague, overly general language. They misrepresent the paper’s
scope through abstract criticisms. They flag issues that have already been
addressed. They exhibit inconsistent or contradictory logic. And they lack the
tone, empathy, or nuance of a thoughtful human reviewer.
To confirm my suspicions, I compared the reviewer’s comments to a sample review
that I generated with a large language model. The similarity was striking. The
phrasing, once again, was templated and disengaged from the actual content of
our manuscript. And, once again, the review contained keyword-driven summaries,
baseless assertions and flawed reasoning. It felt less like a thoughtful peer
review and more like the automated response that it was.
As an editor, I also know how difficult it can be to recruit qualified reviewers.
Many experts are overburdened, and the temptation to use AI tools to speed up
the process is growing. But superficial logic is no substitute for scientific
judgement. So I raised my concerns with the editor-in-chief of the journal,
providing detailed rebuttals and supporting evidence.
The editor replied courteously but cautiously: “It is highly unlikely the reviewer
used AI,” they said. “If you can address all concerns, I recommend resubmitting
as a new manuscript.”
After three months of effort invested in revision and response, we were back at
the starting line.
The decision – and the possibility that it was influenced by inappropriate use of
AI – left me deeply disappointed. Some might dismiss it as bad luck, but
science should not depend on luck. Peer review must be grounded in fairness,
transparency and expertise.
This is not a call to ban AI from the peer review process entirely. These tools can
assist reviewers and editors by identifying inconsistencies, spotting
plagiarism or improving presentation. However, using them to produce entire
peer reviews risks undermining the very purpose of the process. Their use must
be transparent and strictly secondary.
Reviewers should not rely uncritically on AI-generated text, and editors must learn to
recognise reviews that lack substance or coherence. Publishers, too, have a
responsibility to develop mechanisms for detecting AI-generated content and to
establish clear disclosure policies. Nature’s
announcement on 16 June that it will begin publishing all peer review comments
and author responses alongside accepted papers represents one potential path
forward for publishers to restore transparency and accountability.
If peer review becomes devalued by undisclosed and substandard automation, we risk
losing the trust and rigour that scientific credibility depends on. Science and
publishing must move forward with technology, but not without responsibility.
Transparent, human-centred peer review remains essential.
Seongjin Hong is a full professor at Chungnam National University, South Korea.
Reader's comments (36)
#1 Submitted by graff.... on June 24, 2025 - 12:10am
You need evidence to make such claims. Any experienced academic has had manuscripts rejected based on much less than you describe. We can't blame undefined "AI" for everything!
#2 Submitted by ... on June 24, 2025 - 6:05pm
I agree. Suspicion is not enough really.
The article says "probably" and the journal is not named. In which
case, this piece should not have been published by THES in my view.
#3 Submitted by graff.... on June 24, 2025 - 1:29am
Further: is it "his" paper or
"our" paper? "Associate editor" or "editor"? Was
this written by AI and not fact-checked?
#4 Submitted by ... on June 25, 2025 - 4:08pm
Good point Graff, too much supposition and smear here for my liking. Either make the allegation or shut up, in my view.
#5 Submitted by ... on June 24, 2025 - 3:50am
AI has advanced significantly, but at
least for now, it still falls short compared to human reviewers. Reviewers and
editors must take greater responsibility and should not accept AI-generated
feedback uncritically.
#6 Submitted by ... on June 26, 2025 - 10:25pm
Well, no-one is saying they should – the editor at the journal said as much. The chap who had his paper rejected as substandard is claiming that it "probably" involved someone using an AI tool and crying foul, but absolutely no-one is saying that AI tools should be used in the process. It's just some guy's gripe.
#7 Submitted by ... on June 24, 2025 - 3:58am
Thank you for this insightful piece. It’s
a timely reminder of the importance of recognizing both the role and limits of
AI in scientific publishing.
#8 Submitted by ... on June 26, 2025 - 10:26pm
No it's not.
#9 Submitted by ... on June 24, 2025 - 4:16am
I believe that more voices need to speak
out about both the light and the shadow sides of this emerging trend. While AI
has undoubtedly brought us many advantages, we must not overlook the potential
harms and unintended consequences it can also bring. Some may question this
article by asking, “Is there concrete proof that the review was generated by
AI?” Of course, evidence based on facts is important, but I also believe that
insights gained through years of experience are equally valuable and should not
be dismissed. There is a reason we call such individuals veterans in their
field. Thank you for this thoughtful piece. It reminded me of the importance of
using AI tools with greater caution, transparency, and responsibility.
#10 Submitted by ... on June 24, 2025 - 9:36pm
Comment withdrawn
#11 Submitted by i.... on June 24, 2025 - 8:06am
I disagree with this piece only to the extent that I think it is absolutely the case that AI should be nowhere near the peer review process and should absolutely and 100% be banned. Of course the difficulty is: how would one enforce that? Publishing reviewers (and reviewer names) might help, but it's not a complete solution. I often wonder why someone would bother to use AI to review a paper. If you don't want to do it yourself, just say no. The idea (hinted at, if not quite stated here) that it was the editor, inventing a reviewer, rather than a human reviewer, using AI had not even occurred to me. All I can say is that a journal whose editors use AI to review will not stay a top journal for long. I hope the author appealed above the head of the handling editor they are working with.
#12 Submitted by ... on June 24, 2025 - 9:41pm
"Publishing reviewers (and reviewer
names) might help, but it's not a complete solution. " Peer review is
anonymous and for very good reasons.
#13 Submitted by ... on June 24, 2025 - 8:29am
I think the peer review process really has to be anonymous if it is to function. Who would do it if it were not? As someone else recalled, Kissinger's bon mot was that academic disputes are so very bitter because the stakes are so small.
#14 Submitted by stephen... on June 24, 2025 - 8:42am
Sounds like it will soon make an
excellent peer reviewer!
#15 Submitted by ... on June 24, 2025 - 3:17pm
Well yes like that 'Murderbot' character
on AppleTV. He is very good at those sort of things.
#16 Submitted by ... on June 24, 2025 - 5:51pm
I am watching that as well. It seems a
rather uncanny analogy for my own Department.
#17 Submitted by ... on June 24, 2025 - 3:25pm
"Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence." This is the main point. People in general should not rely uncritically on AI-generated text. The key is looking critically at tasks needing a human eye. An AI review will write what is most likely to be said about an article, which is not helpful, seeing as what is most likely to be said resembles the review the author received: "vague, formulaic, often irrelevant and occasionally inaccurate". Critical thinking is time consuming and costly, but delivers a worthwhile result.
#18 Submitted by i.... on June 24, 2025 - 4:31pm
Is this not the trend though? From
students cheating on essays, to editors producing reviews with AI, to people
writing bits of grant applications they consider unimportant or boilerplate,
individuals are trying to use AI to produce outputs without time consuming and
costly critical thinking, when in each case the critical thinking is the point
and the output is not.
#19 Submitted by ... on June 24, 2025 - 6:00pm
Yes indeed
#20 Submitted by ... on June 24, 2025 - 9:04pm
As Captain Mainwaring used to say, "I think we are getting within the realms of fantasy now"
#21 Submitted by ... on June 24, 2025 - 6:02pm
Good point
#22 Submitted by ... on June 26, 2025 - 10:28pm
No, it's not the main point. No-one is arguing the contrary! It's just that the author suspects that someone might have used an AI tool, on pretty flimsy reasoning.
#23 Submitted by ... on June 24, 2025 - 9:44pm
"Our paper was rejected on the basis
of reviewer comments that were vague, formulaic, often irrelevant and
occasionally inaccurate, says Seongjin Hong" Hmmmmm. Case proven! I think
not M'Lud!!
#24 Submitted by rpoole@... on June 25, 2025 - 2:57pm
The journal claims 'peer review'. An AI
tool is not a peer. End of story. The journal is committing academic fraud -
please name the cheat so we can all avoid it in future.
#25 Submitted by ... on June 25, 2025 - 4:05pm
You should not make a serious allegation such as this without evidence. There is no evidence here, just a suspicion, and the journal has rejected the allegation that the person used AI in this case. Please do be careful. If the author wishes to make the charge of academic fraud publicly then he should do so. No-one else is in a position to make this allegation but him or his co-authors. At the moment he is having his cake and eating it.
#26 Submitted by ... on June 25, 2025 - 8:36pm
Let the author of the article make this
allegation if he feels justified, but there is no real evidence only a
suspicion based on a few verbal phrases and expressions. You are making a serious
allegation based on someone else's comments (hearsay) which are at best
tendentious and which you are certainly not in a position to substantiate
unless you were one of the co-authors.
#27 Submitted by ... on June 25, 2025 - 8:43pm
Well yes exactly, note the weasel words
in the article title, "My paper was probably reviewed".
"Probably" i.e "as far as one knows or can tell". I am
surprised that THES would allow this tbh. "Probably" doth butter no
parsnips but might evade a legal action from the journal in question.
#28 Submitted by ... on June 26, 2025 - 1:39pm
Your experience rings true. I have had a similar one, only this time the editors identified the juxtaposed jumble of review-like statements as not to be taken too seriously, and the paper was accepted. But it did really scare me. More work for editors, I guess.
#29 Submitted by ... on June 26, 2025 - 10:29pm
Maybe the reviewer was drunk when he
wrote it? Someone on the sauce working late? It would explain the alleged
infelicities of expression? So can we all agree that we should not drink when
writing peer reviews or comments in this section of the THES either?
#30 Submitted by ... on June 26, 2025 - 10:35pm
Exactly, the "evidence", such
as it is, is capable of being explained by more than one interpretation in this
case.
#33 Submitted by ck52427... on June 27, 2025 - 9:42am
Authors are cautioned, and live in trepidation, because of AI. But it appears, at face value – and to be confirmed, for the record – that reviewers may be using it as well. If AI can aid scholarly review, why can't it aid the production of new knowledge, as it already does in spectacular fashion? The time for a completely new knowledge enterprise is here. Careers and reputations are tainted because AI cannot fit into or comply with an old system. The two are not compatible. AI presents an entirely new way forward for academic scholarship. You can't tweak it or hide it or camouflage it. Authors, as key originators and contributors, should simply state its presence and use upfront. In this instance, though, the author does not seem to have such qualms. So either way, your hands could be burned, due to a thought police that considers AI to be criminal invasion. These battles are unnecessary and immature. Can AI generate new (credible/verifiable) knowledge? If the answer is yes, then no-one has any right to prohibit its use. And authors should indicate authorship as Prof. XYZ, in assoc. with Gemini CLI (just released, one of the most powerful AI generative tools in human history). That's the new way.
#34 Submitted by ... on July 5, 2025 - 10:10am
Yes, I agree, this is the way forward, or probably will be shortly. Professional people the world over are using these tools now; it only seems to be in academia that we want to bring in some sort of purdah against them. We cannot quarantine ourselves from the modern world, and if we try to resist we will go the way of the Luddites and similar protestors, in my view. It may be a rather painful process though, which should not be underestimated. But a great comment.
#35 Submitted by ... on July 4, 2025 - 2:20pm
A few comments have raised the issue of the use of AI more generally and what constitutes inappropriate usage, as is alleged here. Is genAI being used, for example, by journalists? Do THES journalists use it when researching their stories and publishing them here? It would be interesting to know. We might have, for example, genAI writing an essay critical of the use of genAI in academic work, which would be ironic. But I do think that academics are being singled out here when it may be that other professions are equally to blame. Of course, they may not be. The evidence presented, as many have commented, is rather flimsy and is used to justify only a "probable" suspicion, which is hardly earth-shattering.
#36 Submitted by ... on July 5, 2025 - 9:52am
Yes, one thing I could add to the excellent debate here is a reminder that we are not paid or contracted to undertake peer review (in the normal run of things); it is one of the things we do for the "good of the profession". Now, in the AI world and the world of academic publishing, there is a lot of money around. I am not excusing someone who, it is alleged, may have taken some shortcuts, which is clearly reprehensible, but we might pause and reflect on what exactly we are asking of colleagues, increasingly (while sacking many in the UK). I know for a fact, as mentioned above, that other professionals have no problem in using genAI in their paid and contracted duties, and no-one has a go at them. So before we rush to all this hysterical lynch-mob naming and shaming rhetoric, let's get things into perspective please.