linguistic research / translation / editing

Many other details and pieces of evidence have been added since the original post below, across three posts on this website.

Download the human-prompted AI-generated reports as Word documents here:

Download ChatGPT simulation tests with many matching text patterns here:

Read about the journal’s response here:

See new controlled AI detection tests here, including three human controls and one AI control.

➔ To clarify a point in Emanuel Maiberg’s 404 Media article: the online AI detectors do not flag the reviewers’ human text as AI; only their review reports are detected as AI, as the controlled tests at the above link show. My point was that I am aware of the political concerns in accusing researchers whose first language is not English of essentially writing in “too perfect” English in their reviews. Yet this must not mask the larger issue of ChatGPT being the source of much of that text; the point is not the wording but the underlying ideas (or the lack thereof). And it cannot be stressed enough that misuse of generative AI is occurring in many languages, not just English.

Read about text patterns matching those reported in Liang et al. (2024) here.

[Above update of 8 April 2024]

It has come to my attention that academics are now using generative AI (ChatGPT or whatever) to conduct their peer reviews.

Just now, in mid-December 2023, I received two reviewer reports, written in English, from a language journal based in Italy. Both reviewers assessed my submitted article as “suitable for publication if significant changes are made.” Both reports were extraordinarily vague and nonspecific, offering all manner of general critiques without any actual engagement with my arguments. The more I read them, the more obvious it became that the human “reviewers” had not assessed my paper at all; a generative AI program had, with a bare minimum of human prompting and post-editing. I immediately responded to the editors, informing them of the situation.

The machine content was obvious to me because there was not a single coherent critique that meaningfully engaged with my paper. I went through the reports line by line, word by word: there was nothing there. That was all the evidence I needed, as a human researcher who knows my subject. But for what it’s worth, some online AI detectors – obviously not foolproof – also predicted that the reviews were machine generated. Here is a quick rundown of analyses conducted by some free and paid AI detection sites the evening of 19 December 2023 in Italy:

[Refer to the new, controlled AI detection analysis HERE. I pulled the original results grid down on 5 April 2024 to ensure that only the most up-to-date information is available.

AI detection tests are only one indicative element. Additional evidence is provided in the ChatGPT simulation tests, my qualitative analyses, and the journal’s response, all covered in the Updates post here and in the Journal Response post here.]

Never in my life have I felt angrier, or more professionally disrespected and betrayed. I have not yet decided how to proceed, in terms of publication of the paper or what to request from the journal. For now, it is clear to me that unethical researchers who use AI for peer review deserve to be held publicly accountable; they have no place in academia.

If ever there were a compelling argument to totally abandon the intellectually dishonest notion of “blind” peer review – a system everyone knows is broken, rarely fully “blind” and frequently not “blind” at all, but allowing unscrupulous or mediocre yet established scholars to sabotage promising work – then this finally is it: accountability and transparency in the face of the robot takeover. Every person who reviews another person’s scholarship must be willing to sign their name to their own evaluation, to stand by it and assert it was not the work of the machines.

I may in the future provide further details on this specific incident (including the offending reports) if or when new developments emerge. For the moment, I wanted to go public immediately with this shocking realization. If it happened to me twice, for the same article, at the same journal, it must be happening everywhere.

Nicholas Lo Vecchio
20 December 2023