
[Many other details and pieces of evidence have been added since the original post below, across three posts on this website.]

It has come to my attention that academics are now using generative AI (ChatGPT or whatever) to conduct their peer reviews.

Just now, in mid-December 2023, I received two reviewer reports, written in English, from a language journal based in Italy. Both reviewers assessed my submitted article as being “suitable for publication if significant changes are made.” Both reports were extraordinarily vague, offering all sorts of general critiques without any actual engagement with my arguments. The more I read them, the more obvious it became that the human “reviewers” had not assessed my paper at all: a generative AI program had, with a bare minimum of human prompting and post-editing. I immediately responded to the editors, informing them of the situation.

The machine content was obvious to me because there was not a single coherent critique that meaningfully engaged with my paper. I went through the reports line by line, word by word: there was nothing there. That was all the evidence I needed, as a human researcher who knows my subject. But for what it’s worth, some online AI detectors – obviously not foolproof – also predicted that the reviews were machine-generated. Here is a quick rundown of analyses conducted by some free and paid AI detection sites on the evening of 19 December 2023 in Italy:

[Refer to the new, controlled AI detection analysis HERE.
I have pulled this original grid down to ensure that only the most up-to-date information is available.]

Never in my life have I felt angrier, or more professionally disrespected and betrayed. I have not yet decided how to proceed, in terms of publication of the paper or what to request from the journal. For now, it is clear to me that unethical researchers who use AI for peer review deserve to be held publicly accountable; they have no place in academia.

If ever there were a compelling argument for totally abandoning the intellectually dishonest notion of “blind” peer review – a system everyone knows is broken, rarely fully “blind” and frequently not “blind” at all, one that allows unscrupulous or mediocre yet established scholars to sabotage promising work – then this is finally it: accountability and transparency in the face of the robot takeover. Every person who reviews another person’s scholarship must be willing to sign their name to their own evaluation, to stand by it and assert it was not the work of the machines.

I may provide further details on this specific incident (including the offending reports) in the future, if or when new developments emerge. For the moment, I wanted to go public immediately with this shocking realization. If it happened to me twice, for the same article, at the same journal, it must be happening everywhere.

Nicholas Lo Vecchio
20 December 2023