
AI and peer review: Official journal response

This update follows up on earlier posts (20 December 2023; 31 January and 7 February 2024) reporting on the use of generative AI in the peer review of a research paper.

I received an official response by email from the mediAzioni journal editors on Thursday, 22 February 2024.

Attached to that email were detailed responses from both human reviewers (which I do not provide here). I appreciate that the journal decided to involve the reviewers, though their identities remain undisclosed.

Most notable in the response is that the mediAzioni editors take no explicit position on my allegations that generative AI was used to produce both review reports. They state: “our journal is not the proper place to conduct a dispute on this issue” / “la nostra Rivista non è la sede per condurre un contraddittorio in merito.”

Remarks in both anonymous reviewers’ responses suggest they were unaware that I had made repeated requests to engage with them, beginning with my earliest email of 20 December 2023 (“which I strongly urge you to address directly with the authors of the reviews (please share this response with them) as well as with the journal’s editorial/scientific boards due to the serious ethical breach I am alleging”). The reviewers appear to have been notified only after my public statements of 31 January, which contrasts with the editors’ earlier commitments to conduct an investigation.

The reviewers’ responses indicate to me that these texts were written by the same humans who used generative AI (likely ChatGPT 3.5) to review my article, through a process of individualized AI prompting and post-editing that drew on their own professional expertise. These evidently human-produced responses, written in the first person, thus provide control samples against which to compare the earlier review reports.

The following grid presents the results of a new round of testing using various free and paid AI detection sites (listed alphabetically below), conducted on 23 February 2024 in Italy. The two original review reports were tested against four control samples: one AI control, using selected output from the ChatGPT simulation tests, and three human controls: my article in its June 2023 draft version and the two reviewer responses to me. The reviewers’ responses are readily identified as human-written, like my own article, and in contrast to their original review reports.

Controlled AI detection tests

These new controlled AI detection tests – along with the ChatGPT simulations (raw output here), the concordance lines with many matching text patterns, my qualitative commentary in the annotated reports, the journal’s response timeline, the unwillingness to de-anonymize, and the unconvincing reviewer responses – only reinforce my position that generative AI was used to produce both peer review reports for my paper.

My case must not be seen as exceptional. I do not mean to single out the anonymous reviewers alone for their conduct or the journal editors for their response, as similar scenarios are surely now playing out all over the place. My public pursuit of this matter has been motivated as much by raising awareness as by establishing accountability.

Nicholas Lo Vecchio

26 February 2024
