[This post has been updated, below, and see here most recently for the journal’s response.]
This post provides an update on the AI peer review incident reported previously on this site, in December 2023.
The article described here had been commissioned for a special issue of mediAzioni journal (Università di Bologna – Forlì), to be published in late 2024 as conference proceedings of the TaCo conference, on “taboo” in language, held in Rome in September 2022. The assessment of the paper from the editors was positive: “Both referees have indicated that the paper is suitable for publication, although significant changes should be made.”
After discovering that generative AI had been used to assess my paper, I sent a detailed response to the guest editors of the special issue. This included many concrete questions for the reviewers to answer; a request that my response be shared with the scientific committee; and a request that a revised version of my paper undergo a replacement round of human review.
Upon reflection, I then escalated the matter, appealing to the editors in chief of mediAzioni journal to conduct an official investigation involving all members of the scientific board. On 21 December 2023, the guest editors and the mediAzioni editors indicated that a “thorough investigation” and a “thorough evaluation” would be conducted after the holiday break.
On 11 January 2024, the editors responded, denying the allegations without offering any substantiation for their position:
Dear Nicholas Lo Vecchio,
both the Special Issue guest editors and the mediAzioni editorial team have now carefully assessed your claims about AI having allegedly been used to create the reviews for the paper you have submitted and have found them unsubstantiated and without merit. We have no reason to doubt neither the authenticity of the reviews nor the ethical conduct of the two reviewers. Therefore, we stand by our original message sent to you on December 19th, 2023.
The Special Issue guest editors
The mediAzioni editors-in-chief
I withdrew my article and informed the editors that the matter would be considered open and unresolved until the mediAzioni journal provided an adequate response. I submitted a list of additional questions about the investigation, again asking the editors to confront the reviewers directly and to involve all members of the scientific committee. The editors did not respond, either to that email or to any of the questions raised in previous communications.
As of this morning, I have appealed by email to all members of the mediAzioni scientific committee (Advisory Board) in the hopes that they will take this matter seriously and seek to establish public accountability for the individuals responsible.
I have also contacted all participants of the 2022 TaCo conference, urging other submitting authors to examine their own review reports carefully to determine whether they show similar AI text patterns.
In addition to the ethical breach, there are legal concerns about confidentiality, as any unauthorized processing of texts in a commercial AI platform could have led to the theft and undue monetization of intellectual property. The automated analyses themselves raise various other issues, including heterocisnormative bias in AI algorithms.
Here, in the spirit of transparency, I am now releasing the review reports. Two versions are available:
The most up-to-date version of my withdrawn article is available in prepublication form on the main page of this website: “Using Translation to Chart the Early Spread of GAY Terminology” [link].
Nicholas Lo Vecchio
31 January 2024
New update with ChatGPT tests
As a further update on the matter of AI use in the peer review of my article: as of today, I have received no response from the members of the mediAzioni Advisory Board, either individually or collectively. I had requested some sort of formal response by end of day Monday, 5 February.
In the continued spirit of transparency, I am now releasing a file of simulations run in ChatGPT, which convincingly demonstrate that this was the generative AI platform most likely used to produce the review reports on my article.
The scholarly and press dialogue on the use of AI in peer review has so far been skewed toward the hard sciences; see a bibliography here. A more critical discussion that accounts for its ill-suited use in the humanities is urgently needed.
The concrete outcome I would like is for the journal editors and board to assume their own responsibility by acknowledging the error, and for the individuals responsible to apologize.
Nicholas Lo Vecchio
7 February 2024