Publication record


Publication date

February 2026

Journal

British Journal of Anaesthesia

Authors

Identified members of the Cancéropôle Est:
Pr POISBEAU Pierrick


All authors:
De Cassai A, Dost B, Augoustides JG, Azamfirei L, Alanoğlu Z, Azi LM, Calvache JA, Cerny V, De Hert S, Eldawlatly A, Farber MK, Sobreira-Fernandes D, Fettiplace MR, Galante D, Garg R, Goldstein HV, Abad-Gurumeta A, Gupta L, Hemmings HC, Jones CA, Hochberg MC, Katz JD, Kang H, Talu GK, Kraychete DC, Landau R, Lee S, Lum HD, Lundgren C, Makuloluwa TR, Martelletti P, Palermo TM, Peyton PJ, Poisbeau P, Rathmell JP, Roquilly A, Schwarz SKW, Shevade M, Sloan PA, Sweitzer B, Neto AS, Stahel PF, Şatırlar ZÖ, Turan A, Turk DC, Valeriani M, Werner MU, Young PJ, Zabolotskikh IB, Zacharowski K, Zdanowski S

Abstract

This article presents a Delphi consensus developed by a panel of editors-in-chief of anaesthesiology and pain medicine journals to guide the responsible use of large language models (LLMs) in academic publishing. LLMs offer potential benefits for scientific writing, including language editing, summarisation, translation, information organisation, and support for non-native English speakers, but their misuse raises concerns about accuracy, transparency, confidentiality, and research integrity. Through a three-round modified Delphi process involving 53 editors-in-chief or their delegates, 59 statements were generated and categorised into guidance for authors, editors, reviewers, and publishers, with particular attention to LLM disclosure practices and perceived risks. The consensus recognises that LLMs are useful tools in academic publishing for authors, reviewers, and editors. However, their use must be guided by ethics, legality, and the principles of transparency and accountability. LLMs may assist with limited editorial and authorial tasks provided that their use is fully disclosed and all outputs are verified by humans. The consensus also emphasises the inappropriateness of using LLMs to generate original or ideative content, which should remain a strictly human responsibility. Moreover, LLMs must not generate data, references, conclusions, or entire manuscripts, nor be used for editorial decisions or peer-review reports. Editors expressed concerns about 'hallucinations', erosion of critical skills, confidentiality breaches, and the proliferation of low-quality LLM-generated manuscripts. The resulting guidance highlights transparency, human accountability, and careful verification as essential principles for integrating LLMs into scholarly workflows while preserving the integrity of scientific publishing.

Keywords

Delphi consensus, anaesthesia and pain medicine journals, editorial policy, large language models, research integrity, responsible artificial intelligence, scientific publishing ethics

Reference

Br J Anaesth. 2026 Feb 25.