I read with interest the article titled “Navigating Artificial Intelligence in Scientific
Manuscript Writing: Tips and Traps” in your journal, in which the authors outlined
the advantages and disadvantages of using artificial intelligence (AI) in academic
writing.[1] The authors also presented some of the guidelines issued by journal publishers
regarding the use of AI. However, I believe a more in-depth analysis could help readers
understand which domains are of utmost importance to consider while using AI in
manuscript preparation. In this context, I prepared a list of 20 publishers and extracted
themes from their guidelines regarding the use of AI in research and writing.[2] [3]
The themes and direct quotes of the publishers are shown in [Table 1], and the relative frequency of the themes is shown in [Fig. 1].[3] Responsibility is the most prominent theme, as authors remain ultimately accountable
for the content, accuracy, and ethical standards of their manuscripts. Authorship
is also a significant issue, as AI should not be listed as an author, given that it
does not fulfill the International Committee of Medical Journal Editors criteria of
authorship.[4] Declaration of AI use is critical: many publishers require authors to disclose
the use of AI tools and to specify their role in the manuscript preparation process.
If authors use AI in the study itself, they should report its use in the Methods section;
if they use it for language and grammar correction, they should mention it in the Acknowledgments section.
While AI can enhance productivity, helping with drafting and literature reviews, it
comes with limitations, such as potential biases or inaccuracies in AI-generated content.
Looking ahead, the future prospects of AI in academic writing suggest continued integration,
but with a focus on ethical use, transparency, and maintaining academic integrity.
Table 1
Some direct quotes of the biomedical journal publishers on using artificial intelligence
in manuscript preparation
Responsibility
(1) Use of AI “should be done with human oversight and control, and authors should
carefully review and edit the result.” (Elsevier)
(2) “Authors are liable for … parts created with the help of an AI.” (Thieme)
(3) “Authors... to review the final text and accept responsibility for its accuracy.”
(Mary Ann Liebert, Inc.)
(4) Authors “are fully responsible for the content of their manuscript.” (Bentham
Science)
Authorship
(1) “Generative AI technologies … do not meet the criteria required for authorship.”
(Frontiers)
(2) AI “do not meet authorship criteria and thus cannot be listed as authors.” (MDPI)
(3) “Bots cannot be listed as a credited author of a paper published.” (Scientific
Scholar)
Declaration
(1) Use of AI in the manuscript “should be clearly described in an acknowledgments
section.” (De Gruyter)
(2) Use of AI should be “properly documented in the Methods section.” (Springer)
(3) AI use should be mentioned in “the Methods section (or via a disclosure or within
the Acknowledgements section, as applicable).” (Wiley)
(4) Add details of AI use in “cover letters to editors and in the Methods or Acknowledgements
section of manuscripts.” (Oxford University Press)
(5) Authors can provide “input prompts provided to a generative AI technology and
outputs … in supplementary files.” (Frontiers)
(6) To mention “name of the model or tool, version and extension numbers, and manufacturer.”
(JAMA)
Productivity
(1) “Copy-editing an article using a generative AI tool/LLM to improve its language
and readability” is permitted, although copywriting by AI is not allowed. (Emerald)
(2) AI tools “can increase productivity and foster innovation if used appropriately.”
(Wiley)
(3) Proper use of AI has “the potential to augment research outputs and thus foster
progress through knowledge.” (Taylor and Francis)
(4) AI can “generate initial ideas for a structure, for example, or when summarizing,
paraphrasing, language polishing” of a manuscript. (SAGE)
Limitations
(1) “LLMs can ‘hallucinate’ i.e. generate false content” and they “can generate content
that is linguistically but not scientifically plausible.” (SAGE)
(2) “AI can generate authoritative-sounding output that can be incorrect, incomplete
or biased.” (Elsevier)
(3) “Any use of AI must not breach (their) plagiarism policy.” (Cambridge University
Press)
Prospect
(1) Use of “artificial intelligence, language models, machine learning, or similar
technologies is discouraged.” (JAMA)
(2) AI in “research and writing is an evolving practice.” (Taylor and Francis)
(3) “In a few years, AI will become the norm, like how the Internet or Google are
now.” (MDPI)
(4) Researchers should use it “responsibly and in accordance with (their) AI policy.”
(Taylor and Francis)
Abbreviations: JAMA, Journal of the American Medical Association; MDPI, Multidisciplinary
Digital Publishing Institute.
Fig. 1 Relative frequency of the themes.
The best possible way of using AI is to use it, take responsibility for it, and declare it in detail!