Pharmacopsychiatry
DOI: 10.1055/a-2743-2564
Letter to the Editor

Be Aware of AI Limitations

Authors

  • Scott Monteith

    1   Psychiatry, Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, United States
  • Tasha Glenn

    2   Consultant, Chronorecord Association, Inc., Fullerton, CA, United States
  • John Richard Geddes

    3   Department of Psychiatry, University of Oxford, Oxford, United Kingdom
  • Peter C Whybrow

    4   Psychiatry, UCLA, Los Angeles, United States
  • Eric Achtyes

    5   Psychiatry, Western Michigan University Homer Stryker MD School of Medicine, Kalamazoo, United States
  • Rita Bauer

    6   Department of Psychiatry and Psychotherapy, Technische Universität Dresden, Dresden, Germany
  • Michael Bauer

    7   Department of Psychiatry, Technische Universität Dresden, Dresden, Germany

To the Editor

Large language models (LLMs) have revolutionized AI, allowing machines to generate human-like text with speed and accuracy when used in the circumstances for which they were designed. Yet a major concern is that the limitations of LLMs are insufficiently understood and recognized. The use of generative AI products based on LLMs has expanded at a pace faster than the adoption of either personal computers or the Internet [1]. LLMs are machine learning algorithms designed by humans that are easy to use and respond to input with human-like text [2].

The capabilities of AI models rest fundamentally on their training data. The amount, caliber, and variety of the data used for training are critical factors in developing a robust, impartial and accurate model. LLMs are trained on massive datasets containing billions (10⁹) or trillions (10¹²) of data units. The smallest piece of data that an LLM can read and process is referred to as a token. Tokens are the data units used to generate text, make predictions and provide responses. The model uses parameters learned during training to interpret and transform these data units. LLMs operate by predicting the next token, whether letters, words or phrases, based on the training data. Data structures and algorithms are fundamental for processing and sorting data and for making decisions and predictions. It is estimated that GPT-4 has one trillion parameters [3].
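To make the next-token mechanism concrete, the following is a deliberately minimal Python sketch of our own devising, not the implementation of any actual LLM: it "trains" by counting which token follows which in a toy corpus, then predicts the most frequent continuation. Real LLMs replace these counts with billions of learned neural-network parameters over subword tokens, but the underlying prediction task is the same.

    from collections import Counter, defaultdict

    # Toy "training data": a tiny whitespace-tokenized corpus.
    corpus = "the patient responded well . the patient improved .".split()

    # "Training": count how often each token follows each preceding token.
    # (Real LLMs learn billions of parameters instead of raw counts.)
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def predict_next(token):
        # Return the continuation seen most often after this token,
        # with a placeholder for tokens never seen during training.
        counts = following.get(token)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))       # "patient", learned from the corpus
    print(predict_next("improved"))  # "."

Even this toy example makes the dependence on training data visible: a token that never appeared in the corpus yields no useful prediction, and a skewed corpus yields skewed continuations.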



Publication History

Received: 20 June 2025

Accepted after revision: 05 November 2025

Article published online: 09 January 2026

© 2026. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany