DOI: 10.1055/a-2743-2564
Be Aware of AI Limitations
To the Editor
Large language models (LLMs) have revolutionized AI, allowing machines to generate human-like text with speed and accuracy when used in the circumstances for which they were designed. Yet a major concern is insufficient understanding and recognition of the limitations of LLMs. The use of generative AI products based on LLMs has expanded at a pace faster than the adoption of either personal computers or the Internet [1]. LLMs are machine learning algorithms, designed by humans, that are easy to use and respond to input with human-like text [2]. The capabilities of AI models rest on their training data: the amount, caliber, and variety of the data used for training are critical factors in developing a robust, impartial, and accurate model. LLMs are trained on massive datasets containing billions (10⁹) or trillions (10¹²) of data units. The smallest piece of data that an LLM can read and process is referred to as a token. Tokens are the data units used to generate text, make predictions, and provide responses. The model uses parameters learned during training to interpret and transform these data units. LLMs operate by predicting the next token, whether letters, words, or phrases, based on the training data. Data structures and algorithms are fundamental for processing and sorting data and for making decisions and predictions. It is estimated that GPT-4 has one trillion parameters [3].
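The idea of next-token prediction described above can be illustrated, in a deliberately minimal way, with a toy model. The sketch below is not how an LLM is actually implemented: it replaces subword tokenization with a whitespace split, replaces billions of learned parameters with simple bigram counts, and uses an invented one-line corpus. It serves only to make concrete the notion of "predicting the next token from training data."

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (invented for illustration).
corpus = "the model predicts the next token the model generates text"

# "Tokenize": here, a naive whitespace split, so one word = one token.
# Real LLMs use subword tokenizers with vocabularies of tens of
# thousands of tokens.
tokens = corpus.split()

# "Training": count which token follows which. These counts play the
# role that learned parameters play in an actual LLM.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the token most frequently observed after `token`,
    or None if the token never preceded anything in the corpus."""
    counts = following.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # the most common successor of "the" in the corpus
```

The key limitation the letter points to is visible even here: the model's "knowledge" is entirely a reflection of its training data, and it has no mechanism for judging whether a prediction is true, only whether it is statistically likely.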
Publication History
Received: 20 June 2025
Accepted after revision: 05 November 2025
Article published online:
09 January 2026
© 2026. Thieme. All rights reserved.
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany
References
- 1 Bick A, Blandin A, Deming DJ. The rapid adoption of generative AI. National Bureau of Economic Research; 2024
- 2 Marcus G. Taming Silicon Valley: How We Can Ensure That AI Works for Us. Cambridge, MA, USA: MIT Press; 2024
- 3 Bastian M. GPT-4 has more than a trillion parameters – Report. The Decoder. 2023 https://the-decoder.com/gpt-4-has-a-trillion-parameters/
- 4 Marcus G. AI platforms like ChatGPT are easy to use but also potentially dangerous. Sci Am. 2022 https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/
- 5 Smith G. Machine learning algos often fail: they focus on data, ignore theory. Mind Matters. 2025 Apr. 8 https://mindmatters.ai/2025/04/machine-learning-algorithms-provide-lots-of-data-but-not-theory/#
- 6 Monteith S, Glenn T, Geddes J. et al. Artificial intelligence and increasing misinformation. Br J Psychiatry 2024; 224: 33-35
- 7 Monteith S, Glenn T, Geddes J. et al. Expectations for artificial intelligence (AI) in psychiatry. Curr Psychiatry Rep 2022; 24: 709-721
- 8 Griot M, Hemptinne C, Vanderdonckt J. et al. Large language models lack essential metacognition for reliable medical reasoning. Nat Commun 2025; 16: 642
- 9 Monteith S, Glenn T, Geddes JR. et al. Differences between human and artificial/augmented intelligence in medicine. Computers in Human Behavior: Artificial Humans 2024; 2: 100084
- 10 Monteith S, Glenn T, Geddes JR. et al. Patient and physician exposure to artificial intelligence hype. Pharmacopsychiatry 2025
- 11 Kolding S, Lundin RM, Hansen L. et al. Use of generative artificial intelligence (AI) in psychiatry and mental health care: a systematic review. Acta Neuropsychiatrica 2024; 37: e37
