Hämostaseologie 2024; 44(06): 422-424
DOI: 10.1055/a-2443-4130
Editorial

Artificial Intelligence in Medicine: Are We Ready?

Michael Nagler
1   Department of Clinical Chemistry, Inselspital, Bern University Hospital, Bern, Switzerland

Funding Michael Nagler is supported by a research grant from the Swiss National Science Foundation (215574).
 


In spite of my personal belief in the benefits of artificial intelligence (AI), reading Cathy O'Neil's book “Weapons of Math Destruction” left me feeling unsettled.[1] She describes how flawed and unchecked algorithms are widely applied in areas that affect us all: hiring, credit scoring, access to education, and insurance pricing. In one example, a fixed percentage of teachers in a U.S. district was dismissed every year on the basis of biased and opaque algorithms. O'Neil concludes that such algorithms act as “weapons of math destruction”: they perpetuate and amplify societal biases, operate unethically, and harm vulnerable populations. What happens, then, when we apply these algorithms to medicine? How do we know whether we are giving our patients the correct diagnosis or prognosis? Can we still be sure that patients are receiving the appropriate treatment? Would we notice if the algorithms were geared more toward the needs of companies (making as much money as possible) or health insurers (spending as little as possible)? In fact, evidence of bias and inequality in medical algorithms is already available.[2] Because of these risks, some of my colleagues suggest that AI should be banned from medicine altogether.


There would, however, be a lot at stake if we banned AI completely from medicine. The first thing we would lose is capabilities that cannot be replaced by “human intelligence”: machine-learning algorithms can find patterns in large amounts of high-dimensional data, they are consistent and accurate without tiring, and they excel at classification (diagnosis) and prediction.[3] [4] Second, the combination of multivariable analysis with the new capabilities of laboratory analytics opens up entirely new potential for the diagnosis and screening of diseases, for prediction and prognosis estimation, and for treatment monitoring.[5] Third, algorithms can be used in (selected) processes to reduce the workload of health care professionals in times of widespread staff shortages and ever tighter budgets.[6] Fourth, to be consistent, we would also have to ban all regression models from medicine, a method that has proven itself over decades.[7] Ultimately, just as we no longer want to do without the internet, the younger generation will not want to do without health care apps. This trend is unstoppable. Either we take control, or we will be overtaken by new players in the health care sector, such as Google Health and others.[8]

There is a sensible way to deal with this dilemma, and the three articles in this issue of Hämostaseologie represent just such steps. We need to understand the causes of errors, biases, and inequalities in AI and machine-learning–based algorithms. We need to develop a conceptual framework and a methodological toolkit that allow us to develop algorithms, validate them thoroughly, implement them, and monitor their performance in different populations. In addition, we need to try new approaches, knowing that we will make many mistakes, and we need to learn from those mistakes. I am grateful that several experts in the field of thrombosis and hemostasis have agreed to do pioneering work toward these goals. Dr. Chrysafi and Dr. Patell's group from Boston provides a comprehensive overview of studies and applications in the field of venous thromboembolism. In particular, they describe the potential of generative models, outline barriers and facilitators, and suggest practical next steps.[9] Dr. Kahl and colleagues conducted a systematic review and provide a broad overview of digital applications in the field of inherited coagulation disorders.[10] Dr. Nilius presents a methodological framework for developing effective algorithms and discusses common pitfalls.[11]

The implementation of AI can also be seen in a broader context: as an information technology that provides people with information at the right time, thus relieving them of tedious tasks (memorizing, looking up, etc.).[12] This phenomenon has occurred several times in human history: with the invention of the complete alphabet by the Greeks, the printing press in the Middle Ages, modern computers, and the internet.[13] All of these technologies carry risks that must be countered with new regulatory frameworks. However, all of them have also released a great deal of energy that was converted into increased productivity and world-changing innovations.


Conflict of Interest

The author declares that he has no conflict of interest.


Address for correspondence

Michael Nagler, MD, PhD
Inselspital, Bern University Hospital, University of Bern
3010 Bern
Switzerland

Publication History

Received: 15 October 2024

Accepted: 15 October 2024

Article published online:
10 December 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany