Hämostaseologie 2024; 44(06): 422-424
DOI: 10.1055/a-2443-4130
Editorial

Artificial Intelligence in Medicine: Are We Ready?

Michael Nagler
1   Department of Clinical Chemistry, Inselspital, Bern University Hospital, Bern, Switzerland

Funding: Michael Nagler is supported by a research grant from the Swiss National Science Foundation (215574).

Abstract

Despite my personal belief in the benefits of artificial intelligence (AI), reading Cathy O'Neil's book “Weapons of Math Destruction” left me unsettled.[1] She describes how flawed and unchecked algorithms are widely applied in areas that affect us all: hiring, credit scoring, access to education, and insurance pricing. In one example, a fixed percentage of teachers in a U.S. region was dismissed every year on the basis of biased and opaque algorithms. O'Neil concludes that such algorithms act as “weapons of math destruction”: they perpetuate and amplify societal biases, operate unethically, and harm vulnerable populations. What happens when we apply such algorithms to medicine? How do we know whether we are giving our patients the correct diagnosis or prognosis? Can we still be sure that patients are receiving the appropriate treatment? Would we notice if the algorithms were geared more toward the needs of the companies behind them (making as much money as possible) or of health insurers (spending as little as possible)? In fact, evidence of bias and inequity in medical algorithms is already available.[2] Because of these risks, some of my colleagues suggest that AI should be banned from medicine altogether.



Publication History

Received: 15 October 2024

Accepted: 15 October 2024

Article published online: 10 December 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany