Artificial intelligence (AI), roughly defined as machines programmed to simulate human intelligence in problem-solving and learned behavior, has changed the modus operandi in many areas of our lives. It is used in a wide range of activities, including banking, remote sensing, transportation, healthcare, and more [1].
In medicine, AI platforms already exist and may soon become indispensable for the early detection, characterization, and classification of several gastrointestinal disorders, including Barrett esophagus and gastric and colonic lesions [2] [3] [4] [5]. In essence, these platforms are computer-derived decision-making algorithms developed by comparing data from a specific patient with large quantities of data from other patients, and it has repeatedly been claimed that such projects will soon automate doctors’ work [1] [2]. Before that happens, and alongside the technical challenges that implementation and integration of AI into clinical practice pose to engineers and medical professionals, there is a series of open questions and legal issues that need to be addressed.
Given their wide and ever-increasing presence in our lives, AI machines may soon gain a degree of social capacity, affecting our emotions and responsiveness, so the crucial questions are whether AI will ever replace doctors and how many people would support this happening. In a survey by Wadhawa et al. [6] of 124 US gastroenterologists, 86 % expressed a strong interest in applying AI in their daily practice and nearly 85 % thought it would improve their practice. On the other hand, only 57 % would rely on a decision made by AI. The answer to the question of whether AI will replace doctors in the future is therefore far from simple and straightforward. One thing is certain: once an AI system enters the doctor-patient relationship, matters will become more complex.
One of the complexities in this dilemma is whether AI can be held accountable for misdiagnosis or even malpractice. Perhaps one day, and only once the relevant circumstances have been studied and examined to apportion accountability.
The first and perhaps most fundamental concern is the validation of algorithms and the classification of the product (software). Several algorithms have recently been developed for the detection and characterization of colon polyps, and they are in clinical trials or under approval in Europe, Japan, and the United States [5] [7] [8] [9]. Once a model has been tested on large internal and external image data sets (internal and external validation), an AI system can be used safely in the clinical setting. If such a system is intended to detect or treat disease, then according to the European Medical Device Regulation it can be classified as a medical device that contributes to diagnosis and facilitates decision-making on therapeutic measures [10]. If we standardize AI design, have it registered as a medical device, and require that ethical, moral, and social norms be programmed into AI platforms that interact with humans, then we are obliged to determine under which circumstances they can be held legally responsible for their actions.
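To make the idea of internal versus external validation concrete, the sketch below shows one plausible way such a check could look in code. This is only an illustrative example under assumed conditions: it presumes pre-extracted image features and a generic classifier, and the function and variable names are hypothetical rather than part of any approved system.

```python
# Illustrative sketch of internal vs. external validation (hypothetical example).
# X_internal/y_internal: development-site image features and polyp labels;
# X_external/y_external: data from a different centre, never used for training.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def validate(X_internal, y_internal, X_external, y_external):
    # Internal validation: hold out part of the development data set.
    X_train, X_test, y_train, y_test = train_test_split(
        X_internal, y_internal, test_size=0.2, stratify=y_internal, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc_internal = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # External validation: the same frozen model, evaluated on unseen data
    # from another institution to check that performance generalizes.
    auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
    return auc_internal, auc_external
```

A large gap between the internal and external figures would suggest that the model does not generalize and is not yet ready for clinical use.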
At this stage, healthcare professionals would be responsible for harm if they did
not take adequate measures to properly evaluate AI technology. But as technology advances,
machines are likely to gain more autonomy, and the approach to legal responsibility will need to be developed further. All AI systems are designed and built by humans with the intent of doing no harm to other humans and of achieving their goals in a safe and secure manner. It will be challenging, however, to identify the responsible parties among software developers, hardware engineers, companies, and healthcare providers in cases of medical error (product or vicarious liability vs. medical malpractice).
Can AI systems be guilty of medical malpractice and can patients sue a robot?
From a legal perspective, it is difficult to say, as it is still an unknown and evolving
field.
A large number of medical malpractice lawsuits originate from the missed or delayed diagnosis of a medical condition or illness [11]. Still, a mistake in diagnosis is not by itself enough to pursue a medical malpractice lawsuit.
Medical malpractice includes negligence, and negligence involves a conscious failure to act (knowing but not doing), which implies that the person, or in our case the computer, knew what a breach of the duty to recognize and differentiate an adenoma from a hyperplastic lesion would result in. This aspect of AI design is still lacking. But as the autonomy of AI systems expands, it is not entirely impossible that legal responsibility could be distributed and attributed to the machine itself.
If a medical malpractice lawsuit is pursued, the court of first instance should be able to determine the direct cause of the plaintiff’s injury, followed by a determination of whether there are grounds to claim medical malpractice or product liability [11] [12]. If the case arises from a defect in the AI system hardware that caused the plaintiff’s injury, then it should proceed against the manufacturer, or against the owner (the end user: the hospital and the group of physicians using the system) in cases of inappropriate operation and maintenance.
If an AI machine was registered as a medical device and programmed according to the required medical standards (again, standardization is crucial), if patients consented to the use of AI in their diagnostic work-up and the procedure was explained in detail, and if the machine was operated properly but still failed to recognize and differentiate an adenoma, resulting in interval cancer (the direct cause of the plaintiff’s injury), then perhaps the computer could be held liable for medical malpractice based on a missed or wrong diagnosis. As AI becomes further integrated into everyday practice, it is becoming obvious that the current legal framework is insufficient and that further elucidation of the interface between law, technology, and medicine is required to protect the millions of patients soon to be exposed to diagnoses and therapies suggested by AI systems. For the time being, only general regulations apply, but we need to find new and creative solutions to accommodate the new circumstances.
Regardless of whether we give AI a legal identity of its own or share liability among all parties involved in the use and implementation of the technology, quality and safety must come first. Because future AI systems may exclude physicians from decision-making about the interpretation of endoscopic images, we need to weigh their adoption carefully against the imminent risks posed to physicians using technology that is not fully regulated.
Besides justified concerns regarding costs (75 %), operator dependence (63 %), and increased procedural time (60 %), perhaps some of the 43 % of gastroenterologists surveyed [6] who felt uncomfortable using computer-aided diagnosis to support a “diagnose and leave” strategy for hyperplastic polyps had, philosophically, some “sentimental problems” about who would be responsible for a missed cancer. It is a pity the question was not asked. Or perhaps they are simply of an age at which they would not rely on computer-aided diagnosis.
For AI to be routinely included in colonoscopy services, gastroenterologists need to be confident that the technology is affordable and will improve their performance, but they will also need legal clarity and certainty before it is fully adopted in clinical practice.