CC BY-NC-ND 4.0 · Endosc Int Open 2020; 08(10): E1385-E1386
DOI: 10.1055/a-1214-5858
Editorial

AI in endoscopy and medicolegal issues: is the computer guilty in case of a missed cancer?

Ivan Jovanovic
1   Clinical Center of Serbia – Clinic for Gastroenterology and Hepatology, Beograd, Serbia
2   University of Belgrade Faculty of Medicine, Beograd, Serbia
 

Artificial intelligence (AI), roughly defined as computers or machines programmed to simulate human intelligence in problem-solving and learned behavior, has changed the modus operandi in many areas of our lives. It is used in a wide range of activities, such as banking, remote sensing, transportation, healthcare, and more [1].

In medicine, AI platforms already exist and may soon become indispensable for early detection, characterization, and classification of several gastrointestinal disorders, including Barrett’s esophagus and gastric and colonic lesions [2] [3] [4] [5]. In essence, these platforms are computer-derived decision-making algorithms developed by comparing data from a specific patient with large quantities of data from other patients, and it has been repeatedly claimed that such projects will soon automate doctors’ work [1] [2]. Before that happens, and alongside the technical challenges that implementation and integration of AI into clinical practice pose to engineers and medical workers, a series of open questions and legal issues need to be addressed.

Given their wide and ever-increasing presence in our lives, AI machines may soon gain some social capacity in terms of affecting our emotions and responsiveness, so the crucial question is whether AI will ever replace doctors and how many people would support that happening. In the survey by Wadhwa et al. [6] of 124 US gastroenterologists, 86 % had a strong interest in applying AI in their daily practice and nearly 85 % of them thought it would improve their practice. On the other hand, only 57 % would rely on a decision made by AI. So the answer to the question of whether AI will replace doctors in the future is far from simple and straightforward. One thing is certain: once an AI system enters the doctor-patient relationship, matters will become more complex.

One of the complexities in this dilemma is whether AI can be held accountable for misdiagnosis or even malpractice. Perhaps one day, and only once the relevant circumstances have been studied and examined so that accountability can be apportioned.

The first and perhaps most fundamental concern is validation of algorithms and classification of the product (software). Recently, several algorithms have been developed for detection and characterization of colon polyps, and they are in clinical trials or awaiting approval in Europe, Japan, and the United States [5] [7] [8] [9]. Only once a model has been tested on large internal and external image data sets (internal and external validation) can an AI system be used safely in the clinical setting. If such a system is intended to detect or treat disease, then, according to the European Medical Device Regulation, it can be classified as a medical device that contributes to diagnosis and facilitates decision-making on therapeutic measures [10]. If we standardize AI design, register it as a medical device, and require that ethical, moral, and social norms be programmed into AI platforms that interact with humans, then we are obliged to determine under which circumstances they can be held legally responsible for their actions.
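
To make the distinction between internal and external validation concrete, the short sketch below (a minimal illustration in Python using scikit-learn, not part of any approved system) evaluates a classifier first on a held-out split of its own development data and then on a separately shifted “external” cohort; the synthetic features stand in for image-derived data, and every dataset, parameter, and number is an assumption made only for illustration.

# Minimal sketch of internal vs. external validation of a lesion classifier.
# Synthetic tabular features stand in for image-derived features; every
# dataset, parameter, and threshold here is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One synthetic pool of labeled cases, split into an "internal" development
# cohort and an "external" cohort that notionally comes from another center.
X, y = make_classification(n_samples=3000, n_features=30, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)
X_int, X_ext, y_int, y_ext = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)

# Mimic a distribution shift in the external cohort (different endoscopes,
# different patient mix) by perturbing its features.
rng = np.random.default_rng(0)
X_ext = X_ext + rng.normal(scale=0.5, size=X_ext.shape)

# Internal validation: hold out part of the development cohort.
X_train, X_test, y_train, y_test = train_test_split(
    X_int, y_int, test_size=0.25, stratify=y_int, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Internal performance is measured on the held-out split of the development
# data; external performance on the shifted cohort is usually lower.
auc_internal = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"Internal validation AUC: {auc_internal:.2f}")
print(f"External validation AUC: {auc_external:.2f}")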

At this stage, healthcare professionals would be responsible for harm if they did not take adequate measures to properly evaluate AI technology. But as the technology advances, machines are likely to gain more autonomy, and the strategy regarding legal responsibility will need to be developed further. All AI systems are designed and built by humans with the intent of doing no harm to other humans and of achieving their goals in a safe and secure manner. It will be challenging, however, to identify the responsible parties among software developers, hardware engineers, companies, and healthcare providers in the event of a medical error (product or vicarious liability vs. medical malpractice).

Can AI systems be guilty of medical malpractice and can patients sue a robot?

From a legal perspective, it is difficult to say, as it is still an unknown and evolving field.

A large number of medical malpractice lawsuits originate from the missed or delayed diagnosis of a medical condition or illness [11]. Still, a mistake in diagnosis by itself is not enough to pursue a medical malpractice lawsuit.

Medical malpractice includes negligence, and negligence involves conscious failure to act (knowing but not doing), which implies that the person – or, in our case, the computer – knew what a breach of the duty to recognize and differentiate an adenoma from a hyperplastic lesion would result in. This aspect of AI design is still lacking. But as the autonomy of AI systems expands, it is not completely impossible that legal responsibility will be distributed and attributed to the machine itself.

If a medical malpractice lawsuit is pursued, the court of first instance should first determine the direct cause of the plaintiff’s injury and then determine whether there are grounds for a claim of medical malpractice or product liability [11] [12]. If the case arises from a defect in the AI system’s hardware that caused the plaintiff’s injury, the claim should proceed against the manufacturer, or against the owner (the end user – the hospital or group of physicians using it) in case of inappropriate operation and maintenance.

If an AI machine was registered as a medical device and programmed according to the required medical standards (again, standardization is crucial), the patient consented to the use of AI in the diagnostic work-up and had the procedure explained in detail, and the machine was operated properly but still failed to recognize and differentiate an adenoma that went on to cause interval cancer (the direct cause of the plaintiff’s injury), then perhaps the computer could be held liable for medical malpractice based on a missed or wrong diagnosis. But as AI becomes further integrated into everyday practice, it becomes obvious that the current legal framework is insufficient, and further elucidation of the interface between law, technology, and medicine is required to protect the millions of patients soon to be exposed to the diagnoses and therapies suggested by AI systems. For the time being, only general regulations continue to apply, and we need to find new and creative solutions to reconcile the new circumstances. Regardless of whether we give AI a legal personality of its own or share liability among all parties involved in the use and implementation of the technology, quality and safety must come first. Because future AI systems may exclude physicians from decision-making about the interpretation of endoscopic images, we need to carefully weigh their adoption against the imminent threats posed to physicians using technology that is not completely regulated.

Besides justified concerns regarding costs (75 %), operator dependence (63 %), and increased procedural time (60 %), perhaps some of the 43 % of surveyed gastroenterologists [6] who felt uncomfortable using computer-aided diagnosis to support a “diagnose and leave” strategy for hyperplastic polyps also had, philosophically, some “sentimental problems” about who would be responsible for a missed cancer. It is a pity that the question was not asked. Or perhaps they are simply of an age at which they would not rely on computer-aided diagnosis.

For AI to become a regular part of colonoscopy services, gastroenterologists will need to be confident that the technology is affordable and that it will improve their performance, but they will also need legal clarity and certainty before it is fully adopted in clinical practice.



Competing interests

The author declares that he has no conflict of interest.

  • References

  • 1 Copeland BJ. Artificial intelligence. Encyclopedia Britannica; 2020. Available from: https://www.britannica.com/technology/artificial-intelligence (Accessed May 26, 2020)
  • 2 Yang YJ, Bang CS. Application of artificial intelligence in gastroenterology. World J Gastroenterol 2019; 25: 1666-1683
  • 3 van der Sommen F, Zinger S, Curvers WL. et al. Computer-aided detection of early neoplastic lesions in Barrett’s esophagus. Endoscopy 2016; 48: 617-624
  • 4 Hirasawa T, Aoyama K, Tanimoto T. et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018; 21: 653-660
  • 5 Byrne MF, Chapados N, Soudan F. et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut 2019; 68: 94-100
  • 6 Wadhwa V, Alagappan M, Gonzalez A. et al. Physician sentiment toward artificial intelligence (AI) in colonoscopic practice: a survey of United States gastroenterologists. Endosc Int Open 2020; 08: E1379-E1384
  • 7 Urban G, Tripathi P, Alkayali T. et al. Deep learning localizes and identifies polyps in real time with 96 % accuracy in screening colonoscopy. Gastroenterology 2018; 155: 1069-1078
  • 8 Misawa M, Kudo SE, Mori Y. et al. Artificial intelligence-assisted polyp detection for colonoscopy: initial experience. Gastroenterology 2018; 154: 2027-2029
  • 9 Klare P, Sander C, Prinzen M. et al. Automated polyp detection in the colorectum: a prospective study (with videos). Gastrointest Endosc 2019; 89: 576-582
  • 10 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, Article 2. p. 15–20).
  • 11 Sullivan HR, Schweikart SJ. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA J Ethics 2019; 21: E160-E166
  • 12 Price WN. Artificial intelligence in health care: applications and legal implications. The SciTech Lawyer 2017; 14: 10-14

Corresponding author

Ivan Jovanovic
Gastroenterology and Hepatology, Clinical Center of Serbia
Koste Todorovic 6, 11000 Belgrade
Serbia   
Fax: +381113615587   

Publication History

Article published online:
22 September 2020

© 2020. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany
