CC BY-NC-ND 4.0 · Indian J Radiol Imaging 2018; 28(02): 137-139
DOI: 10.4103/ijri.IJRI_256_18
Editorial

Artificial intelligence in radiology – Are we treating the image or the patient?

Chander Mohan SM
Interventional Radiology, BLK Superspecialty Hospital, Pusa Road, New Delhi, India

Financial support and sponsorship: Nil.
 

    The term artificial intelligence almost invariably transports our minds to the futuristic things we have grown accustomed to watching in science fiction movies like the Matrix trilogy. After spending some moments pondering what the future may hold, we get back to our lives thinking, "Well, it's still in the future."

    Of late, artificial intelligence has become the buzzword in radiology. It is hard to think of a single term that has led to such serious discussion and debate in our specialty in recent times. A lot of new technological jargon that we are not accustomed to reading, let alone understanding, is all over the papers, and terms like convolutional neural networks (CNN), natural language processing, and deep convolutional neural networks (DCNN) have become commonplace. Recent talks and articles state that these algorithms will generate "heat maps" of areas for the radiologist to focus on, in other words, using the "eyes of the software" to interpret the images.

    Simply put, artificial intelligence in radiology means computers understanding, interpreting, and labelling medical diagnostic images after learning from examples. Traditionally, we have been accustomed to providing complex inputs to computers and expecting outputs. The trend here is "reverse training" of the computer, giving it human output first so that it can learn. The discussion about artificial intelligence has ranged from adding to the productivity of the radiologist and improving detection to outrageous statements about replacing the radiologist altogether. The roots of this belief lie in the view that radiology is largely a science of perception, that in due course these perceptual algorithms will become better than humans, and that we would therefore be better off training the algorithms than the human radiologist.

    The latter statement would literally mean a medical imaging world where "artificial intelligence powered radiology robots" work round the clock, 365 days a year, reporting the most complex radiological investigations accurately and at a breathtaking pace in cranky basements, without getting fatigued, distracted, or bored by the monotonous nature of the work, and without demanding leave or pay hikes. This would seem like the stuff dreams are made of to the corporate sector investing in healthcare, and also to the general population, who would get their reports within minutes of the test or even as soon as they walk out of the gantry, much like the paid weighing machines where you stand, put in a token, and get your weight almost as soon as you step down. All this sounds straight out of the science fiction movies. If such magical results can be obtained by using this technology, it merits a serious discussion in which the pros and cons of the system are analyzed.

    The pros of such an approach to imaging would include dramatically improving the skewed ratio of scans to available radiologists. It would help clear the backlog of scans, the most notable of which is the quintessential chest X-ray, with its enormous daily numbers and huge unreported backlogs in almost all radiology departments. One of the most important ways to clear the backlog would be "triaging," where the artificial intelligence software decides which scans should be at the top of the radiologist's reporting list and raises alarms on scans with critical findings that warrant immediate confirmation by the attending radiologist.
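    To make the triaging idea concrete, the following is a minimal sketch in Python of how such a worklist re-ordering might look. The scores, threshold, and study identifiers are purely illustrative assumptions and do not describe any vendor's actual software.

        # Minimal sketch of AI-based triaging: scans are re-ordered by a
        # (hypothetical) model-assigned criticality score, and high-scoring
        # studies are flagged for immediate radiologist review.
        from dataclasses import dataclass, field
        import heapq

        CRITICAL_THRESHOLD = 0.85  # assumed cut-off for raising an alarm

        @dataclass(order=True)
        class Scan:
            priority: float                      # negated score; heap pops highest score first
            study_id: str = field(compare=False)

        def triage(worklist):
            """worklist: iterable of (study_id, ai_score) pairs from the AI software."""
            queue, alarms = [], []
            for study_id, score in worklist:
                heapq.heappush(queue, Scan(priority=-score, study_id=study_id))
                if score >= CRITICAL_THRESHOLD:
                    alarms.append(study_id)      # notify the attending radiologist
            ordered = [heapq.heappop(queue).study_id for _ in range(len(queue))]
            return ordered, alarms

        ordered, alarms = triage([("CXR-001", 0.12), ("CXR-002", 0.91), ("CXR-003", 0.55)])
        print(ordered)  # ['CXR-002', 'CXR-003', 'CXR-001']
        print(alarms)   # ['CXR-002']

    The point of the sketch is that the reporting order, and therefore the delay any individual patient experiences, depends entirely on the model's score, which is exactly why a case "missed" by the software falls to the bottom of the list.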

    In due course, as these systems evolve and become reliable, they may help not only in the training of radiologists but also in assisting residents and consultants during tough on-call duties. In the future, as confidence in these systems grows, they may be used for cross-checking reports, adding to quality control and assurance.

    To understand the system better, we need to understand the development process and the workflow of these algorithms. The "reverse training" used here relies on "small focused datasets" covering the pathologies the developer is working on. The software is trained to recognize the pathology through pixel- and voxel-level analysis of the images and then generates heat maps of the areas of pathology. Results comparing the software with the radiologist are analyzed to refine the algorithm, which is subsequently applied in general imaging practice to larger and more diverse patient populations in hospitals and scanning centers.
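    For readers curious about what this workflow looks like in practice, here is a toy sketch in Python (using the PyTorch library) of training a small convolutional network on radiologist-labelled images and deriving a crude gradient-based "heat map." Every detail (the network, the image sizes, the saliency method) is an illustrative assumption, not a description of any commercial product.

        # Toy illustration of the development workflow described above, not a clinical system:
        # a small CNN is trained on a "small focused dataset" of labelled images, and a crude
        # "heat map" is produced from input gradients (saliency).
        import torch
        import torch.nn as nn

        class TinyCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(16, 2)   # e.g. "pathology" vs "normal"

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = TinyCNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # "Reverse training": human-labelled examples (random stand-ins here).
        images = torch.randn(32, 1, 64, 64)          # 32 single-channel images
        labels = torch.randint(0, 2, (32,))          # labels supplied by radiologists

        for epoch in range(5):                       # tiny training loop
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

        # A crude per-pixel "heat map": the gradient of the predicted class score
        # with respect to the input highlights the regions that influenced it.
        test_image = torch.randn(1, 1, 64, 64, requires_grad=True)
        model(test_image)[0].max().backward()
        heat_map = test_image.grad.abs().squeeze()   # 64x64 saliency map
        print(heat_map.shape)                        # torch.Size([64, 64])

    Everything downstream, from the advertised detection rate to the quality of the heat map, is bounded by how representative the labelled examples are, which is the selection-bias problem discussed next.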

    This basic system has its own share of limitations and flaws. Data curation based on "small focused datasets" carries an inherent selection bias, as the researcher may purposefully select the more obvious cases of the pathology to inflate detection numbers during testing. Second, the expertise and experience of the radiologist who labels the positive X-rays used to train the software are entirely at the discretion of the researcher, who will often opt for a more affordable option rather than engaging expert specialty radiologists. Then comes bias at the testing level, where comparative results are reported without revealing the experience and specialty expertise of the "average radiologist" used for the comparison.

    The next inherent flaw is the focus on one or a few particular findings: artificial intelligence research will boast of detection rates of over 90% for pneumothorax, pneumonia, pleural effusion, and so on. The radiologist of today looks for a number of pathologies simultaneously and then puts things into clinical perspective by going through the relevant history, investigations, and clinical feedback. The outrageous statements made by some artificial intelligence developers (mostly from technology companies, and sometimes general physicians with no radiology background) about replacing radiologists are far-fetched: this software may perform better under its "controlled test conditions," but none of it is equipped to detect all findings and then integrate everything into a cohesive diagnosis. Perhaps they should realize that nothing performs as well in the real world as under testing conditions, just like the advertised mileage of cars.

    People are happy to point fingers at radiologists for missing a finding, but the important point is that the same radiologist who missed one finding in a case probably picked up many others in the same case. The advertised accuracy of artificial intelligence algorithms is pathology-based, so an algorithm claiming more than 95% pneumonia detection says nothing about detecting pneumothoraces or pericardial effusions. Until there is a "holistic, all-encompassing" artificial intelligence algorithm, there can be no comparison with a radiologist. All this propaganda about replacing the radiologist has certainly generated a lot of interest and funding in artificial intelligence worldwide, but the same misinformation may reduce enthusiasm among students for opting for radiology.

    After discussing the flaws in the system, it is imperative that we also discuss the hypothetical scenario in which a truly "all-encompassing" artificial intelligence solution does exist, one that can diagnose most if not all pathologies and replace the radiologist. An important issue that then needs to be addressed is: if the "artificial intelligence robots" take over, who will do the patient-facing tasks? These range from simple ones, such as modifying the technical parameters of a scan to suit the patient, performing ultrasound and fluoroscopic studies, obtaining the relevant history, sometimes performing a clinical examination, and discussing reports with patients, to more complex ones, such as performing guided vascular and nonvascular interventions and treating disease (ablations, embolization). This need for humans also extends to consulting with physicians, helping them develop a road map for their surgeries, participating in multidisciplinary tumor boards, and continuing medical education. In fact, we would need another generation of artificial intelligence powered robots to replace the entire subspecialty of interventional radiology.

    Then comes the most important issue: responsibility. If a radiologist makes an error, he or she is accountable, and due process of law can be followed. The pertinent question that needs an answer from the artificial intelligence developers is whether any software is good enough never to "miss" anything, and, if there is a "miss," who is responsible: the software developer, the institute administration, or the treating physicians who follow the results to plan their treatment? All software comes with disclaimers in fine print. In other words, who will sign the reports and take responsibility? Commercially motivated owners may simply hire one or two radiologists and pressure them to sign and take responsibility for enormous volumes of artificial intelligence generated reports, reaping huge monetary gains in the process.

    If radiologists stand firm and sign only after a "second reading" of all cases, turnaround times may actually increase: the machine can report its findings and be done, but humans still have to make an interpretation. They will have to not just cross-check the machine's findings but also look for new ones, as no software can claim to pick up every finding that exists. Moreover, the radiologists of today report even technically inadequate scans, such as those of debilitated patients who cannot hold their breath or patients at extremes of body habitus; the machines will simply have no answer when their technical criteria are not met. With a diverse patient population, such situations would not be uncommon.

    The next important issue is trust. When a machine sends out a report, will patients simply trust it and undergo medical and surgical interventions based on the results? Patients invariably come and ask the doctor what the report means, what the alternative diagnoses could be, and what treatments can be planned based on it. Won't patients come back and ask for a second read if there are serious or sinister findings? Today, "radiation protection" is tailored by the radiologist on a case-by-case basis, optimizing the balance between radiation exposure and reasonable image quality; whether the artificial intelligence robots will be able to do this is another question that needs an answer. The issue of trust will also apply to our physician colleagues, who will invariably come to see and discuss the images for themselves. And then the biggest hurdle will come from the medical insurance companies, which, with their commercial interests, will have to pay out based on machine findings. How the artificial intelligence algorithms will win their trust, without encouraging lawsuits in which human reads are requested, is another story.

    Then we come to the scenario of artificial intelligence assisting the radiologist. This would firstly mean generating heat maps of pathologies for the radiologist to look at, which brings to the table the issue of distraction. It is human nature that when somebody shows us a finding, we tend to be less inquisitive and take a passive approach. We have all heard that if attention is fixed on one finding, such as a liver abscess, one may altogether miss an asymptomatic ureteric calculus. Whatever findings the artificial intelligence algorithm misses are then also more likely to be missed by the radiologist. The second issue concerns "triaging" and changing examination priorities for reporting. Since artificial intelligence software also "misses," the concern that needs serious thought is that patients missed by artificial intelligence would fall further behind, not just in their diagnoses but also in their treatment. Obviously, the overall effect of triaging would be positive, but when we are dealing with human lives we cannot take chances. For all the talk about errors made by radiologists and physicians, error rates do not reach 30–40% of cases and in most practices may be around 10%. But even 10% is a high number where human lives are concerned; so can the artificial intelligence developers come forward and tell us their error rates, not just for missing a given pathology but for "potentially missing any finding" in the patient?

    Talk of replacing the radiologist without addressing the above issues is short-sighted. The only thing the current state (and perhaps the future) of artificial intelligence can do is assist the radiologist, not guide or replace him or her. This might even mean triaging only by highlighting the cases with a critical finding while keeping the overall order of cases unchanged. Similarly, radiologists need to come forward and point out where artificial intelligence algorithms may be useful in assisting them. One potential area is mining patient histories and investigations for relevant findings. Artificial intelligence can help radiologists rapidly analyze images and data registries, better understand the patient's condition, increase their clinical role, and become part of the core management team. The question, in fact, is not about replacing radiologists; artificial intelligence has the potential to improve radiologists' capabilities, efficiency, and accuracy, and to improve patient outcomes by intelligently protocolling imaging equipment to reduce unnecessary imaging studies.

    We may have a technological revolution, and perhaps the radiologist may eventually be replaced, but that is not a danger worth preparing for by any radiologist living in 2018. After all, the world's best airplanes with automatic navigation systems still have at least two "human pilots" on board. Anything that deals with human lives simply cannot be entrusted to machines or algorithms alone. We have to understand that we do not treat the image and its findings, but the patient.



    Conflict of Interest

    There are no conflicts of interest.


    Publication History

    Article published online:
    26 July 2021

    © 2018. Indian Radiological Association. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial-License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).

    Thieme Medical and Scientific Publishers Private Ltd.
    A-12, Second Floor, Sector -2, NOIDA -201301, India