CC BY-NC-ND 4.0 · Endosc Int Open 2022; 10(02): E171-E177
DOI: 10.1055/a-1675-1941
Original article

Deep learning and colon capsule endoscopy: automatic detection of blood and colonic mucosal lesions using a convolutional neural network

Miguel Mascarenhas
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
3   Faculty of Medicine of the University of Porto, Porto, Portugal
,
Tiago Ribeiro
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
,
João Afonso
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
,
João P.S. Ferreira
4   Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
5   INEGI – Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal.
,
Hélder Cardoso
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
3   Faculty of Medicine of the University of Porto, Porto, Portugal
,
Patrícia Andrade
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
3   Faculty of Medicine of the University of Porto, Porto, Portugal
,
Marco P.L. Parente
4   Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
5   INEGI – Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal.
,
Renato N. Jorge
4   Department of Mechanical Engineering, Faculty of Engineering of the University of Porto, Porto, Portugal
5   INEGI – Institute of Science and Innovation in Mechanical and Industrial Engineering, Porto, Portugal.
,
Miguel Mascarenhas Saraiva
6   ManopH Gastroenterology Clinic, Porto, Portugal
,
Guilherme Macedo
1   Department of Gastroenterology, São João University Hospital, Porto, Portugal
2   WGO Gastroenterology and Hepatology Training Center, Porto, Portugal
3   Faculty of Medicine of the University of Porto, Porto, Portugal
 

Abstract

Background and study aims Colon capsule endoscopy (CCE) is a minimally invasive alternative to conventional colonoscopy. However, CCE produces long videos, making its analysis time-consuming and prone to errors. Convolutional neural networks (CNN) are artificial intelligence (AI) algorithms with high performance levels in image analysis. We aimed to develop a deep learning model for automatic identification and differentiation of significant colonic mucosal lesions and blood in CCE images.

Patients and methods A retrospective multicenter study including 124 CCE examinations was conducted for the development of a CNN model, using a database of anonymized CCE images from patients with normal colonic mucosa, several types of mucosal lesions (erosions, ulcers, vascular lesions, and protruding lesions), and luminal blood. For CNN development, 9,005 images (3,075 normal mucosa, 3,115 blood, and 2,815 mucosal lesions) were ultimately extracted. Two image datasets were created and used for CNN training and validation.

Results The mean (standard deviation) sensitivity and specificity of the CNN were 96.3 % (3.9 %) and 98.2 % (1.8 %), respectively. Mucosal lesions were detected with a sensitivity of 92.0 % and a specificity of 98.5 %. Blood was detected with a sensitivity and specificity of 97.2 % and 99.9 %, respectively. The algorithm was 99.2 % sensitive and 99.6 % specific in distinguishing blood from mucosal lesions. The CNN processed 65 frames per second.

Conclusions This is the first CNN-based algorithm to accurately detect and distinguish colonic mucosal lesions and luminal blood in CCE images. AI may improve the diagnostic and time efficiency of CCE exams, thus facilitating the adoption of CCE in routine clinical practice.



Introduction

Capsule endoscopy (CE) has revolutionized the investigation of suspected small bowel disease. Colon capsule endoscopy (CCE) was developed as a minimally invasive alternative to conventional colonoscopy for the detection of colorectal disease, particularly in the setting of colorectal cancer screening [1]. This diagnostic tool overcomes some of the drawbacks associated with colonoscopy, including the potential for pain, the use of sedation, and the risk of bleeding and perforation [2]. CCE represents a viable alternative for patients with a previous incomplete colonoscopy, or for whom colonoscopy is contraindicated, unfeasible, or unwanted [3]. Nevertheless, a single CCE examination may produce up to 50,000 images, the review of which is burdensome, requiring approximately 50 minutes for completion [3]. Abnormal findings may be restricted to a very small number of frames, and the risk of overlooking important lesions is significant [3].

Automatic image analysis using artificial intelligence (AI) tools has received much attention in recent years. Convolutional neural networks (CNN) are multi-layered algorithms designed for automatic image analysis and have shown high performance levels in diverse medical fields [4] [5] [6]. Application of these technologies to endoscopic imaging, particularly in CE, has produced exciting results [7]. These technological advances in the evaluation of CE images may improve diagnostic efficiency and optimize the reading process, including its time cost, which constitutes one of its main drawbacks [8].

Most studies regarding CCE applications focus on the detection of colorectal neoplasia [9]. Nevertheless, the potential of CCE in other clinical settings, including the assessment of disease extent and severity in inflammatory bowel disease (IBD) patients (particularly those with ulcerative colitis), has been suggested [10] [11]. Thus, the identification of findings other than protruding lesions (including vascular lesions, ulcers/erosions and blood content) is clinically relevant. Enhanced reading of CCE images using AI tools may improve the diagnostic rate of these lesions. Nevertheless, the development and testing of deep learning algorithms in this context has scarcely been reported. Therefore, we aimed to develop and test a CNN-based algorithm for the automatic detection of colonic mucosal lesions and luminal blood or hematic vestiges in CCE exams.



Material and methods

Study design

A multicenter study was performed for the development and validation of a CNN for automatic detection of colonic mucosal lesions and luminal blood/hematic residues. CCE images were retrospectively collected from two institutions: São João University Hospital (Porto, Portugal) and ManopH Gastroenterology Clinic (Porto, Portugal). One hundred twenty-four CCE exams (124 patients), performed between 2010 and 2020, were included. Data retrieved from these examinations were used for the development, training, and validation of a CNN-based identification model. The full-length video of all participants was reviewed (total number of frames: 3,387,259), and 9,005 images of the colonic mucosa were ultimately extracted. Inclusion and classification of frames were performed by three gastroenterologists with experience in CCE (MJMS, HC and MMS), each having reviewed more than 1,500 CE exams before the start of this study. A final decision on frame labelling required the agreement of at least two of the three researchers.

This study was approved by the ethics committee of São João University Hospital/Faculty of Medicine of the University of Porto (No. CE 407/2020) and was conducted in accordance with the Declaration of Helsinki. The study was retrospective and non-interventional. Any information deemed to potentially identify the subjects was omitted, and each patient was assigned a random number to guarantee effective data anonymization. A team with Data Protection Officer (DPO) certification (Maastricht University) confirmed the non-traceability of the data and its conformity with the General Data Protection Regulation (GDPR).



Colon capsule endoscopy system

All procedures were conducted using the PillCam COLON 2 system (Medtronic, Minneapolis, Minnesota, United States). This capsule has two high-resolution cameras with a combined 344° field of view and an adjustable frame rate ranging from 4 to 35 frames per second [3]. This system was launched in 2009, and no hardware modifications were introduced during the timespan of the included CCE exams; therefore, image quality remained unaltered during this period. Images were reviewed using PillCam software v9 (Medtronic, Minneapolis, Minnesota, United States).

Each patient received bowel preparation according to previously published guidelines [12]. Briefly, patients started a clear liquid diet on the day preceding capsule ingestion and fasted the night before the examination. Bowel preparation consisted of 4 liters of polyethylene glycol solution in a split-dose regimen (2 L in the evening and 2 L in the morning of capsule ingestion). Prokinetic therapy (10 mg domperidone) was used if the capsule remained in the stomach 1 hour after ingestion. Two boosters of sodium phosphate solution were administered after the capsule had entered the small bowel, separated by a 3-hour interval.



Development of the CNN

A deep learning model was constructed to automatically identify and classify images into three categories: normal colonic mucosa; blood or hematic residues within the colonic lumen; and colonic mucosal lesions. The latter category included ulcers, erosions, vascular lesions (red spots, angiectasia, and varices) and protruding lesions (e. g., polyps, epithelial tumors, submucosal tumors, nodes). Of the collected pool of images (n = 9,005), 3,075 contained normal colonic mucosa, 3,115 showed luminal blood or hematic residues, and 2,815 showed mucosal lesions. This pool of images was split to constitute training and validation datasets. The training dataset was composed of the first 80 % of the consecutively extracted images (n = 7,204); the remaining 20 % formed the validation dataset (n = 1,801). The performance of the CNN was assessed using the validation dataset. A flowchart summarizing the study is presented in [Fig. 1].

Fig. 1 Study flowchart for the training and validation phases.
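The 80 %/20 % split described above is consecutive rather than random. The following is a minimal sketch of such a partition, assuming the extracted frames are kept in their extraction order; the helper name and variables are illustrative, not the authors' code.

# Minimal sketch of the consecutive 80 %/20 % split described above (no shuffling).
# `frames` stands for the 9,005 extracted images kept in extraction order.
def split_consecutive(frames, train_fraction=0.8):
    cutoff = int(len(frames) * train_fraction)
    return frames[:cutoff], frames[cutoff:]

train_frames, validation_frames = split_consecutive(list(range(9005)))
print(len(train_frames), len(validation_frames))  # 7204 1801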

The CNN was created using the Xception model with its weights pre-trained on ImageNet (a large-scale image dataset intended for the development of object recognition software). To transfer this learning to our data, we kept the convolutional layers of the model, removed its last fully connected layers, and attached fully connected layers sized to the number of classes used to classify our endoscopic images. We used two blocks, each consisting of a fully connected layer followed by a dropout layer with a drop rate of 0.3. Following these two blocks, we added a dense layer with a size equal to the number of categories to classify (three). The learning rate (0.0001), batch size (16), and number of epochs (100) were set by trial and error. We used TensorFlow 2.3 and Keras libraries to prepare the data and run the model. The analyses were performed on a computer equipped with a 2.1 GHz Intel Xeon Gold 6130 processor (Intel, Santa Clara, California, United States) and a double NVIDIA Quadro RTX 4000 graphics processing unit (NVIDIA Corporation, Santa Clara, California, United States).
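A minimal TensorFlow/Keras sketch consistent with this description is shown below. The Xception backbone with ImageNet weights, the two dense-plus-dropout (0.3) blocks, the three-way output, and the training hyperparameters are taken from the text; the input size, dense-layer widths, optimizer, and loss function are assumptions not reported in the paper.

# Minimal sketch of the transfer-learning setup described above (TensorFlow 2.x / Keras).
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 3  # normal mucosa, blood/hematic residues, mucosal lesions

# Convolutional base of Xception pre-trained on ImageNet, without its classifier.
base = keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3), pooling="avg"
)

# Two fully connected blocks (dense + dropout 0.3), then a 3-way softmax output.
x = base.output
for units in (512, 128):                       # layer widths are assumptions
    x = layers.Dense(units, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = keras.Model(inputs=base.input, outputs=outputs)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # optimizer type assumed
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training call; train_ds and val_ds would be pipelines of (image, one-hot label)
# batches of size 16 built from the extracted frames.
# model.fit(train_ds, validation_data=val_ds, epochs=100)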



Model performance and statistical analysis

The primary outcome measures included sensitivity, specificity, positive and negative predictive values, and accuracy in differentiating between images showing colonic mucosal lesions, blood/hematic residues, and normal findings. Moreover, we used receiver operating characteristic (ROC) curve analysis and the area under the ROC curve (AUROC) to measure the performance of our model in distinguishing the three categories. The network’s classification was compared with the specialists’ analysis, the latter being considered the gold standard. A subgroup analysis was performed to assess the sensitivity of the network for the detection of each group of lesions classified as mucosal abnormalities.

In addition, the image processing performance of the network was determined by calculating the time required for the CNN to provide output for all images in the validation image dataset.

For each image, the trained CNN calculated the probability for each of the three categories, with a higher probability translating into greater confidence in the CNN prediction. The category with the highest probability score was output as the CNN’s predicted classification ([Fig. 2]). Sensitivities, specificities, and positive and negative predictive values are presented as means and standard deviations (SD). ROC curves were plotted graphically. Statistical analysis was performed using scikit-learn v0.22.2 [13].
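As an illustration of this evaluation scheme, the sketch below computes one-vs-rest sensitivity, specificity, predictive values, and AUROC for each category with scikit-learn, taking the highest-probability class as the prediction. Variable names and the exact function layout are assumptions, not the authors' code.

# Minimal sketch of the per-class evaluation described above, assuming the
# expert labels and the CNN softmax probabilities are available as arrays.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

CLASSES = ["normal", "blood", "mucosal_lesion"]

def one_vs_rest_metrics(y_true, y_prob):
    """y_true: integer labels, shape (n,); y_prob: softmax scores, shape (n, 3)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = y_prob.argmax(axis=1)              # highest probability wins
    results = {}
    for k, name in enumerate(CLASSES):
        t = (y_true == k).astype(int)
        p = (y_pred == k).astype(int)
        tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
        results[name] = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "auroc": roc_auc_score(t, y_prob[:, k]),
        }
    return results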

Fig. 2 a Heatmaps and b output obtained from the application of the convolutional neural network. a Examples of heatmaps showing detection of blood and a protruding lesion as identified by the CNN. b The bars represent the probability estimated by the network. The finding with the highest probability was outputted as the predicted classification. A blue bar represents a correct prediction. Red bars represent an incorrect prediction. The gold standard classification (specialists’ consensus) is reported between brackets. N – normal mucosa; B – blood; ML – mucosal lesions.


Results

Construction of the network

A total of 124 patients underwent CCE and were enrolled in this study. The full image dataset comprised 9,005 frames, of which 1,801 (20 %) were used as the validation dataset. This subset of images comprised 623 (34.6 %) images with evidence of blood or hematic residues, 563 (31.3 %) images with mucosal lesions, and 615 (34.1 %) images with normal mucosa. The CNN demonstrated increasing levels of accuracy as data were repeatedly input into its multi-layer architecture ([Fig. 3]).

Fig. 3 Evolution of the accuracy of the convolutional neural network during training and validation phases, as the training and validation datasets were repeatedly inputted in the neural network.


Overall performance of the network

The results are summarized in [Table 1]. Overall, the mean (SD) sensitivity and specificity of the CNN were 96.3 % (3.9 %) and 98.2 % (1.8 %), respectively. The network provided accurate predictions in 97.6 % (1.9 %) of cases. The positive and negative predictive values were 96.4 % (3.3 %) and 98.2 % (1.7 %), respectively ([Table 2]).

Table 1

Confusion matrix of the automatic detection versus expert classification.

                          Expert classification
CNN classification        Normal    Blood    Mucosal lesions
Normal                       597        1                 43
Blood                          0      621                  2
Mucosal lesions               18        1                518

CNN – convolutional neural network; normal – normal colonic mucosa; blood – blood or hematic residues.

Table 2

CNN performance for detection and differentiation of normal colon mucosa, free blood and several mucosal lesions.

                          Sensitivity     Specificity     PPV             NPV
Overall (mean % ± SD)     96.3 ± 3.9      98.2 ± 1.8      96.4 ± 3.3      98.2 ± 1.7
ML vs. all, %             92.0            98.5            96.4            96.4
Blood vs. all, %          99.5            99.8            99.7            99.8
Normal vs. all, %         97.1            96.3            93.1            98.4
ML vs. Normal, %          92.3            97.1            96.6            93.3
Blood vs. ML, %           99.8            99.6            99.7            99.8
Blood vs. Normal, %       99.8            100.0           100.0           99.8

CNN – convolutional neural network; blood – blood or hematic residues; normal – normal mucosa; ML – mucosal lesions; SD – standard deviation; PPV – positive predictive value; NPV – negative predictive value.
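As a consistency check, the class-wise figures in [Table 2] can be reproduced from the confusion matrix in [Table 1]. A short worked example for the mucosal-lesion (ML) class is shown below, using only the counts reported in [Table 1].

# One-vs-rest sensitivity and specificity for the mucosal-lesion (ML) class,
# computed from the counts in Table 1.
tp = 518                       # expert ML, predicted ML
fn = 43 + 2                    # expert ML, predicted normal or blood
fp = 18 + 1                    # expert normal or blood, predicted ML
tn = (597 + 1) + (0 + 621)     # remaining non-ML frames not labelled ML

sensitivity = tp / (tp + fn)   # 518 / 563   ≈ 0.920
specificity = tn / (tn + fp)   # 1219 / 1238 ≈ 0.985
print(round(sensitivity, 3), round(specificity, 3))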



CNN performance for the detection of mucosal lesions and blood

The analysis of the performance of the CNN revealed a sensitivity of 92.0 % and a specificity of 98.5 % for the detection of mucosal lesions ([Table 2]). The AUROC was 0.99 (95 % CI 0.98–1.00) ([Fig. 4]). Blood and hematic residues were detected with a sensitivity and specificity of 99.5 % and 99.8 % ([Table 2]), respectively, with an AUROC of 1.00 (95 % CI 0.99–1.00) ([Fig. 4]). Classification as normal mucosa occurred with a sensitivity and specificity of 97.1 % and 96.3 % ([Table 2]), respectively, and an AUROC of 1.00 (95 % CI 0.99–1.00) ([Fig. 4]).

Fig. 4 ROC analyses of the network’s performance in the detection of normal mucosa, blood and colon mucosal lesions. ROC – receiver operating characteristic.

Our model was able to differentiate blood/hematic vestiges from normal mucosa with a sensitivity of 99.8 % and specificity of 100.0 %. Mucosal lesions were distinguished from normal mucosa with 92.3 % sensitivity and 97.1 % specificity. Additionally, for the distinction between blood versus mucosal lesions, the CNN was 99.8 % sensitive and 99.6 % specific ([Table 2]).



Subgroup analysis of images with mucosal lesions

The subset of images showing mucosal lesions in the validation dataset (n = 553) was further analyzed to assess the detection rate of each individual subgroup of lesions. This subset comprised 329 images of protruding lesions, 188 images of ulcers and erosions, and 35 images of vascular lesions. The network accurately detected 293 of 329 protruding lesions (89.1 %), 178 of 188 ulcers and erosions (94.7 %), and 30 of 35 vascular lesions (85.7 %).



Computational performance of the CNN

The CNN completed the reading of the validation dataset in 28 seconds, at a rate of approximately 65 frames per second (0.015 seconds per image).
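For reference, these figures follow directly from the size of the validation dataset and the elapsed time, as the short calculation below illustrates (values taken from the text; the 50,000-frame projection anticipates the estimate given in the Discussion).

# Throughput implied by the validation run (figures from the text).
validation_frames = 1801
elapsed_seconds = 28
frames_per_second = validation_frames / elapsed_seconds   # ≈ 64.3, i.e. ~65 fps
seconds_per_frame = elapsed_seconds / validation_frames   # ≈ 0.0155 s per image

# Projected reading time for a full-length CCE video of ~50,000 frames.
print(round(50_000 / frames_per_second / 60, 1))           # ≈ 13 minutes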



Discussion

The use of CNNs has provided promising results for image analysis in gastroenterology and hepatology [14] [15] [16]. In recent years, significant interest has been devoted to the application of this technology in CE. Recent studies have demonstrated high diagnostic performance of CNN-based models for small bowel CE, including for the detection of ulcers and erosions, celiac disease, vascular lesions, blood content, and protuberant lesions [17] [18] [19] [20] [21] [22]. Nevertheless, these tools have scarcely been explored in CCE.

We developed an accurate CNN model capable of detecting and distinguishing colonic mucosal lesions as well as blood/hematic residues in CCE images. Several aspects of this work are worth highlighting. First, this multicenter study is the first to evaluate the performance of a CNN for the detection of a wide array of findings, specifically blood and multiple subtypes of colonic lesions, in CCE images. Second, our algorithm demonstrated high performance levels in the detection and differentiation of these pathologic colonic findings. In particular, it was highly sensitive for the detection of mucosal lesions and blood/hematic residues, which is paramount for CNN-assisted reading, as it lessens the probability of missed lesions. Third, our network had high image processing performance, with an approximate reading rate of 65 frames per second, which is superior to that of most studies published so far [18] [19] [22].

The role of CCE in everyday clinical practice has not yet been completely established. So far, most studies have focused on colorectal cancer screening and polyp detection. The most common indications for CCE are previous incomplete colonoscopy and unwillingness to undergo, or contraindications to, conventional colonoscopy [12]. Data have shown that noninvasive CCE has an acceptable diagnostic performance and could be viewed as a complement to, rather than a substitute for, gold-standard colonoscopy [23]. CCE has been shown to outperform other noninvasive colorectal neoplasia screening tests, such as CT colonography [24]. Current guidelines place CCE as an alternative to colonoscopy for screening in the average-risk population [12]. Recent data reported higher uptake (an essential parameter in any population-based screening program) for CCE compared with conventional colonoscopy [25]. Moreover, when applied after a positive fecal immunochemical test, CCE may reduce the need for more invasive conventional colonoscopy [26]. However, CCE has significant drawbacks that limit its generalization, including the need for more rigorous bowel cleansing, technical limitations, and financial and time costs. CCE is not widely available, and most endoscopists are not familiar with reviewing CCE images. Acquiring expertise in reviewing CCE images is time-consuming and often performed in a non-standardized fashion [27]. Recent literature has reported promising results regarding the automatic detection of lesions in CE images; in contrast, evidence on the impact of deep learning techniques in CCE remains scarce. The introduction of AI-assisted CCE image review may enhance the acquisition of competence in CCE reading, thus shortening the learning curve for inexperienced gastroenterologists. This is particularly important in centers with a low volume of CCE exams. Therefore, we believe that the development of sensitive AI tools, as described in this work, has the potential to significantly enhance the diagnostic and time efficiency of CCE examinations, which may widen the indications for and acceptance of CCE. These tools may play a pivotal role in the widespread adoption of CCE, as the potential increase in CCE use driven by AI-assisted reading may ultimately offset its financial costs by decreasing the CCE system unit price and the time spent reviewing images.

The performance and impact of CNNs for automatic detection of colorectal lesions in CCE images has scarcely been evaluated. To our knowledge, only two other studies have addressed this issue [28] [29]; however, the scope of both was restricted to the detection of colorectal neoplasia. Blanes-Vidal et al. adapted a preexisting CNN (AlexNet) and reported a sensitivity of 97 %, a specificity of 93 %, and an overall accuracy of 96 % [28]. More recently, Yamada et al. developed a CNN-based algorithm for the detection of colorectal neoplasia; their network detected polyps and colorectal cancers with a sensitivity, specificity, and AUROC of 79 %, 87 %, and 0.90, respectively [29]. These studies pave the way for the development of AI systems that may assist in selecting patients requiring further exploration with conventional colonoscopy. Adding systems that predict the histologic features of lesions to those providing automatic detection will have a significant impact in identifying patients requiring subsequent colonoscopy, thus potentiating the role of CCE in routine endoscopic practice.

To date, no other study has reported a CNN-based deep learning model for the evaluation of multiple colonic lesions in CCE. Our network was capable of detecting and distinguishing a wide range of mucosal lesions and blood with high sensitivity and specificity. These technologies should be regarded as supportive rather than substitutive; therefore, such systems must remain highly sensitive to minimize missed lesions. On subgroup analysis, our network proved highly sensitive for the detection of ulcers/erosions as well as protruding lesions. The lower detection rate of vascular lesions may be explained by their frequently small size. Overall, we believe that our study broadens the potential of AI algorithms in CCE to the detection of lesions other than polyps.

Recently, the application of CCE for the assessment of the severity and extent of IBD, especially ulcerative colitis (UC), has generated much interest, although its clinical significance remains to be established. CCE was shown to have an 89 % sensitivity and 75 % specificity for determining the severity of UC [30]. Recent evidence reported a high degree of correlation between conventional colonoscopy and CCE findings in UC patients [31] [32]. Moreover, the ability of CCE to provide a pan-enteric evaluation may reveal small bowel abnormalities, which may ultimately have significant diagnostic and prognostic implications [33]. Likewise, CCE was shown to correlate well with conventional colonoscopy in the evaluation of the severity of colonic Crohn’s disease [34]. However, more generalized use of CCE is limited by its inability to obtain histologic samples. Our network demonstrated high performance levels for the detection of mucosal abnormalities, particularly ulcers and erosions, as well as blood content, which are common findings in IBD patients. Therefore, the development of CNNs and their introduction into clinical practice may potentiate the role of CCE in monitoring disease activity and extent in these patients. Furthermore, the application of automated tools to CCE may allow for time-efficient pan-enteric evaluation, ultimately facilitating the follow-up of patients with known or suspected IBD.

Our network demonstrated high computational performance, being capable of processing 65 images per second. At this rate, reviewing a full-length CCE video containing an estimated 50,000 frames would require approximately 13 minutes. No comparable value exists for CCE; nevertheless, our image processing rate outperformed those of other networks processing CE images [18] [22]. In the near future, these performance marks may translate into shorter reading times, thus overcoming one of the main drawbacks of CE. Further studies are required to assess whether increased computational power translates into enhanced reading time efficiency.

This study has several limitations. First, it is a retrospective study; further well-designed prospective studies in real-life settings are necessary to confirm the clinical value of our results. Second, although our model demonstrated high accuracy in the detection of mucosal abnormalities, it was not designed to distinguish between their subtypes. Third, although our network demonstrated high processing speed, we did not assess whether CNN-assisted image review reduces reading time compared with conventional reading. Finally, although a large pool of images was reviewed, the number of patients included in this study is small. Large multicenter studies are required to overcome this limitation.



Conclusions

AI is expected to play a large role in everyday medical practice in the future. We developed a CNN-based model capable of detecting colonic mucosal abnormalities and blood/hematic residues in CCE images, achieving high levels of accuracy and excellent computational performance. These results may lay the foundations for the application of this technology to CCE, thus improving its diagnostic and reading time efficiency and, ultimately, its acceptance.



Competing interests

The authors declare that they have no conflict of interest.

  • References

  • 1 Eliakim R, Fireman Z, Gralnek IM. et al. Evaluation of the PillCam Colon capsule in the detection of colonic pathology: results of the first multicenter, prospective, comparative study. Endoscopy 2006; 38: 963-970
  • 2 Niikura R, Yasunaga H, Yamada A. et al. Factors predicting adverse events associated with therapeutic colonoscopy for colorectal neoplasia: a retrospective nationwide study in Japan. Gastrointest Endosc 2016; 84: 971-982.e976
  • 3 Eliakim R, Yassin K, Niv Y. et al. Prospective multicenter performance evaluation of the second-generation colon capsule compared with colonoscopy. Endoscopy 2009; 41: 1026-1031
  • 4 Yasaka K, Akai H, Abe O. et al. Deep Learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: A preliminary study. Radiology 2018; 286: 887-896
  • 5 Esteva A, Kuprel B, Novoa RA. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115-118
  • 6 Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology 2017; 124: 962-969
  • 7 Soffer S, Klang E, Shimon O. et al. Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92: 831-839.e838
  • 8 Aoki T, Yamada A, Aoyama K. et al. Clinical usefulness of a deep learning-based system as the first screening on small-bowel capsule endoscopy reading. Dig Endosc 2020; 32: 585-591
  • 9 Hosoe N, Limpias Kamiya KJL, Hayashi Y. et al. Current status of colon capsule endoscopy. Dig Endosc 2020; DOI: 10.1111/den.13769.
  • 10 Herrerías-Gutiérrez JM, Argüelles-Arias F, Caunedo-Álvarez A. et al. PillCamColon Capsule for the study of colonic pathology in clinical practice. Study of agreement with colonoscopy. Rev Esp Enferm Dig 2011; 103: 69-75
  • 11 Shi HY, Chan FKL, Higashimori A. et al. A prospective study on second-generation colon capsule endoscopy to detect mucosal lesions and disease activity in ulcerative colitis (with video). Gastrointest Endosc 2017; 86: 1139-1146.e1136
  • 12 Spada C, Hassan C, Galmiche JP. et al. Colon capsule endoscopy: European Society of Gastrointestinal Endoscopy (ESGE) Guideline. Endoscopy 2012; 44: 527-536
  • 13 Pedregosa F, Varoquaux G, Gramfort A. et al. Scikit-learn: Machine Learning in Python. J Machine Learning Res 2011; 12: 2825-2830
  • 14 Urban G, Tripathi P, Alkayali T. et al. Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy. Gastroenterology 2018; 155: 1069-1078.e1068
  • 15 Wang K, Lu X, Zhou H. et al. Deep learning Radiomics of shear wave elastography significantly improved diagnostic performance for assessing liver fibrosis in chronic hepatitis B: a prospective multicentre study. Gut 2019; 68: 729-741
  • 16 Marya NB, Powers PD, Chari ST. et al. Utilisation of artificial intelligence for the development of an EUS-convolutional neural network model trained to enhance the diagnosis of autoimmune pancreatitis. Gut 2020; DOI: 10.1136/gutjnl-2020-322821.
  • 17 Aoki T, Yamada A, Aoyama K. et al. Automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc 2019; 89: 357-363.e352
  • 18 Aoki T, Yamada A, Kato Y. et al. Automatic detection of blood content in capsule endoscopy images based on a deep convolutional neural network. J Gastroenterol Hepatol 2020; 35: 1196-1200
  • 19 Leenhardt R, Vasseur P, Li C. et al. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest Endosc 2019; 89: 189-194
  • 20 Klang E, Barash Y, Margalit RY. et al. Deep learning algorithms for automated detection of Crohnʼs disease ulcers by video capsule endoscopy. Gastrointest Endosc 2020; 91: 606-613.e602
  • 21 Wang X, Qian H, Ciaccio EJ. et al. Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction. Comput Methods Programs Biomed 2020; 187: 105236
  • 22 Saito H, Aoki T, Aoyama K. et al. Automatic detection and classification of protruding lesions in wireless capsule endoscopy images based on a deep convolutional neural network. Gastrointest Endosc 2020; 92: 144-151.e141
  • 23 Spada C, Barbaro F, Andrisani G. et al. Colon capsule endoscopy: What we know and what we would like to know. World J Gastroenterol 2014; 20: 16948-16955
  • 24 Cash BD, Fleisher MR, Fern S. et al. Multicentre, prospective, randomised study comparing the diagnostic yield of colon capsule endoscopy versus CT colonography in a screening population (the TOPAZ study). Gut 2020; DOI: 10.1136/gutjnl-2020-322578.
  • 25 Groth S, Krause H, Behrendt R. et al. Capsule colonoscopy increases uptake of colorectal cancer screening. BMC Gastroenterol 2012; 12: 80
  • 26 Holleran G, Leen R, O'Morain C. et al. Colon capsule endoscopy as possible filter test for colonoscopy selection in a screening population with positive fecal immunology. Endoscopy 2014; 46: 473-478
  • 27 Watabe H, Nakamura T, Yamada A. et al. Assessment of an electronic learning system for colon capsule endoscopy: a pilot study. J Gastroenterol 2016; 51: 579-585
  • 28 Blanes-Vidal V, Baatrup G, Nadimi ES. Addressing priority challenges in the detection and assessment of colorectal polyps from capsule endoscopy and colonoscopy in colorectal cancer screening using machine learning. Acta Oncol 2019; 58: S29-s36
  • 29 Yamada A, Niikura R, Otani K. et al. Automatic detection of colorectal neoplasia in wireless colon capsule endoscopic images using a deep convolutional neural network. Endoscopy 2020; DOI: 10.1055/a-1266-1066.
  • 30 Sung J, Ho KY, Chiu HM. et al. The use of Pillcam Colon in assessing mucosal inflammation in ulcerative colitis: a multicenter study. Endoscopy 2012; 44: 754-758
  • 31 Hosoe N, Matsuoka K, Naganuma M. et al. Applicability of second-generation colon capsule endoscope to ulcerative colitis: a clinical feasibility study. J Gastroenterol Hepatol 2013; 28: 1174-1179
  • 32 Ye CA, Gao YJ, Ge ZZ. et al. PillCam colon capsule endoscopy versus conventional colonoscopy for the detection of severity and extent of ulcerative colitis. J Dig Dis 2013; 14: 117-124
  • 33 San Juan-Acosta M, Caunedo-Álvarez A, Argüelles-Arias F. et al. Colon capsule endoscopy is a safe and useful tool to assess disease parameters in patients with ulcerative colitis. Eur J Gastroenterol Hepatol 2014; 26: 894-901
  • 34 D’Haens G, Löwenberg M, Samaan MA. et al. Safety and feasibility of using the second-generation pillcam colon capsule to assess active colonic Crohn's disease. Clin Gastroenterol Hepatol 2015; 13: 1480-1486.e1483

Corresponding author

Miguel José da Quinta e Costa de Mascarenhas Saraiva, MD, MSc
Rua Oliveira Martins 104
Porto, 4200-427
Portugal   

Publication History

Received: 15 April 2021

Accepted: 21 September 2021

Article published online:
16 February 2022

© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

