Endoscopy 2019; 51(04): S4
DOI: 10.1055/s-0039-1681180
ESGE Days 2019 oral presentations
Friday, April 5, 2019 08:30 – 10:30: Artificial intelligence Club A
Georg Thieme Verlag KG Stuttgart · New York

AUTOMATED POLYP DIFFERENTIATION ON COLONOSCOPIC DATA USING SEMANTIC SEGMENTATION WITH CNNS

M Arlt 1, J Peter 2, S Sickert 1, CA Brust 1, J Denzler 1, A Stallmach 2

1   Friedrich-Schiller-University Jena, Computer Vision Group, Jena, Germany
2   University Hospital Jena, Gastroenterology, Hepatology, and Infectious Diseases, Jena, Germany

Publication History

Publication Date:
18 March 2019 (online)


    Aims:

    Interval carcinomas are a well-known problem in endoscopic adenoma detection, particularly when they follow a negative index colonoscopy. To protect patients from these carcinomas and to support the endoscopist, we are working towards a live assistance system that highlights polyps during the examination and increases the adenoma detection rate. Here we present our first results on polyp recognition using a machine learning approach.

    Methods:

    We apply convolutional neural networks for semantic segmentation of colonoscopic image data. In particular, we make use of fully convolutional networks [2], a state-of-the-art technique for segmentation tasks, and choose a modified ResNet18 [1] as the architecture. As input, we feed the network pairs consisting of the original image containing the polyp and a corresponding binary map in which polyp and background are encoded as two classes. After training, we evaluate how the network performs on previously unseen images; during this validation we measure the segmentation accuracy of the network.
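
    The sketch below illustrates this kind of setup: a fully convolutional model with a ResNet-18 encoder and a two-class pixel-wise output, trained against a binary polyp/background map. It is a minimal PyTorch example under assumed design choices (no pretrained weights, a single 1x1 classifier head, bilinear upsampling, torchvision >= 0.13), not the authors' implementation.

    # Minimal sketch of an FCN-style segmentation model with a ResNet-18 backbone.
    # All layer choices and training details are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet18


    class PolypFCN(nn.Module):
        """ResNet-18 encoder + 1x1 classifier head, upsampled to input resolution."""

        def __init__(self, num_classes: int = 2):
            super().__init__()
            backbone = resnet18(weights=None)  # assumption: trained from scratch
            # Drop the average pooling and fully connected layers, keep the conv stages.
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])
            self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)  # per-pixel logits

        def forward(self, x):
            h, w = x.shape[-2:]
            features = self.encoder(x)            # (B, 512, H/32, W/32)
            logits = self.classifier(features)    # (B, 2, H/32, W/32)
            return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)


    # One training step against a binary polyp/background map (class indices 0 and 1).
    model = PolypFCN()
    criterion = nn.CrossEntropyLoss()
    images = torch.randn(2, 3, 256, 256)              # placeholder colonoscopy frames
    masks = torch.randint(0, 2, (2, 256, 256))        # placeholder binary annotation maps
    loss = criterion(model(images), masks)
    loss.backward()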

    Results:

    Our experimental results demonstrate the overall feasibility of the task at hand, and we were able to show meaningful polyp recognition performance. For our experiments, we ran three different setups in which we optimized hyperparameters such as learning rate, batch size and regularization function. In the quantitative analysis of these experiments we reached a pixel-wise validation accuracy of 79%.
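
    For reference, pixel-wise accuracy can be computed as the fraction of pixels whose predicted class matches the annotated binary map; the helper below is a small illustrative sketch using the tensor shapes assumed in the model example above.

    import torch

    @torch.no_grad()
    def pixel_accuracy(logits: torch.Tensor, masks: torch.Tensor) -> float:
        """logits: (B, 2, H, W) network output; masks: (B, H, W) ground-truth class indices."""
        preds = logits.argmax(dim=1)              # per-pixel predicted class
        return (preds == masks).float().mean().item()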

    Conclusions:

    Given these promising accuracy results, we expect to achieve beneficial polyp detection rates.

    In our ongoing research we are implementing a problem-oriented pipeline that addresses the well-known clinical problem of scarce annotated image data. In future work we also aim to demonstrate generalizability and clinical applicability.

