
DOI: 10.1055/s-0042-1746485
Fully automated analysis of the inner ear: joint segmentation and anatomical landmark detection from 3D-CT data with deep neural networks
Introduction Joint semantic segmentation and landmark detection in 3D-CT data is a promising technique for the rapid generation of a patient-specific analysis of the inner ear with respect to volume, cochlear duct length, and cochlear axis orientation.
Methods The two central functions of the automated algorithm – segmentation and regression of landmark coordinates – were implemented in a customized fully convolutional network (FCN), following the multi-task learning (MTL) paradigm. To this end, a pipeline managing preprocessing, augmentation, and visualization during the training and prediction phases was created. From 44 manually segmented and landmark-labelled datasets (voxel edge length 99 µm) from cadaveric temporal bones, 39 were selected as the training set and 5 as the validation set. A separate test set comprised 10 clinical CT datasets. After training, the FCN was evaluated on the 5 + 10 data elements from the validation and test sets. Segmentation performance was assessed with the Dice score (DSC) and volumetric similarity (VSM). The regression was evaluated using the L2 norm between ground truth (GT) and predicted coordinates.
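The evaluation metrics named above have standard definitions: DSC is the normalized overlap of two binary masks, VSM compares only their volumes, and the landmark error is the Euclidean (L2) distance between coordinate vectors. A minimal sketch of these metrics (an illustrative re-implementation, not the authors' code; function names are ours):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def volumetric_similarity(pred, gt):
    """VSM: 1 - |V_pred - V_gt| / (V_pred + V_gt), comparing volumes only."""
    vp, vg = int(pred.sum()), int(gt.sum())
    return 1.0 - abs(vp - vg) / (vp + vg) if (vp + vg) else 1.0

def mean_landmark_error(pred_coords, gt_coords):
    """Mean L2 distance (voxel units) between predicted and GT landmarks."""
    return float(np.linalg.norm(np.asarray(pred_coords) - np.asarray(gt_coords), axis=1).mean())

# Toy example on a 3D volume
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True          # 8-voxel cube as "ground truth"
pred = gt.copy()
pred[1, 1, 1] = False             # prediction misses one voxel

dsc = dice_score(pred, gt)        # 2*7 / (7+8) ≈ 0.933
vsm = volumetric_similarity(pred, gt)
err = mean_landmark_error([[0.0, 0.0, 0.0]], [[3.0, 4.0, 0.0]])  # 5.0
```

Note that VSM can be high even when DSC is low (equal volumes, poor overlap), which is why the paper reports both.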
Results On the previously unseen elements of the validation and test sets, the FCN produced segmentations with high overlap between GT and prediction (validation | test, mean(variance)): DSC = 0.965(15) | 0.932(10), VSM = 0.975(16) | 0.946(10). For the coordinate regression, a mean distance between prediction and GT of 3.6(3.1) | 4.4(1.4) voxel units was obtained.
Conclusion The proposed FCN framework was able to learn the inner ear morphology in the context of the posed MTL problem and thus represents a developmental step for the integration of rapid and automated techniques into clinical practice.
Publication History
Article published online:
May 24, 2022
© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Georg Thieme Verlag
Rüdigerstraße 14, 70469 Stuttgart, Germany