CC BY-NC-ND 4.0 · Geburtshilfe Frauenheilkd 2021; 81(11): 1203-1216
DOI: 10.1055/a-1522-3029
GebFra Science
Review/Übersicht

The Use of Artificial Intelligence in Automation in the Fields of Gynaecology and Obstetrics – an Assessment of the State of Play

Jan Weichert 1, 2, Amrei Welp 1, Jann Lennard Scharf 1, Christoph Dracopoulos 1, Wolf-Henning Becker 2, Michael Gembicki 1

1 Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
2 Zentrum für Pränatalmedizin an der Elbe, Hamburg, Germany
 

Abstract

The long-awaited progress in digitalisation is generating huge amounts of medical data every day, and manual analysis and targeted, patient-oriented evaluation of these data is becoming increasingly difficult or even infeasible. This state of affairs and the associated, increasingly complex requirements of individualised precision medicine underline the need for modern software solutions and algorithms across the entire healthcare system. The utilisation of state-of-the-art equipment and techniques in almost all areas of medicine over the past few years has now indeed enabled automation processes to enter – at least in part – into routine clinical practice. Such systems use a wide variety of artificial intelligence (AI) techniques, the majority of which have been developed to optimise medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection and classification and, as an emerging field of research, radiogenomics. Tasks handled by AI are completed significantly faster and more precisely, as clearly demonstrated in the annual results of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which error rates have been well below those of humans since 2015. This review article discusses the potential capabilities and currently available applications of AI in gynaecological and obstetric diagnostics, focusing in particular on automated techniques in prenatal sonographic diagnostics.



Introduction

In 1997, the reigning world chess champion Garry Kasparov was defeated by a computer (Deep Blue). Public awareness of artificial intelligence (AI), however, predates this success: the first successful AI applications were developed significantly earlier and have since become an integral and accepted (often unrecognised) feature of our daily lives [4]. In surveys of AI experts from large companies, 79 percent of participants consider AI techniques to be strategically highly significant or even vital for sustainable business success. Put another way, artificial intelligence has become a mainstream technology worldwide in every industry, and competencies in its core technologies (machine learning including deep learning, natural language processing and computer vision) are nowadays indispensable for larger companies [5].

Although no single accepted definition of artificial intelligence exists, most experts would agree that AI as a technology refers to any machine or system that can perform complex tasks that would normally require human (or other biological) brain power [1], [2], [3]. Hence, the term artificial intelligence refers not to a single technology but to a family of applications in a wide range of fields ([Fig. 1]). Machine learning (ML) is one member of this family; it focuses on teaching computers to perform tasks with a predetermined goal without explicitly programming the rules for performing them. It can be regarded as a statistical method that continuously improves through exposure to increasing volumes of data. This allows such systems to progressively acquire the ability to correctly recognise objects in images, texts or acoustic data by searching for common properties and regularities from which patterns can ultimately be extracted.
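To make the principle concrete, the following minimal sketch (purely illustrative and not taken from the article; the toy data and the choice of classifier are assumptions) shows the core idea of supervised machine learning: no decision rules are programmed explicitly, they are instead inferred from labelled examples and then applied to previously unseen data.

```python
# Minimal sketch of supervised machine learning: the "rule" is never coded by hand;
# a classifier derives it from labelled training examples (toy data shown here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                   # 200 cases, 5 measured features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hidden pattern the model must discover

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)               # learning phase
print("accuracy on unseen data:", model.score(X_test, y_test))   # generalisation
```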

Fig. 1 Alan M. Turingʼs seminal 1950 paper on “machine” intelligence formed the conceptual basis for the introduction of the “Turing test” to ascertain whether a machine can be said to exhibit artificial intelligence. The development of artificial intelligence and its applications can be viewed in a temporal context: machine learning and deep learning are not merely related in name; deep learning is a modelling approach that enables, among other things, problems in modern fields such as image recognition, speech recognition and video interpretation to be solved significantly faster and with a lower error rate than might be feasible by humans alone [4], [47], [55].

Deep learning, the flagship discipline of ML, differs from other machine learning methods in that it no longer requires direct intervention by humans. Such learning processes are made possible by artificial neural networks (ANN), in particular convolutional neural networks (CNN), which consist of several convolutional layers, followed by pooling layers that aggregate the output of the filters and eliminate superfluous information ([Fig. 2]). In this way, the level of abstraction of a CNN increases with each of these filter levels. Developments in computer-aided signal processing and the expansion of computing power with the latest high-speed graphics processors now allow a virtually unlimited number of filter layers to be stacked within a CNN, which is therefore referred to as “deep” (in contrast to conventional “shallow” neural networks, which usually contain only a single hidden layer). Learning is an adaptive process in which the weighting of all interconnected neurons changes so as to ultimately achieve the optimal response (output) to all input variables. Neural networks can be trained with either supervised or unsupervised approaches. In the former, ML algorithms employ a pre-coded (annotated) data set to predict the desired outcome. In contrast, unsupervised approaches are supplied only with unlabelled input data in order to identify hidden patterns within them and, consequently, make novel predictions.

Fig. 2 Schematic design of a (feed-forward) convolutional network with two hidden layers. The source information is segmented and abstracted to achieve pattern recognition in these layers and ultimately passed on to the output layer. The capacity of such neural networks can be controlled by varying their depth (number of layers) and width (number of neurons/perceptrons per layer).
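As an illustration of the architecture sketched in [Fig. 2], the following minimal example (a sketch only; the grey-scale input size, layer widths and two-class output are assumptions and do not correspond to any of the tools cited here) stacks convolutional and pooling layers and performs one supervised training step on a labelled batch, assuming PyTorch is available.

```python
# Minimal sketch of a small convolutional network: stacked convolution + pooling
# layers ("filter levels" of increasing abstraction) followed by an output layer,
# trained in a supervised fashion on a pre-labelled batch (placeholder data).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1st convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                               # pooling discards superfluous detail
            nn.Conv2d(16, 32, kernel_size=3, padding=1),   # 2nd, more abstract filter level
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # output layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = SmallCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 224, 224)        # placeholder batch of grey-scale images
labels = torch.randint(0, 2, (8,))          # pre-coded (annotated) ground truth
loss = loss_fn(model(images), labels)
loss.backward()                             # adjust the weighting of connected neurons
optimiser.step()
```

Increasing the number of convolutional blocks (depth) or of filters per block (width) increases the capacity of the network, exactly as described in the figure caption.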

Since the early 2000s, deep learning networks have been successfully employed, for example, to recognise and segment objects and image content. AI-assisted voice control and speech recognition – for instance the Amazon Alexa, Google Home and Apple Siri voice assistants – are based on similar principles. Such technologies have a wide range of applications: their developers, for example, claim that it is now possible to use audio capture (“coughing” apps) to identify vocal patterns characteristic of COVID-19 [6]. Furthermore, automated evaluation of speech spectrograms can now be employed to identify vocal biomarkers for a variety of diseases, such as depression [6], [7].

In terms of healthcare, computer-assisted analysis of image data is undoubtedly one of the most significant areas of application of AI. In recent years, a new and rapidly developing field of research has emerged in this context which, under the umbrella term “radiomics”, aims to employ AI to systematically analyse imaging data from patients, characterising a large number of individual and distinct image features and examining them with regard to mutual correlation and clinical differentiation. The term “radiogenomics”, in turn, refers to a specialised application in which radiomic or other imaging features are linked to genomic profiles [8].



AI and Benefits for Gynaecological-Obstetric Imaging and Diagnostics

The initial hysteria that AI technologies could potentially replace clinical radiologists has now abated. In its place is an awareness that machine learning capabilities will enable personalised AI-based software algorithms with interactive visualisation and automated quantification to accelerate clinical decision-making and reduce analysis time. The uptake of AI in other clinical fields, however, is still rather modest or hesitant [9], [10], [11]. Computer-assisted diagnosis (CAD) systems have actually been in use for more than 25 years, in particular in breast diagnostics [12], [13]. Novel deep-learning algorithms are employed to optimise diagnostic capabilities both in mammography and in AI-supported reporting of breast ultrasound data sets, addressing the issues that have limited the use of conventional CAD systems (high development costs, general cost/workflow (in)efficiency, relatively high false-positive rates, restriction to certain lesions/entities) [14]. This is illustrated by a recent US-British study in which a CNN was trained on the basis of 76 000 mammography scans, resulting in a significant reduction of false-positive and false-negative findings by 1.2 and 2.7% (UK) and 5.7 and 9.4% (USA), respectively, compared with initial expert findings [15]. In addition, consistent AI support can also sustainably reduce workload by automatically identifying in advance normal screening results that would otherwise have required conventional assessment [16]. OʼConnell et al. have published similarly promising data on AI-assisted evaluation of breast ultrasound findings: in 300 patients, automated detection of breast lesions with a commercial diagnostic tool (S-Detect) using a set of BI-RADS descriptors agreed with the assessments of ten radiologists with appropriate expertise (sensitivity, specificity > 0.8) [17].

The advantages of deep learning algorithms have also been made explicitly evident in other application areas in our field. Cho et al., for example, developed and validated deep learning models to automatically classify cervical neoplasms on colposcopic images. The authors fine-tuned pre-trained CNNs for two scoring systems, the cervical intraepithelial neoplasia (CIN) system and the lower anogenital squamous terminology (LAST) system; the CNNs were capable of efficiently identifying biopsy-worthy findings (AUC 0.947) [18]. Shanthi et al. were able to classify microscopic cervical cell smears as normal, mild, moderate, severe or carcinomatous using various CNNs trained with augmented data sets, achieving accuracies of 94.1%, 92.1% and 85.1% for the original, contour-extracted and binary image data, respectively [19]. In the view of Försch et al., one of the main general obstacles to broader integration of AI algorithms into pathological assessment and the diagnosis of histomorphological specimens is that, at present, only a fraction of histopathological data is actually available in digital form and thus accessible for automated evaluation [20]. This situation still applies to the vast majority of potential clinical AI applications [1], [21].

Very comparable approaches have also been pursued over the last five years in reproductive medicine, where successful attempts have been made, among other things, to utilise AI technologies for embryo selection. These have involved training CNNs to make qualitative statements on the basis of image and/or morphokinetic data in order to predict the success of implantation [22], [23]. A study by Bori et al. involving more than 600 patients analysed not only the above-mentioned morphokinetic characteristics but also novel parameters such as the distance and speed of pronuclear migration, the area of the inner cell mass, the increase in blastocyst diameter and the length of the trophectoderm cell cycle. Of the four algorithms tested, the most efficient (AUC 0.77) was the one combining conventional morphokinetic features with the above-mentioned morphodynamic features, the latter being significantly more likely to differ between implanted and non-implanted embryos [24].

It is beyond dispute that, to date, only relatively few AI-based ultrasound applications have progressed fully from academic concept to clinical application and commercialisation. In addition to the importance of AI in prenatal diagnostics, discussed in the following paragraphs, the advantages of AI-based automated algorithms have been demonstrated very impressively in the reporting of gynaecological abnormalities, a task that is bound to gain in importance, particularly given the limited quality of existing ultrasound training [25], [26], [27]. Even though the first work in this field dates back more than 20 years [28], significant pioneering work has been conducted in the last decade in particular, not least owing to the extensive studies of the IOTA working group. Model analyses of risk quantification for sonographically detected adnexal lesions have demonstrated the extent to which, on the one hand, a standardised procedure for qualified assessment and, on the other, a multi-class risk model (IOTA ADNEX – Assessment of Different NEoplasias in the adneXa) validated on the basis of thousands of patient histories have made it possible to assess sonographic findings of adnexal processes precisely and reproducibly, thereby providing a significant boost to other study approaches (± AI) in this field [29], [30], [31]. The incorporation of the ADNEX model into a consensus guideline of the American College of Radiology (ACR) clearly supports these findings – a remarkable decision, as the US professional medical associations are traditionally considered sceptical of ultrasound across all disciplines [32]. In a recent study of the validity of two AI models for determining the character (benign/malignant) of adnexal lesions (trained on grey-scale and power Doppler images), Christiansen et al. demonstrated a sensitivity of 96% and 97.1%, respectively, and a specificity of 86.7% and 93.7%, respectively, with no significant differences compared with expert assessments [33]. The additional benefit of various ML classifiers, alone or in combination, has been investigated in several other approaches, which have likewise concluded that AI approaches will in future be able to identify more ovarian neoplasms and will be increasingly employed in their (early) detection [34], [35], [36], [37], [38]. In a recently published study, Al-Karawi et al. used ML algorithms (support vector machine classification) to investigate seven established image texture features in ultrasound still images which, according to the authors, can provide information about the altered cellular composition that accompanies carcinogenesis. By combining the features with the best test results, the researchers achieved an accuracy of 86 – 90% [39].
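The underlying principle of such a texture-based approach can be outlined as follows; this is a schematic sketch of the general idea, not the pipeline used by Al-Karawi et al., and the choice of grey-level co-occurrence features, the toy data and the cross-validation scheme are assumptions (scikit-image ≥ 0.19 and scikit-learn are presumed available).

```python
# Sketch of texture-feature extraction from B-mode still images followed by
# support-vector-machine classification (benign vs. malignant) on toy data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def texture_features(image_u8: np.ndarray) -> np.ndarray:
    """Grey-level co-occurrence texture descriptors of one ultrasound still image."""
    glcm = graycomatrix(image_u8, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# placeholder data: 40 lesion images with labels 0 = benign, 1 = malignant
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 128, 128), dtype=np.uint8)
y = rng.integers(0, 2, size=40)
X = np.vstack([texture_features(img) for img in images])

clf = SVC(kernel="rbf", class_weight="balanced")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```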



AI in Foetal Echocardiography

Naturally, any analysis of the value of AI raises the question of how automated approaches can benefit foetal cardiac scanning – both in terms of diagnostics and with regard to the practitioner – as this is one of the most important but also most complex elements of prenatal sonographic examination. Here, it is important to recognise that although detection rates of congenital heart defects (CHD) in national or regional screening programmes have improved demonstrably over the last decade, their sensitivity still ranges from only 22.5 to 52.8% [40]. The reasons for this are complex – one of the main factors is without doubt the fact that the vast majority of CHDs actually occur in the low-risk population, with only approximately 10% occurring in pregnant women with known risk factors. Furthermore, a Dutch study suggests that, in addition to a lack of expertise in routine clinical practice, factors such as limited adaptive visual-motor skills in acquiring the correct cardiac planes and reduced vigilance play a crucial role in the failure to identify cardiac abnormalities [41].

Experience from adult cardiology has shown, among other things, that the use of automated systems (by no means a novel conceptual approach) is demonstrably more efficient than a conventional (manual) approach and is likely to bridge the gap between experts and less experienced practitioners while at the same time reducing inter- and intraobserver variance. Pilot studies on the automated analysis of left ventricular (functional) parameters such as ventricular volume and ejection fraction on the basis of 2-D images, and studies on AI-based tracing of endocardial contours in apical two- and four-chamber views using transthoracically acquired 3-D data sets, have demonstrated accuracy comparable to manual evaluation [42], [43]. In this context, Kusunose defined four steps that are critical for developing relevant AI models in echocardiography (in addition to ensuring adequate image quality, these comprise plane classification, measurement approaches and, finally, anomaly detection) [44]. Zhang et al. investigated the validity of a fully automated AI approach to echocardiographic diagnosis in a clinical context by training deep convolutional networks on > 14 000 complete echocardiograms, enabling them to identify 23 different viewpoints across five common reference views. In up to 96% of cases, the system was able to accurately identify the individual cardiac diagnostic planes and, in addition, to quantify eleven different measurement parameters with comparable or even higher accuracy than manual approaches [45]. Does this mean that AI algorithms will replace echocardiographers or even cardiologists in the future? Should we be worried? Have we become members of the “useless class”, in the provocative words of Harari [46]? The answers to these questions are unambiguous yet complex and apply equally to foetal echocardiography. Although AI approaches will very soon be an integral component of routine cardiac diagnostics, examiners have a continuing or even increased responsibility to employ their clinical expertise to understand, monitor and assess automated procedures and, when errors occur, to take appropriate remedial action [47]. Arnaout et al. successfully trained a model for identifying diagnostic cross-sectional planes using 107 823 ultrasound images from > 1300 foetal echocardiograms [48]. In a separate modelling approach, they were then able to distinguish between structurally normal hearts and those with complex anomalies; the findings of the AI were comparable to those of experts. A slightly lower sensitivity/specificity (0.93 and 0.72, respectively; AUC 0.83) was documented by Le et al. in 2020 in their AI approach involving nearly 4000 foetuses [49]. Dong et al. demonstrated how accurately a three-step CNN is able to detect different representations of the four-chamber view from 2-D image files and at the same time provide feedback on the completeness of the key cardiac structures imaged [50].

In summary, it is beyond question that the essential prerequisite for efficient cardiac diagnostics, as discussed above, is the creation of exact cross-sectional images during the examination. Ultimately, this prerequisite applies to all disciplines, and in particular to functional imaging diagnostics. Hinton formulated this pertinently: “To recognize shapes, first learn to generate images” [51]. Noteworthy here is the recent approval by the US Food and Drug Administration (FDA) of an adaptive ultrasound system (Caption AI) to support and optimise sectional plane acquisition (and the recording of video sequences) in adult echocardiography. In the view of the developers, this demonstrates how the enormous potential of artificial intelligence and machine learning technologies can be used specifically to improve access to safe and effective cardiac diagnostics [52]. A commercially available optical ultrasound simulator (Volutracer O. P. U. S.) has a comparable AI workflow: the simulator monitors and adaptively corrects in real time the manual settings and transducer movements needed to achieve an exact target plane in any 2-D image sequence (irrespective of the anatomical structure) ([Fig. 3]) [53]. A major advantage of such systems is undoubtedly their usefulness in basic and advanced training, since, among other things, the integrated self-learning mode can be used to automatically train, evaluate and certify operators without the need for experts to personally adjust settings [54].

Fig. 3 The optical ultrasound simulator Volutracer O. P. U. S. Any volume data set (see also [Fig. 5]) can be uploaded and adapted by post-processing, for instance for teaching, in order to acquire appropriate planes (so-called freestyle mode – without simulator instructions). In the upper right-hand corner of the screen, the system provides graphical feedback to assist the movements needed to establish the correct target plane. The simulation software also includes a variety of cloud-based training data sets that teach users the correct settings using a GPS tracking system and audio simulator instructions with overlaid animations. Among other things, the system measures the position, angle of rotation and time until the required target plane is achieved and compares these with an expert reference that can likewise be viewed.

Due to its small size, the foetal heart usually occupies only a small area of the ultrasound image, which in turn requires any algorithm to learn to ignore at least a portion of the available image data. Another difference from postnatal echocardiography is that the orientation and position of the heart in the image relative to the position of the foetus in the uterus can vary considerably, further complicating image analysis [55], [56]. HeartAssist is an interesting approach to automated recognition, annotation and measurement of cardiac structures using deep-learning algorithms that is about to be launched on the market. It is an intelligent software tool for foetal echocardiography that can identify and evaluate target structures (axial, sagittal) in 2-D static images (acquired directly or extracted as single frames from video sequences) of the cardiac diagnostic sectional planes ([Fig. 4]). Notably, the tool can capture even partially obscured image information and integrate it into the analysis, and image recognition succeeds even with a limited sonographic window. This approach, like most algorithms for automating (foetal) diagnostics (e.g. BiometryAssist, Smart OB or SonoBiometry), is based on segmentation (abstraction) of foetal structures. It employs a wide variety of automated segmentation techniques (pixel-, edge- and region-based approaches as well as model- and texture-based methods), which are usually combined to achieve better results [57], [58].
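Two of the classical segmentation families mentioned above can be illustrated with a few lines of code; this is a deliberately simplified sketch (threshold- and contour-based steps on a placeholder image, assuming scikit-image is installed), whereas commercial tools combine several such techniques with learned models.

```python
# Sketch of pixel/threshold-based and edge/contour-based segmentation steps.
import numpy as np
from skimage import filters, measure

image = np.random.rand(256, 256)            # placeholder B-mode frame

# pixel-based: a global Otsu threshold separates echogenic structures from background
threshold = filters.threshold_otsu(image)
mask = image > threshold

# edge/contour-based: trace closed outlines of the thresholded structures
contours = measure.find_contours(mask.astype(float), level=0.5)
print(len(contours), "candidate structure outlines found")
```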

Fig. 4 Four-chamber view of a foetal heart in week 23 of pregnancy. The foetusʼ spine is located at 3 oʼclock; the four-chamber view is seen in a partially oblique orientation. In addition to the abdominal and cardiac circumference, the inner outline of the atria and ventricles is automatically recognised, traced and quantified in the static image. Similarly, HeartAssist can annotate and measure all other cardiac diagnostic sectional planes (axial/longitudinal).


Particular significance of 3-D/4-D technology

The introduction of 3-D/4-D technology, now pre-installed on most ultrasound systems, has opened up a range of diverse display options that are increasingly being utilised for automated image analysis and plane reconstruction. Building on this technology, some manufacturers offer commercial software tools to facilitate a volume-based approach to foetal echocardiography and its standardised interpretation (Fetal Heart Navigator, SonoVCADheart, Smart Planes FH and 5D Heart). The latter algorithm facilitates a standardised, workflow-based 3-D/4-D evaluation of the cardiac anatomy of the foetus through implementation of “foetal intelligent navigation echocardiography” (FINE) ([Fig. 5]). This method analyses STIC (spatio-temporal image correlation) volumes with the four-chamber view as the initial plane of volume acquisition. In the next step, predefined anatomical target structures are marked and the nine diagnostic planes needed for a complete foetal echocardiographic assessment are automatically reconstructed. Each plane can subsequently be evaluated independently of the others (e.g. quantitative analysis of the outflow tracts) and, if required, adjusted manually. Yeo et al. showed that 98% of cardiac abnormalities could be detected using this method [59]. The technique has been shown to be easy to learn and simplifies the workflow for evaluating the foetal heart independently of expert practitioners, a feature that is particularly important for capturing congenital anomalies in detail [60], [61].
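Purely to illustrate the geometry behind such volume-based plane reconstruction, the following sketch (not the FINE algorithm itself; landmark coordinates, volume and plane size are placeholders, and SciPy is assumed to be available) shows how a diagnostic plane can be resampled from a 3-D volume once three anatomical landmarks have been marked.

```python
# Sketch: reconstruct (resample) the plane defined by three marked landmarks
# from a 3-D ultrasound volume.
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(volume, p0, p1, p2, size=200, spacing=1.0):
    """Resample the plane through the landmark points p0, p1, p2 (voxel coordinates)."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    u = (p1 - p0) / np.linalg.norm(p1 - p0)        # first in-plane axis
    n = np.cross(u, p2 - p0)
    n /= np.linalg.norm(n)                         # plane normal
    v = np.cross(n, u)                             # second in-plane axis
    r = (np.arange(size) - size / 2) * spacing
    grid_u, grid_v = np.meshgrid(r, r, indexing="ij")
    coords = (p0[:, None, None]
              + u[:, None, None] * grid_u
              + v[:, None, None] * grid_v)         # 3 x size x size voxel coordinates
    return map_coordinates(volume, coords, order=1)  # trilinear resampling of the plane

# toy example: a random "volume" and three marked landmark points
volume = np.random.rand(128, 128, 128)
plane = extract_plane(volume, p0=(64, 64, 64), p1=(64, 100, 64), p2=(64, 64, 100))
print(plane.shape)  # (200, 200)
```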

Fig. 5 5DHeart (foetal intelligent navigation echocardiography, FINE) program interface with automatically reconstructed diagnostic planes of an Ebsteinʼs anomaly in a foetus in week 33 of pregnancy (STIC volume). The atrialised right ventricle is clearly visible as a lead structure in the laevorotated four-chamber view (cardiac axis > 63°). The foetusʼ back is positioned at 6 oʼclock by default after the automated software has been applied (volume acquisition, by contrast, was performed at 7 – 8 oʼclock). Analysis of the corresponding planes also revealed a tubular aortic stenosis (visualised in the three-vessel view, five-chamber view, LVOT and aortic arch planes).

The acquisition and quantification of objectifiable foetal cardiac functional parameters is similarly demanding and thus examiner-dependent, much like manual plane reconstruction. Special mention should be made here of speckle tracking echocardiography, which provides quantitative information on two-dimensional global and segmental myocardial wall motion and deformation parameters (strain/strain rate) on the basis of “speckles” caused by interference from randomly scattered echoes in the ultrasound image. The introduction of semi-automatic software (fetalHQ), which uses a 2-D video clip of the heart, manual selection of a cardiac cycle and corresponding marking of the annulus and apex, has now made it possible for less experienced practitioners to quantify the size, shape and contractility of 24 different segments of the foetal heart using AI-assisted analysis of these speckles [62], [63], [64] ([Fig. 6]). Beyond this, AI methods for the analysis of Doppler-based cardiac function (the modified myocardial performance index (Mod-MPI), previously termed the Tei index) have been developed in recent years and are now commercially available [65], [66].

Fig. 6 Software tools for functional analysis of the foetal heart. Semi-automated approach to speckle tracking analysis using fetalHQ in the foetus examined in [Figs. 3] and [5] with Ebsteinʼs anomaly (a). A selected cardiac cycle is analysed in the approach using automatic contouring of the endocardium for the left and/or right ventricle and subsequent quantification of functional variables such as contractility and deformation. Automated calculation of the (modified) myocardial performance index (MPI, Tei index) by spectral Doppler recording of blood flow across the tricuspid and pulmonary valves using MPI+ (b).
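For orientation, the functional quantities reported by such tools rest on the following standard echocardiographic definitions (general conventions, not specific to fetalHQ or MPI+):

\[
\varepsilon = \frac{L - L_0}{L_0}, \qquad
\dot{\varepsilon} = \frac{\mathrm{d}\varepsilon}{\mathrm{d}t}, \qquad
\text{Mod-MPI} = \frac{\mathrm{ICT} + \mathrm{IRT}}{\mathrm{ET}}
\]

where \(L_0\) is the end-diastolic length of a myocardial segment, \(L\) its instantaneous length during the cardiac cycle (so that strain \(\varepsilon\) is negative during systolic shortening), \(\dot{\varepsilon}\) the strain rate, and ICT, IRT and ET the isovolumetric contraction time, isovolumetric relaxation time and ejection time measured from the Doppler traces.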


AI in standardised diagnostics of the foetal CNS

As mentioned above, the decisive advantage of automated techniques in prenatal diagnostics is clearly that they will allow less experienced practitioners to correctly identify highly complex anatomical structures such as the foetal heart or CNS in a standardised and examiner-independent manner. The basis for such tools is formed by transthalamic (TT) 3-D volume data sets (analogous to the sectional plane used to quantify the biparietal diameter) acquired for AI-assisted post-processing and evaluation. These allow a primary examination of the foetal CNS with extraction of the transventricular (TV) or transcerebellar (TC) plane from the volume block (SonoCNS, Smart Planes CNS) or even a complete neurosonogram (5DCNS+) ([Fig. 7]). After axial alignment of the corresponding B and C planes and marking of the thalamic nuclei or the cavum septi pellucidi, the latter algorithm also automatically reconstructs the coronal and sagittal sectional planes required for a complete neurosonogram ([Fig. 7]). One working group documented successful visualisation rates of 97.7 – 99.4% for axial, 94.4 – 97.7% for sagittal and 92.2 – 97.2% for coronal planes in a prospective follow-up study using the modified 5DCNS+ algorithm [67]. A retrospective clinical validation study of more than 1100 pregnant women yielded similar results [68]. In contrast to the data of Pluym et al. [69], the authors of the latter study were able to show, among other things, that this standardised approach could be used to obtain biometric parameters that were similarly valid and reproducible to those measured manually. Ambroise Grandjean et al. were similarly unequivocal when they showed in a feasibility study that the three primary planes, including biometric measurements, could be consistently reconstructed and quantitatively evaluated using AI (Smart Planes CNS) with high intra- and interobserver reproducibility (ICC > 0.98) [70].

Fig. 7 (Semi-)automatic reconstruction after application of 5DCNS+ of an axially acquired 3-D volume of the foetal CNS (biparietal plane) in a foetus with a semilobar holoprosencephaly in week 23 of pregnancy. The complete neurosonogram reconstructed from the source volume comprises the 9 required diagnostic sectional planes (3 axial, 4 coronal and 2 sagittal planes). In the axial planes, automatic biometric measurements (not shown) are taken, which can be adjusted subsequently by hand at any time.

These algorithms are already in use in clinical practice; however, they usually require intermediate steps performed by the practitioner. In the future, specially trained CNNs will be able to extract all sectional planes from raw volumes fully automatically. Huang et al. demonstrated that “view-based projection networks” (CNNs) applied to post-processed 3-D volumes (axial source volume and corresponding 90° sagittal/coronal rotations) could reliably detect and image five predefined anatomical CNS structures in parallel in three different 3-D projections, with the best detection rates once again achieved on the axial view [71]. The latter is due, among other things, to the gradual reduction in image quality inherent in the orthogonal B and C planes. The authors used the data sets of the INTERGROWTH-21st study group for their analysis. Precise B-mode image quality and accuracy in sectional plane imaging are indispensable prerequisites for 2-D-based AI approaches, especially for the automated detection of abnormal CNS findings, as recently published by Xie et al. [72]. In this paper, CNNs were trained using 2-D and 3-D data sets of approximately 15 000 normal and 15 000 abnormal standard axial planes and assessed for segmentation efficiency, binary classification into normal and abnormal planes, and CNS lesion localisation (sensitivity/specificity 96.9 and 95.9%, respectively; AUC 0.989). Before such AI approaches can be used where they would be of greatest benefit, namely in routine diagnostics, a number of “hurdles” still need to be cleared. These are primarily associated with the initial steps in imaging diagnostics (in keeping with Hintonʼs exhortation that quality is based on image generation), and this ultimately also applies to other foetal target structures in prenatal diagnostics [51]. It would be interesting here to determine, for example, to what extent such automated approaches enable standardised plane reconstruction in combination with DL algorithms for classification, so as to detect, annotate and quantify two- and three-dimensional measurement parameters accurately and reproducibly, thereby enabling diagnostics that in the future are significantly less dependent on the presence of an expert practitioner. Of particular interest are, clearly, approaches in which specialised neural networks are used to optimise image acquisition protocols in obstetric ultrasound diagnostics, thereby shortening examination times and providing comprehensive anatomical information, even from image areas that are at times obscured. For example, Cerrolaza et al. demonstrated (analogous to deep reinforcement learning models for incomplete CT scans) that, even if only 60% of the foetal skull were captured in a volume data set, AI-based reconstruction would nevertheless still be possible [73], [74].

The potential of neural networks has also been demonstrated by recent papers by Cai et al., who developed a multi-task CNN that learns to detect standard axial planes, such as those for foetal abdominal and head circumference (transventricular sectional plane), by tracking the eye movements of the examiner while viewing video sequences [75]. Baumgartner et al. were able to show that a specially trained convolutional network (SonoNet) could be used to detect thirteen different standard foetal planes in real time and correctly localise the target structures [76]. Yacub et al. took a similar approach, using a neural network to, on the one hand, ascertain the completeness of a sonographic anomaly scan and, on the other, perform quality control of the image data obtained (in accordance with international guidelines); no differences compared with manual expert assessment were demonstrated [77], [78]. The same modelling approaches now also form the (intelligent) basis for the worldʼs first fully integrated AI tool for automated biometric detection of foetal target structures with AI-supported quality control (SonoLyst) [5]. The potential of neural networks is also apparent in recent data from a British research group on AI-based 2-D video analysis of the workflow of experienced practitioners; this analysis allows systems to predict which transducer movements are most likely to result in precise target planes during anomaly scanning [79]. The same research group was able to demonstrate, on the one hand, that their initial AI models could automatically recognise video content (sectional planes) and add appropriate captions and, on the other, that specially trained CNNs could evaluate combined data from a motion sensor and an ultrasound probe, converting them into signals to guide correct transducer handling [80], [81].



AI and Other Clinical Applications in Obstetric Monitoring

The optimisation of biometric accuracy is another area where AI can be of direct clinical relevance. Irrespective of the assistance systems already mentioned (see above) and notwithstanding the significant improvement in ultrasound diagnostics over the past few years, such optimisation remains a challenge. The majority of foetal weight estimation models are based on parameters (head circumference, biparietal diameter, abdominal circumference, femoral diaphysis length) measured during conventional 2-D ultrasound. The development of the soft tissue of the upper and lower extremities, although hitherto not directly quantifiable biometrically, is an established surrogate parameter for foetal nutritional status [82]. Three-dimensional measurement of the fractional limb volume (FLV) of the upper arm and/or thigh has been shown to improve the precision of foetal weight estimation, even in multiple pregnancies [83]. Automated techniques that allow much faster and, above all, examiner-independent processing of 3-D volumes (efficient recognition and tracing of soft tissue boundaries) have clearly demonstrated the clinical benefit of volumetric recording of FLV (5DLimbVol), which implements workflow-based acquisition of the relevant, axially acquired 3-D data sets of the upper arm or thigh and incorporates them into conventional weight estimation ([Fig. 8]) [84], [85].

Fig. 8 Automated sectional plane reconstruction of a foetal thigh in week 35 of pregnancy to estimate foetal weight (soft tissue mantle of the thigh reconstructed by 5DLimb). After 3-D volume acquisition of the thigh aligned transversely, the soft tissue volume calculated in this way can be used to improve the accuracy of estimations of foetal weight.

AI has now also made it possible to automatically record sonographic parameters such as angle of progression (AoP) and head direction (HD), even as birth progresses. The first findings on this technique were published by Youssef et al. in 2017, who found that an automated approach is possible and can be used in a reproducible manner [86]. Just how far commercially available software solutions such as LaborAssist will improve clinical care remains to be seen, however.

A heated debate, illustrative of the difficulties occasionally encountered when employing automated techniques in clinical practice, is underway regarding the potential benefits of computer-assisted assessment of the peripartum foetal heart rate (electronic foetal heart rate monitoring), which, owing to the clear interobserver variability and subjectivity in assessing CTG abnormalities, could, at least theoretically, benefit from objective automated analysis. Prospective randomised data from the INFANT study group did not demonstrate any advantage over conventional visual assessment by the medical staff present during delivery, neither in neonatal short-term outcomes nor in outcomes at two years [87]. The question of how far methodological weaknesses in the study design contributed to these non-significant differences between the study arms (Hawthorne effect) remains open [88], [89], especially since other computer-based approaches have delivered clearly promising data [90].

Fung et al., for example, used data from two large population-based cohort studies (INTERGROWTH-21st and its phase II study INTERBIO-21st) to show that machine learning can be employed to analyse biometric data from an ultrasound scan performed between weeks 20 and 30 of pregnancy, together with a repeat measurement within the following ten weeks, in order to determine gestational age to within three days and to predict the growth curve over the next six weeks in an individualised way for each foetus [91]. There is no doubt that AI will become ever more important in the future, for instance in assessing and predicting foetomaternal risk constellations such as prematurity, gestational diabetes and hypertensive disorders of pregnancy [92].



Summary

The authors of a recent web-based survey at eight university hospitals reported, among other things, that the majority of respondents view AI in a positive light, believe that the future of clinical medicine will be shaped by a combination of human and artificial intelligence, and expect that the sensible use of AI technologies will significantly improve patient care. The study participants considered the greatest potential to lie in the analysis of sensor-based, continuously collected data in electrocardiography/electroencephalography, in the monitoring of patients in intensive care, and in imaging procedures for targeted diagnostics and workflow support [93]. Specifically with regard to our field, it should be emphasised that the continuous development of ultrasound systems and their associated equipment, for instance high-resolution ultrasound probes/matrix probes for gynaecological and obstetric diagnostics, along with the inexorable introduction of efficient automated segmentation techniques for two- and, in particular, three-dimensional image information, will increasingly influence and optimise the entire process chain in the future, from image data creation, analysis and processing through to data management.

A recent systematic review of more than 80 studies on automated image analysis found that AI delivered findings equivalent in precision to those of practitioners who were experts in their field. However, the authors also found that in many publications no external validation of the various AI algorithms had been performed, or only inadequate validation. This situation, together with the fact that collaboration between AI developers and clinicians, although already well underway in many areas, still needs to be intensified, is currently hampering further implementation in relevant clinical processes [94]. The current state of AI utilisation in healthcare resembles ownership of a brand-new car: making use of it requires both petrol and roads. In other words, the respective algorithms need to be “fuelled” with, for example, (annotated) image data, but they will only be able to realise their full potential if the appropriate infrastructure – efficient and scalable processes with an AI-ready workflow – is in place [21].



Outlook

AI systems continue to be developed and integrated into clinical processes, and with this come tremendous expectations of how they will advance healthcare. What is certain is that integrating these tools is likely to fundamentally change working and training methods in the future. They will support all healthcare professionals by providing them with rapidly and reliably collected data and facts for the interpretation of findings and for consultations, which will, in the best case, allow them to focus more on the uniquely human elements of their profession. Those tasks that cannot be performed by a machine because they demand emotional intelligence, such as targeted patient interaction to identify more nuanced symptoms and to build trust through human intuition, highlight just how unique and critical the human factor will be in deploying the clinical AI applications of the future [95]. If nothing else, this reminds us that AI is a long way from truly replacing humans. Almost 100 years ago, the visionary writings of Fritz Kahn (“The Physician of the Future”) were already foreshadowing current and future AI technologies in medicine: a highly plastic constructivism in which technological civilisation and experimental science synergistically transform the biology of the human body [96], [97]. One conclusion that emerges from these advances is that, notwithstanding all technical progress, humans have not rendered themselves superfluous, nor will they. Predictions that up to 47% of all jobs would be lost to automation appear to be unfounded; in healthcare in particular, more jobs are being created than are being lost [46], [98], [99].

What is needed to exploit the potential of AI algorithms optimally is interdisciplinary communication and the constant involvement of physicians, as the primary users of these tools, in the development and operation of AI applications. In the absence of such involvement, the medicine of tomorrow will be shaped exclusively by the vision of engineers and will be less able to meet the actual requirements of personalised (precision) medicine [47], [100]. [Table 1] summarises the most urgent research priorities for AI as formulated by the participants of the 2018 consensus workshop of the radiological societies [101], [102]. From the perspective of gynaecology and obstetrics, it should be mentioned that, with regard to AI-assisted sonographic parameters, continued efforts are underway to optimise imaging (pre-/post-processing) in both conventional 2-D imaging and 3-D/4-D volume sonography, and that, in line with the established algorithms with an automated workflow, further AI technologies are needed that provide intuitive user guidance, ease of use and general (cross-device) availability for the efficient analysis of image and volume data. In addition, assistance systems for real-time plane adjustment and target structure quantification should be pursued further for routine diagnostics. Of particular note here is the fact that it is now possible to incorporate pre-trained algorithms into the analysis of oneʼs own population-based data (transfer learning). This constitutes an attractive and, above all, reliable method, as training a new neural network with a large volume of data is computationally and time intensive [103]. The process adopts the existing, pre-trained layers of a CNN and adapts and re-trains only the output layer to recognise the object classes of the new task (see the sketch below).
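The following minimal sketch illustrates this transfer-learning step (illustrative only; the backbone, the three-class output and the placeholder batch are assumptions and do not refer to any specific commercial tool; PyTorch/torchvision ≥ 0.13 is presumed available).

```python
# Sketch of transfer learning: reuse the pre-trained layers of an existing CNN,
# freeze them, and re-train only a new output layer on one's own annotated data.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # pre-trained backbone

for param in model.parameters():                # freeze all pre-trained layers
    param.requires_grad = False

n_new_classes = 3                               # e.g. plane categories of the new task
model.fc = nn.Linear(model.fc.in_features, n_new_classes)  # new, trainable output layer

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one illustrative training step on a placeholder batch of images
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, n_new_classes, (4,))
loss = loss_fn(model(images), labels)
loss.backward()                                 # gradients flow only into the new layer
optimiser.step()
```

Because only the small output layer is optimised, such re-training typically requires far less data and computing time than training a network from scratch.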

Table 1 Recommendations from the 2018 consensus workshop on translational research held in Bethesda, USA on advancing and integrating artificial intelligence applications in clinical processes (adapted from Allen et al. 2019, Langlotz et al. 2019 [101], [102]).

Research priorities for artificial intelligence in medical imaging

Structured AI use cases and clinical problems that can actually be solved by AI algorithms need to be created and defined.

Novel image reconstruction methods should be developed to efficiently generate images from source data.

Automated image labelling and annotation methods that efficiently provide training data to explore advanced ML models and enable their intensified clinical use need to be established.

Research is required on machine learning methods that can more specifically communicate and visualise AI-based decision aids to users.

Methods should be established to validate and objectively monitor the performance of AI algorithms to facilitate regulatory approval processes.

Standards and common data platforms need to be developed to enable AI tools to be easily integrated into existing clinical workflows.

It is highly likely that the greatest challenge facing the targeted use of AI in healthcare in general, however, is not whether automated technologies are fully capable of meeting the demands placed on them but whether they can be incorporated into everyday clinical practice. To achieve this, among other things, appropriate approval procedures must be initiated, the necessary (clinical) infrastructure must be established, standardisation must be ensured and, above all, clinical staff must be adequately trained. It is clear that these hurdles will be surmounted in the future, but the technologies themselves may well take longer to mature. We should therefore expect uptake of AI in clinical practice to remain limited over the next five years (with more widespread uptake within ten years) [104].



Conflict of Interest

The authors (JW, MG) declare that in the last 3 years they have received speakerʼs fees from Samsung HME and GE Healthcare.

  • References/Literatur

  • 1 Drukker L, Droste R, Chatelain P. et al. Expected-value bias in routine third-trimester growth scans. Ultrasound Obstet Gynecol 2020; 55: 375-382
  • 2 Deng J, Dong W, Socher R. et al. ImageNet: A large-scale hierarchical image database. Paper presented at: 2009 IEEE Conference on Computer Vision and Pattern Recognition; 20 – 25 June 2009
  • 3 Russakovsky O, Deng J, Su H. et al. ImageNet Large Scale Visual Recognition Challenge. Int J Comput Vis 2015; 115: 211-252
  • 4 Turing AM. I – Computing Machinery and Intelligence. Mind 1950; LIX: 433-460
  • 5 Deloitte. State of AI in the enterprise – 3rd ed. Deloitte, 2020. Accessed September 30, 2021 at: http://www2.deloitte.com/content/dam/Deloitte/de/Documents/technology-media-telecommunications/DELO-6418_State of AI 2020_KS4.pdf
  • 6 Anthes E. Alexa, do I have COVID-19?. Nature 2020; 586: 22-25
  • 7 Huang Z, Epps J, Joachim D. Investigation of Speech Landmark Patterns for Depression Detection. IEEE Transactions on Affective Computing 2019; DOI: 10.1109/TAFFC.2019.2944380.
  • 8 Bodalal Z, Trebeschi S, Nguyen-Kim TDL. et al. Radiogenomics: bridging imaging and genomics. Abdom Radiol (NY) 2019; 44: 1960-1984
  • 9 Allen B, Dreyer K, McGinty GB. Integrating Artificial Intelligence Into Radiologic Practice: A Look to the Future. J Am Coll Radiol 2020; 17: 280-283
  • 10 Purohit K. Growing Interest in Radiology Despite AI Fears. Acad Radiol 2019; 26: e75
  • 11 Richardson ML, Garwood ER, Lee Y. et al. Noninterpretive Uses of Artificial Intelligence in Radiology. Acad Radiol 2021; 28: 1225-1235
  • 12 Bennani-Baiti B, Baltzer PAT. Künstliche Intelligenz in der Mammadiagnostik. Radiologe 2020; 60: 56-63
  • 13 Chan H-P, Samala RK, Hadjiiski LM. CAD and AI for breast cancer–recent development and challenges. Br J Radiol 2019; 93: 20190580
  • 14 Fujita H. AI-based computer-aided diagnosis (AI-CAD): the latest review to read first. Radiol Phys Technol 2020; 13: 6-19
  • 15 McKinney SM, Sieniek M, Godbole V. et al. International evaluation of an AI system for breast cancer screening. Nature 2020; 577: 89-94
  • 16 Rodriguez-Ruiz A, Lång K, Gubern-Merida A. et al. Can we reduce the workload of mammographic screening by automatic identification of normal exams with artificial intelligence? A feasibility study. Eur Radiol 2019; 29: 4825-4832
  • 17 OʼConnell AM, Bartolotta TV, Orlando A. et al. Diagnostic Performance of An Artificial Intelligence System in Breast Ultrasound. J Ultrasound Med 2021; DOI: 10.1002/jum.15684.
  • 18 Cho BJ, Choi YJ, Lee MJ. et al. Classification of cervical neoplasms on colposcopic photography using deep learning. Sci Rep 2020; 10: 13652
  • 19 Shanthi PB, Faruqi F, Hareesha KS. et al. Deep Convolution Neural Network for Malignancy Detection and Classification in Microscopic Uterine Cervix Cell Images. Asian Pac J Cancer Prev 2019; 20: 3447-3456
  • 20 Försch S, Klauschen F, Hufnagl P. et al. Künstliche Intelligenz in der Pathologie. Dtsch Arztebl 2021; 118: 199-204
  • 21 Chang PJ. Moving Artificial Intelligence from Feasible to Real: Time to Drill for Gas and Build Roads. Radiology 2020; 294: 432-433
  • 22 Tran D, Cooke S, Illingworth PJ. et al. Deep learning as a predictive tool for fetal heart pregnancy following time-lapse incubation and blastocyst transfer. Hum Reprod 2019; 34: 1011-1018
  • 23 Zaninovic N, Rosenwaks Z. Artificial intelligence in human in vitro fertilization and embryology. Fertil Steril 2020; 114: 914-920
  • 24 Bori L, Paya E, Alegre L. et al. Novel and conventional embryo parameters as input data for artificial neural networks: an artificial intelligence model applied for prediction of the implantation potential. Fertil Steril 2020; 114: 1232-1241
  • 25 DEGUM. Pressemitteilungen. DEGUM, 2017. Updated 29.11.2017. Accessed September 30, 2021 at: http://www.degum.de/aktuelles/presse-medien/pressemitteilungen/im-detail/news/zu-viele-kindliche-fehlbildungen-bleiben-unentdeckt.html
  • 26 Murugesu S, Galazis N, Jones BP. et al. Evaluating the use of telemedicine in gynaecological practice: a systematic review. BMJ Open 2020; 10: e039457
  • 27 Benacerraf BR, Minton KK, Benson CB. et al. Proceedings: Beyond Ultrasound First Forum on Improving the Quality of Ultrasound Imaging in Obstetrics and Gynecology. J Ultrasound Med 2018; 37: 7-18
  • 28 Timmerman D, Verrelst H, Bourne TH. et al. Artificial neural network models for the preoperative discrimination between malignant and benign adnexal masses. Ultrasound Obstet Gynecol 1999; 13: 17-25
  • 29 Froyman W, Timmerman D. Methods of Assessing Ovarian Masses: International Ovarian Tumor Analysis Approach. Obstet Gynecol Clin North Am 2019; 46: 625-641
  • 30 Van Calster B, Van Hoorde K, Valentin L. et al. Evaluating the risk of ovarian cancer before surgery using the ADNEX model to differentiate between benign, borderline, early and advanced stage invasive, and secondary metastatic tumours: prospective multicentre diagnostic study. BMJ 2014; 349: g5920
  • 31 Vázquez-Manjarrez SE, Rico-Rodriguez OC, Guzman-Martinez N. et al. Imaging and diagnostic approach of the adnexal mass: what the oncologist should know. Chin Clin Oncol 2020; 9: 69
  • 32 Andreotti RF, Timmerman D, Strachowski LM. et al. O-RADS US Risk Stratification and Management System: A Consensus Guideline from the ACR Ovarian-Adnexal Reporting and Data System Committee. Radiology 2020; 294: 168-185
  • 33 Christiansen F, Epstein EL, Smedberg E. et al. Ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors: comparison with expert subjective assessment. Ultrasound Obstet Gynecol 2021; 57: 155-163
  • 34 Acharya UR, Mookiah MR, Vinitha Sree S. et al. Evolutionary algorithm-based classifier parameter tuning for automatic ovarian cancer tissue characterization and classification. Ultraschall Med 2014; 35: 237-245
  • 35 Akazawa M, Hashimoto K. Artificial Intelligence in Ovarian Cancer Diagnosis. Anticancer Res 2020; 40: 4795-4800
  • 36 Aramendia-Vidaurreta V, Cabeza R, Villanueva A. et al. Ultrasound Image Discrimination between Benign and Malignant Adnexal Masses Based on a Neural Network Approach. Ultrasound Med Biol 2016; 42: 742-752
  • 37 Khazendar S, Sayasneh A, Al-Assam H. et al. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator. Facts Views Vis Obgyn 2015; 7: 7-15
  • 38 Zhou J, Zeng ZY, Li L. Progress of Artificial Intelligence in Gynecological Malignant Tumors. Cancer Manag Res 2020; 12: 12823-12840
  • 39 Al-Karawi D, Al-Assam H, Du H. et al. An Evaluation of the Effectiveness of Image-based Texture Features Extracted from Static B-mode Ultrasound Images in Distinguishing between Benign and Malignant Ovarian Masses. Ultrason Imaging 2021; 43: 124-138
  • 40 Bakker MK, Bergman JEH, Krikov S. et al. Prenatal diagnosis and prevalence of critical congenital heart defects: an international retrospective cohort study. BMJ Open 2019; 9: e028139
  • 41 van Nisselrooij AEL, Teunissen AKK, Clur SA. et al. Why are congenital heart defects being missed?. Ultrasound Obstet Gynecol 2020; 55: 747-757
  • 42 Knackstedt C, Bekkers SC, Schummers G. et al. Fully Automated Versus Standard Tracking of Left Ventricular Ejection Fraction and Longitudinal Strain: The FAST-EFs Multicenter Study. J Am Coll Cardiol 2015; 66: 1456-1466
  • 43 Tsang W, Salgo IS, Medvedofsky D. et al. Transthoracic 3D Echocardiographic Left Heart Chamber Quantification Using an Automated Adaptive Analytics Algorithm. JACC Cardiovasc Imaging 2016; 9: 769-782
  • 44 Kusunose K. Steps to use artificial intelligence in echocardiography. J Echocardiogr 2021; 19: 21-27
  • 45 Zhang J, Gajjala S, Agrawal P. et al. Fully Automated Echocardiogram Interpretation in Clinical Practice. Circulation 2018; 138: 1623-1635
  • 46 Harari YN. Homo sapiens verliert die Kontrolle. Die Große Entkopplung. In: Homo Deus – Eine Geschichte von Morgen. 16. Aufl.. München: C. H. Beck; 2020
  • 47 Gandhi S, Mosleh W, Shen J. et al. Automation, machine learning, and artificial intelligence in echocardiography: A brave new world. Echocardiography 2018; 35: 1402-1418
  • 48 Arnaout R, Curran L, Zhao Y. et al. Expert-level prenatal detection of complex congenital heart disease from screening ultrasound using deep learning. medRxiv 2020; DOI: 10.1101/2020.06.22.20137786.
  • 49 Le TK, Truong V, Nguyen-Vo TH. et al. Application of machine learning in screening of congenital heart diseases using fetal echocardiography. J Am Coll Cardiol 2020; 75: 648
  • 50 Dong J, Liu S, Liao Y. et al. A Generic Quality Control Framework for Fetal Ultrasound Cardiac Four-Chamber Planes. IEEE J Biomed Health Inform 2020; 24: 931-942
  • 51 Hinton GE. To recognize shapes, first learn to generate images. Prog Brain Res 2007; 165: 535-547
  • 52 Voelker R. Cardiac Ultrasound Uses Artificial Intelligence to Produce Images. JAMA 2020; 323: 1034
  • 53 Yeo L, Romero R. Optical ultrasound simulation-based training in obstetric sonography. J Matern Fetal Neonatal Med 2020; DOI: 10.1080/14767058.2020.1786519.
  • 54 Steinhard J, Dammeme Debbih A, Laser KT. et al. Randomised controlled study on the use of systematic simulator-based training (OPUS Fetal Heart Trainer) for learning the standard heart planes in fetal echocardiography. Ultrasound Obstet Gynecol 2019; 54 (S1): 28-29
  • 55 Day TG, Kainz B, Hajnal J. et al. Artificial intelligence, fetal echocardiography, and congenital heart disease. Prenat Diagn 2021; 41: 733-742 DOI: 10.1002/pd.5892.
  • 56 Garcia-Canadilla P, Sanchez-Martinez S, Crispi F. et al. Machine Learning in Fetal Cardiology: What to Expect. Fetal Diagn Ther 2020; 47: 363-372
  • 57 Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2018; 92: 210-235
  • 58 Rawat V, Jain A, Shrimali V. Automated Techniques for the Interpretation of Fetal Abnormalities: A Review. Appl Bionics Biomech 2018; 2018: 6452050
  • 59 Yeo L, Luewan S, Romero R. Fetal Intelligent Navigation Echocardiography (FINE) Detects 98 % of Congenital Heart Disease. J Ultrasound Med 2018; 37: 2577-2593
  • 60 Gembicki M, Hartge DR, Dracopoulos C. et al. Semiautomatic Fetal Intelligent Navigation Echocardiography Has the Potential to Aid Cardiac Evaluations Even in Less Experienced Hands. J Ultrasound Med 2020; 39: 301-309
  • 61 Weichert J, Weichert A. A “holistic” sonographic view on congenital heart disease: How automatic reconstruction using fetal intelligent navigation echocardiography eases unveiling of abnormal cardiac anatomy part II-Left heart anomalies. Echocardiography 2021; 38: 777-789
  • 62 DeVore GR, Klas B, Satou G. et al. Longitudinal Annular Systolic Displacement Compared to Global Strain in Normal Fetal Hearts and Those With Cardiac Abnormalities. J Ultrasound Med 2018; 37: 1159-1171
  • 63 DeVore GR, Klas B, Satou G. et al. 24-segment sphericity index: a new technique to evaluate fetal cardiac diastolic shape. Ultrasound Obstet Gynecol 2018; 51: 650-658
  • 64 DeVore GR, Polanco B, Satou G. et al. Two-Dimensional Speckle Tracking of the Fetal Heart: A Practical Step-by-Step Approach for the Fetal Sonologist. J Ultrasound Med 2016; 35: 1765-1781
  • 65 Lee M, Won H. Novel technique for measurement of fetal right myocardial performance index using synchronised images of right ventricular inflow and outflow. Ultrasound Obstet Gynecol 2019; 54 (S1): 178-179
  • 66 Leung V, Avnet H, Henry A. et al. Automation of the Fetal Right Myocardial Performance Index to Optimise Repeatability. Fetal Diagn Ther 2018; 44: 28-35
  • 67 Rizzo G, Aiello E, Pietrolucci ME. et al. The feasibility of using 5D CNS software in obtaining standard fetal head measurements from volumes acquired by three-dimensional ultrasonography: comparison with two-dimensional ultrasound. J Matern Fetal Neonatal Med 2016; 29: 2217-2222
  • 68 Welp A, Gembicki M, Rody A. et al. Validation of a semiautomated volumetric approach for fetal neurosonography using 5DCNS+ in clinical data from > 1100 consecutive pregnancies. Childs Nerv Syst 2020; 36: 2989-2995
  • 69 Pluym ID, Afshar Y, Holliman K. et al. Accuracy of three-dimensional automated ultrasound imaging of biometric measurements of the fetal brain. Ultrasound Obstet Gynecol 2021; 57: 798-803
  • 70 Ambroise Grandjean G, Hossu G, Bertholdt C. et al. Artificial intelligence assistance for fetal head biometry: Assessment of automated measurement software. Diagn Interv Imaging 2018; 99: 709-716
  • 71 Huang R, Xie W, Alison Noble J. VP-Nets: Efficient automatic localization of key brain structures in 3D fetal neurosonography. Med Image Anal 2018; 47: 127-139
  • 72 Xie HN, Wang N, He M. et al. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet Gynecol 2020; 56: 579-587
  • 73 Cerrolaza JJ, Li Y, Biffi C. et al. Fetal Skull Reconstruction via Deep Convolutional Autoencoders. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018: 887-890
  • 74 Ghesu FC, Georgescu B, Grbic S. et al. Towards intelligent robust detection of anatomical structures in incomplete volumetric data. Med Image Anal 2018; 48: 203-213
  • 75 Cai Y, Droste R, Sharma H. et al. Spatio-temporal visual attention modelling of standard biometry plane-finding navigation. Medical Image Analysis 2020; 65: 101762
  • 76 Baumgartner CF, Kamnitsas K, Matthew J. et al. SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound. IEEE Trans Med Imaging 2017; 36: 2204-2215
  • 77 Yaqub M, Kelly B, Noble JA. et al. An AI system to support sonologists during fetal ultrasound anomaly screening. Ultrasound Obstet Gynecol 2018; 52 (S1): 9-10
  • 78 Yaqub M, Sleep N, Syme S. et al. ScanNav® audit: an AI-powered screening assistant for fetal anatomical ultrasound. Am J Obstet Gynecol 2021; 224 (Suppl.) S312 DOI: 10.1016/j.ajog.2020.12.512.
  • 79 Sharma H, Drukker L, Chatelain P. et al. Knowledge representation and learning of operator clinical workflow from full-length routine fetal ultrasound scan videos. Med Image Anal 2021; 69: 101973
  • 80 Droste R, Drukker L, Papageorghiou AT. et al. Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound. Med Image Comput Comput Assist Interv 2020; 12263: 583-592
  • 81 Alsharid M, Sharma H, Drukker L. et al. Captioning Ultrasound Images Automatically. In: Shen D. et al., eds. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. (Lecture Notes in Computer Science, vol 11767; ). Cham: Springer; 2019
  • 82 Lee W, Deter RL, Ebersole JD. et al. Birth weight prediction by three-dimensional ultrasonography: fractional limb volume. J Ultrasound Med 2001; 20: 1283-1292
  • 83 Corrêa VM, Araujo Júnior E, Braga A. et al. Prediction of birth weight in twin pregnancies using fractional limb volumes by three-dimensional ultrasonography. J Matern Fetal Neonatal Med 2020; 33: 3652-3657
  • 84 Gembicki M, Offerman DR, Weichert J. Semiautomatic Assessment of Fetal Fractional Limb Volume for Weight Prediction in Clinical Praxis: How Does It Perform in Routine Use?. J Ultrasound Med 2021; DOI: 10.1002/jum.15712.
  • 85 Mack LM, Kim SY, Lee S. et al. Automated Fractional Limb Volume Measurements Improve the Precision of Birth Weight Predictions in Late Third-Trimester Fetuses. J Ultrasound Med 2017; 36: 1649-1655
  • 86 Youssef A, Salsi G, Montaguti E. et al. Automated Measurement of the Angle of Progression in Labor: A Feasibility and Reliability Study. Fetal Diagn Ther 2017; 41: 293-299
  • 87 Brocklehurst P, Field D, Greene K. et al. Computerised interpretation of fetal heart rate during labour (INFANT): a randomised controlled trial. Lancet 2017; 389: 1719-1729
  • 88 Keith R. The INFANT study-a flawed design foreseen. Lancet 2017; 389: 1697-1698
  • 89 Silver RM. Computerising the intrapartum continuous cardiotocography does not add to its predictive value: FOR: Computer analysis does not add to intrapartum continuous cardiotocography predictive value. BJOG 2019; 126: 1363
  • 90 Gyllencreutz E, Lu K, Lindecrantz K. et al. Validation of a computerized algorithm to quantify fetal heart rate deceleration area. Acta Obstet Gynecol Scand 2018; 97: 1137-1147
  • 91 Fung R, Villar J, Dashti A. et al. International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st). Achieving accurate estimates of fetal gestational age and personalised predictions of fetal growth based on data from an international prospective cohort study: a population-based machine learning study. Lancet Digit Health 2020; 2: e368-e375
  • 92 Lee KS, Ahn KH. Application of Artificial Intelligence in Early Diagnosis of Spontaneous Preterm Labor and Birth. Diagnostics (Basel) 2020; 10: 733
  • 93 Maassen O, Fritsch S, Palm J. et al. Future Medical Artificial Intelligence Application Requirements and Expectations of Physicians in German University Hospitals: Web-Based Survey. J Med Internet Res 2021; 23: e26646
  • 94 Littmann M, Selig K, Cohen-Lavi L. et al. Validity of machine learning in biology and medicine increased through collaborations across fields of expertise. Nature Machine Intelligence 2020; 2: 18-24
  • 95 Norgeot B, Glicksberg BS, Butte AJ. A call for deep-learning healthcare. Nat Med 2019; 25: 14-15
  • 96 Borck C. Communicating the Modern Body: Fritz Kahnʼs Popular Images of Human Physiology as an Industrialized World. Canadian Journal of Communication 2007; 32: 495-520
  • 97 Jachertz N. Populärmedizin: Der Mensch ist eine Maschine, die vom Menschen bedient wird. Dtsch Arztebl 2010; 107: A-391-393
  • 98 Frey CB, Osborne MA. The future of employment: How susceptible are jobs to computerisation?. Technological Forecasting and Social Change 2017; 114: 254-280
  • 99 Gartner H, Stüber H. Strukturwandel am Arbeitsmarkt seit den 70er Jahren: Arbeitsplatzverluste werden durch neue Arbeitsplätze immer wieder ausgeglichen. 16.7.2019. Nürnberg: Institut für Arbeitsmarkt- und Berufsforschung; 2019
  • 100 Bartoli A, Quarello E, Voznyuk I. et al. Intelligence artificielle et imagerie en médecine fœtale: de quoi parle-t-on? [Artificial intelligence and fetal imaging: What are we talking about?]. Gynecol Obstet Fertil Senol 2019; 47: 765-768
  • 101 Allen jr. B, Seltzer SE, Langlotz CP. et al. A Road Map for Translational Research on Artificial Intelligence in Medical Imaging: From the 2018 National Institutes of Health/RSNA/ACR/The Academy Workshop. J Am Coll Radiol 2019; 16 (9 Pt A): 1179-1189
  • 102 Langlotz CP, Allen B, Erickson BJ. et al. A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 2019; 291: 781-791
  • 103 Tolsgaard MG, Svendsen MBS, Thybo JK. et al. Does artificial intelligence for classifying ultrasound imaging generalize between different populations and contexts?. Ultrasound Obstet Gynecol 2021; 57: 342-343
  • 104 Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019; 6: 94-98

Correspondence/Korrespondenzadresse

Prof. Dr. Jan Weichert
Bereich Pränatalmedizin und Spezielle Geburtshilfe
Klinik für Frauenheilkunde und Geburtshilfe
Universitätsklinikum Schleswig-Holstein, Campus Lübeck
Ratzeburger Allee 160
23538 Lübeck
Germany   

Publication History

Received: 22 April 2021

Accepted: 01 June 2021

Article published online:
04 November 2021

© 2021. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany


Fig. 1 Alan M. Turingʼs 1950 review paper on "machine" intelligence formed the conceptual basis for the "Turing test", used to ascertain whether a machine can be said to exhibit artificial intelligence. The figure also places the development of artificial intelligence and its applications in a temporal context: machine learning and deep learning are not merely related in name; deep learning is a modelling approach that enables, among other things, problems in modern fields such as image recognition, speech recognition and video interpretation to be solved significantly faster, and with a lower error rate, than is feasible for humans alone [4], [47], [55].
Fig. 2 Schematic design of a (feed-forward) convolutional network with two hidden layers. For pattern recognition, the source information is segmented and abstracted in these layers and ultimately passed on to the output layer. The capacity of such neural networks can be controlled by varying their depth (number of layers) and width (number of neurons/perceptrons per layer).
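To make these capacity levers concrete, the following minimal sketch (assuming PyTorch; the input size, channel widths and two-class output are illustrative choices, not parameters of any system discussed in this article) builds a small feed-forward convolutional network whose depth and width can be varied:

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, in_channels: int = 1, width: int = 16, n_classes: int = 2):
        super().__init__()
        # Two hidden (convolutional) layers; stacking further blocks increases depth,
        # raising `width` increases the number of feature maps per layer.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(width, 2 * width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # collapse spatial dimensions
            nn.Flatten(),
            nn.Linear(2 * width, n_classes),  # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One grey-scale 128 x 128 frame as dummy input
logits = SmallConvNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2])
```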
Fig. 3 The optical ultrasound simulator Volutracer O. P. U. S. Any volume data set (see also [Fig. 5]) can be uploaded and adapted, for instance for teaching, by post-processing to acquire the appropriate planes (so-called freestyle mode, without simulator instructions). In the upper right-hand corner of the screen, the system provides graphical feedback to guide the movements needed to reach the correct target plane. The simulation software also includes a variety of cloud-based training datasets that teach users the correct settings via a GPS tracking system and audio simulator instructions with overlaid animations. Among other things, the system measures the position, rotation angle and time until the required target plane is achieved and compares these with an expert reference that can likewise be viewed.
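As a purely illustrative sketch of the kind of deviation metrics such a simulator can report (explicitly not the Volutracer scoring code), the following compares a hypothetical trainee probe position and orientation against a stored expert reference using NumPy; all values are invented:

```python
import numpy as np

def pose_deviation(pos_mm, direction, ref_pos_mm, ref_direction):
    """Return (translation error in mm, angular error in degrees) between a
    trainee probe pose and an expert reference pose."""
    d_trans = float(np.linalg.norm(np.asarray(pos_mm, float) - np.asarray(ref_pos_mm, float)))
    a = np.asarray(direction, float)
    b = np.asarray(ref_direction, float)
    cos_angle = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    return d_trans, float(np.degrees(np.arccos(cos_angle)))

# Hypothetical trainee pose vs. expert reference, plus time to target plane
error_mm, error_deg = pose_deviation([10, 2, 55], [0.0, 0.2, 1.0], [12, 0, 54], [0.0, 0.0, 1.0])
print(f"offset {error_mm:.1f} mm, angle {error_deg:.1f} deg, time 38 s vs. expert 21 s")
```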
Fig. 4 Four-chamber view of a foetal heart in week 23 of pregnancy. The foetal spine is located at 3 oʼclock, and the four-chamber view is displayed in a slightly oblique orientation. In addition to the abdominal and cardiac circumference, the inner outline of the atria and ventricles is automatically recognised, traced and quantified in the static image. In the same way, HeartAssist can annotate and measure all other diagnostic cardiac planes (axial/longitudinal).
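The following generic sketch (assuming OpenCV and an invented pixel spacing; it is not the HeartAssist algorithm) illustrates the underlying principle of detecting echo-poor chamber lumina as contours in a static frame and quantifying their areas:

```python
import cv2
import numpy as np

def chamber_areas_mm2(frame: np.ndarray, pixel_spacing_mm: float = 0.1):
    """Return the areas (mm^2) of dark, fluid-filled regions in a greyscale
    uint8 B-mode frame; `pixel_spacing_mm` is an assumed pixel edge length."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    # Chamber lumina are echo-poor (dark): invert so they become foreground.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas_px = [cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 200]
    return sorted(a * pixel_spacing_mm ** 2 for a in areas_px)

# Synthetic demo frame: bright "myocardium" with one dark "chamber"
demo = np.full((256, 256), 200, dtype=np.uint8)
cv2.circle(demo, (128, 128), 40, 20, -1)
print(chamber_areas_mm2(demo))  # roughly pi * (40 * 0.1 mm)^2, i.e. about 50 mm^2
```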
Fig. 5 Program interface of 5DHeart (foetal intelligent navigation echocardiography, FINE) with automatically reconstructed diagnostic planes of an Ebsteinʼs anomaly in a foetus in week 33 of pregnancy (STIC volume). The atrialised right ventricle is clearly visible as the lead structure in the laevorotated four-chamber view (cardiac axis > 63°). After the automated software has been applied, the foetal back is positioned at 6 oʼclock by default (volume acquisition, by contrast, was performed at 7 – 8 oʼclock, see [Fig. 5]). Analysis of the corresponding planes also revealed a tubular aortic stenosis (visualised in the three-vessel view, five-chamber view, LVOT and aortic arch planes).
Fig. 6 Software tools for functional analysis of the foetal heart. Semi-automated speckle tracking analysis using fetalHQ in the foetus with Ebsteinʼs anomaly examined in [Figs. 3] and [5] (a). A manually selected cardiac cycle is analysed by automatic contouring of the endocardium of the left and/or right ventricle, with subsequent quantification of functional variables such as contractility and deformation. Automated calculation of the (modified) myocardial performance index (MPI, Tei index) from spectral Doppler recordings of blood flow across the tricuspid and pulmonary valves using MPI+ (b).
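The quantity reported in b is the myocardial performance (Tei) index, MPI = (ICT + IRT)/ET, derived from the isovolumetric contraction time (ICT), isovolumetric relaxation time (IRT) and ejection time (ET) read off the Doppler trace. A minimal sketch with invented, non-patient values:

```python
def myocardial_performance_index(ict_ms: float, irt_ms: float, et_ms: float) -> float:
    """Tei index: (isovolumetric contraction + relaxation time) / ejection time."""
    return (ict_ms + irt_ms) / et_ms

# Illustrative interval durations in milliseconds (not patient data)
print(round(myocardial_performance_index(ict_ms=32.0, irt_ms=43.0, et_ms=170.0), 2))  # 0.44
```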
Fig. 7 (Semi-)automatic reconstruction, after application of 5DCNS+, of an axially acquired 3-D volume of the foetal CNS (biparietal plane) in a foetus with semilobar holoprosencephaly in week 23 of pregnancy. The complete neurosonogram reconstructed from the source volume comprises the 9 required diagnostic sectional planes (3 axial, 4 coronal and 2 sagittal). In the axial planes, automatic biometric measurements (not shown) are taken, which can subsequently be adjusted manually at any time.
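As a simple illustration of re-slicing a stored 3-D volume into axial, coronal and sagittal sections (the anatomically correct plane placement that 5DCNS+ performs automatically is not attempted here), a NumPy array can be indexed along its three axes; the array shape and slice indices below are arbitrary:

```python
import numpy as np

volume = np.random.rand(160, 200, 200)   # voxel grid: (slice, row, column)
axial = volume[80, :, :]                  # one axial section
coronal = volume[:, 100, :]               # one coronal section
sagittal = volume[:, :, 100]              # one sagittal section
print(axial.shape, coronal.shape, sagittal.shape)  # (200, 200) (160, 200) (160, 200)
```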
Fig. 8 Automated sectional plane reconstruction of a foetal thigh in week 35 of pregnancy for estimation of foetal weight (soft tissue mantle of the thigh reconstructed with 5DLimb). After 3-D volume acquisition of the transversely aligned thigh, the soft tissue volume calculated in this way can be used to improve the accuracy of foetal weight estimation.
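Conceptually, such a fractional limb volume corresponds to integrating the measured soft tissue cross-sectional areas over a defined fraction of the diaphysis length. The sketch below performs this integration with the trapezoidal rule using invented values; it is not the 5DLimb implementation:

```python
import numpy as np

def fractional_limb_volume_cm3(slice_areas_cm2, covered_length_cm):
    """Trapezoidal integration of equidistant cross-sectional areas (cm^2)
    over the measured fraction of the limb length (cm)."""
    areas = np.asarray(slice_areas_cm2, dtype=float)
    dx = covered_length_cm / (len(areas) - 1)   # spacing between sections
    return float(dx * (areas[0] / 2 + areas[1:-1].sum() + areas[-1] / 2))

# Five invented thigh cross-sections covering 4 cm of diaphysis length
print(fractional_limb_volume_cm3([28.0, 31.5, 33.0, 30.0, 26.5], 4.0))  # about 121.8 cm^3
```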