Open Access
CC BY 4.0 · J Neuroanaesth Crit Care 2024; 11(03): 167-178
DOI: 10.1055/s-0044-1787844
Review Article

The Promise of Artificial Intelligence in Neuroanesthesia: An Update

Zhenrui Liao
1   Department of Neuroscience, Columbia University, New York, New York, United States
,
Niharika Mathur
2   School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, United States
,
Vidur Joshi
3   Department of Biomedical Engineering, Steven's Institute of Technology, Hoboken, New Jersey, United States
,
Shailendra Joshi
4   Department of Anesthesiology, Columbia University, New York, New York, United States
 

Abstract

Artificial intelligence (AI) is poised to transform health care across medical specialties. Although the application of AI to neuroanesthesiology is just emerging, it will undoubtedly affect neuroanesthesiologists in foreseeable and unforeseeable ways, with potential roles in preoperative patient assessment, airway assessment, predicting intraoperative complications, and monitoring and interpreting vital signs. It will advance the diagnosis and treatment of neurological diseases due to improved risk identification, data integration, early diagnosis, image analysis, and pharmacological and surgical robotic assistance. Beyond direct medical care, AI could also automate many routine administrative tasks in health care, assist with teaching and training, and profoundly impact neuroscience research. This article introduces AI and its various approaches from a neuroanesthesiology perspective. A basic understanding of the computational underpinnings, advantages, limitations, and ethical implications is necessary for using AI tools in clinical practice and research. The update summarizes recent reports of AI applications relevant to neuroanesthesiology. Providing a holistic view of AI applications, this review shows how AI could usher in a new era in the specialty, significantly improving patient care and advancing neuroanesthesiology research.


Introduction

From its inception, research in artificial intelligence (AI) has challenged the human monopoly on knowledge and creativity.[1] Descriptions of AI invariably refer to mimicking different aspects of human cognition in a machine; in recent iterations, particular interest has been devoted to learning from experience without explicit programming. Turing (1950) first distilled this mimetic conception of AI in his now-famous Test, which he called "the imitation game." Turing proposed that the criterion for deeming a machine capable of "thinking" is whether it can "trick" a human into believing it is another human. Like humans, such devices might be expected to learn, reason, self-correct, and create independently by processing vast amounts of visual, textual, and speech data. Based on these capabilities, many definitions of AI have emerged over time.[2] John McCarthy coined the term "artificial intelligence," defining AI as "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods." [3]

Health care relies strongly on perception, cognition, and judgment, which human intelligence alone has been able to provide for most of history; introducing any substitute will fundamentally change the delivery of health care. Since the time of Hippocrates, health care has relied on trust in other humans and often their subjective judgments. How will we evaluate life-impacting decisions made by algorithms?[4] [5] A high-level understanding of the modern field of AI, including a frank assessment of its capabilities and limitations, is necessary to answer this question. While reports of the impact of AI on anesthesiology are rapidly emerging,[6] [7] [8] [9] there are very few reports of AI applications to neuroanesthesiology.[10] The role of AI in neuroanesthesiology was addressed by a review in this journal in 2020.[11] This article provides greater insights into underlying computational models, enumerates the advantages and disadvantages of AI, and includes novel reports of AI-anesthesiology applications developed since then. AI will have a transformative impact on neuroanesthesiology directly and indirectly, but also raises serious legal and ethical questions regarding transparency, privacy, data security, trust calibration, concealed bias, intellectual property, and professional liability that must be acknowledged.[5] Neuroanesthesiology should develop a tempered optimism for AI, embracing enthusiasm while acknowledging significant limitations ([Fig. 1]).

Fig. 1 Anticipated impact of AI on neuroanesthesiology patient care. AI, artificial intelligence.

An Overview of AI and Data Processing

Central to cognition, biological or engineered, is acquiring, processing, and acting on information from the environment. [Fig. 2] shows the data preparation needed for AI analysis and the overlap between various AI fields. Chae recently provided a detailed technical review of AI in anesthesiology.[12] We first remark that three closely related terms in AI are often (erroneously) used interchangeably:

Fig. 2 Data processing for artificial intelligence, machine learning, and deep learning.
  • AI is an umbrella term for computationally replicating human cognition and problem-solving.

  • Machine learning (ML) is a subset of AI. It describes the approach of using large datasets to learn patterns implicitly that enable a system to perceive, reason, predict, or interact with its environment. The entity in ML that learns from the data and holds the learned information is called the model.

  • Deep learning (DL) is a subfield of ML that uses a specific class of algorithms, called neural networks, to learn input–output relationships from large amounts of data. It is the most successful paradigm in modern AI. Given sufficient raw data (e.g., images, text, speech), neural networks can learn functions that would be very difficult to program manually. Nonetheless, human involvement is still required in designing the architecture of the network, selecting the data on which it will be trained, and evaluating its performance.

Non-ML approaches, such as search algorithms and symbolic reasoning, have played a significant role in the history of AI and have experienced a resurgence in some domains. Nonetheless, as medicine is a data-driven field, most of the questions of interest can be reframed as ML questions; hence, this review is devoted to discussing techniques that broadly fall within the ML subset of AI.

Input and Output

Modern AI systems encode their inputs and outputs as vectors. Many forms of data can be encoded as vectors; for example, binary data can be encoded as 0 or 1; laboratory values can be encoded as a list of numbers; radiographic images or pathology slides can be encoded as lists of pixel values; multimodal data can be created simply by concatenating these lists together. The input unit to a model, which may correspond to a set of laboratory values, a pathology slide, or a magnetic resonance image, is called a “sample.” A sample usually consists of multiple “features”: the individual values in a basic metabolic profile or each pixel in an image or video, which must be used together to produce the desired output. The input and output data sizes are predetermined before training the model. The steps in data processing are shown in [Figs. 2] and [3]. For example, a model may take a fluoroscopic image from an angiogram as input and extract the features to produce an output, such as whether the image contains a normal vessel, distal, or proximal vasospasm.
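To make this concrete, the following minimal Python sketch (using NumPy, with entirely hypothetical feature names and values) shows how laboratory values, image pixels, and a binary flag might be concatenated into a single feature vector of fixed size:

```python
import numpy as np

# Hypothetical sample: a few laboratory values encoded as a feature vector
labs = np.array([140.0, 4.2, 1.1])        # e.g., sodium, potassium, creatinine

# A toy grayscale image encoded as a flat list of pixel intensities
image = np.random.rand(8, 8).flatten()    # 64 pixel features

# Binary feature, e.g., history of hypertension (1 = yes, 0 = no)
hypertension = np.array([1.0])

# A simple multimodal sample: concatenate the feature lists into one vector
sample = np.concatenate([labs, image, hypertension])
print(sample.shape)                       # (68,): the model's fixed input size
```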

Fig. 3 The concept of an electronic neuron is shown at the top, and that of a fully connected convolutional neural network showing the stages of angiographic image analysis is shown at the bottom.


Organization of Neural Networks

Neural networks have become the most popular model in ML due to their capacity for learning highly complex input–output relationships. It has been mathematically proven that any continuous function can be approximated to arbitrary accuracy by a sufficiently large neural network (the universal approximation theorem). The simplest neural network, the perceptron, consists of a single neuron that accepts a vector of features as input and returns a binary (0 or 1) output. Deep neural networks have many such connected neurons ([Fig. 3]).

Biological neurons inspired the development of the perceptron. A perceptron receives a set of input features (its "dendrites"). A synaptic "weight" is applied to each feature, representing the importance that the neuron places on that feature. The weighted sum of these features, plus a bias term, passes through a nonlinear activation function and generates an output value (in the simplest case, 0 or 1: no spike or spike) carried along its "axon." To train the perceptron, pairs of inputs and known outputs are provided, and the weights and bias are iteratively adjusted so that the output matches the ground truth. A limitation of the perceptron is that only linear (linearly separable) functions of its inputs can be learned. This limitation can be circumvented by networking many perceptrons in a neural network.
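As an illustration only, the following Python sketch implements a toy perceptron with a step activation and the classic perceptron learning rule on a simple linearly separable problem (the logical AND function); it is not drawn from any of the cited studies:

```python
import numpy as np

def perceptron_output(x, w, b):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Toy training data: the logical AND function (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                      # iterate over the data several times
    for x_i, y_i in zip(X, y):
        error = y_i - perceptron_output(x_i, w, b)
        w += lr * error * x_i                # nudge weights toward the ground truth
        b += lr * error                      # nudge the bias

print([perceptron_output(x_i, w, b) for x_i in X])  # expected: [0, 0, 0, 1]
```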

Neural Networks

In a neural network, many neurons are linked, drawing inspiration from the human cortex.[13] Like the cortex, the neurons in the network are organized in layers, with the neurons of each layer feeding input to the next, as described in an earlier review in this journal.[11] Each layer of a neural network can be described by the number of neurons it contains (its width), and the network as a whole can be characterized by the number of layers it has (its depth).[11] [12] [Table 1] describes several types of neural networks. By convention, the first layer is referred to as the input layer, the final layer as the output layer, and any intermediate layers are called hidden layers. A neural network is called deep if it has many hidden layers; as the depth of a network grows, it gains the ability to learn more complex functions but also requires more data to train effectively.

Table 1

Types of neural networks

Perceptron (threshold logic unit)
  • Characteristic: Single node; supervised learning; binary classifier.
  • Advantage: Efficient for simple logic such as AND or NAND.
  • Disadvantage: Cannot solve nonlinear problems such as the Boolean XOR function.
  • Application: Binary data classification.
  • Example: Emergence from anesthesia.[88]

Multilayer perceptron (MLP)
  • Characteristic: Fully connected nodes; multiple hidden layers; weights assigned by backpropagation.
  • Advantage: The "standard" neural network, trained easily with backpropagation.
  • Disadvantage: Limited ability to handle problems with a temporal or spatial component; difficult to encode context; many parameters to train.
  • Application: Classification; often used as part of a more complex modular network.
  • Example: Emergence from anesthesia.[88]

Convolutional NN (CNN)
  • Characteristic: Convolutional layers, inspired by biological visual processing, share weights across space, enabling piecemeal image processing, pooling, and flattening before an MLP output stage.
  • Advantage: Effective for image analysis; permits deep learning with few parameters.
  • Disadvantage: Complicated design, challenging to maintain; requires specialized hardware (GPU acceleration) for performance.
  • Application: Image processing and vision, speech, and translation; medical imaging and anomaly detection.
  • Example: Landmark identification for regional and spinal blocks.[89]

Recurrent NN (RNN)
  • Characteristic: Inspired by biological networks, the recurrent layer allows information to flow in loops rather than simply forward, endowing the network with memory that persists over time.
  • Advantage: Sequential data modeling; can be combined with MLP or convolutional layers.
  • Disadvantage: Training is difficult; gradients vanish or explode when backpropagated to early layers.
  • Application: Text processing, including grammar and language suggestions; text to speech; translation; handwriting recognition.
  • Example: Assessing the depth of anesthesia.[59]

Long short-term memory NN (LSTM)
  • Characteristic: A subtype of RNN in which nodes have memory cells with input, output, and forget gates.
  • Advantage: Effective in classifying and processing sequential data.
  • Disadvantage: Complicated; requires extensive training; slow to train.
  • Application: Speech recognition; language modeling; drug design; medical apps.
  • Example: Predicting anesthetic events.[90]

Sequence-to-sequence model
  • Characteristic: Two RNNs work together, one encoding the input and the other decoding the output.
  • Advantage: Handles long sequences of variable length.
  • Disadvantage: Problems handling context information and long-term dependencies.
  • Application: Chatbots; language translation; question-answering systems; text summaries.
  • Example: Generating medical reports from patient–doctor conversations.[91]

Modular NN
  • Characteristic: Several networks function independently to achieve the output.
  • Advantage: The modular approach decreases crosstalk and increases efficiency.
  • Disadvantage: Problems can arise while fragmenting the inputs.
  • Application: High-level input data compression; character recognition; market predictions.
  • Example: Predicting surgery duration.[92]

Transformers
  • Characteristic: Use an "attention mechanism" to process sequential data (e.g., text, audio) in a context-aware way.
  • Advantage: Large encoders and decoders of sequential data that identify contextual relationships; can be trained in a self-supervised manner.
  • Disadvantage: Although they are replacing CNNs and RNNs, they have high computational demands; vast amounts of data and compute are required for training.
  • Application: Text and speech analysis; simultaneous translation; gene analysis.
  • Example: Decoding EEG and predicting the depth of anesthesia.[93]

Self-organizing maps
  • Characteristic: Simplify high-dimensional data while preserving its topographical structure by determining the best matching unit.
  • Advantage: Based on competitive learning, using unsupervised training to generate a lower dimensional map space that is then applied to the input data.
  • Disadvantage: Need large datasets; computation time and costs.
  • Application: Visualization of large datasets, with fraud-detection and market-analytics applications that could be useful for health care.
  • Example: Surgical skill assessment.[94]

Generative adversarial networks (GANs)
  • Characteristic: A generator creates fake data that a discriminator assesses; the generator is refined until the fake can no longer be detected.
  • Advantage: Creates novel data indistinguishable from real data; flexible data generation of any size.
  • Disadvantage: The conflict between the generator and discriminator makes training difficult and time-consuming.
  • Application: Image creation; text-to-image translation; image editing.
  • Example: Pain management.[95]
Abbreviation: EEG, electroencephalogram.



Information Flow through a Neural Network

In the simplest deep neural network, each neuron in a layer simultaneously applies a nonlinear activation function to the weighted sum of its inputs. The output of every neuron in the layer is then fed to every neuron in the next layer via a layer of synaptic weights. In a fully connected network, if one layer has N neurons and the next has M neurons, the two layers are connected by N × M weights, represented by a matrix. This process is repeated for each subsequent layer until the output layer is reached. Thus, information from the input can be recombined several times before reaching the output, allowing complex relationships among the input features to be learned. The number of weights grows quickly with the size of the network: a fully connected network with L layers of N neurons each has N × N weights between each pair of adjacent layers, or roughly (L - 1) × N² weights in total.
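The following Python sketch (using NumPy, with an arbitrary choice of layer sizes and a ReLU activation) illustrates this forward flow of information as a sequence of matrix multiplications and nonlinearities:

```python
import numpy as np

def relu(z):
    """Nonlinear activation applied elementwise."""
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Hypothetical fully connected network: 10 input features -> 8 -> 6 -> 1 output
layer_sizes = [10, 8, 6, 1]
weights = [rng.normal(size=(n_in, n_out))          # N x M weight matrix per layer pair
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

x = rng.normal(size=10)                            # one input sample (feature vector)
activation = x
for W, b in zip(weights, biases):
    activation = relu(activation @ W + b)          # weighted sum, then nonlinearity

print(activation)                                  # network output
print(sum(W.size for W in weights))                # total number of weights (134)
```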


Training a Neural Network

Neural networks are trained using the backpropagation algorithm. The mathematical details of this algorithm are outside the scope of this review but are available in web tutorials.[14] However, the intuition behind backpropagation is similar to that of the perceptron training algorithm: pairs of inputs and ground-truth outputs are presented, and the weights in the network are adjusted to minimize the loss, the difference between the network output and the desired ground-truth output. This procedure requires modifying the weight layers of the network in reverse order (i.e., from output to input, calculating a new error at each layer to be used as the loss of the previous layer), hence the name. Because this procedure requires error signals to travel backward through the same synaptic weights used in the forward pass, most neuroscientists do not consider backpropagation to be plausible in biological networks.
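For illustration, the sketch below trains a tiny two-layer network on the XOR problem, which a single perceptron cannot learn, using a hand-coded backpropagation step; the architecture, learning rate, and loss are arbitrary choices for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: a nonlinear problem a single perceptron cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer: 2 -> 4
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer: 4 -> 1
lr = 1.0

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error from output to input, layer by layer
    d_out = (out - y) * out * (1 - out)           # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)            # error passed back to the hidden layer

    # Gradient-descent updates that reduce the loss
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                        # should approach [0, 1, 1, 0]
```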



Steps in Applying AI to a New Problem

Planning

The first step in applying AI to a new problem is to define the project's objective and scope with key participants and experts. These projects are resource- and time-intensive and should be periodically assessed to ensure success. From the beginning, major AI projects should involve all stakeholders, including investigators, data scientists, patients, ethicists, and health care professionals. Trained personnel should be available to troubleshoot technical problems and assess progress. Additionally, at the planning stage, the entire impact of the project has to be reviewed holistically, keeping the performance of the final product in mind. The assessment of AI/ML models includes problem fit; data availability, quality, and quantity; model testing frequency and scope; scalability and integration; interpretability and explainability; flexibility and customization; ethical and regulatory concerns; and marketing, support, and monitoring.


Data Acquisition and Processing

AI's main strength is its ability to learn from vast amounts of data, with or without human intervention, to discover patterns humans cannot articulate or program explicitly. The volume of data needed to train DL applications is sometimes in the petabyte range, or approximately 500 billion printed pages. As shown in [Fig. 2], many data types and sources are used for health care purposes. Insufficient, mislabeled, noisy, incomplete, obsolete, or biased data can all adversely affect the performance of an AI model. The resulting predictions may be incorrect or may invisibly reproduce undesirable biases present in the training data.

Once the data are identified from a source, they have to be standardized and cleaned, and assessed for outliers, errors, missing values, and overlapping features. Features in a dataset can be dropped because of a lack of significance, redundancy, sparsity, or missing values, or replaced using dimensionality reduction techniques (discussed later) that represent the same data with fewer features.[13]
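As a minimal illustration, the following Python sketch (using pandas, with a small hypothetical perioperative dataset) shows typical cleaning steps: flagging an implausible outlier, imputing missing values, and standardizing the features:

```python
import numpy as np
import pandas as pd

# Hypothetical perioperative dataset with typical data-quality problems
df = pd.DataFrame({
    "age":      [54, 61, np.nan, 47, 230],   # one missing value, one entry error
    "map_mmHg": [82, 75, 90, np.nan, 88],    # mean arterial pressure with a gap
    "asa":      [2, 3, 2, 2, 3],
})

# Remove a physiologically implausible outlier (simple rule-based check)
df.loc[df["age"] > 120, "age"] = np.nan

# Impute missing values (here, with the column median) and standardize features
df = df.fillna(df.median(numeric_only=True))
standardized = (df - df.mean()) / df.std()

print(standardized.round(2))
```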


Types of ML Models

Most ML models can be categorized as supervised, unsupervised, or reinforcement learning. Some newer models, particularly transformers (on which large language models such as ChatGPT are based), do not fit easily into this traditional categorization. The technical details of the underlying algorithms are beyond the scope of this review but are discussed by Hastie et al (2009), Bishop (2006), and Goodfellow et al (2016).[15] [16] [17]

  • Supervised learning: supervised learning models are trained on labeled datasets, where each input is associated with a corresponding ground-truth output. The model's parameters (for example, the weights of a neural network) are updated using these ground-truth input–output pairs to improve subsequent predictions. The goal of training is to find a set of parameters that generalize well to unseen data; a model that performs well on the training data but poorly on new data is said to be overfitted to the training set (akin to a student who has memorized an exam answer key but is unable to apply the information to answer different questions). To detect overfitting, the available data may be divided into a training set (used to train the model) and a test set (used to evaluate the model's performance after training). Because the model has not seen the test data during training, the test data can be used to estimate how the model might perform in the real world (minimal code sketches follow this list).

    • Classification is the subcategory of supervised learning that assigns a discrete label (such as yes/no or a choice from a list of possibilities) to each data sample. Techniques used for classification include decision trees, random forests, support vector machines, logistic regression, and neural networks.

    • Regression is the subcategory of supervised learning that assigns a continuous numerical label to each sample. Regression techniques include linear and nonlinear models.[18]

  • Unsupervised learning: unsupervised learning uses unlabeled data and lets the algorithm determine the underlying patterns without human intervention or guidance. Unsupervised techniques are helpful for data exploration, hypothesis generation, anomaly detection, dimensionality reduction, and data clustering. Clinically, unsupervised learning could identify clusters of patients based on genetic markers that may have different responses to treatments (see the sketches following this list).

    • Clustering of the data refers to discovering groups based on its natural distribution rather than on external labels. Clusters can then be examined to determine what drives the differences between those groups.

    • Dimensionality reduction refers to transforming data with hundreds or thousands of features to data with a small number of transformed features (often only two or three for visualization) while preserving essential features such as distances between samples. The projection of a 3D globe onto a 2D map is an everyday example of dimensionality reduction, although, in practice, dimensionality reduction is generally much more dramatic. Algorithms for dimension reduction include principal component analysis, factor analysis, and nonnegative matrix factorization.

  • Reinforcement learning: reinforcement learning attempts to learn optimal actions based on real-time observations to achieve a goal (i.e., maximize reward and/or minimize punishment). It is also used in agent-based robotics to process images of the environment.

  • Beyond supervised/unsupervised: since the early 2000s, DL has fueled the rapid growth of AI technologies. DL models include, but are not limited to, recurrent neural networks, convolutional neural networks, and transformers, some of which are briefly described in [Table 1].[13] This approach has greatly advanced image, text, and speech processing. However, generating sufficient labeled data to train such models is extremely expensive and time-consuming. Self-supervised learning involves using data not labeled manually to train models to learn complex tasks. In self-supervised learning, a model may be asked to predict masked words in a sentence based on context. The masking needed for the model can be performed programmatically without manual labeling.

    • Large language models (LLMs): LLMs use large datasets to comprehend, summarize, create, and predict text contents. These models require training on unlabeled data in the petabyte range. Transformer models that can assess context are generally used for LLMs. Examples of LLMs are:

      • ▪ ChatGPT.

      • ▪ Google Bard.

    • Generative AI: generative AI uses vast amounts of unlabeled data to create new data, such as an essay, report, or a piece of music, using techniques such as diffusion models (e.g., Stable Diffusion) and autoregressive modeling. Generative AI creating images from text often consists of an image generation network layered on top of an already-trained LLM. Examples of generative AI include:

      • ▪ Dall-E.

      • ▪ Midjourney.
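To illustrate the supervised learning workflow described above, the following Python sketch (using scikit-learn and one of its built-in datasets, chosen purely for convenience) trains a classifier on labeled data, holds out a test set, and compares training and test accuracy to check for overfitting:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Labeled dataset: each sample is a feature vector with a ground-truth class label
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                       # supervised training on labeled pairs

# A large gap between training and test accuracy would suggest overfitting
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))
```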
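Similarly, a minimal unsupervised sketch (again using scikit-learn, with the labels deliberately ignored) illustrates dimensionality reduction with principal component analysis followed by clustering:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Unlabeled data: only the feature matrix is used, no ground-truth labels
X, _ = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Dimensionality reduction: project 30 features onto 2 principal components
X_2d = PCA(n_components=2).fit_transform(X)

# Clustering: let the algorithm discover groups from the data's own distribution
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_2d)

print(X_2d.shape)                      # (569, 2)
print(np.bincount(clusters))           # number of samples assigned to each cluster
```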


Evaluation of Supervised Learning Models

ML models must be tested after initial validation and then periodically to check for drift after new features are added or hyperparameters are adjusted. To test a model, its predictions are compared against labeled data. The familiar true-positive, true-negative, false-positive, and false-negative rates from statistics can also be used to evaluate the performance of ML models. Depending on the objective, tests are undertaken as listed in [Table 2].[12]

Table 2

Assessment of AI and ML models

  • Accuracy: Number of correct predictions/total predictions.

  • Precision: True positives/all predicted positives (true positives + false positives).

  • Recall: True positives/all actual positives (true positives + false negatives).

  • Area under the curve (AUC): Summarizes the receiver operating characteristic curve, which plots the true-positive rate (y-axis) against the false-positive rate (x-axis); the closer the curve lies to the upper left corner (AUC approaching 1), the better the model.

  • Log loss: Compares the model's predicted probabilities with the actual labels; a perfect model has a log loss of 0, and larger values indicate worse predictions. A suitable parameter for comparing models.

  • F1 score: Harmonic mean of precision and recall, ideally >0.9.

  • Confusion matrix: A graphic plot of actual versus predicted outcomes for each class in a multiclass classification; easy to comprehend, as the diagonal values show correct results.

Abbreviations: AI, artificial intelligence; ML, machine learning.
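The metrics in [Table 2] can be computed directly with standard libraries; the following Python sketch (using scikit-learn, with made-up labels and predicted probabilities) is a minimal example:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, log_loss, confusion_matrix)

# Hypothetical ground-truth labels and model outputs for a binary classifier
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])   # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                           # thresholded predictions

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_prob))
print("log loss: ", log_loss(y_true, y_prob))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```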




Capabilities and Limitations of AI

  • Capabilities: AI can:

    • Assess and quickly analyze vast amounts of language, image, and text data beyond human, statistical, or traditional programming capabilities.

    • Find unforeseen patterns and solutions using large datasets.

    • Self-learn and self-correct to find better solutions over time.

    • Control devices in remote and extreme situations such as deep space, nuclear sites, weapons disposal, and toxic spills.

    • Enhance personal and organizational capabilities.

    • Progress rapidly as AI software and hardware improve.

  • Limitations: AI:

    • Has a high cost for developing and training new models.

    • Needs large amounts of current and reliable data that are not easy to obtain and may create security, privacy, and ethical concerns.

    • May perform worse than human decision-making.

    • Lacks the ability to process outliers.

    • Is affected by systemic biases in the data, leading to biased output.

    • Lacks true creativity, as the AI solutions require human input.

    • Lacks abstraction and generalization, which limits its use in atypical situations.

    • Needs human oversight in decision-making.

    • Has high energy requirements. Current hardware, which separates the CPU from memory, is energy inefficient and consumes considerable energy; better hardware, such as architectures that integrate processing and memory, is needed to decrease energy consumption.

    • Raises the question of trust due to a lack of transparency/interpretability and contextual awareness.

    • Can “hallucinate,” resulting in grossly erroneous output unexplainable by the data.

    • Lacks emotive factors while assessing complex situations, particularly in health care.

    • Raises concerns regarding liability and accountability when the technology fails.

    • Is subject to regulatory oversight that is poorly developed, variable, and lagging.

    • Is often developed by for-profit entities whose interests are not necessarily aligned with the public's.


Applications of AI Relevant to Neuroanesthesiology

Very little literature assesses the direct impact of AI on neuroanesthesiology. One recent report evaluated whether ChatGPT could accurately report the Society of Neuroanesthesiology and Critical Care guidelines and recommendations and found it inadequate.[10] In the absence of more original research specifically relevant to neuroanesthesiology, one has to take a more holistic view of the field, because changes in neuroanesthesiology practice will be a subset of the changes in the overall practice of medicine, such as in health care delivery, hospital operations, neurology, anesthesiology, and research.

  • Health care delivery applications: AI's role in health care management can be divided into three phases [19]:

    • Early phase: administration, health information, and electronic data acquisition and analysis.

    • Intermediate phase: telemedicine, where current deficiencies justify the risks.

    • Late phase: AI-based medical applications directly involved in clinical decision-making, which carries the risk of medical liability.

One advantage of AI is the ability to develop dynamic management strategies with built-in social and ethical considerations[20] that can ensure a fair allocation of resources across genders and races.[21] [22]

  • AI's impact on hospital-wide operations[4]: AI can significantly improve hospital management, including personnel oversight, scheduling, secretarial assistance, patient outreach, and satisfaction surveys.[23] [24] By voice-to-text transcription alone, AI could reduce physicians' work time by 17% and nurses' by 51%.[5]

  • AI's impact on neurology:

    • AI-enhanced neuroimaging: contextual interpretation of imaging data using radiomics for better diagnosis and prognostication.[25] [26] [27] [28]

    • AI-based automated handwriting and gait analysis and rapid electroencephalogram (EEG) signal processing will assist in diagnosing neurological diseases.[29] [30]

    • Building the connectome for understanding cortical functioning[31] and how it is affected by aging.[32] [33]

    • AI-assisted diagnosis of diseases that require neuroanesthesia care, including stroke,[34] Parkinson's disease,[35] epilepsy,[35] [36] and intracranial hemorrhage.[37] [38] Early and reliable diagnosis and classification of brain tumors.[27] [39] [40]

  • AI's impact on anesthesiology: the application of AI in anesthesiology was recently reviewed by several authors.[41] [42] [43] [44] [45] [46] [47] The applications range from scheduling the operating room to reviewing the electronic medical record (EMR) to predicting outcomes, as recently described by Cascella et al.[45] AI can direct anesthetic drug dosing to maintain a targeted level of EEG activity,[48] predict the likelihood of perioperative complications,[44] assess the risk of difficult intubation,[49] [50] predict drug doses and delivery,[43] anticipate intraoperative complications such as hypoxia[51] and hypotension[52] [53] by closely analyzing vital signs and/or EMR data, improve the reliability of vital sign alarms,[54] and help with postoperative monitoring,[55] pain treatment,[47] mortality prediction,[56] and anesthesia training.[57] Despite concerns regarding privacy and liability, there is optimism that AI will improve decision-making in the operating room and during perioperative care.[42] In particular, assessing the airway, predicting hypotension, and improving EEG monitoring are areas of technological development of immense relevance to neuroanesthesiology.

  • AI and EEG analysis: EEG is a powerful neurological tool that provides real-time information on the functioning of the cortex. AI-assisted advances in EEG monitoring are highly relevant to treating epilepsy by establishing the nature, source, and severity of seizures. EEG is routinely monitored during neurovascular surgery and helps assess the depth of anesthesia.[58] Li et al found that a long short-term memory network with a sparse denoising autoencoder could predict the depth of anesthesia during sevoflurane anesthesia better than conventional EEG parameters such as the α ratio or permutation entropy.[59] Since the early 1990s, EEG monitoring has been simplified to numerical values to assess the depth of anesthesia and help titrate anesthetic drugs.[48] Wang et al reported that, with ML input of eight EEG parameters and demographic features, they could predict Bispectral Index (BIS) changes during propofol infusion.[60] Recently, EEG, electrocardiographic (ECG), and electromyographic data have been used together with ML to assess the depth of anesthesia. Nsugbe et al reported that in some situations ECG could reflect the depth of anesthesia better than EEG analysis.[61]

  • AI in assessing the airway: AI has been used to predict difficult intubation from photographic images. Using DL, Hayasaka et al found that supine-side-mouth-base position photographs best predict difficult intubation.[50] Lin et al, aware of the DL approach's lack of explainability, used ML to determine difficult intubation while mimicking a clinical protocol.[49] Wang et al applied semi-supervised DL to satisfactorily predict difficulties in mask ventilation and intubation using head and neck images from nine viewpoints.[62] Yamanaka et al determined that ML models could predict successful first-pass intubation and difficult intubation in the emergency department when compared with the modified LEMON (look, evaluate Mallampati, obstruction, neck mobility) criteria.[63] The technology for robotic intubation has also advanced in recent years.[64] Wang et al successfully used a remote intubation device in pigs.[65] Biro et al successfully demonstrated robotic navigation of an endoscope to intubate.[66] Robotic intubation that enables remote intubation today could function autonomously in the future. Once the tube has been placed, AI can confirm correct placement on chest X-ray films.[67]

  • AI can predict intraoperative events such as hypoxia and hypotension: one of AI's important applications is accurately predicting the outcome of surgery and anticipating intraoperative complications. Lundberg et al reviewed data from 50,000 EMRs to anticipate the risk of hypoxia during surgery; with ML, the ability to anticipate hypoxia based on patient and procedure characteristics roughly doubled.[68] Park et al used vital signs and ventilatory data to predict episodes of intraoperative hypoxia in pediatric patients with three ML approaches.[51] Kendale et al reviewed EMRs for comorbidities, drug treatment, and vital signs to reliably predict postinduction hypotension with ML.[52] Hatib et al successfully predicted hypotension with high-fidelity analysis of arterial waveforms.[69] One concern about using AI in clinical settings is the lack of transparency. To circumvent this issue, van der Ven et al used a standardized patient management algorithm to predict hypotension.[70] Hypotension and hypoxia are two critical concerns during neuroanesthesia, which are further compounded by variations in the patient's position during surgery. The ability to predict such complications is therefore highly relevant clinically.

  • Impact of AI on neuroscience research:

    • Bench research: AI can significantly enhance drug discovery,[71] [72] [73] design better peptide carriers to deliver drugs to the brain,[74] [75] and help develop better pharmacokinetic models.[76]

    • Clinical trial planning and monitoring: AI will significantly impact future clinical trial design by assisting with patient selection[40] and ensuring protocol compliance.[77]

    • Radiomics is an AI application that quantifies tissue characteristics from images. Such image features have been correlated with cellular characteristics, and changes in the features can be tracked over time to assess therapeutic response.[78] [79] Thus, radiomics-based research could generate new insights through the ability to accurately predict outcomes for a given patient and the response to interventions.

      • Stroke: combining radiomics features with clinical characteristics best predicts the outcomes of stroke patients, such as those undergoing endovascular interventions.[80] [81]

      • Gliomas: radiomics, in combination with genomics, can better predict treatment outcomes. Radiomics and tumor tissue characterization could help assess drug delivery and the extent of pseudo-progression of the tumor.[82] [83]

      • Traumatic brain injuries: AI-assisted outcome assessment of traumatic brain injury with multiparameter image assessment will improve patient classification for better outcome predictions and clinical trials.[84] [85] [86] [87]


Conclusion

AI use in anesthesiology is likely to advance rapidly. Few publications currently directly address neuroanesthesia applications, but that is likely to change in the future. This review summarizes the underlying concepts and describes AI's potential and concerns. An overview of AI is necessary to meaningfully understand AI-generated publications, research analysis, and clinical applications. While there is growing enthusiasm about AI, significant concerns, such as privacy, lack of transparency, legal liability, and unrecognized bias, have to be carefully addressed. It is a dangerous fallacy to think of AI as transcending humanity; AI reflects humanity for good and ill. AI mimics our human faculties to perceive, reason, and interact with the world. AI learns and reproduces the human biases present in the data it is trained on. It is reasonable to surmise that the future impacts of AI on health care and society will mirror our collective choices about how health care and society are organized: concentrating control over this technology in the hands of a few will worsen existing inequality, but democratizing it could improve the lives of many. From a narrower perspective, one can safely conclude that AI will play a significant role in neuroanesthesia practice and research well into the future.



Conflict of Interest

None declared.


Address for correspondence

Shailendra Joshi, MD
Department of Anesthesiology, Columbia University, College of Physicians and Surgeons
630 West 168th Street, P&S Box 46, New York, NY 10032
United States   

Publication History

Article published online:
August 6, 2024

© 2024. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India

