Open Access
CC BY 4.0 · Journal of Coloproctology 2025; 45(04): s00451813736
DOI: 10.1055/s-0045-1813736
Review Article

Predictive Analytics in Obstructed Colon Cancer: A Comparative Narrative Review of Clinical and AI-Based Models

Authors

  • Sreejith Kannummal Veetil

    1   Department of General and Laparoscopic Surgery, Christian Medical College, Ludhiana, Punjab, India
  • Parvez David Haque

    1   Department of General and Laparoscopic Surgery, Christian Medical College, Ludhiana, Punjab, India
  • Deepak Jain

    1   Department of General and Laparoscopic Surgery, Christian Medical College, Ludhiana, Punjab, India
  • Binni Sharma

    2   Department of Trauma and Emergency, Bhatti Hospital, Ludhiana, Punjab, India
 

Abstract

Introduction

Malignant large-bowel obstruction (LBO) occurs in 8 to 15% of colorectal cancer cases, and it is linked to high rates of complications and death. Identifying patient risk before surgery is essential to choose between emergency surgery and stenting.

Objective

To predict outcomes in patients with obstructed colon cancer by comparing traditional clinical scoring systems (such as the American Society of Anesthesiologists [ASA] Physical Status Classification System, the Physiological and Operative Severity Score for Enumeration of Mortality and Morbidity [POSSUM], and the Association of Coloproctology of Great Britain & Ireland [ACPGBI] score) with modern artificial intelligence (AI) and machine learning (ML) models (such as Random Forest [RF], K-Nearest Neighbors [KNN], Extreme Gradient Boosting [XGBoost], and nomograms).

Materials and Methods

We conducted a narrative review of peer-reviewed articles on prognostic models for malignant LBO. For each study, we recorded the variables used by the model, how user-friendly and interpretable it was, its performance metrics (such as the area under the receiver operating characteristic [AUROC] curve and accuracy), and its potential for clinical use. We organized the findings into five summary tables, comparing conventional risk scores and AI-based approaches in multiple dimensions. The focus is on practical, clinically-relevant insights rather than detailed algorithmic explanations.

Results

Traditional scores rely on a fixed set of clinical and operative variables. While easy to calculate and widely understood, they perform suboptimally in emergent LBO, with the Colorectal POSSUM (CR-POSSUM) showing an AUROC of approximately 0.65, and the ACPGBI reaching an AUROC of approximately 0.80 in elective settings. Conversely, AI models that leverage multiple perioperative inputs achieve higher accuracy: a Random Forest model yielded an AUROC of approximately 0.79 (95%CI: 0.71–0.87) in training, with values ranging from 0.75 to 0.82 in validation; a KNN model recorded an accuracy of approximately 88% (AUROC: ∼ 0.77); and logistic regression nomograms attained AUC values near 0.84 for specific outcomes.

Conclusion

Models based on AI generally outperform traditional risk scores in predicting short-term outcomes of malignant bowel obstruction, but they require high-quality electronic data and technological infrastructure. Traditional scores offer ease-of-use and interpretability, but they are less accurate in this setting. Clinicians should consider hybrid strategies, such as using familiar scores (ASA, the Charlson Comorbidity Index [CCI]) for quick triage, and AI/nomogram tools for detailed risk when data is available. Future work must address implementation barriers (such as data integration, model explainability, external validation, and prospective impact studies) to translate these tools into real-world perioperative decision-making.


Introduction

Malignant large-bowel obstruction (LBO) is a critical surgical emergency linked to significant morbidity and mortality.[1] It develops in approximately 8 to 15% of new colorectal cancer (CRC) diagnoses, and CRC accounts for most LBO cases. When patients present emergently, they often have systemic disturbances such as dehydration and sepsis, and the overall outcomes are poor. One large series by Manceau et al.[2] reported a 30-day mortality rate of about 7% following emergency resection for obstructive colon cancer, with even higher short-term mortality reported in broader patient groups. Factors such as older age, higher grade on the American Society of Anesthesiologists (ASA) Physical Status Classification System, existing comorbidities, and physiological instability are known to worsen prognosis.[2] Therefore, accurate risk stratification upon presentation is essential to guide the decision on whether to proceed with primary surgery, place a stent, or offer palliative care, and to determine the level of perioperative support needed.

Traditionally, clinicians have used established scoring systems (such as the ASA Physical Status Classification System, the Physiological and Operative Severity Score for Enumeration of Mortality and Morbidity [POSSUM] and its variants, the Acute Physiology and Chronic Health Evaluation [APACHE II], the Sequential Organ Failure Assessment [SOFA], and the Charlson Comorbidity Index [CCI]) to estimate the perioperative risk. These tools combine clinical and laboratory data collected before surgery to generate a quantitative risk estimate. In addition, nomograms based on logistic regression have been created to predict certain complications, such as strangulated bowel obstruction. Recently, machine-learning (ML) methods, including Random Forest (RF), K-Nearest Neighbors (KNN), Extreme Gradient Boosting (XGBoost), and neural networks, have been used to forecast outcomes by analyzing complex perioperative datasets.

The usefulness of these approaches in clinical practice depends not only on their accuracy, but also on their ease of use, interpretability, and ability to integrate into existing workflows. In the current review, we compare traditional scoring systems with artificial intelligence (AI)/ML models to predict outcomes in obstructed colon cancer, focusing on their real-world advantages, limitations, and implementation challenges.


Materials and Methods

We performed a narrative review following our predefined outline, which was supplemented by updated literature searches on risk prediction in malignant LBO. The search terms included obstructed colon cancer risk, malignant bowel obstruction prediction, POSSUM colorectal, ACPGBI score, machine learning colon obstruction, and related phrases. We selected clinical studies that assessed pre- or perioperative risk models for outcomes such as mortality, complications, or failure-to-rescue in patients with obstructed CRC. Studies on AI/ML were only incorporated if they specifically addressed cohorts with colorectal obstruction. We excluded sources that were not peer-reviewed. From each eligible study, we extracted information on model inputs, target outcomes, performance metrics (such as the area under the receiver operating characteristic [AUROC] curve and accuracy), and practical considerations. We then organized this information into comparative tables, evaluating each system's performance, the data elements required, the interpretability, the user-friendliness, and implementation challenges. Our focus was on models applicable upon presentation or during surgery rather than on long-term survival predictions.


Results

Traditional Clinical Scoring Systems

The traditional scoring systems detailed in [Table 1] are straightforward to apply but show only moderate accuracy in emergency cases of obstructed colon cancer. The ASA Physical Status Classification System (grades I–V) correlates with 30-day mortality, as patients classified as ASA III or higher have increased risk; however, its subjective nature and broad categories limit precision.[2] In obstructed settings, most traditional scores achieve AUROC values of approximately 0.60 to 0.70.[3] Because these scores were developed using data from elective surgeries and do not include computed tomography (CT) findings or biomarker information, there is a clear need for models that incorporate variables specific to obstruction. The POSSUM (which uses 12 physiological and 6 operative variables) and its Portsmouth variant (P-POSSUM) provide more details but underperform in emergent obstruction. For instance, in elective CRC, the Colorectal variant (CR-POSSUM) shows an AUROC of approximately 0.88 to 0.89, whereas, in obstructed cases, it drops to roughly 0.65, and the P-POSSUM often underestimates mortality rates (e.g., predicting a rate of 5.9% when the observed rate was 8.9%).[4] Although the Association of Coloproctology of Great Britain & Ireland (ACPGBI) score (which includes age, ASA grade, Dukes' stage, urgency, and whether resection is performed) outperforms the POSSUM in elective CRC, with an AUROC near 0.80, it has not been formally validated in emergency obstruction. Its strong performance in elective settings suggests it may be useful in obstruction if further studies are conducted.[4] Scoring systems focused on the Intensive Care Unit (ICU; such as the APACHE II and SOFA) and the CCI reflect organ dysfunction or chronic disease burden, but they are not designed for preoperative risk assessment in obstructed patients and often require laboratory data that may not be immediately available.

Table 1

Traditional clinical scoring systems for risk estimation

Scoring system

Variables (examples)

Outcome predicted

Reported performance

Interpretability/Use

ASA Physical Status Classification System

1–5 scale of fitness (subjective)

Surgical mortality risk

AUC ∼ 0.6 (low)

Very simple, broad categories

POSSUM

12 physiological + 6 operative variables (vitals, labs, blood loss, etc)

Postoperative mortality/morbidity

AUC ∼ 0.65–0.75

Requires data entry, somewhat complex

P-POSSUM

Recalibrated POSSUM

Mortality

Tends to underpredict in CRC

Similar to the POSSUM

CR-POSSUM

POSSUM recalibrated for CRC

Mortality

AUC ∼ 0.65 in emergent CRC vs ∼ 0.88 in elective CRC

Similar to the POSSUM

ACPGBI (revised)

Age, ASA grade, cancer stage, urgency, resection

30-day mortality

AUC ∼ 0.8 (elective CRC)

Simple sum, validated in elective CRC

APACHE II

Acute physiology, age, chronic health

ICU mortality

Designed for ICU, not specific to CRC

Complex scoring, requires laboratory tests

SOFA

Organ-dysfunction parameters

ICU mortality/morbidity

General critical illness score

Easy to compute in ICU settings

Charlson Comorbidity Index

Comorbidity points (e.g. MI, DM)

1-year mortality

Moderate correlation with surgical risk

Quick comorbidity count

Abbreviations: ACPGBI, Association of Coloproctology of Great Britain & Ireland; APACHE II, Acute Physiology and Chronic Health Evaluation; ASA, American Society of Anesthesiologists; AUC, area under the curve; CR-POSSUM, Colorectal POSSUM; CRC, colorectal cancer; DM, diabetes mellitus; ICU, Intensive Care Unit; MI, myocardial infarction; P-POSSUM, Portsmouth POSSUM; POSSUM, Physiological and Operative Severity Score for Enumeration of Mortality and Morbidity; SOFA, Sequential Organ Failure Assessment.


Table 2

Predictive artificial-intelligence/machine-learning models

Model

Features required

Cohort/Data

Performance

Interpretability

Notes

Random Forest

∼ 8–15 (clinical and laboratory) features

Large multicenter (n > 500)

Training AUC: ∼ 0.79; external AUC ∼ 0.75–0.82

Moderate (feature importances can be shown)

Good accuracy; requires computation

K-Nearest Neighbors

∼ 10–20 features

Single center (n ∼ 99)

Accuracy: ∼ 0.82–0.88; AUC: ∼ 0.77

Low (black-box classification)

Sensitive to feature scaling; simple concept

XGBoost (GBM)

∼ 10 features (varies)

Postoperative ileus cohort (n ∼ 467)

Accuracy: ∼ 0.81; AUC: ∼ 0.64

Low (decision-tree-based ensemble approach)

Powerful on tabular data; needs tuning

Decision Tree

∼ 10 features (varies)

Postoperative ileus cohort (n ∼ 467)

Accuracy: ∼ 0.81; AUC: ∼ 0.64

High (tree structure is explainable)

Overfits easily; simple rules

Logistic Nomogram

4 features (albumin etc.)

SBO cohort (n = 560)

AUC: ∼ 0.84

High (regression equation)

Easy to use chart; static model

ACS NSQIP SRC

∼ 20 patient factors

Broad NSQIP database

Varies according to the procedure

Moderate (proprietary)

Online tool, not specific to obstruction

Abbreviations: ACS NSQIP SRC, American College of Surgeons National Surgical Quality Improvement Program Surgical Risk Calculator; AUC, area under the curve; GBM, Gradient-Boosting Machine; SBO, small-bowel obstruction; XGBoost, Extreme Gradient Boosting.



AI and ML Models

Modern AI and ML techniques are increasingly used to predict outcomes in bowel obstruction, leveraging large datasets to improve accuracy. One multicenter study from China[5] created a Random Forest model that included eight important features—such as the POSSUM physiological score, ASA grade, and neutrophil percentage—and demonstrated strong discrimination to predict early postoperative complications, with an AUROC of 0.788 in the training cohort and ranging from 0.75 to 0.82 in external validation cohorts. This Random Forest model consistently outperformed more traditional logistic-regression methods, highlighting the benefit of decision-tree-based ensemble approaches for complex clinical predictions. K-Nearest Neighbors algorithms have also shown promise,[5] achieving overall accuracy between 0.82 and 0.88 and an AUROC of approximately 0.77 to predict major complications after urgent obstruction surgery. In this study,[5] including malignancy status was crucial, as patients with cancer-related obstruction had significantly worse outcomes than those with benign causes—emphasizing the importance of disease-specific factors in risk models. Gradient-boosting frameworks such as XGBoost and simple decision-tree models have been evaluated for related outcomes, such as postoperative ileus. Although these decision-tree-based AI methods achieved reasonable accuracy (of approximately 0.81), their modest AUROCs (range: 0.64–0.68) suggest that the predictive performance may be limited by smaller sample sizes or variability in how outcomes are defined and measured. Complementing these “black-box” algorithms, straightforward nomograms derived from logistic regression still offer clinical value by providing easy-to-use, chart-based calculators. 
For example, a recent nomogram for predicting strangulated small-bowel obstruction, using predictors such as serum albumin, neutrophil percentage, peritoneal signs, and ascitic fluid, achieved an AUROC of approximately 0.84 in both training and validation cohorts.[5] Such tools balance usability and predictive accuracy, making them practical complements to more complex ML models. A summary of the AI/ML models is presented in [Table 2].
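To make the reported metrics concrete, the sketch below shows how a Random Forest risk model of the kind described above might be trained and evaluated by AUROC. The data are entirely synthetic; the feature names (POSSUM physiology score, ASA grade, neutrophil percentage) follow the reviewed study's inputs, but the coefficients and cohort are invented for illustration only.

```python
# Illustrative sketch: Random Forest risk model evaluated by hold-out AUROC.
# Synthetic cohort only; not the data or model from the reviewed studies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600  # cohort size roughly comparable to the multicenter study (n > 500)

# Hypothetical perioperative features (all synthetic draws).
X = np.column_stack([
    rng.normal(20, 5, n),    # POSSUM physiological score
    rng.integers(1, 5, n),   # ASA grade (I-IV)
    rng.normal(75, 10, n),   # neutrophil percentage
])
# Synthetic complication outcome loosely tied to the features.
logit = 0.08 * X[:, 0] + 0.5 * X[:, 1] + 0.03 * X[:, 2] - 5.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUROC: {auc:.2f}")
```

Evaluating on a held-out split, as here, mirrors the training-versus-validation AUROC distinction made throughout the reviewed studies.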


Comparative Analysis

Comparing traditional scores with AI-based methods highlights trade-offs. Traditional clinical scores require minimal infrastructure and can be calculated quickly, often using only paper charts, making them familiar and easy to interpret, because the contribution of each variable is predefined. However, they still depend on accurate vital signs and laboratory results, which may be delayed during emergencies, and they cannot adapt to new biomarkers or imaging findings without redevelopment. In contrast, AI models can process dozens of inputs (including lab trends, radiology findings, or genomic data) and capture nonlinear interactions, which often leads to better predictive performance.[5] [6] Yet, these models are more opaque (understanding how a Random Forest or XGBoost model arrives at a prediction is difficult), and their implementation requires computational resources, software, and reliable data pipelines. [Table 3] summarizes the practical differences between traditional and AI/ML approaches.

Table 3

Summary of the practical differences between traditional scores and AI/ML models

Factor

Traditional scores

AI/ML Models

Data required

Limited set (vitals, laboratory variables, demographics)

Large, high-dimensional (laboratory variables, imaging, notes)

Ease of computation

Simple calculation (by hand or basic software)

Requires specialized software/IT support

Interpretability

High (clear formula or categories)

Low–Moderate (black-box; needs XAI tools)

Customization

Fixed formula; slow to update

Retrainable; can incorporate new risk factors

Validation

Well-validated historically in general surgery

Must be validated on local population (risk of overfitting)

Deployment

Immediately applicable in any setting

Integration into EHRs/applications needed (see [Table 5])

Typical AUC (obstructive CRC)

∼ 0.60–0.75

∼ 0.75–0.85 (in studies)

Example of ease-of-use

“POSSUM sheet” or online calculator

Web/application calculators or integrated dashboard

Abbreviations: AI, artificial intelligence; AUC, area under the curve; CRC, colorectal cancer; EHR, electronic health record; IT, information technology; ML, machine learning; POSSUM, Physiological and Operative Severity Score for Enumeration of Mortality and Morbidity; XAI, explainable artificial intelligence.




Discussion

The current review demonstrates that AI and ML techniques generally provide superior discrimination regarding outcomes in malignant bowel obstruction compared to conventional risk scores, albeit with increased complexity. In many clinical settings, practitioners still depend on familiar scoring systems; the ASA classification and POSSUM-derived scores, for example, often guide rapid decisions such as ICU versus ward admission because of their established trustworthiness. While scores such as the ACPGBI or CR-POSSUM can broadly stratify risk in CRC patients, they tend to underestimate risk in acute obstruction. By contrast, ML algorithms can detect subtler patterns, such as the combined effect of elevated lactate levels, specific radiological findings, and neutrophil counts to forecast sepsis risk.[6] However, only a few AI models have undergone prospective validation in bowel obstruction, and most existing studies are retrospective and single center, raising concerns about their external validity.

For example, AI algorithms applied to predict postoperative ileus after laparoscopic colon cancer surgery—specifically XGBoost and decision-tree models—achieved the highest accuracy (approximately 0.807) despite only moderate AUROCs (range: 0.638–0.678), underscoring their potential usefulness in the early identification of high-risk patients.[7] A validated nomogram that incorporated albumin, neutrophil count, peritoneal signs, and ascites demonstrated strong performance in predicting outcomes in intestinal obstruction, with AUROCs of 0.842 and 0.839 in training and validation cohorts, respectively.[8] In contrast, traditional scoring systems such as the POSSUM have been shown[9] to lack precision in contemporary perioperative risk assessment in colorectal surgery, reflecting limitations in accuracy and generalizability.

When a predictive model is very complex (such as when it has many parameters or intricate structures) but is trained on a small dataset, it can “memorize” the idiosyncrasies of that specific data rather than learning the underlying patterns. In other words, the model fits the random noise or outliers in the training set instead of capturing the true relationships. As a result, although it may perform exceptionally well on the data it was trained on, it will likely perform poorly on any new or unseen data. This phenomenon—known as overfitting—occurs because the model's complexity exceeds the amount of information that the small dataset can reliably support. Moreover, integrating ML tools into clinical workflows requires substantial information technology (IT) infrastructure and data interoperability—resources that many hospitals do not currently possess. Protecting patient confidentiality during multicenter model training also poses challenges: when predictive models use data from multiple hospitals or clinics, there is a risk of exposing private patient information if all those hospitals send their data to a single, centralized database. Federated learning solves this problem by keeping each hospital's patient data on its own computers. Instead of combining all the data in one place, the learning algorithm travels to each site, uses the local data to update the model, and then shares only the updated model parameters (not the patient records) back to a central server. This way, patient privacy is maintained while still enabling the overall model to benefit from data at multiple centers, which makes the model stronger and more widely applicable.[10]
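The federated workflow just described can be sketched in a few lines of code. Everything below is illustrative: the number of sites, the learning rate, the synthetic data, and the simple federated-averaging scheme are assumptions for demonstration, not a production framework.

```python
# Minimal federated-averaging sketch: each simulated "hospital" computes
# logistic-regression updates on its own data and shares only model weights,
# never patient records. All data is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_data(n):
    # Each site keeps its own synthetic (X, y); nothing leaves the site.
    X = rng.normal(size=(n, 3))
    true_w = np.array([1.0, -0.5, 0.3])
    y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w)))).astype(float)
    return X, y

sites = [local_data(200) for _ in range(3)]  # three hypothetical hospitals
w = np.zeros(3)                              # shared global model weights

for _round in range(50):                     # communication rounds
    updates = []
    for X, y in sites:
        w_local = w.copy()
        for _ in range(5):                   # local gradient steps on-site
            p = 1 / (1 + np.exp(-(X @ w_local)))
            w_local -= 0.1 * X.T @ (p - y) / len(y)
        updates.append(w_local)              # only weights are shared
    w = np.mean(updates, axis=0)             # central server averages updates

print("learned weights:", np.round(w, 2))
```

The key privacy property is visible in the loop structure: raw `(X, y)` arrays stay inside each site's scope, and only the averaged weight vector crosses the (simulated) network boundary.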

Clinicians often hesitate to use AI models whose internal workings are hidden—the so-called “black-box” systems—because they cannot verify how those models arrive at a particular recommendation. If a patient's care depends on a risk score generated by a model, a surgeon or physician needs to understand which factors drove that score before they feel comfortable acting on it. Otherwise, it can seem like taking action on faith rather than evidence. Explainable AI (XAI) techniques address this problem by peeling back the layers of complexity and showing, in human-interpretable terms, which variables most strongly influence each prediction. Two commonly-used XAI methods are Shapley Additive Explanations (SHAP) and rule extraction. In one study,[6] for example, a KNN algorithm designed to predict major postoperative complications after bowel obstruction surgery achieved the highest accuracy (∼ 0.82) and AUROC (∼ 0.77), highlighting how AI tools can surpass traditional methods in perioperative risk estimation. Embedding explanations—such as indicating that low albumin and elevated heart rate contribute most to predicted mortality—can improve clinician acceptance. Nonetheless, even the most accurate AI predictions must be accompanied by clear, clinically-actionable responses; a high-risk alert, for example, should trigger defined interventions such as ICU transfer or discussion about non-operative palliative options. Models based on AI offer enhanced precision through dynamic, data-driven analyses, facilitating more individualized and timely decision-making.[10]
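A small sketch can show what such an explanation looks like in practice. Rather than SHAP (which requires an extra library), the example below uses permutation importance, a dependency-light XAI alternative: each feature is shuffled in turn and the resulting drop in model performance measures that feature's contribution. The predictors (albumin, heart rate) echo the examples in the text but the data and effect sizes are synthetic assumptions.

```python
# Sketch of model explanation via permutation importance: shuffle one feature
# at a time on held-out data and measure how much the score drops.
# Synthetic, illustrative data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
albumin = rng.normal(3.5, 0.6, n)   # hypothetical predictor (g/dL)
heart_rate = rng.normal(90, 15, n)  # hypothetical predictor (bpm)
noise = rng.normal(size=n)          # deliberately uninformative feature
X = np.column_stack([albumin, heart_rate, noise])

# Synthetic outcome: low albumin and high heart rate raise risk.
logit = -2.0 * albumin + 0.05 * heart_rate + 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Importance is computed on the held-out split so that memorized noise
# does not inflate the uninformative feature's apparent contribution.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["albumin", "heart_rate", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An output of this shape ("low albumin drives the prediction most") is exactly the kind of human-interpretable summary the paragraph above argues clinicians need before acting on a risk score.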

In short, federated learning enables hospitals to collaborate on AI development without compromising patient confidentiality, making it a promising strategy to deploy AI widely and securely in healthcare settings.[11]

Longo et al.[12] conducted a seminal, retrospective cohort analysis of patients undergoing colectomy for colon cancer, meticulously examining a broad array of pre- and intraoperative variables to identify independent predictors of postoperative morbidity and mortality. By evaluating factors such as patient age, comorbid conditions (including cardiovascular and pulmonary disease), nutritional status (such as serum albumin), tumor stage, and operative urgency, they[12] were able to quantify each variable's contribution to adverse outcomes using multivariable logistic regression. Their work not only elucidated which clinical and pathological characteristics most strongly influence early postoperative risk—highlighting, for example, the impact of advanced age and elevated ASA classification on 30-day mortality—but also provided benchmark performance metrics (such as odds ratios and confidence intervals) against which subsequent risk-prediction tools could be compared. Because of its rigorous methodology, large sample size, and comprehensive variable selection, this study[12] remains a cornerstone reference in colorectal surgery, offering foundational data that underpins and validates modern prognostic models. Predictive models, whether based on traditional clinical scores or newer ML methods, are therefore a cornerstone of management in obstructive colon cancer.

Bridging these approaches, hybrid models may offer significant benefits. A combined scoring system, for example, might use the ASA classification and the CCI as a baseline and then apply a small ML module to refine risk estimates. Alternatively, AI-derived predictions could be incorporated into an expanded nomogram format. In resource-limited settings, simpler scores (such as the POSSUM or APACHE II) may remain indispensable, whereas facilities with comprehensive electronic records can run AI- or nomogram-based calculators in the background to flag high-risk patients. [Tables 4] and [5] summarize the practical considerations: [Table 4] assesses ease of use and interpretability, while [Table 5] enumerates common implementation barriers.
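The hybrid strategy described above can be illustrated with a short sketch: a familiar rule-based triage score (here, a hypothetical sum of ASA grade and comorbidity points) serves as one input to a small refinement model that also sees a laboratory variable the score ignores. All data, thresholds, and effect sizes are synthetic assumptions, not a validated scoring system.

```python
# Hypothetical hybrid sketch: rule-based triage score plus an ML refinement
# layer that adds a laboratory feature. Synthetic, illustrative data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 500
asa = rng.integers(1, 5, n)        # ASA grade I-IV
cci = rng.integers(0, 8, n)        # Charlson-style comorbidity points
lactate = rng.normal(2.0, 1.0, n)  # extra lab feature the score ignores

baseline_score = asa + cci         # quick bedside triage score

# Synthetic outcome: driven by all three variables, including lactate.
logit = 0.4 * asa + 0.3 * cci + 0.8 * lactate - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Refinement layer: the familiar score plus the extra feature.
X = np.column_stack([baseline_score, lactate])
model = LogisticRegression().fit(X, y)

auc_score_only = roc_auc_score(y, baseline_score)
auc_hybrid = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"triage score alone AUROC: {auc_score_only:.2f}")
print(f"hybrid AUROC:            {auc_hybrid:.2f}")
```

In this toy setup the hybrid model outperforms the bare score because lactate carries outcome signal the fixed formula cannot use, mirroring the argument for layering ML refinements on top of familiar triage tools.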

Table 4

Comparison of aspects of traditional scores and AI/ML models

Method aspect

Traditional scores

AI/ML models

Ease-of-use

Very easy (paper/chart or simple application)

Moderate–hard (requires data feeds, software)

Interpretability

High (each input's weight is known)

Low (requires XAI such as SHAP to explain)

Data availability

Uses routinely-collected variables

May require additional data collection

Workflow integration

Can be performed at bedside or pre-operatively

Needs EHR/IT integration, clinical buy-in

Update flexibility

Manual (needs new study to change)

Retraining possible as new data accrues

Cost

Minimal (free calculators)

Higher (development, maintenance)

Abbreviations: AI, artificial intelligence; EHR, electronic health record; IT, information technology; ML, machine learning; SHAP, Shapley Additive Explanations; XAI, explainable artificial intelligence.


Table 5

Common implementation barriers to AI/ML models and strategies

Barrier

Impact

Possible solutions

Data quality/Gaps

Garbage-in, garbage-out; missing data leads to failure

Standardize data collection; imputation techniques; validate using real-world data

Generalizability

Models overfit to one center/population

Multicenter training, federated learning to broaden data

Explainability/Trust

Clinicians may ignore opaque models

Use XAI tools (SHAP, LIME) and present clear logic

Regulatory/Privacy

Strict rules on medical AI, patient data privacy

Seek approvals, use de-identified data, federated frameworks

IT infrastructure

Need for software, integration with EHR

Develop cloud-based tools or EHR plug-ins; start with pilot studies

Clinical workflow fit

New tools may disrupt routines

Involve end-users in design; provide training; phase implementation

Validation and evidence

Few prospective trials in obstruction cases

Conduct prospective validation studies; include obstruction patients in general models

Abbreviations: AI, artificial intelligence; EHR, electronic health record; IT, information technology; LIME, Local Interpretable Model-Agnostic Explanations; ML, machine learning; SHAP, Shapley Additive Explanations; XAI, explainable artificial intelligence.



Conclusion

In summary, malignant LBO requires rapid yet reliable risk stratification to guide management decisions. Conventional scoring systems—such as the POSSUM, the ASA Physical Status Classification System, and the ACPGBI—continue to offer practical advantages due to their straightforward calculation, widespread acceptance, and immediate applicability in emergency settings. Nonetheless, these tools frequently demonstrate suboptimal predictive accuracy in acute obstruction, primarily because they rely on variables derived from elective cases and do not incorporate obstruction-specific clinical or radiological data. By contrast, AI and ML models exhibit enhanced predictive performance by integrating high-dimensional perioperative data, thereby facilitating more granular identification of patients at elevated risk for adverse outcomes. However, the adoption of AI-based approaches introduces significant challenges, including the requirement for robust IT infrastructure, strategies to maintain patient privacy—such as federated learning—and the need for explainable AI techniques to ensure clinician trust. Until prospective, multi-institutional validation studies are completed and seamless integration into electronic health record systems is achieved, a hybrid paradigm is recommended. In this model, established clinical scores serve as the initial triage tool, whereas AI-driven risk estimates—implemented via user-friendly nomograms or decision-support applications—supplement decision-making when comprehensive data capture and institutional resources permit. By synthesizing the transparency inherent to traditional scoring systems with the superior accuracy afforded by ML, clinicians may ultimately deliver more individualized, timely, and evidence-based care for patients presenting with malignant LBO.



Conflict of Interests

The authors have no conflict of interests to declare.

Authors' Contributions

Sreejith Kannummal Veetil: Conceptualization; Methodology; Formal Analysis; Data Curation; Writing – Original Draft; Visualization.

Parvez David Haque: Investigation; Resources; Data Curation; Writing – Review & Editing.

Deepak Jain: Methodology; Software; Validation; Writing – Review & Editing.

Binni Sharma: Project Administration; Funding Acquisition; Supervision; Writing – Review & Editing.



Address for correspondence

Sreejith Kannummal Veetil, MBBS, MS, Mch, FACS, FMAS
Christian Medical College
Ludhiana, Punjab
India   

Publication History

Received: 09 June 2025

Accepted: 11 August 2025

Article published online:
29 December 2025

© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution 4.0 International License, permitting copying and reproduction so long as the original work is given appropriate credit (https://creativecommons.org/licenses/by/4.0/)

Thieme Revinter Publicações Ltda.
Rua Rego Freitas, 175, loja 1, República, São Paulo, SP, CEP 01220-010, Brazil

Bibliographical Record
Sreejith Kannummal Veetil, Parvez David Haque, Deepak Jain, Binni Sharma. Predictive Analytics in Obstructed Colon Cancer: A Comparative Narrative Review of Clinical and AI-Based Models. Journal of Coloproctology 2025; 45: s00451813736.
DOI: 10.1055/s-0045-1813736