Appl Clin Inform 2024; 15(04): 743-750
DOI: 10.1055/s-0044-1788331
Research Article

Developing PRISM: A Pragmatic Institutional Survey and Benchmarking Tool to Measure Digital Research Maturity of Cancer Centers

Authors

  • Carlos Berenguer Albiñana

    1   IQVIA, London, United Kingdom
  • Matteo Pallocca

    2   IRCCS Istituto Nazionale Tumori Regina Elena, Rome, Italy
  • Hayley Fenton

    1   IQVIA, London, United Kingdom
    3   Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
  • Will Sopwith

    1   IQVIA, London, United Kingdom
    3   Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
  • Charlie Van Eden

    1   IQVIA, London, United Kingdom
  • Olof Akre

    4   Karolinska Comprehensive Cancer Center, Stockholm, Sweden
  • Annika Auranen

    5   Tays Cancer Centre, Pirkanmaa, Finland
  • François Bocquet

    6   Data Factory & Analytics Department, Institut de Cancérologie de l'Ouest, Nantes-Angers, France
  • Marina Borges

    1   IQVIA, London, United Kingdom
    7   Instituto Português de Oncologia do Porto Francisco Gentil, Porto, Portugal
  • Emiliano Calvo

    8   START Madrid-CIOCC, Centro Integral Oncológico Clara Campal, Madrid, Spain
  • John Corkett

    3   Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom
  • Serena Di Cosimo

    9   Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
  • Nicola Gentili

    10   IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) “Dino Amadori,” Meldola, Italy
  • Julien Guérin

    11   Data Office, Institut Curie, Paris, France
  • Sissel Jor

    12   Oslo University Hospital Cancer Center, Oslo, Norway
  • Tomas Kazda

    13   Masaryk Memorial Cancer Institute, Brno, Jihomoravský, Czechia
  • Alenka Kolar

    14   Institute of Oncology Ljubljana, Ljubljana, Slovenia
  • Tim Kuschel

    15   Charité, Universitätsmedizin Berlin, Berlin, Germany
  • Maria Julia Lostes

    16   Vall d'Hebron University Hospital, Barcelona, Spain
  • Chiara Paratore

    17   University Hospital San Luigi Gonzaga of Orbassano, Orbassano TO, Italy
  • Paolo Pedrazzoli

    18   Fondazione I.R.C.C.S. Policlinico San Matteo, Pavia PV, Italy
  • Marko Petrovic

    19   Sestre Milosrdnice University Hospital, Zagreb, Croatia
  • Jarno Raid

    20   Tartu University Hospital, Tartu, Tartumaa, Estonia
  • Miriam Roche

    21   Trinity St James's Cancer Institute, Dublin, Ireland
  • Christoph Schatz

    22   Biobank Innsbruck, Innsbruck, Austria
  • Joelle Thonnard

    23   Cliniques Universitaires Saint-Luc, Brussels, Belgium
  • Giovanni Tonon

    24   IRCCS San Raffaele Scientific Institute, Milan, Italy
  • Alberto Traverso

    24   IRCCS San Raffaele Scientific Institute, Milan, Italy
    25   Maastricht Comprehensive Cancer Center, Maastricht, The Netherlands
    26   DIGICORE, Bruxelles, Belgium
  • Andrea Wolf

    27   University Cancer Center Frankfurt (UCT), University Hospital, Goethe University, Frankfurt, Germany
  • Ahmed H. Zedan

    28   Vejle Hospital, University of Southern Denmark, Vejle, Denmark
  • Piers Mahon

    1   IQVIA, London, United Kingdom
    26   DIGICORE, Bruxelles, Belgium
 

Abstract

Background Multicenter precision oncology real-world evidence requires a substantial long-term investment by hospitals to prepare their data and align on common clinical research processes and medical definitions. Our team has developed a self-assessment framework to support hospitals and hospital networks in measuring their digital maturity and better planning and coordinating those investments. From that framework, we developed PRISM for Cancer Outcomes: PRagmatic Institutional Survey and benchMarking.

Objectives The primary objective was to develop PRISM as a tool for self-assessment of digital maturity in oncology hospitals and research networks; a secondary objective was to create an initial benchmarking cohort of >25 hospitals using the tool as input for future development.

Methods PRISM is a 25-question semiquantitative self-assessment survey developed iteratively from expert knowledge in oncology real-world study delivery. It covers four digital maturity dimensions: (1) Precision oncology, (2) Clinical digital data, (3) Pragmatic outcomes, and (4) Information governance and delivery. These reflect the four main data types and critical enablers for precision oncology research from routine electronic health records.

Results During piloting with 26 hospitals from 19 European countries, PRISM was found to be easy to use and its semiquantitative questions to be understood in a wide diversity of hospitals. Results within the initial benchmarking cohort aligned well with internal perspectives. We found statistically significant differences in digital maturity, with Precision oncology being the most mature dimension, and Information governance and delivery the least mature.

Conclusion PRISM is a light-footprint benchmarking tool to support the planning of large-scale real-world research networks. It can be used to (i) help an individual hospital identify areas most in need of investment and improvement, (ii) help a network of hospitals identify sources of best practice and expertise, and (iii) help research networks plan research. With further testing, policymakers could use PRISM to better plan digital investments around the Cancer Mission and the European Health Data Space.


Background and Significance

Large-scale digital research consortia in tertiary hospitals focused on real-world evidence (RWE) in specialty diseases face unique organizational challenges as they plan their digital research programs. One of the most fundamental is to create a way of describing digital maturity that is comparable between institutions, as this is essential to good digital research planning. This is especially true in the era of Precision Medicine, when assessments of digital maturity need to cover much more than core electronic health records (EHRs). To scale Precision oncology research to multiple centers, we need to assess the availability of routine molecular information, clinical outcomes, and the operational and legal constraints on study design and delivery.

The team running IQVIA's Oncology Evidence Network (OEN) has extensive experience in the center selection and delivery of multicenter protocolized oncology research in Europe, such as BMS I-O Optimise[1] and multiple regulatory grade external comparators to provide case-matched controls to single-arm trials.[2] [3] Based on that experience, we developed a framework for discussing digital maturity with potential partner hospitals ([Fig. 1]).

Fig. 1 Digital maturity framework with Bronze, Silver, and Gold classification. eCRF, electronic case report form; EHR, electronic health record; GDPR, General Data Protection Regulation; IHC, immunohistochemistry; MDX, molecular diagnostics; NCCN, National Comprehensive Cancer Network; PRO, patient-reported outcome; RECIST, Response Evaluation Criteria In Solid Tumors; NSCLC, non-small cell lung cancer; SoC, standard of care.

The discussion framework has four dimensions: (I) Precision oncology maturity, describing the state of routine molecular testing, ranked by its frequency and complexity; (II) Clinical digital data, describing the maturity of treatment and observational data in a hospital's EHRs; (III) Pragmatic outcomes maturity, describing the availability of key outcome information on a hospital's patients; and (IV) Information governance and delivery maturity, describing a hospital's legal and operational capacity to mobilize its EHRs for advanced research that is not dependent on traditional study-specific consent and manual re-entry of data. Across these hospitals we observed three broad tiers of digital maturity for cancer centers ([Fig. 1]): Bronze, Silver, and Gold.


Objectives

The OEN team historically used this framework in a three-step due diligence process to select research hospitals for commercial RWE, including (I) literature reviews to identify hospitals that have used their EHRs for cancer RWE studies, (II) interviews with those hospitals' staff to understand their interest in the research objectives, and (III) an intensive 2-week on-center data and IT systems assessment to understand study feasibility in detail. Since the OEN was founded, approximately 50 European centers have been through step (II) of the due diligence process, and approximately 20 centers have proceeded to step (III).

However, while robust, the effort required to review a single hospital was not scalable; each hospital assessment would typically take 1 year of elapsed time and between 3 and 6 months of effort. Something lighter is needed for planning large digital networks. The Healthcare Information and Management Systems Society (HIMSS) maturity frameworks[4] are incomplete for research planning, as they describe the maturity of data integration for care delivery, not for research. Nor do they cover molecular data, essential for Precision oncology.

Given experience with semiquantitative methods in a variety of industries, the OEN team asked whether a semiquantitative self-assessment survey could codify their experience into a simple tool for digital research maturity planning. This article presents the development of such a tool and the results from an initial benchmarking survey run in partnership with the Organisation of European Cancer Institutes (OECI) and DIGICORE, the Digital Institute for Cancer Outcomes Research.


Methods

Survey Instrument Design

The initial development of the semiquantitative survey started using an approach developed by Piers Mahon (PM) for other research. The team's collective study experience was used to develop an initial instrument that followed six "FORCIB" design principles:

  • Feasible: no more than 25 questions.
  • Objective: the semiquantitative graduations from 1 to 5 should be as factual as possible and clearly defined to avoid subjectivity.
  • Representative: based on the team's prior experience, a "3" should be typical for a hospital running outcome studies from EHRs today, a "5" best-known practice, and a "1" a typical low digital maturity hospital.
  • Clear: the text should be clear in international English for non-native speakers.
  • Informative: the resulting benchmark should capture the essence of an individual institution's maturity for Precision oncology RWE research and be useful to participating centers.
  • Balanced: both over the dimensions of digital maturity in the framework, and with a mixture of leading and lagging indicators.

The main challenges in semiquantitative research are clarity of language (is a question interpreted in the same way in different contexts and by different people?) and assessability (can someone provide a reasonably objective score?). Many instruments define 1 and 5 and leave the graduations in between open to interpretation. Care was taken even in the first draft to objectively define each score on each dimension in clear, international English. These survey elements were tested and refined iteratively as described below.

The OEN has domain experts embedded in hospitals who are native speakers of German, French, Dutch, and English. This internal team provided feedback on international interpretability and helped refine the scoring system by removing ambiguities. The next draft was tested for clarity and assessability with academic focus groups at Leeds Cancer Centre in England and Frankfurt University Hospitals in Germany to provide an initial international perspective. Most questions were consistently interpreted by staff at these institutions, and the definitions of scores 1 through 5 for each response were found to be unambiguous. The two exceptions on clarity were questions on molecular tumor board tools and the HIMSS maturity framework, which were new concepts to some. Educational materials were introduced on those two topics to close these knowledge gaps, which the focus groups considered would be sufficient.

This second draft was refined based on feedback to create a document that could be completed unsupervised by a hospital Data Manager or someone in a similar role. The final questionnaire had extensive instructions and a video briefing to coach hospitals on how to complete it. The central team was also available to clarify questions or interpretations.

The final instrument is available in the [Supplementary Materials] (available in the online version).


Sample Recruitment

We invited European cancer centers from three networks to complete the survey between the second half of 2021 and the first half of 2022. At that time, these networks comprised the OECI, 21 members of DIGICORE, and 7 cancer centers in IQVIA's networks. There is significant overlap between these networks; at the time of the survey, the invitation list comprised 106 unique hospitals, >98% of which belonged to the OECI.

During the development of the OEN, we realized that many research-intensive European hospitals were conscious of weaknesses in their EHRs' data quality, creating a barrier to survey participation. To reduce this barrier, and to reduce the likelihood of positivity bias in the self-assessment, all hospitals were promised anonymity in the results. The results would only be published in aggregate unless otherwise agreed.

Participating hospital recruitment was coordinated in partnership with the relevant management teams of the OECI and DIGICORE in two waves. Wave 1 ran in September and October 2021, in preparation for DIGICORE's first membership conference. Wave 2 ran from March to June 2022, in preparation for a DIGICORE-coordinated funding scheme to support hospital Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) conversions. The survey takes a hospital project manager 1 to 2 days of effort, spread over a week or two, to identify the right experts and complete. This is substantially less effort than the intensive due diligence process it replaced.

The proposed analytic output included a mapping of the entire network in statistically aggregated form and a private readout for each hospital to benchmark itself against peers. Each hospital was scored on each of the four dimensions as the median of its scores on the relevant section of questions, and an overall digital maturity score was then calculated as the median of the four dimension scores. All 25 questions are listed as part of [Fig. 2A]. Community distributions were assessed on this overall digital maturity metric unless stated otherwise. Differences between these metrics were explored using the Friedman test and paired Wilcoxon signed-rank tests. Finally, hospitals were asked to supply total new diagnosis volumes for 2019 to enable the final visualization.
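To make the scoring rule concrete, the following is a minimal sketch (not the authors' published code) of how the dimension and overall scores could be computed, assuming responses are held in a pandas DataFrame with one row per hospital and one column per question (Q1.1 to Q4.6), scored 1 to 5; missing responses are set to zero, as described in the Results.

```python
import pandas as pd

# Hypothetical mapping of the four maturity dimensions to their survey questions
# (question ranges taken from the Fig. 2 caption: Q1.1-Q1.7, Q2.1-Q2.5, Q3.1-Q3.7, Q4.1-Q4.6).
DIMENSIONS = {
    "Precision oncology":                  [f"Q1.{i}" for i in range(1, 8)],
    "Clinical digital data":               [f"Q2.{i}" for i in range(1, 6)],
    "Pragmatic outcomes":                  [f"Q3.{i}" for i in range(1, 8)],
    "Information governance and delivery": [f"Q4.{i}" for i in range(1, 7)],
}

def score_hospitals(responses: pd.DataFrame) -> pd.DataFrame:
    """Per-dimension medians and the overall digital maturity score per hospital."""
    filled = responses.fillna(0)  # missing responses scored as zero (see Results)
    scores = pd.DataFrame(
        {dim: filled[cols].median(axis=1) for dim, cols in DIMENSIONS.items()}
    )
    # Overall maturity is the median of the four dimension scores.
    scores["Overall"] = scores[list(DIMENSIONS)].median(axis=1)
    return scores
```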

Fig. 2 (A) Privacy conserving visualization of the entire community by question showing median and interquartile ranges of responses to each question from the sample. (B) Box plot of the four survey dimensions. The first dimension is Precision oncology (blue, far left, average of survey questions Q1.1–Q1.7), the second dimension is Clinical digital data (orange, middle left Q2.1–Q2.5), the third dimension is Pragmatic outcomes (gray, middle right, Q3.1–Q3.7), and the last dimension is Information governance and delivery (yellow, far right, Q4.1–Q4.6). The original survey used a 5-point score in which 1 represents the least mature option while 5 represents the most mature. EHR, electronic health record; RECIST, Response Evaluation Criteria In Solid Tumors.


Results

Network-wide Results

Of the 106 invited centers, 26 completed the survey (a 24.5% response rate). Survey completeness was high: 21 of the 26 returned surveys (80.8%) had all questions answered, and the remaining 5 (19.2%) were missing responses to one or two questions. Missing responses were treated by assigning a score of zero (0) on the 1 to 5 scale.

We shared three visualizations of the hospitals that completed the survey while preserving center-level anonymity. [Fig. 2A] shows the community distribution, displaying the median (red) and the two limits of the interquartile range (gray dashed lines) for each question across our community (N = 26). Based on judgement, we allocated Gold to scores between 4.25 and 5, Silver to scores between 2.75 and 4.25, and Bronze to scores below 2.75.
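As an illustration, a minimal sketch (under the same assumptions as the earlier scoring sketch, and not the published code) of the tier allocation and of the per-question summary behind [Fig. 2A] might look as follows; the treatment of scores falling exactly on a cut point is an assumption.

```python
import pandas as pd

def maturity_tier(score: float) -> str:
    """Map an overall maturity score (1-5) to the Bronze/Silver/Gold tiers."""
    if score >= 4.25:
        return "Gold"    # 4.25 to 5
    if score >= 2.75:
        return "Silver"  # 2.75 to 4.25
    return "Bronze"      # below 2.75

def question_summary(responses: pd.DataFrame) -> pd.DataFrame:
    """Median and interquartile range for each question across the community."""
    return pd.DataFrame({
        "median": responses.median(),
        "q25": responses.quantile(0.25),
        "q75": responses.quantile(0.75),
    })
```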

[Fig. 2B] shows the community by major maturity dimension, with one dot per hospital representing that hospital's maturity score on that dimension.

[Table 1] shows the average and median scores for each of the four dimensions: the Precision oncology dimension is the most mature, the Information governance and delivery dimension the least mature, while Clinical digital data and Pragmatic outcomes show similar (and intermediate) maturity. We used pairwise Wilcoxon signed-rank tests to test whether these observed differences are statistically meaningful ([Table 2]). The results confirmed that only the difference between Precision oncology and Information governance and delivery (p = 0.010) is significant at p < 0.05. The remaining pairwise differences are not significant; however, we expect that these differences will become clearer with a larger volume of data.

Table 1

Average and median digital maturities in the sample by dimension

Overall score per dimension (n = 26)       Average score   Median score
1. Precision oncology                       3.72            3.86
2. Clinical digital data                    3.30            3.30
3. Pragmatic outcomes                       3.33            3.36
4. Information governance and delivery      2.81            2.83

Table 2

Bonferroni-adjusted p-values for pairwise Wilcoxon signed-rank tests between the maturity dimensions (n/a, not applicable; only p = 0.010 is significant at p < 0.05)

Dimension                   2. Clinical digital data   3. Pragmatic outcomes   4. Information governance and delivery
1. Precision oncology       0.069                      0.235                   0.010
2. Clinical digital data    n/a                        1.000                   0.058
3. Pragmatic outcomes       n/a                        n/a                     0.208
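As an illustration of the testing procedure described above, the following sketch (under the same assumptions as the earlier scoring example, not the authors' code) runs the Friedman test across the four paired dimension scores and the pairwise Wilcoxon signed-rank tests with Bonferroni adjustment, as reported in Table 2.

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

DIMS = ["Precision oncology", "Clinical digital data",
        "Pragmatic outcomes", "Information governance and delivery"]

def compare_dimensions(scores):
    """`scores` is the per-hospital dimension table from the scoring sketch."""
    # Omnibus test for any difference among the four paired dimension scores.
    friedman = friedmanchisquare(*[scores[d] for d in DIMS])
    # Pairwise paired comparisons, Bonferroni-adjusted for the six comparisons.
    pairs = list(combinations(DIMS, 2))
    adjusted = {}
    for a, b in pairs:
        _, p = wilcoxon(scores[a], scores[b])
        adjusted[(a, b)] = min(p * len(pairs), 1.0)
    return friedman, adjusted
```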


Individual Center Results

Individual results were shared with each of the 26 centers, benchmarking that center's results (blue) against the community median (red) for each of the 25 survey questions, for each of the four dimensions, and for the overall score. A typical visualization for an anonymous center is shown below ([Fig. 3]). All centers were classified using their median digital maturity score across the four dimensions. Given the commitment made to anonymity, we share here an illustrative unnamed example. Centers found visualizations that included the interquartile range (similar to [Fig. 2A]) visually confusing.

Fig. 3 Anonymous example of the individual results shared back to the hospitals.

Feedback from this and other hospitals was that the visualizations were useful in prompting internal discussions on digital maturity and that the results were in line with internal expert expectations. However, with only 26 hospitals in the sample, we did not attempt to formally test the survey's utility or perceived accuracy.


Use for Research Planning

[Fig. 4] shows a snapshot of cancer digital maturity across the surveyed cohort, by hospital. The x-axis represents the grade of Precision oncology maturity (section 1 of the survey), while the y-axis shows Pragmatic outcomes maturity (section 3 of the survey). The color of each bubble represents the degree of Clinical digital data maturity of the center: top quartile (green), within the interquartile range (amber), and bottom quartile (red). The size of each circle represents the number of annual new cancer diagnoses (Dx); 19 centers provided counts (solid outline), while the remaining centers' volumes were estimated using the average annual number of new cancer diagnoses across the 19 responding centers (dashed outline).

Fig. 4 Cancer digital maturity snapshot. The color of each bubble represents the degree of Clinical digital data maturity of the center: top quartile (green), within the interquartile range (amber), and bottom quartile (red). The size of each circle represents the number of annual new cancer diagnoses; 19 centers provided counts (solid outline), while the remaining centers' volumes were estimated using the average annual number of new cancer diagnoses across the 19 responding centers (dashed outline). EHR, electronic health record; RECIST, Response Evaluation Criteria In Solid Tumors.
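A minimal matplotlib sketch (an assumption about how such a chart could be drawn, not the published plotting code) of the [Fig. 4] bubble chart, reusing the per-hospital dimension scores from the earlier sketches and a series of annual new-diagnosis volumes:

```python
import matplotlib.pyplot as plt

def plot_maturity_snapshot(scores, volumes):
    """`scores`: per-hospital dimension table; `volumes`: annual new diagnoses (may contain NaN)."""
    clin = scores["Clinical digital data"]
    q25, q75 = clin.quantile(0.25), clin.quantile(0.75)
    # Top quartile green, bottom quartile red, interquartile range amber.
    colours = ["green" if v > q75 else "red" if v < q25 else "orange" for v in clin]
    # Non-reporting centers estimated with the mean volume of reporting centers.
    sizes = volumes.fillna(volumes.mean())
    plt.scatter(scores["Precision oncology"],
                scores["Pragmatic outcomes"],
                s=sizes / 10,  # arbitrary scaling for display
                c=colours, alpha=0.6, edgecolors="black")
    plt.xlabel("Precision oncology maturity (survey section 1)")
    plt.ylabel("Pragmatic outcomes maturity (survey section 3)")
    plt.title("Cancer digital maturity snapshot")
    plt.show()
```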

We are currently experimenting with how to use such visualizations in research planning. As an example, hospitals in the top right quadrant would make good partners for clinical biomarker validation research, having both good outcomes and molecular data availability. Members of the bottom right quadrant have data suitable for biomarker-driven trial recruitment, but not for a broader range of real-world research use cases. Site 24 has exceptional routine patient-reported data and could be a potential bid coordinator for patient-reported outcomes research calls.



Discussion

An initial benchmarking cohort for cancer outcomes was created using the PRagmatic Institutional Survey and benchMarking (PRISM) semiquantitative self-assessment survey with 26 hospitals from 19 European countries across the OECI and DIGICORE. Coordinating recruitment with DIGICORE's funding schemes, and with the agreement of the relevant network coordinators, likely explains the high penetration of the survey into the sample. We found that the participating academic hospitals are more mature in the management of Precision oncology data than in other aspects of digital research, perhaps reflecting academic investments in discovery science and precision trial recruitment.

The weakest area, even in top academic centers, is information governance and delivery. Effectively, while hospitals may have data, they often do not have the legal basis and appropriate control processes to mobilize those data for research. If Europe is to mobilize its extensive health data for the public good, dedicated investment by policymakers in this area is required. Europe will need targeted programs to help weaker centers improve their data governance, not HORIZON-like programs that invest further in the existing experts. The quality improvement elements of the Beating Cancer Plan,[5] and key elements of the Cancer Mission[6] data-driven research platform, depend on this.

Measuring the digital maturity of a large cohort of hospitals is helping DIGICORE members in four ways as they build DigiONE, their federated research network.[7]

Firstly, it provides a direct institutional benchmark, holding up a mirror to internal views on the progress each center is making to digitize and where to focus efforts, prompting internal management discussion and helping prioritize local digital investments. Secondly, it identifies best practice at the European level, specifically the institutions that exemplify best practice in particular elements of digital research and can serve as sources of expertise from which DIGICORE's community of peer cancer centers can learn. Thirdly, it catalyzes collaborative research, enabling collaborative digital research projects within DIGICORE to come together between expert centers to develop new clinical informatics solutions. This is especially useful for HORIZON grant planning, given that most clinical informatics investments to date have been national, and so few European hospitals have well-established international digital research peer networks. Finally, it tracks digital progress: the PRISM survey can be retaken periodically to track increasing digital maturity, for instance after digital infrastructure investments.

Limitations of the survey include its self-assessment and self-reported nature, as well as its English-language basis. These were design compromises required to reduce the assessment burden. We also suspect that the sample is biased toward research-intensive and potentially more digitally engaged hospitals. Supporting this is the observation that 33% of OECI-accredited members participated, as opposed to 21% of the OECI membership overall; the accreditation process[8] tends to select more research-intensive and digitally mature members (Prof. Simon Oberst, accreditation lead, OECI, personal communication). Consequently, the results are not indicative of the overall maturity of European academic cancer centers, let alone nonacademic cancer centers. Larger, more representative samples would be needed for PRISM to inform national or European digital policy planning, for instance on the European Health Data Space.

Finally, the sample was too small to justify formally testing perceived survey accuracy or utility. The survey and benchmarking are open to participation by other European consortia, so we expect this process to evolve and improve over time. While many of the questions in the survey are cancer-specific, the general approach could be applied elsewhere.


Conclusion

PRISM is a streamlined benchmarking instrument suitable for large hospital real-world research networks. It enables individual hospitals to identify areas that require investment and enhancement, assists networks of hospitals in identifying sources of best practice and expertise, and aids research networks in their research planning. With additional validation, PRISM could be leveraged by policymakers to strategically plan digital investments in alignment with the Cancer Mission and the European Health Data Space.


Clinical Relevance Statement

A simple 25-question self-assessment can capture the essence of a hospital's digital research maturity for outcomes research and digital care quality improvement. European policymakers need to invest in improving hospital data governance to support digital care quality.


Multiple-Choice Questions

  1. The PRISM survey was divided into four dimensions in order to assess the digital maturity of each center. Which is the correct name for the four dimensions?

    • Precision oncology, Surgery data, Pragmatic outcomes, Information governance and delivery.

    • Precision oncology, Clinical digital data, Pragmatic outcomes, Information governance and delivery.

    • Oncology treatment data, Clinical digital data, Pragmatic outcomes, Information governance and delivery.

    • Precision oncology, Clinical digital data, Pragmatic outcomes, Privacy maturity.

    Correct answer: The correct answer is option b. The four dimensions used to evaluate digital maturity are Precision oncology, Clinical digital data, Pragmatic outcomes, Information governance and delivery.

  2. This study showcased the multiple benefits and use cases for PRISM. Which one of the following options describes a beneficial use case for PRISM?

    • Help an individual hospital identify areas most in need of investment and improvement.

    • Help a network of hospitals identify sources of best practice and expertise.

    • Help research networks plan their research.

    • All of the above.

Correct answer: The correct answer is option d.

Erratum: The article has been corrected as per Erratum published on November 4, 2024 (DOI: 10.1055/s-0044-1792139).



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

Our study is exempt from Institutional Review Board review.


  • References

  • 1 Ekman S, Griesinger F, Baas P, et al. I-O Optimise: a novel multinational real-world research platform in thoracic malignancies. Future Oncol 2019; 15 (14) 1551-1563
  • 2 Salles G, Bachy E, Smolej L, et al. Single-agent ibrutinib in RESONATE-2™ and RESONATE™ versus treatments in the real-world PHEDRA databases for patients with chronic lymphocytic leukemia. Ann Hematol 2019; 98 (12) 2749-2760
  • 3 Palomba ML, Ghione P, Patel AR, et al. A comparison of clinical outcomes from updated Zuma-5 (Axicabtagene Ciloleucel) and the International Scholar-5 External Control Cohort in Relapsed/Refractory Follicular Lymphoma (R/R FL). Blood 2021; 138 (01) 3543
  • 4 HIMSS. HIMSS Maturity Models: Models for Digital Health Transformation. Accessed July 10, 2024 at: https://www.himss.org/what-we-do-solutions/maturity-models
  • 5 European Commission. Europe's Beating Cancer Plan. Accessed June 20, 2024 at: eu_cancer-plan_en_0.pdf (europa.eu)
  • 6 European Commission. EU Mission: Cancer. Accessed July 10, 2024 at: https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-europe/eu-missions-horizon-europe/eu-mission-cancer_en
  • 7 Mahon P, Chatzitheofilou I, Dekker A, et al. A federated learning system for precision oncology in Europe: DigiONE. Nat Med 2024; 30 (02) 334-337
  • 8 Boomsma F, Van Harten W, Oberst S, et al. OECI Accreditation and Designation User Manual, v. 2.0. Accessed June 20, 2024 at: oeci.eu

Address for correspondence

Carlos Berenguer Albiñana, PhD
IQVIA
London W2 1AF
United Kingdom   

Publication History

Received: 05 November 2023

Accepted: 16 June 2024

Article published online:
11 September 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

