
DOI: 10.1055/a-2736-6561
Integration of Real-World Data into Clinical Trials: An Interdisciplinary Discussion from Regulatory and Practical Perspectives
Abstract
Background
The growing volume of so-called real-world data (also called routine practice data), i.e., data generated outside of randomized controlled trials, is driving efforts to integrate them into regulatory studies. Various stakeholders anticipate that such integration could save time and financial resources during the approval process for new therapies, and ethical considerations also partially support this approach. The aim of this manuscript is to provide an overview of the methodological, ethical, and regulatory considerations involved in integrating routine practice data into randomized controlled trials. It targets clinical researchers, biostatisticians, regulators, and decision-makers involved in evidence generation and trial design.
Results
The inclusion of real-world data in randomized controlled trials can be meaningful from both ethical and economic perspectives. Implementing it, however, requires addressing various, sometimes severe, limitations of the data through appropriate methodology. It is therefore essential to weigh carefully the risks and benefits of incorporating real-world data into clinical studies.
Conclusion
Randomized trials remain the gold standard for evaluating the efficacy of new therapies. Nevertheless, real-world data have the potential to improve the complex and costly process of drug development. The potential for a specific clinical study should be assessed in collaboration with all relevant stakeholders. Beyond that, real-world data hold substantial potential to extend the evidence from randomized trials after market approval, thereby helping to ensure the safety of all patients.
Introduction
Randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy of new therapies, characterized in particular by high internal validity [1] [2] [3]. However, RCTs also have limitations, such as feasibility issues, especially for rare diseases, and potentially restricted external validity [4] [5] [6] [7] [8] [9] [10] [11]. Additionally, patient preferences or early dropout of participants from the control arm who seek access to a new, comparable, or similarly effective therapy available outside the study can affect an RCT [12] [13]. These effects increase when there are clear differences between treatment options that make blinding impossible or ethically unjustifiable, such as differences in the administration (oral vs intravenous) of a drug, as in the OVIVA study on bone and joint infections [14], differences in intervention, as in the ProFHER study on the clinical effectiveness of surgical vs conservative therapy for proximal humeral fractures [15], or additional intraoperative chemotherapy during the resection of peritoneal-metastasized gastric cancer in the GASTRIPEC-I study [16].
RCTs are often conducted in highly selected populations that do not directly represent the actual population in real-world settings, as multimorbid or non-consenting patients are often excluded [4] [11]. Moreover, RCTs are frequently conducted in highly specialized centers whose high standards of care do not reflect broader routine practice. Hence, high internal validity often competes with broad generalizability (external validity).
Conversely, real-world data (RWD), also referred to as routine practice data, appear increasingly attractive because of the ever more extensive data sources available. Driven by demands from payers and growing concerns over the limitations of conventional clinical trials, numerous legislative and regulatory efforts promote their use in research [17] [18] [19] [20] [21]. Beyond prospective observational studies, retrospective analyses of internal patient record systems, registry data, and utilization data from health insurers and other secondary sources, digitalization and the increased use of medical apps provide ever more comprehensive data for research. While "real-world data" has been the common term until now, it is increasingly criticized, since all data originate from the "real" world. With the emergence of fully virtual worlds (e.g., the Metaverse) and the development of "digital twins" of health data, the term is becoming increasingly misleading. To maintain the distinction from RCT data, we will use the term "routine practice data" (RPD) in this context [22].
Conducting studies using RPD primarily complements the information from RCTs by adding higher external validity. Such studies are indispensable after market approval to ensure patient safety [5] [9] [23], and they significantly influence regulatory decisions globally [9] [24] [25].
Beyond the aforementioned applications of RPD, efforts in industry and academia aim to incorporate RPD into RCTs to potentially include fewer patients in studies, thus saving resources alongside ethical considerations [26] [27] [28].
The aim of this manuscript is to provide a comprehensive overview of the opportunities, methodological challenges, ethical considerations, and regulatory implications involved in integrating RPD into RCTs. We discuss how emerging causal inference methods can support valid evidence generation from combined data sources, highlighting both the potential and the limitations of this approach. This article is primarily targeted at clinical researchers, biostatisticians, regulatory authorities, and healthcare decision-makers who are engaged in designing, evaluating, or implementing clinical trials that incorporate real-world data.
The manuscript is based on a targeted literature review and multidisciplinary expert discussions to provide a comprehensive overview of the integration of routine practice data into randomized controlled trials.
Benefits and Risks of Incorporating Routine Practice Data into RCTs
The use of external data in randomized controlled trials offers enticing advantages, promising studies with fewer patients, shorter timeframes, and lower costs. Moore et al. [29] examined 225 studies on 101 new therapeutic agents approved by the FDA between 2015 and 2017, finding that sample size was the single most important factor influencing study costs. However, these savings are often overestimated: halving the sample size does not equate to halving the costs or study duration. Nevertheless, external data can lead to significant reductions in costs and study duration. A shorter study duration allows for earlier submissions to regulatory authorities in the case of successful Phase-III studies, potentially increasing profit margins significantly. Although the time required for patient recruitment may decrease, the time needed for study planning and analysis does not diminish; given the increased complexity of studies with external controls, these times may even exceed those of traditional RCTs. Moreover, regulatory or Health Technology Assessment (HTA) approval processes might be prolonged by concerns about the interpretability of results. Such regulatory delays could outweigh the savings from smaller sample sizes, as delays can lead to revenue losses of up to USD 8 million per day [30]. Additionally, accessing and processing routine practice data can incur additional costs [27].
One of the strongest arguments for considering external controls lies in patient perspectives and their possible ethical implications. The uncertainty of being assigned to a treatment group, or the concern about receiving suboptimal treatment if assigned to the control group, can be an additional burden for patients [31], thus reducing the attractiveness and ultimately the sample size of an RCT. Studies have shown that the public perceives random assignment in studies as unacceptable and often misinterprets equipoise in RCTs as ignorance [32]. Furthermore, most patients do not understand the scientific benefits of randomization [33]. Conversely, the opportunity to benefit from a new treatment is consistently cited as a motivation for study participation [34]. Therefore, in "difficult" situations, such as studies involving very serious illnesses, children, or rare diseases, regulatory authorities sometimes accept single-arm studies. Even if single-arm studies are primarily evaluated on their own merits, external control comparisons can add relevant further insights to the results. Thus, in some situations, the ethical argument for prioritizing patient perspectives over randomization is valid and important, and incorporating external data into RCTs can be beneficial in these cases [27] [35]. In other cases, early involvement of patient organizations and/or conducting "patient preference" studies during drug development can help to set up adequate (randomized) study designs.
Overall, when planning studies, ethical considerations must account not only for the safety and interests of study participants in line with the Declaration of Helsinki [36], but also for the interests of future patients and their demand for effective therapies [37]. Future patients may have a strong interest in the greater evidential power of randomized controlled experiments compared to external control studies. The increased risk of false-positive findings in studies with external controls and the potential consequences for patients receiving ineffective treatments must be weighed against study participants’ concerns regarding randomization [27] [38].
There are many reasons to approach studies with external controls cautiously: one must first realize that the external data might be "different" from the study data. They may have been collected at different times, in different places, in different target populations, and/or for different purposes, and may have been measured less accurately, evaluated in different laboratories, etc. This particularly applies to routine practice data. For example, reimbursement data are not equivalent to medical data, as they primarily serve administrative and billing purposes. These data can be biased with respect to medical aspects because they typically capture only events that are billable or reimbursable, potentially omitting clinically relevant but non-reimbursed information. For instance, claims data often lack information on blood parameters and laboratory values, the actual dose of medication taken by a patient, fracture morphology in case of an injury, or tumor staging. As a result, using reimbursement data for clinical research may lead to misclassification of disease stages, which may not always be accurately captured through International Classification of Diseases (ICD) codes. Additionally, subjective impressions of the treating physician, such as a patient's frailty, can be difficult to capture or entirely missed in ICD codes. These limitations introduce different sources of bias, may lead to insufficient adjustment for confounders, can influence study outcomes, and may also compromise the validity of causal inference, reducing the overall reliability of study conclusions when such data are used as proxies for detailed medical records. The quality of reimbursement data also depends significantly on legal requirements, which vary between countries and should be considered when evaluating the reliability of study conclusions.
Furthermore, plausibility checks may be lacking, and the data may evolve over time, leading to uncertainties. This can result in various biases (e.g., selection bias, time-related bias, regional bias, judgement bias, endpoint bias), which represent the "Pandora's box" of observational research.
Another problem is the potential inadequacy of the data, which can lead to insufficient evidence and erroneous conclusions due to variations in the databases, creating a manifestation of the “law of small numbers” [39], meaning that the results of an analysis may depend heavily on which specific data source is used rather than reflecting true underlying effects.
External controls, mainly in the form of summary report outcomes in published literature, are common for assessing the efficacy of experimental therapies in single-arm studies [40]. However, the single-arm study group and the aggregated external group, whose patients received the control therapy, may not be comparable in their characteristics, likely leading to biased treatment effect estimates in direct/unadjusted comparisons (International Conference on Harmonization (ICH) E10 Guideline) [41]. An inadequate adjustment of external control to the current treatment context and the resulting biases may thus ultimately lead to false conclusions, misguided investments, or multiyear delays in an entire development program. Therefore, the decision for a single-arm (externally controlled) study must be carefully thought out by weighing the benefits and risks [42] [43] [44].
If external controls are to be used in RCTs, they must be methodologically addressed. Various potential solutions exist. For example, two-stage designs can be employed where, based on propensity-score distributions during interim analysis, it is decided whether internal and external data are comparable [28]. If the assessment is negative, a control arm is added to the study from the second stage. Another approach involves estimating the treatment effect under control conditions using prediction models with external data [45]. Extensive individual patient data (clinical, demographic, etc.) are used for treatment effect prediction, enabling an adjusted comparison. For binary endpoints, an adjusted comparison between study participants and external controls can be achieved using a modified Simon two-stage design [46]. Additionally, a 2:1 or 3:1 randomization can replace 1:1 randomization. This allows the sample size in the control arm to be reduced while maintaining sample size in the experimental arm, simultaneously increasing the individual probability of receiving the experimental treatment. The information loss from the reduced control sample size could be “supplemented” by external data. Various analytical methods are available. Importantly, patient characteristics collected before treatment initiation play a key role in data-driven decisions about how much “weight” should be assigned to external control data. An inadequate choice of statistical analysis method can lead to worse outcomes (higher variability and/or greater bias) than if external data were ignored. However, there are also statistical methods that show a clear improvement in treatment effect estimators on average, regardless of the actual differences between external and RCT data [47].
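The published designs cited above each prescribe their own machinery, but the shared idea of data-driven "weighting" of external controls can be illustrated with propensity scores. The following Python sketch is a minimal illustration on synthetic data, not an implementation of the methods in [28] or [47]: it models the probability of trial membership from baseline covariates and reweights the external controls toward the trial population, then reports how much information the external data retain after adjustment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic baseline covariates: trial controls vs. external (routine practice)
# controls. The external source is deliberately shifted to mimic a different case mix.
X_trial = rng.normal(0.0, 1.0, size=(100, 3))
X_ext = rng.normal(0.4, 1.0, size=(300, 3))

X = np.vstack([X_trial, X_ext])
source = np.r_[np.ones(len(X_trial)), np.zeros(len(X_ext))]  # 1 = trial, 0 = external

# Propensity of being a trial patient given baseline covariates.
ps = LogisticRegression().fit(X, source).predict_proba(X)[:, 1]

# Odds weights pull the external controls toward the trial population;
# trial patients keep weight 1.
w = np.where(source == 1, 1.0, ps / (1.0 - ps))

# Effective sample size of the weighted external controls: a crude gauge of how
# much information the external data actually contribute after adjustment.
w_ext = w[source == 0]
ess = w_ext.sum() ** 2 / (w_ext ** 2).sum()
print(f"external n = {len(w_ext)}, effective n after weighting = {ess:.1f}")
```

A small effective sample size after weighting signals poor overlap between the external source and the trial population, which is exactly the situation in which borrowing external controls becomes hazardous.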
In short, the use of external controls in RCTs poses complex regulatory, methodological, and ethical challenges that must be carefully navigated to ensure the validity and reliability of study findings. Regulatory authorities, researchers, and ethicists must work together to establish clear guidelines and best practices for the incorporation of external controls into RCTs. Ultimately, a thoughtful and multidisciplinary approach is necessary to harness the potential of external controls in RCTs while minimizing their risks and ensuring the highest standards of evidence-based medicine.
RPD for Causal Inference
Building on the previous discussion of RPD in classical RCTs, we now focus on using routine practice data as a standalone basis for causal inference, exploring the methodological strategies and challenges of drawing causal conclusions from real-world observations.
A particularly promising methodological approach is the so-called target trial emulation (TTE) framework, which allows observational studies based on RPD to be conceptually aligned with the structure of a (hypothetical) randomized trial. The main idea is to emulate a hypothetical RCT in a two-step process [48] [49]. First, a protocol for a hypothetical RCT is developed, specifying the components necessary for defining the causal relationships of interest: eligibility criteria, treatment protocols, allocation procedures, follow-up periods, outcome measures, and the planned data analysis strategy. This hypothetical RCT serves as the target study, which is then emulated using observational data in the second step. The goal of a TTE is to meet the methodological standards of a randomized trial, thereby minimizing common sources of bias, such as immortal time bias, and enhancing the transparency and interpretability of the findings [48] [49]. As a special case of a TTE, an existing (and published) RCT is used as the target trial and emulated using RPD. Such emulations of existing RCTs are utilized not only for regulatory decision making but also to investigate discrepancies between the outcomes of randomized controlled trials and those observed in real-world healthcare settings, helping to clarify why the results of RCTs may differ from those encountered in routine clinical practice [9] [20] [50] [51].
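The first emulation steps can be made concrete with a small Python sketch. The data, column names, and the 30-day grace period below are invented assumptions for illustration, not part of the TTE framework itself; the point is that eligibility is assessed at time zero and that time zero coincides with treatment assignment, the step whose omission produces immortal time bias.

```python
import pandas as pd

# Toy longitudinal routine-practice extract: one row per patient with the dates
# needed to emulate the protocol elements of a hypothetical target trial.
df = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "diagnosis_date": pd.to_datetime(
        ["2020-01-10", "2020-02-01", "2020-02-20", "2020-03-05", "2020-03-15"]),
    "treatment_start": pd.to_datetime(
        ["2020-01-20", None, "2020-06-01", "2020-03-10", None]),
    "age": [62, 71, 55, 80, 66],
})

# 1) Eligibility: mirror the target trial's inclusion criteria (here: age < 75).
eligible = df[df["age"] < 75].copy()

# 2) Align time zero with treatment assignment: classify patients by whether
#    treatment started within a 30-day grace period after diagnosis. Counting a
#    late starter as "treated" from diagnosis onward would credit the treated
#    arm with the event-free time needed to reach treatment (immortal time bias).
grace = pd.Timedelta(days=30)
eligible["arm"] = (
    eligible["treatment_start"].notna()
    & (eligible["treatment_start"] - eligible["diagnosis_date"] <= grace)
).map({True: "treated", False: "control"})

print(eligible[["patient_id", "arm"]])
```

In this toy extract, patient 4 is excluded by the age criterion, patient 1 starts treatment within the grace period and is emulated as "treated", while patient 3 (late start) and the never-treated patients 2 and 5 fall into the emulated control arm.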
Moreover, quasi-experimental designs offer structured alternatives to randomized assignment, especially when randomization is ethically or logistically infeasible. Examples include difference-in-differences approaches, regression discontinuity designs, and instrumental variable analysis [52] [53]. These designs are particularly suitable when detailed temporal or eligibility data are available from RPD sources; however, they rely on strong assumptions – such as parallel trends or the validity of instruments – that can be difficult to verify in practice and may limit their applicability [54].
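For intuition, a difference-in-differences estimate reduces to a contrast of pre/post changes between an exposed and an unexposed group. The sketch below uses invented summary numbers; its validity hinges entirely on the parallel-trends assumption mentioned above.

```python
# Minimal difference-in-differences sketch on invented two-period summary data.
# Parallel-trends assumption: absent the intervention, the exposed group would
# have changed like the unexposed one.
pre_exposed, post_exposed = 50.0, 58.0   # mean outcome before/after, exposed group
pre_control, post_control = 48.0, 51.0   # mean outcome before/after, unexposed group

# DiD estimate: change in the exposed group minus change in the control group.
did = (post_exposed - pre_exposed) - (post_control - pre_control)
print(f"difference-in-differences estimate = {did:.1f}")  # prints 5.0
```

The control group's change (+3.0) serves as the counterfactual trend, so only the excess change in the exposed group (+8.0 − 3.0 = +5.0) is attributed to the intervention.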
Recent developments in causal machine learning further expand the toolbox for analyzing complex, high-dimensional RPD. Methods such as double machine learning, targeted maximum likelihood estimation (TMLE), or causal forests combine statistical learning with causal inference principles, allowing for flexible modeling of treatment effects while maintaining robust identification properties – even in the presence of heterogeneous populations [55] [56].
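As a minimal illustration of the double machine learning idea (a sketch, not the full frameworks of [55] [56]), the following Python example residualizes both outcome and treatment with cross-fitted random forests on synthetic data and recovers the treatment effect from a residual-on-residual regression. The data-generating process, with a true effect of 2.0, is an invented example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Partially linear model Y = theta*T + g(X) + noise with confounded treatment.
rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))
T = 0.5 * X[:, 0] + rng.normal(size=n)            # treatment depends on X (confounding)
Y = 2.0 * T + np.sin(X[:, 0]) + X[:, 1] + rng.normal(size=n)

# Cross-fitted nuisance predictions E[Y|X] and E[T|X]; cross-fitting guards
# against overfitting bias from using the same data to learn and to residualize.
m_hat = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X, Y, cv=2)
e_hat = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X, T, cv=2)

# Orthogonalized estimate: regress outcome residuals on treatment residuals.
res_Y, res_T = Y - m_hat, T - e_hat
theta_hat = (res_T @ res_Y) / (res_T @ res_T)
print(f"estimated treatment effect = {theta_hat:.2f}")  # true value: 2.0
```

Because the estimating equation is Neyman-orthogonal, moderate errors in the machine-learned nuisance functions enter only as second-order bias, which is what makes this approach attractive for high-dimensional RPD.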
However, beyond methodological considerations, it is crucial to reflect on the epistemological conditions under which RPD can serve as valid sources of evidence in clinical research. The use of RPD challenges traditional hierarchies of evidence and requires a broadened conceptual framework, as emphasized in the evolving paradigm of Evidence-Based Medicine Plus (EbMplus) [57] [58].
EbMplus extends the classic evidence-based medicine triad – best research evidence, clinical expertise, and patient values – by explicitly incorporating context sensitivity, biological plausibility, and mechanistic reasoning [57] [59]. From this perspective, RPD may be evidence-generating when the data are complete, sufficiently structured, contextually interpretable, and collected with clear temporal alignment to the clinical question. For instance, RPD must reflect a well-defined patient population, have reliably captured interventions and outcomes, and include sufficient covariate information to allow for causal adjustment [6] [48].
Conversely, the evidentiary utility of RPD is limited when the data lack granularity, are poorly curated, subject to substantial unmeasured confounding, or when outcome definitions and time origins are misaligned with the clinical research question. In such cases, even advanced causal inference methods cannot fully overcome the lack of identifiability or the potential for structural bias [54].
Importantly, the use of RPD for causal inference is not merely a statistical challenge – it is also a question of epistemic transparency and interpretability. The generation of valid, actionable knowledge from RPD requires a synthesis of statistical rigor, clinical expertise, and mechanistic understanding of disease and treatment effects. Only through this comprehensive approach can RPD complement randomized controlled trials (RCTs) in a meaningful way.
Thus, while RCTs remain the gold standard for establishing internal validity, RPD can strengthen external validity, support real-world generalizability, and inform treatment decisions in populations typically excluded from trials – provided that their methodological and contextual limitations are fully addressed. These insights call for closer collaboration among methodologists, clinicians, regulators, and patient representatives in defining the epistemic and practical boundaries of RPD-derived evidence. In this context, such collaboration is becoming increasingly essential to responsibly harness the full potential of RPD for regulatory strategies, ethical assessment, and innovative study designs.
Conclusion
In summary, randomized controlled trials will continue to be the gold standard for generating confirmatory statements in drug regulation. Although the European Medicines Agency and the US Food and Drug Administration have approved medications based on non-randomized studies, the evidence presented in such cases can be considered less comprehensive than that from randomized experiments. However, in situations where RCTs are not feasible or are ethically challenging, where there is a significant unmet clinical need, the disease course is well defined, and the primary endpoint is clear, the use of historical controls is deemed appropriate [35]. The scenarios in which such an approach is conceivable could expand in the future if all stakeholders collaboratively develop adequate guidelines and methods. Meanwhile, routine practice data can be used to extrapolate RCT results to the target patient population: RPD hold significant potential to extend evidence from RCTs and to evaluate the safety and effectiveness of therapies, particularly post-marketing.
To fully realize this potential, however, the use of RPD must be grounded in methodological rigor, contextual interpretation, and an expanded concept of evidence that integrates statistical inference with clinical expertise and biological plausibility.
This article is part of the DNVF Special Issue “Health Care Research and Implementation”
Conflict of Interest
The authors declare that they have no conflict of interest.
Literature
- 1 Byar DP, Simon RM, Friedewald WT. et al. Randomized clinical trials: perspectives on some recent ideas. N Engl J Med 1976; 295: 74-80
- 2 Abel U, Koch A. The role of randomization in clinical studies: myths and beliefs. J Clin Epidemiol 1999; 52: 487-497
- 3 Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342: 1887-1892
- 4 Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996; 312: 1215-1218
- 5 Eichler HG, Abadie E, Breckenridge A. et al. Bridging the efficacy–effectiveness gap: a regulator's perspective on addressing variability of drug response. Nat Rev Drug Discovery 2011; 10: 495
- 6 Franklin JM, Schneeweiss S. When and how can real world data analyses substitute for randomized controlled trials?. Clin Pharmacol Ther 2017; 102: 924-933
- 7 Corrigan-Curay J, Sacks L, Woodcock J. Real-world evidence and real-world data for evaluating drug safety and effectiveness. JAMA 2018; 320: 867-868
- 8 Booth CM, Karim S, Mackillop WJ. Real-world data: towards achieving the achievable in cancer care. Nat Rev Clin Oncol 2019; 16: 312-325
- 9 Franklin JM, Glynn RJ, Martin D. et al. Evaluating the use of nonrandomized real-world data analyses for regulatory decision making. Clin Pharmacol Ther 2019; 105: 867-877
- 10 Wang SV, Schneeweiss S, Gagne JJ. et al. Using Real-World Data to Extrapolate Evidence From Randomized Controlled Trials. Clin Pharmacol Ther 2019; 105: 1156-1163
- 11 Liu F, Demosthenes P. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol 2022; 22: 287
- 12 McCulloch P, Taylor I, Sasako M. et al. Randomised trials in surgery: problems and possible solutions. BMJ 2002; 324: 1448-1451
- 13 Yin X, Mishra-Kalyan PS, Sridhara R. et al. Exploring the Potential of External Control Arms created from Patient Level Data: A case study in non-small cell lung cancer. Journal of Biopharmaceutical Statistics 2022; 32: 204-218
- 14 Li HK, Rombach I, Zambellas R. et al. OVIVA Trial Collaborators. Oral versus Intravenous Antibiotics for Bone and Joint Infection. N Engl J Med 2019; 380: 425-436
- 15 Handoll H, Brealey S, Rangan A. et al. The ProFHER (PROximal Fracture of the Humerus: Evaluation by Randomisation) trial - a pragmatic multicentre randomised controlled trial evaluating the clinical effectiveness and cost-effectiveness of surgical compared with non-surgical treatment for proximal fracture of the humerus in adults. Health Technol Assess 2015; 19: 1-280
- 16 Rau B, Lang H, Koenigsrainer A. et al. Effect of Hyperthermic Intraperitoneal Chemotherapy on Cytoreductive Surgery in Gastric Cancer With Synchronous Peritoneal Metastases: The Phase III GASTRIPEC-I Trial. J Clin Oncol 2024; 42: 146-156
- 17 Dagenais S, Russo L, Madsen A. et al. Use of real-world evidence to drive drug development strategy and inform clinical trial design. Clin Pharmacol Ther 2022; 111: 77-89
- 18 21st Century Cures Act. Accessed 29.04.2025 at https://www.congress.gov/114/plaws/publ255/PLAW-114publ255.pdf
- 19 PDUFA Reauthorization Performance Goals and Procedures Fiscal Years 2018 Through 2022. Accessed 29.04.2025 at https://www.fda.gov/media/99140/download
- 20 US Food and Drug Administration. Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. 2017. Accessed 29.04.2025 at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/use-real-world-evidence-support-regulatory-decision-making-medical-devices
- 21 European Medicines Agency. Guideline on registry-based studies. 2021. Accessed 29.04.2025 at https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-registry-based-studies_en.pdf-0
- 22 Institute for Quality and Efficiency in Health Care. Concepts for the generation of routine practice data and their analysis for the benefit assessment of drugs according to § 35a Social Code Book V (SGB V). 2020
- 23 Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol 2005; 58: 323-337
- 24 Orsini LS, Berger M, Crown W. et al. Improving transparency to build trust in real-world secondary data studies for hypothesis testing – why, what, and how: recommendations and a road map from the real-world evidence transparency initiative. Value Health 2020; 23: 1128-1136
- 25 US Food and Drug Administration. Real-World Evidence. 2022. Accessed 29.04.2025 at https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence
- 26 Caliebe A, Burger HU, Knoerzer D. et al. Big Data in der klinischen Forschung: Vieles ist noch Wunschdenken. Dtsch Arztebl 2019; 116: A 1534-A 1539
- 27 Burger HU, Gerlinger C, Harbron C. et al. The use of external controls: To what extent can it currently be recommended?. Pharmaceutical Statistics 2021; 20: 1002-1016
- 28 Götte H, Kirchner M, Krisam J. et al. An adaptive design for early clinical development including interim decision for single-arm trial with external controls or randomized trial. Pharmaceutical Statistics 2022; 21: 625-640
- 29 Moore TJ, Heyward J, Anderson G. et al. Variation in the estimated costs of pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015-2017: a cross-sectional study. BMJ Open 2020; 10: e038863
- 30 Brøgger-Mikkelsen M, Ali Z, Zibert JR. et al. Online Patient Recruitment in Clinical Trials: Systematic Review and Meta-Analysis. J Med Internet Res 2020; 22: e22179
- 31 Naidoo N, Nguyen VT, Ravaud P. et al. The research burden of randomized controlled trial participation: a systematic thematic synthesis of qualitative evidence. BMC Med 2020; 18: 6
- 32 Robinson EJ, Kerr C, Stevens A. et al. Lay Public's Understanding of Equipoise and Randomisation in Randomised Controlled Trials. International Journal of Technology Assessment in Health Care 2005; 9: 192, iii-iv
- 33 Robinson EJ, Kerr C, Stevens A. et al. Lay Conceptions of the Ethical and Scientific Justifications for Random Allocation in Clinical Trials. Social Science & Medicine 2005; 58: 811-824
- 34 Meneguin S, Cesar LAM. Motivation and frustration in cardiology trial participation: the patient perspective. Clinics 2012; 67: 603-608
- 35 Collignon O, Schritz A, Spezia R. et al. Implementing historical controls in oncology trials. The Oncologist 2021; 26: e859-e862
- 36 WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Participants. Accessed 29.04.2025 at https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
- 37 Millum J, Wendler D. The Duty to Rescue and Randomized Controlled Trials Involving Serious Diseases. Journal of Moral Philosophy 2015; 15: 298-323
- 38 Gloy V, Schmitt AM, Düblin P. et al. The evidence base of US Food and Drug Administration approvals of novel cancer therapies from 2000 to 2020. Int J Cancer 2023; 152: 2474-2484
- 39 Bishop DVM, Thompson J, Parker AJ. Can we shift belief in the ‘Law of Small Numbers’?. Royal Society Open Science 2022; 9: 211028
- 40 Ghadessi M, Tang R, Zhou J. et al. A roadmap to using historical controls in clinical trials–by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG). Orphanet Journal of Rare Diseases 2020; 15: 1-19
- 41 International Conference on Harmonization (ICH) E10: Choice of control group and related issues in clinical trials. July 2000. Accessed 29.04.2025 at https://database.ich.org/sites/default/files/E10_Guideline.pdf
- 42 Dunger-Baldauf C, Hemmings R, Bretz F. et al. Generating the Right Evidence at the Right Time: Principles of a New Class of Flexible Augmented Clinical Trial Designs. Clin Pharmacol Ther 2023; 113: 1132-1138
- 43 European Medicine Agency. Reflection paper on establishing efficacy based on single-arm trials submitted as pivotal evidence in a marketing authorization. 2023 Zugriff am 29.04.2025 unter https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-establishing-efficacy-based-single-arm-trials-submitted-pivotal-evidence-marketing-authorisation_en.pdf
- 44 Lambert J, Lengliné E, Porcher R. et al. Enriching single-arm clinical trials with external controls: possibilities and pitfalls. Blood advances 2023; 7: 5680-5690
- 45 Erdmann S, Edelmann D, Kieser M. Using real-world data to predict health outcomes – The prediction design: Application and sample size planning. Biom J 2023; 65: 2200023
- 46 Edelmann D, Habermehl C, Schlenk RF. et al. Adjusting Simon’s optimal two-stage design for heterogeneous populations based on stratification or using historical controls. Biom J 2020; 62: 311-329
- 47 Götte H, Kirchner M, Krisam J. et al. Estimation of treatment effects in early-phase randomized clinical trials involving external control data. J Biopharm Stat 2024; 34: 680-699
- 48 Hernán MA, Robins JM. Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. American Journal of Epidemiology 2016; 183: 758-764
- 49 Hernán MA, Wang W, Leaf DE. Target Trial Emulation: A Framework for Causal Inference From Observational Data. JAMA 2022; 328: 2446-2447
- 50 Franklin JM, Patorno E, Desai RJ. et al. Emulating randomized clinical trials with nonrandomized real-world evidence studies. Circulation 2021; 143: 1002-1013
- 51 Heyard R, Held L, Schneeweiss S. et al. Design differences and variation in results between randomised trials and non-randomised emulations: meta-analysis of rct-duplicate data. BMJ Medicine 2024; 3: e000709
- 52 Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annual review of public health 2018; 39: 453-469
- 53 Imbens GW, Lemieux T. Regression discontinuity designs: A guide to practice. Journal of Econometrics 2008; 142: 615-635
- 54 Bärnighausen T, Tugwell P, Røttingen JA. et al. Quasi-experimental study designs series-paper 4: uses and value. J Clin Epidemiol 2017; 89: 21-29
- 55 van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011.
- 56 Wager S, Athey S. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association 2018; 113: 1228-1242
- 57 Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis?. BMJ 2014; 348: g3725
- 58 Wilde M. The EBM+movement. Int J Biostat 2023; 19: 283-293
- 59 Upshur RE. Looking for rules in a world of exceptions: reflections on evidence-based practice. Perspect Biol Med 2005; 48: 477-489
Publication History
Received: 23 May 2025
Accepted: 29 October 2025
Article published online:
23 February 2026
© 2026. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/).
Georg Thieme Verlag KG
Oswald-Hesse-Straße 50, 70469 Stuttgart, Germany
Literature
- 1 Byar DP, Simon RM, Friedewald WT. et al. Randomized clinical trials: perspectives on some recent ideas. N Engl J Med 1976; 295: 74-80
- 2 Abel U, Koch A. The role of randomization in clinical studies: myths and beliefs. J Clin Epidemiol 1999; 52: 487-497
- 3 Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342: 1887-1892
- 4 Black N. Why we need observational studies to evaluate the effectiveness of health care. BMJ 1996; 312: 1215-1218
- 5 Eichler HG, Abadie E, Breckenridge A. et al. Bridging the efficacy–effectiveness gap: a regulator's perspective on addressing variability of drug response. Nat Rev Drug Discovery 2011; 10: 495
- 6 Franklin JM, Schneeweiss S. When and how can real world data analyses substitute for randomized controlled trials?. Clin Pharmacol Ther 2017; 102: 924-933
- 7 Corrigan-Curay J, Sacks L, Woodcock J. Real-world evidence and real-world data for evaluating drug safety and effectiveness. JAMA 2018; 320: 867-868
- 8 Booth CM, Karim S, Mackillop WJ. Real-world data: towards achieving the achievable in cancer care. Nat Rev Clin Oncol 2019; 16: 312-325
- 9 Franklin JM, Glynn RJ, Martin D. et al. Evaluating the use of nonrandomized real-world data analyses for regulatory decision making. Clin Pharmacol Ther 2019; 105: 867-877
- 10 Wang SV, Schneeweiss S, Gagne JJ. et al. Using Real-World Data to Extrapolate Evidence From Randomized Controlled Trials. Clin Pharmacol Ther 2019; 105: 1156-1163
- 11 Liu F, Panagiotakos D. Real-world data: a brief review of the methods, applications, challenges and opportunities. BMC Med Res Methodol 2022; 22: 287
- 12 McCulloch P, Taylor I, Sasako M. et al. Randomised trials in surgery: problems and possible solutions. BMJ 2002; 324: 1448-1451
- 13 Yin X, Mishra-Kalyan PS, Sridhara R. et al. Exploring the Potential of External Control Arms created from Patient Level Data: A case study in non-small cell lung cancer. Journal of Biopharmaceutical Statistics 2022; 32: 204-218
- 14 Li HK, Rombach I, Zambellas R. et al. OVIVA Trial Collaborators. Oral versus Intravenous Antibiotics for Bone and Joint Infection. N Engl J Med 2019; 380: 425-436
- 15 Handoll H, Brealey S, Rangan A. et al. The ProFHER (PROximal Fracture of the Humerus: Evaluation by Randomisation) trial - a pragmatic multicentre randomised controlled trial evaluating the clinical effectiveness and cost-effectiveness of surgical compared with non-surgical treatment for proximal fracture of the humerus in adults. Health Technol Assess 2015; 19: 1-280
- 16 Rau B, Lang H, Koenigsrainer A. et al. Effect of Hyperthermic Intraperitoneal Chemotherapy on Cytoreductive Surgery in Gastric Cancer With Synchronous Peritoneal Metastases: The Phase III GASTRIPEC-I Trial. J Clin Oncol 2024; 42: 146-156
- 17 Dagenais S, Russo L, Madsen A. et al. Use of real-world evidence to drive drug development strategy and inform clinical trial design. Clin Pharmacol Ther 2022; 111: 77-89
- 18 21st Century Cures Act. Accessed 29 April 2025 at https://www.congress.gov/114/plaws/publ255/PLAW-114publ255.pdf
- 19 PDUFA Reauthorization Performance Goals and Procedures Fiscal Years 2018 Through 2022. Accessed 29 April 2025 at https://www.fda.gov/media/99140/download
- 20 US Food and Drug Administration. Use of Real-World Evidence to Support Regulatory Decision-Making for Medical Devices. 2017. Accessed 29 April 2025 at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/use-real-world-evidence-support-regulatory-decision-making-medical-devices
- 21 European Medicines Agency. Guideline on registry-based studies. 2021. Accessed 29 April 2025 at https://www.ema.europa.eu/en/documents/scientific-guideline/guideline-registry-based-studies_en.pdf-0
- 22 Institute for Quality and Efficiency in Health Care. Concepts for the generation of routine practice data and their analysis for the benefit assessment of drugs according to § 35a Social Code Book V (SGB V). 2020
- 23 Schneeweiss S, Avorn J. A review of uses of health care utilization databases for epidemiologic research on therapeutics. J Clin Epidemiol 2005; 58: 323-337
- 24 Orsini LS, Berger M, Crown W. et al. Improving transparency to build trust in real-world secondary data studies for hypothesis testing – why, what, and how: recommendations and a road map from the real-world evidence transparency initiative. Value Health 2020; 23: 1128-1136
- 25 US Food and Drug Administration. Real-World Evidence. 2022. Accessed 29 April 2025 at https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence
- 26 Caliebe A, Burger HU, Knoerzer D. et al. Big Data in der klinischen Forschung: Vieles ist noch Wunschdenken [Big data in clinical research: much is still wishful thinking]. Dtsch Arztebl 2019; 116: A 1534-A 1539
- 27 Burger HU, Gerlinger C, Harbron C. et al. The use of external controls: To what extent can it currently be recommended?. Pharmaceutical Statistics 2021; 20: 1002-1016
- 28 Götte H, Kirchner M, Krisam J. et al. An adaptive design for early clinical development including interim decision for single-arm trial with external controls or randomized trial. Pharmaceutical Statistics 2022; 21: 625-640
- 29 Moore TJ, Heyward J, Anderson G. et al. Variation in the estimated costs of pivotal clinical benefit trials supporting the US approval of new therapeutic agents, 2015-2017: a cross-sectional study. BMJ Open 2020; 10: e038863
- 30 Brøgger-Mikkelsen M, Ali Z, Zibert JR. et al. Online Patient Recruitment in Clinical Trials: Systematic Review and Meta-Analysis. J Med Internet Res 2020; 22: e22179
- 31 Naidoo N, Nguyen VT, Ravaud P. et al. The research burden of randomized controlled trial participation: a systematic thematic synthesis of qualitative evidence. BMC Med 2020; 18: 6
- 32 Robinson EJ, Kerr C, Stevens A. et al. Lay Public's Understanding of Equipoise and Randomisation in Randomised Controlled Trials. Health Technology Assessment 2005; 9: 1-192, iii-iv
- 33 Robinson EJ, Kerr C, Stevens A. et al. Lay Conceptions of the Ethical and Scientific Justifications for Random Allocation in Clinical Trials. Social Science & Medicine 2005; 58: 811-824
- 34 Meneguin S, Cesar LAM. Motivation and frustration in cardiology trial participation: the patient perspective. Clinics 2012; 67: 603-608
- 35 Collignon O, Schritz A, Spezia R. et al. Implementing historical controls in oncology trials. The Oncologist 2021; 26: e859-e862
- 36 WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Participants. Accessed 29 April 2025 at https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
- 37 Millum J, Wendler D. The Duty to Rescue and Randomized Controlled Trials Involving Serious Diseases. Journal of Moral Philosophy 2015; 15: 298-323
- 38 Gloy V, Schmitt AM, Düblin P. et al. The evidence base of US Food and Drug Administration approvals of novel cancer therapies from 2000 to 2020. Int J Cancer 2023; 152: 2474-2484
- 39 Bishop DVM, Thompson J, Parker AJ. Can we shift belief in the ‘Law of Small Numbers’?. Royal Society Open Science 2022; 9: 211028
- 40 Ghadessi M, Tang R, Zhou J. et al. A roadmap to using historical controls in clinical trials–by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG). Orphanet Journal of Rare Diseases 2020; 15: 1-19
- 41 International Conference on Harmonization (ICH). E10: Choice of control group and related issues in clinical trials. July 2000. Accessed 29 April 2025 at https://database.ich.org/sites/default/files/E10_Guideline.pdf
- 42 Dunger-Baldauf C, Hemmings R, Bretz F. et al. Generating the Right Evidence at the Right Time: Principles of a New Class of Flexible Augmented Clinical Trial Designs. Clin Pharmacol Ther 2023; 113: 1132-1138
- 43 European Medicines Agency. Reflection paper on establishing efficacy based on single-arm trials submitted as pivotal evidence in a marketing authorisation. 2023. Accessed 29 April 2025 at https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-establishing-efficacy-based-single-arm-trials-submitted-pivotal-evidence-marketing-authorisation_en.pdf
- 44 Lambert J, Lengliné E, Porcher R. et al. Enriching single-arm clinical trials with external controls: possibilities and pitfalls. Blood Advances 2023; 7: 5680-5690
- 45 Erdmann S, Edelmann D, Kieser M. Using real-world data to predict health outcomes – The prediction design: Application and sample size planning. Biom J 2023; 65: 2200023
- 46 Edelmann D, Habermehl C, Schlenk RF. et al. Adjusting Simon’s optimal two-stage design for heterogeneous populations based on stratification or using historical controls. Biom J 2020; 62: 311-329
- 47 Götte H, Kirchner M, Krisam J. et al. Estimation of treatment effects in early-phase randomized clinical trials involving external control data. J Biopharm Stat 2024; 34: 680-699
- 48 Hernán MA, Robins JM. Using Big Data to Emulate a Target Trial When a Randomized Trial Is Not Available. American Journal of Epidemiology 2016; 183: 758-764
- 49 Hernán MA, Wang W, Leaf DE. Target Trial Emulation: A Framework for Causal Inference From Observational Data. JAMA 2022; 328: 2446-2447
- 50 Franklin JM, Patorno E, Desai RJ. et al. Emulating randomized clinical trials with nonrandomized real-world evidence studies. Circulation 2021; 143: 1002-1013
- 51 Heyard R, Held L, Schneeweiss S. et al. Design differences and variation in results between randomised trials and non-randomised emulations: meta-analysis of RCT-DUPLICATE data. BMJ Medicine 2024; 3: e000709
- 52 Wing C, Simon K, Bello-Gomez RA. Designing difference in difference studies: best practices for public health policy research. Annual Review of Public Health 2018; 39: 453-469
- 53 Imbens GW, Lemieux T. Regression discontinuity designs: A guide to practice. Journal of Econometrics 2008; 142: 615-635
- 54 Bärnighausen T, Tugwell P, Røttingen JA. et al. Quasi-experimental study designs series-paper 4: uses and value. J Clin Epidemiol 2017; 89: 21-29
- 55 van der Laan MJ, Rose S. Targeted Learning: Causal Inference for Observational and Experimental Data. Springer; 2011.
- 56 Wager S, Athey S. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association 2018; 113: 1228-1242
- 57 Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis?. BMJ 2014; 348: g3725
- 58 Wilde M. The EBM+ movement. Int J Biostat 2023; 19: 283-293
- 59 Upshur RE. Looking for rules in a world of exceptions: reflections on evidence-based practice. Perspect Biol Med 2005; 48: 477-489
