Open Access
CC BY 4.0 · Libyan International Medical University Journal
DOI: 10.1055/s-0045-1813215
Review Article

Navigating the Competency Revolution in Medical Education

Authors

  • Rafik R. Elmehdawi

    1   University of Benghazi, Libya
    2   Libyan International Medical University, Benghazi, Libya
  • Sara A. Glessa

    1   University of Benghazi, Libya
  • Arif Al-Areibi

    3   Western University, Canada
 



Abstract

The Flexner Report of 1910 is considered the first major milestone in medical education: it gave priority to the quality of graduates and their training, and it marked the beginning of system-based medical education. Since 1910, the components of the medical education system were updated only slowly, until the recent shift to Competency-Based Medical Education (CBME) over the last 10 years. For over a century, the system was primarily time-based, using fixed training durations to meet goals and objectives that were not necessarily outcome-based, and relying mainly on summative assessments to gauge competency. CBME, with the introduction of entrustable professional activities (EPAs) and milestones, clearly identifies the major competencies that all trainees must demonstrate before graduating, regardless of time; nevertheless, time is still used as a resource to achieve them. The major implementation of CBME began in Canadian and American postgraduate training programs and was later adopted by their medical schools. CBME brought new ideas to the medical education system, such as a greater focus on the number and type of learner assessments in different contexts, a more thorough decision-making process through the creation of competence committees, and a shift toward more learner-driven training. The critical link between abstract competencies and clinical practice is provided by EPAs: specific professional tasks that a trainee can be fully entrusted to perform unsupervised after demonstrating the necessary competence. National frameworks, such as the Association of American Medical Colleges (AAMC) Core EPAs, have been developed to standardize this approach and prepare graduates for residency. Despite the benefits, implementing CBME and EPAs faces several challenges, including faculty resistance and resource intensity, as well as the risks of "assessment fatigue" and "conceptual dilution," where the term EPA is misapplied. The current and future direction of medical education will focus mainly on overcoming these issues through focused faculty development, optimized assessment systems, and a commitment to standardized definitions, all of which are essential to fully realize the potential of CBME in producing competent, practice-ready physicians. In addition, the medical education community will continue to develop and work toward fully implementing the concept of master adaptive learners.


Introduction

Medical education has undergone profound transformations over the past century, evolving from unstructured apprenticeships to standardized, competency-based frameworks. This review explores the development of modern medical education, the shift from time-based to competency-based models, and the role of entrustable professional activities (EPAs) and milestones in linking theoretical competencies to clinical practice. Drawing on key reforms such as the Flexner Report and contemporary frameworks such as the Canadian Medical Education Directives for Specialists (CanMEDS) and the U.S. Accreditation Council for Graduate Medical Education (ACGME) milestones, we highlight the advantages, challenges, and future directions of Competency-Based Medical Education (CBME).


Time-Based Medical Education

Before 1910, medical education in North America was marked by inconsistency and minimal oversight. The 155 medical schools operating in the United States and Canada had curricula that varied widely, with most programs lasting only 2 years and emphasizing memorization over practical skills. Libraries and laboratories were under-resourced, clinical training was observational, and faculty often comprised part-time local physicians with no formal pedagogical training.[1] [2] The American Medical Association established the Council on Medical Education in 1904 to standardize training, an effort that culminated in Abraham Flexner's landmark 1910 report. Flexner, commissioned by the Carnegie Foundation, evaluated medical schools against Johns Hopkins' rigorous model.

In his 1910 report, Flexner recommended the closure of substandard schools, which ultimately reduced their number from 155 to 31, stricter admission requirements, and curriculum reform organized into two main stages: a 2-year basic sciences stage followed by a 2-year clinical rotations stage, with medical schools overseeing hospital-based education.[1] These reforms established the time-based model, in which competence was assumed after fixed training durations, a paradigm that dominated for decades.[3]

The traditional model, while successful in standardizing training, faced criticism for its rigidity:

  • Fixed-time, variable-outcome: as ten Cate noted, traditional medical education awarded licenses based on duration of training rather than demonstrated competence.[4] This meant learners could graduate simply by completing a set period, such as a 4-year medical school, even if their competence varied widely.[4] [5] Hodges compared this to "tea steeping," assuming that time alone ensures readiness.[6] Without clear competency checks, graduates' abilities could differ significantly, risking patient care quality.[5] This variability in competence is starkly evident in surgical training. Mattar et al demonstrated that a "substantial percentage of general surgery residents are not adequately prepared for the demands of surgical subspecialty fellowships upon completion of their residency."[7] Their survey of fellowship program directors revealed significant deficiencies in preparedness, directly linking the rigid, time-based model of residency to tangible gaps in trainee readiness. This finding exemplifies the core flaw identified by ten Cate and Hodges: graduating based solely on time served fails to ensure that all learners achieve the necessary competence, potentially compromising patient care even at advanced training stages.

  • Summative assessment bias: the traditional overreliance on final exams created significant gaps in evaluating physician development.[8] These end-of-training assessments often provided only a snapshot of performance rather than measuring progressive skill acquisition. This "all-or-nothing" testing approach fails to capture the longitudinal development of clinical competencies that occurs throughout training.[5] Such high-stakes summative assessments tend to:

    • Focus on easily measurable knowledge recall rather than complex clinical skills.

    • Provide limited opportunities for formative feedback and improvement.

    • Overlook critical competencies, such as communication and professionalism, that develop gradually.

    • Create "assessment gaps," in which learners may pass exams despite persistent skill deficiencies.

Replacing this outdated model with workplace-based assessment (WBA) systems allows competence development to be tracked over time, offering both trainees and educators more meaningful data about clinical progression.[6]

  • Lack of individualization: a fundamental limitation of rigid time-based education systems is their inherent lack of individualization: all learners are forced to progress through a predetermined curriculum at the same pace, irrespective of their actual understanding or mastery.[5] This "lockstep" approach operates on the faulty assumption that all students require identical instructional time to achieve proficiency in a given topic.[9] Consequently, students who grasp concepts quickly are often held back, leading to boredom, disengagement, and wasted potential as they wait for peers to catch up. Conversely, students who need more time or alternative explanations to achieve mastery are pushed forward before foundational gaps are adequately addressed.[10] This creates a compounding effect, in which initial misunderstandings or incomplete knowledge snowball into significant learning deficits in subsequent, more complex topics that build on these shaky foundations. The system prioritizes covering the curriculum within the allotted time (e.g., a semester or school year) over ensuring genuine competency development for each individual learner.[11] As Hattie emphasizes in his synthesis of educational influences, adapting teaching to the learner's needs has a significantly positive effect on achievement, highlighting the opportunity cost of a standardized pace.[12] This factory-model approach fails to accommodate the natural variability in learning speeds, prior knowledge, and cognitive styles present in any diverse classroom, ultimately serving the administrative structure of the system better than the developmental needs of the students within it.

These shortcomings catalyzed the shift to CBME in the late 20th century, driven by demands for accountability and patient safety.[13]


Competency-Based Medical Education: Core Concepts and Frameworks

CBME emerged in the 1990s, with pioneering frameworks such as CanMEDS (1996) and the ACGME core competencies (2002).[14] [15] These frameworks redefined competence as "the integrated application of knowledge, skills, and attitudes in clinical contexts."[16] The new CanMEDS and ACGME competencies were initially incorporated into the existing time-based medical education. However, after more than 20 years of trials within the time-based system, with its rigid structure and culture, it became clear that a better system was needed to put these concepts into practice. This recently led to the implementation of the more advanced CBME system, with its unique EPAs, milestones, and robust assessment system.

CBME fundamentally reorients the educational paradigm by prioritizing demonstrable mastery over seat time, addressing the critical shortcomings of rigid time-based systems. Van Melle et al described five core components of CBME[17]:

  • Outcome-focused design: competency-based education (CBE) begins with clearly defined, measurable competencies that articulate the essential knowledge, skills, and attitudes learners must master. These competencies serve as the explicit, transparent goals for both learners and educators. Examples include frameworks like CanMEDS' seven roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar, Professional), which provide a comprehensive blueprint for physician competence.[16] [18] This specificity ensures learning is targeted and relevant, moving beyond vague curricular coverage.

  • Sequenced progression of competence: unlike time-based models enforcing uniform pacing, CBME decouples advancement from calendar constraints. Progression to the next stage or competency is contingent solely upon the learner demonstrating the required level of mastery, regardless of the time taken to achieve it.[19] This flexibility accommodates individual learning trajectories, allowing those who grasp concepts quickly to advance without delay, while providing crucial additional time and support for those who need it, thereby preventing proficiency gaps from persisting or widening.[3]

  • Programmatic assessment: assessment in CBME is not merely a summative endpoint but an integral, ongoing process. It relies heavily on frequent, low-stakes, formative evaluations designed to provide actionable feedback for growth. It also adopts a thorough assessment system in which each competency is assessed by multiple assessors, using multiple assessment tools, and in multiple contexts.[20] Methods include direct observation of performance in authentic settings (e.g., clinical encounters, labs), analysis of portfolios documenting development over time, and simulations replicating real-world challenges. These rich data inform both learner improvement and instructional adjustments.[21] [22]

  • Competency-focused instruction or coaching: a defining feature, particularly in professional fields, is the principle of entrustment. Supervisors progressively grant learners greater autonomy and responsibility for tasks or decisions based on direct, observed evidence of their developing competence and professional judgment. This moves beyond simple pass/fail grades to a nuanced judgment of readiness for independent practice, reflecting real-world expectations of trust and accountability.[4]

  • Tailored learning experiences: learning experiences should be designed to meet the specific needs of individual learners and to prepare them for real-world practice. This may involve providing diverse learning opportunities and addressing potential barriers to competency development.

The main functional unit of CBME is the EPA, conceptualized by Olle ten Cate in 2005 as a pragmatic solution to the persistent "competency–curriculum gap" between abstract competency frameworks (e.g., CanMEDS, ACGME Core Competencies) and the tangible tasks clinicians perform daily. As ten Cate argued, competencies alone lack the specificity required for workplace assessment and must be translated into observable units of work. An EPA is formally defined as "a unit of professional practice that can be fully entrusted to a trainee once they have demonstrated the requisite competence to execute this activity unsupervised."[4] [23] Examples include high-stakes tasks such as "managing a cardiac arrest" or "performing a safe discharge for a hospitalized patient." EPAs serve as the critical "missing link" between theoretical competencies and clinical responsibilities.

EPAs possess distinct features essential for operationalizing CBME:

  • Task-specificity: EPAs represent discrete, essential activities rather than abstract attributes. Each EPA integrates multiple underlying competencies (e.g., the EPA “Breaking bad news” requires medical knowledge, communication skills, empathy, and ethical judgment).[4] [23]

  • Contextual authenticity: EPAs must be performed in real or simulated clinical settings where authentic patient care decisions, consequences, and team interactions occur. This distinguishes them from knowledge-based exams or isolated skill assessments.[24]

  • Developmental entrustment: supervision levels for an EPA evolve based on observed performance. Frameworks such as the Ottawa CAPER or Chen's 5-Level Scale describe progression from direct observation ("Show me"), through reactive supervision ("Do it, I'll watch"), to eventual unsupervised practice ("Do it independently").[25] [26] Entrustment decisions are inherently dynamic.

Major initiatives have standardized EPA implementation, including:

  • AAMC Core EPAs for Entering Residency: this landmark project defined 13 activities that all medical school graduates must be trusted to perform with indirect supervision (e.g., “Perform a handoff,” “Collaborate as an interprofessional team member,” “Recognize a patient requiring urgent care”). It emphasized readiness for residency.[24]

  • Association of Faculties of Medicine of Canada (AFMC) EPA Framework: aligned with CanMEDS roles, Canada established 12 core EPAs for medical graduates (e.g., “Obtain a history and perform a physical examination,” “Formulate clinical plans”). This framework explicitly maps each EPA to relevant CanMEDS competencies, reinforcing integration.[27]

  • CBD (Competence by Design): CBD is the Canadian implementation of CBME, built on the well-established CanMEDS competencies and incorporating the five core components of CBME. Features of the CBD framework include its basis in the CanMEDS 2015 competency framework, a time-variable approach to postgraduate training, sequencing of training along the four stages of the Competence Continuum, and the use of EPAs specific to each stage.[28]

  • ACGME CBME: despite the heavy reliance on EPAs in the American undergraduate system, the American postgraduate system relies mainly on milestones.[29] [30]

CBME offers significant potential benefits, including:

  • Early identification of learning gaps: the continuous, thorough assessment system allows early identification of trainees' gaps, which in turn allows more time for interventions and improvement.[31] This system of assessments also provides training program leads with a more objective and fair assessment of trainees, helping to overcome the well-known "failure to fail" phenomenon in medical education.[32]

  • Enhanced patient safety: CBME aims to ensure all graduates meet standardized, predefined performance benchmarks before progressing, theoretically reducing variability in competence and improving the safety and quality of patient care.[24] [33]

  • Individualized learning: by decoupling progression from fixed time, CBME accommodates diverse learner paces and styles. Learners who master competencies quickly can advance without being held back, while those needing more time receive targeted support, theoretically optimizing learning efficiency and reducing gaps.[23] [33] [34] [35]

  • Lifelong learning: CBME's emphasis on continuous formative assessment, feedback, and demonstrable mastery inherently fosters a mindset of ongoing skill refinement and self-regulated learning beyond initial training, aligning with the need for continuous professional development.[18] [33] [36]

Although CBME offers significant potential benefits, these advantages are often accompanied by inherent challenges and limitations that require careful consideration:

  • Defining valid benchmarks: establishing truly valid, reliable, and universally accepted performance benchmarks for complex competencies (especially nontechnical skills like judgment or leadership) is extremely difficult. Poorly defined or assessed benchmarks create a false sense of security.[35]

  • Assessment fatigue: the constant cycle of observation, feedback, and documentation required for continuous assessment can lead to significant assessment fatigue for both learners and supervisors. This risks diminishing engagement and the perceived value of feedback over time,[37] and could distract from direct patient care and authentic learning experiences.[35]

  • Stress and gaming: high-stakes competency decisions based on frequent assessments can increase trainee anxiety and potentially encourage “gaming” the system to meet specific observed criteria rather than focusing on holistic development.[38] [39]

  • Resource intensity: providing truly individualized pathways requires substantial resources: more faculty time for tailored supervision, assessment, and feedback; sophisticated data management systems to track individual progress; and flexible scheduling for clinical experiences. This is often unsustainable without significant institutional investment.[33] [40]

  • Faculty development needs: effective individualization demands faculty skilled in diagnosing learning needs, providing nuanced feedback, and making complex entrustment decisions. Many faculty lack training in these areas, leading to inconsistent implementation.[35] [41]

  • Logistical complexity and potential stigma: managing learners on different timelines within the same program creates scheduling complexities for rotations, assessments, and faculty assignments.[3] Slower progression, despite being competency-focused, can carry unintended stigma or raise concerns about program efficiency.[42]

  • Defining and assessing “attitudes”: while CBME frameworks include attitudes (e.g., professionalism, ethics), these are notoriously difficult to define operationally and assess reliably and objectively. Assessments are often subjective and prone to bias, making genuine mastery hard to guarantee.[35]

  • Focus on measurables: the drive to assess competence continuously can inadvertently prioritize easily measurable tasks and knowledge over more complex, tacit, or holistic aspects of clinical expertise and professional identity formation, which are crucial for lifelong learning but harder to quantify.[22] [43]


The Future of Medical Education

It has been less than 10 years since the official implementation of CBME as a system in North America, and the system is therefore still in its developmental stages. Several challenges remain to be addressed, and we foresee the future of medical education focusing on:

  • More investment in faculty development: providing comprehensive, ongoing training for educators in EPA assessment, entrustment decision-making, and effective feedback within clinical workflows.

  • Optimizing WBA systems: leveraging technology to streamline data capture (e.g., mobile apps, AI-assisted analytics), enhance rater training, and develop efficient methods for synthesizing longitudinal assessment data into meaningful entrustment decisions.

  • Promoting collaboration and standardization: fostering inter-institutional collaboration to refine EPA definitions, share best practices, develop validated assessment tools, and establish clearer benchmarks for entrustment across the continuum of training.

  • Rigorously safeguarding the EPA concept: maintaining clear criteria for what constitutes a true EPA to prevent terminological drift and preserve its utility as the operational unit of CBME.

  • Continuous evaluation: rigorously researching the long-term impact of CBME and EPA implementation on learner outcomes, patient care quality, and system efficiency.

  • Fostering the development of master adaptive learners.[44]

The evolution from Flexner's time-based standardization to CBME's mastery-focused, individualized approach represents a profound transformation in preparing physicians for the complexities of modern health care. EPAs provide the essential mechanism to bridge the gap between competency frameworks and clinical practice. While substantial implementation hurdles remain, the trajectory is clear: CBME, underpinned by robust EPA frameworks and effective WBA, holds the promise of producing physicians who are not only knowledgeable but demonstrably competent, entrusted, and ready to deliver high-quality, safe patient care from day one. Successfully navigating the challenges ahead is paramount to fully realizing this transformative potential.


Conclusion

This review traced the remarkable journey of medical education from its fragmented pre-Flexner roots, through the standardization revolution driven by the Flexner Report and its ensuing era of rigid time-based training, to the contemporary paradigm shift toward CBME. The limitations of the traditional model, particularly its fixed-time, variable-outcome approach, overreliance on summative assessments, and inherent lack of individualization, proved increasingly misaligned with the demands for physician accountability, patient safety, and personalized learning in the 21st century. CBME emerged as a necessary response, fundamentally reorienting education around the demonstrable mastery of predefined competencies rather than mere time served.

Core frameworks like CanMEDS and the ACGME Core Competencies/Milestones provided the essential architecture, defining competence as the integrated application of knowledge, skills, and attitudes. The core principles of CBME—outcome-focused design, variable-time progression based on mastery, continuous formative assessment, and graduated entrustment—collectively address the shortcomings of the past by prioritizing learner readiness and genuine preparedness for practice. This shift promises significant advantages, including enhanced patient safety through standardized benchmarks, individualized learning pathways that accommodate diverse needs and paces, and fostering a culture of lifelong learning and continuous improvement.

However, the translation of abstract competencies into observable clinical practice presented a significant challenge. EPAs, conceptualized by ten Cate, serve as the critical “missing link” in this transition. By defining discrete, essential units of professional practice that integrate multiple competencies and are performed in authentic contexts, EPAs became the core component of CBME. National frameworks like the AAMC Core EPAs for Entering Residency and the AFMC EPA Framework in Canada provide standardized roadmaps, explicitly linking EPAs to foundational competencies like CanMEDS roles and emphasizing readiness for the next stage of training through developmental entrustment.

Despite this conceptual clarity and growing adoption, significant implementation challenges persist. Faculty resistance due to unfamiliarity and time constraints, the resource intensity of training assessors and managing robust WBA systems, and inconsistent operational definitions of EPAs across institutions hinder widespread, uniform implementation. The inherent complexity of assessment—requiring frequent, direct observation, trained faculty, reliable tools, and sophisticated data aggregation—remains a major hurdle. Furthermore, the rising popularity of EPAs risks conceptual dilution if the term is misapplied to tasks lacking the requisite complexity or entrustability, undermining its foundational rigor.

In the near future, the medical education community will continue working to overcome the current challenges of CBME and to graduate master adaptive learners.[44]



Conflict of Interest

R.R.E. reported a role as Chair of the Research Ethics Committee. All other authors reported no conflict of interest.

Declaration of use of AI in the writing process

In preparing this manuscript, Gemini was utilized to enhance article structure, summarize and synthesize existing literature, and improve the clarity, conciseness, and grammatical correctness of the writing. Following the application of AI, the authors conducted a thorough review and editing process, assuming full responsibility for the final content. Recognizing the potential for AI to generate incorrect, incomplete, or biased information, the manuscript underwent rigorous human revision and judgment. Consistent with the journal's authorship policy, no AI or AI-assisted technologies have been designated as authors or co-authors, as the inherent responsibilities of authorship are exclusively human.



Address for correspondence

Rafik R. Elmehdawi
University of Benghazi
Benghazi
Libya   

Publication History

Received: 21 August 2025

Accepted: 07 October 2025

Article published online:
26 November 2025

© 2025. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India