CC BY 4.0 · Libyan International Medical University Journal 2023; 08(02): 070-075
DOI: 10.1055/s-0043-1776309
Original Article

Medical Students' Performances Using Different Assessment Methods during the Final Examination in Internal Medicine at the University of Benghazi, Libya

Najat Buzaid
1   Department of Internal Medicine, Faculty of Medicine, University of Benghazi, Libya
2   Department of Medicine, 7th of October Hospital, Libya
1   Department of Internal Medicine, Faculty of Medicine, University of Benghazi, Libya
3   Department of Medicine, Benghazi Medical Center, Benghazi, Libya
Saleh M. Alawgali
1   Department of Internal Medicine, Faculty of Medicine, University of Benghazi, Libya
2   Department of Medicine, 7th of October Hospital, Libya
Amina Albash
1   Department of Internal Medicine, Faculty of Medicine, University of Benghazi, Libya
2   Department of Medicine, 7th of October Hospital, Libya
Mousa Alfakhri
1   Department of Internal Medicine, Faculty of Medicine, University of Benghazi, Libya
2   Department of Medicine, 7th of October Hospital, Libya
Funding None.


Background Different assessment tools evaluate distinct domains of learning and considerably influence the learning process.

Objective To compare and correlate the performances of undergraduate final year medical students in written, clinical, and viva examinations in the subject of internal medicine.

Methods This is a retrospective study. After approval by the relevant authority, data were collected from final-year examination results for 2019 to 2020. All students of the medical school at the University of Benghazi were included in this study. Their gender and their written, clinical, viva, and total scores were recorded. Data were coded, transferred from Excel to SPSS version 24, and expressed as frequencies and percentages. Chi-squared analysis was performed to test for differences in the proportions of categorical variables between two or more groups. The odds ratio (OR) was used to estimate the odds of passing the subject based on scores in the different types of examination. Pearson's correlation coefficient (R) was used to evaluate the consistency of students' performances across the different examinations. A p-value of less than 0.05 was considered statistically significant.
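The three analyses named above (chi-squared test of independence, odds ratio from a 2×2 table, and Pearson's correlation) can be sketched in Python with SciPy. This is an illustrative example only: all counts and scores below are invented for demonstration and are not the study's data.

```python
# Sketch of the Methods' statistical tests using SciPy.
# All numbers are hypothetical, NOT taken from the study.
import numpy as np
from scipy import stats

# 1. Chi-squared test of independence: pass/fail counts by gender
#    (rows: females, males; columns: pass, fail) -- hypothetical counts.
table = np.array([[310, 189],
                  [112,  68]])
chi2, p, dof, expected = stats.chi2_contingency(table)

# 2. Odds ratio from a 2x2 table: odds of passing the subject overall
#    given a pass in the written exam -- hypothetical counts.
#    a = written-pass & subject-pass, b = written-pass & subject-fail,
#    c = written-fail & subject-pass, d = written-fail & subject-fail.
a, b, c, d = 270, 21, 152, 236
odds_ratio = (a * d) / (b * c)

# 3. Pearson's correlation between two score series -- hypothetical scores.
written = np.array([55, 62, 48, 71, 66, 59, 44, 80])
viva    = np.array([60, 65, 50, 75, 70, 58, 49, 78])
r, p_r = stats.pearsonr(written, viva)

print(f"chi2={chi2:.2f} (p={p:.3f}), OR={odds_ratio:.2f}, r={r:.3f}")
```

An OR above 1 indicates that passing that examination raises the odds of passing the subject overall, while an R near 1 indicates that students rank similarly across the two examination formats.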

Results The total number of students was 679, of whom 499 (73.5%) were female and 180 (26.5%) were male. A total of 422 students (62%) passed the course, with no significant difference between males and females. A statistically significantly (p < 0.001) greater percentage of students achieved a passing score in the clinical assessment (502 [73.9%]), followed by the viva assessment (458 [67.5%]). Students performed worst in the written examination, with only 291/679 (43%) passing, again with no gender-based differences. There was a highly significant association between the total score of students who passed the subject and their written examination scores, with an OR of 2.3 (p < 0.001). The OR for the viva examination and total score was 0.79, with no significant difference between males and females. In contrast, there was a statistically significant negative association between the clinical examination and the total scores of students who passed the subject (OR = 0.58). There were highly significant correlations (p < 0.001) between the written and viva examinations (R = 0.638), between the written and clinical examinations (R = 0.629), and between the clinical and viva examinations (R = 0.763).

Conclusion Students demonstrated higher performance on clinical and viva exams compared with written exams. Additionally, there were no notable disparities in results between male and female students across any of the three exam types. The written exam served as the most reliable indicator of a student's success in the subject. Furthermore, the data revealed a positive correlation between scores on the different exam formats, indicating that students exhibited consistent performance across all modes of evaluation.


Publication History

Article published online:
13 December 2023

© 2023. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited.

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India
