CC BY-NC-ND 4.0 · J Acad Ophthalmol 2018; 10(01): e127-e132
DOI: 10.1055/s-0038-1668574
Research Article
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

The Pediatric Examination Assessment Rubric (PEAR): A Pilot Project

Michael Langue
1  Department of Ophthalmology, Penn State Eye Center, Hershey, Pennsylvania

Ingrid U. Scott
1  Department of Ophthalmology, Penn State Eye Center, Hershey, Pennsylvania
2  Department of Public Health Sciences, Penn State College of Medicine, Hershey, Pennsylvania

Ajay Soni
1  Department of Ophthalmology, Penn State Eye Center, Hershey, Pennsylvania

Publication History

22 April 2018

9 July 2018

Publication Date:
16 August 2018 (online)


Abstract

Purpose The pediatric ophthalmic examination is often considered a challenge for ophthalmologists at any level of training. New tools need to be developed and tested to enhance resident and fellow training in pediatric ophthalmology. To our knowledge, this pilot project introduces the first educational rubric designed specifically for the pediatric ophthalmic examination.

Methods Preliminary surveys were completed by 11 ophthalmology residents across all three postgraduate years (PGY) to gauge comfort level with the pediatric ophthalmic examination. A one-page Pediatric Examination Assessment Rubric (PEAR) was developed and reviewed by 13 content experts (12 pediatric ophthalmologists and a lead developer of the Ophthalmic Clinical Exercise Examination [OCEX] tool) at eight academic institutions. Five educators from three academic institutions used the rubric to evaluate six residents during a new strabismus evaluation. Postevaluation surveys were completed by both the five educators and the six residents.

Results Preliminary surveys showed that only 18.2% of residents rated their pediatric examination skills as good. Residents reported higher levels of frustration and less comfort with the pediatric examination than with an adult examination. The 13 experts' comments were incorporated into the rubric to establish content validity. Postevaluation surveys showed that 60% of faculty and 100% of residents found the rubric to be very effective in providing feedback.

Conclusion In this pilot project, we established the need for more concrete educational tools in pediatric ophthalmology, created an educational tool, established content validity, and demonstrated feasibility. The PEAR helps residents identify skills to target for improvement based on the quality of their pediatric ophthalmic examinations. At three academic institutions, the PEAR was shown to be easy to use and a useful tool for training residents to perform the pediatric ophthalmic examination.