J Wrist Surg 2017; 06(01): 046-053
DOI: 10.1055/s-0036-1587316
Scientific Article

AO Distal Radius Fracture Classification: Global Perspective on Observer Agreement

Prakash Jayakumar
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands
,
Teun Teunis
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands
,
Beatriz Bravo Giménez
2   Orthopaedic Upper Extremity Service, Hospital Universitario Doce de Octubre-Universidad Complutense, Madrid, Spain
,
Frederik Verstreken
3   Department of Hand Surgery, Monica Hospital/Antwerp University Hospital, Edegem, Belgium
,
Livio Di Mascio
4   Department of Trauma and Orthopaedic Surgery, Barts and The Royal London Hospital, London, United Kingdom
,
Jesse B. Jupiter
1   Department of General Surgery, OLVG, Amsterdam, The Netherlands

Publication History

18 April 2016

30 June 2016

Publication Date:
08 August 2016 (online)


Abstract

Background The primary objective of this study was to test interobserver reliability when classifying fractures by consensus into AO types and groups among a large international group of surgeons. Secondarily, we assessed differences in inter- and intraobserver agreement of the AO classification in relation to geographical location, level of training, and subspecialty.

Methods A randomized set of radiographic and computed tomographic images from a consecutive series of 96 distal radius fractures (DRFs), treated between October 2010 and April 2013, was classified using an electronic web-based portal by an invited group of participants on two occasions.

Results Interobserver reliability was substantial when classifying AO type A fractures but only fair and moderate for type B and C fractures, respectively. No difference was observed by location, except for an apparent difference between participants from India and Australia when classifying type B fractures. No statistically significant associations were observed when comparing interobserver agreement by level of training, and no differences were found between subspecialties. Intraobserver reproducibility was "substantial" for fracture types and "fair" for fracture groups, with no differences by location, level of training, or subspecialty.
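The verbal categories in the Results ("fair," "moderate," "substantial") are the Landis and Koch benchmarks conventionally applied to a kappa statistic for chance-corrected agreement. The abstract does not specify which statistic was used; the sketch below is a minimal, hypothetical Python illustration assuming a multirater (Fleiss) kappa and the standard Landis and Koch cutoffs, not the authors' actual analysis.

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.
    counts[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    # Overall proportion of assignments falling into each category.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    # Observed agreement for each subject.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()              # mean observed agreement
    p_e = np.square(p_j).sum()      # agreement expected by chance
    return (p_bar - p_e) / (1.0 - p_e)

def landis_koch(kappa):
    """Verbal benchmark for a kappa value (Landis and Koch, 1977)."""
    for upper, label in [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial"),
                         (1.01, "almost perfect")]:
        if kappa <= upper:
            return label

# Toy data: 3 fractures rated by 4 hypothetical observers into AO types A, B, C (columns).
ratings = [[4, 0, 0],   # unanimous type A
           [2, 1, 1],   # split decision
           [0, 3, 1]]   # mostly type B
k = fleiss_kappa(ratings)
print(round(k, 2), landis_koch(k))   # prints "0.27 fair" for this toy data

In a study such as this one, kappa values would typically be computed separately for fracture types and groups, and for each rater subset, then interpreted against these cutoffs.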

Conclusion Improved definition of the reliability and reproducibility of this classification may be achieved using large international groups of raters, informing decisions on which classification system to use.

Level of Evidence Level III

Note

Prakash Jayakumar and Teun Teunis contributed equally to this work. This work was performed at the Orthopaedic Hand and Upper Extremity Service, Massachusetts General Hospital - Harvard Medical School.