Observer variation in the evaluation and classification of severe central tarsal bone fractures in racing Greyhounds
Received: 03 June 2010
Accepted: 07 January 2011
Published online: 19 December 2017
Objectives: To determine observer agreement on radiographic evaluation of central tarsal bone (CTB) fractures and compare this with evaluation of the same fractures using computed tomography (CT).
Methods: Radiographs and CT scans of the right tarsi were obtained from Greyhounds euthanatized after sustaining a severe CTB fracture during racing. Four observers described and classified each fracture. Inter- and intra-observer agreements were calculated.
Results: Inter-observer agreement was higher for assessment of fractures using CT. Several fractures assessed by radiography were misclassified as a less severe type. Intra-observer agreement for assessment and classification of CTB fractures by radiography versus CT was variable. Overall agreement among all four observers was higher for CT than for radiography. Additionally, when identifying fractures of the adjacent tarsal bones, observer agreement was higher for CT than for radiography.
Clinical significance: Computed tomography improved observer ability to correctly evaluate CTB fractures and to detect the degree of displacement and the extent of any comminution. Identification of fractures of adjacent tarsal bones was also improved when tarsi were assessed using CT. These data suggest that treatment decisions based solely on radiographic assessment of CTB fractures may not produce the expected outcome.
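The inter- and intra-observer agreement measures used in studies of this kind are typically chance-corrected statistics such as Cohen's kappa (references 14 and 15 below). As a minimal sketch of how such a statistic is computed for two observers assigning nominal fracture classifications, consider the following; the observer labels here are hypothetical illustration data, not values from this study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' nominal labels (Cohen, 1960):
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is agreement expected by chance from the marginal frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CTB fracture-type labels from two observers (not study data).
obs1 = ["IV", "V", "IV", "V", "IV", "IV", "V", "IV"]
obs2 = ["IV", "V", "V", "V", "IV", "IV", "V", "V"]
print(round(cohens_kappa(obs1, obs2), 3))  # moderate agreement, ~0.529
```

Multi-rater extensions (Fleiss' kappa, reference 17) and alternatives robust to prevalence effects (Gwet's AC1, references 19 and 25) follow the same observed-versus-chance structure with different chance models.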
- 1 Sicard GK, Short K, Manley PA. A survey of injuries at five greyhound racing tracks. J Small Anim Pract 1999; 40: 428-432.
- 2 Hickman J. Greyhound injuries. J Small Anim Pract 1975; 16: 455-460.
- 3 Boudrieau RJ, Dee JF, Dee LG. Central tarsal bone fractures in the racing Greyhound: a review of 114 cases. J Am Vet Med Assoc 1984; 184: 1486-1491.
- 4 Guilliard MJ. Fractures of the central tarsal bone in eight racing greyhounds. Vet Rec 2000; 147: 512-515.
- 5 Gannon JR. Stress fractures in the greyhound. Aust Vet J 1972; 48: 244-250.
- 6 Evans HE, Miller ME. Skeleton. In: Evans HE, editor. Miller's anatomy of the dog. 3rd ed. Philadelphia: W.B. Saunders; 1993: pg. 122-218.
- 7 Dee JF, Dee J, Piermattei DL. Classification, management, and repair of central tarsal fractures in the racing greyhound. J Am Anim Hosp Assoc 1976; 12: 398-405.
- 8 Carlisle CH, Reynolds KM. Radiographic anatomy of the tarsocrural joint of the dog. J Small Anim Pract 1990; 31: 273-279.
- 9 Fitch RB, Hathcock JT, Montgomery RD. Radiographic and computed tomographic evaluation of the canine intercondylar fossa in normal stifles and after notchplasty in stable and unstable stifles. Vet Radiol Ultrasound 1996; 37: 266-274.
- 10 Gielen IM, De Rycke LM, van Bree HJ, et al. Computed tomography of the tarsal joint in clinically normal dogs. Am J Vet Res 2001; 62: 1911-1915.
- 11 Morgan JW, Santschi EM, Zekas LJ, et al. Comparison of radiography and computed tomography to evaluate metacarpo/metatarsophalangeal joint pathology of paired limbs of thoroughbred racehorses with severe condylar fracture. Vet Surg 2006; 35: 611-617.
- 12 Draffan D, Clements D, Farrell M, et al. The role of computed tomography in the classification and management of pelvic fractures. Vet Comp Orthop Traumatol 2009; 22: 190-197.
- 13 Lozano-Calderon S, Blazar P, Zurakowski D, et al. Diagnosis of scaphoid fracture displacement with radiography and computed tomography. J Bone Joint Surg Am 2006; 88: 2695-2703.
- 14 Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas 1960; 20: 37-46.
- 15 Cohen J. Weighted Kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968; 70: 213-220.
- 16 Fleiss JL. The measurement of interrater agreement. In: Statistical methods for rates and proportions. 2nd ed. New York: John Wiley and Sons; 1981: pg. 225-232.
- 17 Fleiss JL. Measuring nominal scale agreement among many raters. Psychol Bull 1971; 76: 378-382.
- 18 Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol 1990; 43: 543-549.
- 19 Gwet K. The AC1 and AC2 statistics. In: Handbook of inter-rater reliability: how to estimate the level of agreement between two or multiple raters. Gaithersburg, Maryland: Stataxis Publishing Co; 2001: pg. 79-82.
- 20 Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977; 33: 159-174.
- 21 Brennan P, Silman A. Statistical methods for assessing observer variability in clinical measures. BMJ 1992; 304: 1491-1494.
- 22 Byrt T, Bishop J, Carlin JB. Bias, prevalence and kappa. J Clin Epidemiol 1993; 46: 423-429.
- 23 Eugenio BD, Glass M. The kappa statistic: a second look. Computational Linguistics 2004; 30: 95-101.
- 24 Gwet K. Inter-rater reliability: dependence on trait prevalence and marginal homogeneity. Series: Statistical Methods For Inter-Rater Reliability Assessment 2002; 2: 1-9.
- 25 Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol 2008; 61: 29-48.