Summary
Objectives: The German Association for Medical Informatics, Biometry and Epidemiology conducted
a field test of the ICD-11 Beta Draft. The aim was to analyze the completeness and appropriateness
of the ICD-11 Beta Draft across its entire breadth.
Methods: The starting point was the synonym thesaurus (“Alphabet”) of the German modification
of ICD-10. The Alphabet comprises a list of diagnosis terms that supports the coding
of diagnoses with ICD-10. A sample of 60,328 diagnosis terms was drawn to be mapped
to the ICD-11 Beta Draft. A subsample of 13,975 diagnosis terms was prepared for assessing
reliability. First, the coders had to match each diagnosis term from the sample to an
appropriate English one. This included the automatic selection of the respective code
from the ICD-11 Beta Draft. Second, the coders had to answer questions regarding
completeness, appropriateness, and other issues.
Results: In total, 49,184 results from 36 coders were available for the analysis. Problems
with completeness were indicated in 4.7% of the results, problems with appropriateness
in 5.3%. At the level of chapters, Cohen’s kappa reached the grade “fair” at most.
The coders agreed on 31.4% of the terms.
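The chapter-level agreement statistic reported above, Cohen’s kappa, corrects the raw agreement rate for agreement expected by chance. A minimal sketch of its computation is given below; the coder labels are illustrative toy data, not the study’s results.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' nominal labels (e.g. ICD chapters)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items both raters labeled identically
    po = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement: sum over categories of the product of marginal rates
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# hypothetical chapter assignments by two coders
coder1 = ["I", "I", "II", "III", "II", "I"]
coder2 = ["I", "II", "II", "III", "I", "I"]
print(round(cohens_kappa(coder1, coder2), 3))  # → 0.455
```

Under the commonly used Landis and Koch scale, values between 0.21 and 0.40 are graded “fair”, which is the maximum level reported here at chapter level.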
Conclusions: Problems with the ICD-11 Beta Draft appeared to be moderate. Completeness was high;
reliability was low, as is known for ICD-10. Concerns about the structure of the
ICD-11 Beta Draft were noted, e.g. for neoplasms. With regard to content, post-processing
of the ICD-11 Beta Draft seems to be sufficient. Methodologically, a
thorough review of the structure might be advisable.
Keywords
Classification - coding - diagnoses - International Classification of Diseases - reliability