Summary
Objectives:
When two raters rate a qualitative variable on three ordered categories,
their agreement is commonly assessed with a symmetrically weighted kappa
statistic. However, these statistics can present paradoxes, since they may be insensitive
to variations in either complete agreements or disagreements.
Methods:
Agreement may be summarized by the relative amounts, beyond chance, of complete
agreements, partial disagreements, and maximal disagreements. Fixing the marginal totals and the trace,
we computed symmetrically weighted kappa statistics and developed a new statistic
for qualitative agreement. Data sets from the literature were used to illustrate
the methods.
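As a minimal sketch, the symmetrically weighted kappa computations can be reproduced in a few lines of Python. The 3x3 table below is illustrative, not data from the paper, and the function name is hypothetical; the unweighted, linear, and quadratic cases differ only in the disagreement-weight matrix.

```python
# Hedged sketch: Cohen's weighted kappa for a 3x3 contingency table,
# written in the disagreement form kappa_w = 1 - D_obs / D_exp.
# The table values and the three weighting schemes are illustrative assumptions.
import numpy as np

def weighted_kappa(table, weights):
    """Weighted kappa with `weights` holding the disagreement weight of each
    cell (0 on the diagonal, larger for cells further from it)."""
    p = table / table.sum()                  # observed proportions
    row, col = p.sum(axis=1), p.sum(axis=0)  # marginal totals
    e = np.outer(row, col)                   # expected proportions under independence
    return 1.0 - (weights * p).sum() / (weights * e).sum()

table = np.array([[40.0, 8.0, 2.0],          # hypothetical two-rater counts
                  [6.0, 30.0, 4.0],
                  [1.0, 5.0, 24.0]])

d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))  # |i - j| distances
print(weighted_kappa(table, (d > 0).astype(float)))  # unweighted kappa
print(weighted_kappa(table, d / d.max()))            # linear weights
print(weighted_kappa(table, (d / d.max()) ** 2))     # quadratic weights
```

Fixing the marginal totals and the trace, as in the Methods, amounts to holding `row`, `col`, and the diagonal of `table` constant while the off-diagonal counts vary.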
Results:
We show that agreement may be better assessed with the unweighted kappa index, κc, together with a new statistic, ζ, which measures the excess of maximal disagreements over partial ones
and does not depend on a particular weighting system. When ζ is equal to zero, maximal and partial disagreements beyond chance are equal. Using
its estimated large-sample variance, we compared the values obtained from two contingency tables.
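The exact standardization of ζ and its large-sample variance are given in the paper and are not reproduced here. The hedged sketch below only computes the ingredients the Summary names, namely the beyond-chance amounts of partial disagreement (cells one category apart) and maximal disagreement (the two corner cells), and their raw difference, to illustrate that ζ = 0 corresponds to equal partial and maximal disagreements beyond chance; the table is again a made-up example.

```python
# Hedged illustration of the quantities behind zeta (not the paper's formula):
# beyond-chance partial and maximal disagreement, and their raw excess.
import numpy as np

table = np.array([[40.0, 8.0, 2.0],          # hypothetical two-rater counts
                  [6.0, 30.0, 4.0],
                  [1.0, 5.0, 24.0]])

p = table / table.sum()
e = np.outer(p.sum(axis=1), p.sum(axis=0))   # expected under independence
d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))

partial_excess = (p - e)[d == 1].sum()       # partial disagreements beyond chance
maximal_excess = (p - e)[d == 2].sum()       # maximal disagreements beyond chance
print(maximal_excess - partial_excess)       # zero when the two excesses coincide
```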
Conclusions:
The (κc, ζ) pair is sensitive to variations in agreements and/or disagreements and makes it possible to locate
where two qualitative agreements differ. Qualitative agreement improves
with increasing values of κc and ζ.
Keywords
Weighted kappa - agreement - concordance - order relationship - maximal and partial
disagreement