In a recent reply to our article (Brenneman et al., 2017), McFarland raised several issues regarding our statistical design and findings, some of which we will address
briefly here. We reported the coefficient of determination, r², a commonly used effect size statistic that describes the degree of shared variance between variables, and the raw correlation coefficients on which the r² values were based. Using these correlation coefficients, McFarland argued that the
relationship we reported between some of our central auditory processing disorder
(CAPD) measures and cognition measures may actually be quite large when considered
in the context of reliability data from another study (Musiek et al., 1991), which McFarland used to determine the expected upper limit of shared variance.
However, if one considers other reliability data, such as those reported by Strouse and Hall (1995), then the degree of shared variance between CAPD and cognition in the present study
appears quite modest, as we reported.
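To make this reasoning explicit (a standard psychometric sketch on our part, not a computation taken from either commentary): under the classical correction for attenuation, the correlation observable between two measures is bounded by their test–retest reliabilities, $r_{xx}$ and $r_{yy}$,

\[
r_{xy}^{\max} = \sqrt{r_{xx}\, r_{yy}}, \qquad \left(r_{xy}^{\max}\right)^{2} = r_{xx}\, r_{yy},
\]

so the expected upper limit of shared variance is the product of the two reliabilities. Lower reliability estimates, such as those McFarland drew from Musiek et al. (1991), shrink this ceiling and make a given observed r² appear large relative to it; higher estimates, such as those of Strouse and Hall (1995), raise the ceiling and make the same r² appear modest.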
McFarland incorrectly assumed that our goal was to address “whether CAPD tests provide incremental validity beyond that provided by language and cognitive tests” and argued that our statistical approach was inadequate to pursue that goal. In fact, establishing
incremental validity was not our goal. As stated in the introduction, we set out to examine the degree to which clinical measures of CAPD, language, and cognition share variance in a population that has a high comorbidity rate for auditory processing, speech-language, and cognitive deficits. Given this high comorbidity rate, we chose to examine the relationship between pairs of tests rather than taking a regression approach, in which multicollinearity among the predictors could be an issue.
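For context (an illustrative note on our part, not an analysis from the original article), the severity of multicollinearity for a given predictor $j$ is commonly quantified by the variance inflation factor,

\[
\mathrm{VIF}_j = \frac{1}{1 - R_j^{2}},
\]

where $R_j^{2}$ is obtained by regressing predictor $j$ on the remaining predictors. In a highly comorbid sample, measures of CAPD, language, and cognition are themselves intercorrelated, so $R_j^{2}$ rises, $\mathrm{VIF}_j$ inflates, and individual regression coefficients become unstable; examining pairwise correlations sidesteps this problem.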
McFarland’s comments obscure two important implications of our findings. First, the largest correlation and r² value in our data were between the cognition and speech-language tests. If one criterion
required to establish CAPD tests as independent measures is how well they dissociate from other measures, then our results clearly suggest that CAPD is more independent of speech-language and cognition than speech-language and cognition measures are
from each other. Second, the present study showed that significant relationships between CAPD tests and other measures were more likely to be observed when participants with lower cognition were included. For this reason, we emphasized the importance of clearly characterizing participants in future studies of CAPD in pediatric populations with respect to potential confounding variables and/or comorbid cognitive and speech-language deficits, something that is rarely done at present.
In his closing, McFarland advocates for the use of multi-modality testing in the diagnosis
of auditory processing deficits. Limitations regarding this approach have been raised
frequently in the literature (Musiek et al., 2005; Dillon et al., 2014; Moore and Ferguson, 2014). We refer the interested reader to these publications for further discussion of this approach.