J Am Acad Audiol 2018; 29(05): 364-377
DOI: 10.3766/jaaa.16130
Articles
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

Referral and Diagnosis of Developmental Auditory Processing Disorder in a Large, United States Hospital-Based Audiology Service

David R. Moore
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
§   Department of Otolaryngology, University of Cincinnati College of Medicine, Cincinnati, OH
,
Stephanie L. Sieswerda
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Maureen M. Grainger
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Alexandra Bowling
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Nicholette Smith
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Audrey Perdew
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Susan Eichert
†   Division of Audiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Sandra Alston
†   Division of Audiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Lisa W. Hilbert
†   Division of Audiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Lynn Summers
†   Division of Audiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Li Lin
‡   Division of Patient Services, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
,
Lisa L. Hunter
*   Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
†   Division of Audiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH
§   Department of Otolaryngology, University of Cincinnati College of Medicine, Cincinnati, OH

Corresponding author

David R. Moore
Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center
Cincinnati, OH 45229

Publication History

Publication Date:
29 May 2020 (online)

 

Abstract

Background:

Children referred to audiology services with otherwise unexplained academic, listening, attention, language, or other difficulties are often found to be audiometrically normal. Some of these children receive further evaluation for auditory processing disorder (APD), a controversial construct that assumes neural processing problems within the central auditory nervous system. This study focuses on the evaluation of APD and how it relates to diagnosis in one large pediatric audiology facility.

Purpose:

To analyze electronic records of children receiving a central auditory processing evaluation (CAPE) at Cincinnati Children’s Hospital, with a broad goal of understanding current practice in APD diagnosis and the test information which impacts that practice.

Research Design:

A descriptive, cross-sectional analysis of APD test outcomes in relation to final audiologist diagnosis for 1,113 children aged 5–19 yr receiving a CAPE between 2009 and 2014.

Results:

Children had a generally high level of performance on the tests used, resulting in marked ceiling effects on about half the tests. Audiologists developed the diagnostic category “Weakness” because of the large number of referred children who clearly had problems, but who did not fulfill the AAA/ASHA criteria for diagnosis of a “Disorder.” A “right-ear advantage” was found in all tests for which each ear was tested, irrespective of whether the tests were delivered monaurally or dichotically. However, neither the side nor size of the ear advantage predicted the ultimate diagnosis well. Cooccurrence of CAPE with other learning problems was nearly universal, but neither the number nor the pattern of cooccurring problems was a predictor of APD diagnosis. The diagnostic patterns of individual audiologists were quite consistent. The number of annual assessments decreased dramatically during the study period.

Conclusions:

A simple diagnosis of APD based on current guidelines is neither realistic, given the current tests used, nor appropriate, as judged by the audiologists providing the service. Methods used to test for APD must recognize that any form of hearing assessment probes both sensory and cognitive processing. Testing must embrace modern methods, including digital test delivery, adaptive testing, referral to normative data, appropriate testing for young children, validated screening questionnaires, and relevant objective (physiological) methods, as appropriate. Audiologists need to collaborate with other specialists to understand more fully the behaviors displayed by children presenting with listening difficulties. To achieve progress, it is essential for clinicians and researchers to work together. As new understanding and methods become available, it will be necessary to sort out together what works and what doesn’t work in the clinic, both from a theoretical and a practical perspective.



INTRODUCTION

Children seeking audiology services are sometimes found to have normal audiograms, despite reports from parents, teachers, and primary care professionals that suggest the child has hearing or listening difficulties. Common reports include difficulty hearing speech in challenging environments, unexplained academic difficulties, impaired attention to sounds, and difficulty following spoken directions ([AAA, 2010]; [BSA, 2011]). Many of these children have a variety of additional learning problems, especially involving language ([Dawes and Bishop, 2009]; [Sharma et al, 2009]). Any of these reports or difficulties could be exacerbated by, or fundamentally dependent upon, either subclinical hearing impairment (i.e., a “hidden hearing loss”; [Liberman, 2015]) or some aspect of cognitive function that is necessary for normal auditory function (e.g., attention, memory).

Children with reported listening difficulties, but normal audiograms, may be given further assessments for auditory processing disorder (APD). In a recent, unpublished survey by one of the authors (L.L.H.), 16/25 pediatric facilities (children’s and general hospitals, university clinics) in the United States reported evaluating APD. APD remains a poorly defined construct ([Vermiglio, 2014]), despite attempts by professional societies in several states, provinces, and countries to provide comprehensive position statements and professional guidelines. One consequence of this poor understanding is the lack of an international “gold standard” test battery for screening and diagnosis, with potentially enormous variability in individual practice based on interpretation of current guidelines ([Wilson and Arnott, 2013]).

In this article, we focus on how “developmental” APD ([Moore et al, 2013]) has been assessed in one large pediatric facility, Cincinnati Children’s Hospital Medical Center (CCHMC). We conducted this analysis to understand, at a practical, clinical level, how individual children come into the system, how their subsequent testing leads to diagnosis, and the benefit and consistency of their diagnoses. This exercise has influenced the recruitment for and design of a large project on the mechanisms of APD, now underway at CCHMC (see [Moore, 2015]). The current, retrospective analysis is based on electronic records, and complements another APD service analysis that surveyed audiologists across the United States who listed APD as an area of expertise ([Emanuel et al, 2011]). In that survey, which covered all stages, from pretesting and screening, through diagnosis and management, a picture emerged of both a dominant consensus regarding test procedures, and a diversity of individual diagnostic tests (n = 27). A remarkable finding was that while almost all audiologists (97%) listed the “audiologist” as the professional responsible for diagnosing APD, only 40% thought that treatment or management provision was the audiologist’s responsibility. Almost all survey respondents (97%) used a test battery. Most (80%) always or often used a minimum battery for all children with additional tests based on individual case history and age. Children are referred for APD testing for a wide variety of reasons, some of which may only loosely be connected with hearing ([AAA, 2010]; [Hind et al, 2011]), and audiologists often make individualized judgments that can also vary based on the knowledge of the practitioner ([Moore et al, 2013]).

Both the American Speech-Language-Hearing Association ([ASHA, 2005]) and [AAA (2010)] recommend that a diagnosis of APD be based on a score 2 standard deviations (SDs) below the mean on at least two different tests; [AAA (2010)] adds detail concerning ear advantage (EA) and allows a diagnosis based on a score 3 SDs below the mean on a single test. These criteria are arbitrary, and they can result in a wide variety of outcomes ([Wilson and Arnott, 2013]). [Emanuel and colleagues (2011)] did not comment on the application of these criteria. However, they found that two-thirds of the responding audiologists did not rely on a single test battery in any case, so the number of passed/failed tests becomes somewhat irrelevant.
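As a rough illustration, the two-tests-at-−2-SD / one-test-at-−3-SD rule can be written in a few lines. This is a sketch only; the function name, the list-of-scores input format, and the normative mean/SD of 10/3 (the SCAN individual-test convention) are our assumptions, not part of either guideline:

```python
def meets_apd_criteria(scaled_scores, mean=10.0, sd=3.0):
    """Apply AAA/ASHA-style cutoffs to a list of scaled test scores.

    Flags a record when at least two tests fall 2 SDs or more below the
    normative mean, or at least one test falls 3 SDs or more below it.
    Illustrative only; real diagnosis involves far more than this rule.
    """
    z = [(s - mean) / sd for s in scaled_scores]
    n_below_2sd = sum(1 for v in z if v <= -2)
    n_below_3sd = sum(1 for v in z if v <= -3)
    return n_below_2sd >= 2 or n_below_3sd >= 1
```

As the Results below show, audiologists in practice treated such cutoffs as guidance rather than as necessary or sufficient conditions.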

The “SCAN” is one of the most widely used test batteries in the United States for both screening and diagnosing APD ([Emanuel et al, 2011]). The current version (SCAN-3C; [Keith, 2009]) has three screening (Gap Detection, Auditory Figure-Ground [AFG], and Competing Words [CW]) and four diagnostic (Filtered Words [FW], AFG, CW, and Competing Sentences [CS]) tests. Advantages of using the SCAN include that it is psychometrically sound and has nationally normalized data for each test, as well as a composite measure, that can be applied to specific age groups ([Emanuel et al, 2011]). A disadvantage of the SCAN, however, is that all of the diagnostic tests have speech and language components that necessarily involve abilities beyond those that are purely auditory ([Loo et al, 2013]). The CS test, for example, requires participants to repeat back a sentence heard in one ear while ignoring a sentence playing in the other. This task can be difficult for individuals with attention, language, or memory impairment and can thus result in poor scores for reasons unrelated to “bottom-up” auditory processing (AP; [Moore et al, 2013]). In general, the SCAN and most other currently used tests of AP do not adhere to psychoacoustic principles of “criterion-free” measurement or distinguish between auditory and cognitive interpretations of the outcome. They may, nevertheless, be ecologically valid measures of the sorts of problems typically experienced by children with listening difficulties, since those problems include speech, noise, and competing signals.

One AP test routinely used at CCHMC that is not based on language is the Pitch Pattern Test (PPT; Auditec, St Louis, MO; [Musiek, 1994]). Although the PPT is often considered a test of auditory temporal processing ([McDermott et al, 2016]), it may be more appropriately described as a test of auditory pattern memory that involves labeling successive tones as “high” or “low” pitched. In common with the SCAN tests, it thus requires multiple skills in addition to AP within the central auditory nervous system.

Despite these various issues with current testing of AP, audiologists must make at least two decisions when confronted with a child who has normal audiometry, but reported listening or other related difficulties. First, do I perform any tests for APD, or simply make recommendations or arrangements for management without testing? Second, if I do test, what tests do I choose and how do I interpret the outcomes? This study aimed to address these questions by analyzing electronic records of children receiving a central auditory processing evaluation (CAPE) at CCHMC with a broad goal of understanding current practice in diagnosing APD and the test information that impacts that practice.



METHODS

Participants, Materials, and Diagnosis

Anonymized electronic records (Epic) were obtained for 1,113 individuals (724 males, 389 females) aged 5–19 yr ([Figure 1A]) who received a CAPE in the Division of Audiology at CCHMC between June 2009 and October 2014. All evaluated children had “normal” audiograms, bilaterally, as defined by pure-tone thresholds ≤20 dB HL for the frequencies 0.5, 1, 2, 4, and 8 kHz, normal speech perception in quiet, and absence of middle ear pathology as assessed by tympanometry.
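The audiometric inclusion rule can be sketched as a simple per-ear check. This is illustrative only; the function name and the dictionary input format (frequency in kHz mapped to threshold in dB HL) are hypothetical:

```python
def audiogram_normal(thresholds_db_hl, cutoff=20):
    """Check the 'normal audiogram' rule used here: pure-tone thresholds
    of 20 dB HL or better at 0.5, 1, 2, 4, and 8 kHz for one ear.

    thresholds_db_hl: dict mapping frequency (kHz) to threshold (dB HL).
    Returns False if any required frequency is missing or above cutoff.
    """
    required_khz = (0.5, 1, 2, 4, 8)
    return all(f in thresholds_db_hl and thresholds_db_hl[f] <= cutoff
               for f in required_khz)
```

In the study itself this criterion was applied bilaterally, together with normal speech perception in quiet and normal tympanometry.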

Figure 1 Demographics: (A) age, (B) referral source, and (C) primary reported difficulty of the children whose data are reported in this study.

The CAPE included an extensive developmental, behavioral, medical, and, in some cases, social history. Behavioral evaluation included SCAN-3C testing as a default (four core tests, n = 1,036, and up to four supplementary tests, n = 1–57). SCAN core tests were two measures of dichotic listening (CW, CS), a speech-in-noise test (AFG +8), and a test of the ability to identify distorted speech (FW). Up to ten other AP tests were used including Phonemic Synthesis (n = 1,061; [Katz and Harmon, 1981]; [Katz, 1983]), Staggered Spondaic Words (n = 588; [Katz and Smith, 1991]), PPT (n = 766), Dichotic Digits (n = 420; [Musiek, 1983]), and Auditec CS (n = 89). Results from three of these tests are considered further below. The Phonemic Synthesis test measures the ability to discriminate individual speech sounds by asking a listener to combine binaural, successively presented phonemes into a spoken word. PPT is described in section “Introduction.” Dichotic Digits tests involve simultaneous presentation of two different digits to the two ears and usually (as here) use two successively presented digit pairs. Listeners are asked to recall either all digits heard, or the digits presented to one or the other ear. Although often described as a test of binaural integration ([Musiek and Weihing, 2011]; [Cameron, Glyde, Dillon, Whitfield, and Seymour, 2016]; [McDermott et al, 2016]), the test involves the ability to separate and report the signals presented to each ear.

Diagnosis was made by individual audiologists based on the above information. Because no nationally specified diagnostic procedure is applied to all evaluated children ([AAA 2010]; [Emanuel et al, 2011]), and because an aim of this study was to relate actual clinical diagnostic outcome to test findings, the overall assessment of the attending audiologist is considered here to be definitive. Audiologists involved in APD diagnosis at CCHMC held monthly meetings to discuss procedures and “which tests worked.” An aim of these meetings was to help families “get to the right door.” They were influenced by psychology and speech/language pathology colleagues. The diagnostic term “weakness” (see “Individual AP Tests” below) appears to have come from these interactions and was used as a label for families to grasp and to summarize a condition “halfway between a disorder and nothing.”



Procedure

Progress notes for CAPE appointments were reviewed for demographics, referral source, primary complaint at time of appointment, other relevant patient history, including problems related to listening, test battery performance, and audiologist diagnosis. Individual patient notes were coded into a database using operationally defined category labels. After coding, research coordinators performed double data entry on 10% of the extracted information to ensure consistency and interrater reliability.



Analysis

We focused primarily on the diagnosis of APD and the factors contributing to that diagnosis. We first briefly examine the clinical presentation of APD, offering descriptive statistics for comorbidities, referral sources, and commonly reported primary complaints. Many children were referred for or presented with more than one complaint and more than one comorbidity. Additional analyses were conducted to evaluate the likelihood of a diagnosis of APD given certain clinical presentations. The study sample was divided into children who received a diagnosis of “APD” (Disorder), those with an “auditory processing weakness” (Weakness), and those who received no AP diagnosis (Undiagnosed). Two children whose progress note did not contain enough information to determine one of these diagnostic categories were excluded from this part of the analysis.

Performance on the various tests was indexed initially as a “raw score,” typically obtained by simply adding up the number of items correctly identified by each child. For the SCAN, however, “scaled scores” were available ([Keith, 2009]), whereby raw scores were “normalized” (also called “scaled” or “standardized”) in a large trial population, so that they formed a normal distribution in that population. By analogy with IQ testing, the mean of the composite (scaled) SCAN score was 100, with a SD of 15. The individual tests were scaled to a mean of 10 (SD = 3).
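The relation between scales rests on a z-score transformation. A minimal sketch (the function names are ours; the means and SDs follow the SCAN conventions quoted above, i.e., 10/3 for individual tests and 100/15 for the composite):

```python
def to_z(score, mean, sd):
    """Express a score as SDs from its scale's normative mean."""
    return (score - mean) / sd

def rescale(score, from_mean, from_sd, to_mean, to_sd):
    """Map a score from one normalized scale to another via its z-score,
    e.g., an individual SCAN test score (mean 10, SD 3) onto the
    composite scale (mean 100, SD 15). Illustrative sketch only."""
    return to_mean + to_z(score, from_mean, from_sd) * to_sd
```

For example, an individual-test scaled score of 7 (1 SD below the mean) corresponds to 85 on the composite scale.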

Statistical analyses were performed using Statistical Analysis System software, version 9.3 (SAS Institute, Cary, NC). One-way analysis of variance (ANOVA) was performed to study the difference of the SCAN composite (scaled) score among the three diagnosis groups. Wilcoxon signed-rank tests were used to study the difference of each of the four (unscaled) test scores between the right ear and left ear. Kruskal–Wallis tests evaluated differences of each of the four test scores among the diagnosis groups for each ear and Wilcoxon rank-sum tests were used for pairwise comparisons between groups. Logistic regression was used to study differences in the probability of having hearing difficulty among the three diagnosis groups. Effect sizes were calculated for significant effects using η2 (for ANOVA), Cohen’s d (for Tukey), Pearson’s r (for Wilcoxon), and odds ratio. We analyzed categorical data extracted from progress notes using χ2 analyses. Post hoc multiple comparison adjustment was applied for significant effects using Tukey or Bonferroni. The two-sided significance level was set at 0.05.
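For illustration, the η² effect size reported for the ANOVA is simply the between-group sum of squares divided by the total sum of squares. A pure-Python sketch with made-up data (the function name and example values are ours; this is not the SAS procedure used in the study):

```python
def one_way_anova_eta2(groups):
    """One-way ANOVA F statistic and eta-squared effect size,
    computed from between- and within-group sums of squares.

    groups: list of lists of numeric observations, one list per group.
    """
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta2 = ss_between / (ss_between + ss_within)
    return f_stat, eta2
```

With three diagnosis-like groups this returns both the F statistic and the proportion of total variance explained by group membership, the quantity reported as η² = 0.36 in the Results.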



RESULTS

Referral and Presentation

Children for whom a CAPE was performed represented 1.03% of the total number of visits to CCHMC Audiology between June 2009 and October 2014 (n = 107,737). As above, all children receiving a CAPE had “normal” audiograms. They were all evaluated for APD because of otherwise unexplained concerns about their hearing, academic performance, or cognitive function. About 70% of the children receiving a CAPE were referred from two sources: the CCHMC Department of Developmental and Behavioral Pediatrics ([Figure 1B]) and community primary care practices. Smaller numbers were referred from other departments at CCHMC. The primary reasons for attendance stated by the parents or caregivers of those evaluated were more evenly distributed, but two concerns dominated: poorer than expected performance at school, and general or unspecified listening difficulties. Significant numbers of other concerns, including attention and language, were also stated. Many records indicated two (n = 426) or three (n = 64) of these concerns ([Figure 1C]).



SCAN Composite Performance

Surprisingly, this large group of referred children performed similarly on the SCAN composite score to those in the US national, stratified, normative sample (“scaled scores”; [Figure 2A]; [Keith, 2009]). Scaled scores in the present sample were distributed normally with a mean of 98.5 (SD = 12.9). According to the SCAN manual ([Keith, 2009]), 14 of these children would be classified Disordered (>2 SD below mean), 127 Borderline, and 895 Normal. In contrast, 179 children in this sample were diagnosed by the evaluating audiologist to have a Disorder. Those children had the lowest mean composite score (84.4; [Figure 2B], [Table 1]). Children considered to have a Weakness had an intermediate mean composite score, and Undiagnosed children the highest ([Table 1]). The difference between the groups was highly significant [ANOVA: F (2,1032) = 292.52, p < 0.001, η2 = 0.36], as were the differences between each pair of groups, with large effect sizes (Tukey adjustment, p < 0.001, d = 0.94–1.91).

Figure 2 SCAN overview: (A) sample composite scores, derived from SCAN Manual ([Keith, 2009]) and based on equal weighting of the four individual test scaled scores ([Figure 3A]), rescaled to produce a population mean of 100 (SD = 15). The sample mean ± 2 SD are indicated by hatched vertical lines; (B) relation between SCAN composite score (mean + SD) and audiologists’ diagnosis. Number (n) of children receiving each diagnosis is indicated; (C) number of children in sample having at least one SCAN individual test (scaled) score ≤3 (>2 SD below mean); and (D) number of children in sample having at least two SCAN individual test (scaled) scores ≤7 and >3 (1–2 SD below mean).
Table 1

Performance of Children on AP Tests, Divided According to Diagnosis and Performance

| Diagnosis | Statistic | AFG | FW | CW | CS | Comp | Pitch | DD-L | DD-R | REA | Phon |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Undiagnosed | Mean | 10.3 | 12.0 | 10.6 | 10.2 | 105.6 | 94.5 | 90.0 | 91.5 | 1.5 | 22.0 |
| | SD | 2.0 | 2.0 | 2.1 | 1.8 | 9.5 | 10.0 | 8.1 | 6.9 | 7.8 | 2.5 |
| | Median | 10 | 12 | 11 | 10 | 105 | 100 | 92 | 92.5 | 0 | 23 |
| | 25% | 9 | 11 | 9 | 9 | 100 | 90 | 85.8 | 88 | −2.8 | 21 |
| | 75% | 12 | 13 | 12 | 12 | 111 | 100 | 96 | 96 | 5 | 24 |
| | 5% | 8 | 9 | 7 | 8 | 92 | 70 | 76 | 80 | −9.1 | 18 |
| | 95% | 14 | 16 | 14 | 13 | 121 | 100 | 100 | 100 | 16 | 25 |
| Weakness | Mean | 9.2 | 11.3 | 8.8 | 8.6 | 95.9 | 89.8 | 84.2 | 87.4 | 3.2 | 17.6 |
| | SD | 2.4 | 2.0 | 2.4 | 2.5 | 11.2 | 17.5 | 10.1 | 9.3 | 11.0 | 5.1 |
| | Median | 9 | 11 | 9 | 9 | 96 | 100 | 85 | 90 | 2.4 | 19 |
| | 25% | 8 | 10 | 7 | 7 | 89 | 80 | 80 | 82 | −1.9 | 15 |
| | 75% | 11 | 13 | 11 | 10 | 101 | 100 | 91.3 | 94 | 10 | 22 |
| | 5% | 5 | 8 | 5 | 4 | 80 | 60 | 65.2 | 72.1 | −16.0 | 7.5 |
| | 95% | 13 | 14 | 13 | 12 | 115 | 100 | 97.0 | 100 | 20 | 24 |
| Disorder | Mean | 7.7 | 10.3 | 6.5 | 6.6 | 84.4 | 88.0 | 82.2 | 83.6 | 1.4 | 16.9 |
| | SD | 3.0 | 2.5 | 2.6 | 2.8 | 10.8 | 19.1 | 11.2 | 12.3 | 13.7 | 5.3 |
| | Median | 8 | 11 | 6.5 | 6 | 84 | 100 | 84 | 86 | 2 | 18 |
| | 25% | 6 | 9 | 5 | 5 | 77 | 80 | 75.7 | 77 | −5.9 | 14 |
| | 75% | 10 | 12 | 9 | 8 | 91.5 | 100 | 91 | 92 | 8 | 21 |
| | 5% | 3 | 6 | 2 | 1.5 | 67 | 48.6 | 59.1 | 68.7 | −20 | 8 |
| | 95% | 12 | 13.7 | 11 | 11.6 | 101 | 100 | 95.7 | 98 | 20.9 | 23 |

Notes: For each test, the mean; SD; median; and 5, 25, 75, and 95 percentiles are shown. AFG, FW, CW, CS, and Comp are SCAN scores.

Comp = composite SCAN score; DD-L = Dichotic Digits-Left; DD-R = Dichotic Digits-Right; Phon = Phonemic Synthesis; Pitch = PPT.




Individual AP Tests

For individual SCAN tests, all children who performed at or below −2 SD from the mean (scaled score ≤3; [Keith, 2009]) on at least one test were diagnosed by the audiologist as having either an AP “Weakness” or a “Disorder” ([Figure 2C]). For scaled scores between 4 and 7 (1–2 SD below mean) on two or more tests, the proportion of children with a Weakness increased ([Figure 2D]). Note that these were neither necessary nor sufficient criteria for diagnosis (see section “Methods”). However, the audiologists appeared to be strongly guided by these criteria, which resemble, but are not identical to, AAA and ASHA recommendations.

Individual SCAN test scaled score distributions generally aligned well with each other around a mean scaled score of 10–11 ([Figure 3A]). However, the FW test consistently returned higher scores than the other tests, with a strong mode at a scaled score of 12–13, almost 1 SD above the expected mean for the normal population. In fact, using the scaled score tables for the SCAN-3C, only 13 children of 1,098 tested scored more than 1 SD below the mean on the FW test. Nevertheless, few children obtained a ceiling performance, suggesting that the test could provide upper-end discrimination with appropriate rescaling. Recognizing this problem, some of the audiologists additionally used the FW test from the former SCAN-C battery that provided scores more in line with those of the other tests.

Figure 3 Individual AP test scores: (A) SCAN tests, scaled scores showing number of children achieving each score and mean of each test (arrows); (B) PPT, unscaled scores. Note the extreme skewness toward high scores, indicative of a ceiling effect; (C) Dichotic Digits, unscaled score showing % digits correctly identified in each ear; (D) Dichotic Digits, EA showing number of digits correctly identified in the right ear relative to the left ear. Positive scores indicate a REA; and (E) Phonemic Synthesis, unscaled score indicating number of correct items.

A common feature of all the other unscaled (nonnormalized) AP measures was a marked ceiling effect ([Figures 3] and [4]; [Table 1]). Nearly 2/3 of the 486 children who completed the PPT obtained 100% correct ([Figure 3B]). Similarly, 31% (28/90) of children scored 100% correct using the right ear in the Auditec CS test (data not shown). For Dichotic Digits, many scores were also at ceiling, and higher for the right ear than for the left ([Figure 3C]). Individual children had an EA that more commonly favored the right side, a right-ear advantage (REA; n = 287), than the left (n = 199; [Figure 3D]). Note, however, that 41% of this large sample had a left-ear advantage (LEA). Phonemic Synthesis, finally, had a broader spread of lower scores, potentially adding sensitivity. A strong ceiling effect was also observed for Staggered Spondaic Words (n = 588; data not shown).
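The EA used throughout is simply the right-ear score minus the left-ear score, with positive values indicating a REA ([Figure 3D]). A trivial sketch (function names are ours):

```python
def ear_advantage(right_correct, left_correct):
    """EA as right-minus-left items correct: positive values indicate a
    right-ear advantage (REA), negative a left-ear advantage (LEA)."""
    return right_correct - left_correct

def classify_ea(ea):
    """Label an EA value using the convention in the text."""
    if ea > 0:
        return "REA"
    if ea < 0:
        return "LEA"
    return "No EA"
```

Counting children in each category of `classify_ea` over a sample reproduces tallies of the kind reported here (e.g., REA n = 287 vs. LEA n = 199 for Dichotic Digits).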

Figure 4 SCAN individual tests. Unscaled scores for each ear. Note the REA for each test: (A) AFG, (B) FW, (C) CW (directed ear), and (D) CS.

Higher right-ear, unscaled scores were found for both monotic (“monaural”; [Figure 4A and B]) and dichotic ([Figure 4C and D]) SCAN tests (all p < 0.001; Wilcoxon signed-rank test), although this effect was more pronounced for the dichotic tests (effect sizes: r = 0.11, “small” [Figure 4A]; r = 0.47, “large” [Figure 4B]; r = 0.65, “large” [Figure 4C]; r = 0.58, “large” [Figure 4D]; [Rosenthal, 1994]). Unscaled scores for the AFG (+8) test showed generally high performance and little individual discrimination; almost all children scored >16/20. Despite a strong ceiling effect for the CS test, particularly for the right ear, the distribution of scores spanned the entire range, suggesting test sensitivity. Further analysis of Dichotic Digits, one of the most popular clinical diagnostic tests for APD ([Emanuel et al, 2011]), is shown with respect to age in [Figure 5]. It has been suggested that REA gradually declines with age ([Keith, 2009]). These data show a more complex pattern: the incidence of large REAs declined after age 10, but the proportion of children with a REA remained stable.

Figure 5 Dichotic Digits EA across age: (A) number of children achieving each level of EA (see [Figure 3D]) and (B) percent children achieving each level of EA.


Diagnosis

Of the 1,098 participants for whom data were recorded, 53.4% were Undiagnosed; they were found to have neither a Weakness nor a Disorder ([Figure 6A]). Overall, the number of children diagnosed as having a Disorder closely matched the number satisfying one of the two SCAN criteria ([Figure 2C and D]), whereas only 26% of those found to have a Weakness satisfied those criteria.

Figure 6 Audiologists’ diagnoses: (A) number of children receiving each diagnosis in whole sample; (B) summary number of diagnoses given by each of four principal audiologists; (C) yearly cases by number, 2009–2014, fitting each diagnostic category; and (D) yearly cases by percentage, 2009–2014, fitting each diagnostic category.

The balance between right- and left-ear scores for the separate SCAN tests did not vary markedly with diagnostic category ([Figure 7]). However, Kruskal–Wallis tests did show significant differences (χ2(2) p < 0.001) among the three groups for both ears and all four tests ([Table 2]). Pairwise comparison (Wilcoxon rank-sum test, r) showed that, first, the Undiagnosed group had significantly higher scores than the Weakness group (small to medium effect sizes), second, the Undiagnosed group had significantly higher scores than the Disorder group (medium to large effect sizes, except FW with small effect sizes), and third, the Weakness group had significantly higher scores than the Disorder group (small to medium effect sizes, except FW nonsignificant).

Figure 7 SCAN Individual tests by diagnostic category. Unscaled scores shown separately for each ear: Column (A) AFG, (B) FW, (C) CW, and (D) CS. Top row: Undiagnosed, middle row: Weakness, bottom row: Disorder.
Table 2

SCAN: Performance on Individual (Unscaled) Tests for Each Ear by Diagnostic Group

| Test | Group | Right ear: Median (IQR) | Right ear: p value | Left ear: Median (IQR) | Left ear: p value |
|---|---|---|---|---|---|
| AFG | All groups | | χ2(2) = 59.7, p < 0.0001* | | χ2(2) = 63.2, p < 0.0001* |
| | Undiagnosed | 19 (18–20) | 1-0: r = 0.15, p < 0.0001** | 19 (18–19) | 1-0: r = 0.15, p < 0.0001** |
| | Weakness | 19 (18–20) | 2-0: r = 0.30, p < 0.0001** | 19 (18–19) | 2-0: r = 0.31, p < 0.0001** |
| | Disorder | 18 (17–19) | 2-1: r = 0.18, p < 0.0001** | 18 (17–19) | 2-1: r = 0.19, p < 0.0001** |
| FW | All groups | | χ2(2) = 40.9, p < 0.0001* | | χ2(2) = 14.1, p = 0.0009* |
| | Undiagnosed | 17 (16–19) | 1-0: r = 0.19, p < 0.0001** | 16 (14–17) | 1-0: r = 0.09, p = 0.011** |
| | Weakness | 17 (15–18) | 2-0: r = 0.20, p < 0.0001** | 16 (13–17) | 2-0: r = 0.14, p = 0.001** |
| | Disorder | 16 (14–18) | 2-1: r = 0.05, p = 0.242** | 15 (13–17) | 2-1: r = 0.06, p = 0.154** |
| CW (DE) | All groups | | χ2(2) = 118.3, p < 0.0001* | | χ2(2) = 102.8, p < 0.0001* |
| | Undiagnosed | 24 (21–26) | 1-0: r = 0.21, p < 0.0001** | 18 (15–22) | 1-0: r = 0.19, p < 0.0001** |
| | Weakness | 22 (19–25) | 2-0: r = 0.41, p < 0.0001** | 16 (12–20) | 2-0: r = 0.40, p < 0.0001** |
| | Disorder | 18 (14–23) | 2-1: r = 0.27, p < 0.0001** | 13 (9–17) | 2-1: r = 0.23, p < 0.0001** |
| CS | All groups | | χ2(2) = 59.0, p < 0.0001* | | χ2(2) = 57.8, p < 0.0001* |
| | Undiagnosed | 34 (32–35) | 1-0: r = 0.12, p < 0.0001** | 27 (21–33) | 1-0: r = 0.17, p < 0.0001** |
| | Weakness | 33 (31–34) | 2-0: r = 0.29, p < 0.0001** | 24 (15–31) | 2-0: r = 0.27, p < 0.0001** |
| | Disorder | 31 (25–34) | 2-1: r = 0.21, p < 0.0001** | 21 (12–28) | 2-1: r = 0.12, p = 0.004** |

Notes: *Kruskal–Wallis test; **Wilcoxon rank-sum test. Pairwise comparisons: 1-0 = Weakness vs. Undiagnosed; 2-0 = Disorder vs. Undiagnosed; 2-1 = Disorder vs. Weakness.

χ2 = χ2 test (degrees of freedom); DE = directed ear; IQR = interquartile range; r = effect size.


Individual children who had an EA also generally had a REA (%REA, [Table 3]). Note, however, that a substantial proportion of children did not have an EA (No EA), particularly on the monaural tests. Restricting the comparison to children with larger EAs (“Large EA,” [Table 3]) increased the proportion of children with a REA. χ2 analysis showed no significant differences in EA across diagnostic groups, except on the (monaural) FW test for both the No EA (p < 0.05) and Large EA (p < 0.01) criteria and the CS test for the Large EA criterion.

Table 3

SCAN Tests: Individual Children EA as a Function of Diagnostic Group

| Group | Any EA: Test | REA | LEA | No EA | % REA | Large EA: Test | REA | LEA | % REA |
|---|---|---|---|---|---|---|---|---|---|
| Undiagnosed | AFG | 186 | 133 | 143 | 58 | AFG (>1) | 63 | 45 | 58 |
| | FW | 298 | 81* | 86 | 79 | FW (>2) | 146 | 15* | 91 |
| | CW | 382 | 60 | 27 | 86 | CW (>6) | 179 | 10 | 95 |
| | CS | 385 | 62 | 52 | 86 | CS (>8) | 177 | 15* | 92 |
| Weakness | AFG | 142 | 105 | 105 | 57 | AFG (>1) | 53 | 39 | 58 |
| | FW | 203 | 75 | 64 | 73 | FW (>2) | 85 | 22* | 79 |
| | CW | 270 | 60 | 15 | 82 | CW (>6) | 147 | 15 | 91 |
| | CS | 286 | 58 | 25 | 83 | CS (>8) | 174 | 23 | 88 |
| Disorder | AFG | 72 | 53 | 39 | 58 | AFG (>1) | 31 | 21 | 60 |
| | FW | 89 | 48* | 18 | 65 | FW (>2) | 47 | 12 | 80 |
| | CW | 141 | 28 | 5 | 83 | CW (>6) | 72 | 9 | 89 |
| | CS | 133 | 37 | 7 | 78 | CS (>8) | 86 | 20* | 81 |

Notes: See [Figure 7]. χ2 tests: All comparisons nonsignificant, except where noted with asterisks. *p = 0.0273 Bonferroni adjusted, odds for LEA in Disorder and Weakness groups larger than Undiagnosed group (small/medium effect sizes). SCAN tests as per [Table 1]. Any EA Tests: Numbers of children having REA or LEA >0. Large EA tests: REA or LEA greater than numbers indicated for tests in Undiagnosed.


For the other individual AP tests shown in [Figure 3], mean, median, and percentile performance generally correlated with diagnosis, and showed significant differences between groups (Kruskal–Wallis, p ≤ 0.0015), but differentiated between diagnostic categories less well than the SCAN ([Table 1]). For example, the PPT had a median score of 100% for all three diagnostic categories. For Dichotic Digits EA ([Figure 5]), neither the mean nor median differentiated well between diagnostic groups (Kruskal–Wallis, p ≥ 0.10). Dichotic Digits EA was thus a weak predictor of diagnosis. Phonemic Synthesis provided a broader spread of scores at the lower end (at and below the median), which could differentiate the Undiagnosed children from those with either a Weakness or a Disorder (p < 0.0001; r = 0.48, 0.49 respectively). However, this test did not differentiate those with a Weakness from those with a Disorder (r = 0.07; p = 0.113).

Four individual audiologists each evaluated 238–300 children and together accounted for 91% of total service evaluations. The diagnostic profiles of these audiologists showed both similarities and differences ([Figure 6B]). Audiologists 2 and 3 more closely reflected the overall profile ([Figure 6A]), whereas Audiologist 1 tended to diagnose a Weakness more frequently than a Disorder, and Audiologist 4 more commonly diagnosed a Disorder.

The number of children receiving a CAPE decreased dramatically during the study period ([Figure 6C]). This decline was steady for each year from 2010, noting that data for 2009 and 2014 span less than a complete year. Expressed as a percentage of children evaluated each year, there was a trend of decreasing willingness to diagnose, most notably for a Disorder between 2009 and 2010. These annual trends were accompanied by changes in the source of referral, with a decreasing proportion from Behavioral Pediatrics and an increasing proportion from Primary Care in more recent years (data not shown).



Cooccurrence with Other Learning Difficulties

Primary reasons for referral were presented above ([Figure 1C]). A more detailed breakdown of learning difficulties found that, overall, 90% of evaluated children had one or more additional difficulties ([Figure 8A]). Of these, 76% of evaluated children had school problems, 43% had speech/language problems, 41% had attention-deficit hyperactivity disorder (ADHD), and 24% had behavioral/emotional problems. Excluding school problems, one or more other learning difficulties were found in 70% of children Undiagnosed on the CAPE evaluation, 75% of those with a Weakness, and 79% of those with a Disorder ([Figure 8B]). About the same proportion of children had each primary cooccurring difficulty ([Figure 8A]) and number of cooccurring difficulties ([Figure 8B]) irrespective of their CAPE diagnosis (χ2, nonsignificant). These data show that while cooccurring difficulties were very common among the children in our survey, diagnostic category was independent of those difficulties. Had there been a codependence, we would have predicted a systematic difference in the distribution of cooccurring difficulties between the diagnostic groups.

Figure 8 Cooccurrence of audiologists’ diagnoses with other difficulties: (A) first listed (primary) difficulty—school problems, speech and language, AD(H)D, behavioral and emotional, cognitive delay, ASD, dyslexia and (B) number of cooccurring difficulties excluding school problems.


DISCUSSION

A number of interrelated findings from this study suggested that a simple diagnosis of APD based on AAA/ASHA guidelines is neither realistic, given the current tests used, nor appropriate, as judged by the audiologists providing the service. One positive outcome was that a very low score (>2 SD below mean) on any one SCAN test or on the composite score usually resulted in a diagnosis of APD. However, only 1.2% of evaluated children fulfilled the criterion for “Disordered” according to the SCAN manual ([Keith, 2009]). These children were, in turn, only 7.8% of those diagnosed with APD. Audiologists also developed the diagnostic category “Weakness” because of the additional large number of referred children who clearly had problems, but who did not fulfill the AAA/ASHA criteria. The main reason for these developments was the generally high level of performance of the children on the tests. For the majority of (non-SCAN) tests lacking normalization, there was a strong ceiling effect. A ceiling effect was also seen in the data of two of the SCAN tests (AFG, CS) before normalization. These high performance levels are particularly notable given that all the children in this large sample were referred for assessment due to learning or listening problems or both. A REA was found in all the tests for which differential testing of the left and right ears was conducted, irrespective of whether the tests were delivered monotically (monaurally) or dichotically. Although a REA was somewhat more pronounced for the dichotic than for the monotic tests, a LEA was nearly as common as a REA for Dichotic Digits, and neither the side nor size of the EA predicted the ultimate diagnosis well. Overall accuracy on each of the SCAN tests was a better predictor and, in this respect, the dichotic tests differentiated the children better than the AFG (speech-in-noise) or FW tests.
Cooccurrence of other learning problems among the assessed children was nearly universal, but neither the number nor the pattern of cooccurring problems was a predictor of APD diagnosis. Finally, the number of annual assessments decreased dramatically during the study period.
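The scaled-score cutoffs behind these criteria can be made concrete. SCAN-3 scaled scores have a population mean of 10 and SD of 3, so a score ≤3 is more than 2 SD below the mean ([Figure 2]). The sketch below is illustrative only: the function name and return labels are ours, and the "one-test" rule reflects the informal recalibration discussed in this study rather than the published AAA/ASHA wording verbatim.

```python
# Flag a child's SCAN-3 scaled scores against SD-based cutoffs.
# Assumes scaled scores with population mean 10 and SD 3, so a score
# <= 3 is more than 2 SD below the mean (hypothetical labels below).
MEAN, SD = 10, 3

def classify(scaled_scores):
    very_low = sum(1 for s in scaled_scores if s < MEAN - 2 * SD)  # scores <= 3
    if very_low >= 2:
        return "two tests >2 SD below mean (AAA/ASHA-style criterion)"
    if very_low == 1:
        return "one test >2 SD below mean (informal 'recalibrated' criterion)"
    return "no score >2 SD below mean"

print(classify([3, 8, 10, 12]))   # one very low score
print(classify([8, 9, 10, 12]))   # none
```

On these cutoffs, a composite-style profile with a single score of 3 would satisfy the looser one-test rule but not the two-test rule, matching the small proportion of children (1.2%) who met the stricter criterion.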

Patterns of Referral and Influence on Results

This study confirmed previous smaller scale ([Hind et al, 2011]) and anecdotal ([AAA, 2010]; [Campbell et al, 2012]) reports that school problems and listening difficulties are major reasons for referral for APD evaluation. For younger, preschool children, speech problems with or without listening problems are very prevalent ([Hind et al, 2011]) among audiology service clients but, in line with many services in the United States ([Emanuel et al, 2011]), evaluation for APD before 7 yr of age was relatively uncommon in our survey. Related to this, referral source changes with age, but among the mostly elementary/primary school students in this sample, primary care agencies were the major source, again consistent with other work ([Hind et al, 2011]). It may seem surprising that more children were not referred directly from schools, considering the nature of their difficulties, but insurers and legislation usually require referral through a physician. Provision of a screening test suitable for use by teachers and other caring professionals (e.g., listening questionnaire; [Barry et al, 2015]) could efficiently streamline this process.

This was a hospital-based study of a single audiology service. Although the number of children whose data were reviewed was large, the generalization of some of the findings may therefore be limited or questionable. For example, most of the children referred to this service had multiple learning issues. In some cases, children were referred from Behavioral Pediatrics to Audiology to “rule out” APD. Do children already diagnosed with other well-recognized learning problems (e.g., speech/language problems, ADHD, autism spectrum disorder [ASD]) derive additional care benefit from being diagnosed with a listening problem? Possibly not, and it is notable that the Diagnostic and Statistical Manual of Mental Disorders-5 criteria for ASD specify that “these disturbances [ASD symptoms] are not better explained by intellectual disability … or global developmental delay” (Diagnostic and Statistical Manual of Mental Disorders-5, 299.00 [F84.0]). Audiologists completing evaluations need to consider carefully whether that criterion also applies to the child under consideration for APD. On the other hand, about half the children in this sample did not have symptoms severe enough to lead to a positive diagnosis (AP weakness or disorder). It seems desirable to reduce this proportion, and this could be achieved by audiologists also using a screening questionnaire before APD assessment to establish whether a child has listening difficulties of concern to the caregiver. For those receiving a positive diagnosis of APD, the next big questions are how reliable such a diagnosis is and what to do about it.



Are Currently Used APD Tests Useful?

A surprising and concerning finding of this study was that many commonly used tests for APD had strong raw-score ceiling effects, even among a group of children referred for evaluation. Particularly striking examples were the PPT and the CS (right ear) test of the SCAN-3 ([Keith, 2009]). The possible reasons for these findings are that, first, the tested children were extraordinarily able; second, the tests were too easy; or, third, they were not administered or scored correctly. The third possibility seems unlikely, given the consistency across testers and the lack of outliers in the data. In fact, it is possible that the testers at CCHMC used greater than average care to optimize the engagement of the children and that this factor contributed to the high scores in this study. The first possibility is also unlikely, since these children were a socially heterogeneous group, most of whom had other identified learning difficulties. Concerning the ease of the tests, many are essentially unchanged since they were introduced to audiology 40 or more years ago. Furthermore, none of the tests is designed based upon a clear rationale for developmental validity for children, or for type or location of disorder within the auditory pathways. A possible exception is dichotic listening tests, which are at least loosely based upon auditory laterality, attention, and executive control functions. Apart from their interpretation, which will not be discussed in detail here, these tests were designed before digital electronics enabled adaptive stimulus presentation, with the consequent ability to track response performance efficiently, reliably, and according to the principles of signal detection theory ([Green and Swets, 1966]). In addition, the lack of normalization of many APD tests makes them almost useless. Contrast these properties with those of the audiogram, the bases of which rest on careful calibration, normalization, and adaptive response tracking ([Carhart and Jerger, 1959]).
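Adaptive tracking of the kind referred to here can be illustrated with a transformed up-down staircase. The sketch below implements a generic 2-down/1-up rule (which converges near 70.7% correct), not any specific clinical test; the function name, starting level, and response sequence are ours for illustration.

```python
# 2-down/1-up adaptive staircase: the level decreases (task gets harder)
# after two consecutive correct responses and increases after any error.
# Responses are a fixed illustrative sequence (1 = correct, 0 = error).

def run_staircase(responses, start=10, step=2):
    level, consecutive = start, 0
    presented = []                             # level used on each trial
    for correct in responses:
        presented.append(level)
        if correct:
            consecutive += 1
            if consecutive == 2:               # 2-down: make it harder
                level -= step
                consecutive = 0
        else:                                  # 1-up: make it easier
            level += step
            consecutive = 0
    return presented

levels = run_staircase([1, 1, 1, 1, 0, 1, 1, 0])
print(levels)  # [10, 10, 8, 8, 6, 8, 8, 6]
```

In practice the threshold would be estimated from the midpoints of the track's reversals; that averaging step is omitted here for brevity.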

Electrophysiology is not part of the current assessment for APD at CCHMC. Although advocated by some guidelines ([ASHA, 2005]; [AAA, 2010]), “there are no widely accepted criteria as to when ABRs should be included” and “even when … abnormalities are noted, these results … may be of limited use for the development of … intervention plans” ([AAA, 2010], pp. 21–22). Nevertheless, as detailed by [AAA (2010)], there are several promising research methods in the pipeline that may be especially valuable for early identification or prediction of neurological problems associated with APD.



SCAN

The SCAN suite of tests is widely used ([Emanuel et al, 2011]) and has been well normalized, with the possible exception of the FW test. We found here that very poor performance on at least one SCAN test (>2 SD below the mean) was strongly predictive of a Disorder diagnosis, although only about a third of those with a Disorder satisfied this criterion. On the composite SCAN score, those diagnosed usually scored more poorly than those not diagnosed. However, nonscaled SCAN performance was subject to ceiling effects, most strikingly for the AFG (either ear) and for CSs correctly identified in the right ear (CS Right). On the other hand, FW (either ear), CW, and CS Left differentiated well between children on those tasks. Overall, some components of the SCAN could be useful if the aim was to examine children’s ability to identify low-pass FW or to attend selectively to words presented to one ear.



Ear Advantage and Dichotic Testing

We found a REA on the “monaural” SCAN tests (AFG, FW); the proportion of children with a REA on the FW test almost matched that found on the dichotic CW and CS tests. These results were surprising, as the SCAN manual ([Keith, 2009]) and AAA guidelines ([AAA, 2010]) state that a typical child will have similar right-ear and left-ear scores on the monaural tests, and the speech-in-noise literature generally assumes (or has measured; [Smits et al, 2016]) similar right- and left-ear performance. We considered the possibility that this monaural REA is attributable to a training effect. However, the SCAN protocol for the AFG and FW tests specifies testing the right ear before the left ear, so a global training effect would more likely result in a LEA than a REA. Typically, a REA is explained by the greater role of the left cortical hemisphere in processing speech sounds, together with the dominant crossed projection from each ear to the contralateral hemisphere ([Kimura, 1961]; [Musiek and Weihing, 2011]). This explanation is supported by the current results, but it appears that dichotic presentation is not required to achieve a REA. Future research studies and clinical measures of speech perception should thus pay careful attention to and report the ear of sound delivery.
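The ear advantage as tabulated earlier is simply the signed right-minus-left score difference, with % REA computed over children showing any advantage. A minimal sketch (function and variable names are ours; the per-ear scores are invented for illustration):

```python
# Classify each child's ear advantage (EA) from per-ear scores and compute
# % REA as REA / (REA + LEA), excluding ties ("No EA"), matching the
# definition used in the EA table. Scores below are illustrative only.

def ear_advantage(right, left):
    ea = right - left
    if ea > 0:
        return "REA"
    if ea < 0:
        return "LEA"
    return "No EA"

def percent_rea(pairs):
    labels = [ear_advantage(r, l) for r, l in pairs]
    rea, lea = labels.count("REA"), labels.count("LEA")
    return 100.0 * rea / (rea + lea)

children = [(10, 8), (7, 9), (5, 5), (9, 4)]   # (right, left) score pairs
print(round(percent_rea(children)))  # 2 REA, 1 LEA, 1 No EA -> 67
```

Applied to the tabulated Undiagnosed AFG counts (186 REA, 133 LEA), the same formula reproduces the reported 58% REA.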

Although a REA is more commonly observed than a LEA, we report here that a substantial proportion of children (15–40%) had a LEA on the dichotic tests and, more importantly, the proportion of LEA and REA did not generally differ between diagnostic categories. In short, the REA does not appear to be used by audiologists as an informative clinical measure. This raises the question of what a dichotic test actually measures and why it has been considered an important component of an APD evaluation. Following [Broadbent (1952)] and [Kimura (1961)], [Musiek (1983)] systematically examined performance on a Dichotic Digits test for groups of adults with typical hearing, mild hearing loss, or brain lesions affecting the auditory system. As in the data we report here, the typically hearing group performed near maximum for presentation to either ear, as did those with hearing loss, while those with brain lesions were clearly impaired, particularly for digits presented to the left ear. Subsequently, it was found that children with a variety of forms of listening difficulties, but typical audiograms, also sometimes perform poorly on these tests. It was reasoned that these children, like the patients with brain lesions, also have central AP problems ([ASHA, 1996]; [2005]). However, as shown by [Penner et al (2009)], working memory load is also important in performing a Dichotic Digits task; the more complex and longer the dichotic stream, the greater the load. Recently, Cameron and colleagues ([Cameron, Glyde, Dillon, and Whitfield, 2016]; [Cameron, Glyde, Dillon, Whitfield, and Seymour, 2016]) have shown that performance on a Dichotic Digits task is highly correlated with performance on a diotic version of the same task, in adults and in both typically developing children and children who perform poorly on a traditional Dichotic Digits test.
Although some participants in those studies appeared to have a “true” dichotic deficit, it was concluded that further research using both diotic and dichotic versions of the same digit recall task was needed.

Based on the current results, which show no systematic relationship between a REA and a clinical diagnosis of APD, and the results of [Cameron, Glyde, Dillon, and Whitfield (2016)] and [Cameron, Glyde, Dillon, Whitfield, and Seymour (2016)], interpretation of dichotic test results does not appear to provide a clear diagnostic structure, other than indicating a general problem with selectively attending to and recalling simultaneously presented stimuli. Nevertheless, this finding may suggest a need for both further cognitive assessment and a relevant solution, for example, remediation through targeted cognitive training.



Cooccurrence with Other Learning Problems

Several studies have shown a close relationship between performance on clinical tests of AP and those used to diagnose other learning problems, including language and reading impairment ([Sharma et al, 2009]; [Dawes and Bishop, 2010]; [Ferguson et al, 2011]; [Miller and Wagstaff, 2011]). Nevertheless, concurrent conditions (e.g., inattention) have traditionally been viewed as a reason not to test for APD ([ASHA, 2005]). In the large sample examined here, we found that 90% of children undergoing APD assessment had at least one additional learning problem, and about two-thirds had more than one problem. This is perhaps unsurprising in a hospital service seeking multidisciplinary evaluation. However, a novel finding here was that the prevalence of these problems appeared to have little influence on the outcome of the APD assessment. While the findings of previous studies were interpreted as showing that referral route likely dictated the diagnostic outcome, this new finding could suggest that audiologists are making diagnostic decisions based on other or additional criteria. These criteria may include evidence not captured in this study, for example, the clinical history or other informal reports. Alternatively, those achieving very low scores on the SCAN and other tests may have specific deficits not captured by other (nonaudiological) testing. The results are consistent with an informal “recalibration” of test scores, whereby the criterion for a disorder becomes one test score >2 SD below the mean rather than the ASHA/AAA recommended two test scores. This strategy could also explain why the diagnostic label of “Weakness” was introduced, to cover performance that does not fulfill the ASHA/AAA criteria, but is still of concern to the attending audiologist. Finally, the normative data for each of the tests need to be reconsidered, in light of the high overall performance on nearly all of the tests, and on subtests of the SCAN, in particular the FW test.
Given that this was a large sample of children referred for listening and learning concerns, it was highly unexpected to see such strong performance across tests designed to detect these very issues.



Are Audiologists Withdrawing from APD Testing?

We found that the rate of conducting an assessment in this service has dropped markedly in recent years. We do not know whether this trend is occurring across other services; further research would seem warranted. However, many institutions or whole regions have never had an APD service, or have abandoned APD assessment until clearer evidence and evidence-based guidelines are in place. In any case, the question needs to be asked about management strategies for the children who have listening difficulties but who are not being served by current practice. A positive development is that quality, evidence-based research into APD has reached unprecedented levels and several international professional groups have recently published detailed guidelines concerning assessment and management of APD. This is driving discussion (e.g., Biannual International Conference at Audiology Now!) and a number of new proposals and actions ([Cameron et al, 2015]; [DeBonis, 2015]) for better assessment and management of APD.



Some Ways Forward

Several themes run through the findings of this study. First, audiologists need to collaborate with other specialists (speech-language pathologists, psychologists) to understand more fully the behaviors displayed by children presenting with listening difficulties. Simply calling that behavior APD is not enough. Second, the methods used to test for APD need to be tightened up in several respects, including recognition of the multifaceted nature of any form of hearing test; adoption of modern methods of adaptive testing and digital test delivery; a requirement for normative data; appropriate testing for young children; use of quality screening questionnaires; and development of objective (physiological) methods, as appropriate. Third, it is essential for clinicians and researchers to work together. As new understanding and methods become available, it will be necessary to sort out together what works and what does not work in the clinic, from both a theoretical and a practical perspective.



Abbreviations

ADHD: attention-deficit hyperactivity disorder
AFG: Auditory Figure-Ground
ANOVA: analysis of variance
AP: auditory processing
APD: auditory processing disorder
ASD: autism spectrum disorder
ASHA: American Speech-Language-Hearing Association
CAPE: central auditory processing evaluation
CCHMC: Cincinnati Children’s Hospital Medical Center
CS: Competing Sentences
CW: Competing Words
EA: ear advantage
FW: Filtered Words
LEA: left-ear advantage
PPT: Pitch Pattern Test
REA: right-ear advantage
SD: standard deviation



No conflict of interest has been declared by the author(s).

This study was supported in part by the National Institute on Deafness and Other Communication Disorders, grant 1R01DC014078, and by funds from Cincinnati Children’s Research Foundation.


  • REFERENCES

  • American Academy of Audiology (AAA) 2010. Guidelines for the diagnosis, treatment, and management of children and adults with central auditory processing disorder. Washington, DC: AAA;
  • American Speech-Language-Hearing Association (ASHA) 1996; Central auditory processing: Current status of research and implications for clinical practice. Am J Audiol 5 (01) 41-54
  • American Speech-Language-Hearing Association (ASHA) 2005. (Central) auditory processing disorders: the role of the audiologist. Rockville, MD: American Speech-Language-Hearing Association;
  • Barry JG, Tomlin D, Moore DR, Dillon H. 2015; Use of questionnaire-based measures in the assessment of listening difficulties in school-aged children. Ear Hear 36 (06) e300-e313
  • Broadbent DE. 1952; Listening to one of two synchronous messages. J Exp Psychol 44 (01) 51-55
  • British Society of Audiology (BSA) 2011 Position statement: auditory processing disorder (APD). www.thebsa.org.uk/wp-content/uploads/2011/04/OD104-39-Position-Statement-APD-2011-1.pdf . Accessed December 20, 2016
  • Cameron S, Glyde H, Dillon H, King A, Gillies K. 2015; Results from a national central auditory processing disorder service: a “real world” assessment of diagnostic practices and remediation for CAPD. Semin Hear 36: 216-236
  • Cameron S, Glyde H, Dillon H, Whitfield J. 2016; Investigating the Interaction between Dichotic Deficits and Cognitive Abilities Using the Dichotic Digits difference Test (DDdT) Part 2. J Am Acad Audiol 27 (06) 470-479
  • Cameron S, Glyde H, Dillon H, Whitfield J, Seymour J. 2016; The Dichotic Digits difference Test (DDdT): Development, Normative Data, and Test-Retest Reliability Studies Part 1. J Am Acad Audiol 27 (06) 458-469
  • Campbell NG, Bamiou DE, Sirimanna T. 2012; Current progress in auditory processing disorder. ENT Audiol News 21: 86-90
  • Carhart R, Jerger J. 1959; Preferred method for clinical determination of pure-tone thresholds. J Speech Hear Disord 24: 330-345
  • Dawes P, Bishop D. 2009; Auditory processing disorder in relation to developmental disorders of language, communication and attention: a review and critique. Int J Lang Commun Disord 44 (04) 440-465
  • Dawes P, Bishop DV. 2010; Psychometric profile of children with auditory processing disorder and children with dyslexia. Arch Dis Child 95 (06) 432-436
  • DeBonis DA. 2015; It is time to rethink central auditory processing disorder protocols for school-aged children. Am J Audiol 24 (02) 124-136
  • Emanuel DC, Ficca KN, Korczak P. 2011; Survey of the diagnosis and management of auditory processing disorder. Am J Audiol 20 (01) 48-60
  • Ferguson MA, Hall RL, Riley A, Moore DR. 2011; Communication, listening, cognitive and speech perception skills in children with auditory processing disorder (APD) or Specific Language Impairment (SLI). J Speech Lang Hear Res 54 (01) 211-227
  • Green DM, Swets JA. 1966. Signal Detection Theory and Psychophysics. New York, NY: Wiley;
  • Hind SE, Haines-Bazrafshan R, Benton CL, Brassington W, Towle B, Moore DR. 2011; Prevalence of clinical referrals having hearing thresholds within normal limits. Int J Audiol 50 (10) 708-716
  • Katz J. 1983. Phonemic synthesis. In: Lasky E, Katz J. Central Auditory Processing Disorders. Baltimore, MD: University Park Press;
  • Katz J, Harmon C. 1981. Phonemic synthesis: diagnostic and training program. In: Keith R. Central Auditory and Language Disorders in Children. Houston, TX: College Hill Press;
  • Katz J, Smith PS. 1991; The Staggered Spondaic Word Test: a ten-minute look at the central nervous system through the ears. Ann N Y Acad Sci 620: 233-251
  • Keith RW. 2009. SCAN–3:C Tests for Auditory Processing Disorders for Children. Bloomington, MN: Pearson;
  • Kimura D. 1961; Cerebral dominance and the perception of verbal stimuli. Can J Psychol 15: 166-171
  • Liberman MC. 2015; Hidden hearing loss. Sci Am 313 (02) 48-53
  • Loo JH, Bamiou DE, Rosen S. 2013; The impacts of language background and language-related disorders in auditory processing assessment. J Speech Lang Hear Res 56 (01) 1-12
  • McDermott EE, Smart JL, Boiano JA, Bragg LE, Colon TN, Hanson EM, Emanuel DC, Kelly AS. 2016; Assessing auditory processing abilities in typically developing school-aged children. J Am Acad Audiol 27 (02) 72-84
  • Miller CA, Wagstaff DA. 2011; Behavioral profiles associated with auditory processing disorder and specific language impairment. J Commun Disord 44 (06) 745-763
  • Moore DR. 2015; Sources of pathology underlying listening disorders in children. Int J Psychophysiol 95 (02) 125-134
  • Moore DR, Rosen S, Bamiou DE, Campbell NG, Sirimanna T. 2013; Evolving concepts of developmental auditory processing disorder (APD): a British Society of Audiology APD special interest group ‘white paper’. Int J Audiol 52 (01) 3-13
  • Musiek FE. 1983; Assessment of central auditory dysfunction: the dichotic digit test revisited. Ear Hear 4 (02) 79-83
  • Musiek FE. 1994; Frequency (pitch) and duration pattern tests. J Am Acad Audiol 5 (04) 265-268
  • Musiek FE, Weihing J. 2011; Perspectives on dichotic listening and the corpus callosum. Brain Cogn 76 (02) 225-232
  • Penner IK, Schläfli K, Opwis K, Hugdahl K. 2009; The role of working memory in dichotic-listening studies of auditory laterality. J Clin Exp Neuropsychol 31 (08) 959-966
  • Rosenthal R. 1994. Parametric measures of effect size. In: Cooper H, Hedges LV. The Handbook of Research Synthesis. New York, NY: Russell Sage Foundation;
  • Sharma M, Purdy SC, Kelly AS. 2009; Comorbidity of auditory processing, language, and reading disorders. J Speech Lang Hear Res 52 (03) 706-722
  • Smits C, Watson CS, Kidd GR, Moore DR, Goverts ST. 2016; A comparison between the Dutch and American-English digits-in-noise (DIN) tests in normal-hearing listeners. Int J Audiol 55 (06) 358-365
  • Vermiglio AJ. 2014; On the clinical entity in audiology: (central) auditory processing and speech recognition in noise disorders. J Am Acad Audiol 25 (09) 904-917
  • Wilson WJ, Arnott W. 2013; Using different criteria to diagnose (central) auditory processing disorder: how big a difference does it make?. J Speech Lang Hear Res 56 (01) 63-70

Corresponding author

David R. Moore
Communication Sciences Research Center, Cincinnati Children’s Hospital Medical Center
Cincinnati, OH 45229


Figure 1 Demographics: (A) age, (B) referral source, and (C) primary reported difficulty of the children whose data are reported in this study.
Figure 2 SCAN overview: (A) sample composite scores, derived from SCAN Manual ([Keith, 2009]) and based on equal weighting of the four individual test scaled scores ([Figure 3A]), rescaled to produce a population mean of 100 (SD = 15). The sample mean ± 2 SD are indicated by hatched vertical lines; (B) relation between SCAN composite score (mean + SD) and audiologists’ diagnosis. Number (n) of children receiving each diagnosis is indicated; (C) number of children in sample having at least one SCAN individual test (scaled) score ≤3 (>2 SD below mean); and (D) number of children in sample having at least two SCAN individual test (scaled) scores ≤7 and >3 (1–2 SD below mean).
Figure 3 Individual AP test scores: (A) SCAN tests, scaled scores showing number of children achieving each score and mean of each test (arrows); (B) PPT, unscaled scores. Note the extreme skewness toward high scores, indicative of a ceiling effect; (C) Dichotic Digits, unscaled score showing % digits correctly identified in each ear; (D) Dichotic Digits, EA showing number of digits correctly identified in the right ear relative to the left ear. Positive scores indicate a REA; and (E) Phonemic Synthesis, unscaled score indicating number of correct items.
Figure 4 SCAN individual tests. Unscaled scores for each ear. Note the REA for each test: (A) AFG, (B) FW, (C) CW (directed ear), and (D) CS.
Figure 5 Dichotic Digits EA across age: (A) number of children achieving each level of EA (see [Figure 3D]) and (B) percent children achieving each level of EA.
Figure 6 Audiologists’ diagnoses: (A) number of children receiving each diagnosis in whole sample; (B) summary number of diagnoses given by each of four principal audiologists; (C) yearly cases by number, 2009–2014, fitting each diagnostic category; and (D) yearly cases by percentage, 2009–2014, fitting each diagnostic category.
Figure 7 SCAN Individual tests by diagnostic category. Unscaled scores shown separately for each ear: Column (A) AFG, (B) FW, (C) CW, and (D) CS. Top row: Undiagnosed, middle row: Weakness, bottom row: Disorder.
Figure 8 Co-occurrence of audiologists’ diagnoses with other difficulties: (A) first listed (primary) difficulty—school problems, speech and language, AD(H)D, behavioral and emotional, cognitive delay, ASD, dyslexia; and (B) number of co-occurring difficulties excluding school problems.