J Am Acad Audiol 2017; 28(09): 799-809
DOI: 10.3766/jaaa.16151
Articles

Relationship of Grammatical Context on Children’s Recognition of s/z-Inflected Words

Meredith Spratford*, Hannah Hodson McLean†, Ryan McCreery*

*   Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
†   Eastern Virginia Medical School, Norfolk, VA

Corresponding author

Meredith Spratford
Boys Town National Research Hospital
Omaha, NE 68131

Publication History

Publication Date:
26 June 2020 (online)

 

Abstract

Background:

Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.

Purpose:

To determine if differences exist in recognition of s/z-inflected monosyllabic words for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimuli context (presented in isolation versus embedded medially within a sentence that has low semantic and syntactic predictability) and varying levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH and 8-kHz low-pass filtered for CHH).

Research Design:

A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.

Study Sample:

Thirty-five children, aged 5–12 yrs, were recruited to participate in the study; 24 CNH and 11 CHH (bilateral mild to severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English.

Data Collection and Analysis:

Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady state, speech-shaped noise. Real-ear probe microphone measures of HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.

Results:

When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition compared with sentence-embedded. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition compared with the isolated condition. CHH whose HAs had greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences.

Conclusions:

High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children’s use of high-frequency audibility in a manner that approximates how they learn language.



INTRODUCTION

Audibility of high-frequency phonemes supports phonological, morphosyntactic, and lexical development in young children with normal hearing and children with hearing loss ([Stelmachowicz et al, 2001]; [2004]; [Moeller et al, 2007]; [Pittman, 2008]; [Koehlinger et al, 2013]; [Tomblin et al, 2015]). The degree to which listeners rely on high-frequency audibility to detect fricatives depends on the amount of linguistic information present in the speech recognition materials, with listeners showing a stronger reliance on high-frequency audibility for shorter stimuli without linguistic context versus running speech or continuous discourse ([Mayo and Turk, 2004]; [Silberer et al, 2015]). Compared with children with normal hearing (CNH), children who are hard of hearing (CHH) may have reduced access to high-frequency audibility because of their significant high-frequency hearing loss and/or the limited high-frequency bandwidth through their hearing aids (HAs) leading to delays in acquisition of fricative phonemes and subsequent morphological development ([Moeller et al, 2007]; [Koehlinger et al, 2015]; [Tomblin et al, 2015]). Therefore, clinicians need methods to assess the impact of variability in high-frequency bandwidth on speech recognition in children who wear HAs. In combination with electroacoustic measures of speech bandwidth, children’s behavioral recognition of plural versus singular nouns has been used clinically to determine whether high-frequency phonemes are accessible and whether frequency-lowering strategies should be activated in HAs ([Glista et al, 2009]; [Wolfe et al, 2010]; [Glista and Scollie, 2012]). Assessing recognition of s/z-inflected words in isolation without surrounding linguistic context may underestimate word+morpheme recognition in sentences or running speech that contain additional semantic and syntactic cues that support recognition. However, young children may not yet have the linguistic competence to take advantage of these linguistic cues. Assessing recognition with sentences where the inflected target words are not predictable from either semantic or syntactic sentence cues may reveal distinctive perceptual strategies CHH use to recognize fricative inflections in an acoustic context that more closely represents how they learn language. However, speech recognition materials that contain sentence-level acoustic-phonetic cues, such as coarticulation cues between words, but control for semantic and grammatical cues, are not available clinically. The primary goal of this study is to examine the relationship of high-frequency audibility and word+morpheme recognition for monosyllabic words presented in isolation compared with inflected words embedded in low-predictability sentences that control for semantic and syntactic knowledge for CNH and CHH who wear HAs. A secondary goal of this study is to explore how electroacoustic measures of aided access to speech relate to recognition of nouns and verbs inflected with /s/ or /z/ for CHH.

Listeners’ reliance on high-frequency acoustic cues for speech recognition depends on several different factors, including linguistic context and age. Results from [Studebaker and Sherbecoe (1993)] indicated that listeners depend on different frequency ranges to understand stimuli with different amounts of linguistic context. Without linguistic context, importance weights are more evenly distributed across frequency. When stimuli contain more linguistic context, listeners are less reliant on information in the highest and lowest frequencies. In a series of studies, Nittrouer and colleagues used stimuli with low linguistic context containing fricatives to analyze the relative weighting children assigned to acoustic characteristics of the stimuli ([Nittrouer, 1996]; [2002]; [Nittrouer and Miller, 1997]). School-age (7-yr-old) children weighted the high-frequency spectra of the fricative noise more heavily than the following vocalic formant transitions, supporting a reliance on high-frequency audibility for fricative recognition within short, nonsense-syllable stimuli. Using low-pass filtered stimuli varying in linguistic context, [Silberer et al (2015)] documented that CNH, aged 7–10 yrs, also had increased reliance on high-frequency audibility for short, real words presented in isolation without grammatical context (Maryland CNC Test and the University of Western Ontario Plurals Test) compared with sentence-level stimuli that contained grammatical context (Multimodal Lexical Sentence Test). The effects of high-frequency audibility on word+morpheme recognition may not be as evident within linguistically dense sentences as in isolated words or nonsense syllables because the semantic and/or syntactic context (e.g., subject–verb tense agreement) available in running speech and continuous discourse supports recognition of s/z-inflected words. When children are developing linguistic knowledge, however, they may be less efficient at using semantic and syntactic cues in sentences to predict s/z inflection. Instead, children may rely more on accessibility of high-frequency acoustic-phonetic information. The reliance on high-frequency audibility required to recognize s/z-inflected words in low-predictability sentences is unknown. Using sentences without linguistic cues to plurality as stimuli may help isolate the impact of high-frequency audibility on the recognition of s/z-inflected target words in a sentence context. We created sentences with singular and plural nouns as direct objects within a subject–verb–direct object framework so that the plurality of the target word was not cued semantically or syntactically, while retaining many of the acoustic cues that are present in sentences but not in isolated words. Without grammatical cues and semantic predictability within a sentence, having access to acoustic-phonetic cues may be vital to detecting s/z inflections on words, especially when they occur in positions of low salience.

Lexical category (noun versus verb) and sentence position are two factors that have the potential to affect the frequency of occurrence and salience of s/z-inflected words in spoken English. Within running speech, [Hsieh, Leonard, and Swanson (1999)] found that plural nouns occurred more frequently overall compared with third person singular verbs. In addition, plural morphemes had longer durations than third person singular tense markers. The additional fricative duration and the increased frequency within the input for inflected nouns may make the plural noun marker more consistently detectable than the third person singular verb marker. The position of the s/z-inflected word within a sentence, regardless of lexical category, may also impact its salience. Sentence-medial morphological markers (e.g., third person singular “He walks to the store.”) are less perceptually salient to typically developing children than sentence-final markers (e.g., “She has two cats.”) because of lower intensity and shorter durational cues, in addition to coarticulation effects with surrounding words ([Hsieh et al, 1999]; [Sundara et al, 2011]). High-frequency audibility may be especially valuable for detecting s/z inflections that occur in positions of low salience, whether because of lexical category or sentence position. Research is needed to examine the effects of varying high-frequency audibility on stimuli with low salience. To examine the relationship of audibility and word+morpheme recognition across lexical word type, we used s/z-inflected monosyllabic nouns and verbs presented in isolation. To examine recognition within a sentence-medial position, the target word was embedded sentence-medially within the aforementioned low-predictability sentences created for this study. We expected word+morpheme recognition to be better for nouns than for verbs and audibility to be particularly important for recognition of word+morpheme targets embedded within low-predictability sentences.

Beyond linguistic factors impacting salience of inflectional morphemes, children who have hearing loss may have variable high-frequency audibility through their HAs, further impacting the reliability or accessibility of fricative inflections. Findings of poor word-final s/z detection for CHH ([Stelmachowicz et al, 2002]; [Glista and Scollie, 2012]) may be explained, in part, by the limited saliency of high-frequency fricatives caused by restricted bandwidth through HAs. [Stelmachowicz et al (2001)] found that children’s perception of /s/ improved, especially for a child and female talker, with bandwidth extending up to 9 kHz. However, this amount of high-frequency access necessary for detection of plural endings often is not achievable for CHH because of restricted bandwidth due to sloping configurations of hearing loss, gain constraints of the HAs ([Kimlinger et al, 2015]) or large deviations from prescriptive targets resulting in poorer aided audibility for speech ([McCreery et al, 2013]). Because acoustic-phonetic information for recognition of s/z inflections may be unreliable or inaccessible through HAs, CHH may have an increased reliance on grammatical cues for recognition of fricative inflections. Using low-predictability sentence materials controls linguistic knowledge by limiting the ability of a listener to use semantic and grammatical cues to recognize s/z inflections, which in turn emphasizes the use of the available acoustic-phonetic cues for recognition.

One way clinicians estimate the availability of acoustic-phonetic cues for CHH is with electroacoustic measures of aided audibility of speech, such as the Speech Intelligibility Index (SII; [ANSI, 1997]). However, research on the relationship between perception of s/z inflections and electroacoustic measures of high-frequency audibility, including the maximum audible frequency, ANSI frequency range, or short-term audibility, has been limited ([Pittman and Stelmachowicz, 2000]). It is important to understand if and how different electroacoustic measures of HAs relate to children’s recognition of inflections realized as fricatives. If relationships are strong, audiologists could use these measures to more accurately estimate access to high-frequency speech information for children who cannot perform or are too young for typical speech recognition tasks. Even when children’s HAs are optimally fitted to prescriptive targets, electroacoustic measures used in HA verification may not be accurate indicators of access to high-frequency speech information. The SII is a measure of audibility averaged across all frequency-importance bands within the long-term average speech spectrum (LTASS), with no particular focus on the high-frequency bands. The lack of high-frequency emphasis within the broadband SII encouraged [Pittman and Stelmachowicz (2000)] to examine a measure of short-term audibility centered around high-frequency spectral content of fricatives with listeners’ recognition of fricative segments within vowel–consonant nonwords. Both normal hearing (NH) and hard-of-hearing (HH) children benefited from increased short-term audibility of the fricative to detect /s/ within the vowel–consonant syllable context. Although this research is informative in that electroacoustic measures of high-frequency audibility relate to fricative detection ability, short-term audibility lacks clinical utility because of the complexity of measurement and calculation. The closest clinical measure that parallels the concept of high-frequency audibility is the sensation level of speech. The sensation level of speech is the amount (in dB) by which the aided output level of speech exceeds the entered hearing threshold at a given frequency (Hz). The sensation level of speech at 6 kHz has value in predicting production of plural and other high-frequency morphemes in the conversational speech of young CHH ([Koehlinger et al, 2015]). However, it is unknown how this measure may relate to recognition.

The ANSI ([ANSI, 2003]) frequency range is another clinical measure that has been used in the past to predict high-frequency fricative detection ([Stelmachowicz et al, 2001]). The ANSI frequency range is measured using a swept pure-tone stimulus, can be obtained quickly in the clinic, and provides the effective bandwidth of the hearing instrument. However, current HA verification protocols do not rely on the ANSI frequency range when estimating access to speech ([Bagatto et al, 2011]; [AAA, 2013]). The ANSI frequency range measurement underestimates high-frequency speech access compared with the maximum audible frequency, as reported by [Kimlinger et al (2015)]. The maximum audible frequency is the highest frequency at which the average level of the aided LTASS crosses the audiogram/entered thresholds, and it is obtainable during the SII measure. Clinical practice has moved toward using the maximum audible frequency to represent aided access to high-frequency phonemes, and this measure has the potential to estimate word+morpheme recognition. The degree to which general measures of audibility, such as the aided SII, and specific measures of high-frequency accessibility, such as the maximum audible frequency and the sensation level of speech at 6 kHz, relate to children’s recognition of high-frequency fricatives with and without linguistic context is unknown.
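For illustration, the sketch below shows how a maximum audible frequency and a 6-kHz sensation level could be derived from aided verification data. The frequency grid, LTASS levels, and thresholds are hypothetical values chosen for the example, not data from this study or from any specific verification system.

```python
import numpy as np

# Hypothetical verification data in ear-canal dB SPL; not values from this study.
freqs_hz    = np.array([250, 500, 1000, 2000, 3000, 4000, 6000, 8000])
aided_ltass = np.array([55, 58, 56, 53, 51, 52, 57, 50])   # aided LTASS, 65 dB SPL input
thresholds  = np.array([25, 30, 35, 40, 45, 50, 55, 60])   # entered thresholds (SPL)

# Maximum audible frequency: highest frequency at which the aided LTASS
# is at or above the entered threshold.
audible = aided_ltass >= thresholds
max_audible_freq = freqs_hz[audible].max() if audible.any() else None

# Sensation level of speech at 6 kHz: aided LTASS level minus threshold, in dB.
idx_6k = np.where(freqs_hz == 6000)[0][0]
sl_6k = aided_ltass[idx_6k] - thresholds[idx_6k]

print(f"Maximum audible frequency: {max_audible_freq} Hz")  # 6000 Hz with these values
print(f"Sensation level at 6 kHz: {sl_6k} dB")              # 2 dB with these values
```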

CHH have been found to be delayed in their use of noun and verb morphemes, especially those with limited perceptual salience, such as –s/z ([Moeller et al, 2010]; [Koehlinger et al, 2015]). This provides additional motivation for testing morpheme perception behaviorally and for using electroacoustic tests that measure high-frequency access. The goal of this project was to examine the effect of high-frequency audibility, across varying grammatical contexts, on children’s recognition of nouns and verbs inflected with s/z in noise. Specifically, we were interested in determining the influence of isolated versus sentence-embedded presentation and the effects of varying access to high-frequency speech information for CNH and CHH. Three research questions were explored, and the following predictions were made:

  • What is the effect of grammatical context and lexical category on recognition of inflected words in degraded listening conditions for NH listeners? When degraded by noise and low-pass filtering, recognition of s/z-inflected words in isolation will be more accurate than for words presented sentence-medially within low-predictability sentences. Recognition accuracy of s/z-inflected nouns will be higher than verbs because of the increased exposure to plural noun inflections (longer duration of s/z and more frequent in the input) and the high likelihood of children hearing inflected verbs within positions of low salience within the input during language learning. This result is predicted to be exacerbated by hearing loss.

  • How does the recognition of words inflected with fricatives of CNH compare with CHH? CHH will show decreased recognition of inflected words compared with CNH for isolated and embedded conditions because of restricted high-frequency audibility through their HAs.

  • Do electroacoustic measures of HAs relate to recognition accuracy for CHH? Within the HH group, children with less high-frequency audibility will have poorer recognition than children with higher audibility. We hypothesize an interaction between audibility and stimulus type, in that children with better audibility will have better recognition scores for isolated words compared with words embedded within sentences. We predict that measures of high-frequency audibility, such as the 6-kHz sensation level of speech and the maximum audible frequency, will be more accurate predictors than overall audibility as measured by the SII.



METHODS

Participants

Thirty-five children between the ages of 5 and 12 yrs participated in the study (n = 17 male); 24 CNH and 11 CHH who wore HAs. CHH had mild to severe bilateral, sensorineural hearing loss. All participants were native speakers of English. Participants were recruited from the Human Research Subjects Core Database at Boys Town National Research Hospital (P30DC004662).



Speech Perception Stimuli

Target words were singular and plural nouns and first and third person singular verbs within the lexicon of a first grade child according to an online lexical calculator ([Storkel and Hoover, 2010]). Two hundred nouns and 200 verbs were used as target words within the monosyllabic word lists. Lists of 120 low-predictability sentences were created by inserting a target noun as the direct object in a sentence constructed to be syntactically correct but have low syntactic and semantic content ([Bell et al, 1992]), with a subject–verb–direct object–prepositional phrase syntactic structure (e.g., “They take the cat between the homes.” and “They tie the lamps inside the barn.”). To reduce semantic cues and maintain low predictability of the presence of /s/ and /z/, the tense of the verb preceding the direct object noun phrase did not indicate plurality of the object. Prepositional phrases following the object varied in whether they began with a vowel or consonant. Verbs were not included in the sentence-embedded condition because syntactic rules would have required the verb to be marked for tense, which would have negated the low-predictability context. The stimuli were recorded by a young adult female talker at a sampling rate of 22050 Hz using a Shure BETA 53 head-worn boom microphone (Shure Incorporated, Niles, IL) and custom recording software. Two examiners listened to the stimuli and chose the best exemplar of each recording. The recordings were edited to remove excess silence surrounding the stimuli, and the root-mean-square level was equalized in Praat ([Boersma and Weenink, 2001]). Speech stimuli were band-pass filtered between 88 Hz and 4 kHz (4-kHz condition) or between 88 Hz and 8 kHz (8-kHz condition) using a Butterworth filter in MATLAB (MathWorks, Natick, MA). Spectrally matched steady-state masking noise was created in MATLAB by taking a Fast Fourier transform of a concatenated sound file containing all of the stimuli, randomizing the phase of the signal at each sample point, and then taking the inverse Fast Fourier transform. Stimuli were presented at 65 dB SPL at a +10 dB signal-to-noise ratio to prevent ceiling effects. A personal computer with a MOTU Track 16 USB Audio Interface (MOTU, Cambridge, MA) was used to present the speech stimuli.
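The filtering and masker-generation steps described above can be approximated with standard signal-processing tools. The following is a minimal Python sketch: NumPy/SciPy and the soundfile package stand in for the MATLAB routines the authors used, and the file names and fourth-order filter are assumptions for illustration, not details reported in the study.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

fs = 22050  # sampling rate of the recordings

# Band-pass filter a stimulus (88 Hz-8 kHz shown; use high_hz=4000 for the 4-kHz condition).
# A 4th-order Butterworth is assumed here; the study does not report the filter order.
def bandpass(signal, low_hz=88, high_hz=8000, order=4):
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Spectrally matched steady-state noise: keep the magnitude spectrum of the
# concatenated stimuli, randomize the phase, and invert the FFT.
def speech_shaped_noise(concatenated_stimuli, rng=np.random.default_rng(0)):
    spectrum = np.fft.rfft(concatenated_stimuli)
    random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, spectrum.shape))
    return np.fft.irfft(np.abs(spectrum) * random_phase, n=len(concatenated_stimuli))

# Mix at +10 dB SNR by scaling the noise relative to the speech RMS.
def mix_at_snr(speech, noise, snr_db=10):
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise[: len(speech)] * (rms(speech) / rms(noise)) / (10 ** (snr_db / 20))
    return speech + noise

speech, _ = sf.read("target_sentence.wav")             # hypothetical file names
all_stimuli, _ = sf.read("all_stimuli_concatenated.wav")
mixed = mix_at_snr(bandpass(speech), speech_shaped_noise(bandpass(all_stimuli)))
```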



Procedure

Measures of hearing ability were completed for all participants. HA performance was evaluated for CHH. Recognition of nouns and verbs with s/z morphological inflections was assessed using open-set speech perception tasks with monosyllabic target words presented in isolation or sentence-medially. Speech perception was assessed in noise to avoid ceiling effects. To control for possible articulation confounds in interpreting recognition of s/z, an articulation screening test was given. Participants were asked to label pictures of objects containing word-final consonant clusters ending in /s/ and /z/ to ensure that their productions were intelligible for scoring purposes. All children regularly produced the word-final s/z. To screen for poor working memory that might impair the ability to complete sentence recall tasks, participants completed a measure of visuospatial working memory (Odd One Out subtest of the Automated Working Memory Assessment, AWMA; [Alloway, 2007]). Participants’ standard scores were no more than 1.5 standard deviations below the mean of the test’s normative sample (group mean = 111.5, SD = 14.7). With the exception of the HA measures, all testing was completed in the sound field in a sound-treated audiometric test booth. Participants were compensated $15/h for their participation.



Measures

Hearing Thresholds

Audiometric thresholds were measured using a GSI-61 audiometer in a sound-treated audiometric booth. CNH completed a hearing screening and had thresholds no poorer than 15 dB HL at octave test frequencies 250–8000 Hz. An audiogram was completed for CHH unless a recent clinical audiogram was available. See [Figure 1] for the average and range of thresholds for the HH listeners. The mean better-ear pure-tone average for the CHH was 42 dB HL (range: 28.8–78.8 dB HL).
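For reference, a better-ear pure-tone average can be computed directly from the audiogram. The brief sketch below assumes the conventional three-frequency average (500, 1000, and 2000 Hz), which the paper does not specify, and uses made-up thresholds rather than participant data.

```python
import numpy as np

# Hypothetical audiometric thresholds in dB HL; not data from this study.
pta_freqs = [500, 1000, 2000]                  # assumed PTA frequencies
right = {500: 40, 1000: 45, 2000: 50}
left  = {500: 35, 1000: 40, 2000: 55}

pta_right = np.mean([right[f] for f in pta_freqs])   # 45.0 dB HL
pta_left  = np.mean([left[f] for f in pta_freqs])    # 43.3 dB HL
better_ear_pta = min(pta_right, pta_left)            # 43.3 dB HL (better ear = lower PTA)
```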

Figure 1 Average thresholds (O = right ear; X = left ear) and overall range (hatched area) of audiograms for HH participants (n = 11). (This figure appears in color in the online version of this article.)


Amplification

CHH wore their own HAs at user settings throughout testing. Aided audibility (SII; [ANSI, 1997]) of their HAs was measured with speechmapping software using real-ear verification on an Audioscan RM500SL. At a conversational input level (65 dB SPL), the better-ear Speech Intelligibility Index (BESII) was defined as the higher of the aided SII values from the right and left HAs. The average BESII across all HA users was 82 (range 64–95). All BESII values were within the expected range of audibility based on the pure-tone average for that ear ([Bagatto et al, 2011]). The maximum audible frequency of the LTASS signal at an input level of 65 dB SPL ranged from 5000 to 8000 Hz with a mean of 7100 Hz. The better-ear sensation level of speech at 6 kHz ranged from 0 to 32 dB with a mean of 12.4 dB.



Speech Perception

Recognition of words with s/z inflections in noise was assessed in an open-set task using monosyllabic nouns and verbs as the target words. Monosyllabic word lists contained 50 noun and 50 verb target words, and sentence lists contained 60 noun target words embedded in a sentence-medial position. Each list had equal numbers of inflected and noninflected words, and among the inflected words there were equal numbers of voiced and voiceless markers. Participants were asked to listen to the words and sentences in noise and repeat exactly what they heard, with no explicit instruction to listen for an s/z inflection. Word+morpheme recognition was scored online by an examiner seated in close proximity to the child. CHH completed the 8-kHz filter condition while wearing their personal HAs, whereas CNH completed the speech perception tasks for both the 4- and 8-kHz filter conditions. Lists and filter conditions, if applicable, were presented in randomized order for all participants.



RESULTS

To examine the effect of linguistic context on children’s recognition of s/z-inflected words, we explored the following research questions:

Research Question #1: What is the Effect of Isolated Versus Embedded Presentation on Recognition of Words Inflected with s/z in Degraded Listening Conditions for CNH?

To assess the effects of high-frequency audibility on recognition of inflected words in isolation compared with words embedded in sentences for CNH, a repeated-measures ANOVA was used with frequency (8 kHz, 4 kHz) and context (isolated, embedded) as within-subjects factors. The main effect of frequency was significant [F(1,22) = 1289.17, p < 0.001, ηp² = 0.983], with performance in the 8-kHz condition (91.5%) higher than in the 4-kHz condition (23.5%). The main effect of context was not significant [F(1,22) = 2.25, p = 0.148, ηp² = 0.093], with no overall difference in word+morpheme recognition between words in isolation (55.7%) and words embedded in sentences (59.2%). However, the two-way interaction between frequency and context was significant [F(1,22) = 77.4, p < 0.001, ηp² = 0.779], indicating that the effect of context differed between the 8-kHz and 4-kHz conditions. To assess the pattern of word+morpheme recognition across frequency and context, post hoc comparisons were conducted using Tukey’s honestly significant difference test, with a minimum significant mean difference of 5.8. For the 8-kHz condition, recognition of isolated target words was higher (97.1%) than for words embedded in sentences (86%). For the 4-kHz condition, recognition of isolated target words was significantly lower (14.4%) than for words embedded in sentences (32.4%).
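An analysis of this form can be reproduced with standard statistical tooling. The sketch below, using statsmodels on a hypothetical long-format data set, illustrates a two-factor repeated-measures ANOVA with bandwidth and context as within-subjects factors; the data are simulated and the authors' analysis software is not stated in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
subjects = [f"s{i:02d}" for i in range(1, 24)]  # 23 listeners, consistent with df = (1, 22)

# Hypothetical long-format scores: one row per subject x bandwidth x context cell.
rows = []
for subj in subjects:
    for bandwidth in ["8kHz", "4kHz"]:
        for context in ["isolated", "embedded"]:
            base = 90 if bandwidth == "8kHz" else 25
            rows.append({"subject": subj, "bandwidth": bandwidth, "context": context,
                         "pct_correct": base + rng.normal(0, 5)})
data = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: bandwidth x context, both within subjects.
anova = AnovaRM(data, depvar="pct_correct", subject="subject",
                within=["bandwidth", "context"]).fit()
print(anova)  # F values for bandwidth, context, and their interaction
```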



Research Question #1a: Does Lexical Category (noun, verb) Impact Recognition Accuracy for s/z Inflections?

To assess the effect of high-frequency audibility on word+morpheme recognition for isolated nouns compared with isolated verbs in listeners with NH, a repeated-measures ANOVA was used with frequency (8 kHz, 4 kHz) and grammatical word type (noun, verb) as within-subjects factors. As before, the main effect of frequency was significant [F(1,20) = 790.4, p < 0.001, ηp² = 0.975], with performance in the 8-kHz condition higher than in the 4-kHz condition. The main effect of word type [F(1,20) = 0.399, p = 0.535, ηp² = 0.020] and the two-way interaction between frequency and word type [F(1,20) = 0.62, p = 0.441, ηp² = 0.030] were not significant, indicating that there was no difference in s/z recognition between nouns and verbs and no differential effect for nouns versus verbs when the bandwidth was reduced from 8 to 4 kHz. When bandwidth was restricted, recognition of inflected words in isolation (both nouns and verbs) was negatively affected to a much larger degree than recognition of words embedded in sentences.



Research Question #2: How does the Word+Morpheme Recognition of CNH Compare with that of CHH?

Recognition accuracy for each condition is plotted in [Figure 2]. To compare performance between CNH and CHH on the word+morpheme recognition task, a mixed-model ANOVA was completed with context (isolated versus embedded) as a within-subjects factor and hearing status (NH, HH) as a between-subjects factor. Only the 8-kHz conditions were used for this comparison because the 4-kHz conditions were not completed with CHH. The main effect of context for the 8-kHz condition was significant [F(1,33) = 10.81, p = 0.002, ηp² = 0.247], with word+morpheme recognition higher for isolated targets (88.2%) than for embedded targets (81.4%). The main effect of hearing status was significant [F(1,33) = 9.74, p = 0.004, ηp² = 0.227], with higher performance for the NH group than the HH group. The interaction between group and context was not significant [F(1,33) = 2.96, p = 0.095, ηp² = 0.082], indicating that the pattern of word+morpheme recognition for isolated and embedded words was the same for the NH and HH groups.

Figure 2 Percent correct plural recognition for CNH (solid fill) and CHH (hatched fill) as a function of context (isolated, embedded) and cutoff frequency (8 kHz, 4 kHz). The boxplots represent the interquartile range and the error bars represent the 10th–90th percentiles. The filled circles are the mean values and the line in each box represents the median. (This figure appears in color in the online version of this article.)


Research Question #3: Do Electroacoustic Measures of Hearing Aids Relate to Word+Morpheme Recognition Accuracy for CHH?

To examine the factors that contributed to word+morpheme recognition in the children who were HH, several measures of audibility were examined as predictors of morpheme recognition for isolated words and words embedded in sentences. The BESII was not a significant predictor of s/z inflection recognition for isolated words (r = 0.120, p = 0.724) or words in sentences (r = 0.243, p = 0.472). The better-ear sensation level of the LTASS at 6 kHz was also not a significant predictor of recognition for isolated words (r = 0.237, p = 0.342) or words in sentences (r = 0.387, p = 0.269). The better-ear maximum audible frequency was a significant predictor of recognition of plural words in sentences (r = 0.706, p = 0.015), but not for s/z-inflected words in isolation (r = 0.201, p = 0.550). [Figure 3] displays plural recognition in the sentence-embedded condition for CHH as a function of their aided maximum audible frequency and, for reference, as a function of low-pass filter condition for CNH.
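These are ordinary bivariate Pearson correlations computed across the CHH. A minimal sketch with hypothetical values (not the study's data) is shown below; SciPy is assumed as the tooling, which the paper does not specify.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical values for 11 CHH; not data from this study.
max_audible_freq_hz = np.array([5000, 5500, 6000, 6300, 6700, 7100,
                                7400, 7700, 7900, 8000, 8000])
embedded_pct_correct = np.array([55, 60, 58, 70, 66, 75, 72, 80, 78, 85, 83])

# Pearson correlation between aided bandwidth and sentence-embedded recognition.
r, p = pearsonr(max_audible_freq_hz, embedded_pct_correct)
print(f"r = {r:.3f}, p = {p:.3f}")  # a positive r here mirrors the reported relationship
```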

Figure 3 Individual percent correct plural recognition in the embedded sentence environment as a function of cutoff frequency (8 kHz, 4 kHz) for CNH (filled blue circles) and maximum audible frequency for CHH (filled green circles). Data points at 4000 and 8000 Hz were horizontally jittered to show all participants. (This figure appears in color in the online version of this article.)


DISCUSSION

The goal of this project was to examine the influence of linguistic context on children’s ability to recognize s/z-inflected words in noise with varying access to high-frequency speech information. Word+morpheme recognition was also compared across lexical word type and between isolated and embedded presentation conditions for both CNH and CHH. Electroacoustic measures of high-frequency speech access through HAs were compared with the word+morpheme recognition for CHH. In conditions of optimal audibility, recognition for both CNH and CHH was lower for targets in the sentence-embedded position versus isolated targets, corresponding to the lower saliency of s/z inflections when in sentence-medial positions. In conditions of restricted high-frequency access, CNH had better word+morpheme accuracy for embedded targets compared with isolated, indicating benefit from coarticulation cues found in sentences that do not exist in isolated words. Measuring recognition of embedded targets in low-predictability sentences allows for the examination of the acoustic-phonetic effects of sentence-medial positioning on plural morpheme recognition while controlling for language knowledge. The current results indicate utility in using the electroacoustic maximum audible frequency measurement for predicting word+morpheme target recognition in sentence contexts for CHH.

Recognition for isolated and embedded words was compared between low-pass filter conditions for CNH. The CNH in this study had better recognition of words inflected for plurality and verb tense when more high-frequency speech information was available with higher bandwidth, consistent with past studies on fricative perception ([Kortekaas and Stelmachowicz, 2000]; [Stelmachowicz et al, 2001]; [2002]; [Leibold et al, 2014]). Counter to our prediction, there was no significant difference in word+morpheme recognition between isolated nouns and verbs. As predicted, when high-frequency information was removed, recognition of noun and verb inflections for the isolated target words decreased further than for the embedded plural word targets. The results of this study support the concept that listeners rely more strongly on high-frequency speech information to recognize s/z inflections in shorter, simpler stimuli than in sentences. In conditions where high-frequency speech information was limited, CNH benefited from using acoustic cues present within a sentence framework to detect plurality. [Silberer et al (2015)] also found an improvement in recognition within sentences compared with isolated words when high-frequency speech information was restricted. However, the sentence stimuli used in the Silberer study, the Multimodal Lexical Sentence Test, contained syntactic cues that could impact recognition of high-frequency morphemes based on the child’s ability to interpret and use those cues to predict plurality and/or tense (e.g., “I saw seven eggs in the street.” and “The camp needs a new barn.”). Because the sentence stimuli in this study were designed to control for the contributions of language development on recognition by limiting both the semantic and syntactic context that could influence predictability of the target word plurality, it is likely that the benefit children received in the sentence-embedded condition was because of acoustic—and not linguistic—effects. If recognition in noise were tested only using words in isolation or in conditions of optimal audibility, children’s ability to benefit from acoustic cues present in sentence contexts in degraded listening conditions would not be realized. This could result in an underestimation of children’s speech recognition in settings where additional sentence-level acoustic cues exist, but access to high-frequency speech information is restricted.

In situations where young children are unable to accurately repeat sentences, measures using monosyllabic words in isolation, such as those in the University of Western Ontario Plurals Test, would still provide information on access to high-frequency phonemes. Clinically, plural recognition is an outcome used to judge whether nonlinear frequency-lowering strategies should be implemented in pediatric and adult amplification, and it is also used as a behavioral verification measure of those frequency-lowering strategies ([Glista and Scollie, 2012]). Frequency-lowering strategies are intended to improve audibility of high-frequency phonemes over conventional processing by relocating otherwise inaudible high-frequency spectral content into a lower, more audible frequency region. Frequency lowering, by definition, introduces spectral distortion to the original speech signal so it should be employed only when high-frequency audibility cannot be achieved with conventional processing. Underestimating children’s word+morpheme recognition by using a speech recognition test with simple stimuli could result in overprescription of settings like frequency lowering, introducing unnecessary distortion, when children are otherwise able to make use of cues present in conversational speech. Interpreting recognition of s/z-inflected words within a sentence context may allow clinicians to more prudently set frequency lowering and reduce unnecessary distortion. The effects of frequency lowering on the recognition of isolated versus embedded s/z-inflected words should be directly evaluated in future studies.

The second research question examined differences in word+morpheme recognition based on listener group. It was predicted that CNH would outperform CHH for both isolated and embedded recognition. Based on the 8-kHz filter condition for both groups, HH listeners had poorer recognition with their HAs compared with NH peers, consistent with previous studies ([Stelmachowicz et al, 2002]; [Glista and Scollie, 2012]). It was our expectation that CHH would not benefit from sentence context in this study because of (a) the low saliency of plural markers in sentence-medial positions and (b) limited audibility and bandwidth through their HAs. Furthermore, the noise we introduced to reduce ceiling effects could have reduced the saliency of phonemes that occur at lower intensities and would otherwise contribute to coarticulation cues, preventing CHH from taking advantage of those cues. Recognition for CHH suffered from sentence-medial position effects to a similar degree as for CNH, in that both groups of listeners had better word+morpheme recognition for isolated targets than for embedded targets in the 8-kHz condition. In line with our expectation, HH listeners did not show an advantage in plural recognition in the sentence-embedded condition.

For the third research question, the influence of varying acoustic access to speech information on word+morpheme recognition across linguistic context for CHH was analyzed with regard to overall aided audibility (BESII) and two measures of high-frequency audibility, the sensation level of speech at 6 kHz and the maximum audible frequency. It was expected that measures of access to high-frequency speech information would be more sensitive than overall audibility in predicting individual differences in word+morpheme recognition. Our results indicate that the broadband estimate of audibility, BESII, was not predictive of recognition in the isolated or sentence-embedded context. The SII estimates speech recognition for a given stimulus based on frequency-importance bands spanning the entire LTASS, not just the high-frequency bands. Results from [Gustafson and Pittman (2011)] and [Hogan and Turner (1998)] indicate that increases in bandwidth and the resulting improvements in sentence recognition may not necessarily be well represented by measures of general audibility. [Gustafson and Pittman (2011)] found that children had better speech recognition for meaningful and nonsense sentences with increasing bandwidth while audibility was held approximately equal. In this regard, the SII does not represent the variability in performance seen with varying bandwidth. Thus, it was not surprising that a general measure of speech audibility did not relate to recognition of s/z-inflected words in this study.

Of the two measures of high-frequency audibility, the maximum audible frequency was more informative in relation to accuracy of word+morpheme recognition than the sensation level of speech measured at 6 kHz, which was not predictive of recognition in any context. It may be that, for the talker used in this study, the fricative energy of the s/z inflections was concentrated above 6 kHz, so measuring the sensation level at that frequency did not capture enough information about the s/z-inflected word to predict recognition. In contrast, the amount of speech bandwidth provided by the HAs, as measured by the maximum audible frequency, was positively related to plural recognition in sentences. These results show promise for using the maximum audible frequency to estimate aided plural recognition for stimuli with linguistic context for CHH. The fact that none of the electroacoustic measures of high-frequency audibility were related to word+morpheme recognition for isolated monosyllabic words suggests that the s/z morphemes may have been salient enough for recognition in the isolated condition, even with degraded high-frequency audibility through HAs.

Focusing on the upper limit of speech bandwidth that the child has access to, in place of relying on a broadband measure of SII or a sensation level at one frequency, may help audiologists better anticipate children’s behavioral recognition of high-frequency morphemes, such as plurals or third person singular verb markers. The maximum audible frequency is easily and quickly obtainable while performing speechmapping SII verification measures and gives audiologists specific information about children’s access to—and ability to use—high-frequency speech information in degraded listening conditions that approximate how they learn language.



LIMITATIONS AND FUTURE DIRECTIONS

The lack of a significant difference in recognition of s/z inflections between nouns and verbs in the isolated condition should be interpreted with caution. It is possible that isolated words do not allow us to examine effects of the input frequency and duration differences that exist between plural noun and third person singular verb inflections within running speech. Future work could center on differences in s/z-inflected noun and verb recognition within a sentence-medial position. The small number of HH participants limits the generalizability of the between-subjects findings regarding the influence of electroacoustic measures of audibility. Future studies should explore the relationship between maximum audible frequency and fricative recognition in children with more severe degrees of hearing loss, who potentially have lower and more varied amounts of aided bandwidth. We did not examine HH listeners’ recognition in the 4-kHz filter condition, limiting our ability to compare NH with HH performance in equally restricted bandwidth conditions. Noise might have interfered with acoustic access to low-intensity phonemes that would otherwise contribute to coarticulation cues, specifically for the CHH. If tested in quiet to improve audibility across all phonemes, the CHH could feasibly employ strategies similar to those of the CNH in the 4-kHz condition and gain benefit from acoustic cues within a sentence context. Future research should evaluate effects of context on recognition of s/z-inflected words in quiet to evaluate access to high-frequency sounds without interfering noise. Alternatively, documenting the effect of noise on high-frequency audibility would support clinicians in more accurately measuring audible bandwidth and access to high-frequency speech information.



CONCLUSIONS

A primary finding of this study is that varying amounts of high-frequency audibility, whether due to low-pass filtering or to limited HA audibility and/or bandwidth, impact word+morpheme recognition across different stimulus contexts for both CNH and CHH. Assessing word+morpheme recognition with only monosyllabic word lists, especially in conditions where bandwidth is restricted, may underestimate children’s high-frequency fricative recognition in the low semantic or syntactic settings that they may encounter when developing language. Therefore, measuring recognition of s/z-inflected words within sentences rather than in a monosyllabic word list may be an alternative approach to setting and verifying frequency-lowering strategies for children. All children in this study benefited from access to high-frequency acoustic information to support recognition of inflected words, and the electroacoustic maximum audible frequency measurement showed promise for estimating word+morpheme recognition for CHH. Further work remains to be done on the effect of lexical category within sentence-medial positions and on isolating the effects of high-frequency audibility on fricative recognition without interfering noise.



Abbreviations

BESII: Better-ear Speech Intelligibility Index
CHH: children who are hard of hearing
CNH: children with normal hearing
HA: hearing aid
HH: hard-of-hearing
LTASS: long-term average speech spectrum
NH: normal hearing
SII: Speech Intelligibility Index



No conflict of interest has been declared by the author(s).

Acknowledgments

The authors wish to acknowledge Dr. Mary Pat Moeller for her helpful and insightful comments on a previous version of this manuscript.

This research was supported by the following grants awarded by the NIH-NIDCD: T35 DC008757, P30 DC004662, R03 DC010505.


This paper was presented at the poster session of the annual meeting of the American Auditory Society, March 2013, Scottsdale, AZ.


  • REFERENCES

  • Alloway TP. 2007. Automated working memory assessment. London, UK: Pearson Assessment;
  • American Academy of Audiology. (2013) American Academy of Audiology Clinical Practice Guidelines: Pediatric Amplification. Reston, VA: American Academy of Audiology.
  • American National Standards Institute (ANSI) 1997. Methods for Calculation of the Speech Intelligibility Index. Technical Report S3.5-1997 . New York, NY: ANSI;
  • American National Standards Institute (ANSI) 2003. Specification of Hearing Aid Characteristics. ANSI S3.22-2003 . New York, NY: ANSI;
  • Bagatto MP, Moodie ST, Malandrino AC, Richert FM, Clench DA, Scollie SD. 2011; The University of Western Ontario pediatric audiological monitoring protocol (UWO PedAMP). Trends Amplif 15 (01) 57-76
  • Bell TS, Dirks DD, Trine TD. 1992; Frequency-importance functions for words in high- and low-context sentences. J Speech Hear Res 35 (04) 950-959
  • Boersma P, Weenink D. (2001) Praat speech processing software. Institute of Phonetics Sciences of the University of Amsterdam. http://www.praat.org
  • Hogan CA, Turner CW. 1998; High-frequency audibility: benefits for hearing-impaired listeners. J Acoust Soc Am 104 (01) 432-441
  • Hsieh L, Leonard LB, Swanson L. 1999; Some differences between English plural noun inflections and third singular verb inflections in the input: the contributions of frequency, sentence position, and duration. J Child Lang 26 (03) 531-543
  • Glista D, Scollie S. 2012; Development and evaluation of an English language measure of detection of word-final plurality markers: the University of Western Ontario Plurals Test. Am J Audiol 21 (01) 76-81
  • Glista D, Scollie S, Bagatto M, Seewald R, Parsa V, Johnson A. 2009; Evaluation of nonlinear frequency compression: clinical outcomes. Int J Audiol 48 (09) 632-644
  • Gustafson SJ, Pittman AL. 2011; Sentence perception in listening conditions having similar speech intelligibility indices. Int J Audiol 50 (01) 34-40
  • Kimlinger C, McCreery R, Lewis D. 2015; High-frequency audibility: the effects of audiometric configuration, stimulus type, and device. J Am Acad Audiol 26 (02) 128-137
  • Koehlinger K, Van Horne AO, Oleson J, McCreery R, Moeller MP. 2015; The role of sentence position, allomorph, and morpheme type on accurate use of s-related morphemes by children who are hard of hearing. J Speech Lang Hear Res 58 (02) 396-409
  • Koehlinger KM, Van Horne AJO, Moeller MP. 2013; Grammatical outcomes of 3- and 6-year-old children who are hard of hearing. J Speech Lang Hear Res 56 (05) 1701-1714
  • Kortekaas RW, Stelmachowicz PG. 2000; Bandwidth effects on children’s perception of the inflectional morpheme /s/: acoustical measurements, auditory detection, and clarity rating. J Speech Lang Hear Res 43 (03) 645-660
  • Leibold LJ, Hodson H, McCreery RW, Calandruccio L, Buss E. 2014; Effects of low-pass filtering on the perception of word-final plurality markers in children and adults with normal hearing. Am J Audiol 23 (03) 351-358
  • Mayo C, Turk A. 2004; Adult-child differences in acoustic cue weighting are influenced by segmental context: children are not always perceptually biased toward transitions. J Acoust Soc Am 115 (06) 3184-3194
  • McCreery RW, Bentler RA, Roush PA. 2013; Characteristics of hearing aid fittings in infants and young children. Ear Hear 34 (06) 701-710
  • Moeller MP, Hoover B, Putman C, Arbataitis K, Bohnenkamp G, Peterson B, Wood S, Lewis D, Pittman A, Stelmachowicz P. 2007; Vocalizations of infants with hearing loss compared with infants with normal hearing: Part I--phonetic development. Ear Hear 28 (05) 605-627
  • Moeller MP, McCleary E, Putman C, Tyler-Krings A, Hoover B, Stelmachowicz P. 2010; Longitudinal development of phonology and morphology in children with late-identified mild-moderate sensorineural hearing loss. Ear Hear 31 (05) 625-635
  • Nittrouer S. 1996; Discriminability and perceptual weighting of some acoustic cues to speech perception by 3-year-olds. J Speech Hear Res 39 (02) 278-297
  • Nittrouer S. 2002; Learning to perceive speech: how fricative perception changes, and how it stays the same. J Acoust Soc Am 112 (02) 711-719
  • Nittrouer S, Miller ME. 1997; Predicting developmental shifts in perceptual weighting schemes. J Acoust Soc Am 101 (04) 2253-2266
  • Pittman AL. 2008; Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. J Speech Lang Hear Res 51 (03) 785-797
  • Pittman AL, Stelmachowicz PG. 2000; Perception of voiceless fricatives by normal-hearing and hearing-impaired children and adults. J Speech Lang Hear Res 43 (06) 1389-1401
  • Silberer AB, Bentler R, Wu YH. 2015; The importance of high-frequency audibility with and without visual cues on speech recognition for listeners with normal hearing. Int J Audiol 54 (11) 865-872
  • Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE. 2001; Effect of stimulus bandwidth on the perception of /s/ in normal- and hearing-impaired children and adults. J Acoust Soc Am 110 (04) 2183-2190
  • Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE. 2002; Aided perception of /s/ and /z/ by hearing-impaired children. Ear Hear 23 (04) 316-324
  • Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE, Moeller MP. 2004; The importance of high-frequency audibility in the speech and language development of children with hearing loss. Arch Otolaryngol Head Neck Surg 130 (05) 556-562
  • Storkel HL, Hoover JR. 2010; An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English. Behav Res Methods 42 (02) 497-506
  • Studebaker GA, Sherbecoe RL. 1993. Frequency-importance functions for speech recognition. In: Studebaker GA, Hochberg I. Acoustical Factors Affecting Hearing Aid Performance. Boston: Allyn and Bacon; 185-204
  • Sundara M, Demuth K, Kuhl PK. 2011; Sentence-position effects on children’s perception and production of English third person singular -s. J Speech Lang Hear Res 54 (01) 55-71
  • Tomblin JB, Harrison M, Ambrose SE, Walker EA, Oleson JJ, Moeller MP. 2015; Language outcomes in young children with mild to severe hearing loss. Ear Hear 36 (01) (Suppl) 76S-91S
  • Wolfe J, John A, Schafer E, Nyffeler M, Boretzki M, Caraway T. 2010; Evaluation of nonlinear frequency compression for school-age children with moderate to moderately severe hearing loss. J Am Acad Audiol 21 (10) 618-628


