J Am Acad Audiol 2017; 28(09): 799-809
DOI: 10.3766/jaaa.16151

Relationship of Grammatical Context on Children’s Recognition of s/z-Inflected Words

Meredith Spratford*, Hannah Hodson McLean†, Ryan McCreery*

*   Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
†   Eastern Virginia Medical School, Norfolk, VA

Publication History

Publication Date: 26 June 2020 (online)

Abstract

Background:

Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because of semantic and grammatical cues that support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is less than that for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.

Purpose:

To determine whether recognition of s/z-inflected monosyllabic words differs for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (presented in isolation versus embedded medially within a sentence with low semantic and syntactic predictability) and across levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH; 8-kHz low-pass filtered for CHH).
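As an illustration only, the sketch below shows one way the 4- and 8-kHz bandwidth conditions could be produced by low-pass filtering recorded stimuli with SciPy. The filter design (zero-phase Butterworth), filter order, and file names are assumptions; the abstract does not specify the actual filtering method used in the study.

```python
# Hypothetical creation of 4- and 8-kHz low-pass bandwidth conditions.
# Filter design, order, and file names are assumptions for illustration.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def lowpass(x, fs, cutoff_hz, order=8):
    """Zero-phase Butterworth low-pass filter (assumed design)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs, stim = wavfile.read("target_word.wav")   # hypothetical stimulus file
stim = stim.astype(np.float64)

for cutoff in (4000, 8000):                  # the two bandwidth conditions
    filtered = lowpass(stim, fs, cutoff)
    wavfile.write(f"target_word_lp{cutoff // 1000}k.wav", fs,
                  np.clip(filtered, -32768, 32767).astype(np.int16))
```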

Research Design:

A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted by semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.

Study Sample:

Thirty-five children aged 5–12 yr were recruited to participate in the study: 24 CNH and 11 CHH with bilateral mild-to-severe hearing loss who wore hearing aids (HAs). All children were native speakers of English.

Data Collection and Analysis:

Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio using steady-state, speech-shaped noise. Real-ear probe microphone measures of the HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence-embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was conducted with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.
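A minimal sketch of this analysis plan in Python, using pandas and pingouin, is shown below. The data layout, column names, and the restriction of the group comparison to the 8-kHz condition are assumptions made for illustration; this is not the study's actual analysis script.

```python
# Illustrative sketch of the analysis plan described above (assumed data
# layout and column names; not the study's actual scripts).
import pandas as pd
import pingouin as pg

# Assumed long format: one row per subject x bandwidth x context, with
# columns: subject, group ("CNH"/"CHH"), bandwidth ("4k"/"8k"),
# context ("isolated"/"sentence"), score (proportion correct).
scores = pd.read_csv("word_morpheme_scores.csv")

# Repeated-measures ANOVA for CNH: bandwidth x context, both within subjects.
cnh = scores[scores.group == "CNH"]
rm = pg.rm_anova(data=cnh, dv="score", within=["bandwidth", "context"],
                 subject="subject", detailed=True)

# Mixed-model ANOVA comparing CNH and CHH (assumed to use the 8-kHz
# condition, since CHH stimuli were 8-kHz low-pass filtered): context
# within subjects, hearing status between subjects.
full_bw = scores[scores.bandwidth == "8k"]
mixed = pg.mixed_anova(data=full_bw, dv="score", within="context",
                       between="group", subject="subject")

# Bivariate correlation between sentence-embedded recognition for CHH and a
# hypothetical electroacoustic measure (e.g., maximum audible frequency).
chh = full_bw[(full_bw.group == "CHH") & (full_bw.context == "sentence")]
ea = pd.read_csv("electroacoustic_measures.csv")  # subject, max_audible_freq
merged = chh.merge(ea, on="subject")
corr = pg.corr(merged["score"], merged["max_audible_freq"])

print(rm, mixed, corr, sep="\n\n")
```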

Results:

When high-frequency audibility was maximized, CNH and CHH had better word+morpheme recognition in the isolated condition than in the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition than in the isolated condition. CHH whose HAs provided greater high-frequency speech bandwidth, as measured by the maximum audible frequency, had better word+morpheme recognition in sentences.
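In general terms, a maximum audible frequency is the highest frequency at which the aided speech spectrum remains above the listener's hearing threshold. The sketch below illustrates that idea on example SPL-referenced values; the frequencies, levels, and the computation itself are assumptions for illustration and do not reproduce the study's electroacoustic procedure.

```python
# Hypothetical illustration of a maximum audible frequency: the highest
# audiometric frequency at which aided speech output exceeds threshold.
# All values are invented examples; this is not the study's procedure.
import numpy as np

freqs_hz = np.array([500, 1000, 2000, 3000, 4000, 6000, 8000])
aided_speech_dbspl = np.array([68, 65, 62, 58, 55, 48, 40])  # example aided output
thresholds_dbspl = np.array([45, 48, 52, 55, 50, 52, 55])    # example thresholds

audible = aided_speech_dbspl > thresholds_dbspl
max_audible_freq = int(freqs_hz[audible].max()) if audible.any() else None
print("Maximum audible frequency (Hz):", max_audible_freq)
```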

Conclusions:

High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children’s use of high-frequency audibility in a manner that approximates how they learn language.

This research was supported by the following grants awarded by the NIH-NIDCD: T35 DC008757, P30 DC004662, and R03 DC010505.


This paper was presented at the poster session of the annual meeting of the American Auditory Society, March 2013, Scottsdale, AZ.


 