J Am Acad Audiol 2020; 31(08): 566-577
DOI: 10.1055/s-0040-1709448
Research Article

Neural Coding of Syllable-Final Fricatives with and without Hearing Aid Amplification

Sharon E. Miller
1   Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas
,
Yang Zhang
2   Department of Speech-Language Hearing Science, University of Minnesota, Minneapolis, Minnesota
3   Center for Neurobehavioral Development, University of Minnesota, Minneapolis, Minnesota
4   Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota
Funding Dr. Miller and Dr. Zhang received funds from the College of Liberal Arts, University of Minnesota.
 

Abstract

Background Cortical auditory event-related potentials are a potentially useful clinical tool to objectively assess speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts and whether these differences result in distinct neural responses with and without hearing aid amplification remain unclear.

Purpose To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast.

Research Design A repeated-measures, within-subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification.

Study Sample Ten adult listeners with normal hearing participated in the study.

Data Collection and Analysis Cortical auditory event-related potentials were elicited to an /ɑs/–/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64-electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance.

Results The P2' component of the acoustic change complex significantly differed for the syllable-final fricative contrast with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions.

Conclusions Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricative significantly differed with and without the hearing aid.



Perception of the voiceless fricative /s/–/ʃ/ contrast depends on multiple acoustic cues, including spectral frication shape and dynamic formant transitions to neighboring vowels, and their relative importance varies as a function of phonological context as well as hearing sensitivity.[1] [2] [3] [4] [5] [6] In the syllable-initial context, adult listeners with normal hearing (NH) use both the frication and formant transition cues for accurate recognition.[1] [3] However, adult listeners with hearing impairment (HI) rely mainly on the frication spectrum for correct perception.[3] In the syllable-final or coda position, adult listeners with NH and HI both give more perceptual weight to the frication segment than to the vowel and formant transitions.[2] In low-level noise, previous work in listeners with NH suggests that /ʃ/ in the syllable-final position is perceptually confused with other voiceless consonants roughly 10% of the time, with the most confusions occurring with /s/.[7] In the consonant-final position, /s/ is confused with other voiceless consonants roughly 30% of the time, with errors occurring primarily for /ʃ/ and /θ/.[7] For listeners with HI, recognition of voiceless fricatives in the syllable-final or coda position is on average 15% poorer compared with the syllable-initial position.[7] [8]

Given previous /s/–/ʃ/ consonant confusion results and the importance of spectral frication cues for accurate recognition of /s/–/ʃ/ in listeners with HI, it is crucial that hearing aids reliably code the acoustic cues differentiating the contrast across varying phonological contexts. However, hearing aids modify the spectral and temporal properties of speech sounds,[9] [10] potentially altering important cues that contribute to accurate recognition. For example, /s/ has a spectral frication peak ranging from roughly 4,000 to 8,000 Hz, whereas the spectral peak for /ʃ/ occurs near 2,000 to 4,000 Hz.[11] [12] Newer hearing aid models offer an extended bandwidth and can theoretically provide substantial gain up to 8,000 Hz, allowing for acoustic differentiation of /s/ and /ʃ/. However, recent work suggests that ANSI bandwidth specifications reported by manufacturers for newer behind-the-ear hearing aids can overestimate the maximally audible high-frequency cues in fricative stimuli.[13] Real-ear probe microphone measures can assess the acoustic output of hearing aids in the ear canal for different speech sounds, but verification systems cannot account for higher order neural processing of the amplified signals.[9] While behavioral tests can objectively assess fricative perception in varying phonological contexts, behavioral assessments cannot shed light on the temporal dynamics and neural coding of speech processing that underlie perception. Indeed, electrophysiological differences to speech sound contrasts can emerge prior to changes in behavior.[14] Likewise, changes in behavior are not always linked to changes in audible acoustic information.[15] [16] Electrophysiological measures, though, can assess how spectral and temporal features of speech are coded in the central auditory system and how neural responses to acoustic changes within and across speech sounds relate to behavioral perception.[17] [18] Objective measurements of speech processing that are independent of cognitive or attentional skill are a potentially attractive tool for clinicians who monitor speech and language outcomes, as the neural coding of speech segments has been shown to be predictive of sentence perception abilities in adult listeners with and without HI.[19] [20] It remains unclear whether syllable-final /s/–/ʃ/ fricatives produce different electrophysiological responses in listeners with and without hearing loss when the stimuli are behaviorally discriminable. It also remains untested how acoustic modifications via hearing aid signal processing affect neural responses to voiceless fricative segments in different phonological contexts.

Auditory event-related potentials (ERPs) are a useful tool for examining whether hearing aid amplification modifies the neural coding of voiceless fricative speech sounds in differing phonological contexts. The ERP response has exquisite temporal resolution on the order of milliseconds and can be elicited by a variety of auditory stimuli.[21] [22] The averaged ERP waveform has a series of positive (P) and negative (N) peaks with numeric designations (i.e., P1 is the first positive peak of the waveform). The P1–N1–P2 complex is elicited by repeated auditory stimulation (see Key et al[21] for a review) or when an acoustic change occurs in an ongoing signal which is termed the acoustic change complex (ACC).[23] [24] [25] [26] Importantly, ERPs have been shown to be sensitive to acoustic differences present in naturally spoken /ʃ/ segments when produced in varying phonologic contexts in unaided listening conditions.[17]

Previous work in adult listeners with NH indicates that ERPs are reliably elicited by the voiceless fricatives /s/ and /ʃ/ in the consonant-initial context with and without hearing aid amplification.[27] [28] Miller and Zhang[28] recorded aided and unaided ERP responses to a /sɑ/–/ʃɑ/ contrast that controlled frication duration, amplitude, and dynamic formant transition cues in the speech sounds. The unaided results showed that differences in spectral frication alone elicited significantly different ERP responses to the initial consonant.[28] Hearing aid amplification acoustically modified the contrast, but because the /s/–/ʃ/ stimuli were still acoustically distinct, the aided ERP waveforms to the fricative also significantly differed.[28] Tremblay and colleagues[27] demonstrated that when frication duration differences were present in naturally produced /si/–/ʃi/ consonant–vowel stimuli, the temporal differences were reliably reflected in the unaided ACC response latencies to the following vowel. The addition of hearing aid amplification preserved the unaided neural ACC patterns.[27] It remains untested whether the ACC reliably differs for the /s/–/ʃ/ contrast when the fricatives are in the coda position and whether the neural coding of the contrast is affected by hearing aid amplification.

The present electrophysiological study in listeners with NH expands on previous work and examines the neural coding of the voiceless fricative /s/–/ʃ/ contrast in the syllable-final context. There are two primary research aims: (1) to examine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differs without hearing aid amplification and (2) to determine whether hearing aid amplification preserves the unaided neural coding pattern of the /s/–/ʃ/ contrast or whether hearing aid amplification modifies the unaided neural response patterns. We hypothesize that if the acoustic outputs reliably differ in the unaided and aided conditions, the ERP responses would differ across conditions as well. Answers to these research questions would shed light on how neural coding of fricative sounds in the coda position is affected by hearing aid amplification.

Materials and Methods

Participants

Ten right-handed adult listeners with NH participated in the study (five males and five females; mean age: 23 years). All listeners were native speakers of American-English, denied a history of speech, cognitive, or language impairments, self-reported NH, and passed a hearing screening of 20-dB HL at 1,000 Hz. Normal auditory-evoked responses to the 1,000 Hz tone were also verified in each subject prior to beginning the experiment. Informed consent was obtained in compliance with the institutional Human Research Protection Program at the University of Minnesota.



ERP Stimuli

The speech stimuli used to elicit the ERP responses were 350-millisecond nonsense vowel–consonant (VC) syllables, /ɑs/ and /ɑʃ/, produced by a native female speaker of American-English. The stimuli were recorded using a Sennheiser high-fidelity microphone (model e865; frequency response 40–20,000 Hz) in a sound booth (ETS-Lindgren Acoustic Systems) and were digitally recorded to disk at a 44.1-kHz sampling rate. Because auditory-evoked responses directly reflect the temporal characteristics of the stimuli, the stimuli were digitally edited in Sony Sound Forge 9.0 (Sony Creative Software, United States) to control the duration of the vowel and fricative portions.[26] [28] [29] Using temporal stretching and shrinking via the Pitch Synchronous Overlap-Add (PSOLA) technique,[30] the vowel portion /ɑ/ of each stimulus was edited to 200 milliseconds and the fricative portion to 150 milliseconds. All VC stimuli were equated for root mean square intensity level.
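The PSOLA duration editing itself is nontrivial to reimplement, but the final level-equating step is straightforward. The sketch below (not the authors' editing script; the file names and the soundfile/numpy dependencies are illustrative assumptions) shows how two edited stimuli could be matched in root mean square (RMS) level:

```python
# Minimal sketch, assuming 44.1-kHz mono WAV files of the edited stimuli.
import numpy as np
import soundfile as sf  # assumed dependency for WAV read/write

def rms(x):
    """Root mean square amplitude of a waveform."""
    return np.sqrt(np.mean(x ** 2))

def match_rms(signal, target_rms):
    """Scale a waveform so that its RMS level equals target_rms."""
    return signal * (target_rms / rms(signal))

a_s, fs = sf.read("as_edited.wav")    # hypothetical /ɑs/: 200-ms vowel + 150-ms frication
a_sh, _ = sf.read("ash_edited.wav")   # hypothetical /ɑʃ/ stimulus

a_sh_matched = match_rms(a_sh, rms(a_s))  # equate overall RMS to the /ɑs/ reference
sf.write("ash_rms_matched.wav", a_sh_matched, fs)
```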



Hearing Aid Acoustics

A digital Starkey Destiny 1200, 12-channel, behind-the-ear hearing aid, coupled to the right ear of each listener with a foam earpiece, was used for the aided testing. According to manufacturer specifications, the hearing aid had a frequency response ranging from 200 to 6,400 Hz, 60 dB sound pressure level (SPL) peak full-on gain, and 54 dB SPL high-frequency average full-on gain. The hearing aid was set to an omnidirectional mode and provided an average of 10 dB of insertion gain across frequencies, verified using a 60 dB SPL digital speech stimulus in KEMAR (G.R.A.S. Sound and Vibration, Denmark; [Fig. 1]). The hearing aid employed multichannel compression and was programmed to maximize speech intelligibility by using a higher threshold knee point (TK) in the low-frequency regions (TK = 50 dB SPL) than in the high-frequency regions (TK = 30 dB SPL). The compression ratio was set to 1.125:1 across channels, and fast compression time constants of 1 to 10 milliseconds were employed to maximize audibility of the low-intensity syllable-final consonant. All advanced feedback and noise reduction technologies were disabled for testing. The hearing aid was electroacoustically verified in a 2-cc coupler (Fonix 7000, Frye Electronics Inc., Tigard, OR) before each test session.
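As an illustration of how the programmed knee point, gain, and compression ratio shape output levels, the sketch below implements a static input/output rule for a single compression channel. It is a simplified, assumption-based model, not the Starkey fitting software, and it omits channel filtering and the 1 to 10 millisecond attack/release dynamics:

```python
# Illustrative static input/output rule for one compression channel.
def compressed_output_db(input_db, gain_db=10.0, tk_db=30.0, ratio=1.125):
    """Return output level (dB SPL) for a given input level.

    Below the knee point the channel applies linear gain; above it, each
    additional dB of input yields 1/ratio dB of additional output.
    """
    if input_db <= tk_db:
        return input_db + gain_db
    return tk_db + gain_db + (input_db - tk_db) / ratio

# Example: a 60 dB SPL input to a high-frequency channel (TK = 30 dB SPL)
print(compressed_output_db(60.0))  # ~66.7 dB SPL rather than 70 dB with purely linear gain
```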

Fig. 1 Real ear insertion gain (REIG) in dB to a 60 dB SPL input measured in a Knowles Electronics Manikin for Acoustic Research (KEMAR).

To characterize the aided and unaided acoustic outputs, in-the-canal recordings of the /ɑs/ and /ɑʃ/ stimuli were made using KEMAR (G.R.A.S. Sound and Vibration, Denmark) with and without the hearing aid ([Fig. 2]). Acoustic outputs were routed from KEMAR's internal microphones to a sound card (Gina, Echo Audio) via an external audio interface and then recorded to Audacity. Acoustic analyses were performed offline in Praat[31] and MATLAB (MathWorks, Natick, MA). [Table 1] displays the mean intensity of the vowel and fricative portions of each stimulus measured in the canal. Spectra of the unaided and aided in-the-canal waveforms are shown in [Fig. 3]. To document how hearing aid signal processing and compression parameters affected temporal cues for the fricative contrast, the unaided and aided stimuli were analyzed using the envelope difference index (EDI).[32] The EDI quantifies temporal envelope changes across stimuli and has been previously used to compare unaided and aided recordings.[33] The EDI ranges from a value of 0 to 1, with 0 representing a complete match in envelopes and 1 representing no relationship. To obtain the EDI, the envelope of each unaided and aided /ɑs/ and /ɑʃ/ signal was extracted using a Hilbert transform function and then filtered using a low-pass Hann filter with a 50-Hz cutoff (Praat, MATLAB). Prior to computing the EDI, the envelopes were down-sampled to 6,000 Hz and the average stimulus amplitude was calculated in the aided and unaided conditions. To scale the signals and allow for comparisons across hearing aid conditions, each sample point was divided by the mean amplitude value of the syllable. The EDI was calculated using the following formula:

EDI = \frac{\sum_{n=1}^{N} |Env1_{n} - Env2_{n}|}{2\sum_{n=1}^{N} \frac{Env1_{n} + Env2_{n}}{2}}

Here, N represents the number of samples in the waveform, Env1_n is the envelope of the reference signal, and Env2_n is the envelope of the comparison signal. To compare envelopes of the fricative contrasts, EDIs were computed comparing /ɑs/ versus /ɑʃ/ in the unaided ([Fig. 4A]) and aided ([Fig. 4B]) conditions. To quantify how hearing aid compression affected each stimulus, EDIs were also computed for each stimulus in the unaided versus aided condition ([Fig. 5]).
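For readers who wish to reproduce the envelope analysis, the following sketch outlines the EDI computation as described above (Hilbert envelope, 50-Hz Hann-windowed low-pass filter, down-sampling to 6,000 Hz, and scaling by the mean amplitude). The file names and the use of Python/SciPy rather than Praat and MATLAB are illustrative assumptions:

```python
# Minimal EDI sketch, assuming two equal-length mono recordings
# (e.g., the unaided and aided in-the-canal /ɑs/ recordings).
import numpy as np
import soundfile as sf
from scipy.signal import hilbert, firwin, filtfilt, resample_poly

def envelope_50hz(x, fs):
    """Hilbert envelope low-pass filtered at 50 Hz (Hann-windowed FIR)."""
    env = np.abs(hilbert(x))
    taps = firwin(1025, 50.0, fs=fs, window="hann")
    return filtfilt(taps, [1.0], env)

def edi(sig1, sig2, fs, fs_out=6000):
    """Envelope difference index between a reference and a comparison signal."""
    env1 = resample_poly(envelope_50hz(sig1, fs), fs_out, fs)
    env2 = resample_poly(envelope_50hz(sig2, fs), fs_out, fs)
    n = min(len(env1), len(env2))
    # Scale each envelope by its mean amplitude so conditions are comparable
    env1, env2 = env1[:n] / np.mean(env1[:n]), env2[:n] / np.mean(env2[:n])
    return np.sum(np.abs(env1 - env2)) / (2 * np.sum((env1 + env2) / 2))

ref, fs = sf.read("as_unaided_kemar.wav")   # hypothetical in-the-canal recordings
cmp_, _ = sf.read("as_aided_kemar.wav")
print(f"EDI = {edi(ref, cmp_, fs):.2f}")     # 0 = identical envelopes, 1 = no relationship
```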

Fig. 2 Unaided and aided waveforms of the /ɑs/ and /ɑʃ/ in-the-canal, acoustic recordings using KEMAR in relative amplitude.
Fig. 3 In-the-canal spectra of the frication portion of the (A) /ɑs/ and (B) /ɑʃ/ stimuli in the unaided (solid line) and aided (dotted line) conditions.
Table 1

Average in-the-canal intensities in dBA for the vowel and fricative segments of the /ɑs/ and /ɑʃ/ stimuli in the unaided and aided conditions measured using KEMAR

              Unaided                      Aided
              /ɑs/           /ɑʃ/          /ɑs/           /ɑʃ/
Vowel         66.9 (1.27)    67.1 (1.75)   77.1 (1.1)     77.2 (1.0)
Fricative     64.3 (1.16)    66.2 (0.8)    65.4 (1.35)    70.6 (0.8)

Fig. 4 Envelope difference index (EDI) for the /ɑs/-/ɑʃ/ contrast in the (A) unaided and (B) aided conditions in normalized amplitude.
Fig. 5 Envelope difference index (EDI) for the unaided versus aided (A) /ɑs/ and (B) /ɑʃ/ stimuli in normalized amplitude.


Behavioral Testing of the ERP Stimuli

To ensure that the stimuli used in the electrophysiological experiment were perceptually distinct, subjects completed behavioral identification and discrimination tests of the /ɑs/ and /ɑʃ/ stimuli in aided and unaided listening conditions. For the identification task, participants were presented with 40 trials each of the /ɑs/ or /ɑʃ/ stimulus and instructed to label it as /ɑs/ or /ɑʃ/. For the discrimination task, listeners heard 20 presentations each of /ɑs/–/ɑs/, /ɑʃ/–/ɑʃ/, /ɑs/–/ɑʃ/, and /ɑʃ/–/ɑs/ stimulus pairs and indicated whether the two stimuli were perceptually the same or different. The order of aided and unaided conditions was counterbalanced across the listeners.



EEG Data Recording

Continuous electroencephalography (EEG) data were recorded (bandpass filter: 0.016–200 Hz; 512-Hz sampling rate) using the Advanced Neuro Technology system and a 64-channel Waveguard cap (Enschede, the Netherlands).[34] The cap used the standard 10–20 arrangement of Ag/AgCl electrodes with additional intermediate positions. Electrode AFz served as the ground electrode. The average electrode impedance was kept below 10 kOhm for the entire recording session (approximately 60 minutes total).



EEG Procedures

For all testing, subjects were comfortably seated in an acoustically treated sound-booth (ETS-Lindgren Acoustic Systems). For the electrophysiological measures, the speech stimuli were presented in the sound field via bilateral loudspeakers (M-audio BX8a) placed at approximately 60° azimuth to each participant. Stimuli were calibrated to a sound level of 60 dB SPL at the subject's head for all testing and were presented in both unaided and aided conditions, counterbalanced across listeners. For aided testing, a behind-the-ear hearing aid was coupled to each listener's right ear using a foam earpiece.

Stimuli were presented to listeners using a passive listening, alternating short-block paradigm.[28] [35] Each block contained 20 presentations of the same stimulus followed by a block of 20 stimuli from a different speech category. The interstimulus interval between consecutive stimuli in a block was randomized between 900 and 1,000 milliseconds to prevent neural habituation. Between each block, there were 2 seconds of silence. A minimum of 120 presentations of each stimulus were recorded, and the first trial of every block was excluded from averaging to avoid any possible contamination from the mismatch negativity component. During the passive listening task, listeners were seated 2.5 m from a 20-inch LCD TV and watched a muted, subtitled movie of their choice. Listeners were instructed to sit quietly and ignore the auditory stimuli.
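A schematic of how such an alternating short-block sequence could be generated is sketched below; the block size, interstimulus interval range, interblock silence, and exclusion of block-initial trials follow the description above, while the data structure itself is an illustrative assumption:

```python
# Sketch of the alternating short-block presentation sequence.
import random

def build_sequence(n_blocks_per_stimulus=6, block_size=20,
                   isi_ms=(900, 1000), interblock_silence_ms=2000):
    """Alternate /ɑs/ and /ɑʃ/ blocks; each trial carries its own randomized ISI."""
    sequence = []
    stimuli = ["as", "ash"]
    for block_idx in range(2 * n_blocks_per_stimulus):
        stim = stimuli[block_idx % 2]          # alternate speech categories per block
        for trial in range(block_size):
            sequence.append({
                "stimulus": stim,
                "isi_ms": random.uniform(*isi_ms),
                # the first trial of every block is later excluded from averaging
                "block_initial": trial == 0,
            })
        sequence.append({"stimulus": None, "isi_ms": interblock_silence_ms,
                         "block_initial": False})  # 2 s of silence between blocks
    return sequence

trials = build_sequence()
print(sum(t["stimulus"] == "as" for t in trials))  # 120 presentations of /ɑs/
```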



ERP Waveform Analysis

ERP waveform analysis and averaging were performed offline using the EEGLAB toolbox[36] in MATLAB (MathWorks, Natick, MA). Electrooculographic (EOG) artifacts were removed by applying a blind source separation Infomax independent component analysis (ICA) algorithm[37] to the EEG data. Independent components having spatial and temporal characteristics of EOG activity were identified and removed from the ICA matrix prior to averaging.[38] [39] [40] The ERP epoch was 700 milliseconds in total and consisted of a 100-millisecond pre-stimulus baseline and a 600-millisecond post-stimulus recording window. For averaging, data were bandpass filtered from 0.5 to 40 Hz using finite impulse response filters, and trials containing artifacts exceeding ±50 µV were removed.
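An analogous pipeline could be scripted, for example, in MNE-Python as sketched below. The authors used EEGLAB/MATLAB, so the file name, event labels, and number of ICA components here are assumptions, and MNE's rejection criterion is peak-to-peak rather than an absolute ±50 µV threshold:

```python
# Minimal MNE-Python sketch of the offline ERP preprocessing steps.
import mne

raw = mne.io.read_raw_eeglab("subject01.set", preload=True)  # hypothetical file
raw.filter(l_freq=0.5, h_freq=40.0, fir_design="firwin")     # 0.5-40 Hz FIR bandpass

# Infomax ICA for ocular artifact removal
ica = mne.preprocessing.ICA(n_components=20, method="infomax", random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]   # components with EOG-like topography/time course (chosen by inspection)
ica.apply(raw)

# Epoch: 100-ms pre-stimulus baseline, 600-ms post-stimulus window,
# rejecting trials whose peak-to-peak amplitude exceeds 50 microvolts
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.6,
                    baseline=(None, 0), reject=dict(eeg=50e-6), preload=True)
evoked_as = epochs["as"].average()   # assumes an event label "as" exists
```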

Peak amplitudes and latencies for the ACC elicited by the fricative portions of the /ɑs/–/ɑʃ/ stimuli were extracted from the averaged waveforms of each subject. Based on the grand mean ERP waveforms, the following time windows were used to extract the peaks to the fricative: N1' (the initial negative peak of the ACC) ranged from 270 to 375 milliseconds, and P2' (the initial positive peak of the ACC) ranged from 375 to 450 milliseconds. The naming of these ERP components is consistent with that in the work of Hari.[23]
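Peak picking within these windows amounts to locating the extremum of the appropriate polarity, as in the following sketch (the averaged waveform here is a random placeholder; in practice it would be a subject's averaged ACC response in microvolts):

```python
# Sketch of peak extraction in the N1' and P2' windows defined above.
import numpy as np

def peak_in_window(waveform, times_ms, window_ms, polarity):
    """Return (latency_ms, amplitude) of the extremum within a latency window."""
    mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    segment = waveform[mask]
    idx = np.argmin(segment) if polarity == "negative" else np.argmax(segment)
    return times_ms[mask][idx], segment[idx]

fs = 512.0
times_ms = (np.arange(-0.1 * fs, 0.6 * fs) / fs) * 1000.0   # -100 to 600 ms epoch
waveform = np.random.randn(times_ms.size)                   # placeholder averaged response

n1_lat, n1_amp = peak_in_window(waveform, times_ms, (270, 375), "negative")
p2_lat, p2_amp = peak_in_window(waveform, times_ms, (375, 450), "positive")
print(f"N1': {n1_lat:.0f} ms, {n1_amp:.2f} uV; P2': {p2_lat:.0f} ms, {p2_amp:.2f} uV")
```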



Statistical Analysis

To examine electrode region and hemisphere effects, the electrodes were grouped for the statistical analysis.[28] [34] The frontal electrode group included F3, F5, F7, FC3, FC5, and FT7 and the corresponding electrodes on the right scalp region. The central electrodes included T7, TP7, C3, C5, CP3, and CP5 and the corresponding electrodes on the right. Parietal electrodes included P3, P5, P7, PO3, PO5, and PO7 and the corresponding electrodes on the right. The midline frontal electrode group included F1, Fz, F2, FC1, FCz, and FC2. The midline central electrode group included C1, Cz, C2, CP1, CPz, and CP2. Midline parietal electrodes included P1, Pz, P2, and POz.

Effects of fricative identity (syllable-final /s/ and /ʃ/), amplification (unaided and aided), electrode region (frontal, central, and parietal), and hemisphere (electrodes on left, midline, and right-hemisphere sites) on peak amplitudes and latencies from the individual ERP data were assessed using within-subject, repeated-measures analysis of variance (ANOVA). Post-hoc repeated-measures univariate ANOVAs were performed on all significant factors. Where applicable, Holm–Bonferroni corrections were applied to the reported p-values involving multiple comparisons. Greenhouse–Geisser corrections were applied if violations of sphericity occurred.
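A minimal sketch of such a four-way repeated-measures ANOVA using statsmodels is shown below; the statistics package actually used is not specified in the text, and sphericity (Greenhouse–Geisser) corrections and Holm–Bonferroni adjustments would need to be applied separately. The input file and column names are assumptions:

```python
# Sketch of a within-subject repeated-measures ANOVA on P2' amplitudes.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format table: one row per subject x condition cell, with columns
# subject, fricative (s, sh), amplification (unaided, aided),
# region (frontal, central, parietal), hemisphere (left, midline, right), p2_amplitude
df = pd.read_csv("p2_amplitudes_long.csv")   # hypothetical file

model = AnovaRM(df, depvar="p2_amplitude", subject="subject",
                within=["fricative", "amplification", "region", "hemisphere"],
                aggregate_func="mean")
print(model.fit())   # F statistics and p-values for main effects and interactions
```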



Results

Behavioral Results

In the unaided condition, mean identification of /ɑs/ was 97.5% (standard deviation: 1.77) and mean identification of /ɑʃ/ was 98.5% (standard deviation: 1.41). With the hearing aid, mean aided identification of /ɑs/ was 97% (standard deviation: 2.1) and mean aided /ɑʃ/ identification was 98.5% (standard deviation: 1.36). Unaided percent correct discrimination of the /ɑs/–/ɑʃ/ stimuli, controlling for false positives, was 95.1% (standard deviation: 1.6). Aided discrimination of the /ɑs/–/ɑʃ/ stimuli was 94.25% (standard deviation: 2.3). Paired t-tests revealed no significant differences between unaided and aided identification or discrimination performance (p > 0.05).



ACC Results to the Syllable-Final Fricative

Clear ACC responses (N1' and P2' peaks) to the fricative portion of /ɑs/ and /ɑʃ/ stimuli were observed in the unaided and aided conditions ([Fig. 6]). The grand mean peak N1' and P2' latency and amplitude data averaged across the nine electrode regions used in the statistical analysis are summarized in [Table 2]. Separate repeated-measures ANOVAs were completed for N1' and P2' peak latencies and amplitudes to the fricatives, and the results from the full model ANOVAs for each component are summarized in [Table 3].

Table 2

Mean peak amplitude and latency values (standard deviation) averaged across the nine electrode regions for the ACC (N1'–P2') components elicited by the fricatives used in the statistical analysis

                            Unaided                         Aided
ERP peaks                   /ɑs/            /ɑʃ/            /ɑs/            /ɑʃ/
N1'   Amplitude (μV)        −1.8 (1.9)      −2.4 (1.8)      −2.02 (1.5)     −1.65 (2.0)
      Latency (ms)          333.4 (44.3)    352.11 (44.7)   346.35 (36.3)   344.35 (42.9)
P2'   Amplitude (μV)        0.99 (1.9)      0.02 (1.6)      0.36 (1.3)      0.03 (1.8)
      Latency (ms)          396.5 (23.6)    394.9 (23.5)    398.9 (27.8)    408.7 (27.1)

Abbreviation: ACC, acoustic change complex.


Table 3

Repeated-measures ANOVA results summary

Amplitude
  N1': fricative identity F = 0.05, p = 0.83, partial η² = 0.006; amplification F = 0.8, p = 0.39, partial η² = 0.08; electrode region F = 0.59, p = 0.46, partial η² = 0.06; hemisphere F = 4.4, p = 0.03, partial η² = 0.33; significant interactions: none.
  P2': fricative identity F = 5.68, p = 0.04, partial η² = 0.39; amplification F = 0.65, p = 0.44, partial η² = 0.06; electrode region F = 2.48, p = 0.14, partial η² = 0.21; hemisphere F = 0.39, p = 0.63, partial η² = 0.04; significant interaction: amp × elec, F = 6.8, p = 0.007, partial η² = 0.43.

Latency
  N1': fricative identity F = 1.57, p = 0.24, partial η² = 0.15; amplification F = 0.19, p = 0.67, partial η² = 0.02; electrode region F = 18.5, p = 0.0001, partial η² = 0.67; hemisphere F = 0.56, p = 0.5, partial η² = 0.06; significant interactions: amp × elec (F = 4.9, p = 0.025, partial η² = 0.35), hemi × elec (F = 3.82, p = 0.009, partial η² = 0.29), amp × elec × fric (F = 4.57, p = 0.027, partial η² = 0.34).
  P2': fricative identity F = 0.45, p = 0.39, partial η² = 0.05; amplification F = 1.1, p = 0.32, partial η² = 0.11; electrode region F = 0.6, p = 0.56, partial η² = 0.06; hemisphere F = 1.38, p = 0.27, partial η² = 0.13; significant interaction: amp × hemi × elec, F = 2.75, p = 0.04, partial η² = 0.24.

Notes: All significant interactions observed between the within-subject factors are listed for each component. Significant effects were defined as p < 0.05. Fricative identity (fric); amplification (amp); electrode region (elec); hemisphere (hemi).


Fig. 6 Grand mean ERP waveforms elicited by the /ɑs/ (solid line) and /ɑʃ/ (dotted line) stimuli for the nine electrode regions in the unaided (gray scale) and aided (black) conditions. Negative plotted up; linked mastoid reference. ERP, event-related potential.

Neural Index of the Syllable-Final Fricative Contrast

The first aim of our study was to determine whether the neural coding of /s/–/ʃ/ contrast in the syllable-final position differed in the unaided condition. Part of our second aim was to examine whether the hearing aid preserved this neural coding relationship. Results of our statistical analysis indicated that the P2' component of the ACC significantly differed for /s/ versus /ʃ/ in both the unaided and aided conditions. Repeated-measures ANOVA showed a significant main effect of fricative identity (syllable-final /s/ vs. /ʃ/) for P2' amplitudes, with syllable-final /s/ eliciting significantly greater P2' amplitudes than syllable-final /ʃ/ [F(1,9) = 5.676, p = 0.04]. The lack of a significant fricative identity × amplification (unaided vs. aided) interaction [F(1,9) = 1.43, p > 0.05] suggests that the larger P2' peak component to /s/ serves as an index for the syllable-final /s/–/ʃ/ contrast with and without hearing aid amplification.



Effects of Amplification on Neural Coding of the Fricative Contrast

The second aim of our study was to examine whether hearing aid amplification preserved unaided neural coding patterns or whether amplification modified the brain's responses to the fricative stimuli. Our results suggest that acoustic modifications introduced by the hearing aid altered the latencies of N1' peaks to the fricatives relative to the unaided responses. Repeated-measures ANOVA indicated a significant main effect of the electrode region (frontal vs. central vs. parietal) [F(2,18) = 18.5, p = 0.0001], and significant amplification × electrode region [F(2,18) = 4.913, p = 0.025] and electrode region × hemisphere (left, midline, and right) [F(4,36) = 3.824, p = 0.009] interactions. The three-way interaction between amplification × electrode region × fricative identity was also significant for N1' peak latencies [F(2,18) = 4.57, p = 0.027].

To examine the significant three-way interaction and analyze how amplification affected N1' peak latencies to the fricative contrast across the different electrode regions, post-hoc ANOVAs including the factors of fricative identity, hemisphere, and electrode region were performed within each level of amplification. These analyses indicated that the electrode region × fricative identity interaction was significant in the aided condition [F(2,18) = 4.8, p = 0.023], but not in the unaided condition [F(2,18) = 0.664, p = 0.521]. As shown in [Fig. 7], without the hearing aid, N1' latencies to syllable-final /ʃ/ were consistently later than those to syllable-final /s/ across frontal, central, and parietal electrode regions, but hearing aid amplification delayed N1' to /s/ to a greater degree across these electrode regions, resulting in similar aided N1' latencies for /s/ and /ʃ/.

Fig. 7 Mean N1' and P2' peak latency (ms) and amplitude (μV) values averaged across the frontal (F), central (C), and parietal (P) electrode regions for /ɑs/ and /ɑʃ/ in the unaided and aided conditions. Error bars represent standard error of the mean.

Our results also indicated hearing aid signal processing significantly altered P2' latencies relative to unaided responses. Repeated-measures ANOVA showed that for P2' latencies, there was a significant three-way interaction between the main effects of amplification, hemisphere, and electrode region [F(4,36) = 2.749, p = 0.04]. Post-hoc ANOVAs within each level of electrode region indicated that the amplification × hemisphere interaction was significant for the frontal electrode region only [F(2,18) = 4.405, p = 0.037]. Within the frontal electrodes, post-hoc ANOVAs indicated a significant main effect of hemisphere on P2' latencies in the aided condition [F(2,18) = 3.6, p = 0.049], but not in the unaided condition [F(2,18) = 1.973, p = 0.178]. [Fig. 8] shows that P2' latencies in the right frontal electrode region showed the largest mean difference across unaided and hearing aid conditions compared with left-frontal or midline-frontal electrode regions.

Fig. 8 Mean P2' peak latency (ms) values for the fricatives in the frontal electrode region at the left, midline, and right hemisphere sites in the unaided and aided conditions. Error bars represent standard error of the mean.

Hearing aid amplification also altered P2' peak amplitudes to the fricatives relative to the unaided neural response patterns. For P2' amplitudes, repeated-measures ANOVA showed a significant interaction between amplification and electrode region [F(2,18) = 6.802, p = 0.007]. Post-hoc one-way ANOVAs within each level of amplification indicated a significant effect of electrode region within the aided condition [F(2,18) = 5.56, p = 0.037], but not in the unaided condition [F(2,18) = 0.217, p = 0.807]. In the aided condition, pairwise comparisons revealed that P2' amplitudes were significantly larger in the parietal electrode region compared with the central electrode region for the fricative stimuli (p = 0.03).



Discussion

The results of the present study demonstrated that syllable-final voiceless fricatives /s/ and /ʃ/ were differentially encoded in the auditory cortex with and without a hearing aid in listeners with NH. While hearing aids modified the acoustic cues of the speech sounds, spectral and temporal differences for the fricative contrast were reflected in the neural representations.

Distinct Neural Coding for the /ɑs/–/ɑʃ/ Contrast with and without a Hearing Aid

Listeners with NH and HI rely primarily on differences in spectral cues for fricative coda perception,[2] making it imperative that hearing aids preserve spectral differences in contrasts such as /s/ versus /ʃ/. Our unaided in-the-canal recordings documented that the peak spectral energy of /s/ occurred at a higher frequency than that of /ʃ/. However, the hearing aid shifted the peak spectral energy of syllable-final /s/ to a lower frequency, while the location of the spectral peak for syllable-final /ʃ/ remained similar in the unaided and aided conditions ([Fig. 3]). While the hearing aid modified the acoustic characteristics of the fricatives, the contrast remained acoustically distinct, and behavioral testing revealed that perception of the contrast in the aided and unaided conditions was at ceiling. Moreover, the ACC elicited by /s/ significantly differed from the ACC to /ʃ/ in both the unaided and aided conditions. In the unaided condition, P2' amplitudes were significantly larger to syllable-final /s/ compared with syllable-final /ʃ/ across all electrodes. Importantly, in the aided condition, the P2' component of the ACC also indexed the fricative contrast and remained significantly larger for /s/ than for /ʃ/ with the hearing aid.

The ACC is generated by spectral and temporal changes within an ongoing speech sound,[23] [24] [41] and our results suggest that it is sensitive to acoustic cues that differentiate syllable-final fricatives. Whether the N1' and P2' components of the ACC have the same neural generators as the initial N1–P2 complex, and thus reflect the same cortical processes, is unknown. However, in the syllable-initial context, the P2 component of the P1–N1–P2 complex is thought to reflect stimulus categorization,[42] and our P2' results are consistent with this interpretation.



Effect of Amplification on Neural Coding of Fricatives

While the P2' component of the ACC reliably indexed the speech contrast in both the aided and unaided conditions, acoustic modifications by the hearing aid altered the unaided ACC responses to the syllable-final fricatives. In the unaided condition, N1' latencies to /ɑs/ were significantly earlier than to /ɑʃ/ across frontal, central, and parietal electrode regions ([Fig. 7]). This result is consistent with previous findings showing that higher frequency stimuli often elicit earlier N1 components compared with lower frequency stimuli.[43] However, with the hearing aid, N1' latencies to /ɑs/ were significantly delayed at central and parietal regions and only differed from /ɑʃ/ at frontal electrode sites ([Fig. 7]). The hearing aid shifted the high-frequency spectral energy and altered the spectral envelope tilt of /s/, making it acoustically more similar to /ʃ/ in the aided condition ([Fig. 3]). It is possible that these spectral changes introduced by the hearing aid diminished aided neural coding differences to the fricative contrast. This interpretation is consistent with models suggesting that the N1 component reflects stimulus feature encoding.[22] [44]

Hearing aid compression also differentially modified the temporal envelopes of our stimuli ([Figs. 4] and [5]), which could likewise have contributed to the delayed aided N1' latencies to /s/ relative to the unaided condition. The EDI, which quantifies the effects of compression on a stimulus, was larger for unaided versus aided /ɑs/ (EDI = 0.26) than for /ɑʃ/ (EDI = 0.21), indicating that hearing aid compression distorted the temporal envelopes of both stimuli but affected /ɑs/ to a greater degree. Distortion introduced by hearing aid compression significantly altered the relative normalized amplitude differences between the initial vowel and the syllable-final fricative, with a larger resultant amplitude difference between the vowel and the /s/ segment than between the vowel and the /ʃ/ segment ([Fig. 5]). The peaks of the ACC are affected by changes in the amplitude envelope,[24] and the greater distortion to the envelope of /ɑs/ may have contributed to delayed processing for N1' peaks. Examination of the stimulus envelopes also indicates that hearing aid compression altered the rise time of the fricatives, with aided /ʃ/ having a more abrupt onset than the aided /s/ portion ([Fig. 5]). Easwar et al[17] showed that a fricative with a more abrupt rise time produces an earlier N1 latency than the same fricative with a slower rise time. Thus, it is also possible that the slower rise time for aided /s/ contributed to the delayed N1' latencies with the hearing aid.

Use of high-density EEG revealed that the hearing aid also modified neural representations to the syllable-final fricatives across hemispheric sites and varying scalp regions. For example, as evidenced by our P2' latency results, the hearing aid significantly modified hemispheric processing at the frontal electrode sites relative to the unaided condition ([Fig. 8]). In addition, a significant effect of electrode region was observed for aided P2' amplitudes within the parietal scalp region that was not present in the unaided condition. A dual-pathway model for speech and language suggests that cortical processing of speech begins bilaterally in superior temporal gyrus and then splits into parallel ventral and dorsal processing streams that project to temporal, parietal, and frontal gyri.[45] The ventral stream is thought to be engaged in sound classification and recognition, whereas the dorsal stream is thought to be involved in articulatory motor movements and planning.[45] While electrode-site effects in the ERP components may reflect contributions of different cortical/subcortical source activities, they are not equivalent to source localization results. Accordingly, what neural architecture accounts for the changes we observed in the aided neural responses relative to the unaided responses to the fricative contrast is still unknown. To control for equivalency of cognitive load across aided and unaided conditions, we employed a passive listening design where the participants were instructed to ignore the stimuli and watch a muted movie for the duration of testing. Thus, it is likely acoustic differences from hearing aid signal processing mainly contributed to the observed aided neural response patterns. Furthermore, we enrolled listeners with NH to isolate the effects of amplification on the neural coding of syllable-final fricatives, so the differences in aided responses across hemispheric and scalp regions we observed relative to the unaided condition also cannot be the result of plastic reorganization due to sensory deprivation. Thus, our results suggest that acoustic alterations from hearing aid amplification likely differentially modified neural coding of fricative sounds.



Study Limitations

The present study aimed to examine whether spectral differences in an /ɑs/–/ɑʃ/ contrast elicited distinct neural responses in an unaided condition and whether hearing aid signal processing modified responses. For ecological validity, we recorded neural responses using a traditional 12-channel hearing aid with an average of 10 dB of gain that altered the spectrotemporal characteristics as well as the intensity level of the stimuli. We did not use an acoustic control condition, though, so it is impossible to disentangle the effects of hearing aid signal processing from presentation-level effects on the observed neural responses to the fricative contrast in the aided condition. Future studies should use an acoustic control condition where neural responses to the fricative contrast are measured in response to (1) intensity-level differences without the use of the hearing aid and (2) hearing aid signal processing without gain.

The hearing aid was programmed with fast-acting compression time constants and a 1.125:1 compression ratio to maximize audibility of the low-intensity, syllable-final fricatives. Use of these fast time constants, though, introduced distortion to the temporal envelopes of the stimuli and compressed the stimuli beyond our programmed compression ratio, which could also account for the changes to the unaided neural response patterns we observed. Future studies should examine whether the use of slower attack and release times, which better preserve temporal envelope modulations, produces similar patterns of aided neural responses to fricative stimuli.



Conclusion

The results of the present study suggest that hearing aid amplification alters neural representations of syllable-final fricatives in a complex manner. Consistent with results for syllable-initial fricative sounds,[28] normal-hearing listeners were able to discriminate the contrast with ease, and their aided and unaided ACC components significantly differed for /ɑs/ versus /ɑʃ/, suggesting differentiated cortical processing of the speech contrast that is sensitive to the use of hearing aids. Together, the ERP results revealed that hearing aids altered the cortical processing of fricative contrasts across the scalp in both onset and coda positions. Acclimatization to hearing aid use, when measured using longitudinal speech recognition scores, has a long time course.[46] Our results indicate that hearing aid signal processing altered the spectral and temporal properties of the fricatives, and that these acoustic changes corresponded with changes in the neural responses to the contrast. Therefore, even though behavioral responses to the fricatives were unaffected by amplification, the brain would need to accommodate these acoustic changes from hearing aid signal processing to recognize the respective sound categories.



Conflicts of Interest

None declared.

  • References

  • 1 Hedrick MS, Younger MS. Labeling of /s/ and /∫/ by listeners with normal and impaired hearing, revisited. J Speech Lang Hear Res 2003; 46 (03) 636-648
  • 2 Pittman AL, Stelmachowicz PG. Perception of voiceless fricatives by normal-hearing and hearing-impaired children and adults. J Speech Lang Hear Res 2000; 43 (06) 1389-1401
  • 3 Zeng FG, Turner CW. Recognition of voiceless fricatives by normal and hearing-impaired subjects. J Speech Hear Res 1990; 33 (03) 440-449
  • 4 Harris KS. Cues for the discrimination of American English fricatives in spoken syllables. Lang Speech 1958; 1: 1-7
  • 5 Heinz JM, Stevens KN. On the properties of voiceless fricatives. J Acoust Soc Am 1961; 33: 589-596
  • 6 Hughes GW, Halle M. Spectral properties of fricative consonants. J Acoust Soc Am 1956; 28: 303-310
  • 7 Redford MA, Diehl RL. The relative perceptual distinctiveness of initial and final consonants in CVC syllables. J Acoust Soc Am 1999; 106 (3, Pt 1): 1555-1565
  • 8 Dubno JR, Dirks DD, Langhofer LR. Evaluation of hearing-impaired listeners using a Nonsense-syllable Test. II. Syllable recognition and consonant confusion patterns. J Speech Hear Res 1982; 25 (01) 141-148
  • 9 Souza PE, Tremblay KL. New perspectives on assessing amplification effects. Trends Amplif 2006; 10 (03) 119-143
  • 10 Stelmachowicz PG, Kopun J, Mace A, Lewis DE, Nittrouer S. The perception of amplified speech by listeners with hearing loss: acoustic correlates. J Acoust Soc Am 1995; 98 (03) 1388-1399
  • 11 Stevens KN. Acoustic Phonetics. Cambridge, MA: MIT Press; 1998
  • 12 Ladefoged P. Elements of Acoustic Phonetics. Chicago, IL: University of Chicago Press; 1962
  • 13 Kimlinger C, McCreery R, Lewis D. High-frequency audibility: the effects of audiometric configuration, stimulus type, and device. J Am Acad Audiol 2015; 26 (02) 128-137
  • 14 Tremblay K, Kraus N, McGee T. The time course of auditory perceptual learning: neurophysiological changes during speech-sound training. Neuroreport 1998; 9 (16) 3557-3560
  • 15 Arlinger S, Gatehouse S, Bentler RA. et al. Report of the Eriksholm Workshop on auditory deprivation and acclimatization. Ear Hear 1996; 17 (03) 87S-98S
  • 16 Kuk FK, Potts L, Valente M, Lee L, Picirrillo J. Evidence of acclimatization in persons with severe-to-profound hearing loss. J Am Acad Audiol 2003; 14 (02) 84-99
  • 17 Easwar V, Glista D, Purcell DW, Scollie SD. The effect of stimulus choice on cortical auditory evoked potentials (CAEP): consideration of speech segment positioning within naturally produced speech. Int J Audiol 2012; 51 (12) 926-931
  • 18 Swink S, Stuart A. Auditory long latency responses to tonal and speech stimuli. J Speech Lang Hear Res 2012; 55 (02) 447-459
  • 19 Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: a mismatch negativity study. Hear Res 2016; 339: 40-49
  • 20 Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: a P3 study. Hear Res 2017; 350: 58-67
  • 21 Key APF, Dove GO, Maguire MJ. Linking brainwaves to the brain: an ERP primer. Dev Neuropsychol 2005; 27 (02) 183-215
  • 22 Näätänen R, Winkler I. The concept of auditory stimulus representation in cognitive neuroscience. Psychol Bull 1999; 125 (06) 826-859
  • 23 Hari R. Activation of the human auditory cortex by speech sounds. Acta Otolaryngol Suppl 1991; 491: 132-137 , discussion 138
  • 24 Martin BA, Boothroyd A. Cortical, auditory, evoked potentials in response to changes of spectrum and amplitude. J Acoust Soc Am 2000; 107 (04) 2155-2161
  • 25 Kaukoranta E, Hari R, Lounasmaa OV. Responses of the human auditory cortex to vowel onset after fricative consonants. Exp Brain Res 1987; 69 (01) 19-23
  • 26 Sharma A, Marsh CM, Dorman MF. Relationship between N1 evoked potential morphology and the perception of voicing. J Acoust Soc Am 2000; 108 (06) 3030-3035
  • 27 Tremblay KL, Billings CJ, Friesen LM, Souza PE. Neural representation of amplified speech sounds. Ear Hear 2006; 27 (02) 93-103
  • 28 Miller S, Zhang Y. Neural coding of phonemic fricative contrast with and without hearing aid. Ear Hear 2014; 35 (04) e122-e133
  • 29 Zhang Y, Kuhl PK, Imada T, Kotani M, Tohkura Y. Effects of language experience: neural commitment to language-specific auditory patterns. Neuroimage 2005; 26 (03) 703-720
  • 30 Moulines E, Charpentier F. Pitch-synchronous wave-form processing techniques for text-to-speech synthesis using diphones. Speech Commun 1990; 9: 453-467
  • 31 Boersma P, Weenink DJM. PRAAT, a system for doing phonetics by computer. Glot International 2001; 5 (9–10): 341-345
  • 32 Fortune TW, Woodruff BD, Preves DA. A new technique for quantifying temporal envelope contrasts. Ear Hear 1994; 15 (01) 93-99
  • 33 Souza PE. Effects of compression on speech acoustics, intelligibility, and sound quality. Trends Amplif 2002; 6 (04) 131-165
  • 34 Rao A, Zhang Y, Miller S. Selective listening of concurrent auditory stimuli: an event-related potential study. Hear Res 2010; 268 (1–2): 123-132
  • 35 Zhang Y, Koerner T, Miller S. et al. Neural coding of formant-exaggerated speech in the infant brain. Dev Sci 2011; 14 (03) 566-581
  • 36 Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 2004; 134 (01) 9-21
  • 37 Bell AJ, Sejnowski TJ. An information-maximization approach to blind separation and blind deconvolution. Neural Comput 1995; 7 (06) 1129-1159
  • 38 Gilley PM, Sharma A, Dorman M, Finley CC, Panch AS, Martin K. Minimization of cochlear implant stimulus artifact in cortical auditory evoked potentials. Clin Neurophysiol 2006; 117 (08) 1772-1782
  • 39 Jung TP, Makeig S, Humphries C. et al. Removing electroencephalographic artifacts by blind source separation. Psychophysiology 2000; 37 (02) 163-178
  • 40 Miller S, Zhang Y. Validation of the cochlear implant artifact correction tool for auditory electrophysiology. Neurosci Lett 2014; 577: 51-55
  • 41 Ostroff JM, Martin BA, Boothroyd A. Cortical evoked response to acoustic change within a syllable. Ear Hear 1998; 19 (04) 290-297
  • 42 Ceponiene R, Torki M, Alku P, Koyama A, Townsend J. Event-related potentials reflect spectral differences in speech and non-speech stimuli in children and adults. Clin Neurophysiol 2008; 119 (07) 1560-1577
  • 43 Jacobson GP, Lombardi DM, Gibbens ND, Ahmad BK, Newman CW. The effects of stimulus frequency and recording site on the amplitude and latency of multichannel cortical auditory evoked potential (CAEP) component N1. Ear Hear 1992; 13 (05) 300-306
  • 44 Näätänen R, Picton T. The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 1987; 24 (04) 375-425
  • 45 Hickok G, Poeppel D. Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 2004; 92 (1–2): 67-99
  • 46 Gatehouse S. The time course and magnitude of perceptual acclimatization to frequency responses: evidence from monaural fitting of hearing aids. J Acoust Soc Am 1992; 92 (03) 1258-1268

Address for correspondence

Sharon E. Miller, PhD

Publication History

Received: 15 April 2019

Accepted: 10 January 2020

Article published online:
27 April 2020

© 2020. American Academy of Audiology. This article is published by Thieme.

Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA
