Int Arch Otorhinolaryngol 2016; 20(03): 226-234
DOI: 10.1055/s-0035-1571159
Original Research
Thieme Publicações Ltda, Rio de Janeiro, Brazil

Brainstem Encoding of Aided Speech in Hearing Aid Users with Cochlear Dead Region(s)

Mohammad Ramadan Hassaan, Ola Abdallah Ibraheem, Dalia Helal Galhom

Audiology Unit, Otorhinolaryngology Department, Faculty of Medicine, Zagazig University, Zagazig, Sharkia, Egypt

Address for correspondence

Mohammad Ramadan Hassaan, Associate Professor of Audiology
40-B AbdulAziz Ali st.
Zagazig 44519
Egypt   

Publication History

19 June 2015

04 October 2015

Publication Date:
01 February 2016 (online)

 

Abstract

Introduction Neural encoding of speech begins in the cochlea, where the signal is broken down into its sinusoidal components; this spectral representation must be preserved up to the higher auditory centers. Some of these components target dead regions of the cochlea, causing little or no excitation. Measuring the aided speech-evoked auditory brainstem response elicited by speech stimuli with different spectral maxima can give insight into the brainstem encoding of aided speech whose spectral maxima fall at these dead regions.

Objective This research aims to study the impact of dead regions of the cochlea on speech processing at the brainstem level after a long period of hearing aid use.

Methods This study comprised 30 ears without dead regions and 46 ears with dead regions at low, mid, or high frequencies. For all ears, we measured the aided speech-evoked auditory brainstem response using speech stimuli of low, mid, and high spectral maxima.

Results Aided speech-evoked auditory brainstem responses were recordable in all subjects. Responses evoked by stimuli with spectral maxima at dead regions had longer latencies and smaller amplitudes when compared with the control group or with the responses to the other stimuli.

Conclusion The presence of cochlear dead regions affects brainstem encoding of speech with spectral maxima corresponding to these regions. Brainstem neuroplasticity and the extrinsic redundancy of speech can minimize the impact of dead regions in chronic hearing aid users.



Introduction

The proper temporal resolution of complex auditory signals, such as speech, begins with the spectral selectivity of the inner hair cells (IHCs) of the cochlea.[1] Whenever a sector of IHCs is non-functioning, the acoustic components assigned to it tend to stimulate the nearby functioning edges, leaving that region unstimulated; such a region is called a dead region of the cochlea (DR). This impairment in cochlear selectivity causes spectrally different parts of the acoustic signal to overlap in the same auditory fibers and, consequently, in the same neurons of the higher auditory centers, which affects the temporal resolution of complex sounds.[2] [3]

Auditory performance may vary from one subject to another according to the presence or absence of DRs, even among those with identical audiograms. In addition, the presence of DRs can have several perceptual consequences: abnormal pitch perception,[4] rapid growth of loudness,[5] and distorted perception of pure tones.[6] From a clinical perspective, these consequences affect speech perception capabilities.[7] Moore et al[8] developed the threshold-equalizing noise (TEN) test for diagnosing DRs; it is reasonably quick, easy to administer, and suitable for clinical practice.

The neural encoding of sound begins in the auditory nerve as it arises from the cochlea and travels to the auditory brainstem.[9] Studying the brainstem processing of speech sounds with spectral content corresponding to cochlear dead region(s) may help clarify the effect of peripheral hearing loss on speech processing. Brainstem processing of speech in hearing-impaired subjects can be examined with an objective, noninvasive, and reliable tool: the speech-evoked auditory brainstem response (sABR).[10] [11] Speech syllables of brief duration, such as /ba/, /ga/, and /da/, are usually used to elicit the sABR because of their time-varying properties and considerable phonetic information, particularly in the stop consonants, making them perceptually applicable in clinical populations.[12] These stimuli are characterized by different spectral maxima across the hearing frequency range. This difference is a major determinant of their differential perception, the so-called “perceptual hypothesis.”[13] [14] [15] [16]

This study examined the brainstem neural encoding of three spectrally different speech stimuli using the sABR technique in regular hearing aid users with and without cochlear DRs. Differences between the responses to the three stimuli, if any, were related to the site of the DRs. A unique sABR pattern associated with DRs could add to the test battery for DR assessment. This would improve decisions about amplification benefit and the choice of suitable algorithms to overcome the effect of the DR on speech perception.



Methodology

Participants

The study participants were 40 adults, ranging in age from 18 to 40 years, with bilateral sensorineural hearing loss (SNHL) of moderate to severe degree across the 250–8000 Hz frequency range. They were selected according to the following criteria:

  1. Normal middle ear function.

  2. No evidence of auditory neuropathy or neurological deficits.

  3. Regular use of appropriately fitted digital hearing aids with wide dynamic range compression in the tested ears for at least 5 years, to ensure adequate auditory stimulation.

  4. Aided monaural pure-tone thresholds ≤ 25 dB HL across the 500 Hz–4 kHz frequency range.

Based on the TEN test, participants were classified into groups according to the presence or absence of DRs and their site, if any, as follows:

  • (1) The group with no DRs served as the control group (CG). It consisted of 15 subjects (30 ears) matching the study group (SG) in gender (40% male and 60% female) and average age (mean ± SD: 33.4 ± 4.9 years) [t (p) = 0.559 (0.580)]. The average hearing thresholds (mean ± SD) were 55 ± 7.5 dB HL at 0.25 kHz, 59 ± 9.9 dB HL at 0.5 kHz, 57 ± 9.7 dB HL at 1 kHz, 57 ± 7.5 dB HL at 2 kHz, 62.6 ± 6.5 dB HL at 4 kHz, and 67.6 ± 7 dB HL at 8 kHz.

  • (2) The group with DRs constituted the study group (SG) and consisted of 25 subjects (36% males and 64% females) with an average age of 32.5 ± 4.8 years. It was subdivided into three subgroups depending on the site of the DRs. Because dead regions were not always found at identical frequencies in both ears of the same subject, sub-grouping was performed per ear rather than per subject. The subgroups were:

    • Subgroup I (SGI): patients with low frequency DR (at 500 and/or 750 Hz), involving 11 ears.

    • Subgroup II (SGII): patients with mid frequency DR (at 1, 1.5 and/or 2 kHz), involving 14 ears.

    • Subgroup III (SGIII): patients with high frequency DR (at 3 and/or 4 kHz), involving 21 ears.

The study group comprised 46 ears. We excluded four ears from the study: two ears had severe to profound SNHL and therefore exceeded the hearing threshold level required in this study, one ear had no DRs, and one had DRs at both the mid and high frequency regions. The average hearing thresholds (mean ± SD) were 55.8 ± 8.7 dB HL at 0.25 kHz, 58.3 ± 6.8 dB HL at 0.5 kHz, 58.8 ± 6.6 dB HL at 1 kHz, 57 ± 5.7 dB HL at 2 kHz, 62.2 ± 6.3 dB HL at 4 kHz, and 68 ± 7.8 dB HL at 8 kHz.

Participants were recruited from the Audiology Unit of the Otorhinolaryngology Department at Zagazig University Hospitals between June 2014 and March 2015. Each participant provided informed written consent after being informed about the purpose of the study. Institutional review board approval (number 1869/20–6-2014) was acquired for the study procedures in June 2014.



Equipment

Basic audiological evaluation was performed using a Madsen middle ear analyzer (model Zodiak 902, U.S.A.) and a Madsen two-channel audiometer (model Orbiter 922, version 2, Hauppauge, New York, U.S.A.). The TEN test CD was played on a CD player whose output was routed to the audiometer. We used an auditory evoked potential system (model Smart EP, version 2.39, Intelligent Hearing Systems, Miami, Florida, U.S.A.) for the click- and speech-evoked ABR recordings.



Procedures

The procedure was performed in two sessions: about two hours for the first session and one hour for the second, held within one week of the first. The first session involved: (1) full history taking, including duration and regularity of hearing aid use; (2) otoscopic examination; (3) middle ear analysis with tympanometry and acoustic reflexes to confirm normal middle ear function; (4) air conduction pure tone audiometry from 250 Hz up to 8 kHz and bone conduction pure tone audiometry from 500 Hz up to 4 kHz; (5) speech audiometry encompassing speech reception thresholds and speech recognition scores consistent with the pure-tone thresholds; (6) aided monaural pure tone responses from 500 Hz up to 4 kHz; and (7) the TEN test to detect or exclude cochlear DRs. The second session was devoted to aided sABR testing using the three stimuli /ba/, /ga/, and /da/.

The TEN Test

We controlled the levels of the noise masker and the tone signal using a separate audiometer channel for each, adjusting each channel's attenuator to the desired value. The following steps were carried out:

  1. The calibration tone was presented on track 1 and the audiometer was set so that both VU meters read 0 dB.

  2. The right channel (which contains noise) was turned off. Using the tone input from the left channel, we measured the absolute thresholds for each ear at 500, 750, 1000, 1500, 2000, 3000, and 4000 Hz.

  3. The two channels were then mixed and the desired noise level (in dB/ERB; equivalent rectangular bandwidth) was set using the right channel attenuator. We measured the masked threshold for each ear at each test frequency. A DR at a specific frequency is indicated by a masked threshold that is at least 10 dB above the absolute threshold and 10 dB above the nominal noise level per ERB[8] (this decision rule is sketched in code below).
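
For illustration only, the TEN-test decision rule in step 3 can be expressed as a small function. This is a minimal sketch assuming thresholds already measured in dB; the function name and the example values are hypothetical, not taken from the study data.

```python
def dead_region_at(absolute_db: float, masked_db: float, ten_level_db_erb: float) -> bool:
    """TEN-test rule: a dead region is indicated when the masked threshold is
    at least 10 dB above the absolute threshold AND at least 10 dB above the
    nominal noise level per ERB."""
    return (masked_db - absolute_db >= 10) and (masked_db - ten_level_db_erb >= 10)

# Hypothetical example: absolute threshold 60 dB, TEN at 70 dB/ERB,
# masked threshold 85 dB -> both margins exceed 10 dB, so a DR is suspected.
print(dead_region_at(60.0, 85.0, 70.0))  # True
```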



The Aided sABR

We used three different speech stimuli (/ba/, /ga/, and /da/) to elicit the sABR in all subjects. These consonant-vowel (CV) stimuli differ from each other in the dominant spectral energy of their consonant portion: the labial consonant /b/ has a spectral burst emphasizing low frequencies (near 0.8 kHz); the velar /g/ typically has a prominent peak of energy in the mid frequency region (near 1.6 kHz), giving a compact spectral form; and the alveolar /d/ has a predominance of energy at higher frequencies (close to 3 kHz). In contrast, the formant transitions representing the vowel portion are similar ([Table 1]). Thus, tracking the formant transitions would not allow syllable identification.[13] [14] [15] [16]

Table 1

Formant frequencies (in Hz) of each stimulus

                        /ba/          /ga/         /da/
Pitch (start–finish)    112.4–111.2   99.4–100     109.1–102.1
F1                      818           775          732
F2                      1378          1421         1335
F3                      2024          2242         2498
F4                      2800          3187         3058
F5                      4436          4613         3828

The durations of the /ba/, /ga/, and /da/ stimuli were 114.875, 213.250, and 206.275 milliseconds, respectively. They were delivered at a rate of 8.42/s and an intensity of 60 dBnHL with alternating polarity. The stimuli were delivered through a loudspeaker connected to the evoked potential equipment via an external amplifier. A sound level meter was used to calibrate the loudspeaker output according to ANSI specifications.[17] Subjects were semi-seated facing the loudspeaker at zero azimuth, one meter away. The loudspeaker height and the head position were kept constant during the examination. To keep subjects still with their attention away from the test stimulus, they watched a film on a laptop without sound. The laptop was placed beside the loudspeaker on the side contralateral to the tested ear, so that while the subject watched the film, the hearing aid microphone was practically in front of the loudspeaker outlet. To avoid stimulation of the non-tested ear, we blocked that ear by kinking the tubal end of its earmold with adhesive tape.

We obtained all recordings using cup electrodes. To keep electrode impedance below 3 kΩ, we cleaned the electrode sites with alcohol and scrubbed them with abrasive paste. The responses were differentially recorded from Fz (active) to the ipsilateral mastoid (reference), with the contralateral mastoid as ground. We collected two recordings of one thousand sweeps each from the right and left ears separately, using a band-pass filter of 30–3000 Hz and a gain of 100,000. Epochs containing myogenic artifacts were rejected using an artifact criterion of ±35 µV. We then plotted the recordings in a time window spanning 10 milliseconds before to 70 milliseconds after stimulus onset.
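
The rejection-and-averaging scheme described above can be summarized in a few lines of code. This is a minimal sketch under assumed array shapes; the variable names and simulated data are illustrative only, not the study's acquisition software.

```python
import numpy as np

def average_sweeps(epochs_uv: np.ndarray, reject_uv: float = 35.0) -> np.ndarray:
    """epochs_uv: (n_sweeps, n_samples) epoched recordings in microvolts.
    Epochs whose peak absolute amplitude exceeds the ±35 µV artifact
    criterion are rejected; the remaining sweeps are averaged."""
    keep = np.abs(epochs_uv).max(axis=1) <= reject_uv
    return epochs_uv[keep].mean(axis=0)

# Hypothetical demo: 1000 sweeps of simulated noise, 800 samples per epoch.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 5.0, size=(1000, 800))
print(average_sweeps(epochs).shape)  # (800,)
```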

The analysis of the first 70 milliseconds of the sABR to CV stimuli included transient, transitional, and sustained response features. The transient response reflects neural activity in response to speech events, including sound onset and offset. The transient portion was analyzed for its onset response, including wave V and A latencies and the V–A complex measures [interpeak amplitude, duration, and slope (interpeak amplitude/duration)], whereas the offset response (where the acoustic properties of the three stimuli are nearly identical, representing an end-point peak) was analyzed for wave O latency and amplitude. The sustained response reflects activity that is time-locked to periodic stimulus components or modulations and tends to resemble the stimulus features to which it is locked. It corresponds to the relatively unchanging vowel and was identified as negative troughs (D, E, and F) occurring every 10 milliseconds, measured for their latency and amplitude. The transitional negative wave C, located between the two portions of the sABR, represents the onset of voicing. It was analyzed for its amplitude and latency.[18] [19] [20]
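
As a worked illustration of the V–A complex measures defined above (slope = interpeak amplitude / duration), the computation reduces to simple arithmetic on the marked peaks. The peak values below are hypothetical and chosen only to show the calculation.

```python
from dataclasses import dataclass

@dataclass
class Peak:
    latency_ms: float
    amplitude_uv: float

def va_complex(v: Peak, a: Peak) -> tuple[float, float, float]:
    """Return (interpeak amplitude in µV, duration in ms, slope in µV/ms)."""
    amplitude = v.amplitude_uv - a.amplitude_uv  # positive peak V minus trough A
    duration = a.latency_ms - v.latency_ms
    return amplitude, duration, amplitude / duration

# Hypothetical peaks: V at 6.8 ms / +0.6 µV, A at 10.1 ms / -0.6 µV
amp, dur, slope = va_complex(Peak(6.8, 0.6), Peak(10.1, -0.6))
print(f"V-A amplitude = {amp:.2f} µV, duration = {dur:.2f} ms, slope = {slope:.2f} µV/ms")
```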



Statistical Analyses

We recorded the data gathered in this study in raw SPSS tables and analyzed them with the SPSS statistical software package, version 20 (SPSS Inc., Chicago, Illinois, U.S.A.). We performed simple descriptive analysis to calculate the mean values (X) and standard deviations (±SD) of the test variables. We compared the X of the CG with the study subgroups' sABR measures for the different stimuli by applying the independent sample t-test, calculating the t-value and its probability (p). We added the confidence limits (CLs) of the sABR measures of the CG to compare them with the X of the SG measures. We used a one-way ANOVA test to compare the X of the sABR wave measures of the three study subgroups for each of the three stimuli, calculating the F-value and its probability (p). For all tests, statistical significance was set at a p value less than 0.05.
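
The comparisons described above (independent-sample t-test, CG confidence limits, and one-way ANOVA across subgroups) follow a standard pattern; a minimal sketch with SciPy on made-up latency values is shown below. The numbers are illustrative only and are not the study data.

```python
import numpy as np
from scipy import stats

cg  = np.array([6.1, 6.8, 7.4, 6.5, 7.0])  # CG wave V latencies (ms), hypothetical
sg1 = np.array([12.9, 13.5, 14.1, 13.0])   # subgroup with a related DR, hypothetical

# Independent-sample t-test: CG versus one study subgroup
t, p = stats.ttest_ind(cg, sg1)

# 95% confidence limits of the CG mean
cl_low, cl_high = stats.t.interval(0.95, df=len(cg) - 1, loc=cg.mean(), scale=stats.sem(cg))

# One-way ANOVA comparing the three study subgroups
sg2 = np.array([7.8, 8.2, 8.0])
sg3 = np.array([6.5, 6.9, 6.7])
f, p_anova = stats.f_oneway(sg1, sg2, sg3)

print(f"t = {t:.2f} (p = {p:.3f}); CG 95% CLs = ({cl_low:.1f}, {cl_high:.1f}); "
      f"F = {f:.2f} (p = {p_anova:.3f})")
```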



Results

Aided sABR recordings revealed identifiable and repeatable waves in all ears (100%). We used the independent sample t-test to compare the X of the peak latencies, peak amplitudes, and V-A complex measures ([Tables 2] [3] [4]) of the aided sABR waves of the CG with those of the SG. We also estimated the upper and lower CLs of the sABR measures of the CG and included them in [Tables 2] [3] [4]. For a given stimulus, significant delays of peak latencies and significant reductions of peak amplitudes of the sABR waves were mostly observed in the study subgroup whose DRs corresponded to the stimulus's spectral maxima. In contrast, non-significant differences were mostly found between sABR wave peak latencies and amplitudes in the CG versus the other two study subgroups. The values of the study subgroup with significant differences fell outside the CLs of the CG values, while those with non-significant differences fell within the CLs, except for a few sporadic values that did not follow a specific pattern. [Fig. 1] shows examples of aided sABR elicited by /ba/, /ga/, and /da/ stimuli from top to bottom, respectively, with A = CG, B = SGI, C = SGII, and D = SGIII.

Table 2

Comparison of aided sABR wave peak latencies (in ms), peak amplitudes (in µV), and V-A complex measures in ba-CG versus each study subgroup for the /ba/ stimulus, using the independent sample t-test

sABR waves         ba-CG                      ba-SGI                        ba-SGII                        ba-SGIII
                   X (±SD) (CLs)              X (±SD)       t (p)           X (±SD)       t (p)            X (±SD)       t (p)
V latency          6.8 (±1.2) (6.1–7.4)       13.5 (±3)     −7.5 (0.000)    8 (±1)        −2.5 (0.018)     6.7 (±1.2)    −2.7 (0.012)
A latency          10.1 (±1.5) (9.3–11)       17 (±3.6)     −6.4 (0.000)    10.5 (±2.6)   −0.42 (0.675)    10.1 (±1.5)   −0.93 (0.360)
C latency          18.8 (±3.1) (17.1–20.5)    27 (±3.8)     −5.1 (0.000)    20.2 (±1)     −1.3 (0.217)     18.8 (±3.1)   −0.47 (0.639)
D latency          30 (±2) (28.9–31.1)        33.1 (±4.9)   −2.2 (0.043)    30.1 (±2.7)   −0.127 (0.900)   29.6 (±1.7)   0.48 (0.630)
E latency          37.6 (±1.2) (36.9–38.2)    45 (±4.3)     −6.3 (0.000)    38.6 (±1.1)   −2.0 (0.053)     38.4 (±1.6)   −1.4 (0.158)
F latency          47.9 (±1.1) (47.3–48.5)    54.8 (±3.7)   −6.8 (0.000)    44.6 (±2.7)   4.2 (0.000)      48.7 (±2.3)   −1.2 (0.253)
O latency          57.1 (±1.4) (56.4–57.9)    64.2 (±2.3)   −8.9 (0.000)    57.3 (±1.7)   −0.182 (0.858)   57.2 (±2)     −0.07 (0.942)
V-A amplitude      1.2 (±0.51) (1–1.5)        0.9 (±0.3)    −1.5 (0.148)    1.2 (±0.3)    0.01 (0.996)     1.4 (±0.6)    0.7 (0.504)
V-A duration (ms)  3.4 (±1.1) (2.7–4)         3.6 (±1.5)    −0.4 (0.720)    2.5 (±1.9)    1.3 (0.182)      2.8 (±0.8)    1.5 (0.151)
V-A slope          0.4 (±0.2) (0.3–0.5)       0.3 (±0.15)   −1.3 (0.204)    0.7 (±0.4)    −2.2 (0.037)     0.6 (±0.5)    1.2 (0.233)
C amplitude        1 (±0.31) (0.8–1.2)        0.7 (±0.2)    2.6 (0.015)     0.9 (±0.4)    0.6 (0.581)      1.2 (±0.3)    −1.5 (0.142)
D amplitude        1 (±0.19) (0.9–1.2)        0.6 (±0.2)    6.4 (0.000)     1.5 (±0.2)    −5.1 (0.000)     1.3 (±0.3)    −2.5 (0.021)
E amplitude        1.2 (±0.49) (0.9–1.5)      0.6 (±0.2)    4 (0.001)       1.6 (±0.5)    −2 (0.054)       1.9 (±0.9)    −2.4 (0.029)
F amplitude        1.3 (±0.40) (1.1–1.6)      0.8 (±0.3)    3.7 (0.001)     1.8 (±0.9)    −2.2 (0.041)     1.3 (±0.3)    0.5 (0.645)
O amplitude        1.3 (±0.36) (1.1–1.5)      0.5 (±0.3)    6 (0.000)       1.3 (±0.2)    −0.1 (0.933)     0.7 (±0.3)    3.8 (0.001)

Abbreviations: ba-CG, control group subjected to /ba/ stimulus; ba-SGI, study subgroup with low frequency dead region subjected to /ba/ stimulus; ba-SGII, study subgroup with mid frequency dead region subjected to /ba/ stimulus; ba-SGIII, study subgroup with high frequency dead region subjected to /ba/ stimulus; CLs, confidence limits; sABR, speech-evoked auditory brainstem response; SD, standard deviation; t (p), t-value and its probability; X, mean values.


Table 3

Comparison of aided sABR wave peak latencies (in ms), peak amplitudes (in µV), and V-A complex measures in ga-CG versus each study subgroup for the /ga/ stimulus, using the independent sample t-test

sABR waves       ga-CG                      ga-SGI                        ga-SGII                      ga-SGIII
                 X (±SD) (CLs)              X (±SD)       t (p)           X (±SD)      t (p)           X (±SD)       t (p)
V latency        9.1 (±1.7) (8.1–10.1)      11.4 (±0.7)   −3.1 (0.005)    17 (±1.7)    −9.7 (0.000)    10.3 (±2)     −1.7 (0.098)
A latency        13.9 (±4) (11.7–16.1)      15.3 (±1.3)   −0.85 (0.406)   20.3 (±2.3)  −4.2 (0.000)    13.5 (±1.8)   0.35 (0.733)
C latency        22 (±2.9) (20.4–23.5)      24 (±1)       −1.8 (0.084)    30 (±1.8)    −6.8 (0.000)    24 (±1.4)     −2.3 (0.032)
D latency        30.7 (±1.9) (29.7–31.8)    32.3 (±1.5)   −1.7 (0.089)    38 (±1.7)    −8.7 (0.000)    32.7 (±2.1)   −2.5 (0.021)
E latency        41 (±3) (39.4–42.7)        43.3 (±2)     −1.7 (0.103)    49 (±3.4)    −5.7 (0.000)    42 (±2.1)     −0.8 (0.431)
F latency        50 (±2.3) (49–51.6)        54 (±3.2)     −3 (0.008)      57 (±1.4)    −7.6 (0.000)    52 (±2.3)     −1.9 (0.076)
O latency        59 (±2.6) (57.7–60.5)      62.2 (±1.7)   −2.7 (0.016)    64 (±0.8)    −5.4 (0.000)    60.5 (±2.5)   −1.3 (0.194)
V-A amplitude    1.7 (±0.5) (1.4–2)         1.5 (±0.7)    0.4 (0.708)     0.8 (±0.5)   4.8 (0.000)     1.3 (±0.5)    1.9 (0.071)
V-A duration     4.8 (±2.8) (3.3–6.3)       3.9 (±1.4)    0.7 (0.471)     3.9 (±0.6)   0.9 (0.368)     3.1 (±1.3)    1.9 (0.072)
V-A slope        0.5 (±0.3) (0.3–0.7)       0.4 (±0.1)    0.8 (0.462)     0.2 (±0.08)  2.5 (0.022)     0.5 (±0.3)    0.22 (0.831)
C amplitude      1.3 (±0.8) (0.9–1.8)       1.4 (±0.5)    −0.27 (0.778)   0.5 (±0.2)   3.5 (0.002)     1.1 (±0.5)    0.7 (0.473)
D amplitude      1.6 (±0.8) (1.1–2.1)       0.8 (±0.6)    2.1 (0.050)     0.4 (±0.2)   4.7 (0.000)     2.1 (±0.7)    −1.5 (0.153)
E amplitude      1.4 (±0.7) (1–1.7)         1.8 (±0.9)    −1.3 (0.218)    0.3 (±0.1)   4.9 (0.000)     1.7 (±0.5)    −1.1 (0.279)
F amplitude      1.7 (±1) (1.1–2.2)         1.5 (±0.3)    −0.52 (0.606)   0.6 (±0.1)   3.7 (0.001)     1.4 (±0.4)    0.7 (0.466)
O amplitude      1.2 (±0.5) (1–1.5)         1.2 (±0.8)    0.24 (0.808)    0.5 (±0.2)   4.7 (0.001)     1.7 (±0.5)    −2.2 (0.033)

Abbreviations: CLs, confidence limits; ga-CG, control group subjected to /ga/ stimulus; ga-SGI, study subgroup with low frequency dead region subjected to /ga/ stimulus; ga-SGII, study subgroup with mid frequency dead region subjected to /ga/ stimulus; ga-SGIII, study subgroup with high frequency dead region subjected to /ga/ stimulus; sABR, speech-evoked auditory brainstem response; SD, standard deviation; t (p), t-value and its probability; X, mean values.


Table 4

Comparison of aided sABR wave peak latencies (in ms), peak amplitudes (in µV), and V-A complex measures in da-CG versus each study subgroup for the /da/ stimulus, using the independent sample t-test

sABR waves       da-CG                      da-SGI                       da-SGII                      da-SGIII
                 X (±SD) (CLs)              X (±SD)      t (p)           X (±SD)      t (p)           X (±SD)       t (p)
V latency        8.2 (±2.1) (7.1–9.4)       9.8 (±3.2)   −1.3 (0.197)    8.9 (±1.3)   −0.8 (0.465)    10.4 (±2.3)   −2.4 (0.024)
A latency        11 (±1.6) (10.1–11.9)      12 (±2.2)    −1.3 (0.216)    11.9 (±1.6)  −1.3 (0.187)    13.6 (±3.4)   −2.6 (0.014)
C latency        20.5 (±2.8) (18.9–22)      21.5 (±1)    −0.9 (0.400)    20.8 (±1.3)  −0.3 (0.747)    25.4 (±3.7)   −3.8 (0.001)
D latency        31 (±3.7) (28.8–32.9)      32 (±1.9)    −0.6 (0.559)    33 (±2.3)    −1.5 (0.157)    35.5 (±2.8)   −3.4 (0.002)
E latency        40 (±3) (38–41.4)          41.3 (±2.8)  −1.1 (0.273)    42 (±1.7)    −1.6 (0.118)    42.6 (±3.7)   −2.2 (0.038)
F latency        51 (±2.9) (49.4–52.6)      51 (±1.9)    0.26 (0.796)    49 (±3)      1.7 (0.111)     52 (±0.8)     −0.6 (0.554)
O latency        60 (±3) (58.4–61.8)        62 (±0.9)    −1.3 (0.198)    60 (±1.6)    −3.4 (0.731)    61 (±1.3)     −0.6 (0.578)
V-A amplitude    1.1 (±0.6) (0.8–1.5)       1.6 (±0.3)   −1.6 (0.129)    2 (±0.5)     −3.2 (0.004)    0.54 (±0.2)   3 (0.006)
V-A duration     2.7 (±1.7) (1.7–3.7)       2.3 (±2.6)   0.5 (0.649)     3.1 (±0.6)   −0.6 (0.578)    3.3 (±2.1)    −0.8 (0.452)
V-A slope        0.6 (±0.5) (0.3–0.9)       0.2 (±1)     2.5 (0.022)     0.7 (±0.3)   −0.2 (0.829)    0.2 (±0.2)    2.5 (0.021)
C amplitude      1.5 (±1.3) (0.8–2.2)       1.1 (±0.4)   0.8 (0.442)     0.9 (±0.4)   1.4 (0.176)     0.75 (±0.4)   1.9 (0.068)
D amplitude      1.5 (±0.5) (1.3–1.8)       1.7 (±0.2)   −0.5 (0.640)    2 (±0.5)     −2 (0.062)      0.72 (±0.4)   4.7 (0.000)
E amplitude      1.9 (±1.1) (1.3–2.5)       1.1 (±1)     1.6 (0.123)     1.4 (±0.7)   1.2 (0.228)     0.61 (±0.4)   3.7 (0.001)
F amplitude      1.6 (±0.7) (1.2–2)         1.3 (±0.9)   0.9 (0.388)     1.6 (±0.4)   0.00 (0.993)    0.94 (±0.6)   2.5 (0.019)
O amplitude      2.1 (±2) (1–3.2)           0.8 (±0.4)   1.5 (0.141)     1.3 (±0.8)   1.1 (0.274)     1 (±0.5)      1.8 (0.084)

Abbreviations: CLs, confidence limits; da-CG, control group subjected to /da/ stimulus; da-SGI, study subgroup with low frequency dead region subjected to /da/ stimulus; da-SGII, study subgroup with mid frequency dead region subjected to /da/ stimulus; da-SGIII, study subgroup with high frequency dead region subjected to /da/ stimulus; sABR, speech-evoked auditory brainstem response; SD, standard deviation; t (p), t-value and its probability; X, mean values.


Fig. 1 Examples of aided sABR elicited from top to bottom by /ba/, /ga/, and /da/ stimuli, respectively. A shows the CG, B shows SGI, C shows SGII, and D shows SGIII. Abbreviations: sABR, speech-evoked auditory brainstem response; CG, control group; SGI, study subgroup with low frequency dead region; SGII, study subgroup with mid frequency dead region; SGIII, study subgroup with high frequency dead region.

We compared the sABR measures of the three study subgroups for each of the /ba/, /ga/, and /da/ stimuli using a one-way ANOVA test ([Table 5]). In general, a given stimulus elicited sABR waves that were delayed in timing and reduced in magnitude in the study subgroup with the related DRs compared with the other study subgroups.

Table 5

Comparison between sABR waves of the three study subgroups for each of the three stimuli (/ba/, /ga/, and /da/), using a one-way ANOVA test

sABR waves      /ba/                        /ga/                        /da/
                F (p)          Ordering*    F (p)          Ordering     F (p)          Ordering
V latency       27.9 (0.000)   (a b b)      31.2 (0.000)   (a b a)      0.96 (0.399)   (a a a)
A latency       18.3 (0.000)   (a b b)      33.3 (0.000)   (a b a)      1.1 (0.336)    (a a a)
C latency       28.3 (0.000)   (a b b)      37.5 (0.000)   (a b a)      8.1 (0.002)    (a a b)
D latency       2.9 (0.077)    (a ab b)     22.5 (0.000)   (a b a)      4.9 (0.017)    (a a b)
E latency       17.3 (0.000)   (a b b)      17.3 (0.000)   (a b a)      0.45 (0.642)   (a a a)
F latency       22.8 (0.000)   (a b c)      11.6 (0.000)   (a b a)      3.1 (0.067)    (ab a b)
O latency       28.2 (0.000)   (a b b)      9 (0.001)      (a b a)      2.1 (0.149)    (a a a)
V-A amplitude   2.3 (0.12)     (a ab b)     5.1 (0.015)    (a b a)      48 (0.000)     (a a b)
V-A duration    1.2 (0.035)    (a b b)      1.5 (0.238)    (a a a)      0.6 (0.574)    (a a a)
V-A slope       1.6 (0.022)    (a b b)      4.7 (0.019)    (a b a)      4.9 (0.017)    (a b a)
C amplitude     6.4 (0.007)    (a b b)      13.8 (0.000)   (a b a)      1.3 (0.281)    (a a a)
D amplitude     44.8 (0.000)   (a b b)      28.7 (0.000)   (a b a)      27.8 (0.000)   (a a b)
E amplitude     16.2 (0.000)   (a b b)      24.7 (0.000)   (a b a)      2.9 (0.043)    (a a b)
F amplitude     12 (0.000)     (a b b)      33.7 (0.000)   (a b a)      2.5 (0.05)     (a a b)
O amplitude     24.2 (0.000)   (a b c)      12.9 (0.000)   (a b a)      1.1 (0.353)    (a a a)

Abbreviations: ANOVA, analysis of variance; F (p), F-value and its probability; sABR, speech-evoked auditory brainstem response.

*Subgroups were given symbols (a, b, or c). Subgroups sharing the same symbol did not differ significantly; a different symbol indicates a significant difference.




Discussion

In this study, the aided sABR could be detected in all subjects, even when the stimulus spectral cues corresponded to the site of the DR, albeit with delayed and weakened neural responses. This agrees with Russo et al,[9] who reported that sABR waves are characterized by their replicability and reliability in all subjects. Aided stimulation produced sABR waves of a quality resembling those elicited with earphones, which coincides with the findings of Bellier et al.[11] This could be explained by the spread of excitation along the basilar membrane. In the case of a low frequency DR, a low-frequency tone will not be detected via neurons arising from the apical region of the cochlea, as the IHCs in that region are dead. However, the tone becomes audible when it produces sufficient neural excitation in the mid and high frequency regions, the so-called “upward spread of excitation.”[2] [21] Also, in subjects with mid frequency DRs, residual hearing at low and high frequencies is sufficient to permit good speech perception, reflecting upward and downward spread of excitation.[22] A high frequency DR produces a restricted downward spread of excitation in the cochlea.[2] However, there can be considerable individual variability, and marked downward spread of excitation can occur in some ears with DRs.[1]

Effect of Site of Dead Region on Speech Perception

The purpose of this study was to determine whether the brainstem differentially processes speech whose spectral cues relate mainly to cochlear DRs located at different frequency regions along the basilar membrane. This proposal is based on the frequency selectivity property of the auditory system. In people with impaired hearing, frequency regions (DRs) without functioning inputs to the auditory system may exist, reducing the ability to analyze and separate sounds of different frequencies, so that frequency selectivity and, consequently, speech perception are impaired.[22] [23] A principal finding of this study was the reduced neural encoding of speech at the level of the brainstem when the aided sABR was elicited by a DR-related stimulus. The weaker response reflects less excitation of these areas when stimulated by spectrally related stimuli. Despite this impaired response, the brainstem appeared able to process DR-related speech signals, which can be explained by the spread of excitation along the basilar membrane and the complex spectral nature of speech stimuli. However, this study examined chronic hearing aid users; the pattern of brainstem response could differ if the period of hearing aid use were insufficient for neuroplastic adaptation of the auditory system.



Low Frequency Dead Region

An important outcome of this study was the significant difference between the CG and SGI with respect to the neural encoding of the speech stimulus /ba/. It appeared as significant prolongation of neural conduction time and reduction of the instant energy of the aided sABR waves. However, SGII and SGIII exhibited brainstem processing of the /ba/ stimulus that did not differ markedly from that of the CG. To our knowledge, no research is available on the effect of DRs on the aided sABR, but several previous studies have examined the effect of differently located DRs on speech perception using different methods of assessment. An analogous effect of low frequency DRs on speech perception was demonstrated by Thornton and Abbas[24] and Van Tasell and Turner.[25] They suggested that low frequency speech components are partially processed by IHCs and neurons tuned to middle or high frequencies. Vinay et al[26] reached a similar conclusion: subjects with low frequency DRs benefited from amplification extending about one octave above the DR, whereas their performance deteriorated noticeably when amplification extended into the DR. Overall, the results concur that ears with low frequency DRs extract limited cues from the low frequency components of speech, while some of these low frequency cues can be utilized by the mid and high frequency regions of the basilar membrane. The mid and high frequency regions can also utilize their corresponding speech cues adequately.



Mid and High Frequency Dead Regions

The effect of mid and high frequency DRs on the sABR followed the same pattern as for the low frequency DRs. In other words, the sABR was less robust when evoked by a DR-related stimulus (/ga/ and /da/, respectively). Consistent with this, Moore and Alcántara[27] and Moore[2] reported that in subjects with mid frequency DRs, a mid frequency signal cue can be detected via IHCs and neurons of the apical and basal regions of the basilar membrane, permitting good speech perception. Moore et al,[8] Vickers et al,[28] and Baer et al[29] studied the effect of high frequency DRs on speech perception. Their findings indicate that subjects with a high frequency DR utilize little information from the high frequency cues of speech, so that speech processing depends mainly on the mid and low frequency cues.



Confidence Limits

The calculated CLs of the CG proved statistically valid, as the mean values of the study subgroup subjected to a DR-related stimulus mostly fell outside the limits. On the other hand, the values of the subgroups with unrelated DRs fell within these limits, except for sporadic values that did not constitute a statistical pattern. Despite this statistical advantage of the calculated CLs, it could be difficult to apply them clinically to individual cases because of the small differences between affected values and the CLs. It is more suitable to consider the pattern of response across all variables rather than comparing separate values. A protocol can be proposed in which the aided sABR is tested with the three speech stimuli; when most of a subject's latencies and amplitudes fall outside the CLs for a certain stimulus, a DR can be suspected in its frequency domain (a sketch of this decision rule follows).
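
The proposed pattern-based rule can be sketched as follows. This is an assumed decision rule for illustration, not a validated clinical protocol; the measure names, the 50% cutoff, and the patient values are hypothetical, while the example CLs are taken from the CG values for /ba/ in Table 2.

```python
def suspect_dead_region(measures: dict, cls: dict, fraction: float = 0.5) -> bool:
    """measures: {name: observed value}; cls: {name: (lower, upper) CG limits}.
    Suspect a DR in the stimulus's frequency domain when more than `fraction`
    of the sABR measures fall outside the control-group confidence limits."""
    outside = sum(1 for name, value in measures.items()
                  if not (cls[name][0] <= value <= cls[name][1]))
    return outside / len(measures) > fraction

# CG confidence limits for /ba/ (from Table 2) and hypothetical patient values:
cls_ba = {"V latency": (6.1, 7.4), "A latency": (9.3, 11.0), "O latency": (56.4, 57.9)}
patient = {"V latency": 13.2, "A latency": 16.5, "O latency": 63.8}
print(suspect_dead_region(patient, cls_ba))  # True -> low frequency DR suspected
```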



Conclusion

The brainstem can encode speech signals with spectral maxima corresponding to cochlear DRs in chronic hearing aid users. This can be attributed to the spread of excitation along the basilar membrane, the extrinsic redundancy of speech signals, and the neuroplasticity of the auditory system after five years or more of hearing aid use (the period used in the current study). The speech ABR is a good reflection of brainstem neural encoding of speech. When speech stimuli bearing different spectral cues are delivered in the free field through hearing aids, the responses retain enough spectral precision to raise suspicion of DRs. Understanding the brainstem's ability to encode temporal and spectral cues improves the selection of proper hearing aid algorithms. Consequently, aided sABR recording can improve hearing aid fitting and verification, especially in difficult-to-test subjects.


  • References

  • 1 Moore BCJ, Glasberg BR. The role of frequency selectivity in the perception of loudness, pitch and time. In: Moore BCJ, ed. Frequency Selectivity and Hearing. London: Academic Press; 1986
  • 2 Moore BCJ. Dead regions in the cochlea: diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends Amplif 2001; 5 (1) 1-34
  • 3 Moore BCJ. Dead regions in the cochlea: conceptual foundations, diagnosis, and clinical applications. Ear Hear 2004; 25 (2) 98-116
  • 4 Moore B, Carlyon R. Perception of pitch by people with cochlear hearing loss and by cochlear implant users. In: Plack CJ, Oxenham AJ, Fay RR, et al, eds. Pitch Perception. New York: Springer; 2005: 234-277
  • 5 McDermott HJ, Lech M, Kornblum MS, Irvine DR. Loudness perception and frequency discrimination in subjects with steeply sloping hearing loss: possible correlates of neural plasticity. J Acoust Soc Am 1998; 104 (4) 2314-2325
  • 6 Huss M, Moore BC. Dead regions and noisiness of pure tones. Int J Audiol 2005; 44 (10) 599-611
  • 7 Moore BC, Vinay SN. Enhanced discrimination of low-frequency sounds for subjects with high-frequency dead regions. Brain 2009; 132 (Pt 2): 524-536
  • 8 Moore BC, Huss M, Vickers DA, Glasberg BR, Alcántara JI. A test for the diagnosis of dead regions in the cochlea. Br J Audiol 2000; 34 (4) 205-224
  • 9 Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004; 115 (9) 2021-2030
  • 10 Sinha SK, Basavaraj V. Speech evoked auditory brainstem responses: a new tool to study brainstem encoding of speech sounds. Indian J Otolaryngol Head Neck Surg 2010; 62 (4) 395-399
  • 11 Bellier L, Veuillet E, Vesson JF, Bouchet P, Caclin A, Thai-Van H. Speech Auditory Brainstem Response through hearing aid stimulation. Hear Res 2015; 325: 49-54
  • 12 Russo N, Nicol T, Trommer B, Zecker S, Kraus N. Brainstem transcription of speech is disrupted in children with autism spectrum disorders. Dev Sci 2009; 12 (4) 557-567
  • 13 Stevens KN, Blumstein SE. Invariant cues for place of articulation in stop consonants. J Acoust Soc Am 1978; 64 (5) 1358-1368
  • 14 Kewley-Port D. Time-varying features as correlates of place of articulation in stop consonants. J Acoust Soc Am 1983; 73 (1) 322-335
  • 15 Steinschneider M, Fishman YI. Enhanced physiologic discriminability of stop consonants with prolonged formant transitions in awake monkeys based on the tonotopic organization of primary auditory cortex. Hear Res 2011; 271 (1–2) 103-114
  • 16 Steinschneider M, Nourski KV, Fishman YI. Representation of speech in human auditory cortex: is it special?. Hear Res 2013; 305: 57-73
  • 17 ANSI (American National Standards Institute). American National Standard specification for audiometers (ANSI S3.6–1996). New York; 1996
  • 18 Johnson KL, Nicol T, Zecker SG, Bradlow AR, Skoe E, Kraus N. Brainstem encoding of voiced consonant-vowel stop syllables. Clin Neurophysiol 2008; 119 (11) 2623-2635
  • 19 Skoe E, Kraus N. Auditory brain stem response to complex sounds: a tutorial. Ear Hear 2010; 31 (3) 302-324
  • 20 Anderson S, Kraus N. The Potential Role of the cABR in Assessment and Management of Hearing Impairment. Int J Otolaryngol 2013; 2013: 604729
  • 21 Halpin C, Thornton A, Hasso M. Low-frequency sensorineural loss: clinical evaluation and implications for hearing aid fitting. Ear Hear 1994; 15 (1) 71-81
  • 22 Moore BCJ, Tyler LK, Marslen-Wilsen WD. The Perception of Speech: from Sound to Meaning (revised and updated). USA: Oxford University Press; 2009
  • 23 Moore BCJ. Auditory Processing of Temporal Fine Structure: Effects of Age and Hearing Loss. Singapore: World Scientific; 2014
  • 24 Thornton AR, Abbas PJ. Low-frequency hearing loss: perception of filtered speech, psychophysical tuning curves, and masking. J Acoust Soc Am 1980; 67 (2) 638-643
  • 25 Van Tasell DJ, Turner CW. Speech recognition in a special case of low-frequency hearing loss. J Acoust Soc Am 1984; 75 (4) 1207-1212
  • 26 Vinay BT, Baer T, Moore BC. Speech recognition in noise as a function of highpass-filter cutoff frequency for people with and without low-frequency cochlear dead regions. J Acoust Soc Am 2008; 123 (2) 606-609
  • 27 Moore BCJ, Alcántara JI. The use of psychophysical tuning curves to explore dead regions in the cochlea. Ear Hear 2001; 22 (4) 268-278
  • 28 Vickers DA, Moore BCJ, Baer T. Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. J Acoust Soc Am 2001; 110 (2) 1164-1175
  • 29 Baer T, Moore BCJ, Kluk K. Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. J Acoust Soc Am 2002; 112 (3 Pt 1): 1133-1144

