J Am Acad Audiol 2019; 30(07): 564-578
DOI: 10.3766/jaaa.17096
Articles
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

Impact of Unilateral Hearing Loss on Behavioral and Evoked Potential Measures of Auditory Function in Adults

Oscar M. Cañete*†, Suzanne C. Purdy*†, Colin R. S. Brown‡, Michel Neeff‡, Peter R. Thorne†§

*   Speech Science, School of Psychology, The University of Auckland, Auckland, New Zealand
†   Eisdell Moore Centre for Research in Hearing and Balance, Auckland, New Zealand
‡   Starship Children’s Hospital, Auckland, New Zealand
§   Section of Audiology, The University of Auckland, Auckland, New Zealand

Corresponding author

Oscar M. Cañete
Speech Science, School of Psychology, The University of Auckland
Auckland 92019, New Zealand

Publication History

Publication Date:
25 May 2020 (online)

 

Abstract

Background:

A unilateral hearing loss (UHL) can have a significant functional and social impact on children and adults, affecting their quality of life. In adults, UHL is typically associated with difficulties understanding speech in noise and sound localization, and UHL increases the self-perception of auditory disability for a range of listening situations. Furthermore, despite evidence for the negative effects of reduced unilateral auditory input on the neural encoding of binaural cues, the perceptual consequences of these changes are still not well understood.

Purpose:

To determine the effects of UHL on auditory abilities and on speech-evoked cortical auditory evoked potentials (CAEPs).

Research Design:

CAEPs, sound localization, speech perception in noise, and self-perception of auditory abilities (Speech, Spatial, and Qualities of Hearing Scale) were assessed.

Study Sample:

Thirteen adults with UHL of varied etiology, duration, and severity, and a control group of eleven binaural listeners with normal hearing.

Results:

Participants with UHL varied greatly in their ability to localize sound, and reported that speech recognition and listening effort were their greatest problems. Right ear hearing loss had a greater effect than left ear hearing loss on N1 amplitude hemispheric asymmetry and on N1 latencies evoked by speech syllables in noise. As duration of hearing loss increased, contralateral dominance (N1 amplitude asymmetry) decreased. N1 amplitudes correlated with speech scores: larger N1 amplitudes were associated with better speech recognition in noise scores. N1 latencies (for stimulation of the better ear) were delayed, and amplitude hemisphere asymmetry differed across UHL participants as a function of side of deafness, mainly for right-sided deafness.

Conclusion:

UHL affects a range of auditory abilities, including speech detection in noise, sound localization, and self-perceived hearing disability. CAEPs elicited by speech sounds are sufficiently sensitive to detect changes within the auditory cortex caused by a UHL.



INTRODUCTION

A unilateral hearing loss (UHL) can have a significant functional and social impact on children and adults, affecting their quality of life ([Borton et al, 2010]; [Wie et al, 2010]). People with UHL have a diverse range of auditory difficulties, including speech recognition in noisy and group situations and sound localization ([Gustafson and Hamill, 1995]; [Welsh et al, 2004]; [Ruscetta et al, 2005]). Compared with binaural listeners (BL) with normal hearing in both ears, children with UHL have an increased risk of language problems, academic failure, behavioral difficulties, and needing further educational assistance, despite the presence of one apparently normally functioning ear ([Brookhouser et al, 1991]; [Lieu, 2004]; [Lieu et al, 2010]). In adults, UHL is typically associated with difficulties understanding speech in noise and localizing sound ([Giolas and Wark, 1967]; [Rothpletz et al, 2012]), and UHL increases the self-perception of auditory disability in a range of listening situations ([Colletti et al, 1988]; [Douglas et al, 2007]; [Augustine et al, 2013]).

Changes within the central auditory system (CAS) due to unilateral auditory deprivation have also been reported ([Vasama et al, 1995]; [Vasama and Mäkelä, 1997]; [Ponton et al, 2001]). UHL reduces or abolishes the auditory inputs from one ear, resulting in an imbalance in auditory signal representation within the CAS, which is associated with functional and anatomical changes at cortical and subcortical levels ([McAlpine et al, 1997]; [Hutson et al, 2008]; [Maslin et al, 2013a]). Binaural listeners have larger and earlier cortical responses in the hemisphere contralateral to the stimulated ear, which is evidence for hemispheric asymmetry (contralateral dominance) within the auditory pathway ([Majkowski et al, 1971]; [Musiek, 1986]). Several studies have shown significant effects of UHL on contralateral dominance. Functional magnetic resonance imaging and magnetoencephalography show that the normal contralateral dominance pattern changes in UHL, with larger ipsilateral hemisphere activation compared with that seen in BL ([Fujiki et al, 1998]; [Bilecen et al, 2000]). Some studies of acquired profound UHL reveal a symmetric pattern of activation in the ipsilateral and contralateral hemisphere compared with normal hearing controls, explained by higher activation of the hemisphere ipsilateral to the stimulated (better) ear. However, these changes tend to revert in the long term ([Ponton et al, 2001]; [Maslin et al, 2013b]). The extent of these changes may depend on factors such as duration of the hearing loss, etiology, or side of deafness ([Vasama et al, 1995]; [Vasama and Mäkelä, 1997]; [Khosla et al, 2003]; [Maslin et al, 2013a]).

Despite evidence for negative effects of reduced unilateral auditory input on the neural encoding of binaural cues, the relationship of hemispheric asymmetry in electrophysiological measures to these effects is still not well understood. There are some inconsistencies across studies that may result from differences in methodology and participant characteristics ([Vasama et al, 1995]; [Ponton et al, 1996]; [Vasama and Mäkelä, 1996]; [Vasama and Mäkelä, 1997]; [Fujiki et al, 1998]; [Ponton et al, 2001]; [Khosla et al, 2003]; [Hine et al, 2008]; [Hanss et al, 2009]; [Maslin et al, 2013b]).

Cortical responses reflect auditory processing at higher levels of the auditory pathway, and the N1 response is sensitive to changes in stimulus characteristics (temporal and frequency features) ([Näätänen and Picton, 1987]; [Hyde, 1997]; [Martin et al, 2008]) that are important for speech recognition. There is a strong relationship between cortical auditory evoked potential (CAEP) amplitudes and speech perception performance. For instance, in normal hearing adults, larger N1 amplitudes are associated with better performance on speech recognition tasks ([Parbery-Clark et al, 2011]; [Billings et al, 2013]). In children, smaller N2 amplitudes are associated with better performance ([Anderson et al, 2010]). [Makhdoum et al (1998)] found a positive relationship between N1–P2 complex amplitude and speech perception scores in cochlear implant users. Similarly, [Kelly et al (2005)] reported a correlation between speech scores and P2 latencies, with earlier responses related to higher speech scores. There is limited published information on the relationship between speech perception and CAEPs in UHL.

To better understand the effects of UHL on auditory processing, we investigated the relationship between speech-evoked auditory evoked potentials and behavioral measures of speech perception. We hypothesized that UHL causes changes in the auditory cortex that are associated with altered speech processing in the normal hearing ear, compared with normal hearing adults.



METHODS

Participants

Thirteen adults (six males, seven females) aged 24–65 years (mean = 42.3, standard deviation [SD] = 12.9) with UHL were included in the experimental group (demographic details in [Table 1]). The study group was a convenience sample from a series of consecutive patients attending a hospital otology clinic. Because of the small number of UHL cases, all participants who met the inclusion criteria and provided consent were included in the study. Consequently, participants had a range of hearing loss etiologies, configurations, durations, and severities. Participants had left-sided (n = 6) or right-sided (n = 7) hearing loss. Participants with UHL had hearing levels (0.5, 1, 2, 4 kHz pure-tone average [4PTA]) ≤20 dB HL in their better ear and ≥25 dB HL in their affected ear. Three of the UHL participants reported inconsistent use of hearing aids; the other UHL participants had never used hearing instruments. All assessments were performed without hearing instruments. A BL comparison group of people with normal hearing thresholds [4PTA ≤ 20 dB HL, the hearing loss criterion from [Clark (1981)]] consisted of eleven adults (eight females and three males) aged 18–52 years (mean = 31.3, SD = 9.7), who participated in the speech recognition, questionnaire, and CAEP assessments. No significant differences in 4PTA thresholds were observed between groups (good ear in UHL versus BL). For sound localization, an exploratory assessment revealed that BL participants made no errors on the task. To simulate the effect of UHL on sound localization ability, ten new participants with normal hearing (aged 24–33 years, eight females, two males) were tested on the task with a simulated right and left ear unilateral conductive hearing loss created by deep insertion of a foam earplug. The order of testing right and left ears was counterbalanced for this group (binaural listeners plugged [BLpl]).
The average attenuation provided by the earplug was 31.5 dB (0.25, 0.5 kHz), 35.5 dB (1, 2 kHz), and 48.3 dB (3, 4, 6, 8 kHz), for low, medium, and high frequencies, respectively.

Table 1. Unilateral Hearing Impaired Participants’ Demographic Information

| Participant | Age (years) | M/F | Hearing Loss | Etiology | 4PTA | Ear | Duration (years) | Language | Hearing Device |
|---|---|---|---|---|---|---|---|---|---|
| UHL01 | 27 | M | SN severe-profound | Sudden HL | 85.0 | Left | 1;7 | English | No |
| UHL02 | 34 | F | SN profound | Congenital | 125.0 | Right | 34 | English | CROS hearing aids |
| UHL03 | 44 | M | CHL moderate | Chronic otitis media | 38.8 | Right | ≈20 | English/Malay | No |
| UHL04 | 46 | M | SN severe-profound | Meningitis | 98.8 | Left | 39 | English | No |
| UHL05 | 54 | F | SN severe-profound | Sudden HL | 98.8 | Right | 18 | English | CROS hearing aids |
| UHL06 | 52 | M | SN moderate | Acoustic neuroma | 48.8 | Right | 2 | English | No |
| UHL07 | 65 | M | SN moderate | Sudden HL | 47.5 | Left | 1;8 | English | No |
| UHL08 | 24 | M | CHL moderate | Chronic otitis media | 40.0 | Right | 4 | English | No |
| UHL09 | 49 | F | SN moderate | Meniere’s disease | 36.3 | Left | 3 | English | No |
| UHL10 | 56 | F | Mixed profound | Chronic otitis media | 115.0 | Right | ≈40 | English | No |
| UHL11 | 41 | F | CHL moderate | Temporal bone fracture | 40.0 | Left | ≈35 | English | Conventional hearing aid |
| UHL12 | 25 | F | CHL severe | Aural atresia | 71.3 | Right | 23 | English | No |
| UHL13 | 33 | F | SN profound | Acoustic neuroma | 125.0 | Left | 8 | English | No |

Note: ≈ = approximately; 4PTA = pure tone average for 0.5, 1, 2, and 4 kHz; SN = sensorineural; CHL = conductive hearing loss; HL = hearing loss; CROS = contralateral routing of signal.


The study was approved by the University of Auckland Human Participants Ethics Committee and all participants gave written informed consent.


#

Behavioral Assessment

Sound Localization

Sound localization was tested using a five-speaker setup. A spondee word “French fries” spoken by a female native speaker of New Zealand English was presented at 62 dB SPL on average. The level was randomly varied between 54 and 70 dB SPL (roved ± 8 dB to avoid use of absolute levels for localizing the sound). Five loudspeakers were placed at −90°, −45°, 0°, 45°, and 90° azimuth at 1 m distance from the participant, with the loudspeaker centers at approximately head height. Participants were instructed to always look at the front speaker and to point to the speaker where they heard the sound. Stimuli were presented randomly six times to each loudspeaker for a total of 30 trials. Sound localization errors were quantified by calculating the root mean square error. This setup and these stimuli were selected based on previous studies ([Johnstone et al, 2010]; [Cullington et al, 2011]) and because of the feasibility of assessing children and adults in clinical settings using this approach.
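The root mean square error over the 30 trials can be sketched as follows (a minimal illustration; the function name and the encoding of the five-speaker layout are ours, not from the study):

```python
import math

# Loudspeaker azimuths used in the task (degrees).
SPEAKERS = [-90, -45, 0, 45, 90]

def rms_localization_error(presented, responded):
    """Root mean square error (degrees) between the presented and the
    responded loudspeaker azimuths across all trials."""
    if len(presented) != len(responded):
        raise ValueError("trial lists must have equal length")
    squared = [(p - r) ** 2 for p, r in zip(presented, responded)]
    return math.sqrt(sum(squared) / len(squared))

# A listener who confuses the two right-side speakers on 2 of 5 trials:
err = rms_localization_error([-90, -45, 0, 45, 90],
                             [-90, -45, 0, 90, 45])
```

A perfect listener (responded == presented) scores 0°, matching the BL participants who made no errors in the exploratory assessment.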



Speech Recognition in Noise

Speech materials included the Bamford–Kowal–Bench/Australian (BKB/A) version and the consonant–nucleus–consonant (CNC) monosyllabic words. Babble noise was 100 people speaking in a canteen. Speech recognition was measured in the sound field with three loudspeakers placed 1 m from the participant at −45°, 0°, and 45° azimuth, with the center of the speaker at approximately head height. The following conditions were tested: (a) Monaural direct (MD): signal to good ear/noise to bad ear (CNCs, BKB/A sentences), (b) monaural indirect (MI): signal to bad ear/noise to good ear (CNCs, BKB/A sentences), and (c) signal/noise in front (BKB/A sentences only). The level of the speech material through the loudspeakers was set at 65 dB SPL for the CNC words, with multi-talker babble noise fixed at 60 dB SPL. Whole-word scoring was used and percent correct scores (%) were determined. For BKB/A sentences, the speech recognition threshold (SRT) in noise was defined as the signal-to-noise ratio (SNR) producing 50% correct whole sentence recognition.

For the adaptive BKB/A task, the noise was fixed at 60 dB SPL and the speech level was adjusted on the audiometer. The first two sentences served as practice, presented at 64 dB SPL. If they were repeated correctly the signal was decreased by 4 dB (initial step size). If the next sentence was repeated correctly the signal was decreased by 2 dB, and if it was repeated incorrectly the signal was increased by 2 dB. After a complete list was administered (26 sentences), the average presentation level for the last ten items, corresponding to the 50% correct identification level, was calculated as the SRT (50% SRT) ([Cañete et al, 2017]).
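A minimal sketch of this adaptive rule, assuming a 4 dB step for the two practice items and 2 dB steps thereafter; the `repeat_correct` callback standing in for the listener's response is hypothetical:

```python
def run_bkb_adaptive(repeat_correct, n_sentences=26,
                     start_level=64.0, noise_level=60.0):
    """Simulate the adaptive BKB/A track described above.

    `repeat_correct(level)` returns True when the listener repeats the
    sentence correctly at that presentation level (dB SPL).  Returns
    (SRT in dB SNR, list of presentation levels)."""
    level = start_level
    levels = []
    for trial in range(n_sentences):
        levels.append(level)
        correct = repeat_correct(level)
        step = 4.0 if trial < 2 else 2.0   # larger step during practice
        level += -step if correct else step
    # SRT: mean presentation level of the last 10 sentences,
    # expressed relative to the fixed 60 dB SPL babble.
    srt_level = sum(levels[-10:]) / 10.0
    return srt_level - noise_level, levels

# Deterministic "listener" correct whenever the speech is at or above 62 dB SPL:
srt, track = run_bkb_adaptive(lambda level: level >= 62.0)
```

With this listener the track converges to an oscillation around 61 dB SPL, giving an SRT of +1 dB SNR.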



Self-Reported Hearing Performance

The Speech, Spatial, and Qualities of Hearing Scale (SSQ; 49 items) was developed to assess listeners’ self-perception of listening abilities in everyday complex situations ([Gatehouse and Noble, 2004]). The SSQ 12-item short version (SSQ12) was used in the present study ([Noble et al, 2013]). The SSQ12 measures self-perception of auditory disability in three subscales (speech recognition, spatial hearing, and qualities of hearing) separated into components such as speech in noise, multiple speech streams, localization, segregation, and listening effort. Items are grouped into subscales as follows: (a) Speech subscale—the participant’s ability to understand speech in different types of noise (e.g., “You are talking with one other person and there is a TV on in the same room. Without turning the TV down, can you follow what the person you’re talking to says?”); (b) spatial hearing subscale—sound and source localization (e.g., “You are outside. A dog barks loudly. Can you tell immediately where it is, without having to look?”); and (c) qualities of hearing subscale—naturalness, clarity of sounds, and listening effort (e.g., “Do you have to concentrate very much when listening to someone or something?”). Participants rated their responses using a 0–10 scale presented as a ruler, with the left-hand end representing inability or absence of quality and the right-hand end indicating full ability or presence of quality ([Gatehouse and Noble, 2004]).



Cortical Auditory Evoked Potentials

Stimuli

CAEPs were tested using three speech syllables (/di/, /gi/, /ti/) recorded from a native New Zealand English female speaker in a soundproof room via an AKG HC 577 L omnidirectional headset microphone (AKG Harman) placed 3 cm from the speaker’s lips and connected to an M-Audio MobilePre (M-Audio). Speech was recorded and edited using Adobe Audition CS6 sound editing software, with a 44.1 kHz sampling rate and 16-bit quantization. The selected speech stimuli have spectral emphasis in the low (<3000 Hz; /di/, /gi/; voiced sounds) and high (>3000 Hz; /ti/; voiceless sound) frequencies, providing information about processing of speech sounds in different frequency regions. The stimuli also differ temporally. Stimulus voice onset times were as follows: /di/ = 12.0 msec, /gi/ = 33.7 msec, and /ti/ = 86.9 msec. The F1 average for the sustained portion of the vowel for /di/, /gi/, and /ti/ was 335.1, 340.0, and 339.1 Hz, respectively ([Figure 1]). The total duration of each syllable was 246 msec after editing. The three speech syllables were presented at 65 dB SPL with continuous multi-talker babble at 60 dB SPL. For the BL and UHL groups the loudspeaker setup was similar to that used for the spatial speech recognition in noise task (MD and MI conditions). A +5 dB SNR was selected because it is consistent with common everyday listening conditions ([Smeds et al, 2015]) and because it allows robust CAEPs to be recorded that are sensitive to noise effects, differing in latency and morphology from CAEPs in quiet ([Whiting et al, 1998]).

Figure 1 Consonant–vowel (CV) acoustic stimuli used to elicit CAEPs. Time-domain waveforms (left column) and corresponding spectrograms (right column) derived using Praat 5.3.53 software.

The stimulus presentation order was randomized and testing condition counterbalanced across participants, with two runs of 150 stimuli for each stimulus and condition. The Neuroscan STIM system (Compumedics Neuroscan) was used to present the speech stimuli with a 920-msec interstimulus interval. Babble and speech stimuli were presented via loudspeakers (Impact 50 Turbosound; Turbosound Ltd, UK). Sounds were calibrated using a Bruel & Kjaer 2215 sound level meter measured at 1 m distance from the loudspeaker at the participant’s ear level.



Cortical Recordings

The Neuroscan SCAN (version 4.3 Compumedics, Neuroscan) was used to record CAEPs using ten electroencephalography (EEG) channels, with gold 10-mm disc electrodes placed at Cz and Fz referenced to M2; C4 and F4 referenced to M2 and M1 (ipsi and contra references); C3 and F3 referenced to M2 and M1 (contra and ipsi references). The ground electrode was located on the forehead and eye blink activity was monitored using an electrode placed above the left eye, referenced to M2. Electrode impedances were kept under 5 kΩ. The electrode montage was selected to minimize electrode application, in preparation for future planned studies with children, and to provide sufficient scalp locations to enable investigation of frontal versus central and hemispheric differences.

EEG was amplified with a gain of 50,000 and sampled at the rate of 1000 Hz. EEG epochs with −100 msec pre-stimulus to 600 msec poststimulus time windows were extracted post hoc from the continuous file. Before averaging, responses were digitally low-pass filtered at 30 Hz. All recordings were baseline corrected before averaging. Recordings with eye blink artifacts were corrected using the regression procedure ocular artifact rejection function in Neuroscan software ([Neuroscan.Inc., 2007]). This involves calculating an average blink from a minimum of 20 blinks for each participant and removing the contribution of the blink from all other channels on a point-by-point basis. The artifact rejection threshold was set in the range ±50 to ±75 µV. Short breaks were given between testing conditions if needed. Participants were tested while seated in a comfortable reclining chair, watching a captioned movie in a double-walled sound attenuating booth.
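A minimal numpy sketch of the epoching, baseline correction, and amplitude-based rejection steps described above (the function name and the single-channel simplification are ours; the 30 Hz filtering and blink regression steps are omitted):

```python
import numpy as np

FS = 1000             # sampling rate (Hz), so one sample = 1 msec
PRE, POST = 100, 600  # epoch window in msec relative to stimulus onset

def extract_epochs(eeg, onsets, reject_uv=75.0):
    """Cut baseline-corrected epochs from a continuous single-channel
    EEG trace (µV) and drop epochs exceeding the rejection threshold."""
    kept = []
    for t0 in onsets:                      # stimulus onset sample indices
        if t0 - PRE < 0:
            continue                       # epoch starts before the recording
        seg = eeg[t0 - PRE : t0 + POST].astype(float)
        if len(seg) != PRE + POST:
            continue                       # epoch runs off the recording
        seg -= seg[:PRE].mean()            # pre-stimulus baseline correction
        if np.abs(seg).max() <= reject_uv: # ±75 µV artifact rejection
            kept.append(seg)
    return np.array(kept)

# averaged response: extract_epochs(eeg, onsets).mean(axis=0)
```

The accepted epochs are then averaged per stimulus and condition, as in the recordings described above.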



Cortical Analysis

Amplitude and latency values for N1 peaks were determined for each condition. The amplitude of N1 was identified as the largest negative deflection between 80 and 160 msec after stimulus onset. Latency of the peak was measured at the center of the peak. When the waveform contained a double peak of equal amplitude or a peak with a plateau, the latency was measured at the midpoint of the peak. Responses were determined by the agreement of two experienced judges.
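The peak-picking rule can be sketched as follows, assuming a 1 kHz sampling rate (one sample per msec) and the 100 msec pre-stimulus baseline described above; the function name is ours:

```python
def pick_n1(waveform_uv, t0_ms=100, win=(80, 160)):
    """Return (amplitude in µV, latency in msec) of the N1 peak: the
    largest negative deflection 80-160 msec after stimulus onset.
    `t0_ms` is the position of stimulus onset within the epoch.
    Taking the midpoint of all samples at the minimum handles double
    peaks of equal amplitude and plateaus."""
    lo = t0_ms + win[0]
    hi = t0_ms + win[1]
    seg = waveform_uv[lo : hi + 1]
    amp = min(seg)
    at_min = [i for i, v in enumerate(seg) if v == amp]
    latency = win[0] + (at_min[0] + at_min[-1]) / 2.0
    return amp, latency
```

In the study, responses picked this way were additionally confirmed by two experienced judges.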

N1 latency contralateral hemisphere dominance (LCHD) was expressed as the percentage asymmetry between hemispheres, LCHD = 100 × [(CL − IL)/CL], where CL and IL represent the latency values for recordings contralateral and ipsilateral to the stimulated ear. A negative value indicates contralateral dominance (shorter contralateral latencies), whereas values close to zero represent synchronous ipsilateral and contralateral latencies. N1 amplitude contralateral hemisphere dominance (ACHD) was expressed as the percentage amplitude asymmetry between hemispheres, ACHD = 100 × [(CA − IA)/CA], where CA and IA represent N1 amplitudes for recordings contralateral and ipsilateral to the stimulated ear. A positive value reflects larger contralateral responses, whereas values close to zero indicate symmetric responses.
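The two asymmetry indices translate directly into code (note that N1 amplitudes are negative-going, so a larger contralateral response yields a positive ACHD):

```python
def achd(contra_amp_uv, ipsi_amp_uv):
    """Amplitude contralateral hemisphere dominance (%).
    Positive = larger contralateral N1; near zero = symmetric."""
    return 100.0 * (contra_amp_uv - ipsi_amp_uv) / contra_amp_uv

def lchd(contra_lat_ms, ipsi_lat_ms):
    """Latency contralateral hemisphere dominance (%).
    Negative = shorter (earlier) contralateral N1."""
    return 100.0 * (contra_lat_ms - ipsi_lat_ms) / contra_lat_ms

# e.g., contralateral N1 of -6 µV vs ipsilateral -5 µV -> ACHD ≈ +16.7%
# contralateral latency 135 msec vs ipsilateral 140 msec -> LCHD < 0
```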



Statistical Analysis of the Behavioral and Cortical Data

The Shapiro–Wilk test of normality was applied to all data, and nonparametric tests were used to compare groups and conditions when assumptions of normality were not met. Between-group comparisons of UHL versus control participants were conducted using independent t-tests or Mann–Whitney U-tests. Within-group comparisons (stimulus and electrode effects) were made using paired t-tests, Wilcoxon matched pairs tests, and repeated measures analysis of variance (ANOVA). A p value <0.05 was considered statistically significant. Bonferroni corrections were applied to correct for multiple comparisons. The Greenhouse–Geisser correction was applied to the repeated measures ANOVA when Mauchly’s test indicated that the assumption of sphericity had been violated. IBM SPSS Statistics version 21.0 (IBM Corporation) was used.
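The normality-driven test selection can be sketched with scipy (a simplified illustration only; the study's analyses were run in SPSS, and the function name is ours):

```python
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Between-group comparison: independent t-test when both samples
    pass Shapiro-Wilk normality, otherwise Mann-Whitney U."""
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        _, p = stats.ttest_ind(a, b)
        name = "independent t-test"
    else:
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        name = "Mann-Whitney U"
    return name, p

name, p = compare_groups([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
                         [3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
```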



RESULTS

Sound Localization

The overall performance of the UHL group was compared with the BLpl group with simulated conductive hearing loss. Scores were collapsed across ears for the BLpl group because right and left ear plugged scores did not differ significantly. Overall the BLpl group had a mean localization error of 14.2° (SD = 11.8), similar to that of the UHL group (16.0°, SD = 24.1); however, the UHL group showed much greater intersubject variability. There was no relationship between the level of conductive hearing loss in the BLpl group and localization errors. Differences between the BLpl and UHL group errors were not significant. Localization did not differ by side of UHL or by onset of hearing loss. Participants with right UHL (n = 7) showed a median error of 7.3° (interquartile range [IQR] 0–17.4); those with left UHL (n = 6) showed a median error of 9.9° (IQR 0–26.3). To investigate the relationship with severity of hearing loss, participants were grouped into two categories: those with a four-frequency (0.5, 1.0, 2.0, and 4.0 kHz) pure-tone average (4PTA) < 60 dB HL were classed as mild to moderate, and those with a 4PTA ≥ 60 dB HL were classed as severe to profound. Participants with severe to profound hearing loss (median error 17.4°, IQR 11.0–43.1, n = 7) had significantly (U = 0.0, p = 0.002, r = −0.85) higher error scores than those with a lesser degree of hearing loss (median error 0°, IQR 0, n = 6). The six participants with moderate hearing loss included three with a conductive hearing loss (CHL) and three with a sensorineural hearing loss (SNHL). The significant difference in sound localization errors was maintained (U = 0.0, p = 0.020, r = −0.77) when CHL cases were removed from the moderate hearing loss group (median 0°, IQR 0.0–3.7, n = 3 versus median 17.4°, IQR 16.2–26.3, n = 4 SNHL).



Speech Recognition in Noise

The UHL participants had lower performance overall for CNC words (% correct) and BKB/A sentences (dB SNR) than the BL group ([Table 2]). For CNC words, significant differences were only observed when the speech was directed to the bad ear. Participants with UHL required significantly greater SNR for BKB/A sentences than BL individuals for all conditions. Greater severity of UHL was associated with a significant decrease in performance for CNC [t (11) = 3.39, p = 0.006, d = 1.89] and BKB/A [t (11) = −3.56, p = 0.004, d = −1.98] stimuli but only for the MI condition.

Table 2. Mean (SD) of the Speech Recognition (CNC Words, BKB/A Sentences) and SSQ12 Scores for BL and UHL Groups

| | BL* | UHL† | p Value |
|---|---|---|---|
| CNC (%) | | | |
|  MD | 97.64 (2.01) | 96.92 (2.90) | 0.617 |
|  MI | 97.64 (2.01) | 58.31 (20.23) | <0.001 |
| BKB/A (SNR dB) | | | |
|  MD | −2.81 (0.97) | −0.25 (2.08) | 0.001 |
|  MI | −2.81 (0.97) | 10.55 (4.35) | <0.001 |
|  Signal/noise in front | 6.93 (0.73) | 8.27 (1.03) | 0.005 |
| SSQ12 | | | |
| Speech | 8.09 (1.19) | 4.90 (2.29) | 0.001 |
|  Speech in noise | 8.62 (1.38) | 5.02 (2.44) | <0.001 |
|  Multiple speech streams | 7.89 (1.57) | 4.80 (2.00) | 0.001 |
|  Speech in speech | 7.77 (1.60) | 4.90 (2.91) | 0.038 |
| Spatial | 8.50 (1.05) | 5.34 (2.65) | 0.002 |
|  Localization | 8.50 (1.05) | 5.48 (2.97) | 0.009 |
|  Distance and movement | 8.50 (1.24) | 5.26 (2.47) | 0.001 |
| Quality | 8.72 (0.69) | 5.97 (2.09) | <0.001 |
|  Segregation | 8.64 (1.05) | 5.72 (3.17) | 0.013 |
|  Identification of sound | 8.86 (1.05) | 7.26 (2.41) | 0.043 |
|  Quality and naturalness | 9.41 (0.74) | 7.49 (2.60) | 0.011 |
|  Listening effort | 7.95 (1.35) | 3.41 (2.53) | <0.001 |
| Overall | 8.45 (0.76) | 4.95 (1.74) | <0.001 |

* N = 11.
† N = 13.




Self-Report Questionnaire

The BL participants had significantly better (higher) scores than UHL participants for the SSQ overall and across SSQ subscales ([Table 2]). Both groups reported the poorest (lowest) scores for the speech subscale, which examines speech in noise, speech-in-speech contexts, and multiple speech streams. Among the subscale components, the question about listening effort produced the lowest scores for the UHL participants. The side of the UHL was not associated with differences in SSQ scores; however, with greater degree of hearing loss, spatial SSQ12 scores were significantly poorer (U = 6.50, p = 0.038, r = −0.57).



Cortical Auditory Evoked Potentials

Two-factor (stimuli [3] and electrodes [4]) repeated measures ANOVAs with presentation ear (right and left) and group (BL and UHL) as between-subject factors were used to separately investigate N1 amplitudes and latencies. There were significant stimuli and electrode main effects for N1 amplitudes [F (2,62) = 87.18, p < 0.001 and F (2,93) = 5.16, p = 0.008, respectively], and a significant electrode by presentation ear interaction [F (2,93) = 7.99, p < 0.001]. The /di/ stimulus produced the largest N1 amplitudes, followed by /gi/ and /ti/, regardless of electrode location, presentation ear, or group ([Figure 2]). N1 amplitudes were larger for left hemisphere electrodes (C3 and F3). Also, right ear presentation elicited larger responses, mainly for left hemisphere electrode locations, across stimuli. Pairwise comparisons revealed significant differences in amplitudes for /ti/ at central (C3–C4) versus frontal (F3–F4) locations for right ear presentation, but this electrode difference was not evident for left ear presentation or for the other two stimuli.

Figure 2 (A) Grand mean for binaural listeners (N = 11) and (B) UHL individuals (N = 13) for contralateral and ipsilateral waveforms at central and frontal electrodes for right and left ear presentation for /di/, /gi/, and /ti/ speech stimuli.

N1 latency analyses showed a significant main effect of stimulus [F (2,62) = 56.26, p < 0.001] and significant interactions for electrode × ear of presentation [F (3,93) = 7.10, p < 0.001], electrode × group [F (3,93) = 3.55, p = 0.017], and electrode × ear of presentation × group [F (3,93) = 6.03, p = 0.01] ([Figure 3]). The /ti/ stimulus was associated with the shortest latencies, followed by /di/ and /gi/, regardless of electrode location, ear of presentation, or group. Across groups, right ear stimulation elicited shorter latencies, mainly for electrodes located over the left hemisphere (C3 and F3) ([Figure 3]). The UHL group had significantly increased N1 latencies across electrodes, but mainly for left hemisphere locations (C3 145.3 msec and F3 145.8 msec), regardless of ear of presentation. Participants with a UHL had significantly longer N1 latencies for left ear presentation (right ear UHL), mainly for left hemisphere locations (C3 and F3) ([Figure 3]; [Tables 3] and [4]). For the BL group, pairwise comparisons revealed significant differences in latency for /di/ and /gi/ at the central electrodes and for /gi/ and /ti/ at the frontal electrodes for right ear presentation ([Figure 2]).

Figure 3 N1 latency for BL (N = 11) and UHL individuals (N = 13) for central (C3–C4) and frontal (F3–F4) locations for right (RE) and left (LE) ear presentation for /gi/ sound.
Table 3. N1 Amplitude and Latency (SD) as a Function of Stimuli and Ear of Presentation (N = 11) for BL Group

Amplitude, µV (SD)

| | C3 | C4 | p Value | F3 | F4 | p Value |
|---|---|---|---|---|---|---|
| Right ear presentation | | | | | | |
|  /di/ | −6.11 (2.32) | −5.22 (1.74) | 0.006 | −5.93 (2.69) | −5.00 (2.12) | 0.006 |
|  /gi/ | −4.37 (1.75) | −3.70 (1.14) | 0.040 | −4.32 (1.90) | −3.63 (1.40) | 0.032 |
|  /ti/ | −3.75 (1.46) | −2.43 (1.04) | <0.001* | −3.38 (1.70) | −2.26 (1.14) | 0.001* |
| Left ear presentation | | | | | | |
|  /di/ | −5.35 (2.03) | −5.55 (1.61) | 0.417 | −5.28 (2.25) | −5.68 (1.88) | 0.108 |
|  /gi/ | −3.63 (1.87) | −3.88 (1.34) | 0.292 | −3.72 (1.99) | −4.00 (1.56) | 0.163 |
|  /ti/ | −2.79 (1.51) | −3.05 (1.38) | 0.365 | −2.53 (1.52) | −3.05 (1.25) | 0.237 |

Latency, msec (SD)

| | C3 | C4 | p Value | F3 | F4 | p Value |
|---|---|---|---|---|---|---|
| Right ear presentation | | | | | | |
|  /di/ | 135.55 (6.67) | 139.00 (7.87) | 0.001* | 135.45 (6.62) | 140.35 (9.24) | 0.006 |
|  /gi/ | 145.18 (7.48) | 152.91 (8.95) | 0.001* | 146.73 (7.63) | 152.91 (8.87) | 0.003* |
|  /ti/ | 122.09 (9.17) | 125.64 (9.04) | 0.072 | 120.45 (9.16) | 128.00 (7.50) | 0.002* |
| Left ear presentation | | | | | | |
|  /di/ | 139.73 (8.06) | 137.18 (5.88) | 0.083 | 140.55 (8.20) | 136.82 (7.74) | 0.006 |
|  /gi/ | 151.36 (11.87) | 147.64 (7.97) | 0.094 | 154.27 (12.55) | 148.27 (9.34) | 0.034 |
|  /ti/ | 125.55 (10.62) | 121.36 (8.02) | 0.039 | 125.45 (10.47) | 125.00 (10.12) | 0.855 |

* Significant difference after Bonferroni correction, p < 0.004.


Table 4. N1 Amplitude and Latency (SD) as a Function of Stimuli and Ear of Presentation (N = 13) for UHL Group at Central and Frontal Locations

Amplitude, µV (SD)

| | C3 | C4 | F3 | F4 |
|---|---|---|---|---|
| Right ear presentation (LEUHL) | | | | |
|  /di/ | −5.10 (1.46) | −3.97 (1.46) | −5.23 (1.63) | −4.52 (1.01) |
|  /gi/ | −3.50 (0.93) | −3.15 (0.71) | −3.82 (1.44) | −2.85 (0.59) |
|  /ti/ | −2.97 (1.32) | −2.23 (1.12) | −3.22 (1.31) | −2.50 (1.17) |
| Left ear presentation (REUHL) | | | | |
|  /di/ | −4.32 (0.87) | −3.98 (0.86) | −4.40 (0.92) | −4.59 (0.85) |
|  /gi/ | −3.44 (0.68) | −3.07 (0.47) | −3.30 (1.11) | −3.21 (1.00) |
|  /ti/ | −1.90 (0.73) | −1.88 (0.72) | −2.19 (1.31) | −2.21 (1.13) |

Latency, msec (SD)

| | C3 | C4 | F3 | F4 |
|---|---|---|---|---|
| Right ear presentation (LEUHL) | | | | |
|  /di/ | 144.83 (8.45) | 141.17 (15.96) | 143.33 (8.96) | 142.17 (12.16) |
|  /gi/ | 160.00 (12.84) | 157.50 (13.66) | 156.83 (10.05) | 154.50 (10.25) |
|  /ti/ | 127.83 (4.75) | 126.00 (16.67) | 127.00 (6.63) | 123.17 (10.44) |
| Left ear presentation (REUHL) | | | | |
|  /di/ | 143.71 (9.21) | 141.00 (8.43) | 144.86 (10.96) | 142.43 (10.98) |
|  /gi/ | 156.86 (14.80) | 154.29 (14.43) | 160.71 (11.91) | 155.14 (15.72) |
|  /ti/ | 138.86 (32.32) | 141.00 (31.16) | 142.57 (29.60) | 140.73 (30.92) |

Note: LEUHL = left ear unilateral hearing loss; REUHL = right ear unilateral hearing loss.




Contralateral Dominance: BL versus UHL

Data from BL were obtained for monaural right and left stimulation, and UHL participants were tested with stimuli presented to the unaffected side and noise presented to the affected side. BL showed the expected hemisphere asymmetry pattern. ACHD and LCHD differed significantly between right and left ear presentation for /ti/ at central (U = 26, p = 0.023, r = −0.48) and frontal electrodes (U = 21, p = 0.009, r = −0.50), respectively. For /ti/, stronger activation of the left hemisphere was evidenced by larger amplitudes ([Figure 4A]) and shorter latencies elicited by right ear presentation for BL.

Figure 4 Mean N1 ACHD values (central electrodes) for normal and unilateral hearing groups as a function of ear of stimulation. (A) Right ear presentation, binaural listeners group (RE-BL), and left ear presentation, binaural listeners group (LE-BL). (B) Left ear presentation (REUHL, right ear unilateral hearing loss) and right ear presentation (LEUHL, left ear unilateral hearing loss). Error bars represent the 95% confidence interval. Asterisks represent significant differences, p < 0.005.

ACHD values for the UHL group differed significantly between right and left side stimulation for /di/ at central electrodes (U = 3.00, p = 0.010, r = −0.71). The contralateral dominance based on CAEP amplitudes was reduced in the right ear UHL group (i.e., smaller right hemisphere amplitudes with left ear stimulation), as the asymmetry values are negative or close to zero for central electrodes across all stimuli ([Figure 4B]).

The comparison of ACHD and LCHD values between BL and UHL groups as a function of side of stimulation did not show significant statistical differences between groups, except for LCHD for /gi/ at central electrodes (C3 and C4) when /gi/ was presented to the right ear. For /gi/ to the right ear, the latency asymmetry was significantly smaller for participants with UHL than BL (U = 11.00, p = 0.027, r = −0.53). For this condition, N1 latencies were shorter for the contralateral hemisphere (C3) for the BL group, resulting in a negative LCHD value; this asymmetry was still present but was reduced in the UHL participants with left-sided deafness (right ear stimulus presentation).

N1 ACHD and LCHD indices for the three stimuli were used to explore relationships between CAEP asymmetry and the degree and duration of hearing loss. For participants with UHL, no significant correlations were found between ACHD or LCHD and 4PTA for any stimulus; however, duration of hearing loss showed a moderate negative correlation with the ACHD index for /di/ at central (C3–C4, rs = −0.571, p = 0.041) and frontal (F3–F4, rs = −0.604, p = 0.029) electrode locations (see [Figure 5]). As duration of hearing loss increased, contralateral dominance based on amplitude measures decreased. [Figure 5] shows a participant who appears to be an outlier because of their long duration of deafness and very large negative hemispheric asymmetry (−40% ACHD, indicating strong ipsilateral hemisphere response dominance). With this outlier removed the correlation remained statistically significant, improving slightly for central (rs = −0.629, p = 0.028) and frontal (rs = −0.685, p = 0.014) locations, supporting the finding that longer durations of deafness were associated with less contralateral dominance.
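The analysis above combines a hemispheric dominance index with a Spearman rank correlation. As an illustration only (this is not the authors' analysis code, and the exact ACHD formula is not stated in this section; the index definition and all data values below are assumptions), the two computations can be sketched in Python:

```python
# Illustrative sketch only; not the authors' analysis code.
# Assumed index definition (not given explicitly in the article):
#   %ACHD = 100 * (|N1 contra| - |N1 ipsi|) / (|N1 contra| + |N1 ipsi|)
# Positive values indicate contralateral hemisphere dominance.

def achd_percent(contra_uv, ipsi_uv):
    """Amplitude contralateral hemisphere dominance (%) from N1 amplitudes (µV)."""
    c, i = abs(contra_uv), abs(ipsi_uv)
    return 100.0 * (c - i) / (c + i)

def _ranks(xs):
    """1-based average ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda k: xs[k])
    ranks = [0.0] * len(xs)
    j = 0
    while j < len(order):
        k = j
        while k + 1 < len(order) and xs[order[k + 1]] == xs[order[j]]:
            k += 1
        avg = (j + k) / 2 + 1  # mean of the tied 1-based positions
        for m in range(j, k + 1):
            ranks[order[m]] = avg
        j = k + 1
    return ranks

def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical paired data: a strictly decreasing index with duration gives rs = -1.
durations = [3, 7, 12, 20, 31]           # years of deafness (made-up values)
achd = [25.0, 18.0, 10.0, 2.0, -40.0]    # %ACHD per participant (made-up values)
rs = spearman_r(durations, achd)
```

Re-running the correlation without a suspected outlier, as done for the long-duration participant above, amounts to dropping that pair from both lists before calling `spearman_r` again.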

Figure 5 Spearman correlation between N1 ACHD % for /di/ and duration of deafness for central and frontal locations.
Figure 6 Spearman correlation between composite N1 amplitude (Cz and Fz) for /di/ and speech scores for words (A) and sentences (B) for MI condition.


Cortical versus Behavioral Measurements

Composite N1 results were used for correlation analyses (across Cz/Fz electrode locations and stimuli). Speech scores were compared with N1 amplitudes (Cz, Fz). As seen in [Figures 6A and B], CNC words scores for the MI condition showed a significant correlation for /di/ (rs = −0.727, p = 0.005). The negative correlations for CNC words indicate that better speech scores were associated with larger (more negative) N1 amplitudes. For sentences, better speech perception (smaller dB SNR) was associated with increased N1 amplitude (more negative) (rs = 0.571, p = 0.047). There were no other significant correlations between CAEP and behavioral measures. Spearman rank correlations failed to show significant associations between SSQ12 overall, subscale, or component scores or localization errors and N1 amplitudes or latencies (p > 0.05).

Because of the small sample size, it was not possible to conduct statistical analyses of the effects of type of hearing loss (conductive versus sensorineural). To explore whether the type of hearing loss might influence the findings, [Table 5] shows results for individual cases with conductive and moderate sensorineural hearing loss. Visual inspection indicates that performance is similar across tests for moderate conductive and sensorineural cases, except for N1 ACHD for the /ti/ sound, where a conductive loss was associated with much more symmetric hemispheric responses.

Table 5

Behavioral Scores for MI (Speech to Poor Ear/Noise to Good Ear) Condition and CAEP Amplitude Asymmetry Index (%) for Moderate Unilateral Hearing Loss Conductive and SNHL Cases

              PTA          BKB/A SRT    CNC Words     Localization      % N1 Amplitude Asymmetry (C3–C4)
Participant   (dB HL)      (dB SNR)     (%)           Errors (degrees)   /di/          /gi/           /ti/
Conductive
 UHL-3        38.8          6.1         70            0                   2.6           11.5           0.5
 UHL-8        40.0          0.9         98            0                 −11.1          −42.9           0.1
 UHL-11       40.0          6.4         68            0                   6.9           28.6           5.1
 Mean (SD)    39.6 (0.6)    4.4 (3.0)   78.7 (16.7)   0 (0)              −0.5 (9.4)    −0.93 (37.3)    1.9 (2.7)
SNHL
 UHL-6        48.8         11.5         66            0                  −4.7           −8.6          17.6
 UHL-7        47.5         10.6         76            3.7                26.9           22.7          24.1
 UHL-9        36.3          8.0         62            0                  20             −27.9         23.8
 Mean (SD)    44.2 (6.8)   10.0 (1.8)   68.0 (7.2)    1.2 (2.1)          14.0 (16.6)    −4.5 (25.4)   21.83 (3.6)

Note: PTA = pure tone average (500, 1000, 2000, 4000 Hz); SRT = speech reception threshold; the /di/, /gi/, and /ti/ columns give % N1 amplitude asymmetry at central electrodes (C3–C4).



DISCUSSION

Most studies of individuals with UHL have focused on either behavioral or electrophysiological measures but few have explored and compared both types of measure in the same participants. The present study thus provides a broad view of auditory function for people with UHL, including the underlying cortical electrophysiology, and contributes to the understanding of the effects of UHL at physiological and behavioral levels.

CAEP Responses

We observed a clear differentiation in CAEPs across stimuli for both groups, which is in line with previous studies ([Agung et al, 2006]). Stimulus selection was based on place, voicing, and frequency emphasis, as previous studies have found CAEP differences in response to spectro-temporally different stimuli for CAEPs recorded in noise and quiet for aided and unaided conditions ([Tremblay et al, 2003]; [Agung et al, 2006]; [Kuruvilla-Mathew et al, 2015]). These differences reflect the specificity of neural processing of different speech stimuli.

Because different stimuli evoked distinct CAEPs in both the UHL and BL groups, stimulus-specific CAEP differences were preserved when sounds were presented to the normal hearing ear of participants with UHL.

Similar to previous studies, our results showed larger N1 amplitudes for the low-frequency stop consonants /di/ and /gi/ compared with the high-frequency stop consonant /ti/ ([Kuruvilla-Mathew et al, 2015]). This is consistent with earlier studies and is thought to reflect the decrease in latency and amplitude associated with more restricted and earlier basilar membrane activation by stimuli with high-frequency emphasis ([Näätänen and Picton, 1987]; [Jacobson et al, 1992]). Temporal differences in voice onset time (VOT) between voiced (/di/, /gi/) and unvoiced (/ti/) sounds could also contribute to the differences seen across stimuli. The voiced sounds with short VOTs had the same energy content for the following vowel’s formants compared with the voiceless /ti/, which had a small amount of energy before the onset of the vowel ([Kuruvilla-Mathew et al, 2015]).

In participants with UHL, CAEP latencies were significantly delayed, regardless of the stimulus presented to the normal hearing ear ([Figure 3] and [Table 4]), compared with the BL group. This indicates that monaural deprivation has an impact on auditory processing associated with the “good ear.” In our sample, differences in CAEPs suggest altered cortical processing associated with the lack of binaural input to central auditory pathways. Prolonged N1 latencies suggest delays in synchronous neural firing, which could be associated with abnormalities in processing time-varying cues such as speech sounds. This is consistent with the finding that participants with UHL had significantly higher (poorer) SNR scores than BL, even when the speech was directed to the good ear.

The atypical CAEP pattern (smaller amplitudes, delayed latencies) across stimuli, which was more evident for right ear UHL cases (sound presented to the good left ear), indicates adaptive changes that may contribute to the difficulties recognizing speech in noise faced by people with UHL.

Robust N1 responses in the presence of noise depend more heavily on signal audibility than on sound discrimination ([Martin and Stapells, 2005]). A supra-threshold presentation level of 65 dB SPL was used to elicit CAEPs; hence, the signal was audible in the good ear even when noise was presented to the poorer ear. The poorer ear is likely to have contributed little to the CAEP responses because of reduced audibility of the speech signal in that ear, for all participants, including those with less severe degrees of UHL.

We did not observe any significant associations between CAEP measures and SSQ self-report of hearing difficulties. This could reflect a lack of statistical power; however, this lack of association between objective CAEP measures and the SSQ is perhaps not surprising as self-perception of difficulties is multifactorial and heavily dependent on the individual’s complex interaction with the environment, which is beyond the level of sound detection and discrimination ([Noble and Hétu, 1994]).



Auditory Cortex Asymmetry

UHL disrupts the binaural balance of neural inputs, altering binaural interactions within the CAS and producing functional and/or physiological changes within the structures of the CAS ([Keating et al, 2015]; [Kral et al, 2015]). Changes in CAS activity are expected with stimulation of the intact ear, as the afferent input is now unilateral. Indeed, we observed that with right ear deafness (left ear stimulation) the CAEP amplitude asymmetry pattern seen in the BL was disrupted, with changes in the activity recorded over the right hemisphere contralateral to the left ear. It has been suggested that cortical reorganization occurs in UHL as a result of a decrease in contralateral activity, an increase in ipsilateral hemisphere activity, or both ([Vasama et al, 1995]; [Scheffler et al, 1998]; [Khosla et al, 2003]; [Hanss et al, 2009]). One possible hypothesis is that disinhibition (unmasking) may occur to compensate for the reduced input from the affected ear, increasing the responsiveness of the ipsilateral cortex to the intact ear ([Bilecen et al, 2000]; [Salvi et al, 2000]; [Tremblay and Moore, 2012]).

As seen in [Figure 4B], ACHD values were negative or close to zero in people with right ear UHL (left ear stimulation, central electrodes), indicating stronger ipsilateral activity. Thus, the present study provides some support for the idea that changes in hemispheric activation after UHL may be ear dependent. Statistical differences were not consistently seen across stimuli and electrode locations; hence, as in previous studies, the evidence from the present study is not sufficient to confirm whether there are ear-dependent differences in the impact of UHL. Others have reported a reduction in amplitude and/or latency differences between hemispheres mainly for left-sided deafness when tones (e.g., 1 kHz), clicks, or simple speech sounds (e.g., the vowel /a/), mostly in quiet, are used as stimuli ([Vasama et al, 1995]; [Fujiki et al, 1998]; [Khosla et al, 2003]; [Hanss et al, 2009]). Differences in stimuli (consonant-vowel syllables in noise in the present study) may have contributed to differences in findings across studies regarding ear effects on hemispheric asymmetry.

Although there are reports in the literature of reduced hemispheric asymmetry in UHL ([Bilecen et al, 2000]; [Ponton et al, 2001]; [Khosla et al, 2003]; [Langers et al, 2005]; [Hanss et al, 2009]), other studies did not find clear significant changes in hemispheric asymmetry in UHL ([Vasama et al, 1995]; [Vasama and Mäkelä, 1997]; [Hine et al, 2008]). Inconsistencies across studies may reflect factors such as etiology (e.g., acoustic neuroma, congenital single-sided deafness, and sudden hearing loss); duration (from 2 to 18 years), onset (early versus late), degree (moderate to profound), and side of the UHL; and methodological differences such as type of stimuli (e.g., tones versus syllables) and small numbers of participants.

For our sample, hearing loss duration was correlated with %ACHD only for /di/; as the duration of the hearing loss increased, the normal pattern of hemisphere asymmetry decreased. Duration of hearing loss, rather than age at onset, was associated with hemispheric CAEP asymmetry. By contrast, [Kral et al (2013)] reported a sensitive period for reorganization, as they found that ipsilateral–contralateral hemisphere latency changes were more evident for early onset of hearing loss. Early and late onset UHL may be associated with different auditory plasticity mechanisms as the auditory brain adapts to the new balance of auditory inputs ([Kacelnik et al, 2006]; [Keating and King, 2013]). Differences between studies in the onset and duration of UHL and time of assessment could account for differences in findings. [Maslin et al (2013b)] followed people after the onset of UHL and found recovery of the normal asymmetric CAEP pattern after a period of time in cases of profound hearing loss due to neuroma removal. CAEP assessment occurred many years after hearing loss onset for most participants in the present study, so we were not able to identify a baseline or track changes that take place over time from the onset of hearing loss.

One relevant factor that should be considered is the age at the onset of hearing loss, which may account for some of the differences observed in the present study across participants. As cortical development is regulated by experience ([Kral, 2013]), a late onset (acquired) hearing loss may have less impact on cortical responses, depending on the period when deprivation occurred. Studies investigating the impact of conductive hearing loss in animals show differential changes in binaural sound representations after unilateral deafening as evidenced by changes in ipsilateral/contralateral hemisphere cortical activity that are age-dependent ([Polley et al, 2013]). Changes in cortical asymmetry in this animal study were greater with earlier onset of deafness, consistent with the concept of a sensitive period for bilateral processing, as has been seen in human studies of UHL ([Kral, 2013]).

The perceptual consequences of a loss of hemispheric asymmetry are not well established. Our data showed that speech in noise recognition and sound localization were markedly affected by a UHL, but there was no association between CAEP asymmetry and behavioral measures. However, [Bellis et al (2000)] reported that older listeners who had symmetric hemispheric responses for synthetic speech stimuli experienced difficulties discriminating fast spectro-temporal changes within a syllable. This suggests the normal asymmetry may have an important role in the recognition of acoustic cues, especially in tasks where fine temporal resolution is required. In addition, the right ear advantage typically seen in dichotic listening tests seems to depend on this brain asymmetry: in normal hearing adults, the right ear advantage for dichotic listening is linked to N100 latency differences between hemispheres ([Eichele et al, 2005]). It would be useful to explore different temporal characteristics of speech stimuli that might be more sensitive to the perceptual consequences of CAEP hemispheric activity patterns.

One important factor to consider is potential age effects on CAEPs. CAEP responses have been reported to be age-dependent, particularly in those aged 50 years and older ([Bellis et al, 2000]; [Tremblay et al, 2004]; [Ross et al, 2007]). Participants with UHL in the present study were generally younger than this (average age 42 years), but some were older (the oldest participant was 65 years). It would be useful in a future study with a larger sample size to examine CAEP hemispheric activity patterns in older and younger people with early-onset UHL to see whether aging effects are the same as those observed in people with bilaterally normal hearing.

Evidence for perceptual changes in UHL has been reported by [Maslin et al (2015)], who found that adults with UHL had improved intensity discrimination in the intact ear compared with controls. [Mishra et al (2015)] reported poorer performance in the good ear for gap detection in noise in UHL compared with BL. This is consistent with the finding of poorer speech perception in UHL in the present study. Further research is needed to clarify the relationship between psychoacoustic perceptual measures and speech perception in UHL. Results for the SSQ spatial subscale were sensitive to the degree of UHL: the finding of significantly lower scores for participants with more severe UHL is consistent with previous reports that people with hearing loss asymmetries perceive greater disability across all SSQ subscales, but mainly in the spatial subscale, compared with people with bilateral symmetrical hearing loss ([Noble and Gatehouse, 2004]; [Most et al, 2012]).

Although there was no relationship between CAEP asymmetry indices and behavioral performance for our sample, we found robust correlations between N1 amplitudes and CNC word and sentence performance for the MI condition for /di/. Previous studies involving BL also show correlations between behavioral speech measures and N1 amplitudes, particularly when CAEPs are evoked by speech stimuli and measured at Cz ([Anderson et al, 2010]; [Parbery-Clark et al, 2011]; [Billings et al, 2013]). However, there is limited evidence for this correlation in people with hearing loss. Our results suggest that N1 response amplitude elicited by /di/ might be a useful objective indicator of speech perception in noise in people with UHL.



Limitations

There was considerable variability in the sound localization performance of participants with UHL. There are some factors to consider, such as etiology, duration, and onset of hearing loss, which were difficult to control for in this study. It has been suggested that mechanisms involved in sound localization may differ for sensorineural and conductive hearing losses. For example, spectral cues would be mainly compromised in people with SNHL, whereas altered timing occurs in CHL as sound is transmitted to both cochleae via bone conduction ([Häusler et al, 1983]; [Noble et al, 1994]). Our data did not allow us to explore this because of the small number of participants with CHL. The effects of etiology on sound localization may be more evident with higher stimulus levels due to greater contribution of bone conduction and may be less significant in the present study where conversational stimulus levels were used.

It is possible that significant differences were not observed because of a lack of statistical power due to the small sample size and heterogeneity of the participants with UHL. Multisite studies and meta-analyses may be needed to address this problem. Results observed for the behavioral tests and questionnaires in the present study are consistent with previous studies; however, most studies examining the impact of UHL have used similarly small samples. The CAEPs showed considerable variability across participants. A larger sample size is needed to confirm the observed ear differences, the degree of CAEP asymmetry in people with different types and degrees of UHL, and the association with behavioral findings.



CONCLUSION

UHL affects a wide range of auditory abilities, including speech in noise recognition, sound localization, and self-perception of hearing disability. Speech perception in noise was compromised for people with UHL even when the acoustic environment should have been advantageous, with the signal presented to the good ear and noise to the poor ear. Sound localization was worse for people with more severe UHL but varied greatly among people with the same degree of hearing loss; this may reflect effects of age at onset, duration, and etiology of the hearing loss. Speech in noise perception and listening effort were a major concern of the participants with UHL, who rated these as their greatest problems. Despite the limitations of the present study, CAEPs evoked by speech syllables in noise showed a greater effect of right ear UHL on N1 amplitude asymmetry than left ear UHL; however, this effect was restricted to one speech sound (/di/) and to central electrode locations, and hence further research is needed to verify it. In addition, our findings suggest that the cortical neural representation of stimuli differs between people with UHL and normal hearing controls even when good ear responses are measured, which may contribute to the difficulties the participants experienced recognizing speech in noise.

Longer durations of the hearing loss were associated with reduced CAEP amplitude hemispheric asymmetry for /di/. CAEP amplitudes were also correlated with speech perception for /di/. Thus, this speech stimulus may be useful for further studies involving larger numbers of participants with UHL.



Abbreviations

ACHD: amplitude contralateral hemisphere dominance
ANOVA: analysis of variance
BKB/A: Bamford–Kowal–Bench/Australian
BL: binaural listeners
BLpl: binaural listeners plugged
CAEP: cortical auditory evoked potential
CAS: central auditory system
CHL: conductive hearing loss
CNC: consonant–nucleus–consonant
LCHD: latency contralateral hemisphere dominance
MD: monaural direct
MI: monaural indirect
PTA: pure tone average
SNHL: sensorineural hearing loss
SNR: signal-to-noise ratio
SRT: speech recognition threshold
SSQ: speech, spatial, and qualities
UHL: unilateral hearing loss



No conflict of interest has been declared by the author(s).

  • REFERENCES

  • Agung K, Purdy SC, McMahon CM, Newall P. 2006; The use of cortical auditory evoked potentials to evaluate neural encoding of speech sounds in adults (report). J Am Acad Audiol 17: 559
  • Anderson S, Chandrasekaran B, Yi H, Kraus N. 2010; Cortical‐evoked potentials reflect speech‐in‐noise perception in children. Eur J Neurosci 32: 1407-1413
  • Augustine AM, Chrysolyte SB, Thenmozhi K, Rupa V. 2013; Assessment of auditory and psychosocial handicap associated with unilateral hearing loss among Indian patients. Indian J Otolaryngol Head Neck Surg 65: 120-125
  • Bellis TJ, Nicol T, Kraus N. 2000; Aging affects hemispheric asymmetry in the neural representation of speech sounds. J Neurosci 20: 791-797
  • Bilecen D, Seifritz E, Radü EW, Schmid N, Wetzel S, Probst R, Scheffler K. 2000; Cortical reorganization after acute unilateral hearing loss traced by fMRI. Neurology 54: 765-767
  • Billings C, McMillan G, Penman T, Gille S. 2013; Predicting perception in noise using cortical auditory evoked potentials. J Assoc Res Otolaryngol 14: 891-903
  • Borton SA, Mauze E, Lieu JEC. 2010; Quality of life in children with unilateral hearing loss: a pilot study. Am J Audiol 19: 61-72
  • Brookhouser PE, Worthington DW, Kelly WJ. 1991; Unilateral hearing loss in children. Laryngoscope 101: 1264-1272
  • Cañete OM, Purdy SC, Neeff M, Brown CR, Thorne PR. 2017; Cortical auditory evoked potential (CAEP) and behavioural measures of auditory function in a child with a single-sided deafness. Cochlear Implants Int 18: 335-346
  • Clark JG. 1981; Uses and abuses of hearing loss classification. ASHA 23: 493-500
  • Colletti V, Fiorino FG, Carner M, Rizzi R. 1988; Investigation of the long-term effects of unilateral hearing loss in adults. Br J Audiol 22: 113-118
  • Cullington HE, Bele D, Brinton JC, Lutman ME. 2011; United Kingdom national paediatric bilateral audit. Cochlear Implants Int 12: S18
  • Douglas SA, Yeung P, Daudia A, Gatehouse S, O'Donoghue GM. 2007; Spatial hearing disability after acoustic neuroma removal. Laryngoscope 117: 1648-1651
  • Eichele T, Nordby H, Rimol LM, Hugdahl K. 2005; Asymmetry of evoked potential latency to speech sounds predicts the ear advantage in dichotic listening. Cogn Brain Res 24: 405-412
  • Fujiki N, Naito Y, Nagamine T, Shiomi Y, Hirano S, Honjo I, Shibasaki H. 1998; Influence of unilateral deafness on auditory evoked magnetic field. Neuroreport 9: 3129-3133
  • Gatehouse S, Noble W. 2004; The speech, spatial and qualities of hearing scale (SSQ). Int J Audiol 43: 85
  • Giolas TG, Wark DJ. 1967; Communication problems associated with unilateral hearing loss. J Speech Hear Disord 32: 336
  • Gustafson TJ, Hamill TA. 1995; Differences in localization ability in cases of right versus left unilateral simulated conductive hearing loss. J Am Acad Audiol 6: 124-128
  • Hanss J, Veuillet E, Adjout K, Besle J, Collet L, Thai-Van H. 2009; The effect of long-term unilateral deafness on the activation pattern in the auditory cortices of French-native speakers: influence of deafness side. BMC Neurosci 10: 23
  • Häusler R, Colburn S, Marr E. 1983; Sound localization in subjects with impaired hearing: spatial-discrimination and interaural-discrimination tests. Acta Otolaryngol 96: 1-62
  • Hine J, Thornton R, Davis A, Debener S. 2008; Does long-term unilateral deafness change auditory evoked potential asymmetries?. Clin Neurophysiol 119: 576-586
  • Hutson KA, Durham D, Imig T, Tucci DL. 2008; Consequences of unilateral hearing loss: cortical adjustment to unilateral deprivation. Hear Res 237: 19-31
  • Hyde M. 1997; The N1 response and its applications. Audiol Neuro-Otol 2: 281-307
  • Jacobson GP, Lombardi DM, Gibbens ND, Ahmad BK, Newman CW. 1992; The effects of stimulus frequency and recording site on the amplitude and latency of multichannel cortical auditory evoked potential (CAEP) component N1. Ear Hear 13: 300-306
  • Johnstone P, Nabelek A, Robertson V. 2010; Sound localization acuity in children with unilateral hearing loss who wear a hearing aid in the impaired ear. J Am Acad Audiol 21: 522-534
  • Kacelnik O, Nodal FR, Parsons CH, King AJ. 2006; Training-induced plasticity of auditory localization in adult mammals. PLoS Biol 4: e71
  • Keating P, Dahmen JC, King AJ. 2015; Complementary adaptive processes contribute to the developmental plasticity of spatial hearing. Nat Neurosci 18 (02) 185-187
  • Keating P, King AJ. 2013; Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Front Syst Neurosci 7: 123
  • Kelly AS, Purdy SC, Thorne PR. 2005; Electrophysiological and speech perception measures of auditory processing in experienced adult cochlear implant users. Clin Neurophysiol 116: 1235-1246
  • Khosla D, Ponton CW, Eggermont JJ, Kwong B, Dor M, Vasama JP. 2003; Differential ear effects of profound unilateral deafness on the adult human central auditory system. J Assoc Res Otolaryngol 4: 235-249
  • Kral A. 2013; Auditory critical periods: a review from system’s perspective. Neuroscience 247: 117-133
  • Kral A, Heid S, Hubka P, Tillein J. 2013; Unilateral hearing during development: hemispheric specificity in plastic reorganizations. Front Syst Neurosci 7: 93
  • Kral A, Hubka P, Tillein J. 2015; Strengthening of hearing ear representation reduces binaural sensitivity in early single-sided deafness. Audiol Neurootol 20: 7-12
  • Kuruvilla-Mathew A, Purdy SC, Welch D. 2015; Cortical encoding of speech acoustics: effects of noise and amplification. Int J Audiol 54: 852-864
  • Langers DR, van Dijk P, Backes WH. 2005; Lateralization, connectivity and plasticity in the human central auditory system. Neuroimage 28: 490-499
  • Lieu JEC. 2004; Speech-language and educational consequences of unilateral hearing loss in children. Arch Otolaryngol Head Neck Surg 130: 524-530
  • Lieu JEC, Tye-Murray N, Karzon RK, Piccirillo JF. 2010; Unilateral hearing loss is associated with worse speech-language scores in children. Pediatrics 125: 1348
  • Majkowski J, Bochenek Z, Bochenek W, Knapik-Fijałkowska D, Kopeć J. 1971; Latency of averaged evoked potentials to contralateral and ipsilateral auditory stimulation in normal subjects. Brain Res 25: 416-419
  • Makhdoum MJ, Groenen PA, Snik AF, Broek Pvd. 1998; Intra-and interindividual correlations between auditory evoked potentials and speech perception in cochlear implant users. Scand Audiol 7: 13-20
  • Martin BA, Stapells DR. 2005; Effects of low-pass noise masking on auditory event-related potentials to speech. Ear Hear 26: 195-213
  • Martin BA, Tremblay KL, Korczak P. 2008; Speech evoked potentials: from the laboratory to the clinic. Ear Hear 29: 285-313
  • Maslin MRD, Munro KJ, El-Deredy W. 2013a; Evidence for multiple mechanisms of cortical plasticity: a study of humans with late-onset profound unilateral deafness. Clin Neurophysiol 124: 1414-1421
  • Maslin MRD, Munro KJ, El-Deredy W. 2013b; Source analysis reveals plasticity in the auditory cortex: evidence for reduced hemispheric asymmetries following unilateral deafness. Clin Neurophysiol 124: 391-399
  • Maslin MRD, Taylor M, Plack CJ, Munro KJ. 2015; Enhanced intensity discrimination in the intact ear of adults with unilateral deafness. J Acoust Soc Am 137: 408-414
  • McAlpine D, Martin RL, Mossop JE, Moore DR. 1997; Response properties of neurons in the inferior colliculus of the monaurally deafened ferret to acoustic stimulation of the intact ear. J Neurophysiol 78: 767-779
  • Mishra SK, Dey R, Davessar JL. 2015; Temporal resolution of the normal ear in listeners with unilateral hearing impairment. J Assoc Res Otolaryngol 16: 773-782
  • Most T, Adi-Bensaid L, Shpak T, Sharkiya S, Luntz M. 2012; Everyday hearing functioning in unilateral versus bilateral hearing aid users. Am J Otolaryngol 33: 205-211
  • Musiek FE. 1986; Neuroanatomy, neurophysiology, and central auditory assessment. Part II: the cerebrum. Ear Hear 7: 283-294
  • Näätänen R, Picton T. 1987; The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology 24: 375-425
  • Neuroscan Inc. 2007. SCAN 4.4—Vol II, Edit 4.4: Offline Analysis of Acquired Data (Document Number 2203, Revision E). Charlotte, NC: Compumedics Neuroscan; 141-148
  • Noble W, Byrne D, Lepage B. 1994; Effects on sound localization of configuration and type of hearing impairment. J Acoust Soc Am 95: 992
  • Noble W, Gatehouse S. 2004; Interaural asymmetry of hearing loss, speech, spatial and qualities of hearing scale (SSQ) disabilities, and handicap. Int J Audiol 43: 100-114
  • Noble W, Hétu R. 1994; An ecological approach to disability and handicap in relation to impaired hearing: original paper. Int J Audiol 33: 117-126
  • Noble W, Jensen NS, Naylor G, Bhullar N, Akeroyd MA. 2013; A short form of the speech, spatial and qualities of hearing scale suitable for clinical use: the SSQ12. Int J Audiol 52: 409-412
  • Parbery-Clark A, Marmel F, Bair J, Kraus N. 2011; What subcortical–cortical relationships tell us about processing speech in noise. Eur J Neurosci 33: 549-557
  • Polley DB, Thompson JH, Guo W. 2013; Brief hearing loss disrupts binaural integration during two early critical periods of auditory cortex development. Nat Commun 4: 2547
  • Ponton CW, Don M, Eggermont JJ, Waring MD, Kwong B, Masuda A. 1996; Auditory system plasticity in children after long periods of complete deafness. Neuroreport 8: 61-65
  • Ponton CW, Vasama JP, Tremblay K, Khosla D, Kwong B, Don M. 2001; Plasticity in the adult human central auditory system: evidence from late-onset profound unilateral deafness. Hear Res 154: 32-44
  • Ross B, Fujioka T, Tremblay KL, Picton TW. 2007; Aging in binaural hearing begins in mid-life: evidence from cortical auditory-evoked responses to changes in interaural phase. J Neurosci 27: 11172-11178
  • Rothpletz AM, Wightman FL, Kistler DJ. 2012; Informational masking and spatial hearing in listeners with and without unilateral hearing loss. J Speech Lang Hear Res 55: 511
  • Ruscetta MN, Arjmand EM, Pratt SR. 2005; Speech recognition abilities in noise for children with severe-to-profound unilateral hearing impairment. Int J Pediatr Otorhinolaryngol 69: 771-779
  • Salvi RJ, Wang J, Ding D. 2000; Auditory plasticity and hyperactivity following cochlear damage. Hear Res 147: 261-274
  • Scheffler K, Bilecen D, Schmid N, Tschopp K, Seelig J. 1998; Auditory cortical responses in hearing subjects and unilateral deaf patients as detected by functional magnetic resonance imaging. Cereb Cortex 8: 156-163
  • Smeds K, Wolters F, Rung M. 2015; Estimation of signal-to-noise ratios in realistic sound scenarios. J Am Acad Audiol 26: 183-196
  • Tremblay KL, Friesen L, Martin BA, Wright R. 2003; Test-retest reliability of cortical evoked potentials using naturally produced speech sounds. Ear Hear 24: 225-232
  • Tremblay KL, Moore D. 2012. Current issues in auditory plasticity and auditory training. In: Kelly Tremblay RB. Translational Perspectives in Auditory Neuroscience: Special Topics. San Diego, CA: Plural Publishing; 165
  • Tremblay KL, Billings C, Rohila N. 2004; Speech evoked cortical potentials: effects of age and stimulus presentation rate. J Am Acad Audiol 15: 226-237
  • Vasama JP, Mäkelä JP. 1996. Auditory pathway reorganization in humans with congenital or acquired unilateral hearing loss. In: Henderson D, Fiorino F, Colletti V. Auditory System Plasticity and Regeneration. New York: Thieme Medical Publishers; 359-370
  • Vasama JP, Mäkelä JP. 1997; Auditory cortical responses in humans with profound unilateral sensorineural hearing loss from early childhood. Hear Res 104: 183-190
  • Vasama JP, Mäkelä JP, Pyykko I, HariI R. 1995; Abrupt unilateral deafness modifies function of human auditory pathways. Neuroreport 6: 961-964
  • Welsh LW, Welsh JJ, Rosen LF, Dragonette JE. 2004; Functional impairments due to unilateral deafness. Ann Otol Rhinol Laryngol 113: 987
  • Whiting KA, Martin BA, Stapells DR. 1998; The effects of broadband noise masking on cortical event-related potentials to speech sounds/ba/and/da. Ear Hear 19: 218-231
  • Wie OB, Pripp AH, Tvete O. 2010; Unilateral deafness in adults: effects on communication and social interaction. Ann Otol Rhinol Laryngol 119: 772


Figure 1 Consonant–vowel (CV) acoustic stimuli used to elicit CAEPs. Time-domain waveforms (left column) and corresponding spectrograms (right column) derived using Praat 5.3.53 software.

Figure 2 Grand mean waveforms for (A) binaural listeners (N = 11) and (B) UHL individuals (N = 13), showing contralateral and ipsilateral responses at central and frontal electrodes for right and left ear presentation of the /di/, /gi/, and /ti/ speech stimuli.

Figure 3 N1 latency for the BL (N = 11) and UHL (N = 13) groups at central (C3–C4) and frontal (F3–F4) locations for right (RE) and left (LE) ear presentation of the /gi/ sound.

Figure 4 Mean N1 ACHD values (central electrodes) for the normal-hearing and unilateral hearing loss groups as a function of ear of stimulation. (A) Right ear presentation, binaural listeners (RE-BL), and left ear presentation, binaural listeners (LE-BL). (B) Left ear presentation (REUHL, right ear unilateral hearing loss) and right ear presentation (LEUHL, left ear unilateral hearing loss). Error bars represent 95% confidence intervals. Asterisks indicate a significant difference, p < 0.005.

Figure 5 Spearman correlation between N1 ACHD (%) for /di/ and duration of deafness at central and frontal locations.

Figure 6 Spearman correlation between composite N1 amplitude (Cz and Fz) for /di/ and speech scores for words (A) and sentences (B) in the MI condition.