DOI: 10.3766/jaaa.16126
Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer
Publication Date: 29 May 2020 (online)
Abstract
Background:
Sentence understanding scores for patients with cochlear implants (CIs) are relatively high when tested in quiet. However, scores decrease substantially when noise is added to the signal.
Purpose:
To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization.
Research Design:
Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise).
Study Sample:
Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study.
Intervention:
Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types.
Data Collection and Analysis:
In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error.
Results:
Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli.
Conclusion:
The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
A very large literature has established that monaural cochlear implants (CIs) can restore high levels of sentence understanding in quiet (for a recent review, see [Wilson et al, 2016]). Another large literature has established that sentence understanding is severely compromised when signals are presented in noise. For example, [Spahr et al (2007)] described the performance of 39 high-performing patients with CIs, that is, those with 50% or better word identification in quiet, on AzBio sentence understanding in quiet and in noise. The mean score in quiet was 82% correct; at +10 dB SNR, the mean score was 55% correct; and at +5 dB SNR, the mean score was 35% correct. In comparison, sentence understanding scores for normal hearing listeners are unaltered, relative to performance in quiet, by noise presented at +10 and +5 dB SNR ([Dorman and Gifford, 2017]).
The poor performance of patients with CIs in noise is not surprising given that a CI’s output is sparse—limited to envelope information in a small number of frequency bands. This output, even in quiet, is only rarely sufficient for completely successful lexical access, that is, 100% word recognition in isolation or in sentence context ([Spahr et al, 2007]; [Gifford et al, 2008]). When this sparse set of amplitude information is degraded further by noise, poor performance is not unreasonable.
SIGNAL PROCESSING FOR NOISE REDUCTION
Recently, [Kokkinakis et al (2012)] reviewed attempts to improve speech understanding in noise for patients with CIs using a variety of noise reduction technologies. In this article, we describe the benefit to speech understanding of one of those technologies, that is, an adaptive beamformer (e.g., [Griffiths and Jim, 1982]). We also describe the benefit of another technology, a directional microphone with a frequency response imitating the pinna effect, that is, directional for higher frequencies and less so for lower frequencies (e.g., [Kuk et al, 2013]). We will refer to this as the “natural” microphone setting.
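For readers unfamiliar with the Griffiths–Jim architecture, the following is a minimal two-microphone sketch of the general idea in Python (NumPy): a fixed path (the microphone sum) preserves signals from the look direction, a blocking path (the microphone difference) suppresses the target, and an adaptive filter estimates and subtracts the residual noise. This is a conceptual illustration only, not the processing implemented in the devices tested here; the function name, filter length, and step size are illustrative assumptions.

```python
import numpy as np

def griffiths_jim_gsc(front_mic, rear_mic, n_taps=32, mu=0.1, eps=1e-8):
    """Two-microphone Griffiths-Jim generalized sidelobe canceller (NLMS).

    The fixed path (microphone sum) preserves the target from the look
    direction; the blocking path (microphone difference) suppresses the
    target and serves as a noise reference, which an adaptive filter uses
    to cancel residual noise in the fixed path. Practical implementations
    also delay the fixed path by roughly half the filter length.
    """
    fixed = 0.5 * (front_mic + rear_mic)          # fixed beamformer output
    noise_ref = front_mic - rear_mic              # target-blocked reference
    w = np.zeros(n_taps)                          # adaptive filter weights
    buf = np.zeros(n_taps)                        # delay line of reference
    out = np.zeros_like(fixed)
    for n in range(len(fixed)):
        buf = np.roll(buf, 1)
        buf[0] = noise_ref[n]
        noise_est = w @ buf                       # noise estimate in fixed path
        e = fixed[n] - noise_est                  # enhanced output sample
        w += (mu / (eps + buf @ buf)) * e * buf   # NLMS weight update
        out[n] = e
    return out
```

Because the adaptive filter converges toward whatever energy dominates the noise reference, such a beamformer forms deep nulls toward discrete, spatially separated noise sources, which is the property exploited in the experiments below.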
The value of monaural adaptive beamformers for patients with CIs has been described by multiple authors (e.g., [Spriet et al, 2007]; [Hehrmann et al, 2012]; [Wolfe et al, 2012]; [Buechner et al, 2014]). In these studies, when target and masker signals were spatially separated, monaural beamforming improved speech reception thresholds (SRTs) by 5–7 dB relative to thresholds obtained with omnidirectional microphones.
The value of the natural microphone setting for patients with CIs has been described by [Wimmer et al (2016)]. For a test environment with speech from the front and a single noise source at 180°, the SRT improved by 3.6 dB; when noise was ipsilateral to the implant, the SRT improved by 2.2 dB; and when noise was contralateral to the implant, it improved by 1.3 dB.
In Experiment 1, we assessed, using percent words correct as the response measure, the benefit to speech understanding of unilateral and bilateral implementations of the natural microphone setting and the adaptive beamformer. Speech understanding was assessed in two noise environments. One environment simulated listening in a restaurant. Directionally appropriate noise was presented from eight loudspeakers surrounding the listener. This is the R-Space™ environment of [Revit et al (2007)]. The second environment simulated some aspects of listening in a cocktail party. In this environment, a continuous male voice was presented from the loudspeaker at +90° and a different male voice was presented continuously from the loudspeaker at −90°. The target was female-voice speech at 0°. Informational masking is prominent in this environment and less so in the restaurant environment.
SOUND SOURCE LOCALIZATION
Bilateral CIs allow patients a modest level of sound source localization on the horizontal plane (e.g., [Grantham et al, 2007]) and that ability is responsible for some proportion of the increased quality of life reported by patients fit with bilateral CIs ([Bichey and Miyamoto, 2008]). In Experiment 2, we assessed whether the natural or adaptive settings altered sound source localization for patients fit with bilateral CIs. If the settings impair sound source localization, then the overall value of bilateral noise reduction programs to the patient would be significantly reduced.
EXPERIMENT 1: SPEECH UNDERSTANDING
Methods
Participants
Ten listeners fit with bilateral CIs (MED-EL Corporation, Durham, NC) were tested. Patient demographics are shown in [Table 1]. The speech understanding scores were collected in a standard audiometric booth at a signal level of 60 dB SPL.
Note: DNT = Did not test.
Test Signals
The target signals were sentences from the AzBio sentence corpus ([Spahr et al, 2012]) or from the AzBio Pediatric test corpus ([Spahr et al, 2014]).
Restaurant Test Environment
The listeners were seated in the middle of eight loudspeakers arrayed in a 360° arc around the listener, that is, the R-Space™ test environment ([Revit et al, 2007]). Sentences from the AzBio sentence lists were presented from the loudspeaker at 0° azimuth and directionally appropriate restaurant noise was presented from all eight loudspeakers including the speaker from which the target sentences were delivered.
Cocktail Party Environment
Female voice sentences (from the AzBio Pediatric lists) were presented from the R-Space™ loudspeaker at 0°. The maskers were male voice sentences presented from the loudspeakers at ±90°. All other loudspeakers were muted. Different male voices and sentences were presented from the loudspeakers at ±90°. The sentences were concatenated and looped so as to be presented continuously. A female-voice talker was chosen for this condition in order to make the target sentences stand out from the masker sentences.
Microphone Configurations
Three microphone settings were tested: (a) omnidirectional, (b) natural, and (c) adaptive beamformer. The polar pattern of the natural microphone response is shown in [Figure 1] (top). At 0° azimuth, there is 1–2 dB of amplification for signals at 2, 4, and 8 kHz; at 180°, these signals are greatly attenuated. For a 500-Hz signal, the response is nearly omnidirectional. The polar patterns of the adaptive beamformer differed in the restaurant and cocktail party environments. As shown in [Figure 1] (middle), in the restaurant environment with eight noise sources in a 360° field, the response pattern was, most generally, supercardioid. However, in the cocktail party environment, as shown in [Figure 1] (bottom), with two noise sources at ±90°, the pattern was bidirectional. For listeners with bilateral CIs, the natural and adaptive beamformer configurations were implemented independently for each ear.
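As a conceptual illustration of how a frequency-dependent (pinna-effect imitating) response of this kind can be obtained, the sketch below blends an omnidirectional signal at low frequencies with a directional signal at high frequencies. It is not the MED-EL implementation; the 1-kHz crossover frequency and the filter order are illustrative assumptions chosen only to match the qualitative pattern in [Figure 1].

```python
import numpy as np
from scipy.signal import butter, sosfilt

def pinna_like_mix(omni, directional, fs, crossover_hz=1000.0, order=4):
    """Conceptual frequency-dependent directional response.

    Low frequencies are taken from the omnidirectional signal (retaining
    sensitivity to sources behind the listener), high frequencies from a
    directional signal (attenuating sources from the rear), so that the
    combined response grows more directional with increasing frequency.
    """
    sos_lp = butter(order, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hp = butter(order, crossover_hz, btype="high", fs=fs, output="sos")
    return sosfilt(sos_lp, omni) + sosfilt(sos_hp, directional)
```

In a scheme of this kind, a 500-Hz signal from 180° passes largely unattenuated, whereas 2- to 8-kHz energy from the rear is attenuated by the directional pattern, consistent with the polar plots described above.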


Test Conditions
Speech understanding using a single CI and bilateral CIs was assessed in the two noise environments using (a) the omni setting, (b) the natural setting, and (c) the adaptive setting. In the single CI condition, the patients used the CI that had allowed the higher speech understanding scores in audiometric testing.
Procedure
Signals were presented at 65 dBA. The single CI, omni-in-quiet condition was tested first. The bilateral omni-in-noise condition was tested second. In this condition, performance for each listener was driven off the ceiling by increasing the level of noise. That noise level was then used in all of the other test conditions for that listener, and the order of those conditions was randomized across listeners.
RESULTS
Restaurant Environment
The results in the restaurant environment are shown in [Figure 2] and [Table 2]. For the single CI test conditions, the mean scores were as follows: omni in quiet, 83% correct; omni in noise, 28% correct; natural in noise, 44% correct; adaptive in noise, 51% correct. In the bilateral CI test conditions, the mean scores were as follows: omni in noise, 40% correct; natural in noise, 59% correct; adaptive in noise, 71% correct.


A repeated-measures analysis of variance (ANOVA) showed a main effect for the test conditions (F = 28.6, p < 0.0001). Posttests were conducted using the Holm–Sidak method. Of particular interest are the following outcomes:
- (a) In both the single and the bilateral CI conditions, the natural and adaptive settings allowed significantly higher scores than the omni setting.
- (b) In both the single and the bilateral CI conditions, the difference in benefit from the natural and adaptive settings was not statistically significant.
- (c) In the bilateral CI conditions, scores were significantly better than corresponding scores in the single CI test conditions.
- (d) In the single CI condition, the natural setting allowed scores as good as scores in the bilateral omni condition.
- (e) In the single CI condition, the adaptive setting produced significantly higher scores than the bilateral omni condition.
- (f) The adaptive setting for bilateral CIs produced a mean score that was 86% of the mean score for a single CI in quiet.
Cocktail Party Environment
The results from testing in the cocktail party environment are shown in [Figure 3] and [Table 2]. For the single CI test conditions, the mean scores were omni in quiet, 92% correct; omni in noise, 26% correct; natural in noise, 45% correct; adaptive in noise, 63% correct. In the bilateral CI test conditions, the mean scores were omni in noise, 43% correct; natural in noise, 61% correct; adaptive in noise, 78% correct.


A repeated-measures ANOVA showed a main effect for the test conditions (F = 55.47, p < 0.0001). Posttests were conducted using the Holm–Sidak method. The outcomes are ordered in the same way as the outcomes in the restaurant environment.
- (a) In both the single and the bilateral CI conditions, the natural and adaptive settings allowed significantly higher scores than the omni setting.
- (b) In both the single and the bilateral CI conditions, the adaptive setting allowed significantly higher scores than the natural setting.
- (c) In the bilateral CI conditions, scores were significantly better than corresponding scores in the single CI test conditions.
- (d) In the single CI condition, the natural setting allowed scores as good as scores in the bilateral omni condition.
- (e) In the single CI condition, the adaptive setting produced significantly higher scores than the bilateral omni condition.
- (f) In noise, the bilateral adaptive setting allowed a mean score that was 84% of the mean score for a single CI in quiet.
EXPERIMENT 2: SOUND SOURCE LOCALIZATION
Methods
Participants
The listeners were the same as in Experiment 1.
Test Signals
Three 200-msec noise-band signals were created and shaped with 20-msec rise/decay times. The wideband (WB) signal was band-pass filtered between 125 and 6000 Hz. The low-pass (LP) signal was filtered between 125 and 500 Hz. The high-pass (HP) signal was filtered between 1500 and 6000 Hz. In all cases, the filter roll-offs were 48 dB/octave. Broadband overall signal level was 65 dBA.
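A minimal sketch of how such stimuli can be generated is shown below (Python/SciPy), assuming a 44.1-kHz sampling rate, Butterworth band-pass filtering (an order-8 design gives a nominal 48 dB/octave roll-off per band edge), and raised-cosine onset/offset ramps; calibration to 65 dBA depends on the playback system and is represented here only by peak normalization.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # assumed sampling rate (Hz)

def noise_burst(band_hz, dur=0.200, ramp=0.020, order=8, rng=None):
    """Band-limited noise burst with raised-cosine onset/offset ramps.

    An order-8 Butterworth band-pass design gives a nominal roll-off of
    about 48 dB/octave beyond each band edge. band_hz is (low, high),
    e.g., (125, 500) for the LP stimulus.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(int(dur * FS))
    sos = butter(order, band_hz, btype="bandpass", fs=FS, output="sos")
    x = sosfilt(sos, x)
    n_ramp = int(ramp * FS)
    ramp_on = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env = np.ones_like(x)
    env[:n_ramp] = ramp_on
    env[-n_ramp:] = ramp_on[::-1]
    # Peak-normalize; calibration to 65 dBA depends on the playback chain.
    y = x * env
    return y / np.max(np.abs(y))

wb = noise_burst((125, 6000))   # wideband stimulus
lp = noise_burst((125, 500))    # low-pass stimulus
hp = noise_burst((1500, 6000))  # high-pass stimulus
```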
Test Environment
The stimuli were presented from 11 of 13 loudspeakers arrayed in a 180° arc on the frontal plane. The loudspeakers (Boston Acoustics 100×; Woburn, MA) were spaced 15° apart. An additional loudspeaker was appended to each end of the 11-loudspeaker array but was not used for signal delivery. The 3.04 m × 3.35 m room was lined with 4-inch acoustic foam (noise reduction coefficient = 0.9) on all six surfaces, with additional sound treatment on the floor and ceiling. The broadband reverberation time (RT60) was 90 msec. Participants sat in a chair at a distance of 1.67 m from the loudspeakers, which were positioned at the height of the listeners' pinnae.
Test Conditions
Presentation of the three noise stimuli was controlled by MATLAB (MathWorks, Natick, MA). Each stimulus was presented four times from each loudspeaker. The presentation level was 65 dBA with a 2-dB rove in level. Level roving was used to reduce any cues that might be provided by the acoustic characteristics of the individual loudspeakers. Participants were instructed to look at the midline (center loudspeaker) until a stimulus was presented and then to enter the number of the loudspeaker (1–13) on a keypad.
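A sketch of how such a trial list might be assembled is shown below. The numbering of the active loudspeakers (positions 2–12 of the 13-loudspeaker array) and the interpretation of the 2-dB rove as a uniform draw over ±1 dB are assumptions, since neither detail is specified in the text.

```python
import random

def presentation_order(stimuli=("LP", "HP", "WB"), speakers=range(2, 13),
                       reps=4, rove_db=2.0, seed=None):
    """Randomized trial list for the localization task.

    Each stimulus is presented `reps` times from each active loudspeaker,
    with a per-trial level rove spanning `rove_db` dB (drawn uniformly
    over +/- rove_db / 2; the exact rove convention is an assumption).
    """
    rng = random.Random(seed)
    trials = [(stim, spk, round(rng.uniform(-rove_db / 2, rove_db / 2), 2))
              for stim in stimuli for spk in speakers for _ in range(reps)]
    rng.shuffle(trials)
    return trials  # e.g., [('HP', 7, -0.45), ('WB', 3, 0.8), ...]
```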
RESULTS
Localization accuracy was calculated in terms of root-mean-square error using the D statistic of [Rakerd and Hartmann (1986)]. Chance performance, calculated using a Monte Carlo method, was 73.5° (standard deviation = 3.2).
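To make the error metric and the chance computation concrete, a simplified sketch is given below (Python/NumPy). It computes RMS error in degrees and estimates chance performance by drawing sources from the 11 active loudspeakers and responses uniformly from all 13 positions; under these assumptions, random responding yields an expected error of about 73.5°, consistent with the value reported above. The sketch does not reproduce the full D-statistic formulation of [Rakerd and Hartmann (1986)].

```python
import numpy as np

# 13 loudspeaker azimuths spanning 180 degrees, 15 degrees apart
AZIMUTHS = np.arange(-90, 91, 15)

def rms_error_deg(source_idx, response_idx):
    """Root-mean-square localization error in degrees."""
    err = AZIMUTHS[np.asarray(response_idx)] - AZIMUTHS[np.asarray(source_idx)]
    return np.sqrt(np.mean(err.astype(float) ** 2))

def chance_rms(n_trials=44, n_sim=10000, seed=0):
    """Monte Carlo chance level: sources drawn from the 11 active
    loudspeakers (indices 1-11), responses drawn uniformly from all 13."""
    rng = np.random.default_rng(seed)
    sims = [rms_error_deg(rng.integers(1, 12, size=n_trials),
                          rng.integers(0, 13, size=n_trials))
            for _ in range(n_sim)]
    return float(np.mean(sims)), float(np.std(sims))
```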
The results are shown in [Figure 4] and were subjected to a repeated-measures ANOVA. There was a main effect for test condition (F = 17.6, p < 0.0007). Posttests were conducted using the Holm–Sidak method. In the LP test condition, the mean error scores for the omni, natural, and adaptive microphones were 45°, 45°, and 44°, respectively; these scores were not significantly different. In the HP test condition, the mean error scores were 18°, 19°, and 18°, respectively; these scores were not significantly different from one another, and all were significantly lower than the scores in the LP condition. In the WB test condition, the mean error scores were 19°, 20°, and 18°, respectively; these scores were not significantly different from one another or from the scores in the HP condition, but all were significantly lower than the scores in the LP condition.
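For readers who wish to reproduce this type of analysis, a sketch in Python (statsmodels/SciPy) is given below: a one-way repeated-measures ANOVA on the error scores followed by pairwise paired t-tests with Holm–Sidak correction. The statistical package used in the study is not identified in the text, so the data layout and column names here are assumptions.

```python
import itertools
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

def analyze_rm(df):
    """One-way repeated-measures ANOVA followed by pairwise paired t-tests
    with Holm-Sidak correction.

    `df` is assumed to be a pandas DataFrame with one row per listener x
    condition and columns 'listener', 'condition', and 'error'
    (hypothetical column names).
    """
    res = AnovaRM(df, depvar="error", subject="listener",
                  within=["condition"]).fit()
    print(res)
    pairs = list(itertools.combinations(sorted(df["condition"].unique()), 2))
    pvals = []
    for a, b in pairs:
        x = df[df["condition"] == a].sort_values("listener")["error"].to_numpy()
        y = df[df["condition"] == b].sort_values("listener")["error"].to_numpy()
        pvals.append(ttest_rel(x, y).pvalue)
    reject, p_adj, _, _ = multipletests(pvals, method="holm-sidak")
    for (a, b), p, sig in zip(pairs, p_adj, reject):
        print(f"{a} vs {b}: adjusted p = {p:.4f}, significant = {sig}")
```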


DISCUSSION
The aim of Experiment 1 was to determine for patients with CIs the value to speech understanding in noise of (a) a microphone setting that mimicked the frequency filtering effect of the pinna and (b) an adaptive beamformer. We found that both microphone settings significantly improved speech understanding in restaurant and cocktail party noise environments. This outcome, when combined with previous results in other noise environments (e.g., [Wimmer et al, 2016]), suggests that the effects are robust and generalizable.
SIGNAL PROCESSING FOR A SINGLE CI
Natural (Pinna Effect) Signal Processing
A digital implementation of the frequency filtering caused by the pinna has been shown to improve speech understanding in noise for patients with hearing aids ([Kuk et al, 2013]) and for patients with CIs ([Wimmer et al, 2016]). At issue in this article was the magnitude of the benefit from this digital implementation expressed in terms of percent words correct, rather than SRT, and the magnitude of the effect relative to other means of noise reduction. For the single CI implementation, the improvement was 16 percentage points in the restaurant environment and 19 percentage points in the cocktail party environment.
It is of interest to compare the benefit from the digital implementation of the pinna effect to the benefit from a device that (a) uses a microphone proximal to the pinna at the opening of the external auditory canal and (b) uses the “real” pinna to filter the signal. This processing is implemented for patients fit with a T-Mic™ (Advanced Bionics, Valencia, CA). Using a restaurant test environment exactly like that in the current study, [Gifford and Revit (2010)] reported a 4.4-dB improvement in SRT for a T-Mic™ listening condition versus an omni listening condition when using Hearing in Noise Test sentences.
Unfortunately, it is difficult to compare the results of the present study directly to those of [Gifford and Revit (2010)] because the Hearing in Noise Test sentences are significantly less difficult than the AzBio sentences used in the present study ([Gifford et al, 2008]). Even so, it is reasonable to suppose that the digital implementation captures at least a substantial portion of the improvement in speech understanding in noise shown by patients who benefit from a “real” pinna. Moreover, the digital implementation avoids the hardware problems with an external microphone noted by [Gifford and Revit (2010)].
Adaptive Beamformer
For the single CI fitting in the restaurant environment, the mean improvement using the adaptive setting was 23 percentage points; in the cocktail party environment, the improvement was 37 percentage points.
In the restaurant environment, the natural setting and the adaptive setting produced a similar advantage (16 percentage points versus 23 percentage points). This outcome is due, most likely, to the semidiffuse noise field in the restaurant environment, which reduces the value of the deep nulls available with an adaptive beamformer. In the cocktail party environment, with point sources for noise at ±90° and deep nulls at those locations, the adaptive beamformer provided significantly greater benefit than the natural setting (37 percentage points versus 19 percentage points).
SIGNAL PROCESSING FOR BILATERAL CIS
For each microphone setting, in both noise environments, scores with bilateral CIs were significantly higher than scores with a single CI. The smallest mean advantage was 12 percentage points (omni setting in the restaurant environment) and the largest was 20 percentage points (adaptive setting in the restaurant environment). Thus, we find, as many have found before, that bilateral CIs are of significant value for speech understanding in noise (e.g., [Litovsky et al, 2006]; [Ricketts et al, 2006]; [Buss et al, 2008]; [Mosnier et al, 2009]). We also find that the relative value of the natural and adaptive microphone settings was little changed in the single and the bilateral conditions—the major influence on the value of the two settings was the type of noise environment.
Single CI with Best Technology versus Bilateral Omni CIs
The performance of patients fit with a single CI using the natural or adaptive settings is of particular importance when gauged against the performance of patients fit with bilateral omnidirectional microphones, which is the standard fitting for patients with bilateral MED-EL CIs. In both noise environments, the natural setting implemented on a single CI produced scores that were as high as scores with bilateral omni microphones. In both environments, the adaptive setting on a single CI produced scores that were significantly higher than scores with bilateral omni microphones. These outcomes are encouraging for patients in health-care systems in which bilateral CIs are not commonly provided because of cost.
SOUND SOURCE LOCALIZATION WITH NATURAL AND ADAPTIVE MICROPHONE SETTINGS
We noted in the introduction that, if the natural and adaptive settings reduced the accuracy of sound source localization, then the substantial value of the new technology, as described above, would be significantly reduced. We find, however, that neither microphone setting altered sound source localization for signals with predominantly low-frequency content, predominantly high-frequency content, or broad frequency content.
In the previous section on signal processing with a single CI, we noted that a single CI equipped with an adaptive beamformer produced scores that were higher than scores for bilateral CIs equipped with omni microphones. This should not be interpreted as a recommendation for fitting a single CI versus bilateral CIs. Rather, it is a point to consider when only a single CI can be fit to a patient or to a patient population.
The bilateral fittings for all microphone settings produced significantly higher scores in both noise environments than the single CI fittings. Moreover, most patients with a single CI show extremely poor sound source localization (e.g., [Grantham et al, 2007]). As shown recently by [van Hoesel (2015)], bilateral CIs allow patients to localize (or find) sound sources, that is, talkers, that change location. Moreover, patients can find talkers in time to use the visual information available from speech reading. Visual information can add 30 percentage points or more to speech understanding for patients with CIs (e.g., [Dorman et al, 2016]). Patients fit with a single CI cannot find talkers quickly enough to take advantage of the visual information. Thus, bilateral CIs improve speech understanding for patients in multiple ways.
SUMMARY
Both the natural and adaptive microphone settings significantly improved speech understanding in two noise environments for patients fit with a single CI. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment.
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition.
In the restaurant and cocktail party environments, bilateral CIs with the adaptive setting produced scores that were 86% and 84%, respectively, of the single CI score in quiet. Thus, bilateral CIs equipped with the best technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
Sound source localization accuracy is unaltered by either the natural or adaptive settings for LP, HP, or WB noise stimuli. Thus, patients fit with bilateral CIs equipped with either technology can use sound source localization to “find” talkers and to benefit from the substantial information about speech that is available in the visual signal.
Finally, the data support the use of the natural microphone setting as a default setting. The natural setting (a) provides better speech understanding in noise than the omni setting, (b) does not impair sound source localization, and (c) retains low-frequency sensitivity to signals from the rear.
No conflict of interest has been declared by the author(s).
This work was supported by grants from the NIDCD (R01 DC008329) and from the MED-EL Corporation to M.F.D.
REFERENCES
- Bichey BG, Miyamoto RT. 2008; Outcomes in bilateral cochlear implantation. Otolaryngol Head Neck Surg 138 (05) 655-661
- Buechner A, Dyballa K-H, Hehrmann P, Fredelake S, Lenarz T. 2014; Advanced beamformers for cochlear implant users: acute measurement of speech perception in challenging listening conditions. PLoS One 9 (04) e95542
- Buss E, Pillsbury HC, Buchman CA. 2008; Multicenter U.S. bilateral MED-EL cochlear implantation study: speech perception over the first year of use. Ear Hear 29 (01) 20-32
- Dorman MF, Gifford RH. 2017; Speech understanding in complex listening environments by listeners fit with cochlear implants. J Speech Lang Hear Res 60 (10) 3019-3026
- Dorman MF, Liss J, Wang S, Berisha V, Ludwig C, Natale SC. 2016; Experiments on auditory-visual perception of sentences by unilateral, bimodal and bilateral cochlear implant patients. J Speech Lang Hear Res 59 (06) 1505-1519
- Gifford RH, Revit LJ. 2010; Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise. J Am Acad Audiol 21 (07) 441-451, quiz 487–488
- Gifford RH, Shallop JK, Peterson AM. 2008; Speech recognition materials and ceiling effects: considerations for cochlear implant programs. Audiol Neurootol 13 (03) 193-205
- Grantham DW, Ashmead DH, Ricketts TA, Labadie RF, Haynes DS. 2007; Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear Hear 28 (04) 524-541
- Griffiths LJ, Jim CW. 1982; An alternative approach to linearly constrained adaptive beamforming. IEEE Trans Antenn Propag 30: 27-34
- Hehrmann P, Fredelake S, Hamacher V, Dyballa K-H, Buchner A. 2012 Improved speech intelligibility with cochlear implants using state of the art noise reduction algorithms. ITG Report 236, 10th ITG Conference on Speech Communication, Braunschweig, Germany, September 26–28
- Kokkinakis K, Azimi B, Hu Y, Friedland DR. 2012; Single and multiple microphone noise reduction strategies in cochlear implants. Trends Amplif 16 (02) 102-116
- Kuk F, Korhonen P, Lau C, Keenan D, Norgaard M. 2013; Evaluation of a pinna compensation algorithm for sound localization and speech perception in noise. Am J Audiol 22 (01) 84-93
- Litovsky R, Parkinson A, Arcaroli J, Sammeth C. 2006; Simultaneous bilateral cochlear implantation in adults: a multicenter clinical study. Ear Hear 27 (06) 714-731
- Mosnier I, Sterkers O, Bebear JP, Godey B, Robier A, Deguine O, Fraysse B, Bordure P, Mondain M, Bouccara D, Bozorg-Grayeli A, Borel S, Ambert-Dahan E, Ferrary E. 2009; Speech performance and sound localization in a complex noisy environment in bilaterally implanted adult patients. Audiol Neurootol 14 (02) 106-114
- Rakerd B, Hartmann WM. 1986; Localization of sound in rooms, III: onset and duration effects. J Acoust Soc Am 80 (06) 1695-1706
- Revit LJ, Killion MC, Compton-Conley CL. 2007; Developing and testing a laboratory sound system that yields accurate real-world results. Hear Rev 14 (11) 54-62
- Ricketts TA, Grantham DW, Ashmead DH, Haynes DS, Labadie RF. 2006; Speech recognition for unilateral and bilateral cochlear implant modes in the presence of uncorrelated noise sources. Ear Hear 27 (06) 763-773
- Spahr AJ, Dorman MF, Litvak LM, Cook SJ, Loiselle LM, DeJong MD, Hedley-Williams A, Sunderhaus LS, Hayes CA, Gifford RH. 2014; Development and validation of the pediatric AzBio sentence lists. Ear Hear 35 (04) 418-422
- Spahr AJ, Dorman MF, Litvak LM, Van Wie S, Gifford RH, Loizou PC, Loiselle LM, Oakes T, Cook S. 2012; Development and validation of the AzBio sentence lists. Ear Hear 33 (01) 112-117
- Spahr AJ, Dorman MF, Loiselle LH. 2007; Performance of patients using different cochlear implant systems: effects of input dynamic range. Ear Hear 28 (02) 260-275
- Spriet A, Van Deun L, Eftaxiadis K, Laneau J, Moonen M, van Dijk B, van Wieringen A, Wouters J. 2007; Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the Nucleus Freedom Cochlear Implant System. Ear Hear 28 (01) 62-72
- van Hoesel RJ. 2015; Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies. J Assoc Res Otolaryngol 16 (02) 309-315
- Wilson R, Dorman M, Gifford R, McAlpine D. 2016. Cochlear implant design considerations. In: Young N, Kirk KI. Cochlear Implants in Children: Learning and the Brain. Springer; New York, NY: 3-23
- Wimmer W, Weder S, Caversaccio M, Kompis M. 2016; Speech intelligibility in noise with a pinna effect imitating cochlear implant processor. Otol Neurotol 37 (01) 19-23
- Wolfe J, Parkinson A, Schafer EC, Gilden J, Rehwinkel K, Mansanares J, Coughlan E, Wright J, Torres J, Gannaway S. 2012; Benefit of a commercially available cochlear implant processor with dual-microphone beamforming: a multi-center study. Otol Neurotol 33 (04) 553-560