J Am Acad Audiol 2019; 30(08): 731-734
DOI: 10.3766/jaaa.18030
Articles
Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

AutoAdaptive: A Noise Level–Sensitive Beamformer for MED EL Cochlear Implant Patients

Michael F. Dorman, Arizona State University, Tempe, AZ
Sarah Cook Natale, Arizona State University, Tempe, AZ

Corresponding author

Michael F. Dorman
Department of Speech and Hearing Science, Arizona State University
Tempe, AZ 85287-0102

Publication History

Publication Date:
25 May 2020 (online)

 

Abstract

Background:

When cochlear implant (CI) listeners use a directional microphone or beamformer system to improve speech understanding in noise, the gain in understanding for speech presented from the front of the listener coexists with a decrease in speech understanding from the back. One way to maximize the usefulness of these systems is to keep a microphone in the omnidirectional mode in low noise and then switch to directional mode in high noise.

Purpose:

The purpose of this experiment was to assess the levels of speech understanding in noise allowed by a new signal processing algorithm for MED EL CIs, AutoAdaptive, which operates in the manner described previously.

Research Design:

Seven listeners fit with bilateral CIs were tested in a simulation of a crowded restaurant with speech presented from the front and from the back at three noise levels: 45, 55, and 65 dB SPL.

Data Collection and Analysis:

The listeners were seated in the middle of an array of eight loudspeakers. Sentences from the AzBio sentence lists were presented from loudspeakers at 0 or 180° azimuth. Restaurant noise at 45, 55, and 65 dB SPL was presented from all eight loudspeakers. The speech understanding scores (words correct) were subjected to a two-factor (speaker location and noise level), repeated measures, analysis of variance with posttests.

Results:

The analysis of variance showed a main effect for level and location and a significant interaction. Posttests showed that speech understanding scores from front and back loudspeakers did not differ significantly at the 45- and 55-dB noise levels but did differ significantly at the 65-dB noise level—with increased scores for signals from the front and decreased scores for signals from the back.

Conclusions:

The AutoAdaptive feature provides omnidirectional benefit at low noise levels (similar levels of speech understanding for talkers in front of and behind the listener) and beamformer benefit at higher noise levels (increased speech understanding for signals from the front). The automatic switching feature will be of value to the many patients who prefer not to switch programs manually on their CIs.



INTRODUCTION

Multiple studies have described the value to cochlear implant (CI) patients of adaptive beamformers or directional microphones for speech understanding in noise ([Spriet et al, 2007]; [Buechner et al, 2014]; [Dorman et al, 2017]; [2018]; [Wolfe et al, 2012]; for a review, see [Kokkinakis et al, 2012]). By reducing signal inputs from off the frontal axis, these systems improve the signal-to-noise ratio (SNR) and thereby improve speech understanding in noise. In a previous experiment, using a simulation of a restaurant with a semidiffuse noise field, we found improvements of 20–30 percentage points for patients fit with monaural and bilateral beamformers ([Dorman et al, 2018]).

The signal processing that allows better understanding in noise for speech presented from the front (relative to the listener) will, most commonly, impede speech understanding when speech is presented from the back ([Kuk et al, 2005]). In a recent pilot experiment with the adaptive beamformer used in the experiment cited previously, we found, in a semidiffuse noise field, a 40-percentage-point decrease in performance for speech presented from the back relative to speech presented from the front (see [Archer-Boyd et al, 2018] for a review of technical issues in the design of directional microphones and the usefulness of off-axis signals).

Given the tradeoff for speech understanding in noise for speech presented from the front versus the back, it would be reasonable to employ a simple, auditory scene analysis algorithm to hold a microphone in an omnidirectional mode when the noise level is relatively low, to allow similar levels of speech understanding from all directions. Then, when the noise level is relatively high, it would be reasonable to switch to beamformer mode, to enjoy the benefit of the improved SNR. In this study, we present the first report on the performance of such an algorithm implemented commercially on MED EL CIs as AutoAdaptive.

In this experiment, listeners were tested with speech signals presented from loudspeakers at 0 and 180° azimuth in a semidiffuse noise environment simulating a busy restaurant. The noise levels were 45, 55, and 65 dB. The signal processing algorithm was set to switch from omnidirectional mode to beamformer mode when a noise level of 60 dB or greater was detected. The hypothesis to be tested was that speech understanding scores for speech presented from front and back loudspeakers (1) would be similar at noise levels of 45 and 55 dB SPL and (2) would be significantly different at a noise level of 65 dB when the directionality of the microphone would increase scores for speech presented from the front loudspeaker and depress scores for speech presented from the back loudspeaker.



METHOD

Subjects

Seven individuals (five female and two male) fit with bilateral MED EL CIs participated in this experiment. The listeners ranged in age from 53 to 75 years with a mean age of 61 years. The duration of bilateral profound deafness ranged from 0.5 to 25 years with a mean duration of 8 years. The duration of CI use ranged from 1.4 to 15 years with a mean duration of 8 years.



Test Environment

The listeners were seated in the middle of eight loudspeakers arrayed in a 360° arc around the listener, i.e., the R-SPACE™ (Revitronix, Braintree, VT) test environment ([Compton-Conley et al, 2004]; [Revit et al, 2007]). Loudspeaker-to-listener distance was 24 inches. The sentences from the AzBio sentence lists ([Spahr et al, 2012]) were presented from loudspeakers at 0 or 180° azimuth, and directionally appropriate restaurant noise was presented from all eight loudspeakers, including the loudspeaker from which the target sentences were delivered.



Signal Processing Algorithm

The algorithm keeps the microphones in omnidirectional mode until a noise level of 60 dB is reached. At that noise level, the algorithm switches to an adaptive beamformer mode. In the semidiffuse noise environment of the R-SPACE™, the microphone assumes a supercardioid polar pattern (see Figure 1 in [Dorman et al, 2018]).
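The decision rule described above can be sketched in a few lines of Python. Only the 60-dB switching threshold is published; the level-estimation and smoothing details of the commercial implementation are not, so this sketch shows the decision rule alone and should not be read as MED EL's implementation.

```python
OMNI = "omnidirectional"
BEAM = "adaptive beamformer"

def select_mode(noise_level_db_spl, threshold_db=60.0):
    """Return the microphone mode for an estimated ambient noise level.

    Mirrors the behavior described in the text: omnidirectional below
    60 dB SPL, adaptive beamformer at or above it. How the processor
    estimates and smooths the noise level is not published, so only the
    threshold comparison is modeled here.
    """
    return BEAM if noise_level_db_spl >= threshold_db else OMNI

# The three noise levels used in this experiment:
modes = [select_mode(level) for level in (45, 55, 65)]
```

Under this rule, the 45- and 55-dB SPL conditions leave the microphones omnidirectional, and only the 65-dB SPL condition engages the beamformer, which is the pattern of results the hypothesis predicts.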



Procedure

First, for each listener, the SNR necessary to drive performance to between 30% and 50% correct was determined; the noise level for this condition was 55 dB SPL. Across the seven listeners, this SNR ranged from 0 to +10 dB, with a mean of +5.8 dB. Each listener's SNR was then used in each of the test conditions.



Test Conditions

The listeners were tested in noise levels of 45, 55, and 65 dB SPL. The order of test conditions was randomized across listeners. In each noise-level condition, 20 sentences each were presented from the front (0° azimuth) and back (180° azimuth) loudspeakers. The sentences from the front and back were blocked for presentation and order was randomized across listeners.



RESULTS

The sentence understanding scores (percent words correct) are shown in [Figure 1] for all test conditions. The data were input to a two-factor (presentation level and loudspeaker location) repeated measures analysis of variance. All statistics were computed with the aid of the GraphPad Prism statistical package (La Jolla, CA). The analysis, using the Geisser–Greenhouse correction, showed a main effect for level [F (2,12) = 6.64, p < 0.01], a main effect for location [F (1,6) = 159.7, p < 0.0001] and a significant interaction [F (2,12) = 29.7, p < 0.001].
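The Geisser–Greenhouse correction scales the ANOVA degrees of freedom by an epsilon estimated from the covariance of the repeated measures, protecting against violations of sphericity. As an illustration of the standard epsilon estimator (not a reproduction of Prism's internal code), a minimal NumPy sketch is:

```python
import numpy as np

def gg_epsilon(X):
    """Greenhouse-Geisser epsilon for an n-subjects x k-conditions matrix.

    Epsilon is trace(S)^2 / ((k - 1) * sum(S**2)), where S is the
    double-centered covariance matrix of the k repeated measures.
    It ranges from 1/(k - 1) (maximal sphericity violation) to 1
    (sphericity holds), and multiplies both ANOVA df terms.
    """
    k = X.shape[1]
    S = np.cov(X, rowvar=False)               # k x k covariance of conditions
    # Double-center: subtract row means, column means, add grand mean.
    S_dc = S - S.mean(axis=1, keepdims=True) - S.mean(axis=0, keepdims=True) + S.mean()
    return (np.trace(S_dc) ** 2) / ((k - 1) * np.sum(S_dc ** 2))

# Hypothetical 7-subjects x 3-levels layout matching this study's design
# (random placeholder scores, not the actual data):
rng = np.random.default_rng(0)
scores = rng.normal(loc=50, scale=10, size=(7, 3))
eps = gg_epsilon(scores)
```

With three noise levels, epsilon is bounded between 0.5 and 1; the corrected test evaluates F against (k − 1)·ε and (n − 1)(k − 1)·ε degrees of freedom.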

Figure 1 Speech understanding (percent words correct) as a function of noise level for speech from the front and back of the listener. Error bars = ±1 standard error of the mean.

Post hoc tests using the Holm–Sidak method (correcting for multiple comparisons) indicated that speech understanding scores from front and back loudspeakers did not differ significantly at the 45- and 55-dB noise levels. At 45 dB, the mean score front = 39.7% correct and mean score back = 34.1% correct. At 55 dB, the mean score front = 48.2% correct and the mean score back = 45.7% correct. At the 65-dB noise level the mean scores for signals presented from the front and back loudspeakers were significantly different (mean score front = 72.9% correct; mean score back = 29.9% correct).
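The Holm–Sidak method applied above adjusts each raw p value in a step-down fashion: the i-th smallest of m raw p values is inflated by the Sidak factor for the m − i + 1 hypotheses still in play. The statistical package handles this internally; a minimal pure-Python sketch of the standard adjustment is:

```python
def holm_sidak(p_values):
    """Holm-Sidak step-down adjusted p values for m comparisons.

    For the i-th smallest raw p (1-indexed), the adjusted value is
    1 - (1 - p)**(m - i + 1), forced to be monotone non-decreasing
    across the ordered tests and capped at 1.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):        # rank is 0-indexed
        adj = 1.0 - (1.0 - p_values[idx]) ** (m - rank)
        running_max = max(running_max, min(adj, 1.0))
        adjusted[idx] = running_max
    return adjusted

# Illustrative raw p values (not the values from this study):
adj = holm_sidak([0.01, 0.04, 0.03])
```

An adjusted value is compared directly against the nominal alpha (here, 0.05); the monotonicity step prevents a later comparison from appearing more significant than an earlier one in the ordering.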



DISCUSSION

This study was motivated by two observations. The first was that, when CI listeners use a beamformer or directional microphone system, the gain in understanding for speech presented from the front of the listener coexists with a decrease in speech understanding when speech is presented from the back of the listener. The second was that, in our clinical experience, many CI listeners do not wish to manually switch settings on their CI when they enter a new listening environment. Given these observations, as we noted in the introduction, it would be reasonable in low-noise environments to keep a microphone in an omnidirectional mode to maximize speech intelligibility for inputs from multiple locations and then, in high-noise environments, to switch to beamformer mode to improve the SNR. The system tested here, AutoAdaptive, was designed to operate in this manner.

As expected, at the 45- and 55-dB SPL noise levels, performance for speech presented from the front and back did not differ significantly, as should be the case for a microphone in omnidirectional mode. By contrast, at the 65-dB noise level, performance was much better for signals presented from the front than from the back. Relative to scores in the 55-dB condition, where the microphones were in omnidirectional mode, performance in 65 dB noise for speech from the front improved by 25 percentage points, a gain similar to that we have reported previously for beamformers in our simulation of a restaurant ([Dorman et al, 2018]). At the same time, relative to scores in the 55-dB condition, scores for speech from the back decreased by 17 percentage points.

In experiments with directional microphones and beamformers, the magnitude of the improvement in performance for speech from the front and the magnitude of the decrease for speech from the back is conditioned by multiple factors including the SNR employed and the level of performance in the omni condition. A relatively high starting point (in terms of percent correct) will minimize the gain seen for speech from the front and will emphasize the decrease in performance for speech from the back. Conversely, a relatively low starting point will emphasize the gain obtained for speech from the front and will minimize the decrease for speech from the back. Thus, the magnitude of the gain/decrease will vary across studies.

Automatic scene analysis and automatic switching systems have been found to be useful for patients fit with hearing aids because patients’ use of “advanced” features is increased with automatic switching ([Surr et al, 2002]; [Cord et al, 2004]). We suspect the same will be true for many patients fit with a CI.



Abbreviations

CI: cochlear implant
dB: decibel
SNR: signal-to-noise ratio
SPL: sound pressure level



No conflict of interest has been declared by the author(s).

This work was conducted at Arizona State University and was supported by a grant from MED EL Corporation.


  • REFERENCES

  • Archer-Boyd A, Homan J, Brimijoin W. 2018; The minimum monitoring signal-to-noise ratio for off-axis signals and its implication for directional hearing aids. Hear Res 357: 64-72
  • Buechner A, Dyballa K-H, Hehrmann P, Fredelake S, Lenarz T. 2014; Advanced beamformers for cochlear implant users: acute measurement of speech perception in challenging listening conditions. PLoS One 9 (04) e95542
  • Compton-Conley CL, Neuman AC, Killion MC, Levitt H. 2004; Performance of directional microphones for hearing aids: real-world versus simulation. J Am Acad Audiol 15 (06) 440-455
  • Cord M, Surr R, Walden B, Dyrlund O. 2004; Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. J Am Acad Audiol 15 (05) 353-364
  • Dorman M, Natale S, Loiselle L. 2018; Speech understanding and sound source localization by cochlear implant listeners using a pinna-effect imitating microphone and an adaptive beamformer. J Am Acad Audiol 29 (03) 197-205
  • Dorman M, Natale S, Spahr A, Castioni E. 2017; Speech understanding in noise by cochlear implant patients using a monaural adaptive beamformer. J Speech Lang Hear Res 60 (08) 2360-2363
  • Kokkinakis K, Azimi B, Hu Y, Friedland D. 2012; Single and multiple microphone noise reduction strategies in cochlear implants. Trends Amplif 16 (02) 102-116
  • Kuk F, Keenan D, Lau S, Ludvigsen C. 2005; Performance of a fully adaptive directional microphone to signals presented from various azimuths. J Am Acad Audiol 16 (06) 335-347
  • Revit LJ, Killion MC, Compton-Conley CL. 2007; Developing and testing a laboratory sound system that yields accurate real-world results. Hear Rev 14 (11) 54-62
  • Spahr A, Dorman M, Litvak L, van Wie S, Gifford R, Loiselle L, Oakes T, Cook S. 2012; Development and validation of the AzBio sentence lists. Ear Hear 33 (01) 112-117
  • Spriet A, van Deun L, Eftaxiadis K, Laneau J, Moonen M, van Dijk B, van Wieringen A, Wouters J. 2007; Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the Nucleus Freedom Cochlear Implant System. Ear Hear 28 (01) 62-72
  • Surr R, Walden B, Cord M, Olson L. 2002; Influence of environmental factors on hearing aid microphone preference. J Am Acad Audiol 13 (06) 308-322
  • Wolfe J, Parkinson A, Schafer A, Gilden J, Rehwinkel K, Mansanares J, Coughlan E, Wright J, Torres J, Gannaway S. 2012; Benefit of a commercially available cochlear implant processor with dual-microphone beamforming: a multi-center study. Otol Neurotol 33 (04) 553-560
