Keywords
directional microphones - binaural hearing - improved signal-to-noise ratio - situational
awareness
Individuals with hearing loss have reduced audibility, but they often also have increased
difficulty hearing in noise. While the loss of audibility can be compensated by amplification,
it is more challenging to compensate for difficulties hearing in noise using hearing
aids. Manufacturers of hearing aids strive to provide technology that improves hearing
and hearing-related quality of life for users as much as possible. They are particularly
focused on ways to alleviate problems with hearing in noise. According to MarkeTrak
10,[1] people with hearing loss who use hearing aids are less satisfied with how they perform
in noisy places than in other listening environments. Yet, they report vastly greater
satisfaction with hearing in noise than those who do not use hearing aids. This same
report also found that almost half of hearing aid users perceived their hearing aids
as exceeding their expectations. So while there is still great potential for improvement
in hearing aid benefit and satisfaction, the current status provides a positive starting
point for further progress. This article discusses directional microphone technology
in hearing aids and how studies on the real-world use of directional microphones have
influenced its development and application.
Defining the Problem and Common Solution Approaches
In addition to greater perceived difficulties hearing in noise, people with hearing
loss perform worse on speech-recognition-in-noise tasks than people with normal hearing.
One way to quantify performance differences is by measuring an individual's signal-to-noise
ratio (SNR) loss.[2] SNR loss is the increase in SNR (in dB) required by a person with hearing loss to
understand speech in noise, relative to the average SNR required for a person with
normal hearing. Research has shown that people with hearing loss may require SNR improvements
of 2 to 18 dB, depending on the magnitude of the hearing loss, to hear as well as
people with normal hearing under the same listening conditions.[2] [3] [4] [5] [6] [7]
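As a simple illustration (the numbers here are hypothetical, not taken from the cited studies): if a listener with hearing loss needs a +2 dB SNR to reach 50% correct on a speech-in-noise test, and the average listener with normal hearing reaches 50% correct at −4 dB SNR, the SNR loss is 2 − (−4) = 6 dB. A hearing aid technology would need to improve the effective SNR by roughly that amount for this listener to perform comparably to the normal-hearing reference in that condition.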
Beginning in the 1970s, directional microphones were incorporated into hearing aids
as a way to help hearing aid users hear better in noisy situations.[8] Directional microphone technology improves SNR by amplifying sounds coming from
the direction of the signal relatively more than sounds (“noise”) coming from other
directions, thereby providing a directional benefit. Apart from the benefits of amplification
itself, directionality is the only hearing aid technology that has been shown to improve
speech intelligibility in prototypical noisy situations where users are listening
to speech coming from in front of them with competing sound from other locations.[9] [10]
There are excellent overviews of the principles of how directional microphones in
hearing aids work and the factors that affect benefit.[11] It also bears mentioning that terminology has evolved such that directional technology in hearing aids is often named generically rather than in technically accurate terms. Originally, a “directional microphone” in a hearing aid referred to a single transducer with two sound ports. The sound entering each port strikes the microphone's diaphragm from opposite directions, and depending on the phase relationship of the sound arriving from each port, the sound is canceled. Today, the term “directional microphone” is loosely applied
to any type of directional response exhibited by the hearing aid regardless of how
it is achieved. Virtually all modern hearing aids are fully digital and use dual-microphone
designs that build on the principles of the classic directional microphone. Delays
applied to the input received by one of the microphones — usually the rear one — result
in cancellation or partial cancellation of sound coming from specific directions.
Because this happens in the digital domain, the algorithms responsible for this technology
can be quite sophisticated, incorporating frequency specificity and adaptive behavior.
Steering algorithms can also switch between microphone modes or behaviors depending
on an analysis of the listening environment. Directional processing can even contribute
to the decision-making of such a steering algorithm. For example, the dual microphones
on a hearing aid may be used to “measure” the level of noise to the rear of the user
by applying a pattern that is most sensitive to the rear. This information can then
feed the steering algorithm that decides the likely optimum microphone response for
hearing in the given listening environment (see the article by Andersen et al in this
issue for a similar approach that uses beamformers to estimate SNR). The same two
microphones may then be activated to provide preferential amplification for sound
from in front of the user. The possible permutations and functions of the digital
dual-microphone system and control of the system are limitless, but some commonalities
can be observed. [Table 1] provides a high-level overview of the different directional options in today's hearing
aids with dual-microphone directionality. In addition, readers are referred to the
article by Derleth et al in this issue for details on how wireless communication between
hearing aids can be used to form an array of microphones that can further increase
SNR beyond what can be achieved using independently operating ear-level hearing aids.
Table 1
Overview of Common Types of Microphone Directionality

Omnidirectional microphone: Microphone that is equally sensitive to sounds coming from all directions.

Fixed directionality: Microphone system for which the directivity pattern is unchanging; usually optimized to be most sensitive to sound coming from directly in front.

Adaptive directionality: Microphone system for which the angle or angles at which the directional response is least sensitive (called a null) can be moved; typically, the nulls are positioned such that the output is minimized[12] or the SNR is maximized.

Automatic switching directionality: Microphone system that switches between directional patterns automatically, for example, between being directional and omnidirectional.

Bilateral beamformer: Microphone system that uses the microphones of both hearing aids in a bilateral fitting to cancel more noise and achieve more directivity than is possible with a microphone system using only the microphones on one hearing aid.

Pinna compensation directionality: Directionality that attempts to mimic the directionality of a pinna to compensate for the disadvantageous microphone location above/behind the pinna.
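To make the classic dual-microphone, delay-and-subtract principle described above concrete, the following is a minimal sketch in Python. It is an illustrative simplification, not any manufacturer's implementation; the microphone spacing, sample rate, and the use of a simple linear-interpolation delay are assumptions chosen for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def fractional_delay(x, delay_samples):
    """Delay a signal by a (possibly fractional) number of samples
    using linear interpolation."""
    t = np.arange(len(x))
    return np.interp(t - delay_samples, t, x, left=0.0, right=0.0)


def dual_mic_patterns(front_mic, rear_mic, mic_spacing_m=0.012, fs=16000):
    """First-order delay-and-subtract processing on two microphone signals.

    Delaying the rear microphone by the acoustic travel time between the
    microphones and subtracting cancels sound arriving from behind,
    giving a forward-facing (cardioid-like) pattern. Swapping the roles
    of the microphones gives a rear-facing pattern, which a steering
    algorithm could use to "measure" noise behind the user, as described
    in the text.
    """
    travel_time_s = mic_spacing_m / SPEED_OF_SOUND
    delay_samples = travel_time_s * fs

    forward_facing = front_mic - fractional_delay(rear_mic, delay_samples)
    rear_facing = rear_mic - fractional_delay(front_mic, delay_samples)
    return forward_facing, rear_facing
```

Using an internal delay equal to the acoustic travel time between the microphones places the null directly behind the user; choosing a different internal delay moves the null to another angle, which is the basis of the adaptive directionality listed in [Table 1].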
Factors in Real-World Effectiveness of Directional Microphones
A limitation of directional microphone technology is that the SNR benefit is realized
only when certain conditions are met. First, the signal of interest, which hearing
aid developers assume to be speech, must be spatially separated from the noise sources.
Second, the signal of interest must be located within the directional beam. Third,
the signal of interest should be within 2 m of the hearing aid user depending on the
amount of reverberation. Another limitation is that directional microphone systems
are, with few exceptions, designed to have a forward-facing beam as worn on the head.
This means that hearing aid users must face what they want to listen to, which limits
the situations where the directional technology can potentially benefit hearing in
noise.
Directional benefit is easily demonstrated in a laboratory test setup where the earlier-mentioned
conditions are met,[13] [14] [15] [16] [17] [18] [19] but in real-world settings, the perceived benefits of directionality have been shown to be less dramatic.[20] [21] Numerous factors likely play a role in this discrepancy. Real-world listening environments
often bear little resemblance to laboratory test environments (“acoustic scenes”),
as they are unpredictable in terms of access to visual cues, reverberation, type and
location of sounds of interest, and type and location of interfering noises. To complicate
matters further, any of these sounds may move and/or the hearing aid user may move
or change position.
Walden et al[22] observed significant directional benefit under laboratory conditions but then found
that this benefit did not carry over to everyday situations; subjective ratings of directionality were not significantly different from those of omnidirectionality. They identified individual factors that could account for the inconsistency. For one, hearing aid
users may not have learned to use the directionality feature. Appropriate use of selectable
directionality, which allows users to manually select a directional or omnidirectional
response, requires that they (1) analyze and recognize situations in which directionality
would be advantageous, (2) know how to activate it and do so, and (3) are able to
manipulate their surroundings to maximize the directional advantage. Additionally,
in order for hearing aid users to benefit from directionality, they must encounter
enough real-world situations where directionality is potentially beneficial. Finally,
directionality has the potential to interfere with users' ability to maintain awareness
of the listening environment or their ability to shift attention to other sound sources
in the environment.
Based on the aforementioned findings, it should come as no surprise that many hearing
aid users fit with selectable directionality do not use this feature. Cord et al[21] interviewed 112 hearing aid users who had been fit with selectable directionality
at least 6 months prior. They found that over 30% of them did not switch between the
omnidirectional and directional microphone modes. The participants who used the directional
mode described the listening environments where it was most helpful as those where
(1) noise was present, (2) the signal of interest was in front, and (3) the signal
was relatively near them. The influence of these acoustic and spatial characteristics
in determining microphone preference was corroborated by Surr et al,[23] who fit 11 individuals with selectable directional hearing aids and asked them to
keep detailed journals describing situations in which a particular microphone mode
was preferred. In a follow-up study, Walden et al[24] investigated to what extent characteristics of listening environments could predict
preference for omnidirectional versus directional microphone modes. They, too, found
that the directional microphone mode was preferred in listening environments with
background noise and where the signal source was located in front of and near the
hearing aid user. They suggested that “knowing only signal location and distance and
whether background noise is present or absent, omnidirectional/directional hearing
aids can be set in the preferred mode in most everyday listening situations.”
An important technological implication of the discussion thus far is that, to the extent that hearing aids can accurately characterize a listening environment's acoustic and spatial characteristics, they should also be able to select the preferred microphone mode much of the time.
Today's hearing aids typically implement a steering strategy that attempts to do just
that, making it easier for users to benefit from directionality without having to
consciously activate it. Most often, the hearing aid will use environmental classification
(see the article by Hayes in this issue for more details) to estimate the SNR of speech
and will select a directional response when SNR is low. The exact rationale, criteria,
and technical implementation differ depending on the hearing aid manufacturer.
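As a rough sketch of this kind of steering rationale (the detector inputs, thresholds, and decision rule here are hypothetical assumptions; as noted, the actual criteria and implementation differ between manufacturers):

```python
def select_microphone_mode(speech_detected: bool,
                           noise_level_db: float,
                           estimated_snr_db: float,
                           noise_threshold_db: float = 55.0,
                           snr_threshold_db: float = 5.0) -> str:
    """Toy steering rule: go directional only when speech is present,
    the environment is judged noisy, and the estimated SNR is poor."""
    if (speech_detected
            and noise_level_db >= noise_threshold_db
            and estimated_snr_db < snr_threshold_db):
        return "directional"
    return "omnidirectional"
```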
The relationship between directional benefit in the real world and in the clinic test
booth was also examined by Cord et al.[25] They examined whether successful users of directional microphones in everyday living
showed greater directional benefit in the clinic than people who were unsuccessful
users. If this were the case, this finding could be used clinically to identify users
who could benefit the most from directional microphone technology. Their findings
revealed that everyday success with directional microphone hearing aids could not
be reliably predicted by the magnitude of directional benefit in the clinic; the mean
directional benefit obtained in the test booth did not differ significantly between
participants who reported everyday success and participants who reported little or
no success. These findings support the conclusion that the benefit of directional microphones in everyday life may depend more on users' lifestyles, listening environments, hearing aid usage patterns, and other factors than on an inherent, individual ability to benefit.
Development of a Binaural Strategy for Using Directional Microphones
Based on the body of research, several things are clear about directional microphones
in hearing aids:
-
There is a potential benefit of significantly improved SNR that could help listening
in noisy conditions.
-
The real-world benefit may not be realized, and it is not easy to predict who will
benefit.
-
The real-world benefit may be enhanced by microphone mode switching based on acoustic
analysis of the environment.
-
Automatic switching to a particular microphone mode can also potentially be at odds with listener intent and natural listening behavior.
A novel way to think about solutions for hearing in noise in real-world situations
is to consider how technology might work in synergy with abilities that are inherent
to the user. An example of how this type of innovation has been applied in a completely
different domain is how smart devices use natural human movements and gestures to
control them. Rather than clicking buttons or turning knobs to navigate a modern smart
device, users can use their fingers in an intuitive way to touch, swipe, write, and
draw on touch-sensitive screens to carry out their desired tasks. In the case of directional
microphones, conventional wisdom has dictated that they should always be fit on both
ears to maximize the potential SNR benefit, which assumes that the technology does
the heavy lifting. This axiom ignores the contribution of the user's inherent auditory
processing abilities and how the technology might supplement these abilities.
Having two eyes allows the brain to construct a three-dimensional landscape of our
surroundings in the visual modality; similarly, having two ears enables us to form
a sound-based image of the surroundings in the auditory modality. The human auditory
system integrates, analyzes, and compares information from both ears. Collectively,
these processes are called binaural hearing. Among other benefits, binaural hearing
helps us hear in noise[26] (see the article by Derleth et al in this issue for more discussion about binaural
hearing). Additional perceptual improvements associated with binaural hearing include
localization, loudness, sound quality, noise suppression, and speech clarity. Binaural
hearing enables us to quickly and reliably detect and recognize sounds, making it
possible to selectively attend to something of specific interest, like a single voice
among many talkers.[27]
When in noise, a person can understand speech more easily with two ears than with
one because of the head shadow effect, binaural redundancy, and binaural squelch.
The head shadow effect is a purely physical phenomenon: When noise and the desired
signal come from different directions, the SNR will be better at the ear away from
the noise. This advantage is often referred to as the “better ear effect.” Binaural
redundancy refers to the advantage of receiving the same signal in both ears. This
includes binaural loudness summation (the loudness of a sound is greater if heard
with two ears than with only one). Binaural squelch relates to the auditory system's
ability to employ interaural time differences (ITDs) and interaural level differences
(ILDs) to spatially separate competing sounds and to attend to the ear with the better
SNR.[26]
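As a toy numerical illustration of the head shadow advantage (the attenuation value is an assumed average; real head shadow is strongly frequency dependent):

```python
# Speech arrives from the front and reaches both ears at about the same level,
# while noise from the right is attenuated at the left ear by the head shadow.
speech_db = 65.0
noise_at_right_ear_db = 62.0
head_shadow_db = 6.0  # assumed average attenuation at the ear away from the noise

snr_right = speech_db - noise_at_right_ear_db                     # 3 dB
snr_left = speech_db - (noise_at_right_ear_db - head_shadow_db)   # 9 dB
print(f"Better-ear advantage: {snr_left - snr_right:.0f} dB at the left ear")
```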
Implicit in the discussion of binaural hearing is that it also enables spatial hearing,
which is the ability to localize and externalize sounds in terms of direction and
distance. Spatial hearing is critical for constructing auditory images. In addition,
by providing a sense of the environment we are in, spatial hearing again helps us
segregate sounds to choose what to focus on. The acoustic cues upon which sound localization
is based — ITDs, ILDs, and monaural spectral cues — also enable spatial hearing.
In developing a strategy for directional microphones, it is key to understand that
binaural hearing advantages exist even in the presence of peripheral damage to the
auditory system. These advantages are almost as substantial as those for individuals with normal hearing, provided that amplification makes the relevant auditory cues audible. Essentially,
hearing aids make sounds audible, which can enable the same binaural advantages as
in normal hearing.[27]
Binaural hearing benefits become meaningful when listeners leverage them to achieve their listening goals, even though this largely happens unconsciously. Binaural listening
can be described in terms of two broad strategies: the “better ear strategy” and the
“awareness strategy.” According to the better ear strategy described by Zurek,[28] listeners will position themselves relative to the desired sound to maximize the
audibility of that sound, and they will rely on the ear with the best representation
or best SNR of that sound. The directivity patterns of both ears contribute to this
ability to focus, with the head shadow effect playing a critical role. The combined
directivity characteristics of the two ears form a perceptually focused beam that
the listener can take advantage of depending on the desired sound's location. Therefore,
if at least one ear has a favorable SNR, then the auditory system will take advantage
of it.
The awareness strategy is an extension of the better ear strategy. This strategy includes
the omnidirectional aspects of binaural listening that allow the listener to remain connected to and aware of the surrounding soundscape even while the head shadow effect improves the SNR in one of the two ears. Due to the geometric location of the two ears on the
head, the brain can either use the head shadow to enhance the sound of interest or
make the head acoustically “disappear” from the sound scene so that the listener can
attend to sounds all around.
Microphone directivity can help or hinder either of these listening strategies. For
example, hearing aids with an omnidirectional response might not help with listening
to speech in front as much as hearing aids with a directional response. Unfortunately,
a directional response could reduce the listener's awareness of their surroundings
and interfere with conversation-following behavior in group situations. These conflicting
demands can be resolved by leveraging the better ear strategy and the awareness strategy.
Specifically, if the hearing aids on the two ears are fitted with different directional
responses, then the user can apply the strategy that is consistent with their listening
intent.
Multiple studies have demonstrated the viability of the approach described earlier.
For example, Bentler and colleagues[29] measured speech recognition in noise and sound quality judgments in listeners with
different microphone configurations. These included bilateral omnidirectionality,
bilateral directionality, and two conditions where either the right hearing aid was
directional and the left was omnidirectional or vice versa. They found that directional
benefit on the speech recognition task was equivalent for all conditions where a directional
microphone was used, regardless of whether it was on one or both ears. There were
also no significant differences in sound quality judgments among the directional conditions.
This result confirmed that listeners take advantage of the ear with the best SNR and
that the ear with the poorer SNR did not improve or hinder speech recognition in the
better ear. Other investigations have corroborated these findings,[30] [31] [32] [33] although some have suggested that directional benefit could be slightly better with a bilateral directional response depending on the test conditions.[34]
The idea of applying directional microphone technology asymmetrically in hearing aid
fittings was tested in both laboratory and real-world conditions by Cord et al.[30] They specifically wanted to determine if an asymmetric fitting would be advantageous
in situations where directional processing is often preferred, and if it would be
detrimental in situations where omnidirectional processing is often preferred. In
the laboratory, participants performed significantly better with the asymmetric fitting
compared with the binaural omnidirectional fitting. In real-world use, participants
rated the asymmetric fitting significantly better than the omnidirectional fitting
in terms of ease of listening. Greater ease of listening with the asymmetric fitting
was found in listening environments that typically favor directional microphones.
In other environments, ease of listening was not significantly different with the
asymmetric fitting than the omnidirectional fitting. These real-world findings validated
the viability of asymmetric directionality as a strategy that can improve the overall
effectiveness of directional microphones.
A Strategy for Directionality Based on Binaural Hearing
With this literature review in mind, the remainder of this article describes the strategy
for using directional microphones in ReSound hearing aids. The underlying philosophy
is to supply the hearing aid user with the acoustic information that the auditory
system needs to resolve difficulties hearing in noise in a natural way[35] by allowing the user to selectively attend to a target sound while simultaneously
monitoring other sounds (see Broadbent[36] for further discussion on selective attention).
As described by Piechowiak et al,[37] the system design in ReSound hearing aids is built on a model of natural binaural
auditory and cognitive processing. This system accounts for acoustic effects, such
as the way hearing aids amplify sounds differently depending on frequency and direction
of arrival and head shadow effects. It also accounts for perceptual effects, such
as binaural unmasking.[28] Moreover, it incorporates a model of the binaural auditory process that combines
the signal from the left and right ears in a way that allows selective attention to
either a target sound or a background sound. By improving the SNR and increasing the
audibility of surroundings, the system optimizes both speech intelligibility and situational
awareness. Indeed, perceptual data from simulated bilateral responses in the study
by Piechowiak et al showed that this approach maintained speech recognition for speech
in front while increasing recognition for speech that was not in front. At the time
of writing, the most advanced application of this directional strategy in ReSound
hearing aids is called “All Access Directionality.”
All Access Directionality controls the microphone mode of each hearing aid depending
on the presence and direction-of-arrival of speech and noise in the environment. It
advances the asymmetric approach to include other microphone mode combinations to
best support listener intent and preferences for sound quality in different listening
environments. Front and rear speech detectors on each hearing aid estimate the location
of speech relative to the hearing aid user. As alluded to earlier, the
dual-microphone system on each hearing aid is used as a noise detector in addition
to a speech detector. When speech and/or noise is detected in the bilateral pair of
hearing aids, they use 2.4 GHz wireless technology to communicate with each other
and coordinate the microphone modes for an optimal binaural response.
All Access Directionality switches between three modes depending on the listening
environment: (1) a bilateral omnidirectional response called “Spatial Cue Preservation
mode”; (2) an asymmetric directional response called “Binaural Listening mode”; and
(3) a bilateral directional response called “Speech Intelligibility mode.” The three
All Access Directionality modes are derived from research on the optimal microphone
response of two hearing aids in different sound environments described earlier. [Table 2] provides the justification for each possible binaural microphone response.
Table 2
Published Findings on Optimal Binaural Microphone Response Have Been Instrumental in Developing the Four Bilateral Microphone Responses of All Access Directionality

Bilateral omnidirectional (Spatial Cue Preservation mode): In quiet environments, a bilateral omnidirectional response is strongly preferred by users.[24] [38]

Asymmetric, with omnidirectional on the right side and directional on the left side, or directional on the right side and omnidirectional on the left side (Binaural Listening mode): A directional response for one hearing aid and an omnidirectional response for the other hearing aid can improve ease of listening and awareness of surroundings as compared with bilateral fixed directional fittings.[30]

Bilateral directional (Speech Intelligibility mode): A bilateral directional response provides the greatest benefit when the speech signal is predominantly in front of the listener.[39]
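A minimal sketch of how such a three-mode controller might behave, based only on the high-level description in this article (the detector outputs, their combination, and the decision rule are illustrative assumptions, not ReSound's actual algorithm):

```python
def select_all_access_mode(noise_above_threshold: bool,
                           speech_in_front: bool,
                           speech_elsewhere: bool) -> str:
    """Choose among the three All Access Directionality modes (sketch only)."""
    if not noise_above_threshold:
        # Quiet to moderately complex environments, with or without speech
        return "Spatial Cue Preservation"
    if speech_in_front and not speech_elsewhere:
        # Noise present, speech detected only from the front
        return "Speech Intelligibility"
    # Noise present, speech not solely in front of the user
    return "Binaural Listening"
```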
Spatial Cue Preservation Mode
All Access Directionality is designed to be in the Spatial Cue Preservation mode when
the hearing aid user is in quiet and moderately complex listening environments, with
or without speech. Spatial Cue Preservation mode focuses on maintaining spatial hearing
cues as well as the naturalness of sound. The microphone configuration in Spatial
Cue Preservation mode differs depending on the location of the microphone on the hearing
aid. The spectral cues provided by the pinna are largely maintained with the hearing
aid microphone placement in custom in-the-ear hearing aids, as the microphone of these
hearing aids is located at the concha or in the ear canal, where incoming sounds are
picked up after the pinna filters them. Spatial hearing is provided naturally by in-the-ear
microphone placement; therefore, it makes sense for the Spatial Cue Preservation mode
to use an omnidirectional response for each hearing aid. However, the microphone location
on behind-the-ear (BTE) and receiver-in-the-ear (RIE) style hearing aids (above and
behind the pinna) compromises monaural spectral cues, disturbing hearing aid users'
ability to localize sounds. Therefore, in BTE and RIE hearing aids, Spatial Cue Preservation
mode supports spatial hearing in one of two ways.
Pinna Compensation
Spatial Cue Preservation mode uses a feature called “Spatial Sense” in BTE and conventional
RIE hearing aids. Spatial Sense combines a binaural compression algorithm with a pinna
compensation algorithm (see [Table 1]). The binaural compression algorithm uses wireless communication between the hearing
aids to restore high-frequency ILDs that are otherwise compromised when wide dynamic
range compression operates independently in each hearing aid (see the article by Derleth
et al in this issue for more discussion about binaural localization cues). The pinna
compensation part of the feature corrects the acoustical side effects caused by the
placement of the hearing aid microphones above and behind the ear. It recreates the
directionality of an average pinna by applying a front-facing directional mode in
the higher frequencies. The pinna compensation algorithm of Spatial Cue Preservation
mode has been shown to effectively reduce the number of front/back confusions compared
with traditional omnidirectional microphones.[40] [41] In a review of studies on pinna compensation, Xu and Han suggested that individual
differences in real-world performance with pinna compensation indicate that some hearing
aid users may experience a localization benefit relative to omnidirectionality while
others may not.[42] This agrees with our own research, in which some participants performed better with pinna compensation than with omnidirectional microphones while others did not.[41] We speculate that participants who benefitted the most from pinna compensation were
those whose unique pinna anatomy resulted in spectral filtering that closely resembled
the average that was used in the pinna compensation algorithm.
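A minimal sketch of the band-split idea behind pinna compensation follows; the filter order and the crossover frequency are assumptions for illustration only (the article does not specify them). The low band stays omnidirectional, while the high band uses a front-facing directional response that stands in for the average pinna's directivity.

```python
from scipy.signal import butter, sosfilt


def pinna_compensation(omni, front_facing_directional, fs=16000,
                       crossover_hz=2000.0):
    """Combine an omnidirectional low band with a front-facing directional
    high band to approximate average-pinna directivity (illustrative only)."""
    lowpass = butter(4, crossover_hz, "lowpass", fs=fs, output="sos")
    highpass = butter(4, crossover_hz, "highpass", fs=fs, output="sos")
    return sosfilt(lowpass, omni) + sosfilt(highpass, front_facing_directional)
```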
Microphone and Receiver-in-the-Ear
Some ReSound RIE hearing aids can be fit with a microphone and receiver-in-the-ear
(M&RIE) type of receiver. The M&RIE receiver (illustrated in [Fig. 1]) includes a microphone placed within the ear canal. In this case, Spatial Cue Preservation
mode uses the microphone in the ear canal instead of the microphones in the body of
the hearing aid. By using the microphone in the ear canal, the influence of the hearing
aid user's unique pinna anatomy is maintained. Compared with traditional omnidirectional
hearing aids, this technology has been shown to reduce localization errors and to
be preferred for sound quality.[41]
Figure 1 Illustration of Microphone and Receiver-In-the-Ear (M&RIE) type of receiver.
Binaural Listening Mode
All Access Directionality switches to Binaural Listening mode when the rear-facing
noise detector measures noise above a certain threshold. Speech may be present but
not solely in front of the user. Binaural Listening mode is intended to balance the
need to provide the user with access to their surroundings while, at the same time,
enhancing the intelligibility of speech coming from the front. Binaural Listening
mode puts the hearing aids in an asymmetric microphone mode where one hearing aid
is in a directional mode, and the other hearing aid is in a specially designed omnidirectional
mode that attempts to compensate for the head shadow effect when the hearing aid is
worn.[37] The hearing aid in directional mode provides enhanced SNR for speech in front, while
the hearing aid in omnidirectional mode maintains environmental awareness. It has
been shown that this asymmetric approach can significantly improve the speech intelligibility
of talkers that are not positioned in front of the user.[43] In this study, speech intelligibility for Binaural Listening mode was compared with
that for bilateral beamformers in an acoustic scene where the target talkers were located to the left of and behind the hearing aid user. As shown in [Figs. 2] and [3], speech reception thresholds were approximately 10 to 20 dB better with Binaural
Listening mode compared with the bilateral beamformers.
Figure 2 The green arrow indicates the position of the target talker, while the red arrows
show the positions of masking talkers. The white arrows indicate the positions of
loudspeakers that played speech-shaped noise. Mean speech reception threshold results
are shown for the target talker to the left for Binaural Listening mode in a pair
of ReSound hearing aids and bilateral beamformers in two pairs of hearing aids of
other brands. Lower values are better. The asterisks show significant differences,
where *** indicates p < 0.001. (Redrawn with permission from Canadian Audiologist).
Figure 3 Same as [Fig. 2], but for the target talker from behind. The asterisks show significant differences, where ** indicates p < 0.01 and *** indicates p < 0.001. (Redrawn with permission from Canadian Audiologist).
In Binaural Listening mode, the side that is in directional mode and the side that
is in omnidirectional mode are optimized depending on the listening environment. It
has previously been shown that intelligently changing the hearing aid which is in
directional mode and which is in omnidirectional mode improves speech intelligibility
in an acoustic scene where a talker is to one side of the hearing aid user and noise
is to the other side.[33] All Access Directionality adapts to such situations automatically.
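One plausible side-selection rule, consistent with the head shadow reasoning above, is sketched below. The actual selection logic is not described in this article and is certainly more sophisticated; the rule and the mode labels here are assumptions for illustration.

```python
def assign_binaural_listening_sides(noise_left_db: float,
                                    noise_right_db: float) -> dict:
    """Illustrative rule only: the ear exposed to more noise goes directional
    to suppress that noise, while the other ear stays in the
    awareness-preserving (compensated omnidirectional) response."""
    if noise_left_db > noise_right_db:
        return {"left": "directional", "right": "compensated_omnidirectional"}
    return {"left": "compensated_omnidirectional", "right": "directional"}
```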
A schematic example of Binaural Listening mode is shown in [Fig. 4]. The illustration in the left panel shows a directional response on the left ear.
The directional response, combined with the head shadow, implies that the rear-left quadrant may have reduced audibility. To maintain situational awareness in this case, the right hearing aid must compensate for the head shadow: in Binaural Listening mode, the response of the right hearing aid has been designed to “fill in” this “blind spot.” The different directional
responses on the two sides maximize the user's opportunity to hear in all directions.
The response on the enhanced omnidirectional side is based on the binaural hearing
model that optimizes SNR improvement and audibility of surroundings/situational awareness.[37]
Figure 4 Illustration of how the asymmetric directional configuration maximizes the audibility
of sounds all around the user by expanding the omnidirectional microphone response
to pick up sounds from the directional side.
Compared with earlier versions of the Binaural Listening mode, All Access Directionality
incorporates a bilateral beamformer to enhance the SNR benefit of the directional side
as much as possible.[44]
Speech Intelligibility Mode
The Speech Intelligibility mode is active when there is noise and when speech is detected
only in front of the hearing aid user. In this type of listening environment, it has
been shown that speech recognition scores improve in laboratory testing with a bilateral
directional response compared with an asymmetric response.[33] [34] In this mode, both hearing aids are configured to be directional. The Speech Intelligibility
mode of All Access Directionality uses a bilateral beamformer where wireless communication
between the hearing aids enables all four microphones in the bodies of the two hearing
aids to create a more directional beam than is possible with only two microphones[44] (for more information about the technology behind bilateral beamformers, see the
articles by Derleth et al and by Andersen et al in this issue). In listening environments
where the noise is diffuse, the beamforming algorithm weights the inputs of both hearing
aids' microphones equally. However, in listening environments with more noise on one
side or the other, the algorithm takes advantage of the head shadow by assigning greater
weights to the side with less noise than the side with more noise. In addition, the
bilateral beamformer is a three-band system. Below an adjustable crossover frequency,
the processing is omnidirectional.[45] This preserves ITDs, which are the dominant localization cue in the low frequencies.
Sound quality differences between different microphone modes are also minimized with
this approach.[46] [47] [48] Above 5000 Hz, a monaural hypercardioid response is applied in each hearing aid.
This prevents the adaptive bilateral beamformer from interfering with high-frequency
ILD cues for localization. The bilateral beamforming is then applied in the mid-band,
which contains the most salient frequencies for speech recognition.
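The three-band structure can be sketched as follows; the filter design, sample rate, and 1 kHz lower crossover are assumptions for illustration (the article states only that the lower crossover is adjustable and that the upper band starts at 5000 Hz).

```python
from scipy.signal import butter, sosfilt

FS = 24000  # assumed sample rate for the example


def three_band_directional_mix(omni, bilateral_beam, monaural_hypercardioid,
                               low_xover_hz=1000.0, high_xover_hz=5000.0, fs=FS):
    """Combine the three per-band signals described in the text:
    omnidirectional below the (adjustable) lower crossover to preserve ITDs,
    the bilateral beamformer output in the mid band, and a monaural
    hypercardioid above 5000 Hz to preserve high-frequency ILD cues."""
    low = sosfilt(butter(4, low_xover_hz, "lowpass", fs=fs, output="sos"), omni)
    mid = sosfilt(butter(4, [low_xover_hz, high_xover_hz], "bandpass",
                         fs=fs, output="sos"), bilateral_beam)
    high = sosfilt(butter(4, high_xover_hz, "highpass", fs=fs, output="sos"),
                   monaural_hypercardioid)
    return low + mid + high
```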
Summary
Directional microphone technology is readily available in today's hearing aids and
can significantly improve speech recognition in noise. However, despite its level
of sophistication, the technology itself does not guarantee that these benefits are
realized in daily use situations. In fact, directional microphones have the potential
to interfere with the user's awareness of their surroundings and their ability to
follow conversations. ReSound developed an approach to applying directional microphones
based on the philosophy that they should work in synergy with binaural hearing and users' inherent listening
strategies. The current strategy encompasses three listening modes that support the
natural ways in which people listen. Compared with a more conventional approach that
switches between bilateral omnidirectional and bilateral directional processing, this
strategy has been shown to have advantages in terms of sound quality, spatial hearing,
and improved SNR with maintained awareness of surroundings.