J Am Acad Audiol 2020; 31(01): 050-060
DOI: 10.3766/jaaa.18065

Evaluation of a Remote Microphone System with Tri-Microphone Beamformer

Jace Wolfe*, Mila Duke*, Erin Schafer†, Christine Jones‡, Lori Rakita‡, Jarrod Battles*

*   Hearts for Hearing Foundation, Oklahoma City, OK
†   Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX
‡   Phonak LLC, Warrenville, IL

Corresponding author

Jace Wolfe
Hearts for Hearing Foundation
Oklahoma City, OK 73012

Publication History

Publication Date:
25 May 2020 (online)

 

Abstract

Background:

Children with hearing loss often experience difficulty understanding speech in noisy and reverberant classrooms. Traditional remote microphone use, in which the teacher wears a remote microphone that captures her speech and wirelessly delivers it to radio receivers coupled to a child’s hearing aids, is often ineffective for small-group listening and learning activities. A potential solution is to place a remote microphone in the middle of the desk used for small-group learning situations to capture the speech of the peers around the desk and wirelessly deliver the speech to the child’s hearing aids.

Purpose:

The objective of this study was to compare speech recognition of children using hearing aids across three conditions: (1) hearing aid in an omnidirectional microphone mode (HA-O), (2) hearing aid with automatic activation of a directional microphone (HA-ADM) (i.e., the hearing aid automatically switches in noisy environments from omnidirectional mode to a directional mode with a cardioid polar plot pattern), and (3) HA-ADM with simultaneous use of a remote microphone (RM) in a “Small Group” mode (HA-ADM-RM). The Small Group mode is designed to pick up multiple near-field talkers. An additional objective of this study was to compare the subjective listening preferences of children between the HA-ADM and HA-ADM-RM conditions.

Research Design:

A single-group, repeated measures design was used to evaluate performance differences obtained in the three technology conditions. Sentence recognition in noise was assessed in a classroom setting with each technology, while sentences were presented at a fixed level from three different loudspeakers surrounding a desk (0, 90, and 270° azimuth) at which the participant was seated. This arrangement was intended to simulate a small-group classroom learning activity.

Study Sample:

Fifteen children with moderate to moderately severe hearing loss.

Data Collection and Analysis:

Speech recognition was evaluated in the three hearing technology conditions, and subjective auditory preference was evaluated in the HA-ADM and HA-ADM-RM conditions.

Results:

The use of the remote microphone system in the Small Group mode resulted in a statistically significant improvement in sentence recognition in noise of 24 and 21 percentage points compared with the HA-O and HA-ADM conditions, respectively (individual benefit ranged from −8.6 to 61.1 and 3.4 to 44 percentage points, respectively). There was not a significant difference in sentence recognition in noise between the HA-O and HA-ADM conditions when the remote microphone system was not in use. Eleven of the 14 participants who completed the subjective rating scale reported at least a slight preference for the use of the remote microphone system in the Small Group mode.

Conclusions:

Objective measures of sentence recognition and subjective ratings indicated that use of remote microphone technology with the Small Group mode may improve hearing performance in small-group learning activities. Sentence recognition in noise improved by 24 percentage points compared with the HA-O condition, and children expressed a preference for the use of the remote microphone Small Group technology regarding listening comfort, sound quality, speech intelligibility, background noise reduction, and overall listening experience.



INTRODUCTION

Research has clearly indicated that children with hearing loss are likely to encounter difficulty with communication in noisy and reverberant situations and when the speech signal of interest originates from a distance (e.g., more than a meter) from the listener ([Finitzo-Hieber and Tillman, 1978]; [Nabelek and Nabelek, 1985]; [Crandell and Bess, 1986]; [Crandell, 1991]; [1992]; [1993]; [Wolfe et al, 2013]). It is also well known that classroom acoustics are often characterized by moderate to high levels of competing noise and reverberation ([Knecht et al, 2002]; [Choi and McPherson, 2005]; [Massie and Dillon, 2006]; [Crukley et al, 2011]; [Ronsse and Wang, 2013]). Remote microphone technologies are currently established as the most effective means to improve speech recognition in noise for hearing aid (HA) users ([Hawkins, 1984]; [Ricketts, 2001]; [Wolfe et al, 2013]). In a conventional remote microphone system, a microphone worn by the primary talker of interest (e.g., the classroom teacher) captures his/her speech and wirelessly delivers it via a radio transmitter to radio receivers that are coupled to the student’s HAs. In previous research studies in which sentence recognition has been evaluated at SNRs often encountered in real-world settings (e.g., 0 to +5-dB SNR) ([Pearsons et al, 1977]; [Crukley et al, 2011]), the use of remote microphone systems has allowed for an average improvement of 30 to 60 percentage points in sentence recognition in noise relative to use of HAs alone ([Hawkins, 1984]; [Madell, 1992]; [Lewis et al, 2004]; [Wolfe et al, 2009]; [2013]). As a result, remote microphone technology is most likely the best solution to improve speech recognition in noise in situations in which the primary talker is known and constant over a period of time (e.g., traditional classroom lecture). Of note, when used in this traditional or conventional mode, the remote microphone polar plot pattern is typically directional so that when the microphone is pointed toward the talker’s mouth, the primary speech signal of interest is captured while surrounding noise is attenuated.

Unfortunately, there are several instances in which the traditional use of a single remote microphone system may not fully address the communication needs of a student with hearing loss. For example, students may spend a portion of the day in a classroom working on assignments in small groups composed of their peers. In fact, this type of learning activity occurs quite frequently in contemporary educational settings. [Feilner (2016)] observed students with hearing loss in multiple classrooms and schools in several countries during daily activities. The various educational activities were classified into different categories based on the teaching style or teaching method. Overall, 22% of the school day consisted of frontal instruction/traditional lecture (e.g., the teacher lectured from the front of the classroom), whereas 13% of the school day consisted of students working individually (e.g., the student worked on an assignment at his/her desk and received instruction/direction from the teacher or asked the teacher questions as needed). Because the teacher’s speech is the primary signal of interest in these situations, traditional use of remote microphone technology would be expected to provide substantial improvement in speech recognition and is likely to be the best technological solution available to a student with HAs.

[Feilner (2016)] also found that 22% of the school day consisted of students with hearing loss working in small groups with their peers, whereas 13% of the day consisted of “interactive lessons” in which there were multiple signals of interest that changed rapidly in regard to source and direction. In addition, Feilner noted that 22% of the day consisted of “exciting or other activities” (e.g., lunch in the cafeteria, walking in the halls between classes, and recess on the playground), in which a remote microphone may not be the ideal solution to hear multiple talkers and/or environmental sounds of interest. In short, the proportion of the day in which traditional remote microphone use would be expected to optimize hearing performance (35%) is less than the proportion of the day in which remote microphone use would be expected to offer little to no improvement in listening abilities (65%).

Another potential solution to improve speech recognition during small-group educational activities is the use of a remote microphone transmitter that may be placed on a desk or table in the center of the small group whereby the speech of the talkers may be captured closer to the source of the signal. In this application, the polar plot pattern of the remote microphone is typically omnidirectional to allow for the capture of speech around the entire group. In addition, the sensitivity of the microphone is often reduced to capture the signals of interest that are proximal in the small group setting while limiting the acquisition of the competing noise from outside the group. This approach differs from traditional remote microphone use in which the microphone is worn next to a single talker’s mouth and uses directional technology to focus on the talker’s speech and attenuate sounds from other directions. As a result, remote microphone systems featuring these capabilities often allow for manual (via a user-controlled switch) or automatic switching of the function of the microphone mode.

For example, some modern remote microphone systems are equipped with accelerometers and/or gyroscopes that allow for detection of the spatial orientation of the microphone/transmitter and automatic activation of the microphone mode deemed most appropriate for the use case associated with the detected orientation. Specifically, when the microphone is used in the “Small Group” mode (i.e., lying on a table in the horizontal plane), a three-microphone beamformer is activated to adapt the beam of focus to any input from 360° around the device (within four feet from the device) (see [Figure 1A]). By contrast, when the microphone/transmitter is determined to be worn in the “Lanyard” mode (i.e., positioned in the vertical plane as it would be when worn around the neck on a lanyard and in close proximity to a teacher’s mouth), the microphone automatically switches to fixed directional mode to focus on the teacher’s speech and to attenuate competing noise (see [Figure 1B]). There are no published studies evaluating the potential benefits and limits of the use of a remote microphone system that automatically switches to an adaptive directional mode designed to optimize hearing performance in small-group educational settings.
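To make the orientation-driven switching described above concrete, the sketch below shows, in hypothetical Python, how an accelerometer reading might be mapped to a microphone mode. The threshold, class, and field names are illustrative assumptions and do not represent the manufacturer's implementation.

```python
# Hypothetical sketch of orientation-based microphone mode switching, based on
# the behavior described above. Thresholds and names are illustrative only.
from dataclasses import dataclass


@dataclass
class AccelerometerReading:
    x: float  # lateral acceleration (g)
    y: float  # longitudinal acceleration (g)
    z: float  # vertical acceleration (g)


def select_microphone_mode(reading: AccelerometerReading) -> str:
    """Return 'small_group' when the transmitter lies flat (horizontal plane)
    and 'lanyard' when it hangs vertically, as when worn around the neck."""
    # When lying flat on a desk, gravity loads mostly onto the z-axis.
    if abs(reading.z) > 0.8:      # assumed threshold for "horizontal"
        return "small_group"      # tri-microphone beamformer, 360-degree pickup
    return "lanyard"              # fixed directional beam toward the talker


print(select_microphone_mode(AccelerometerReading(x=0.02, y=0.05, z=0.99)))  # small_group
print(select_microphone_mode(AccelerometerReading(x=0.01, y=0.98, z=0.10)))  # lanyard
```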

Figure 1 Microphone activation associated with two different remote microphone use applications, (A) Small Group mode using the three-microphone beamformer for adaptively targeting multiple talkers, (B) Lanyard (Teacher) mode using a dual-microphone beamformer for maximum focus on a single talker. (This figure appears in color in the online version of this article.)

Study Objectives

In light of the aforementioned discussion, the objectives of the present study were as follows:

  • To compare speech recognition of school-age children who use HAs across three technology conditions: (a) HAs alone in the omnidirectional microphone mode (HA-O), (b) HAs alone in adaptive directional mode (HA-ADM) (i.e., the HA automatically switches in noisy environments from omnidirectional mode to a directional mode with a cardioid polar plot pattern), and (c) HA-ADM with simultaneous use of a remote microphone system designed for small-group educational activities (HA-ADM + RM) (i.e., the remote microphone automatically switches to the Small Group mode when lying on a desk), evaluated in a classroom setting that mimics a small-group learning activity at school.

  • To compare the perceptual preferences of the same school-age children between the HA-ADM and HA-ADM + RM conditions.



MATERIALS AND METHODS

Participants

Information about the participants in this study is provided in [Table 1]. The participant inclusion criteria were as follows:

  • Participants were 7 to 17 years old (mean = 12.95). This age range was selected to allow evaluation of school-age children and also to include children who were mature enough to participate in the testing being completed in this study. Also, this age range is similar to previous research examining remote microphone technologies in children (e.g., [Wolfe et al, 2013]).

  • Participants had a four-frequency pure-tone average between 35- and 75-dB HL in the better ear. This hearing loss range was selected because it allowed assessment of children for whom the use of amplification is imperative for success in the classroom while also excluding children who may have had too much hearing loss to successfully complete the aided assessments conducted in this study. The participants had bilateral hearing loss with symmetrical degrees of hearing loss (i.e., interaural pure-tone averages within 20 dB). Study participants were recruited from the patient database of the clinic where the data were collected. Air and bone conduction pure-tone thresholds were measured using conventional Hughson-Westlake procedures ([Carhart and Jerger, 1959]) at the beginning of this study to ensure the children met audiometric inclusion criteria. Audiometric evaluation was completed with a Grason-Stadler (GSI) AudioStar Pro audiometer coupled to RadioEar IP-30 insert earphones and a RadioEar B-81 bone conduction oscillator.

  • All children were experienced binaural users of behind-the-ear HAs.

  • All children had a consonant-nucleus-consonant (CNC) ([Peterson and Lehiste, 1962]) word recognition score at 60 dBA of at least 60% correct in quiet in the best aided condition. This criterion was selected to ensure that children were unlikely to encounter floor effects on the speech recognition in noise testing completed in this study. CNC word recognition was assessed before the aided testing completed in this study. CNC word recognition testing was completed in an audiometric test booth with words presented from a loudspeaker located 1 m directly in front of the participant at a presentation level of 60 dBA at the location of the participant.

  • All participants were able to read written instructions and complete questionnaires in English.

  • All participants in this study used spoken language as their primary mode of communication.

Table 1

Demographic Information and Speech Recognition Scores for Individual Participants

Participant # | Age | Years of HA Use | PTA Right | PTA Left | CNC Score | SNR (dB) | HA-O Score | HA-ADM Score | HA-ADM-RM Score
1 | 12.3 | 6.5 | 63.8 | 62.5 | 92 | 0 | 59.7 | 62.3 | 81.5
2 | 13.2 | 6.8 | 56.3 | 75.0 | 90 | 0 | 61 | 60.9 | 69.5
3 | 12.4 | 6.5 | 47.5 | 45.0 | 100 | 0 | 75.5 | 88.1 | 97.4
4 | 14.2 | 9.5 | 55.0 | 52.5 | 94 | 0 | 73.6 | 47.4 | 91.4
5 | 11.4 | 8.5 | 45.0 | 45.0 | 96 | −10 | 8.4 | 14.3 | 33.8
6 | 10.8 | 10.8 | 73.8 | 70.0 | 100 | 0 | 65.4 | 75.3 | 96.69
7 | 11.0 | 6.2 | 48.8 | 47.5 | 100 | 0 | 47.7 | 61.7 | 83.4
8 | 15.3 | 11.5 | 50.0 | 36.3 | 98 | −5 | 41.5 | 37 | 64.9
9 | 13.8 | 7.0 | 68.8 | 67.5 | 94 | 0 | 37.1 | 34.4 | 72.9
10 | 10.0 | 7.3 | 63.8 | 35.0 | 100 | +5 | 52.8 | 27.2 | 44.2
11 | 14.8 | 15.0 | 60.0 | 47.5 | 84 | 0 | 7.1 | 49.7 | 68.2
12 | 14.3 | 8.7 | 61.3 | 60.0 | 90 | 0 | 65.4 | 84.1 | 87.7
13 | 15.9 | 4.0 | 62.5 | 50.0 | 96 | 0 | 37.1 | 41.1 | 61.7
14 | 15.6 | 10.0 | 73.8 | 76.3 | 82 | 0 | 53.5 | 40.4 | 75.9
15 | 9.3 | 9.33 | 53.8 | 50.0 | 90 | 0 | 34.6 | 39.6 | 55
Avg | 12.9 | 8.5 | 58.9 | 54.7 | 93.7 | −0.7 | 48.0 | 50.9 | 72.3
SD | 2.1 | 2.7 | 8.8 | 12.7 | 5.7 | 3.2 | 20.8 | 20.9 | 18.5

Note: All scores are in percent correct. HA-ADM = hearing aid with automatic activation of a directional microphone; HA-ADM-RM = HA-ADM mode and remote microphone technology; PTA = pure-tone average at 500, 1000, 2000, and 4000 Hz; age is in years; SNR = signal-to-noise ratio during testing with the speech signal at 70 dBA.


Fifteen children (10 males and 5 females) ranging in age from 9 to 15 years (mean age = 12.9 years; SD = 2.1) participated in this study. Mean audiometric thresholds of the participants are provided in [Figure 2], and additional participant demographic data are provided in [Table 1]. All of the children had sensorineural hearing loss. The study was approved by the Western Institutional Review Board (WIRB).

Figure 2 Mean audiometric thresholds in dB HL (bars indicate one standard deviation).


Equipment

Participants in this study were fitted binaurally with Phonak Sky V90 behind-the-ear HAs coupled to their personal earmolds. Probe microphone measures were conducted with the Audioscan Verifit HA analyzer to verify the output of the HAs for each child. The HAs were programmed to provide an output that matched the Desired Sensation Level version 5.0 (DSL 5.0) pediatric targets (±5 dB) at 250, 500, 1000, 2000, 4000, and 6000 Hz for the Audioscan Verifit Standard Speech signal presented at 55, 65, and 75-dB SPL. Furthermore, the prescribed maximum output was set to DSL 5.0 real ear aided response targets for an 85-dB SPL input signal. Aided speech recognition and subjective preference were evaluated in the same session in which the HAs were fitted. Although the children were not provided with an opportunity to acclimate to the HAs, it should be noted that all children used binaural Phonak HAs fitted to DSL 5.0 prescriptive targets before their inclusion in this study.
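As a simple illustration of the ±5 dB verification criterion described above, the following sketch checks whether a set of measured real-ear outputs falls within tolerance of prescriptive targets at the verification frequencies used in this study. The numeric values are hypothetical and are not data from this study.

```python
# Minimal sketch of a +/-5 dB target-matching check; all values are illustrative.
AUDIO_FREQS_HZ = [250, 500, 1000, 2000, 4000, 6000]


def within_tolerance(measured_db_spl, target_db_spl, tolerance_db=5.0):
    """Return True if the measured output matches the prescriptive target
    within the stated tolerance at every verification frequency."""
    return all(abs(m - t) <= tolerance_db for m, t in zip(measured_db_spl, target_db_spl))


# Hypothetical measured outputs and DSL 5.0 targets for a 65-dB SPL speech input
targets_65_db_spl = [62, 68, 74, 76, 72, 65]
measured_65_db_spl = [60, 66, 75, 78, 70, 63]

for f, m, t in zip(AUDIO_FREQS_HZ, measured_65_db_spl, targets_65_db_spl):
    print(f"{f} Hz: deviation {m - t:+.1f} dB")
print(within_tolerance(measured_65_db_spl, targets_65_db_spl))  # True: all within +/-5 dB
```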



Test Environment and Materials

Assessment of speech recognition was conducted in a 25′3″ by 24′8″ by 9′ unoccupied classroom with an ambient noise level of approximately 44 dBA as measured with a Quest Technologies 1200 Type 1 sound level meter. The reverberation time required for 60-dB attenuation (RT60) was 0.6 seconds (averaged across frequency), which is similar to a typical classroom ([Knecht et al, 2002]; [Crukley et al, 2011]). Speech perception in noise was assessed using a multiple-loudspeaker array of Genelec 8020B loudspeakers. The participant sat in the middle of the room behind a desk that was surrounded by three near-field loudspeakers located at 0, 90, and 270° relative to the desk, from which the target speech was presented (see [Figure 3]). Four loudspeakers were located in the corners of the room and were used to present uncorrelated classroom noise ([Schafer and Thibodeau, 2006]) (see [Figure 3]).
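For readers who wish to recreate the loudspeaker layout, the sketch below converts the azimuths and distances reported here and in Figure 3 into Cartesian coordinates centered on the desk-mounted remote microphone. The far-field (corner) azimuths are assumed values, because only their corner placement is specified.

```python
# Sketch of the loudspeaker geometry; corner azimuths are assumptions.
import math


def speaker_position(azimuth_deg: float, distance_m: float):
    """Convert an azimuth (0 degrees = directly in front of the listener) and a
    distance into x/y coordinates centered on the remote microphone."""
    a = math.radians(azimuth_deg)
    return (round(distance_m * math.sin(a), 2), round(distance_m * math.cos(a), 2))


# Near-field speech loudspeakers at 0, 90, and 270 degrees and 0.7 m (per Figure 3)
near_field = {az: speaker_position(az, 0.7) for az in (0, 90, 270)}
# Far-field noise loudspeakers in the corners at 2.8 m; corner azimuths assumed
far_field = {az: speaker_position(az, 2.8) for az in (45, 135, 225, 315)}
print(near_field)
print(far_field)
```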

Figure 3 Loudspeaker arrangement used in the study. Relative positions of the listener, touchscreen microphone (shaded box), near-field speakers (black), and far-field speakers (white). The listener and near-field speakers were positioned 0.7 m away from the touchscreen microphone; far-field speakers were positioned 2.8 m away from the touchscreen microphone.


Test Methods and Equipment

All assessments administered in this study were completed within one test session, which typically took about two hours. The children were provided with 5- to 10-minute breaks between the HA fitting and speech recognition assessment portions of this study and again before the assessment of subjective preference.

Evaluation of Speech Recognition in Noise

Assessment of the participants’ speech recognition in noise was evaluated in three conditions: (a) HA-O, (b) HA-ADM, which automatically switched to a directional microphone mode with a cardioid polar plot pattern in the noisy situation evaluated in this study, and (c) HA-ADM with simultaneous use of the Phonak Roger Touchscreen remote microphone in the Small Group mode (i.e., HA-ADM-RM). The Small Group mode was automatically activated and could be verified by an icon on the touchscreen.

The Phonak Roger Touchscreen remote microphone is a digital radio frequency transmitter (operating in the 2.4-GHz band) that adaptively increases the gain of the radio frequency signal when the ambient noise exceeds 57-dB SPL. The Small Group mode of the Roger Touchscreen uses a three-microphone beamformer to adaptively direct the beam toward the direction from which the speech signal arrives and to suppress noise arriving from other directions; speech may originate from any azimuth (360°) around the transmitter. The Small Group mode is designed so that the Roger Touchscreen microphone may be placed in the center of a group of two to five members during group learning or listening activities. Specific signal characteristics, such as the SNR and signal level, are analyzed and used to localize speech information and to identify the talker’s direction. The automatic analysis of the acoustics of the listening environment allows the Small Group mode to automatically follow the conversation by always focusing the beam of sensitivity toward the active talker. The Small Group mode uses a system of three microphones that are oriented orthogonally to one another (i.e., a 90° angle between each of the microphones) and that allow the creation of four directional beams. [Figure 4] provides a visual representation of the beams that may be automatically directed toward the active talker. The Phonak Roger Touchscreen contains an accelerometer, which is used to determine the orientation and motion of the device. The Small Group mode is automatically activated when the Roger Touchscreen is placed on a flat surface, such as a desk or table.
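The following sketch abstracts the core idea of the Small Group mode as described above: estimate the SNR in each of several fixed beams and steer toward the beam that currently contains the best speech signal. This is an illustrative simplification, not the manufacturer's algorithm; the beam layout, threshold, and SNR values are assumptions.

```python
# Simplified beam-selection idea: pick the beam with the best estimated SNR.
# Illustrative only; not the device's actual signal processing.
from typing import Dict, Optional


def select_active_beam(beam_snr_db: Dict[int, float], threshold_db: float = 0.0) -> Optional[int]:
    """Pick the beam (keyed by its azimuth) with the highest estimated SNR,
    provided it exceeds a minimum threshold; otherwise report no active talker."""
    best_azimuth = max(beam_snr_db, key=beam_snr_db.get)
    if beam_snr_db[best_azimuth] < threshold_db:
        return None
    return best_azimuth


# Four fixed beams roughly 90 degrees apart, as suggested by Figure 4 (hypothetical SNRs)
estimates_db = {0: 2.5, 90: 7.8, 180: -1.0, 270: 0.3}
print(select_active_beam(estimates_db))  # 90 -> output steered toward that talker
```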

Figure 4 A visual representation of the beamforming capabilities of the Phonak Roger Touchscreen Small Group mode. (This figure appears in color in the online version of this article.)

Sentence recognition was evaluated with Institute of Electrical and Electronics Engineers (IEEE) sentences. As discussed in the following paragraphs, the sentences were randomly presented from each of three different loudspeakers. The Quest Technologies 1200 sound level meter was used to ensure that the presentation level of the sentences presented from each of the three loudspeakers was fixed at 70 dBA when measured at the location of the remote microphone. Before assessment in each of the three technology conditions, the level of classroom noise was adaptively adjusted to determine the SNR required for IEEE sentence recognition to fall in the 30–50% range (SNR-50) with the HA in the omnidirectional microphone mode and without the use of remote microphone technology. More specifically, the level of the competing noise was initially set at 60 dBA (i.e., +10-dB SNR), and sentence recognition was evaluated with a full list (10 sentences) of IEEE sentences. If the participant’s score exceeded 50% correct, the level of the competing noise was increased in 5-dB steps until a sentence recognition score between 30% and 50% correct was obtained for a full list (10 sentences) of IEEE sentences. The examiner noted the SNR that yielded a score between 30% and 50% correct, and the assessment of the three technology conditions (HA-O, HA-ADM, and HA-ADM-RM) was completed at that same SNR. The order in which performance was tested for each of the three technology conditions was counter-balanced across participants. For each of the three technology conditions, speech recognition in noise (in percent correct) was evaluated with two full lists (10 sentences per list; 20 sentences total) of IEEE sentences. The children were asked to repeat the words in the sentences that were presented, and the examiner scored each word that was correctly repeated (i.e., total percent correct = number of words repeated correctly divided by the total number of words within the sentences). Of note, although the HA-O served as the baseline condition for which the examiner determined the SNR yielding a score of 30–50% correct, the HA-O condition was evaluated again in a counter-balanced order with the remaining conditions to ensure that an order effect did not influence performance measured in the HA-O condition. Also of note, some of the participants’ scores in the HA-O study condition fell outside of the 30–50% correct range, a finding for which there is no definitive explanation.
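The noise-level adjustment procedure described above can be summarized as a simple loop: with speech fixed at 70 dBA, the noise starts at 60 dBA (+10-dB SNR) and is raised in 5-dB steps until a full IEEE list is scored between 30% and 50% correct. The sketch below illustrates that logic; the listener model, the stopping floor, and all numeric scores are hypothetical.

```python
# Sketch of the SNR-50 search described above. The scoring function is a
# stand-in for the live listening test; the -15 dB floor is an added safeguard.
SPEECH_LEVEL_DBA = 70.0


def percent_correct(words_correct: int, words_total: int) -> float:
    """Word-level scoring used in the study: correctly repeated words / total words."""
    return 100.0 * words_correct / words_total


def find_snr_50(run_list_at_snr, start_noise_dba=60.0, step_db=5.0, floor_snr_db=-15.0):
    """Raise the noise level until the list score falls into the 30-50% window.
    run_list_at_snr(snr_db) must return the percent-correct score for one list."""
    noise_dba = start_noise_dba
    while True:
        snr_db = SPEECH_LEVEL_DBA - noise_dba
        score = run_list_at_snr(snr_db)
        if 30.0 <= score <= 50.0 or snr_db <= floor_snr_db:
            return snr_db, score
        noise_dba += step_db  # harder condition: more noise, lower SNR


# Hypothetical listener whose score drops roughly 6 points per dB of SNR lost
snr, score = find_snr_50(lambda snr: max(0.0, min(100.0, 50.0 + 6.0 * snr)))
print(snr, score)  # e.g., 0.0 dB SNR with a score of 50.0%
```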

Within each condition, the sentences were randomly presented from each of the three loudspeakers surrounding the participant’s desk, whereas uncorrelated classroom noise was presented from the four loudspeakers located in the corners of the classroom at a level corresponding to the previously determined SNR-50 (see [Figure 3]). Although the loudspeaker from which the sentences were presented was randomly alternated, within each technology condition, seven sentences were presented from one loudspeaker, seven sentences were presented from another loudspeaker, and six sentences were presented from the third loudspeaker. Because the order of technology conditions and sentence lists was counter-balanced across participants, an equal number of sentences was presented from each of the three loudspeaker locations (i.e., 0, 90, and 270°).
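The sketch below shows one way the 20 sentences per condition could be assigned to the three near-field loudspeakers with the 7/7/6 split described above, presented in random order. The function name and seed are illustrative assumptions.

```python
# Illustrative randomization of sentence-to-loudspeaker assignment (7/7/6 split).
import random


def assign_loudspeakers(azimuths=(0, 90, 270), counts=(7, 7, 6), seed=None):
    """Return a shuffled list of 20 loudspeaker azimuths with the stated split."""
    order = [az for az, n in zip(azimuths, counts) for _ in range(n)]
    rng = random.Random(seed)
    rng.shuffle(order)
    return order


print(assign_loudspeakers(seed=1))  # 20 azimuths containing 7, 7, and 6 occurrences
```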



Assessment of Subjective Preference

After the speech recognition assessment was completed, the examiner evaluated the participant’s subjective preference for the HA in directional mode with simultaneous use of the remote microphone in the Small Group mode compared with the HA in directional mode without the remote microphone. Specifically, the participants listened to a list of IEEE sentences in two conditions: (a) HA-ADM and (b) HA-ADM-RM (of note, the HA-O condition was not evaluated because it was presumed that the HA would adaptively switch to the directional microphone mode in a noisy environment with a noise level similar to what was used in this study). The presentation level of the sentences was fixed at 70 dBA at the location of the remote microphone, and the classroom noise level was set to the level that was used in the speech recognition portion of this study. After listening to a full list of 10 IEEE sentences in each condition, the children were asked to rate their preference for the technology conditions under evaluation across five listening domains: (a) listening comfort (e.g., the children were asked, “which made it more comfortable for you to listen?”), (b) intelligibility of speech (e.g., the children were asked, “which made the speech more clear and easier to understand?”), (c) sound quality (e.g., the children were asked, “which had the best sound quality?”; “which sounded the best?”), (d) reduction of background noise (e.g., the children were asked, “which made the noise go away the most?”), and (e) overall preference (e.g., the children were asked, “which was your overall favorite?”; “which did you like the most?”).

For each listening domain, the participants expressed their preference for one of the two listening programs by selecting from five options, each of which was assigned a value by the examiner: (a) Program A is much better than Program B (+2), (b) Program A is a little better than Program B (+1), (c) Programs A and B are the same (0), (d) Program B is a little better than Program A (−1), and (e) Program B is much better than Program A (−2). The order in which each program was evaluated was counter-balanced across each listening domain. The children were blinded as to the function of the technology corresponding to programs A and B (and they were unaware of the function of the remote microphone, which was placed in front of them on the table throughout the entire assessment). The Appendix provides an example of the form used to acquire the children’s preference across each listening domain.
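The rating scale described above can be coded numerically as shown in the sketch below, with the sign flipped when the remote microphone condition was assigned to Program B so that positive values always favor the remote microphone. The response strings and function name are illustrative assumptions.

```python
# Illustrative coding of the five-point preference scale described above.
RATING_VALUES = {
    "A much better than B": +2,
    "A a little better than B": +1,
    "A and B the same": 0,
    "B a little better than A": -1,
    "B much better than A": -2,
}


def score_domain(response: str, program_a_is_rm: bool) -> int:
    """Convert a verbal response into a signed score oriented so that positive
    values always favor the remote microphone condition, regardless of whether
    it was labeled Program A or Program B for that participant."""
    value = RATING_VALUES[response]
    return value if program_a_is_rm else -value


print(score_domain("A a little better than B", program_a_is_rm=False))  # -1: slightly favors HA-ADM
```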



RESULTS

Mean IEEE sentence recognition scores (with 1 SD bars) are shown in [Figure 5], and individual participant data are provided in [Table 1]. To examine any significant differences across the conditions, a one-factor repeated measures analysis of variance was used. The analysis revealed a significant main effect of condition, F (2, 45) = 24.8, p < 0.0001. A post-hoc analysis was conducted with the Tukey–Kramer multiple comparisons test, which revealed that the HA in directional mode with simultaneous use of the Phonak Roger Touchscreen remote microphone in the Small Group mode resulted in significantly better speech recognition than the remaining two hearing aid–alone conditions (p < 0.05).
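For illustration, the sketch below shows one way a one-factor repeated measures analysis could be run in Python on the individual scores from Table 1 using statsmodels. This is not the authors' analysis software, and the post hoc Tukey–Kramer comparisons reported above would require an additional routine not shown here.

```python
# Sketch of a one-factor repeated measures ANOVA on the Table 1 scores.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant per condition (values from Table 1)
scores = pd.DataFrame({
    "subject": list(range(1, 16)) * 3,
    "condition": ["HA-O"] * 15 + ["HA-ADM"] * 15 + ["HA-ADM-RM"] * 15,
    "percent_correct": [
        59.7, 61, 75.5, 73.6, 8.4, 65.4, 47.7, 41.5, 37.1, 52.8, 7.1, 65.4, 37.1, 53.5, 34.6,
        62.3, 60.9, 88.1, 47.4, 14.3, 75.3, 61.7, 37, 34.4, 27.2, 49.7, 84.1, 41.1, 40.4, 39.6,
        81.5, 69.5, 97.4, 91.4, 33.8, 96.69, 83.4, 64.9, 72.9, 44.2, 68.2, 87.7, 61.7, 75.9, 55,
    ],
})

result = AnovaRM(scores, depvar="percent_correct", subject="subject", within=["condition"]).fit()
print(result)  # F test for the main effect of technology condition
```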

Figure 5 Mean speech recognition scores using three different technology conditions: (1) HA alone with omnidirectional microphone mode, (2) HA alone with adaptive directional microphone mode, and (3) HA with adaptive directional microphone technology along with simultaneous use of a remote microphone in the Small Group mode. Vertical lines represent one standard deviation.

The individual subjective ratings (n = 14) across the five domains (comfort, intelligibility, quality, background noise, and overall preference) are shown in [Figure 6] (of note, the subjective task was not completed with one participant because of technical difficulties with the study equipment). Across all domains, most participants rated HA-ADM-RM equal to or higher than the HA-ADM.

Figure 6 Mean subjective ratings across listening domains while using the (1) HA alone with adaptive directional microphone mode and (2) HA with adaptive directional microphone technology along with simultaneous use of a remote microphone in the Small Group mode. Vertical lines represent one standard deviation.


DISCUSSION

Speech Recognition in Noise

The most clinically significant finding of this study is that the Small Group application of the remote microphone provided a significant improvement in speech recognition in noise in a simulated small-group learning environment relative to the remaining conditions. This represents the first published report showing the potential benefit of placing a remote microphone in the middle of a desk to allow a child with HAs to better understand his or her peers when involved in a small-group learning activity. This finding is relevant because it supports a potential solution for a problematic situation for which, until now, there were few alternatives to optimize the hearing performance of children with hearing loss.

As previously mentioned, it is possible for children to use a multitalker network of remote microphone systems so that each peer in a small group is outfitted with a personal remote microphone. Most likely, this alternative would allow better speech recognition in noise than what was obtained in this study with a single remote microphone in the Small Group mode. However, a multitalker network of remote microphone systems may not be a viable option for some classrooms because of school district budget limitations. Also, some classroom teachers and peers may find the use and implementation of a multitalker network to be inconvenient or undesirable. Additional research is needed to develop a better understanding of the advantages and limitations of different types of remote microphone and HA technologies for the purpose of improving hearing performance in small-group learning activities. This research should attempt to clarify the magnitude of speech recognition improvement in noise one would expect to obtain with a multitalker remote microphone system, a single remote microphone in the Small Group mode, and HA noise management technology, such as adaptive beamforming.

Of note, the mean speech recognition in noise score was similar between the omnidirectional and directional conditions (i.e., no significant benefit or detriment occurred with directional use relative to performance in the omnidirectional condition) when the HAs were used without the remote microphone system. Previous research examining the potential pros and cons of directional HA use in classroom situations has generally shown improvement in speech recognition in noise with the use of directional technology when the signal of interest arrives from in front of the child ([Ricketts et al, 2007]; [Ricketts and Picou, 2013]; [Wolfe et al, 2017]). In addition, a directional decrement has been reported for situations in which the signal of interest does not arrive from in front of the listener and when the origin of the signal of interest changes throughout testing ([Ricketts et al, 2007]; [Ricketts and Picou, 2013]; [Wolfe et al, 2017]). In the present study, because target sentences were presented randomly from loudspeakers located at three different azimuths (0, 90, and 270°) without the use of a video monitor (i.e., auditory-only presentation rather than auditory-visual), there were no visual cues available to participants to assist in identifying the location from which the signal was originating. As a result, the participants may have been unable to promptly orient toward the loudspeaker from which the signal was being presented, an action that is necessary to obtain benefit from directional microphone technology. Of note, although there were no differences in the group mean scores for the HA-O and HA-ADM conditions, one child in this study did obtain a statistically significant improvement in speech recognition in noise with use of the directional microphone (HA-ADM) relative to the omnidirectional condition (HA-O). By contrast, two children experienced a statistically significant decrease in speech recognition in noise with the use of the directional microphone (HA-ADM). It is unclear why one child obtained benefit from the use of the adaptive directional microphone, whereas two children experienced detriment from its use. Also of note, one of the children who experienced a significant decrease in speech recognition in noise with the use of the directional microphone also did not obtain an improvement in speech recognition in noise in the remote microphone condition (HA-ADM-RM). It is possible that the apparent detriment associated with the use of the directional microphone limited the child’s overall performance when the remote microphone was used in conjunction with the HA directional microphone. Further research is needed to better understand the potential pros and cons of adaptive directional microphone use in small-group classroom situations.

In real-world use, it is possible that children with HAs may have used visual cues to locate the talker and orient in that direction. If this did indeed occur, then use of adaptive directional microphone technology may prove to offer better speech recognition in noise than omnidirectional use. However, it should be noted that children do not always orient toward the signal of interest in classroom or other daily situations ([Valente et al, 2012]; [Lewis and Wannagot, 2014]; [Lewis et al, 2015]), and when they do make a concerted effort to look at a talker, speech comprehension often decreases ([Lewis and Wannagot, 2014]; [Lewis et al, 2015]). The decrease in speech recognition and comprehension associated with a greater tendency to look at the talker is attributed to more cognitive resources being allocated toward identifying the location of the talker, which leaves fewer cognitive resources available for listening comprehension. If better speech comprehension is associated with less looking behavior, then it is possible that the use of a remote microphone in the Small Group mode may allow better hearing performance than use of directional HA technology in small-group listening activities. Additional research is needed to further understand the potential benefits and limitations of adaptive directional technology and remote microphone technology when visual cues are also available to the child with hearing loss.



Subjective Preference

In agreement with the speech recognition scores, the participants (n = 14) typically reported a preference for the use of the remote microphone system in the Small Group mode when compared with use of the HA alone ([Figure 6]). More specifically, for each domain, the following number of participants preferred (i.e., combined ratings of “Slightly Better” or “Much Better”) the remote microphone Small Group mode: 10 for Comfort, 10 for Intelligibility, 10 for Quality, 8 for Background Noise, and 11 for Overall Preference.



Summary and Clinical Implications

Behavioral and subjective measures of speech perception conducted in this study indicate that use of remote microphone technology with the Small Group mode may improve hearing performance in small-group learning activities. Specifically, sentence recognition in noise improved by almost 25 percentage points compared with the HA-O condition, and more than two-thirds of the children who completed the subjective ratings expressed a slight to moderate preference for the use of the remote microphone Small Group technology in regard to listening comfort, sound quality, speech intelligibility, background noise reduction, and overall listening experience. Taken collectively, the use of a remote microphone with an adaptive Small Group listening mode appears to be a viable solution to improve hearing performance in small-group learning activities in the classroom.



Abbreviations

CNC: consonant-nucleus-consonant
DSL: Desired Sensation Level
HA: hearing aid
HA-ADM: hearing aid with automatic activation of a directional microphone
HA-O: hearing aid in omnidirectional microphone mode
IEEE: Institute of Electrical and Electronics Engineers
RM: remote microphone
RT60: reverberation time for signal to attenuate by 60 decibels
SNR: signal-to-noise ratio

APPENDIX


No conflict of interest has been declared by the author(s).

Acknowledgments

The authors would like to express their gratitude to Phonak, LLC, which provided a grant that partially funded the research study described in this manuscript. Christine Jones and Lori Rakita are employees of Phonak, LLC, and Jace Wolfe is a member of the Phonak Pediatric Advisory Board.

This research was partially funded by a grant from Phonak, LLC. These data have not been presented in any other journal or at any professional meeting.


  • REFERENCES

  • Carhart R, Jerger JF. 1959; Preferred method for clinical determination of pure-tone thresholds. J Speech Hear Disorders 24: 330-345
  • Choi YC, McPherson B. 2005; Noise levels in Hong Kong primary schools: implications for classroom listening. Intl J Disabil Dev Educ 52: 345-360
  • Crandell C. 1991; Classroom acoustics for normal-hearing children: implications for rehabilitation. Educ Audiol Monogr 2: 18-38
  • Crandell C. 1992; Classroom acoustics for hearing impaired children. J Acoust Soc Am 92: 2470
  • Crandell C. 1993; Noise effects on the speech recognition of children with minimal hearing loss. Ear Hear 14: 210-216
  • Crandell C, Bess F. 1986; Speech recognition of children in a “typical” classroom setting. Asha 28: 82
  • Crukley J, Scollie S, Parsa V. 2011; An exploration of nonquiet listening at school. J Educ Audiol 17: 23-35
  • Feilner M, Rich S, Jones C. 2016; Scientific background and implementation of pediatric optimized automatic functions. Phonak Insight 1-5
  • Finitzo-Hieber T, Tillman T. 1978; Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. J Speech Hear Res 21: 440-458
  • Hawkins DB. 1984; Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. J Speech Hear Disord 49 (04) 409-418
  • Knecht HA, Nelson PB, Whitelaw GM, Feth LL. 2002; Background noise levels and reverberation times in unoccupied classrooms: predictions and measurements. Am J Audiol 11: 65-71
  • Lewis D, Wannagot S. 2014; Effects of looking behavior on listening and understanding in a simulated classroom. J Educ Audiol 20: 1-10
  • Lewis DE, Valente DL, Spalding JL. 2015; Effect of minimal/mild hearing loss on children’s speech understanding in a simulated classroom. Ear Hear 36 (01) 136-144
  • Lewis MS, Crandell CC, Valente M, Horn JE. 2004; Speech perception in noise: directional microphones versus frequency modulation (FM) systems. J Am Acad Audiol 15 (06) 426-439
  • Madell JR. 1992; FM systems as primary amplification for children with profound hearing loss. Ear Hear 13 (02) 102-107
  • Massie R, Dillon H. 2006; The impact of sound-field amplification in mainstream cross-cultural classrooms, Part 1: educational outcomes. Austral J Educ 50: 62-77
  • Nabelek AK, Nabelek IV. 1985. Room acoustics and speech perception. In: Katz J. Clinical Audiology. 3rd ed. Baltimore, MD: Williams & Wilkins; 834-846
  • Pearsons KS, Bennett RL, Fidell S. 1977. Speech Levels in Various Noise Environments. Report No. EPA-600/1-77-025 Washington, DC: U.S. Environmental Protection Agency;
  • Peterson GE, Lehiste I. 1962; Revised CNC lists for auditory tests. J Speech Hear Disord 27: 62-70
  • Ricketts TA. 2001; Directional hearing aids. Trends in Amplif 5 (04) 139-176
  • Ricketts T, Galster J, Tharpe AM. 2007; Directional benefit in simulated classroom environments. Am J Audiol 16 (02) 130-144
  • Ricketts TA, Picou EM. 2013; Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments. Ear Hear 34 (05) 601-609
  • Ronsse LM, Wang LM. 2013; Relationships between unoccupied classroom acoustical conditions and elementary student achievement measured in eastern Nebraska. J Acoust Soc Am 133 (03) 1480-1495
  • Schafer EC, Thibodeau LM. 2006; Speech recognition in noise in children with cochlear implants while listening in bilateral, bimodal, and FM-system arrangements. Am J Audiol 15 (02) 114-126
  • Valente DL, Plevinsky HM, Franco JM, Heinrichs-Graham EC, Lewis DE. 2012; Experimental investigation of the effects of the acoustical conditions in a simulated classroom on speech recognition and learning in children. J Acoust Soc Am 131 (01) 232-246
  • Wolfe J, Duke M, Schafer E, Jones C, Rakita L. 2017; Evaluation of adaptive noise management technologies for school-age children with hearing loss. J Am Acad Audiol 28 (05) 415-435
  • Wolfe J, Morais M, Neumann S, Schafer E, Mülder HE, Wells N, Hudson M. 2013; Evaluation of speech recognition with personal FM and classroom audio distribution systems. J Educ Audiol 19: 65-79
  • Wolfe J, Schafer EC, Heldner B, Mulder H, Ward E, Vincent B. 2009; Evaluation of speech recognition in noise with cochlear implants and dynamic FM. J Am Acad Audiol 20 (07) 409-421
