CC BY-NC-ND 4.0 · Int Arch Otorhinolaryngol 2021; 25(02): e235-e241
DOI: 10.1055/s-0040-1712482
Original Research

Balancing the Loudness in Speech Processors and Contralateral Hearing Aids in Users of Unilateral Cochlear Implants

Department of Otorhinolaryngology, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil
 

Abstract

Introduction The combined use of a cochlear implant and a contralateral hearing aid (bimodal stimulation) has been growing as the indication criteria expand, and protocols are needed to ensure that the loudness of the two devices is balanced.

Objective To evaluate whether complex sounds limited to the frequency bands of the current devices enable loudness balancing in adult users of bimodal stimulation, and to analyze whether speech recognition improves after balancing.

Methods A prospective cross-sectional study with convenience sampling. The sample was composed of 25 adults (mean age: 46 years) who had used a unilateral cochlear implant and a contralateral hearing aid for at least 6 months. Loudness balancing was performed in an acoustic room with a loudspeaker connected to the computer (0° azimuth, 70 dB SPL). The instrumental sounds were filtered into different frequency bands. The patients wore both hearing devices and were asked whether the sound was perceived as louder in one of the ears or centrally. Speech recognition was evaluated with sentences in silence (65 dB SPL) and/or in noise at signal-to-noise ratios of 0 dB or +10 dB, in free field at 0° azimuth, before and after balancing.

Results Out of the 25 patients, 5 failed to achieve balance at all of the tested frequencies, and 3 achieved balance at every frequency except 8 kHz. A significant difference between speech recognition before and after balancing was found only for the test in silence.

Conclusion Most patients achieved loudness balance at all of the evaluated frequencies with the complex-sound protocol. Additionally, most patients showed improved speech recognition after balancing.



Introduction

Bilateral cochlear implants (CIs) are considered the gold standard to treat individuals with severe to profound sensorineural hearing loss who do not benefit from hearing aids (HAs). For patients who do not meet the criteria for bilateral implantation,[1] a solution could be bimodal stimulation, in which the patient uses a CI in one ear and an HA in the contralateral ear. Patients with higher levels of contralateral residual hearing may particularly benefit from this approach. There are, however, concerns regarding the selection and fitting protocols of HAs in bimodal stimulation.[2]

The literature has shown that bimodal stimulation is very advantageous and can improve sound recognition in noisy environments,[2] [3] as well as the localization of speech.[4] However, it has not been widely adopted. One of the main concerns is the difference between the sensations induced by the electrical stimulation through the CI and the acoustic stimulation through the HA.[5]

According to Ching et al,[6] amplification in a non-implanted ear is important to prevent hearing deprivation and the possible deterioration of speech recognition. Given the importance of bimodal stimulation in patients who may benefit from HAs and in those who have not been offered bilateral CIs, we need to be aware that loudness may be processed differently by the CI and the HA.[7]

Scherf and Arnold[8] showed that the HA gain and the balance of loudness between the two devices were the parameters that most commonly needed to be adjusted. Ching et al[9] recommend comparing the devices to identify the frequency response that best supports speech understanding and loudness balance. This helps to find the HA gain that provides an auditory input at the same loudness as that of the opposite side.

The balancing protocol for bimodal patients remains to be fully explored. Since the indication for this fitting has been growing with the expansion of the criteria for CIs, audiologists must have various options to ensure that balance is achieved between the HA and the CI and to validate the programming of the devices. In the present study, we considered binaural balance an important factor to be addressed when trying to achieve the best results from each device in users of bimodal stimulation.

Thus, the objectives of the present study were to evaluate whether complex sounds limited to the frequency bands of the current devices enable loudness balancing in adult users of bimodal stimulation, and to analyze the improvement in speech recognition after balancing.



Methods

The present prospective cross-sectional study was approved by the Ethics in Research Committee of a tertiary hospital under CAPPesq number 941.254. The sample was selected by convenience sampling from patients examined between January 2014 and November 2016.

The inclusion criteria were adults who had used a unilateral CI and a contralateral HA for at least six months and who had their contralateral HAs adjusted at the hospital during the research period. They were selected and invited to participate in the balancing protocol. The exclusion criteria were inconsistent use of the HA or difficulty collaborating with the balancing protocol.

In total, 25 participants with bimodal stimulation agreed to participate in the study after the purpose and procedures were explained. Of these, 16 were female and 9 were male, and all had attended the CI clinic between January 2014 and November 2016. The demographic data of these participants are shown in [Tables 1] and [2].

Table 1. Demographic data of the sample

Type of deafness: N (%)
  Postlingual: 19 (76%)
  Prelingual: 3 (12%)
  Perilingual: 3 (12%)
Electrode insertion: N (%)
  Complete: 23 (92%)
  Incomplete: 2 (8%)
Time of implant use, mean (minimum–maximum): 32 (7–87) months
Age, mean (minimum–maximum): 46 (18–71) years

Table 2. Mean pure-tone thresholds on the side with a hearing aid

Frequency   250 Hz   500 Hz   1 kHz    1.5 kHz   2 kHz    3 kHz    4 kHz    6 kHz    8 kHz
Mean        82 dB    90 dB    99 dB    109 dB    105 dB   103 dB   105 dB   111 dB   118 dB
Minimum     20 dB    40 dB    70 dB    75 dB     75 dB    60 dB    65 dB    70 dB    70 dB
Maximum     130 dB   115 dB   120 dB   130 dB    130 dB   130 dB   130 dB   130 dB   130 dB

Note: Absent responses were coded as 130 dB for statistical purposes.
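
To illustrate the note above, the short Python sketch below shows one way to compute such means while coding absent responses as 130 dB. The function name and the example thresholds are hypothetical and are not taken from the study's analysis.

    from statistics import mean

    ABSENT_RESPONSE_DB = 130  # substitution value for "no response", as in Table 2

    def mean_threshold(thresholds_db):
        # thresholds_db may contain None where no response was obtained
        coded = [ABSENT_RESPONSE_DB if t is None else t for t in thresholds_db]
        return mean(coded)

    # Hypothetical thresholds (dB HL) at one frequency; None marks an absent response
    print(mean_threshold([20, 85, None, 90, 110]))  # -> 87.0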


The sample was composed of users of CIs made by every manufacturer: Med-EL (Innsbruck, Tyrol, Austria), 10 users; Oticon Medical (Vallauris, Alpes-Maritimes, France), 6 users; Cochlear (Sydney, NSW, Australia), 7 users; and Advanced Bionics (Valencia, CA, USA), 2 users. Mapping data and information about the CIs are shown in [Table 3].

Table 3. Hearing device data

Patient | CI speech processor | Hearing aid | CI frequency allocation table (Hz) | HA frequency limits (Hz)
S1 | Opus 2¹ | Chili 5SP⁵ | 100–8500 | 100–6500
S2 | Opus 2¹ | Xtreme 121⁶ | 100–8500 | 100–4000
S3 | Opus 2¹ | Naida S III SP⁷ | 350–8500 | 100–5000
S4 | Opus 2¹ | 411 Extra⁷ | 100–8500 | 100–6800
S5 | Opus 2¹ | Chili 5SP⁵ | 100–8500 | 100–6500
S6 | Saphyr² | Xtreme 121⁶ | 195–8008 | 100–4000
S7 | Saphyr² | Naida III UP⁷ | 195–8008 | 100–5000
S8 | Freedom³ | Naida III UP⁷ | 188–7938 | 100–5000
S9 | Harmony⁴ | Chili 5SP⁵ | 250–8700 | 100–6500
S10 | Saphyr² | Naida I SP⁷ | 195–8008 | 100–6900
S11 | Naida CI⁴ | Chili 5SP⁵ | 250–8700 | 100–6500
S12 | Freedom³ | Naida I UP⁷ | 188–7938 | 100–5000
S13 | Nucleus 5³ | Xtreme 121⁶ | 188–7938 | 100–4000
S14 | Opus 2¹ | Naida I UP⁷ | 100–8500 | 100–5000
S15 | Saphyr² | Naida I UP⁷ | 195–8008 | 100–5000
S16 | Digi SP² | Naida S III SP⁷ | 195–8008 | 100–5000
S17 | Saphyr² | Xtreme 121⁶ | 195–8008 | 100–4000
S18 | Freedom³ | Xtreme 121⁶ | 188–7938 | 100–4000
S19 | Nucleus 5³ | Xtreme 121⁶ | 188–7938 | 100–4000
S20 | Opus 2¹ | Chili 5SP⁵ | 100–8500 | 100–6500
S21 | Opus 2¹ | Sumo DM⁵ | 100–8000 | 100–5000
S22 | Freedom³ | Naida III SP⁷ | 188–7938 | 100–5000
S23 | Opus 2¹ | Xtreme 121⁶ | 310–8500 | 100–4000
S24 | Opus 2¹ | Sumo XP⁵ | 100–8500 | 100–5000
S25 | Freedom³ | Sumo DM⁵ | 188–7938 | 100–5000

Legend: ¹Med-EL (Innsbruck, Tyrol, Austria); ²Oticon Medical (Vallauris, Alpes-Maritimes, France); ³Cochlear (Sydney, NSW, Australia); ⁴Advanced Bionics (Valencia, CA, USA); ⁵Oticon (Copenhagen, Denmark); ⁶Bernafon (Bern, Switzerland); ⁷Phonak (Staefa, Zurich, Switzerland).


The following data were analyzed from the medical records: age, length of CI use, model of the speech processor, length of HA use, and frequency table for CIs and HAs. We also analyzed the free-field audiometry of the CIs and HAs before and after the balancing protocol.

The balancing protocol was based on the one proposed by Ching et al.[9] The test was applied in an acoustic room with a loudspeaker connected to the computer at 0° azimuth, with the stimulus presented at 70 dB SPL. We used instrumental sounds from the Sonar System CD (Pró-Fono, Carapicuíba, SP, Brazil), filtered into the following frequency bands: 500 Hz, 700 Hz, 1 kHz, 2 kHz, 3 kHz, 4 kHz, and 8 kHz.[10] The participants, wearing the two devices simultaneously (HA and CI), were asked to report whether the sound was perceived equally in both ears or louder in one of them. The HA prescription followed the National Acoustic Laboratories' nonlinear fitting procedure, version 1 (NAL-NL1),[11] or the desired sensation level (DSL) procedure,[12] according to the preference of the participant.

The sounds were first presented with the participant's initial HA settings. After the unbalanced frequencies were identified, adjustments were made to the HA gain or maximum output: the HA loudness was increased or decreased until it matched the loudness of the CI.
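
The procedure can be summarized as an iterative loop per frequency band. The Python sketch below is only an illustrative outline of this logic under stated assumptions: the callables play_filtered_sound, ask_lateralization, and adjust_ha_gain are hypothetical stand-ins, and the 2-dB step and iteration limit are not specified by the study.

    # Illustrative outline of the per-band balancing loop (not the study's software)
    BANDS_HZ = [500, 700, 1000, 2000, 3000, 4000, 8000]
    STEP_DB = 2          # assumed adjustment step
    MAX_ITERATIONS = 10  # assumed safety limit

    def balance_loudness(play_filtered_sound, ask_lateralization, adjust_ha_gain):
        unbalanced = []
        for band in BANDS_HZ:
            for _ in range(MAX_ITERATIONS):
                play_filtered_sound(band, level_db_spl=70)  # 0° azimuth, 70 dB SPL
                side = ask_lateralization()                 # 'CI', 'HA' or 'center'
                if side == 'center':
                    break                                   # balanced at this band
                # Louder on the HA side -> reduce HA gain in this band;
                # louder on the CI side -> increase HA gain.
                adjust_ha_gain(band, -STEP_DB if side == 'HA' else STEP_DB)
            else:
                unbalanced.append(band)                     # balance not reached
        return unbalanced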

The participants were already users of HAs, and the prescription and configuration were verified using GN Otometrics Aurical Plus equipment (Taastrup, Denmark) with an in situ probe in the ear contralateral to the one with the CI. After the balancing adjustments, the verification of the prescription target was repeated.

To analyze the gain across frequency bands in the different HA brands, the frequencies were grouped into low, mid, and high bands as follows: low frequency, 250 Hz or 300 Hz; mid frequency, 1,000 Hz or 1,500 Hz; and high frequency, 3,500 Hz or 4,000 Hz.
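
A minimal sketch of this grouping is shown below; the dictionary and the helper function are hypothetical illustrations of the band assignment described above, not part of the verification software.

    # Hypothetical helper: assign analysed frequencies to the low/mid/high bands
    BAND_OF = {250: "low", 300: "low",
               1000: "mid", 1500: "mid",
               3500: "high", 4000: "high"}

    def band_means(gains_by_freq):
        # gains_by_freq: {frequency_hz: gain_db} read from the in situ verification
        totals, counts = {}, {}
        for freq, gain in gains_by_freq.items():
            band = BAND_OF.get(freq)
            if band is None:
                continue
            totals[band] = totals.get(band, 0.0) + gain
            counts[band] = counts.get(band, 0) + 1
        return {band: totals[band] / counts[band] for band in totals}

    # Example: band_means({250: 55, 1000: 60, 4000: 46}) -> {'low': 55.0, 'mid': 60.0, 'high': 46.0}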

We analyzed the speech recognition results using each participant's performance before and after the balancing protocol. These data were collected as part of the routine CI care.[13] The difficulty of the speech recognition material was chosen according to the performance of each participant. Sentences in silence (closed or open set)[14] were presented in free field at 0° azimuth and 65 dB SPL, and/or sentences in noise were presented at a signal-to-noise ratio (SNR) of 0 dB or +10 dB.
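
For reference, the relation between the presentation level and the SNRs used can be written out as below; this assumes the sentences remain at 65 dB SPL in the noise condition, which the text states explicitly only for the quiet condition.

    # SNR (dB) = speech level (dB SPL) - noise level (dB SPL)
    SPEECH_DB_SPL = 65  # assumed to also hold in the noise condition
    for snr_db in (0, 10):
        noise_db_spl = SPEECH_DB_SPL - snr_db
        print(f"SNR {snr_db:+d} dB -> noise at {noise_db_spl} dB SPL")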

The data were collated in Microsoft Excel (Microsoft Corp., Redmond, WA, US) spreadsheets and analyzed with the BioEstat software, version 5.3 (Belém, PA, Brazil), using descriptive statistics and the paired t-test. Values of p < 0.05 were considered statistically significant.
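
An equivalent comparison can be reproduced in Python with SciPy, as sketched below; the before/after scores shown are illustrative placeholders, not the study data.

    from scipy import stats

    before = [60, 72, 40, 88, 55]  # hypothetical % correct before balancing
    after = [68, 80, 42, 90, 63]   # hypothetical % correct after balancing

    t_stat, p_value = stats.ttest_rel(before, after)  # paired t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")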



Results

Out of the 25 participants, 17 achieved balance at every tested frequency, 3 achieved balance at every frequency except 8,000 Hz, and 5 did not achieve balance at all of the tested frequencies ([Table 4]). We observed no statistically significant differences among these groups regarding the mean pure-tone average (PTA), the length of device use, or the characteristics of the HAs.

Table 4. Participant data organized by the results of the loudness balancing test

Parameter | Balance at every frequency (n = 17) | Balance at every frequency except 8 kHz (n = 3) | Balance at only 4 to 6 frequencies (n = 5)
PTA (dB HL), mean | 96 | 94 | 107
PTA, minimum | 40 | 80 | 90
PTA, maximum | 130 | 120 | 130
Time of CI use (months), mean (minimum–maximum) | 36 (7–87) | 22 (16–31) | 24 (7–52)
HA fitting: mean minimum frequency (Hz) | 100 | 100 | 100
HA fitting: mean maximum frequency (Hz) | 5,218 | 5,500 | 4,800
Mean MPO (dB), low/mid/high bands | 111/122/117 | 111/119/110 | 118/128/122
Mean gain (dB), low/mid/high bands | 46/53/44 | 48/53/41 | 55/60/46

Abbreviations: CI, cochlear implant; HA, hearing aid; MPO, maximum power output; PTA, pure-tone average.


All participants underwent in situ verification to confirm the changes made to the HA after balancing, except for one, in whom the verification could not be performed due to technical problems with the equipment. Almost all of the participants were able to keep the prescription target settings, except for three, in whom the target suggested by the software could not be reached without compromising auditory comfort. Out of the 25 participants, 7 chose the DSL prescription.

The statistical analyses showed a significant difference between the speech recognition tests in silence conducted before and after balancing in the twenty participants with complete data. However, there were no statistically significant differences in speech recognition in noise before and after balancing ([Table 5]). In total, 4 participants had a worse speech recognition performance and 2 showed no change after loudness balancing; these participants were not among those who failed to achieve balance at the tested frequencies.

Table 5. Speech recognition in silence and in noise before and after loudness balancing

Condition | Before balancing | After balancing | p-value
Speech in silence (n = 20), mean | 67% | 75% | 0.044*
Speech in silence, standard deviation | 35.7 | 32.5 |
Speech in noise (n = 16), mean | 30.6% | 41.2% | 0.0972
Speech in noise, standard deviation | 39.5 | 36.6 |

Note: *Statistically significant value (p < 0.05, paired t-test).




Discussion

The main objective of the present study was to verify the possibility of using band-limited complex sounds as a balancing protocol for users of bimodal stimulation. The results suggest that these sounds can produce loudness balance even in patients with profound hearing loss and little residual hearing. However, the best option would be speech material suited to the patient's comprehension. Ching et al[6] presented a recorded story in the sound field to equalize the loudness of speech between the ears. In the present study, the hypothesis was to use band-limited complex sounds due to the restricted access to speech sounds in the ear contralateral to the one with the CI. This topic deserves attention, as the implanted population tends to present with increasing amounts of residual hearing, and it should be considered in future studies.

Most of the participants achieved sound equalization at every evaluated frequency, along with an improvement in speech recognition after balancing.

Since the present study performed a qualitative analysis of a convenience sample, it was not possible to confirm that the study population accurately represents the population of implanted patients who use bimodal stimulation at this CI center. Moreover, the participants used different brands of CIs and HAs. However, the results obtained could be used to program the speech processor and the contralateral HA, as it was possible to perform the balancing protocol with different brands of CIs and HAs. It was also possible to perform loudness balancing in pre-, peri-, and postlingual patients, because each participant served as their own control.

Several studies[15] [16] [17] with patients using bimodal stimulation show that residual hearing is a factor that contributes to the continued use of a contralateral HA with a CI. Devocht et al[15] showed that the group that kept using an HA after one year of CI use had a 3-frequency pure-tone average (PTA) of 92.3 dB HL. Moreover, Neuman and Svirsky[16] showed that HAs are best used by patients with thresholds of up to 95 dB HL at frequencies up to 2 kHz. Conversely, Neuman et al[17] reported that the patients who had stopped using HAs were those with a PTA lower than 99.2 dB HL. In the present study, we offered a balanced fitting, which enabled the patients to benefit from bimodal stimulation even though their mean contralateral hearing threshold was 98 dB HL.

The present study showed that the new balancing protocol was feasible in the whole sample, and loudness balance was achieved by almost all of the participants. However, eight participants were not able to achieve the same loudness in both ears at one or more frequencies.

The balance of loudness between the two devices has been described as a major issue in optimizing the fitting. As Scherf and Arnold[8] mentioned, we believe that CI centers should be prepared and equipped to perform the procedure to optimize both hearing devices.

Since complex sounds are more easily perceived by patients with little residual hearing, this protocol using complex sounds was easy to apply and did not extend the overall time of patient care, since it was performed during the same session as the HA adjustment. Ideally, we believe that bimodal patients should have the speech-processor mapping and the HA adjustment performed together to help balance the loudness between the two devices. Although ideally both devices would be adjusted together, Ching et al[9] emphasized in their study the possibility of applying the balancing protocol during either the HA adjustment or the CI fitting.

Dividing the sample according to the results of the balancing protocol, we observed that the groups were heterogeneous regarding the number of participants. However, we noted that the participants with more residual hearing achieved balance at either all of the frequencies or almost all of them, except for 8,000 Hz. The HAs of the participants who did not achieve balance had a narrower frequency range and higher gain and maximum output values, which is a consequence of greater hearing loss ([Table 4]). It would be interesting to study a larger sample to observe whether this is an actual trend and whether these HA characteristics influence balancing. For example, a study by Neuman and Svirsky[16] showed that the HA frequency bandwidth is essential for balance.

It was also interesting to observe the HA prescription rule, since seven participants preferred the DSL, either because they were already users of it before the CI surgery or because they had better results with it. Almost all of these participants were able to achieve balance at all frequencies, except for 1 participant who was not able to balance at 8,000 Hz. Ching et al[9] recommended the NAL-NL1 prescription as a good starting point to fit an HA to a non-implanted ear, with individual fine-tuning of the gain-frequency response performed after fitting. Therefore, each case must be considered individually and offered a tailored HA adjustment that will give the patient the best results.

Balanced loudness at 8,000 Hz cannot be expected, since this frequency is not provided by the HA; therefore, the CI side should be louder. Perhaps, as most participants had a sense of balance at every frequency, they felt that 8,000 Hz was balanced as well. Among the cases that failed to achieve balance only at this frequency, 2 out of the 3 participants improved their speech recognition after balancing. This emphasizes the importance of performing loudness balancing and speech recognition tests with each hearing device separately and with both together.

Regarding speech recognition, the performance of some participants worsened after balancing. Interestingly, they were not those who failed to achieve loudness balance. A bias of our analysis was the time between the pre- and post-balancing evaluations, since the tests were performed in a routine-care setting at the CI center, where we needed to respect the proposed schedule for speech-processor programming. Thus, the mean time between the tests before and after loudness balancing was 11 months, ranging from 1 day to 24 months. Nevertheless, this interval was similar among the groups: when we separated the groups by their speech recognition results, the group with the best performance had an average interval of 11 months, and the group with the worst performance, of 12 months.

The statistical analyses showed that, on average, the participants improved their speech recognition performance in silence after loudness balancing. There were no statistically significant differences when we performed separate analyses for the participants who did not achieve sound balancing. Thus, we can assume that the balance of loudness may have contributed to the improvement in test results in the bimodal condition. Although studies show a benefit of bimodal stimulation both in silence[2] [3] [4] and in noise,[2] [17] no research was found that evaluates speech recognition after a balancing protocol. Therefore, it is important to increase the sample size and to validate these results with speech recognition tests in both conditions.

In the present study, there were no statistically significant differences regarding speech recognition in noise. This may be explained by the fact that HAs contribute little to the access to speech sounds in participants with a mean PTA of 96 dB. In many cases, the participants would be considered candidates for a second CI. However, at the time of the present study, the priority for bilateral implantation was children up to 4 years old or cases cited as exceptions. Even so, as bilateral CIs were not a possibility for adults, bimodal stimulation could be an option to maintain the stimulation of the auditory pathway. Another explanation can be found in the HA technology for difficult listening situations: we know that, for binaural hearing, communication between the two ears is essential, so communication between the electronic devices is equally important.

[Fig. 1] shows two interesting analyses. First, it shows that eight participants had worse performance with the HA after balancing; however, they showed improvement in the bimodal condition. Two participants with postlingual deafness, both with meningitis as the etiology, had worse performance only with the CI, showing the importance of more frequent follow-ups in these cases, especially when a drop in performance occurs. Another interesting case was a participant with progressive deafness who had difficulty answering the questions about the device settings, which could have interfered with their responses, despite their experience using both devices (51 months with the CI).

Fig. 1 Speech recognition results after loudness balancing.

Thus, we believe that speech recognition tests are vital and should be performed more frequently, as poor performance on speech tests may be a precursor to abandoning the use of the HA. Neuman et al[17] asserted that almost all patients who discontinued their use of HAs had relatively low performance in their speech recognition tests; in those patients, the bimodal condition (CI + HA) was not significantly better than the CI alone, and an audiogram analysis alone could not predict whether the patients would continue to use the contralateral HA.

Lastly, there was a case of postlingual deafness in which, even after the balancing adjustments, the participant did not show improvement with bimodal stimulation and complained about the HA settings; further adjustments were then made. Moreover, this participant had been using the CI for 9 months, suggesting that they were still adapting to the new auditory stimuli.

Thus, we suggest that CI manufacturers include a bimodal fitting procedure in the CI fitting software to make the balancing procedure easier and to aid its implementation in routine clinical practice.[8] Furthermore, communication between HAs and speech processors has been increasingly studied.[18] There are concerns regarding the ability of the two devices, HA and CI, to work together; thus, the development of balancing protocols is very important to integrate the two devices and to validate their adjustments.



Conclusion

A loudness-balancing protocol using complex sounds limited to the frequency bands of the devices helped adult users of bimodal stimulation to improve their speech recognition.



Conflicts of Interest

The authors have none to declare.

Acknowledgments

The authors would like to thank the Otorhinolaryngology Foundation (FORL) for their support of the Fellowship program on Cochlear Implants.

  • References

  • 1 Ministério da Saúde. Portaria n° 2.776, de 18 de dezembro de 2014 [Internet]. 2014 [cited 2019 April 26]. Available from: http://bvsms.saude.gov.br/bvs/saudelegis/gm/2014/prt2776_18_12_2014.html
  • 2 Ching TY, Incerti P, Hill M, van Wanrooy E. An overview of binaural advantages for children and adults who use binaural/bimodal hearing devices. Audiol Neurotol 2006; 11 (Suppl. 01) 6-11
  • 3 Hamzavi J, Pok SM, Gstoettner W, Baumgartner WD. Speech perception with a cochlear implant used in conjunction with a hearing aid in the opposite ear. Int J Audiol 2004; 43 (02) 61-65
  • 4 Potts LG, Skinner MW, Litovsky RA, Strube MJ, Kuk F. Recognition and localization of speech by adult cochlear implant recipients wearing a digital hearing aid in the nonimplanted ear (bimodal hearing). J Am Acad Audiol 2009; 20 (06) 353-373
  • 5 Dooley GJ, Blamey PJ, Seligman PM. et al. Combined electrical and acoustical stimulation using a bimodal prosthesis. Arch Otolaryngol Head Neck Surg 1993; 119 (01) 55-60
  • 6 Ching TY, Psarros C, Hill M, Dillon H, Incerti P. Should children who use cochlear implants wear hearing aids in the opposite ear?. Ear Hear 2001; 22 (05) 365-380
  • 7 Blamey PJ, Dooley GJ, Parisi ES, Clark GM. Pitch comparisons of acoustically and electrically evoked auditory sensations. Hear Res 1996; 99 (1-2): 139-150
  • 8 Scherf FW, Arnold LP. Exploring the clinical approach to the bimodal fitting of hearing aids and cochlear implants: results of an international survey. Acta Otolaryngol 2014; 134 (11) 1151-1157 [Poster presentation at the 12th International Conference on Cochlear Implants and Other Implantable Auditory Technologies, ESPO 2012, Amsterdam, the Netherlands; SFORL 2012, Paris, France]
  • 9 Ching TY, Incerti P, Hill M. Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear Hear 2004; 25 (01) 9-21
  • 10 Lima MCMP, Araujo AML, Araujo FCRS. Sistema Sonar: sons normalizados para avaliação audiológica. Carapicuíba: Pro Fono; 2011
  • 11 Byrne D, Dillon H, Ching T, Katsch R, Keidser G. NAL-NL1 procedure for fitting nonlinear hearing aids: characteristics and comparisons with other procedures. J Am Acad Audiol 2001; 12 (01) 37-51
  • 12 Scollie S, Seewald R, Cornelisse L. et al. The Desired Sensation Level multistage input/output algorithm. Trends Amplif 2005; 9 (04) 159-197
  • 13 Goffi-Gomez MVS, Guedes MC, Sant Anna SBG, Peralta CGO, Tsuji RK, Castilho AM. et al. Critérios de Seleção e Avaliação Médica e Audiológica dos Candidatos ao Implante Coclear: Protocolo HC-FMUSP. Arq Int Otorrinolaringol 2004; 8 (04) 303-313
  • 14 Costa MJ, Iorio MCM, Mangabeira-Albernaz PL. Reconhecimento de fala: desenvolvimento de uma lista de sentenças em português. Acta Awho. 1997; 16 (04) 164-173
  • 15 Devocht EM, George EL, Janssen AML, Stokroos RJ. Bimodal hearing aid retention after unilateral cochlear implantation. Audiol Neurotol 2015; 20 (06) 383-393
  • 16 Neuman AC, Svirsky MA. Effect of hearing aid bandwidth on speech recognition performance of listeners using a cochlear implant and contralateral hearing aid (bimodal hearing). Ear Hear 2013; 34 (05) 553-561
  • 17 Neuman AC, Waltzman SB, Shapiro WH, Neukam JD, Zeman AM, Svirsky MA. Self-Reported usage, functional benefit, and audiologic characteristics of cochlear implant patients who use a contralateral hearing aid. Trends Hear 2017; 21: 2331216517699530
  • 18 Veugen LC, Chalupper J, Snik AF, Opstal AJ, Mens LH. Matching Automatic Gain Control Across Devices in Bimodal Cochlear Implant Users. Ear Hear 2016; 37 (03) 260-270

Address for correspondence

Ana Tereza Matos Magalhães, PhD
Departamento de Otorrinolaringologia, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo
Avenida Dr. Eneas de Carvalho Aguiar, 255, Cerqueira César, São Paulo, SP, 05403-000
Brazil   

Publication History

Received: 16 May 2019

Accepted: 28 March 2020

Article published online: 23 June 2020

© 2020. Fundação Otorrinolaringologia. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Thieme Revinter Publicações Ltda.
Rua do Matoso 170, Rio de Janeiro, RJ, CEP 20270-135, Brazil

