Semin Hear 2004; 25(1): 17-24
DOI: 10.1055/s-2004-823044
Copyright © 2004 by Thieme Medical Publishers, Inc., 333 Seventh Avenue, New York, NY 10001, USA.

Neighborhood Activation in Spoken Word Recognition

Donald D. Dirks1
  • 1Professor Emeritus, UCLA School of Medicine, Los Angeles, California; and Consultant, National Center for Rehabilitative Auditory Research Veterans Administration Medical Center, Portland, Oregon
Remembering Tom Tillman

During and following World War II, there was significant interest among audiologists and speech scientists in the development and standardization of speech recognition tests for clinical use. Clinical observations during the 1940s and 1950s had already indicated that results from pure-tone threshold tests often were not predictive of receptive auditory communication ability among persons with hearing loss. No doubt Tom Tillman, as a graduate student and later as a faculty member at Northwestern University, was influenced by the growing clinical interest in using speech to measure receptive communication ability. Several of Tillman's publications1,2,3 during his early professional life reflect his research interest in speech audiometry, an interest that continued and expanded4,5,6 throughout his career. He is especially remembered, in collaboration with Carhart, for the development of the Northwestern University Test No. 6, which is still in use today. Tillman's strategy for measuring speech recognition with phonemically balanced, monosyllabic words was symptomatic of the basic and clinical speech research emphasis of that period, in particular the general view that the perception of words required the recovery of a sequence of phonetic or phonemic elements. This orientation led to "bottom-up" explanations of speech perception. Since the 1970s, however, basic speech research has reflected the growing recognition that any comprehensive theory of speech perception must account for the processes and representations that subserve the recognition of spoken words beyond the perception of individual consonants and vowels.
The current article reviews several recent investigations conducted at the UCLA-VA Human Auditory Laboratory that provide evidence that cognitive and linguistic capabilities ("top-down processing") play a role in the rapid selection of a target word from other potential candidates once an acoustic-phonetic pattern has been activated in memory. This article is dedicated to Tom Tillman, who served as an example of a dedicated, meticulous researcher to me during my predoctoral studies.
Publication Date: 02 April 2004 (online)

Abstract

Current models of spoken word recognition share the common assumption that the perception of speech includes two fundamental processes: activation and competition. A key feature of this process is competition among the multiple representations of words activated in memory and the selection of a single target word from the alternatives in the same lexical neighborhood. The focus of this article is on one of these models, the Neighborhood Activation Model (NAM). Several experiments are reviewed that support the basic tenets of NAM and identify neighborhood density and neighborhood frequency, along with word frequency, as significant lexical factors affecting word recognition. The results indicate that NAM can be generalized to the word recognition performance of individuals with sensorineural hearing loss and of persons for whom English is not their first language. These findings support the view that activation-competition models of spoken word recognition may embody principles fundamental to the recognition of words.

Donald D. Dirks, Ph.D.
11450 Waterford St.
Los Angeles, CA 90049
Email: Ddirks@UCLA.edu