Lexico-semantic and acoustic-phonetic processes in the perception of noise-vocoded speech: implications for cochlear implantation (2014)

Carolyn McGettigan, Stuart Rosen, and Sophie K. Scott
Department of Psychology, Royal Holloway, University of London, Egham, UK; Institute of Cognitive Neuroscience, University College London, London, UK.

Noise-vocoding is a transformation which, when applied to speech, severely reduces spectral resolution and eliminates periodicity, yielding a stimulus that sounds "like a harsh whisper" (Scott et al., 2000, p. 2401). This process simulates a cochlear implant, in which the activity of many thousands of hair cells in the inner ear is replaced by direct stimulation of the auditory nerve by a small number of tonotopically arranged electrodes. Although a cochlear implant offers a powerful means of restoring some degree of hearing to profoundly deaf individuals, the outcomes for spoken communication are highly variable (Moore and Shannon, 2009). Some variability may arise from differences in peripheral representation (e.g., the degree of residual nerve survival), but some may reflect differences in higher-order linguistic processing. To explore this possibility, we used noise-vocoding to examine speech recognition and perceptual learning in normal-hearing listeners tested across several levels of the linguistic hierarchy: segments (consonants and vowels), single words, and sentences. Listeners improved significantly on all tasks across two test sessions. In the first session, individual differences analyses revealed two independently varying sources of variability: one lexico-semantic in nature, implicated in the recognition of words and sentences, and the other acoustic-phonetic, associated with words and segments. By the second session, however, learning had produced a more uniform pattern of covariance across all stimulus types. A further analysis of phonetic feature recognition gave greater insight into learning-related changes in perception and showed that, surprisingly, participants did not make full use of cues that were preserved in the stimuli (e.g., vowel duration).
We discuss these findings in relation to cochlear implantation, and suggest auditory training strategies to maximize speech recognition performance in the absence of typical cues.
