Research on sound, neural processing could help deaf people hear amid the noise

March 27, 2012

Research led by Michael G. Heinz, an associate professor of speech, language and hearing sciences who specializes in auditory neuroscience, shows how the inner ear processes the temporal structure of sound. These findings could someday improve how prosthetic hearing devices are designed to help people with profound hearing loss hear better in noisy places. The findings were published last month in The Journal of Neuroscience. (Purdue University photo/Andrew Hancock)

WEST LAFAYETTE, Ind. - A new understanding of how the inner ear processes the temporal structure of sound could someday improve how prosthetic hearing devices are designed, helping people with profound hearing loss hear better in noisy places, according to new Purdue University research.

"Sound can be divided into fast and slow components, and today's cochlear implants provide only the slow varying components that help people with profound hearing loss hear conversations in quiet rooms, but don't allow them to hear as well in busy restaurants," said Michael G. Heinz, an associate professor of speech, language and hearing sciences who specializes in auditory neuroscience. "It has been thought that the fast varying sound components - which can't be provided with current cochlear implant technology - help to hear in noisy environments. Evidence for this idea has come from listening experiments that were interpreted based on the assumption that the fast and slow sound components could be separated within the ear.

"We decided to approach this problem by acknowledging that this separation is theoretically impossible to achieve but not impossible to deal with. We found that slowly varying neural components actually play the primary role in helping the brain understand speech in noisy environments. The critical fast varying acoustic components are actually transformed by the normal-hearing cochlea into slower neural components to ultimately help people hear better. Additional studies will be needed to explore how current cochlear implant technology can be adjusted to account for these cochlear transformations."

Heinz and Jayaganesh Swaminathan, a Purdue graduate and postdoctoral research associate at the Massachusetts Institute of Technology, analyzed how sounds picked up by normal-hearing ears are understood by the brain. Previous studies had evaluated the perception of sound's acoustic waveform; focusing instead on the neural processing of those waveforms clarified how the fast- and slowly varying components each contribute to speech perception. The findings were published last month in The Journal of Neuroscience.

"Some have thought that one component can exist without the other, but now we know this is impossible to achieve in the ear, and this new knowledge can help scientists who are working to improve cochlear implant design," Swaminathan says.

A cochlear implant is a surgically implanted neural prosthesis used by more than 200,000 patients worldwide. The device helps deaf people whose cochleas lack functioning hair cells by translating sound into neural responses: the implant's electrodes directly stimulate the auditory nerve fibers.

"But, perhaps cochlear implants are not delivering all of the useful information with their current stimulation strategies," Swaminathan says. "At this time, their design focuses on the slowly varying components in the acoustic waveform rather than what the slowly varying components look like in the neural responses of normal-hearing ears."

The researchers used a psychophysiological approach that quantitatively linked neural coding - predicted from a computational auditory nerve model - with the perception of speech in noise measured in listeners with normal hearing. Every listener heard the same set of five specialized acoustic stimuli produced by vocoders and was asked to identify one of 16 consonants at varying levels of background noise.
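
As a rough illustration of what a vocoder does - the study's five vocoder conditions systematically manipulated envelope and fine-structure cues, while this sketch is a generic noise vocoder with assumed band edges, not the researchers' stimuli - the Python code below keeps each frequency band's slow envelope and replaces the fine structure with noise:

```python
# Generic noise-vocoder sketch (illustrative only). Each band's slow
# envelope is kept and imposed on a noise carrier, roughly mimicking the
# envelope-only information a cochlear implant delivers.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges=(100, 400, 1000, 2500, 6000)):
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)              # isolate one frequency band
        env = np.abs(hilbert(band))             # slow envelope of that band
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                    # envelope on a noise carrier
    return out
```

Speech processed this way remains intelligible in quiet but degrades in background noise - the same pattern the article describes for cochlear implant users.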

"The key distinction in our results is that it was the neural slow-fluctuation cues that were shown to be important rather than the acoustic slow-component cues that cochlear implants provide," said Heinz, who also has a joint appointment in biomedical engineering. "This may sound like the same thing, but the slow neural components include the effects of fast to slow conversions that occur within the normal-hearing cochlea but do not occur in the damaged ears of cochlear implant patients. These results are promising because they provide insight into a possible way to provide the useful information from fast acoustic cues using the slow fluctuations that existing cochlear implant technology can provide."

Heinz and Swaminathan will continue studying how neural signal processing can improve our understanding of speech perception in noise, as well as how these findings can be used to improve cochlear implants.

This research was supported by the National Institutes of Health's National Institute on Deafness and Other Communication Disorders, the Purdue Research Foundation, and Weinberg funds from the Department of Speech, Language and Hearing Sciences.

Writer: Amy Patterson Neubert, 765-494-9723, apatterson@purdue.edu

Sources: Michael G. Heinz, 765-496-6627, mheinz@purdue.edu
         Jayaganesh Swaminathan, jswamy@mit.edu

Note to Journalists: Journalists interested in a copy of the journal article can contact Amy Patterson Neubert, Purdue News Service, 765-494-9723, apatterson@purdue.edu.

ABSTRACT

Psychophysiological Analyses Demonstrate the Importance of Neural Envelope Coding for Speech Perception in Noise

Jayaganesh Swaminathan, Michael G. Heinz

Understanding speech in noisy environments is often taken for granted; however, this task is particularly challenging for people with cochlear hearing loss, even with hearing aids or cochlear implants. A significant limitation to improving auditory prostheses is our lack of understanding of the neural basis for robust speech perception in noise. Perceptual studies suggest the slowly varying component of the acoustic waveform (envelope, ENV) is sufficient for understanding speech in quiet, but the rapidly varying temporal fine structure (TFS) is important in noise. These perceptual findings have important implications for cochlear implants, which currently only provide ENV; however, neural correlates have been difficult to evaluate due to cochlear transformations between acoustic TFS and recovered neural ENV. Here, we demonstrate the relative contributions of neural ENV and TFS by quantitatively linking neural coding, predicted from a computational auditory nerve model, with perception of vocoded speech in noise measured from normal hearing human listeners. Regression models with ENV and TFS coding as independent variables predicted speech identification and phonetic feature reception at both positive and negative signal-to-noise ratios. We found that: (1) neural ENV coding was a primary contributor to speech perception, even in noise, and (2) neural TFS contributed in noise mainly in the presence of neural ENV, but rarely as the primary cue itself. These results suggest that neural TFS has less perceptual salience than previously thought due to cochlear signal processing transformations between TFS and ENV. Because these transformations differ between normal and impaired ears, these findings have important translational implications for auditory prostheses.
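
To make the abstract's regression framing concrete, the sketch below fits the kind of linear model it describes - speech identification as a function of neural ENV and TFS coding strength. All numbers are hypothetical placeholders, not data from the study:

```python
# Schematic of the abstract's regression analysis (illustrative only).
# Model-predicted neural ENV and TFS coding strengths serve as independent
# variables; measured consonant identification is the dependent variable.
# All values below are hypothetical placeholders.
import numpy as np

env_coding = np.array([0.9, 0.7, 0.5, 0.3, 0.1])        # per-condition ENV coding
tfs_coding = np.array([0.2, 0.4, 0.5, 0.6, 0.8])        # per-condition TFS coding
identification = np.array([0.85, 0.75, 0.60, 0.45, 0.35])  # proportion correct

# Ordinary least squares: identification ~ b0 + b1*ENV + b2*TFS
X = np.column_stack([np.ones_like(env_coding), env_coding, tfs_coding])
coef, *_ = np.linalg.lstsq(X, identification, rcond=None)
print(dict(zip(["intercept", "ENV", "TFS"], coef)))
```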