Seminars in Hearing Research at Purdue

Students, postdocs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from all of Purdue University, including Speech, Language, and Hearing Sciences (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). Seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2018-2019 Talks

Past Talks

LYLE 1150: 10:30-11:20 AM (link to schedule)

April 18, 2019

Hari Bharadwaj (SLHS/BME) & Kelsey Dougherty (SLHS)

Characterizing "central gain" following reduced peripheral drive in the human auditory system

Abstract: The nervous system is known to adapt in many ways to changes in the statistics of the inputs it receives. A prominent example of such brain plasticity that is observed in animal models is that central auditory neurons tend to retain their firing rate outputs at roughly a constant level despite reductions in peripheral input due to hearing loss. This "central gain" is thought to come about by down-regulation of inhibitory neurotransmission. Pathological versions of such central gain are thought to underlie disorders such as tinnitus and hyperacusis. Separately, animal models of aging also show down-regulation of inhibition throughout the auditory system -- the extent to which peripheral loss contributes to such age-related changes is unknown. This presentation will describe the approach taken by our lab to characterize central gain in humans, including tinnitus sufferers, using EEG and perceptual experiments. Preliminary results will be presented with a goal of obtaining unvarnished feedback.

April 11, 2019

Ravinderjit Singh, Ph.D. Student (BME)

Neural Sensitivity to Dynamic Binaural Cues: Human EEG and Chinchilla Single-Unit Responses

Abstract: This will be a preview of Ravinderjit's upcoming presentation at ASA in May. Animals encounter dynamic binaural timing information in broadband sounds such as speech and background noise due to moving sound sources, self-motion, or reverberation. Most physiological studies of interaural time delay (ITD) or interaural correlation (IAC) sensitivity have used static stimuli; neural sensitivity to dynamic ITD and IAC is rarely systematically addressed. We used a system-identification approach using maximum-length sequences (MLS) to characterize neural responses to dynamically changing ITDs and IACs in broadband sounds. Responses were recorded from humans (electroencephalogram; EEG) and from single neurons in terminally anesthetized chinchillas (auditory nerve fibers; ANFs). Chinchilla medial superior olive (MSO) responses were simulated based on binaural coincidence from recorded ANF spike times in response to left- and right-channel input. Estimated ITD and IAC transfer functions were low-pass, with corner frequencies in the range of hundreds of Hz. Human EEG-based transfer functions, likely reflecting cortical responses, were also low-pass, but with much lower corner frequencies in the region of tens of Hz. Human behavioral detection of dynamic IAC extended beyond 100 Hz, consistent with the higher brainstem limits. On the other hand, binaural unmasking effects were only evident for low-frequency ITD/IAC dynamics in the masking noise. This suggests that subcortically coded fast dynamic cues are perceptually accessible and may support detection, whereas cortical limits may be reflected in whether cues can be utilized for binaural unmasking.
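A minimal sketch of the binaural-coincidence step described above (the coincidence window, the toy spike trains, and the output rule are illustrative assumptions, not the lab's actual code):

```python
import numpy as np

def simulate_mso_coincidence(left_spikes, right_spikes, window=0.05e-3):
    """Crude binaural coincidence detector: emit an output spike whenever a
    left-ear and a right-ear ANF spike fall within `window` seconds of each
    other. Spike times are 1-D arrays in seconds, assumed sorted."""
    out = []
    j = 0
    for t in left_spikes:
        # advance the right-ear pointer past spikes too far behind t
        while j < len(right_spikes) and right_spikes[j] < t - window:
            j += 1
        if j < len(right_spikes) and abs(right_spikes[j] - t) <= window:
            out.append(0.5 * (t + right_spikes[j]))
    return np.asarray(out)

# toy usage with random spike trains
rng = np.random.default_rng(0)
left = np.sort(rng.uniform(0, 1.0, 200))
right = np.sort(rng.uniform(0, 1.0, 200))
print(len(simulate_mso_coincidence(left, right)), "coincidences")
```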

April 4, 2019

Matthew J. Thompson, Ph.D. Student (BME)

Decoding Morphogen Signaling in the Developing Organ of Corti

Abstract: The mature organ of Corti (OC) demonstrates meticulous spatial organization in cellular patterning, with positional accuracy of less than one cell diameter. This patterning emerges from a prosensory epithelium in response to spatiotemporal molecular cues known as morphogens. Many morphogens active during OC development have been identified, though their precise roles in spatial patterning are not yet fully characterized. It is hypothesized that a regulatory network involving the Bmp4, canonical Wnt/ß-catenin, and Jag1-Notch pathways, active during E11.5 and E12.5, principally refines the boundaries of the sensory epithelium, setting the stage for supporting-cell and hair-cell differentiation beginning at E13.5 as the active network topology evolves. To investigate these signals, semiquantitative confocal imaging is used to extract numeric data for spatial profiles at E12.5, when morphogen signaling levels are strongest prior to differentiation. These data are analyzed using information-theoretic approaches to determine the amount of positional information provided along the medial-lateral domain, interpreted by cells as positional identity, which is used for cell-fate determination. Additionally, the profiles are to be used to calibrate and investigate mechanistic reaction-diffusion models and test hypotheses on network topology and morphogen transport.
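A minimal sketch of the information-theoretic step described above, assuming a simple binned (plug-in) estimator of the mutual information between medial-lateral position and a morphogen readout; the bin counts and the toy gradient are illustrative assumptions:

```python
import numpy as np

def positional_information(positions, intensities, n_pos_bins=20, n_int_bins=10):
    """Plug-in estimate of the mutual information I(position; signal), in bits,
    from paired samples of medial-lateral position and reporter intensity."""
    joint, _, _ = np.histogram2d(positions, intensities,
                                 bins=[n_pos_bins, n_int_bins])
    p = joint / joint.sum()                  # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginal over position bins
    py = p.sum(axis=0, keepdims=True)        # marginal over intensity bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# toy example: a noisy monotonic gradient carries positional information
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 5000)                      # medial-lateral position
signal = x + 0.1 * rng.standard_normal(x.size)   # graded morphogen readout
print(f"~{positional_information(x, signal):.2f} bits")
```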

March 28, 2019

Andres Llico Gallardo, Ph.D. Student (BME)

Enhanced Speech Perception Using Physiologically Based Cochlear Implant Stimulation: Preliminary Results

Abstract: Cochlear implants (CIs) are electronic devices capable of partially restoring hearing, addressing a prevalent and disabling condition in the United States. CIs bypass the peripheral auditory system by delivering electrical stimulation patterns directly to the auditory nerve, intended to replicate the outcomes of normal hearing. Modern stimulation patterns have generally been developed following a phenomenological approach rather than being derived from the known physiological function of the auditory system. However, physiological models are usually complex and computationally expensive, creating a trade-off between accuracy and performance and thereby limiting their use in practical applications. Using a computational model of the auditory system, we have developed an optimization framework that solves the inverse problem of finding the optimal stimulation sequence that generates the desired pattern for a CI user. This optimized sequence is the result of comparing neural activity patterns from the computational model of normal hearing and a CI simulator. Experiments included the presentation of phonetically balanced and hVd words to a post-lingually deaf subject in noise and in quiet. Preliminary results have shown significant improvements in speech perception tests under noise conditions, and more consistency between trials, compared to traditional stimulation strategies. The proposed framework can serve as a ground truth for future improvements in either hardware or stimulation strategies. However, further research is needed to adapt it for real-time applications.
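The inverse problem described above can be pictured with a short sketch. Everything here is a hypothetical stand-in: the model interfaces, the parameterization of the pulse sequence, and the optimizer choice are assumptions, not the actual framework:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for the two model stages described in the abstract:
# normal_hearing_model(sound) -> target neurogram, and
# ci_simulator(pulse_params) -> neurogram evoked by a CI pulse sequence.

def fit_stimulation(sound, normal_hearing_model, ci_simulator, n_params):
    target = normal_hearing_model(sound)   # desired neural activity pattern

    def loss(params):
        # mean squared mismatch between CI-evoked and normal-hearing patterns
        return np.mean((ci_simulator(params) - target) ** 2)

    x0 = np.zeros(n_params)                # initial pulse amplitudes/timings
    res = minimize(loss, x0, method="Nelder-Mead", options={"maxiter": 2000})
    return res.x                           # optimized stimulation sequence

# toy usage with linear stand-in models (recovers params close to all-ones)
A = np.random.default_rng(0).standard_normal((50, 8))
params = fit_stimulation(np.ones(8),
                         normal_hearing_model=lambda s: A @ s,
                         ci_simulator=lambda p: A @ p,
                         n_params=8)
```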

March 21, 2019

Michael G. Heinz, Ph.D. (SLHS/BME)

Physiological and Behavioral Assays of Cochlear Synaptopathy in Chinchillas

Abstract: Moderate-level noise exposure can eliminate cochlear synapses without permanently damaging hair cells or elevating auditory thresholds in animals. Cochlear synaptopathy has been hypothesized to contribute to human perceptual difficulties in noise that can be observed even with normal audiograms. However, this hypothesis is difficult to test because 1) ethical limits preclude measuring human synaptopathy directly, and 2) synaptopathy has been most completely characterized in rodent models for which behavioral measures at speech frequencies are challenging. We recently established a relevant mammalian behavioral model by showing that chinchillas have corresponding neural and behavioral amplitude-modulation (AM) detection thresholds in line with human thresholds. Furthermore, immunofluorescence histology confirmed that synaptopathy occurs in chinchillas across a broad frequency range, including speech frequencies, following a lower-frequency noise exposure that avoids permanent changes in ABR thresholds and DPOAE amplitudes. Auditory-nerve fiber responses showed that low-SR fibers were reduced in percentage (but not eliminated) following noise exposure, as in guinea pigs. Non-invasive wideband middle-ear muscle-reflex (MEMR) assays in awake chinchillas showed large and consistent reductions in suprathreshold amplitudes following noise exposure, whereas suprathreshold ABR wave-1 amplitude reductions were less consistent. The relative diagnostic strengths of MEMR and ABR assays were consistent with parallel studies of noise-exposed and middle-aged humans. Behavioral assays of tonal-carrier AM detection in chinchillas before and after noise exposure found no significant performance degradation, suggesting that more complex stimuli, which provide a greater challenge to population neural coding, may be required. These anatomical, physiological, and behavioral data illustrate a valuable animal model for linking physiological and perceptual effects of hearing loss. Funding: NIH R01DC009838 (Heinz) and NIH R01DC015989 (Bharadwaj).

March 4, 2019

William Salloom, Ph.D. Candidate, PULSe program (Strickland Lab)

Physiological and Psychoacoustic Measures of Two Different Auditory Efferent Systems

Abstract: The human auditory pathway has two efferent systems that can adjust our ears to incoming sound at the periphery. One system, known as the middle-ear muscle reflex (MEMR), causes contraction of the muscles in the middle ear in response to loud sound and decreases transmission of energy to the cochlea. A second system, known as the medial olivocochlear reflex (MOCR), decreases amplification by the outer hair cells in the cochlea. While these systems have been studied in humans and animals for decades, their functional roles are still under debate, especially their roles in auditory perception. The MOCR is thought to become active at lower sound levels and seems to have an effect across the frequency range, whereas the MEMR is thought to be activated at higher sound levels and to mainly affect low frequencies. The present study proposes to analyze these systems in more detail using physiological measures, and to measure perception using the same stimuli. We hypothesize that these systems may actively adjust the dynamic range in response to incoming sound so that we are able to perceive information-bearing contrasts.

February 28, 2019

Brandon Coventry and Matthew Tharp, Weldon School of Biomedical Engineering 

Alternative coding characteristics in the medial geniculate body formed by collicular terminal conditions

Abstract: The medial geniculate body (MGB) is the primary sensory input to auditory cortex. As part of a junction between sensory and cortical neuronal populations, it is suspected that MGB neurons participate in a “coding transformation” of encoded acoustic stimuli. During this transformation, neural coding characteristics may transition from a time-dependent to a rate-dependent format. The ability of single neurons to preserve information about stimulus features such as frequency or loudness during a coding transformation is uncertain, and an understanding of the mechanisms behind a transformation provides insight into physiologically relevant encoding capabilities. To delineate possible transformation mechanisms, a model of rat MGB firing patterns was constructed in silico using NEURON software. Spike pattern inputs to MGB models were based upon neural activity evoked by the presentation of various amplitude-modulated sound stimuli, and resulting MGB output firing patterns were assessed. In this study, three metrics (information entropy, firing rate, and vector strength) are utilized to assess coding characteristics in mathematical models of MGB neurons. Model parameters were organized to represent physiological properties of either the dorsal (MGd) or ventral (MGv) region of MGB, and the relationships between information entropy, firing rate, and vector strength were observed for different stimulus frequencies. Results indicate that, depending upon the structure of inferior colliculus synaptic terminals within simulations, the same inputs of auditory information may be represented as one of two largely different coding schemes, and the corresponding physiological properties necessary for each distinct coding scheme are representative of actual physiological properties found within the MGd or the MGv. These results provide evidence for parallel pathways of information transmission within the MGB while suggesting that distinct regions of the MGB participate in divergent representations of the same auditory information.
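Two of the metrics named above have compact, standard definitions; a minimal sketch (the toy spike trains are illustrative assumptions, not data from the study):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength VS = |mean(exp(i*phase))| of spikes relative to the
    modulation cycle; 1 = perfect phase locking, 0 = no locking."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

def spike_count_entropy(spike_counts):
    """Shannon entropy (bits) of the per-trial spike-count distribution,
    one simple way to quantify the capacity of a rate code."""
    _, counts = np.unique(spike_counts, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

# toy usage: spikes locked to a 10-Hz modulator give vector strength near 1
rng = np.random.default_rng(2)
t = np.arange(0, 1, 0.1) + 0.002 * rng.standard_normal(10)
print(vector_strength(t, 10.0))
print(spike_count_entropy(rng.poisson(5, size=100)))
```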

February 21, 2019

Alexander L. Francis, SLHS

Noise, hearing impairment, and health: the attention/effort/annoyance connection

Abstract: Noise is a significant source of annoyance and distress and is increasingly recognized as a major public health issue in Europe and around the world. Workplace noise impairs job performance and increases fatigue and susceptibility to chronic disease. Noise is one of the top reasons given for abandoning a hearing aid. People who work in noise and people with hearing impairment are both at greater risk for cardiovascular diseases commonly associated with stress. Background noise may also be particularly troublesome for individuals with tinnitus, hyperacusis, or misophonia, all of which appear to involve atypical attentional and emotional responses to auditory stimuli. Even in non-clinical populations, sensitivity to noise varies considerably, with 20-40% of individuals reporting some sensitivity and 12% reporting high sensitivity. We hypothesize that both hearing impairment and background noise cause annoyance when irrelevant sounds interfere with task performance, e.g., through distraction and/or increased listening effort. Chronic annoyance, in turn, may induce physiological stress responses that damage long-term health. However, the effect of background noise and/or hearing impairment on long-term health may vary depending on individual differences in information processing (cognitive capacity), susceptibility to distraction (selective attention), and noise sensitivity or emotional responsivity (affective psychophysiology). In this talk I will discuss some recent studies we have been running within the context of a new research program to investigate individual differences in cognitive and affective responses to noise, and to develop objectively quantifiable measurements of psychophysiological response to noise that could eventually be obtained through inexpensive wearable devices.

February 14, 2019

Hari Bharadwaj, SLHS/BME

Assays of suprathreshold hearing: Integrative non-invasive windows into processes throughout the auditory system

Abstract: Everyday noisy environments with multiple sound sources place tremendous demands on the auditory system. Successful listening in such environments relies on the interplay between early processes along the auditory pathway that encode the acoustic information, automatic processes throughout the auditory system that organize the encoded information, and cognitive processes such as selective attention that aid in processing target information while ignoring irrelevant sources. Consequently, to understand an individual's performance in such complex listening tasks, it is important to study the auditory system integratively at multiple levels. Here, we illustrate non-invasive approaches that can probe different processes along the auditory pathway, with applications to our understanding of suprathreshold hearing in three populations: middle-aged individuals, children with autism spectrum disorders, and young normal-hearing individuals with no hearing complaints but with widely varying performance in selective listening tasks.

February 7, 2019

A sampling of upcoming external presentations by the Purdue hearing science community

Speakers & titles:

Prof. Lata Krishnan (SLHS): "Newborn Hearing Screening: Early Education = More Satisfied Mothers" (10 mins)

Emily Han (BIOL): "Auditory Processing Deficits Correspond to Secondary Injuries along the Auditory Pathway Following Mild Blast Induced Trauma" (10 mins)

Kelsey Dougherty (SLHS) & Hannah Ginsberg (BME): "Non-Invasive Assays of Cochlear Synaptopathy in Humans and Chinchillas" (10 mins)

Agudemu Borjigan (BME): "Individual Differences in Spatial Hearing may arise from Monaural Factors" (5 mins)

Ravinderjit Singh (BME): "Neural Sensitivity to Dynamic Binaural Cues: Human EEG and Chinchilla Single-Unit Responses" (5 mins)

Vibha Viswanathan (BME): "Neurophysiological Evaluation of Envelope-based Models of Speech Intelligibility" (2 mins)

Satyabrata Parida (BME): "Effects of Noise-Induced Hearing Loss on Speech-In-Noise Envelope Coding" (2 mins)

January 31, 2019

Satyabrata (Satya) Parida, Ph.D. Student, Weldon School of Biomedical Engineering

Effects of noise-induced hearing loss on speech-in-noise envelope coding: Inferences from single-unit and non-invasive measures in animals

Abstract: Speech-intelligibility models (SIMs) can be used for systematic fitting of hearing aids and cochlear implants, potentially improving clinical outcomes in noisy environments. Existing SIMs are suitable for predicting the performance of normal-hearing subjects, but not of hearing-impaired subjects, due to our limited understanding of the effects of cochlear hearing impairment on speech and speech-in-noise coding. To address this gap, we collected auditory nerve (AN) single-unit responses and envelope following responses (EFRs) in normal-hearing and hearing-impaired chinchillas to speech (a sentence), spectrally matched stationary noise, and noisy-speech mixtures. EFRs show evidence of degraded tonotopic coding, as observed in single-unit responses (e.g., Henry et al., J. Neurosci., 2016). In particular, the hearing-impaired group is more susceptible to masking of medium-frequency (0.5-3 kHz) information by low-frequency (<500 Hz) carrier energy. Our data also show an increased correlation between AN-fiber response envelopes for noisy speech and for noise alone in hearing-impaired fibers in speech-relevant modulation-frequency bands, suggesting a greater degree of distraction by inherent envelope fluctuations following cochlear hearing loss. This novel finding is significant given the emphasis recent SIMs (e.g., Jørgensen and Dau, JASA, 2011) have placed on comparing inherent noise envelope fluctuations to speech envelope-coding fidelity in predicting noisy-speech perception. A future direction will be to develop SIMs based on our neuro- and electrophysiological data.
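A minimal sketch of the kind of envelope-correlation analysis described above, assuming the responses are continuous waveforms (e.g., PSTHs) and that a 4-64 Hz band approximates the speech-relevant modulation range; both are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_correlation(resp_noisy_speech, resp_noise, fs, band=(4.0, 64.0)):
    """Correlate the slow envelopes of two response waveforms within a
    speech-relevant modulation band. A higher correlation between the
    noisy-speech and noise-alone responses indicates stronger capture of the
    response by the masker's inherent envelope fluctuations."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env1 = filtfilt(b, a, np.abs(hilbert(resp_noisy_speech)))
    env2 = filtfilt(b, a, np.abs(hilbert(resp_noise)))
    return float(np.corrcoef(env1, env2)[0, 1])
```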

January 24, 2019

Ruth Y. Litovsky, Professor of Communication Sciences & Disorders and Professor of Surgery, Division of Otolaryngology

Restoring binaural and spatial hearing in cochlear implant users

Abstract: Patients with bilateral deafness are eligible to receive bilateral cochlear implants (BiCIs), and in some countries, patients who suffer from single-sided deafness are receiving a cochlear implant (SSD-CI) in the deaf ear. In both the BiCI and SSD-CI populations there is a potential benefit from the integration of inputs arriving from both ears. One of the demonstrated benefits is improved sound localization. To understand the factors that are important for binaural integration, we use research processors that deliver pulsatile stimulation to multiple binaural pairs of electrodes. Our novel stimulation paradigms are designed to restore both binaural sensitivity and speech understanding. A second known benefit is an improved ability to segregate speech from background noise or competing maskers. Our recent studies are aimed at measuring both release from masking and release from cognitive load. In these studies, we use real-time pupil dilation as a means to assess listening effort while subjects listen to speech stimuli. We are interested in the extent to which bilateral hearing in BiCI and SSD-CI patients promotes release from masking and reduces the corresponding cognitive load. By understanding the cost/benefit of integrating inputs to the two ears, a more complete picture of the advantages of bilateral stimulation can emerge.

January 17, 2019

Vibha Viswanathan, Ph.D. Student, Weldon School of Biomedical Engineering

Evaluating Human Neural Envelope Coding as the Basis of Speech Intelligibility in Noise

Abstract: Models of speech intelligibility that accurately reflect human listening performance across a broad range of background-noise conditions could be clinically important (e.g., for deriving hearing-aid prescriptions, and optimizing cochlear-implant signal processing). A leading hypothesis in the field is that internal representations of envelope information ultimately determine intelligibility. However, this hypothesis has not been tested neurophysiologically. Here, we address this gap by combining human electroencephalography (EEG) with simultaneous perceptual intelligibility measurements. First, we derive a neural envelope-coding metric (ENVneural) from EEG responses to speech in multiple levels of stationary noise, and identify a mapping between the neural metric and corresponding speech intelligibility. Then, using the same mapping, we use only EEG measurements to test whether ENVneural is predictive of speech intelligibility in novel background-noise conditions and in the presence of linear and non-linear distortions. Preliminary results suggest that neural envelope coding can predict speech intelligibility to varying degrees for different realistic listening conditions. These results inform modeling approaches based on neural coding of envelopes, and may lead to the future development of physiological measures for characterizing individual differences in speech-in-noise perceptual abilities.
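As one plausible formulation of such a neural envelope-coding metric (an assumption for illustration, not the study's actual definition of ENVneural), EEG-to-stimulus-envelope coherence can be averaged over a slow modulation band:

```python
import numpy as np
from scipy.signal import hilbert, coherence

def env_neural_metric(eeg, stimulus, fs, mod_band=(1.0, 16.0)):
    """One plausible envelope-coding metric: mean coherence between the EEG
    and the broadband speech envelope over a slow modulation band. `eeg` and
    `stimulus` are 1-D arrays sampled at rate `fs` (Hz)."""
    stim_env = np.abs(hilbert(stimulus))            # broadband speech envelope
    f, cxy = coherence(eeg, stim_env, fs=fs, nperseg=int(2 * fs))
    band = (f >= mod_band[0]) & (f <= mod_band[1])  # slow modulations only
    return float(cxy[band].mean())
```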

January 7, 2019

RCR presentation on animal use ethics and protocols

The first "presentation" will be a responsible conduct of research (RCR) discussion on animal use ethics and protocols. Please come and participate in the discussion on this important topic. Note that if you are funded through  a federal training grant, attending this session may be a requirement.

Please take a moment to look through the working schedule for this semester here:

https://purdue.edu/TPAN/hearing/shrp_schedule


The titles and abstracts of the talks will be updated here:

https://purdue.edu/TPAN/hearing/shrp_abstracts


LYLE 1150: 10:30-11:20 AM

December 6, 2018

Ravinderjit Singh, M.D./Ph.D. student, Weldon School of Biomedical Engineering

Neural Sensitivity to Dynamic Binaural Cues: Human EEG and Chinchilla Single-Unit Responses

Animals encounter dynamic binaural information in broadband sounds such as speech and background noise. These dynamic cues can result from: 1) moving sound sources, 2) self-motion, or 3) reverberation. Two dynamic binaural cues investigated in this work are the inter-aural time delay (ITD) and inter-aural correlation (IAC). Most studies investigating ITD or IAC sensitivity have used static or sinusoidally varying inter-aural signals, while neural sensitivity to changes in ITD or IAC has rarely been systematically addressed.

We are using a systems-identification technique to characterize neural responses to dynamically changing ITD and IAC in broadband sounds. We use a maximum-length sequence (MLS) to modulate either the ITD or the IAC of a broadband noise carrier. Neural responses are recorded from humans using electroencephalography (EEG) and from auditory nerve fibers (ANFs) in terminally anesthetized chinchillas. Using the responses from ANFs, responses from a higher-order brainstem structure, the medial superior olive (MSO), are simulated. Human behavioral data are also obtained to determine the upper limits of human detection of dynamic IAC and to quantify how thresholds for target detection in noise vary with IAC dynamics.
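Because an MLS has a nearly impulsive circular autocorrelation, the impulse response of a linear system driven by the MLS can be recovered by circular cross-correlation of one period of the response with the sequence, and the transfer function is then its FFT. A minimal sketch under those assumptions (the sequence length and the toy low-pass system are arbitrary):

```python
import numpy as np
from scipy.signal import max_len_seq

# +/-1 maximum-length sequence, e.g., used to modulate the ITD (or IAC)
mls = 2.0 * max_len_seq(10)[0] - 1.0      # length 2**10 - 1 = 1023
L = mls.size

def mls_impulse_response(response):
    """Recover the impulse response by circular cross-correlation of one
    steady-state period of the response with the driving MLS."""
    h = np.fft.ifft(np.fft.fft(response) * np.conj(np.fft.fft(mls))).real
    return h / L

# toy check: a known low-pass system is recovered from its MLS response
true_h = np.exp(-np.arange(L) / 20.0)
true_h /= true_h.sum()
resp = np.fft.ifft(np.fft.fft(mls) * np.fft.fft(true_h)).real  # circular conv.
print(np.allclose(mls_impulse_response(resp)[:50], true_h[:50], atol=1e-2))
# the transfer-function estimate is np.fft.rfft(mls_impulse_response(resp))
```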

Results thus far show that transfer functions from the MSO (simulated from ANF responses) are low-pass with corner frequencies in the range of hundreds of Hz. In contrast, EEG-based transfer functions, presumably reflecting cortical responses, were also low-pass, but with corner frequencies in the range of tens of Hz. Preliminary human behavioral results will also be presented.


November 29, 2018

Inyong Choi, PhD, Asst. Professor, Communication Sciences & Disorders, U. Iowa

Causal relationship between selective attention and speech unmasking during word-in-noise recognition

This presentation will introduce results from two recent studies of normal-hearing listeners' speech-in-noise understanding: what cognitive factors explain its variability, and how. The first study shows that speech unmasking, revealed by the amplitude ratio between cortical auditory evoked responses to the target sound and the noise, predicts accuracy and processing speed during a word-in-noise recognition task. Individual differences in speech unmasking were thought to be related to the auditory selective attention process, which enhances the strength of neural responses to attended sounds while suppressing the neural responses to ignored sounds. In the second study, we tested whether training of selective attention can improve speech unmasking, which in turn improves the accuracy of word-in-noise recognition. During training, subjects were asked to attend to one of two simultaneous but asynchronous auditory streams. For the participants assigned to the experimental group, visual feedback was provided after each trial to indicate whether their attention was correctly decoded from their single-trial EEG response. After four weeks of this neurofeedback training, the experimental-group participants exhibited amplified cortical evoked responses to target speech as well as improved word-in-noise recognition, while the placebo-group participants did not show consistent improvement. This result demonstrates a causal relationship between auditory selective attention and speech-in-noise performance.


November 1, 2018

Miranda Skaggs and Nicole Mielnicki, Au.D. graduate students, SLHS (Strickland lab)

Behavioral measures of cochlear gain and gain reduction in listeners with normal hearing or minimal cochlear hearing loss

On the audiogram, hearing thresholds are divided into discrete categories of normal, mild, moderate, etc.  However, there is likely a continuum of hearing abilities even within the normal range. This is a continuation of a study examining the relationship between various psychoacoustic measures thought to be related to cochlear function, including gain reduction.  In the listeners tested, thresholds for long-duration tones ranged from well within the clinically normal range to just outside this range.  Where thresholds were elevated, other clinical tests were consistent with a cochlear origin.  Because the medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to sound, when possible, measures were made with short stimuli.  Signal frequencies ranged from 1 to 8 kHz.  Maximum gain was estimated by measuring the threshold masker level for a masker at the signal frequency and a masker nearly an octave below the signal frequency.  One point on the lower leg of the input/output function was measured by finding the threshold masker level for a masker slightly less than one octave below the signal frequency needed to mask a signal at 5 dB SL.  Gain reduction was estimated by presenting a pink noise precursor before the signal and masker, and measuring the change in signal threshold as a function of precursor level. The relationship between these measures will be discussed.
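The gain estimate described above reduces to simple arithmetic: assuming the masker nearly an octave below the signal is processed linearly at the signal place while the on-frequency masker receives the full cochlear gain, the difference between the two masker levels at threshold approximates the maximum gain. The numbers below are invented purely for illustration:

```python
# Hypothetical threshold masker levels (dB SPL) for one listener and one
# signal frequency; these values are made up for illustration only.
on_freq_masker_level = 45.0    # masker at the signal frequency
off_freq_masker_level = 85.0   # masker nearly an octave below the signal

# Off-frequency masking is assumed to be linear at the signal place, while
# the on-frequency masker is amplified by the cochlear active process, so
# the level difference estimates the maximum gain applied to the signal.
estimated_max_gain_dB = off_freq_masker_level - on_freq_masker_level
print(f"estimated maximum cochlear gain: {estimated_max_gain_dB:.0f} dB")
```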

October 25, 2018

Erik Larsen, Ph.D., Golbarg Mehraei, Ph.D., and Ann Hickox, Ph.D., Decibel Therapeutics

Making drug therapies for hearing a reality: what does it take?

There are still no approved drugs available for preventing or treating hearing loss, despite a massive unmet need and the limitations of current hearing assistive devices. However, recently there has been an increasing amount of investment in new companies in the hearing-therapeutics space. What is actually needed to translate scientific discoveries into actual products that can meet regulatory approval? Why haven’t pharmaceutical and biotech companies been successful so far? This talk will highlight some of these aspects using Decibel Therapeutics’ approach as an example, and will include some highlights from our research & development.

October 18, 2018

Kelly L. Whiteford, Ph.D., Postdoctoral Fellow, University of Minnesota

Mechanisms for Coding Frequency Modulation

Modulations in frequency (FM) and amplitude (AM) are fundamental for human and animal communication. Humans are most sensitive to FM at low carrier frequencies (fc < ~4 kHz) when the modulation rate is slow (fm < 10 Hz), which are also the frequencies and rates most important for speech and music perception. The leading explanation for our exquisite sensitivity within this range is that slow FM is coded by precise, phase-locked spike times in the auditory nerve (time code). Low-carrier FM at faster rates and higher carriers at all rates, on the other hand, are thought to be represented by tonotopic (place) coding, based on the conversion of FM to AM via cochlear filtering. We utilized individual differences in sensitivity to a variety of psychophysical tasks, including low-carrier FM and AM at slow (fm = 1 Hz) and fast (fm = 20 Hz) modulation rates, to better understand the peripheral code for FM. Tasks were assessed across three large groups of listeners: Young, normal-hearing (NH) listeners (n=100), NH listeners varying in age (n=85), and listeners varying in degree of sensorineural hearing loss (SNHL; n=56). Results from all three groups revealed high multicollinearity amongst FM and AM tasks, even tasks thought to be coded by separate peripheral mechanisms. For normal-hearing listeners, the bulk of variability in performance appeared to be driven by non-peripheral factors. Data from listeners varying in SNHL, however, showed strong correlations between the fidelity of cochlear place coding (frequency selectivity) and FM detection at both slow and fast rates, even after controlling for audibility, age, and sensitivity to AM. Overall, the evidence suggests a unitary code for FM that relies on the conversion of FM to AM via cochlear filtering across all FM rates and carrier frequencies.
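A toy illustration (not a cochlear model; the filter and FM parameters are arbitrary assumptions) of how bandpass filtering converts FM into AM on a filter's sloping edge, the conversion invoked in the place-coding account above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
fc, fm, df = 1000.0, 2.0, 50.0    # carrier (Hz), FM rate (Hz), excursion (Hz)

# FM tone with instantaneous frequency fc + df*sin(2*pi*fm*t)
fm_tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))

# Crude stand-in for one cochlear filter: a bandpass centered off the carrier
b, a = butter(2, [1050 / (fs / 2), 1250 / (fs / 2)], btype="band")
out = filtfilt(b, a, fm_tone)

# On the filter's sloping edge, the frequency sweep becomes an amplitude sweep
env = np.abs(hilbert(out))[fs // 10 : -fs // 10]   # trim filter edge effects
depth = (env.max() - env.min()) / (env.max() + env.min())
print(f"FM converted to AM with modulation depth ~{depth:.2f}")
```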

October 11, 2018

Agudemu Borjigan, Ph.D. student, Weldon School of Biomedical Engineering

Investigating the Role of Temporal Fine Structure in Everyday Hearing

In challenging environments with multiple sound sources, successful listening relies on precise encoding and use of fine-grained spectrotemporal sound features. Indeed, human listeners with normal audiograms can derive substantial release from masking when there are discrepancies in pitch or spatial location between the target and masking sounds. While the temporal fine structure (TFS) in low-frequency sounds can convey information about both of these aspects of sound, a long-standing and nuanced debate exists in the literature about the role of TFS cues in masking release in complex environments. Understanding the role of TFS in complex listening environments is important for optimizing the design of assistive devices such as cochlear implants. The long-term goal of the present study is to leverage individual differences across normal-hearing listeners to address this question. As the first step, we are measuring individual TFS sensitivity via both psychophysical and electroencephalography (EEG) approaches. Preliminary data show large variance across subjects in both behavioral and EEG measures. Follow-up experiments will compare individual differences in these TFS-coding measures to speech-in-noise perception with complex maskers in co-located and spatially separated configurations to understand the role of TFS in everyday hearing.

October 4, 2018

Jeffrey Lucas, Professor, Department of Biological Sciences

Using auditory information to keep eagles out of wind turbines

Golden eagles and bald eagles are known to be involved in collisions with wind turbines.  This source of mortality may be an important contributor to poor population viability for golden eagles in particular.  One potential technique that could be used to reduce collision rates is to identify alerting stimuli that make the turbine itself a more salient stimulus to the birds.  As part of a larger project, we have recently begun to collect data on the auditory physiology of eagles with an eye to finding stimuli that are maximally alerting. We are also looking for stimuli that are minimally influenced by noise masking because the conditions around wind turbines can potentially mask certain types of sounds.  We review preliminary results on bald eagles and offer some insight into what types of auditory stimuli might be useful in reducing death rates of eagles in a world where wind energy is becoming a more important source of energy for an ever-growing human population.

September 27, 2018

Elizabeth Strickland, Ph.D., CCC-A, Professor of Speech, Language, and Hearing Sciences

Preceding sound may improve detection in a forward masking task

There are physiological mechanisms that adjust the dynamic range of the peripheral auditory system in response to sound. One of these, the medial olivocochlear reflex (MOCR), feeds back to the cochlea and adjusts the gain in response to sound. Our research uses behavioral measures that may reflect peripheral gain and looks for evidence of a decrease in gain after preceding sound. When a signal and a masker are on at the same time (simultaneous masking), preceding sound may make the signal audible at a lower signal-to-masker ratio, thus improving perception. However, when the masker precedes the signal (forward masking), preceding sound has been shown to increase signal threshold, decrease frequency selectivity, and decrease suppression. While all of these effects are consistent with a decrease in gain, they all sound like bad things. In this talk, I will show a condition in forward masking where the signal is audible at a lower signal-to-masker ratio following preceding sound, which might be a good thing.

September 20, 2018

Brandon S Coventry, Ph.D. Candidate, Weldon School of Biomedical Engineering, Purdue Institute of Integrative Neuroscience, and Center for Implantable Devices, Purdue University

Optical deep brain stimulation of the central auditory pathway

Neurological and sensory neuroprostheses based on electrical stimulation have proven effective in restoring auditory percepts via cochlear and auditory brainstem implants, as well as in treating Parkinson’s disease and Tourette’s syndrome with deep brain stimulation (DBS). However, deficits in modern devices, such as current spillover and the inability to selectively target local circuits, result in undesirable auditory percepts in sensory prostheses and undesirable side effects in central nervous system implants. Infrared neural stimulation (INS) is an optical technique that has been shown to selectively stimulate nerves and neurons using long-wavelength (>1450 nm) infrared light. INS is a promising stimulation modality because it does not require genetic modification of the target, allowing translation to human patients without additional genetic manipulations. Furthermore, previous studies in nerve have suggested that INS is more spatially specific than conventional electrical stimulation. Preliminary studies in the central nervous system have suggested INS can elicit responses in cortical structures. However, the efficacy of INS in generating biophysical responses in thalamocortical networks is unexplored. Demonstration of effective thalamocortical recruitment would establish INS as a potential stimulation therapeutic that could theoretically improve on cochlear and brainstem implant performance. In this study, Sprague-Dawley rats of both sexes were implanted with optrodes in the medial geniculate body (MGB) in the auditory thalamus and 16-channel microwire arrays in the primary auditory cortex (A1). After recovery, auditory and infrared stimuli were presented to awake, restrained animals. Auditory stimuli consisted of click trains at sound levels between 60 and 90 dB, random-spectrum stimuli with spectral contrasts of 5, 10, and 15 dB, and amplitude-modulated broadband noise. Infrared stimuli operated in quasi-continuous-wave mode with single pulses of 0-600 mW power and pulse widths varying between 5-100 ms in duration. Initial results show that infrared stimulation of the MGB gives rise to repeatable and short-latency action potentials and local field potentials in the auditory cortex. Furthermore, joint peristimulus time histogram analysis suggests that INS acts in a spatially specific manner, recruiting only local circuits for activation. Finally, the use of INS for next-generation cochlear implants and auditory brainstem/midbrain implants will be discussed.

September 13, 2018

Prof. Lisa L. Hunter, Ph.D., FAAA, Scientific Director of Research, Division of Audiology, Cincinnati Children's Hospital Medical Center

High frequency hearing, otoacoustic emissions and speech-in-noise deficits due to aminoglycoside ototoxicity in cystic fibrosis

Aminoglycoside antibiotics are used worldwide to treat drug-resistant chronic lung infections. These lifesaving drugs unfortunately cause hearing loss due to ototoxicity, the effects of which progress from the base to the apex of the basilar membrane (inner ear). Therefore, in order to detect ototoxicity sooner, the higher-frequency region is important to assess. This presentation will discuss extended high-frequency hearing and transient-evoked otoacoustic emissions to chirps (TEOAEs) to detect ototoxicity in pediatric patients with cystic fibrosis (CF) treated with aminoglycosides, compared to age-matched untreated controls. TEOAEs were measured using chirp stimuli at frequencies from 0.7-14.7 kHz, along with audiometry and speech-in-noise thresholds on the BKB-SIN test. Hearing thresholds were significantly poorer in the CF group than in the control group at all frequencies, but particularly from 8-16 kHz, with thresholds in the CF group ranging up to 80 dB HL. Speech-in-noise performance on the BKB-SIN test was significantly poorer for the CF group compared to controls and age norms. TEOAE signal-to-noise ratios were significantly poorer in the CF subgroup with significant hearing loss in the 8-10 kHz frequency region, compared to controls without hearing loss. These results show that newly developed chirp TEOAE measures in the extended high-frequency range are effective in detecting cochlear impacts of ototoxicity. Poorer speech-in-noise function in the group treated with aminoglycosides provides additional physiologic evidence of cochlear, and possibly neural, deficits.

September 6, 2018

Alexandra Mai, Audiology Graduate Student, Purdue University, presenting NIH T35 student research conducted at Boys Town National Research Hospital

Beliefs Held by Parents of Infants and Toddlers with Hearing Loss 

It is understood that the amount of time children wear their hearing devices and the amount of parent involvement are associated with language outcomes for children. However, device use and parent involvement are highly variable. Additionally, it is known that parents’ beliefs affect parenting actions and a child’s early cognitive development (Keels 2009). The Scale of Parental Involvement and Self-Efficacy-Revised (SPISE-R) queries parents’ beliefs, knowledge, confidence, and actions as well as their child’s device use to examine parental self-efficacy. This study focused on the beliefs section of the questionnaire. Each of the eight beliefs has a cut-off past which responses are considered concerning and additional counseling for the parent is recommended. The purpose of this study was to determine what percentage of parents held concerning beliefs, examine how child and family factors (i.e., parental education level, child’s current age, age at confirmation of the hearing loss, degree of hearing loss, and hearing device type) affected parent beliefs, and determine whether a parent holding a concerning belief was associated with differences in their child’s device use or language development. This was done via an online survey made up of a demographic questionnaire, the SPISE-R, the Developmental Profile-3 (DP-3) communication subscale, and the Parenting Sense of Competence self-efficacy subscale. Parents were also asked to submit their child’s most recent audiological results. Results indicate that a significant number of parents held concerning beliefs for all statements except two involving family and early-interventionist impact. Additionally, parental education level, degree of hearing loss, age at confirmation, and current age of the child were each correlated with holding a concerning belief for one belief statement. Finally, only a concerning belief about whether a child’s hearing device(s) help him/her communicate was associated with device use. No beliefs in the concerning range were associated with language development.

August 23, 2018

Josh Alexander (Alexander Lab)

Potential Mechanisms for Perception of Frequency-Lowered Speech   

About 25% of the more than 36 million Americans with hearing loss and about 40% of all hearing aid users have at least a severe hearing impairment. These individuals have significant difficulty perceiving high-frequency speech information even with the assistance of conventional hearing aids. Frequency lowering is a special hearing aid feature that is designed to help these individuals by moving the mid- to high-frequency parts of speech to lower-frequency regions where hearing is better. This feature is offered in various forms by every major hearing aid manufacturer, and it is the standard of care for children when conventional amplification fails to provide audibility of the full speech spectrum (American Academy of Audiology, 2013). However, there is a lack of strong evidence about when and how this feature should be used in the clinic. This stems from a critical knowledge gap concerning mechanisms important for the perception of frequency-lowered speech. Continued existence of this gap contributes to the lack of reproducibility of findings in this research area, suboptimal patient outcomes, and ineffective interventions. This talk will focus on research conducted by the Experimental Amplification Research (EAR) lab on the latest commercially available method of frequency lowering, adaptive nonlinear frequency compression. This method provides unprecedented control over how sounds are remapped onto the residual capabilities of the impaired cochlea. A systematic investigation of the perceptual effects of this method in normal-hearing listeners was conducted using a variety of speech stimuli that had been processed with 8-9 different frequency-lowering settings for each of three hearing loss conditions. Auditory nerve model and acoustic analyses revealed that broadband temporal modulation accounted for 64-94% of the variance across each of the data sets. In fact, the data also revealed that current clinical recommendations for selecting frequency-lowering settings might significantly undermine potential benefit from this feature. A working hypothesis is that frequency-lowering methods and settings that preserve the greatest amount of temporal modulation from the original speech at the auditory periphery will yield the best outcomes for speech perception. Finally, this talk will discuss how the results from normal-hearing listeners compare favorably to predictions generated from auditory nerve simulations of various degrees of sensorineural hearing loss.
