Seminars in Hearing Research at Purdue

Students, postdocs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from across Purdue University, including Speech, Language, and Hearing Sciences (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). The seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2018-2019 Talks


LYLE 1150: 10:30-11:20 AM

January 24, 2019

Ruth Y. Litovsky, Professor of Communication Sciences & Disorders and Professor of Surgery, Division of Otolaryngology

Restoring binaural and spatial hearing in cochlear implant users

Abstract: Patients with bilateral deafness are eligible to receive bilateral cochlear implants (BiCIs), and in some countries, patients who suffer from single-sided deafness are receiving a cochlear implant (SSD-CI) in the deaf ear. In both the BiCI and SSD-CI populations there is a potential benefit from the integration of inputs arriving from both ears. One of the demonstrated benefits is improved sound localization. To understand the factors that are important for binaural integration, we use research processors to deliver pulsatile stimulation to multiple binaural pairs of electrodes. Our novel stimulation paradigms are designed to restore both binaural sensitivity and speech understanding. A second known benefit is improved ability to segregate speech from background noise or competing maskers. Our recent studies are aimed at measuring both release from masking and release from cognitive load. In these studies, we use real-time pupil dilation as a means to assess listening effort while subjects listen to speech stimuli. We are interested in the extent to which bilateral hearing in BiCI and SSD-CI patients promotes release from masking and reduces the corresponding cognitive load. By understanding the cost/benefit of integrating inputs to the two ears, a more complete picture of the advantages of bilateral stimulation can emerge.

January 17, 2019

Vibha Viswanathan, Ph.D. Student, Weldon School of Biomedical Engineering

Evaluating Human Neural Envelope Coding as the Basis of Speech Intelligibility in Noise

Abstract: Models of speech intelligibility that accurately reflect human listening performance across a broad range of background-noise conditions could be clinically important (e.g., for deriving hearing-aid prescriptions, and optimizing cochlear-implant signal processing). A leading hypothesis in the field is that internal representations of envelope information ultimately determine intelligibility. However, this hypothesis has not been tested neurophysiologically. Here, we address this gap by combining human electroencephalography (EEG) with simultaneous perceptual intelligibility measurements. First, we derive a neural envelope-coding metric (ENVneural) from EEG responses to speech in multiple levels of stationary noise, and identify a mapping between the neural metric and corresponding speech intelligibility. Then, using the same mapping, we use only EEG measurements to test whether ENVneural is predictive of speech intelligibility in novel background-noise conditions and in the presence of linear and non-linear distortions. Preliminary results suggest that neural envelope coding can predict speech intelligibility to varying degrees for different realistic listening conditions. These results inform modeling approaches based on neural coding of envelopes, and may lead to the future development of physiological measures for characterizing individual differences in speech-in-noise perceptual abilities.
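The idea of an envelope-coding metric can be illustrated with a toy sketch (this is not the authors' ENVneural pipeline; every signal and parameter below is invented for illustration): extract the slow amplitude envelope of a noisy speech-like signal and correlate it with a simulated EEG trace that tracks that envelope.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000  # sampling rate in Hz (illustrative)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)

# Toy "speech": broadband noise carrier with a slow 3-Hz amplitude envelope
env_true = 1 + 0.8 * np.sin(2 * np.pi * 3 * t)
speech = env_true * rng.standard_normal(t.size)

# Envelope extraction: magnitude of the analytic signal, low-passed below 10 Hz
b, a = butter(2, 10 / (fs / 2))
env_stim = filtfilt(b, a, np.abs(hilbert(speech)))

# Toy "EEG": the true envelope plus measurement noise
# (a real neural metric would be derived from recorded EEG responses)
eeg = env_true + 0.5 * rng.standard_normal(t.size)

# A simple envelope-coding metric: stimulus-envelope/EEG correlation
r = np.corrcoef(env_stim, eeg)[0, 1]
print(round(r, 2))
```

In the study described above, a metric of this general kind is mapped to measured intelligibility in known conditions and then used to predict intelligibility in novel ones.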

January 7, 2019

RCR presentation on animal use ethics and protocols

The first "presentation" will be a responsible conduct of research (RCR) discussion on animal use ethics and protocols. Please come and participate in the discussion on this important topic. Note that if you are funded through a federal training grant, attending this session may be a requirement.



LYLE 1150: 10:30-11:20 AM

December 6, 2018

Ravinderjit Singh, M.D./Ph.D. student, Weldon School of Biomedical Engineering

Neural Sensitivity to Dynamic Binaural Cues: human EEG and chinchilla single-unit responses

Animals encounter dynamic binaural information in broadband sounds such as speech and background noise. These dynamic cues can result from: 1) moving sound sources, 2) self-motion, or 3) reverberation. The two dynamic binaural cues investigated in this work are interaural time difference (ITD) and interaural correlation (IAC). Most studies investigating ITD or IAC sensitivity have used static or sinusoidally varying interaural signals, while neural sensitivity to changes in ITD or IAC is rarely systematically addressed.

We are using a systems-identification technique to characterize neural responses to dynamics of changing ITD and IAC in broadband sounds. We use a maximum length sequence (MLS) to modulate either the ITD or IAC of a broadband noise carrier. Neural responses are recorded from humans using electroencephalography (EEG) and from auditory nerve fibers (ANFs) in terminally anesthetized chinchillas. Using the responses from ANFs, responses from a higher order brainstem structure, the medial superior olive (MSO), are simulated. Human behavioral data is also obtained to determine the upper limits of human detection of dynamic IAC and to quantify how thresholds for target detection in noise vary with IAC dynamics. 

Results thus far show that transfer functions from the MSO (simulated from ANF responses) are low-pass, with corner frequencies in the range of hundreds of Hz. In contrast, EEG-based transfer functions, presumably reflecting cortical responses, were also low-pass, but with corner frequencies in the range of tens of Hz. Preliminary human behavioral results will also be presented.
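The MLS-based systems-identification approach described above can be sketched in a few lines: because a maximum length sequence has an impulse-like circular autocorrelation, circularly cross-correlating a linear system's response with the probe recovers the system's impulse response (and hence its transfer function). The "system" below is a made-up FIR filter standing in for the neural pathway; no real data or actual analysis parameters from the talk are involved.

```python
import numpy as np
from scipy.signal import max_len_seq

# Probe: a maximum length sequence (MLS), mapped from {0,1} to ±1
nbits = 12
mls = max_len_seq(nbits)[0].astype(float) * 2 - 1
N = mls.size  # 2**nbits - 1 = 4095 samples

# Hypothetical linear "system" standing in for the neural pathway:
# a short low-pass FIR impulse response (purely illustrative)
h_true = np.zeros(N)
h_true[:8] = np.hanning(8)

# Simulated response: circular convolution of the probe with the system
y = np.real(np.fft.ifft(np.fft.fft(mls) * np.fft.fft(h_true)))

# Because the MLS autocorrelation is impulse-like, circular
# cross-correlation with the probe recovers the impulse response
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(mls)))) / N

# The transfer function is then simply the FFT of the recovered response
print(np.max(np.abs(h_est - h_true)))  # small residual, ~1e-3
```

In the actual experiments, the MLS modulates the ITD or IAC of a broadband noise carrier rather than driving the system directly, but the same deconvolution logic yields the binaural transfer functions reported above.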


November 29, 2018

Inyong Choi, PhD, Asst. Professor, Communication Sciences & Disorders, U. Iowa

Causal relationship between selective attention and speech unmasking during word-in-noise recognition

This presentation will introduce results from two recent studies of normal-hearing listeners' speech-in-noise understanding, asking what cognitive factors explain its variability and how. The first study shows that speech unmasking, revealed by the amplitude ratio between cortical auditory evoked responses to the target sound and the noise, predicts accuracy and processing speed during a word-in-noise recognition task. Individual differences in speech unmasking were thought to be related to the auditory selective attention process, which enhances the strength of neural responses to attended sounds while suppressing the neural responses to ignored sounds. In the second study, we tested whether training of selective attention can improve speech unmasking, which in turn improves the accuracy of word-in-noise recognition. During training, subjects were asked to attend to one of two simultaneous but asynchronous auditory streams. For the participants assigned to the experimental group, visual feedback was provided after each trial to indicate whether their attention was correctly decoded from their single-trial EEG response. After four weeks of this neurofeedback training, the experimental group exhibited amplified cortical evoked responses to target speech as well as improved word-in-noise recognition, while the placebo group did not show consistent improvement. This result demonstrates a causal relationship between auditory selective attention and speech-in-noise performance.


November 1, 2018

Miranda Skaggs and Nicole Mielnicki, Au.D. graduate students, SLHS (Strickland lab)

Behavioral measures of cochlear gain and gain reduction in listeners with normal hearing or minimal cochlear hearing loss

On the audiogram, hearing thresholds are divided into discrete categories of normal, mild, moderate, etc.  However, there is likely a continuum of hearing abilities even within the normal range. This is a continuation of a study examining the relationship between various psychoacoustic measures thought to be related to cochlear function, including gain reduction.  In the listeners tested, thresholds for long-duration tones ranged from well within the clinically normal range to just outside this range.  Where thresholds were elevated, other clinical tests were consistent with a cochlear origin.  Because the medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to sound, when possible, measures were made with short stimuli.  Signal frequencies ranged from 1 to 8 kHz.  Maximum gain was estimated by measuring the threshold masker level for a masker at the signal frequency and a masker nearly an octave below the signal frequency.  One point on the lower leg of the input/output function was measured by finding the threshold masker level for a masker slightly less than one octave below the signal frequency needed to mask a signal at 5 dB SL.  Gain reduction was estimated by presenting a pink noise precursor before the signal and masker, and measuring the change in signal threshold as a function of precursor level. The relationship between these measures will be discussed.

October 25, 2018

Erik Larsen, Ph.D., Golbarg Mehraei, Ph.D., and Ann Hickox, Ph.D., Decibel Therapeutics

Making drug therapies for hearing a reality: What does it take?

There are still no approved drugs available for preventing or treating hearing loss, despite a massive unmet need and the limitations of current hearing assistive devices. Recently, however, there has been increasing investment in new companies in the hearing therapeutics space. What is actually needed to translate scientific discoveries into products that can meet regulatory approval? Why haven't pharmaceutical and biotech companies been successful so far? This talk will highlight some of these aspects using Decibel Therapeutics' approach as an example, and includes some highlights from our research & development.

October 18, 2018

Kelly L. Whiteford, Ph.D., Postdoctoral Fellow, University of Minnesota

Mechanisms for Coding Frequency Modulation

Modulations in frequency (FM) and amplitude (AM) are fundamental for human and animal communication. Humans are most sensitive to FM at low carrier frequencies (fc < ~4 kHz) when the modulation rate is slow (fm < 10 Hz), which are also the frequencies and rates most important for speech and music perception. The leading explanation for our exquisite sensitivity within this range is that slow FM is coded by precise, phase-locked spike times in the auditory nerve (time code). Low-carrier FM at faster rates and higher carriers at all rates, on the other hand, are thought to be represented by tonotopic (place) coding, based on the conversion of FM to AM via cochlear filtering. We utilized individual differences in sensitivity to a variety of psychophysical tasks, including low-carrier FM and AM at slow (fm = 1 Hz) and fast (fm = 20 Hz) modulation rates, to better understand the peripheral code for FM. Tasks were assessed across three large groups of listeners: Young, normal-hearing (NH) listeners (n=100), NH listeners varying in age (n=85), and listeners varying in degree of sensorineural hearing loss (SNHL; n=56). Results from all three groups revealed high multicollinearity amongst FM and AM tasks, even tasks thought to be coded by separate peripheral mechanisms. For normal-hearing listeners, the bulk of variability in performance appeared to be driven by non-peripheral factors. Data from listeners varying in SNHL, however, showed strong correlations between the fidelity of cochlear place coding (frequency selectivity) and FM detection at both slow and fast rates, even after controlling for audibility, age, and sensitivity to AM. Overall, the evidence suggests a unitary code for FM that relies on the conversion of FM to AM via cochlear filtering across all FM rates and carrier frequencies.

October 11, 2018

Agudemu Borjigan, Ph.D. student, Weldon School of Biomedical Engineering

Investigating the Role of Temporal Fine Structure in Everyday Hearing

In challenging environments with multiple sound sources, successful listening relies on precise encoding and use of fine-grained spectrotemporal sound features. Indeed, human listeners with normal audiograms can derive substantial release from masking when there are discrepancies in the pitch or spatial location between the target and masking sounds. While the temporal fine structure (TFS) in low-frequency sounds can convey information about both of these aspects of sound, a long-standing and nuanced debate exists in the literature about the role of TFS cues in masking release in complex environments. Understanding the role of TFS in complex listening environments is important for optimizing the design of assistive devices such as cochlear implants. The long-term goal of the present study is to leverage individual differences across normal-hearing listeners to address this question. As the first step, we are measuring individual TFS sensitivity via both psychophysical and electroencephalography (EEG) approaches. Preliminary data show large variance across subjects in both behavioral and EEG measures. Follow-up experiments will compare individual differences in these TFS-coding measures to speech-in-noise perception with complex maskers in co-located and spatially separated configurations to understand the role of TFS in everyday hearing.

October 4, 2018

Jeffrey Lucas, Professor, Department of Biological Sciences

Using auditory information to keep eagles out of wind turbines

Golden eagles and bald eagles are known to be involved in collisions with wind turbines.  This source of mortality may be an important contributor to poor population viability for golden eagles in particular.  One potential technique that could be used to reduce collision rates is to identify alerting stimuli that make the turbine itself a more salient stimulus to the birds.  As part of a larger project, we have recently begun to collect data on the auditory physiology of eagles with an eye to finding stimuli that are maximally alerting. We are also looking for stimuli that are minimally influenced by noise masking because the conditions around wind turbines can potentially mask certain types of sounds.  We review preliminary results on bald eagles and offer some insight into what types of auditory stimuli might be useful in reducing death rates of eagles in a world where wind energy is becoming a more important source of energy for an ever-growing human population.

September 27, 2018

Elizabeth Strickland, Ph.D., CCC-A, Professor of Speech, Language, and Hearing Sciences

Preceding sound may improve detection in a forward masking task

There are physiological mechanisms that adjust the dynamic range of the peripheral auditory system in response to sound. One of these, the medial olivocochlear reflex (MOCR), feeds back to the cochlea and adjusts the gain in response to sound. Our research uses behavioral measures that may reflect peripheral gain, looking for evidence of a decrease in gain after preceding sound. When a signal and a masker are on at the same time (simultaneous masking), preceding sound may make the signal audible at a lower signal-to-masker ratio, thus improving perception. However, when the masker precedes the signal (forward masking), preceding sound has been shown to increase signal threshold, decrease frequency selectivity, and decrease suppression. While all of these effects are consistent with a decrease in gain, they all sound like bad things. In this talk, I will show a condition in forward masking where the signal is audible at a lower signal-to-masker ratio following preceding sound, which might be a good thing.

September 20, 2018

Brandon S Coventry, Ph.D. Candidate, Weldon School of Biomedical Engineering, Purdue Institute of Integrative Neuroscience, and Center for Implantable Devices, Purdue University

Optical deep brain stimulation of the central auditory pathway

Neurological and sensory neuroprostheses based on electrical stimulation have proven effective in restoring auditory percepts through cochlear and auditory brainstem implants, as well as in treating Parkinson's disease and Tourette's syndrome with deep brain stimulation (DBS). However, deficits in modern devices, such as current spillover and the inability to selectively target local circuits, result in undesirable auditory percepts in sensory prostheses and undesirable side effects in central nervous system implants. Infrared neural stimulation (INS) is an optical technique that has been shown to selectively stimulate nerves and neurons using long-wavelength (> 1450 nm) infrared light. INS is a promising stimulation modality because it does not require genetic modification of the target, allowing translation to human patients without additional genetic manipulations. Furthermore, previous studies in nerve have suggested that INS is more spatially specific than conventional electrical stimulation. Preliminary studies in the central nervous system have suggested that INS can elicit responses in cortical structures. However, the efficacy of INS in generating biophysical responses in thalamocortical networks is unexplored. Demonstration of effective thalamocortical recruitment would establish INS as a potential stimulation therapeutic that could theoretically improve on cochlear and brainstem implant performance. In this study, Sprague-Dawley rats of both sexes were implanted with optrodes in the medial geniculate body (MGB) of the auditory thalamus and 16-channel microwire arrays in the primary auditory cortex (A1). After recovery, auditory and infrared stimuli were presented to awake, restrained animals. Auditory stimuli consisted of click trains at sound levels between 60 and 90 dB, random spectrum stimuli with spectral contrasts of 5, 10, and 15 dB, and amplitude-modulated broadband noise. Infrared stimuli operated in quasi-continuous-wave mode, with single pulses of 0-600 mW power and pulse widths varying between 5-100 ms in duration. Initial results show that infrared stimulation of the MGB gives rise to repeatable, short-latency action potentials and local field potentials in the auditory cortex. Furthermore, joint peristimulus time histogram analysis suggests that INS acts in a spatially specific manner, recruiting only local circuits for activation. Finally, the use of INS for next-generation cochlear implants and auditory brainstem/midbrain implants will be discussed.

September 13, 2018

Prof. Lisa L. Hunter, Ph.D., FAAA, Scientific Director of Research, Division of Audiology, Cincinnati Children's Hospital Medical Center

High frequency hearing, otoacoustic emissions and speech-in-noise deficits due to aminoglycoside ototoxicity in cystic fibrosis

Aminoglycoside antibiotics are used world-wide to treat drug-resistant chronic lung infections. These lifesaving drugs unfortunately cause hearing loss due to ototoxicity, the effects of which progress from the base to the apex of the basilar membrane (inner ear). Therefore, in order to detect ototoxicity sooner, the higher frequency region is important to assess.  This presentation will discuss extended high-frequency hearing and transient-evoked otoacoustic emissions to chirps (TEOAEs) to detect ototoxicity in pediatric patients with cystic fibrosis (CF) treated with aminoglycosides, compared to age-matched untreated controls. TEOAEs were measured using chirp stimuli at frequencies from 0.7-14.7 kHz, along with audiometry and speech-in-noise thresholds on the BKB-SIN test. Hearing thresholds were significantly poorer in the CF group than the control group at all frequencies, but particularly from 8-16 kHz, with thresholds in the CF group ranging up to 80 dB HL. Speech-in-noise performance using the BKB-SIN test was significantly poorer for the CF group compared to controls and age norms. TEOAE signal to noise ratios were significantly poorer in the CF group with significant hearing loss in the 8-10 kHz frequency regions, compared to controls without hearing loss. These results show that newly-developed chirp TEOAE measures in the extended high-frequency range are effective in detection of cochlear impacts of ototoxicity. Poorer speech-in-noise function in the group treated with aminoglycosides provides additional physiologic evidence of cochlear, and possibly neural deficits.

September 6, 2018

Alexandra Mai, Audiology Graduate Student, Purdue University, presenting NIH T35 student research conducted at Boys Town National Research Hospital

Beliefs Held by Parents of Infants and Toddlers with Hearing Loss 

It is understood that the amount of time children wear their hearing devices and the amount of parent involvement are associated with language outcomes for children. However, device use and parent involvement are highly variable. Additionally, it is known that parents' beliefs affect parenting actions and a child's early cognitive development (Keels 2009). The Scale of Parental Involvement and Self-Efficacy-Revised (SPISE-R) queries parents' beliefs, knowledge, confidence, and actions as well as their child's device use to examine parental self-efficacy. This study focused on the beliefs section of the questionnaire. Each of the eight beliefs has a cut-off beyond which responses are considered concerning and additional counseling for the parent is recommended. The purpose of this study was to determine what percentage of parents held concerning beliefs, to examine how child and family factors (i.e., parental education level, child's current age, age at confirmation of the hearing loss, degree of hearing loss, and hearing device type) affected parent beliefs, and to determine whether a parent holding a concerning belief was associated with differences in their child's device use or language development. This was done via an online survey made up of a demographic questionnaire, the SPISE-R, the Developmental Profile-3 communication subscale (DP-3), and the Parenting Sense of Competence self-efficacy subscale. Parents were also asked to submit their child's most recent audiological results. Results indicate that a significant number of parents held concerning beliefs for all statements except two involving family and early interventionist impact. Additionally, parental education level, degree of hearing loss, age at confirmation, and current age of the child were each correlated with holding a concerning belief for one belief statement. Finally, only a concerning belief about whether a child's hearing device(s) help him/her communicate was associated with device use. No beliefs in the concerning range were associated with language development.

August 23, 2018

Josh Alexander (Alexander Lab)

Potential Mechanisms for Perception of Frequency-Lowered Speech   

About 25% of the more than 36 million Americans with hearing loss and about 40% of all hearing aid users have at least a severe hearing impairment.  These individuals have significant difficulty perceiving high-frequency speech information even with the assistance of conventional hearing aids.  Frequency lowering is a special hearing aid feature that is designed to help these individuals by moving the mid- to high-frequency parts of speech to lower-frequency regions where hearing is better.  This feature is offered in various forms by every major hearing aid manufacturer and it is the standard of care for children when conventional amplification fails to provide audibility of the full speech spectrum (American Academy of Audiology, 2013).  However, there is a lack of strong evidence about when and how this feature should be used in the clinic.  This stems from a critical knowledge gap concerning mechanisms important for the perception of frequency-lowered speech.  Continued existence of this gap contributes to the lack of reproducibility of findings in this research area, suboptimal patient outcomes, and ineffective interventions.   This talk will focus on research conducted by the Experimental Amplification Research (EAR) lab on the latest commercially available method of frequency lowering, adaptive nonlinear frequency compression.  This method provides unprecedented control over how sounds are remapped onto the residual capabilities of the impaired cochlea.  A systematic investigation of the perceptual effects of this method in normal-hearing listeners was conducted using a variety of speech stimuli that had been processed with 8-9 different frequency-lowering settings for each of three hearing loss conditions.  Auditory nerve model and acoustic analyses revealed that broadband temporal modulation accounted for 64-94% of the variance across each of the data sets.  
In fact, the data also revealed that current clinical recommendations for selecting frequency-lowering settings might significantly undermine potential benefit from this feature. A working hypothesis is that frequency-lowering methods and settings that preserve the greatest amount of temporal modulation from the original speech at the auditory periphery will yield the best outcomes for speech perception. Finally, this talk will discuss how the results from normal-hearing listeners compare favorably to predictions generated from auditory nerve simulations of various degrees of sensorineural hearing loss.

Speech, Language, & Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, PH: (765) 494-3789
