Seminars in Hearing Research at Purdue

Students, postdocs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from all of Purdue University, including Speech, Language, and Hearing Science (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). Seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2017-2018 Talks

Past Talks

LYLE 1150: 10:30-11:20 am

August 31, 2017

Mike Heinz, PhD (Heinz lab)

The Effects of Inner-Hair-Cell-Specific Dysfunction on Neural Coding in the Auditory Periphery

David Axe, Vijay Muthaiah, Michael Heinz

The goal of this work was to investigate one underlying mechanism of sensorineural hearing loss by selectively perturbing the inner hair cells of the cochlea. This was accomplished by measuring both invasive single-unit and non-invasive evoked neural responses in chinchillas that were administered a specific ototoxic drug, carboplatin. Responses were measured to stimuli ranging from simple tones to more complex sounds, including amplitude- and frequency-modulated tones and broadband noise, which represent fundamental acoustic features of speech and music in real-world environments. This experimental approach made it possible to measure the effects of damage to this specific hair-cell type on peripheral neural processing in isolation (i.e., without the confounding interaction of outer-hair-cell damage that often occurs with noise overexposure). Inner-hair-cell dysfunction produced subtle or no effects on common threshold measurements, but perceptually relevant effects were predicted for suprathreshold sounds. Inner-hair-cell damage has long been viewed primarily in terms of cochlear dead regions (i.e., missing inner hair cells). However, our physiological and anatomical evidence suggests that even remaining functional inner hair cells may have degraded responses that provide less neural information for the perception of complex sounds, but do not affect thresholds in a major way. Our results support the idea that frequency-modulated tones may be effective stimuli for suprathreshold inner-hair-cell diagnostics.
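
As an aside for readers unfamiliar with these stimuli, the sketch below shows how a sinusoidally amplitude-modulated (SAM) tone of the kind described above can be synthesized; the carrier, modulation rate, and depth values are purely illustrative and are not the parameters used in the study.

    import numpy as np

    def sam_tone(fc, fm, m, dur, fs=48000):
        # Sinusoidally amplitude-modulated (SAM) tone:
        # fc = carrier frequency (Hz), fm = modulation rate (Hz),
        # m = modulation depth (0..1), dur = duration (s).
        t = np.arange(int(dur * fs)) / fs
        env = 1.0 + m * np.sin(2 * np.pi * fm * t)
        return env * np.sin(2 * np.pi * fc * t)

    # Illustrative values only (not the study's parameters):
    stim = sam_tone(fc=4000, fm=100, m=1.0, dur=0.5)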

September 7, 2017

Liz Marler (Audiology, NIH-T35 student-research awardee)

Vestibular Evoked Myogenic Potential (VEMP) Test-Retest Reliability in Children

Vestibular evoked myogenic potentials (VEMPs) are short-latency muscle potentials measured from the neck (cervical, cVEMP) or under the eyes (ocular, oVEMP), which provide information regarding otolith organ function: the saccule and utricle, respectively. VEMPs have been shown to be a reliable test of otolith function in adults; however, research has not been done to assess whether VEMPs are reliable in children. Therefore, the purpose of the study was to determine the test-retest reliability of c- and oVEMP testing in children and to identify factors that affect VEMP response characteristics. Twenty-six children and 10 adults participated in this two-part study, which included a variety of VEMP parameters (air and bone stimuli, eyes open and eyes closed), a comfort questionnaire and physical measurements. Results suggest that VEMPs are a reliable test to assess otolith function in children using air and bone conduction stimuli.
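
The abstract does not name the reliability statistic used; test-retest studies of this kind often report an intraclass correlation. Purely as an illustration, here is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement) applied to a hypothetical subjects-by-sessions matrix of VEMP amplitudes.

    import numpy as np

    def icc_2_1(x):
        # Shrout & Fleiss ICC(2,1): two-way random effects, absolute
        # agreement, single measurement. x: (n_subjects, k_sessions)
        # matrix of, e.g., cVEMP amplitudes from repeated sessions.
        n, k = x.shape
        g = x.mean()
        ms_r = k * ((x.mean(axis=1) - g) ** 2).sum() / (n - 1)  # subjects
        ms_c = n * ((x.mean(axis=0) - g) ** 2).sum() / (k - 1)  # sessions
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + g
        ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)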

September 14, 2017

Ryan Verner (PhD student, BME, Bartlett lab)

Mutual Information in the Auditory Thalamocortical Circuit Diminishes with Loss of Consciousness

In addition to its scientific importance, understanding the mechanisms of loss of consciousness is crucial to the development of intraoperative tools to assess global neural state. We provide empirical support for the information integration theory of consciousness, which attempts to characterize unconsciousness as a state of reduced or uncorrelated information transmission. Four male Sprague-Dawley rats were each implanted with two Neuronexus 16-channel electrode arrays, targeting the medial geniculate body (MGB) and the primary auditory cortex (A1). After recovery, electrical stimulation at four different levels, ranging from behaviorally subthreshold to well above threshold, was delivered on one of the electrode arrays while neural responses were recorded on the other. Thalamocortical stimulation consisted of a mock thalamic burst triplet of pulses at 300 Hz in MGB. Corticothalamic stimulation consisted of a single pulse delivered to the deeper layers of A1 (layers 5-6). In addition, responses to simple sounds were recorded, including frequency tuning, rate-level, and click-train responses. Neural responses were filtered into both local field potentials and multiunit activity (sorted offline with waveclus2). Near-loss of consciousness was induced via IV administration of sub-hypnotic and just-hypnotic doses of isoflurane (approximately 0.6% or 0.9%) and low doses of the sedative dexmedetomidine (i.v. 0.016 or 0.024 mg/kg/hour). Data were collected in the wakeful state before and after sub- or just-hypnotic levels of unconsciousness were induced. Mutual information was assessed using binwise rates following stimulus offset, or by assessing information per bin across all 16 recording channels to estimate network information. Preliminary results show a reduction in effective information using both measurement schemes for both agents, independent of stimulation amplitude. These results suggest that mutual information can be a sensitive measure of brain state, with potential clinical application in monitoring global neural state.
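
As an illustration of the kind of computation involved, the following is a minimal plug-in (histogram-based) estimate of mutual information between a per-trial stimulus label and a binned neural response; the actual estimator and any bias corrections used in the study are not specified in the abstract.

    import numpy as np
    from collections import Counter

    def plugin_mi_bits(stim, resp):
        # Plug-in (histogram) estimate of mutual information, in bits,
        # between a per-trial stimulus label and a per-trial binned
        # response (e.g., spike count in one time bin). No bias
        # correction, which matters at small trial counts.
        n = len(stim)
        ps, pr = Counter(stim), Counter(resp)
        pj = Counter(zip(stim, resp))
        return sum((c / n) * np.log2(c * n / (ps[s] * pr[r]))
                   for (s, r), c in pj.items())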

September 21, 2017

Chandan Suresh (PhD student, SLHS, Krishnan Lab)

Search for Electrophysiological Indices of Hidden Hearing Loss

Recent studies in animals suggest that even moderate levels of noise exposure can damage synaptic ribbons between the inner hair cells and auditory nerve fibers without affecting audiometric thresholds, giving rise to the term “hidden hearing loss” (HHL). Given the pervasive exposure to occupational and recreational noise in the general population, it is likely that individuals afflicted with HHL will go unidentified unless sensitive clinical measures are developed to diagnose this condition. To date, studies attempting to characterize HHL in humans have yielded conflicting results. For example, Stamper & Johnson (2015) reported that the magnitude of the wave I amplitude decrease is related to the amount of noise exposure, suggestive of fewer intact auditory-nerve synapses; Liberman et al. (2016) reported an enhanced summating potential to action potential ratio in individuals at risk for HHL; and Prendergast et al. (2017) found no differences in ABR or frequency-following responses (FFR) in individuals with normal hearing and a wide range of noise-exposure histories. The objective of this project is to develop sensitive clinical electrophysiologic measures for early detection of HHL. We utilized specific stimulus manipulations that are likely to produce greater degradation of responses (recorded at different levels: inner ear, auditory nerve, and brainstem) in individuals at high risk for HHL compared to controls, due to loss of synapses and/or neurons. The specific stimulus manipulations include response measures across sound levels, response measures in noise, two different adaptation paradigms (stimulus-rate neural adaptation and adaptation recovery in a click-train paradigm), and changes in the rate of a frequency sweep. Preliminary results are presented here from three experiments. Consistent with previous studies, there were no differences between the low- and high-risk groups in audiometric thresholds or DPOAE amplitude. The high-risk group had significantly lower wave I amplitude at high sound levels only; a different pattern of amplitude recovery from adaptation; and greater disruption in the encoding of rapid frequency change. These results suggest that certain stimulus manipulations could potentially identify individuals at risk for HHL.

September 28, 2017

Ankita Thawani (PhD student, BIO, Fekete Lab)

Zika virus tropism in the early developing brain and inner ear             

Zika virus (ZIKV) is an emerging mosquito-borne tropical pathogen that, nearly 70 years after its discovery, was recently associated with severe congenital defects in fetuses, such as microcephaly, retinopathy, and sensorineural hearing loss. Various cellular, organoid, murine, and primate models demonstrate that ZIKV preferentially infects neural progenitor cells and causes increased cell death and reduced proliferation.

Detailed information about the relative permissiveness of the early developing brain is lacking. To address whether all neural progenitors are equally susceptible to ZIKV, we employed the easily accessible embryonic chicken model. Direct ZIKV injections into the neural tube yielded predominantly periventricular infection within 3 days post-infection. However, we found regions of heavy infection, or “hot-spots,” associated with certain key signaling centers of the brain that are known to secrete morphogens to pattern the neighboring neuroepithelium. We analyzed three such morphogens: Shh, Fgf8, and Bmp7. We observed reduced expression of each in heavily infected regions, and demonstrated a patterning defect associated with one of them (Shh). Thus, while ZIKV preferentially infects neural progenitors, it also exhibits differential tropism for specific subregions of the developing brain, possibly impairing their function(s) during embryonic brain development.

Around 6% of newborns exposed prenatally to ZIKV presented with diminished otoacoustic emissions and auditory brainstem responses, indicating sensorineural hearing loss, perhaps originating in the cochlea. A key knowledge gap is the spatial and temporal susceptibility of the developing inner ear to ZIKV infection. ZIKV injection into the chicken otocyst frequently resulted in sensory epithelial infection, with infection found in all cochleas analyzed at 10 days post-infection. Non-sensory infection was also observed, albeit at lower frequency. The study is still in the preliminary stages and will be extended with E2 to E5 ear injections, along with short-term and long-term analyses of ZIKV infection. We hope to determine which inner ear cell types are most susceptible at each stage of infection.

October 5, 2017

Ed Bartlett, PhD (Bartlett lab; from Salamanca, Spain)

Paribas or Bury Pa? – Age-related changes in the neural representations of voice onset time in the inferior colliculus

The inferior colliculus (IC) integrates a variety of inputs to perform spectrotemporal processing in the primary auditory pathway, including temporal-to-rate transformations. These transformations make the IC important for understanding how the auditory pathway adapts to changes in hearing abilities, such as those due to aging or noise-induced hearing loss. In this study, we used consonant-vowel sounds that varied in voice onset time (VOT) (ba to pa), presented either as the original tokens or with the envelopes of those tokens modulating a noise carrier. Synchronized neural populations were recorded non-invasively as envelope-following responses (EFRs) in young and aged rats. In addition, local field potentials (LFPs) and unit activities were recorded in the inferior colliculus, enabling some measure of the input-to-output transformation within the IC. We found that both EFRs and LFPs were degraded in older animals, even after compensating for hearing thresholds. However, IC unit activity was in many cases similar between young and aged rats on simple measures such as firing rates. We then tested the related question of whether individual sites or populations were able to discriminate between the different VOT stimuli. A template-matching classification model was generated in which single-trial responses were correlated with aggregate trends. IC units were found to discriminate stimuli above chance but still made errors. Integration over a population of units reduced variability and increased performance. Stimulus discrimination was similar across ages for VOT envelopes modulating a noise carrier, but declined in older animals for the original tokens. These results suggest that there may be multiple mechanisms of compensation to maintain neural representations in older animals, including compensation within the IC.
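
For readers curious about the classifier, a minimal sketch of template matching of the kind described (single trials assigned to the stimulus whose trial-averaged template they correlate with best) might look like this; the exact features, cross-validation scheme, and correlation measure used in the study may differ.

    import numpy as np

    def template_match(train, test_trials):
        # Classify each single-trial response by the stimulus template
        # (trial-averaged response) it correlates with most strongly.
        # train: dict label -> (n_trials, n_bins) responses per stimulus
        # test_trials: (m_trials, n_bins) held-out single trials
        labels = list(train)
        templates = np.stack([train[c].mean(axis=0) for c in labels])
        preds = []
        for trial in test_trials:
            r = [np.corrcoef(trial, tpl)[0, 1] for tpl in templates]
            preds.append(labels[int(np.argmax(r))])
        return preds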

October 12, 2017

Phil Smith, PhD (Dept. of Neuroscience, University of Wisconsin)

Trouble in paradise. What are LSO cells doing!?

The brainstem’s lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILDs). For almost 50 years the dogma has been that LSO principal cells act like “sluggish integrators,” weighing contralateral inhibition against ipsilateral excitation and making their sustained firing rate a function of the azimuthal position of a sound source. Our in vivo patch clamp recordings from labeled LSO neurons in the Mongolian gerbil tell a different story. Light and electron microscopic analysis of labeled neurons allowed us to distinguish principal and non-principal LSO neurons and unequivocally assign a given set of response features to a given cell. We find that although both principal and non-principal neurons contribute to LSO tonotopy, principal neurons respond only at sound onset and show fast membrane features suggesting an importance for timing. In contrast, non-principal LSO neurons act more like sluggish integrators, often generating sustained responses to sound, and have slower membrane features with larger action potentials. Similarity of current-injection and sound-evoked responses suggests that differences in intrinsic properties are primarily responsible for these differences. Remarkably, the almost simultaneous convergence of transient click-evoked ipsilateral excitation and contralateral inhibition provides a mechanism for localizing transient stimuli. Finally, our anatomical evidence indicates that LSO cells may have an influence on the MSO/ITD pathway.

October 19, 2017

Yangfan Liu, PhD (Herrick Acoustics Lab)

Modeling, Reproducing and Active Control of Noise Sources. How to Bring All These Together?

The modeling of an acoustic source usually involves the use of a series of mathematical basis functions to represent the sound field generated by the actual source; the coefficients (or parameters) of the basis functions can be estimated from sound field measurements at different spatial locations. Once this estimation is done, the sound field at any location can be predicted. This technique can be used in applications such as source identification, the study of source characteristics, and sound field reproduction. This seminar will focus on a reduced-order modeling method based on the multipole decomposition of a sound field, along with an introduction to some traditional source modeling methods. The sound field decomposition technique can also be implemented in active noise control (ANC) applications, which allows the system to selectively control certain source contents or important source characteristics with limited computing resources. After a general introduction to active noise control, an ANC method based on independent sound field component decomposition will be described, which can extract and control certain source components. Some potential uses of different modal decomposition methods in ANC applications will also be mentioned in this seminar.
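
The coefficient-estimation step described above is, at its core, a linear least-squares problem. A minimal sketch, assuming complex pressures measured at M microphones and N basis functions (e.g., multipole fields) evaluated at those positions:

    import numpy as np

    def estimate_source_coeffs(Phi, p_meas, reg=1e-6):
        # Phi: (M, N) complex matrix; column j is basis function j
        # (e.g., one multipole field) evaluated at the M microphone
        # positions. p_meas: (M,) complex pressures at one frequency.
        # Regularized least squares for the N source coefficients q,
        # so that Phi @ q approximates p_meas; reg tames
        # ill-conditioning of the measurement geometry.
        A = Phi.conj().T @ Phi + reg * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.conj().T @ p_meas)

    # The field at new points is then predicted as Phi_new @ q, where
    # Phi_new evaluates the same basis at those points.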

October 26, 2017

Alex Francis, PhD (Francis lab)

Is it possible to distinguish between listening effort and noise annoyance?

Listening to speech in noise is effortful and can be unpleasant. Current theories of effortful listening attribute listeners’ dissatisfaction with listening to speech in noise to demand on cognitive resources such as working memory and selective attention. However, research on human response to environmental and workplace noise distinguishes between noise annoyance and distraction, separating affective/emotional responses from cognitive/attentional ones. In my lab we have been studying psychophysiological responses to challenging listening situations in an attempt to identify physiological markers that may help to differentiate between different aspects of listening effort and noise annoyance. Here we present results from an initial study (Francis et al., 2016) suggesting that the decrease in blood volume pulse amplitude (BVPA) is stronger when listening to noise-masked speech than to equally intelligible synthetic speech, suggesting that BVPA may reflect a response specific to interference from noise (i.e., annoyance). We have now extended this research, asking how individual traits such as noise sensitivity interact with autonomic (ANS) responses associated with listening effort, including BVPA, skin conductance level, facial EMG, and heart rate variability. Traits included selective attention, working memory capacity, vocabulary (PPVT-IV), noise sensitivity (NoiseQ), hearing thresholds, and “Big 5” personality traits (BFI-10). Listeners heard 10 short stories and answered questions about them. Listening effort was manipulated in two ways: half the stories were spoken in non-native-accented English, half in native-accented English masked by speech-shaped noise. Signal-to-masker ratio was adjusted for each subject to equate performance across conditions. Preliminary results suggest that physiological responses did indeed differ across sources of difficulty (accent, noise) even when overall performance was controlled for, but that cognitive factors, rather than personality or sensitivity traits, play the stronger role in determining these patterns. Implications for theories of listening effort and future research will be discussed.

November 2, 2017

Ross Maddox, PhD (Bharadwaj lab visitor; Asst. Prof. of Biomedical Engineering and Neuroscience, University of Rochester)

New approaches to the auditory brainstem response for the clinic and lab

The auditory brainstem response (ABR) has been an extremely useful tool for studying the early auditory pathway since its discovery in the 1970s. In its most basic form it represents the average evoked scalp potential to a couple thousand repetitions of a short stimulus such as a click. However, despite its clinical value, the ABR does have weaknesses. In the two parts of this talk I will present our efforts to address two of these principal limitations, which we hope will improve and extend the ABR's utility in both the clinic and the lab, respectively.
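
In its most basic form, the ABR computation is just synchronized averaging. A minimal sketch, assuming a single filtered EEG channel and a list of click-onset sample indices:

    import numpy as np

    def abr_average(eeg, onsets, fs, win_ms=(0.0, 12.0)):
        # Average the scalp potential across click presentations.
        # eeg: (n_samples,) filtered single-channel recording
        # onsets: sample index of each click (assumed to leave room
        #         for the epoch window at the ends of the recording)
        # win_ms: epoch window relative to the click, in milliseconds
        i0 = int(win_ms[0] * fs / 1000)
        i1 = int(win_ms[1] * fs / 1000)
        epochs = np.stack([eeg[t + i0:t + i1] for t in onsets])
        return epochs.mean(axis=0)  # residual noise falls as 1/sqrt(n)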

In the clinic, infant hearing thresholds are estimated by presenting trains of tonebursts to each ear over a range of frequencies and intensities. While each of these toneburst conditions only takes a couple minutes to record, they constitute a large combinatorial space, leading to burdensome overall test durations. We are exploring the possibility of measuring the toneburst ABR at all frequencies in both ears simultaneously. Preliminary data suggest that this may speed up the paradigm, and modeling suggests that it may also provide more place-specific responses at higher stimulus intensities.

In the neuroscience lab, there is significant interest in understanding how subcortical areas process speech. However, the rapidity of the ABR's response components necessitates short evoking stimuli, making these studies difficult to perform. We have recently developed a paradigm for measuring the ABR to continuous, non-repeating, naturally uttered speech. These methods allow the design of engaging behavioral tasks, facilitating new investigations of cognitive processes like language processing and attention in the auditory brainstem.

November 9, 2017

Ryan Verner (PhD student, BME, Bartlett lab)

Electrophysiological, Behavioral, and Histological Assessment of the Thalamocortical Network as a Stimulation Target for Central Auditory Neuroprostheses

Brain-machine interfaces aim to restore natural sensation or locomotion to individuals who have lost such ability. While the field of neuroprostheses has developed some flagship technologies that have enjoyed great clinical success, such as the cochlear implant, it is generally understood that no single device will be ideal for all patients. For example, the cochlear implant is unable to help patients suffering from neurofibromatosis type 2, which is commonly characterized by bilateral vestibular schwannomas whose surgical removal requires transection of the auditory nerve. In an effort to develop stimulatory neuroprostheses that can help the maximum number of patients, research groups have developed central sensory neuroprostheses. However, moving through ascending sensory processing centers introduces greater uniqueness of neuronal feature selectivity and greater coding complexity, and chronic implantation of devices becomes less efficacious as the brain’s glial cells respond to implanted devices. In this work, we propose a neuroprosthesis targeting the auditory thalamus, specifically the ventral division of the medial geniculate body (MGV). The thalamus represents an information bottleneck through which many sensory systems send information. Primary (MGV) and non-primary (MGD, MGM) subdivisions provide parallel auditory inputs to cortex and receive feedback excitation and inhibition from cortex and the thalamic reticular nucleus (TRN), respectively. We characterized the potential of the thalamocortical circuit as a neuroprosthetic target through electrophysiological, behavioral, and histological methods. Preliminary results suggest that some features of intracortical microstimulation (ICMS) are more salient than those of intrathalamic microstimulation (ITMS), such as sensitivity to perceived intensity cues. Additionally, we have identified a profound immune response in MGV to the implanted electrode and propose alternative surgical approaches that may mitigate this response.

November 16, 2017

Josh Alexander, PhD (Alexander lab)

Preliminary data on mechanisms for perception of frequency-lowered speech

Frequency lowering (FL) is a class of advanced digital signal processing techniques designed to help individuals with high-frequency hearing loss by moving the mid- to high-frequency parts of speech that cannot be heard with conventional hearing aids to lower-frequency regions where hearing is better. However, clinicians face numerous decisions when setting the parameters that control the frequency range to be lowered and the frequency range where the new information is to be placed. The appropriate selection of these parameters is critical to patients’ outcomes because no other hearing aid technology has as much ability to alter the identity of individual speech sounds. Currently, clinicians and researchers lack a clear set of objectives when programming the parameters that control the re-coding of sound. The latest commercial variant of this technology, adaptive nonlinear frequency compression (ANFC), has two FL states that are conditional on whether the incoming sound has a low- vs. high-frequency emphasis. ANFC compounds the clinical decision-making problem because clinicians now have to consider how the FL parameters affect the sounds produced by each of these two processing states.

To help develop evidence-based guidelines for optimizing the selection of FL parameters, we have been working to find a perceptual basis for discrimination of speech contrasts processed with ANFC.  Our innovative approach uses a psychoacoustic model for describing the perceptual effects of frequency lowering and uses a computer-based hearing aid simulator that mimics signal processing used in commercial devices.  Our psychoacoustic model is able to account for 80-90% of the variance in speech recognition results obtained from normal-hearing adults.  Recently, we have discovered that a neural metric based on mean rate statistics obtained from an auditory nerve model captures even more of the variance in the perceptual data than its psychoacoustic equivalent.  The next step will be to use the neural model to generate predictions for how a variety of hearing losses will influence speech perception with different ANFC settings.  The expected outcomes of this research will be a set of recommended guidelines for optimizing parameter selection in hearing aids with ANFC and other FL algorithms. 
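
The abstract does not define the neural metric precisely; one plausible reading of a "mean rate statistic" is a tonotopic profile of time-averaged firing rates from the auditory nerve model, compared between processed speech tokens. The sketch below is a hypothetical illustration of that idea, not the lab's actual metric.

    import numpy as np

    def mean_rate_metric(rates_a, rates_b):
        # Hypothetical mean-rate dissimilarity between two tokens.
        # rates_a, rates_b: (n_cfs, n_bins) time-varying firing rates
        # from an auditory-nerve model, one row per characteristic
        # frequency. Collapse time to a mean rate per channel, then
        # take the RMS difference across the tonotopic axis.
        prof_a = rates_a.mean(axis=1)
        prof_b = rates_b.mean(axis=1)
        return np.sqrt(np.mean((prof_a - prof_b) ** 2))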

November 30, 2017

Hari Bharadwaj, PhD (Bharadwaj lab)

“Cocktail-party” listening beyond hearing -- Electrophysiology of scene analysis and attention

In contrast to quiet backgrounds, listening in noisy everyday environments places enormous demands on the auditory system. Successful listening under such conditions relies not only on precise encoding of information by the early portions of the auditory pathway, but also on higher neural processes that analyze this information to yield coherent percepts of individual sources, and cognitive processes like selective attention that enable extracting information from one source while ignoring the others. In this talk, I will describe two experiments using electro- and magnetoencephalography (EEG and MEG) that investigate the latter two aspects of listening in noise. The first experiment seeks to understand the mechanisms of anomalous scene analysis in children with autism spectrum disorders (ASD). The second experiment seeks to understand individual differences among young and middle-aged adults in their ability to focus attention.

December 7, 2017

Alex Francis, PhD (Francis lab)

Looking for psychophysiological markers of noise annoyance: Measurements during cognitively demanding work in subjectively annoying background noise

Alexander L. Francis, Jordan Oliver

People who work in noisy environments are at greater risk for stress-related diseases, including hypertension and stroke, even when noise levels are too low to damage hearing. Such noise may be harmful, especially to noise-sensitive individuals, because the psychological annoyance that it causes induces physiological stress responses that are damaging to health over the long term. This study was designed to investigate the link between subjective noise annoyance and physiological measures of arousal and displeasure due to the presence of background noise. Cardiovascular, electrodermal, respiratory, and facial muscular activity were recorded from 32 listeners during the completion of a demanding working memory task under different listening conditions. Participants completed four levels of memory task demand in silence and in two different continuous noises similar to those produced by HVAC equipment. Both noises were comparable in terms of loudness and presentation level (54-60 dBA) but differed in perceived annoyance (based on ratings from a panel of listeners in a previous study) and in acoustic properties associated with noise annoyance (roughness, tonality, sharpness). Behavioral measures of memory task performance, noise sensitivity, personality traits, and subjective effort will be presented, and, if I can manage it, I’ll also try to scrape together some preliminary physiological measures and talk about implications for future research.

January 18, 2018

Katie Scott (Fekete lab)

Development of innervation patterns in the chicken inner ear

The basilar papilla (BP) is the auditory organ of the bird and is homologous to the mammalian organ of Corti. As in the organ of Corti, one side of the BP (the neural side) primarily receives afferent innervation, while the other (abneural) side is primarily innervated by efferents. We are interested in studying the development of these innervation patterns, and the genes that influence them, in the chicken.

 As a first step to understanding the molecular controls of innervation, information is needed about the timing of arrival and distribution of efferent axons across the auditory organ. To address this, we used NeuroVue lipophilic tracer dye implanted into the auditory ganglion or the brainstem to label these projections. Our data so far show that ipsilateral projections are present on the abneural and neural sides of the BP at embryonic day (E) 14. Contralateral projections are present on the abneural side at E14 but our data suggest that they arrive on the neural side later.

A second aim of my work examines molecular cues and their influence on innervation patterns in the inner ear. Semaphorin (Sema) ligands and their Neuropilin (Nrp) receptors and Plexin (Plxn) co-receptors function as repulsive axon guidance cues elsewhere in the nervous system. We have RNA-sequencing data showing that class 3 Semas, Nrps, and PlxnA1 are expressed in the E6 BP, suggesting that this signaling pathway might contribute to the neural/abneural asymmetry in BP innervation patterns. To address this, we studied the spatial expression patterns of these genes in the inner ear and ganglion using in situ hybridization and immunohistochemistry. Contrary to our prediction, the spatial expression of these genes is not consistent with a role in guiding afferents (or efferents) to the BP. Instead, our data indicate potential roles in guiding axons to vestibular organs, in boundary formation in the cochlea, and in vasculogenesis within the mesenchyme surrounding the inner ear.

January 25, 2018

Will Salloom (Strickland lab)

The effects of an ipsilateral, contralateral, and bilateral precursor on gain reduction across the frequency range

The medial olivocochlear reflex (MOCR), when activated, can modulate the gain of the cochlear amplifier. However, basic questions remain: are there differences between contralateral, ipsilateral, and bilateral MOCR strength, and how does MOCR strength vary across the frequency range? Physiological research using otoacoustic emissions (OAEs) has shown that bilateral and ipsilateral MOCR strength is greater than contralateral strength. OAE data also suggest that the effect of the MOCR is stronger at lower frequencies. In the current study, a psychoacoustic forward-masking paradigm was used to measure gain reduction at 1, 2, and 4 kHz in normal-hearing humans, in a manner consistent with the MOCR. Transient-evoked otoacoustic emissions (TEOAEs) were also measured in the same subjects to explore the relationship between psychoacoustic and physiological measures of gain reduction. The psychoacoustic and physiological data collected in the present study support the hypotheses that there are differences in MOCR strength across frequencies and between contralateral, ipsilateral, and bilateral activation.


February 8, 2018

ARO Prep – no meeting 


February 15, 2018

Manolo Malmierca, PhD (Bartlett lab visitor, Institute for Neuroscience, University of Salamanca, Spain)

Emergence of deviance detection along the auditory neuroaxis: The neuronal basis of predictive coding

Perception is characterized by a reciprocal exchange of predictions and prediction error signals between neural regions. However, the relationship between sensory mismatch responses and hierarchical predictive processing remains to be demonstrated at the neuronal level specifically in the auditory pathway. We recorded single-neuron activity from different auditory centers in anaesthetized rats and awake mice while playing a sequence of sounds that allowed us to separate the responses due to prediction error from those due to adaptation effects. Our results reveal a hierarchical organization of prediction error along the central auditory pathway. These prediction errors could be detected in subcortical regions and increased as the signals moved towards auditory cortex, which demonstrated a large-scale mismatch potential. Additionally, we demonstrate that the predictive activity of single auditory neurons underlies automatic deviance detection at subcortical levels of processing.
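
The abstract does not detail the sound sequences; in this literature, prediction error is typically separated from adaptation by comparing an oddball sequence against a control (e.g., a many-standards sequence) in which the deviant tone is equally rare but no regularity is established. A minimal sketch of such sequence generation, with the comparison logic noted in comments:

    import numpy as np

    rng = np.random.default_rng(0)

    def oddball(n_tones, p_dev=0.1):
        # Oddball block: tone "A" repeats as the standard; tone "B"
        # occurs rarely (probability p_dev) as the deviant.
        return np.where(rng.random(n_tones) < p_dev, "B", "A")

    def many_standards(labels, n_tones):
        # Control block: many equiprobable tones, so "B" is just as
        # rare as in the oddball block, but no regularity (and hence
        # no prediction) is ever established.
        return rng.choice(labels, size=n_tones)

    odd = oddball(400)
    ctrl = many_standards(list("ABCDEFGHIJ"), 400)
    # Response(B in oddball) - Response(B in control) indexes
    # prediction error; Response(B in control) - Response(A in
    # oddball) indexes repetition suppression (adaptation).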

February 22, 2018

Special Lectures in Neuroscience (in MRGN 121)

Prof. Philip X. Joris (Univ. of Leuven, Belgium)

Making superior olives: more complex than we thought

The superior olivary complex is a collection of nuclei in the auditory brainstem with a diversity of functions. The most studied function is the processing of acoustical differences between the two ears, which are critical for spatial hearing. One of the circuits embedded in this complex is classically associated with the processing of interaural intensity differences. However, such processing does not provide a rationale for the striking specializations found in this circuit, including one of the largest synapses in the brain, which suggest a role for timing. I will review the relevant physiology and anatomy to illustrate the “textbook” physiology of this circuit and the puzzle of its specializations. Using arguments derived from physiological, anatomical, behavioral, acoustical, and ecological sources, I will then formulate a new functional hypothesis. To conclude, I will show recent findings which cast a new light on these nuclei.

March 1, 2018

Katherine Gentry, PhD (Postdoc, BIO, Lucas lab)

The influence of noise on acoustic communication in Nuttall’s white-crowned sparrows

High-amplitude background noise from anthropogenic activity interferes with acoustic communication, which many animals rely on for species recognition, mate selection, and territorial defense. Anthropogenic noise in urban environments has become an important and novel source of selection on acoustic signaling behavior, ultimately changing the way animals communicate with one another. Noise-induced signal modification and evolution are particularly well studied in Nuttall’s white-crowned sparrow (Zonotrichia leucophrys nuttalli, NWCS), a nonmigratory, territorial songbird that persists in both urban and rural landscapes. Given this background, the NWCS has become a model system for understanding signal modification and its potential to alter receivers’ perception of signaler quality in noisy urban landscapes. In this talk, I will discuss signal modification in urban and rural NWCS populations and the potential fitness implications associated with it. Our research underscores the importance of noise mitigation in urban settings where populations are negatively impacted by high levels of anthropogenic noise.

March 8, 2018

Lata Krishnan, PhD, CCC-A

Unilateral Hearing Loss: The Evidence and the Challenges

The negative effects of unilateral hearing loss (UHL) have been well documented; however, there are no specific guidelines for the management of UHL because of the wide variability in its effects and in the management options available. This presentation will briefly review the effects of UHL and use case studies to demonstrate the variability in outcomes.

March 15, 2018

Spring Break- No meeting

March 22, 2018

Mike Heinz, PhD (Heinz lab)

The effects of cochlear synaptopathy on chinchilla amplitude-modulation detection thresholds

Amanda C. Maulden, Michael K. Walls, Vijaya P.K. Muthaiah, Michael G. Heinz

Recent animal studies suggest that moderate-level noise exposure can create permanent cochlear synaptopathy despite not permanently damaging hair cells or elevating hearing thresholds in quiet. This hidden hearing loss has been hypothesized to underlie the difficulties some listeners with normal audiograms have in noisy situations. However, it is difficult to test this hypothesis directly because of the inability to measure cochlear synaptopathy directly in humans and the fact that synaptopathy has mainly been shown in rodent models for which behavioral measures at speech frequencies are difficult. We recently established a relevant mammalian behavioral model by showing that chinchillas have corresponding neural and behavioral amplitude-modulation (AM) detection thresholds that are in line with human thresholds. Furthermore, we have shown that cochlear synaptopathy occurs at speech frequencies in chinchillas following moderate noise exposure. In the present study, behavioral AM-detection thresholds were measured in chinchillas, before and after noise exposure, using the method of constant stimuli. Animals were trained to discriminate a sinusoidal AM (SAM) tone from a pure tone, in the presence of a notched-noise masker designed to limit off-frequency listening. Successfully trained animals were tested at a range of AM depths (3-dB steps between -30 and 0 dB), for a range of modulation frequencies. Within a signal-detection-theoretic framework, psychometric AM-detection thresholds were computed as a function of modulation frequency.  Behavioral thresholds before noise exposure were in the range of -15 to -26 dB, and were consistent across individual animals.  Following synaptopathy-inducing noise exposure, amplitude-modulation detection for SAM tones was not significantly degraded. The low behavioral AM-detection thresholds found here contrast with previous work on other mammalian species (e.g., rabbits, gerbils). Our data are more in line with AM-detection thresholds found in avian species (e.g., budgerigars) and in humans. The lack of an observed deficit in performance in chinchillas with confirmed synaptopathy suggests that amplitude-modulation detection for high-level SAM tones in notched noise may not be sensitive enough for the diagnosis of cochlear synaptopathy. More complex tasks that provide a greater challenge to population neural coding are likely to be required.
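
As an illustration of how psychometric thresholds can be computed from method-of-constant-stimuli data, here is a minimal sketch that fits a Weibull psychometric function to proportion correct as a function of AM depth; the chance rate, lapse rate, and synthetic data are assumptions for illustration, and the study's exact signal-detection-theoretic analysis may differ.

    import numpy as np
    from scipy.optimize import curve_fit

    # AM depths tested, in dB re 100% modulation (20*log10(m)), 3-dB steps
    depths = np.arange(-30.0, 0.1, 3.0)

    def weibull(depth_db, thresh_db, slope):
        # Psychometric function with an assumed chance rate of 0.5
        # (two-alternative task) and a small lapse rate; both are
        # illustrative, not the study's values.
        gamma, lam = 0.5, 0.01
        return gamma + (1 - gamma - lam) * (
            1 - np.exp(-10 ** (slope * (depth_db - thresh_db) / 20)))

    # Synthetic proportion-correct data standing in for measured values
    rng = np.random.default_rng(1)
    pc = weibull(depths, -20.0, 1.5) + rng.normal(0, 0.02, depths.size)

    popt, _ = curve_fit(weibull, depths, pc, p0=[-15.0, 1.0])
    print(f"AM-detection threshold: {popt[0]:.1f} dB")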

March 29, 2018

Alexandra Mai (Au.D. student, Bharadwaj lab)

Suprathreshold effects of acoustic overexposure and middle age -- Measurements in a clinical setting

Alexandra R. Mai, Brooke E. Flesher, Jennifer M. Simpson, Michael G. Heinz, Hari M. Bharadwaj

Currently, the audiogram is the basis for hearing evaluations in the clinic. However, many patients present with normal thresholds but complain of difficulty understanding speech in noise, e.g., at restaurants, cocktail parties, and other crowded environments. Even among those with similar audiograms, there is large variability in the ability to perceive subtle features of sounds. A possible explanation for this difficulty is cochlear synaptopathy from noise exposure or early aging, as robustly demonstrated in animal models. However, the extent to which synaptopathy affects humans, and how it may be diagnosed, is currently unclear. We are conducting a large-scale study of suprathreshold hearing in young adults regularly exposed to significant amounts of loud sound and in middle-aged individuals, compared to young controls from the general population with nominal exposure. The study includes both an elaborate battery of measures performed in a laboratory setting (~9 hours of testing) and a shorter battery of measures performed in a clinical setting (~2 hours). Here, we describe preliminary results from our measures performed with equipment available in most clinics.

April 5, 2018

Ed Bartlett, PhD (Bartlett lab)

Electrophysiological, Behavioral, and Histological Assessment of the Thalamocortical Network as a Stimulation Target for Central Auditory Neuroprostheses

Brain-machine interfaces aim to restore natural sensation or locomotion to individuals who have lost such ability. While the field of neuroprostheses has developed some flagship technologies that have enjoyed great clinical success, such as the cochlear implant, it is generally understood that no single device will be ideal for all patients. For example, the cochlear implant is unable to help patients suffering from neurofibromatosis type 2, which is commonly characterized by bilateral vestibular schwannomas whose surgical removal requires transection of the auditory nerve. In an effort to develop stimulatory neuroprostheses that can help the maximum number of patients, research groups have developed central sensory neuroprostheses. However, moving through ascending sensory processing centers introduces greater uniqueness of neuronal feature selectivity and greater coding complexity. Chronic implantation of devices becomes less efficacious as the brain’s glial cells respond to implanted devices. In this work, we propose a neuroprosthesis targeting the auditory thalamus, specifically the ventral division of the medial geniculate body (MGV). The thalamus represents an information bottleneck through which many sensory systems send information. Primary (MGV) and non-primary (MGD, MGM) subdivisions provide parallel auditory inputs to cortex and receive feedback excitation and inhibition from cortex and the thalamic reticular nucleus (TRN), respectively. We characterize the potential of the thalamocortical circuit as a neuroprosthetic target through electrophysiological, behavioral, and histological methods.

April 11, 2018 *** Bonus Talk ***

12:30-1:20 pm (in LYLE 1028)

Rich Freyman, PhD (U Mass Amherst)

Exploring the Relationship Between Sound Localization and Spatial Release from Masking

Spatial unmasking and sound localization are assumed to share common mechanisms, but it is still unclear whether any aspect of spatial release from masking actually depends on localization. One possible indication that there is a causal relationship between the two phenomena for some conditions comes from the observation that spatial masking release can often remain strong even when known cues for spatial unmasking are dramatically weakened by acoustic reflections in rooms. With the aid of the precedence effect, separate localization of targets and maskers in those same reverberant environments is largely preserved, leading to the suspicion that localization is responsible for the spatial release that remains in the face of the diminished cues. However, there are also arguments against a causal relationship. This talk will review what is known about this question from previously published work, and will also discuss newer research focusing on listeners with asymmetric hearing loss. The literature demonstrates that people with such losses adapt to their asymmetries with respect to localization, at least to some degree, but whether this adaptation helps restore spatial release from masking is not yet obvious.

April 12, 2018

Special Lectures in Neuroscience (in MRGN 121)

Prof. Daniel B. Polley (Harvard Medical School)

Pathologically over-powered neural amplifiers: the neural circuit origins of disordered perception

Sensory brain plasticity exhibits a fundamental duality, a yin and yang, in that it is both a source of and a possible solution for various types of perceptual impairments. When normal signaling from the ear is disrupted, the balance of excitation and inhibition tips toward hyperexcitability throughout auditory processing centers of the brain, increasing the ‘central gain’ on afferent signals so as to partially compensate for diminished inputs from the periphery. Excess amplification in sensory circuits distorts the temporal coding of complex communication sounds and may even induce the perception of phantom sounds, contributing to pathophysiological processes such as hyperacusis and tinnitus. This is the ‘yin’, the dark side of brain plasticity, wherein the transcriptional, physiological, and neurochemical changes that compensate for the loss or degradation of peripheral inputs can incur debilitating perceptual costs. Our research is also committed to understanding the ‘yang’ of brain plasticity: how the remarkable malleability of the adult brain can be harnessed and directed towards an adaptive, or even therapeutic, endpoint through pharmacology, direct brain stimulation, and non-invasive approaches such as immersive sensory training. My lecture will focus on the mechanisms underlying the yin and yang of plasticity in the auditory cortex, midbrain, and basal forebrain. I will conclude by describing our recent efforts to translate findings from animal models to human subjects with auditory perceptual disorders.

April 19, 2018

Special Lectures in Neuroscience (in MRGN 121)

Prof. Laurence Trussell (Oregon Health & Science Univ)

Radical transformation of neural signals by excitatory interneurons in auditory and vestibular systems

The unipolar brush cell (UBC) is an excitatory interneuron in the cochlear nucleus and vestibular cerebellum of mammals, and in the electrosensory lobe of mormyrid electric fish. Recent studies suggest that the UBC may be required for these systems to cancel responses to expected or self-generated sensory signals. We have examined the synaptic and circuit-level features of UBCs in mice, using electrophysiology, imaging, and optogenetics. We found that UBCs can respond to synaptic input with a complex array of outputs, including long-lasting periods of excitation, inhibition, or delayed excitation. This talk will describe the cell-physiological basis for these diverse responses and report on studies that are revealing the neural connectivity associated with each type of response.

April 26, 2018

Robert Burkard, PhD, CCC-A (University at Buffalo)

Auditory brainstem responses across mammals (and one avian species)

We often use animal models for experimental manipulations where human subjects are not feasible. For experimental studies of hearing, we can use auditory evoked potentials to assess changes in thresholds, latencies, and/or amplitudes due to myriad subject factors, including development, aging, and pathology. For the evaluation of hearing (both clinically and experimentally), the auditory brainstem response (ABR) is the most often used evoked potential measure. Knowing how the ABR is similar and how it varies across animal species is useful so that an appropriate animal model can be chosen for a given experimental manipulation. In this presentation, we will compare human ABRs to those of a variety of animal species. We will examine ABR morphology and assess the effects of various stimulus manipulations across (mostly) mammalian species.


May 3, 2018

Special Lectures in Neuroscience (in MRGN 121)

Prof. Torsten Dau (Technical Univ. of Denmark)

From data-driven auditory profiling to scene-aware signal processing in hearing aids

Despite advances in acoustic technology, modern hearing aids have yet to solve the fundamental problem of restoring hearing in everyday sound environments. Finding the best compensation strategy for an individual hearing-impaired person represents a major challenge, since the consequences of a typical hearing loss are much more complex than just the reduced sensitivity to sound reflected in the pure-tone audiogram. In fact, characterizing the “auditory profile”, which would require many more measurements than are currently conducted in audiology clinics, seems essential for optimizing the amplification strategy in hearing-aid fitting for each individual. Furthermore, while normal-hearing listeners are able to focus attention on one particular sound source and ignore others, this ability is reduced in listeners with hearing impairment. The crucial problem of modern hearing aids is that they do not know which acoustical source an impaired listener would like to hear. To solve this problem, hearing aids need to evolve from sound processors to 'brain processors' that collect information from the listener to selectively amplify only the sounds the listener is trying to focus on. Such a revolution requires significant breakthroughs in our fundamental understanding of hearing. Specifically, we need models that bridge the gap between sound processing in the inner ear and processing in the brain. The ability of the auditory system to extract meaningful 'auditory objects', like speech or music, from a mixture of sound waves arriving at the ear involves multiple stages of processing. The goal of modern hearing research and technology is to develop functional models of hearing that integrate these levels of processing to investigate how the 'listening brain' actively modulates sound processing to serve behavioral goals.
