Seminars in Hearing Research at Purdue

Students, postdocs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from all of Purdue University, including Speech, Language, and Hearing Science (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). The seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2016-2017 Talks

LYLE 1150: 10:30-11:20 am


September 1, 2016

Hari Bharadwaj, PhD (Bharadwaj lab)

Feedback-mediated enhancement of temporal coding in the human auditory periphery

The olivocochlear efferent system is thought to improve hearing in noise by modulating the input to the ascending auditory system. Although much is known from direct neural recordings about how efferents affect afferent responses in small laboratory animals, much less is known in humans. This is particularly true for the conditions most relevant to everyday listening: how does efferent feedback to the cochlea affect neural population encoding of broadband sounds at moderate to high sound levels? In this talk, I will present our attempts to address this question through a series of human experiments using non-invasive electrophysiology and otoacoustic-emission measurements.

September 8, 2016

Mark Sayles, PhD (Sayles lab)

On the level-dependence of filtering in the cochlear apex

Narrowband cochlear filtering is fundamental to audition. Auditory-nerve-fiber responses provide a valuable window onto cochlear tuning, particularly in the poorly accessible cochlear apex. Typically, nerve-fiber tuning is quantified using pure-tone threshold curves. Superficially, these suggest dramatic and systematic filter broadening with increasing sound level, a view often invoked to explain psychoacoustic phenomena. In this talk I will present apical auditory-nerve-fiber responses to broadband sounds over a wide intensity range, which show that “broadening” is not as severe as threshold tuning curves might otherwise imply.

September 15, 2016

Josh Alexander, PhD (Alexander lab)

Acoustic and perceptual effects of amplitude and frequency compression on high-frequency speech

Fricative recognition is challenging for hearing aid users because the relevant cues are high in frequency and low in intensity. Hearing aids must therefore apply significant amplitude compression to make the full bandwidth (FBW) audible, which distorts temporal envelope information. Alternatively, high-frequency cues can be shifted to lower-frequency regions where thresholds are better, using nonlinear frequency compression (NFC). This study examined how the audibility-distortion tradeoff applies to frequency-lowered speech using 31 hearing-impaired participants who identified seven fricatives with an initial /i/ produced by three female talkers. Stimuli and a carrier phrase were processed with and without NFC, and with linear amplification and five varieties of amplitude compression. Frequency-compressed filters that precisely aligned 1/3-octave bands between input and output were used to quantify audibility and the modulation-transfer-function (MTF) ratio (envelope distortion) relative to the input. Modulation was favorable for slow amplitude compression and linear processing, while audibility was favorable for fast amplitude compression. This tradeoff did not differ between NFC and FBW, which were equally effective at improving fricative recognition. Audibility and MTF ratio were significant predictors of recognition across all phonemes. The FBW/NFC covariate and its interactions were not significant, indicating that audibility and modulation were equally important regardless of whether high-frequency information was processed with FBW or NFC.

September 22, 2016

Ankita Thawani (Fekete lab)

Zika virus exhibits differential tropism in developing chicken brain

Zika virus (ZIKV), a member of the Flaviviridae family transmitted by arthropods, was first reported in Aedes spp. in the 1950s in the Zika forest of Uganda. The recent outbreak in Central and South America has brought with it the need to better understand the pathogenesis of this virus, with the long-term goal of developing therapeutic or preventative aids for patients. Zika infection in adults is correlated with a range of symptoms, from flu-like illness to the neurological Guillain-Barré syndrome. Vertical (in utero) transmission of the virus has been implicated in causing microcephaly, eye defects, hearing deficits, and impaired growth in fetuses when the mother is infected during gestation. In recent months, many cell, organoid (in vitro), and murine (in vivo) models have been published demonstrating the transmission, tropism, and pathogenesis of ZIKV. We aim to develop a medium-throughput in vivo model to study ZIKV infection and pathogenesis using the chicken embryo. We injected the midbrain ventricle of embryonic day (E)2 chicken embryos in ovo and measured the viral load up to 3 days post-infection via real-time polymerase chain reaction (RT-PCR) and immunohistochemistry with co-staining for dividing cells, dying cells, or neural progenitors. Our results reveal sub-regions of the brain that are preferentially infected at this stage, suggesting that ZIKV exhibits tropism for specific parts of the developing brain. Heavily infected regions show reduced cell proliferation and increased cell death. Other methods of viral delivery were also explored, including blood-vessel injection, limb-bud injection, and chorio-allantoic membrane (~placenta) infection. This study will add to our knowledge of how Zika virus spreads during embryonic development and will establish a model organism for testing preventative and therapeutic strategies.

September 29, 2016

Emily Han (Bartlett lab)

Circular Analysis and Computational Modelling of Age-Related Effects on Single-Unit Amplitude Modulation Depth Processing in Inferior Colliculus

Past studies have shown a reduced amplitude-modulation following response (AMFR) in the aged rat auditory midbrain, especially with reduced amplitude-modulation (AM) depth or in the presence of noise, a phenomenon that has been proposed to result from GABAergic loss in the aged IC. However, single-unit studies have shown that blockade of GABA receptors has no negative effect on IC phase-locking ability. In this study we analyzed single-unit responses and local field potentials (LFP) of young and aged rat IC neurons to AM stimuli with decreasing depth. Aged units show significantly lower rate-coding strength even when hearing threshold has been compensated, while temporal coding strength in aged IC neurons, represented as vector strength (VS), is more resistant to decreases in modulation depth. Circular analysis reveals a potential loss of phase-following information in aged units, but the results are not statistically conclusive. LFP amplitude and FFT-ratio results are consistent with observations in the single-unit responses. Conductance-based modelling indicates that a decrease in AMPA/NMDA/GABA co-conductance might play a role in shaping the age-related change in AM depth response.
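Vector strength, the temporal-coding metric mentioned above, is straightforward to compute. A minimal sketch in Python (illustrative only, not the analysis code used in the study):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength (Goldberg & Brown, 1969): the magnitude of the
    mean phase vector of spike times relative to the modulation cycle.
    Returns 1.0 for perfect phase locking and values near 0 for
    spikes unrelated to the modulator."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

# One spike per cycle of a 40-Hz modulator: perfect phase locking
locked = np.arange(100) / 40.0
# Spikes at random times over the same 2.5-s window: no phase locking
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0.0, 100 / 40.0, size=100)

print(vector_strength(locked, 40.0))         # 1.0
print(vector_strength(random_spikes, 40.0))  # close to 0
```

Because VS is computed only from spike phases, it can remain high even when overall firing rate drops, which is one reason rate coding and temporal coding can age differently.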

October 6, 2016

Jeff Lucas, PhD (Lucas lab)

Now for something completely different: the extraordinarily complex vocal repertoire of Carolina chickadees

There is enormous variation in the complexity of communicative systems.  We still have no clear answer as to why this variation has evolved, though there are a number of alternative hypotheses.  We are studying a model system, chickadees and titmice, which share the chick-a-dee call system.  This call system is one of very few examples of a syntactically complex vocal system.  I'll describe the system itself and why it is so extraordinarily complex, and then talk about ecology, social networks, and space use in three populations of these birds that may offer some insight into the evolution of vocal complexity.

October 13, 2016

Dave Axe (Heinz lab)

Changes in temporal coding of frequency modulation following inner-hair-cell impairment

Hearing loss is classically characterized by an increase in the threshold of audibility, measured through a pure-tone audiogram. The sensitivity of this test has been called into question, since some who enter the audiology clinic reporting difficulty with auditory tasks (e.g., speech understanding in noise) show normal thresholds. It has been suggested that one possible source of these supra-threshold deficits is degraded coding brought on by the loss or damage of inner hair cells (IHCs). Recent psychophysical work has suggested that frequency-modulated (FM) stimuli are ideal for probing these supra-threshold deficits in temporal coding. Temporal coding of FM stimuli depends on robust phase-locking to a rapidly changing carrier frequency. At sufficiently high modulation rates, when the auditory system is tasked with encoding cycle-to-cycle differences, robust coding depends on the large amount of redundancy built into the system. Damage to IHCs degrades this redundancy in two ways. First, loss of IHCs reduces the total number of inputs into the brain. The volley theory posits that individual auditory-nerve fibers (ANFs) are not capable of encoding all the temporal aspects of a stimulus, but that the auditory system overcomes this limitation by relying on a large number of ANFs to fully encode all of the important features. When an IHC is lost, the 10-30 ANFs that synapse onto it lose their input and are therefore no longer able to carry information. The second way in which IHC damage reduces the redundancy of the auditory system is through reduced firing rates in fibers that synapse onto dysfunctional IHCs. Our neurophysiological recordings from ANFs in anesthetized chinchillas treated with carboplatin have demonstrated reduced firing rates and shallower slopes of rate-level functions in these animals with IHC dysfunction.
Using computational modeling techniques we explore the hypothesis that reduced cochlear output degrades temporal coding of FM stimuli.
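The volley-theory argument above can be made concrete with a toy simulation. In this purely illustrative sketch (the carrier frequency, jitter, and fiber counts are arbitrary assumptions, not values from the talk), each fiber fires once per stimulus cycle at a fixed phase plus Gaussian spike-time jitter, and a downstream decoder recovers the stimulus phase from the pooled spike train. Losing fibers degrades the pooled estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
FC = 1000.0          # carrier frequency (Hz)
TRUE_PHASE = 0.7     # stimulus phase (rad) the decoder must recover
JITTER_SD = 1e-4     # 0.1 ms of spike-time jitter per fiber

def decode_phase(n_fibers, spikes_per_fiber=30):
    """Estimate stimulus phase from the pooled spikes of n_fibers ANFs,
    each firing once per (randomly chosen) cycle at TRUE_PHASE + jitter."""
    cycles = rng.integers(0, 1000, size=(n_fibers, spikes_per_fiber))
    times = (cycles + TRUE_PHASE / (2 * np.pi)) / FC \
            + rng.normal(0.0, JITTER_SD, (n_fibers, spikes_per_fiber))
    pooled = np.exp(1j * 2 * np.pi * FC * times.ravel())
    return float(np.angle(np.mean(pooled)))

def rms_error(n_fibers, n_trials=200):
    """RMS circular error of the phase estimate over repeated trials."""
    errs = [(decode_phase(n_fibers) - TRUE_PHASE + np.pi) % (2 * np.pi) - np.pi
            for _ in range(n_trials)]
    return float(np.sqrt(np.mean(np.square(errs))))

# Fewer fibers (as with IHC loss) means a noisier pooled estimate:
print(rms_error(2), rms_error(20))  # error shrinks roughly as 1/sqrt(n_fibers)
```

The point of the sketch is only the scaling: no single jittery fiber carries the timing precisely, but averaging across many fibers does, so removing fibers or reducing their firing rates directly erodes temporal coding.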

October 20, 2016


October 27, 2016

Katie Gill (Audiology T35 student research)

Refractory recovery function of the auditory nerve measured in implanted children with and without cochlear nerve deficiency

Cochlear nerve deficiency (CND) is characterized by a small or absent auditory nerve. Patients with CND typically receive fair to poor benefit from cochlear implantation. It has been proposed that CI recipients who present with a slower refractory time constant, such as the CND population, may benefit from slower programming rates (Shpak et al., 2004). The purpose of this study was to investigate 1) the relationship between the refractory recovery time constant and the slope of the amplitude growth function of the auditory nerve for implanted children with CND and implanted children with sensorineural hearing loss (SNHL); and 2) whether a correlation exists between these measures and the size of the auditory nerve based on imaging results. Preliminary analysis of refractory recovery time reveals slower refractory recovery in CND subjects and faster refractory recovery in SNHL subjects. This is inconsistent with the results from Botros & Psarros (2010). Results from the current study indicate that slower recovery times are associated with compromised neural populations (CND) and faster recovery times are associated with intact neural populations. Further analysis of the slope of the amplitude growth function and analysis of MRI imaging is currently underway.

November 3, 2016

Youyi Bi (Herrick Acoustics Lab)  [PIs Tahira Reid and Patricia Davies]

Development of new product sounds

Car designers are interested in understanding what attributes of naturally occurring, generated, and modified sounds make them more or less desirable to end-users. In this research, we investigated millennials’ perception of proposed next-generation car sounds and other product sounds. A subjective test was conducted to determine sound preferences when people were presented with a current sound, a very different sound, and something in between the two. Intentional sounds (e.g., turn signal) and consequential sounds (e.g., car door closing) were considered in six contexts. Because of the focus on next-generation cars, responses from millennials (purchasers of cars over the next 40-50 years) are of particular interest. The very different sounds were inspired by the preferences of the millennial generation (e.g., music and film). The influences of visual information and perceived functionality on sound preferences were also examined. Forty university students and staff volunteered to participate in the test. The results showed that millennials preferred traditional sounds in most contexts, and their sound preferences aligned with certain sound evaluations and verbal descriptions. Participants’ verbal descriptions of the sounds provided interesting insights into the relationship between the sound evaluations and participants’ perception of the sounds. In several cases the pictorial and textual cues of context, and their presentation order, could impact how people perceived the sounds. These results may shed light on how to integrate millennials’ preferences into the design of future product sounds.

November 10, 2016

Integrative Audiology Grand Rounds-Lata Krishnan, PhD, CCC-A

Integrative Audiology Grand Rounds: Auditory neuropathy spectrum disorder (ANSD)

Auditory neuropathy spectrum disorder (ANSD) is a disorder in which the transmission of afferent signals from the inner ear to the brain is impaired, although the exact site of lesion is not clear. ANSD accounts for approximately 7-10% of children with congenital hearing loss, about 6% of whom present with unilateral ANSD and normal hearing in the contralateral ear. This case presentation focuses on the complexities of diagnosis, management, and family counseling in working with a child identified with ANSD in one ear and sensorineural hearing loss (SNHL) in the other ear.

November 17, 2016

Beth Strickland, PhD (Strickland lab)

Behavioral measurements of efferent effects: addressing some basic questions

It has long been known that humans have an olivocochlear efferent system.  However, many basic questions about the role of this system in auditory perception remain unanswered.  In this presentation, I will present data and also some future directions in our research on basic topics such as the size of the contralateral reflex, testing for possible effects of the middle ear reflex on our measures, and exploring variability in the reflex among listeners with hearing thresholds on the high end of the normal range.

December 1, 2016

[****NOTE: Will meet in LYLE 2066 this day only****]

Ryan Verner (Bartlett lab)

Electrophysiological, Behavioral, and Histological Assessment of the Thalamocortical Network as a Stimulation Target for Central Auditory Neuroprostheses

Brain-machine interfaces aim to restore natural sensation or locomotion to individuals who have lost such ability. While the field of neuroprostheses has developed some flagship technologies that have enjoyed great clinical success, such as the cochlear implant, it is generally understood that no single device will be ideal for all patients. For example, the cochlear implant is unable to help patients suffering from neurofibromatosis type 2, which is commonly characterized by bilateral vestibular schwannomas whose surgical removal requires transection of the auditory nerve. In an effort to develop stimulatory neuroprostheses that can help the maximum number of patients, research groups have developed central sensory neuroprostheses. However, moving up through ascending sensory processing centers introduces greater diversity of neuronal feature selectivity and greater coding complexity, and chronic implantation of devices becomes less efficacious as the brain’s glial cells respond to implanted devices. In this work, we propose a neuroprosthesis targeting the auditory thalamus, specifically the ventral division of the medial geniculate body (vMGB). The thalamus represents an information bottleneck through which many sensory systems send information. Primary (vMGB) and non-primary (dMGB, mMGB) subdivisions provide parallel auditory inputs to cortex and receive feedback excitation and inhibition from cortex and the thalamic reticular nucleus (TRN), respectively. We characterized the potential of the thalamocortical circuit as a neuroprosthetic target through electrophysiological, behavioral, and histological methods. Preliminary results suggest some features of intrathalamic microstimulation are more salient than those of intracortical microstimulation, such as sensitivity to perceived pitch or bandwidth, while others are weaker, such as sensitivity to loudness.
Additionally, we have identified a profound immune response in vMGB to the implanted electrode and propose alternative surgical approaches which may mitigate this response.

December 8, 2016

Katie Scott (Fekete lab)

Molecular regulators of innervation and patterning across the developing cochlea

The basilar papilla (BP) is the auditory organ of the bird and is the equivalent of the organ of Corti in the mammalian cochlea. Across the radial axis of the BP, the organization of cell types and connectivity is such that hair cells on the neural side of the BP take on a tall morphology (tall hair cells) and are primarily innervated by afferent neurons. Conversely, hair cells on the abneural side take on a short morphology (short hair cells) and are primarily innervated by efferents. It is currently unknown what factors drive this differential innervation. One factor that may influence this pattern is Wnt9a. When Wnt9a is overexpressed, we observe an increase in both the number of tall hair cells and their afferent innervation. To identify genes downstream of Wnt9a that may be mediating these changes, we used RNA deep sequencing of control and Wnt9a-overexpressing BPs.

One candidate gene identified by RNA-seq was Slit2, which was significantly upregulated (1.2-fold change). Slit is an axon guidance factor that signals through Robo receptors to repel axons.  Additionally, Slit-Robo signaling has been shown to activate β-catenin and influence proliferation. Because many Wnts also act through β-catenin, we hypothesized that Slit2 may influence radial identity of the sensory organ rather than, or in addition to, its well-known effects on axonal innervation. Using in-situ hybridization, we determined that Slit2 is endogenously expressed on the neural side of the BP; however, its expression domain does not expand when Wnt9a is overexpressed. To determine the effects of Slit/Robo signaling on patterning and innervation, we overexpressed Slit2 or a truncated version of its receptor, Roundabout-1 (Robo1). When Slit2 or truncated Robo1 were overexpressed, no changes in proliferation, innervation, or phosphorylated β-catenin were observed.  Thus, the function of Slit-Robo signaling in BP development remains an open question.

Another candidate axon repellent identified as down-regulated (0.6-fold) by RNA-seq was Semaphorin-3D (Sema3D), which acts through neuropilin (Npn) receptors. Using in-situ hybridization, we find that Sema3D transcripts are expressed in the sensory domain of the BP and decrease when Wnt9a is overexpressed. The loss of this repellent may allow afferents to spread more widely across Wnt9a-treated BPs. Because Sema3D is present throughout the BP in normal animals, we predict that subpopulations of afferent neurites may differ in the levels or types of Npn receptors that they express, leading to different responses to this repulsive cue.

January 26, 2017

Mark Sayles, PhD (Sayles lab)

Iceberg Ahead! - Neural Mechanisms of Interaural Time Processing in the Lateral Lemniscus

Neural sensitivity to microsecond differences in the ongoing temporal structure of sounds at the two ears underlies our sense of auditory space. This sensitivity first emerges in specialized neurons in the medial superior olive (MSO) through a process of coincidence detection. MSO principal neurons’ axons project in the lateral lemniscus to target neurons in the inferior colliculus (IC) and dorsal nucleus of the lateral lemniscus (DNLL). These neurons integrate other inputs with those from the MSO to further process acoustic spatial cues. In this talk I will present in vivo intracellular electrophysiological data from labelled neurons in the DNLL. Our data reveal a novel cellular mechanism for shaping interaural-correlation sensitivity: we term it the “iceberg” effect.

Spring Talks

February 2, 2017

Amanda Maulden, BS (Heinz lab)

Comparison of Psychometric and Neurometric Amplitude-Modulation Detection Thresholds in Normal-Hearing Chinchillas

Slow fluctuations in amplitude are important for speech perception in quiet and noisy backgrounds. The effects of sensorineural hearing loss on modulation coding have been hypothesized to have implications for listening in real-world environments, both in cases of permanent threshold shift due to hair-cell dysfunction and in cases of temporary threshold shift where permanent, but “hidden,” cochlear synaptopathy occurs. Animal studies correlating neural with behavioral modulation-detection thresholds have suggested a close correspondence between the two metrics in birds, but not in mammals. In addition, avian thresholds appear to match more closely with human thresholds than data from other mammals. Here, we examine the correspondence between neural and behavioral thresholds in the chinchilla. Behavioral amplitude-modulation (AM) detection thresholds were determined for 10 chinchillas using the method of constant stimuli. Animals were trained to discriminate a sinusoidal AM (SAM) tone (4-kHz carrier) from a pure tone. Stimuli were embedded in a notched-noise masker to prevent off-frequency listening. Successfully trained animals were tested at a range of AM depths (3-dB steps between -30 and 0 dB) for a range of modulation frequencies. In a separate group of animals, responses were recorded under barbiturate anesthesia from a population of single units in the auditory nerve and ventral cochlear nucleus. Spike times were collected in response to SAM tones (characteristic-frequency-tone carrier; range of modulation frequencies and depths) and analyzed in terms of synchronization to the modulation frequency. Within a signal-detection-theoretic framework, both psychometric and neurometric AM-detection thresholds were computed as a function of modulation frequency. Behavioral thresholds were consistent across individual animals, typically in the range -25 to -15 dB.
The most-sensitive single-unit neurometric temporal AM-detection thresholds were similar to whole-animal behavioral performance at a range of modulation frequencies. The low behavioral AM-detection thresholds found here contrast with previous work on other mammalian species (e.g., rabbits, gerbils). Our data are more in line with AM-detection thresholds found in avian species (e.g., budgerigars) and in humans. The establishment of a mammalian model with corresponding neural and behavioral modulation thresholds will allow us to examine the effects of cochlear synaptopathy on behavioral and neural assays of temporal processing following “hidden hearing loss.”
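For readers unfamiliar with the stimuli, a SAM tone with modulation depth expressed in dB relative to full modulation can be generated as follows. This is an illustrative sketch, not the lab's stimulus-generation code; the sampling rate and parameter values are arbitrary assumptions:

```python
import numpy as np

def sam_tone(fc, fm, depth_db, dur, fs=48000):
    """Sinusoidally amplitude-modulated (SAM) tone.
    depth_db = 20*log10(m), so 0 dB is full (100%) modulation and
    -30 dB is a modulation depth of about 3%."""
    t = np.arange(int(dur * fs)) / fs
    m = 10.0 ** (depth_db / 20.0)
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# A 4-kHz carrier modulated at 64 Hz, 15 dB below full modulation depth
x = sam_tone(fc=4000, fm=64, depth_db=-15, dur=0.5)
```

At the reported behavioral thresholds of -25 to -15 dB, the envelope fluctuates by only roughly 6-18% of its mean, which is what makes the low thresholds notable.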

February 9, 2017

Hari Bharadwaj, PhD (Bharadwaj lab)

What is hindering the widespread clinical use of objective physiological measures? 

Objective physiological measures such as otoacoustic emissions (OAEs) and evoked potentials (e.g., auditory brainstem responses, envelope-following responses) provide a non-invasive window into the function of specific early portions of the auditory pathway. Yet, clinical use of such measures on an individual patient often tends to be limited to binary evaluations (e.g., pass-fail hearing screenings). Indeed, clinical utility is limited by systematic variability from confounding physiological factors (e.g., individual differences in cochlear dispersion, head and brain anatomy, efferent effects), the non-standardized nature of acoustic stimulation and in-ear measurements (e.g., difficulty in taking into account individual ear-canal properties), and measurement-related factors (e.g., variability in contact impedance, subject motion, and unrelated brain activity). Over the last 5 years, owing to our interest in individual differences in suprathreshold hearing, we have conducted several pilot experiments with the goal of addressing some of these sources of variability. In this presentation, I will compile the results from those experiments and describe some methodological choices that we have employed, or plan to employ in our attempt to improve the test-retest reliability of such physiological measures.

February 16, 2017

Weonchan Sung (Herrick Acoustics Lab)  [PIs Stuart Bolton and Patricia Davies]

Development of sound criteria for HVAC&R equipment noise

People complain when they find the sounds made by heating, ventilating, air conditioning and refrigeration (HVAC&R) equipment annoying. Equipment designers would like to have better noise criteria to guide their designs so that the equipment sound is not annoying.  HVAC&R sounds often have a rich harmonic structure attributable to various rotating or reciprocating components. There are also broadband components arising from air motion, turbulence and fluid pulsations. To build a robust annoyance model, we need to identify important sound features that influence annoyance and have the ability to vary them.  Techniques to decompose sounds into tonal and broadband components and to modify the sounds to independently control sound attributes, while retaining the realism of the sounds, will be described. Three subjective tests were planned and the results of two of the tests are presented. In the first test subjects wrote descriptions of sounds and also rated sounds on an annoyance scale. The second test was a semantic differential test, designed based on findings from the first test.  Three strong factors were identified related to loudness, tonal content/sharpness, and fluctuation/irregularity. The results of this and the first test are being used in the design of a third test to develop sound quality models for use in HVAC&R equipment design and evaluation.

February 23, 2017

Anders Højsgaard Thomsen (Systems Manager of Audiology & Algorithms at Oticon A/S, Denmark)

Audiology development of the Velox platform used in Oticon OPN hearing aids

This talk will discuss the layered architecture of the latest DSP platform used in Oticon hearing aids.  Topics will include considerations for acoustic variations across hearing aid styles, efforts to reduce DSP parameter complexity and the dependency of DSP parameters on acoustics, the audiology development flow from Matlab to final hearing aids in field trials, hearing aid system integration points, and observations on results generated in lab trials vs. field trials.

Jan Petersen (Lead Development Engineer at Oticon A/S, Denmark)

Compression in Oticon hearing aids

This talk will discuss key features of a unique method of nonlinear amplification.  Topics will include principles behind the Oticon guided level estimator, the use of adaptive time constants, local linear - global nonlinear relationships, cue preservation, and modulation preservation.

March 2, 2017

Chandan Suresh (PhD student, SLHS, Krishnan Lab)

Influence of Structural and Functional Asymmetries on Pitch-Relevant Neural Activity in Auditory Cortex

Long-term language experience enhances pitch-relevant neural activity in the auditory cortex, as reflected by the components (Na-Pb and Pb-Nb) of the cortical pitch response (CPR). Previous results have consistently shown that this neural activity exhibits a relative rightward asymmetry in Chinese listeners only. These experience-dependent results have been interpreted to suggest that extrasensory processes modulate early sensory-level processes to optimize representation of temporal attributes of pitch that are behaviorally relevant. However, it is not known how structural asymmetries (characterized by contralateral dominance) in the ascending auditory pathways interplay with functional hemispheric asymmetries at the cortical level. We recorded cortical pitch responses (CPR) from native speakers of Mandarin and English in response to speech (S; /i/) and nonspeech (NS; iterated rippled noise) sounds, both of which carried a pitch homolog of Mandarin high rising Tone 2. Stimuli were presented under diotic (either S or NS in both ears), dichotic (S-LE/NS-RE; S-RE/NS-LE), and monotic (S-RE; NS-RE; S-LE; NS-LE) conditions. Irrespective of the mode of stimulation, the magnitude of CPR components was greater in the Chinese group than in the English group; the Chinese group also showed a rightward asymmetry. In contrast, the English group showed no asymmetry when either speech was presented to the LE or nonspeech to the RE, and only a weak rightward asymmetry when speech was presented to the RE or nonspeech to the LE. In monotic conditions, the Chinese group showed a rightward asymmetry only when speech or nonspeech was presented to the LE, with no asymmetry when speech or nonspeech was presented to the RE. In contrast, the English group showed the largest response in the contralateral hemisphere regardless of stimulus type (S, NS). These findings suggest that the rightward asymmetry observed in the Chinese group mostly represents an experience-dependent functional asymmetry. A right-sided preference with LE monaural stimulation further suggests that long-term experience may also influence structural asymmetry. In monotic conditions, the English group’s responses more likely reflect a structural asymmetry in which the dominant contralateral pathway suppresses activity of the ipsilateral pathway. Based on this pattern of results for diotic, dichotic, and monotic conditions in the two groups, we conclude that long-term language experience in the Chinese group enhances the representation of pitch-relevant information in the auditory cortex, facilitated by selective reinforcement of structural asymmetries along the ascending auditory pathway that delivers the optimal input to the preferred pitch-processing site in the auditory cortex.

March 9, 2017

Vidhya Munnamalai (BIO, Fekete lab)

A Wnt-er’s tale of two feuding families (with a happy ending)

The sensory cells of the mammalian organ of Corti assume a precise mosaic arrangement during embryonic development. How are these cells instructed early in development to adopt a specific fate in a specific arrangement? For example, what makes an inner hair cell versus an outer hair cell? Several pathways, such as the Notch and Bmp pathways, have already been implicated in this patterning. Here, we focused on the Wnt pathway and addressed whether its manipulation can alter the patterning of compartments across the radial axis. To investigate this, we used mouse cochlea cultures to disrupt the Wnt pathway using pharmacological agents. At E12.5, the prosensory cells are still proliferating and have not yet been committed to a specific sensory fate. Through temporal manipulation, we show that the timing of Wnt ligand expression alters patterning differently. Some of these changes in patterning were secondary effects downstream of the Bmp pathway, revealing interesting crosstalk between these pathways. The Wnt and Bmp4 pathways appear to act antagonistically to each other to produce the mosaic patterning we are familiar with. My future goals are to further investigate the nature of this crosstalk across the radial axis.

March 23, 2017

Francis Lab (Alex Francis)

Listening Effort: Phenomenology, Psychology, Physiology and Application

Interest in the phenomenon of listening effort has increased significantly in recent years, but there is still little consensus as to what it is, or why it matters. Here I will present a brief overview of some key issues and methods in this developing field, and will demonstrate some applications in the context of four studies conducted in my lab.

March 30, 2017

Liz Marler (Au.D. student)  [PI: Lata Krishnan]

Hidden Hearing Loss in College-age Musicians: Electrophysiologic Measures

Recent animal studies have shown that even moderate levels of noise exposure can lead to cochlear synaptopathy, a disruption of auditory-nerve synapse structure and function. This has been described as hidden hearing loss because normal hearing sensitivity is preserved despite the auditory-nerve damage. We evaluated distortion product otoacoustic emissions (DPOAE) and auditory brainstem responses (ABR) in normal-hearing college-age musicians and non-musicians to determine whether recreational exposure to music produced changes in these responses consistent with hidden hearing loss. For the musicians, results showing larger DPOAE amplitudes above 2000 Hz, reduced summating potential (SP) amplitude, and reduced ABR Wave I and Wave V amplitudes across stimulus levels and rates are consistent with hidden hearing loss. The findings suggest intact or enhanced outer hair cell function (larger DPOAE amplitude), reduced inner hair cell drive (decreased SP), and disruption of the synchronized neural activity generating both the auditory-nerve response (Wave I) and the more rostral brainstem response (Wave V). Clinical implications include the possibility of early identification of hidden hearing loss among young adults, and counseling regarding hearing protection to prevent it.

April 6, 2017

Brian Leavell & Hoover Pantoja (Bernal Lab, BIO)

From ear to ear: an exploration of hearing in mosquitoes and frog-biting midges

Insects are the most diverse group of animals and, unsurprisingly, exhibit an extraordinary variety of adaptations for receiving and processing acoustic signals. In this talk we will provide a brief overview of the main mechanisms underlying reception of airborne sounds in insects. We will use this framework as a springboard to discuss two ongoing investigations of sound production and audition in mosquitoes and frog-biting midges. In the first project, we used acoustic measures commonly used in speech recognition to examine signal modulation in mosquitoes. Using jitter and shimmer measures, we quantified variation in the frequency and amplitude of individual and courtship signals. Our results revealed the active role of each sex in signal modulation during courtship. In the second part of the talk, we will discuss signal production and hearing in frog-biting midges, a family closely related to mosquitoes. We will describe our current understanding of hearing in the midges and present ongoing and future avenues of our research with this group.
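Jitter and shimmer, the two measures named in the abstract, quantify cycle-to-cycle variation in period and amplitude. As a minimal sketch (the abstract does not specify which variant the lab used, so the common "local" relative form is assumed, and the per-cycle data below are invented for illustration):

```python
# Local (relative) jitter and shimmer from hypothetical per-cycle data.
# Jitter: mean absolute difference of consecutive periods / mean period.
# Shimmer: same computation applied to per-cycle peak amplitudes.

def local_jitter(periods):
    """Relative jitter of a sequence of cycle durations (seconds)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Relative shimmer of a sequence of per-cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Invented example values, loosely on the order of mosquito wingbeat cycles.
periods = [0.0021, 0.0022, 0.0020, 0.0021, 0.0023]  # cycle durations (s)
amps = [0.95, 1.00, 0.92, 0.97, 1.01]               # peak amplitudes (a.u.)
print(f"jitter  = {local_jitter(periods):.3f}")
print(f"shimmer = {local_shimmer(amps):.3f}")
```

Higher values indicate less stable signals; comparing these measures between the sexes, as the abstract describes, would show which sex modulates its signal more during courtship.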

April 13, 2017

Vijaya Muthaiah, PhD (Heinz Lab) 

Effects of cochlear-synaptopathy inducing moderate noise exposure on auditory-nerve-fiber responses in chinchillas

It has been hypothesized that selective loss of low-spontaneous-rate (low-SR) auditory-nerve (AN) fibers following moderate noise exposure may underlie perceptual difficulties some people experience in noisy situations, despite normal audiograms. However, the finding of selective low-SR-fiber loss hasn’t been replicated in an animal model with behavioral thresholds similar to humans. We recently established a behavioral chinchilla model for which neural and behavioral AM-detection thresholds are in line with each other and similar to humans. Here, we report physiological AN-fiber response properties from anesthetized chinchillas exposed to noise that produced cochlear synaptopathy, as confirmed by immunofluorescence histology. Auditory-brainstem responses, distortion-product otoacoustic emissions, and compound action potentials confirmed no significant permanent threshold shift. Stimuli included both simple (pure tones, as studied previously) and complex (broadband noise) sounds. Low-SR fibers were reduced in percentage (but not eliminated) following noise exposure, as shown previously in guinea pigs. Saturated rates to tones were reduced. Similar tuning and temporal coding were observed in broadband-noise responses following noise exposure. Complete characterization of AN-fiber responses to complex sounds in a mammalian behavioral model of noise-induced cochlear synaptopathy will be useful for understanding suprathreshold deficits that may occur due to hidden hearing loss.

April 20, 2017

Ian Mertes, PhD, AuD, CCC-A (Assistant Professor, Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign)

Assessing Olivocochlear Efferent Activity Using Cochlear, Neural, and Perceptual Measures

The olivocochlear efferent system modulates cochlear function to improve sound detection in noise. The efferent system is also likely involved in everyday listening in background noise. Efferent dysfunction may therefore contribute to hearing-in-noise difficulties, so assessments of efferent activity could be clinically useful. In humans, efferent activity is typically measured using cochlear responses that require normal or near-normal hearing. I will present work determining whether neural responses allow for assessment of efferent activity in listeners with mild hearing loss. I will also discuss recent work examining the relationship between efferent activity and word recognition in noise.

April 27, 2017

Brandon Coventry (BME, Bartlett lab)

Infrared neural stimulation and the auditory system: can infrared light be the answer to many of the problems of the cochlear implant?

Stimulation of the nervous system has the unprecedented ability to remedy neurological and neuropsychiatric diseases and disorders once thought untreatable. The first and most successful clinical neuroprosthesis is the cochlear implant, which has been used to restore hearing percepts in over 300,000 patients worldwide (NIDCD, 2017). However, current devices are limited by electrical current spillover leading to non-specific activation of neural tissue, toxic electrochemical reactions, and the need for the electrode to be in direct contact with the tissue. Perceptually, this leads to aberrant pitch and temporal processing in implanted patients. Infrared neural stimulation (INS) is a novel optical technique that uses infrared light to selectively stimulate nerves and neurons. INS exploits an intrinsic mechanism of the cell, requiring no exogenous genetic modification. Stimulation is spatially limited by the optical aperture and the optical penetration depth, the optrode does not need to be in direct contact with the cell, and no toxic reactive species are produced. Furthermore, initial animal models suggest INS can be used to create an optical cochlear implant (Matic et al., 2013), which would address many of the problems of modern devices.

Despite these advantages, translational INS is limited by a lack of understanding of the cellular mechanisms of stimulation. In this talk, I review the use of INS with particular focus on its use as a cochlear implant. I will also present our findings on novel INS stimulation wavelengths, along with initial data showing that INS has an ion channel component, which, in conjunction with other studies, suggests activation is likely based on natural biophysical mechanisms. Finally, I will discuss future work in our lab to elucidate these cellular mechanisms using slice electrophysiology in the auditory thalamus.

Speech, Language, & Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, PH: (765) 494-3789

2016 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by SLHS

If you have trouble accessing this page because of a disability, please contact the webmaster at