Seminars in Hearing Research at Purdue

Students, post-docs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from all of Purdue University, including Speech, Language, and Hearing Sciences (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). Seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2021-2022 Talks

Thursdays, 10:30-11:20 AM (EST) in LYLE 1150 & ZOOM

Zoom Info: https://purdue-edu.zoom.us/j/93108158900?pwd=RDdTQ0Z4UE9Rb0JUenhjMG1SMkp2QT09
Meeting ID: 931 0815 8900
Passcode: 11501150

April 28, 2022

Title: Developing equitable neurotechnologies and addressing racial and phenotypic bias in human neuroscience methods

Speaker: Jasmine Kwasa, PhD, Carnegie Mellon University (invited speaker)

Speaker Bio: Jasmine Kwasa, PhD, is an NIH-funded post-doctoral fellow in the Neuroscience Institute at Carnegie Mellon University (CMU). Originally from the South Side of Chicago, Jasmine earned her B.S. from Washington University in St. Louis, her M.S. from Boston University (both in Biomedical Engineering), and her Ph.D. in Electrical and Computer Engineering from CMU. Her ongoing post-doctoral research seeks to develop neurotechnologies, such as EEG and fNIRS, optimized for coarse, curly hair and dark skin pigmentation, with collaborators at CMU. She is also a neuro-ethicist and writes about the future of inclusive neurotech and the history of racial bias in neuroscience, medicine, and technology. Jasmine has received several honors throughout her training, including being named a Ford Foundation Fellow, an NSF Graduate Research Fellow, a Society for Neuroscience fellow, and a “Rising Star in Biomedical Sciences” by MIT. In her free time, Jasmine is a dance fitness instructor and enjoys travel and time with her enormous family.

This seminar is presented in conjunction with Purdue Speech, Language, and Hearing Sciences (SLHS) Diversity, Equity, and Inclusion Committee, and the Purdue Institute for Integrative Neuroscience (PIIN).

April 21, 2022

Title: Modern Hearing Aid Technology to Improve Speech Intelligibility in Noise

Speaker: Joshua Alexander, PhD, Associate Professor, SLHS

Abstract: This talk will be an abbreviated overview of the contents of the fall 2021 issue of Seminars in Hearing on approaches currently being used to combat problems related to environmental and wind noise. As the guest editor of this issue, I have aimed it at professionals and students in audiology, hearing science, and engineering. The goal is to bring the innovators and the users of the technology closer together by introducing each group to the nuances and limitations associated with the various solutions available today. Technologies that will be discussed include classification, directional microphones, binaural signal processing, beamformers, motion sensors, and machine learning.

April 14, 2022

Title: Barriers to the Implementation of Hearing Research in the Audiology Clinic

Speaker: Samantha Hauser, AuD, SLHS PhD student (Bharadwaj/Heinz labs)

Abstract: Despite tremendous growth in our understanding of the auditory system over the last few decades, the audiological test battery and clinicians’ options for addressing hearing loss in the clinic have remained largely unchanged. Some practices, like probe-microphone verification of hearing aid fittings, have years of research supporting their benefits but are not routinely used by many audiologists. Drawing on my personal experience as a practicing audiologist before joining the SLHS PhD program, this talk will examine the practical barriers to implementing scientific ideals in the clinic. I will review a brief history of the Doctor of Audiology (AuD) degree and curriculum, the audiologist’s scope of practice, difficult financial decisions related to maintaining a successful practice, limitations in clinical time, and considerations related to patient goals and the measurement of clinical outcomes. Through this discussion, I hope to give a broad appreciation of the challenges audiologists face, suggest ways to consider the clinical relevance of research, and encourage closer collaboration between the disciplines.

April 7, 2022

Title: Open Science: Benefits and barriers to participation

Speaker: T32 RCR Discussion

Abstract: According to a Consensus Study Report of the National Academies of Sciences, Engineering, and Medicine, “Openness [in science] increases transparency and reliability, facilitates more effective collaboration, accelerates the pace of discovery, and fosters broader and more equitable access to scientific knowledge and to the research process itself.” Furthermore, the new NIH policy on data management and sharing asserts that “Sharing scientific data accelerates biomedical research discovery, in part, by enabling validation of research results, providing accessibility to high-value datasets, and promoting data reuse for future research studies”, with data-sharing mandates set to take effect in January 2023. This week, we will have a T32/TPAN-led discussion on the benefits of open science, some available tools, and barriers to participation in the open-science movement. Please bring your experience and/or thoughts in this realm for discussion!

March 31, 2022

Seminar presented in conjunction with the Purdue Biological Sciences Seminar Series (Neurobiology & Physiology Area).

Host: Donna M. Fekete, PhD, Purdue University

Title: Ephrin-Eph forward signaling in tonotopic map formation in the mouse cochlear nucleus 

Speaker: Wei-Ming Yu, PhD, Loyola University

Abstract: Tonotopy is a fundamental feature of the vertebrate auditory system and forms the basis for sound discrimination, but the molecular basis underlying its formation remains largely elusive. Ephrin/Eph signaling is known to play important roles in topographic mapping in other sensory systems. Here, we found that ephrin-A3 molecules are differentially expressed along the tonotopic axis in the developing mouse cochlear nucleus. Ephrin-A3 forward signaling is sufficient to repel auditory nerve fibers in a developmental stage-dependent manner. In ephrin-A3 mutant animals, the tonotopic map is degraded and isofrequency bands of neuronal activation upon pure tone exposure become imprecise in the anteroventral cochlear nucleus. Ephrin-A3 mutant mice exhibit a delayed second wave in auditory brainstem responses and impaired detection of sound frequency changes. Our findings establish an essential role of ephrin-A3 forward signaling in forming precise tonotopy in the mouse cochlear nucleus to ensure accurate sound discrimination.

March 24, 2022

Title: Characterization of Mayer-wave oscillations in functional near-infrared spectroscopy data

Speaker: Maureen Shader, Assistant Professor, SLHS

Abstract: Mayer waves are spontaneous oscillations in arterial blood pressure that can mask cortical hemodynamic responses associated with the neural activity of interest. This presentation will describe a new method to characterize the properties of oscillations in the functional near-infrared spectroscopy (fNIRS) signal generated by Mayer waves using a physiologically informed model of the neural power spectra. The impact of short-channel correction for the attenuation of these unwanted signal components will also be discussed.
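The abstract does not spell out the spectral model, so the following is only a rough sketch of the general idea: a specparam-style fit of an aperiodic (1/f-like) component plus a single Gaussian peak near 0.1 Hz (the typical Mayer-wave frequency) to the power spectrum of one fNIRS channel. All function names, parameter values, and band limits below are assumptions, not the speaker's implementation.

```python
# Rough sketch: characterize a Mayer-wave peak in one fNIRS (HbO) channel by
# fitting an aperiodic background plus one Gaussian peak to the log power spectrum.
import numpy as np
from scipy.signal import welch
from scipy.optimize import curve_fit

def spectral_model(f, offset, exponent, peak_power, peak_freq, peak_width):
    """Aperiodic background (in log10 power) plus one Gaussian peak."""
    aperiodic = offset - exponent * np.log10(f)
    peak = peak_power * np.exp(-((f - peak_freq) ** 2) / (2 * peak_width ** 2))
    return aperiodic + peak

def fit_mayer_peak(hbo, fs):
    """Return fitted parameters for one channel; assumes several minutes of data."""
    f, pxx = welch(hbo, fs=fs, nperseg=int(120 * fs))      # ~120-s segments
    keep = (f > 0.01) & (f < 0.5)                           # band containing Mayer waves
    f, logp = f[keep], np.log10(pxx[keep])
    p0 = [logp.mean(), 1.0, 1.0, 0.1, 0.03]                 # start the peak near 0.1 Hz
    bounds = ([-np.inf, 0, 0, 0.05, 0.005], [np.inf, 5, 10, 0.25, 0.2])
    params, _ = curve_fit(spectral_model, f, logp, p0=p0, bounds=bounds)
    return dict(zip(["offset", "exponent", "peak_power", "peak_freq", "peak_width"], params))
```

The fitted peak frequency, power, and width could then be compared before and after short-channel correction to quantify how much of the Mayer-wave component is removed.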

March 17, 2022 - No Seminar (Spring Break)

March 10, 2022

Title: Preliminary analyses of cortical impact on subcortical and cortical responses to sound 

Speaker: Edward Bartlett, Professor, BIO/BME

Abstract: Concussions and other head traumas often produce diffuse, difficult-to-diagnose pathologies, and affected individuals can experience persistent symptoms. Biomarkers for these injuries are typically poor or invasive. Here, auditory evoked potentials are assessed as a potential biomarker for brain injury and recovery, using sounds likely to generate responses from cortical and subcortical regions.

March 3, 2022

Title: A Comparison of Gain Reduction Estimated From Behavioral Measures and Various Transient-Evoked Otoacoustic Emission Measures as a Function of Broadband Elicitor Duration 

Speaker: William Salloom, PhD Candidate, SLHS (Strickland lab)

Abstract: One mechanism that may support the broad dynamic range in hearing is the medial olivocochlear reflex (MOCR), a bilateral feedback loop at the level of the brainstem that can adjust the gain of the cochlear amplifier. Much of the previous physiological MOCR research has used long broadband noise elicitors. In behavioral measures of gain reduction, a fairly short elicitor has been found to be maximally effective for an on-frequency, tonal elicitor. However, the effect of the duration of broadband noise elicitors on behavioral tasks is unknown. Additionally, MOCR effects measured using otoacoustic emissions (OAEs) have not consistently shown a positive correlation with behavioral gain reduction tasks. This finding seems counterintuitive if both measurements share a common generation mechanism. This lack of a positive correlation may be due to different methodologies being utilized for the OAE and behavioral tasks, and/or to analysis techniques not being optimized to observe a relationship. In the current study, we explored the effects of ipsilateral broadband noise elicitor duration both physiologically and behaviorally in the same subjects, using a forward-masking paradigm. TEOAE measures included the change in magnitude, or in magnitude and phase, for different frequency analysis bands. For the same subjects, a psychoacoustic forward-masking paradigm was used to measure the effects of the elicitor on masking by an off-frequency masker for a 2-kHz signal, and the effect of the elicitor on a signal with 20 ms of silence replacing the masker. The goal is to determine how the duration of a broadband MOCR elicitor affects cochlear gain physiologically and perceptually, and whether there is a relationship between these measures. The current study highlights the importance of choosing appropriate methodology for estimating MOCR strength, particularly with OAEs.

February 24, 2022

Title: Individual differences in personality traits affect performance, frustration, and effort in a speech-in-noise task (Part II)

Speaker: Alexander Francis, Professor, SLHS

Abstract: TBA

February 17, 2022

Title: Estimation of cochlear frequency tuning using forward-masked Compound Action Potentials 

Speaker: Francois Deloche, Postdoctoral Associate, SLHS (Heinz Lab)

Abstract: Frequency selectivity is a fundamental property of the peripheral auditory system; however, the invasiveness of auditory nerve (AN) experiments limits its study in the human ear. Compound Action Potentials (CAPs) associated with forward-masking have been suggested as an alternative means to assess cochlear frequency tuning. Previous methods relied on an empirical comparison of AN and CAP tuning curves in animal models, arguably not taking full advantage of forward-masked CAPs to provide an accurate estimate of frequency selectivity.

At the ARO midwinter meeting, I presented a new method that seeks to directly estimate the quality factor characterizing AN frequency tuning using forward-masked CAPs, without the need for an empirical correction factor. In this talk, I will return to the presentation of the newly developed model, provide further details on the estimation procedure, and discuss possible future developments.

The technique we propose is built on the long-standing convolution model of the CAP but considers the effect of masking, which highlights contributions originating from different cochlear regions. The model produces masking patterns that, once convolved with a unitary response, predict forward-masked CAP waveforms. Model parameters, including those characterizing frequency selectivity, are fine-tuned by minimizing waveform prediction errors across many different masking conditions, yielding robust estimates. The method was applied to click-evoked CAPs recorded at the round window of anesthetized chinchillas. We found a close match between our estimates of frequency selectivity and those derived from AN fiber tuning curves. Beyond this result, the model was able to predict the CAP waveform differences produced by masking the probe, with more than 90% of the variance explained.
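The convolution idea lends itself to a compact numerical sketch. The toy below is not the speaker's implementation: the latency-to-CF map, the Gaussian filter shape, the unitary response, and all parameter values are assumptions made purely to illustrate convolving a masking pattern with a unitary response and fitting a tuning parameter by minimizing prediction error across masker conditions.

```python
# Toy convolution model of a forward-masked CAP (illustration only).
import numpy as np
from scipy.optimize import least_squares

FS = 48000                          # sampling rate (Hz), assumed
T = np.arange(0, 0.008, 1 / FS)     # 8-ms analysis window

def unitary_response(t, f0=1000.0, tau=0.0005):
    """Damped oscillation standing in for the single-unit contribution."""
    return np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)

def masking_pattern(t, q, masker_cf):
    """Toy excitation surviving a masker: a Gaussian filter with quality factor q,
    mapped onto post-probe latency t (sharper tuning = narrower masked region)."""
    cf = 8000.0 * np.exp(-t / 0.003) + 200.0          # crude latency-to-CF map
    bw = masker_cf / q
    return 1.0 - np.exp(-((cf - masker_cf) ** 2) / (2 * bw ** 2))

def predicted_cap(q, masker_cf):
    return np.convolve(masking_pattern(T, q, masker_cf), unitary_response(T))[: T.size]

def fit_q(measured_caps, masker_cfs):
    """Estimate one quality factor from forward-masked CAPs at several masker CFs."""
    def residuals(params):
        (q,) = params
        return np.concatenate([predicted_cap(q, cf) - cap
                               for cf, cap in zip(masker_cfs, measured_caps)])
    return least_squares(residuals, x0=[4.0], bounds=([0.5], [20.0])).x[0]
```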

February 10, 2022 - No Seminar

February 3, 2022 - No Seminar

January 27, 2022 - No Seminar

January 20, 2022 - No Seminar

January 13, 2022

Title: Precision Diagnostics for Personalized Hearing Intervention and Perceptual Training

Speaker: Subong Kim, Postdoctoral Associate, SLHS

Abstract: In audiology clinics, currently, hearing loss is mainly characterized using threshold audiometry, and hearing intervention is provided to maximize audibility while maintaining comfortable loudness. Yet, even with prescriptive amplification, speech understanding abilities in noisy environments vary substantially across individuals with similar audiograms. Because speech-in-noise perception is considerably more complex than detecting a quiet sound, the nature of deficits in suprathreshold hearing likely differs from person to person; however, the physiological bases of such individual variability remain unknown. My research aims to advance our understanding of the precise auditory and cognitive mechanisms leading to impaired hearing, with the intent of improving personalized hearing intervention or providing perceptual training based on each listener's physiology. In this presentation, I will first introduce the notion of a cortical "neural signal-to-noise ratio" that can predict individual speech-in-noise outcomes from noise-reduction (NR) processing. Then, I will discuss my approach to building profiles of peripheral pathophysiology and top-down selective attention efficacy to obtain a detailed characterization of individual hearing loss across listeners with similar hearing sensitivity, using a range of physiological and psychoacoustical measures. I will also describe novel subcortical measures for quantifying individual tolerance to noise and sensitivity to speech-cue distortions induced by NR processing. Next, I will demonstrate the effect of NR on cortical speech processing across brain regions beyond the auditory cortex. Further, I will discuss how neurofeedback training of auditory selective attention can enhance the neural encoding of target speech and improve speech-in-noise performance. Lastly, I will describe future research plans, including how I plan to leverage individual hearing and cognitive profiles to predict behavioral, subcortical, and cortical metrics of speech-in-noise perception outcomes and NR benefits, and to guide individualized intervention.
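As a loose illustration only, one common way to operationalize a cortical "neural SNR" is the ratio, in dB, of the evoked-response amplitude to target-speech onsets versus the evoked-response amplitude to noise onsets. Whether this matches the speaker's exact definition is an assumption, and every name and window below is hypothetical.

```python
# Hypothetical sketch: a cortical "neural SNR" as target-evoked vs noise-evoked
# response amplitude, computed from epoched EEG.
import numpy as np

def evoked_amplitude(epochs, window):
    """epochs: (n_trials, n_samples) EEG; window: slice over the response peak.
    Returns the peak-to-trough amplitude of the across-trial average."""
    avg = epochs.mean(axis=0)[window]
    return avg.max() - avg.min()

def neural_snr_db(target_epochs, noise_epochs, window=slice(100, 300)):
    a_target = evoked_amplitude(target_epochs, window)
    a_noise = evoked_amplitude(noise_epochs, window)
    return 20 * np.log10(a_target / a_noise)
```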

December 9, 2021

Title: Technology innovations for increasing access to hearing healthcare – from concept to commercialization

Speaker: Odile Clavier, Principal Engineer, Creare LLC

Abstract: This presentation will describe the process by which an innovation can mature from a simple idea to a commercial product, including the many challenges that get in the way but also lead to better products. Over the past 15 years, Creare has developed several products that have the potential to greatly increase access to hearing healthcare. For example: the Wireless Automated Hearing Test System is an all-in-one audiometer, sound booth, and headphone, small enough to fit in a backpack or purse, that can measure accurate audiograms almost anywhere; a low-cost wireless otoacoustic emissions probe that works with a smartphone may one day lead to new infant hearing screening programs across the developing world; and open-source tools such as the Tympan platform and TabSINT allow researchers and students to test out new concepts while reaching populations that may be difficult to include in human studies. Collaboration among academia, researchers, clinicians, and a small business provides an ideal environment that ensures scientific rigor, clinical relevance, and a successful transition to the commercial marketplace where all users can benefit from these innovations.

December 2, 2021 - No Seminar (Acoustical Society of America)

November 25, 2021 - No Seminar (Thanksgiving Break)

November 18, 2021

Title: Individual differences in personality traits affect performance, frustration, and effort in a speech-in-noise task

Speaker: Alexander Francis, Associate Professor, SLHS

Authors: Paola Medina Lopez, Timothy Stump, Nicole Kirk, Vincent Jung, Jane E. Clougherty, Alexander L. Francis

Abstract: Individual personality traits may influence psychological and physiological responses to background noise. Here, we assessed participants’ (N = 80) performance on a speech-in-noise arithmetic task along with self-rated effort and frustration and several emotion/personality-related questionnaires, including the NoiseQ assessment of sensitivity to noise and Rotter's locus of control scale. Locus of control reflects the degree to which individuals feel they control the experiences in their life. Those with a more internalized locus of control attribute their experiences to their own choices and actions, while those with a more external locus of control tend to attribute successes and failures to external factors beyond their control. In the experiment, we manipulated the feeling of control that participants had over noise level by allowing half to choose the task difficulty ("easy" or "hard") while assigning the others to a difficulty level without agency. On each trial, participants heard three spoken digits and added them. In half of the trials, one digit was masked by noise. To better assess the impact of choice and perceived difficulty, the signal-to-noise ratio and spoken digits were identical across all conditions. Results suggest an important role for both sensitivity to noise and locus of control. For example, individuals with greater sensitivity to noise expressed greater frustration with the task, and assessed the task as being more effortful, but only when they were not permitted to choose the task difficulty (assigned condition). On the other hand, individuals with a more externally oriented locus of control expressed greater frustration and reported greater effort when they were informed that they were performing the difficult task (whether they chose it or were assigned to it), but were less frustrated and reported lower effort when performing the easy task (whether assigned or chosen). Results will be discussed in terms of implications for future research on noise sensitivity and long-term health.

November 11, 2021

Title: Building the Capacity for Machine Learning in Neuroscience Research

Speaker: Elle O'Brien, PhD, Lecturer & Research Investigator, University of Michigan School of Information

Abstract: Many labs in basic science and translational research are eager to capitalize on the explosion of machine learning developments in the last decade. However, these methods are associated with tremendous technical, organizational, and mathematical complexity that even the world’s wealthiest tech companies struggle to support. This talk will explore ways that scientific (and specifically biomedical) researchers will have to confront new, and old, kinds of complexity to deploy machine learning techniques in the service of discovery.

November 4, 2021

Title: Interactions between peripheral and central measures of temporal coding in a chinchilla model of noise-induced cochlear synaptopathy 

Speaker: Jonatan Märcher-Rørsted, PhD candidate, Technical University of Denmark (DTU)

Abstract: Steady-state electrophysiological responses phase-locked to the carrier or modulation frequencies of an auditory stimulus are reduced with age in humans. Age-related reductions in frequency following responses (FFRs) have been attributed to a decline in temporal processing in the central auditory system. Yet, age-related cochlear synaptopathy may reduce synchronized activity in the auditory nerve, which may also contribute to reduced FFR responses. Here, we investigate the effect of noise-induced cochlear synaptopathy on peripheral and brainstem temporal coding (i.e., the FFR) in a chinchilla model of temporary threshold shift (TTS) by simultaneously recording electrocochleography (ECochG) and (brainstem) FFR responses.

We collected simultaneous electroencephalography (EEG) and ECochG responses from 10 anesthetized chinchillas. Four of the chinchillas were exposed to two hours of 100 dB SPL octave-band noise (centered at 1 kHz), producing a significant TTS measured one day post exposure. This exposure has been shown previously to create a broad region of significant (up to 50%) cochlear synaptopathy in chinchillas, with minimal permanent threshold shift. Electrophysiological responses in exposed chinchillas were measured at least two weeks post exposure. Two other animals were treated as controls. Additionally, we collected data from an older (n=4, 4-5 years of age) group of behavioral chinchillas, which were treated as an “urban aged” group. FFRs to the carrier frequency of 10-ms tone bursts at low (516 Hz), mid (1032 Hz), and high (4064 Hz) frequencies, presented at levels ranging from 40 to 80 dB SPL, were recorded to examine potential level- and frequency-dependent effects. Additionally, responses to tonal frequency sweeps from 0.2-1.2 kHz at 80 dB SPL were collected. Evoked responses to clicks from 0-80 dB SPL were also recorded to quantify level-dependent latencies of different sources in the auditory pathway.

Reduced ECochG responses to the carrier of low-frequency (516 Hz) tones were observed in exposed animals. In peripheral neurophonic ECochG responses, we observed more pronounced reductions at higher levels. Brainstem FFR responses to lower-level, low-frequency tones (516 Hz at 60 dB SPL) showed more pronounced reductions compared to the peripheral neurophonic response in this condition, suggesting a level-dependent interaction between peripheral and central responses to low-frequency tonal stimuli. Reductions of the phase-independent second harmonic of the tonal carrier (two times the fundamental) were also observed in both peripheral and central measures, consistent with a neural origin of the reduced response. Aged chinchillas followed a similar trend as the noise-exposed group in the ECochG responses, but showed further reduction of the FFR measured with a vertical EEG montage. This suggests that aging and noise-induced cochlear synaptopathy could be disentangled by measuring both ECochG and EEG responses simultaneously.
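For readers unfamiliar with these measures, the carrier and second-harmonic (2F) components are typically read off the spectrum of the across-trial averaged response. The sketch below is an assumption-laden illustration, not the authors' pipeline; in particular, summing responses to opposite stimulus polarities before the FFT (a common way to isolate phase-independent components) is an assumed step, and all names are hypothetical.

```python
# Illustrative extraction of FFR/ECochG magnitudes at the carrier (F) and its
# second harmonic (2F) from an averaged response.
import numpy as np

def spectral_peak(avg_response, fs, freq, half_width=10.0):
    """Peak spectral magnitude within +/- half_width Hz of `freq`."""
    spec = np.abs(np.fft.rfft(avg_response))
    freqs = np.fft.rfftfreq(avg_response.size, d=1 / fs)
    sel = (freqs >= freq - half_width) & (freqs <= freq + half_width)
    return spec[sel].max()

def ffr_components(avg_response, fs, carrier=516.0):
    return {"F": spectral_peak(avg_response, fs, carrier),
            "2F": spectral_peak(avg_response, fs, 2 * carrier)}
```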

Disentangling peripheral and central responses is crucial for our understanding of the underlying generation mechanisms of the FFR and its connection to peripheral neural degeneration. These results suggest level- and frequency-dependent interactions between peripheral and central generators in normal, noise-exposed (TTS), and aged animals, which may guide further advancements in diagnostics for peripheral neural degeneration.

October 28, 2021

Title: An alternative aging model produces central auditory deficits that precede peripheral deficits

Speaker: Edward Bartlett, Professor, BIO/BME

Authors: Edward L. Bartlett, Sydney Cason, Aravindakshan Parthasarathy 

Abstract: Oxidative stress and other cellular damage may contribute to presbycusis, or age-related hearing loss, through cellular mechanisms that differ from acoustic overexposure, by having differential effects on the peripheral and central structures of the auditory pathway. Oxidative stress from hyperglycemia, induced by sugars such as D-galactose, is known to cause hearing alterations in animals and humans. One question for the development of presbycusis via oxidative stress is which structures are vulnerable earliest. Here we investigated the effects of D-galactose-induced hyperglycemia on young adult Fischer-344 rats using electrophysiological markers that are sensitive to age-related changes in auditory processing. Baseline evoked potentials were recorded in young rats to test the auditory brainstem response (ABR), envelope following response (EFR), and middle latency auditory evoked response (MLR), followed by daily subcutaneous injections of either 500 mg/kg of D-galactose or an equivalent volume of saline for eight weeks. Animals were reevaluated for the same auditory responses nine weeks and nine months after the initial test. Hearing thresholds assessed using tone ABRs were unaffected by D-galactose during the initial nine-week period but declined significantly more than in normal aging during the following months. EFRs in response to low modulation frequency stimuli and ABR amplitudes of waves IV and V, indicative of rostral brainstem activities, decreased significantly during the initial nine-week period. EFR amplitudes in response to high modulation frequency stimuli and ABR amplitudes of waves I and III decreased significantly during the following months. These results indicate that oxidative stress caused by D-galactose causes aging-like effects on the central auditory system earlier than on the peripheral parts of the pathway, such as the cochlea, auditory nerve, and cochlear nucleus. Thus, hyperglycemia/oxidative stress caused by brief exposure to D-galactose may be considered a model of central auditory aging for researchers. This model produces a window of central auditory decline without the accompanying peripheral decline, decoupling electrophysiological markers of cochlear damage from those of central auditory dysfunction.

October 21, 2021

Title: Pitch processing in the context of cochlear anatomic damage

Speaker: Andrew Sivaprakasam, MD/PhD (MSTP) program, Weldon School of Biomedical Engineering

Abstract: Pitch is a complex psychoacoustic phenomenon that we rely on to communicate with others and to listen to music. While pitch perception has been studied for decades, the exact neurophysiological mechanism responsible for coding this percept has not yet been determined. Because of this, our knowledge of how specific forms of cochlear impairment impact pitch perception is limited. There are three categories of hypotheses that attempt to explain pitch coding in terms of the tonotopic organization of our auditory system and the temporal information present in neural firing: place, time, and place-time. However, these hypotheses are rarely assessed in the context of commonly investigated cochlear anatomic deficits: outer hair cell, inner hair cell, and auditory nerve synapse damage. Understanding how these deficits affect pitch coding and perception is instrumental in designing better hearing-assistive technology. Here, I will propose some aims and a cross-species experiment to assess pitch processing in the context of cochlear damage, with the intent of receiving feedback and spurring pitch-related discussion.

October 14, 2021 - No Seminar (October Break)

October 7, 2021

Title: Neurofeedback Training of Auditory Selective Attention Enhances Speech-In-Noise Perception 

Speaker: Subong Kim, Postdoctoral Associate, SLHS

Abstract: Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Emerging evidence shows large variance in attentional control during SiN tasks, even among normal-hearing listeners. Yet whether training can enhance the efficacy of attentional control and, if so, whether the training effects transfer to performance on a SiN task has not been explicitly studied. Here, we introduce a neurofeedback training paradigm designed to reinforce the attentional modulation of auditory evoked responses. Young normal-hearing adults attended one of two competing speech streams: five repetitions of the word “up” spoken in a regular rhythm by a female speaker, or four repetitions of the word “down” spoken by a male speaker. Our electroencephalography-based attention decoder classified each trial using a template-matching method based on pre-defined patterns of cortical auditory responses elicited by either an “up” or “down” stream. The result of the decoding was provided on the screen as online feedback. After four sessions of this neurofeedback training over four weeks, the subjects exhibited improved attentional modulation of evoked responses to the training stimuli, as well as enhanced cortical responses to target speech and better performance during a post-training SiN task. Such training effects were not found in a placebo group that underwent similar attention training, except that feedback was given based only on behavioral accuracy. These results indicate that neurofeedback training may reinforce the strength of attentional modulation, which likely improves SiN understanding. Our findings suggest a potential rehabilitation strategy for SiN deficits.
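As a minimal sketch of template matching in general, not the authors' decoder (template construction, channel weighting, and preprocessing are all omitted, and the names are hypothetical), a single trial can be assigned to whichever pre-defined evoked-response template it correlates with more strongly, and that label can drive the on-screen feedback.

```python
# Generic template-matching classifier for one trial (illustration only).
import numpy as np

def decode_trial(trial, template_up, template_down):
    """All inputs are 1-D arrays (e.g., channel-averaged evoked responses)."""
    r_up = np.corrcoef(trial, template_up)[0, 1]
    r_down = np.corrcoef(trial, template_down)[0, 1]
    return ("up", r_up) if r_up >= r_down else ("down", r_down)

# Toy call with random data, just to show the expected shapes:
label, score = decode_trial(np.random.randn(500),
                            np.random.randn(500), np.random.randn(500))
```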

September 30, 2021

Title: Resources that are relevant in the entire hierarchy of life?  Information is one… 

Speaker: Jeffrey Lucas, Professor, Department of Biological Sciences

Abstract: Biology is a hierarchical phenomenon. The investigation of biological systems is also, necessarily, hierarchical. Unfortunately, there is relatively little crosstalk across disciplines that address biological phenomena at different scales. For example, structural biologists don’t often talk to community ecologists.  NSF funded a series of country-wide workshops that focused on this issue with the hope of finding possible research protocols that bridge disciplines.  Our group in these workshops focused on the possibility that scale-independent resources might help scale-dependent scientific inquiry.  I’ll talk about the 4 “resources” we identified: energy, conductance, storage, and information, with an emphasis on the idea that information is truly a resource that is critical to all biological systems irrespective of scale.  I offer 3 (+1) examples of how information organizes systems from tiny to massive. You’ll also see where niche construction fits into the big, and sometimes into the smaller, picture.

September 23, 2021

Title: A system-identification approach to characterize cortical temporal coding

Speaker: Ravinderjit Singh, PhD candidate, BME/MSTP (Bharadwaj lab)

Abstract: Many studies have investigated how subcortical temporal processing, measured via brainstem evoked potentials (e.g., ABRs and FFRs), may be influenced by aging, hearing loss, musicianship, and other auditory processing disorders. However, human studies of cortical temporal processing are often restricted to the 40 Hz steady-state response. One possible reason for the limited investigation is the lack of a fast and easy method to characterize temporal processing noninvasively in humans over a range of modulation frequencies. Without a broadband characterization of cortical temporal processing, it is difficult to disentangle the different components that may contribute to the overall EEG response and to discover their respective functional correlates. Here, we use a system-identification approach where white noise, modulated using a modified maximum length sequence (m-seq), is presented to quickly obtain a stereotypical and repeatable auditory cortical “impulse” response (ACR) capturing broadband cortical modulation coding (up to 75 Hz) with EEG. Using principal component analysis (PCA) across different EEG sensors, we found that the overall response is composed of five components that can be distinguished by virtue of latency and/or scalp topography. Furthermore, the components spanned different frequency ranges within the overall temporal modulation transfer function (tMTF), and differed in their sensitivities to manipulations of attention and/or task demands. Interestingly, we also found that the ACR shows nonlinear behavior, in that the relative magnitudes of the constituent components are different when measured using broadband modulations versus a series of sinusoidal modulations.
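As a rough illustration of the system-identification logic only (not the authors' pipeline; preprocessing, epoching, artifact rejection, and scaling are omitted, and all names are assumptions): because an m-sequence has an impulse-like autocorrelation, circularly cross-correlating the recorded EEG with the modulation sequence recovers an estimate of the cortical "impulse" response, and a PCA/SVD across sensors can then separate candidate components.

```python
# Sketch: m-sequence deconvolution of an ACR estimate, then PCA across sensors.
import numpy as np

def acr_from_mseq(eeg, mseq):
    """eeg: (n_channels, n_samples); mseq: (n_samples,) +/-1 modulation sequence.
    Returns the circular cross-correlation (n_channels, n_samples) = ACR estimate."""
    Mseq = np.fft.rfft(mseq)
    Eeg = np.fft.rfft(eeg, axis=1)
    return np.fft.irfft(Eeg * np.conj(Mseq), n=eeg.shape[1], axis=1) / mseq.size

def pca_components(acr, n_keep=5):
    """SVD across sensors; returns spatial patterns and component time courses."""
    acr = acr - acr.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(acr, full_matrices=False)
    return u[:, :n_keep], s[:n_keep, None] * vt[:n_keep]   # topographies, time courses
```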

September 16, 2021

Title: The effect of broadband elicitor duration on transient-evoked otoacoustic emissions and a behavioral measure of gain reduction

Speaker: William Salloom, PhD candidate, SLHS (Strickland lab)

Abstract: Humans are able to encode sound over a wide range of intensities despite the fact that neurons in the auditory periphery have much smaller dynamic ranges. A feedback system that originates at the level of the brainstem may help solve this dynamic range problem: the medial olivocochlear reflex (MOCR), a bilateral, sound-activated system that decreases the amplification of sound by the outer hair cells in the cochlea. Much of the previous research on the MOCR in animals and humans has been physiologically based and has used long broadband noise elicitors. However, the effect of the duration of broadband noise elicitors on similar behavioral tasks is unknown. Additionally, MOCR effects measured using otoacoustic emissions (OAEs) have not consistently shown a positive correlation with behavioral gain reduction tasks. This may be due to different methodologies being utilized for the OAE and behavioral tasks, and/or to analysis techniques not being optimized to observe a relationship. In the current study, we explored the effects of ipsilateral broadband noise elicitor duration both physiologically and behaviorally in the same subjects. Both measures used similar stimuli in a forward-masking paradigm. We tested two research questions: 1) Are the time constants of the physiological and behavioral measures similar to one another (thus reflecting the same mechanism)? 2) Can the changes in physiological responses produced by the elicitor predict the changes in behavioral responses in the same subjects, as a function of elicitor duration? By keeping our stimuli and subjects consistent throughout the study, as well as using various methods to analyze our OAE data, we have optimized the conditions to determine the relationship between physiological and behavioral measures of gain reduction. The findings for both of these questions will be discussed. Understanding these effects is not only of fundamental importance to how the auditory system adapts to sound over time, but is also of practical importance in laboratory settings that use broadband noise to elicit the MOCR.
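One of the simpler OAE analyses alluded to above, the change in TEOAE magnitude within a frequency analysis band with versus without the elicitor, can be sketched as follows. The band edges, windowing, and noise-floor handling are all assumptions; this is not the authors' analysis code.

```python
# Hypothetical band-wise TEOAE magnitude change (putative gain reduction), in dB.
import numpy as np

def band_magnitude_db(teoae, fs, band):
    """RMS spectral magnitude (dB, arbitrary reference) of a TEOAE within one band."""
    spec = np.fft.rfft(teoae)
    freqs = np.fft.rfftfreq(teoae.size, d=1 / fs)
    sel = (freqs >= band[0]) & (freqs < band[1])
    return 20 * np.log10(np.sqrt(np.mean(np.abs(spec[sel]) ** 2)))

def gain_reduction_db(teoae_baseline, teoae_elicited, fs, band=(1000, 2000)):
    """Positive values = TEOAE magnitude reduced by the elicitor."""
    return band_magnitude_db(teoae_baseline, fs, band) - band_magnitude_db(teoae_elicited, fs, band)
```

Repeating this computation for TEOAEs collected at each elicitor duration yields the physiological growth function that can then be compared with the behavioral gain-reduction estimates.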

September 9, 2021

Title: Age-related reduction in frequency-following responses as a potential marker of cochlear neural degeneration 

Speaker: Jonatan Märcher-Rørsted, PhD candidate, Technical University of Denmark (DTU)

Abstract: Previous studies have reported an age-related reduction in frequency-following responses (FFRs) in listeners with clinically normal audiometric thresholds. This has been argued to reflect an age-dependent decline in neural synchrony in the central auditory system. However, age-dependent degeneration of auditory nerve (AN) fibers may have little effect on audiometric sensitivity and may yet affect the suprathreshold coding of temporal information. This peripheral loss of temporal information may not be recovered centrally and may thus also contribute to reduced phase-locking accuracy in the auditory midbrain. Here, we investigated whether age-related reductions in the FFR could, at least in part, reflect age-dependent peripheral neural degeneration.

We combined human electrophysiology and auditory nerve (AN) modeling to investigate whether age-related changes in the FFR would be consistent with peripheral neural degeneration. A reduction in the FFR response in the older listeners was found across stimulation frequencies for both sweep and static pure-tone stimulation. Older listeners also showed significantly shallower middle-ear muscle reflex (MEMR) growth functions compared to the younger listeners, which could indicate a loss of low-spontaneous-rate fibers in the AN. Despite having clinically normal audiometric thresholds, the older listeners had significantly reduced sensitivity at frequencies above 8 kHz compared to the young group. The computational simulations suggested that such experimental results can be accounted for by neural degeneration already at the level of the AN, whereas a loss of sensitivity due to outer hair cell (OHC) dysfunction at higher frequencies could not explain the reduced FFR observed in the older listeners. These results are consistent with a peripheral source of the FFR reductions observed in older normal-hearing listeners, and indicate that FFRs at lower carrier frequencies may potentially be a sensitive marker of peripheral neural degeneration.

September 2, 2021

Title: Deep Neural Networks for Speech Enhancement in Cochlear Implants

Speaker: Agudemu Borjigin, PhD candidate, Weldon School of Biomedical Engineering

Abstract: Despite excellent performance in quiet, cochlear implants (CIs) usually fail to restore normal levels of intelligibility in noisy environments. Current state-of-the-art signal processing strategies in CIs provide limited benefits in terms of noise reduction or masking release. Recent developments in the field of machine learning have produced deep neural network (DNN) models with impressive performance in both speech enhancement and separation tasks. With sponsorship from the hearing implant manufacturer MED-EL, this work was an exploratory attempt to evaluate the use of DNN models as front-end pre-processors for enhancing CI users’ speech understanding in noisy environments. This talk will focus on model architectures, the dataset, the workflow (tools and resources), objective evaluation results, and pilot data collected from CI subjects.
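The specific architectures are left for the talk; as a generic illustration only, the sketch below shows one common way a DNN is used as a front-end denoiser: a small recurrent network estimates a time-frequency mask from the noisy log-magnitude STFT, and the masked spectrogram is resynthesized before it reaches the CI processing chain. The layer sizes, STFT settings, and names are assumptions, not the models evaluated in this work.

```python
# Illustrative mask-estimation denoiser (PyTorch), not the MED-EL/DNN models from the talk.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq, hidden, num_layers=2, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_logmag):             # (batch, frames, n_freq)
        h, _ = self.rnn(noisy_logmag)
        return self.out(h)                       # mask in [0, 1], same shape

def enhance(noisy_wave, model, n_fft=512, hop=128):
    """Apply the estimated mask to the noisy STFT and resynthesize the waveform."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(noisy_wave, n_fft, hop_length=hop, window=window, return_complex=True)
    logmag = torch.log1p(spec.abs()).transpose(0, 1).unsqueeze(0)   # (1, frames, n_freq)
    mask = model(logmag).squeeze(0).transpose(0, 1)                 # (n_freq, frames)
    return torch.istft(spec * mask, n_fft, hop_length=hop, window=window)
```

In practice such a model would be trained on pairs of noisy and clean speech (e.g., with a masked-spectrum or SNR-based loss) before being evaluated with objective metrics and CI listeners.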

August 26, 2021

We are excited to kick off the Fall 2021 edition of SHRP starting August 26th! This Fall, the seminars will be held in person. However, an informally managed Zoom option will be available to accommodate remote attendees and/or (sometimes) remote speakers.

Our inaugural "seminar" for this season will be an update on the PIIN Grand Challenges project towards establishing infrastructure for precision auditory neuroscience at Purdue.

Title: Audiology Research Diagnostic Core (ARDC): Updates and discussion on data sharing

Speaker(s): Discussion led by Michael Heinz, Hari Bharadwaj, and Andrew Sivaprakasam (SLHS/BME)

 


The working schedule is available here:  https://purdue.edu/TPAN/hearing/shrp_schedule

The titles and abstracts of the talks will be added here:  https://purdue.edu/TPAN/hearing/shrp_abstracts

 

Speech, Language, & Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, PH: (765) 494-3789
