Brown Bag

2018-2019 Talks

LYLE 1150: 12:30-1:30pm

March 4, 2019

Elizabeth (Liz) Heller Murray, M.S., CCC-SLP, Boston University

Vocal motor control in school-age children

Although it is well known that the vocal mechanism changes throughout development, little is known about vocal motor control in children. Understanding vocal motor control in vocally healthy children will provide a necessary foundation upon which pediatric-specific voice therapies can be developed and optimized. The first study therefore investigates vocal motor control in vocally healthy school-age children and vocally healthy adults. This work is based on the theoretical framework of the Directions Into Velocities of Articulators (DIVA) model, a widely used neural network model of speech motor control. Responses to perturbation of fundamental frequency (fo) were examined during both unexpected, rapid changes in fo and changes sustained over time. The magnitude, timing, and direction of the vocal responses were compared to fo auditory acuity. Based on the results of the first study, a subsequent study was developed to examine potential auditory acuity differences between school-age children with and without voice disorders. Overall, results from both studies inform our knowledge of differences in vocal motor control between children and adults and begin a discussion of how this foundation can be applied to children with voice disorders.
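In the perturbation literature, vocal responses are usually expressed in cents relative to a pre-perturbation baseline, using the conversion cents = 1200 * log2(f / f_ref). As a minimal, illustrative Python sketch (the function name, frame rate, and baseline window are assumptions, not the study's analysis code):

```python
import numpy as np

def fo_response_cents(fo_trace, frame_rate, baseline_s=0.05):
    """Express an fo trace (Hz) as deviation in cents from its
    pre-perturbation baseline: cents = 1200 * log2(f / f_ref)."""
    fo_trace = np.asarray(fo_trace, dtype=float)
    n_base = max(1, int(baseline_s * frame_rate))
    f_ref = fo_trace[:n_base].mean()  # mean fo before the shift
    return 1200.0 * np.log2(fo_trace / f_ref)

# Toy trace: 200 Hz baseline, then a small compensatory rise
trace = np.concatenate([np.full(50, 200.0), np.linspace(200.0, 203.0, 150)])
cents = fo_response_cents(trace, frame_rate=1000)
print(f"Peak response magnitude: {cents.max():.1f} cents")  # ~25.8 cents
```

Response magnitude would then be the peak of this trace, timing the first frame exceeding a threshold, and direction its sign (opposing or following the shift).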

February 25, 2019

Ryan Peters, Justin Kueser, Arielle Borovsky 

Vocabulary structure, attentional biases, and word learning in 24- to 30-month-old toddlers

This project explores how the semantic connectivity of known and novel objects influences patterns of pre-labeling attention and subsequent object-label processing and learning outcomes in 24- to 30-month-old toddlers. A secondary goal is to determine whether any semantic connectivity effects are related to individual differences such as age, attentional skill, word-learning skill, or the lexico-semantic connectivity characteristics of the toddler's productive vocabulary. We explore these questions using a combination of graph-theoretic semantic network modeling and eye-tracking methodologies.
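As a rough illustration of the graph-theoretic side of this approach, a semantic network can be built by linking words that share features and then summarizing each word's connectivity; everything below (the toy feature sets and the shared-feature linking rule) is an assumption for this sketch, not the project's actual model:

```python
import networkx as nx

# Toy feature norms: each word mapped to a set of semantic features
features = {
    "dog":   {"animal", "furry", "pet"},
    "cat":   {"animal", "furry", "pet"},
    "horse": {"animal", "furry", "large"},
    "cup":   {"kitchen", "holds_liquid"},
    "spoon": {"kitchen", "metal"},
}

# Link two words if they share at least one feature -- one common
# graph-theoretic operationalization of semantic relatedness
G = nx.Graph()
G.add_nodes_from(features)
words = list(features)
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        if features[w1] & features[w2]:
            G.add_edge(w1, w2)

# Degree (number of semantic neighbors) is one simple connectivity index
for w in words:
    print(w, G.degree[w])
```

A known or novel object's connectivity to the rest of the vocabulary can then be related to looking behavior measured with eye tracking.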

February 18, 2019

No Brown Bag this week

February 11, 2019

No Brown Bag this week

February 4, 2019

William Salloom

Physiological and Psychoacoustic Measures of Two Different Auditory Efferent Systems

The human auditory pathway has two efferent systems that can dynamically adjust our ears to incoming sound at the periphery. One system, known as the middle-ear muscle reflex (MEMR), causes contraction of the muscles in the middle ear in response to loud sound and decreases the transmission of energy to the cochlea. A second system, known as the medial olivocochlear reflex (MOCR), can decrease the amplification of sound by the outer hair cells in the cochlea. While these systems have been studied in humans and animals for decades, their functional roles are still under debate. This is especially true of their roles in auditory perception. One significant factor that has limited our understanding is that both efferent systems are activated by sound, so separating MOCR effects from MEMR effects is crucial for isolating their individual contributions to hearing. In this talk, I will discuss the similarities and differences between these two systems and how their effects have been measured at different stages of the auditory pathway. Next, I will discuss my strategy for studying both systems, physiologically and behaviorally, as well as how I plan to analyze the data. Ultimately, the goal of my project is to characterize both systems in humans with normal hearing, which may provide key insight into their roles.

January 7, 2019

A short organizational meeting

December 3, 2018

Jill Lany (Notre Dame)

How Can Infants Learn Words From Statistics?

Accumulating lexical knowledge is a fundamental achievement in early language development. Lexical development is likely to draw on multiple learning systems, including those involved in tracking sequential structure within speech and those involved in detecting relations between speech and other features of the environment. I will discuss recent research highlighting how infants' ability to track different kinds of statistical regularities may support both of these aspects of word learning. First, tracking statistics within speech, such as frequency and co-occurrence, may support learning both associative and referential mappings between words and their referents. Second, overlap in the organization of structure within speech and in the environment may facilitate determining what words mean, and may even help lend speech its unique referential character. I will also consider how the relative importance of these processes in word learning may change across development.
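One of the within-speech statistics mentioned here, co-occurrence, is classically measured as a forward transitional probability, TP(B|A) = count(A,B) / count(A): syllable pairs inside words have high TPs, while pairs spanning word boundaries have lower ones. A minimal sketch (the toy syllable stream is invented for illustration):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability TP(B|A) = count(A,B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream built from two "words", bi-da-ku and pa-do-ti
stream = "bi da ku pa do ti bi da ku bi da ku pa do ti pa do ti bi da ku".split()
for (a, b), tp in sorted(transitional_probabilities(stream).items()):
    print(f"TP({b} | {a}) = {tp:.2f}")
```

Within-word pairs (bi-da, da-ku, pa-do, do-ti) come out at TP = 1.00, while boundary-spanning pairs are lower, which is the cue infants are thought to exploit.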

November 26, 2018

Arielle Borovsky

Conversation on Open Science

November 12, 2018

Laurence Leonard & Patricia Deevy

Retrieval-Based Word Learning in Young Children with Developmental Language Disorder and Their Peers with Typical Language Development

For many years, scholars have noted that long-term retention improves significantly when learners frequently test themselves on the new material rather than engage in continuous study with no intermittent testing. This longstanding observation has recently seen a resurgence of research interest. In this presentation, we will discuss a collaborative project in which we apply concepts of repeated testing (retrieval) to the study of novel word learning by preschoolers with developmental language disorder (DLD) and their peers with typical language development. Each of the experiments reported involved a within-subjects design in which half of the novel words were presented in a repeated-retrieval condition hypothesized to facilitate long-term retention, and the other half were presented in a comparison condition that approximated traditional procedures. Children were presented with both the novel words themselves (word forms) and their arbitrary definitions (meanings). Testing occurred immediately after the learning period and one week later, and involved word form and meaning recall, recognition (picture identification), and neural responses (N400 event-related brain potentials) on a match-mismatch task. Children showed enhanced recall for novel words taught in the repeated-retrieval conditions. Furthermore, larger N400s were elicited for mismatches involving words in the repeated-retrieval conditions than for those in the comparison conditions, indicating that priming for lexical access was stronger when children learned the novel words with repeated retrieval. Although the children with DLD appeared to have weaker initial encoding than their peers, they benefited as much from repeated retrieval as their peers did and were indistinguishable from them in their ability to retain information from immediate testing to testing one week later. We are pursuing additional studies along these lines because the findings have important implications: along with advocating that children hear many words in their input, we may have reason to promote child-friendly activities that prompt children to practice retrieving the words they have recently heard.

November 5, 2018

Justin Kueser

Sound-meaning systematicity in early word learning

Early word learning relates both to a word's semantic content and to its phonological form. Young children tend to learn highly salient, concrete words made up of sounds within their phonological inventories before other words. Young children have also been shown to attend to how semantics and phonology systematically vary across words and to use this information as a cue to support word learning in controlled experimental settings across a wide range of ages. However, less is known about whether children use sound-meaning systematicity to learn words in naturalistic settings or in languages that lack the large classes of sound-symbolic words found in languages such as Japanese. Working from findings suggesting that the English language as a whole demonstrates more sound-meaning systematicity than would be expected by chance, we investigated the role that sound-meaning systematicity plays in word learning in a large sample of administrations of the MacArthur-Bates Communicative Development Inventories in American English, British English, and Mexican Spanish. We found that in each of the languages, across a wide range of vocabulary sizes, children's vocabularies tended to demonstrate more sound-meaning systematicity than would be expected based on word length, part of speech, phonotactic probability, neighborhood density, and consonant age of acquisition. We also found that the use of systematicity was relatively stable for smaller vocabularies but tended to diverge across languages at larger vocabulary sizes. In addition, we measured the contribution of individual words to sound-meaning systematicity across vocabulary development in a series of normative vocabularies and investigated how their contribution influenced the trajectory of word learning. Depending on the language and stage of vocabulary growth, we found that new words added to the normative vocabularies demonstrated variable relationships between their semantic and phonological similarity to known words and those known words' contributions to overall sound-meaning systematicity. Variations in these relationships during vocabulary growth may be explained by growth in different semantic or phonological neighborhoods, with words that contribute more to sound-meaning systematicity being more lexically prominent in the early stages of expansion into new neighborhoods and less lexically prominent as those neighborhoods become more dense.
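The abstract does not spell out the systematicity metric, but a common operationalization in this literature correlates pairwise phonological distances with pairwise semantic distances across a vocabulary and assesses chance with a Mantel-style permutation test. A compact, illustrative sketch (the function names and the choice of Levenshtein and Euclidean distances are assumptions):

```python
import itertools
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1,                           # deletion
                          d[i, j - 1] + 1,                           # insertion
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[-1, -1]

def systematicity(forms, vectors, n_perm=1000, seed=0):
    """Correlation between pairwise form and meaning distances, with a
    p-value from a Mantel-style permutation of the meaning assignments."""
    pairs = list(itertools.combinations(range(len(forms)), 2))
    form_d = np.array([edit_distance(forms[i], forms[j]) for i, j in pairs])
    sem_d = np.array([np.linalg.norm(vectors[i] - vectors[j]) for i, j in pairs])
    r_obs = np.corrcoef(form_d, sem_d)[0, 1]
    rng = np.random.default_rng(seed)
    perm_rs = np.empty(n_perm)
    for k in range(n_perm):
        idx = rng.permutation(len(forms))
        perm_d = np.array([np.linalg.norm(vectors[idx[i]] - vectors[idx[j]])
                           for i, j in pairs])
        perm_rs[k] = np.corrcoef(form_d, perm_d)[0, 1]
    return r_obs, float(np.mean(perm_rs >= r_obs))

# Toy usage: phoneme tuples and two-dimensional "meaning" vectors
forms = [("d", "o", "g"), ("d", "o", "l"), ("k", "a", "t"), ("k", "a", "p")]
vecs = [np.array(v, float) for v in ([1, 0], [1, 0.2], [0, 1], [0.1, 1])]
print(systematicity(forms, vecs, n_perm=200))
```

The abstract's controls (word length, phonotactic probability, and so on) would enter as covariates or matched baselines on top of this basic measure.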

October 29, 2018

Lisa Rague

Acoustic Properties of Early Vocalizations in Infants with Fragile X Syndrome

Fragile X syndrome (FXS) is a neurogenetic syndrome characterized by cognitive impairments and high rates of autism spectrum disorder (ASD). FXS is often used as a model for exploring mechanisms and pathways of symptom expression in ASD because of the high prevalence of ASD in this population and the known single-gene cause of ASD in FXS. Early vocalization features, including volubility, canonical complexity, vocalization duration, and vocalization pitch, have shown promise in detecting ASD in idiopathic ASD populations but have yet to be extensively studied in a population with a known cause of ASD, such as FXS. The present study characterizes early vocalization features in FXS, demonstrating how these features are associated with language ability and ASD outcomes and highlighting how these features in FXS may diverge from patterns observed in typically developing (TD) populations. We coded vocalization features during a standardized child-examiner interaction in 39 nine-month-old infants (22 FXS, 17 TD) who were then followed up at 24 months to determine developmental and clinical outcomes. Results provide preliminary evidence that infants with FXS demonstrate patterns of associations with language outcomes that diverge from those observed in typical development, and that certain vocalization features are associated with later ASD outcomes in the FXS group. Characterizing the associations of early vocalization features with ASD outcomes in FXS can inform mechanisms of ASD development that can then be tested broadly in other etiologically distinct populations at risk for ASD. Thus, further characterization of these early vocalization features in typical and atypical development may lead to improved early identification methods, treatment approaches, and overall well-being of individuals in the ASD population.
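As a practical aside on how duration and pitch features like these can be extracted, one widely used tool is Praat, scriptable from Python via the parselmouth package. A sketch under the assumption that each vocalization has already been segmented into its own WAV file (the pitch floor and ceiling below are placeholders, not the study's settings):

```python
import numpy as np
import parselmouth  # Python interface to Praat (pip install praat-parselmouth)

def vocalization_features(wav_path):
    """Duration and pitch summary for one segmented infant vocalization."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch(pitch_floor=150.0, pitch_ceiling=800.0)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]  # Praat reports unvoiced frames as 0 Hz; drop them
    return {
        "duration_s": snd.get_total_duration(),
        "mean_f0_hz": float(np.mean(f0)) if f0.size else float("nan"),
        "f0_range_hz": float(np.ptp(f0)) if f0.size else float("nan"),
    }
```

Volubility and canonical complexity, by contrast, are typically derived from human or automated coding of the number and type of vocalizations rather than from the waveform alone.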

October 22, 2018

Jordan Oliver, Ph.D. Candidate

Do orienting responses predict annoyance from irrelevant sounds?

Unexpected sounds are distracting and can be annoying, but individuals may differ in their susceptibility to them. Irrelevant sounds occurring at sparse temporal intervals induce a psychophysiological orienting response reflecting involuntary capture of attention away from the primary task. We hypothesize that the frequency and/or magnitude of individual listeners' orienting responses to irrelevant sounds will predict annoyance ratings and task performance in distracting noise. Participants read essays while seated in a comfortable chair in a sound-shielded booth, facing a semicircular array of six loudspeakers located 1.5 m away at 30°, 60°, and 90° to the left and right. Unintelligible background speech (the International Speech Test Signal, ISTS) played at 60 dB(A) SPL from each loudspeaker (unsynchronized). At 50-70 s intervals, one of 12 non-speech sounds (from the International Affective Digitized Sounds, IADS) played for 6 s from one loudspeaker at approximately 70 dB(A) SPL. The order and location of the sounds were randomized, but each sound played from each loudspeaker exactly once over the experiment (72 trials, ~80 minutes). Cardiovascular, electrodermal, electro-ocular, and bilateral posterior auricular muscle activity were recorded to quantify the orienting response. Behavioral measures of reading comprehension, noise sensitivity, personality traits, and subjective effort and annoyance were also collected and will be related to the physiological measures. This is a preview of an invited talk to be presented at the Acoustical Society of America meeting on November 5th.
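The counterbalancing constraint described above (each of the 12 sounds from each of the 6 loudspeakers exactly once, in random order) is straightforward to make concrete; a minimal sketch, not the study's presentation code:

```python
import itertools
import random

def make_trial_order(n_sounds=12, n_speakers=6, seed=1):
    """Every (sound, speaker) pairing exactly once, in shuffled order:
    12 sounds x 6 loudspeakers = 72 trials."""
    trials = list(itertools.product(range(n_sounds), range(n_speakers)))
    random.Random(seed).shuffle(trials)
    return trials

trials = make_trial_order()
assert len(trials) == 72 and len(set(trials)) == 72  # each pairing once
```

A full implementation would likely add further constraints a plain shuffle does not guarantee, such as avoiding immediate repeats of the same sound or loudspeaker.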

August 27, 2018

Alexander Francis, PhD

Open discussion on doctoral students' resources, interests, and needs related to coding and learning to code (MATLAB, R, Python, etc.)
