Brown Bag

2018-2019 Talks

LYLE 1150: 12:30-1:30pm

April 22, 2019

Girija Kadlaskar (with Keehn, B., Seidl, A., Tager-Flusberg, H., & Nelson, C.A.)

Caregiver-infant tactile communication in infants at risk for Autism Spectrum Disorder

Background: Caregivers of children diagnosed with or at risk for autism spectrum disorder (ASD) may modify their interactive style to adapt to their child’s needs. We investigated the frequency of tactile input presented to infants at high and low risk for ASD, along with the percentage of touch aligned with speech (touch+speech). Touch is of particular interest because it forms the basis of early caregiver-infant interactions. For instance, greater amounts of maternal affectionate touch in early development are associated with increases in infant smiles and vocalizations and predict later cognitive and neurobehavioral development. Yet prior findings indicate that maternal touch frequency decreases after 6 months of age, mainly because, as infants become more socially competent and physically independent, mothers start using other forms of communication, primarily speech, to interact with them. However, it remains unclear whether caregivers of infants at risk for ASD use touch and associated speech differently from controls. It is also unclear whether mothers’ use of touch or touch+speech input is sensitive to infants’ responsiveness, given prior research showing that high-risk infants who later received a diagnosis of ASD were less responsive to maternal touch.

Objectives: (1) To examine the frequency of touch and the percentage of touch aligned with speech provided to 12-month-olds at high risk for autism (HRA) compared to low-risk comparison (LRC) infants. (2) To examine infant responsiveness to touch and to touch+speech alignment.

Methods: Data from 58 mother-infant dyads (HRA = 31, LRC = 27) were selected from a larger sample obtained as part of a longitudinal study. Dyads participated in 10-minute play sessions using identical sets of toys and were instructed to play as they would at home. Trained coders, blind to group membership, evaluated the frequency of caregiver-initiated touches to infants during play interactions, along with maternal speech and infants’ looking behaviors before, during, and after each touch.

Results: Independent-samples t-tests revealed no difference in the frequency of touch delivered to infants in the HRA (M = 19.87, SD = 9.30) and LRC (M = 16.18, SD = 6.36) groups, t(56) = -1.78, p = 0.09. However, the percentage of touch+speech alignment was significantly higher in the HRA group (42.4%) than in the LRC group (34.7%), p = 0.03. Lastly, infants in both groups responded similarly to touch, t(56) = 0.19, p = 0.8, and to touch+speech input, t(56) = -0.20, p = 0.8.

Discussion: Mothers in the HRA and LRC groups deliver comparable amounts of touch to their 12-month-olds. However, the percentage of touch+speech alignment is higher in the HRA group. This difference is not attributable to infants’ responsiveness to either touch or touch+speech input. One possible explanation for the greater touch+speech alignment in the HRA group is that mothers draw on strategies from their experience of interacting with their older child with ASD, rather than responding to HRA infants’ responsivity to specific types of input at 12 months. In other words, differences in touch+speech alignment may reflect mothers’ interactive styles with infants at risk for ASD rather than infant behaviors. These findings have broader implications for caregiver-infant interactions in ASD, since providing richer multimodal input has been suggested to promote learning in typical development.
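The group comparisons reported in the Results are independent-samples t-tests. As a minimal sketch of that kind of analysis in Python (using NumPy and SciPy, on hypothetical data whose group sizes, means, and SDs only loosely echo the reported statistics, not the actual dyad data):

```python
# Minimal sketch of an independent-samples t-test like those reported above.
# The data below are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hra_touch = rng.normal(loc=19.87, scale=9.30, size=31)  # hypothetical HRA touch counts
lrc_touch = rng.normal(loc=16.18, scale=6.36, size=27)  # hypothetical LRC touch counts

t, p = stats.ttest_ind(hra_touch, lrc_touch)            # equal-variance t-test
print(f"t({hra_touch.size + lrc_touch.size - 2}) = {t:.2f}, p = {p:.3f}")
```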

April 15, 2019

Elizabeth Roepke

Perception of Own Speech Errors among Preschoolers

This study examined how preschoolers with and without s/sh collapse perceived their own speech containing these sounds. Preschoolers completed an identification task with s/sh minimal pairs (sip/ship) while listening to their own speech and the speech of others. Overall, preschoolers with speech sound disorder who collapsed s/sh discriminated these sounds poorly in their own recorded speech. 

April 8, 2019

Professor Amanda Seidl, Ph.D.

Using OSF to increase the openness, integrity, and reproducibility of your work

This is a training session. I will briefly discuss the advantages of using OSF, but will spend most of the brown bag going over how to set up a project in OSF. It would be a good idea to bring a laptop and a project you are working on so that we can set yours up together.

April 1, 2019

Sarah Hargus Ferguson, Ph.D., CCC-A, University of Utah, Dept. of Communication Sciences and Disorders

What *might* make clear speech clear: Lessons learned from the Ferguson Clear Speech Database

Extensive acoustic and perceptual analyses have been carried out on the materials from the Ferguson Clear Speech Database (FCSD), which was recorded at Indiana University in 2002. The FCSD consists of 41 untrained talkers reading 188 sentences under instructions first to speak in a manner “as much like your normal conversational style as possible” and later to “speak clearly, so that a hearing-impaired person would be able to understand you.” My intent in developing the FCSD was to exploit the expected wide acoustic and perceptual variability among the talkers and use a talker-differences approach to answer the question, “What makes clear speech clear?” In this presentation, I will summarize data from studies of vowel intelligibility, word intelligibility, and perceived sentence clarity along with global and fine-grained acoustic analyses, and discuss how all of these measures are related across the 41 talkers. My hope is that this bird’s-eye view of the FCSD data will reveal subgroups of talkers who adopted certain “profiles” of clear speech acoustic changes that yielded specific helpful perceptual changes.

March 25, 2019

Jessica E. Huber, Professor

Respiratory Control for Speech

This is a practice run of the keynote talk that I will be giving at the Boston Speech Motor Control Symposium (BSMCS). We know that the respiratory system changes with aging, including increased chest wall compliance and a reduction in lung elasticity. These changes have profound effects on speech breathing in typically aging adults. Overlaid on these changes, older adults with Parkinson’s disease also experience significant changes to the respiratory system which increase the work of breathing during speech. I will discuss what we know about age-related changes to respiratory physiology and how those changes impact speech breathing. I will then discuss how Parkinson’s disease impacts respiratory physiology and breathing during speech. I will be combining data from a number of studies in my laboratory and others to provide a complete picture of age-related and disease-related changes to respiratory function during speech.

March 18, 2019

Laurence B. Leonard, Distinguished Professor

Atypical Grammatical Patterns in Children Can Arise from Weak Intake of Typical Language Input

Children with developmental language disorder often show an extraordinary weakness in their use of grammar, seen especially in a prolonged period of using tense/agreement morphology inconsistently. Here it will be argued that this seemingly atypical pattern can be a natural outcome of weak language learning aptitude in interaction with the language input, and, in principle, should vary in degree across the language aptitude continuum rather than being unique to a particular diagnostic group. Supportive evidence will be drawn from novel verb learning, sentence comprehension tasks, looking-while-listening patterns, response to intervention, and electrophysiological data. 

March 4, 2019

Elizabeth [Liz] Heller Murray, M.S., CCC-SLP, Boston University

Vocal motor control in school-age children

Although it is well known that the vocal mechanism changes throughout development, little is known about vocal motor control in children. Understanding vocal motor control in vocally healthy children will provide a necessary foundation upon which pediatric-specific voice therapies can be developed and optimized. Therefore, the first study investigates vocal motor control in vocally healthy school-age children and vocally healthy adults. This work is based on the theoretical framework of the Directions Into Velocities of Articulators (DIVA) model, a widely used neural network model of speech motor control. Responses to perturbation of fundamental frequency (fo), during both unexpected, quick changes in fo and sustained changes over time, were examined. Magnitude, timing, and direction of the vocal responses were compared to fo auditory acuity. Based on the results of the first study, a subsequent study was developed to examine potential auditory acuity differences between school-age children with and without voice disorders. Overall, results from both studies inform our knowledge of vocal motor control differences in children and adults and begin a discussion on how this foundation can be applied to children with voice disorders.

February 25, 2019

Ryan Peters, Justin Kueser, Arielle Borovsky 

Vocabulary structure, attentional biases, and word learning in 24- to 30-month-old toddlers

This project seeks to explore how the semantic connectivity of known and novel objects influences patterns of pre-labeling attention and subsequent object-label processing/learning outcomes in 24- to 30-month-old toddlers. A secondary goal is to determine whether any semantic connectivity effects are related to individual differences such as age, attentional skill, word-learning skill, or lexico-semantic connectivity characteristics of the toddler’s productive vocabulary. We explore these questions using a combination of graph-theoretic semantic network modeling and eye-tracking methodologies.
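Graph-theoretic semantic network modeling of this kind typically treats vocabulary words as nodes linked by shared meaning. The sketch below shows one simple version, assuming the networkx library and toy words and features that are not from this project, with node degree as the connectivity index:

```python
# Toy semantic network: words are nodes; an edge links words sharing a feature.
# Words, features, and the degree-based connectivity index are illustrative only.
import networkx as nx

features = {
    "dog":   {"animal", "furry", "pet"},
    "cat":   {"animal", "furry", "pet"},
    "cow":   {"animal", "farm"},
    "spoon": {"kitchen", "metal"},
    "cup":   {"kitchen"},
}

G = nx.Graph()
G.add_nodes_from(features)
words = list(features)
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        if features[w1] & features[w2]:  # any shared feature -> edge
            G.add_edge(w1, w2)

# Degree as a simple index of each word's semantic connectivity
print(dict(G.degree()))  # {'dog': 2, 'cat': 2, 'cow': 2, 'spoon': 1, 'cup': 1}
```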

February 18, 2019

No Brown Bag this week

February 11, 2019

No Brown Bag this week

February 4, 2019

William Salloom

Physiological and Psychoacoustic Measures of Two Different Auditory Efferent Systems

The human auditory pathway has two efferent systems that can dynamically adjust our ears to incoming sound at the periphery. One system, known as the middle-ear muscle reflex (MEMR), causes contraction of the muscles in the middle ear in response to loud sound, and decreases transmission of energy to the cochlea. A second system, known as the medial olivocochlear reflex (MOCR), can decrease amplification of sound by the outer hair cells in the cochlea. While these systems have been studied in humans and animals for decades, their functional roles are still under debate. This is especially true of their roles in auditory perception. One significant factor that has limited our understanding of these systems is that both efferent systems are activated by sound, so separating MOCR effects from MEMR effects is crucial for isolating their individual contributions to hearing. In this talk, I will discuss the similarities and differences between these two systems, and how their effects have been measured at different stages of the auditory pathway. Next, I will discuss my strategy to study both systems, physiologically and behaviorally, as well as how I plan to analyze the data. Ultimately, the goal of my project is to characterize both systems in humans with normal hearing, which may provide key insight into the roles of these systems.

January 7, 2019

A short organizational meeting

December 3, 2018

Jill Lany (Notre Dame)

How Can Infants Learn Words From Statistics?

Accumulating lexical knowledge is a fundamental achievement in early language development. Lexical development is likely to draw on multiple learning systems, including those involved in tracking sequential structure within speech, and also in detecting relations between speech and other features of the environment. I will discuss recent research highlighting how infants’ ability to track different kinds of statistical regularities may support both of these aspects of word learning. First, tracking statistics within speech, such as frequency and co-occurrence, may support learning both associative and referential mappings between words and their referents. Second, overlap in the organization of structure within speech and in the environment may facilitate determining what words mean, and may even help lend speech its unique referential character. I will also consider how the relative importance of these processes in word learning may change across development.
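As a concrete illustration of the co-occurrence statistics mentioned above, forward transitional probability between adjacent syllables can be computed as a simple count ratio; the sketch below uses a toy syllable stream, not materials from this research:

```python
# Forward transitional probability P(next syllable | current syllable)
# computed over a toy syllable stream (illustrative only).
from collections import Counter

stream = "go la bu go la bu ti da ro ti da ro go la bu".split()

bigrams = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {(a, b): n / unigrams[a] for (a, b), n in bigrams.items()}

print(tp[("go", "la")])  # 1.0 -> high within a "word" (go-la-bu)
print(tp[("bu", "ti")])  # 0.5 -> lower across a "word" boundary
```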

November 26, 2018

Arielle Borovsky

Conversation on Open Science

November 12, 2018

Laurence Leonard & Patricia Deevy

Retrieval-Based Word Learning in Young Children with Developmental Language Disorder and their Peers with Typical Language Development

For many years, scholars have noted that long-term retention improves significantly when learners frequently test themselves on the new material rather than engage in continuous study with no intermittent testing. This longstanding observation has recently been the subject of a resurgence in research. In this presentation, we will discuss a collaborative project in which we apply concepts of repeated testing (retrieval) to the study of novel word learning by preschoolers with developmental language disorder (DLD) and their peers with typical language development. Each of the experiments reported involved a within-subjects design in which half of the novel words were presented in a repeated-retrieval condition hypothesized to facilitate long-term retention and the other half of the words were presented in a comparison condition that approximated traditional procedures. Children were presented with both the novel words themselves (word forms) and their arbitrary definitions (meanings). Testing occurred immediately after the learning period and one week later. Testing involved word form and meaning recall, recognition (picture identification), and neural responses (N400 event-related brain potentials) on a match-mismatch task. Children showed enhanced recall for novel words that were taught in the repeated-retrieval conditions. Furthermore, larger N400s were elicited for mismatches involving words in the repeated-retrieval conditions than for those involving words in the comparison conditions, indicating that priming for lexical access was stronger when children learned the novel words with repeated retrieval. Although the children with DLD appeared to have weaker initial encoding than their peers, they benefited as much from repeated retrieval as their peers and were indistinguishable from peers in their ability to retain information from immediate testing to testing one week later. We are pursuing additional studies along these lines, for the findings have important implications: along with advocating for children hearing many words in their input, we might have reason to promote child-friendly activities that encourage children to practice retrieving the words they recently heard.

November 5, 2018

Justin Kueser

Sound-meaning systematicity in early word learning

Early word learning relates to both a word's semantic content and its phonological form. Young children tend to learn highly salient, concrete words made up of sounds within their phonological inventories before other words. Young children have also been shown to attend to how semantics and phonology systematically vary across words and to use this information as a cue to support word learning in controlled, experimental settings across a wide range of ages. However, less is known about whether children use sound-meaning systematicity to learn words in naturalistic settings or in languages without large classes of sound-symbolic words (unlike, e.g., Japanese). Working from findings suggesting that the English language as a whole demonstrates more sound-meaning systematicity than would be expected by chance, we investigated the role that sound-meaning systematicity plays in word learning in a large sample of administrations of the MacArthur-Bates Communicative Development Inventories in American English, British English, and Mexican Spanish. We found that in each of the languages, across a wide range of vocabulary sizes, children's vocabularies tended to demonstrate more sound-meaning systematicity than would be expected based on word length, part of speech, phonotactic probability, neighborhood density, and consonant age of acquisition. We also found that the use of systematicity was relatively stable for smaller vocabularies, but tended to diverge across languages for larger vocabularies. In addition, we measured the contribution of individual words to sound-meaning systematicity across vocabulary development in a series of normative vocabularies and investigated how their contribution to sound-meaning systematicity influenced the trajectory of word learning. Depending on the language and stage of vocabulary growth, we found that new words added to the normative vocabularies showed variable relationships between their semantic and phonological similarity to known words and those known words' contributions to overall sound-meaning systematicity. Variations in these relationships during vocabulary growth may be explained by growth in different semantic or phonological neighborhoods, with words that contribute more to sound-meaning systematicity being more lexically prominent in the early stages of expansion into new neighborhoods and less lexically prominent as these neighborhoods become more dense.
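Sound-meaning systematicity of this kind is often quantified as a correlation between pairwise form similarity and pairwise meaning similarity across a vocabulary. The sketch below illustrates that general idea only; the words, feature vectors, and similarity measures are placeholders rather than the measures used in this study:

```python
# Toy estimate of sound-meaning systematicity: correlate pairwise form
# similarity with pairwise meaning similarity across a small vocabulary.
from difflib import SequenceMatcher
from itertools import combinations
from scipy.stats import pearsonr

vocab = {            # word -> hypothetical semantic feature vector
    "ball":  [1, 0, 0, 1],
    "doll":  [1, 0, 0, 1],
    "milk":  [0, 1, 1, 0],
    "juice": [0, 1, 1, 0],
}

def form_sim(w1, w2):
    # crude string similarity as a stand-in for phonological similarity
    return SequenceMatcher(None, w1, w2).ratio()

def meaning_sim(v1, v2):
    # proportion of matching semantic features
    return sum(a == b for a, b in zip(v1, v2)) / len(v1)

pairs = list(combinations(vocab, 2))
form = [form_sim(a, b) for a, b in pairs]
meaning = [meaning_sim(vocab[a], vocab[b]) for a, b in pairs]

r, p = pearsonr(form, meaning)  # positive r suggests form-meaning systematicity
print(f"r = {r:.2f}, p = {p:.3f}")
```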

October 29, 2018

Lisa Rague

Acoustic Properties of Early Vocalizations in Infants with Fragile X Syndrome

Fragile X syndrome (FXS) is a neurogenetic syndrome characterized by cognitive impairments and high rates of autism spectrum disorder (ASD). FXS is often used as a model for exploring mechanisms and pathways of symptom expression in ASD due to the high prevalence of ASD in this population and the known single-gene cause for ASD in FXS. Early vocalization features – including volubility, canonical complexity, vocalization duration, and vocalization pitch – have shown promise in detecting ASD in idiopathic ASD populations but have yet to be extensively studied in a population with a known cause for ASD, such as FXS. The present study characterizes early vocalization features in FXS, demonstrating how these features are associated with language ability and ASD outcomes, as well as highlighting how these features in FXS may diverge from patterns observed in typically developing (TD) populations. We coded vocalization features during a standardized child-examiner interaction in 39 nine-month-old infants (22 FXS, 17 TD) who were then followed up at 24 months to determine developmental and clinical outcomes. Results provide preliminary evidence suggesting that infants with FXS demonstrate patterns of associations with language outcomes that diverge from those observed in typical development, and that certain vocalization features demonstrate associations with later ASD outcomes in the FXS group. Characterizing the associations of early vocalization features with ASD outcomes in FXS can inform mechanisms of ASD development that can then be tested broadly with other etiologically distinct populations at risk for ASD. Thus, further characterization of these early vocalization features in typical and atypical development may lead to improved early identification methods, treatment approaches, and overall well-being of individuals in the ASD population.

October 22, 2018

Jordan Oliver, Ph.D. Candidate

Do orienting responses predict annoyance from irrelevant sounds?

Unexpected sounds are distracting and can be annoying, but individuals may differ in susceptibility to them. Irrelevant sounds occurring at sparse temporal intervals induce a psychophysiological orienting response reflecting involuntary capture of attention away from the primary task. We hypothesize that the frequency and/or magnitude of individual listeners’ orienting responses to irrelevant sounds will predict annoyance ratings and task performance in distracting noise. Participants read essays while seated in a comfortable chair in a sound-shielded booth facing a semicircular array of 6 speakers located 1.5 m away at 30°, 60°, and 90° to the left and right. Unintelligible background speech (ISTS) played at 60 dB(A) SPL from each loudspeaker (unsynchronized). At 50-70 s intervals, one of 12 non-speech sounds (IADS) played for 6 s from one loudspeaker at approximately 70 dB(A) SPL. Order and location of sounds were randomized, but each sound played from each speaker exactly once over the experiment (72 trials, ~80 minutes). Cardiovascular, electrodermal, electro-ocular, and bilateral posterior auricular muscle activity were recorded from participants to quantify the orienting response. Behavioral measures of reading comprehension, noise sensitivity, personality traits, and subjective effort and annoyance were also collected and will be related to physiological measures. This is a preview of an invited talk to be presented at the Acoustical Society of America meeting on November 5th.
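The design constraint that each of the 12 sounds plays exactly once from each of the 6 loudspeakers over the 72 trials can be satisfied by shuffling the full sound-by-speaker pairing list; a minimal Python sketch with hypothetical sound labels and speaker azimuths:

```python
# Build a 72-trial order in which every sound plays once from every speaker.
# Sound labels and azimuth values are hypothetical placeholders.
import random
from itertools import product

sounds = [f"IADS_{i:02d}" for i in range(1, 13)]  # 12 non-speech sounds
speakers = [-90, -60, -30, 30, 60, 90]            # loudspeaker azimuths (degrees)

trials = list(product(sounds, speakers))          # every sound x speaker pairing once
random.shuffle(trials)                            # randomize presentation order
assert len(trials) == 72

for n, (sound, azimuth) in enumerate(trials[:3], start=1):
    print(f"trial {n}: {sound} from {azimuth} deg")
```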

August 27, 2018

Alexander Francis, Ph.D.

Open discussion on doctoral students' resources, interests, and needs related to coding and learning to code (MATLAB, R, Python, etc.)
