Brown Bag

2016-2017 Talks

LYLE 1150, 12:30-1:20 pm

September 26, 2016

Sara Benham

An Application of Network Science to Speech Production in Children with SLI

Research on specific language impairment (SLI) traditionally focuses on morphosyntactic deficits, while speech sound errors remain widely understudied. Classic speech analyses do not reveal core features of error patterns, and a new approach is needed. Network science may provide relevant insights. Twenty-four preschoolers (ages 4;0-6;0) participated in a non-word repetition task: twelve with SLI and twelve with typical development. Transcription and kinematic trajectories were obtained to assess accuracy and variability, and network analysis was used to determine syllable co-occurrence patterns. Children with SLI were less accurate and more variable than their typical peers. Network analysis revealed group differences in syllable co-occurrence patterns: children with SLI produced more syllable variations, and their co-occurring connections were less robustly interconnected than those of typical children. Network analysis thus demonstrates deficits in sound pattern sequencing that are not captured by standard error analysis. Current theories of SLI posit deficits in core aspects of procedural learning that may also extend to sound sequencing. How the data align with developmental theories and procedural deficit accounts will be discussed.

October 3, 2016

Laurence B. Leonard, PhD

The Input as a Source of Grammatical Inconsistency in Children with Specific Language Impairment

One of the hallmarks of English-speaking children with language impairments is a protracted period of using tense/agreement morphemes inconsistently (e.g., “Mommy talking/is talking on the phone”). Recent research has shown that details in the input may contribute significantly to this inconsistency. Investigators have differed on some of the details of their proposals, but all agree that such inconsistency could occur through children’s incomplete interpretation of fully grammatical utterances that they hear in the input. In this presentation, one input-based approach will be presented in detail. Evidence supporting its feasibility will come from five different types of studies, namely, those employing: (1) novel verb learning; (2) conventional comprehension tasks; (3) eye gaze measures; (4) electrophysiological measures; and (5) actual treatment comparisons. The findings will be discussed in terms of their clinical implications and their application to the study of languages other than English.

October 17, 2016

Katherine Gordon, University of Iowa

Preschool Age Children’s Short- and Long-Term Memory for Word Forms

Long-term retention and generalization of trained material is a primary goal of language interventions, yet children’s ability to retain, generalize, and build upon intervention gains is rarely measured weeks, months, or years post-intervention. To address this limitation, the primary goals of Dr. Gordon’s research are: (1) to describe differences between children with and without language impairment (LI) in their ability to encode and retain word-referent pairs over short- and long-term delays, and (2) to identify the most efficient training and post-training follow-up procedures to enhance long-term retention of word-referent pairs in children with LI. Because encoding word forms has been identified as a particular area of difficulty for children with LI, she currently focuses her research on assessing children’s ability to encode and retrieve word forms that are linked to referents during training. As a Postdoctoral Fellow at the University of Iowa, she has completed several studies assessing the ability of children with typical development (TD) to retain word forms over short- and long-term delays. Specifically, she has refined a novel test of word form that is more sensitive than tests used in past work, which has provided information about the ability of children with TD to retrieve word forms 10 minutes, 2 to 3 days, 1 week, and 6 months to 1 year after initial training. She is currently conducting a study comparing children with and without LI in their ability to encode and retain word forms across a delay of several days when both groups are given the same number of exposures to the forms during training. Through several future projects, she will further assess differences between children with and without LI in their ability to encode and retain word-referent pairs. After gaining a better understanding of these differences, she will address the second goal of her work: identifying training protocols that enhance long-term retention of word-referent pairs in children with LI.

October 31, 2016

Rana Abu-Zhaya

Infant heart rate during bimodal touch and speech events

This study examined heart-rate variability in 5-month-olds during intersensory redundant speech+touch events compared with non-redundant speech+touch events. Infants heard a continuous stream of syllables with no acoustic or distributional cues to word boundaries. An experimenter, present with the infant in the same room, listened to masking noise and received signals in the form of high or low pitched tones. These tones served as signals for the experimenter to touch the infant on a specific body-part at a particular point in time. Experimenter touches were either redundant with speech (occurring during a specific trisyllabic sequence and on a specific body part) or non-redundant with speech (occurring during a random trisyllabic sequence to another body part) and created unpredictable new events in each repetition of the stream of syllables. Results show a significant interaction effect of redundancy and time resulting in a deceleration in infants’ heart rate following the redundant stimuli compared to non-redundant stimuli. This result is consistent with previous literature showing heart rate deceleration during periods of sustained attention. Further, it is also consistent with literature from non-human embryos showing heart rate deceleration in response to redundant vs. non-redundant stimuli.  The implications of this study for how touch impacts speech perception are discussed.

November 7, 2016

Janet Vuolo

Motor and Language Factors in Developmental Speech and Language Disorders

Clinicians and researchers define developmental speech and language disorders based on the primary domain that is impaired (language, speech, speech motor), yet children frequently display impairments or weakness across multiple areas. Furthermore, children with speech and language disorders often show clinical or subclinical limb motor deficits. My research program focuses on delineating this interactivity for the purpose of establishing a theoretical account of the domain general cognitive mechanisms that contribute to development within and across language, speech, speech motor, and limb motor domains. The goal of my research is to apply this theoretical model to developing more efficacious approaches to intervention for children with communication disorders. In this talk, I present research from two clinical populations that serve as important test cases of this domain general view: children with specific language impairment (SLI) and children with childhood apraxia of speech (CAS). 

November 21, 2016

Shireen Kanakri, Ball State University

Links Between Classroom Acoustics and Repetitive Behaviors in Children with Autism: An Observational Study

The objective of the present study is to explore the impact of acoustical design on children with autism in school classrooms. Empirical research on this topic will provide information on how interior space features and spatial environment characteristics can be used to support the learning and developmental needs of children with autism. Specifically, the connection between repetitive behaviors and ambient noise levels in school classroom environments was observed in four classrooms. The occurrence of repetitive motor movements, repetitive speech, ear covering, hitting, loud vocalizations, blinking, and verbal complaints in relation to decibel levels was analyzed using Noldus Observer XT software. As hypothesized, a correlation between noise levels and frequency of target behaviors was found; that is, as decibel levels increased, several of the observed behaviors occurred with greater frequency. Further empirical testing is necessary to establish a causal relationship between increased ambient noise levels and autism-related behaviors, and to evaluate sensory discomfort as a mediator of that relationship. Findings are applied to the development of classroom design guidelines.

November 28, 2016

Keith R. Kluender
Department of Speech, Language, and Hearing Sciences
Purdue University

Virtues of (co)variance for perceptual learning

Objects and events in the sensory environment are generally predictable, making most of the energy impinging upon sensory transducers redundant. Given this fact, efficient sensory systems should detect, extract, and exploit predictability in order to optimize sensitivity to less predictable inputs that are, by definition, more informative. Not only are perceptual systems sensitive to changes in physical stimulus properties, but growing evidence reveals sensitivity both to relative predictability of stimuli and to co-occurrence of stimulus attributes within stimuli. Speech signals are notoriously redundant, and our experiments are revealing that auditory perception rapidly reorganizes to efficiently capture covariance among stimulus attributes. Acoustic properties per se are perceptually abandoned, and sounds are instead processed respecting patterns of co-occurrence. Listeners' ability to distinguish sounds from one another is driven primarily by the extent to which they are consistent or inconsistent with patterns of covariation among stimulus attributes and, to a lesser extent, whether they are heard frequently or infrequently. These findings have implications for perceptual learning most broadly, for experience-dependent auditory development and plasticity, for learning of speech sound contrasts, and for learning to talk. 

February 13, 2017

Profs. Jessica E. Huber and Preeti M. Sivasankar

Responsible Conduct of Research: Record Keeping, Data Suppression and Outliers

The NIH defines responsible conduct of research (RCR) as the practice of scientific investigation with integrity. It involves the awareness and application of established professional norms and ethical principles in the performance of all activities related to scientific research. NIH and many other funding agencies now require that all trainees, fellows, and scholars receiving support through any training, career development award (individual or institutional), research education grant, or dissertation research grant receive instruction in RCR. This week, Profs. Huber and Sivasankar will lead a discussion on one component of RCR: record keeping, data suppression, and outliers. While students are expected to attend, faculty are also encouraged to attend and participate in the discussions.

February 20, 2017

Eileen K. Haebig, Ph.D., CCC-SLP, Post-Doctoral Research Fellow, Department of Speech, Language, & Hearing Sciences, Purdue University

Exploring Language Learning and Processing in Children with Language Impairments

The current talk will discuss a multi-level approach to exploring language learning and processing in children with specific language impairment (SLI). In addition to exploring language abilities at early points in development, we will discuss results from a longitudinal study that examined language processing in adolescents who followed different language trajectories: adolescents with normal language development, adolescents with persistent language impairment, and adolescents with a history of language impairment.

March 6, 2017

Hari Bharadwaj, Ph.D.

Exploring bottom-up sensory processing and top-down control in Autism Spectrum Disorders

Difficulty communicating in social settings is a core deficit reported in Autism Spectrum Disorders (ASD). Further, many individuals with ASD report sensory "overload", including being overly sensitive to all sources in a scene, and not just those of interest. Here, I'll describe our recent and ongoing efforts to study the automatic processes that underlie scene analysis, and the cortical dynamics of top-down control in ASD using magneto- and electroencephalography (MEG and EEG).

April 3, 2017

Prof. Yunjie Tong, Biomedical Engineering, Purdue University

Introduction to Near Infrared Spectroscopy


April 10, 2017

Kristina Milvae, AuD, PhD Candidate, Department of Speech, Language, and Hearing Sciences, Purdue University

Exploration of the Relationship between Cochlear Gain Reduction and Speech-in-Noise Performance

Listening to speech in noisy environments is difficult for listeners with normal hearing, but it is a task that is often accomplished successfully. The mechanisms that underlie this ability are not well understood. One proposed mechanism is the medial olivocochlear reflex (MOCR). The MOCR is a bilateral reflex between the brainstem and cochlea that reduces cochlear gain with preceding sound. The magnitude of ipsilateral cochlear gain reduction was explored in the present study at 1, 2, and 4 kHz using forward masking techniques, in an effort to evaluate the magnitude of cochlear gain reduction at a range of frequencies important for speech perception. The relationship between psychoacoustic and physiologic measures of cochlear gain reduction was also explored, as well as how these measures relate to speech-in-noise performance at a range of signal-to-noise ratios. It is possible that the utility of cochlear gain reduction depends on the signal-to-noise ratio.

April 17, 2017

Suzy Ahn, PhD Candidate, Department of Linguistics, New York University

The role of tongue position in voicing contrasts in English, German, and Brazilian Portuguese: An ultrasound study

In utterance-initial position, American English and German stops /b, d, g/ are often phonetically voiceless, whereas Brazilian Portuguese stops /b, d, g/ are realized with phonation during closure. This study uses ultrasound imaging to examine how tongue position correlates with voicing contrasts in these languages, comparing /b, d, g/ and /p, t, k/ utterance-initially. One adjustment for initiating or maintaining phonation during the closure is enlarging the supraglottal cavity volume, primarily via tongue root advancement (Westbury 1983). Eight speakers of each language recorded stops at three places of articulation (labial, alveolar, and velar). There was a clear distinction between /b, d, g/ and /p, t, k/ in tongue position for all three languages. Brazilian Portuguese speakers showed a more fronted tongue position for /b, d, g/ than for /p, t, k/. In English and German, the tongue position was more fronted for /d, g/ compared to /t, k/, and the tongue body/front was lowered for /b/ compared to /p/ even without acoustic phonation during closure. Although the differences in tongue position found in these languages seem similar, the goal of the articulator movement may be different: Brazilian Portuguese speakers may advance the tongue root for the active voicing of /b, d, g/, whereas English and German speakers may retract the tongue root for aspiration of /t, k/. Gestural models that refer to aerodynamic gestures (e.g., McGowan & Saltzman 1995) might be a helpful way of understanding what the speakers’ targets are.

April 24, 2017

Evan R. Usler, PhD Candidate, Department of Speech, Language, & Hearing Sciences, Purdue University

Neural bases of emotion perception in children who do and do not stutter

Childhood stuttering is a neurodevelopmental disorder whose etiology and chronicity are likely associated with temperamental, emotional, and/or psychosocial factors. More specifically, differences between children who do (CWS) and do not stutter (CWNS) regarding emotional reactivity and regulation have been observed. Event-related potentials (ERPs) can serve as neurophysiological markers of emotional processes with fine temporal resolution. The purpose of this study is to determine if the ERP correlates of emotional processing differ between 5- to 8-year-old CWS and CWNS during the presentation of child facial expressions. Images of threatening (angry/fearful) and neutral child facial expressions, preceded by an audio contextual cue, were presented. Audio cues differed in neutral, negative, or reappraisal context. Threatening facial expressions with a negative context elicited enhanced ERP amplitudes for CWS. Relationships between dimensions of temperament, such as effortful control, and the effects of emotion on ERP amplitudes were also investigated.

May 1, 2017

Renee Kemp, PhD Candidate in Linguistics, University of California, Davis

Phonological and lexical development in L2 learners

Adult second language (L2) learners, like children learning their first language (L1), must acquire a novel phonological system and new lexical items.  Unlike children, however, adult L2 learners have a mature linguistic system in their L1 with the potential to either facilitate or interfere with L2 acquisition.  Using a lexical decision task, three word-specific properties – phonological neighborhood density (PND), lexical Age of Acquisition (AoA), and usage frequency – were investigated to examine phonological and lexical development in L2 learners, in contrast with native speaker performance.  Stimuli were also presented in both plain and foreigner-directed speech (FDS) conditions to examine any possible interactions between word-specific properties and clear speech style on perception.  Performance by the non-native speakers, late Japanese-English bilinguals, was found to mirror that of native English speakers for lexical AoA and frequency in some conditions; however, the effect of PND on non-native word recognition was found to resemble patterns observed during L1 acquisition.  The implications for this finding on L2 lexical and phonological access will be discussed.

Speech, Language, & Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, PH: (765) 494-3789

© 2016 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by SLHS

If you have trouble accessing this page because of a disability, please contact the webmaster at slhswebhelp@purdue.edu.