Auditory Cognitive Neuroscience Laboratory

Director: Natalya Kaganovich, PhD

In the vast majority of cases, speech perception is multisensory – we not only hear speech sounds, but also see the moving face of a talker, which significantly improves our ability to understand speech, especially under noisy conditions. While adults are skilled integrators of auditory and visual speech cues, children are not. This ability develops surprisingly slowly, with some aspects of audiovisual processing not maturing until late adolescence. Work in our lab focuses on how the developing brain changes as it learns to merge different senses. We also study differences in the cognitive and neural mechanisms underlying audiovisual speech perception in children with typical development and in those who have difficulty acquiring language, such as children with specific language impairment (SLI).


The Auditory Cognitive Neuroscience Laboratory is equipped with a 32-channel BioSemi ERP data acquisition system, which allows us to record the brain's electrical responses to various stimuli – such as sounds, images, and their combinations. The event-related potential (ERP) technique is non-invasive and can be safely and easily used with both children and adults. It offers excellent temporal resolution and can be used to study a range of cognitive processes, including language, attention, and memory.
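The core idea behind the ERP technique described above is that averaging many stimulus-locked EEG epochs makes the brain's time-locked response emerge from much larger background activity. The sketch below illustrates this with simulated data (it is not the lab's actual analysis pipeline, and all waveform shapes and numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200    # number of stimulus presentations (illustrative)
n_samples = 256   # samples per epoch, e.g. 1 s at 256 Hz

t = np.linspace(0, 1, n_samples)

# Hypothetical "true" evoked response: a deflection peaking around 300 ms.
true_erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Each single trial is the evoked response buried in much larger noise.
epochs = true_erp + rng.normal(scale=10.0, size=(n_trials, n_samples))

# Averaging across trials: the time-locked response survives, while
# random background activity averages toward zero.
erp_estimate = epochs.mean(axis=0)

single_trial_error = np.abs(epochs[0] - true_erp).mean()
average_error = np.abs(erp_estimate - true_erp).mean()
print(f"mean error, single trial: {single_trial_error:.2f}")
print(f"mean error, {n_trials}-trial average: {average_error:.2f}")
```

Because independent noise shrinks roughly as the square root of the number of trials, the averaged waveform tracks the evoked response far more closely than any single trial does.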

Contact Us

Lyles-Porter Hall 
Phone: 765-494-4445
kaganovi@purdue.edu

Speech, Language, & Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, PH: (765) 494-3789

© 2016 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by SLHS

If you have trouble accessing this page because of a disability, please contact the webmaster at slhswebhelp@purdue.edu.