Current Projects

1. Neural Representation of Natural Speech Processed Through Enhancement Algorithms and Amplification Devices

Amplification devices and speech enhancement algorithms are designed to improve how clearly people can hear speech. However, most studies that test these technologies measure brain responses using very short sounds, such as single syllables. These brief sounds do not fully engage the adaptive processing in hearing devices, and they often evoke large onset responses in the brain that do not reflect how specific speech sounds are encoded. This project studies how the brain represents natural, continuous speech after it has been processed by hearing aids and speech enhancement algorithms. By using more realistic speech materials, we can better evaluate how these technologies influence the neural encoding of speech in everyday listening situations. The goal is to develop brain-based measures that more accurately reflect how well hearing technologies support real-world speech understanding.
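Neural encoding of continuous speech is commonly quantified with temporal response functions (TRFs), which estimate a linear mapping from a stimulus feature, such as the speech envelope, to the recorded neural signal. The project description does not name a specific method, so the following is only an illustrative sketch of that general approach, using ridge regression on simulated data (all parameters are invented for the example):

```python
# Illustrative TRF sketch (assumed method, not the project's actual pipeline):
# estimate a linear kernel mapping a continuous stimulus to a simulated
# neural response via ridge regression over a set of time lags.
import numpy as np

rng = np.random.default_rng(0)
fs = 128                       # sampling rate in Hz (illustrative)
n = fs * 60                    # one minute of data
stim = rng.standard_normal(n)  # stand-in for a continuous speech envelope

# Ground-truth response kernel spanning 0-250 ms of lags
lags = np.arange(int(0.25 * fs))
kernel = np.exp(-lags / 10.0) * np.sin(2 * np.pi * lags / 16.0)

# Simulated neural response: stimulus convolved with kernel, plus noise
eeg = np.convolve(stim, kernel)[:n] + 0.5 * rng.standard_normal(n)

# Lagged design matrix: column j holds the stimulus delayed by j samples
X = np.column_stack([np.roll(stim, j) for j in lags])
X[np.arange(n)[:, None] < lags] = 0.0  # zero out wrapped-around samples

# Ridge solution: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
```

With sufficient data, the estimated weights `w` recover the shape of the underlying response kernel; the same machinery applies whether the stimulus is unprocessed speech or speech passed through a hearing device.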

2. Developmental Factors Affecting Speech Processing in Children With and Without Hearing Loss

This project examines how children’s brains learn to process natural speech and how early hearing loss may affect this development. Using brain recordings, behavioral testing, and brain imaging, we study how children with typical hearing encode the sounds, words, and meaning of everyday speech. We also investigate how these processes differ in children with hearing loss. In particular, we examine how visual information, such as lip movements, may help compensate for reduced auditory input. By linking brain activity, brain structure, and language experience, this project aims to identify the neural mechanisms that support successful speech understanding. The long-term goal is to develop objective measures of speech processing that can guide hearing rehabilitation strategies, including hearing aids, cochlear implants, and auditory training programs. These insights may help children with hearing loss develop stronger language and communication skills.

3. Behavioral and Neurophysiological Signatures of Speech Fine-Structure Processing

Fine-structure (FS) cues are the rapid oscillations in the speech waveform, as distinct from its slower amplitude envelope, and they are thought to play an important role in understanding speech in noisy environments. However, current clinical tests do not directly measure how listeners process fine-structure information in speech, and there are few tools for assessing how the brain encodes these cues. This project aims to develop both behavioral tests and neural measures that can evaluate how listeners process speech fine structure. These methods are designed to be realistic, reliable, and suitable for clinical use. The results will provide new tools for identifying fine-structure processing difficulties and for improving hearing technologies. Ultimately, this work may help guide hearing aid fitting strategies and inform the design of next-generation cochlear implant algorithms, particularly for improving speech understanding in noisy environments.
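The envelope/fine-structure distinction is conventionally made with the Hilbert transform: the magnitude of the analytic signal gives the slow envelope, and the cosine of its phase gives the rapid fine-structure carrier. The sketch below demonstrates this on a toy amplitude-modulated tone standing in for one auditory filter band (the signal and parameters are invented for illustration):

```python
# Hilbert decomposition of a band-limited signal into its amplitude
# envelope and temporal fine structure (TFS) -- a standard textbook
# technique, shown here on a synthetic amplitude-modulated tone.
import numpy as np
from scipy.signal import hilbert

fs = 16000                     # sampling rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)                 # 1 kHz fine structure
envelope_true = 1 + 0.5 * np.sin(2 * np.pi * 40 * t)   # 40 Hz envelope
x = envelope_true * carrier

analytic = hilbert(x)
envelope = np.abs(analytic)                   # slow amplitude envelope
fine_structure = np.cos(np.angle(analytic))   # rapid fine-structure carrier
```

Away from the edges of the signal, the recovered envelope closely tracks the true modulation, while `fine_structure` isolates the rapid carrier oscillations that FS-based tests aim to probe.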

4. Neural Basis of Auditory Plasticity Following Different Approaches to Speech-Category Training

Learning new speech sounds—such as those from a second language—requires the brain to adapt and reorganize how it processes auditory information. This process, known as auditory plasticity, occurs across multiple levels of the auditory system, from the brainstem to the cortex. This project studies how different types of speech training influence these neural changes. Using the frequency-following response (FFR), a brain signal that reflects how speech features are encoded in the auditory system, we track how neural representations change as people learn new speech categories. Participants undergo different types of training, including perceptual learning, speech production training, biofeedback-based training, and classroom-style instruction. By following both behavioral improvements and neural changes over time, this research aims to better understand how experience reshapes the auditory system during speech learning.
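A common way to quantify the FFR is to average many stimulus-locked trials, so that non-phase-locked noise cancels, and then read out the spectral amplitude at the stimulus fundamental frequency (F0). The sketch below illustrates that general idea on simulated data; the trial counts, sampling rate, and SNR metric are all invented for the example and are not drawn from this project:

```python
# Illustrative FFR-style analysis (assumed, simplified): average
# stimulus-locked trials and measure spectral amplitude at the
# stimulus F0 relative to the surrounding noise floor.
import numpy as np

rng = np.random.default_rng(1)
fs = 2000                  # sampling rate (Hz)
f0 = 100                   # stimulus fundamental frequency (Hz)
n = fs // 4                # 250 ms epochs
t = np.arange(n) / fs

# Simulate 500 trials: a weak phase-locked F0 component buried in noise
trials = 0.1 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal((500, n))

avg = trials.mean(axis=0)  # averaging suppresses non-phase-locked noise
spectrum = np.abs(np.fft.rfft(avg)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

f0_amp = spectrum[np.argmin(np.abs(freqs - f0))]
noise_floor = np.median(spectrum)
snr_db = 20 * np.log10(f0_amp / noise_floor)
```

Tracking a measure like `snr_db` across training sessions is one simple way to index how strongly the auditory system encodes a learned speech feature over time.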

5. Influence of Top-Down Factors on Speech Processing

Speech perception is not driven by sound alone. Higher-level processes—such as attention, expectations, language knowledge, and visual cues—can strongly influence how speech is interpreted by the brain. This project investigates how these top-down factors shape the neural processing of speech. Using behavioral experiments and neurophysiological recordings, we study how listeners use contextual information, visual speech cues, and cognitive resources to understand speech, especially in challenging listening conditions. Understanding how top-down processes interact with sensory encoding will help explain why speech perception varies across individuals and listening environments. These insights may also inform new strategies for improving communication for listeners with hearing difficulties.

6. Development of Open-Source Tools for Speech and Auditory Neuroscience Research

Modern research in speech and auditory neuroscience often requires specialized tools for stimulus generation, experimental control, neural signal analysis, and data visualization. However, many of these tools are not easily accessible or reproducible across research labs. This project focuses on developing open-source software tools that support experimental design, data analysis, and reproducible research in auditory neuroscience. These tools are freely shared through our GitHub repository and are designed to be flexible, well-documented, and accessible to researchers and students. By making these resources publicly available, we aim to promote transparency, collaboration, and innovation within the speech and hearing research community.