We are interested in how the brain listens.
This leads us to examine sounds ranging from simple tones to complex, real-world conversations.
It leads us to ask questions of perception, attention, learning, memory, and communication.
It leads us to use diverse methods and collaborate with cross-disciplinary teams.
Relevant publications: Wade & Holt, 2005; Leech et al., 2009; Lim & Holt, 2011; Liu & Holt, 2011; Lim et al., 2015; Gabay et al., 2015; Gabay & Holt, 2015; Lim, Fiez, & Holt, 2019; Wiener et al., 2019; Roark, Lehet, Dick, & Holt, 2020; Martinez, Holt, Reed, & Tan, 2020; Gabay et al., 2022; Gabay, Karni, & Holt, 2023
In the News: Video Game Helps Neuroscientists Understand Second Language Learners
This work is generously supported by the National Science Foundation.
How do we learn complex sounds like speech?
For many important classes of sounds like speech and voice, we learn categories. But our learning is rarely guided by explicit instruction or overt feedback. We are investigating how listeners learn to categorize complex distributions of sounds incidentally, as they navigate a videogame environment. This learning takes place without overt category decisions, directed attention to the sounds, or explicit feedback about categorization. It is ‘statistical learning’ in the sense that it helps listeners accumulate knowledge about the patterns of input in the environment, yet in some circumstances it is faster and more robust than learning via passive exposure. Our studies demonstrate the importance of this incidental learning for speech and nonspeech auditory categories, for understanding developmental dyslexia, and for supporting university students learning a foreign language.
How does speech communication proceed in the real world?
In a new project, we are partnering with Dr. Fernando De la Torre of Carnegie Mellon University to better understand the subtle perceptual cues available to us when we communicate face-to-face. We’re using eyeglasses with embedded video cameras, eye trackers, gyroscopes, and audio recorders to track the detailed interactions that happen when two strangers sit down for a conversation.
Representative publication: Ma et al., 2024
This work is generously supported by the James S. McDonnell Foundation.
Representative publications: Holt, Tierney, Guerra, Laffere, & Dick, 2018; Laffere, Dick, Holt, & Tierney, 2020; Zhao et al., 2022; Luthra et al., 2024a; Luthra et al., 2024b
This work is supported by the National Institutes of Health.
How does attention support listening to complex sounds?
Complex sounds like speech are conveyed by multiple acoustic dimensions, and we rely on different sets of cues for different judgments. A snippet of speech tells us a lot about both who is talking and what they are saying, but these percepts rely on different perceptual dimensions. We are working to understand how dimension-based auditory selective attention may support listening as contexts and task demands shift. Using a mix of behavioral and neuroimaging approaches across both speech and nonspeech sounds, we are examining how learning across patterns of sound input shifts dimension-based auditory attention. Another leg of this work has involved development of an online videogame that we have shown improves auditory attention.
How does speech perception balance long-term and short-term learning?
The complex mapping of speech to language-specific units like phonemes and words must be learned over time. But the learning is not complete when we have mastered a language. Each time we run into a talker with a slightly different accent or dialect, we learn new patterns and speech perception adjusts. We have been investigating how the perceptual system maintains stable long-term representations of language-specific units even as it flexibly adapts to short-term speech input. In new work, we are investigating how the statistical learning that takes place over accented speech input transfers to influence speech production. Listening to accented speech turns out to influence subtle characteristics of our own speech, too.
Representative publications: Idemaru & Holt, 2011; Guediche et al., 2014; Guediche et al., 2015; Liu & Holt, 2015; Lehet & Holt, 2015; Zhang & Holt, 2018; Gabay & Holt, 2020; Lehet & Holt, 2020; Idemaru & Holt, 2020; Wu & Holt, 2022; Hodson et al., 2023; Murphy et al., 2023
This work is generously supported by the National Science Foundation.
Representative publications: Dick, Lehet, Callaghan, Keller, Sereno, & Holt, 2017; Zhao et al., 2022; Luthra et al., 2024a; Luthra et al., 2024b
Online Webinar: The Future of Neuroscience
How does the auditory system shift its focus to different qualities of sound?
Listening to a friend while walking down a busy street, tracking the quality of a sick child’s breathing through a nursery monitor, and following the melody of a violin within an orchestra all require singling out a sound stream (selective attention) and maintaining focus on this stream over time (sustained attention) so that the information it conveys can be remembered and responded to appropriately. In collaboration with Dr. Fred Dick of University College London and Dr. Adam Tierney of Birkbeck College, University of London, we have been investigating the neurobiological basis of sustained auditory selective attention in human auditory cortex. In ongoing work, we are mapping auditory selective attention using psychophysics, behavioral training, electroencephalography, and structural and functional neuroimaging.
How does auditory learning differ in adults and children with dyslexia?
The economic and societal costs of low literacy are enormous. Yet, we do not adequately understand learning mechanisms that support literacy, or how they may fail in low literacy. In ongoing work with Israeli collaborators Drs. Yafit Gabay and Avi Karni of the University of Haifa, we are examining procedural auditory category learning across distinct samples varying in literacy attainment, age, and native language. Our goal is to advance understanding of the basic building blocks of literacy by examining how differences in auditory learning may snowball to influence speech representations that support learning to read.
Relevant publications: Gabay & Holt, 2015; Gabay, Thiessen, & Holt, 2015; Gabay et al., 2015; Lim, Fiez, & Holt, 2019; Gabay & Holt, 2020; Gabay et al., 2023
Webinars: Dispelling the Myths of Dyslexia (CMUThink, Dr. Lori Holt, February 2018); Educational Neuroscience: What Every Teacher Should Know (Dr. Lori Holt, July 2020)
This work has been supported by the National Science Foundation.
Representative publications: Lipski et al., 2018; Haigh et al., 2019; Chrabaszcz et al., 2019; Sharma et al., 2019; Chrabaszcz et al., 2021; Dastolfo-Hromack et al., 2021; Rupp et al., 2022
Collaborations
Our lab is inherently collaborative. We work with neurosurgeons who record from the human brain as it listens to and produces speech, educational psychologists interested in the root causes of dyslexia, pedagogical experts in second language acquisition, and engineers working to make the next generation of Siri and Alexa work better. In a current NSF-supported project, we are examining how statistical learning influences attentional gain using psychophysics, sEEG in neurosurgical patients, 7T fMRI, EEG, and animal electrophysiology across an international group of collaborating laboratories.
Our trainees benefit from tight connections with other laboratories across the street as well as with national and international collaborators across disciplines.