Neural coding of continuous speech in auditory cortex during monaural and dichotic listening
Nai Ding, Jonathan Z. Simon
TLDR
These findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
Abstract
The cortical representation of the acoustic features of continuous speech is the foundation of speech perception. In this study, noninvasive magnetoencephalography (MEG) recordings are obtained from human subjects actively listening to spoken narratives, in both simple and cocktail party-like auditory scenes. By modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, we demonstrate that the slow temporal modulations of speech in a broad spectral region are represented bilaterally in auditory cortex by a phase-locked temporal code. For speech presented monaurally to either ear, this phase-locked response is always more faithful in the right hemisphere, but with a shorter latency in the hemisphere contralateral to the stimulated ear. When different spoken narratives are presented to each ear simultaneously (dichotic listening), the resulting cortical neural activity precisely encodes the acoustic features of both of the spoken narratives, but slightly weakened and delayed compared with the monaural response. Critically, the early sensory response to the attended speech is considerably stronger than that to the unattended speech, demonstrating top-down attentional gain control. This attentional gain is substantial even during the subjects' very first exposure to the speech mixture and is therefore largely independent of knowledge of the speech content. Together, these findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
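The spectrotemporal response function mentioned in the abstract is, at its core, a regularized linear mapping from stimulus features (e.g., the speech envelope in each spectral band) to the recorded neural signal. As a simplified, hypothetical sketch of the idea (not the study's actual estimation procedure), a one-dimensional temporal response function can be fit with ridge regression over time-lagged copies of the stimulus; the `estimate_trf` helper and its parameters below are illustrative:

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, ridge=1.0):
    """Fit a linear temporal response function (TRF) mapping a stimulus
    feature to neural activity. Illustrative ridge-regression sketch; a
    full STRF adds a spectral (frequency-band) dimension."""
    T = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:T - lag]
    # Regularized least squares: w = (X'X + aI)^(-1) X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)
    return w

# Toy check: recover a known kernel from simulated data.
rng = np.random.default_rng(0)
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
stim = rng.standard_normal(5000)
resp = np.convolve(stim, true_trf)[:5000] + 0.01 * rng.standard_normal(5000)
est = estimate_trf(stim, resp, n_lags=5, ridge=1e-3)
print(est)
```

With low noise, the estimated kernel closely matches `true_trf`; in practice the ridge parameter is chosen by cross-validation, and the fitted TRF's peaks and latencies are interpreted as the strength and timing of the phase-locked cortical response.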
Citations
Journal Article (DOI)
EEG alpha and pupil diameter reflect endogenous auditory attention switching and listening effort
TL;DR: The authors developed two variants of endogenous auditory attention switching and a sustained-attention control, and characterized these three experimental conditions in the context of decoding auditory attention while simultaneously evaluating listening effort and neural markers of spatial audio cues.
Posted Content (DOI)
Neural attention filters do not predict behavioral success in a large cohort of aging listeners
TL;DR: Using an unprecedentedly large, age-varying sample, single-trial models and cross-validated predictive analyses challenge the immediate functional relevance of these neural attentional filters to overt behavior.
Posted Content (DOI)
Neurophysiological indices of audiovisual speech integration are enhanced at the phonetic level for speech in noise
Aisling E. O’Sullivan, Michael J. Crosse, Giovanni M. Di Liberto, Alain de Cheveigné, Edmund C. Lalor
TL;DR: The encoding of both spectrotemporal and phonetic features was shown to be more robust in audiovisual speech responses than would be expected from the summation of the audio and visual speech responses, consistent with the literature on multisensory integration.
Journal Article (DOI)
A Graphical Model for Online Auditory Scene Modulation Using EEG Evidence for Attention
Marzieh Haghighi, Mohammad Moghadamfalahi, Murat Akcakaya, Barbara G. Shinn-Cunningham, Deniz Erdogmus
TL;DR: It is demonstrated that using data- and model-driven cross-correlation features yields competitive binary auditory attention classification results with at most 20 s of EEG from 16 channels, or even from a single well-positioned channel, and that EEG-based auditory attention classifiers may generalize across individuals, leading to reduced or eliminated calibration time and effort.
Journal Article (DOI)
40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.
TL;DR: The 40 Hz ASSR evoked by modulated speech and reversed speech was investigated; the left auditory cortex response was reduced when healthy subjects listened to reversed speech compared with speech, indicating that the left-hemisphere auditory cortex responded more actively to speech.
References
Book
Elements of information theory
Thomas M. Cover, Joy A. Thomas
TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes.
Journal Article (DOI)
The cortical organization of speech processing
Gregory Hickok, David Poeppel
TL;DR: A dual-stream model of speech processing is outlined that assumes the ventral stream is largely bilaterally organized (although there are important computational differences between the left- and right-hemisphere systems) and that the dorsal stream is strongly left-hemisphere dominant.
Journal Article (DOI)
Some Experiments on the Recognition of Speech, with One and with Two Ears
TL;DR: In this paper, the relation between the messages received by the two ears was investigated, and two types of test were reported: (a) the behavior of a listener when presented with two speech signals simultaneously (statistical filtering problem) and (b) behavior when different speech signals are presented to his two ears.
Journal Article (DOI)
Speech recognition with primarily temporal cues.
TL;DR: Nearly perfect speech recognition was observed under conditions of greatly reduced spectral information; the presentation of a dynamic temporal pattern in only a few broad spectral regions is sufficient for the recognition of speech.
Journal Article (DOI)
Electrical Signs of Selective Attention in the Human Brain
TL;DR: Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear, in order to study the response set established to recognize infrequent, higher-pitched tone pips in the attended series.
Related Papers (5)
Emergence of neural encoding of auditory objects while listening to competing speakers
Nai Ding, Jonathan Z. Simon