Open Access Journal Article

Neural coding of continuous speech in auditory cortex during monaural and dichotic listening

Nai Ding, +1 more
01 Jan 2012, Vol. 107, Iss. 1, pp. 78-89
TLDR
These findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
Abstract
The cortical representation of the acoustic features of continuous speech is the foundation of speech perception. In this study, noninvasive magnetoencephalography (MEG) recordings are obtained from human subjects actively listening to spoken narratives, in both simple and cocktail party-like auditory scenes. By modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, we demonstrate that the slow temporal modulations of speech in a broad spectral region are represented bilaterally in auditory cortex by a phase-locked temporal code. For speech presented monaurally to either ear, this phase-locked response is always more faithful in the right hemisphere, but with a shorter latency in the hemisphere contralateral to the stimulated ear. When different spoken narratives are presented to each ear simultaneously (dichotic listening), the resulting cortical neural activity precisely encodes the acoustic features of both of the spoken narratives, but slightly weakened and delayed compared with the monaural response. Critically, the early sensory response to the attended speech is considerably stronger than that to the unattended speech, demonstrating top-down attentional gain control. This attentional gain is substantial even during the subjects' very first exposure to the speech mixture and therefore largely independent of knowledge of the speech content. Together, these findings characterize how the spectrotemporal features of speech are encoded in human auditory cortex and establish a single-trial-based paradigm to study the neural basis underlying the cocktail party phenomenon.
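
The core method described above, modeling how acoustic features of speech are encoded in ongoing MEG activity as a spectrotemporal response function, amounts to a regularized linear regression over time-lagged stimulus features. Below is a minimal sketch of the single-band (temporal response function) case using ridge regression; the sampling rate, lag range, regularization value, and variable names are illustrative assumptions, not the authors' actual analysis parameters. The full spectrotemporal case simply stacks the lagged features of every spectral band into the same design matrix.

```python
# Minimal sketch: estimate a temporal response function (TRF) mapping the
# speech temporal envelope to one MEG channel via ridge regression.
# All parameters here are illustrative, not the paper's settings.
import numpy as np

def estimate_trf(stimulus, response, fs=200, max_lag_s=0.5, ridge=1e3):
    """Fit response(t) ~= sum_k w[k] * stimulus(t - k); return the lag weights w."""
    n_lags = int(max_lag_s * fs)
    # Time-lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.column_stack([np.roll(stimulus, k) for k in range(n_lags)])
    X[:n_lags, :] = 0.0                                # discard wrapped-around samples
    XtX = X.T @ X + ridge * np.eye(n_lags)             # ridge-regularized normal equations
    return np.linalg.solve(XtX, X.T @ response)

# Synthetic check: recover a known exponential kernel from noisy "MEG" data.
rng = np.random.default_rng(0)
envelope = rng.standard_normal(200 * 60)               # 60 s of envelope at 200 Hz
true_kernel = np.exp(-np.arange(100) / 20.0)
meg = np.convolve(envelope, true_kernel)[: envelope.size]
meg += 0.5 * rng.standard_normal(envelope.size)
trf = estimate_trf(envelope, meg)                      # should resemble true_kernel
```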


Citations
Journal Article

Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise.

TL;DR: In this paper, the authors investigated the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG).
Journal Article

Auditory stimulus-response modeling with a Match-Mismatch task.

TL;DR: In this paper, the authors focus on the match-mismatch (MM) task, which determines whether a segment of brain signal matches, via a model, the auditory stimulus that evoked it.
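
As a concrete illustration of that decision rule, here is a minimal sketch assuming a stimulus reconstruction has already been produced by some decoding model (for example, a backward TRF trained elsewhere); the function and variable names are hypothetical. Accuracy over many such segments then serves as the evaluation metric for the stimulus-response model.

```python
# Minimal sketch of a match-mismatch decision: label as "match" whichever
# candidate segment correlates better with the stimulus reconstructed from the
# brain signal. The reconstruction is assumed to come from a separate decoder.
import numpy as np

def pearson(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def is_match(reconstruction, matched_candidate, mismatched_candidate):
    """Return True when the reconstruction is closer to the matched candidate."""
    return pearson(reconstruction, matched_candidate) > pearson(reconstruction, mismatched_candidate)
```
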
Journal Article

Cortical Classification with Rhythm Entropy for Error Processing in Cocktail Party Environment Based on Scalp EEG Recording

TL;DR: The findings revealed that rhythm information derived from single cortical signals can characterize error-related EEG signals, providing a novel auditory-attention application for brain-computer interfaces (BCIs).
Journal Article

Supervised binaural source separation using auditory attention detection in realistic scenarios

TL;DR: A new robust binaural speech separation system based on a supervised deep neural network (DNN) is introduced to separate the attended speaker under different simulated room conditions, and can serve as a key processing stage in neuro-steered hearing aid devices used in cocktail party scenarios.
Journal Article

Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience

12 Sep 2022
TL;DR: Weineck et al. found that the spectral flux of music, rather than the amplitude envelope, evokes the strongest neural response, and that music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest responses.
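
Spectral flux, the feature highlighted in that study, measures how quickly the short-time spectrum changes from frame to frame. A minimal sketch of one common formulation (half-wave rectified frame-to-frame spectral difference) follows; frame length, hop size, and windowing are illustrative choices, not the study's exact settings.

```python
# Minimal sketch: spectral flux as the summed positive change in short-time
# spectral magnitude between consecutive frames. Parameters are illustrative.
import numpy as np

def spectral_flux(signal, fs, frame_len=0.025, hop=0.010):
    n, h = int(frame_len * fs), int(hop * fs)
    window = np.hanning(n)
    frames = np.array([signal[i:i + n] * window
                       for i in range(0, len(signal) - n, h)])
    mags = np.abs(np.fft.rfft(frames, axis=1))         # magnitude spectrum per frame
    diff = np.diff(mags, axis=0)                       # frame-to-frame spectral change
    return np.maximum(diff, 0.0).sum(axis=1)           # keep only increases (onsets)
```
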
References
Book

Elements of information theory

TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Journal Article

The cortical organization of speech processing

TL;DR: A dual-stream model of speech processing is outlined that assumes the ventral stream is largely bilaterally organized (although there are important computational differences between the left- and right-hemisphere systems) and that the dorsal stream is strongly left-hemisphere dominant.
Journal Article

Some Experiments on the Recognition of Speech, with One and with Two Ears

TL;DR: In this paper, the relation between the messages received by the two ears was investigated, and two types of test were reported: (a) the behavior of a listener presented with two speech signals simultaneously (a statistical filtering problem) and (b) the behavior when different speech signals are presented to the listener's two ears.
Journal Article

Speech recognition with primarily temporal cues.

TL;DR: Nearly perfect speech recognition was observed under conditions of greatly reduced spectral information; the presentation of a dynamic temporal pattern in only a few broad spectral regions is sufficient for the recognition of speech.
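
The stimulus manipulation this reference describes, preserving only the slow temporal envelopes of a few broad spectral bands, is commonly implemented as a noise vocoder. The sketch below shows the general idea with three bands; the band edges, filter order, and envelope extraction method are assumptions for illustration, not the original study's parameters.

```python
# Minimal noise-vocoder sketch: extract the temporal envelope of a few broad
# bands and use each envelope to modulate band-limited noise. Illustrative only;
# assumes the sampling rate is well above twice the highest band edge.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, band_edges=(100, 800, 1500, 4000)):
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, speech)                  # isolate one broad spectral band
        envelope = np.abs(hilbert(band))               # temporal envelope of the band
        carrier = filtfilt(b, a, rng.standard_normal(len(speech)))
        out += envelope * carrier                      # envelope-modulated noise in that band
    return out / (np.max(np.abs(out)) + 1e-12)         # simple peak normalization
```
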
Journal Article

Electrical Signs of Selective Attention in the Human Brain

TL;DR: Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear, in order to study the response set established to recognize infrequent, higher-pitched tone pips in the attended series.