Author

Nathaniel J. Zuk

Bio: Nathaniel J. Zuk is an academic researcher from Trinity College Dublin. The author has contributed to research on the topics of Neural coding and Perception, has an h-index of 4, and has co-authored 13 publications receiving 44 citations. Previous affiliations of Nathaniel J. Zuk include the University of Rochester and the Massachusetts Eye and Ear Infirmary.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors focus on experimental design, data preprocessing and stimulus feature extraction, model design, training and evaluation, and the interpretation of model weights, and demonstrate how to implement each stage in MATLAB using the mTRF-Toolbox.
Abstract: Cognitive neuroscience has seen an increase in the use of linear modelling techniques for studying the processing of natural, environmental stimuli. The availability of such computational tools has prompted similar investigations in many clinical domains, facilitating the study of cognitive and sensory deficits within an ecologically relevant context. However, studying clinical (and often highly heterogeneous) cohorts introduces an added layer of complexity to such modelling procedures, leading to an increased risk of improper usage of such techniques and, as a result, inconsistent conclusions. Here, we outline some key methodological considerations for applied research and include worked examples of both simulated and empirical electrophysiological (EEG) data. In particular, we focus on experimental design, data preprocessing and stimulus feature extraction, model design, training and evaluation, and interpretation of model weights. Throughout the paper, we demonstrate how to implement each stage in MATLAB using the mTRF-Toolbox and discuss how to address issues that could arise in applied cognitive neuroscience research. In doing so, we highlight the importance of understanding these more technical points for experimental design and data analysis, and provide a resource for applied and clinical researchers investigating sensory and cognitive processing using ecologically rich stimuli.
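The core of the pipeline the abstract describes, a time-lagged linear model fit by regularised regression, can be illustrated compactly. The sketch below is not the mTRF-Toolbox itself (that is a MATLAB package); it is a minimal Python analogue of a forward TRF fit by ridge regression, with hypothetical function names (lag_matrix, train_trf) and random placeholder data standing in for a real recording.

```python
import numpy as np

def lag_matrix(stim, fs, tmin, tmax):
    """Build a time-lagged design matrix from a 1-D stimulus feature.

    Columns correspond to lags from tmin to tmax (in seconds);
    samples shifted past the edges are zero-padded.
    """
    lags = np.arange(int(np.floor(tmin * fs)), int(np.ceil(tmax * fs)) + 1)
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        if lag < 0:
            X[:lag, j] = stim[-lag:]
        elif lag > 0:
            X[lag:, j] = stim[:-lag]
        else:
            X[:, j] = stim
    return X

def train_trf(stim, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e3):
    """Fit a forward TRF (stimulus -> EEG) by ridge regression.

    stim : (n_samples,) stimulus feature, e.g. a speech envelope
    eeg  : (n_samples, n_channels) EEG recording
    Returns weights of shape (n_lags, n_channels).
    """
    X = lag_matrix(stim, fs, tmin, tmax)
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ eeg)

# Random data standing in for a real recording:
fs = 128
rng = np.random.default_rng(0)
stim = rng.standard_normal(fs * 60)          # 1 minute of "envelope"
eeg = rng.standard_normal((fs * 60, 32))     # 32 "EEG" channels
weights = train_trf(stim, eeg, fs)
print(weights.shape)                          # (n_lags, 32)
```

In practice the regularisation parameter lam would be chosen by cross-validation and the model evaluated on held-out data, as the paper's worked examples emphasise.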

27 citations

Journal ArticleDOI
TL;DR: EEG techniques used here suggest that the brain produces temporally individualized responses to speech and music sounds that are stronger than the responses to other natural sounds.

26 citations

Journal ArticleDOI
TL;DR: In this article, a method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening was used to compare neural tracking of speech and music envelopes.
Abstract: The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one’s cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
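One plausible reading of "frequency-constrained reconstruction" is to band-limit the target envelope before fitting a backward (EEG-to-stimulus) model and scoring each band by correlation. The sketch below assumes exactly that and simplifies aggressively (no time lags, no cross-validation); the paper's actual procedure may differ, and all names and data here are placeholders.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, fs, lo, hi, order=2):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=0)

def band_reconstruction_r(eeg, envelope, fs, band, lam=1e3):
    """Fit a backward (EEG -> envelope) ridge model on a band-limited
    envelope and return the correlation between reconstruction and
    target. Fit and evaluation use the same data here for brevity;
    a real analysis would cross-validate and use time-lagged EEG."""
    target = bandpass(envelope, fs, *band)
    X = eeg
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
    return np.corrcoef(X @ w, target)[0, 1]

fs = 128
rng = np.random.default_rng(1)
envelope = rng.standard_normal(fs * 60)   # placeholder stimulus envelope
eeg = rng.standard_normal((fs * 60, 32))  # placeholder 32-channel EEG
for band in [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]:
    print(band, round(band_reconstruction_r(eeg, envelope, fs, band), 3))
```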

16 citations

Journal ArticleDOI
TL;DR: Bottom-up processes alone are insufficient to produce beat-locked activity; a learned, possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves tempo identification.
Abstract: Prior research has shown that musical beats are salient at the level of the cortex in humans. Yet below the cortex there is considerable subcortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio frequency bias for beat timing, could result from subcortical processing. Here, we used models of the auditory nerve and of midbrain-level amplitude-modulation filtering to simulate subcortical neural activity to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, subcortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music. For music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Bottom-up processes alone are therefore insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification.
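The final step the abstract describes, scaling the dominant synchronization frequency into a plausible tempo range, can be sketched as follows. This is a hypothetical illustration, not the paper's algorithm: the auditory-nerve and midbrain models are not reproduced, and the function names, the 0.5-8 Hz search range, and the 60-160 BPM tempo range are assumptions.

```python
import numpy as np

def dominant_modulation_freq(activity, fs):
    """Frequency (Hz) of the largest spectral peak in simulated neural
    activity, searched in an assumed 0.5-8 Hz modulation range."""
    spec = np.abs(np.fft.rfft(activity - activity.mean()))
    freqs = np.fft.rfftfreq(len(activity), 1 / fs)
    mask = (freqs >= 0.5) & (freqs <= 8.0)
    return freqs[mask][np.argmax(spec[mask])]

def estimate_tempo_bpm(activity, fs, lo=60.0, hi=160.0):
    """Scale the dominant synchronization frequency by factors of two
    until it falls in a plausible tempo range, mimicking a top-down
    'octave correction' of the beat frequency."""
    bpm = dominant_modulation_freq(activity, fs) * 60.0
    while bpm > hi:
        bpm /= 2.0
    while bpm < lo:
        bpm *= 2.0
    return bpm

# Toy check: activity locked to 4 Hz, a multiple of a 2 Hz (120 BPM) beat
fs = 1000
t = np.arange(0, 30, 1 / fs)
activity = 1 + np.cos(2 * np.pi * 4.0 * t)
print(estimate_tempo_bpm(activity, fs))  # 120.0
```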

16 citations

Posted ContentDOI
14 Dec 2020 - bioRxiv
TL;DR: Electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener’s understanding of those words relative to that context, and this work highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
Abstract: Speech comprehension relies on the ability to understand the meaning of words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. While the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener. To test this possibility, we recorded EEG from subjects who listened to speech presented in either its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability of subjects to comprehend the narrative content of the speech, but not the ability to recognize the individual words. Neural indices of semantic understanding and low-level acoustic processing were derived for each scrambling condition using the temporal response function (TRF) approach. Signatures of semantic processing were observed for conditions where speech was unscrambled or minimally scrambled and subjects were able to understand the speech. The same markers were absent for higher levels of scrambling when speech comprehension dropped below chance. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across the different scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener's understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
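The semantic-dissimilarity regressor at the heart of this approach is typically built from word embeddings: each word is scored by how dissimilar its vector is to the vectors of the words preceding it, and the scores are placed at word onsets before TRF fitting. The sketch below assumes cosine similarity against the running mean of all preceding words; the study's exact similarity measure, context window, and embedding model may differ, and the data here are random placeholders.

```python
import numpy as np

def semantic_dissimilarity(word_vectors):
    """For each word, 1 minus the cosine similarity between its embedding
    and the average embedding of all preceding words. The first word is
    assigned 0 by convention here, since it has no context yet.

    word_vectors : (n_words, dim) array of word embeddings
    """
    n = len(word_vectors)
    out = np.zeros(n)
    for i in range(1, n):
        context = word_vectors[:i].mean(axis=0)
        w = word_vectors[i]
        cos = w @ context / (np.linalg.norm(w) * np.linalg.norm(context))
        out[i] = 1.0 - cos
    return out

# Toy example with random vectors standing in for real embeddings
rng = np.random.default_rng(2)
vecs = rng.standard_normal((10, 300))  # e.g. 300-d word2vec-style vectors
print(np.round(semantic_dissimilarity(vecs), 2))
```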

9 citations


Cited by

01 Jan 2010
TL;DR: Since Luo Chuankai first translated "EML" as 民族音乐学 (ethnomusicology), related concepts such as 音乐人类学 (music anthropology) have also appeared; this paper reviews the development of "EML" in China and asks whether these concepts are interchangeable translations of the term.

Abstract: Since Mr. Luo Chuankai translated "EML" as 民族音乐学 (ethnomusicology), the term has been in use in Chinese musicological circles for nearly 30 years. As "EML" has intersected with other disciplines, its objects of study, scope, and methods have expanded accordingly, and distinct academic concepts such as 音乐人类学 (music anthropology), 音乐文化人类学 (anthropology of musical culture), and 应用音乐人类学 (applied music anthropology) have appeared. Whether these concepts are all Chinese translations of "EML", and whether they may be used interchangeably, is the question this paper takes up: by reviewing the development of "EML" in China and analysing several related concepts, the author sets out his own view.

25 citations

01 Jan 2011
TL;DR: The authors found that the beat elicits a sustained periodic EEG response tuned to the beat frequency, while imagining a meter elicits an additional periodic response at the frequency of the imagined metric interpretation of that beat.
Abstract: Feeling the beat and meter is fundamental to the experience of music. However, how these periodicities are represented in the brain remains largely unknown. Here, we test whether this function emerges from the entrainment of neurons resonating to the beat and meter. We recorded the electroencephalogram while participants listened to a musical beat and imagined a binary or a ternary meter of this beat (i.e. a march or a waltz). We found that the beat elicits a sustained periodic EEG response tuned to the beat frequency. Most importantly, we found that meter imagery elicits an additional frequency tuned to the corresponding metric interpretation of this beat. These results provide compelling evidence that neural entrainment to beat and meter can be captured directly in the electroencephalogram. More generally, our results suggest that music constitutes a unique context to explore entrainment phenomena in dynamic cognitive processing at the level of neural networks.
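The frequency-tagging logic here is easy to illustrate: take the EEG spectrum and read out the amplitude at the beat frequency and at its meter-related subdivisions (f/2 for a binary meter, f/3 for a ternary one). The sketch below is a generic illustration under that assumption, using a synthetic signal; the 2.4 Hz beat frequency, component amplitudes, and recording parameters are placeholders, not the study's values.

```python
import numpy as np

def amplitude_at(eeg, fs, freq):
    """Spectral amplitude of an EEG channel at a target frequency,
    taken from the FFT bin nearest to that frequency."""
    spec = np.abs(np.fft.rfft(eeg - eeg.mean())) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Toy signal: a 2.4 Hz "beat" response plus a 0.8 Hz "ternary meter"
# response (2.4 / 3), buried in noise.
fs, dur, beat_f = 512, 60, 2.4
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
eeg = (np.sin(2 * np.pi * beat_f * t)
       + 0.5 * np.sin(2 * np.pi * beat_f / 3 * t)
       + rng.standard_normal(len(t)))

for label, f in [("beat", beat_f), ("binary meter", beat_f / 2),
                 ("ternary meter", beat_f / 3)]:
    print(f"{label:14s} {f:4.2f} Hz  amp = {amplitude_at(eeg, fs, f):.3f}")
```

Run on this toy signal, the beat and ternary-meter bins stand out above the noise floor while the binary-meter bin does not, mirroring the kind of contrast the study reports between imagined meters.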

24 citations

Journal ArticleDOI
TL;DR: Findings suggest that bass felt in the body produces a multimodal auditory-tactile percept that promotes movement through the close connection between tactile and motor systems.
Abstract: Music is both heard and felt; tactile sensation is especially pronounced for bass frequencies. Although bass frequencies have been associated with enhanced bodily movement, time perception, and groove (the musical quality that compels movement), the underlying mechanism remains unclear. In two experiments, we presented high-groove music to the auditory and tactile senses and examined whether tactile sensation affected body movement and ratings of enjoyment and groove. In Experiment 1, participants (N = 22) sat in a parked car and listened to music clips over sound-isolating earphones (auditory-only condition), and over earphones plus a subwoofer that stimulated the body (auditory-tactile condition). Experiment 2 (N = 18) also presented music in auditory-only and auditory-tactile conditions, but used a vibrotactile backpack to stimulate the body and included two loudness levels. Participants tapped their finger with each clip and rated each clip, and, in Experiment 1, we additionally video-recorded spontaneous body movement. Results showed that the auditory-tactile condition yielded more forceful tapping, more spontaneous body movement, and higher ratings of groove and enjoyment. Loudness had a small but significant effect on ratings. In sum, the findings suggest that bass felt in the body produces a multimodal auditory-tactile percept that promotes movement through the close connection between the tactile and motor systems. We discuss links to embodied aesthetics and applications of tactile stimulation to boost rhythmic movement and reduce hearing damage.

24 citations