scispace - formally typeset
Author

Katrin Krumbholz

Bio: Katrin Krumbholz is an academic researcher from the University of Nottingham. The author has contributed to research in topics: Auditory cortex & Binaural recording. The author has an h-index of 26 and has co-authored 64 publications receiving 2,193 citations. Previous affiliations of Katrin Krumbholz include Forschungszentrum Jülich & the University of Münster.


Papers
Journal ArticleDOI
TL;DR: A novel stimulus paradigm designed to circumvent the energy-onset response and thereby isolate the response of those neural elements specifically involved in pitch processing is described.
Abstract: There have been several attempts to use the neuromagnetic response to the onset of a tonal sound (N100m) to study pitch processing in auditory cortex. Unfortunately, a large proportion of the N100m is simply a response to the onset of sound energy, independent of whether the sound produces a pitch. The current study describes a novel stimulus paradigm designed to circumvent the energy-onset response and thereby isolate the response of those neural elements specifically involved in pitch processing. The temporal resolution of magnetoencephalography enables us to show that the latency and amplitude of this pitch-onset response (POR) vary with the pitch and pitch strength of the tone. The spatial resolution is sufficient to show that its source lies somewhat anterior and inferior to that of the N100m, probably in the medial part of Heschl’s gyrus.

241 citations

Journal ArticleDOI
TL;DR: A computational auditory model that extracts pitch information with autocorrelation can reproduce all of the observed effects, provided the contribution of longer time intervals is progressively reduced by a linear weighting function that limits the mechanism to time intervals of less than about 33 ms.
Abstract: An objective melody task was used to determine the lower limit of melodic pitch (LLMP) for harmonic complex tones. The LLMP was defined operationally as the repetition rate below which listeners could no longer recognize that one of the notes in a four-note, chromatic melody had changed by a semitone. In the first experiment, the stimuli were broadband tones with all their components in cosine phase, and the LLMP was found to be around 30 Hz. In the second experiment, the tones were filtered into bands about 1 kHz in width to determine the influence of frequency region on the LLMP. The results showed that whenever there was energy present below 800 Hz, the LLMP was still around 30 Hz. When the energy was limited to higher-frequency regions, however, the LLMP increased progressively, up to 270 Hz when the energy was restricted to the region above 3.2 kHz. In the third experiment, the phase relationship between spectral components was altered to determine whether the shape of the waveform affects the LLMP. When the envelope peak factor was reduced using the Schroeder phase relationship, the LLMP was not affected. When a secondary peak was introduced into the envelope of the stimuli by alternating the phase of successive components between two fixed values, there was a substantial reduction in the LLMP, for stimuli containing low-frequency energy. A computational auditory model that extracts pitch information with autocorrelation can reproduce all of the observed effects, provided the contribution of longer time intervals is progressively reduced by a linear weighting function that limits the mechanism to time intervals of less than about 33 ms.
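The key modeling claim above is that an autocorrelation pitch mechanism reproduces the LLMP data only if longer time intervals are down-weighted linearly, with a cutoff near 33 ms. The sketch below illustrates that idea in a minimal form; the function name, the 500 Hz upper search bound, and the specific ramp implementation are assumptions for illustration, not the authors' model code:

```python
import numpy as np

def weighted_autocorrelation_pitch(signal, fs, max_lag_ms=33.0):
    """Estimate repetition rate from the autocorrelation of `signal`,
    with the contribution of longer time intervals reduced by a linear
    weight that falls to zero at `max_lag_ms` (about 33 ms, as in the
    paper's weighting function). A sketch, not the published model."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    x = signal - np.mean(signal)
    # Full autocorrelation; keep non-negative lags 0..max_lag.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:len(x) - 1 + max_lag + 1]
    ac /= ac[0]  # normalize so lag 0 has value 1
    # Linear weighting: 1 at lag 0, falling to 0 at max_lag.
    lags = np.arange(max_lag + 1)
    ac *= 1.0 - lags / max_lag
    # Strongest peak at a non-trivial lag (ignore pitches above 500 Hz,
    # an assumed bound that skips the main lobe around lag 0).
    min_lag = int(fs / 500.0)
    best = min_lag + int(np.argmax(ac[min_lag:]))
    return fs / best  # estimated repetition rate in Hz

# Usage: a 100 Hz complex (fundamental plus one harmonic) should yield
# an estimate close to 100 Hz, since its period lag carries the largest
# weighted autocorrelation peak.
fs = 16000
t = np.arange(int(0.1 * fs))
sig = np.sin(2 * np.pi * 100 * t / fs) + 0.5 * np.sin(2 * np.pi * 200 * t / fs)
f0 = weighted_autocorrelation_pitch(sig, fs)
```

With a 33 ms cutoff, no interval longer than about one period of 30 Hz survives the weighting, which is how the model produces a lower limit near 30 Hz.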

177 citations

Journal ArticleDOI
TL;DR: This paper describes a functional magnetic resonance imaging experiment showing that interaural temporal processing of lateralized sounds produces an enhanced response in the contralateral planum temporale (PT), which is stronger when the sound is moving than when it is stationary.
Abstract: The localization of low-frequency sounds mainly relies on the processing of microsecond temporal disparities between the ears, since low frequencies produce little or no interaural energy differences. The overall auditory cortical response to low-frequency sounds is largely symmetrical between the two hemispheres, even when the sounds are lateralized. However, the effects of unilateral lesions in the superior temporal cortex suggest that the spatial information mediated by lateralized sounds is distributed asymmetrically across the hemispheres. This paper describes a functional magnetic resonance imaging experiment, which shows that the interaural temporal processing of lateralized sounds produces an enhanced response in the contralateral planum temporale (PT). The response is stronger and extends further into adjacent regions of the inferior parietal lobe (IPL) when the sound is moving than when it is stationary. This suggests that the interaural temporal information mediated by lateralized sounds is projected along a posterior pathway comprising the PT and IPL of the respective contralateral hemisphere. The differential responses to moving sounds further revealed that the left hemisphere responded predominantly to sound movement within the right hemifield, whereas the right hemisphere responded to sound movement in both hemifields. This rightward asymmetry parallels the asymmetry associated with the allocation of visuo-spatial attention and may underlie unilateral auditory neglect phenomena.

145 citations

Journal ArticleDOI
TL;DR: A hierarchical organization of auditory spatial processing is suggested in which the general analysis of binaural information begins as early as the brainstem, while the representation of dynamic binaural cues relies on non-primary auditory fields in the planum temporale.
Abstract: Horizontal sound localization relies on the extraction of binaural acoustic cues by integration of the signals from the two ears at the level of the brainstem. The present experiment was aimed at detecting the sites of binaural integration in the human brainstem using functional magnetic resonance imaging and a binaural difference paradigm, in which the responses to binaural sounds were compared with the sum of the responses to the corresponding monaural sounds. The experiment also included a moving sound condition, which was contrasted against a spectrally and energetically matched stationary sound condition to assess which of the structures that are involved in general binaural processing are specifically specialized in motion processing. The binaural difference contrast revealed a substantial binaural response suppression in the inferior colliculus in the midbrain, the medial geniculate body in the thalamus and the primary auditory cortex. The effect appears to reflect an actual reduction of the underlying activity, probably brought about by binaural inhibition or refractoriness at the level of the superior olivary complex. Whereas all structures up to and including the primary auditory cortex were activated as strongly by the stationary as by the moving sounds, non-primary auditory fields in the planum temporale responded selectively to the moving sounds. These results suggest a hierarchical organization of auditory spatial processing in which the general analysis of binaural information begins as early as the brainstem, while the representation of dynamic binaural cues relies on non-primary auditory fields in the planum temporale.
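The binaural difference paradigm described above compares the response to a binaural sound with the sum of the responses to the corresponding monaural sounds. A toy numerical illustration of that contrast (the response values are hypothetical, chosen only to show the arithmetic):

```python
import numpy as np

# Hypothetical per-voxel response amplitudes (arbitrary units) to the
# binaural sound and to each monaural sound presented alone.
resp_binaural = np.array([2.1, 1.8, 2.4])
resp_left     = np.array([1.5, 1.2, 1.6])
resp_right    = np.array([1.4, 1.3, 1.5])

# Binaural difference contrast: binaural response minus the sum of the
# monaural responses. Negative values indicate the binaural response
# suppression reported in the inferior colliculus, medial geniculate
# body, and primary auditory cortex.
bd = resp_binaural - (resp_left + resp_right)
```

In this made-up example every voxel yields a negative difference, i.e. the binaural response is smaller than the monaural sum, the signature the paper attributes to binaural inhibition or refractoriness at the level of the superior olivary complex.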

140 citations

Journal ArticleDOI
TL;DR: The hypothesis that in the low-frequency region, the pitch limit is determined by a temporal mechanism, which analyzes time intervals between peaks in the neural activity pattern, is supported.
Abstract: This paper is concerned with the lower limit of pitch for complex, harmonic sounds, like the notes produced by low-pitched musical instruments. The lower limit of pitch is investigated by measuring rate discrimination thresholds for harmonic tones filtered into 1.2-kHz-wide bands with a lower cutoff frequency, Fc, ranging from 0.2 to 6.4 kHz. When Fc is below 1 kHz and the harmonics are in cosine phase, rate discrimination threshold exhibits a rapid, tenfold decrease as the repetition rate is increased from 16 to 64 Hz, and over this range, the perceptual quality of the stimuli changes from flutter to pitch. When Fc is increased above 1 kHz, the slope of the transition from high to low thresholds becomes shallower and occurs at progressively higher rates. A quantitative comparison of the cosine-phase thresholds with subjective estimates of the existence region of pitch from the literature shows that the transition in rate discrimination occurs at approximately the same rate as the lower limit of pitch.

138 citations


Cited by
Journal ArticleDOI
TL;DR: An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds, based on the well-known autocorrelation method with a number of modifications that combine to prevent errors.
Abstract: An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.
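The abstract does not spell out the paper's specific modifications to the autocorrelation method. One widely used refinement in this family of F0 estimators is to replace the raw autocorrelation with a squared-difference function normalized by its cumulative mean, then pick the first lag whose normalized difference dips below a threshold. The sketch below illustrates that general idea under those assumptions; it is not the published algorithm:

```python
import numpy as np

def estimate_f0(x, fs, f_min=50.0, threshold=0.1):
    """Minimal F0 estimator: squared-difference function with
    cumulative-mean normalization, a common refinement of plain
    autocorrelation. A sketch, not the paper's exact method."""
    x = np.asarray(x, dtype=float)
    max_lag = int(fs / f_min)            # longest period considered
    n = len(x) - max_lag                 # fixed comparison window
    # d(tau): squared difference between the window and its lagged copy.
    d = np.array([np.sum((x[:n] - x[tau:tau + n]) ** 2)
                  for tau in range(max_lag + 1)])
    # Cumulative-mean normalization: stays near 1 for random lags and
    # dips toward 0 at the period, avoiding the spurious minimum at lag 0.
    dn = np.ones(max_lag + 1)
    dn[1:] = d[1:] * np.arange(1, max_lag + 1) / np.cumsum(d[1:])
    # First lag whose normalized difference falls below the threshold,
    # then walk down to the adjacent local minimum.
    candidates = np.where(dn[1:] < threshold)[0]
    tau = int(candidates[0]) + 1 if len(candidates) else int(np.argmin(dn[1:])) + 1
    while tau + 1 <= max_lag and dn[tau + 1] < dn[tau]:
        tau += 1
    return fs / tau

# Usage: a pure 200 Hz tone at fs = 8000 Hz has a 40-sample period,
# so the estimate should come out at 200 Hz.
fs = 8000
sig = np.sin(2 * np.pi * 200 * np.arange(4000) / fs)
f0 = estimate_f0(sig, fs)
```

The threshold-then-local-minimum search is one way such estimators avoid octave errors: the absolute minimum of the difference function often sits at a multiple of the true period, while the first sub-threshold dip sits at the period itself.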

1,975 citations

01 Jan 2016
Statistical parametric mapping: the analysis of functional brain images

1,719 citations

Journal ArticleDOI
14 Nov 2002-Neuron
TL;DR: The results support the view that there is a hierarchy of pitch processing in which the center of activity moves anterolaterally away from primary auditory cortex as the processing of melodic sounds proceeds.

730 citations

Journal ArticleDOI
TL;DR: Sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.
Abstract: The processing of spoken language has been attributed to areas in the superior temporal lobe, where speech stimuli elicit the greatest activation. However, neurobiological and psycholinguistic models have long postulated that knowledge about the articulatory features of individual phonemes has an important role in their perception and in speech comprehension. To probe the possible involvement of specific motor circuits in the speech-perception process, we used event-related functional MRI and presented experimental subjects with spoken syllables, including [p] and [t] sounds, which are produced by movements of the lips or tongue, respectively. Physically similar nonlinguistic signal-correlated noise patterns were used as control stimuli. In localizer experiments, subjects had to silently articulate the same syllables and, in a second task, move their lips or tongue. Speech perception most strongly activated superior temporal cortex. Crucially, however, distinct motor regions in the precentral gyrus sparked by articulatory movements of the lips and tongue were also differentially activated in a somatotopic manner when subjects listened to the lip- or tongue-related phonemes. This sound-related somatotopic activation in precentral gyrus shows that, during speech perception, specific motor circuits are recruited that reflect phonetic distinctive features of the speech sounds encountered, thus providing direct neuroimaging support for specific links between the phonological mechanisms for speech perception and production.

618 citations

Journal ArticleDOI
TL;DR: A short overview is provided of the exponentially growing area of research in arsenene, antimonene, and bismuthene, which belong to the fifth main group of elements, the so-called pnictogens.
Abstract: Two-dimensional materials are responsible for changing research in materials science. After graphene and its counterparts, graphane, fluorographene, and others were introduced, waves of renewed interest in 2D binary compounds occurred, such as in metal oxides, transition-metal dichalcogenides (most often represented by MoS2 ), metal oxy/hydroxide borides, and MXenes, to name the most prominent. Recently, interest has turned to two-dimensional monoelemental structures, such as monolayer black phosphorus and, very recently, to monolayer arsenic, antimony, and bismuth. Here, a short overview is provided of the area of exponentially increasing research in arsenene, antimonene, and bismuthene, which belong to the fifth main group of elements, the so-called pnictogens. A short review of historical work is provided, the properties of bulk allotropes of As, Sb, and Bi discussed, and then theoretical and experimental research on mono- and few-layered arsenene, antimonene, and bismuthene addressed, discussing their structures and properties.

558 citations