Journal ArticleDOI

Neural systems underlying British Sign Language and audio‐visual English processing in native users

01 Jul 2002 · Brain (Oxford University Press) · Vol. 125, Iss. 7, pp. 1583-1593
TL;DR: In this first neuroimaging study of the perception of British Sign Language (BSL), brain activation was measured using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task; the findings suggest that left temporal auditory regions may be privileged for processing heard speech even in hearing native signers.
Abstract: In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To what extent are the brain systems that support language processing shaped by auditory experience and to what extent by exposure to language, which may not necessarily be acoustically structured? In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task. Eight hearing, non-signing subjects performed an analogous task that involved audio-visual English sentences. The data support the argument that there are both modality-independent and modality-dependent language localization patterns in native users. In relation to modality-independent patterns, regions activated by both BSL in deaf signers and by spoken English in hearing non-signers included inferior prefrontal regions bilaterally (including Broca's area) and superior temporal regions bilaterally (including Wernicke's area). Lateralization patterns were similar for the two languages. There was no evidence of enhanced right-hemisphere recruitment for BSL processing in comparison with audio-visual English. In relation to modality-specific patterns, audio-visual speech in hearing subjects generated greater activation in the primary and secondary auditory cortices than BSL in deaf signers, whereas BSL generated enhanced activation in the posterior occipito-temporal regions (V5), reflecting the greater movement component of BSL. The influence of hearing status on the recruitment of sign language processing systems was explored by comparing deaf and hearing adults who had BSL as their first language (native signers). Deaf native signers demonstrated greater activation in the left superior temporal gyrus in response to BSL than hearing native signers. This important finding suggests that left temporal auditory regions may be privileged for processing heard speech even in hearing native signers. However, in the absence of auditory input this region can be recruited for visual processing.


Citations
Journal ArticleDOI
TL;DR: This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once the authors honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages.
Abstract: Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of "universal," we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition. Linguistic diversity then becomes the crucial datum for cognitive science: we are the only species with a communication system that is fundamentally variable at all levels. Recognizing the true extent of structural diversity in human language opens up exciting new research directions for cognitive scientists, offering thousands of different natural experiments given by different languages, with new opportunities for dialogue with biological paradigms concerned with change and diversity, and confronting us with the extraordinary plasticity of the highest human skills.

1,385 citations


Cites background from "Neural systems underlying British S..."

  • ...The neurocognition of sign does not look, for example, like the neurocognition of gesture, but instead recruits, for example, auditory cortex (MacSweeney et al. 2002; Nishimura et al. 1999)....

    [...]

  • ...…The modality transfer in sign versus spoken language can be exploited to explore the nature of language processing when the input/output systems are switched, thus allowing glimpses into language-specific cognition beyond the vocal-auditory specializations (Emmorey 2002; MacSweeney et al. 2002)....

    [...]

Journal ArticleDOI
TL;DR: Crossmodal neuroplasticity with regard to behavioural adaptation after sensory deprivation is discussed and the possibility of maladaptive consequences within the context of rehabilitation is highlighted.
Abstract: There is growing evidence that sensory deprivation is associated with crossmodal neuroplastic changes in the brain. After visual or auditory deprivation, brain areas that are normally associated with the lost sense are recruited by spared sensory modalities. These changes underlie adaptive and compensatory behaviours in blind and deaf individuals. Although there are differences between these populations owing to the nature of the deprived sensory modality, there seem to be common principles regarding how the brain copes with sensory loss and the factors that influence neuroplastic changes. Here, we discuss crossmodal neuroplasticity with regard to behavioural adaptation after sensory deprivation and highlight the possibility of maladaptive consequences within the context of rehabilitation.

613 citations

Journal ArticleDOI
TL;DR: The present paper focuses on four aspects of the model which have led to the current, updated version: the language generality assumption; the mismatch assumption; chronological age; and the episodic buffer function of rapid, automatic multimodal binding of phonology (RAMBPHO).
Abstract: A general working memory system for ease of language understanding (ELU; Rönnberg, 2003a) is presented. The purpose of the system is to describe and predict the dynamic interplay between explicit and implicit cognitive functions, especially in conditions of poorly perceived or poorly specified linguistic signals. In relation to speech understanding, the system is based on (1) the quality and precision of phonological representations in long-term memory, (2) phonologically mediated lexical access speed, and (3) explicit storage and processing resources. If there is a mismatch between phonological information extracted from the speech signal and the phonological information represented in long-term memory, the system is assumed to produce a mismatch signal that invokes explicit processing resources. In the present paper, we focus on four aspects of the model which have led to the current, updated version: the language generality assumption; the mismatch assumption; chronological age; and the episodic buffer function of rapid, automatic multimodal binding of phonology (RAMBPHO). We evaluate the language generality assumption in relation to sign language and speech, and the mismatch assumption in relation to signal processing in hearing aids. Further, we discuss the effects of chronological age and the implications of RAMBPHO.

440 citations


Cites background from "Neural systems underlying British S..."

  • ...…tactile speech (Levänen, 1998; Rönnberg, 1993), and auditory imagery (Zatorre, 2007), as well as by WM-relevant sign language functions (e.g. Capek et al., 2008; Emmorey et al., 2003; MacSweeney et al., 2002, 2008; McGuire et al., 1997; Petitto et al., 2000; Sakai et al., 2005; Rönnberg et al., 2000)....

    [...]

Journal ArticleDOI
TL;DR: A comparative model of shared, parallel, and distinctive features of the neural systems supporting music and language is outlined, assuming thatMusic and language show parallel combinatoric generativity for complex sound structures but distinctly different informational content (semantics).
Abstract: Parallel generational tasks for music and language were compared using positron emission tomography. Amateur musicians vocally improvised melodic or linguistic phrases in response to unfamiliar, auditorily presented melodies or phrases. Core areas for generating melodic phrases appeared to be in left Brodmann area (BA) 45, right BA 44, bilateral temporal planum polare, lateral BA 6, and pre-SMA. Core areas for generating sentences seemed to be in bilateral posterior superior and middle temporal cortex (BA 22, 21), left BA 39, bilateral superior frontal (BA 8, 9), left inferior frontal (BA 44, 45), anterior cingulate, and pre-SMA. Direct comparisons of the two tasks revealed activations in nearly identical functional brain areas, including the primary motor cortex, supplementary motor area, Broca’s area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus, and posterior cerebellum. Most of the differences between melodic and sentential generation were seen in lateralization tendencies, with the language task favouring the left hemisphere. However, many of the activations for each modality were bilateral, and so there was significant overlap. While clarification of this overlapping activity awaits higher-resolution measurements and interventional assessments, plausible accounts for it include component sharing, interleaved representations, and adaptive coding. With these and related findings, we outline a comparative model of shared, parallel, and distinctive features of the neural systems supporting music and language. The model assumes that music and language show parallel combinatoric generativity for complex sound structures (phonology) but distinctly different informational content (semantics).

362 citations

References
Journal ArticleDOI

9,362 citations


"Neural systems underlying British S..." refers methods in this paper

  • ...An inversion recovery EPI (echoplanar imaging) data set was also acquired to facilitate registration of each individual's fMRI data set to Talairach space (Talairach and Tournoux, 1988)....

    [...]

  • ...The voxel-wise SSQ ratios calculated for each subject from the observed data and following time-series permutation were transformed into the standard space of Talairach and Tournoux (1988) as described previously (Brammer et al., 1997)....

    [...]

Journal ArticleDOI
TL;DR: Almost entirely automated procedures for estimation of global, voxel, and cluster-level statistics to test the null hypothesis of zero neuroanatomical difference between two groups of structural magnetic resonance imaging (MRI) data are described.
Abstract: The authors describe almost entirely automated procedures for estimation of global, voxel, and cluster-level statistics to test the null hypothesis of zero neuroanatomical difference between two groups of structural magnetic resonance imaging (MRI) data. Theoretical distributions under the null hypothesis are available for (1) global tissue class volumes; (2) standardized linear model [analysis of variance (ANOVA and ANCOVA)] coefficients estimated at each voxel; and (3) an area of spatially connected clusters generated by applying an arbitrary threshold to a two-dimensional (2-D) map of normal statistics at voxel level. The authors describe novel methods for economically ascertaining probability distributions under the null hypothesis, with fewer assumptions, by permutation of the observed data. Nominal Type I error control by permutation testing is generally excellent, whereas theoretical distributions may be overly conservative. Permutation has the additional advantage that it can be used to test any statistic of interest, such as the sum of suprathreshold voxel statistics in a cluster (or cluster mass), regardless of its theoretical tractability under the null hypothesis. These issues are illustrated by application to MRI data acquired from 18 adolescents with hyperkinetic disorder and 16 control subjects matched for age and gender.
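The cluster-mass permutation logic described in this abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's pipeline: the 1-D "voxel" arrays, group sizes, mean-difference statistic, and threshold are all assumptions chosen for brevity, and connectivity is reduced to adjacency along one dimension.

```python
# Toy cluster-mass permutation test: compare the observed maximum cluster
# mass against a null distribution built by shuffling group labels.
import numpy as np

rng = np.random.default_rng(0)

def cluster_masses(stat_map, threshold):
    """Sum suprathreshold statistics within each run of adjacent
    suprathreshold voxels (1-D stand-in for spatial connectivity)."""
    masses, current = [], 0.0
    for s in stat_map:
        if s > threshold:
            current += s
        elif current > 0:
            masses.append(current)
            current = 0.0
    if current > 0:
        masses.append(current)
    return masses

def max_cluster_mass(group_a, group_b, threshold):
    """Voxelwise mean-difference map, then the largest cluster mass."""
    diff = group_a.mean(axis=0) - group_b.mean(axis=0)
    masses = cluster_masses(diff, threshold)
    return max(masses) if masses else 0.0

# Simulated data: 18 vs 16 subjects, 200 'voxels', an effect in voxels 80-100.
a = rng.normal(0.0, 1.0, size=(18, 200)); a[:, 80:100] += 1.0
b = rng.normal(0.0, 1.0, size=(16, 200))

threshold = 0.5
observed = max_cluster_mass(a, b, threshold)

# Null distribution: permute the group labels and recompute the statistic.
pooled = np.vstack([a, b])
null = []
for _ in range(1000):
    perm = rng.permutation(len(pooled))
    null.append(max_cluster_mass(pooled[perm[:18]], pooled[perm[18:]], threshold))

p = (1 + sum(m >= observed for m in null)) / (1 + len(null))
print(f"observed max cluster mass = {observed:.1f}, permutation p = {p:.3f}")
```

As the abstract notes, the appeal of this approach is that the same recipe works for any statistic one cares to compute from the permuted data, whether or not its theoretical null distribution is tractable.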

1,036 citations


"Neural systems underlying British S..." refers methods in this paper

  • ...The sum of a1* for each cluster was then tested for significance against the identically derived randomization distribution (Bullmore et al., 1999)....

    [...]

  • ...This comprised 43 near-axial 3 mm slices (0.3 mm gap), which were acquired parallel to the AC–PC line (TE = 80 ms, TI (inversion time) = 180 ms, TR = 16 s).

    [...]

Journal ArticleDOI
25 Apr 1997 · Science
TL;DR: Three experiments suggest that these auditory cortical areas are not engaged when an individual is viewing nonlinguistic facial movements but appear to be activated by silent meaningless speechlike movements (pseudospeech), which supports psycholinguistic evidence that seen speech influences the perception of heard speech at a prelexical stage.
Abstract: Watching a speaker's lips during face-to-face conversation (lipreading) markedly improves speech perception, particularly in noisy conditions. With functional magnetic resonance imaging it was found that these linguistic visual cues are sufficient to activate auditory cortex in normal hearing individuals in the absence of auditory speech sounds. Two further experiments suggest that these auditory cortical areas are not engaged when an individual is viewing nonlinguistic facial movements but appear to be activated by silent meaningless speechlike movements (pseudospeech). This supports psycholinguistic evidence that seen speech influences the perception of heard speech at a prelexical stage.

963 citations


"Neural systems underlying British S..." refers background or methods in this paper

  • ...We have shown that hearing people reliably activate the auditory cortices, often including the primary auditory cortex, during silent speech-reading (Calvert et al., 1997; MacSweeney et al., 2000, 2001)....

    [...]

  • ...In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task....

    [...]

Journal ArticleDOI
TL;DR: The theory and techniques behind conclusions drawn from nonlinear system identification using Volterra series are described, and the implications for experimental design and analysis are discussed.
Abstract: This paper presents an approach to characterizing evoked hemodynamic responses in fMRI based on nonlinear system identification, in particular the use of Volterra series. The approach employed enables one to estimate Volterra kernels that describe the relationship between stimulus presentation and the hemodynamic responses that ensue. Volterra series are essentially high-order extensions of linear convolution or "smoothing." These kernels, therefore, represent a nonlinear characterization of the hemodynamic response function that can model the responses to stimuli in different contexts (in this work, different rates of word presentation) and interactions among stimuli. The nonlinear components of the responses were shown to be statistically significant, and the kernel estimates were validated using an independent event-related fMRI experiment. One important manifestation of these nonlinear effects is a modulation of stimulus-specific responses by preceding stimuli that are proximate in time. This means that responses at high-stimulus presentation rates saturate and, in some instances, show an inverted U behavior. This behavior appears to be specific to BOLD effects (as distinct from evoked changes in cerebral blood flow) and may represent a hemodynamic "refractoriness." The aim of this paper is to describe the theory and techniques upon which these conclusions were based and to discuss the implications for experimental design and analysis.
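The saturation effect the abstract describes can be reproduced with a toy second-order Volterra model. The kernels below are illustrative assumptions (a gamma-shaped first-order kernel and a negative separable second-order kernel), not the estimates from the paper; the point is only that a negative second-order term makes closely spaced stimuli sub-additive, so responses at high presentation rates saturate.

```python
# Toy second-order Volterra model of an evoked haemodynamic response.
import numpy as np

dt = 1.0                      # time step (s)
t = np.arange(0, 30, dt)      # kernel support (s)

# First-order kernel: gamma-shaped impulse response (assumed form).
k1 = (t / 6.0) ** 2 * np.exp(-t / 3.0)

# Second-order kernel: negative interaction between stimuli that are
# proximate in time (assumed separable form).
k2 = -0.5 * np.outer(k1, k1)

def volterra_response(stim):
    """y(i) = sum_u k1(u) x(i-u) + sum_{u,v} k2(u,v) x(i-u) x(i-v)."""
    n, m = len(stim), len(t)
    y = np.zeros(n)
    for i in range(n):
        for u in range(min(m, i + 1)):
            y[i] += k1[u] * stim[i - u]
            for v in range(min(m, i + 1)):
                y[i] += k2[u, v] * stim[i - u] * stim[i - v]
    return y

# Compare a slow rate (one event per 16 s) with a fast rate (one per 2 s).
slow = np.zeros(120); slow[::16] = 1.0
fast = np.zeros(120); fast[::2] = 1.0
print("peak response, slow rate:", volterra_response(slow).max().round(3))
print("peak response, fast rate:", volterra_response(fast).max().round(3))
# The fast-rate peak is well under 8x the slow-rate peak: the nonlinear
# term produces the rate-dependent saturation ('refractoriness') above.
```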

652 citations


"Neural systems underlying British S..." refers methods in this paper

  • ...Following motion correction, a least-squares fit was carried out between the observed time series at each voxel and a mixture of two one-parameter gamma variate functions (peak responses 4 and 8 s) convolved with the experimental design (Friston et al., 1998)....

    [...]
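
The fitting step quoted above can be illustrated with a minimal NumPy sketch. The parametric form h(t) = t**p * exp(-t) (which peaks at t = p), the TR, the boxcar design, and the simulated voxel time series are all assumptions for illustration; only the overall recipe, two gamma variates with 4 s and 8 s peaks convolved with the design and fit by least squares, follows the quote.

```python
# Toy version of the quoted fit: two gamma variate regressors (peaks at
# 4 s and 8 s) convolved with an experimental design, fit to a voxel
# time series by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
TR, n_scans = 2.0, 150
t = np.arange(0, 30, TR)                     # HRF support (s)

def gamma_variate(peak):
    """One-parameter gamma variate h(t) = t**p * exp(-t), unit peak."""
    h = t ** peak * np.exp(-t)
    return h / h.max()

# Boxcar design: alternating 30 s off / 30 s on epochs.
design = (np.arange(n_scans) // 15) % 2

# Design matrix: each basis function convolved with the design, plus baseline.
X = np.column_stack([
    np.convolve(design, gamma_variate(4.0))[:n_scans],
    np.convolve(design, gamma_variate(8.0))[:n_scans],
    np.ones(n_scans),
])

# Simulated voxel time series: a mixture of the two responses plus noise.
y = X @ np.array([1.2, 0.4, 10.0]) + rng.normal(0, 0.5, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted weights (peak-4, peak-8, baseline):", beta.round(2))
```

Using two peaks lets the fit absorb variability in response latency across voxels and subjects, which is one reason mixtures of gamma variates are a common choice of haemodynamic basis set.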