Author

Mairéad MacSweeney

Other affiliations: UCL Institute of Child Health
Bio: Mairéad MacSweeney is an academic researcher from University College London. The author has contributed to research in topics: Speechreading & Sign language. The author has an h-index of 26, co-authored 72 publications receiving 2720 citations. Previous affiliations of Mairéad MacSweeney include UCL Institute of Child Health.


Papers
Journal ArticleDOI
01 Jul 2002-Brain
TL;DR: In this first neuroimaging study of the perception of British Sign Language (BSL), brain activation was measured using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task. The findings suggest that left-temporal auditory regions may be privileged for processing heard speech even in hearing native signers.
Abstract: In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To what extent are the brain systems that support language processing shaped by auditory experience and to what extent by exposure to language, which may not necessarily be acoustically structured? In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task. Eight hearing, non-signing subjects performed an analogous task that involved audio-visual English sentences. The data support the argument that there are both modality-independent and modality-dependent language localization patterns in native users. In relation to modality-independent patterns, regions activated by both BSL in deaf signers and by spoken English in hearing non-signers included inferior prefrontal regions bilaterally (including Broca's area) and superior temporal regions bilaterally (including Wernicke's area). Lateralization patterns were similar for the two languages. There was no evidence of enhanced right-hemisphere recruitment for BSL processing in comparison with audio-visual English. In relation to modality-specific patterns, audio-visual speech in hearing subjects generated greater activation in the primary and secondary auditory cortices than BSL in deaf signers, whereas BSL generated enhanced activation in the posterior occipito-temporal regions (V5), reflecting the greater movement component of BSL. 
The influence of hearing status on the recruitment of sign language processing systems was explored by comparing deaf and hearing adults who had BSL as their first language (native signers). Deaf native signers demonstrated greater activation in the left superior temporal gyrus in response to BSL than hearing native signers. This important finding suggests that left-temporal auditory regions may be privileged for processing heard speech even in hearing native signers. However, in the absence of auditory input this region can be recruited for visual processing.

276 citations

Journal ArticleDOI
TL;DR: The authors found that the neural systems supporting signed and spoken languages are very similar: both involve a predominantly left-lateralised perisylvian network. But they also highlighted processing differences between languages in these different modalities.

226 citations

Journal ArticleDOI
TL;DR: Findings suggest that the superior temporal gyrus and neighbouring regions are activated bilaterally when subjects view face actions, at different scales, that can be interpreted as speech.

217 citations

Journal ArticleDOI
TL;DR: Three main types of acquisitions are considered: compressed, partially silent, and silent; for each implementation, paradigms using block and event-related designs are assessed, and a higher blood oxygen level-dependent response to a simple auditory cue is demonstrated when compared to a conventional image acquisition.
Abstract: Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem in the study of auditory pathway and language generally. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the source of the acoustic noise would be ideal, substantial hardware modifications to the current base of installed MRI systems would be required. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisitions: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition.
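The "compressed" (sparse-sampling) strategy described above can be illustrated with a small timing sketch: each image volume is acquired in a short noisy burst, and the auditory stimulus is presented in the silent gap between bursts so it is not masked by scanner noise. This is an illustrative sketch only, not the paper's protocol; all parameter values (TR, acquisition time, stimulus duration) are hypothetical.

```python
def sparse_schedule(n_trials, tr=10.0, acq_time=2.0, stim_dur=3.0):
    """Return (trial_start, stim_onset, acq_onset) tuples in seconds.

    tr        -- time between volume acquisitions (kept long to leave a gap)
    acq_time  -- duration of the noisy image-acquisition burst
    stim_dur  -- duration of the auditory stimulus
    """
    schedule = []
    for i in range(n_trials):
        start = i * tr
        silent_gap = tr - acq_time          # noise-free window in each trial
        # Centre the stimulus in the silent gap so the sluggish BOLD
        # response (peaking several seconds later) is sampled by the
        # acquisition burst at the end of the trial.
        stim_onset = start + (silent_gap - stim_dur) / 2.0
        acq_onset = start + silent_gap      # acquisition closes the trial
        schedule.append((start, stim_onset, acq_onset))
    return schedule

for start, stim, acq in sparse_schedule(3):
    print(f"trial@{start:5.1f}s  stim@{stim:5.1f}s  acquire@{acq:5.1f}s")
```

The design trade-off the abstract alludes to is visible here: lengthening the TR buys a quieter stimulus window but reduces how densely the haemodynamic response is sampled.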

175 citations

Journal ArticleDOI
TL;DR: Using fMRI, the authors examined the neural correlates of viewing a gestural language (BSL) and a manual-brachial code (Tic Tac) relative to a low-level baseline task. The findings suggest that the planum temporale may be responsive to visual movement in both deaf and hearing people, yet when hearing is absent early in development, the visual processing role of this region is enhanced.

136 citations


Cited by
Journal ArticleDOI
TL;DR: The mentalizing (theory of mind) system of the brain is probably in operation from about 18 months of age, allowing implicit attribution of intentions and other mental states; explicit mentalizing becomes possible between the ages of 4 and 6 years, and from that age children are able to explain the misleading reasons that have given rise to a false belief.
Abstract: The mentalizing (theory of mind) system of the brain is probably in operation from ca. 18 months of age, allowing implicit attribution of intentions and other mental states. Between the ages of 4 and 6 years explicit mentalizing becomes possible, and from this age children are able to explain the misleading reasons that have given rise to a false belief. Neuroimaging studies of mentalizing have so far only been carried out in adults. They reveal a system with three components consistently activated during both implicit and explicit mentalizing tasks: medial prefrontal cortex (MPFC), temporal poles and posterior superior temporal sulcus (STS). The functions of these components can be elucidated, to some extent, from their role in other tasks used in neuroimaging studies. Thus, the MPFC region is probably the basis of the decoupling mechanism that distinguishes mental state representations from physical state representations; the STS region is probably the basis of the detection of agency, and the temporal poles might be involved in access to social knowledge in the form of scripts. The activation of these components in concert appears to be critical to mentalizing.

2,110 citations

Journal ArticleDOI
TL;DR: There was greater activity during imitation, compared with observation of emotions, in premotor areas including the inferior frontal cortex, as well as in the superior temporal cortex, insula, and amygdala, which may be a critical relay from action representation to emotion.
Abstract: How do we empathize with others? A mechanism according to which action representation modulates emotional activity may provide an essential functional architecture for empathy. The superior temporal and inferior frontal cortices are critical areas for action representation and are connected to the limbic system via the insula. Thus, the insula may be a critical relay from action representation to emotion. We used functional MRI while subjects were either imitating or simply observing emotional facial expressions. Imitation and observation of emotions activated a largely similar network of brain areas. Within this network, there was greater activity during imitation, compared with observation of emotions, in premotor areas including the inferior frontal cortex, as well as in the superior temporal cortex, insula, and amygdala. We understand what others feel by a mechanism of action representation that allows empathy and modulates our emotional content. The insula plays a fundamental role in this mechanism.

1,871 citations

Journal ArticleDOI
TL;DR: An anatomical model is presented that indicates the location of the language areas and the most consistent functions that have been assigned to them and the implications for cognitive models of language processing are considered.

1,700 citations

Journal ArticleDOI
TL;DR: This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once the authors honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages.
Abstract: Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of "universal," we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition. Linguistic diversity then becomes the crucial datum for cognitive science: we are the only species with a communication system that is fundamentally variable at all levels. Recognizing the true extent of structural diversity in human language opens up exciting new research directions for cognitive scientists, offering thousands of different natural experiments given by different languages, with new opportunities for dialogue with biological paradigms concerned with change and diversity, and confronting us with the extraordinary plasticity of the highest human skills.

1,385 citations