Topic

Crossmodal

About: Crossmodal is a research topic. Over its lifetime, 1,629 publications have been published within this topic, receiving 69,208 citations.


Papers
Journal Article (DOI)
01 Dec 1976 - Nature
TL;DR: The study reported here demonstrates a previously unrecognised influence of vision upon speech perception: shown a film of a young woman's talking head in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da].
Abstract: MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.

5,506 citations

Journal Article (DOI)
TL;DR: The literature reviewed here supports the view that crossmodal correspondences need to be considered, alongside semantic and spatiotemporal congruency, among the key constraints that help the brain solve the crossmodal binding problem.
Abstract: In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain “know” which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

1,133 citations

Journal Article (DOI)
14 Dec 2000 - Nature
TL;DR: It is shown that auditory information can qualitatively alter the perception of an unambiguous visual stimulus to create a striking visual illusion, indicating that visual perception can be manipulated by other sensory modalities.
Abstract: Vision is believed to dominate our multisensory perception of the world. Here we overturn this established view by showing that auditory information can qualitatively alter the perception of an unambiguous visual stimulus to create a striking visual illusion. Our findings indicate that visual perception can be manipulated by other sensory modalities.

1,080 citations

Book
01 Jan 2004
TL;DR: This landmark reference work brings together for the first time in one volume the most recent research from different areas of the emerging field of multisensory integration, and suggests that broad underlying principles govern this interaction regardless of the specific senses involved.
Abstract: This landmark reference work brings together for the first time in one volume the most recent research from different areas of the emerging field of multisensory integration. After many years of using a modality-specific "sense-by-sense" approach, researchers across different disciplines in neuroscience and psychology now recognize that perception is fundamentally a multisensory experience. To understand how the brain synthesizes information from the different senses, we must study not only how information from each sensory modality is decoded but also how this information interacts with the sensory processing taking place within other sensory channels. The findings cited in The Handbook of Multisensory Processes suggest that there are broad underlying principles that govern this interaction, regardless of the specific senses involved.

The book is organized thematically into eight sections; each of the 55 chapters presents a state-of-the-art review of its topic by leading researchers in the field. The key themes addressed include multisensory contributions to perception in humans; whether the sensory integration involved in speech perception is fundamentally different from other kinds of multisensory integration; multisensory processing in the midbrain and cortex in model species, including rat, cat, and monkey; behavioral consequences of multisensory integration; modern neuroimaging techniques, including EEG, PET, and fMRI, now being used to reveal the many sites of multisensory processing in the brain; multisensory processes that require postnatal sensory experience to emerge, with examples from multiple species; brain specialization and possible equivalence of brain regions; and clinical studies of such breakdowns of normal sensory integration as brain damage and synesthesia.

1,026 citations

Journal Article (DOI)
TL;DR: The potential value of using statistical interaction effects to model electrophysiological responses to crossmodal stimuli in order to identify possible sites of multisensory integration in the human brain is highlighted.
Abstract: Modern brain imaging techniques have now made it possible to study the neural sites and mechanisms underlying crossmodal processing in the human brain. This paper reviews positron emission tomography, functional magnetic resonance imaging (fMRI), event-related potential and magnetoencephalographic studies of crossmodal matching, the crossmodal integration of content and spatial information, and crossmodal learning. These investigations are beginning to produce some consistent findings regarding the neuronal networks involved in these distinct crossmodal operations. Increasingly, specific roles are being defined for the superior temporal sulcus, the inferior parietal sulcus, regions of frontal cortex, the insula cortex and claustrum. The precise network of brain areas implicated in any one study, however, seems to be heavily dependent on the experimental paradigms used, the nature of the information being combined and the particular combination of modalities under investigation. The different analytic strategies adopted by different groups may also be a significant factor contributing to the variability in findings. In this paper, we demonstrate the impact of computing intersections, conjunctions and interaction effects on the identification of audiovisual integration sites using existing fMRI data from our own laboratory. This exercise highlights the potential value of using statistical interaction effects to model electrophysiological responses to crossmodal stimuli in order to identify possible sites of multisensory integration in the human brain.

983 citations
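The interaction-effect logic described in the abstract above can be illustrated with a small simulation: under one common criterion (superadditivity), a site is a candidate integration site when its audiovisual response reliably exceeds the sum of its unisensory responses. This is only an illustrative sketch, not the paper's actual pipeline; all condition names, means, and noise levels below are hypothetical.

```python
import random
import statistics

# Hypothetical sketch: test for a superadditive interaction in simulated
# per-trial response amplitudes (arbitrary units). All numbers are made up.
random.seed(0)
n_trials = 200

def simulate(mean, sd=0.3, n=n_trials):
    """Draw n simulated response amplitudes for one stimulus condition."""
    return [random.gauss(mean, sd) for _ in range(n)]

auditory = simulate(1.0)      # A alone
visual = simulate(1.2)        # V alone
audiovisual = simulate(2.6)   # AV together
baseline = simulate(0.0)      # rest

# Interaction contrast: (AV - rest) - [(A - rest) + (V - rest)]
# which simplifies to AV + rest - A - V. A reliably positive value
# indicates superadditivity, i.e. more than the sum of the unisensory parts.
diff = [av + b - a - v
        for av, b, a, v in zip(audiovisual, baseline, auditory, visual)]
mean_diff = statistics.fmean(diff)

# Simple one-sample t-statistic for the interaction contrast against zero.
t_stat = mean_diff / (statistics.stdev(diff) / n_trials ** 0.5)

print(f"interaction = {mean_diff:.2f}, t = {t_stat:.1f}")
```

In real fMRI analyses the same contrast is computed voxel-wise within a general linear model, and, as the paper notes, results from an interaction test can differ substantially from intersection or conjunction analyses of the same data.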


Network Information
Related Topics (5)
Visual perception
20.8K papers, 997.2K citations
90% related
Working memory
26.5K papers, 1.6M citations
87% related
Visual cortex
18.8K papers, 1.2M citations
86% related
Functional magnetic resonance imaging
15.4K papers, 1.1M citations
84% related
Prefrontal cortex
24K papers, 1.9M citations
83% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    61
2022    119
2021    87
2020    88
2019    72
2018    80