
Showing papers on "Crossmodal published in 2003"


Journal ArticleDOI
17 Jul 2003-Neuron
TL;DR: Human olfactory perception is notoriously unreliable but shows substantial benefits from visual cues; it is suggested that the human hippocampus mediates reactivation of crossmodal semantic associations, even in the absence of explicit memory processing.

503 citations


Journal ArticleDOI
TL;DR: Experimental results generalize to real life only when they reflect automatic perceptual processes, and not response strategies adopted to satisfy the particular demands of laboratory tasks.

346 citations


Journal ArticleDOI
26 Jun 2003-Nature
TL;DR: This work uses a paradigm in which 'preferential looking' is monitored to show that rhesus monkeys (Macaca mulatta), a species that communicates by means of elaborate facial and vocal expression, are able to recognize the correspondence between the auditory and visual components of their calls.
Abstract: Pulling a face to emphasize a spoken point is not seen as just a human prerogative. The perception of human speech can be enhanced by a combination of auditory and visual signals [1,2]. Animals sometimes accompany their vocalizations with distinctive body postures and facial expressions [3], although it is not known whether their interpretation of these signals is unified. Here we use a paradigm in which 'preferential looking' is monitored to show that rhesus monkeys (Macaca mulatta), a species that communicates by means of elaborate facial and vocal expression [4,5,6,7], are able to recognize the correspondence between the auditory and visual components of their calls. This crossmodal identification of vocal signals by a primate might represent an evolutionary precursor to humans' ability to match spoken words with facial articulation.

188 citations


Journal ArticleDOI
TL;DR: It is suggested that a relatively broad multimodal network of neurons is involved in generating and sustaining the tinnitus perception in some forms of the disorder.

146 citations


Journal ArticleDOI
TL;DR: The results support a view in which the timing and spatial layout of the inputs play, to some extent, interchangeable roles in the pairing operation at the basis of crossmodal interaction.

143 citations


Journal ArticleDOI
TL;DR: Neuroimaging and neuropsychological studies are beginning to elucidate the network of neural structures responsible for the processing of motion information in the different sensory modalities, an important first step that will ultimately lead to the determination of the neural substrates underlying these multisensory contributions to motion perception.

115 citations


Journal ArticleDOI
TL;DR: Results show for the first time the separable multimodal and unimodal components of such preparatory activations; in addition, irrespective of the attended side and modality, the attention-directing cues activated a network of superior frontal and parietal association areas that may play a role in voluntary control of spatial attention for both vision and touch.
Abstract: We used event-related functional magnetic resonance imaging to study the neural correlates of endogenous spatial attention for vision and touch. We examined activity associated with attention-directing cues (central auditory pure tones), symbolically instructing subjects to attend to one hemifield or the other prior to upcoming stimuli, for a visual or tactile task. In different sessions, subjects discriminated either visual or tactile stimuli at the covertly attended side, during bilateral visuotactile stimulation. To distinguish cue-related preparatory activity from any modulation of stimulus processing, unpredictably on some trials only the auditory cue was presented. The use of attend-vision and attend-touch blocks revealed whether preparatory attentional effects were modality-specific or multimodal. Unimodal effects of spatial attention were found in somatosensory cortex for attention to touch, and in occipital areas for attention to vision, both contralateral to the attended side. Multimodal spatial effects (i.e. effects of attended side irrespective of task-relevant modality) were detected in contralateral intraparietal sulcus, traditionally considered a multimodal brain region; and also in the middle occipital gyrus, an area traditionally considered purely visual. Critically, all these activations were observed even on cue-only trials, when no visual or tactile stimuli were subsequently presented. Endogenous shifts of spatial attention result in changes of brain activity prior to the presentation of target stimulation (baseline shifts). Here, we show for the first time the separable multimodal and unimodal components of such preparatory activations. Additionally, irrespective of the attended side and modality, attention-directing auditory cues activated a network of superior frontal and parietal association areas that may play a role in voluntary control of spatial attention for both vision and touch.

100 citations


Journal ArticleDOI
TL;DR: Audition plays a bigger role than vision in temporal ventriloquism and is probably generally superior to vision for processing the temporal dimension of events; the experiments also split up the total crossmodal attraction into its modality-specific components.

91 citations


Journal ArticleDOI
TL;DR: Results showed that spatial attention modulated both early and late somatosensory and auditory ERPs when touch and tones were relevant, respectively, suggesting bi-directional crossmodal links between hearing and touch.
Abstract: An increasing number of animal and human studies suggests that different sensory systems share spatial representations in the brain. The aim of the present study was to test whether attending to auditory stimuli presented at a particular spatial location influences the processing of tactile stimuli at that position and vice versa (crossmodal attention). Moreover, it was investigated which processing stages are influenced by orienting attention to a certain stimulus modality (intermodal attention). Event-related brain potentials (ERPs) were recorded from 15 participants while tactile and auditory stimuli were presented at the left or right side of the body midline. The task of the participants was to attend to either the auditory or to the tactile modality, and to respond to infrequent double-stimuli of either the left or right side. Results showed that spatial attention modulated both early and late somatosensory and auditory ERPs when touch and tones were relevant, respectively. Moreover, early somatosensory (N70-100, N125-175) and auditory (N100-170) potentials, but not later deflections, were affected by spatial attention to the other modality, suggesting bi-directional crossmodal links between hearing and touch. Additionally, ERPs were modulated by intermodal selection mechanisms: stimuli elicited enhanced negative early and late ERPs when they belonged to the attended modality compared to those that belonged to the unattended modality. The present results provide evidence for the parallel influence of spatial and intermodal selection mechanisms at early processing stages while later processing steps are restricted to the relevant modality.

88 citations
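For readers unfamiliar with this kind of ERP analysis, the sketch below shows, in broad strokes, how epoched EEG can be averaged into condition ERPs and how an attended-minus-unattended difference wave and an early-window amplitude (here the N70-100 range mentioned above) can be computed. It is a minimal illustration with simulated data and assumed array shapes and sampling rate, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): averaging epoched EEG into ERPs
# and forming an attended-minus-unattended difference wave with NumPy.
# Array shapes, the sampling rate, and the 70-100 ms window are assumptions.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)      # epoch from -100 to +500 ms

# Hypothetical baseline-corrected epochs: (n_trials, n_channels, n_samples)
attended   = np.random.randn(120, 32, times.size)
unattended = np.random.randn(120, 32, times.size)

erp_att   = attended.mean(axis=0)          # condition-averaged ERP
erp_unatt = unattended.mean(axis=0)
difference_wave = erp_att - erp_unatt      # spatial-attention effect per channel

# Mean amplitude in an early window (e.g. the N70-100 range)
win = (times >= 0.070) & (times <= 0.100)
early_effect = difference_wave[:, win].mean(axis=1)   # one value per channel
print(early_effect.shape)
```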


Journal ArticleDOI
TL;DR: The findings demonstrate that the ERP correlates of crossmodal attention do not depend on selection being guided by ambient visible information in a lit environment, and suggest instead that spatial shifts of attention are controlled supramodally.

74 citations


Journal ArticleDOI
TL;DR: The temporal synchronization of cortical operations processing unimodal stimuli at different cortical sites reveals the importance of the temporal features of auditory and visual stimuli for audio-visual speech integration.

Journal ArticleDOI
TL;DR: Three experiments, designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention using the orthogonal spatial cuing paradigm, reveal that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change.
Abstract: We report three experiments designed to investigate the nature of any crossmodal links between audition and touch in sustained endogenous covert spatial attention, using the orthogonal spatial cuing paradigm. Participants discriminated the elevation (up vs. down) of auditory and tactile targets presented to either the left or the right of fixation. In Experiment 1, targets were expected on a particular side in just one modality; the results demonstrated that the participants could spatially shift their attention independently in both audition and touch. Experiment 2 demonstrated that when the participants were informed that targets were more likely to be on one side for both modalities, elevation judgments were faster on that side in both audition and touch. The participants were also able to “split” their auditory and tactile attention, albeit at some cost, when targets in the two modalities were expected on opposite sides. Similar results were also reported in Experiment 3 when participants adopted a crossed-hands posture, thus revealing that crossmodal links in audiotactile attention operate on a representation of space that is updated following posture change. These results are discussed in relation to previous findings regarding crossmodal links in audiovisual and visuotactile covert spatial attentional orienting.

Journal ArticleDOI
TL;DR: The age-related exacerbation suggests a developmental neuronal deficit, possibly related to magnocells, which exists before dyslexia and is its ontogenetic cause.

Journal ArticleDOI
TL;DR: A crossmodal congruity effect was discovered: Performance was better when the two letters in a stimulus pair were the same than when they differed in type.
Abstract: Observers were given brief presentations of pairs of simultaneous stimuli consisting of a visual and a spoken letter. In the visual focused-attention condition, only the visual letter should be reported; in the auditory focused-attention condition, only the spoken letter should be reported; in the divided-attention condition, both letters, as well as their respective modalities, should be reported (forced choice). The proportions of correct reports were nearly the same in the three conditions (no significant divided-attention decrement), and in the divided-attention condition, the probability that the visual letter was correctly reported was independent of whether the auditory letter was correctly reported. However, with a probability much higher than chance, the observers reported hearing the visual stimulus letter or seeing the spoken stimulus letter (modality confusions). The strength of the effect was nearly the same with focused as with divided attention. We also discovered a crossmodal congruity effect: Performance was better when the two letters in a stimulus pair were the same than when they differed in type.
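The null result described above (report of the visual letter being independent of report of the spoken letter) is the kind of claim one could probe with a simple contingency-table test. The sketch below uses invented counts purely for illustration; it is not the authors' analysis.

```python
# Minimal sketch, assuming hypothetical counts: testing whether correct report of
# the visual letter is statistically independent of correct report of the spoken
# letter in the divided-attention condition.
from scipy.stats import chi2_contingency

# Rows: visual letter correct / incorrect; columns: spoken letter correct / incorrect.
# The counts below are invented for illustration only.
table = [[310, 95],
         [88,  27]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # a large p is consistent with independence
```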

Journal ArticleDOI
TL;DR: These findings show that a visuo-tactile representation of peripersonal space can be formed despite the subject's explicit awareness of the physical impossibility of the hand being touched, and indicate that multisensory integrative processing can occur in a bottom-up fashion without necessarily being modulated by more 'cognitive' processes.

Journal ArticleDOI
TL;DR: The absence of a significant interaction between distance and auditory intensity suggests that the intensity of the accessory stimulus has no direct influence on the process of crossmodal integration, and that spatial position and intensity are processed in separate stages.
Abstract: Saccadic reaction time (SRT) toward a visual target stimulus was measured under simultaneous presentation of an auditory non-target (accessory stimulus). Horizontal position of the target was varied (25° left and right of fixation) as well as position and intensity of the auditory accessory. SRT was reduced in the presence of the accessory, and it decreased both with increasing intensity of the auditory accessory and with decreasing distance between target and accessory. The absence of a significant interaction between distance and auditory intensity suggests (1) that the intensity of the accessory stimulus has no direct influence on the process of crossmodal integration, and (2) that spatial position and intensity of the accessory are processed in separate stages. This was supported by a probability inequality test showing that the amount of neural coactivation depends on spatial distance but not on auditory intensity. The results are discussed in the framework of a two-stage model assuming separate processing of unimodal and bimodal characteristics of the stimuli. These results are related to several recent neurophysiological findings.
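The "probability inequality test" mentioned above is in the spirit of race-model (Miller-type) inequality tests commonly used to diagnose neural coactivation; the exact test used in the paper may differ. The sketch below illustrates the general idea with simulated reaction times.

```python
# Minimal sketch of a probability-inequality (race-model) test of the kind commonly
# used to quantify coactivation; the paper's exact procedure may differ, and the
# reaction times below are simulated stand-ins, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
rt_bimodal = rng.normal(150, 20, 400)   # visual target + auditory accessory (ms)
rt_visual  = rng.normal(175, 25, 400)   # visual target alone
rt_audio   = rng.normal(190, 30, 400)   # auditory stimulus alone

def ecdf(sample, t):
    """Empirical probability of a response by time t."""
    return np.mean(np.asarray(sample)[:, None] <= t, axis=0)

t = np.linspace(100, 250, 151)
# Miller's inequality: coactivation is suggested wherever
# P(RT <= t | bimodal) exceeds P(RT <= t | visual) + P(RT <= t | auditory).
violation = ecdf(rt_bimodal, t) - (ecdf(rt_visual, t) + ecdf(rt_audio, t))
print("max violation:", violation.max())   # values > 0 indicate coactivation
```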

Journal ArticleDOI
TL;DR: A recent paper by Alais and Burr on auditory and crossmodal flash-lag effects indicates that the authors' (often implicit) models of the perception of space and time might be flawed.


Journal ArticleDOI
01 Aug 2003-Leonardo
TL;DR: In this paper, the cerebral mechanisms involved in sensory perception and synesthesia are analyzed, and the author suggests that synesthesia can also be considered as a physiological behavior that involves a multimodal combination of all senses.
Abstract: Synesthesia is an unusual phenomenon that is occasionally reported in artists and writers. In its pathological context, synesthesia is described as a confusion of the senses where the excitation of one sense triggers stimulation in a completely different sensory modality. In contrast to this pathological form, synesthesia can also be considered as a physiological behavior that involves a multimodal combination of all senses. Such an expression of sensory perception can also be considered as a natural process that contributes to the adaptation of the living organism to its environment. The author attempts to analyze the cerebral mechanisms involved in sensory perception and synesthesia.

Journal ArticleDOI
Yuichi Wada1
TL;DR: Results demonstrate that substantial crossmodal links exist between vision and touch for covert exogenous orienting of attention in temporal order judgment tasks combined with a spatial cueing paradigm.
Abstract: Four experiments investigated the effects of cross-modal attention between vision and touch in temporal order judgment tasks combined with a spatial cueing paradigm. In Experiment 1, two vibrotactile stimuli with simultaneous or successive onsets were presented bimanually to the left and right index fingers and participants were asked to judge the temporal order of the two stimuli. The tactile stimuli were preceded by a spatially uninformative visual cue. Results indicated that the shift of spatial attention yielded by visual cueing resulted in the modulation of accuracy of the subsequent tactile temporal order judgment. However, this cueing effect disappeared when participants judged the simultaneity of the two stimuli, instead of their temporal order (Experiment 2), or when the cue lead time between the visual cue and the stimuli was relatively long (Experiment 3). Experiment 4 replicated an effect of crossmodal attention on the direction of visual illusory line motion induced by a somatosensory cue (Shimojo, Miyauchi, & Hikosaka, 1997). These results demonstrate that substantial crossmodal links exist between vision and touch for covert exogenous orienting of attention.
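Temporal order judgment data of the kind described above are typically summarized by fitting a psychometric function to estimate a point of subjective simultaneity (PSS) and a JND-like threshold. The sketch below shows one conventional way of doing this with a cumulative Gaussian; the SOAs and response proportions are invented, and this is not the author's analysis.

```python
# Minimal sketch (not the author's analysis): fitting a cumulative-Gaussian
# psychometric function to temporal order judgments to estimate the point of
# subjective simultaneity (PSS) and a JND. SOAs and proportions are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-120, -60, -30, 0, 30, 60, 120])   # right-first minus left-first, ms
p_right_first = np.array([0.05, 0.18, 0.35, 0.52, 0.71, 0.86, 0.97])

def psychometric(x, pss, sigma):
    # Probability of a "right first" response as a function of SOA.
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_right_first, p0=[0.0, 50.0])
jnd = norm.ppf(0.75) * sigma     # a common 75%-threshold definition of the JND
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```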


01 Jan 2003
TL;DR: The authors adapted the paradigm of Tresch et al. to become a crossmodal task and found that double dissociations of object recognition do not exist crossmodally.
Abstract: Double dissociations are typically used to examine the modularity of the brain, and its effect on behaviour. This double dissociation method has typically been used within one modality. Tresch, Sinnamon & Seamon [1] found double dissociations in participants’ ability to recognize the form or location of an object. We adapted the paradigm of Tresch et al. to become a crossmodal task. In Experiment 1, participants felt or saw random objects and determined whether the two objects presented within a modality were the same or different. We found that double dissociations of object recognition do not exist crossmodally. In Experiment 2, the same paradigm was utilized, but all stimuli were presented haptically. If the double dissociation effect exists, this would support the evidence from previous studies of crossmodal perceptual load. The implications will be discussed further.

Dissertation
01 Jan 2003
TL;DR: In this dissertation, the authors investigate how spatial information from different senses (e.g., hearing, vision, and touch) is integrated in the human brain and provide a detailed insight into the mechanisms at the basis of cross-modal perception.
Abstract: In a natural environment, many events have perceptual consequences in more than one modality. For example, when we hear someone speaking, there is not only auditory information about what is said, but also visual information about where the sound comes from, and phonetic information that can be lipread. It is known that our perceptual system integrates these various kinds of information into a coherent representation of the world. In the current project, we try to obtain a detailed insight into the mechanisms at the basis of crossmodal perception. In particular, we investigate how spatial information from the different senses (i.e., hearing, vision, and touch) is integrated. It is known that when visual and auditory stimuli such as tone bursts and light flashes are presented synchronously but in different locations, the apparent location of the auditory stimulus is often shifted in the direction of the visual stimulus. This is known as the ventriloquist effect. Apart from this immediate bias, prolonged exposure to synchronous but spatially discordant sound-light pairs may result in recalibration of auditory space, as demonstrated by the occurrence of aftereffects. In the current project, we address five inter-related topics concerning these two phenomena. These are: 1) whether higher-order knowledge affects the ventriloquist effect; 2) how crossmodally induced aftereffects build up, how they generalize across space, frequencies, and spectral composition, and how they dissipate; 3) whether ventriloquism affects where auditory attention is directed; 4) whether ventriloquism affects components in the EEG usually considered to reflect 'auditory' processes (i.e., a Mismatch Negativity, MMN, with ventriloquized sounds); and 5) whether spatially conflicting audio-tactile stimulus pairs induce immediate bias and aftereffects. Together, these projects will give a detailed insight into how humans integrate spatial information from different senses, and they may provide a paradigm case for understanding the integrative functions of the human brain.
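As background for the ventriloquist effect discussed above, the immediate visual bias of perceived sound location is often described with a standard precision-weighted (maximum-likelihood) cue-combination model. The sketch below illustrates that textbook model with assumed locations and noise levels; it is not the model developed in this dissertation.

```python
# Minimal sketch of the standard precision-weighted (maximum-likelihood) cue-combination
# account often used to describe the ventriloquist effect. The locations and noise
# levels are assumed values, chosen only to illustrate the formula.
s_audio, sigma_audio = 10.0, 8.0     # auditory location estimate (deg) and its noise
s_visual, sigma_visual = 0.0, 1.5    # visual location estimate (deg) and its noise

# Each cue is weighted by its relative precision (inverse variance).
w_visual = sigma_audio**2 / (sigma_audio**2 + sigma_visual**2)
fused = w_visual * s_visual + (1 - w_visual) * s_audio

print(f"visual weight = {w_visual:.2f}, perceived sound location = {fused:.2f} deg")
# Because vision is far more precise spatially, the perceived sound location is
# pulled toward the visual stimulus, producing a ventriloquist-like bias.
```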